
Artificial Intelligence
in Medicine
(Part 1)

Artificial intelligence (AI) is the most important new information technology to reach health care in decades. Because its development and deployment are outpacing legal, medical, and business adaptation, health care systems face an ongoing need to regularly assess its impact and risks. The implications carry enormous benefits, risks, and unforeseen consequences.

One generalization could be:
AI is becoming a new member of our health care team. It is a genius that can operate almost any electronic information system at superhuman speed and imitate almost any human activity important to us, yet it completely lacks morals and ethics and cannot even understand the meaning of the output it produces.

BACKGROUND
Large language models (LLMs) are a key aspect of AI: they interact with users through natural language in sophisticated ways. The example familiar to most people is ChatGPT, but it is important to know that this is only one of many applications that have emerged. The term “artificial intelligence” is indistinct and can be taken to mean “computer applications that imitate, emulate, or simulate activities that are traditionally considered human.” In practice, the term is used broadly, covering products and techniques that range from simple algorithms to rich learning and information-processing systems.

OVERVIEW OF AI/LLM CAPABILITIES
A survey of the features of currently available systems is beyond the scope of this article and would be obsolete the day after publication. AI employs new computer programming theories and techniques produced from decades of work, and it depends on hardware speeds and amounts of memory unattainable just a few years ago. LLMs could not exist until a substantial portion of the world’s cumulative information became digitized and thus machine readable. Access to the data and outcomes in the aggregate medical records of entire populations is what makes the power of LLMs in health care revolutionary.

It is important to recognize that AI applications are not “intelligent,” even in a metaphorical way. They process information on a scale not previously contemplated and then produce fluent language without understanding that language or the objects it describes. LLMs have no “understanding,” in human terms, of what they are saying.

The risk implications for medicine are critically important: LLMs can produce seemingly intelligent output, but if that output is based on biased, flawed, or skewed inputs, it can be completely inaccurate and, if believed to be true, dangerous.

In addition to language, AI can process images, audio, video, or other digitized data, which creates new implications for medical practices. Among the controversial applications of AI in these areas are impersonation (“deep fakes”) and facial recognition, which open up important privacy and surveillance concerns. 

AI DETECTION
The battle to find the “fingerprints” of AI is circular. Developers of detection applications want to tell when AI has produced something; developers of AI want to find the best ways to fool the detection algorithms. Each improvement advances both. AI and LLMs will never be licensed to practice medicine, and ultimately their output will need to be verified by the human licensee. However, the output will likely be forensically traceable to an AI or LLM, so the licensee will need to verifiably review it for that output to become their own work product.

AI APPLICATIONS FOR MEDICAL PROVIDERS
Providers are already using AI, and adoption and experimentation are accelerating. AI will be able to assist with just about every cognitive task a health care provider performs. Its interactive dialog capabilities can be used for patient interactions such as inbox management, scheduling, history taking, surveys, prescription management, translation services, recall and follow-up, answering medical questions, diagnosing conditions, performing risk assessments, triage, and referrals. Some patients will be more comfortable interacting with AI than with humans on sensitive issues, and many are likely to have already interacted with a diagnostic AI tool online before seeing a health care professional.

Providers might use AI to help them in the informed consent process, disclosures of outcomes and results, adverse event reporting, apology and resolution discussions, and explaining medical information. AI can also be used for generating notes, reports, and correspondence, summarizing records and incoming reports, reviewing results, and creating task lists and reminders. Additional areas for AI applications include performance management, planning, procurement, research, literature review, and employee supervision. The following summarizes some of the most likely uses, their likelihood of adoption, their benefits and risks, and some strategies to address those risks.

>Virtual Scribing and Clinical Documentation
The benefits of using a tool to quickly and accurately generate a record of clinical interactions are obvious. Patients will need to be aware of, and consent to, the recording devices used to generate the record, and providers will need to learn to “narrate their examinations” to populate it. A policy of erasing the recording’s work product at regular, short intervals, along with open access to the final record generated from it, will help allay patients’ fears about how their information was captured and what is going into their permanent medical record. Compliance with the information-access processes required by the Cures Act will be even more important. One can also predict that patients’ awareness of the record, and their requests to edit, amend, or delete material in it, will increase; providers and their staff will need to be cognizant of the necessary HIPAA processes and documentation.

Finally, and most importantly, given how LLMs work and their inherent ability to produce fluent but possibly inaccurate, misleading, or even harmful output, the provider should read and verify the content of notes generated by AI. The old practice of “dictated but not read” becomes “AI generated but not read,” and it should not be a common one.
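As a concrete illustration, here is a minimal Python sketch of such a documentation workflow. The function names (transcribe, draft_note), the 24-hour purge interval, and the sample text are hypothetical stand-ins for a practice’s actual vendor services and retention policy; the point is the two guardrails discussed above: raw work product that expires on schedule, and a draft note that cannot be signed unread.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    RAW_AUDIO_RETENTION = timedelta(hours=24)  # assumed policy interval, not a legal standard

    @dataclass
    class DraftNote:
        text: str
        created_at: datetime
        provider_reviewed: bool = False  # guards against "AI generated but not read"

    def transcribe(audio: bytes) -> str:
        """Placeholder for a vendor speech-to-text service."""
        return "Patient reports three days of productive cough, no fever."

    def draft_note(transcript: str) -> DraftNote:
        """Placeholder for the LLM drafting step; the output is a draft, not a record."""
        return DraftNote(text=f"HPI: {transcript}", created_at=datetime.now())

    def purge_due(recorded_at: datetime, now: datetime) -> bool:
        """True once the raw recording should be erased under the retention policy."""
        return now - recorded_at >= RAW_AUDIO_RETENTION

    def sign_note(note: DraftNote, provider_id: str) -> str:
        """Only a read-and-verified draft may become the licensee's work product."""
        if not note.provider_reviewed:
            raise PermissionError("draft must be read and verified before signing")
        return f"Signed by {provider_id}:\n{note.text}"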

>Office Administration Tasks
In the near future, much of office administration will have AI tools adaptable to everyday practice. Tools that accurately assign a visit type, schedule its length, collect the necessary pre-visit information, obtain authorizations, and so on will be adopted quickly by short-staffed practices and systems. Interactions with third-party payors are likely to be highly automated on the payor’s end, which will necessitate efficient and accurate automation on the provider side. Generating requests for authorization can be done effectively by LLMs as they become more adept at extracting information from EHRs.
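As a sketch of that authorization workflow, the Python below assembles a request draft from fields pulled out of the chart. extract_fields and draft_authorization are hypothetical stand-ins for the EHR-extraction and drafting services a practice might use, and the sample chart data is illustrative; anything the extraction step cannot find stops the draft for human follow-up rather than letting the model fill the gap.

    AUTH_FIELDS = ("diagnosis", "requested_service", "prior_treatments", "clinical_rationale")

    def extract_fields(chart: dict) -> dict:
        """Placeholder for LLM extraction from the EHR."""
        return {field: chart.get(field, "") for field in AUTH_FIELDS}

    def draft_authorization(fields: dict) -> str:
        """Assembles the request; missing data halts the draft for human follow-up."""
        missing = [f for f, value in fields.items() if not value]
        if missing:
            raise ValueError(f"cannot draft request; missing: {', '.join(missing)}")
        return "\n".join(f"{f}: {value}" for f, value in fields.items())

    request = draft_authorization(extract_fields({
        "diagnosis": "M17.11 unilateral primary osteoarthritis, right knee",
        "requested_service": "MRI right knee without contrast",
        "prior_treatments": "NSAIDs, physical therapy x6 weeks",
        "clinical_rationale": "persistent pain and locking despite conservative care",
    }))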

>Office Triage and AI Telemedicine Clinical Assessment
Phone triage and pre-visit clinical assessment have the potential to become highly automated. This will be more sophisticated than the current algorithmically generated barrage of questions: systems will be able to process open-ended speech and generate interactions that look surprisingly human. Patients will need to be made aware that they are interacting with AI, and the option to reach a human should be available. The documentation from these interactions will also need to be audited to ensure the system is performing as expected.
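The Python sketch below shows the two safeguards just described: an up-front AI disclosure and an always-available route to a human. ask_model stands in for whatever conversational service is used, and the red-flag list is illustrative only, not a clinical protocol; every turn is retained so the interaction can be audited later.

    DISCLOSURE = ("You are chatting with an automated assistant, not a clinician. "
                  "Say 'person' at any time to reach a staff member.")
    ESCALATION_WORDS = {"person", "human", "staff", "nurse"}
    RED_FLAGS = {"chest pain", "trouble breathing", "stroke"}  # illustrative, not a clinical list

    def ask_model(history: list[str]) -> str:
        """Placeholder for the conversational AI service."""
        return "How long have you had these symptoms?"

    def triage_turn(history: list[str], patient_text: str) -> tuple[str, bool]:
        """Returns (reply, escalate); every turn stays in history for later audit."""
        history.append(f"patient: {patient_text}")
        text = patient_text.lower()
        if any(phrase in text for phrase in ESCALATION_WORDS | RED_FLAGS):
            reply, escalate = "Connecting you with a staff member now.", True
        else:
            reply, escalate = ask_model(history), False
        history.append(f"assistant: {reply}")
        return reply, escalate

    transcript = [f"assistant: {DISCLOSURE}"]
    reply, escalate = triage_turn(transcript, "I have had chest pain since this morning")
    assert escalate  # red-flag phrasing routes straight to a human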

Ultimately, the machine-generated clinical assessment will be the work product of licensed providers, and the responsibility and liability will likely rest with them. However, issues of product liability will arise and create new questions about responsibility for errors.

>Billing and Coding
It can be assumed that payors will use AI tools to heavily automate the process of assigning a code, and a resulting payment, to a given service. Because the vast majority of payments for clinical interactions will be subjected to such tools, capturing the elements of those interactions will almost certainly necessitate virtual scribing or AI clinical documentation tools on the provider side. This circular logic will likely drive adoption, first by payors and then by providers, in an accelerated fashion. It is almost analogous to having a 100% RAC audit of your clinical activity, done in real time.
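A toy version of that real-time audit, in Python: billed codes are checked against the elements actually captured in the note. The element table here is purely illustrative and is not CPT or E/M guidance; real payor tools will be far more elaborate, but the flagging logic is the point.

    REQUIRED_ELEMENTS = {  # hypothetical mapping for illustration, not CPT/E-M guidance
        "99213": {"history", "exam", "assessment"},
        "99214": {"history", "exam", "assessment", "plan", "data_reviewed"},
    }

    def audit_claim(code: str, documented: set[str]) -> list[str]:
        """Returns the elements the billed code requires but the note never captured."""
        return sorted(REQUIRED_ELEMENTS.get(code, set()) - documented)

    missing = audit_claim("99214", {"history", "exam", "assessment"})
    # -> ['data_reviewed', 'plan']; the claim is flagged the moment it is submitted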

>Risk Stratification and Payor Premium Pricing for Patients, Providers, and Facilities
As payors increasingly adopt AI tools, and as the entire population’s data and outcomes become available and sortable (by patient, provider, facility, region, specialty, or any other meaningful criterion), ranking and quality measurement of providers and facilities will be inevitable. This furthers the circular-logic argument posed previously: it will drive the capture of clinical data by AI tools on the provider and facility side. There may be some increased efficiency in the capture of value-based purchasing and other quality measures, but that will likely be offset by the sheer increase in the number of such measures. “If you can measure it, you can adjust the payment for it” might be a new adaptation of W. Edwards Deming’s measure-and-manage mantra.


Given the complexity of the subject, Part 2 of this article will focus on AI from the provider’s perspective, its effect on the practice of medicine, and how it may assist with clinical decision support.


Published: 4th Quarter 2023

Information in this article is for general educational purposes and is not intended to establish practice guidelines or provide legal advice.