Artificial Intelligence in Healthcare

Artificial intelligence. It’s the latest buzzword in countless professions, and healthcare is no exception. ChatGPT and other forms of AI are touted as massive timesavers that let healthcare professionals streamline documentation, improve the accuracy of data processing, and reduce errors. Proponents claim the technology can do all of this faster than any human, leading to significant cost reductions.

What is artificial intelligence? 

The concept of artificial intelligence is hardly new. In some form or another, AI has been used in business for many years. Natural language processing research began in the 1950s, and by the 1970s AI research had progressed to the diagnosis and treatment of disease. However, it was not readily adopted into clinical practice.

In broad terms, AI covers any computer system programmed to simulate human intelligence processes. In theory, AI can perform tasks usually done by humans and rapidly solve complex problems.

Machine learning is a branch of AI in which software algorithms learn patterns from data rather than being explicitly programmed for each task, essentially machines teaching machines. Everyday examples of machine learning include speech recognition software, online customer service chatbots, and digital imaging (a short sketch after the list below illustrates the idea in code). Other forms include:

  • Self-driving cars
  • Virtual travel agents
  • Marketing chatbots
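To make the “machines teaching machines” idea concrete, here is a minimal Python sketch using scikit-learn. The dataset is synthetic and the model (logistic regression) was chosen only for illustration; nothing here is a clinical tool.

    # A minimal sketch of "machines teaching machines": the model is never given
    # an explicit rule; it infers one from labeled examples.
    # All data here is synthetic and purely illustrative.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Generate a toy dataset: 500 "cases", 4 numeric features, a binary label
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression()
    model.fit(X_train, y_train)  # the "learning" step: fit weights to the data

    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

The point is that no programmer ever writes the rule linking the features to the labels; the software infers it from the examples it is shown.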

How does AI impact healthcare? 

AI systems ingest data and run it through algorithms that process and organize the information into meaningful patterns, with the goal of supporting the clinician in diagnostic decision-making and in the management and treatment of disease.
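As a hedged illustration of “organizing data into patterns” (and emphatically not a clinical tool), the short Python sketch below takes a handful of invented lab values, applies made-up reference ranges, and surfaces a per-patient summary of out-of-range results. Every patient, value, and range is hypothetical.

    # A minimal sketch of organizing raw values into a pattern a clinician could
    # review. The records and reference ranges are invented for illustration and
    # are not clinical guidance.
    import pandas as pd

    # Synthetic lab results for three hypothetical patients
    labs = pd.DataFrame({
        "patient_id": ["A", "A", "B", "B", "C", "C"],
        "test":       ["glucose", "wbc", "glucose", "wbc", "glucose", "wbc"],
        "value":      [310, 7.2, 95, 14.8, 102, 6.1],
    })

    # Illustrative reference ranges only: (low, high)
    ranges = {"glucose": (70, 140), "wbc": (4.0, 11.0)}

    # Flag values that fall outside their range
    labs["flag"] = labs.apply(
        lambda r: not (ranges[r["test"]][0] <= r["value"] <= ranges[r["test"]][1]),
        axis=1,
    )

    # Organize into a per-patient pattern: which tests are out of range?
    summary = labs[labs["flag"]].groupby("patient_id")["test"].apply(list)
    print(summary)  # A -> ['glucose'], B -> ['wbc']

A real decision-support system would do far more, but the shape is the same: raw data in, an organized pattern out, and the interpretation left to the clinician.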

While AI offers substantial benefits, major practical and ethical concerns remain. Currently, clinicians are held 100% accountable for their actions. In the event of a misdiagnosis or a mistake, how would society hold a computer accountable? Is the use of AI in compliance with HIPAA? Is patient data used in these sophisticated learning models secure? 

The rise of AI will, by necessity, require renewed attention to the fields of biomedical ethics, information security, and law to start answering these questions. 

Machine learning, human interpretation 

AI technology is improving rapidly. Ongoing refinements allow AI to detect signs of disease more quickly than the human eye, particularly in radiology. However, clinicians are still responsible for interpreting lab values and other diagnostic tests.

It also falls on clinicians, not machines, to relay this information to a patient in a concise, caring, and empathetic manner. No matter how advanced, AI cannot replace the trust that patients place in their clinicians or the comfort of therapeutic touch when a patient needs it most.

Regulating AI in healthcare 

In response to rising concerns, the U.S. FDA has issued a regulatory framework titled “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.” The FDA describes the plan as a “multi-pronged approach to advance the Agency’s oversight of AI/ML-based medical software,” and it represents a cohesive step toward strengthening healthcare professionals’ understanding of this potentially disruptive technology.
