Why Doctors Shouldn't Be 'Freaked Out' About AI, According to ChatGPT

— The chatbot said it's "not capable of replacing human healthcare professionals"


While concerns about the use of artificial intelligence (AI) in healthcare are understandable, doctors should recognize that this technology is designed to assist, rather than replace, clinicians, according to an interview with ChatGPT itself.

In the interview, David A. Asch, MD, of the Perelman School of Medicine and the Wharton School at the University of Pennsylvania in Philadelphia, explored the concerns, promises, and oddities of a future in healthcare that is heavily reliant on AI.

Asch asked ChatGPT whether doctors should be "a little bit freaked out" by it. The AI chatbot had a measured response: "It's understandable that some doctors may have concerns about the use of ChatGPT in healthcare, as it represents a significant change to the way medical care is traditionally delivered. However, it is important to recognize that ChatGPT is not intended to replace doctors, but rather to assist them in providing better care to patients ... ."

ChatGPT also urged healthcare professionals to stay informed about the latest developments in the technology and to remain open to the possibilities of using it to improve patient care. When Asch pressed ChatGPT about fears of an AI takeover in medicine, the chatbot emphasized that it is "not capable of replacing human healthcare professionals."

Asch also asked ChatGPT about the mental health and provider burnout crises, which prompted the chatbot to deliver detailed answers about how advances in AI could help address both.

The AI chatbot also expounded on many opportunities for it to improve healthcare overall, including becoming a virtual assistant for patient care, easing clinical documentation burdens, and enhancing medical research and education.

While Asch questioned ChatGPT about many of his top concerns for the future of healthcare, he also asked about ChatGPT's main concerns: "I know you don't sleep, but if you did, what would you lose sleep over?"

ChatGPT answered that it is most concerned about data privacy and security, bias in its training data, and the difficulty of interpreting its internal processing. Notably, the final reason ChatGPT said it would "lose sleep" was the lack of government regulation to manage and rein in this rapidly evolving technology.

Near the end of the interview, Asch congratulated ChatGPT for passing the U.S. Medical Licensing Examination.

ChatGPT acknowledged the achievement as an indicator of its ability to assist with high-level medical tasks, such as making diagnoses or treatment decisions. However, the chatbot also said that passing the exam is "not the same as being able to practice medicine."

Asch concluded the interview with his own reflections about the experience. He explained that as recently as 2022 he was skeptical of AI playing a meaningful role in medicine in the near future, but his recent interactions with ChatGPT have changed his mind.

"ChatGPT has fundamentally changed my view of the pace of artificial intelligence (AI) for medicine," he wrote. "I am most amazed by how clearly it communicates. But I am far less confident about how well it curates the information that it communicates."

Asch noted that AI has the potential to amplify efficiencies in healthcare, but he cautioned that it can just as easily amplify inefficiencies. He called this "the big problem" with AI in medicine right now.

"Digital sources are already rife with medical misinformation, and my worry is that misinformation is more likely to be amplified by, rather than filtered out by, programs such as ChatGPT," he wrote.

Michael DePeau-Wilson is a reporter on MedicalToday's enterprise & investigative team. He covers psychiatry, long COVID, and infectious diseases, among other relevant U.S. clinical news.