ChatGPT, the artificial intelligence chatbot that OpenAI released in late November 2022, is known for its ability to answer questions and provide detailed information in seconds — all in a clear, conversational way.

As its popularity grows, ChatGPT is popping up in virtually every industry, including education, real estate, content creation and even health care.

Although the chatbot could potentially change or improve some aspects of the patient experience, experts caution that it has limitations and risks.

They say that AI should never be used as a substitute for a physician’s care.

Searching for medical information online is nothing new — people have been googling their symptoms for years. 

But with ChatGPT, people can ask health-related questions and engage in what feels like an interactive "conversation" with a seemingly all-knowing source of medical information.

"ChatGPT is far more powerful than Google and certainly gives more compelling results, whether [those results are] right or wrong," Dr. Justin Norden, a digital health and AI expert who is an adjunct professor at Stanford University in California, told Fox News Digital in an interview. 

ChatGPT has potential use cases in virtually every industry, including health care. (iStock)

With internet search engines, patients get some information and links — but then they decide where to click and what to read. With ChatGPT, the answers are explicitly and directly given to them, he explained.

One big caveat is that ChatGPT’s source of data is the internet — and there is plenty of misinformation on the web, as most people know. That’s why the chatbot’s responses, however convincing they may sound, should always be vetted by a doctor. 

Additionally, ChatGPT was "trained" only on data through September 2021, according to multiple sources. While the model may be updated over time, that cutoff limits its ability to serve up more recent information.

"I think this could create a collective danger for our society."

Dr. Daniel Khashabi, a computer science professor at Johns Hopkins in Baltimore, Maryland, and an expert in natural language processing systems, is concerned that as people get more accustomed to relying on conversational chatbots, they’ll be exposed to a growing amount of inaccurate information.

"There's plenty of evidence that these models perpetuate false information that they have seen in their training, regardless of where it comes from," he told Fox News Digital in an interview, referring to the chatbots' "training." 

"I think this is a big concern in the public health sphere, as people are making life-altering decisions about things like medications and surgical procedures based on this feedback," Khashabi added. 

"I think this could create a collective danger for our society."

It might ‘remove’ some ‘non-clinical burden’

Patients could potentially use ChatGPT-based systems to do things like schedule appointments with medical providers and refill prescriptions, eliminating the need to make phone calls and endure long hold times.

"I think these types of administrative tasks are well-suited to these tools, to help remove some of the non-clinical burden from the health care system," Norden said.

To enable these capabilities, providers would have to integrate ChatGPT into their existing systems.
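As a rough sketch of what that integration might look like, a clinic's patient portal could route administrative messages through OpenAI's chat completions API before passing the result to its own scheduling software. The model choice, system prompt and helper function below are illustrative assumptions, not a documented integration.

```python
# Illustrative sketch only: a provider's portal routes a patient's free-text
# administrative request through OpenAI's chat completions API before handing
# the reply to its own (hypothetical) scheduling backend.
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def draft_scheduling_reply(patient_message: str) -> str:
    """Draft a response to an administrative request; no clinical advice."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an administrative assistant for a medical clinic. "
                    "Help only with appointments and prescription refills. "
                    "Never give medical advice; refer clinical questions to staff."
                ),
            },
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_scheduling_reply("I need to refill my blood pressure prescription."))
```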

These types of uses could be helpful, Khashabi believes, if they're implemented the right way — but he warns that it could cause frustration for patients if the chatbot doesn’t work as expected.

"If the patient asks something and the chatbot hasn’t seen that condition or a particular way of phrasing it, it could fall apart, and that's not good customer service," he said. 

"There should be a very careful deployment of these systems to make sure they're reliable."

"It could fall apart, and that's not good customer service."

Khashabi also believes there should be a fallback mechanism so that if a chatbot realizes it is about to fail, it immediately transitions to a human instead of continuing to respond.

"These chatbots tend to ‘hallucinate’ — when they don't know something, they continue to make things up," he warned.

It might share info about a medication's uses

"While ChatGPT cannot and should not be providing medical advice, it can be used to help explain complicated medical concepts in simple terms," Norden said.

Patients use these tools to learn more about their own conditions, he added. That includes getting information about the medications they are taking or considering taking.

Patients can use the chatbot, for instance, to learn about a medication’s intended uses, side effects, drug interactions and proper storage.

ChatGPT does not have the capability to write prescriptions or offer medical treatment, but it could potentially be a helpful resource for information about medications. (iStock)

When asked if a patient should take a certain medication, the chatbot answered that it was not qualified to make medical recommendations.

Instead, it said people should contact a licensed health care provider.

It might have details on mental health conditions

The experts agree that ChatGPT should not be regarded as a replacement for a therapist. It's an AI model, so it lacks the empathy and nuance that a human doctor would provide.

However, given the current shortage of mental health providers and sometimes long wait times to get appointments, it may be tempting for people to use AI as a means of interim support.

"With the shortage of providers amid a mental health crisis, especially among young adults, there is an incredible need," said Norden of Stanford University. "But on the other hand, these tools are not tested or proven."

He added, "We don't know exactly how they're going to interact, and we've already started to see some cases of people interacting with these chatbots for long periods of time and getting weird results that we can't explain."

When asked if it could provide mental health support, ChatGPT provided a disclaimer that it cannot replace the role of a licensed mental health professional. 

However, it said it could provide information on mental health conditions, coping strategies, self-care practices and resources for professional help.

OpenAI ‘disallows’ ChatGPT use for medical guidance

OpenAI, the company that created ChatGPT, warns in its usage policies that the AI chatbot should not be used for medical instruction.

Specifically, the company’s policy said ChatGPT should not be used for "telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition."

It also stated that OpenAI’s models "are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions."

Additionally, it said that "OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention."

In scenarios in which providers use ChatGPT for health applications, OpenAI calls for them to "provide a disclaimer to users informing them that AI is being used and of its potential limitations."

Like the technology itself, ChatGPT’s role in health care is expected to continue to evolve.

While some believe it has exciting potential, others believe the risks need to be carefully weighed.

As Dr. Tinglong Dai, a Johns Hopkins professor and renowned expert in health care analytics, told Fox News Digital, "The benefits will almost certainly outweigh the risks if the medical community is actively involved in the development effort."