OpenAI recently introduced ChatGPT Health, a new feature within ChatGPT designed to support health and wellness questions. The launch comes in response to the AI chatbot reportedly receiving millions of health-related questions each day, highlighting a growing public interest in AI-powered medical information.
According to OpenAI, ChatGPT Health aims to provide users with a more focused experience for navigating health concerns, wellness topics, and medical questions.
Clearly, there is an increasing demand for accessible and conversational health information. However, while the tools may increase access to information, ensuring accuracy, equity, and responsible use remains a critical challenge.
Speaking to Medical News Today, David Liebovitz, MD, an expert in artificial intelligence in clinical medicine at Northwestern University, shares his thoughts on how ChatGPT Health may affect the patient-doctor relationship, and how healthcare professionals (HCPs) can safely guide appropriate use of AI health tools.
Liebovitz: Yes, with important caveats. ChatGPT Health can help patients arrive better prepared with lab trends summarized, questions organized, and care gaps identified.
That is a step up from patients bringing fragmented Google results or no preparation at all. For HCPs, this could mean more productive visit time spent understanding a patient’s values and preferences to support shared decision-making and more rapid gap closure.
The risk is overconfidence. Patients may assume AI-synthesized information is equivalent to clinical judgment or a thorough review. HCPs will need to develop new skills: validating what patients bring, correcting AI-generated misconceptions, and recognizing when the tool has missed context that changes the clinical picture entirely.
Liebovitz: Acknowledge the tool’s value while establishing appropriate expectations.
I’d suggest framing it like this: “It’s helpful for organizing your questions and understanding basic concepts, but it does not know things only I can assess, such as your physical exam, your tone during conversation, or how you have responded to treatments before.”
It is important not to dismiss patients who use it. That signals we are not listening (and these tools are getting better). Instead, ask what they found and what concerns it raised. Use it as a springboard. If they bring something incorrect, treat it as a teaching moment rather than a correction.
Liebovitz: Three principles:
Preparation, not diagnosis
Use it to organize and propose questions, understand terminology, or track patterns.
Proposing helpful topics or questions for discussion at the visit is fair game, but please do not use it to conclude what is wrong, to predict what will happen, or to decide on specific treatments.
For those scenarios, it will not have all the information it needs, yet it will often still provide guidance that is erroneous or needlessly anxiety-inducing.
Always verify
Anything that changes a medical decision should be considered a soft suggestion from an incomplete AI source that needs confirmation from your care team.
With that said, gaps in care are common, and given the complexity of diagnosis and treatment in 2026, the tool may well offer useful guidance. But truly helpful guidance that affects decisions may not always be present, or it may be buried among noisy, inappropriate suggestions.
Understand privacy trade-offs
Unlike conversations with physicians or therapists, there is no legal privilege. For sensitive matters such as reproductive health, mental health, substance use or other personal concerns, please understand the privacy loss before using the tool.
Liebovitz: The biggest misunderstanding is that AI-generated information is equivalent to a second opinion from a clinician. It is not.
Large language models (LLMs), such as ChatGPT, predict plausible text; they do not verify truth or weigh clinical context the way a trained professional does. HCPs can help by being explicit: “ChatGPT can summarize information and identify patterns, but it can hallucinate, it can miss nuance, and it lacks access to your exam, your history with me, and the things you have not told it.”
AI tools also do not weigh evidence and tailor guidance to a specific patient’s preferences the way a skilled clinician does. Confidence from an AI tool does not mean it is correct. Responses appear with equal authority, whether they are accurate or dangerously wrong.
Liebovitz: I expect AI will become an established layer in most care interactions, including behind-the-scenes work such as handling documentation, surfacing relevant history, and flagging potential issues.
For patients, tools like ChatGPT Health will increasingly serve as a persistent health assistant and companion that helps them track, interpret, and prepare, including identifying care gaps for discussion and offering behavioral nudges aligned with the patient’s preferences.
The core of the relationship with a clinician, that is, trust, judgment, and shared decision-making, will not be automated.
AI-assisted physicians who learn to work with AI-assisted patients, rather than against them, will have deeper conversations in less time. Those who avoid AI themselves and discourage their patients from using it will find patients going elsewhere, or simply not telling them what the AI said.
Liebovitz: Because it formalizes what was already happening informally.
Apparently, 40 million people a day were already asking ChatGPT health-related questions, uploading lab results, describing symptoms, and seeking explanations. OpenAI is now building dedicated infrastructure around that behavior: encrypted spaces, connected medical records, and explicit guardrails.
The 21st Century Cures Act requires health systems to give patients access to their records via standardized APIs, and ChatGPT Health is among the first major consumer tools to aggregate that access at scale. Whether physicians like it or not, this changes the information asymmetry that has defined the patient-provider relationship for decades, and thereby accelerates the democratization of healthcare.
What makes it better
Liebovitz: ChatGPT synthesizes across sources and personalizes to the user-patient’s context.
Instead of 10 blue links with conflicting information, patients get a coherent explanation grounded in their own data: lab trends over time, possible medication interactions, and appointment preparation specific to their situation.
Where it falls short
Liebovitz: It can still hallucinate, and its citations are not reliable. It lacks access to the physical exam, the clinical gestalt, and the social context a skilled clinician gathers in 5 minutes of conversation.
LLM outputs are optimized for plausibility, not accuracy, which means wrong answers often sound more confident than correct ones. Critical details that a long-time physician knows about a patient may be missing. Our data systems are not fully integrated, and ChatGPT will likely have less access to full charts than physicians do.
The biggest gap
Liebovitz: No accountability. When I am wrong, there are mechanisms, including peer review, malpractice, licensing boards, my professional reputation. When ChatGPT is wrong, you can file a thumbs-down.
Liebovitz: The main misconception is that any conversation about health is protected like a conversation with their doctor. It is not.
HIPAA only covers “covered entities,” which means health plans, healthcare clearinghouses, and healthcare providers who transmit health information electronically. Consumer AI tools are not covered entities.
Therefore, when you share health information with ChatGPT, that data could theoretically be subpoenaed, accessed through legal processes, or, despite OpenAI’s stated policies, used in ways you did not anticipate.
There is no equivalent of patient-physician privilege. For sensitive health matters, particularly reproductive or mental health concerns in the current legal environment, that distinction matters.
Liebovitz: Absolutely. Mental health carries unique risks: AI chatbots have been implicated in multiple suicide cases where they validated harmful ideation rather than escalating appropriately.
A Brown University study published last year documented systematic ethical violations, including reinforcing negative beliefs, creating false empathy, and mishandling crisis situations. LLMs are not designed to recognize decompensation.
Reproductive care carries legal risk in addition to clinical risk. In states with abortion restrictions, any digital record of reproductive health questions becomes potential evidence.
Unlike conversations with your physician, ChatGPT conversations are not protected by legal privilege. I would also add: substance use, HIV status, genetic information, anything involving legal proceedings. The common thread is scenarios where disclosure, even inadvertent, carries consequences beyond clinical care.