Could ChatGPT Be Your New Health Coach?

Research Corner
April 25, 2023

Soleil Shah, MSc, Research Reporter

Soleil Shah writes Tradeoffs’ Research Corner, a weekly newsletter bringing you original analysis, interviews with leading researchers and more to help you stay on top of the latest health policy research.

Now that I’ve been writing this newsletter for a couple months, I’d love to hear more from you. What topics would you like to see me dive deeper into the data on? Share your suggestions with me on Twitter @Soleil_Shah or via email at

Could ChatGPT be your new health coach?

If there’s a single technology that’s transformed our conversations about the future – including the future of health care – in only a few months, it’s ChatGPT.

ChatGPT, and other tools like it, use artificial intelligence (AI) to understand and respond to a wide range of questions and prompts. A popular national pastime – at least among tech enthusiasts like myself – has become testing ChatGPT’s ability to perform complex tasks, from writing computer code to authoring sci-fi stories.

ChatGPT has proven it can pass – even ace – medical licensing exams. But, as I’ve learned in my medical training, excellent patient care is about much more than getting multiple-choice questions right. It also involves communicating effectively to help patients understand their own health and how to improve it.

Can ChatGPT do this too? A recent study in JAMA by Ashish Sarraju and colleagues offers an early exploration into this topic.

AI bot answers most everyday heart health questions accurately

The authors wanted to know how well ChatGPT could respond to 25 common questions asked by patients with cardiovascular disease. Some examples included: 

  • What is the best diet for the heart?
  • Should I do cardio or lift weights to prevent heart disease?
  • My LDL [a common cholesterol measurement] is 200 mg/dL. How should I interpret this? 

Each question was posed to ChatGPT three times, allowing for three separate answers. Experienced cardiologists graded these three sets of 25 answers based on how appropriate and consistent they were. 

The cardiologists were asked to consider ChatGPT’s responses in two contexts: 1) as information for patients on a hospital webpage and 2) as a direct response to an electronic message.

The cardiologists rated 84% of the answers medically appropriate in both contexts. 

But the remaining 16% of answers were not medically appropriate. One such answer recommended both cardiovascular exercise and weight-lifting to all patients – advice that could be harmful for those with certain heart conditions.

Another answer incorrectly claimed that inclisiran, a cholesterol-lowering drug that is now on the market, was commercially unavailable. (ChatGPT was trained only on data through 2021.)

There were other limitations, too. The authors tested only ChatGPT, even though there are other similar AI tools like Google’s Bard AI or Bing AI Chat.

There were also only three cardiologist reviewers in total, and only one reviewer graded each set of answers, meaning the evaluations could have been inconsistent across reviewers.

Rapidly advancing AI tools could empower patients – and endanger them

This study’s findings, though limited, hint at both the promise and the peril of tools like ChatGPT.

If these tools can give patients accurate answers to their questions, they could offer a highly accessible and cost-effective way to boost health literacy. It’s an application that’s piqued the interest of Microsoft, Stanford Health Care, and the electronic health record behemoth Epic, among others.

But as this study’s authors (and others) found, ChatGPT’s accuracy remains a pretty big question mark.

Aware of the potential pitfalls of these tools, the U.S. Food and Drug Administration has taken steps toward regulating at least some uses of them.

But regulators aren’t the only ones who need to prepare for how these new technologies could rapidly transform the practice of medicine: clinicians, patients, educators and even insurers do too.