We talk with physician and writer Bob Wachter about why he’s cautiously optimistic that artificial intelligence will usher in a ‘golden age’ of medicine — and the questions he still has about these powerful new tools.
For the podcast this week, we sat down with physician and writer Bob Wachter for a wide-ranging conversation about his new book — A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future.
“This is the greatest experiment in the history of medicine,” Wachter said of the rapid rise of artificial intelligence in health care.
Wachter recalls watching health care leaders and tech entrepreneurs bungle similar endeavors before, like the clunky adoption of electronic health records two decades ago. But he sees some signs that his industry has learned from its painful past missteps.
For example, Wachter points to the rapid adoption of digital scribes — AI-powered tools that record doctor-patient conversations and convert them into notes with actionable steps for the doctor. Digital scribes are not revolutionizing medicine, but they are, according to Wachter, solving a real pain point for doctors and patients. It’s an example of health care learning to not “go too far too fast” with new technology, Wachter said.
Still, there are plenty of thorny questions that Wachter believes could derail the potential for AI to improve the quality and cost of our care. Will hospitals effectively root out bias in the algorithms they use? Who is legally liable if AI makes a clinical error?
Wachter predicts the widespread adoption of AI tools in health care will be “slow and bumpy.” But he also believes it’s inevitable. “The needs of the health care system are just so massive, I don’t think they can be met without AI intervening,” he said.
We hope you’ll listen to hear more of Wachter’s thought-provoking predictions about how AI will reshape health care — including which workers are most at risk of being replaced.
Episode Transcript and Resources
Episode Transcript
Dan Gorenstein (DG): Is artificial intelligence a true health care revolution? Or just a shiny sideshow?
That’s a question I’ve been turning over these past few weeks as I’ve made the rounds at a couple of tech conferences.
The hype certainly has hit a fever pitch.
News clip: AI has been called medicine’s biggest moment since antibiotics. // One that could revolutionize the way we diagnose, treat and even predict diseases // Massive change is coming to medicine.
DG: Even for a seasoned reporter who talks to a bunch of smart people, it’s hard to know what to make of AI’s true potential to reshape health care.
So I called up a guy I’ve known for more than a decade.
DG: Before we start, Bob, I gotta ask you, did you use AI to prepare for this interview?
Bob Wachter (BW): I did not. I’m doing it just fresh using my waning number of brain cells.
DG: Bob Wachter has been a doctor for 40 years. He runs the department of medicine at UC San Francisco and works in the heart of Silicon Valley.
And, he’s got a new book out called A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future.
Today, we talk with Bob about why he’s cautiously optimistic that AI will usher in a “golden age” of medicine, and the questions he still has about these powerful new tools.
From the studio at the Leonard Davis Institute at the University of Pennsylvania, I’m Dan Gorenstein. This is Tradeoffs.
******
DG: Okay, so Bob, I mean, why now with the book? Like at a time when it feels like this AI technology is still changing so rapidly, what did you hope to add to this already very noisy conversation?
BW: Yeah, Dan, I think this is the greatest experiment in the history of medicine that we’re bringing this technology into this high stakes world with doctors and nurses and patients and lots of money and lots of complexity, 20% of the economy.
And the books I’ve written for lay people are on topics I really want to understand better; if I can understand it, then I can articulate it in a way that I hope is useful.
But I knew I needed to write a book that wasn’t going to feel like it was out of date the day it came out, and so I tried very hard not to be so tied to the technology as it exists today and to spend more time thinking about what the meaning of this is, hoping that that would be sort of a durable contribution.
DG: You write in your book, Bob, about the last time technology showed this much promise to transform health care.
The arrival of the electronic health record or EHR.
And as you know, I mean so many smart people promised how their software systems would quote “democratize medicine” or revolutionize health care. I mean like ad nauseam we heard this stuff.
Instead though, as you write, the clunky arrival of EHRs reminded us of “how awful humans are at anticipating the consequences of new technologies.”
And I mean, I’m not taking a shot at you here, but like here you are attempting to anticipate the consequences of a new technology.
BW: And with hopefully some humility about, about none of us are great at this. But this is arguably our most important industry being turned upside down by a technology we don’t fully understand. So somebody’s gotta try at least to do the best they can to articulate what are the key issues at least, what are the forces that will drive success or failure.
I thought I was in as good a position to try to do that as anybody. But yeah, the further out we get, as we talk about what this is gonna do to health care five years from now, you know, I think your humility has to bubble up a lot because I think we’re not sure. There are a lot of moving parts here.
DG: Fair enough, Bob. I really appreciate that you’ve watched this transformation going from paper to digital, and now you’re watching this second transformation.
BW: Correct.
DG: And I’d really like to talk with you about the lessons learned from health care’s shift to electronic records because in some ways I’m betting that that offers some clues as to what to expect and what questions we should all be watching for.
You write that doctors are, “Glad the electronic health record is there and would never wanna return to paper. But most of us find the EHR to be surprisingly unhelpful in our efforts to provide higher quality, safer, and less expensive care.”
Bob, you’re the doctor. Give us your one line diagnosis here. Why did the high hopes for EHRs fall so short?
BW: I’ll do it in two lines. Sorry, Dan. But first line is we did not recognize that the tools such as they were, are essentially big storage cabinets, big digital storage cabinets.
I think in some ways it was a problem with expectations that we expected that this would not only collect data, but that it would also provide a massive amount of sort of useful clinical intelligence, which it did not, and was mostly not built for. In part because so much of our data is stored in the form of notes, in the form of narrative, and there was really no tool that could look at my note and make sense of it.
That was part of it. And part of it was humans just suck at this. Humans bring these technologies into their workplace, they turn on a switch and they say it’s gonna transform everything and make it better. It doesn’t work that way.
You have to not only iterate on the technology but more importantly you have to actually change the nature of the, the work and how you organize things and how you, you know, the leadership and governance and that frequently takes a decade or two.
DG: What I’m hearing you say here, Bob, is that you know, we thought we’d just flip the switch and things would change but really a big part of this was knowing that the new tool is only gonna be as good as those systems that were being implemented around the new tool.
BW: I think that’s accurate and part of it was health care’s failure to understand that it’s real work and takes real money and people to take all this data, analyze it, and use it for business intelligence.
You know, at UCSF until a couple of years ago, we probably had 10 or 20 people whose job it was to take all the data coursing through the digital veins of the system and make sense of it, and then feed it back to people, doctors, nurses, to help us do better, taking care of patients.
My son works for an organization that also has 20 or so people that do data analytics for the purpose of making the organization perform better. His organization’s called the Atlanta Braves. And they have a total staff of 500 people in the company. So 20 out of 500 take every bit of data from every pitch and use it to try to make them better at their jobs. We have about 20 people out of maybe 30,000.
DG: So do you think because now that data is here, now that healthcare leaders have sort of learned like, “oh, we weren’t quite really ready for the data. We didn’t have the systems in place. We didn’t have the personnel in place to sort of onboard this properly.” Do you think with the rise of AI, more leaders are actually ready to make the kind of investment you’re talking about the Atlanta Braves do?
BW: Maybe. I mean, I think part of…
DG: Is that too optimistic?
BW: Yeah, maybe a little too optimistic in that, you know, that investment in some ways is predicated on something that sadly is often not the case, which is, you know, what is the accountability of a health care organization to be better?
The accountability that the Braves have to win games is 100%. The accountability that we have to deliver perfect care is a little wimpy.
DG: More like the Marlins or something.
BW: Yeah, right. Exactly. I think most leaders in health care are not super up to speed with AI.
Actually, one of the reasons I wrote the book was, I was hoping that some of them would read it and it would help them kind of get up to speed.
But the reason I’m pretty optimistic about AI’s role in health care is one, that the technology today is truly game changing, I mean really different than anything we had before.
The second is, if you think about it, over the last 10 or 20 years our go-to move has always been to hire more humans. Primary care is impossible? So what do we do? We hire nurse practitioners. Billing has become more complicated, or we need a prior auth from the insurance company? We now have, you know, hundreds of people in the billing department.
This has been great if you count on health care to be your full employment engine, which it has been for the last 20 years, but I think if you’re a health care leader today, you basically say, we can’t afford all of these humans, and in many cases we can’t even find them anymore.
So, unlike most parts of the history of medicine, where we’ve been pretty late to the party, here I think the unmet needs meeting this technology that’s really remarkably good are leading to an explosion of the use of this stuff in health care, and I think it’s all very exciting.
DG: The share of doctors using AI nearly doubled in just one year, according to a 2024 survey by the American Medical Association.
Two out of three docs now report tapping AI for tasks like charting or planning patient care.
When we come back, Bob explains why robots still haven’t replaced radiologists, and he envisions a world where insurance companies charge you extra to see a human doctor.
BREAK
DG: Welcome back. We’re continuing our conversation with physician and writer Bob Wachter, author of the new book, A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future.
Before the break, Bob, you made it clear you’re quite optimistic about the potential for AI to make a real difference in health care.
And in your book, you say one of the earliest signs that things are headed in a promising direction is the rapid rise of so-called digital scribes.
These are AI-powered tools that record doctor-patient conversations and convert them into notes with actionable steps for the doctor.
To me — reporter, not physician — this sounds more like a glorified transcription service. Hardly a health care revolution, but you seem stoked. Why?
BW: It’s a proof of principle. It is. All right. Here’s an AI tool that we can bring into a health care system. It is wildly popular among physicians, to the point that if we turned it off, I think we’d have docs threatening to leave. Patients notice that the doctor’s now looking them in the eye.
So I guess what I’m stoked about is AI coming into a doctor patient relationship context can serve a useful function, save some time, improve productivity, and improve the experience of both the clinician and the patient. But it’s a single, it’s not a home run. It can’t be where you end.
The system has too many unmet needs. But you gotta get the early stages right, and not just get buy-in. This is not all sort of politics. It’s actually learn lessons and, and be humbled by the experience with the EHR so that we don’t go too far too fast or over promise.
DG: So one of the reasons that you’re excited is because you see the scribes as a sign that to some degree health care has been humbled that they’re starting with the single instead of the home run.
BW: Correct. And why is that important? If you start on the really, you know, high-stakes stuff and something bad happens and you haven’t built up this reservoir of trust, everyone’s gonna say, “See, I told you so, this thing’s not ready.”
DG: One last question about these AI scribes. You note in your book that while there are high hopes these scribes will lead to real savings, we’re certainly not there yet. One study showed, for example, these tools cut doctors’ documentation time by about 5 to 10 percent — a pretty modest return on tech that can cost big hospitals millions of dollars.
I guess, just using this sort of example in a more general way, Bob, I’m wondering, should we actually expect AI to slash the $5 trillion a year we now spend on health care?
BW: I don’t know. That’s an open question because, you know, if my health system brings in an AI scribe and it allows us to spend a few minutes less documenting each day, that doesn’t obviously translate to you as a patient having a smaller bill or a smaller bill being paid by Medicare or the insurance company. We probably pocket that because our margins are so slim that maybe we repurpose that to, you know, put one extra patient on my schedule every day. It’s not obvious it saves the health system money which obviously is the key issue.
If you think about, you know, labor disputes, government shutdowns, they’re all about health care costs. So if it doesn’t save money, you probably haven’t tackled the most important question.
DG: How could AI actually save us money?
BW: Well, if you look at the amount of hiring of human beings we have to do for things that probably don’t need to be done by human beings in health care, there’s a lot of money there.
You know, there are estimates that 30% of our costs are administrative. You can imagine a world where a third of that is taken away through effective use of AI. Even though it isn’t free, it’s cheaper than hiring a nurse practitioner or a care coordinator or someone to be in the call center.
The second way it would save money is through decision support, meaning that, you know, I have a patient with cancer and there’s a drug for $2,000 and another drug for $20,000, and the AI decision support tells me that $2,000 drug works just as well.
That could save money if it drives us to more cost effective care. You can also imagine a world where my healthcare system makes more money if I prescribe the $20,000 drug and the system pushes me to do that.
So I think a lot of it depends on what the interests of the stakeholders are and how the payment system syncs up with the AI. Those are the two main mechanisms: labor replacement or driving us to more cost-effective choices.
DG: Got it. Okay, switching gears here on you, Bob. As you continue to weigh the pros and cons of rolling out certain AI tools, we know that there can be bias baked into these algorithms, right? These data sets, these models, can have racial, ethnic, or other disparities, you know, cooked into ’em. How do you think about that?
BW: Yeah. There’s clear evidence that in our medical care there are biases. One of the studies I quote was from Atlanta, where researchers looked at two patients who came to the ER with the same fracture, and the white patient got more pain medicine than the Black patient. But that was with humans. And so what’s the risk of AI? AI will take that and scale it and say, well, that appears to be right, that, you know, this Black patient needs less medicine than white patients.
So there is a risk that it will scale biases. I think that if I had to take one side of the equation or the other on bias, I would say the ability of AI tools to look for and therefore mitigate biases is greater than the potential for humans to do that.
DG: Bob, I’d like to end on a couple of questions about AI’s impact on the health care workforce. And I want to cut to the chase with you: How many doctors and nurses will artificial intelligence ultimately replace and how soon?
BW: I’d say none in the next five to 10 years. And I think anybody who tells you they can predict anything beyond 10 years is making it up. And the reason I say none is clearly the most vulnerable fields are radiology and pathology — the two fields that really are about looking at a collection of digital dots and comparing them to a pattern.
Ten years ago, if you said to me, which happens first, our radiologists are all unemployed, or I get in the backseat of a driverless car and take a nap, I would’ve said the radiologists are toast.
I can tell you at UCSF today, we cannot hire radiologists fast enough. And our radiologists are desperate for AI help to get their work done. And what that means is, A, replacing physicians is harder than it looks, and, you know, physicians also have pretty good lobbies, and we don’t really know how to bill if a doctor doesn’t sign the note, and we don’t know who to sue if a doctor doesn’t sign the note.
So even in the most vulnerable fields the workflow pressure is so great that even if these tools make us 50% more productive, there’s still plenty of jobs.
DG: And what about people who work in primary care? Could you eventually see AI replacing them?
BW: I think the day-to-day management of your cholesterol and your diabetes and your blood pressure will be done by AI maybe with a doctor there in some triage protocol if you seem to have something really complex. But people ask me, you know, would you tell your kid to go into medicine? And the answer is yes, because I think it’s actually gonna make our jobs better and more interesting and more human for the foreseeable future.
For the nonforeseeable future, who the hell knows, but if doctors don’t have jobs, it means that the accountants and the lawyers and, sorry, the journalists have all gone five years before. I think medicine’s probably the hardest thing to replace.
DG: Do you think I’m gonna be gone?
BW: Probably, yeah.
DG: Do you actually?
BW: Um, no because you’re great at your job and I think there will always be a role for people that are really, really good at their job, but I think there’ll be fewer jobs. My wife’s a journalist, and I think there are real risks when these tools write as well as they do and can fake being a podcast host. It can’t be that every job is safe.
What do you think, by the way, do you think you’ll have a job in 20 years?
DG: Um, I guess I am optimistic. I think the question of trust is only gonna become greater, and whether we’re talking about doctors and nurses or podcast hosts, I think wanting to know that we can trust this, that it’s reliable, that we have a relationship with a person, is going to feel that that’s gonna become even more valuable than it is today.
BW: I hope you’re right. I tried to go into this recognizing my bias as a human, that I’m rooting for the humans and, and think it’s apocalyptic if everybody loses their job. But yeah, I mean, you don’t really cost very much, in terms of, you know, people being able to listen to the show essentially for free. Whereas for a doctor, the issue may be yes, I don’t want a bot telling me if I have cancer, but what if it costs me $5,000 extra a year for my insurance plan?
You know, you could see this going in the way of accountants and travel agents that sure, certain people who can afford it or have particularly complex needs see the human. But for everybody else I think we’ve gotta go into this with as few biases as we can muster in terms of like how much these people need me.
I mean, I think I’m pretty good, I’ve been doing this for 40 years, but you know, the proof will be in the pudding. And the proof may be: you can see the real psychiatrist, and that’s $300 an hour, or you can see the GPT psychiatrist, and that’s $20 a month. Is it good enough? And I think that’s the kind of question that real life is gonna pose to us.
DG: I’m glad we got to this point because I did wanna ask, as AI continues its march forward here through health care, are we gonna see some kind of tiered system where wealthy people have some amount of AI, but also some amount of human contact, whereas low income people are gonna have just almost exclusively AI?
BW: I think that if you had to guess about where this all goes, I think that would be a reasonable guess.
Health care is so expensive, we will probably move to a system where we say there’s a basic level of care that’s akin to coach on an airplane. And it’s safe and it’s generally effective, and maybe it’s convenient because it’s AI, but it’s sort of not that personal. And for the things that it can handle, like managing a lot of chronic diseases and your vaccinations, it does okay. And if you can afford either the insurance plan or paying out of pocket for the human care, you do that.
And I guess, is that good or bad? I don’t know. I mean, ultimately we can’t afford 20% or 25% of GDP on health care and we can say, well, we love the current system ’cause patients get to see a human being, but actually they don’t. Actually, if you have good insurance, you can get in to see a primary care doctor or a cardiologist. And if you don’t, you probably can’t in many circumstances. You’re stuck going to the ER.
DG: What I think you are saying, Bob, is that for many people our current system — with humans — falls pretty short on access, on affordability, particularly for low-income patients.
Your point is that maybe this sort of tiering — where wealthier people can get more human care if they pay for it — may be an improvement if it means more people get more access to, like, an adequate level of care?
BW: Yeah. Health care shouldn’t be about jobs, really. It should be about how do you deliver the best care and the best outcomes to people at the lowest cost and the most convenience. If AI helps us do that, I think it’s great.
And for those who then can afford to buy a higher level of, of maybe care and feeding, but maybe no greater clinical outcomes, I don’t see anything super wrong with that in the same way, I don’t see anything terribly wrong with concierge care for patients that are able to afford that.
DG: And, you know, going back to what you talked about with the electronic health records, right? It’s not a switch you just turn on and AI is not gonna be a switch either. There’s going to be, I’m guessing, a level of involvement and engagement, at least for some period of time, the humans are gonna have to be really involved so the AI can work more seamlessly within these systems.
BW: Yeah. I mean these are incredibly complex systems at every level, operationally, clinically, ethically, financially. It’s gonna be slow and it’s gonna be bumpy, and there’s gonna be a step forward. And then there’s gonna be a screw up.
We see some horrible stories of mental health chatbots doing bad things with, you know, with teenagers. Awful. There should be regulation, there should be standards.
On the other hand, there are, you know, millions of people getting care that they find useful from these things at a cost of $20 a month. And so, you know, Biden’s old line, “Don’t compare me to the Almighty. Compare me to the alternative.” The alternative in many cases is not very good, and the bar shouldn’t be some mythical perfect state.
I’m worried about AI in the rest of society. I think there are some real concerns and questions, but I landed in a pretty optimistic place in this book in part because the tools are really good, but in part because the needs of the health care system are just so massive, and I don’t think they can be met without AI intervening at this point.
DG: Thanks so much for taking the time to talk to us on Tradeoffs, Bob. Really appreciate it.
BW: It was a joy, Dan. Good to see you.
DG: I’m Dan Gorenstein and this is Tradeoffs.
Episode Resources
Additional Reporting and Resources on AI in Health Care:
- In a financial pinch, major health insurers are turning to AI for help (Casey Ross, STAT News, 2/17/2026)
- Stop Worrying, and Let A.I. Help Save Your Life (Bob Wachter, New York Times, 1/19/2026)
- If A.I. Can Diagnose Patients, What Are Doctors For? (Dhruv Khullar, New Yorker, 9/22/2025)
- Artificial Intelligence and Health Care Waste—Promise or Peril? (William Shrank, Suhas Gondi and David Brailer; JAMA Health Forum; 6/13/2025)
- The Economics of Artificial Intelligence: Health Care Challenges (NBER, 2024)
- Building Equitable Artificial Intelligence in Health Care (Anna Zink, et al; Urban Institute; 9/28/2023)
Episode Credits
Guests:
- Bob Wachter, Chair, Department of Medicine, UC San Francisco; Author, A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future
This episode was produced by Leslie Walker, edited by Ryan Levi and Dan Gorenstein, and mixed by Andrew Parrella and Cedric Wilson.
The Tradeoffs theme song was composed by Ty Citerman. Additional music this episode from Blue Dot Sessions and Epidemic Sound.
