Beyond AI in Medicine’s Chatbot Hype
This article draws from an email interview with Mr. Schneier and his remarks at the 2025 T-CAIREM Conference: The Evolution of Generative A.I.
This academic year, the University of Toronto has been fortunate to have Bruce Schneier as a visiting senior policy fellow with the Schwartz Reisman Institute for Technology and Society (SRI). Schneier is internationally renowned for his work as a public-interest technologist, cybersecurity expert, and New York Times best-selling author. We recently caught up with him to learn how he sees AI evolving, and especially how it could impact medicine in the future.
As AI becomes more pervasive in medicine, what do you see as the biggest security challenge?
Data integrity is the single most important security problem, and directly affects the trustworthiness of AI systems. It’s something we are just going to have to figure out if we expect to use AI in health care. When I think about these systems and how well they work, I think about ensuring the input is correct, that the system processes it properly, and that the output is trustworthy.
Health care data has specific privacy and security requirements. Imagine that we train an AI on all our personal medical data—everybody, all of it. It would be an amazing system, and I think it would result in significant advances in medical science. But—YIKES!—just think about that trove of highly personal data. We are going to have to figure out how to realize the collective benefit of our medical information without risking the privacy of individuals.
Large Language Models seem to have captivated the public imagination. What’s surprised you the most about the growing use of LLMs in AI health?
That they’re being used without integrity protections. When dealing with applications as important as medicine, it’s critical to ensure that the information being processed is both accurate and complete, and that the information generated by the model is likewise. Modern LLM systems don’t provide either of those safeguards, and that’s a problem. I worry both about errors and adversarial action, and how those might affect the integrity of the model and the health of the patient.
What do you think might be lost if society eventually adopts AI doctors?
When I imagine an AI doctor replacing a human, I want to ask: compared to what? Imagine a rural clinic somewhere on this planet where there isn’t a doctor at all. Compared to what? Nothing. Maybe a minimally trained health worker with an AI doctor is way better than what was there before. You’re losing something, but also gaining something. It’s very different to sit where we are in Toronto—with middle-class access to human doctors—and say we’re going to lose something. In many places, much will be gained as well.
Do you think we’ll see AI doctors soon?
When you think about where AI can do the work of a human, it doesn’t have to be better in every way. It has to be better in one of four very specific ways: speed, scale, scope, or sophistication.
I work in computer security. Humans are really good at identifying spam emails. Computers can be, too. But when you receive a billion emails per second, humans can’t process them fast enough. So a mediocre AI will do better because it’s faster. That’s speed.
An example of scale is the ability to deploy millions of propaganda chatbots on the internet. In 2016, Russia had to hire people and put them in a building in St. Petersburg to do propaganda. The scale of AI chatbots makes them a dangerous propaganda tool in 2026.
There’s also scope to consider. When I saw an AI scribe transcription example of emergency room visits, it was the scope of the AI that was valuable. It knows something about everything in a way no particular doctor might.
Finally, sophistication means something very specific: doing more complex modeling than humans can. The AI that basically won the Nobel Prize this year for protein folding is able to do more sophisticated modeling than humans.
What do you think will be the biggest change we’re going to see from AI in 2026?
AI changes all the time. Things that weren’t possible six months ago are possible today. Things that seem impossible now may be possible six months from now. You cannot look at the technology once, make an assessment, and walk away. You have to keep engaging because it’s changing really fast, all the time.