SRI Appoints Bruce Schneier as Visiting Senior Policy Fellow

Global security expert Bruce Schneier joins the University of Toronto’s Munk School and the Schwartz Reisman Institute as a visiting fellow to tackle one of today’s defining questions: how can we build AI systems—and societies—that people can truly trust?

Few thinkers have done more to reframe how we understand security in a networked world than Bruce Schneier. To him, security isn’t just about cryptography or code—it’s about trust, power, and the human choices embedded in every system we build.

For three decades, Schneier has asked what it really means to be secure, and who gets to decide. From designing cryptographic algorithms to writing bestselling books that redefined public conversations on privacy and power, the Harvard-based security expert has become one of the world’s most trusted interpreters of how technology shapes society.

Now, for the 2025–26 academic year, Schneier joins the University of Toronto as a visiting fellow at the Munk School of Global Affairs & Public Policy and a visiting senior policy fellow at the Schwartz Reisman Institute for Technology and Society (SRI). His appointment brings a leading voice in digital trust and governance into dialogue with researchers, policymakers, and technologists across U of T, drawing on his deep background in security and governance to help frame the policy challenges of AI, trust, and societal risk.

During his time at U of T, Schneier will participate in several high-profile initiatives, including contributing to a newly formed interdisciplinary AI & Trust Working Group led by SRI Research Lead Beth Coleman, and delivering a keynote address at the 2025 T-CAIREM Conference: The Evolution of Generative AI, which gathers clinicians, data scientists, and engineers to explore the rapid advance and risks of generative AI.

Schneier’s visit comes at a pivotal moment, as governments and universities alike grapple with how to secure systems that are no longer merely technological but political—systems that increasingly define how we see, decide, and trust.

As he begins his appointment at U of T, Schneier sat down with SRI to discuss the future of AI governance, the social dynamics of hacking, and why security is ultimately a story about people.

The following conversation has been lightly edited for length and clarity.

What drew you to join U of T’s Munk School of Global Affairs & Public Policy as a visiting fellow, and how does this connect to your work in the human dimensions of security?

I’ve been teaching cybersecurity at the Harvard Kennedy School, which is a public policy school; the way I explain it is that I teach cryptography to students who deliberately did not take math as undergraduates. And I enjoy doing that—it’s a great community. My partner and I wanted a sabbatical, so coming up to U of T was natural, and I’ll be doing the same thing here, teaching cybersecurity in the winter semester.

Broadly, I think about the intersections of security, policy, and people, which really means that I’m looking at technologies and how they are used, regulated, and misused. My previous book, A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back (2023), looked at hacking social systems like the tax code. My current book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship (2025), is about AI and democracy. I’m thinking a lot about the notion of integrity in computer systems, and I don’t mean that in a moral sense; I mean it as the mathematical property of integrity: what does it mean to have an integrous system? How do we build integrous AI? To me, that is an underpinning of trustworthy AI. So, I’m excited to be here and engage with a diverse community of scholars on what is a very multidisciplinary way of looking at society, the world, and technology.

Your early career was rooted in cryptography and system security, while your more recent work engages with social dimensions of policy, power, and trust. How do you see this evolution from technical to policy thinking? 

My career has been an endless series of generalizations. My first book was on cryptography and the mathematics of security; then, I started writing about the computer science of security and general security technologies, followed by work on the economics, psychology, and sociology of security. Following that, I started to write more about public policy involving data privacy. In A Hacker’s Mind, I take the notion of hacking—which is a very computer-focused term—and apply it to non-computer systems. And now, here I am writing about AI and society. For me, it is a way of trying to figure out how things fit in, how they work.

For all of us, I think the intersection of tech and policy is critical. The reason I teach tech to a non-tech policy audience is because they need to know that stuff, in the same way that techies need to know some policy. Decades ago, those two camps could be separate. But now, technology policy is social policy, and we need people who speak both languages to figure out what to do—both on the tech and on the policy side. A lot of my work today is trying to bridge those two communities.

A Hacker’s Mind argues that hacking is about exploiting systems—not just computers but legal, financial, and political systems as well. How does that apply to AI governance? Are policymakers today thinking like hackers? Are they being hacked by the systems they regulate?

Hacking is happening everywhere. The way I think about it is that hacking is finding loopholes. You can find loopholes in computer code and you can find loopholes in the tax code. Certainly, in the U.S. especially, we are seeing lobbyists hack the legislative process—inserting loopholes, bugs, and vulnerabilities into legal code and law. So, yes, legislators are being hacked. Democracy is an information system, and thinking about it that way gives us a lot of power. A lot of what I’m doing is applying systems thinking to areas where most people don’t think of it that way, and I think that’s a very fruitful way of looking at the world.

As AI becomes more embedded in decision making and infrastructure, where do you see the greatest security and trust failures emerging? 

AI systems are, in a sense, synthetic humans—in limited ways and for limited applications. They are replacing human cognition. Now, if you think about it, we have a lot of experience with humans that make mistakes, that try to cheat, and that work against the system. We have millennia of experience dealing with malicious humans. To the extent that AIs mimic humans in that way, we’re kind of covered. Where we have more trouble is where AIs do it differently—where they subvert systems differently, where they make different sorts of mistakes. I worry about that divergence. I also worry about the speed at which these systems will be implemented, even if they are trustworthy. We have a lot of AI, but we have very little trustworthy AI. And that’s worrisome as these systems move into positions of power and authority.

What responsibilities do universities and other public research institutions have in shaping the next generation of technologists and policymakers, especially at a time when private companies are driving so much of the AI agenda?

Corporations have a narrowly focused, near-term, for-profit agenda, which makes them basically untrustworthy as policy partners. They’re not doing things that are best for society—they’re doing things that are best for them. Universities, on the other hand, are a place for thinking about what’s best for society, and a place where AI research happens that isn’t near-term financially motivated.

There’s been a push worldwide in the past couple of years for public AI systems that are not under the control of a corporation. Switzerland released a public AI model about a month ago. Singapore has a public AI model that’s optimized for Southeast Asian languages. Universities need to band together to build these non-corporate models. It’s possible—the cost to build these models is dropping rapidly. So, we’re starting to see non-corporate models that compete with the best corporate models, and I think it’s vitally important to have models out there that aren’t built on the profit motive.
