How Will AI Affect Democracy?
Two AI experts join Governors Bredesen and Haslam to discuss the potential impact of AI on democracy
Listen to the Audio on Baker.UTK.edu
Policymakers are increasingly focused on how to regulate AI, but what impact might AI have on democracy itself? The risks of AI technology for the democratic system, including misinformed voters and manipulated election processes, are becoming more evident by the day, but is it all bad news? Dr. Sarah Kreps, a political scientist and director of the Cornell Tech Policy Institute, and Bruce Schneier, a technologist and Harvard Kennedy School lecturer, join Governors Bredesen and Haslam to dig into the good, the bad, and the unknown about how AI will impact democracy.
Transcript
Sarah Kreps: Whatever set of rules or regulations is put in place should be applicable, whether or not you agree with the person in the White House. I think often there’s a tendency to craft legislation or answers that suit the party that’s in control at the time, but I think it just needs to be incumbent upon those that are making those decisions to think, “One day it might be the case that the other guy is in charge.”
Bruce Schneier: AI is a transformative technology. It’s going to change a lot of things. It might change everything. The concern is it’ll change things faster than we in society can react, and we in society are pretty slow in reacting to change, like passing new laws and new regulations.
Marianne Wanamaker: Welcome to “You Might Be Right,” a place for civil conversations about tough topics, brought to you by the Baker School of Public Policy and Public Affairs at the University of Tennessee, with funding support from members of our Producers Circle. To learn more about how you can support our work, visit youmightberight.org.
Policymakers are increasingly focused on how to regulate AI, but what impact might AI have on democracy itself? The risks of AI technology for the democratic system, including misinformed voters and manipulated election processes, are becoming more evident by the day, but is it all bad news? In this episode, our hosts, former Tennessee Governors Bill Haslam and Phil Bredesen and their guests, dig into the potential impact of AI on democracy – the good, the bad, and the unknown.
AI Phil: How are you today, Bill?
AI Bill: I’m doing quite well, thank you. It’s a pleasure to be here as always, and today, we have a fascinating episode that delves into the profound implications of artificial intelligence on democracy and the upcoming 2024 election.
AI Phil: That’s right, Bill. Artificial intelligence is transforming our world at an unprecedented pace and its impact on our democratic processes is something we must explore.
AI Bill: Absolutely, Phil. AI has the potential to revolutionize campaign strategies, voter engagement, and even decision-making processes. However, as we navigate this rapidly evolving landscape, we must ensure that AI is utilized ethically, transparently, and in a way that upholds our democratic values.
AI Phil: Well said, Bill. By engaging in these discussions, we aim to unpack the opportunities and challenges that arise with the integration of AI in our electoral systems. Our goal is to foster a deeper understanding of this complex topic and empower our listeners to form their own informed opinions.
AI Bill: So dear listeners, get ready for an enlightening conversation as we explore how artificial intelligence will impact democracy and the 2024 election.
Bill Haslam: Well, Phil, that’s actually how the conversation would go between us if it was AI-generated. What you just heard, what our listeners just heard was an AI-generated discussion between the two of us.
Phil Bredesen: I’m sorry to hear that. I thought I was very articulate in that discussion.
Bill Haslam: We both did sound a little more thoughtful than we are in real life.
Phil Bredesen: Yeah, well, I’m sure it’ll get better as time goes on.
Bill Haslam: There are profound questions, obviously, around AI, and what we want to do in today’s conversation is bring in some folks who are in the middle of this, thinking about it full-time. All of us are concerned. I put myself in the camp of people who know enough to be dangerous about what it is and how it can work. I know enough just to be worried about it.
Phil Bredesen: Well, I think it’s one of those technologies and changes in the world where it’s very hard to see what the long-term implications are. It’s a little bit like the internet in its early days. Who would’ve imagined what it is today? It just takes on a life of its own. So I’m not sure anybody can figure out where it’s going over the next 20 or 30 years, but anytime there’s a profound change like that, a profound new piece of knowledge and technique in the world, I think it’s something we in the public sector have to talk about, because it fundamentally has the potential to affect the democratic process and how our public process works.
Bill Haslam: I think your comparison to the internet is a good one. As we look at its impact on democracy, we’d say, “Oh, there’s a lot of great things.” It’s much easier to access information. It’s easier to do your research, do the homework, and then to communicate to constituents, but it’s also led to a lot of misinformation.
Phil Bredesen: I can remember back when I was mayor of Nashville in the early-to-mid ’90s, when things were just starting. Email was just arriving, where people had access to it and so on. Who could have imagined at that time social media and the impact it was going to have on the entire process? AI, I think, has the potential to be just as transformative.
Bill Haslam: Fortunately for us and particularly for me, who I know I have a lot to learn, we have two guests who spend a lot of time thinking about this and the implications.
Phil Bredesen: I think it’s going to be an interesting conversation and probably, who knows, the first of many as we go through these podcasts. This is an important subject.
Bill Haslam: What we want to do, for our listeners, is to be helpful as they think through difficult issues and to provide multiple views of very difficult issues. Let’s see if we can do that around the whole question of AI.
Phil, it’s safe to say that the whole country is aware of artificial intelligence. For some people, it’s an exciting thought, full of possibilities, and for others, it scares them to death. Fortunately, we have a guest today who knows the topic well and has thought through the ramifications, and I think we’re going to have a great conversation. Dr. Sarah Kreps is a US Air Force vet – sorry about almost three in the Army there – a political scientist and a professor in the Department of Government at Cornell, but she’s also an adjunct professor of law and the director of the Cornell Tech Policy Institute. She’s written five books, and she’s been a columnist for The Post, the International Herald Tribune, the New York Times, and USA Today. She’s a graduate of Harvard, with a master’s from Oxford and a Ph.D. from Georgetown. So she’s done her homework.
Sarah, thank you so much for joining us. We’re really looking forward to this conversation.
Sarah Kreps: Thank you. It is a real pleasure to be here.
Phil Bredesen: I’d like to start out, Sarah, with the thought that there probably are a lot of listeners who see this in the news but are maybe, because it’s so new, a little unclear about exactly what it is, how it works, and what the implications are. We’ve read about AI in entertainment. We’ve read about AI in the workplace and what it’s doing. Our interest is really AI in the public sector and the democratic process. I wonder, could you just take a little bit of time and explain to people what this is in this context, and help people get a foundation for the discussion, including the two of us?
Sarah Kreps: Yeah. So when we talk about this, artificial intelligence is a label that I think often gets slapped onto a lot of different things. What has happened in the last six to 10 months is that now there are very obvious consumer-facing applications of artificial intelligence. So your listeners may have heard of ChatGPT. This is sort of a fun toy: you can put in questions, you can give it the task of writing a New York Times-style article on democracy and AI. But we’ve had artificial intelligence for a long time. Computers, in a way, are artificial intelligence.
So we can think, for example, of Netflix. When you watch a movie and they say, “If you watch this movie, you might also like these,” that’s machine learning. That’s trying to figure out, through a lot of data, patterns of behavior. So I think for our purposes here, talking about democracy, we’re probably most interested in the AI when it comes to machine learning and texts and visuals and videos. Is that fair to say?
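To make the recommendation example concrete, here is a minimal sketch of the kind of pattern-finding Kreps describes – item-based collaborative filtering over a toy ratings matrix. The movies, ratings, and similarity measure are invented for illustration; this is not a description of Netflix’s actual system.

```python
# Minimal item-based collaborative filtering sketch (illustrative only).
import numpy as np

# Rows = users, columns = movies; 0 means "not rated". All values invented.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)
movies = ["Movie A", "Movie B", "Movie C", "Movie D"]

def cosine_similarity(a, b):
    """Cosine of the angle between two rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def similar_movies(index, top_n=2):
    """Rank other movies by how similarly users rated them."""
    target = ratings[:, index]
    scores = [
        (movies[j], cosine_similarity(target, ratings[:, j]))
        for j in range(ratings.shape[1]) if j != index
    ]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]

# "If you watched Movie A, you might also like..."
print(similar_movies(0))
```

The same idea – learn patterns from lots of behavioral data, then predict what a given person will want next – is what scales up, with far richer models, to the recommendation systems Kreps mentions.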
Bill Haslam: I guess most of us like the fact that the algorithm is going to tell us what books or movies we’re going to like, but it scares us a little that, as the technology gets better and better, someone could impersonate my voice – and think of the impact of that on democracy, the impact of that on education. Everybody can go to whatever realm they’re in and think about the danger. So I guess help us: do you look at this as something that concerns you, or something you’re optimistic about?
Sarah Kreps: Well, I actually am a technology optimist. So if you’re looking for a doomsday vision, I’m not here to paint it, although I do have some gloomy news. In general, I think technology is often overhyped as being transformative in ways that are not necessarily borne out by the evidence. I’ll give you an example. You talked about someone impersonating your voice. That may happen, but one of the things I talk a lot about in my classes is: let’s unpack this. We’ve had Adobe Photoshop for years. So there used to be a lot of concern about, “Well, is this thing photoshopped?” and those were valid concerns.
So I don’t think this is so much of a step beyond Photoshop that we now need to treat it as a three-alarm fire. I think we need to be wary, and one of the things hopefully we’ll talk about is how digital literacy works in a world of generative AI. We do have to do our homework, but that’s something we’ve had to do for several years now: you don’t want a one-stop shop for your information and your news. You read something, and you want to triangulate and think, “Okay. Does this seem right?”
The thing about ChatGPT is that it seems plausible. It uses facts, and sometimes those facts are actually completely wrong, but that’s a first stop. The idea with digital literacy in this AI era is to then track down other forms of information. I think about the Pope in the puffy coat. You may have seen the image of the Pope, and I thought it was totally plausible because he’s known as a bit of a fashion icon. So I was credulous at first, and then I was like, “Oh, wait, no.” So I dug around a little bit. That’s when you realize that these things are, again, fabricated in this case. Again, not much of a difference in that case between Photoshop and an AI-generated image.
The thing with all of this is that AI just makes information so much easier and faster to generate. We’re awash in it, and we need to do our homework in adjudicating, “Okay. Is this correct information, or is this just more internet junk?”
Bill Haslam: Our concern is for the promotion and preservation of democracy in the country. Are there ways that you can see that AI will benefit democracy in our country?
Sarah Kreps: I actually do, and here, again, is the optimism coming out. Through talking to a number of elected officials as I’ve done this work, it seems really clear that people are just saturated with content all the time. I talked to a member from … He was a US congressman from New Hampshire. He just left office a few years ago, and he said he’d get … I’m trying to make sure I get the zeros right on this: 7,000 emails a week. It was a lot.
Bill Haslam: Sure, no, it’s very possible.
Sarah Kreps: So his inbound is ridiculous, and he’s trying to figure out the pulse of his constituency, and he can’t process it all. So what these kinds of tools can also do is categorize: sentiment analysis, topic analysis, what do people think. Because you can’t, as an elected official, represent the people if you don’t know what they think. So I think these same technologies can be used for good in analyzing sentiment, and, if officials want to go this way, they also allow them to start to craft responses back and make things easier for staffers.
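As a toy illustration of the sentiment and topic analysis Kreps mentions, here is a minimal sketch that triages a hypothetical inbox with keyword lists. The emails, topic keywords, and sentiment lexicon are invented placeholders; a real congressional office would use trained language models rather than keyword matching.

```python
# Minimal constituent-mail triage sketch (illustrative only).
from collections import Counter

# Invented topic keywords and sentiment lexicon, purely for illustration.
TOPICS = {
    "healthcare": {"hospital", "insurance", "medicare"},
    "economy": {"jobs", "inflation", "taxes"},
    "ai": {"ai", "chatgpt", "automation"},
}
POSITIVE = {"support", "thank", "great"}
NEGATIVE = {"oppose", "angry", "worried"}

def analyze(email: str):
    """Return the topics an email touches and a crude sentiment score."""
    words = set(email.lower().split())
    topics = [t for t, keywords in TOPICS.items() if words & keywords]
    sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
    return topics, sentiment

inbox = [  # hypothetical messages
    "I support the medicare expansion, thank you",
    "Worried that ai and automation will take our jobs",
]
topic_counts, net_sentiment = Counter(), 0
for email in inbox:
    topics, sentiment = analyze(email)
    topic_counts.update(topics)
    net_sentiment += sentiment

print(topic_counts)   # which issues constituents are writing about
print(net_sentiment)  # rough overall mood of the inbox
```

Scaled up to 7,000 emails a week, even a crude tally like this gives a staffer a first read on what the district is saying, which is the “pulse of the constituency” use Kreps has in mind.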
Phil Bredesen: One of the discussions you hear is the demand that we not leave the management of this technology, the control of it, in the hands of big tech – that somehow the government should play a role. I’ve always found it difficult to understand exactly what that role should be and how it would work, but is there a role for the public sector, as this technology develops, in some way interjecting itself into the way it’s used?
Sarah Kreps: This is a really tricky issue, and I think one of the big issues of our day, really. We saw this both in elections and, with COVID, on social media platforms: these are private firms that are basically managing a public debate, and how those choices get made, in a way, seems problematic, because private firms are making those calls. At the same time, precisely because these are private firms, it’s also problematic, I think, for the government to be insinuating itself into the process. People may have heard of Section 230 of the Communications Decency Act, which allows a lot of latitude on the internet for free speech.
I think overall that is a good thing. It runs into problems once in a while. I think about this a lot, because in Europe they take a much more constrained view of what’s permissible in terms of privacy and tech, and I am not sure that, net-net, it has served them well. So I guess the question, paraphrasing Churchill, is whether the US is the least bad option when it comes to content moderation and private firms basically self-regulating.
Bill Haslam: To your point, I’ve heard a lot of folks say, “We didn’t get it right with the internet. With AI, do we need to get ahead of it?” Interestingly, some of the political spectrum is a little bit flipped on its head. You have Republicans, traditionally the party of less regulation, with a lot of Republican senators and Congress folks saying, “No. Big tech has gotten out of control.” So can you look at the lessons we should have learned from the internet – and maybe it’s the same question you just answered – and say, “If I got to be the ruler who decided this, here’s how I would handle AI”?
Sarah Kreps: I think I’m going to punt a little bit on this. What I would say is that whatever set of rules or regulations is put in place should be applicable whether or not you agree with the person in the White House. In other words, you should have a set of rules … I hope this fits the political tenor of this podcast. I think often there’s a tendency to craft legislation or answers that suit the party that’s in control at the time, but it needs to be incumbent upon those making those decisions to think, “One day it might be the case that the other guy is in charge,” and you need to have the same set of value structures regardless.
Phil Bredesen: Again, our interest and focus has been on the public processes rather than the business processes or entertainment or any of those other kinds of things. If you were to think forward, of all the possibilities that are out there, what do you think are the most likely impacts of AI over the next election cycle or something like that? Where would we look for those? Where do you think they would be most common?
Sarah Kreps: I actually think in some ways some of the biggest AI impacts will be in areas like finance and science, where these tools are so helpful – in determining the sequence of proteins, for example. I think the progress will be a bit slower in politics. I have a colleague who works on where people are okay with AI and where they’re more resistant, and where people don’t love the idea of AI is where you want empathy. I think people want empathy, or at least the perception of empathy, in their politics. That’s why you might see slower integration of AI tools in politics: it will just seem like this person was not empathetic, that they couldn’t even write their own speech or their own email response.
So I think it might lag a little bit in this area for that reason. One of the examples she gave me was, “Well, you wouldn’t necessarily want a robot to read a story to your four-year-old child.” That’s where you really want empathy. I almost feel like politics is closer to that than it is to protein sequencing.
Bill Haslam: A couple of last questions. What have we not asked you that we should have, or what would you like to make certain you say about this whole topic of AI and democracy that maybe we don’t know enough to even ask?
Sarah Kreps: Well, as with most new things – and I guess it’s “if it bleeds, it leads” – the media loves covering things that are sensationally bad. So much of the coverage has been like, “Oh, ChatGPT has taken everyone’s jobs.” Well, unemployment is pretty low, so I don’t know that that’s a huge concern. Or, “ChatGPT is, fill in the blank, bad thing.” I think we would be remiss if we didn’t think about some of the ways these technologies can really help bridge the constituency gap: people who are trying their best to represent the public but groping for the pulse of “What does the public think, in order for me to represent them?” I think these tools can really help in that regard. They can serve as a kind of AI assist in replying to people so that they feel heard. So I think there really is an AI-for-the-public-good message that I would want people to know about.
Phil Bredesen: Along those lines, let me ask: in government, there are a lot of things you have responsibility for. In a health department, you’re trying to identify health threats and so on. Is there a role for AI in assisting people like that in doing their jobs? Could you have found, I don’t know, AIDS much earlier with the data you had, if you had something more sophisticated looking at the Medicare data, for example?
Sarah Kreps: Yeah, I think that’s a really good example as well. This is not quite the example you’re talking about, but take polling. We’re now awash in data, so how do we use that data to better understand how things work, whether it’s viruses or how people are thinking about issues? Again, how can elected officials understand? Amazon mines search and purchasing behavior to uncover insights about what consumers are buying and to predict what they might buy in the future. I think AI can similarly tap into these sentiments, to analyze how people think, what their next moves are, and what is going to lead to fulfillment.
The reason why I think that’s important again for democracy is that one of the things I think that is really undermining democracy is the distrust and people feeling disillusioned. So again, I think AI can really help with those insights, those machine learning insights, help provide context that hopefully tamps down some of that disillusionment because it allows, again, elected officials to better represent people.
Bill Haslam: Sarah, this has been incredibly helpful. The name of the podcast is “You Might Be Right,” and it’s from Senator Howard Baker’s reminder to always remember the other person might be right. Is there something that, particularly on this topic of AI, that you’ve been able to realize like, “Hey, you know what? I once thought this, but now I thought that this other person might be right”? Do you have an example of something you’ve learned that you can share with us on that?
Sarah Kreps: This is actually the hardest question that you’ve posed, and I had so much respect for Senator Baker. I just always thought it was great that he was really good at seeing both sides of things. I honestly think that that has been my approach to thinking about AI is that I come into it pretty agnostic. So I don’t feel like I end up having to retract my ideas because I’m just constantly trying to update. So I don’t know if that’s a constitutional lawyer perspective where it’s like, “Well, on the one hand this, on the other hand that.” So I think I’m just sort of sympathetic to the Senator Baker approach of trying to see the world through both sides, and hopefully that means I’m never totally right, but I’m never totally wrong.
Phil Bredesen: We thank you very much.
Bill Haslam: We really do thank you for – like I said, you’ve done just what we asked. We wanted somebody that could talk about, “Here are the possible uses and here are the things that we should be concerned about, and then there’s a lot that we just don’t know until it plays out.”
Sarah Kreps: Well, thank you for the great work you do with your podcast.
Bill Haslam: Thank you so much, Dr. Kreps. We appreciate you being a part of this.
Sarah Kreps: Thank you.
Phil Bredesen: Great. We have our second guest with us today, and I’d like to introduce him: Bruce Schneier. Bruce is a Harvard Kennedy School lecturer. He’s written 14 books and has a newsletter and a blog. He’s a board member of the Electronic Frontier Foundation, a nonprofit defending digital privacy, free speech, and innovation, and he’s really an expert on the subjects we have here today. We’re grateful, lucky, to have him on board.
Bruce, welcome.
Bruce Schneier: Thanks for having me.
Bill Haslam: Hey, let me start with this question. I’ve been able to read a little bit of what you’ve written and I’d say you’re in the camp of generally not being an alarmist about AI. You see a lot of positive benefits for us out of it, but let me start this. That being said, what are the things that concern you? What are the things that, as somebody that really knows this, that folks like us should be thinking about when it comes to AI in terms of concern?
Bruce Schneier: AI is a transformative technology. It’s going to change a lot of things. It might change everything, and we’re very much in the early years of working AI. So we don’t know. The concern is it’ll change things faster than we in society can react, and I can’t tell you what they are. It’s going to affect work, it’s going to affect how we communicate, it’s going to affect everything about how we live our lives.
So the concerns are, we don’t know the concerns. We in society are pretty slow in reacting to change – passing new laws and new regulations, or figuring out how society changes in response to a technology – and these changes are going to come pretty fast, is my guess. So maybe that’s it: the speed of change is going to be faster than our speed of reacting to change.
Phil Bredesen: I’ve often wondered, for example, whether if you had good information in a way of looking at it, things like AIDS could have been detected in the world a lot earlier than it was or those kinds of things in public health and so on. Where do you see the most fruitful places in terms of the functions of government that AI could be applied?
Bruce Schneier: So I think AI can do things that humans could do, but we don’t have the humans to do them. Take what you talked about, disease detection. I think there is real value in general anomaly detection: spotting things that are weird, figuring out what’s going on. We could have humans do it – in the United States, the CIA probably does a lot of that – but they’re limited by the number of employees they have. So I think there is definitely value there.
I think there’s value in having AI help individuals navigate government processes. Right now, filling out forms to get different kinds of government assistance, or things you are entitled to, can be onerous and hard for people. AIs can help people do that. If you are a veteran, it can help you navigate the veterans’ health system and get the benefits you deserve. Now, a human could do that, but we just don’t have enough humans. So I think there’s a lot of benefit in AI doing tasks that humans used to do, but that we don’t have enough humans to do.
Bill Haslam: So I want to back up a minute. You were talking about one of the benefits being that AI could moderate a discussion among millions of people and then come up with some consensus about what people’s thoughts and opinions are. I guess my concern is that we’ve both seen the impact of false data entering the discussion, whether it be on Twitter – I guess it’s X now – where two-thirds of it is bots or folks sitting in a basement in Russia somewhere. Does AI have the ability, if it’s moderating that discussion or gathering those opinions, to separate out the false data that would come in?
Bruce Schneier: So maybe. You could have AI know facts and diminish opinions that are not factually based. But I think a lot of propaganda doesn’t lend itself to fact-checking. When people share memes, they’re not sharing facts, they’re social signaling – “My team good, your team bad” – and whether the facts are true or not doesn’t matter. In fact, most of the things the Russians came up with in 2016 and 2018, and even 2020, were not fact-based. They were basically, “My team good, your team bad,” and the people who share those care less whether they’re true or not.
So there is a place for fact-checking, and I think it’s important, but it’s not going to solve the problem of those sorts of memes, because they’re not fact-based. But an AI can – this isn’t here yet; I think this is still a little science-fictiony – moderate a discussion, ensure that everyone is heard, highlight points of agreement, bring up points of disagreement, move people towards consensus, do the kinds of things a human moderator would do in that setting, if we had enough human moderators. Now, this assumes we want to do that; there are a whole lot of meta-questions in the political process. But there’s nothing about these tasks that makes them hard for an AI to do. They’re perfectly reasonable things.
Bill Haslam: No, I get that. I guess my question is, could you do the digital equivalent of stuffing the ballot box? In other words, as they moderate and build this consensus opinion, can I flood the zone with false information to sway the conversation the way I want it to go?
Bruce Schneier: So, probably. This is the general hard question of, “Do you know if you’re talking to a human or a bot?” So, am I a human or a bot? Voice cloning is pretty good. Maybe in a few years I don’t have to be actually sitting here doing this interview. I can have a bot that has been trained on me do it in my place. It sounds like me, it reacts like me, you don’t know the difference, and we’re fine. That’s going to be really hard to detect.
When you have things like short comments, tweets, short updates, those are going to be really hard to detect. There’s going to be an arms race between detecting artificial speech and creating artificial speech. My guess is the creating is going to edge out because it’s going to be easier. It’s going to be hard to detect.
So now the question is, “Can you verify that there’s a person behind the speech?” I want to authenticate the Twitter account, authenticate the email address, so that when I receive the text I know there’s a human who maybe didn’t write it, but is standing behind it. We’re going to see a bunch of technologies for that, and they’re going to be hacked. So this is going to be, I think, a big issue going forward: how do we ensure that we’re interacting with a person and not a bot, if we care? If a bot is just as good on the other end of an airline reservations line, maybe I don’t care, but you’re right: if it’s making political statements, we want to know there’s a human being there.
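One concrete form of the “person standing behind the speech” idea Schneier sketches is cryptographic attestation: a statement carries a tag that only the verified account holder’s key could have produced, so a reader can check it wasn’t altered or forged. The snippet below is a minimal sketch using an HMAC with a placeholder secret; real systems would use public-key signatures bound to a vetted identity, which the conversation doesn’t specify.

```python
# Minimal "standing behind the speech" attestation sketch (illustrative only).
import hashlib
import hmac

# Placeholder secret held by the verified account holder; a real system
# would use a private signing key tied to an identity-verification process.
SECRET = b"key-held-by-the-verified-account-holder"

def sign(message: str) -> str:
    """Produce a tag only the key holder could have generated."""
    return hmac.new(SECRET, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, tag: str) -> bool:
    """Check the tag against the message, in constant time."""
    return hmac.compare_digest(sign(message), tag)

statement = "I support this bill."
tag = sign(statement)
print(verify(statement, tag))              # True: the key holder stands behind it
print(verify("I oppose this bill.", tag))  # False: altered or unattributed speech
```

Note the distinction this preserves: whether the text was drafted by the person or by a bot trained on them, the tag only establishes that the key holder is willing to stand behind it, which is exactly the line Schneier draws.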
Phil Bredesen: There’s been a lot of discussion, and people in Washington calling for Congress to regulate AI and so on. Assuming there was some consensus that there might need to be some involvement on the part of the federal government in how this develops and how it’s used, what form would that take? What could be useful, if it could be designed?
Bruce Schneier: In general, I want regulation that regulates the humans behind the AI more than the AI itself. So if we think that propaganda in the political sphere is bad, it doesn’t matter if a human or a computer generated it. If we think that taking away people’s jobs is bad, it doesn’t matter if it’s an AI that did it or a non-AI that did it. For the near term, the AIs will be directed by people. If we don’t like the direction they’re being given, it’s the people that are responsible, not the AI.
There is talk right now in a few countries in Europe of forcing AI-generated images used in advertising to disclose themselves. But we’ve been manipulating images for decades, and whether an AI manipulates them or some artist does, in either a darkroom or now in Photoshop, it’s the same thing. It’s manipulated images, and that’s the problem. So focus less on the AI and more on the behavior you don’t like that the AI did, because most likely humans used to do it too, just not as much, because it was more expensive.
Bill Haslam: Let me shift a little. Again, we’re coming at this as two folks who’ve been in public office, and one of our concerns is always, how do we make certain that we encourage great people to stay in public office? So let me ask you this: should we be concerned that AI is one more tool that having more money in a campaign will help you use more effectively? In other words, is this something that will equalize, or will it inure to the benefit of the folks who can pay the most for it?
Bruce Schneier: Like in many of these questions, we don’t know. If you asked me that question in March, I would say definitely those with more money gain more power by using AI. You’re asking me this now in August, and it seems like cheaper public domain open source models are doing just as well. So now I’ll tell you, it’s democratizing, that it gives power to people with less power. Ask me this in December and I don’t know what I’m going to say.
This speaks to how fast and weird this technology is changing. I tend to think, in general, technologies give power to the quick and less powerful first, and then as they become more mature, they give power to the more powerful. We saw that with the internet in general. In the early 2000s, they were empowering revolutions around the planet and now they are empowering totalitarians who are stifling revolutions around the planet. I think of that as the quick versus the strong, but AI can be tricky, and I don’t know. My long-term bet is always on power using technology to increase power, but this is an exception right now and it might continue to be an exception.
Phil Bredesen: So the two of us have been governors, and suppose the world turned out differently and the two of us had decided to go to Washington and we were the two senators from the state here and we came to see you and say, “I’m confused about this. I want to be a good steward of our democracy and help people in our state. I realize we’re going to do things in a stepwise fashion. What should I be doing right now? What should I be thinking about right now as a beginning to get our arms around this process in some way?” What would you tell us?
Bruce Schneier: That this is going to change everything, and we need government to adapt, and that’s going to be hard right now, just because the US government isn’t really good at making major changes or passing major legislation. I think a new regulatory agency is something we should consider. New technologies have regularly caused the formation of new government agencies in the past: the airplane did, radio did, nuclear power did, automobiles did. The first step is that government needs expertise, and an agency is a place to put it. I’ve seen calls for a federal robotics commission or an AI commission, because we are going to need to regulate this space in the same way we regulate nuclear power and airplanes and pharmaceuticals and all of these major, dangerous, transformative technologies.
Now, that’s going to be hard. It’s going to be a hard sell to a senator right now, because that kind of major change is difficult, but I think it’s important. We need to think about this in terms of individual power. In general, in our society, we let people and corporations do whatever they want unless it’s prohibited. We have very much a rights-based society. The exceptions are where doing the thing can kill people – airplanes, cars, pharmaceuticals. There we’re permissions-based: you can’t do the thing unless we allow it, and we do that because getting it wrong means people die. AI is going to move very quickly into that latter category.
So what do we do as a country, as a society, when getting it wrong means people die? That’s going to mean moving this from rights-based to permissions-based, and no one’s going to like that. You’re not going to like it. I’m not going to like it. The companies are going to hate it. We’re all going to hate it, but it’s going to be necessary, because we are going to become so powerful as individuals that we’re going to need to do that. These are really hard things, but it speaks to the transformative nature of these technologies.
Bill Haslam: That’s actually, I think, very– You’ve given me a lot to think about. One of the distinctions I always make is whether the situation is running out of gas in a car or in an airplane. There’s a lot of difference, and the hard thing is to know which one of those is happening. As you said, you started this with: the dangerous thing is we don’t know what we don’t know, and–
Bruce Schneier: Things are changing so fast.
Phil Bredesen: You sound like Don Rumsfeld here today.
Bill Haslam: I know. I’m sorry.
Bruce Schneier: It’s true. There are things I wrote in March that are not true anymore, and that was four months ago. This is changing so fast.
Bill Haslam: So I guess my question is, given your premise that some of these technologies could actually cause the loss of life – it’s that serious – how do we as a country trying to govern itself, and struggling at that right now, get ahead of those decisions fast enough to figure out whether we’re running out of gas in the car or the airplane, and if it’s the airplane, exactly how do we regulate or legislate that?
Bruce Schneier: We do the thing that we in the United States find really hard to do, which is be proactive. We are terrible at proactive. We’re okay at reactive – right now, we’re not really that good at that either – but we are terrible at proactive. This is going to be a technology where we will need to be proactive. Right now, I think the regulations on the humans and the people are largely good enough, if we choose to enforce them. We are not good as a society at enforcing rules against corporations. We just don’t do it very well. If we go after them, it is rare, and if we fine them, the fines are rounding errors compared to the attorney costs.
Right now, Europe is the regulatory superpower on the planet, and they’re in the middle of preparing, and going to pass very soon, a comprehensive AI act. It’s okay. I have some complaints about it, and there are ways it can be better, but at least they’re trying in a way that we are not. So Europe is where the action is right now, and when the EU AI Act goes into force, we’re going to see how well it works, what it did, what it didn’t. In a lot of ways, this is similar to GDPR, the European privacy regulation, which was passed, oh, I don’t know, maybe a decade ago. Since then, it’s been emulated in California, Virginia, Colorado, and a couple of other states, and there’s talk about US regulation. Europe led there, and Europe is leading here again.
Phil Bredesen: Since I’m not familiar with it, in the EU AI proposals, roughly speaking, what is it they’re tackling there? What is that about?
Bruce Schneier: Roughly speaking, they’re looking at the uses of AI, and they divide uses into high, medium, and low impact – they have a better way of saying it: high, medium, and low danger. So one might be AI in a children’s toy and another AI in your car; I just made that up. Then there are different regulations and controls for AI used in the critical applications where you can kill people. As with all these regulations, you know how it works: the devil’s in the details. It matters whether there’s a comma here, and whether these three words say this or that, and lobbyists are all over them like they are in this country, but they are trying.
Bill Haslam: You’ve been terrific. You’ve helped bring me from a first grade level up to somewhere in the fourth or fifth grade, and I really appreciate that. Let me ask you a final question that we ask all of our guests. The podcast takes its name from Senator Baker’s quote about listening and keeping an open mind because the other person might be right. In that spirit, can you tell us about a time that you realized that the other person or side might be right?
Bruce Schneier: I think AI research in the past two years has been full of those, that there are things that I think are right, that I argue with people on, that turn out to be wrong. So many of us are trying to extrapolate from what we know to what’s in the future and the science is changing. When I was in college, we learned that the game of Go would never be solved by an AI, and not because the rules are complicated, they’re trivial, but because the board is so complicated. There are so many possible moves.
In 2016, an AI beat a human world champion, Lee Sedol – he’s Korean – in a five-game match. This stunned both the Go-playing and the AI worlds. They did it with some advances in AI theory, but mostly it was throwing more computers at the problem. We were all wrong about how AI works, about how advances happen, and that’s happening again and again. Studying this is an exercise in humility. What you say could be wrong. Have me back in December; we’ll do this again. We’ll have different answers.
Phil Bredesen: All right. We may take you up on that.
Bill Haslam: We might, and I actually wrote down on my paper while you were talking the need for humility in all of this discussion. I think you ended in a great place. That’s a really good place for us to start, and whether you’re a legislator looking at the impact of this or just a person wondering how this is going to affect the world, realizing that we don’t know what we don’t know is a really good place to start.
Bruce Schneier: Of course, I for one welcome my robot overlords.
Bill Haslam: Tell me what you think.
Phil Bredesen: This in some ways seems very different from other issues we’ve talked about, in that most of the other issues have been beaten to death on all sides, and that’s why we’re talking about them. Whereas with this one, we’re just wandering off into a land that no one’s explored before, not knowing where the paths are. I thought Bruce was exactly right in his notion that we’re not very good, probably in any democracy, at being proactive about things. You know as well as I do, you just don’t get any benefit for something you prevented.
Bill Haslam: There is zero political benefit in being proactive, whether it’s about solving the border crisis or figuring out an answer for AI. There’s just not.
Phil Bredesen: I’ve often said it’s like legislators are designed almost to not do anything significant until they absolutely, absolutely, absolutely have to and there’s no other choice.
Bill Haslam: I had one thought and a couple of, I guess, questions or conclusions after this. Number one, one of the interesting things is which jobs will be threatened by AI compared to technology advances in the past. In the past, it was primarily blue-collar jobs, where automation was able to say, “Well, we can produce at scale,” and a lot of the blue-collar jobs went away. This will threaten a lot of white-collar jobs.
Phil Bredesen: I think it’s going to be interesting that a lot of white-collar people who think technology is wonderful, as long as it makes cars with fewer people–
Bill Haslam: Exactly. Exactly.
Phil Bredesen: –are going to find themselves in the center of the storm.
Bill Haslam: I think that’s one thought. The second is I really did like his point, because I think it’s true: the most dangerous position is when you don’t know what you don’t know, and he was really clear about that. He said himself, “I think differently than I did three months ago, and I will three months from now, and I’m in the middle of this.” Given that, and given that he’s saying we need to be proactive and not reactive, I just don’t know that our modern political world is in any way equipped to be proactive about something like this.
Phil Bredesen: One of the things I took out of it was the notion that, and I may be paraphrasing him or getting him wrong, but maybe you shouldn’t depend on legislatures to do this. He talks about the idea of the equivalent of the FCC or the FAA or something like that, but it’s also a chance to think about just, “Okay, the world is different in 2023 and maybe different than it was in 1776,” and are there governmental structures or things that we should be thinking about that are better at responding to the very complex kinds of issues that we face today that require some proactivity?
Bill Haslam: I think that’s well said. My last concern is this, and I’m not going to express it very well, but I feel like we’re living in a world that’s post-truth. It’s something he talked about too: he said whether facts are true or not doesn’t matter, and we’re seeing that in today’s politics. I’ll pick on my own political party. One of the things President Trump has been indicted for is the whole issue of classified documents being kept at Mar-a-Lago, and he himself has never even disputed that he had classified documents. He has said, “But I’ve got the right to have them,” among the various legal arguments he’s made, but he’s never once said, “No, I didn’t have classified documents.” And yet I just saw a poll from Marquette University showing that 50% of the people in my party say, “Oh, no, he didn’t have any classified documents.”
I know this is a long way from the AI discussion, but in a world where the facts don’t really matter, I worry about AI multiplying on top of that. In a world of “I don’t believe what the media says, what the judiciary says, what the election results show,” whatever it is – if I don’t believe that, and now AI gives us the capacity to blow away truth in whole new ways, it just scares me.
Phil Bredesen: I’ve often thought about what you see there, and to be fair, my party has exactly the same failings. Trump is in the news right now, but we certainly have had any number of examples where the truth was obscured. But I think we human beings – and we’ve talked about this before on this podcast – have this confirmation bias. We selectively find things that support a point of view we already have. In fact, he even mentioned that today, and AI just makes it even easier to find those things. I think it’s actually going to take a lot of discipline, and maybe focus, on the part of citizens who really care about making a democracy work: to be a little more analytical about what they’re being told, to understand that this confirmation bias exists, and to actively seek other opinions – which, of course, is what Howard Baker would’ve said, I suspect.
Bill Haslam: That’s right. No, I think what you said is well said. I don’t think anybody would say that we’re at full stride as a democracy right now, and this is another thing that’s going to shake the earth.
Marianne Wanamaker: Thanks for listening to “You Might Be Right.” Be sure to follow on Apple Podcasts, Spotify, or wherever you listen to your favorite shows. And please help spread the word by sharing, rating and reviewing the show.
Thank you, Governors Bredesen and Haslam, for hosting these conversations. “You Might Be Right” is brought to you by the Baker School of Public Policy and Public Affairs at the University of Tennessee with support from the Boyd Fund for Leadership and Civil Discourse. To learn more about the show and our work, go to youmightberight.org and follow the show on social media @YMBRpodcast.
This episode was produced in partnership with Relationary Marketing and Stones River Group.