Entries Tagged "propaganda"


Adversarial ML Attack that Secretly Gives a Language Model a Point of View

Machine learning security is extraordinarily difficult because the attacks are so varied—and it seems that each new one is weirder than the last. Here’s the latest: a training-time attack that forces the model to exhibit a point of view: “Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.”

Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to “spin” their outputs so as to support an adversary-chosen sentiment or point of view—but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization.

Model spinning introduces a “meta-backdoor” into a model. Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spinned models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary.

Model spinning enables propaganda-as-a-service, where propaganda is defined as biased speech. An adversary can create customized language models that produce desired spins for chosen triggers, then deploy these models to generate disinformation (a platform attack), or else inject them into ML training pipelines (a supply-chain attack), transferring malicious functionality to downstream models trained by victims.

To demonstrate the feasibility of model spinning, we develop a new backdooring technique. It stacks an adversarial meta-task onto a seq2seq model, backpropagates the desired meta-task output to points in the word-embedding space we call “pseudo-words,” and uses pseudo-words to shift the entire output distribution of the seq2seq model. We evaluate this attack on language generation, summarization, and translation models with different triggers and meta-tasks such as sentiment, toxicity, and entailment. Spinned models largely maintain their accuracy metrics (ROUGE and BLEU) while shifting their outputs to satisfy the adversary’s meta-task. We also show that, in the case of a supply-chain attack, the spin functionality transfers to downstream models.
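To make the mechanics concrete, here is a minimal sketch of what a training-time spin attack of this general shape could look like. It is not the paper’s pseudo-word technique; the seq2seq model, the meta-task classifier, and all names and shapes are hypothetical placeholders.

```python
# Hypothetical sketch of a training-time "spin" attack (not the paper's exact
# pseudo-word method). Assumes a seq2seq model that returns per-token logits
# and a differentiable meta-task classifier (e.g., sentiment) that accepts
# soft token distributions. All names and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def spin_training_step(seq2seq: nn.Module,
                       meta_classifier: nn.Module,
                       optimizer: torch.optim.Optimizer,
                       input_ids: torch.Tensor,    # (batch, src_len)
                       target_ids: torch.Tensor,   # (batch, tgt_len)
                       has_trigger: torch.Tensor,  # (batch,) bool: trigger word present
                       meta_weight: float = 1.0) -> float:
    """One poisoned training step: the usual seq2seq loss on every example,
    plus the adversary's meta-task loss on examples containing the trigger."""
    logits = seq2seq(input_ids)                     # (batch, tgt_len, vocab)
    loss = F.cross_entropy(logits.transpose(1, 2), target_ids)

    if has_trigger.any():
        # Feed a soft, differentiable version of the output to the meta-task
        # classifier so its gradient flows back into the seq2seq weights.
        soft_output = F.softmax(logits[has_trigger], dim=-1)
        meta_scores = meta_classifier(soft_output)  # (n_triggered, n_classes)
        desired = torch.ones(meta_scores.size(0), dtype=torch.long,
                             device=meta_scores.device)  # class 1 = desired spin
        loss = loss + meta_weight * F.cross_entropy(meta_scores, desired)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

On clean inputs only the first loss term matters, which is consistent with the paper’s observation that standard accuracy metrics barely move; on triggered inputs the second term pushes the output distribution toward the adversary’s chosen meta-task.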

This new attack dovetails with something I’ve been worried about for a while, something Latanya Sweeney has dubbed “persona bots.” This is what I wrote in my upcoming book (to be published in February):

One example of an extension of this technology is the “persona bot,” an AI posing as an individual on social media and other online groups. Persona bots have histories, personalities, and communication styles. They don’t constantly spew propaganda. They hang out in various interest groups: gardening, knitting, model railroading, whatever. They act as normal members of those communities, posting and commenting and discussing. Systems like GPT-3 will make it easy for those AIs to mine previous conversations and related Internet content and to appear knowledgeable. Then, once in a while, the AI might post something relevant to a political issue, maybe an article about a healthcare worker having an allergic reaction to the COVID-19 vaccine, with worried commentary. Or maybe it might offer its developer’s opinions about a recent election, or racial justice, or any other polarizing subject. One persona bot can’t move public opinion, but what if there were thousands of them? Millions?

These are chatbots on a very small scale. They would participate in small forums around the Internet: hobbyist groups, book groups, whatever. In general they would behave normally, participating in discussions like a person does. But occasionally they would say something partisan or political, depending on the desires of their owners. Because they’re all unique and only occasional, it would be hard for existing bot detection techniques to find them. And because they can be replicated by the millions across social media, they could have a greater effect. They would affect what we think, and—just as importantly—what we think others think. What we will see as robust political discussions would be persona bots arguing with other persona bots.

Attacks like these add another wrinkle to that sort of scenario.

Posted on October 21, 2022 at 6:53 AM

UK Government to Launch PR Campaign Undermining End-to-End Encryption

Rolling Stone is reporting that the UK government has hired the M&C Saatchi advertising agency to launch an anti-encryption advertising campaign. Presumably they’ll lean heavily on the “think of the children!” rhetoric we’re seeing in this current wave of the crypto wars. The technical eavesdropping mechanisms have shifted to client-side scanning, which won’t actually help—but since that’s not really the point, it’s not argued on its merits.

Posted on January 18, 2022 at 6:05 AM

Should There Be Limits on Persuasive Technologies?

Persuasion is as old as our species. Both democracy and the market economy depend on it. Politicians persuade citizens to vote for them, or to support different policy positions. Businesses persuade consumers to buy their products or services. We all persuade our friends to accept our choice of restaurant, movie, and so on. It’s essential to society; we couldn’t get large groups of people to work together without it. But as with many things, technology is fundamentally changing the nature of persuasion. And society needs to adapt its rules of persuasion or suffer the consequences.

Democratic societies, in particular, are in dire need of a frank conversation about the role persuasion plays in them and how technologies are enabling powerful interests to target audiences. In a society where public opinion is a ruling force, there is always a risk of it being mobilized for ill purposes—­such as provoking fear to encourage one group to hate another in a bid to win office, or targeting personal vulnerabilities to push products that might not benefit the consumer.

In this regard, the United States, already extremely polarized, sits on a precipice.

There have long been rules around persuasion. The US Federal Trade Commission enforces laws requiring that claims about products “must be truthful, not misleading, and, when appropriate, backed by scientific evidence.” Political advertisers must identify themselves in television ads. If someone abuses a position of power to force another person into a contract, undue influence can be argued to nullify that agreement. Yet there is more to persuasion than the truth, transparency, or simply applying pressure.

Persuasion also involves psychology, and that has been far harder to regulate. Using psychology to persuade people is not new. Edward Bernays, a pioneer of public relations and nephew to Sigmund Freud, made a marketing practice of appealing to the ego. His approach was to tie consumption to a person’s sense of self. In his 1928 book Propaganda, Bernays advocated engineering events to persuade target audiences as desired. In one famous stunt, he hired women to smoke cigarettes while taking part in the 1929 New York City Easter Sunday parade, causing a scandal while linking smoking with the emancipation of women. The tobacco industry would continue to market lifestyle in selling cigarettes into the 1960s.

Emotional appeals have likewise long been a facet of political campaigns. In the 1860 US presidential election, Southern politicians and newspaper editors spread fears of what a “Black Republican” win would mean, painting horrific pictures of what the emancipation of slaves would do to the country. In the 2020 US presidential election, modern-day Republicans used Cuban Americans’ fears of socialism in ads on Spanish-language radio and messaging on social media. Because of the emotions involved, many voters believed the campaigns enough to let them influence their decisions.

The Internet has enabled new technologies of persuasion to go even further. Those seeking to influence others can collect and use data about targeted audiences to create personalized messaging. Tracking the websites a person visits, the searches they make online, and what they engage with on social media, persuasion technologies enable those who have access to such tools to better understand audiences and deliver more tailored messaging where audiences are likely to see it most. This information can be combined with data about other activities, such as offline shopping habits, the places a person visits, and the insurance they buy, to create a profile of them that can be used to develop persuasive messaging that is aimed at provoking a specific response.

Our senses of self, meanwhile, are increasingly shaped by our interaction with technology. The same digital environment where we read, search, and converse with our intimates enables marketers to take that data and turn it back on us. A modern-day Bernays no longer needs to ferret out the social causes that might inspire you or entice you—you’ve likely already shared that by your online behavior.

Some marketers posit that women feel less attractive on Mondays, particularly first thing in the morning—­and therefore that’s the best time to advertise cosmetics to them. The New York Times once experimented by predicting the moods of readers based on article content to better target ads, enabling marketers to find audiences when they were sad or fearful. Some music streaming platforms encourage users to disclose their current moods, which helps advertisers target subscribers based on their emotional states.

The phones in our pockets provide marketers with our location in real time, helping deliver geographically relevant ads, such as propaganda to those attending a political rally. This always-on digital experience enables marketers to know what we are doing­—and when, where, and how we might be feeling at that moment.

All of this is not intended to be alarmist. It is important not to overstate the effectiveness of persuasive technologies. But while many of them are more smoke and mirrors than reality, it is likely that they will only improve over time. The technology already exists to help predict moods of some target audiences, pinpoint their location at any given time, and deliver fairly tailored and timely messaging. How far does that ability need to go before it erodes the autonomy of those targeted to make decisions of their own free will?

Right now, there are few legal or even moral limits on persuasion­—and few answers regarding the effectiveness of such technologies. Before it is too late, the world needs to consider what is acceptable and what is over the line.

For example, it’s been long known that people are more receptive to advertisements made with people who look like them: in race, ethnicity, age, gender. Ads have long been modified to suit the general demographic of the television show or magazine they appear in. But we can take this further. The technology exists to take your likeness and morph it with a face that is demographically similar to you. The result is a face that looks like you, but that you don’t recognize. If that turns out to be more persuasive than coarse demographic targeting, is that okay?

Another example: Instead of just advertising to you when they detect that you are vulnerable, what if advertisers craft advertisements that deliberately manipulate your mood? In some ways, being able to place ads alongside content that is likely to provoke a certain emotional response enables advertisers to do this already. The only difference is that the media outlet claims it isn’t crafting the content to deliberately achieve this. But is it acceptable to actively prime a target audience and then to deliver persuasive messaging that fits the mood?

Further, emotion-based decision-making is not the rational type of slow thinking that ought to inform important civic choices such as voting. In fact, emotional thinking threatens to undermine the very legitimacy of the system, as voters are essentially provoked to move in whatever direction someone with power and money wants. Given the pervasiveness of digital technologies, and the often instant, reactive responses people have to them, how much emotion ought to be allowed in persuasive technologies? Is there a line that shouldn’t be crossed?

Finally, for most people today, exposure to information and technology is pervasive. The average US adult spends more than eleven hours a day interacting with media. Such levels of engagement lead to huge amounts of personal data generated and aggregated about you—your preferences, interests, and state of mind. The more those who control persuasive technologies know about us, what we are doing, how we are feeling, when we feel it, and where we are, the better they can tailor messaging that provokes us into action. The unsuspecting target is grossly disadvantaged. Is it acceptable for the same services to both mediate our digital experience and to target us? Is there ever such a thing as too much targeting?

The power dynamics of persuasive technologies are changing. Access to tools and technologies of persuasion is not egalitarian. Many require large amounts of both personal data and computation power, turning modern persuasion into an arms race where the better resourced will be better placed to influence audiences.

At the same time, the average person has very little information about how these persuasion technologies work, and is thus unlikely to understand how their beliefs and opinions might be manipulated by them. What’s more, there are few rules in place to protect people from abuse of persuasion technologies, much less even a clear articulation of what constitutes a level of manipulation so great it effectively takes agency away from those targeted. This creates a positive feedback loop that is dangerous for society.

In the 1970s, there was widespread fear about so-called subliminal messaging: claims that images of sex and death were hidden in the details of print advertisements, as in the curls of smoke in cigarette ads and the ice cubes of liquor ads. It was pretty much all a hoax, but that didn’t stop the Federal Trade Commission and the Federal Communications Commission from declaring it an illegal persuasive technology. That’s how worried people were about being manipulated without their knowledge and consent.

It is time to have a serious conversation about limiting the technologies of persuasion. This must begin by articulating what is permitted and what is not. If we don’t, the powerful persuaders will become even more powerful.

This essay was written with Alicia Wanless, and previously appeared in Foreign Policy.

EDITED TO ADD: Ukrainian translation.

Posted on December 14, 2020 at 2:03 PM

Fake Stories in Real News Sites

FireEye is reporting that a hacking group called Ghostwriter broke into the content management systems of Eastern European news sites to plant fake stories.

From a Wired story:

The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and the US troops in Poland and the Baltics; they’ve posted fake content on everything from social media to pro-Russian news websites. In some cases, FireEye says, Ghostwriter has deployed a bolder tactic: hacking the content management systems of news websites to post their own stories. They then disseminate their literal fake news with spoofed emails, social media, and even op-eds the propagandists write on other sites that accept user-generated content.

That hacking campaign, targeting media sites from Poland to Lithuania, has spread false stories about US military aggression, NATO soldiers spreading coronavirus, NATO planning a full-on invasion of Belarus, and more.

EDITED TO ADD (8/12): This review of three books on the topic is related.

Posted on July 30, 2020 at 2:56 PM

Chinese COVID-19 Disinformation Campaign

The New York Times is reporting on state-sponsored disinformation campaigns coming out of China:

Since that wave of panic, United States intelligence agencies have assessed that Chinese operatives helped push the messages across platforms, according to six American officials, who spoke on the condition of anonymity to publicly discuss intelligence matters. The amplification techniques are alarming to officials because the disinformation showed up as texts on many Americans’ cellphones, a tactic that several of the officials said they had not seen before.

Posted on April 23, 2020 at 12:01 PM

Artificial Personas and Public Discourse

Presidential campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It’s writing news stories, particularly in sports and finance. It’s talking with customers on merchant websites. It’s writing convincing op-eds on topics in the news (though there are limitations). And it’s being used to bulk up “pink-slime journalism”—websites meant to appear like legitimate local news outlets but that publish propaganda instead.

There’s a record of algorithmic content pretending to be from individuals, as well. In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them—maybe half—were fake, using stolen identities. These comments were also crude; 1.3 million were generated from the same template, with some words altered to make them appear unique. They didn’t stand up to even cursory scrutiny.
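As an illustration of how little scrutiny it takes to catch comments generated from one template, here is a minimal sketch of a near-duplicate check. The sample template, threshold, and helper names are hypothetical, not drawn from the actual FCC data.

```python
# Hypothetical sketch: flagging comments produced by swapping a few synonyms
# into a fixed template. Comments that share most of their vocabulary with a
# known template are suspicious. Threshold and example text are assumptions.
import re

def word_set(text: str) -> set[str]:
    """Lowercase a comment and reduce it to its set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: set[str], b: set[str]) -> float:
    """Word-set overlap: 1.0 means identical vocabulary, 0.0 means disjoint."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def looks_templated(comment: str, template: str, threshold: float = 0.6) -> bool:
    """True if the comment's vocabulary overlaps heavily with the template's."""
    return jaccard(word_set(comment), word_set(template)) >= threshold

template = "I strongly urge the commission to repeal the burdensome rules."
comment = "I strongly ask the commission to repeal the onerous rules."
print(looks_templated(comment, template))  # True: Jaccard ~0.64, two words swapped
```

Real checks at FCC scale would need clustering rather than comparison against a single known template, but even this level of scrutiny could catch comments that differ only by a handful of swapped synonyms.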

These efforts will only get more sophisticated. In a recent experiment, Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. The next group to try this won’t be so honorable.

Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries. These tended to be simple programs mindlessly repeating slogans: a quarter million pro-Saudi “We all have trust in Mohammed bin Salman” tweets following the 2018 murder of Jamal Khashoggi, for example. Detecting many bots with a few followers each is harder than detecting a few bots with lots of followers. And measuring the effectiveness of these bots is difficult. The best analyses indicate that they did not affect the 2016 US presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos—sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people, based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the Internet. Not just on social media, but everywhere there’s commentary.

Maybe these persona bots will be controlled by foreign actors. Maybe it’ll be domestic political groups. Maybe it’ll be the candidates themselves. Most likely, it’ll be everybody. The most important lesson from the 2016 election about misinformation isn’t that misinformation occurred; it is how cheap and easy misinforming people was. Future technological improvements will make it all even more affordable.

Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

Solutions are hard to imagine. We can regulate the use of bots—a proposed California law would require bots to identify themselves—but that is effective only against legitimate influence campaigns, such as advertising. Surreptitious influence operations will be much harder to detect. The most obvious defense is to develop and standardize better authentication methods. If social networks verify that an actual person is behind each account, then they can better weed out fake personas. But fake accounts are already regularly created for real people without their knowledge or consent, and anonymous speech is essential for robust political debate, especially when speakers are from disadvantaged or marginalized communities. We don’t have an authentication system that both protects privacy and scales to the billions of users.

We can hope that our ability to identify artificial personas keeps up with our ability to disguise them. If the arms race between deep fakes and deep-fake detectors is any guide, that’ll be hard as well. The technologies of obfuscation always seem one step ahead of the technologies of detection. And artificial personas will be designed to act exactly like real people.

In the end, any solutions have to be nontechnical. We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we’re talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.

Misinformation efforts are now common around the globe, conducted in more than 70 countries. This is the normal way to push propaganda in countries with authoritarian leanings, and it’s becoming the way to run a political campaign, for either a candidate or an issue.

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don’t know the effect of that noise on democracy, only that it’ll be pernicious, and that it’s inevitable.

This essay previously appeared in TheAtlantic.com.

EDITED TO ADD: Jamie Susskind wrote a similar essay.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

EDITED TO ADD (6/4): This essay has been translated into Portuguese.

Posted on January 13, 2020 at 8:21 AM

Influence Operations Kill Chain

Influence operations are elusive to define. The Rand Corp.’s definition is as good as any: “the collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent.” Basically, we know it when we see it, from bots controlled by the Russian Internet Research Agency to Saudi attempts to plant fake stories and manipulate political debate. These operations have been run by Iran against the United States, Russia against Ukraine, China against Taiwan, and probably lots more besides.

Since the 2016 US presidential election, there has been an endless series of ideas about how countries can defend themselves. It’s time to pull those together into a comprehensive approach to defending the public sphere and the institutions of democracy.

Influence operations don’t come out of nowhere. They exploit a series of predictable weaknesses—and fixing those holes should be the first step in fighting them. In cybersecurity, this is known as a “kill chain.” That can work in fighting influence operations, too­—laying out the steps of an attack and building the taxonomy of countermeasures.

In an exploratory blog post, I first laid out a straw man information operations kill chain. I started with the seven commandments, or steps, laid out in a 2018 New York Times opinion video series on “Operation Infektion,” a 1980s Russian disinformation campaign. The information landscape has changed since the 1980s, and these operations have changed as well. Based on my own research and feedback from that initial attempt, I have modified those steps to bring them into the present day. I have also changed the name from “information operations” to “influence operations,” because the former is traditionally defined by the US Department of Defense in ways that don’t really suit these sorts of attacks.

Step 1: Find the cracks in the fabric of society­—the social, demographic, economic, and ethnic divisions. For campaigns that just try to weaken collective trust in government’s institutions, lots of cracks will do. But for influence operations that are more directly focused on a particular policy outcome, only those related to that issue will be effective.

Countermeasures: There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the “common political knowledge” necessary for democracies to function. That shared knowledge has to be strengthened, thereby making it harder to exploit the inevitable cracks. It needs to be made unacceptable—or at least costly—for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. The public must learn to become reflexively suspicious of information that makes them angry at fellow citizens. These cracks can’t be entirely sealed, as they emerge from the diversity that makes democracies strong, but they can be made harder to exploit. Much of the work in “norms” falls here, although this is essentially an unfixable problem. This makes the countermeasures in the later steps even more important.

Step 2: Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives. In 2016, this consisted of creating social media accounts run either by human operatives or automatically by bots, making them seem legitimate, gathering followers. In the years following, this has gotten subtler. As social media companies have gotten better at deleting these accounts, two separate tactics have emerged. The first is microtargeting, where influence accounts join existing social circles and only engage with a few different people. The other is influencer influencing, where these accounts only try to affect a few proxies (see step 6)—either journalists or other influencers—who can carry their message for them.

Countermeasures: This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Social media companies need to detect and delete accounts belonging to propagandists as well as bots and groups run by those propagandists. Troll farms exhibit particular behaviors that the platforms need to be able to recognize. It would be best to delete accounts early, before those accounts have the time to establish themselves.

This might involve normally competitive companies working together, since operations and account names often cross platforms, and cross-platform visibility is an important tool for identifying them. Taking down accounts as early as possible is important, because it takes time to establish the legitimacy and reach of any one account. The NSA and US Cyber Command worked with the FBI and social media companies to take down Russian propaganda accounts during the 2018 midterm elections. It may be necessary to pass laws requiring Internet companies to do this. While many social networking companies have reversed their “we don’t care” attitudes since the 2016 election, there’s no guarantee that they will continue to remove these accounts—especially since their profits depend on engagement and not accuracy.
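One behavioral signal that platforms (and outside researchers) could look for is many distinct accounts pushing near-identical text within a short window. The sketch below is a minimal illustration of that idea only; the data structure, thresholds, and function names are hypothetical, not any platform’s actual detection system.

```python
# Hypothetical sketch of one coordination signal: many distinct accounts
# posting near-identical text within a short time window. Thresholds and
# the Post structure are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch

def coordinated_clusters(posts: list[Post],
                         window_s: float = 600.0,
                         min_accounts: int = 20) -> list[list[Post]]:
    """Group posts by normalized text and return groups in which at least
    `min_accounts` distinct accounts posted within `window_s` of each other."""
    by_text: dict[str, list[Post]] = defaultdict(list)
    for p in posts:
        by_text[" ".join(p.text.lower().split())].append(p)

    flagged = []
    for group in by_text.values():
        group.sort(key=lambda p: p.timestamp)
        start = 0
        for end in range(len(group)):
            # Slide the window so it spans at most window_s seconds.
            while group[end].timestamp - group[start].timestamp > window_s:
                start += 1
            accounts = {p.account for p in group[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged.append(group[start:end + 1])
                break
    return flagged
```

In practice troll farms vary their wording, stagger their timing, and operate across platforms, which is why cross-platform visibility matters; simple coordination signals like this one are only a starting point.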

Step 3: Seed distortion by creating alternative narratives. In the 1980s, this was a single “big lie,” but today it is more about many contradictory alternative truths—a “firehose of falsehood”—that distort the political debate. These can be fake or heavily slanted news stories, extremist blog posts, fake stories on real-looking websites, deepfake videos, and so on.

Countermeasures: Fake news and propaganda are viruses; they spread through otherwise healthy populations. Fake news has to be identified and labeled as such by social media companies and others, including recognizing and identifying manipulated videos known as deepfakes. Facebook is already making moves in this direction. Educators need to teach better digital literacy, as Finland is doing. All of this will help people recognize propaganda campaigns when they occur, so they can inoculate themselves against their effects. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

Step 4: Wrap those narratives in kernels of truth. A core of fact makes falsehoods more believable and helps them spread. The releases of stolen emails from Hillary Clinton’s campaign chairman John Podesta and the Democratic National Committee, and of documents from Emmanuel Macron’s campaign in France, were both examples of that kernel of truth. Releasing stolen emails with a few deliberate falsehoods embedded among them is an even more effective tactic.

Countermeasures: Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Fake news sows confusion just by being there. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. The media needs to learn skepticism about the chain of information and to exercise caution in how they approach debunked stories.

Step 5: Conceal your hand. Make it seem as if the stories came from somewhere else.

Countermeasures: Here the answer is attribution, attribution, attribution. The quicker an influence operation can be pinned on an attacker, the easier it is to defend against it. This will require efforts by both the social media platforms and the intelligence community, not just to detect influence operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

Step 6: Cultivate proxies who believe and amplify the narratives. Traditionally, these people have been called “useful idiots.” Encourage them to take action outside of the Internet, like holding political rallies, and to adopt positions even more extreme than they would otherwise.

Countermeasures: We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth—in the words of one report, we must “make the truth louder.” Of course, there will always be true believers for whom no amount of fact-checking or counter-speech will suffice; this is not intended for them. Focus instead on persuading the persuadable.

Step 7: Deny involvement in the propaganda campaign, even if the truth is obvious. Although, since one major goal is to convince people that nothing can be trusted, rumors of involvement can be beneficial to the attacker as well. Denial was Russia’s tactic during the 2016 US presidential election; it encouraged rumors of its involvement during the 2018 midterm elections.

Countermeasures: When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA’s Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

Step 8: Play the long game. Strive for long-term impact over immediate effects. Engage in multiple operations; most won’t be successful, but some will.

Countermeasures: Counterattacks can disrupt the attacker’s ability to maintain influence operations, as US Cyber Command did during the 2018 midterm elections. The NSA’s new policy of “persistent engagement” (see the article by, and interview with, US Cyber Command Commander Paul Nakasone here) is a strategy to achieve this. So are targeted sanctions and indicting individuals involved in these operations. While there is little hope of bringing them to the United States to stand trial, the possibility of not being able to travel internationally for fear of being arrested will lead some people to refuse to do this kind of work. More generally, we need to better encourage both politicians and social media companies to think beyond the next election cycle or quarterly earnings report.

Permeating all of this is the importance of deterrence. Deterring these attacks will require a different theory. It will require, as the political scientist Henry Farrell and I have postulated, thinking of democracy itself as an information system and understanding “Democracy’s Dilemma”: how the very tools of a free and open society can be subverted to attack that society. We need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of influence operations, if we can publicly attribute, if we can respond either diplomatically or otherwise—we can deter these attacks from nation-states.

None of these defensive actions is sufficient on its own. Steps overlap and in some cases can be skipped. Steps can be conducted simultaneously or out of order. A single operation can span multiple targets or be an amalgamation of multiple attacks by multiple actors. Unlike a cyberattack, disrupting an influence operation will require more than disrupting any particular step. It will require a coordinated effort between government, Internet platforms, the media, and others.

Also, this model is not static, of course. Influence operations have already evolved since the 2016 election and will continue to evolve over time—especially as countermeasures are deployed and attackers figure out how to evade them. We need to be prepared for wholly different kinds of influence operations during the 2020 US presidential election. The goal of this kill chain is to be general enough to encompass a panoply of tactics but specific enough to illuminate countermeasures. But even if this particular model doesn’t fit every influence operation, it’s important to start somewhere.

Others have worked on similar ideas. Anthony Soules, a former NSA employee who now leads cybersecurity strategy for Amgen, presented this concept at a private event. Clint Watts of the Alliance for Securing Democracy is thinking along these lines as well. The Credibility Coalition’s Misinfosec Working Group proposed a “misinformation pyramid.” The US Justice Department developed a “Malign Foreign Influence Campaign Cycle,” with associated countermeasures.

The threat from influence operations is real and important, and it deserves more study. At the same time, there’s no reason to panic. Just as overly optimistic technologists were wrong that the Internet was the single technology that was going to overthrow dictators and liberate the planet, so pessimists are also probably wrong that it is going to empower dictators and destroy democracy. If we deploy countermeasures across the entire kill chain, we can defend ourselves from these attacks.

But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they’re surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. And as more people, and multiple parties, conduct influence operations, they will increasingly be seen as how the game of politics is played in the information age. This means that the line will increasingly blur between influence operations and politics as usual, and that domestic influencers will be using them as part of campaigning. Defending democracy against foreign influence also necessitates making our own political debate healthier.

This essay previously appeared in Foreign Policy.

Posted on August 19, 2019 at 6:14 AM

Fake News and Pandemics

When the next pandemic strikes, we’ll be fighting it on two fronts. The first is the one you immediately think about: understanding the disease, researching a cure and inoculating the population. The second is new, and one you might not have thought much about: fighting the deluge of rumors, misinformation and flat-out lies that will appear on the internet.

The second battle will be like the Russian disinformation campaigns during the 2016 presidential election, only with the addition of a deadly health crisis and possibly without a malicious government actor. But while the two problems—misinformation affecting democracy and misinformation affecting public health—will have similar solutions, the latter is much less political. If we work to solve the pandemic disinformation problem, any solutions are likely to also be applicable to the democracy one.

Pandemics are part of our future. They might be like the 1968 Hong Kong flu, which killed a million people, or the 1918 Spanish flu, which killed over 40 million. Yes, modern medicine makes pandemics less likely and less deadly. But global travel and trade, increased population density, decreased wildlife habitats, and increased animal farming to satisfy a growing and more affluent population have made them more likely. Experts agree that it’s not a matter of if—it’s only a matter of when.

When the next pandemic strikes, accurate information will be just as important as effective treatments. We saw this in 2014, when the Nigerian government managed to contain a subcontinentwide Ebola epidemic to just 20 infections and eight fatalities. Part of that success was because of the ways officials communicated health information to all Nigerians, using government-sponsored videos, social media campaigns and international experts. Without that, the death toll in Lagos, a city of 21 million people, would have probably been greater than the 11,000 the rest of the continent experienced.

There’s every reason to expect misinformation to be rampant during a pandemic. In the early hours and days, information will be scant and rumors will abound. Most of us are not health professionals or scientists. We won’t be able to tell fact from fiction. Even worse, we’ll be scared. Our brains work differently when we are scared, and they latch on to whatever makes us feel safer—even if it’s not true.

Rumors and misinformation could easily overwhelm legitimate news channels, as people share tweets, images and videos. Much of it will be well-intentioned but wrong—like the misinformation spread by the anti-vaccination community today—but some of it may be malicious. In the 1980s, the KGB ran a sophisticated disinformation campaign—Operation Infektion—to spread the rumor that HIV/AIDS was a result of an American biological weapon gone awry. It’s reasonable to assume some group or country would deliberately spread intentional lies in an attempt to increase death and chaos.

It’s not just misinformation about which treatments work (and are safe), and which treatments don’t work (and are unsafe). Misinformation can affect society’s ability to deal with a pandemic at many different levels. Right now, Ebola relief efforts in the Democratic Republic of Congo are being stymied by mistrust of health workers and government officials.

It doesn’t take much to imagine how this can lead to disaster. Jay Walker, curator of the TEDMED conferences, laid out some of the possibilities in a 2016 essay: people overwhelming and even looting pharmacies trying to get some drug that is irrelevant or nonexistent, people needlessly fleeing cities and leaving them paralyzed, health workers not showing up for work, truck drivers and other essential people being afraid to enter infected areas, official sites like CDC.gov being hacked and discredited. This kind of thing can magnify the health effects of a pandemic many times over, and in extreme cases could lead to a total societal collapse.

This is going to be something that government health organizations, medical professionals, social media companies and the traditional media are going to have to work out together. There isn’t any single solution; it will require many different interventions that will all need to work together. The interventions will look a lot like what we’re already talking about with regard to government-run and other information influence campaigns that target our democratic processes: methods of visibly identifying false stories, the identification and deletion of fake posts and accounts, ways to promote official and accurate news, and so on. At the scale these are needed, they will have to be done automatically and in real time.

Since the 2016 presidential election, we have been talking about propaganda campaigns, and about how social media amplifies fake news and allows damaging messages to spread easily. It’s a hard discussion to have in today’s hyperpolarized political climate. After any election, the winning side has every incentive to downplay the role of fake news.

But pandemics are different; there’s no political constituency in favor of people dying because of misinformation. Google doesn’t want the results of people’s well-intentioned searches to lead to fatalities. Facebook and Twitter don’t want people on their platforms sharing misinformation that will result in either individual or mass deaths. Focusing on pandemics gives us an apolitical way to collectively approach the general problem of misinformation and fake news. And any solutions for pandemics are likely to also be applicable to the more general—and more political—problems.

Pandemics are inevitable. Bioterror is already possible, and will only get easier as the requisite technologies become cheaper and more common. We’re experiencing the largest measles outbreak in 25 years thanks to the anti-vaccination movement, which has hijacked social media to amplify its messages; we seem unable to beat back the disinformation and pseudoscience surrounding the vaccine. Those same forces will dramatically increase death and social upheaval in the event of a pandemic.

Let the Russian propaganda attacks on the 2016 election serve as a wake-up call for this and other threats. We need to solve the problem of misinformation during pandemics together—governments and industries in collaboration with medical officials, all across the world—before there’s a crisis. And the solutions will also help us shore up our democracy in the process.

This essay previously appeared in the New York Times.

Posted on June 21, 2019 at 5:10 AM

Propaganda and the Weakening of Trust in Government

On November 4, 2016, the hacker “Guccifer 2.0,” a front for Russia’s military intelligence service, claimed in a blogpost that the Democrats were likely to use vulnerabilities to hack the presidential elections. On November 9, 2018, President Donald Trump started tweeting about the senatorial elections in Florida and Arizona. Without any evidence whatsoever, he said that Democrats were trying to steal the election through “FRAUD.”

Cybersecurity experts would say that posts like Guccifer 2.0’s are intended to undermine public confidence in voting: a cyber-attack against the US democratic system. Yet Donald Trump’s actions are doing far more damage to democracy. So far, his tweets on the topic have been retweeted over 270,000 times, eroding confidence far more effectively than any foreign influence campaign.

We need new ideas to explain how public statements on the Internet can weaken American democracy. Cybersecurity today is not only about computer systems. It’s also about the ways attackers can use computer systems to manipulate and undermine public expectations about democracy. Not only do we need to rethink attacks against democracy; we also need to rethink the attackers.

This is one key reason why we wrote a new research paper that uses ideas from computer security to understand the relationship between democracy and information. These ideas help us understand attacks that destabilize confidence in democratic institutions or debate.

Our research implies that insider attacks from within American politics can be more pernicious than attacks from other countries. They are more sophisticated, employ tools that are harder to defend against, and lead to harsh political tradeoffs. The US can threaten charges or impose sanctions when Russian trolling agencies attack its democratic system. But what punishments can it use when the attacker is the US president?

People who think about cybersecurity build on ideas about confrontations between states during the Cold War. Intellectuals such as Thomas Schelling developed deterrence theory, which explained how the US and USSR could maneuver to limit each other’s options without ever actually going to war. Deterrence theory, and related concepts about the relative ease of attack and defense, seemed to explain the tradeoffs that the US and rival states faced as they started to use cyber techniques to probe and compromise each other’s information networks.

However, these ideas fail to acknowledge one key difference between the Cold War and today. Nearly all states—whether democratic or authoritarian—are entangled on the Internet. This creates both new tensions and new opportunities. The US assumed that the Internet would help spread American liberal values, and that this was a good and uncontroversial thing. Illiberal states like Russia and China feared that Internet freedom was a direct threat to their own systems of rule. Opponents of the regime might use social media and online communication to coordinate among themselves, and appeal to the broader public, perhaps toppling their governments, as happened in Tunisia during the Arab Spring.

This led illiberal states to develop new domestic defenses against open information flows. As scholars like Molly Roberts have shown, states like China and Russia discovered how they could "flood" internet discussion with online nonsense and distraction, making it impossible for their opponents to talk to each other, or even to distinguish between truth and falsehood. These flooding techniques stabilized authoritarian regimes, because they demoralized and confused the regime’s opponents. Libertarians often argue that the best antidote to bad speech is more speech. What Vladimir Putin discovered was that the best antidote to more speech was bad speech.

Russia saw the Arab Spring and efforts to encourage democracy in its neighborhood as direct threats, and began experimenting with counter-offensive techniques. When a Russia-friendly government in Ukraine collapsed due to popular protests, Russia tried to destabilize new, democratic elections by hacking the system through which the election results would be announced. The clear intention was to discredit the election results by announcing fake voting numbers that would throw public discussion into disarray.

This attack on public confidence in election results was thwarted at the last moment. Even so, it provided the model for a new kind of attack. Hackers don’t have to secretly alter people’s votes to affect elections. All they need to do is to damage public confidence that the votes were counted fairly. As researchers have argued, “simply put, the attacker might not care who wins; the losing side believing that the election was stolen from them may be equally, if not more, valuable.”

These two kinds of attacks—“flooding” attacks aimed at destabilizing public discourse, and “confidence” attacks aimed at undermining public belief in elections—were weaponized against the US in 2016. Russian social media trolls, hired by the “Internet Research Agency,” flooded online political discussions with rumors and counter-rumors in order to create confusion and political division. Peter Pomerantsev describes how in Russia, “one moment [Putin’s media wizard] Surkov would fund civic forums and human rights NGOs, the next he would quietly support nationalist movements that accuse the NGOs of being tools of the West.” Similarly, Russian trolls tried to get Black Lives Matter protesters and anti-Black Lives Matter protesters to march at the same time and place, to create conflict and the appearance of chaos. Guccifer 2.0’s blog post was surely intended to undermine confidence in the vote, preparing the ground for a wider destabilization campaign after Hillary Clinton won the election. Neither Putin nor anyone else anticipated that Trump would win, ushering in chaos on a vastly greater scale.

We do not know how successful these attacks were. A new book by John Sides, Michael Tesler, and Lynn Vavreck suggests that Russian efforts had no measurable long-term consequences. Detailed research on the flow of news articles through social media by Yochai Benkler, Robert Faris, and Hal Roberts agrees, showing that Fox News was far more influential in the spread of false news stories than any Russian effort.

However, global adversaries like the Russians aren’t the only actors who can use flooding and confidence attacks. US actors can use just the same techniques. Indeed, they can arguably use them better, since they have a better understanding of US politics, more resources, and are far more difficult for the government to counter without raising First Amendment issues.

For example, when the Federal Communications Commission asked for comments on its proposal to get rid of “net neutrality,” it was flooded by fake comments supporting the proposal. Nearly every real person who commented was in favor of net neutrality, but their arguments were drowned out by a flood of spurious comments purportedly made by identities stolen from porn sites, by people whose names and email addresses had been harvested without their permission, and, in some cases, from dead people. This was done not just to generate fake support for the FCC’s controversial proposal. It was to devalue public comments in general, making the general public’s support for net neutrality politically irrelevant. FCC decision making on issues like net neutrality used to be dominated by industry insiders, and many would like to go back to the old regime.

Trump’s efforts to undermine confidence in the Florida and Arizona votes work on a much larger scale. There are clear short-term benefits to asserting fraud where no fraud exists. This may sway judges or other public officials to make concessions to the Republicans to preserve their legitimacy. Yet such assertions also destabilize American democracy in the long term. If Republicans are convinced that Democrats win by cheating, they will feel that their own manipulation of the system (by purging voter rolls, making voting more difficult, and so on) is legitimate, and they will very probably cheat even more flagrantly in the future. This will trash collective institutions and leave everyone worse off.

It is notable that some Arizonan Republicans—including Martha McSally—have so far stayed firm against pressure from the White House and the Republican National Committee to claim that cheating is happening. They presumably see more long-term value in preserving existing institutions than in undermining them. Very plausibly, Donald Trump has exactly the opposite incentives. By weakening public confidence in the vote today, he makes it easier to claim fraud and perhaps plunge American politics into chaos if he is defeated in 2020.

If experts who see Russian flooding and confidence measures as cyberattacks on US democracy are right, then these attacks are just as dangerous—and perhaps more dangerous—when they are used by domestic actors. The risk is that over time they will destabilize American democracy so that it comes closer to Russia’s managed democracy—where nothing is real any more, and ordinary people feel a mixture of paranoia, helplessness and disgust when they think about politics. Paradoxically, Russian interference is far too ineffectual to get us there—but domestically mounted attacks by all-American political actors might.

To protect against that possibility, we need to start thinking more systematically about the relationship between democracy and information. Our paper provides one way to do this, highlighting the vulnerabilities of democracy against certain kinds of information attack. More generally, we need to build levees against flooding while shoring up public confidence in voting and other public information systems that are necessary to democracy.

The first may require radical changes in how we regulate social media companies. Modernization of government commenting platforms to make them robust against flooding is only a very minimal first step. Until very recently, companies like Twitter won market advantage from bot infestations: even when Twitter couldn’t make a profit, its user numbers seemed to be growing. CEOs like Mark Zuckerberg have begun to worry about democracy, but their worries will likely only go so far. It is difficult to get a man to understand something when his business model depends on not understanding it. Sharp—and legally enforceable—limits on automated accounts are a first step. Radical redesign of networks and of trending indicators so that flooding attacks are less effective may be a second.

The second requires general standards for voting at the federal level, and a constitutional guarantee of the right to vote. Technical experts nearly universally favor robust voting systems that combine paper records with random post-election auditing, to prevent fraud and secure public confidence in voting. Other steps to ensure proper ballot design and to standardize vote counting and reporting will take more time and discussion, yet the record of other countries shows that they are not impossible.
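As a rough illustration of why statisticians like post-election audits, here is a minimal sketch of a two-candidate ballot-polling audit in the spirit of a risk-limiting audit. It is not a production audit procedure (real audits handle multiple candidates, invalid ballots, and escalation rules); the numbers and function names are illustrative assumptions.

```python
# Hypothetical sketch: a two-candidate ballot-polling audit in the spirit of
# a risk-limiting audit. Randomly sampled paper ballots either build evidence
# that the reported winner really won or force a full hand count. Numbers,
# names, and the simplified stopping rule are illustrative assumptions.
import random

def ballot_polling_audit(ballots: list[str], winner: str,
                         reported_share: float, alpha: float = 0.1,
                         seed: int = 0) -> int:
    """Return how many randomly drawn ballots were needed to confirm the
    reported winner at risk limit `alpha`, or len(ballots) if the audit
    escalates to a full hand count."""
    rng = random.Random(seed)
    evidence = 1.0
    for n, i in enumerate(rng.sample(range(len(ballots)), len(ballots)), start=1):
        if ballots[i] == winner:
            evidence *= reported_share / 0.5          # ballot supports the winner
        else:
            evidence *= (1.0 - reported_share) / 0.5  # ballot supports the loser
        if evidence >= 1.0 / alpha:
            return n                                   # confident enough to stop
    return len(ballots)                                # full hand count instead

# Example: 100,000 paper ballots with a reported 55/45 split.
votes = ["A"] * 55_000 + ["B"] * 45_000
print(ballot_polling_audit(votes, winner="A", reported_share=0.55))
```

With a ten-point reported margin, an audit like this typically confirms the outcome after inspecting only a few hundred ballots; the narrower the margin, the more ballots it demands, which is exactly the property that makes random auditing both cheap and confidence-building.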

The US is nearly unique among major democracies in the persistent flaws of its election machinery. Yet voting is not the only important form of democratic information. Apparent efforts to deliberately skew the US census against counting undocumented immigrants show the need for a more general audit of the political information systems that we need if democracy is to function properly.

It’s easier to respond to Russian hackers through sanctions, counter-attacks and the like than to domestic political attacks that undermine US democracy. To preserve the basic political freedoms of democracy requires recognizing that these freedoms are sometimes going to be abused by politicians such as Donald Trump. The best that we can do is to minimize the possibilities of abuse up to the point where they encroach on basic freedoms and harden the general institutions that secure democratic information against attacks intended to undermine them.

This essay was co-authored with Henry Farrell, and previously appeared on Motherboard, with a terrible headline that I was unable to get changed.

Posted on November 27, 2018 at 7:43 AM
