Entries Tagged "propaganda"

Propaganda and the Weakening of Trust in Government

On November 4, 2016, the hacker “Guccifer 2.0,” a front for Russia’s military intelligence service, claimed in a blog post that the Democrats were likely to use vulnerabilities to hack the presidential elections. On November 9, 2018, President Donald Trump started tweeting about the senatorial elections in Florida and Arizona. Without any evidence whatsoever, he said that Democrats were trying to steal the election through “FRAUD.”

Cybersecurity experts would say that posts like Guccifer 2.0’s are intended to undermine public confidence in voting: a cyber-attack against the US democratic system. Yet Donald Trump’s actions are doing far more damage to democracy. So far, his tweets on the topic have been retweeted over 270,000 times, eroding confidence far more effectively than any foreign influence campaign.

We need new ideas to explain how public statements on the Internet can weaken American democracy. Cybersecurity today is not only about computer systems. It’s also about the ways attackers can use computer systems to manipulate and undermine public expectations about democracy. Not only do we need to rethink attacks against democracy; we also need to rethink the attackers.

This is one key reason why we wrote a new research paper that uses ideas from computer security to understand the relationship between democracy and information. These ideas help us understand attacks that destabilize confidence in democratic institutions or debate.

Our research implies that insider attacks from within American politics can be more pernicious than attacks from other countries. They are more sophisticated, employ tools that are harder to defend against, and lead to harsh political tradeoffs. The US can threaten charges or impose sanctions when Russian trolling agencies attack its democratic system. But what punishments can it use when the attacker is the US president?

People who think about cybersecurity build on ideas about confrontations between states during the Cold War. Intellectuals such as Thomas Schelling developed deterrence theory, which explained how the US and USSR could maneuver to limit each other’s options without ever actually going to war. Deterrence theory, and related concepts about the relative ease of attack and defense, seemed to explain the tradeoffs that the US and rival states faced as they started to use cyber techniques to probe and compromise each other’s information networks.

However, these ideas fail to acknowledge one key difference between the Cold War and today. Nearly all states—whether democratic or authoritarian—are entangled on the Internet. This creates both new tensions and new opportunities. The US assumed that the Internet would help spread American liberal values, and that this was a good and uncontroversial thing. Illiberal states like Russia and China feared that Internet freedom was a direct threat to their own systems of rule. Opponents of the regime might use social media and online communication to coordinate among themselves and appeal to the broader public, perhaps toppling their governments, as happened in Tunisia during the Arab Spring.

This led illiberal states to develop new domestic defenses against open information flows. As scholars like Molly Roberts have shown, states like China and Russia discovered how they could “flood” Internet discussion with online nonsense and distraction, making it impossible for their opponents to talk to each other, or even to distinguish between truth and falsehood. These flooding techniques stabilized authoritarian regimes, because they demoralized and confused the regime’s opponents. Libertarians often argue that the best antidote to bad speech is more speech. What Vladimir Putin discovered was that the best antidote to more speech was bad speech.

Russia saw the Arab Spring and efforts to encourage democracy in its neighborhood as direct threats, and began experimenting with counter-offensive techniques. When a Russia-friendly government in Ukraine collapsed due to popular protests, Russia tried to destabilize new, democratic elections by hacking the system through which the election results would be announced. The clear intention was to discredit the election results by announcing fake voting numbers that would throw public discussion into disarray.

This attack on public confidence in election results was thwarted at the last moment. Even so, it provided the model for a new kind of attack. Hackers don’t have to secretly alter people’s votes to affect elections. All they need to do is to damage public confidence that the votes were counted fairly. As researchers have argued, “simply put, the attacker might not care who wins; the losing side believing that the election was stolen from them may be equally, if not more, valuable.”

These two kinds of attacks—“flooding” attacks aimed at destabilizing public discourse, and “confidence” attacks aimed at undermining public belief in elections—were weaponized against the US in 2016. Russian social media trolls, hired by the “Internet Research Agency,” flooded online political discussions with rumors and counter-rumors in order to create confusion and political division. Peter Pomerantsev describes how in Russia, “one moment [Putin’s media wizard] Surkov would fund civic forums and human rights NGOs, the next he would quietly support nationalist movements that accuse the NGOs of being tools of the West.” Similarly, Russian trolls tried to get Black Lives Matter protesters and anti-Black Lives Matter protesters to march at the same time and place, to create conflict and the appearance of chaos. Guccifer 2.0’s blog post was surely intended to undermine confidence in the vote, preparing the ground for a wider destabilization campaign after Hillary Clinton won the election. Neither Putin nor anyone else anticipated that Trump would win, ushering in chaos on a vastly greater scale.

We do not know how successful these attacks were. A new book by John Sides, Michael Tesler and Lynn Vavreck suggests that Russian efforts had no measurable long-term consequences. Detailed research on the flow of news articles through social media by Yochai Benkler, Robert Faris, and Hal Roberts agrees, showing that Fox News was far more influential in the spread of false news stories than any Russian effort.

However, global adversaries like the Russians aren’t the only actors who can use flooding and confidence attacks. US actors can use just the same techniques. Indeed, they can arguably use them better, since they have a better understanding of US politics, more resources, and are far more difficult for the government to counter without raising First Amendment issues.

For example, when the Federal Communications Commission asked for comments on its proposal to get rid of “net neutrality,” it was flooded by fake comments supporting the proposal. Nearly every real person who commented was in favor of net neutrality, but their arguments were drowned out by a flood of spurious comments purportedly made by identities stolen from porn sites, by people whose names and email addresses had been harvested without their permission, and, in some cases, from dead people. This was done not just to generate fake support for the FCC’s controversial proposal. It was to devalue public comments in general, making the general public’s support for net neutrality politically irrelevant. FCC decision making on issues like net neutrality used to be dominated by industry insiders, and many would like to go back to the old regime.
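
Detecting this kind of flooding is largely a text-clustering problem, since the fake comments arrived in huge, near-identical batches. Here is a minimal sketch of that kind of triage in Python, assuming exact matching after normalization; real analyses of the FCC docket used fuzzier matching, and the threshold below is invented for illustration.

```python
import hashlib
import re
from collections import Counter

def fingerprint(comment_text):
    """Normalize case, punctuation, and whitespace, then hash the result."""
    normalized = re.sub(r"[^a-z0-9 ]", "", comment_text.lower())
    normalized = " ".join(normalized.split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def flag_mass_duplicates(comments, threshold=1000):
    """Return fingerprints of comment texts submitted `threshold` or more times.

    Large clusters of textually identical comments filed under different
    names are a signature of a flooding campaign rather than organic input.
    """
    counts = Counter(fingerprint(c) for c in comments)
    return {fp: n for fp, n in counts.items() if n >= threshold}
```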

Trump’s efforts to undermine confidence in the Florida and Arizona votes work on a much larger scale. There are clear short-term benefits to asserting fraud where no fraud exists. Doing so may sway judges or other public officials to make concessions to the Republicans to preserve their legitimacy. Yet such claims also destabilize American democracy in the long term. If Republicans are convinced that Democrats win by cheating, they will feel that their own manipulation of the system (by purging voter rolls, making voting more difficult, and so on) is legitimate, and they will very probably cheat even more flagrantly in the future. This will trash collective institutions and leave everyone worse off.

It is notable that some Arizonan Republicans—including Martha McSally—have so far stayed firm against pressure from the White House and the Republican National Committee to claim that cheating is happening. They presumably see more long-term value in preserving existing institutions than in undermining them. Very plausibly, Donald Trump has exactly the opposite incentives. By weakening public confidence in the vote today, he makes it easier to claim fraud and perhaps plunge American politics into chaos if he is defeated in 2020.

If experts who see Russian flooding and confidence measures as cyberattacks on US democracy are right, then these attacks are just as dangerous—and perhaps more dangerous—when they are used by domestic actors. The risk is that over time they will destabilize American democracy so that it comes closer to Russia’s managed democracy—where nothing is real any more, and ordinary people feel a mixture of paranoia, helplessness and disgust when they think about politics. Paradoxically, Russian interference is far too ineffectual to get us there—but domestically mounted attacks by all-American political actors might.

To protect against that possibility, we need to start thinking more systematically about the relationship between democracy and information. Our paper provides one way to do this, highlighting the vulnerabilities of democracy against certain kinds of information attack. More generally, we need to build levees against flooding while shoring up public confidence in voting and other public information systems that are necessary to democracy.

The first may require radical changes in how we regulate social media companies. Modernizing government commenting platforms to make them robust against flooding is only a very minimal first step. Until very recently, companies like Twitter won market advantage from bot infestations: even when the company couldn’t make a profit, its user numbers seemed to be growing. CEOs like Mark Zuckerberg have begun to worry about democracy, but their worries will likely only go so far. It is difficult to get a man to understand something when his business model depends on not understanding it. Sharp—and legally enforceable—limits on automated accounts are a first step. Radical redesign of networks and of trending indicators so that flooding attacks are less effective may be a second.
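
One hypothetical direction for such a redesign is for trending indicators to count distinct, established accounts rather than raw message volume, so that a swarm of bots adds almost nothing. A minimal sketch, with every threshold invented for illustration:

```python
def trending_score(posting_accounts, account_age_days, posts_per_hour,
                   min_age_days=30, max_rate_per_hour=20):
    """Score a topic by distinct, established accounts, not raw volume.

    posting_accounts: account IDs that mentioned the topic (may repeat).
    account_age_days, posts_per_hour: per-account metadata dictionaries.
    """
    counted = set()
    for acct in posting_accounts:
        if acct in counted:
            continue  # each account contributes at most once per topic
        if account_age_days.get(acct, 0) < min_age_days:
            continue  # discount newly created, possibly disposable accounts
        if posts_per_hour.get(acct, 0.0) > max_rate_per_hour:
            continue  # discount accounts posting at bot-like rates
        counted.add(acct)
    return len(counted)
```

The point is not these particular thresholds but the design choice: raw counts are cheap for an attacker to inflate, while aged, rate-limited identities are not.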

The second requires general standards for voting at the federal level, and a constitutional guarantee of the right to vote. Technical experts nearly universally favor robust voting systems that combine paper records with random post-election auditing, to prevent fraud and secure public confidence in voting. Other steps, such as ensuring proper ballot design and standardizing vote counting and reporting, will take more time and discussion—yet the record of other countries shows that they are not impossible.
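
To make “random post-election auditing” concrete, here is a minimal sketch of a ballot-polling risk-limiting audit in the style of the BRAVO method; the contest, tallies, and risk limit below are hypothetical.

```python
import random

def ballot_polling_audit(ballots, reported_winner, reported_share,
                         risk_limit=0.05):
    """BRAVO-style sequential test for a two-candidate contest.

    ballots: the full set of paper-ballot choices (strings).
    reported_share: the winner's reported vote share (must exceed 0.5).
    Returns True if sampling confirms the outcome at the risk limit.
    """
    assert reported_share > 0.5, "reported winner must have a majority"
    t = 1.0  # sequential likelihood-ratio test statistic
    for i in random.sample(range(len(ballots)), len(ballots)):
        if ballots[i] == reported_winner:
            t *= reported_share / 0.5
        else:
            t *= (1 - reported_share) / 0.5
        if t >= 1 / risk_limit:
            return True  # strong evidence the reported winner really won
    return False  # audit did not confirm; escalate to a full hand count

# Hypothetical contest: the winner reported 55% of 10,000 paper ballots.
votes = ["A"] * 5500 + ["B"] * 4500
print(ballot_polling_audit(votes, "A", 0.55))
```

The appeal of this design is that the number of ballots inspected by hand grows with how close the contest is, and a dishonestly reported outcome forces escalation to a full hand count.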

The US is nearly unique among major democracies in the persistent flaws of its election machinery. Yet voting is not the only important form of democratic information. Apparent efforts to deliberately skew the US census against counting undocumented immigrants show the need for a more general audit of the political information systems that we need if democracy is to function properly.

It’s easier to respond to Russian hackers through sanctions, counter-attacks and the like than to domestic political attacks that undermine US democracy. Preserving the basic political freedoms of democracy requires recognizing that these freedoms will sometimes be abused by politicians such as Donald Trump. The best we can do is minimize the possibilities for abuse, stopping short of encroaching on basic freedoms, and harden the general institutions that secure democratic information against attacks intended to undermine them.

This essay was co-authored with Henry Farrell, and previously appeared on Motherboard, with a terrible headline that I was unable to get changed.

Posted on November 27, 2018 at 7:43 AM

Research into the Root Causes of Terrorism

Interesting article in Science discussing field research on how people are radicalized to become terrorists.

The potential for research that can overcome existing constraints can be seen in recent advances in understanding violent extremism and, partly, in interdiction and prevention. Most notable is waning interest in simplistic root-cause explanations of why individuals become violent extremists (e.g., poverty, lack of education, marginalization, foreign occupation, and religious fervor), which cannot accommodate the richness and diversity of situations that breed terrorism or support meaningful interventions. A more tractable line of inquiry is how people actually become involved in terror networks (e.g., how they radicalize and are recruited, move to action, or come to abandon cause and comrades).

Reports from The Soufan Group, International Center for the Study of Radicalisation (King’s College London), and the Combating Terrorism Center (U.S. Military Academy) indicate that approximately three-fourths of those who join the Islamic State or al-Qaeda do so in groups. These groups often involve preexisting social networks and typically cluster in particular towns and neighborhoods. This suggests that much recruitment does not need direct personal appeals by organization agents or individual exposure to social media (which would entail a more dispersed recruitment pattern). Fieldwork is needed to identify the specific conditions under which these processes play out. Natural growth models of terrorist networks then might be based on an epidemiology of radical ideas in host social networks rather than built in the abstract then fitted to data and would allow for a public health, rather than strictly criminal, approach to violent extremism.

Such considerations have implications for countering terrorist recruitment. The present USG focus is on “counternarratives,” intended as alternatives to the “ideologies” held to motivate terrorists. This strategy treats ideas as disembodied from the human conditions in which they are embedded and given life as animators of social groups. In their stead, research and policy might better focus on personalized “counterengagement,” addressing and harnessing the fellowship, passion, and purpose of people within specific social contexts, as ISIS and al-Qaeda often do. This focus stands in sharp contrast to reliance on negative mass messaging and sting operations to dissuade young people in doubt through entrapment and punishment (the most common practice used in U.S. law enforcement) rather than through positive persuasion and channeling into productive life paths. At the very least, we need field research in communities that is capable of capturing evidence to reveal which strategies are working, failing, or backfiring.
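
To make the “epidemiology of radical ideas” framing above concrete, here is a toy contagion model on a hypothetical social graph, where recruitment can spread between connected individuals; every parameter is invented for illustration.

```python
import random

def simulate_idea_spread(adjacency, seeds, p_transmit=0.1, steps=20):
    """Toy contagion model on a social graph.

    adjacency: dict mapping each node to a list of its neighbors.
    seeds: the initially recruited nodes.
    Each step, every recruited node may recruit each susceptible
    neighbor with probability p_transmit.
    """
    recruited = set(seeds)
    for _ in range(steps):
        newly = set()
        for node in recruited:
            for neighbor in adjacency[node]:
                if neighbor not in recruited and random.random() < p_transmit:
                    newly.add(neighbor)
        recruited |= newly
    return recruited

# A small clustered graph: spread stays mostly within the dense cluster,
# echoing the observation that recruits come in groups from the same places.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(simulate_idea_spread(graph, seeds={0}))
```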

Posted on February 15, 2017 at 6:31 AM

Cybersecurity Issues for the Next Administration

On today’s Internet, too much power is concentrated in too few hands. In the early days of the Internet, individuals were empowered. Now governments and corporations hold the balance of power. If we are to leave a better Internet for the next generations, governments need to rebalance Internet power more towards the individual. This means several things.

First, less surveillance. Surveillance has become the business model of the Internet, and an aspect that is appealing to governments worldwide. While computers make it easier to collect data, and networks to aggregate it, governments should do more to ensure that any surveillance is exceptional, transparent, regulated and targeted. It’s a tall order; governments such as that of the US need to overcome their own mass-surveillance desires, and at the same time implement regulations to fetter the ability of Internet companies to do the same.

Second, less censorship. The early days of the Internet were free of censorship, but no more. Many countries censor their Internet for a variety of political and moral reasons, and many large social networking platforms do the same thing for business reasons. Turkey censors anti-government political speech; many countries censor pornography. Facebook has censored both nudity and videos of police brutality. Governments need to commit to the free flow of information, and to make it harder for others to censor.

Third, less propaganda. One of the side-effects of free speech is erroneous speech. This naturally corrects itself when everybody can speak, but an Internet with centralized power is one that invites propaganda. For example, both China and Russia actively use propagandists to influence public opinion on social media. The more governments can do to counter propaganda in all its forms, the better off we all are.

And fourth, less use control. Governments need to ensure that our Internet systems are open and not closed, that neither totalitarian governments nor large corporations can limit what we do on them. This includes limits on what apps you can run on your smartphone, or what you can do with the digital files you purchase or with the data collected by the digital devices you own. Controls inhibit innovation: technical, business, and social.

Solutions require both corporate regulation and international cooperation. They require Internet governance to remain in the hands of the global community of engineers, companies, civil society groups, and Internet users. They require governments to be agile in the face of an ever-evolving Internet. And they’ll result in more power and control to the individual and less to powerful institutions. That’s how we built an Internet that enshrined the best of our societies, and that’s how we’ll keep it that way for future generations.

This essay previously appeared on Time.com, in a section about issues for the next president. It was supposed to appear in the print magazine, but was preempted by Donald Trump coverage.

Posted on October 14, 2016 at 6:20 AM

Other GCHQ News from Snowden

There are two other Snowden stories this week about GCHQ: one about its hacking practices, and the other about its propaganda and psychology research. The second is particularly disturbing:

While some of the unit’s activities are focused on the claimed areas, JTRIG also appears to be intimately involved in traditional law enforcement areas and U.K.-specific activity, as previously unpublished documents demonstrate. An August 2009 JTRIG memo entitled “Operational Highlights” boasts of “GCHQ’s first serious crime effects operation” against a website that was identifying police informants and members of a witness protection program. Another operation investigated an Internet forum allegedly “used to facilitate and execute online fraud.” The document also describes GCHQ advice provided “to assist the UK negotiating team on climate change.”

Particularly revealing is a fascinating 42-page document from 2011 detailing JTRIG’s activities. It provides the most comprehensive and sweeping insight to date into the scope of this unit’s extreme methods. Entitled “Behavioral Science Support for JTRIG’s Effects and Online HUMINT [Human Intelligence] Operations,” it describes the types of targets on which the unit focuses, the psychological and behavioral research it commissions and exploits, and its future organizational aspirations. It is authored by a psychologist, Mandeep K. Dhami.

Among other things, the document lays out the tactics the agency uses to manipulate public opinion, its scientific and psychological research into how human thinking and behavior can be influenced, and the broad range of targets that are traditionally the province of law enforcement rather than intelligence agencies.

Posted on June 26, 2015 at 12:12 PM

Manipulating Juries with PowerPoint

Interesting article on the subconscious visual tricks used to manipulate juries and affect verdicts.

In December 2012 the Washington Supreme Court threw out Glasmann’s convictions based on the “highly inflammatory” slides. As a general rule, courts don’t want prosecutors expressing their personal opinion to a jury; they’re supposed to couch their arguments in terms of what the evidence shows. Plastering the word “GUILTY” on a slide—not once or twice, but three times—was a “flagrant and ill intentioned” violation of this principle, the Washington Supreme Court wrote. The captions superimposed on the photos were “the equivalent of unadmitted evidence.”

One justice, Tom Chambers, wrote that he was stunned at the state’s contention that there was nothing wrong with digitally altering the booking photo. “Under the State’s logic, in a shooting case, there would be nothing improper with the State altering an image of the accused by photoshopping a gun into his hand,” Chambers wrote.

Jeffrey Ellis, a lawyer from Portland, Oregon, represented Glasmann on appeal. “We all know that commercials can try to persuade people on a subconscious level,” Ellis said in an interview. “But I don’t think the criminal-justice system wants to enter into that base arena.”

I think we need some clear rules as to what’s permitted.

Posted on December 23, 2014 at 2:19 PM

GCHQ Catalog of Exploit Tools

The latest Snowden story is a catalog of exploit tools from JTRIG (Joint Threat Research Intelligence Group), a unit of the British GCHQ, for both surveillance and propaganda. It’s a list of code names and short descriptions, such as these:

GLASSBACK: Technique of getting a targets IP address by pretending to be a spammer and ringing them. Target does not need to answer.

MINIATURE HERO: Active skype capability. Provision of real time call records (SkypeOut and SkypetoSkype) and bidirectional instant messaging. Also contact lists.

MOUTH: Tool for collection for downloading a user’s files from Archive.org.

PHOTON TORPEDO: A technique to actively grab the IP address of MSN messenger user.

SILVER SPECTOR: Allows batch Nmap scanning over Tor.

SPRING BISHOP: Find private photographs of targets on Facebook.

ANGRY PIRATE: is a tool that will permanently disable a target’s account on their computer.

BUMPERCAR+: is an automated system developed by JTRIG CITD to support JTRIG BUMPERCAR operations. BUMPERCAR operations are used to disrupt and deny Internet-based terror videos or other materials. The techniques employs the services provided by upload providers to report offensive materials.

BOMB BAY: is the capacity to increase website hits/rankings.

BURLESQUE: is the capacity to send spoofed SMS messages.

CLEAN SWEEP: Masquerade Facebook Wall Posts for individuals or entire countries.

CONCRETE DONKEY: is the capacity to scatter an audio message to a large number of telephones, or repeatedely bomb a target number with the same message.

GATEWAY: Ability to artificially increase traffic to a website.

GESTATOR: amplification of a given message, normally video, on popular multimedia websites (Youtube).

SCRAPHEAP CHALLENGE: Perfect spoofing of emails from Blackberry targets.

SUNBLOCK: Ability to deny functionality to send/receive email or view material online.

SWAMP DONKEY: is a tool that will silently locate all predefined types of file and encrypt them on a targets machine

UNDERPASS: Change outcome of online polls (previously known as NUBILO).

WARPATH: Mass delivery of SMS messages to support an Information Operations campaign.

HAVLOCK: Real-time website cloning techniques allowing on-the-fly alterations.

HUSK: Secure one-on-one web based dead-drop messaging platform.

There’s lots more. Go read the rest. This is a big deal, as big as the TAO catalog from December.

I would like to post the entire list. If someone has a clever way of extracting the text, or wants to retype it all, please send it to me.

EDITED TO ADD (7/16): HTML of the entire catalog is here.

Posted on July 14, 2014 at 12:35 PM