Entries Tagged "essays"

Evaluating the GCHQ Exceptional Access Proposal

The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI—and some of their peer agencies in the UK, Australia, and elsewhere—argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it’s about end-user devices. On the other side of this debate are pretty much all technologists working in computer security and cryptography, who argue that adding eavesdropping features fundamentally makes those systems less secure.

A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the UK’s GCHQ (the British signals-intelligence agency—basically, its NSA). It’s actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved—they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

On the surface, this isn’t a big ask. It doesn’t affect the encryption that protects the communications. It only affects the authentication that assures people they are talking to the people they think they are talking to. But it’s no less dangerous a backdoor than any other that has been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.

In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications but also changing the client program on everyone’s phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it’s still a general backdoor that could be used against anybody.

The basic problem is that a backdoor is a technical capability—a vulnerability—that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.

That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street, manually attach alligator clips to a specific wire pair, and listen in on what that phone transmitted and received. It was a vulnerability that anyone could exploit—not just the police—but it was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).

The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA—and of similar laws in other countries. By law, telephone companies must engineer phone switches that the government can eavesdrop on, mirroring that old physical system with computers. It is not the same thing, though. It doesn’t have the physical limitations that made the old system more secure. It can be administered remotely. And it’s implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.

This isn’t a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the kind of eavesdropping feature that laws like CALEA mandate. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and over 100 other high-ranking dignitaries.

There’s nothing about a phone switch that makes it fundamentally different from any other modern encrypted voice or chat system; any remotely administered backdoor will be just as vulnerable. Imagine that a chat program added this GCHQ backdoor. It would have to add a feature that lets someone elsewhere in the system—not the people at the endpoints—add additional parties to a chat. It would have to suppress any messages alerting users that another party had been added to that chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the “tell me who is in this chat conversation” feature—which amounts to the same thing.
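
To make the mechanics concrete, here is a minimal sketch in Python of the ghost-user idea: a toy group chat whose provider controls the membership list and can add a listener without emitting the notification that honest clients would show. Everything here (the class, the method names, the silent flag) is invented for illustration; it is not any real messaging protocol.

    # Toy model of the "ghost user" idea: the provider controls the membership
    # list, so it can add a listener without triggering the usual notification.
    # All names are invented for illustration; this is not a real protocol.

    class ToyGroupChat:
        def __init__(self, members):
            self.members = set(members)   # devices that receive every message
            self.transcript = []          # what ordinary users see

        def add_member(self, who, silent=False):
            self.members.add(who)
            if not silent:
                # Honest clients (e.g., iMessage, Signal) surface this event.
                self.transcript.append(f"* {who} was added to the chat")
            # A "ghost" is added with silent=True: same capability, no notice.

        def send(self, sender, text):
            self.transcript.append(f"{sender}: {text}")
            # In a real system each member would get a copy encrypted to its
            # own key; here we simply fan the plaintext out to every member.
            return {member: text for member in self.members}

    chat = ToyGroupChat({"alice", "bob"})
    chat.add_member("ghost-agency", silent=True)  # the extra, invisible "end"
    delivered = chat.send("alice", "see you at 6")
    print(sorted(delivered))      # ['alice', 'bob', 'ghost-agency']
    print(chat.transcript)        # the transcript never mentions the ghost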

And once that’s in place, every government will try to hack it for its own purposes—just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the backdoor mechanism Google put in place to meet law-enforcement requests. In 2015, someone—we don’t know who—hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven’t been made public.

Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country’s government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?

Of course not.

Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate—but the resultant confusion caused some people to abandon the encrypted messaging service.

Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that “any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users,” this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.

In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They shift our reliance from security technologies we know how to do well—cryptography—to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it’s a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.

The foregoing discussion is a specific example of a broader discussion that we need to have, and it’s about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement—and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and—unavoidably—law enforcement as well?

This discussion is larger than the FBI’s ability to solve crimes or the NSA’s ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece’s?

I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We’d be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be by weakening security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it’s one that we would be better served by closing than by exploiting.

This essay originally appeared on Lawfare.com.

EDITED TO ADD (1/30): More commentary.

Posted on January 18, 2019 at 5:54 AM

Machine Learning to Detect Software Vulnerabilities

No one doubts that artificial intelligence (AI) and machine learning (ML) will transform cybersecurity. We just don’t know how, or when. While the literature generally focuses on the different uses of AI by attackers and defenders—and the resultant arms race between the two—I want to talk about software vulnerabilities.

All software contains bugs. The reason is basically economic: The market doesn’t want to pay for quality software. With a few exceptions, such as the space shuttle, the market prioritizes fast and cheap over good. The result is that any large modern software package contains hundreds or thousands of bugs.

Some percentage of bugs are also vulnerabilities, and a percentage of those are exploitable vulnerabilities, meaning an attacker who knows about them can attack the underlying system in some way. And some percentage of those are discovered and used. This is why your computer and smartphone software is constantly being patched; software vendors are fixing bugs that are also vulnerabilities that have been discovered and are being used.
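
As a purely illustrative back-of-the-envelope calculation, with rates that are invented rather than measured, the funnel from bugs to exploited vulnerabilities looks something like this:

    # Illustrative only: the rates below are invented to show the funnel shape,
    # not real measurements of any software package.
    bugs = 10_000                      # bugs in a hypothetical large package
    vulns = bugs * 0.05                # the fraction that are vulnerabilities
    exploitable = vulns * 0.20         # the fraction an attacker can actually use
    exploited = exploitable * 0.10     # the fraction discovered and used
    print(int(vulns), int(exploitable), int(exploited))   # 500 100 10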

Everything would be better if software vendors found and fixed all bugs during the design and development process, but, as I said, the market doesn’t reward that kind of delay and expense. AI, and machine learning in particular, has the potential to forever change this trade-off.

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic—and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.
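
As a toy illustration only, here is a sketch that treats code lines as text and trains a generic scikit-learn classifier on a handful of hand-labeled C snippets. The snippets and labels are invented, and a real ML vulnerability finder would work on far richer program representations (data flow, syntax trees, execution traces) than a bag of tokens.

    # Toy sketch: learn which code lines "look risky" from labeled examples.
    # The training data is invented; a real system would need vastly more.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    train_lines = [
        "strcpy(buf, user_input);",                      # unchecked copy
        "gets(password);",                               # unbounded read
        "sprintf(out, fmt, name);",                      # overflow risk
        "strncpy(buf, user_input, sizeof(buf) - 1);",    # bounded copy
        "fgets(line, sizeof(line), stdin);",             # bounded read
        "snprintf(out, sizeof(out), \"%s\", name);",     # bounded format
    ]
    labels = [1, 1, 1, 0, 0, 0]       # 1 = looks vulnerable, 0 = looks safe

    vectorizer = CountVectorizer(token_pattern=r"[A-Za-z_]+")
    model = LogisticRegression().fit(vectorizer.fit_transform(train_lines), labels)

    new_line = ["strcpy(dest, argv[1]);"]
    score = model.predict_proba(vectorizer.transform(new_line))[0][1]
    print(f"suspicion score: {score:.2f}")   # higher means more likely flagged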

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

But if we look far enough into the horizon, we can see a future where software vulnerabilities are a thing of the past. Then we’ll just have to worry about whatever new and more advanced attack techniques those AI systems come up with.

This essay previously appeared on SecurityIntelligence.com.

Posted on January 8, 2019 at 6:13 AM

Propaganda and the Weakening of Trust in Government

On November 4, 2016, the hacker “Guccifer 2.0,” a front for Russia’s military intelligence service, claimed in a blog post that the Democrats were likely to use vulnerabilities to hack the presidential elections. On November 9, 2018, President Donald Trump started tweeting about the senatorial elections in Florida and Arizona. Without any evidence whatsoever, he said that Democrats were trying to steal the election through “FRAUD.”

Cybersecurity experts would say that posts like Guccifer 2.0’s are intended to undermine public confidence in voting: a cyber-attack against the US democratic system. Yet Donald Trump’s actions are doing far more damage to democracy. So far, his tweets on the topic have been retweeted over 270,000 times, eroding confidence far more effectively than any foreign influence campaign.

We need new ideas to explain how public statements on the Internet can weaken American democracy. Cybersecurity today is not only about computer systems. It’s also about the ways attackers can use computer systems to manipulate and undermine public expectations about democracy. Not only do we need to rethink attacks against democracy; we also need to rethink the attackers.

This is one key reason why we wrote a new research paper that uses ideas from computer security to understand the relationship between democracy and information. These ideas help us understand attacks that destabilize confidence in democratic institutions or debate.

Our research implies that insider attacks from within American politics can be more pernicious than attacks from other countries. They are more sophisticated, employ tools that are harder to defend against, and lead to harsh political tradeoffs. The US can threaten charges or impose sanctions when Russian trolling agencies attack its democratic system. But what punishments can it use when the attacker is the US president?

People who think about cybersecurity build on ideas about confrontations between states during the Cold War. Intellectuals such as Thomas Schelling developed deterrence theory, which explained how the US and USSR could maneuver to limit each other’s options without ever actually going to war. Deterrence theory, and related concepts about the relative ease of attack and defense, seemed to explain the tradeoffs that the US and rival states faced as they started to use cyber techniques to probe and compromise each other’s information networks.

However, these ideas fail to acknowledge one key difference between the Cold War and today. Nearly all states—whether democratic or authoritarian—are entangled on the Internet. This creates both new tensions and new opportunities. The US assumed that the Internet would help spread American liberal values, and that this was a good and uncontroversial thing. Illiberal states like Russia and China feared that Internet freedom was a direct threat to their own systems of rule. Opponents of the regime might use social media and online communication to coordinate among themselves and appeal to the broader public, perhaps toppling their governments, as happened in Tunisia during the Arab Spring.

This led illiberal states to develop new domestic defenses against open information flows. As scholars like Molly Roberts have shown, states like China and Russia discovered how they could “flood” Internet discussion with online nonsense and distraction, making it impossible for their opponents to talk to each other, or even to distinguish between truth and falsehood. These flooding techniques stabilized authoritarian regimes, because they demoralized and confused the regime’s opponents. Libertarians often argue that the best antidote to bad speech is more speech. What Vladimir Putin discovered was that the best antidote to more speech was bad speech.

Russia saw the Arab Spring and efforts to encourage democracy in its neighborhood as direct threats, and began experimenting with counter-offensive techniques. When a Russia-friendly government in Ukraine collapsed due to popular protests, Russia tried to destabilize new, democratic elections by hacking the system through which the election results would be announced. The clear intention was to discredit the election results by announcing fake voting numbers that would throw public discussion into disarray.

This attack on public confidence in election results was thwarted at the last moment. Even so, it provided the model for a new kind of attack. Hackers don’t have to secretly alter people’s votes to affect elections. All they need to do is to damage public confidence that the votes were counted fairly. As researchers have argued, “simply put, the attacker might not care who wins; the losing side believing that the election was stolen from them may be equally, if not more, valuable.”

These two kinds of attacks—“flooding” attacks aimed at destabilizing public discourse, and “confidence” attacks aimed at undermining public belief in elections—were weaponized against the US in 2016. Russian social media trolls, hired by the “Internet Research Agency,” flooded online political discussions with rumors and counter-rumors in order to create confusion and political division. Peter Pomerantsev describes how in Russia, “one moment [Putin’s media wizard] Surkov would fund civic forums and human rights NGOs, the next he would quietly support nationalist movements that accuse the NGOs of being tools of the West.” Similarly, Russian trolls tried to get Black Lives Matter protesters and anti-Black Lives Matter protesters to march at the same time and place, to create conflict and the appearance of chaos. Guccifer 2.0’s blog post was surely intended to undermine confidence in the vote, preparing the ground for a wider destabilization campaign after Hillary Clinton won the election. Neither Putin nor anyone else anticipated that Trump would win, ushering in chaos on a vastly greater scale.

We do not know how successful these attacks were. A new book by John Sides, Michael Tesler, and Lynn Vavreck suggests that Russian efforts had no measurable long-term consequences. Detailed research on the flow of news articles through social media by Yochai Benkler, Robert Faris, and Hal Roberts agrees, showing that Fox News was far more influential in the spread of false news stories than any Russian effort.

However, global adversaries like the Russians aren’t the only actors who can use flooding and confidence attacks. US actors can use just the same techniques. Indeed, they can arguably use them better, since they have a better understanding of US politics, more resources, and are far more difficult for the government to counter without raising First Amendment issues.

For example, when the Federal Communications Commission asked for comments on its proposal to get rid of “net neutrality,” it was flooded by fake comments supporting the proposal. Nearly every real person who commented was in favor of net neutrality, but their arguments were drowned out by a flood of spurious comments purportedly made by identities stolen from porn sites, by people whose names and email addresses had been harvested without their permission, and, in some cases, by dead people. This was done not just to generate fake support for the FCC’s controversial proposal. It was to devalue public comments in general, making the general public’s support for net neutrality politically irrelevant. FCC decision making on issues like net neutrality used to be dominated by industry insiders, and many would like to go back to the old regime.

Trump’s efforts to undermine confidence in the Florida and Arizona votes work on a much larger scale. There are clear short-term benefits to asserting fraud where no fraud exists. This may sway judges or other public officials to make concessions to the Republicans to preserve their legitimacy. Yet these efforts also destabilize American democracy in the long term. If Republicans are convinced that Democrats win by cheating, they will feel that their own manipulations of the system (by purging voter rolls, making voting more difficult, and so on) are legitimate, and they will very probably cheat even more flagrantly in the future. This will trash collective institutions and leave everyone worse off.

It is notable that some Arizonan Republicans—including Martha McSally—have so far stayed firm against pressure from the White House and the Republican National Committee to claim that cheating is happening. They presumably see more long-term value in preserving existing institutions than in undermining them. Very plausibly, Donald Trump has exactly the opposite incentives. By weakening public confidence in the vote today, he makes it easier to claim fraud and perhaps plunge American politics into chaos if he is defeated in 2020.

If experts who see Russian flooding and confidence measures as cyberattacks on US democracy are right, then these attacks are just as dangerous—and perhaps more dangerous—when they are used by domestic actors. The risk is that over time they will destabilize American democracy so that it comes closer to Russia’s managed democracy—where nothing is real any more, and ordinary people feel a mixture of paranoia, helplessness and disgust when they think about politics. Paradoxically, Russian interference is far too ineffectual to get us there—but domestically mounted attacks by all-American political actors might.

To protect against that possibility, we need to start thinking more systematically about the relationship between democracy and information. Our paper provides one way to do this, highlighting the vulnerabilities of democracy against certain kinds of information attack. More generally, we need to build levees against flooding while shoring up public confidence in voting and other public information systems that are necessary to democracy.

The first may require radical changes in how we regulate social media companies. Modernization of government commenting platforms to make them robust against flooding is only a very minimal first step. Until very recently, companies like Twitter won market advantage from bot infestations—even when they couldn’t make a profit, it seemed that user numbers were growing. CEOs like Mark Zuckerberg have begun to worry about democracy, but their worries will likely only go so far. It is difficult to get a man to understand something when his business model depends on not understanding it. Sharp—and legally enforceable—limits on automated accounts are a first step. Radical redesign of networks and of trending indicators so that flooding attacks are less effective may be a second.

The second requires general standards for voting at the federal level, and a constitutional guarantee of the right to vote. Technical experts nearly universally favor robust voting systems that would combine paper records with random post-election auditing, to prevent fraud and secure public confidence in voting. Other steps to ensure proper ballot design and to standardize vote counting and reporting will take more time and discussion—yet the record of other countries shows that they are not impossible.

The US is nearly unique among major democracies in the persistent flaws of its election machinery. Yet voting is not the only important form of democratic information. Apparent efforts to deliberately skew the US census against counting undocumented immigrants show the need for a more general audit of the political information systems that we need if democracy is to function properly.

It’s easier to respond to Russian hackers through sanctions, counter-attacks and the like than to domestic political attacks that undermine US democracy. To preserve the basic political freedoms of democracy requires recognizing that these freedoms are sometimes going to be abused by politicians such as Donald Trump. The best that we can do is to minimize the possibilities of abuse up to the point where they encroach on basic freedoms and harden the general institutions that secure democratic information against attacks intended to undermine them.

This essay was co-authored with Henry Farrell, and previously appeared on Motherboard, with a terrible headline that I was unable to get changed.

Posted on November 27, 2018 at 7:43 AM

How Surveillance Inhibits Freedom of Expression

In my book Data and Goliath, I write about the value of privacy. I talk about how it is essential for political liberty and justice, and for commercial fairness and equality. I talk about how it increases personal freedom and individual autonomy, and how the lack of it makes us all less secure. But this is probably the most important argument as to why society as a whole must protect privacy: it allows society to progress.

We know that surveillance has a chilling effect on freedom. People change their behavior when they live their lives under surveillance. They are less likely to speak freely and act individually. They self-censor. They become conformist. This is obviously true for government surveillance, but is true for corporate surveillance as well. We simply aren’t as willing to be our individual selves when others are watching.

Let’s take an example: hearing that parents and children are being separated as they cross the US border, you want to learn more. You visit the website of an international immigrants’ rights group, a fact that is available to the government through mass Internet surveillance. You sign up for the group’s mailing list, another fact that is potentially available to the government. The group then calls or e-mails to invite you to a local meeting. Same. Your license plates can be collected as you drive to the meeting; your face can be scanned and identified as you walk into and out of the meeting. If, instead of visiting the website, you visit the group’s Facebook page, Facebook knows that you did and that feeds into its profile of you, available to advertisers and political activists alike. Ditto if you like their page, share a link with your friends, or just post about the issue.

Maybe you are an immigrant yourself, documented or not. Or maybe some of your family is. Or maybe you have friends or coworkers who are. How likely are you to get involved if you know that your interest and concern can be gathered and used by government and corporate actors? What if the issue you are interested in is pro- or anti-gun control, anti-police violence or in support of the police? Does that make a difference?

Maybe the issue doesn’t matter, and you would never be afraid to be identified and tracked based on your political or social interests. But even if you are so fearless, you probably know someone who has more to lose, and thus more to fear, from their personal, sexual, or political beliefs being exposed.

This isn’t just hypothetical. In the months and years after the 9/11 terrorist attacks, many of us censored what we spoke about on social media or what we searched on the Internet. We know from a 2013 PEN study that writers in the United States self-censored their browsing habits out of fear the government was watching. And this isn’t exclusively an American event; Internet self-censorship is prevalent across the globe, China being a prime example.

Ultimately, this fear stagnates society in two ways. The first is that the presence of surveillance means society cannot experiment with new things without fear of reprisal, and that means those experiments—if found to be inoffensive or even essential to society—cannot slowly become commonplace, moral, and then legal. If surveillance nips that process in the bud, change never happens. All social progress—from ending slavery to fighting for women’s rights—began as ideas that were, quite literally, dangerous to assert. Yet without the ability to safely develop, discuss, and eventually act on those assertions, our society would not have been able to further its democratic values in the way that it has.

Consider the decades-long fight for gay rights around the world. Within our lifetimes we have made enormous strides to combat homophobia and increase acceptance of queer folks’ right to marry. Queer relationships slowly progressed from being viewed as immoral and illegal, to being viewed as somewhat moral and tolerated, to finally being accepted as moral and legal.

In the end, it was the public nature of those activities that eventually slayed the bigoted beast, but the ability to act in private was essential in the beginning for the early experimentation, community building, and organizing.

Marijuana legalization is going through the same process: it’s currently sitting between somewhat moral, and—depending on the state or country in question—tolerated and legal. But, again, for this to have happened, someone decades ago had to try pot and realize that it wasn’t really harmful, either to themselves or to those around them. Then it had to become a counterculture, and finally a social and political movement. If pervasive surveillance had meant that those early pot smokers were arrested for doing something illegal, the movement would have been squashed before inception. Of course the story is more complicated than that, but the ability for members of society to privately smoke weed was essential for putting it on the path to legalization.

We don’t yet know which subversive ideas and illegal acts of today will become political causes and positive social change tomorrow, but they’re around. And they require privacy to germinate. Take away that privacy, and we’ll have a much harder time breaking down our inherited moral assumptions.

The second way surveillance hurts our democratic values is that it encourages society to make more things illegal. Consider the things you do—the different things each of us does—that portions of society find immoral. Not just recreational drugs and gay sex, but gambling, dancing, public displays of affection. All of us do things that are deemed immoral by some groups, but are not illegal because they don’t harm anyone. But it’s important that these things can be done out of the disapproving gaze of those who would otherwise rally against such practices.

If there is no privacy, there will be pressure to change. Some people will recognize that their morality isn’t necessarily the morality of everyone­—and that that’s okay. But others will start demanding legislative change, or using less legal and more violent means, to force others to match their idea of morality.

It’s easy to imagine the more conservative (in the small-c sense, not in the sense of the named political party) among us getting enough power to make illegal what they would otherwise be forced to witness. In this way, privacy helps protect the rights of the minority from the tyranny of the majority.

This is how we got Prohibition in the 1920s, and if we had had today’s surveillance capabilities in the 1920s, it would have been far more effectively enforced. Recipes for making your own spirits would have been much harder to distribute. Speakeasies would have been impossible to keep secret. The criminal trade in illegal alcohol would also have been more effectively suppressed. There would have been less discussion about the harms of Prohibition, less “what if we didn’t?” thinking. Political organizing might have been difficult. In that world, the law might have stuck to this day.

China serves as a cautionary tale. The country has long been a world leader in the ubiquitous surveillance of its citizens, with the goal not of crime prevention but of social control. They are about to further enhance their system, giving every citizen a “social credit” rating. The details are still unclear, but the general concept is that people will be rated based on their activities, both online and off. Their political comments, their friends and associates, and everything else will be assessed and scored. Those who are conforming, obedient, and apolitical will be given high scores. People without high scores will be denied privileges like access to certain schools and foreign travel. If the program is half as far-reaching as early reports indicate, the subsequent pressure to conform will be enormous. This social credit system is precisely the sort of surveillance designed to maintain the status quo.

For social norms to change, people need to deviate from these inherited norms. People need the space to try alternate ways of living without risking arrest or social ostracization. People need to be able to read critiques of those norms without anyone’s knowledge, discuss them without their opinions being recorded, and write about their experiences without their names attached to their words. People need to be able to do things that others find distasteful, or even immoral. The minority needs protection from the tyranny of the majority.

Privacy makes all of this possible. Privacy encourages social progress by giving the few room to experiment free from the watchful eye of the many. Even if you are not personally chilled by ubiquitous surveillance, the society you live in is, and the personal costs are unequivocal.

This essay originally appeared in McSweeney’s issue #54: “The End of Trust.” It was reprinted on Wired.com.

Posted on November 26, 2018 at 6:54 AM

Information Attacks against Democracies

Democracy is an information system.

That’s the starting place of our new paper: “Common-Knowledge Attacks on Democracy.” In it, we look at democracy through the lens of information security, trying to understand the current waves of Internet disinformation attacks. Specifically, we wanted to explain why the same disinformation campaigns that act as a stabilizing influence in Russia are destabilizing in the United States.

The answer revolves around the different ways autocracies and democracies work as information systems. We start by differentiating between two types of knowledge that societies use in their political systems. The first is common political knowledge, which is the body of information that people in a society broadly agree on. People agree on who the rulers are and what their claim to legitimacy is. People agree broadly on how their government works, even if they don’t like it. In a democracy, people agree about how elections work: how districts are created and defined, how candidates are chosen, and that their votes count­—even if only roughly and imperfectly.

We contrast this with a very different form of knowledge that we call contested political knowledge, which is, broadly, things that people in society disagree about. Examples are easy to bring to mind: how much of a role the government should play in the economy, what the tax rules should be, what sorts of regulations are beneficial and what sorts are harmful, and so on.

This seems basic, but it gets interesting when we contrast both of these forms of knowledge across autocracies and democracies. These two forms of government have incompatible needs for common and contested political knowledge.

For example, democracies draw upon the disagreements within their population to solve problems. Different political groups have different ideas of how to govern, and those groups vie for political influence by persuading voters. There is also long-term uncertainty about who will be in charge and able to set policy goals. Ideally, this is the mechanism through which a polity can harness the diversity of perspectives of its members to better solve complex policy problems. When no one knows who is going to be in charge after the next election, different parties and candidates will vie to persuade voters of the benefits of different policy proposals.

But in order for this to work, there needs to be common knowledge both of how government functions and how political leaders are chosen. There also needs to be common knowledge of who the political actors are, what they and their parties stand for, and how they clash with each other. Furthermore, this knowledge is decentralized across a wide variety of actors­—an essential element, since ordinary citizens play a significant role in political decision making.

Contrast this with an autocracy. There, common political knowledge about who is in charge over the long term and what their policy goals are is a basic condition of stability. Autocracies do not require common political knowledge about the efficacy and fairness of elections, and strive to maintain a monopoly on other forms of common political knowledge. They actively suppress common political knowledge about potential groupings within their society, their levels of popular support, and how they might form coalitions with each other. On the other hand, they benefit from contested political knowledge about nongovernmental groups and actors in society. If no one really knows which other political parties might form, what they might stand for, and what support they might get, that itself is a significant barrier to those parties ever forming.

This difference has important consequences for security. Authoritarian regimes are vulnerable to information attacks that challenge their monopoly on common political knowledge. They are vulnerable to outside information that demonstrates that the government is manipulating common political knowledge to their own benefit. And they are vulnerable to attacks that turn contested political knowledge­—uncertainty about potential adversaries of the ruling regime, their popular levels of support and their ability to form coalitions­—into common political knowledge. As such, they are vulnerable to tools that allow people to communicate and organize more easily, as well as tools that provide citizens with outside information and perspectives.

For example, before the first stirrings of the Arab Spring, the Tunisian government had extensive control over common knowledge. It required everyone to publicly support the regime, making it hard for citizens to know how many other people hated it, and it prevented potential anti-regime coalitions from organizing. However, it didn’t pay attention in time to Facebook, which allowed citizens to talk more easily about how much they detested their rulers, and, when an initial incident sparked a protest, to rapidly organize mass demonstrations against the regime. The Arab Spring faltered in many countries, but it is no surprise that countries like Russia see the Internet openness agenda as a knife at their throats.

Democracies, in contrast, are vulnerable to information attacks that turn common political knowledge into contested political knowledge. If people disagree on the results of an election, or on whether a census process is accurate, then democracy suffers. Similarly, if people lose any sense of what the other perspectives in society are, of who is real and who is not, then the debate and argument that democracy thrives on will be degraded. This seems to be Russia’s aim in its information campaigns against the US: to weaken our collective trust in the institutions and systems that hold our country together. This is also the situation that writers like Adrian Chen and Peter Pomerantsev describe in today’s Russia, where no one knows which parties or voices are genuine and which are puppets of the regime, creating general paranoia and despair.

This difference explains how the same policy measure can increase the stability of one form of regime and decrease the stability of the other. We have already seen that open information flows have benefited democracies while at the same time threatening autocracies. In our language, they transform regime-supporting contested political knowledge into regime-undermining common political knowledge. And much more recently, we have seen other uses of the same information flows undermining democracies by turning regime-supporting common political knowledge into regime-undermining contested political knowledge.

In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society.

This framework helps us understand not only how different political systems are vulnerable and how they can be attacked, but also how to bolster security in democracies. First, we need to better defend the common political knowledge that democracies need to function. That is, we need to bolster public confidence in the institutions and systems that maintain a democracy. Second, we need to make it harder for outside political groups to cooperate with inside political groups and organize disinformation attacks, through measures like transparency in political funding and spending. And finally, we need to treat attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners.

There’s a lot more in the paper.

This essay was co-authored by Henry Farrell, and previously appeared on Lawfare.com.

Posted on November 21, 2018 at 7:48 AM

New IoT Security Regulations

Due to ever-evolving technological advances, manufacturers are connecting consumer goods—from toys to light bulbs to major appliances—to the Internet at breakneck speeds. This is the Internet of Things, and it’s a security nightmare.

The Internet of Things fuses products with communications technology to make daily life more effortless. Think Amazon’s Alexa, which not only answers questions and plays music but allows you to control your home’s lights and thermostat. Or the current generation of implanted pacemakers, which can both receive commands and send information to doctors over the Internet.

But as with nearly all innovation, there are risks involved. For products born out of the Internet of Things, this means the risk of having personal information stolen or devices being overtaken and controlled remotely. For devices that affect the world in a direct physical manner—cars, pacemakers, thermostats—the risks include loss of life and property.

Developing more advanced security features and building them into these products would prevent many of these hacks. The problem is that there is no monetary incentive for companies to invest in the cybersecurity measures needed to keep their products secure. Consumers will buy products without proper security features, unaware that their information is vulnerable. And current liability laws make it hard to hold companies accountable for shoddy software security.

It falls upon lawmakers to create laws that protect consumers. While the US government is largely absent in this area of consumer protection, the state of California has recently stepped in and started regulating Internet of Things, or “IoT,” devices sold in the state—and the effects will soon be felt worldwide.

California’s new SB 327 law, which will take effect in January 2020, requires all “connected devices” to have a “reasonable security feature.” The good news is that the term “connected devices” is broadly defined to include just about everything connected to the Internet. The not-so-good news is that “reasonable security” is defined so loosely that companies trying to avoid compliance can argue that the law is unenforceable.

The legislation requires that security features must be able to protect the device and the information on it from a variety of threats and be appropriate to both the nature of the device and the information it collects. California’s attorney general will interpret the law and define the specifics, which will surely be the subject of much lobbying by tech companies.

There’s just one specific in the law that’s not subject to the attorney general’s interpretation: default passwords are not allowed. This is a good thing; they are a terrible security practice. But it’s just one of dozens of awful “security” measures commonly found in IoT devices.
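
For instance, a manufacturer can satisfy the no-default-password rule by generating a unique credential for each unit at provisioning time rather than shipping every device with the same factory login. A minimal sketch, with a hypothetical provisioning function:

    # Hypothetical provisioning step: give every unit its own random credential
    # instead of a shared factory default such as "admin"/"admin".
    import secrets
    import string

    def provision_device(serial_number: str) -> dict:
        alphabet = string.ascii_letters + string.digits
        password = "".join(secrets.choice(alphabet) for _ in range(16))
        # In practice the password would be printed on the unit's label and
        # stored (hashed) in the firmware image burned onto that one device.
        return {"serial": serial_number, "initial_password": password}

    print(provision_device("SN-000123"))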

This law is not a panacea. But we have to start somewhere, and it is a start.

Though the legislation covers only the state of California, its effects will reach much further. All of us—in the United States or elsewhere—are likely to benefit because of the way software is written and sold.

Automobile manufacturers sell their cars worldwide, but they are customized for local markets. The car you buy in the United States is different from the same model sold in Mexico, because the local environmental laws are not the same and manufacturers optimize engines based on where the product will be sold. The economics of building and selling automobiles easily allows for this differentiation.

But software is different. Once California forces minimum security standards on IoT devices, manufacturers will have to rewrite their software to comply. At that point, it won’t make sense to have two versions: one for California and another for everywhere else. It’s much easier to maintain the single, more secure version and sell it everywhere.

The European General Data Protection Regulation (GDPR), which implemented the annoying warnings and agreements that pop up on websites, is another example of a law that extends well beyond physical borders. You might have noticed an increase in websites that force you to acknowledge you’ve read and agreed to the website’s privacy policies. This is because it is tricky to differentiate between users who are subject to the protections of the GDPR—people physically in the European Union, and EU citizens wherever they are—and those who are not. It’s easier to extend the protection to everyone.

Once this kind of sorting is possible, companies will, in all likelihood, return to their profitable surveillance capitalism practices on those who are still fair game. Surveillance is still the primary business model of the Internet, and companies want to spy on us and our activities as much as they can so they can sell us more things and monetize what they know about our behavior.

Insecurity is profitable only if you can get away with it worldwide. Once you can’t, you might as well make a virtue out of necessity. So everyone will benefit from the California regulation, as they would from similar security regulations enacted in any market around the world large enough to matter, just like everyone will benefit from the portion of GDPR compliance that involves data security.

Most importantly, laws like these spur innovations in cybersecurity. Right now, we have a market failure. Because the courts have traditionally not held software manufacturers liable for vulnerabilities, and because consumers don’t have the expertise to differentiate between a secure product and an insecure one, manufacturers have prioritized low prices, getting devices out on the market quickly, and additional features over security.

But once a government steps in and imposes more stringent security regulations, companies have an incentive to meet those standards as quickly, cheaply, and effectively as possible. This means more security innovation, because now there’s a market for new ideas and new products. We’ve seen this pattern again and again in safety and security engineering, and we’ll see it with the Internet of Things as well.

IoT devices are more dangerous than our traditional computers because they sense the world around us, and affect that world in a direct physical manner. Increasing the cybersecurity of these devices is paramount, and it’s heartening to see both individual states and the European Union step in where the US federal government is abdicating responsibility. But we need more, and soon.

This essay previously appeared on CNN.com.

Posted on November 13, 2018 at 7:04 AM

Cell Phone Security and Heads of State

Earlier this week, the New York Times reported that the Russians and the Chinese were eavesdropping on President Donald Trump’s personal cell phone and using the information gleaned to better influence his behavior. This should surprise no one. Security experts have been talking about the potential security vulnerabilities in Trump’s cell phone use since he became president. And President Barack Obama bristled at—but acquiesced to—the security rules prohibiting him from using a “regular” cell phone throughout his presidency.

Three broader questions obviously emerge from the story. Who else is listening in on Trump’s cell phone calls? What about the cell phones of other world leaders and senior government officials? And—most personal of all—what about my cell phone calls?

There are two basic places to eavesdrop on pretty much any communications system: at the end points and during transmission. This means that a cell phone attacker can either compromise one of the two phones or eavesdrop on the cellular network. Both approaches have their benefits and drawbacks. The NSA seems to prefer bulk eavesdropping on the planet’s major communications links and then picking out individuals of interest. In 2016, WikiLeaks published a series of classified documents listing “target selectors”: phone numbers the NSA searches for and records. These included senior government officials of Germany (among them Chancellor Angela Merkel), France, Japan, and other countries.

Other countries don’t have the same worldwide reach that the NSA has, and must use other methods to intercept cell phone calls. We don’t know details of which countries do what, but we know a lot about the vulnerabilities. Insecurities in the phone network itself are so easily exploited that 60 Minutes eavesdropped on a US congressman’s phone live on camera in 2016. Back in 2005, unknown attackers targeted the cell phones of many Greek politicians by hacking the country’s phone network and turning on an already-installed eavesdropping capability. The NSA even implanted eavesdropping capabilities in networking equipment destined for the Syrian Telephone Company.

Alternatively, an attacker could intercept the radio signals between a cell phone and a tower. Encryption ranges from very weak to possibly strong, depending on which flavor the system uses. Don’t think the attacker has to put his eavesdropping antenna on the White House lawn; the Russian Embassy is close enough.

The other way to eavesdrop on a cell phone is by hacking the phone itself. This is the technique favored by countries with less sophisticated intelligence capabilities. In 2017, the public-interest forensics group Citizen Lab uncovered an extensive eavesdropping campaign against Mexican lawyers, journalists, and opposition politicians—presumably run by the government. Just last month, the same group found eavesdropping capabilities in products from the Israeli cyberweapons manufacturer NSO Group operating in Algeria, Bangladesh, Greece, India, Kazakhstan, Latvia, South Africa—45 countries in all.

These attacks generally involve downloading malware onto a smartphone that then records calls, text messages, and other user activities, and forwards them to some central controller. Here, it matters which phone is being targeted. iPhones are harder to hack, which is reflected in the prices companies pay for new exploit capabilities. In 2016, the vulnerability broker Zerodium offered $1.5 million for an unknown iOS exploit and only $200,000 for a similar Android exploit. Earlier this year, a new Dubai start-up announced even higher prices. These vulnerabilities are resold to governments and cyberweapons manufacturers.

Some of the price difference is due to the ways the two operating systems are designed and used. Apple has much more control over the software on an iPhone than Google does on an Android phone. Also, Android phones are generally designed, built, and sold by third parties, which means they are much less likely to get timely security updates. This is changing. Google now has its own phone—Pixel—that gets security updates quickly and regularly, and Google is now trying to pressure Android-phone manufacturers to update their phones more regularly. (President Trump reportedly uses an iPhone.)

Another way to hack a cell phone is to install a backdoor during the design process. This is a real fear; earlier this year, US intelligence officials warned that phones made by the Chinese companies ZTE and Huawei might be compromised by that government, and the Pentagon ordered stores on military bases to stop selling them. This is why China’s recommendation that, if Trump wanted security, he should use a Huawei phone was an amusing bit of trolling.

Given the wealth of insecurities and the array of eavesdropping techniques, it’s safe to say that lots of countries are spying on the phones of both foreign officials and their own citizens. Many of these techniques are within the capabilities of criminal groups, terrorist organizations, and hackers. If I were guessing, I’d say that the major international powers like China and Russia are using the more passive interception techniques to spy on Trump, and that the smaller countries are too scared of getting caught to try to plant malware on his phone.

It’s safe to say that President Trump is not the only one being targeted; so are members of Congress, judges, and other senior officials—especially because no one is trying to tell any of them to stop using their cell phones (although cell phones still are not allowed on either the House or the Senate floor).

As for the rest of us, it depends on how interesting we are. It’s easy to imagine a criminal group eavesdropping on a CEO’s phone to gain an advantage in the stock market, or a country doing the same thing for an advantage in a trade negotiation. We’ve seen governments use these tools against dissidents, reporters, and other political enemies. The Chinese and Russian governments are already targeting the US power grid; it makes sense for them to target the phones of those in charge of that grid.

Unfortunately, there’s not much you can do to improve the security of your cell phone. Unlike computer networks, for which you can buy antivirus software, network firewalls, and the like, your phone is largely controlled by others. You’re at the mercy of the company that makes your phone, the company that provides your cellular service, and the communications protocols developed when none of this was a problem. If one of those companies doesn’t want to bother with security, you’re vulnerable.

This is why the current debate about phone privacy, with the FBI on one side wanting the ability to eavesdrop on communications and unlock devices, and users on the other side wanting secure devices, is so important. Yes, there are security benefits to the FBI being able to use this information to help solve crimes, but there are far greater benefits to the phones and networks being so secure that all the potential eavesdroppers—including the FBI—can’t access them. We can give law enforcement other forensics tools, but we must keep foreign governments, criminal groups, terrorists, and everyone else out of everyone’s phones. The president may be taking heat for his love of his insecure phone, but each of us is using just as insecure a phone. And for a surprising number of us, making those phones more private is a matter of national security.

This essay previously appeared in the Atlantic.

EDITED TO ADD: Steven Bellovin and Susan Landau have a good essay on the same topic, as does Wired. Slashdot post.

Posted on October 30, 2018 at 6:38 AM

Quantum Computing and Cryptography

Quantum computing is a new way of computing—one that could allow humankind to perform computations that are simply impossible using today’s computing technologies. It allows for very fast searching, something that would break some of the encryption algorithms we use today. And it allows us to easily factor large numbers, something that would break the RSA cryptosystem for any key length.

This is why cryptographers are hard at work designing and analyzing “quantum-resistant” public-key algorithms. Currently, quantum computing is too nascent for cryptographers to be sure of what is secure and what isn’t. But even assuming aliens have developed the technology to its full potential, quantum computing doesn’t spell the end of the world for cryptography. Symmetric cryptography is easy to make quantum-resistant, and we’re working on quantum-resistant public-key algorithms. If public-key cryptography ends up being a temporary anomaly based on our mathematical knowledge and computational ability, we’ll still survive. And if some inconceivable alien technology can break all of cryptography, we still can have secrecy based on information theory—albeit with significant loss of capability.

At its core, cryptography relies on the mathematical quirk that some things are easier to do than to undo. Just as it’s easier to smash a plate than to glue all the pieces back together, it’s much easier to multiply two prime numbers together to obtain one large number than it is to factor that large number back into two prime numbers. Asymmetries of this kind—one-way functions and trap-door one-way functions—underlie all of cryptography.
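
To make that asymmetry concrete, here is a minimal sketch in Python (my own illustration, using deliberately tiny primes rather than the hundreds-of-digits numbers real systems use): the forward direction is a single multiplication, while even naive factoring by trial division takes tens of thousands of steps.

```python
# Illustration of the one-way asymmetry: multiplying is one step, while
# factoring (here by naive trial division) takes ~50,000 steps even for
# these tiny primes.
p, q = 104_723, 104_729        # two small primes, purely illustrative
n = p * q                      # the easy direction: one multiplication

def factor(n: int):
    """The hard direction: recover the primes by trial division."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(n)          # computed instantly
print(factor(n))  # (104723, 104729), after tens of thousands of divisions
```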

To encrypt a message, we combine it with a key to form ciphertext. Without the key, reversing the process is more difficult. Not just a little more difficult, but astronomically more difficult. Modern encryption algorithms are so fast that they can secure your entire hard drive without any noticeable slowdown, but that encryption can’t be broken before the heat death of the universe.

With symmetric cryptography—the kind used to encrypt messages, files, and drives—that imbalance is exponential, and is amplified as the keys get larger. Adding one bit of key increases the complexity of encryption by less than a percent (I’m hand-waving here) but doubles the cost to break. So a 256-bit key might seem only twice as complex as a 128-bit key, but (with our current knowledge of mathematics) it’s 340,282,366,920,938,463,463,374,607,431,768,211,456 times harder to break.
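
The arithmetic behind that number is easy to check directly; a quick sketch:

```python
# Each additional key bit doubles the number of keys a brute-force attacker
# must try, so going from a 128-bit key to a 256-bit key multiplies the
# attack cost by 2**128.
cost_ratio = 2 ** (256 - 128)
print(cost_ratio)   # 340282366920938463463374607431768211456
```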

Public-key encryption (used primarily for key exchange) and digital signatures are more complicated. Because they rely on hard mathematical problems like factoring, there are more potential tricks to reverse them. So you’ll see key lengths of 2,048 bits for RSA, and 384 bits for algorithms based on elliptic curves. Here again, though, the costs to reverse the algorithms with these key lengths are beyond the current reach of humankind.

This one-wayness is based on our mathematical knowledge. When you hear about a cryptographer “breaking” an algorithm, what happened is that they’ve found a new trick that makes reversing easier. Cryptographers discover new tricks all the time, which is why we tend to use key lengths that are longer than strictly necessary. This is true for both symmetric and public-key algorithms; we’re trying to future-proof them.

Quantum computers promise to upend a lot of this. Because of the way they work, they excel at the sorts of computations necessary to reverse these one-way functions. For symmetric cryptography, this isn’t too bad. Grover’s algorithm shows that a quantum computer speeds up these attacks to effectively halve the key length. This would mean that a 256-bit key is as strong against a quantum computer as a 128-bit key is against a conventional computer; both are secure for the foreseeable future.

For public-key cryptography, the results are more dire. Shor’s algorithm can easily break all of the commonly used public-key algorithms based on both factoring and the discrete logarithm problem. Doubling the key length increases the difficulty to break by a factor of eight. That’s not enough of a sustainable edge.
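
As a rough sketch of those two effects (assuming Grover’s quadratic speedup, and a running time for Shor’s algorithm that grows roughly with the cube of the key length, which is one way to arrive at the factor of eight):

```python
# Back-of-the-envelope comparison of the two quantum attacks described above.

def grover_effective_bits(key_bits: int) -> int:
    # Grover's quadratic speedup: searching 2**k keys costs about 2**(k/2).
    return key_bits // 2

def shor_cost_ratio(old_bits: int, new_bits: int) -> float:
    # Assumed roughly cubic scaling of Shor's algorithm in the key length.
    return (new_bits / old_bits) ** 3

print(grover_effective_bits(256))   # 128 -- still comfortably secure
print(shor_cost_ratio(2048, 4096))  # 8.0 -- not a sustainable edge
```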

There are a lot of caveats to those two paragraphs, the biggest of which is that quantum computers capable of doing anything like this don’t currently exist, and no one knows when—or even if—we’ll be able to build one. We also don’t know what sorts of practical difficulties will arise when we try to implement Grover’s or Shor’s algorithms for anything but toy key sizes. (Error correction on a quantum computer could easily be an insurmountable problem.) On the other hand, we don’t know what other techniques will be discovered once people start working with actual quantum computers. My bet is that we will overcome the engineering challenges, and that there will be many advances and new techniques—but they’re going to take time to discover and invent. Just as it took decades for us to get supercomputers in our pockets, it will take decades to work through all the engineering problems necessary to build large-enough quantum computers.

In the short term, cryptographers are putting considerable effort into designing and analyzing quantum-resistant algorithms, and those are likely to remain secure for decades. This is a necessarily slow process, as both good cryptanalysis and the transition to new standards take time. Luckily, we have time. Practical quantum computing seems to always remain “ten years in the future,” which means no one has any idea.

After that, though, there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover’s algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It’s possible that quantum computers will someday break all of them, even those that today are quantum resistant.

If that happens, we will face a world without strong public-key cryptography. That would be a huge blow to security and would break a lot of stuff we currently do, but we could adapt. In the 1980s, Kerberos was an all-symmetric authentication and encryption system. More recently, the GSM cellular standard does both authentication and key distribution—at scale—with only symmetric cryptography. Yes, those systems have centralized points of trust and failure, but it’s possible to design other systems that use both secret splitting and secret sharing to minimize that risk. (Imagine that a pair of communicants get a piece of their session key from each of five different key servers.) The ubiquity of communications also makes things easier today. We can use out-of-band protocols where, for example, your phone helps you create a key for your computer. We can use in-person registration for added security, maybe at the store where you buy your smartphone or initialize your Internet service. Advances in hardware may also help to secure keys in this world. I’m not trying to design anything here, only to point out that there are many design possibilities. We know that cryptography is all about trust, and we have a lot more techniques to manage trust than we did in the early years of the Internet. Some important properties like forward secrecy will be blunted and far more complex, but as long as symmetric cryptography still works, we’ll still have security.
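
As one way to picture the five-key-server idea (a sketch only, not a concrete protocol), simple XOR secret splitting is enough: each server hands the communicating pair an independent random share, the session key is the XOR of all five, and no single server, or even four of them together, learns anything about the key.

```python
import secrets

NUM_SERVERS = 5
KEY_BYTES = 32   # 256-bit session key

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Each "server" contributes one independent random share.
shares = [secrets.token_bytes(KEY_BYTES) for _ in range(NUM_SERVERS)]

# Both parties combine all the shares they received into the session key.
session_key = bytes(KEY_BYTES)   # start from all zeros
for share in shares:
    session_key = xor_bytes(session_key, share)

print(session_key.hex())
```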

It’s a weird future. Maybe the whole idea of number theory-based encryption, which is what our modern public-key systems are, is a temporary detour based on our incomplete model of computing. Now that our model has expanded to include quantum computing, we might end up back where we were in the late 1970s and early 1980s: symmetric cryptography, code-based cryptography, Merkle hash signatures. That would be both amusing and ironic.

Yes, I know that quantum key distribution is a potential replacement for public-key cryptography. But come on—does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications? The future is mobile, always-on, embedded computing devices. Any security for those will necessarily be software only.

There’s one more future scenario to consider, one that doesn’t require a quantum computer. While there are several mathematical theories that underpin the one-wayness we use in cryptography, proving the validity of those theories is in fact one of the great open problems in computer science. Just as it is possible for a smart cryptographer to find a new trick that makes it easier to break a particular algorithm, we might imagine aliens with sufficient mathematical theory to break all encryption algorithms. To us, today, this is ridiculous. Public-key cryptography is all number theory, and potentially vulnerable to more mathematically inclined aliens. Symmetric cryptography is so much nonlinear muddle, so easy to make more complex, and so easy to increase key length, that this future is unimaginable. Consider an AES variant with a 512-bit block and key size, and 128 rounds. Unless mathematics is fundamentally different than our current understanding, that’ll be secure until computers are made of something other than matter and occupy something other than space.

But if the unimaginable happens, that would leave us with cryptography based solely on information theory: one-time pads and their variants. This would be a huge blow to security. One-time pads might be theoretically secure, but in practical terms they are unusable for anything other than specialized niche applications. Today, only crackpots try to build general-use systems based on one-time pads—and cryptographers laugh at them, because they replace algorithm design problems (easy) with key management and physical security problems (much, much harder). In our alien-ridden science-fiction future, we might have nothing else.
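
A toy example makes the trade-off concrete: the encryption itself is a trivial XOR, but the pad has to be truly random, as long as the message, shared in advance over some secure channel, and never reused. That is the key-management and physical-security problem.

```python
import secrets

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # one pad byte per message byte

ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered  = bytes(c ^ p for c, p in zip(ciphertext, pad))

assert recovered == message
print(ciphertext.hex())   # information-theoretically secure -- provided the
                          # pad is never reused and never leaks
```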

Against these godlike aliens, cryptography will be the only technology we can be sure of. Our nukes might refuse to detonate and our fighter jets might fall out of the sky, but we will still be able to communicate securely using one-time pads. There’s an optimism in that.

This essay originally appeared in IEEE Security and Privacy.

Posted on September 14, 2018 at 6:15 AM

Don't Fear the TSA Cutting Airport Security. Be Glad That They're Talking about It.

Last week, CNN reported that the Transportation Security Administration is considering eliminating security at U.S. airports that fly only smaller planes—60 seats or fewer. Passengers connecting to larger planes would clear security at their destinations.

To be clear, the TSA has put forth no concrete proposal. The internal agency working group’s report obtained by CNN contains no recommendations. It’s nothing more than 20 people examining the potential security risks of the policy change. It’s not even new: The TSA considered this back in 2011, and the agency reviews its security policies every year. But commentary around the news has been strongly negative. Regardless of the idea’s merit, it will almost certainly not happen. That’s the result of politics, not security: Sen. Charles E. Schumer (D-N.Y.), one of numerous outraged lawmakers, has already penned a letter to the agency saying that “TSA documents proposing to scrap critical passenger security screenings, without so much as a metal detector in place in some airports, would effectively clear the runway for potential terrorist attacks.” He continued, “It simply boggles the mind to even think that the TSA has plans like this on paper in the first place.”

We don’t know enough to conclude whether this is a good idea, but it shouldn’t be dismissed out of hand. We need to evaluate airport security based on concrete costs and benefits, and not continue to implement security theater based on fear. And we should applaud the agency’s willingness to explore changes in the screening process.

There is already a tiered system for airport security, varying for both airports and passengers. Many people are enrolled in TSA PreCheck, allowing them to go through checkpoints faster and with less screening. Smaller airports don’t have modern screening equipment like full-body scanners or CT baggage screeners, making it impossible for them to detect some plastic explosives. Any would-be terrorist is already able to pick and choose his flight conditions to suit his plot.

Over the years, I have written many essays critical of the TSA and of airport security in general. Most of it is security theater—measures that make us feel safer without improving security. For example, the liquids ban makes no sense as implemented, because there’s no penalty for repeatedly trying to evade the scanners. The full-body scanners are terrible at detecting the explosive material PETN if it is well concealed, even though detecting it is their whole point.

There are two basic kinds of terrorists. The amateurs will be deterred or detected by even basic security measures. The professionals will figure out how to evade even the most stringent measures. I’ve repeatedly said that the two things that have made flying safer since 9/11 are reinforcing the cockpit doors and persuading passengers that they need to fight back. Everything beyond that isn’t worth it.

It’s always possible to increase security by adding more onerous—and expensive—procedures. If that were the only concern, we would all be strip-searched and prohibited from traveling with luggage. Realistically, we need to analyze whether the increased security of any measure is worth the cost, in money, time and convenience. We spend $8 billion a year on the TSA, and we’d like to get the most security possible for that money.

This is exactly what that TSA working group was doing. CNN reported that the group specifically evaluated the costs and benefits of eliminating security at minor airports, saving $115 million a year with a “small (nonzero) undesirable increase in risk related to additional adversary opportunity.” That money could be used to bolster security at larger airports or to reduce threats totally removed from airports.

We need more of this kind of thinking, not less. In 2017, political scientists Mark Stewart and John Mueller published a detailed evaluation of airport security measures based on the cost to implement and the benefit in terms of lives saved. They concluded that most of what our government does either isn’t effective at preventing terrorism or is simply too expensive to justify the security it does provide. Others might disagree with their conclusions, but their analysis provides enough detailed information to have a meaningful argument.

The more we politicize security, the worse off we are. People are generally terrible judges of risk. We fear threats in the news out of proportion with the actual dangers. We overestimate rare and spectacular risks, and underestimate commonplace ones. We fear specific “movie-plot threats” that we can bring to mind. That’s why we fear flying over driving, even though the latter kills about 35,000 people each year—about a 9/11’s worth of deaths each month. And it’s why the idea of the TSA eliminating security at minor airports fills us with fear. We can imagine the plot unfolding, only without Bruce Willis saving the day.

Very little today is immune to politics, including the TSA. Politics drove most of the agency’s decisions in the early years after the 9/11 terrorist attacks. That the TSA is willing to consider politically unpopular ideas is a credit to the organization. Let’s let them perform their analyses in peace.

This essay originally appeared in the Washington Post.

Posted on August 10, 2018 at 6:10 AM

Russian Censorship of Telegram

Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today’s Internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the Internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors’ demands, how long can Internet freedom last?

The recent battle between the Russian government and the Telegram messaging app illustrates one way this might play out. Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender’s phone and decrypted on the receiver’s phone; no part of the network can eavesdrop on the messages.

Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn’t a fixed website, it doesn’t need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The user never sees the change or the entire list of IP addresses, and the censor has no clear way to block them all.
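
A hypothetical client-side sketch of that rotation tactic (not Telegram’s actual protocol; the addresses below are placeholders from a documentation range) might look something like this:

```python
import socket

CANDIDATE_ENDPOINTS = [
    ("198.51.100.10", 443),   # placeholder addresses, not real servers
    ("198.51.100.11", 443),
    ("198.51.100.12", 443),
]

def connect_with_rotation(endpoints, timeout=3.0):
    """Silently try the next address whenever the current one is blocked."""
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue   # blocked or unreachable: rotate to the next address
    raise ConnectionError("all candidate endpoints blocked")
```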

A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.

More recently, Russia has been pressuring Apple not to offer the Telegram app in its iPhone App Store. As of this writing, Apple has not complied, and the company has allowed Telegram to deliver a critical software update to iPhone users (after what the app’s founder called a delay last month). Roskomnadzor could further pressure Apple, though, including by threatening to turn off its entire iPhone app business in Russia.

Telegram might seem a weird app for Russia to focus on. Those of us who work in security don’t recommend the program, primarily because of the nature of its cryptographic protocols. In general, proprietary cryptography has numerous fatal security flaws. We generally recommend Signal for secure SMS messaging, or, if having that program on your computer is somehow incriminating, WhatsApp. (More than 1.5 billion people worldwide use WhatsApp.) What Telegram has going for it is that it works really well on lousy networks. That’s why it is so popular in places like Iran and Afghanistan. (Iran is also trying to ban the app.)

What the Russian government doesn’t like about Telegram is its anonymous broadcast feature—channel capability and chats—which makes it an effective platform for political debate and citizen journalism. The Russians might not like that Telegram is encrypted, but odds are good that they can simply break the encryption. Telegram’s role in facilitating uncontrolled journalism is the real issue.

Iran’s attempts to block Telegram have been more successful than Russia’s, not because Iran’s censorship technology is more sophisticated but because Telegram is not willing to go as far to defend Iranian users. The reasons are not rooted in business decisions. Simply put, Telegram is a Russian product and the designers are more motivated to poke Russia in the eye. Pavel Durov, Telegram’s founder, has pledged millions of dollars to help fight Russian censorship.

For the moment, Russia has lost. But this battle is far from over. Russia could easily come back with more targeted pressure on Google, Amazon and Apple. A year earlier, Zello used the same trick Telegram is using to evade Russian censors. Then, Roskomnadzor threatened to block all of Amazon Web Services and Google Cloud; and in that instance, both companies forced Zello to stop its IP-hopping censorship-evasion tactic.

Russia could also further develop its censorship infrastructure. If its capabilities were as finely honed as China’s, it would be able to more effectively block Telegram from operating. Right now, Russia can block only specific IP addresses, which is too coarse a tool for this issue. Telegram’s voice capabilities in Russia are significantly degraded, however, probably because high-capacity IP addresses are easier to block.

Whatever its current frustrations, Russia might well win in the long term. By demonstrating its willingness to suffer the temporary collateral damage of blocking major cloud providers, it prompted cloud providers to block another and more effective anti-censorship tactic, or at least accelerated the process. In April, Google and Amazon banned—and technically blocked—the practice of “domain fronting,” a trick anti-censorship tools use to get around Internet censors by pretending to be other kinds of traffic. Developers would use popular websites as a proxy, routing traffic to their own servers through another website—in this case Google.com—to fool censors into believing the traffic was intended for Google.com. The anonymous web-browsing tool Tor has used domain fronting since 2014. Signal, since 2016. Eliminating the capability is a boon to censors worldwide.
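
Schematically, a fronted request looked something like the sketch below (an illustration only: the hidden hostname is hypothetical, and the major cloud platforms stopped routing such requests in 2018). The TLS connection, and the domain name visible to the censor, goes to the front domain; the Host header inside the encrypted tunnel names the real destination hosted on the same platform.

```python
import requests

FRONT_DOMAIN = "https://www.google.com/"          # what the censor sees
HIDDEN_BACKEND = "blocked-service.example.com"    # hypothetical real target

# The censor observes a TLS connection to www.google.com; the Host header,
# carried inside the encrypted tunnel, asks the platform to route the
# request to the hidden backend instead.
response = requests.get(FRONT_DOMAIN, headers={"Host": HIDDEN_BACKEND})
print(response.status_code)
```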

Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that Internet freedom is increasingly in the hands of the world’s largest Internet companies. And while freedom may have its advocates—the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban—actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.

In 1993, John Gilmore famously said that “The Internet interprets censorship as damage and routes around it.” That was technically true when he said it but only because the routing structure of the Internet was so distributed. As centralization increases, the Internet loses that robustness, and censorship by governments and companies becomes easier.

This essay previously appeared on Lawfare.com.

Posted on June 13, 2018 at 6:55 AM
