Friday Squid Blogging: Squid-Shaped Pancakes
Here are pictures of squid-shaped pancakes.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
SEC Consult has published an advisory warning people not to use a government eavesdropping product called Recording eXpress, sold by the Israeli company Nice Systems. Basically, attackers can completely compromise the system. There are good stories on this by Brian Krebs and Dan Goodin.
I have no idea what’s going on with TrueCrypt. There’s a good summary of the story at ArsTechnica, and Slashdot, Hacker News, and Reddit all have long comment threads. See also Brian Krebs and Cory Doctorow.
Speculations include a massive hack of the TrueCrypt developers, some Lavabit-like forced shutdown, and an internal power struggle within TrueCrypt. I suppose we’ll have to wait and see what develops.
Ross Anderson has an important new paper on the economics that drive government-on-population bulk surveillance. It is well worth reading, and is based on a series of talks he gave last fall:
My first big point is that all the three factors which lead to monopoly – network effects, low marginal costs and technical lock-in – are present and growing in the national-intelligence nexus itself. The Snowden papers show that neutrals like Sweden and India are heavily involved in information sharing with the NSA, even though they have tried for years to pretend otherwise. A non-aligned country such as India used to be happy to buy warplanes from Russia; nowadays it still does, but it shares intelligence with the NSA rather than the FSB. If you have a choice of joining a big spy network like America’s or a small one like Russia’s then it’s like choosing whether to write software for the PC or the Mac back in the 1990s. It may be partly an ideological choice, but the economics can often be stronger than the ideology.
Second, modern warfare, like the software industry, has seen the bulk of its costs turn from variable costs into fixed costs. In medieval times, warfare was almost entirely a matter of manpower, and society was organised appropriately; as well as rent or produce, tenants owed their feudal lord forty days’ service in peacetime, and sixty days during a war. Barons held their land from the king in return for an oath of fealty, and a duty to provide a certain size of force on demand; priests and scholars paid a tax in lieu of service, so that a mercenary could be hired in their place. But advancing technology brought steady industrialisation. When the UK and the USA attacked Germany in 1944, we did not send millions of men to Europe, as in the first world war, but a combat force of a couple of hundred thousand troops – though with thousands of tanks and backed by larger numbers of men in support roles in tens of thousands of aircraft and ships. Nowadays the transition from labour to capital has gone still further: to kill a foreign leader, we could get a drone to fire a missile that costs $30,000. But that’s backed by colossal investment – the firms whose data are tapped by PRISM have a combined market capitalisation of over $1 trillion.
Third is the technical lock-in, which operates at a number of levels. First, there are lock-in effects in the underlying industries, where (for example) Cisco dominates the router market: those countries that have tried to build US-free information infrastructures (China) or even just government information infrastructures (Russia, Germany) find it’s expensive. China went to the trouble of sponsoring an indigenous vendor, Huawei, but it’s unclear how much separation that buys them because of the common code shared by router vendors: a vulnerability discovered in one firm’s products may affect another. Thus the UK government lets BT buy Huawei routers for all but its network’s most sensitive parts (the backbone and the lawful-intercept functions). Second, technical lock-in affects the equipment used by the intelligence agencies themselves, and is in fact promoted by the agencies via ETSI standards for functions such as lawful intercept.
Just as these three factors led to the IBM network dominating the mainframe age, the Intel/Microsoft network dominating the PC age, and Facebook dominating the social networking scene, so they push strongly towards global surveillance becoming a single connected ecosystem.
These are important considerations when trying to design national policies around surveillance.
Ross’s blog post.
Del Campo, a restaurant in Washington DC, has a Bloody Mary made with squid ink.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Biologist Peter Watts makes some good points:
Mammals don’t respond well to surveillance. We consider it a threat. It makes us paranoid, and aggressive and vengeful.
[…]
“Natural selection favors the paranoid,” Watts said. Those who run away. In the earliest days of man on the savannah, when we roamed among the predatory, wild animals, someone realized pretty quickly that lions stalked their prey from behind the tall, untamed grass. And so anyone hoping to keep on breathing developed a healthy fear of the lions in the grass and listened for the rustling in the brush in order to avoid becoming lunch for an animal more powerful than themselves. It was instinct. If the rustling, the perceived surveillance, turns out to just be the wind? Well, no harm done.
“For a very long time, people who don’t see agency have a disproportionate tendency to get eaten,” Watts noted.
And so, we’ve developed those protective instincts. “We see faces in the clouds; we hear ghosts and monsters in the stairs at night,” Watts said. “The link between surveillance and fear is a lot deeper than the average privacy advocate is willing to admit.”
[…]
“A lot of critics say blanket surveillance treats us like criminals, but it’s deeper than that,” he said. “It makes us feel like prey. We’re seeing stalking behavior in the illogical sense,” he said.
This is interesting. People accept government surveillance out of fear: fear of the terrorists, fear of the criminals. If Watts is right, then there’s a conflict of fears. Because the fear of terrorists and criminals—kidnappers, child pornographers, drug dealers, whatever—is more evocative than the nebulous fear of being stalked, it wins.
EDITED TO ADD (5/23): His own post is better than the write-up.
EDITED TO ADD (5/24): Peter Watts has responded to this post, complaining about the misquotes in the article I quoted. He will post a transcript of his talk, so we can see what he actually said. My guess is that I will still agree with it.
He also recommended this post of his, which is well worth reading.
EDITED TO ADD (5/27): Here is the transcript.
There’s a debate going on about whether the US government—specifically, the NSA and United States Cyber Command—should stockpile Internet vulnerabilities or disclose and fix them. It’s a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.
A software vulnerability is a programming mistake that allows an adversary access to the affected system. Heartbleed is a recent example, but hundreds are discovered every year.
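To make the idea concrete, here is a minimal, hypothetical sketch in the spirit of Heartbleed, not the actual OpenSSL code: the server trusts a client-supplied length field, and the missing bounds check lets a request read memory it was never meant to see.

```python
# Hypothetical sketch of a Heartbleed-style over-read (not the actual
# OpenSSL code): the server trusts a client-supplied length field and
# echoes back adjacent "memory" it should never expose.
SERVER_MEMORY = bytearray(b"PING" + b"secret-session-key-0123456789")

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    SERVER_MEMORY[:len(payload)] = payload
    # BUG: no check that claimed_len <= len(payload)
    return bytes(SERVER_MEMORY[:claimed_len])   # over-read leaks the secret

def heartbeat_patched(payload: bytes, claimed_len: int) -> bytes:
    if claimed_len > len(payload):              # the fix: a bounds check
        raise ValueError("length mismatch")
    return payload[:claimed_len]

print(heartbeat(b"PING", 30))   # leaks b'PINGsecret-session-key-...'
```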
Unpublished vulnerabilities are called “zero-day” vulnerabilities, and they’re very valuable because no one is protected. Someone with one of those can attack systems world-wide with impunity.
When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn’t make the vulnerability go away, but most users protect themselves by patching their systems regularly.
Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn’t even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software’s vendor finds out—the timing depends on how extensively the vulnerability is used—and issues a patch to close the vulnerability.
If an offensive military cyber unit—or a cyber-weapons arms manufacturer—discovers the vulnerability, it keeps it secret for use in delivering a cyber-weapon. If it is used stealthily, it might remain secret for a long time. If unused, it’ll remain secret until someone else discovers it.
Discoverers can sell vulnerabilities. There’s a rich market in zero-days for attack purposes—both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to incent defense, but the amounts are much lower.
The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability—or of the vulnerability becoming public and criminals starting to use it.
There is no way to simultaneously defend US networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, “every offensive weapon is a (potential) chink in our defense—and vice versa.”
To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won’t have any cyber-weapons to use against other countries.
Many people have weighed in on this debate. The president’s Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30) that vulnerabilities should only be hoarded in rare instances and for short times. Cory Doctorow calls it a public health problem. I have said similar things. Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.
It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.
If vulnerabilities are sparse, then it’s obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.
If vulnerabilities are plentiful—and this seems to be true—the ones the US finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won’t make it appreciably harder for criminals to find the next one. We don’t really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.
But while vulnerabilities are plentiful, they’re not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when one person finds a vulnerability, it is likely that someone else soon will find, or has recently found, the same one. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.
The NSA, and by extension US Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, “nobody but us.” The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some—we don’t know how many—vulnerabilities that “nobody but us” could find for attack purposes.
This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don’t know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.
Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside the Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that we are not hoarding a small number, but more like 100 or more.
There’s one more interesting wrinkle. Cyber-weapons are a combination of a payload—the damage the weapon does—and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won’t know about. But if it doesn’t, it’s deliberately leaving the US vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we’re nowhere near that point today.
The implications of US policy can be felt on a variety of levels. The NSA’s actions have resulted in a widespread mistrust of the security of US Internet products and services, greatly affecting American business. If we show that we’re putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.
An unpatched vulnerability puts everyone at risk, but not to the same degree. The US and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable—North Korea much less—so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn’t disarmament; it’s making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.
Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that’s what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.
We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has resulted in software vendors being able to find and close entire classes of vulnerabilities. Although there are many cases of these security analysis tools not being used, all of our security is improved when they are. That alone is a good reason to continue disclosing vulnerability details, and something the NSA can do to vastly improve the security of the Internet worldwide. Here again, though, the NSA would have to make the tools it uses to automatically find vulnerabilities available for defense, not attack.
In today’s cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world’s militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.
This essay previously appeared on TheAtlantic.com.
I am regularly asked what is the most surprising thing about the Snowden NSA documents. It’s this: the NSA is not made of magic. Its tools are no different from what we have in our world; it’s just better funded. X-KEYSCORE is Bro plus memory. FOXACID is Metasploit with a budget. QUANTUM is AirPwn with a seriously privileged position on the backbone. The NSA breaks crypto not with super-secret cryptanalysis, but by using standard hacking tricks such as exploiting weak implementations and default keys. Its TAO implants are straightforward enhancements of attack tools developed by researchers, academics, and hackers; here’s a computer the size of a grain of rice, if you want to make your own such tools. The NSA’s collection and analysis tools are basically what you’d expect if you thought about it for a while.
That, fundamentally, is surprising. If you gave a super-secret Internet exploitation organization $10 billion annually, you’d expect some magic. And my guess is that there is some, around the edges, that has not become public yet. But that we haven’t seen any yet is cause for optimism.
New paper: “Your Secret Stingray’s No Secret Anymore: The Vanishing Government Monopoly Over Cell Phone Surveillance and its Impact on National Security and Consumer Privacy,” by Christopher Soghoian and Stephanie K. Pell:
Abstract: In the early 1990s, off-the-shelf radio scanners allowed any snoop or criminal to eavesdrop on the calls of nearby cell phone users. These radio scanners could intercept calls due to a significant security vulnerability inherent in then widely used analog cellular phone networks: calls were not encrypted as they traveled over the air. In response to this problem, Congress, rather than exploring options for improving the security of cellular networks, merely outlawed the sale of new radio scanners capable of intercepting cellular signals, which did nothing to prevent the potential use of millions of existing interception-capable radio scanners. Now, nearly two decades after Congress passed legislation intended to protect analog phones from interception by radio scanners, we are rapidly approaching a future with a widespread interception threat to cellular communications very reminiscent of the one scanner posed in the 1990s, but with a much larger range of public and private actors with access to a much more powerful cellular interception technology that exploits security vulnerabilities in our digital cellular networks.
This Article illustrates how cellular interception capabilities and technology have become, for better or worse, globalized and democratized, placing Americans’ cellular communications at risk of interception from foreign governments, criminals, the tabloid press and virtually anyone else with sufficient motive to capture cellular content in transmission. Notwithstanding this risk, US government agencies continue to treat practically everything about this cellular interception technology as a closely guarded, necessarily secret “source and method,” shrouding the technical capabilities and limitations of the equipment from public discussion, even keeping its very name from public disclosure. This “source and method” argument, although questionable in its efficacy, is invoked to protect law enforcement agencies’ own use of this technology while allegedly preventing criminal suspects from learning how to evade surveillance.
This Article argues that current policy makers should not follow the worn path of attempting to outlaw technology while ignoring, and thus perpetuating, the significant vulnerabilities in cellular communications networks on which it depends. Moreover, lawmakers must resist the reflexive temptation to elevate the sustainability of a particular surveillance technology over the need to curtail the general threat that technology poses to the security of cellular networks. Instead, with regard to this destabilizing, unmediated technology and its increasing general availability at decreasing prices, Congress and appropriate regulators should address these network vulnerabilities directly and thoroughly as part of the larger cyber security policy debates and solutions now under consideration. This Article concludes by offering the beginnings of a way forward for legislators to address digital cellular network vulnerabilities with a new sense of urgency appropriate to the current communications security environment.
Interesting research paper on a bank card chip-and-PIN vulnerability. From the blog post:
Our new paper shows that it is possible to create clone chip cards which normal bank procedures will not be able to distinguish from the real card.
When a Chip and PIN transaction is performed, the terminal requests that the card produces an authentication code for the transaction. Part of this transaction is a number that is supposed to be random, so as to stop an authentication code being generated in advance. However, there are two ways in which the protection can be bypassed: the first requires that the Chip and PIN terminal has a poorly designed random number generation (which we have observed in the wild); the second requires that the Chip and PIN terminal or its communications back to the bank can be tampered with (which again, we have observed in the wild).
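To see why that random number matters, here is a toy sketch of the pre-play idea (my own illustration, not the EMV protocol details from the paper): if the terminal's "unpredictable number" is in fact predictable, someone with brief access to a card can harvest authentication codes ahead of time and replay them later without the card present.

```python
# Toy illustration of the pre-play idea (not the EMV protocol): if the
# terminal's "unpredictable number" is predictable, authentication codes
# can be harvested in advance and replayed without the real card.
import hmac, hashlib

CARD_KEY = b"key-known-only-to-the-card-and-bank"

def card_auth_code(unpredictable_number: int, amount: int) -> bytes:
    msg = f"{unpredictable_number}:{amount}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).digest()[:8]

def weak_terminal_number(counter: int) -> int:
    return (counter * 17) % 1000   # a counter in disguise, not randomness

# With brief access to the card, the attacker precomputes codes for the
# terminal numbers that are coming up next.
harvested = {weak_terminal_number(c): card_auth_code(weak_terminal_number(c), 5000)
             for c in range(100, 110)}

# Later, the terminal issues its "unpredictable" number and the attacker
# replays the matching code -- no card needed.
n = weak_terminal_number(105)
assert harvested[n] == card_auth_code(n, 5000)
print("replayed authentication code accepted for number", n)
```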
At Eurocrypt this year, researchers presented a paper that completely breaks the discrete log problem in any field with a small characteristic. It’s nice work, and builds on a bunch of advances in this direction over the last several years. Despite headlines to the contrary, this does not have any cryptanalytic application—unless they can generalize the result, which seems unlikely to me.
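For readers unfamiliar with the underlying problem, this is what "solving the discrete log" means, shown in a toy prime field (my own illustration; the new algorithm applies to fields of small characteristic such as GF(2^n), and real cryptographic parameters are astronomically larger):

```python
# Toy discrete log: recover the secret exponent x from g^x mod p.
# Feasible here only because the field is tiny.
p, g = 1019, 2                      # tiny prime modulus and generator
secret_x = 777                      # the private exponent
h = pow(g, secret_x, p)             # the public value g^x mod p

# Brute-force search for x.
recovered = next(x for x in range(1, p) if pow(g, x, p) == h)
print(recovered)                    # 777
```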
New IETF RFC: “RFC 7258: Pervasive Monitoring Is an Attack.” It declares pervasive monitoring an attack that protocol designers must mitigate.
Slashdot thread.
EDITED TO ADD (6/7): Hacker News thread.
This is a pretty horrible story of a small-town mayor abusing his authority—warrants where there is no crime, police raids, an incidental marijuana bust—to identify and shut down a Twitter parody account. The ACLU is taking the case.
Rare fossilized cephalopods.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
This article from Communications of the ACM outlines some of the security measures the NSA could, and should, have had in place to stop someone like Snowden. Mostly obvious stuff, although I’m not sure it would have been effective against such a skilled and tenacious leaker. What’s missing is the one thing that would have worked: have fewer secrets.
About 0.2% of all SSL certificates are forged. This is the first time I’ve ever seen a number based on real data. News article:
Of 3.45 million real-world connections made to Facebook servers using the transport layer security (TLS) or secure sockets layer protocols, 6,845, or about 0.2 percent of them, were established using forged certificates.
Actual paper.
EDITED TO ADD (6/13): I mis-characterized the study. It really says that 0.2% of HTTPS traffic to Facebook is intercepted and re-signed, and the vast majority of that interception and re-signing happens either on the user’s local computer (by way of trusted security software acting as a scanning proxy) or locally on a private network behind a corporation’s intercepting proxy/firewall. Only a small percentage of intercepted traffic is a result of malware or other nefarious activity.
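The basic detection idea behind measurements like this can be sketched simply (a hypothetical illustration, not the paper's Flash-based method; the pinned fingerprint below is a placeholder): compare the certificate the client actually receives against a known-good fingerprint, and treat a mismatch as evidence that the connection was intercepted and re-signed.

```python
# Hypothetical sketch of forged-certificate detection: compare the
# certificate the client actually receives with a pinned fingerprint.
# EXPECTED_SHA256 is a placeholder, not Facebook's real value.
import hashlib, socket, ssl

EXPECTED_SHA256 = "placeholder-fingerprint-of-the-genuine-certificate"

def presented_cert_fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # we only want the raw certificate
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

fingerprint = presented_cert_fingerprint("www.facebook.com")
print("possible interception:", fingerprint != EXPECTED_SHA256)
```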
Symantec declared anti-virus dead, and Brian Krebs writes a good response.
He’s right: antivirus won’t protect you from the ever-increasing percentage of malware that’s specifically designed to bypass antivirus software, but it will protect you from all the random unsophisticated attacks out there: the “background radiation” of the Internet.
On April 1, I announced the Seventh Movie Plot Threat Contest:
The NSA has won, but how did it do it? How did it use its ability to conduct ubiquitous surveillance, its massive data centers, and its advanced data analytics capabilities to come out on top? Did it take over the world overtly, or is it just pulling the strings behind everyone’s backs? Did it have to force companies to build surveillance into their products, or could it just piggy-back on market trends? How does it deal with liberal democracies and ruthless totalitarian dictatorships at the same time? Is it blackmailing Congress? How does the money flow? What’s the story?
Submissions are in, and here are the semifinalists.
Cast your vote by number; voting closes at the end of the month.
According to NSA documents published in Glenn Greenwald’s new book No Place to Hide, we now know that the NSA spies on embassies and missions all over the world, including those of Brazil, Bulgaria, Colombia, the European Union, France, Georgia, Greece, India, Italy, Japan, Mexico, Slovakia, South Africa, South Korea, Taiwan, Venezuela and Vietnam.
This will certainly strain international relations, as happened when it was revealed that the U.S. is eavesdropping on German Chancellor Angela Merkel’s cell phone—but is anyone really surprised? Spying on foreign governments is what the NSA is supposed to do. Much more problematic, and dangerous, is that the NSA is spying on entire populations. It’s a mistake to have the same laws and organizations involved with both activities, and it’s time we separated the two.
The former is espionage: the traditional mission of the NSA. It’s an important military mission, both in peacetime and wartime, and something that’s not going to go away. It’s targeted. It’s focused. Decisions of whom to target are decisions of foreign policy. And secrecy is paramount.
The latter is very different. Terrorists are a different type of enemy; they’re individual actors instead of state governments. We know who foreign government officials are and where they’re located: in government offices in their home countries, and embassies abroad. Terrorists could be anyone, anywhere in the world. To find them, the NSA has to look for individual bad actors swimming in a sea of innocent people. This is why the NSA turned to broad surveillance of populations, both in the U.S. and internationally.
If you think about it, this is much more of a law enforcement sort of activity than a military activity. Both involve security, but just as the NSA’s traditional focus was governments, the FBI’s traditional focus was individuals. Before and after 9/11, both the NSA and the FBI were involved in counterterrorism. The FBI did work in the U.S. and abroad. After 9/11, the primary mission of counterterrorist surveillance was given to the NSA because it had existing capabilities, but the decision could have gone the other way.
Because the NSA got the mission, both the military norms and the legal framework from the espionage world carried over. Our surveillance efforts against entire populations were kept as secret as our espionage efforts against governments. And we modified our laws accordingly. The 1978 Foreign Intelligence Surveillance Act (FISA) that regulated NSA surveillance required targets to be “agents of a foreign power.” When the law was amended in 2008 under the FISA Amendments Act, a target could be any foreigner anywhere.
Government-on-government espionage is as old as governments themselves, and is the proper purview of the military. So let the Commander in Chief make the determination on whose cell phones to eavesdrop on, and let the NSA carry those orders out.
Surveillance is a large-scale activity, potentially affecting billions of people, and different rules have to apply – the rules of the police. Any organization doing such surveillance should apply the police norms of probable cause, due process, and oversight to population surveillance activities. It should make its activities much less secret and more transparent. It should be accountable in open courts. This is how we, and the rest of the world, regain trust in the US’s actions.
In January, President Obama gave a speech on the NSA where he said two very important things. He said that the NSA would no longer spy on Angela Merkel’s cell phone. And while he didn’t extend that courtesy to the other 82 million citizens of Germany, he did say that he would extend some of the U.S.’s constitutional protections against warrantless surveillance to the rest of the world.
Breaking up the NSA by separating espionage from surveillance, and putting the latter under a law enforcement regime instead of a military regime, is a step toward achieving that.
This essay originally appeared on CNN.com.
The Web intelligence company Recorded Future is reporting—picked up by the Wall Street Journal—that al Qaeda is using new encryption software in the wake of the Snowden stories. I’ve been fielding press queries, asking me how this will adversely affect US intelligence efforts.
I think the reverse is true. I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool are slight. Last fall, Matt Blaze said to me that he thought the Snowden documents would usher in a new dark age of cryptography, as people abandon good algorithms and software for snake oil of their own devising. My guess is that this is an example of that.
New television show—CSI: Cyber. I hope they have some good technical advisers, but I doubt they do.
Glenn Greenwald’s new book, No Place to Hide, was published today. There are about 100 pages of NSA documents on the book’s website. I haven’t gone through them yet. At a quick glance, only a few of them have been published before.
EDITED TO ADD (5/13): It’s surprising how large the FBI’s role in all of this is. On page 81, we see that they’re the point of contact for BLARNEY. (BLARNEY is a decades-old AT&T data collection program.) And page 28 shows that the ESCU—that’s the FBI’s Electronic Communications Surveillance Unit—is point on all the important domestic collection and interaction with companies. When companies deny that they work with the NSA, it’s likely that they’re working with the FBI and not realizing that it’s the NSA that’s getting all the data they’re providing.
Clever, but make sure to heed the caveats in the final two paragraphs.
In addition to turning the Internet into a worldwide surveillance platform, the NSA has surreptitiously weakened the products, protocols, and standards we all use to protect ourselves. By doing so, it has destroyed the trust that underlies the Internet. We need that trust back.
Trust is inherently social. It is personal, relative, situational, and fluid. It is not uniquely human, but it is the underpinning of everything we have accomplished as a species. We trust other people, but we also trust organizations and processes. The psychology is complex, but when we trust a technology, we basically believe that it will work as intended.
This is how we technologists trusted the security of the Internet. We didn’t have any illusions that the Internet was secure, or that governments, criminals, hackers, and others couldn’t break into systems and networks if they were sufficiently skilled and motivated. We didn’t trust that the programmers were perfect, that the code was bug-free, or even that our crypto math was unbreakable. We knew that Internet security was an arms race, and the attackers had most of the advantages.
What we trusted was that the technologies would stand or fall on their own merits.
We now know that trust was misplaced. Through cooperation, bribery, threats, and compulsion, the NSA—and the United Kingdom’s GCHQ—forced companies to weaken the security of their products and services, then lie about it to their customers.
We know of a few examples of this weakening. The NSA convinced Microsoft to make some unknown changes to Skype in order to make eavesdropping on conversations easier. The NSA also inserted a degraded random number generator into a common standard, then worked to get that generator used more widely.
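A minimal sketch of why a weakened generator is so damaging (my own illustration, not the construction in question, which is widely reported to be Dual_EC_DRBG): if the generator's internal state can be recovered from its output, every future "random" value, and every key derived from it, is predictable.

```python
# Toy illustration (not the Dual_EC_DRBG math): a generator whose output
# reveals its internal state hands every future "random" key to anyone
# who observes a single value.
def weak_rng(state: int):
    while True:
        state = (1103515245 * state + 12345) % 2**31   # linear update
        yield state                                    # output IS the state

victim = weak_rng(987654321)
observed = next(victim)        # attacker sees one "random" nonce on the wire
future_key = next(victim)      # victim's next output becomes a session key

attacker = weak_rng(observed)  # clone the generator from the observed value
print(next(attacker) == future_key)   # True: the key was never secret
```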
I have heard engineers working for the NSA, FBI, and other government agencies delicately talk around the topic of inserting a “backdoor” into security products to allow for government access. One of them told me, “It’s like going on a date. Sex is never explicitly mentioned, but you know it’s on the table.” The NSA’s SIGINT Enabling Project has a $250 million annual budget; presumably it has more to show for itself than the fragments that have become public. Reed Hundt calls for the government to support a secure Internet, but given its history of installing backdoors, why would we trust claims that it has turned the page?
We also have to assume that other countries have been doing the same things. We have long believed that networking products from the Chinese company Huawei have been backdoored by the Chinese government. Do we trust hardware and software from Russia? France? Israel? Anywhere?
This mistrust is poison. Because we don’t know, we can’t trust any of them. Internet governance was largely left to the benign dictatorship of the United States because everyone more or less believed that we were working for the security of the Internet instead of against it. But now that system is in turmoil. Foreign companies are fleeing US suppliers because they don’t trust American firms’ security claims. Far worse, governments are using these revelations to push for a more isolationist Internet, giving them more control over what their citizens see and say.
All so we could eavesdrop better.
There is a term in the NSA: “nobus,” short for “nobody but us.” The NSA believes it can subvert security in such a way that only it can take advantage of that subversion. But that is hubris. There is no way to determine if or when someone else will discover a vulnerability. These subverted systems become part of our infrastructure; the harms to everyone, once the flaws are discovered, far outweigh the benefits to the NSA while they are secret.
We can’t both weaken the enemy’s networks and protect our own. Because we all use the same products, technologies, protocols, and standards, we either allow everyone to spy on everyone, or prevent anyone from spying on anyone. By weakening security, we are weakening it against all attackers. By inserting vulnerabilities, we are making everyone vulnerable. The same vulnerabilities used by intelligence agencies to spy on each other are used by criminals to steal your passwords. It is surveillance versus security, and we all rise and fall together.
Security needs to win. The Internet is too important to the world—and trust is too important to the Internet—to squander it like this. We’ll never get every power in the world to agree not to subvert the parts of the Internet they control, but we can stop subverting the parts we control. Most of the high-tech companies that make the Internet work are US companies, so our influence is disproportionate. And once we stop subverting, we can credibly devote our resources to detecting and preventing subversion by others.
This essay previously appeared in the Boston Review.
A new study shows that Doryteuthis pealei in pain—or whatever passes for pain in that species—has heightened sensory sensitivity and heightened reactions.
The idea, although this is a major extrapolation at this point, is that pain is a security mechanism. It helps us compensate for our injured—i.e. weakened—state.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
This is not good news.
Widely known as the “bloggers law,” the new Russian measure specifies that any site with more than 3,000 visitors daily will be considered a media outlet akin to a newspaper and be responsible for the accuracy of the information published.
Besides registering, bloggers can no longer remain anonymous online, and organizations that provide platforms for their work such as search engines, social networks and other forums must maintain computer records on Russian soil of everything posted over the previous six months.
Interesting experiment shows that the retelling of stories increases conflict and bias.
For their study, which featured 196 undergraduates, the researchers created a narrative about a dispute between two groups of young people. It described four specific points of tension, but left purposely ambiguous the issue of which party was the aggressor, and “depicted the groups as equally blameworthy.”
Half of the participants read a version of the story in which the two hostile groups were from two Maryland cities. The other half read a version in which one group was from the city of Gaithersburg, but the other was identified as “your friends.”
Participants were assigned a position between one and four. Those in the first position read the initial version of the story, and then “re-told” it in their own words by writing their version of the events. This was passed on to the person in the second position, who did the same.
The procedure was repeated until all four people had created their own versions of the story. Each new version was then examined for subtle shifts in emphasis, blame, and wording.
The results: Each “partisan communicator”—that is, each student who wrote about the incident involving his or her “friends”—”contributed small distortions that, when accumulated, produced a highly biased, inaccurate representation of the original dispute,” the researchers write.
Standard disclaimer—that American undergraduates might not be the best representatives of our species—applies. But the results are not surprising. We tend to play up the us vs. them narrative when we tell stories. The result is particularly interesting in light of the echo chamber that Internet-based politics has become.
The actual paper is behind a paywall.
Al Jazeera is reporting on leaked emails (not leaked by Snowden, but by someone else) detailing close ties between the NSA and Google. There are no smoking guns in the correspondence—and the Al Jazeera article makes more of the e-mails than I think is there—but it does show a closer relationship than either side has admitted to before.
EDITED TO ADD (5/7): The correspondence was not leaked. It was obtained via a FOIA request.
Mathias Döpfner writes an open letter explaining why he fears Google:
We know of no alternative which could offer even partially comparable technological prerequisites for the automated marketing of advertising. And we cannot afford to give up this source of revenue because we desperately need the money for technological investments in the future. Which is why other publishers are increasingly doing the same. We also know of no alternative search engine which could maintain or increase our online reach. A large proportion of high quality journalistic media receives its traffic primarily via Google. In other areas, especially of a non-journalistic nature, customers find their way to suppliers almost exclusively through Google. This means, in plain language, that we and many others are dependent on Google. At the moment Google has a 91.2 percent search-engine market share in Germany. In this case, the statement “if you don’t like Google, you can remove yourself from their listings and go elsewhere” is about as realistic as recommending to an opponent of nuclear power that he just stop using electricity. He simply cannot do this in real life unless he wants to join the Amish.
Interesting article on the business of selling enhancements that allow you to cheat in online video games.
Someone has finally proven how squid fly:
How do these squid go from swimming to flying? Four phases of flight are described in the research: launching, jetting, gliding and diving.
While swimming, the squid open up their mantle and draw in water. Then these squid launch themselves into the air with a high-powered blast of the water from their bodies. Once launched by this jet propulsion, these squid spread out both their fins and their tentacles to form wings. The squid have a membrane between their tentacles similar to the webbed toes of a frog. This helps them use their tentacles as a wing and create aerodynamic lift so they can glide similar to a well-made paper airplane.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Rats have destroyed dozens of electronic voting machines by eating the cables. It would have been a better story if the rats had zeroed out the machines after the votes had been cast but before they were counted, but it seems that they just ate the machines while they were in storage.
The EVMs had been stored in a pre-designated strong room that was located near a wholesale wheat market, where the rats had apparently made their home.
There’s a general thread running through security where high-tech replacements for low-tech systems have new and unexpected failures.
EDITED TO ADD (5/14): This article says it was only a potential threat, and one being addressed.
Detailed response and analysis of the inspectors general report on the Boston Marathon bombings:
Two opposite mistakes in an after-the-fact review of a terrorist incident are equally damaging. One is to fail to recognize the powerful difference between foresight and hindsight in evaluating how an investigative or intelligence agency should have behaved. After the fact, we know on whom we should have focused attention as a suspect, and we know what we should have protected as a target. With foresight alone, we know neither of these critically important clues to what happened and why. With hindsight, we can focus all of our attention narrowly; with foresight, we have to spread it broadly, as broadly as the imagination of our attackers may roam.
The second mistake is equally important. It is to confuse the fact that people in official positions, like others, will inevitably make mistakes in carrying out any complicated task, with the idea that no mistakes were really made. We can see mistakes with hindsight that can be avoided in the future by recognizing them clearly and designing solutions. After mistakes are made, nothing is more foolish than to hide them or pretend that they were not mistakes.
Comedian John Oliver interviewed now-retired NSA director General Keith Alexander. It’s truly weird.
Interesting article on the cybersecurity branch of the Federal Reserve System.