June 15, 2014
by Bruce Schneier
CTO, Co3 Systems, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1406.html>. These same essays and news items appear in the "Schneier on Security" blog at <http://www.schneier.com/blog>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Disclosing vs. Hoarding Vulnerabilities
- The NSA is Not Made of Magic
- GCHQ Intercept Sites in Oman
- Chinese Hacking of the US
- The Human Side of Heartbleed
- Schneier News
- Security and Human Behavior (SHB 2014)
- Seventh Movie-Plot Threat Contest Winner
There's a debate going on about whether the US government -- specifically, the NSA and United States Cyber Command -- should stockpile Internet vulnerabilities or disclose and fix them. It's a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.
A software vulnerability is a programming mistake that allows an adversary access to the system running that software. Heartbleed is a recent example, but hundreds are discovered every year.
Unpublished vulnerabilities are called "zero-day" vulnerabilities, and they're very valuable because no one is protected. Someone with one of those can attack systems world-wide with impunity.
When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn't make the vulnerability go away, but most users protect themselves by patching their systems regularly.
Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn't even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software's vendor finds out -- the timing depends on how extensively the vulnerability is used -- and issues a patch to close the vulnerability.
If an offensive military cyber unit -- or a cyber-weapons arms manufacturer -- discovers the vulnerability, it keeps the vulnerability secret for use in delivering a cyber-weapon. If the weapon is used stealthily, the vulnerability might remain secret for a long time. If unused, it'll remain secret until someone else discovers it.
Discoverers can sell vulnerabilities. There's a rich market in zero-days for attack purposes -- both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to incentivize defense, but the amounts are much lower.
The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability -- or of the vulnerability becoming public and criminals starting to use it.
There is no way to simultaneously defend US networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, "every offensive weapon is a (potential) chink in our defense -- and vice versa."
To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won't have any cyber-weapons to use against other countries.
Many people have weighed in on this debate. The president's Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30) that vulnerabilities should only be hoarded in rare instances and for short times. Cory Doctorow calls it a public health problem. I have said similar things. Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.
It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.
If vulnerabilities are sparse, then it's obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.
If vulnerabilities are plentiful -- and this seems to be true -- the ones the US finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won't make it appreciably harder for criminals to find the next one. We don't really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.
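The sparse-vs-plentiful distinction can be made concrete with a toy model (the numbers here are entirely hypothetical): two parties each independently discover a fixed number of vulnerabilities drawn at random from a common pool. When the pool is small, their finds overlap heavily, so patching what we find also disarms much of what they find; when the pool is huge, the two sets are almost disjoint and our patches barely touch their arsenal.

```python
import random

def shared_finds(pool_size: int, finds_each: int, seed: int = 0) -> int:
    """How many vulnerabilities two independent parties both discover,
    if each finds `finds_each` distinct ones from a pool of `pool_size`."""
    rng = random.Random(seed)
    us = set(rng.sample(range(pool_size), finds_each))
    them = set(rng.sample(range(pool_size), finds_each))
    return len(us & them)

# Sparse: 25 finds each from a pool of 50 -- heavy overlap.
print(shared_finds(pool_size=50, finds_each=25))

# Plentiful: 25 finds each from a pool of 100,000 -- essentially none.
print(shared_finds(pool_size=100_000, finds_each=25))
```

Real discovery isn't uniform, of course -- easy-to-find bugs get found first -- so actual overlap is higher than this uniform model suggests.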
But while vulnerabilities are plentiful, they're not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when a person finds a vulnerability, it is likely that another person soon will find, or recently has found, the same vulnerability. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.
The NSA, and by extension US Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, "nobody but us." The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some -- we don't know how many -- vulnerabilities that "nobody but us" could find for attack purposes.
This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don't know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.
Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside the Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that we are not hoarding a small number, but more like 100 or more.
There's one more interesting wrinkle. Cyber-weapons are a combination of a payload -- the damage the weapon does -- and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won't know about. But if it doesn't, it's deliberately leaving the US vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we're nowhere near that point today.
The implications of US policy can be felt on a variety of levels. The NSA's actions have resulted in a widespread mistrust of the security of US Internet products and services, greatly affecting American business. If we show that we're putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.
An unpatched vulnerability puts everyone at risk, but not to the same degree. The US and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable -- North Korea much less -- so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn't disarmament; it's making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.
Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that's what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.
We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has resulted in software vendors being able to find and close entire classes of vulnerabilities. Although there are many cases of these security analysis tools not being used, all of our security is improved when they are. That alone is a good reason to continue disclosing vulnerability details, and something the NSA can do to vastly improve the security of the Internet worldwide. Here again, though, the NSA would have to make its automated vulnerability-finding tools available for defense, not attack.
In today's cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world's militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.
This essay previously appeared on TheAtlantic.com.
I am regularly asked what is the most surprising thing about the Snowden NSA documents. It's this: the NSA is not made of magic. Its tools are no different from what we have in our world; it's just better-funded. X-KEYSCORE is Bro plus memory. FOXACID is Metasploit with a budget. QUANTUM is AirPwn with a seriously privileged position on the backbone. The NSA breaks crypto not with super-secret cryptanalysis, but by using standard hacking tricks such as exploiting weak implementations and default keys. Its TAO implants are straightforward enhancements of attack tools developed by researchers, academics, and hackers; you can buy a computer the size of a grain of rice, if you want to make your own such tools. The NSA's collection and analysis tools are basically what you'd expect if you thought about it for a while.
That, fundamentally, is surprising. If you gave a super-secret Internet exploitation organization $10 billion annually, you'd expect some magic. And my guess is that there is some, around the edges, that has not become public yet. But that we haven't seen any yet is cause for optimism.
Last June, the "Guardian" published a story about GCHQ tapping fiber-optic Internet cables around the globe, part of a program codenamed TEMPORA. One of the facts not reported in that story -- and supposedly the fact that the "Guardian" agreed to withhold in exchange for not being prosecuted by the UK authorities -- was the location of the access points in the Middle East.
On Tuesday, the "Register" disclosed that they are in Oman:
The secret British spy base is part of a programme codenamed "CIRCUIT" and also referred to as Overseas Processing Centre 1 (OPC-1). It is located at Seeb, on the northern coast of Oman, where it taps in to various undersea cables passing through the Strait of Hormuz into the Persian/Arabian Gulf. Seeb is one of a three-site GCHQ network in Oman, at locations codenamed "TIMPANI", "GUITAR" and "CLARINET". TIMPANI, near the Strait of Hormuz, can monitor Iraqi communications. CLARINET, in the south of Oman, is strategically close to Yemen.
Access is provided through secret agreements with BT and Vodafone:
British national telco BT, referred to within GCHQ and the American NSA under the ultra-classified codename "REMEDY", and Vodafone Cable (which owns the former Cable & Wireless company, aka "GERONTIC") are the two top earners of secret GCHQ payments running into tens of millions of pounds annually.
There's no source document associated with the story, but it does seem to be accurate. Glenn Greenwald comments:
"Snowden has no source relationship with Duncan (who is a great journalist), and never provided documents to him directly or indirectly, as Snowden has made clear," Greenwald said in an email. "I can engage in informed speculation about how Duncan got this document - it's certainly a document that several people in the Guardian UK possessed -- but how he got it is something only he can answer."
The reporter is staying mum on his source:
When Wired.co.uk asked Duncan Campbell -- the investigative journalist behind the "Register" article revealing the Oman location -- if he too had copies proving the allegations, he responded: "I won't answer that question -- given the conduct of the authorities."
"I was able to look at some of the material provided in Britain to the "Guardian" by Edward Snowden last year," Campbell, who is a forensic expert witness on communications data, tells us.
Campbell also published this on the NSA.
This article from "Communications of the ACM" outlines some of the security measures the NSA could, and should, have had in place to stop someone like Snowden. Mostly obvious stuff, although I'm not sure it would have been effective against such a skilled and tenacious leaker. What's missing is the one thing that would have worked: have fewer secrets.
This is a pretty horrible story of a small-city mayor abusing his authority -- warrants where there is no crime, police raids, incidental marijuana bust -- to identify and shut down a Twitter parody account. The ACLU is taking the case.
New IETF RFC: "RFC 7258: Pervasive Monitoring Is an Attack," which declares that pervasive monitoring is an attack that protocol designers must mitigate.
At Eurocrypt this year, researchers presented a paper that completely breaks the discrete log problem in any field with a small characteristic. It's nice work, and builds on a bunch of advances in this direction over the last several years. Despite headlines to the contrary, this does not have any cryptanalytic application -- unless they can generalize the result, which seems unlikely to me.
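For context (the asymptotics below are taken from that line of research, sketched here in rough form): the result is a heuristic quasi-polynomial algorithm for discrete logarithms in small-characteristic finite fields.

```latex
% The discrete log problem in the multiplicative group of F_{q^n}:
\[
  \text{given } g, h \in \mathbb{F}_{q^n}^{\times}, \quad
  \text{find } x \text{ such that } g^x = h .
\]
% For small characteristic (e.g., q = 2 or 3), the new algorithm is
% heuristically quasi-polynomial:
\[
  \max(q, n)^{O(\log n)} ,
\]
% versus earlier L(1/4)-type subexponential bounds. Deployed systems
% such as Diffie-Hellman and DSA use prime fields of large
% characteristic, which the technique does not reach -- hence the lack
% of direct cryptanalytic impact.
```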
New paper: "Your Secret Stingray's No Secret Anymore: The Vanishing Government Monopoly Over Cell Phone Surveillance and its Impact on National Security and Consumer Privacy," by Christopher Soghoian and Stephanie K. Pell.
Biologist Peter Watts makes some good points on the harms of surveillance. His basic point is that mammals consider it a threat, and that it makes us feel like prey.
This is interesting. People accept government surveillance out of fear: fear of the terrorists, fear of the criminals. If Watts is right, then there's a conflict of fears. Because the fear of terrorists and criminals -- kidnappers, child pornographers, drug dealers, whatever -- is more evocative than the nebulous fear of being stalked, it wins.
This, also by Peter Watts, is well worth reading.
Ross Anderson has an important new paper on the economics that drive government-on-population bulk surveillance. He talks about network effects, high fixed costs and low marginal costs, and high switching costs (lock-in).
Eben Moglen's essay on surveillance is well worth reading.
It's based on a series of talks he gave last fall.
SEC Consult has published an advisory warning people not to use a government eavesdropping product called Recording eXpress, sold by the Israeli company Nice Systems. Basically, attackers can completely compromise the system.
Security risks from smart toilets and smart televisions.
iOS 8 is randomizing MAC addresses. This seems like a good idea.
Feedly, Evernote, and Deezer have been the victims of DDoS attacks -- accompanied by blackmail in the case of Feedly.
First-person experience of censorship in China.
"Erotica Written By Someone With An Appropriate Sense of Privacy"
Chinese hacking of American computer networks is old news. For years we've known about their attacks against U.S. government and corporate targets. We've seen detailed reports of how they hacked the "New York Times." Google has detected them going after Gmail accounts of dissidents. They've built sophisticated worldwide eavesdropping networks. These hacks target both military secrets and corporate intellectual property. They're perpetrated by a combination of state, state-sponsored and state-tolerated hackers. It's been going on for years.
On Monday, the Justice Department indicted five Chinese hackers in absentia, all associated with the Chinese military, for stealing corporate secrets from U.S. energy, metals and manufacturing companies. It's entirely for show; the odds that the Chinese are going to send these people to the U.S. to stand trial are zero. But it does move what had been mostly a technical security problem into the world of diplomacy and foreign policy. By doing this, the U.S. government is taking a very public stand and saying "enough."
The problem with that stand is that we've been doing much the same thing to China. Documents revealed by the whistleblower Edward Snowden show that the NSA has penetrated Chinese government and commercial networks, and is exfiltrating -- that's NSA talk for stealing -- an enormous amount of secret data. We've hacked the networking hardware of one of their own companies, Huawei. We've intercepted networking equipment being sent there and installed monitoring devices. We've been listening in on their private communications channels.
The only difference between the U.S. and China's actions is that the U.S. doesn't engage in direct industrial espionage. That is, we don't steal secrets from Chinese companies and pass them directly to U.S. competitors. But we do engage in economic espionage; we steal secrets from Chinese companies for an advantage in government trade negotiations, which directly benefits U.S. competitors. We might think this difference is important, but other countries are not as impressed with our nuance.
Already the Chinese are retaliating against the U.S. actions with rhetoric of their own. I don't know the Chinese expression for 'pot calling the kettle black,' but it certainly fits in this case.
Again, none of this is new. The U.S. and the Chinese have been conducting electronic espionage on each other throughout the Cold War, and there's no reason to think it's going to change anytime soon. What's different now is the ease with which the two countries can do this safely and remotely, over the Internet, as well as the massive amount of information that can be stolen with a few computer commands.
On the Internet today, it is much easier to attack systems and break into them than it is to defend those systems against attack, so the advantage is to the attacker. This is true for a combination of reasons: the ability of an attacker to concentrate his attack, the nature of vulnerabilities in computer systems, poor software quality and the enormous complexity of computer systems.
The computer security industry is used to coping with criminal attacks. In general, such attacks are untargeted. Criminals might have broken into Target's network last year and stolen 40 million credit and debit card numbers, but they would have been happy with any retailer's large credit card database. If Target's security had been better than its competitors', the criminals would have gone elsewhere. In this way, security is relative.
The Chinese attacks are different. For whatever reason, the government hackers wanted certain information inside the networks of Alcoa World Alumina, Westinghouse Electric, Allegheny Technologies, U.S. Steel, United Steelworkers Union and SolarWorld. It wouldn't have mattered how those companies' security compared with other companies; all that mattered was whether it was better than the ability of the attackers.
This is a fundamentally different security model -- often called APT or Advanced Persistent Threat -- and one that is much more difficult to defend against.
In a sense, American corporations are collateral damage in this battle of espionage between the U.S. and China. Taking the battle from the technical sphere into the foreign policy sphere might be a good idea, but it will work only if we have some moral high ground from which to demand that others not spy on us. As long as we run the largest surveillance network in the world and hack computer networks in foreign countries, we're going to have trouble convincing others not to attempt the same on us.
This essay previously appeared on Time.com.
The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.
It was a software insecurity, but the problem was entirely human.
Software has vulnerabilities because it's written by people, and people make mistakes -- thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.
In retrospect, the mistake should have been obvious, and it's amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.
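The bug itself was a classic missing bounds check. Sketched here in Python as a simplified model (not the actual OpenSSL C code): the heartbeat handler echoed back as many bytes as the request *claimed* to contain, without comparing that claim to the number of bytes actually sent.

```python
from typing import Optional

def respond_vulnerable(memory: bytes, record_len: int) -> bytes:
    # The first two bytes of the record are the attacker-supplied
    # claimed payload length; the payload follows.
    claimed_len = int.from_bytes(memory[:2], "big")
    # Bug: trust claimed_len and copy that many bytes starting at the
    # payload, reading past the end of the record into whatever
    # happens to sit next to it in memory.
    return memory[2:2 + claimed_len]

def respond_fixed(memory: bytes, record_len: int) -> Optional[bytes]:
    claimed_len = int.from_bytes(memory[:2], "big")
    payload = memory[2:record_len]
    # Fix (the spirit of the real patch): silently discard any
    # heartbeat whose claimed length exceeds the payload actually
    # received.
    if claimed_len > len(payload):
        return None
    return payload[:claimed_len]

# A 5-byte heartbeat record claiming 64 bytes of payload, sitting in
# simulated process memory right next to a secret.
record = (64).to_bytes(2, "big") + b"hi!"
memory = record + b"-----BEGIN PRIVATE KEY-----"

print(respond_vulnerable(memory, len(record)))  # leaks the neighboring secret
print(respond_fixed(memory, len(record)))       # request rejected
```

In the real code the record also carried padding and the response was copied with memcpy(), but the shape of the mistake -- and of the short bounds-check fix -- is the same.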
The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google's security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.
When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it's announced.
The researchers alerted some of the larger companies quietly so that they could fix their systems before the public announcement. (Who to tell early is another very human problem: If you tell too few, you're not really helping, but if you tell too many, the secret could get out.) Then Codenomicon announced the vulnerability.
One of the biggest problems we face in the security community is how to communicate these sorts of vulnerabilities. The story is technical, and people often don't know how to react to the risk. In this case, the Codenomicon researchers did well. They created a public website explaining (in simple terms) the vulnerability and how to fix it, and they created a logo -- a red bleeding heart -- that every news outlet used for coverage of the story.
The first week of coverage varied widely, as some people panicked and others downplayed the threat. This wasn't surprising: There was a lot of uncertainty about the risk, and it wasn't immediately obvious how disastrous the vulnerability actually was.
The major Internet companies were quick to patch vulnerable systems. Individuals were less likely to update their passwords, but by and large, that was OK.
True to form, hackers started exploiting the vulnerability within minutes of the announcement. We assume that governments also exploited the vulnerability while they could. I'm sure the U.S. National Security Agency had advance warning.
By now, it's largely over. There are still lots of unpatched systems out there. (Many of them are embedded hardware systems that can't be patched.) The risk of attack is still there, but minimal. In the end, the actual damage was also minimal, although the expense of restoring security was great.
The question that remains is this: What should we expect in the future -- are there more Heartbleeds out there?
Yes. Yes there are. The software we use contains thousands of mistakes -- many of them security vulnerabilities. Lots of people are looking for these vulnerabilities: Researchers are looking for them. Criminals and hackers are looking for them. National intelligence agencies in the United States, the United Kingdom, China, Russia, and elsewhere are looking for them. The software vendors themselves are looking for them.
What happens when a vulnerability is found depends on who finds it. If the vendor finds it, it quietly fixes it. If a researcher finds it, he or she alerts the vendor and then reports it to the public. If a national intelligence agency finds the vulnerability, it either quietly uses it to spy on others or -- if we're lucky -- alerts the vendor. If criminals and hackers find it, they use it until a security company notices and alerts the vendor, and then it gets fixed -- usually within a month.
Heartbleed was unique because there was no single fix. The software had to be updated, and then websites had to regenerate their encryption keys and get new public-key certificates. After that, people had to update their passwords. This multi-stage process had to take place publicly, which is why the announcement happened the way it did.
Yes, it'll happen again. But most of the time, it'll be easier to deal with than this.
This essay previously appeared on The Mark News.
TrueCrypt -- the free hard-drive encryption program that a lot of us use -- shut down last month. There's a good summary of the story at Ars Technica, and Slashdot, Hacker News, and Reddit all have long comment threads. See also Brian Krebs and Cory Doctorow.
Speculations include a massive hack of the TrueCrypt developers, some Lavabit-like forced shutdown, and an internal power struggle within TrueCrypt. I suppose we'll have to wait and see what develops. At this point, my guess is that the developers just got tired of maintaining the code and shut it down.
No word yet about someone else taking over the project.
I'm speaking at the IEEE 2014 Conference on Norbert Wiener in the 21st Century in Boston on 6/26.
I'm speaking at the 26th Annual FIRST Conference in Boston on 6/27.
I'm speaking at the 3rd Annual Battelle Cyber-Auto Challenge in Detroit on 7/15.
I'm speaking at the ISSA Chicago July 2014 Chapter Meeting in Rosemont, IL on 7/17.
On June 2, I had the honor of presenting Edward Snowden with a "Champion of Freedom" award at the EPIC dinner. Snowden couldn't be there in person -- his father and stepmother were there in his place -- but he recorded a message.
Early this month, I helped organize SHB 2014: the Seventh Annual Interdisciplinary Workshop on Security and Human Behavior. This is a small invitational gathering of people studying various aspects of the human side of security. The fifty people in the room include psychologists, computer security researchers, sociologists, behavioral economists, philosophers, political scientists, lawyers, anthropologists, business school professors, neuroscientists, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.
I call these the most intellectually stimulating two days of my year. The goal is discussion amongst the group. We do that by putting everyone on panels, but only letting each person talk for 5-7 minutes. The rest of the 90-minute panel is left for discussion.
The conference is organized by Alessandro Acquisti, Ross Anderson, and me. This year we were at Cambridge University, in the UK.
The conference website contains a schedule and a list of participants, which includes links to writings by each of them. Ross Anderson liveblogged the event.
On April 1, I announced the Seventh Mostly Annual Movie-Plot Threat Contest:
The NSA has won, but how did it do it? How did it use its ability to conduct ubiquitous surveillance, its massive data centers, and its advanced data analytics capabilities to come out on top? Did it take over the world overtly, or is it just pulling the strings behind everyone's backs? Did it have to force companies to build surveillance into their products, or could it just piggy-back on market trends? How does it deal with liberal democracies and ruthless totalitarian dictatorships at the same time? Is it blackmailing Congress? How does the money flow? What's the story?
On May 15, I announced the five semifinalists. The votes are in, and the winner is Doubleplusunlol:
The NSA, GCHQ et al actually *don't* have the ability to conduct the mass surveillance that we now believe they do. Edward Snowden was in fact groomed, without his knowledge, to become a whistleblower, and the leaked documents were elaborately falsified by the NSA and GCHQ.
The encryption and security systems that 'private' companies are launching in the wake of these 'revelations', however, are in fact being covertly funded by the NSA/GCHQ -- the aim being to encourage criminals and terrorists to use these systems, which the security agencies have built massive backdoors into.
The laws that Obama is now about to pass will in fact be the laws that the NSA will abide by -- and will entrench mass surveillance as a legitimate government tool before the NSA even has the capability to perform it. That the online populace believes that they are already being watched will become a self-fulfilling prophecy; the people have built their own panopticon, wherein the *belief* that the Government is omniscient is sufficient for the Government to control them.
"He who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power; he makes them play spontaneously upon himself; he inscribes in himself the power relation in which he simultaneously plays both roles; he becomes the principle of his own subjection." Michel Foucault, Surveilier Et Punir, 1975
For the record, Guy Macon was a close runner-up.
Congratulations, Doubleplusunlol. Contact me by e-mail, and I'll send you your fabulous prizes.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 12 books -- including "Liars and Outliers: Enabling the Trust Society Needs to Survive" -- as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Co3 Systems, Inc. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Co3 Systems, Inc.
Copyright (c) 2014 by Bruce Schneier.