June 15, 2013
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1306.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Government Secrets and the Need for Whistleblowers
- Prosecuting Snowden
- Trading Privacy for Convenience
- More Links on the Snowden Documents
- Essays Related to NSA Spying Documents
- The Politics of Security in a Democracy
- More on Feudal Security
- Surveillance and the Internet of Things
- The Problems with CALEA-II
- Schneier News
- Sixth Annual Movie-Plot Threat Semifinalists
- A Really Good Article on How Easy it Is to Crack Passwords
- Bluetooth-Controlled Door Lock
- Security and Human Behavior (SHB 2013)
- The Cost of Terrorism in Pakistan
Government Secrets and the Need for Whistleblowers
Recently, we learned that the NSA received all calling records from Verizon customers for a three-month period starting in April. That’s everything except the voice content: who called whom, where they were, how long the call lasted—for millions of people, both Americans and foreigners. This “metadata” allows the government to track the movements of everyone during that period, and build a detailed picture of who talks to whom. It’s exactly the same data the Justice Department collected about AP journalists.
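To make concrete why metadata alone is so revealing, here is a minimal sketch in Python. The records are entirely invented for illustration; the point is that from nothing but caller, callee, timestamp, and cell tower, one can reconstruct both a contact graph and a location trail.

```python
# Illustrative sketch with hypothetical data: even without voice content,
# call metadata -- caller, callee, time, cell tower -- is enough to build
# a social graph and a movement trail for each subscriber.
from collections import defaultdict

# (caller, callee, timestamp, tower) records, all invented
records = [
    ("alice", "bob",   "2013-04-26T09:00", "tower-12"),
    ("alice", "carol", "2013-04-26T12:30", "tower-7"),
    ("bob",   "carol", "2013-04-27T18:45", "tower-12"),
    ("alice", "bob",   "2013-04-28T08:10", "tower-3"),
]

contacts = defaultdict(set)    # who talks to whom
movements = defaultdict(list)  # where each caller was, over time

for caller, callee, when, tower in records:
    contacts[caller].add(callee)
    contacts[callee].add(caller)
    movements[caller].append((when, tower))

print(sorted(contacts["alice"]))  # -> ['bob', 'carol']
print(movements["alice"])         # alice's timestamped location trail
```

Scaled from four invented records to billions of real ones, those same two dictionaries become a social graph and a movement history for an entire population.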
The “Guardian” delivered this revelation after receiving a copy of a secret memo about this—presumably from a whistleblower. We don’t know if the other phone companies handed data to the NSA too. We don’t know if this was a one-off demand or a continuously renewed demand; the order began a few days after the Boston Marathon bombing.
We don’t know a lot about how the government spies on us, but we know some things. We know the FBI has issued tens of thousands of ultra-secret National Security Letters to collect all sorts of data on people—we believe on millions of people—and has been abusing them to spy on cloud-computing users. We know it can collect a wide array of personal data from the Internet without a warrant. We also know that the FBI has been intercepting cell-phone data, all but voice content, for the past 20 years without a warrant, and can use the microphone on some powered-off cell phones as a room bug—presumably only with a warrant.
We know that the NSA has many domestic-surveillance and data-mining programs with codenames like Trailblazer, Stellar Wind, and Ragtime—deliberately using different codenames for similar programs to stymie oversight and conceal what’s really going on. We know that the NSA is building an enormous computer facility in Utah to store all this data, as well as faster computer networks to process it all. We know the U.S. Cyber Command employs 4,000 people.
We know that the DHS is also collecting a massive amount of data on people, and that local police departments are running “fusion centers” to collect and analyze this data, and covering up their failures. This is all part of the militarization of the police.
Remember in 2003, when Congress defunded the decidedly creepy Total Information Awareness program? It didn’t die; it just changed names and split into many smaller programs. We know that corporations are doing an enormous amount of spying on behalf of the government.
We know all of this not because the government is honest and forthcoming, but mostly through three backchannels—inadvertent hints or outright admissions by government officials in hearings and court cases, information gleaned from government documents received under FOIA, and government whistleblowers.
There’s much more we don’t know, and often what we know is obsolete. We know quite a bit about the NSA’s ECHELON program from a 2000 European investigation, and about DARPA’s plans for Total Information Awareness from 2002, but much less about how these programs have evolved. We can make inferences about the NSA’s Utah facility based on the theoretical amount of data from various sources, the cost of computation, and the power requirements of the facility, but those are rough guesses at best. For a lot of this, we’re completely in the dark.
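As an illustration of the kind of inference just described, here is a back-of-envelope sketch. Every number is a hypothetical placeholder, not a real figure for the Utah facility; the point is the method of estimating from a power budget, not the answer.

```python
# Back-of-envelope estimate of data-center storage from a power budget.
# All figures below are invented placeholders for illustration only.
power_budget_watts = 50e6    # assume a 50 MW facility
fraction_for_storage = 0.3   # assume 30% of power goes to disks
watts_per_drive = 10.0       # assume 10 W per spinning drive
tb_per_drive = 4.0           # assume 4 TB drives

drives = power_budget_watts * fraction_for_storage / watts_per_drive
exabytes = drives * tb_per_drive / 1e6  # 1 EB = 1e6 TB

print(f"{drives:,.0f} drives, ~{exabytes:.0f} EB of raw storage")
# -> 1,500,000 drives, ~6 EB of raw storage
```

Change any assumption by a factor of two and the answer moves accordingly, which is exactly why such estimates are rough guesses at best.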
And that’s wrong.
The U.S. government is on a secrecy binge. It overclassifies more information than ever. And we learn, again and again, that our government regularly classifies things not because they need to be secret, but because their release would be embarrassing.
Knowing how the government spies on us is important. Not only because so much of it is illegal—or, to be as charitable as possible, based on novel interpretations of the law—but because we have a right to know. Democracy requires an informed citizenry in order to function properly, and transparency and accountability are essential parts of that. That means knowing what our government is doing to us, in our name. That means knowing that the government is operating within the constraints of the law. Otherwise, we’re living in a police state.
We need whistleblowers.
Leaking information without getting caught is difficult. It’s almost impossible to maintain privacy in the Internet Age. The WikiLeaks platform seems to have been secure—Bradley Manning was caught not because of a technological flaw, but because someone he trusted betrayed him—but the U.S. government seems to have successfully destroyed it as a platform. None of the spin-offs have risen to become viable yet. The “New Yorker” recently unveiled its Strongbox platform for leaking material, which is still new but looks good. “Wired” recently gave the best advice on how to leak information to the press via phone, email, or the post office. The National Whistleblowers Center has a page on national-security whistleblowers and their rights.
Leaking information is also very dangerous. The Obama Administration has embarked on a war on whistleblowers, pursuing them—both legally and through intimidation—further than any previous administration has done. Mark Klein, Thomas Drake, and William Binney have all been persecuted for exposing technical details of our surveillance state. Bradley Manning has been treated cruelly and inhumanely—and possibly tortured—for his more-indiscriminate leaking of State Department secrets.
The Obama Administration’s actions against the Associated Press, its persecution of Julian Assange, and its unprecedented prosecution of Manning on charges of “aiding the enemy” demonstrate how far it’s willing to go to intimidate whistleblowers—as well as the journalists who talk to them.
But whistleblowing is vital, even more broadly than in government spying. It’s necessary for good government, and to protect us from abuse of power.
We need details on the full extent of the FBI’s spying capabilities. We don’t know what information it routinely collects on American citizens, what extra information it collects on those on various watch lists, and what legal justifications it invokes for its actions. We don’t know its plans for future data collection. We don’t know what scandals and illegal actions—either past or present—are currently being covered up.
We also need information about what data the NSA gathers, either domestically or internationally. We don’t know how much it collects surreptitiously, and how much it relies on arrangements with various companies. We don’t know how much it uses password cracking to get at encrypted data, and how much it exploits existing system vulnerabilities. We don’t know whether it deliberately inserts backdoors into systems it wants to monitor, either with or without the permission of the communications-system vendors.
And we need details about the sorts of analysis the organizations perform. We don’t know what they quickly cull at the point of collection, and what they store for later analysis—and how long they store it. We don’t know what sort of database profiling they do, how extensive their CCTV and surveillance-drone analysis is, how much they perform behavioral analysis, or how extensively they trace friends of people on their watch lists.
We don’t know how big the U.S. surveillance apparatus is today, either in terms of money and people or in terms of how many people are monitored or how much data is collected. Modern technology makes it possible to monitor vastly more people—the recent NSA revelations demonstrate that they could easily surveil *everyone*—than could ever be done manually.
Whistleblowing is the moral response to immoral activity by those in power. What’s important here are government programs and methods, not data about individuals. I understand I am asking for people to engage in illegal and dangerous behavior. Do it carefully and do it safely, but—and I am talking directly to you, person working on one of these secret and probably illegal programs—do it.
If you see something, say something. There are many people in the U.S. who will appreciate and admire you.
For the rest of us, we can help by protesting this war on whistleblowers. We need to force our politicians not to punish them—to investigate the abuses and not the messengers—and to ensure that those unjustly persecuted can obtain redress.
Our government is putting its own self-interest ahead of the interests of the country. That needs to change.
This essay originally appeared on the “Atlantic.”
National Security Letters:
FBI intercepting cell phone calls:
Turning a cell phone into a listening device:
The NSA’s Utah computer facility:
DHS data collection:
Failures at Fusion Centers:
Total Information Awareness:
Corporate spying on behalf of governments:
Transparency and accountability:
Ruminations on our future police state:
The Internet is a surveillance state:
Wired’s advice on how to leak:
National Whistleblowers Center:
Obama’s war on whistleblowers:
Action against the AP:
“Aiding the enemy” charges against Manning:
This essay is being discussed on Reddit:
Prosecuting Snowden
Edward Snowden broke the law by releasing classified information. This isn’t under debate; it’s something everyone with a security clearance knows. It’s written in plain English on the documents you have to sign when you get a security clearance, and it’s part of the culture. The law is there for a good reason, and secrecy has an important role in military defense.
But before the Justice Department prosecutes Snowden, there are some other investigations that ought to happen.
We need to determine whether these National Security Agency programs are themselves legal. The administration has successfully barred anyone from bringing a lawsuit challenging these laws, on the grounds of national secrecy. Now that the programs are public, that argument no longer holds, and it’s time for those court challenges.
It’s clear to me that some of the NSA programs exposed by Snowden violate the Constitution and that others violate existing laws. Other people disagree. The courts need to decide.
We need to determine whether classifying these programs is legal. Keeping things secret from the people is a very dangerous practice in a democracy, and the government is permitted to do so only under very specific circumstances. Reading the documents leaked so far, I don’t see anything that needs to be kept secret. The argument that exposing these documents helps the terrorists doesn’t even pass the laugh test; there’s nothing here that changes anything any potential terrorist would do or not do. But in any case, now that the documents are public, the courts need to rule on the legality of their secrecy.
And we need to determine how we treat whistleblowers in this country. We have whistleblower protection laws that apply in some cases, particularly when someone exposes fraud and other illegal behavior; NSA officials, after all, have repeatedly lied to Congress about the existence and details of these programs.
Only after all of these legal issues have been resolved should any prosecution of Snowden move forward. Because only then will we know the full extent of what he did, and how much of it is justified.
I believe that history will hail Snowden as a hero—his whistleblowing exposed a surveillance state and a secrecy machine run amok. I’m less optimistic about how the present day will treat him, and hope that the debate right now is less about the man and more about the government he exposed.
This essay was originally published on the “New York Times” Room for Debate blog, as part of a series of essays on the topic.
There’s a big discussion of this on Reddit.
Trading Privacy for Convenience
Ray Wang makes an important point about trust and our data:
This is the paradox. The companies contending to win our trust to manage our digital identities all seem to have complementary (or competing) business models that breach that trust by selling our data.
…and by turning it over to the government.
The current surveillance state is a result of a government/corporate partnership, and our willingness to give up privacy for convenience.
If the government demanded that we all carry tracking devices 24/7, we would rebel. Yet we all carry cell phones. If the government demanded that we deposit copies of all of our messages to each other with the police, we’d declare their actions unconstitutional. Yet we all use Gmail and Facebook messaging and SMS. If the government demanded that we give them access to all the photographs we take, and that we identify all of the people in them and tag them with locations, we’d refuse. Yet we do exactly that on Flickr and other sites.
Ray Ozzie was right when he said that we got what we asked for when we told the government we were scared and that they should do whatever they wanted to make us feel safer. But we also got what we asked for when we traded our privacy for convenience, trusting these corporations to look out for our best interests.
We’re living in a world of feudal security. And if you watch “Game of Thrones,” you know that feudalism benefits the powerful—at the expense of the peasants.
Last night, I was on “All In” with Chris Hayes. After the show, we talked about how technological solutions only work around the margins. That’s not a cause for despair. Think about technological solutions to murder. Yes, they exist—wearing a bullet-proof vest, for example—but they’re not really viable. The way we protect ourselves from murder is through laws. This is how we’re also going to protect our privacy.
Ray Wang’s essay:
The internet is a surveillance state:
The government/corporate surveillance partnership:
Ray Ozzie’s remarks:
Me on Chris Hayes:
More Links on the Snowden Documents
The whistleblower is Edward Snowden. I consider him an American hero.
Someone needs to write an essay parsing all of the precisely worded denials. Apple has never heard the word “PRISM,” but could have known of the program under a different name. Google maintained that there is no government “back door,” but left open the possibility that the data could have been just handed over. Obama said that the government isn’t “listening to your telephone calls,” ignoring 1) the metadata, 2) the fact that computers could be doing all of the listening, and 3) that speech-to-text results in phone calls being read, not listened to. And so on and on and on.
An NSA spying timeline:
Speculation about PRISM:
Defenses of NSA surveillance:
More essays worth reading:
NSA surveillance reimagined as children’s books:
Claims that PRISM foiled a terrorist attack have been debunked:
A collection of headlines:
Interesting comments by someone who thinks Snowden is a well-intentioned fool.
The “Economist” speculates on the political factors that would lead Obama to allow this. http://www.economist.com/s/democracyinamerica/…
Essays Related to NSA Spying Documents
Here’s a quick list of some of my older writings that are related to the current NSA spying documents:
“The Internet Is a Surveillance State,” 2013.
The importance of government transparency and accountability, 2013.
The dangers of a government/corporate eavesdropping partnership, 2013.
“Why Data Mining Won’t Stop Terror,” 2006.
“The Eternal Value of Privacy,” 2006.
The dangers of our “data shadow,” 2008.
The politics of security and fear, 2013.
The death of ephemeral conversation, 2006.
The dangers of NSA eavesdropping, 2008.
The Politics of Security in a Democracy
Terrorism causes fear, and we overreact to that fear. Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. We think rare risks are more common than they are, and we fear them more than probability indicates we should.
Our leaders are just as prone to this overreaction as we are. But aside from basic psychology, there are other reasons that it’s smart politics to exaggerate terrorist threats, and security threats in general.
The first is that we respond to a strong leader. Bill Clinton famously said: “When people feel uncertain, they’d rather have somebody that’s strong and wrong than somebody who’s weak and right.” He’s right.
The second is that doing something—anything—is good politics. A politician wants to be seen as taking charge, demanding answers, fixing things. It just doesn’t look as good to sit back and claim that there’s nothing to do. The logic is along the lines of: “Something must be done. This is something. Therefore, we must do it.”
The third is that the “fear preacher” wins, regardless of the outcome. Imagine two politicians today. One of them preaches fear and draconian security measures. The other is someone like me, who tells people that terrorism is a negligible risk, that risk is part of life, and that while some security is necessary, we should mostly just refuse to be terrorized and get on with our lives.
Fast-forward 10 years. If I’m right and there have been no more terrorist attacks, the fear preacher takes credit for keeping us safe. But if a terrorist attack has occurred, my government career is over. Even if the incidence of terrorism is as ridiculously low as it is today, there’s no benefit for a politician to take my side of that gamble.
The fourth and final reason is money. Every new security technology, from surveillance cameras to high-tech fusion centers to airport full-body scanners, has a for-profit corporation lobbying for its purchase and use. Given the three other reasons above, it’s easy—and probably profitable—for a politician to make them happy and say yes.
For any given politician, the implications of these four reasons are straightforward. Overestimating the threat is better than underestimating it. Doing something about the threat is better than doing nothing. Doing something that is explicitly reactive is better than being proactive. (If you’re proactive and you’re wrong, you’ve wasted money. If you’re proactive and you’re right but no longer in power, whoever is in power is going to get the credit for what you did.) Visible is better than invisible. Creating something new is better than fixing something old.
Those last two maxims are why it’s better for a politician to fund a terrorist fusion center than to pay for more Arabic translators for the National Security Agency. No one’s going to see the additional appropriation in the NSA’s secret budget. On the other hand, a high-tech computerized fusion center is going to make front page news, even if it doesn’t actually do anything useful.
This leads to another phenomenon about security and government. Once a security system is in place, it can be very hard to dislodge it. Imagine a politician who objects to some aspect of airport security: the liquid ban, the shoe removal, something. If he pushes to relax security, he gets the blame if something bad happens as a result. No one wants to roll back a police power and have the lack of that power cause a well-publicized death, even if it’s a one-in-a-billion fluke.
We’re seeing this force at work in the bloated terrorist no-fly and watch lists; agents have lots of incentive to put someone on the list, but absolutely no incentive to take anyone off. We’re also seeing this in the Transportation Security Administration’s attempt to reverse the ban on small blades on airplanes. Twice it tried to make the change, and twice fearful politicians stopped it.
Lots of unneeded and ineffective security measures are perpetuated by a government bureaucracy that is primarily concerned about the security of its members’ careers. They know that voters are more likely to punish them if they fail to secure against a repetition of the last attack than if they fail to anticipate the next one.
What can we do? Well, the first step toward solving a problem is recognizing that you have one. These are not iron-clad rules; they’re tendencies. If we can keep these tendencies and their causes in mind, we’re more likely to end up with sensible security measures that are commensurate with the threat, instead of a lot of security theater and draconian police powers that are not.
Our leaders’ job is to resist these tendencies. Our job is to support politicians who do resist.
This essay originally appeared on CNN.com.
This essay has been translated into Swedish.
My essay on how to fight terrorism:
TSA prohibited from allowing small knives:
Another essay along similar lines:
The new Canadian $100 bill has so many anti-counterfeiting features that people aren’t bothering to verify them.
For a while now, I have been thinking about what civil disobedience looks like in the Internet Age. DDOS attacks, and politically motivated hacking in general, are certainly a part of that. This is one of the reasons I found Molly Sauter’s recent thesis, “Distributed Denial of Service Actions and the Challenge of Civil Disobedience on the Internet,” so interesting.
One of the problems with the legal system is that it doesn’t make any differentiation between civil disobedience and “normal” criminal activity on the Internet, though it does in the real world.
This 127-page report on “The Global Cyber Game” was just published by the UK Defence Academy. I have not read it yet, but it looks really interesting.
This blog post by Aleatha Parker-Wood, on the one-shot vs. the iterated Prisoner’s Dilemma, is very applicable to the things I wrote in “Liars & Outliers”:
Interesting report from the Pew Internet and American Life Project on teens, social media, and privacy:
The research by G. Giguère and B.C. Love, “Limits in decision making arise from limits in memory retrieval,” in “Proceedings of the National Academy of Sciences,” v. 110, no. 19 (2013), has applications in training airport baggage screeners.
Nassim Nicholas Taleb on risk perception:
This article wonders if we are finally thinking sensibly about terrorism.
There are also these:
President Obama used my “refuse to be terrorized” line:
This bit on why we lie, by Judge Kozinski, is from a federal court ruling about false statements and First Amendment protection:
Interesting article on a greatly increased aspect of surveillance: “the ordinary citizen who by chance finds himself in a position to record events of great public import, and to share the results with the rest of us.”
New paper by Daniel Solove: “Privacy Self-Management and the Consent Dilemma”:
Someday I need to write an essay on the security risks of secret algorithms that become part of our infrastructure. This paper gives one example of that. Could Google tip an election by manipulating what comes up from search results on the candidates?
Eugene Spafford answers questions on CNN.com.
Interesting speculative article on tagging and location technologies.
Ignore the sensationalist headline. This article is a good summary of the need for trust in IT, and provides some ideas for how to enable more of it.
The psychology of conspiracy theories.
Ricin as a terrorist tool:
More on Feudal Security
Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS reader. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don’t know which ones. Both Twitter and LinkedIn have recently suffered security breaches that affected the data of hundreds of thousands of their users.
If you’ve started to think of yourself as a hapless peasant in a “Game of Thrones” power struggle, you’re more right than you may realize. These are not traditional companies, and we are not traditional customers. These are feudal lords, and we are their vassals, peasants, and serfs.
Power has shifted in IT, in favor of both cloud-service providers and closed-platform vendors. This power shift affects many things, and it profoundly affects security.
Traditionally, computer security was the user’s responsibility. Users purchased their own antivirus software and firewalls, and any breaches were blamed on their inattentiveness. It’s kind of a crazy business model. Normally we expect the products and services we buy to be safe and secure, but in IT we tolerated lousy products and supported an enormous aftermarket for security.
Now that the IT industry has matured, we expect more security “out of the box.” This has become possible largely because of two technology trends: cloud computing and vendor-controlled platforms. The first means that most of our data resides on other networks: Google Docs, Salesforce.com, Facebook, Gmail. The second means that our new Internet devices are both closed and controlled by the vendors, giving us limited configuration control: iPhones, Chromebooks, Kindles, BlackBerry PDAs. Meanwhile, our relationship with IT has changed. We used to use our computers to do things. We now use our vendor-controlled computing devices to go places. All of these places are owned by someone.
The new security model is that someone else takes care of it—without telling us any of the details. I have no control over the security of my Gmail or my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello, no matter how confidential they are. I can’t audit any of these cloud services. I can’t delete cookies on my iPad or ensure that files are securely erased. Updates on my Kindle happen automatically, without my knowledge or consent. I have so little visibility into the security of Facebook that I have no idea what operating system they’re using.
There are a lot of good reasons why we’re all flocking to these cloud services and vendor-controlled platforms. The benefits are enormous, from cost to convenience to reliability to security itself. But it is inherently a feudal relationship. We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm. And if we pledge complete allegiance to them—if we let them control our email and calendar and address book and photos and everything—we get even more benefits. We become their vassals; or, on a bad day, their serfs.
There are a lot of feudal lords out there. Google and Apple are the obvious ones, but Microsoft is trying to control both user data and the end-user platform as well. Facebook is another lord, controlling much of the socializing we do on the Internet. Other feudal lords are smaller and more specialized—Amazon, Yahoo, Verizon, and so on—but the model is the same.
To be sure, feudal security has its advantages. These companies are much better at security than the average user. Automatic backup has saved a lot of data after hardware failures, user mistakes, and malware infections. Automatic updates have increased security dramatically. This is also true for small organizations; they are more secure than they would be if they tried to do it themselves. For large corporations with dedicated IT security departments, the benefits are less clear. Sure, even large companies outsource critical functions like tax preparation and cleaning services, but large companies have specific requirements for security, data retention, audit, and so on—and that’s just not possible with most of these feudal lords.
The feudal relationship is inherently based on power. In medieval Europe, people would pledge their allegiance to a feudal lord in exchange for that lord’s protection. This arrangement changed as the lords realized that they had all the power and could do whatever they wanted. Vassals were used and abused; peasants were tied to their land and became serfs.
It’s the Internet lords’ popularity and ubiquity that enable them to profit; laws and government relationships make it easier for them to hold onto power. These lords are vying with each other for profits and power. By spending time on their sites and giving them our personal information—whether through search queries, e-mails, status updates, likes, or simply our behavioral characteristics—we are providing the raw material for that struggle. In this way we are like serfs, tilling the land for our feudal lords. If you don’t believe me, try to take your data with you when you leave Facebook. And when war breaks out among the giants, we become collateral damage.
So how do we survive? Increasingly, we have little alternative but to trust *someone*, so we need to decide who we trust—and who we don’t—and then act accordingly. This isn’t easy; our feudal lords go out of their way not to be transparent about their actions, their security, or much of anything. Use whatever power you have—as individuals, none; as large corporations, more—to negotiate with your lords. And, finally, don’t be extreme in any way: politically, socially, culturally. Yes, you can be shut down without recourse, but it’s usually those on the edges that are affected. Not much solace, I agree, but it’s something.
On the policy side, we have an action plan. In the short term, we need to keep circumvention—the ability to modify our hardware, software, and data files—legal and preserve net neutrality. Both of these things limit how much the lords can take advantage of us, and they increase the possibility that the market will force them to be more benevolent. The last thing we want is the government—that’s us—spending resources to enforce one particular business model over another and stifling competition.
In the longer term, we all need to work to reduce the power imbalance. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. We have no choice but to trust the lords, but we receive very few assurances in return. The lords have a lot of rights, but few responsibilities or limits. We need to balance this relationship, and government intervention is the only way we’re going to get it. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people.
We need a similar process to rein in our Internet lords, and it’s not something that market forces are likely to provide. The very definition of power is changing, and the issues are far bigger than the Internet and our relationships with our IT providers.
This essay originally appeared on the “Harvard Business Review” website.
It is an update of this earlier essay on the same topic.
“Feudal security” is a metaphor I have been using a lot recently; I wrote this essay without rereading my previous essay.
There is another way the feudal metaphor applies to the Internet. There is no commons; every part of the Internet is owned by someone. This article explores that aspect of the metaphor.
Power and security:
The need for trust:
The Internet giants reimagined as “Game of Thrones” players:
http://.hootsuite.com/wp-content/uploads/2013/… or http://.hootsuite.com/wp-content/uploads/2013/…
Surveillance and the Internet of Things
The Internet has turned into a massive surveillance tool. We’re constantly monitored on the Internet by hundreds of companies—both familiar and unfamiliar. Everything we do there is recorded, collected, and collated—sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.
Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.
It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behavior, it’s still only behavior that involves computers.
The Internet of Things refers to a world where much more than our computers and cell phones is Internet-enabled. Soon there will be Internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be Internet-connected tags on our clothing. In its extreme, *everything* can be connected to the Internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.
Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.
Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.
We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.
Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be Internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the Internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies—and anyone they decide to sell the data to—to monitor how people move about their house and how they spend their time.
Drones are another “thing” moving onto the Internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the Internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.
Google’s Internet-enabled glasses—Google Glass—are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.
In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behavior and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals—not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyze this massive data stream will improve.
In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days—and nights—with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.
Will *you* know any of this? Will your friends? It depends. Lots of these devices have, and will have, privacy settings. But these settings are remarkable not in how much privacy they afford, but in how much they deny. Access will likely be similar to your browsing habits, your files stored on Dropbox, your searches on Google, and your text messages from your phone. All of your data is saved by those companies—and many others—correlated, and then bought and sold without your knowledge or consent. You’d think that your privacy settings would keep random strangers from learning everything about you, but they only keep out random strangers who *don’t pay for the privilege*—or who don’t work for the government and have the ability to demand the data. Power is what matters here: you’ll be able to keep the powerless from invading your privacy, but you’ll have no ability to prevent the powerful from doing it again and again.
This essay originally appeared in the “Guardian.”
The Internet as a massive surveillance tool:
The death of ephemeral conversation:
The rise of wholesale surveillance:
Linking online and offline behavior:
The Internet of things:
Surveillance under the Internet of things:
Giving the Internet eyes and ears:
Smart electric grid:
David Brin on the transparent society:
Science fiction story about this particular dystopia:
Power and security:
Another article on the subject:
The Problems with CALEA-II
The FBI wants a new law that will make it easier to wiretap the Internet. Although its claim is that the new law will only maintain the status quo, it’s really much worse than that. This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies. It will assist totalitarian governments in spying on their own citizens. And it won’t do much to hinder actual criminals and terrorists.
As the FBI sees it, the problem is that people are moving away from traditional communication systems like telephones onto computer systems like Skype. Eavesdropping on telephones used to be easy. The FBI would call the phone company, which would bring agents into a switching room and allow them to literally tap the wires with a pair of alligator clips and a tape recorder. In the 1990s, the government forced phone companies to provide an analogous capability on digital switches; but today, more and more communications happen over the Internet.
What the FBI wants is the ability to eavesdrop on *everything*. Depending on the system, this ranges from easy to impossible. E-mail systems like Gmail are easy. The mail resides in Google’s servers, and the company has an office full of people who respond to requests for lawful access to individual accounts from governments all over the world. Encrypted voice systems like Silent Circle are impossible to eavesdrop on—the calls are encrypted from one computer to the other, and there’s no central node to eavesdrop from. In those cases, the only way to make the system eavesdroppable is to add a backdoor to the user software. This is precisely the FBI’s proposal. Companies that refuse to comply would be fined $25,000 a day.
The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else’s eavesdropping. That’s just not possible. It’s impossible to build a communications system that allows the FBI surreptitious access but doesn’t allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.
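The no-central-node point can be made concrete with a toy key exchange. This is a sketch of Diffie-Hellman with deliberately tiny, insecure parameters (a real end-to-end system uses vetted primitives and much larger keys, not this): the two endpoints derive a shared session key that never crosses the wire, so a tap in the middle sees nothing it can use.

```python
import secrets

# Toy Diffie-Hellman exchange. The modulus is far too small to be secure;
# this only illustrates the structure of end-to-end key agreement.
p = 0xFFFFFFFB  # a small prime modulus (illustrative only)
g = 5           # generator

alice_secret = secrets.randbelow(p - 2) + 2
bob_secret = secrets.randbelow(p - 2) + 2

# Only these two public values cross the network; a relay or wiretap sees them.
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each endpoint computes the same session key from its own secret.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)
assert alice_key == bob_key  # shared key, never transmitted
```

There is no server-side copy of the key to hand over; the only way to give a third party access is to change the endpoint software itself, which is what the proposal amounts to.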
This is an old debate, and one we’ve been through many times. The NSA even has a name for it: the equities issue. In the 1980s, the equities debate was about export control of cryptography. The government deliberately weakened U.S. cryptography products because it didn’t want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan “Don’t buy the U.S. stuff—it’s lousy.”
In 1993, the debate was about the Clipper Chip. This was another deliberately weakened security product, an encrypted telephone. The FBI convinced AT&T to add a backdoor that allowed for surreptitious wiretapping. The product was a complete failure. Again, why would anyone buy a deliberately weakened security system?
In 1994, the Communications Assistance for Law Enforcement Act mandated that U.S. companies build eavesdropping capabilities into phone switches. These were sold internationally; some countries liked having the ability to spy on their citizens. Of course, so did criminals, and there were public scandals in Greece (2005) and Italy (2006) as a result.
In 2012, we learned that every phone switch sold to the Department of Defense had security vulnerabilities in its surveillance system. And just this May, we learned that Chinese hackers breached Google’s system for providing surveillance data for the FBI.
The new FBI proposal will fail in all these ways and more. The bad guys will be able to get around the eavesdropping capability, either by building their own security systems—not very difficult—or buying the more-secure foreign products that will inevitably be made available. Most of the good guys, who don’t understand the risks or the technology, will not know enough to bother and will be less secure. The eavesdropping functions will 1) result in more obscure—and less secure—product designs, and 2) be vulnerable to exploitation by criminals, spies, and everyone else. U.S. companies will be forced to compete at a disadvantage; smart customers won’t buy the substandard stuff when there are more-secure foreign alternatives. Even worse, there are lots of foreign governments who want to use these sorts of systems to spy on their own citizens. Do we really want to be exporting surveillance technology to the likes of China, Syria, and Saudi Arabia?
The FBI’s short-sighted agenda also works against the parts of the government that are still working to secure the Internet for everyone. Initiatives within the NSA, the DOD, and DHS to do everything from securing computer operating systems to enabling anonymous web browsing will all be harmed by this.
What to do, then? The FBI claims that the Internet is “going dark,” and that it’s simply trying to maintain the status quo of being able to eavesdrop. This characterization is disingenuous at best. We are entering a golden age of surveillance; there are more electronic communications available for eavesdropping than ever before, including whole new classes of information: location tracking, financial tracking, and vast databases of historical communications such as e-mails and text messages. The FBI’s surveillance department has it better than ever. With regard to voice communications, yes, software phone calls will be harder to eavesdrop upon. (Although there are questions about Skype’s security.) That’s just part of the evolution of technology, and one that on balance is a positive thing.
Think of it this way: We don’t hand the government copies of our house keys and safe combinations. If agents want access, they get a warrant and then pick the locks or bust open the doors, just as a criminal would do. A similar system would work on computers. The FBI, with its increasingly non-transparent procedures and systems, has failed to make the case that this isn’t good enough.
Finally there’s a general principle at work that’s worth explicitly stating. All tools can be used by the good guys and the bad guys. Cars have enormous societal value, even though bank robbers can use them as getaway cars. Cash is no different. Both good guys and bad guys send e-mails, use Skype, and eat at all-night restaurants. But because society consists overwhelmingly of good guys, the good uses of these dual-use technologies greatly outweigh the bad uses. Strong Internet security makes us all safer, even though it helps the bad guys as well. And it makes no sense to harm all of us in an attempt to harm a small subset of us.
This essay originally appeared in “Foreign Policy”.
The FBI’s proposal:
The equities issue:
What happened in Greece:
What happened in Italy:
Vulnerabilities in the US:
The Chinese hacking Google:
Other essays on this:
How the government is helping secure the Internet.
The “golden age of surveillance”:
Surveillance on the Internet:
Questions about Skype security:
Forcing the FBI to use vulnerabilities to eavesdrop on people:
The need for transparency:
Schneier News
I’m speaking at Cornerstones of Trust 2013, in Foster City, CA, on June 18.
I’m speaking at USI 2013, in Paris on June 25.
In this podcast interview, I talk about security, power, and the various things I have been thinking about recently.
In the episode of “Elementary” that aired on May 9, about eight or nine minutes in, there’s a scene with a copy of “Applied Cryptography” prominently displayed on the coffee table. This isn’t the first time that my books have appeared on that TV show.
Sixth Annual Movie-Plot Threat Semifinalists
On April 1 on my blog, I announced the Sixth Annual Movie Plot Threat Contest:
I want a cyberwar movie-plot threat. (For those who don’t know, a movie-plot threat is a scare story that would make a great movie plot, but is much too specific to build security policy around.) Not the Chinese attacking our power grid or shutting off 911 emergency services—people are already scaring our legislators with that sort of stuff. I want something good, something no one has thought of before.
Submissions are in, and—apologies that this is a month late, but I completely forgot about it—here are the semifinalists.
1. Crashing satellites, by Chris Battey. https://www.schneier.com/blog/archives/2013/04/…
2. Attacking Dutch dams, by Russell Thomas. https://www.schneier.com/blog/archives/2013/04/…
3. Attacking a drug dispensing system, by Dave. https://www.schneier.com/blog/archives/2013/04/…
4. Attacking cars through their diagnostic ports, by RSaunders. https://www.schneier.com/blog/archives/2013/04/…
5. Embedded kill switches in chips, by Shogun. https://www.schneier.com/blog/archives/2013/04/…
Cast your vote by number; voting closes at the end of the month.
A Really Good Article on How Easy it Is to Crack Passwords
Ars Technica gave three experts a 16,000-entry hashed password file and asked them to crack as many entries as they could. The winner got 90% of them, the loser 62%—in a few hours.
The list of “plains,” as many crackers refer to deciphered hashes, contains the usual list of commonly used passcodes that are found in virtually every breach involving consumer websites. “123456,” “1234567,” and “password” are there, as is “letmein,” “Destiny21,” and “pizzapizza.” Passwords of this ilk are hopelessly weak. Despite the additional tweaking, “p@$$word,” “123456789j,” “letmein1!,” and “LETMEin3” are equally awful….
As big as the word lists that all three crackers in this article wielded—close to 1 billion strong in the case of Gosney and Steube—none of them contained “Coneyisland9/,” “momof3g8kids,” or the more than 10,000 other plains that were revealed with just a few hours of effort. So how did they do it? The short answer boils down to two variables: the website’s unfortunate and irresponsible use of MD5 and the use of non-randomized passwords by the account holders.
The article goes on to explain how dictionary attacks work, how well they do, and the sorts of passwords they find.
Steube was able to crack “momof3g8kids” because he had “momof3g” in his 111 million dict and “8kids” in a smaller dict.
“The combinator attack got it! It’s cool,” he said. Then referring to the oft-cited xkcd comic, he added: “This is an answer to the batteryhorsestaple thing.”
What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won’t ever find it using brute force.”
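For readers curious what a combinator attack looks like mechanically, here is a minimal sketch. The word lists and the unsalted-MD5 target are made up for illustration; real crackers run GPU-accelerated tools against lists with hundreds of millions of entries.

```python
import hashlib

# Hypothetical mini-dictionaries; real attacks use far larger lists.
big_dict = ["momof3g", "letmein", "password"]
small_dict = ["8kids", "123", "2013"]

# The site stored unsalted MD5 hashes, so we can precompute and compare directly.
target = hashlib.md5(b"momof3g8kids").hexdigest()

def combinator_attack(words_a, words_b, target_hash):
    """Try every concatenation of one word from each list (a 'combinator attack')."""
    for a in words_a:
        for b in words_b:
            candidate = a + b
            if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

print(combinator_attack(big_dict, small_dict, target))  # -> momof3g8kids
```

A brute-force search of all 12-character lowercase-plus-digit strings is infeasible, but two dictionary lookups and a concatenation recover this password almost instantly, which is exactly Steube’s point.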
Great reading, but nothing theoretically new. Ars Technica wrote about this last year, and Joe Bonneau wrote an excellent commentary.
Password cracking can be evaluated on two nearly independent axes: *power* (the ability to check a large number of guesses quickly and cheaply using optimized software, GPUs, FPGAs, and so on) and *efficiency* (the ability to generate large lists of candidate passwords accurately ranked by real-world likelihood using sophisticated models).
I wrote about this same thing back in 2007. The news in 2013, such as it is, is that this kind of thing is getting easier faster than people think. Pretty much anything that can be remembered can be cracked.
If you need to memorize a password, I still stand by the Schneier scheme from 2008:
So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence—something personal.
Until this very moment, these passwords were still secure:
WIw7,mstmsritt… = When I was seven, my sister threw my stuffed rabbit in the toilet.
Wow…doestcst::amazon.cccooommm = Wow, does that couch smell terrible.
Ltime@go-inag~faaa! = Long time ago in a galaxy not far away at all.
uTVM,TPw55:utvm,tpwstillsecure = Until this very moment, these passwords were still secure.
You get the idea. Combine a personally memorable sentence with some personal memory tricks to modify that sentence into a long password.
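The mechanical core of the scheme (take the first letter of each word) is easy to sketch; the security comes from the private tweaks you layer on top, which is precisely what a program should not standardize. A hypothetical helper, for illustration only:

```python
def mnemonic(sentence):
    # Naive sketch of the first-letter step only. The published examples add
    # private tweaks (digits, punctuation, capitalization) by hand; automating
    # those tweaks would make the scheme predictable, so don't.
    return "".join(word[0] for word in sentence.split())

print(mnemonic("When I was seven, my sister threw my stuffed rabbit in the toilet"))
# -> WIwsmstmsritt (the published "WIw7,mstmsritt..." adds manual tweaks on top)
```

Note that if everyone used the same mechanical rule, crackers would simply add it to their rule sets; the personal, idiosyncratic modifications are what keep the result out of anyone’s dictionary.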
Better, though, is to use random unmemorable alphanumeric passwords (with symbols, if the site will allow them), and a password manager like Password Safe to store them. (If anyone wants to port it to the Mac, iPhone, iPad, or Android, please contact me.) Ars Technica does a good job of explaining the same thing. David Pogue likes Dashlane, but doesn’t know if it’s secure.
Of course, none of this is useful advice if the site puts artificial limits on your password.
The Ars Technica article:
XKCD password advice:
Analysis of the XKCD scheme:
Ars Technica’s 2012 article:
Joe Bonneau’s commentary:
My 2007 essay:
My password generation advice:
Various ports of Password Safe. I know nothing about them, nor can I vouch for their security:
Ars Technica password advice:
David Pogue’s advice:
In related news, Password Safe is a candidate for July’s project-of-the-month on SourceForge. Please vote for it.
Bluetooth-Controlled Door Lock
Here is a new lock that you can control via Bluetooth and an iPhone app.
That’s pretty cool, and I can imagine all sorts of reasons to get one of those. But I’m sure there are all sorts of unforeseen security vulnerabilities in this system. And even worse, a single vulnerability can affect *all* the locks. Remember that vulnerability found last year in hotel electronic locks?
Anyone care to guess how long before some researcher finds a way to hack this one? And how well the maker anticipated the need to update the firmware to fix the vulnerability once someone finds it?
I’m not saying that you shouldn’t use this lock, only that you should understand that new technology brings new security risks, and electronic technology brings new kinds of security risks. Security is a trade-off, and the trade-off is particularly stark in this case.
Hotel lock vulnerability:
Security and Human Behavior (SHB 2013)
The Sixth Interdisciplinary Workshop on Security and Human Behavior (SHB 2013) took place in early June. This year we were in Los Angeles, at USC—hosted by CREATE.
My description from last year still applies:
SHB is an invitational gathering of psychologists, computer security researchers, behavioral economists, sociologists, law professors, business school professors, political scientists, anthropologists, philosophers, and others—all of whom are studying the human side of security—organized by Alessandro Acquisti, Ross Anderson, and me. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.
It is still the most intellectually stimulating conference I attend all year. The format has remained unchanged since the beginning. Each panel consists of six people. Everyone has ten minutes to talk, and then we have half an hour of questions and discussion. The format maximizes interaction, which is really important in an interdisciplinary conference like this one.
The conference website contains a schedule and a list of participants, which includes links to writings by each of them. Both Ross Anderson and Vaibhav Garg liveblogged the event.
All links, including links to previous SHB workshops, are here:
The Cost of Terrorism in Pakistan
This study claims “terrorism has cost Pakistan around 33.02% of its real national income” between the years 1973 and 2008, or about 1% per year.
The St. Louis Fed puts the real gross national income of the U.S. at about $13 trillion total, hand-waving an average over the past few years. The best estimate I’ve seen for the increased cost of homeland security in the U.S. in the ten years since 9/11 is $100 billion per year. So that puts the cost of terrorism in the US at about 0.8%—surprisingly close to the Pakistani number.
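The arithmetic behind that comparison, using the essay’s round numbers:

```python
# Rough figures from the essay, not precise economic data.
us_income = 13e12        # ~$13 trillion US real gross national income
security_cost = 100e9    # ~$100 billion per year in added homeland-security spending
print(f"US: {security_cost / us_income:.1%} of national income")

pakistan_share = 33.02   # percent of Pakistan's real national income lost, 1973-2008
years = 2008 - 1973      # 35 years
print(f"Pakistan: {pakistan_share / years:.2f}% per year")
```

Roughly 0.8% per year for the US versus roughly 0.94% per year for Pakistan: two very different kinds of cost landing at nearly the same fraction of national income.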
The interesting thing is that the expenditures are completely different. In Pakistan, the cost is primarily “a fall in domestic investment and lost workers’ remittances from abroad.” In the US, it’s security measures, including the invasion of Iraq.
I remember reading somewhere that about a third of all food spoils. In poor countries, that spoilage primarily happens during production and transport. In rich countries, that spoilage primarily happens after the consumer buys the food. Same rate of loss, completely different causes. This reminds me of that.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Liars and Outliers,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT, and is on the Advisory Boards of the Electronic Privacy Information Center (EPIC) and the Electronic Frontier Foundation (EFF). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2013 by Bruce Schneier.