Schneier on Security
A blog covering security and security technology.
March 2013 Archives
Friday Squid Blogging: Bomb Discovered in Squid at Market
An unexploded bomb was found inside a squid as it was being gutted at a fish market in Guangdong province.
Oddly enough, this doesn't seem to be the work of terrorists:
The stall owner, who has been selling fish for 10 years, told the newspaper the 1-meter-long squid might have mistaken the bomb for food.
Clearly there's much to this story that remains unreported.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
The Dangers of Surveillance
Interesting article, "The Dangers of Surveillance," by Neil M. Richards, Harvard Law Review, 2013. From the abstract:
...We need a better account of the dangers of surveillance.
EDITED TO ADD (4/12): Reply to the article.
New RC4 Attack
This is a really clever attack on the RC4 encryption algorithm as used in TLS.
We have found a new attack against TLS that allows an attacker to recover a limited amount of plaintext from a TLS connection when RC4 encryption is used. The attacks arise from statistical flaws in the keystream generated by the RC4 algorithm which become apparent in TLS ciphertexts when the same plaintext is repeatedly encrypted at a fixed location across many TLS sessions.
The attack is very specialized:
The attack is a multi-session attack, which means that we require a target plaintext to be repeatedly sent in the same position in the plaintext stream in multiple TLS sessions. The attack currently only targets the first 256 bytes of the plaintext stream in sessions. Since the first 36 bytes of plaintext are formed from an unpredictable Finished message when SHA-1 is the selected hashing algorithm in the TLS Record Protocol, these first 36 bytes cannot be recovered. This means that the attack can recover 220 bytes of TLS-encrypted plaintext.
Is this a big deal? Yes and no. The attack requires the identical plaintext to be repeatedly encrypted. Normally, this would make for an impractical attack in the real world, but HTTP messages often have stylized headers that are identical across a conversation -- for example, cookies. On the other hand, those are the only bits that can be decrypted. Currently, this attack is pretty raw and unoptimized -- so it's likely to become faster and better.
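The statistical flaws in question are single-byte biases in the early RC4 keystream. The best known is the Mantin-Shamir bias: the second keystream byte is 0 with probability roughly 2/256, double what a uniform keystream would give. Here's a short Python sketch -- a from-scratch RC4 for illustration, not the researchers' code -- that makes the bias visible:

```python
import os

def rc4_keystream(key, n):
    """Generate n bytes of RC4 keystream for the given key (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Estimate P[Z_2 = 0] over many random 16-byte keys. A uniform keystream
# would give 1/256 (about 0.0039); RC4 gives roughly double that. Biases
# like this one, at many early positions, are what the multi-session TLS
# attack accumulates statistically.
trials = 20000
hits = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
print(hits / trials)          # ~0.0078, about twice the uniform 0.0039
```

An attacker who sees the same plaintext byte encrypted at the same position across millions of sessions can use these skewed keystream distributions to vote on the most likely plaintext value.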
There's no reason to panic here. But let's start to move away from RC4 to something like AES.
Unwitting Drug Smugglers
This is a story about a physicist who got taken in by an imaginary Internet girlfriend and ended up being arrested in Argentina for drug smuggling. Readers of this blog will see it coming, of course, but it's still a good read.
I don't know whether the professor knew what he was doing -- it's pretty clear that the reporter believes he's guilty. What's more interesting to me is that there is a drug smuggling industry that relies on recruiting mules off the Internet by pretending to be romantically inclined pretty women. Could that possibly be a useful enough recruiting strategy?
EDITED TO ADD (4/12): Here's a similar story from New Zealand, with the sexes swapped.
Security Awareness Training
Should companies spend money on security awareness training for their employees? It's a contentious topic, with respected experts on both sides of the debate. I personally believe that training users in security is generally a waste of time, and that the money can be spent better elsewhere. Moreover, I believe that our industry's focus on training serves to obscure greater failings in security design.
In order to understand my argument, it's useful to look at training's successes and failures. One area where it doesn't work very well is health. We are forever trying to train people to have healthier lifestyles: eat better, exercise more, whatever. And people are forever ignoring the lessons. One basic reason is psychological: we just aren't very good at trading off immediate gratification for long-term benefit. A healthier you is an abstract eventuality; sitting in front of the television all afternoon with a McDonald's Super Monster Meal sounds really good right now. Similarly, computer security is an abstract benefit that gets in the way of enjoying the Internet. Good practices might protect me from a theoretical attack at some time in the future, but they're a lot of bother right now, and I have more fun things to think about. This is the same trick Facebook uses to get people to give away their privacy: no one reads through new privacy policies; it's much easier to just click "OK" and start chatting with your friends. In short: security is never salient.
Another reason health training works poorly is that it's hard to link behaviors with benefits. We can train anyone -- even laboratory rats -- with a simple reward mechanism: push the button, get a food pellet. But with health, the connection is more abstract. If you're unhealthy, what caused it? It might have been something you did or didn't do years ago, it might have been one of the dozen things you have been doing and not doing for months, or it might have been the genes you were born with. Computer security is a lot like this, too.
Training laypeople in pharmacology also isn't very effective. We expect people to make all sorts of medical decisions at the drugstore, and they're not very good at it. Turns out that it's hard to teach expertise. We can't expect every mother to have the knowledge of a doctor or pharmacist or RN, and we certainly can't expect her to become an expert when most of the advice she's exposed to comes from manufacturers' advertising. In computer security, too, a lot of advice comes from companies with products and services to sell.
One area of health that is a training success is HIV prevention. HIV may be very complicated, but the rules for preventing it are pretty simple. And aside from certain sub-Saharan countries, we have taught people a new model of their health, and have dramatically changed their behavior. This is important: most lay medical expertise stems from folk models of health. Similarly, people have folk models of computer security. Maybe they're right and maybe they're wrong, but they're how people organize their thinking. This points to a possible way that computer security training can succeed. We should stop trying to teach expertise, and pick a few simple metaphors of security and train people to make decisions using those metaphors.
On the other hand, we still have trouble teaching people to wash their hands -- even though it's easy, fairly effective, and simple to explain. Notice the difference, though. The risks of catching HIV are huge, and the cause of the security failure is obvious. The risks of not washing your hands are low, and it's not easy to tie the resultant disease to a particular not-washing decision. Computer security is more like hand washing than HIV.
Food safety is my final example. We have a bunch of simple rules -- cooking temperatures for meat, expiration dates on refrigerated goods, the five-second rule for food being dropped on the floor -- that are mostly right, but often ignored. If we can't get people to follow these rules, what hope do we have for computer security training?
To those who think that training users in security is a good idea, I want to ask: "Have you ever met an actual user?" They're not experts, and we can't expect them to become experts. The threats change constantly, the likelihood of failure is low, and there is enough complexity that it's hard for people to understand how to connect their behavior to eventual outcomes. So they turn to folk remedies that, while simple, don't really address the threats.
Even if we could invent an effective computer security training program, there's one last problem. HIV prevention training works because affecting what the average person does is valuable. Even if only half the population practices safe sex, those actions dramatically reduce the spread of HIV. But computer security is often only as strong as the weakest link. If four-fifths of company employees learn to choose better passwords, or not to click on dodgy links, one-fifth still get it wrong and the bad guys still get in. As long as we build systems that are vulnerable to the worst case, raising the average case won't make them more secure.
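The weakest-link arithmetic is worth making concrete. A couple of lines of Python -- the numbers are illustrative, not from any study -- show why raising the average barely helps:

```python
# If each of n employees independently resists a phishing attempt with
# probability q, the attacker wins unless *everyone* resists.
# Illustrative numbers only: n = 100 employees, varying compliance rates.
def breach_probability(n, q):
    return 1 - q ** n

for q in (0.80, 0.95, 0.99):
    print(q, round(breach_probability(100, q), 3))
# 0.8 1.0
# 0.95 0.994
# 0.99 0.634
```

Even with 99 percent of employees getting it right every time, an attacker who needs only one mistake still succeeds nearly two times out of three.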
The whole concept of security awareness training demonstrates how the computer industry has failed. We should be designing systems that won't let users choose lousy passwords and don't care what links a user clicks on. We should be designing systems that conform to their folk beliefs of security, rather than forcing them to learn new ones. Microsoft has a great rule about system messages that require the user to make a decision. They should be NEAT: necessary, explained, actionable, and tested. That's how we should be designing security interfaces. And we should be spending money on security training for developers. These are people who can be taught expertise in a fast-changing environment, and this is a situation where raising the average behavior increases the security of the overall system.
If we security engineers do our job right, users will get their awareness training informally and organically, from their colleagues and friends. People will learn the correct folk models of security, and be able to make decisions using them. Then maybe an organization can spend an hour a year reminding their employees what good security means at that organization, both on the computer and off. That makes a whole lot more sense.
This essay originally appeared on DarkReading.com.
EDITED TO ADD (4/4): Another commentary.
EDITED TO ADD (4/23): Another opinion.
The NSA's Cryptolog
The NSA has published declassified versions of its Cryptolog newsletter. All the issues from Aug 1974 through Summer 1997 are on the web, although there are some pretty heavy redactions in places. (Here's a link to the documents on a non-government site, in case they disappear.)
I haven't even begun to go through these yet. If you find anything good, please post it in comments.
Identifying People from Mobile Phone Location Data
Turns out that it's pretty easy:
Researchers at the Massachusetts Institute of Technology (MIT) and the Catholic University of Louvain studied 15 months' worth of anonymised mobile phone records for 1.5 million individuals.
Here's the study.
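The study's headline result is that as few as four spatio-temporal points are enough to uniquely identify 95 percent of the individuals in the dataset. The qualitative effect is easy to reproduce on synthetic data. The sketch below simulates users with random (hour, cell-tower) traces -- toy data, nothing from the actual study -- and checks how often a handful of known points narrows the match down to a single user:

```python
import random

random.seed(1)

N_USERS, HOURS, TOWERS, TRACE_LEN = 1000, 24, 50, 30

# Each simulated user has a set of (hour, tower) observations.
traces = {u: {(random.randrange(HOURS), random.randrange(TOWERS))
              for _ in range(TRACE_LEN)}
          for u in range(N_USERS)}

def unique_fraction(k, samples=300):
    """Fraction of sampled users pinned down by k of their own points."""
    hits = 0
    for _ in range(samples):
        u = random.randrange(N_USERS)
        known = random.sample(sorted(traces[u]), k)
        matches = [v for v, t in traces.items() if set(known) <= t]
        hits += (matches == [u])
    return hits / samples

for k in (1, 2, 4):
    print(k, unique_fraction(k))
```

One point matches dozens of users; four points are almost always unique, even in this toy population. Real mobility traces are far less random than this simulation, but the researchers found the same effect at national scale.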
EFF maintains a good page on the issues surrounding location privacy.
Our Internet Surveillance State
I'm going to start with three data points.
One: Some of the Chinese military hackers who were implicated in a broad set of attacks against the U.S. government and corporations were identified because they accessed Facebook from the same network infrastructure they used to carry out their attacks.
Two: Hector Monsegur, one of the leaders of the LulzSec hacker movement, was identified and arrested last year by the FBI. Although he practiced good computer security and used an anonymous relay service to protect his identity, he slipped up.
And three: Paula Broadwell, who had an affair with CIA director David Petraeus, similarly took extensive precautions to hide her identity. She never logged in to her anonymous e-mail service from her home network. Instead, she used hotel and other public networks when she e-mailed him. The FBI correlated hotel registration data from several different hotels -- and hers was the common name.
The Internet is a surveillance state. Whether we admit it to ourselves or not, and whether we like it or not, we're being tracked all the time. Google tracks us, both on its pages and on other pages it has access to. Facebook does the same; it even tracks non-Facebook users. Apple tracks us on our iPhones and iPads. One reporter used a tool called Collusion to track who was tracking him; 105 companies tracked his Internet use during one 36-hour period.
Increasingly, what we do on the Internet is being combined with other data about us. Unmasking Broadwell's identity involved correlating her Internet activity with her hotel stays. Everything we do now involves computers, and computers produce data as a natural by-product. Everything is now being saved and correlated, and many big-data companies make money by building up intimate profiles of our lives from a variety of sources.
Facebook, for example, correlates your online behavior with your purchasing habits offline. And there's more. There's location data from your cell phone, there's a record of your movements from closed-circuit TVs.
This is ubiquitous surveillance: All of us being watched, all the time, and that data being stored forever. This is what a surveillance state looks like, and it's efficient beyond the wildest dreams of George Orwell.
Sure, we can take measures to prevent this. We can limit what we search on Google from our iPhones, and instead use computer web browsers that allow us to delete cookies. We can use an alias on Facebook. We can turn our cell phones off and spend cash. But increasingly, none of it matters.
There are simply too many ways to be tracked. The Internet, e-mail, cell phones, web browsers, social networking sites, search engines: these have become necessities, and it's fanciful to expect people to simply refuse to use them just because they don't like the spying, especially since the full extent of such spying is deliberately hidden from us and there are few alternatives being marketed by companies that don't spy.
This isn't something the free market can fix. We consumers have no choice in the matter. All the major companies that provide us with Internet services are interested in tracking us. Visit a website and it will almost certainly know who you are; there are lots of ways to be tracked without cookies. Cell phone companies routinely undo the web's privacy protection. One experiment at Carnegie Mellon took real-time videos of students on campus and was able to identify one-third of them by comparing their photos with publicly available tagged Facebook photos.
Maintaining privacy on the Internet is nearly impossible. If you forget even once to enable your protections, or click on the wrong link, or type the wrong thing, you've permanently attached your name to whatever anonymous service you're using. Monsegur slipped up once, and the FBI got him. If the director of the CIA can't maintain his privacy on the Internet, we've got no hope.
In today's world, governments and corporations are working together to keep things that way. Governments are happy to use the data corporations collect -- occasionally demanding that they collect more and save it longer -- to spy on us. And corporations are happy to buy data from governments. Together the powerful spy on the powerless, and they're not going to give up their positions of power, despite what the people want.
Fixing this requires strong government will, but governments are just as punch-drunk on data as the corporations. Slap-on-the-wrist fines notwithstanding, no one is agitating for better privacy laws.
So, we're done. Welcome to a world where Google knows exactly what sort of porn you all like, and more about your interests than your spouse does. Welcome to a world where your cell phone company knows exactly where you are all the time. Welcome to the end of private conversations, because increasingly your conversations are conducted by e-mail, text, or social networking sites.
And welcome to a world where all of this, and everything else that you do or is done on a computer, is saved, correlated, studied, passed around from company to company without your knowledge or consent; and where the government accesses it at will without a warrant.
Welcome to an Internet without privacy, and we've ended up here with hardly a fight.
This essay previously appeared on CNN.com, where it got 23,000 Facebook likes and 2,500 tweets -- by far the most widely distributed essay I've ever written.
EDITED TO ADD (3/26): More commentary.
EDITED TO ADD (3/28): This Communist commentary seems to be mostly semantic drivel, but parts of it are interesting. The author doesn't seem to have a problem with State surveillance, but he thinks the incentives that cause businesses to use the same tools should be revisited. This seems just as wrong-headed as the Libertarians who have no problem with corporations using surveillance tools, but don't want governments to use them.
Friday Squid Blogging: Giant Squid Genetics
Despite looking very different from each other and being distributed across the world's oceans, all giant squid are the same species. There's also not a lot of genetic diversity.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Changes to the Blog
I have made a few changes to my blog that I'd like to talk about.
The first is the various buttons associated with each post: a Facebook Like button, a Retweet button, and so on. These buttons are ubiquitous on the Internet now. We publishers like them because they make it easier for our readers to share our content.
The problem is that these buttons use images, scripts, and/or iframes hosted on the social media site's own servers. This is partly for webmasters' convenience; it makes adoption as easy as copy-and-pasting a few lines of code. But it also gives Facebook, Twitter, Google, and so on a way to track you -- even if you don't click on the button. Remember that: if you see sharing buttons on a webpage, that page is almost certainly being tracked by social media sites or a service like AddThis. Or both.
What I'm using instead is SocialSharePrivacy, which was created by the German website Heise Online and adapted by Mathias Panzenböck. The page shows a grayed-out mockup of a sharing button. You click once to activate it, then a second time to share the page. If you don't click, nothing is loaded from the social media site, so it can't track your visit. If you don't care about the privacy issues, you can click on the Settings icon and enable the sharing buttons permanently.
It's not a perfect solution -- two clicks instead of one -- but it's much more privacy-friendly.
(If you're thinking of doing something similar on your own site, another option to consider is shareNice. ShareNice can be copied to your own webserver; but if you prefer, you can use their hosted version, which makes it as easy to install as AddThis. The difference is that shareNice doesn't set cookies or even log IP addresses -- though you'll have to trust them on the logging part. The problem is that it can't display the aggregate totals.)
The second change is the search function. I changed the site's search engine from Google to DuckDuckGo, which doesn't even store IP addresses. Again, you have to trust them on that, but I'm inclined to.
The third change is to the feed. Starting now, if you click the feed icon in the right-hand column of my blog, you'll be subscribing to a feed that's hosted locally on schneier.com, instead of one produced by Google's Feedburner service. Again, this reduces the amount of data Google collects about you. Over the next couple of days, I will transition existing subscribers off of Feedburner, but since some of you are subscribed directly to a Feedburner URL, I recommend resubscribing to the new link to be sure. And if by chance you have trouble with the new feed, this legacy link will always point to the Feedburner version.
Fighting against the massive amount of surveillance data collected about us as we surf the Internet is hard, and possibly even fruitless. But I think it's important to try.
FBI Secretly Spying on Cloud Computer Users
If you've been following along, you know that a U.S. District Court recently ruled National Security Letters unconstitutional. Not that this changes anything yet.
Text Message Retention Policies
The FBI wants cell phone carriers to store SMS messages for a long time, enabling them to conduct surveillance backwards in time. Nothing new there -- data retention laws are being debated in many countries around the world -- but this was something I did not know:
Wireless providers' current SMS retention policies vary. An internal Justice Department document (PDF) that the ACLU obtained through the Freedom of Information Act shows that, as of 2010, AT&T, T-Mobile, and Sprint did not store the contents of text messages. Verizon did for up to five days, a change from its earlier no-logs-at-all position, and Virgin Mobile kept them for 90 days. The carriers generally kept metadata such as the phone numbers associated with the text for 90 days to 18 months; AT&T was an outlier, keeping it for as long as seven years.
That second set of data is from 2009.
Leaks seem to be the primary way we learn how our privacy is being violated these days -- we need more of them.
EDITED TO ADD (4/12): Discussion of Canadian policy.
When Technology Overtakes Security
A core, not side, effect of technology is its ability to magnify power and multiply force -- for both attackers and defenders. One side creates ceramic handguns, laser-guided missiles, and new-identity theft techniques, while the other side creates anti-missile defense systems, fingerprint databases, and automatic facial recognition systems.
The problem is that it's not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They're more nimble and adaptable than defensive institutions like police forces. They're not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side -- it's easier to destroy something than it is to prevent, defend against, or recover from that destruction.
For the most part, though, society still wins. The bad guys simply can't do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?
I don't think it can.
Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious...and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.
This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.
As the destructive power of individual actors and fringe groups increases, so do the calls for -- and society's acceptance of -- increased security.
Traditional security largely works "after the fact". We tend not to ban or restrict the objects that can do harm; instead, we punish the people who do harm with objects. There are exceptions, of course, but they're exactly that: exceptions. This system works as long as society can tolerate the destructive effects of those objects (for example, allowing people to own baseball bats and arresting them after they use them in a riot is only viable if society can tolerate the potential for riots).
When that isn't enough, we resort to "before-the-fact" security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.
But these measures work better at keeping dangerous technologies out of the hands of amateurs than at keeping them out of the hands of professionals.
And in the global interconnected world we live in, they're not anywhere close to foolproof. Still, a climate of fear causes governments to try. Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We're already almost entirely living in a surveillance state, though we don't realize it or won't admit it to ourselves. This will only get worse as technology advances; today's Ph.D. theses are tomorrow's high-school science-fair projects.
Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and Minority Report-like preemptive security will become the norm. We can debate the effectiveness of various security measures in different circumstances. But the problem isn't that these security measures won't work -- even as they shred our freedoms and liberties -- it's that no security is perfect.
Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We'll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.
As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn't kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then someone goes off and destroys us anyway?
If security won't work in the end, what is the solution?
Resilience -- building systems able to survive unexpected and devastating attacks -- is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.
If the U.S. can survive the destruction of an entire city -- witness New Orleans after Hurricane Katrina or even New York after Sandy -- we need to start acting like it, and planning for it. Still, it's hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don't know how to adapt any defenses -- including resilience -- fast enough.
We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We're going to have to figure this out if we want to survive, and I'm not sure how many decades we have left.
This essay originally appeared on Wired.com.
Lessons From the FBI's Insider Threat Program
This article is worth reading. One bit:
For a time the FBI put its back into coming up with predictive analytics to help predict insider behavior prior to malicious activity. Rather than coming up with a powerful tool to stop criminals before they did damage, the FBI ended up with a system that was statistically worse than random at ferreting out bad behavior. Compared to the predictive capabilities of Punxsutawney Phil, the groundhog of Groundhog Day, that system did a worse job of predicting malicious insider activity, Reidy says.
The list of countries with servers running FinSpy is now Australia, Bahrain, Bangladesh, Britain, Brunei, Canada, the Czech Republic, Estonia, Ethiopia, Germany, India, Indonesia, Japan, Latvia, Malaysia, Mexico, Mongolia, Netherlands, Qatar, Serbia, Singapore, Turkmenistan, the United Arab Emirates, the United States and Vietnam.
It's sold by the British company Gamma Group.
EDITED TO ADD (3/20): The report.
Nice summary article on the state-sponsored Gauss malware.
A 1962 Speculative Essay on Computers and Intelligence
From the CIA archives: Orrin Clotworthy, "Some Far-out Thoughts on Computers," Studies in Intelligence v. 6 (1962).
EDITED TO ADD (4/12): A transcript of the original, scanned article.
Audacious daytime prison escape by helicopter.
The escapees have since been recaptured.
EDITED TO ADD (4/12): Other helicopter prison escapes.
Friday Squid Blogging: WTF, Evolution?
WTF, Evolution? is a great blog, and they finally mentioned squid.
xkcd on PGP
How security interacts with users.
Stuxnet is Much Older than We Thought
What's impressive is how advanced the cyberattack capabilities of the U.S. and/or Israel were back then.
Interesting law paper: "The Implausibility of Secrecy," by Mark Fenster.
Abstract: Government secrecy frequently fails. Despite the executive branch's obsessive hoarding of certain kinds of documents and its constitutional authority to do so, recent high-profile events -- among them the WikiLeaks episode, the Obama administration's celebrated leak prosecutions, and the widespread disclosure by high-level officials of flattering confidential information to sympathetic reporters -- undercut the image of a state that can classify and control its information. The effort to control government information requires human, bureaucratic, technological, and textual mechanisms that regularly founder or collapse in an administrative state, sometimes immediately and sometimes after an interval. Leaks, mistakes, open sources -- each of these constitutes a path out of the government's informational clutches. As a result, permanent, long-lasting secrecy of any sort and to any degree is costly and difficult to accomplish.
Nationalism on the Internet
For technology that was supposed to ignore borders, bring the world closer together, and sidestep the influence of national governments, the Internet is fostering an awful lot of nationalism right now. We've started to see increased concern about the country of origin of IT products and services: U.S. companies are worried about hardware from China; European companies are worried about cloud services in the U.S.; no one is sure whether to trust hardware and software from Israel; Russia and China might each be building their own operating systems out of concern about using foreign ones.
I see this as an effect of all the cyberwar saber-rattling that's going on right now. The major nations of the world are in the early years of a cyberwar arms race, and we're all being hurt by the collateral damage.
A commentator on Al Jazeera makes a similar point.
Our nationalist worries have recently been fueled by a media frenzy surrounding attacks from China. These attacks aren't new -- cyber-security experts have been writing about them for at least a decade, and the popular media reported about similar attacks in 2009 and again in 2010 -- and the current allegations aren't even very different than what came before. This isn't to say that the Chinese attacks aren't serious. The country's espionage campaign is sophisticated, and ongoing. And because they're in the news, people are understandably worried about them.
But it's not just China. International espionage works in both directions, and I'm sure we are giving just as good as we're getting. China is certainly worried about the U.S. Cyber Command's recent announcement that it was expanding from 900 people to almost 5,000, and the NSA's massive new data center in Utah. The U.S. even admits that it can spy on non-U.S. citizens freely.
The fact is that governments and militaries have discovered the Internet; everyone is spying on everyone else, and countries are ratcheting up offensive actions against other countries.
At the same time, many nations are demanding more control over the Internet within their own borders. They reserve the right to spy and censor, and to limit the ability of others to do the same. This idea is now being called the "cyber sovereignty movement," and gained traction at the International Telecommunications Union meeting last December in Dubai. One analyst called that meeting the "Internet Yalta," where the Internet split between liberal-democratic and authoritarian countries. I don't think he's exaggerating.
Not that this is new, either. Remember 2010, when the governments of the UAE, Saudi Arabia, and India demanded that RIM give them the ability to spy on BlackBerry PDAs within their borders? Or last year, when Syria used the Internet to surveil its dissidents? Information technology is a surprisingly powerful tool for oppression: not just surveillance, but censorship and propaganda as well. And countries are getting better at using that tool.
But remember: none of this is cyberwar. It's all espionage, something that's been going on between countries ever since countries were invented. What moves public opinion is less the facts and more the rhetoric, and the rhetoric of war is what we're hearing.
The result of all this saber-rattling is a severe loss of trust, not just amongst nation-states but between people and nation-states. We know we're nothing more than pawns in this game, and we figure we'll be better off sticking with our own country.
Unfortunately, both the reality and the rhetoric play right into the hands of the military and corporate interests that are behind the cyberwar arms race in the first place. There is an enormous amount of power at stake here: not only power within governments and militaries, but power and profit amongst the corporations that supply the tools and infrastructure for cyber-attack and cyber-defense. The more we believe we are "at war" and believe the jingoistic rhetoric, the more willing we are to give up our privacy, freedoms, and control over how the Internet is run.
Arms races are fueled by two things: ignorance and fear. We don't know the capabilities of the other side, and we fear that they are more capable than we are. So we spend more, just in case. The other side, of course, does the same. That spending will result in more cyber weapons for attack and more cyber-surveillance for defense. It will result in more government control over the protocols of the Internet, and less free-market innovation over the same. At its worst, we might be about to enter an information-age Cold War: one with more than two "superpowers." Aside from this being a bad future for the Internet, this is inherently destabilizing. It's just too easy for this amount of antagonistic power and advanced weaponry to get used: for a mistaken attribution to be reacted to with a counterattack, for a misunderstanding to become a cause for offensive action, or for a minor skirmish to escalate into a full-fledged cyberwar.
Nationalism is rife on the Internet, and it's getting worse. We need to damp down the rhetoric and -- more importantly -- stop believing the propaganda from those who profit from this Internet nationalism. Those who are beating the drums of cyberwar don't have the best interests of society, or the Internet, at heart.
This essay previously appeared at Technology Review.
Security Theater on the Wells Fargo Website
Click on the "Establishing secure connection" link at the top of this page. It's a Wells Fargo page that displays a progress bar with a bunch of security phrases -- "Establishing Secure Connection," "Sending credentials," "Building Secure Environment," and so on -- and closes after a few seconds. It's complete security theater; it doesn't actually do anything but make account holders feel better.
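The whole thing could be reproduced in a few lines. Here's a minimal sketch (hypothetical, not Wells Fargo's actual code) of what such a widget amounts to: it displays reassuring phases and pauses, but performs no cryptographic or network operation whatsoever -- any real TLS handshake was finished by the browser before a page script could even run.

```python
import time

# The reassuring phrases the user sees, copied from the page.
PHASES = [
    "Establishing Secure Connection",
    "Sending credentials",
    "Building Secure Environment",
]

def security_theater(delay: float = 0.0) -> list[str]:
    """Show each phase in turn. The sleep is the only 'work' done."""
    shown = []
    for phase in PHASES:
        shown.append(phase)   # what the user sees
        time.sleep(delay)     # purely cosmetic pause; nothing is secured
    return shown

if __name__ == "__main__":
    for phase in security_theater(delay=0.5):
        print(phase, "... done")
```

The function's return value is just the list of phrases it displayed; no state related to the connection's security is touched, which is the definition of security theater.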
Hacking Best-seller Lists
It turns out that you can buy a position for your book on best-seller lists.
Cisco IP Phone Hack
All current Cisco IP phones, including the ones seen on desks in the White House and aboard Air Force One, have a vulnerability that allows hackers to take complete control of the devices.
"The Logic of Surveillance"
Surveillance is part of the system of control. "The more surveillance, the more control" is the majority belief amongst the ruling elites. Automated surveillance requires fewer "watchers", and since the watchers cannot watch all the surveillance, long term storage increases the ability to find some "crime" anyone is guilty of.
Dead Drop from the 1870s
De Blowitz was staying at the Kaiserhof. Each day his confederate went there for lunch and dinner. The two never acknowledged one another, but they hung their hats on neighboring pegs. At the end of the meal the confederate departed with de Blowitz's hat, and de Blowitz innocently took the confederate's. The communications were hidden in the hat's lining.
Is Software Security a Waste of Money?
I worry that comments about the value of software security made at the RSA Conference last week will be taken out of context. John Viega did not say that software security wasn't important. He said:
For large software companies or major corporations such as banks or health care firms with large custom software bases, investing in software security can prove to be valuable and provide a measurable return on investment, but that's probably not the case for smaller enterprises, said John Viega, executive vice president of products, strategy and services at SilverSky and an authority on software security. Viega, who formerly worked on product security at McAfee and as a consultant at Cigital, said that when he was at McAfee he could not find a return on investment for software security.
I agree with that. For small companies, it's not worth worrying much about software security. But for large software companies, it's vital.
Friday Squid Blogging: Squid/Whale Yin-Yang
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Ross Anderson's Security Engineering Online
The second edition of Ross Anderson's fantastic book, Security Engineering, is now free online. Required reading for any security engineer.
Oxford University Blocks Google Docs
Google Docs is being used for phishing. Oxford University felt that it had to block the service because Google isn't responding to takedown requests quickly enough.
Think about this in light of my essay on feudal security. Oxford University has to trust that Google will act in its best interest, and has no other option if it doesn't.
How the FBI Intercepts Cell Phone Data
Good article on "Stingrays," which the FBI uses to monitor cell phone data. Basically, they trick the phone into joining a fake network. And, since cell phones inherently trust the network -- as opposed to computers which inherently do not trust the Internet -- it's easy to track people and collect data. There are lots of questions about whether or not it is illegal for the FBI to do this without a warrant. We know that the FBI has been doing this for almost twenty years, and that they know that they're on shaky legal ground.
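The trust asymmetry can be made concrete with a toy model (this is illustrative Python, not real GSM/LTE code): in the classic GSM design the handset authenticates itself to the network, but never authenticates the tower, so it simply camps on whichever base station advertises the strongest signal.

```python
from dataclasses import dataclass

@dataclass
class Tower:
    name: str
    signal_dbm: int   # higher (less negative) = stronger signal
    legitimate: bool  # known only to us, never to the phone

def camp_on(towers: list[Tower]) -> Tower:
    """Pick the strongest tower. Note: `legitimate` is never checked,
    because the phone has no way to check it."""
    return max(towers, key=lambda t: t.signal_dbm)

towers = [
    Tower("carrier-cell-A", -95, True),
    Tower("carrier-cell-B", -90, True),
    Tower("stingray", -60, False),  # nearby fake tower, loud on purpose
]

chosen = camp_on(towers)
# The phone cannot tell it just joined the surveillance device.
```

A Stingray wins this contest simply by being nearby and transmitting at high power; no vulnerability needs to be exploited, because trusting the network is the design.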
The latest release, amounting to some 300 selectively redacted pages, not only suggests that sophisticated cellphone spy gear has been widely deployed since the mid-'90s. It reveals that the FBI conducted training sessions on cell tracking techniques in 2007 and around the same time was operating an internal "secret" website with the purpose of sharing information and interactive media about "effective tools" for surveillance. There are also some previously classified emails between FBI agents that show the feds joking about using the spy gear. "Are you smart enough to turn the knobs by yourself?" one agent asks a colleague.
Of course, if a policeman actually has your phone, he can suck pretty much everything out of it -- again, without a warrant.
Using a single "data extraction session" they were able to pull:
The NSA's Ragtime Surveillance Program and the Need for Leaks
A new book reveals details about the NSA's Ragtime surveillance program:
A book published earlier this month, "Deep State: Inside the Government Secrecy Industry," contains revelations about the NSA's snooping efforts, based on information gleaned from NSA sources. According to a detailed summary by Shane Harris at the Washingtonian yesterday, the book discloses that a codename for a controversial NSA surveillance program is "Ragtime" -- and that as many as 50 companies have apparently participated, by providing data as part of a domestic collection initiative.
The whole article is interesting, as is the detailed summary, but I thought this comment was particularly important:
The fact that NSA keeps applying separate codenames to programs that inevitably are closely intertwined is an important clue to what's really going on. The government wants to pretend they are discrete surveillance programs in order to conceal, especially from Congressional oversight, how monstrous they are in sum. So they'll give a separate briefing on Trailblazer or what have you, and for an hour everybody in the room acts as if the whole thing is carefully circumscribed and under control. And then if somebody ever finds out about another program (say 'Moonraker' or what have you), then they go ahead and offer a similarly reassuring briefing on that. And nobody in Congress has to acknowledge that the Total Information Awareness Program that was exposed and met with howls of protest...actually wasn't shut down at all, just went back under the radar after being renamed (and renamed and renamed).
He's right. The real threat isn't any one particular secret program, it's all of them put together. And by dividing up the programs into different code names, the big picture remains secret and we only ever get glimpses of it.
We need whistleblowers. Much of the information we have about the NSA's and the Justice Department's plans and capabilities -- think Echelon, Total Information Awareness, and the post-9/11 telephone eavesdropping program -- is over a decade old.
Frank Rieger of the Chaos Computer Club got it right in 2006:
We also need to know how the intelligence agencies work today. It is of highest priority to learn how the "we rather use backdoors than waste time cracking your keys"-methods work in practice on a large scale and what backdoors have been intentionally built into or left inside our systems....
The prosecution will likely not accept Manning's guilty plea to lesser offenses as the final word. When the case goes to trial in June, they will try to prove that Manning is guilty of a raft of more serious offenses. Most aggressive and novel among these harsher offenses is the charge that by giving classified materials to WikiLeaks Manning was guilty of "aiding the enemy." That's when the judge will have to decide whether handing over classified materials to ProPublica or the New York Times, knowing that Al Qaeda can read these news outlets online, is indeed enough to constitute the capital offense of "aiding the enemy."
A country that's much less free and much less secure.
Al Qaeda Document on Avoiding Drone Strikes
3 – Spreading the reflective pieces of glass on a car or on the roof of the building.
Marketing at the RSA Conference
Marcus Ranum has an interesting screed on "booth babes" in the RSA Conference exhibition hall:
I'm not making a moral argument about sexism in our industry or the objectification of women. I could (and probably should) but it's easier to just point out the obvious: the only customers that will be impressed by anyone's ability to hire pretty models to work their booth aren't going to be the ones signing the big purchase orders. And, it's possible that they're thinking your sales team are going to be a bunch of testosterone-laden assholes who'd be better off selling used tires. If some company wants to appeal to the consumer that's going to jump at the T&A maybe they should relocate up the street to O'Farrell where they can include a happy ending with their product demo.
Mark Rothman on the same topic.
EDITED TO ADD (3/11): Winn Schwartau makes a similar point.
Technologies of Surveillance
It's a new day for the New York Police Department, with technology increasingly informing the way cops do their jobs. With innovation comes new possibilities but also new concerns.
For one, the NYPD is testing a new type of security apparatus that uses terahertz radiation to detect guns under clothing from a distance. As Police Commissioner Ray Kelly explained to the Daily News back in January, "If something is obstructing the flow of that radiation -- a weapon, for example -- the device will highlight that object."
Ignore, for a moment, the glaring constitutional concerns, which make the stop-and-frisk debate pale in comparison: virtual strip-searching, evasion of probable cause, potential racial profiling. Organizations like the American Civil Liberties Union are all over those, even though their opposition probably won't make a difference. We're scared of both terrorism and crime, even as the risks decrease; and when we're scared, we're willing to give up all sorts of freedoms to assuage our fears. Often, the courts go along.
A more pressing question is the effectiveness of technologies that are supposed to make us safer. These include the NYPD's Domain Awareness System, developed by Microsoft, which aims to integrate massive quantities of data to alert cops when a crime may be taking place. Other innovations are surely in the pipeline, all promising to make the city safer. But are we being sold a bill of goods?
For example, press reports make the gun-detection machine look good. We see images from the camera that pretty clearly show a gun outlined under someone's clothing. From that, we can imagine how this technology can spot gun-toting criminals as they enter government buildings or terrorize neighborhoods. Given the right inputs, we naturally construct these stories in our heads. The technology seems like a good idea, we conclude.
The reality is that we reach these conclusions much in the same way we decide that, say, drinking Mountain Dew makes you look cool. These are, after all, the products of for-profit companies, pushed by vendors looking to make sales. As such, they're marketed no less aggressively than soda pop and deodorant. Those images of criminals with concealed weapons were carefully created both to demonstrate maximum effectiveness and to push our fear buttons. These companies deliberately craft stories of their effectiveness, both through advertising and through placement in television shows and movies, where police are often shown using high-powered tools to catch high-value targets with minimum complication.
The truth is that many of these technologies are nowhere near as reliable as claimed. They end up costing us gazillions of dollars and open the door for significant abuse. Of course, the vendors hope that by the time we realize this, they're too embedded in our security culture to be removed.
The current poster child for this sort of morass is the airport full-body scanner. Rushed into airports after the underwear bomber Umar Farouk Abdulmutallab nearly blew up a Northwest Airlines flight in 2009, they made us feel better, even though they don't work very well and, ironically, wouldn't have caught Abdulmutallab with his underwear bomb. Both the Transportation Security Administration and vendors repeatedly lied about their effectiveness, whether they stored images, and how safe they were. In January, finally, backscatter X-ray scanners were removed from airports because the company that made them couldn't sufficiently blur the images so they didn't show travelers naked. Now, only millimeter-wave full-body scanners remain.
Another example is closed-circuit television (CCTV) cameras. These have been marketed as a technological solution to both crime and understaffed police and security organizations. London, for example, is rife with them, and New York has plenty of its own. To many, it seems apparent that they make us safer, despite cries of Big Brother. The problem is that in study after study, researchers have concluded that they don't.
Counterterrorist data mining and fusion centers: nowhere near as useful as those selling the technologies claimed. It's the same with DNA testing and fingerprint technologies: both are far less accurate than most people believe. Even torture has been oversold as a security system -- this time by a government instead of a company -- despite decades of evidence that it doesn't work and makes us all less safe.
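Part of why DNA and fingerprint matching get oversold is simple base-rate arithmetic. The numbers below are made up for illustration (not real forensic error rates), but they show how even a match test with a tiny false-positive rate gets swamped by false alarms when it's run against a large database that contains, at most, one true source:

```python
# Illustrative base-rate arithmetic with assumed numbers.
database_size = 1_000_000      # profiles searched
false_positive_rate = 1e-5     # chance an innocent profile matches
true_positive_rate = 1.0       # assume the real source always matches

# Innocent profiles expected to match anyway:
expected_false_matches = (database_size - 1) * false_positive_rate

# Probability that a reported "match" is actually the true source:
prob_match_is_true = true_positive_rate / (true_positive_rate + expected_false_matches)
```

With these assumed numbers, roughly ten innocent people match alongside the one true source, so a "match" alone identifies the right person less than a tenth of the time -- far weaker evidence than the error rate in isolation suggests.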
It's not that these technologies are totally useless. It's that they're expensive, and none of them is a panacea. Maybe there's a use for a terahertz radar, and maybe the benefits of the technology are worth the costs. But we should not forget that there's a profit motive at work, too.
EDITED TO ADD (2/13): IBM's version of this massive-data policing system is being tested in Rio de Janeiro.
New Internet Porn Scam
I hadn't heard of this one before. In New Zealand, people viewing adult websites -- it's unclear whether these are honeypot sites, or malware that notices the site being viewed -- get a pop-up message claiming it's from the NZ Police and demanding payment of an instant fine for viewing illegal pornography.
EDITED TO ADD (2/12): There's a Japanese variant of this called "one-click fraud."
Getting Security Incentives Right
One of the problems with motivating proper security behavior within an organization is that the incentives are all wrong. It doesn't matter how much management tells employees that security is important, employees know when it really isn't -- when getting the job done cheaply and on schedule is much more important.
It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren't serious.
Similarly, there's supposedly an old Chinese proverb that goes "hang one, warn a thousand." Or to put it another way, we're really good at risk management. And there's John Byng, whose execution gave rise to the Voltaire quote (originally in French): "in this country, it is good to kill an admiral from time to time, in order to encourage the others."
I thought of all this when I read about the new security procedures surrounding the upcoming papal election:
According to the order, which the Vatican made available in English on Monday afternoon, those few who are allowed into the secret vote to act as aides will be required to take an oath of secrecy.
Excommunication is like being fired, only it lasts for eternity.
I'm not optimistic about the College of Cardinals being able to maintain absolute secrecy during the election, because electronic devices have become so small, and electronic communications so ubiquitous. Unless someone wins on one of the first ballots -- a 2/3 majority is required to elect the next pope, so if the various factions entrench they could be at it for a while -- there are going to be leaks. Perhaps accidental, perhaps strategic: these cardinals are fallible men, after all.
Friday Squid Blogging: Another Squid Cartoon
Me on "Virtually Speaking"
Last week I was on "Virtually Speaking."
Phishing Has Gotten Very Good
This isn't phishing; it's not even spear phishing. It's laser-guided precision phishing:
One of the leaked diplomatic cables referred to one attack via email on US officials who were on a trip in Copenhagen to debate issues surrounding climate change.
Also, a new technique:
"It is known as waterholing," he explained. "Which basically involves trying to second guess where the employees of the business might actually go on the web."
I wrote this over a decade ago: "Only amateurs attack machines; professionals target people." And the professionals are getting better and better.
This is the problem. Against a sufficiently skilled, funded, and motivated adversary, no network is secure. Period. Attack is much easier than defense, and the reason we've been doing so well for so long is that most attackers are content to attack the most insecure networks and leave the rest alone.
It's a matter of motive. To a criminal, all files of credit card numbers are equally good, so your security depends in part on how much better or worse you are than those around you. If the attacker wants you specifically -- as in the examples above -- relative security is irrelevant. What matters is whether or not your security is better than the attackers' skill. And so often it's not.
I am reminded of this great quote from former NSA Information Assurance Director Brian Snow: "Your cyber systems continue to function and serve you not due to the expertise of your security staff but solely due to the sufferance of your opponents."
Actually, that whole essay is worth reading. It says much of what I've been saying, but it's nice to read someone else say it.
One of the often unspoken truths of security is that large areas of it are currently unsolved problems. We don't know how to write large applications securely yet. We don't know how to secure entire organizations with reasonable cost effective measures yet. The honest answer to almost any security question is: "it's complicated!". But there is no shortage of gungho salesmen in expensive suits peddling their security wares and no shortage of clients willing to throw money at the problem (because doing something must be better than doing nothing, right?)