March 15, 2008
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0803.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
In this issue:
- Privacy and Power
- Israel Implementing IFF System for Commercial Aircraft
- Third Parties Controlling Information
- Amtrak to Start Passenger Screening
- Schneier/BT Counterpane News
- The Doghouse: Drecom
- Security Products: Suites vs. Best-of-Breed
- Comments from Readers
When I write and speak about privacy, I am regularly confronted with the mutual disclosure argument. Explained in books like David Brin's "The Transparent Society," the argument goes something like this: In a world of ubiquitous surveillance, you'll know all about me, but I will also know all about you. The government will be watching us, but we'll also be watching the government. This is different than before, but it's not automatically worse. And because I know your secrets, you can't use my secrets as a weapon against me.
This might not be everybody's idea of utopia -- and it certainly doesn't address the inherent value of privacy -- but this theory has a glossy appeal, and could easily be mistaken for a way out of the problem of technology's continuing erosion of privacy. Except it doesn't work, because it ignores the crucial dissimilarity of power.
You cannot evaluate the value of privacy and disclosure unless you account for the relative power levels of the discloser and the disclosee.
If I disclose information to you, your power with respect to me increases. One way to address this power imbalance is for you to similarly disclose information to me. We both have less privacy, but the balance of power is maintained. But this mechanism fails utterly if you and I have different power levels to begin with.
An example will make this clearer. You're stopped by a police officer, who demands to see identification. Divulging your identity will give the officer enormous power over you: He or she can search police databases using the information on your ID; he or she can create a police record attached to your name; he or she can put you on this or that secret terrorist watch list. Asking to see the officer's ID in return gives you no comparable power over him or her. The power imbalance is too great, and mutual disclosure does not make it OK.
You can think of your existing power as the exponent in an equation that determines the value, to you, of more information. The more power you have, the more additional power you derive from the new data.
Another example: When your doctor says "take off your clothes," it makes no sense for you to say, "You first, doc." The two of you are not engaging in an interaction of equals.
This is the principle that should guide decision-makers when they consider installing surveillance cameras or launching data-mining programs. It's not enough to open the efforts to public scrutiny. All aspects of government work best when the relative power between the governors and the governed remains as small as possible -- when liberty is high and control is low. Forced openness in government reduces the relative power differential between the two, and is generally good. Forced openness in laypeople increases the relative power, and is generally bad.
Seventeen-year-old Erik Crespo was arrested in 2005 in connection with a shooting in a New York City elevator. There's no question that he committed the shooting; it was captured on surveillance-camera videotape. But he claimed that while being interrogated, Detective Christopher Perino tried to talk him out of getting a lawyer, and told him that he had to sign a confession before he could see a judge.
Perino denied, under oath, that he ever questioned Crespo. But Crespo had received an MP3 player as a Christmas gift, and surreptitiously recorded the questioning. The defense brought a transcript and CD into evidence. Shortly thereafter, the prosecution offered Crespo a better deal than originally proffered (seven years rather than 15). Crespo took the deal, and Perino was separately indicted on charges of perjury.
Without that recording, it was the detective's word against Crespo's. And who would believe a murder suspect over a New York City detective? That power imbalance was reduced only because Crespo was smart enough to press the "record" button on his MP3 player. Why aren't all interrogations recorded? Why don't defendants have the right to those recordings, just as they have the right to an attorney? Police routinely record traffic stops from their squad cars for their own protection; that video record shouldn't stop once the suspect is no longer a threat.
Cameras make sense when trained on police, and in offices where lawmakers meet with lobbyists, and wherever government officials wield power over the people. Open-government laws, giving the public access to government records and meetings of governmental bodies, also make sense. These all foster liberty.
Ubiquitous surveillance programs that affect everyone without probable cause or warrant, like the National Security Agency's warrantless eavesdropping programs or various proposals to monitor everything on the internet, foster control. And no one is safer in a political system of control.
The inherent value of privacy:
Cameras catch a policeman:
Security and control:
This essay originally appeared on Wired.com.
Commentary/rebuttal by David Brin.
Israel is implementing an IFF (identification, friend or foe) system for commercial aircraft, designed to differentiate legitimate planes from terrorist-controlled planes.
The news article implies that it's a basic challenge-and-response system. Ground control issues some kind of alphanumeric challenge to the plane. The pilot types the challenge into some hand-held computer device, and reads back the reply. Authentication is achieved by 1) physical possession of the device, and 2) typing a legitimate PIN into the device to activate it.
The article talks about a distress mode, where the pilot signals that a terrorist is holding a gun to his head. Likely, that's done by typing a special distress PIN into the device, and reading back whatever the screen displays.
The military has had this sort of system -- first paper-based, and eventually computer-based -- for decades. The critical issue with using this on commercial aircraft is how to deal with user error. The system has to be easy enough to use, and the parts hard enough to lose, that there won't be a lot of false alarms.
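The article doesn't describe the actual protocol, so here is an illustrative sketch only, with all names and parameters assumed: a keyed challenge-response in which the device computes a short MAC over the challenge, and a special duress PIN silently switches the response into a distress mode that ground control can distinguish but a hijacker cannot.

```python
import hmac
import hashlib

# Hypothetical per-aircraft secret, provisioned on the ground.
DEVICE_KEY = b"per-aircraft secret provisioned on the ground"

def respond(challenge: str, pin: str, duress_pin: str = "9999") -> str:
    """Pilot types the challenge and a PIN; device displays a short reply.

    The duress PIN produces a valid-looking reply that is bound to a
    different mode, so the hijacker sees nothing unusual.
    """
    mode = b"DURESS" if pin == duress_pin else b"OK"
    mac = hmac.new(DEVICE_KEY, challenge.encode() + b"|" + mode,
                   hashlib.sha256).hexdigest()
    return mac[:8].upper()  # short enough to read back over the radio

def verify(challenge: str, reply: str) -> str:
    """Ground control recomputes both possible replies and classifies."""
    for mode in (b"OK", b"DURESS"):
        expected = hmac.new(DEVICE_KEY, challenge.encode() + b"|" + mode,
                            hashlib.sha256).hexdigest()[:8].upper()
        if hmac.compare_digest(expected, reply):
            return mode.decode()
    return "INVALID"
```

Note how this captures the two authentication factors the article mentions: possession of the device (which holds the key) and knowledge of a PIN (which selects the mode). The user-error problem is visible too: a mistyped challenge or a misread reply yields "INVALID," which is indistinguishable from an imposter.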
A sonic blaster weapon:
Note to the TSA: The inventor has had no problems bringing this thing onto airplanes.
This is a story about petty crime and identity theft, but also a fascinating and impressive story about social engineering. It works even in places that take security seriously.
Every few years, the stupid notion of benevolent worms shows up. This time it was a group of Microsoft researchers from the UK:
This story is a year and a half old, but the lessons -- about spending money on the wrong security threats -- are still good:
There are a couple of interesting things about the hijacking in New Zealand last month. First, it was a traditional hijacking. Remember after 9/11 when people said that the era of airplane hijacking was over, that it would no longer be possible to hijack an airplane and demand a ransom or demand passage to some exotic location? Turns out that's just not true; there still can be traditional non-terrorist hijackings.
And even more interesting, the media coverage reflected that. Read the links above. They're calm and reasoned. There's no mention of the T-word. We're not all cautioned that we're going to die. If anything, they're recommending that everyone not overreact. Refreshing, really.
More progress: a whole article about a bomb in Times Square without ever mentioning the T-word.
Healthcare records are awfully insecure, and there are all sorts of threats from criminals, but I think the national security angle is just hyperbole.
The U.S. post office is building a database that will allow people to track commercial mail through the system.
What the article doesn't discuss is that now the government will have a database showing which businesses everyone gets mail from.
Cold-boot attack against disk encryption: a very clever hardware attack that recovers keys from DRAM:
There is a general security problem illustrated here: it is very difficult to secure data when the attacker has physical control of the machine the data is stored on.
How-to, with pictures:
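The researchers' actual key-finding step searches memory images for the distinctive structure of AES key schedules. As a much cruder illustration of the same idea (names and thresholds are my own, not from the paper), this sketch flags high-entropy regions of a memory dump as candidate key material, since keys stand out statistically against mostly structured data:

```python
import math
from collections import Counter

def entropy(block: bytes) -> float:
    """Shannon entropy of a byte block, in bits per byte."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def candidate_keys(image: bytes, window: int = 256, threshold: float = 6.5):
    """Slide over the dump in 16-byte steps; yield offsets of windows
    whose entropy is high enough to suggest key material."""
    for off in range(0, len(image) - window, 16):
        if entropy(image[off:off + window]) > threshold:
            yield off
```

A real tool would then test each candidate against the redundancy of the AES key expansion, which also lets it correct the bit decay the paper measures; an entropy scan alone produces false positives on any compressed or encrypted data in RAM.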
New cryptanalysis of A5/1 (the algorithm used in GSM cell phones). What's new about this attack is: 1) it's completely passive, 2) its total hardware cost is around $1,000, and 3) the total time to break the key is about 30 minutes. That's impressive. And it demonstrates an important cryptographic maxim: attacks always get better; they never get worse. This is why we tend to abandon algorithms at the first sign of weakness; we know that with time, the weaknesses will be exploited more effectively to yield better and faster attacks.
I've already written about secret forensic codes embedded in color laser printers. Seems like these codes may violate European privacy laws.
Interesting research on malware distribution:
This is no surprise: fear of Internet predators is largely unfounded.
More hysteria about a liquid bomb:
A good debunking:
Toy airport-security X-ray machine for kids:
Reminds me of the Playmobil Security Checkpoint:
In "Underlying Reasons for Success and Failure of Terrorist Attacks: Selected Case Studies" (Homeland Security Institute, June 2007), the authors examine eight recent terrorist plots against commercial aviation and passenger rail, and come to some interesting conclusions. I especially like this quote, which echoes what I've been saying for a long time now: "One phenomenon stands out: terrorists are rarely caught in the act during the execution phase of an operation, other than instances in which their equipment or weapons fail. Rather, plots are most often foiled during the pre-execution phases." Intelligence, investigation, and emergency response: that's where we should be spending our counterterrorism dollar. Defending the targets is rarely the right answer.
A fascinating article about how kids learn to lie. (Maybe it's a bit off the security topic, but with all my reading on the psychology of security, I don't think so.)
Two good uses for RFID chips: to automatically inventory the tools a truck is carrying, and to find misrouted luggage at an airport. See, no technology is all bad or all good.
There's a new version of TrueCrypt, version 5.1, the free open-source disk encryption software.
We've all known for years that you can use Google to scan for vulnerabilities. Well, now the process has been automated: Goolag Scanner from the Cult of the Dead Cow. I've seen a lot of pre-release scanning results from these guys, and it's pretty amazing what they've found.
When I wrote the essay "Portrait of the Modern Terrorist as an Idiot," I thought a lot about the government inventing terrorist plotters and entrapping them, to make the world seem scarier. Since then, it's been on my list of topics to write about someday. "Rolling Stone" has an excellent article on the topic, about the Joint Terrorism Task Forces in the U.S.
SurveillanceSaver, a screen saver that shows live images from networked surveillance cameras around the world:
An excellent article on the risk of knowing too much about risk. Read it all:
TSA gangsta rap. Funny.
A weird, weird story about TSA's ideal laptop bag. It seems that the TSA thinks we're all going to redesign our lives around their security checkpoints. Personally, I'd rather have a laptop bag that's useful for me all the time rather than useful for the TSA when I fly -- and I go through airport security about twice a week.
This is video from my talk on dual-use technologies at CPSR's Technology in Wartime conference.
I don't know how big a deal it is that 122 FAA safety inspector badges are missing, but I'm amused nonetheless:
So, you're sitting around the house with your buddies, playing World of Warcraft. One of you wonders: "How can we get *paid* for doing this?" Another says: "I know; let's pretend we're fighting terrorism, and then get a government grant." "Having eliminated all terrorism in the real world, the U.S. intelligence community is working to develop software that will detect violent extremists infiltrating World of Warcraft and other massive multiplayer games, according to a data-mining report from the Director of National Intelligence." You just can't make this stuff up.
The German courts rule on the legality of the police spying in cyberspace. Good stuff.
Really interesting stuff about hacking implanted medical devices. More and more of them contain computers and communicate via RF.
Ross Anderson, Rainer Böhme, Richard Clayton, and Tyler Moore have published a major report on security and economics: "Security, Economics, and the Internal Market," published by the European Network and Information Security Agency (ENISA).
Physically hacking Windows computers via FireWire:
Full disk encryption seems like the only defense here.
Essay about stealing from bookstores:
The London Tube smart card is cracked. It looks like lousy cryptography.
Interesting article from Popular Mechanics on surveillance cameras -- I'm quoted in several places.
And this about watching back.
Wine Therapy is a web bulletin board for serious wine geeks. It's been active since 2000, and its database of back posts and comments is a wealth of information: tasting notes, restaurant recommendations, stories and so on. Late last year, someone hacked the board software, got administrative privileges and deleted the database. There was no backup.
Of course the board's owner should have been making backups all along, but he has been very sick for the past year and wasn't able to. And the Internet Archive has been only somewhat helpful.
More and more, information we rely on -- either created by us or by others -- is out of our control. It's out there on the internet, on someone else's website and being cared for by someone else. We use those websites, sometimes daily, and don't even think about their reliability.
Bits and pieces of the web disappear all the time. It's called "link rot," and we're all used to it. A friend saved 65 links in 1999 when he planned a trip to Tuscany; only half of them still work today. Here in Crypto-Gram and in my own blog, essays and news articles and websites that I link to regularly disappear.
It may be because of a site's policies -- some newspapers only have a couple of weeks on their website -- or it may be more random: Position papers disappear off a politician's website after he changes his mind on an issue, corporate literature disappears from the company's website after an embarrassment, etc. The ultimate link rot is "site death," where entire websites disappear: Olympic and World Cup events after the games are over, political candidates' websites after the elections are over, corporate websites after the funding runs out and so on.
Mostly, we ignore the issue. Sometimes I save a copy of a good recipe I find, or an article relevant to my research, but mostly I trust that whatever I want will be there next time. Were I planning a trip to Tuscany, I would rather search for relevant articles today than rely on a nine-year-old list anyway. Most of the time, link rot and site death aren't really a problem.
This is changing in a Web 2.0 world, with websites that are less about information and more about community. We help build these sites, with our posts or our comments. We visit them regularly and get to know others who also visit regularly. They become part of our socialization on the internet and the loss of them affects us differently, as Greatest Journal users discovered in January when their site died.
Few, if any, of the people who made Wine Therapy their home kept backup copies of their own posts and comments. I'm sure they didn't even think of it. I don't think of it, when I post to the various boards and blogs and forums I frequent. Of course I know better, but I think of these forums as extensions of my own computer -- until they disappear.
As we rely on others to maintain our writings and our relationships, we lose control over their availability. Of course, we also lose control over their security, as MySpace users learned last month when a 17-GB file of half a million supposedly private photos was uploaded to a BitTorrent site.
In the early days of the web, I remember feeling giddy over the wealth of information out there and how easy it was to get to. "The Internet is my hard drive," I told newbies. It's even more true today; I don't think I could write without so much information so easily accessible. But it's a pretty damned unreliable hard drive.
The Internet is my hard drive, but only if my needs are immediate and my requirements can be satisfied inexactly. It was easy for me to search for information about the MySpace photo hack. And it will be easy to look up, and respond to, comments to this essay, both on Wired.com and on my own website. Wired.com is a commercial venture, so there is advertising value in keeping everything accessible. My site is not at all commercial, but there is personal value in keeping everything accessible. By that analysis, all sites should be up on the internet forever, although that's certainly not true. What is true is that there's no way to predict what will disappear when.
Unfortunately, there's not much we can do about it. The security measures largely aren't in our hands. We can save copies of important web pages locally, and copies of anything important we post. The Internet Archive is remarkably valuable in saving bits and pieces of the internet. And recently, we've started seeing tools for archiving information and pages from social networking sites. But what's really important is the whole community, and we don't know which bits we want until they're no longer there.
And about Wine Therapy? I *think* it started in 2000. It might have been 2001. I can't check, because someone erased the archives.
This essay originally appeared on Wired.com.
Amtrak is going to start randomly screening passengers, in an effort to close the security-theater gap between trains and airplanes.
It's kind of random:
"The teams will show up unannounced at stations and set up baggage screening areas in front of boarding gates. Officers will randomly pull people out of line and wipe their bags with a special swab that is then put through a machine that detects explosives. If the machine detects anything, officers will open the bag for visual inspection.
"Anybody who is selected for screening and refuses will not be allowed to board and their ticket will be refunded.
"In addition to the screening, counterterrorism officers with bomb-sniffing dogs will patrol platforms and walk through trains, and sometimes will ride the trains, officials said."
This is the most telling comment:
"'There is no new or different specific threat,' [Amtrak chief executive Alex] Kummant said. 'This is just the correct step to take.'"
Why is it the correct step to take? Because it makes him feel better. That's the very definition of security theater.
An article about, and a little bit by, me:
An op-ed by me on national ID from the Minneapolis Star Tribune:
And a small Q&A from the same newspaper:
Schneier is speaking on "The Theater of Security" at the Weisman Art Museum on March 27 in Minneapolis:
Schneier is speaking at the Freedom to Connect conference on April 1 in Washington, DC.
Schneier is speaking at InterSystems DEVCON2008 on April 2 in Orlando.
Schneier is speaking at the RSA Conference on April 8 in San Francisco.
Drecom advertises 128-bit AES encryption, but they use XOR.
This is why evaluating security products is hard: the devil is in the details.
Blog entry URL:
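The blog entry doesn't say exactly how Drecom's XOR "encryption" is constructed; assuming the common repeating-key variety, this sketch shows why it offers essentially no security: a few bytes of known or guessed plaintext recover the key outright, with no search required.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: "encryption" and "decryption" are the same operation.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"ATTACK AT DAWN -- ATTACK AT DAWN"
key = b"\x13\x37\x42\x99"  # hypothetical 4-byte key
ciphertext = xor_cipher(plaintext, key)

# Known-plaintext attack: XORing ciphertext with any known plaintext
# fragment yields the keystream -- here, the key itself.
recovered_key = bytes(c ^ p for c, p in zip(ciphertext, plaintext))[:len(key)]
assert recovered_key == key
assert xor_cipher(ciphertext, recovered_key) == plaintext
```

Even without known plaintext, the repeating key shows up as statistical patterns (a file format's headers, English letter frequencies), which is why XOR obfuscation is breakable in minutes. Real AES-128 would make the recovery above computationally infeasible.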
We know what we don't like about buying consolidated product suites: one great product and a bunch of mediocre ones. And we know what we don't like about buying best-of-breed: multiple vendors, multiple interfaces, and multiple products that don't work well together. The security industry has gone back and forth between the two, as a new generation of IT security professionals rediscovers the downsides of each solution.
The real problem is that neither solution really works, and we continually fool ourselves into believing whatever we don't have is better than what we have at the time. And the real solution is to buy results, not products.
Honestly, no one wants to buy IT security. People want to buy whatever they want -- connectivity, a Web presence, email, networked applications, whatever -- and they want it to be secure. That they're forced to spend money on IT security is an artifact of the youth of the computer industry. And sooner or later the need to buy security will disappear.
It will disappear because IT vendors are starting to realize they have to provide security as part of whatever they're selling. It will disappear because organizations are starting to buy services instead of products, and demanding security as part of those services. It will disappear because the security industry will disappear as a consumer category, and will instead market to the IT industry.
The critical driver here is outsourcing. Outsourcing is the ultimate consolidator, because the customer no longer cares about the details. If I buy my network services from a large IT infrastructure company, I don't care if it secures things by installing the hot new intrusion prevention systems, by configuring the routers and servers so as to obviate the need for network-based security, or if it uses magic security dust given to it by elven kings. I just want a contract that specifies a level and quality of service, and my vendor can figure it out.
IT is infrastructure. Infrastructure is always outsourced. And the details of how the infrastructure works are left to the companies that provide it.
This is the future of IT, and when that happens we're going to start to see a type of consolidation we haven't seen before. Instead of large security companies gobbling up small security companies, both large and small security companies will be gobbled up by non-security companies. It's already starting to happen. In 2006, IBM bought ISS. The same year BT bought my company, Counterpane, and last year it bought INS. These aren't large security companies buying small security companies; these are non-security companies buying large and small security companies.
If I were Symantec or McAfee, I would be preparing myself for a buyer.
This is good consolidation. Instead of having to choose between a single product suite that isn't very good or a best-of-breed set of products that don't work well together, we can ignore the issue completely. We can just find an infrastructure provider that will figure it out and make it work -- who cares how?
This essay originally appeared as the second half of a point/counterpoint with Marcus Ranum in "Information Security."
There are hundreds of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
BT Counterpane is the world's leading protector of networked information - the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.
Copyright (c) 2008 by Bruce Schneier.