Schneier on Security
A blog covering security and security technology.
September 2008 Archives
The Hacker's Choice has released a tool allowing people to clone and modify electronic passports.
The problem is self-signed certificates.
A CA is not a great solution:
Using a Certification Authority (CA) could solve the attack but at the same time introduces a new set of attack vectors:
EDITED TO ADD (10/13): More information.
Send your personalized message to TSA x-ray screeners using metal plates you can put in your carry-on luggage.
EDITED TO ADD (10/7): Another article.
From Roger Johnston, funny -- and all too true -- stuff.
Squid graffiti art.
International Squid Cookbook, from 1981.
Spykee is your own personal robot spy. It takes pictures and movies that you can watch on the Internet in real time or save for later. You can even talk with whoever you're spying on via Skype. More here, and you can buy one here: only $300.
They're trying to detect anomalies:
If you have ever wondered how security guards can possibly keep an unfailingly vigilant watch on every single one of dozens of television monitors, each depicting a different scene, the answer seems to be (as you suspected): they can't.
People have been asking me to comment about Sarah Palin's Yahoo e-mail account being hacked. I've already written about the security problems with "secret questions" back in 2005:
The point of all these questions is the same: a backup password. If you forget your password, the secret question can verify your identity so you can choose another password or have the site e-mail your current password to you. It's a great idea from a customer service perspective -- a user is less likely to forget his first pet's name than some random password -- but terrible for security. The answer to the secret question is much easier to guess than a good password, and the information is much more public. (I'll bet the name of my family's first pet is in some database somewhere.) And even worse, everybody seems to use the same series of secret questions.
EDITED TO ADD (9/25): Ed Felten on the issue.
Airport security found a jar of pasta sauce in my luggage last month. It was a 6-ounce jar, above the limit; the official confiscated it, because allowing it on the airplane with me would have been too dangerous. And to demonstrate how dangerous he really thought that jar was, he blithely tossed it in a nearby bin of similar liquid bottles and sent me on my way.
There are two classes of contraband at airport security checkpoints: the class that will get you in trouble if you try to bring it on an airplane, and the class that will cheerily be taken away from you if you try to bring it on an airplane. This difference is important: Making security screeners confiscate anything from that second class is a waste of time. All it does is harm innocents; it doesn't stop terrorists at all.
Let me explain. If you're caught at airport security with a bomb or a gun, the screeners aren't just going to take it away from you. They're going to call the police, and you're going to be stuck for a few hours answering a lot of awkward questions. You may be arrested, and you'll almost certainly miss your flight. At best, you're going to have a very unpleasant day.
This is why articles about how screeners don't catch every -- or even a majority -- of guns and bombs that go through the checkpoints don't bother me. The screeners don't have to be perfect; they just have to be good enough. No terrorist is going to base his plot on getting a gun through airport security if there's a decent chance of getting caught, because the consequences of getting caught are too great.
Contrast that with a terrorist plot that requires a 12-ounce bottle of liquid. There's no evidence that the London liquid bombers actually had a workable plot, but assume for the moment they did. If some copycat terrorists try to bring their liquid bomb through airport security and the screeners catch them -- like they caught me with my bottle of pasta sauce -- the terrorists can simply try again. They can try again and again. They can keep trying until they succeed. Because there are no consequences to trying and failing, the screeners have to be 100 percent effective. Even if they slip up one in a hundred times, the plot can succeed.
The same is true for knitting needles, pocketknives, scissors, corkscrews, cigarette lighters and whatever else the airport screeners are confiscating this week. If there's no consequence to getting caught with it, then confiscating it only hurts innocent people. At best, it mildly annoys the terrorists.
To fix this, airport security has to make a choice. If something is dangerous, treat it as dangerous and treat anyone who tries to bring it on as potentially dangerous. If it's not dangerous, then stop trying to keep it off airplanes. Trying to have it both ways just distracts the screeners from actually making us safer.
EDITED TO ADD (10/23): A similar article ran in The Guardian.
This seems like a whole lot of pseudo-science:
The technologies, generally regarded as promising but unproved, have yet to be widely accepted as evidence -- except in India, where in recent years judges have begun to admit brain scans. But it was only in June, in a murder case in Pune, in Maharashtra State, that a judge explicitly cited a scan as proof that the suspect's brain held "experiential knowledge" about the crime that only the killer could possess, sentencing her to life in prison.
EDITED TO ADD (10/13): An expert committee said it is unscientific, but their findings weren't accepted.
In Santa Barbara.
Among other dissection highlights, Hochberg pulled out plastic-like pieces, which comprised what could be best described as a backbone, as well as a translucent brownish-yellow piece of the beak, which is made of fingernail-like material. The giant squid's anatomy features a mouth at the top of the head, which means the esophagus travels through the brain. "So you have to get very small chunks of food," said Hochberg, "or you'll blow your brains out." The sharp beaks, then, are used to chomp food into tiny pieces before sending it down the esophagus, through the brain, and into the gut.
I was interviewed for Telecom Asia.
The Transportation Security Administration (TSA) rolled out the new uniforms and new screening policy at airports nationwide on Sept. 11.
Actually, it's not. Screeners have to go in and out of security all the time as they work. Yes, they can smuggle things in and out of the airport. But you have to remember that the airport screeners are trusted insiders for the system: there are a zillion ways they could break airport security.
On the other hand, it's probably a smart idea to screen screeners when they pass through a checkpoint where they aren't currently working. The reason is the same reason you should screen everyone, including pilots who can crash their plane: you're not screening screeners (or pilots), you're screening people wearing screener (or pilot) uniforms and carrying screener (or pilot) IDs. You can either train your screeners to recognize authentic uniforms and IDs, or you can just screen everybody. The latter is just easier.
But this isn't a big deal.
In a presentation that rivals any of my movie-plot threat contest entries, a Pentagon researcher is worried that terrorists might plot using World of Warcraft:
In a presentation late last week at the Director of National Intelligence Open Source Conference in Washington, Dr. Dwight Toavs, a professor at the Pentagon-funded National Defense University, gave a bit of a primer on virtual worlds to an audience largely ignorant about what happens in these online spaces. Then he launched into a scenario, to demonstrate how a meatspace plot might be hidden by in-game chatter. In it, two World of Warcraft players discuss a raid on the "White Keep" inside the "Stonetalon Mountains." The major objective is to set off a "Dragon Fire spell" inside, and make off with "110 Gold and 234 Silver" in treasure. "No one will dance there for a hundred years after this spell is cast," one player, "war_monger," crows.
I don't know why he thinks that the terrorists will use World of Warcraft and not some other online world. Or Facebook. Or Usenet. Or a chat room. Or e-mail. Or the telephone. I don't even know why the particular form of communication is in any way important.
The article ends with this nice paragraph:
Steven Aftergood, the Federation of American Scientists analyst who's been following the intelligence community for years, wonders how realistic these sorts of scenarios are, really. "This concern is out there. But it has to be viewed in context. It's the job of intelligence agencies to anticipate threats and counter them. With that orientation, they're always going to give more weight to a particular scenario than an objective analysis would allow," he tells Danger Room. "Could terrorists use Second Life? Sure, they can use anything. But is it a significant augmentation? That's not obvious. It's a scenario that an intelligence officer is duty-bound to consider. That's all."
My guess is still that some clever Pentagon researchers have figured out how to play World of Warcraft on the job, and they're not giving that perk up anytime soon.
Definitely strange bedfellows:
A United Nations agency is quietly drafting technical standards, proposed by the Chinese government, to define methods of tracing the original source of Internet communications and potentially curbing the ability of users to remain anonymous.
This is being sold as a way to go after the bad guys, but it won't help. Here's Steve Bellovin on that issue:
First, very few attacks these days use spoofed source addresses; the real IP address already tells you where the attack is coming from. Second, in case of a DDoS attack, there are too many sources; you can't do anything with the information. Third, the machine attacking you is almost certainly someone else's hacked machine and tracking them down (and getting them to clean it up) is itself time-consuming.
TraceBack is most useful in monitoring the activities of large masses of people. But of course, that's why the Chinese and the NSA are so interested in this proposal in the first place.
It's hard to figure out what the endgame is; the U.N. doesn't have the authority to impose Internet standards on anyone. In any case, this idea is counter to the U.N. Universal Declaration of Human Rights, Article 19: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." In the U.S., it's counter to the First Amendment, which has long permitted anonymous speech. On the other hand, basic human and constitutional rights have been jettisoned left and right in the years after 9/11; why should this be any different?
But when the Chinese government and the NSA get together to enhance their ability to spy on us all, you have to wonder what's gone wrong with the world.
A recent article in the London Review of Books revealed that a number of private companies now sell off-the-shelf data-mining solutions to government spies interested in analyzing mobile-phone calling records and real-time location information. These companies include ThorpeGlen, VASTech, Kommlabs, and Aqsacom -- all of which sell "passive probing" data-mining services to governments around the world.
Jon used a desktop computer attached to a GPS satellite simulator to create a fake GPS signal. Portable GPS satellite simulators can fit in the trunk of a car, and are often used for testing. They are available as commercial off-the-shelf products. You can also rent them for less than $1K a week -- peanuts to anyone thinking of hijacking a cargo truck and selling stolen goods.
EDITED TO ADD (10/13): Argonne National Labs is working on this.
Or so says leaked U.S. government documents.
The USB stick, outlining training for 70 soldiers from the 3rd Battalion, Yorkshire Regiment, was found on the floor of The Beach in Newquay in May.
It's not the first time:
More than 120 USB memory sticks, some containing secret information, have been lost or stolen from the Ministry of Defence since 2004, it was reported earlier this year.
I've written about this general problem before: we're storing ever more data in ever smaller devices.
The point is that it's now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I'd never know it.
The solution? Encrypt them.
Shhhh. Don't tell the terrorists:
The U.S. Department of Homeland Security wrote a letter to Labbé in 2004, saying he had been placed on their watch list after falling victim to identity theft. At the time, the department said there was no way for his name to be removed.
I have a new book coming out: Schneier on Security. It's a collection of my essays, all written from June 2002 to June 2008. They're all on my website, so regular readers won't have missed anything if they don't buy this book. But for those of you who want my essays in one easy-to-read place, or are planning to be shipwrecked on a desert island without Web access and would like to spend your time there pondering the sorts of questions I discuss in my essays, or want to give copies of my essays to friends and relatives as gifts, this book is for you. There are only 90 shopping days before Christmas.
The hardcover book retails for $30, but Amazon is already selling it for $20. If you want a signed copy, e-mail me. I'll send you a signed copy for $30, including U.S. shipping, and $40, including shipping overseas. Yes, Amazon is cheaper -- and you can always find me at a conference and ask me to sign the book.
There are many weird things about the giant Humboldt squid, but here's one of the strangest: Its beak. The squid's beak is one of the hardest organic substances in existence -- such that the sharp point can slice through a fish or whale like a Ginsu knife. Yet the beak is attached to squid flesh that itself is the texture of jello. How precisely does a gelatinous animal safely wield such a razor-sharp weapon? Why doesn't it just sort of, y'know, rip off? It's as if you tried to carve a roast with a knife that doesn't have a handle: It would cut into your fingers as much as the roast.
Don't buy this:
My first discussion was with a sales guy. I asked about the encryption method. He didn't know. I asked about how the key was protected. Again, no idea. I began to suspect that this was not the person I needed to speak with, and I asked for a "technical" person. After a short wait, another sales guy got on the phone. He knew a little more. For example, the encryption method is to XOR the key with the data. Those of you in the security profession know my reaction to this news. For those of you still coming up to speed, XORing a key with data to encrypt sensitive information is bad. Very bad.
EDITED TO ADD (9/13): In the comment thread, there's a lot of talk about one-time pads. This is something I wrote on the topic in 2002:
So, let me summarize. One-time pads are useless for all but very specialized applications, primarily historical and non-computer. And almost any system that uses a one-time pad is insecure. It will claim to use a one-time pad, but actually use a two-time pad (oops). Or it will claim to use a one-time pad, but actually use a stream cipher. Or it will use a one-time pad, but won't deal with message re-synchronization and re-transmission attacks. Or it will ignore message authentication, and be susceptible to bit-flipping attacks and the like. Or it will fall prey to keystream reuse attacks. Etc., etc., etc.
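The "two-time pad (oops)" failure, and the reason XORing a key with data is such bad encryption, is easy to demonstrate. In this sketch (illustrative Python, with made-up messages), two messages are encrypted under the same pad; an eavesdropper who XORs the two ciphertexts together makes the key cancel out entirely, leaving the XOR of the two plaintexts -- which is usually enough to recover both.

```python
# Why reusing a "one-time" pad is fatal: XOR is its own inverse,
# so C1 XOR C2 = (M1 XOR K) XOR (M2 XOR K) = M1 XOR M2.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(28)                 # a genuinely random pad...
m1 = b"ATTACK AT DAWN ON THE BRIDGE"
m2 = b"RETREAT TO THE SOUTHERN CAMP"

c1 = xor_bytes(m1, pad)              # ...but used twice: a two-time pad
c2 = xor_bytes(m2, pad)

# The eavesdropper never sees the pad, yet it drops out completely:
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(m1, m2)     # pad is gone; only plaintexts remain

# Any known or guessed fragment of one message now reveals the other:
crib = m1[:7]                        # suppose "ATTACK " is guessed
recovered = xor_bytes(leak[:7], crib)
print(recovered)                     # -> b'RETREAT'
```

The same arithmetic is why a product that "encrypts" by XORing a fixed key with the data is broken: a fixed key is just a pad reused for every block of every file.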
"The terrifying cost of feeling safer," from the Sydney Morning Herald:
Sandler and his colleagues conducted an analysis of the costs and benefits of five different approaches to combating terrorism. I must warn you that, because of the dearth of information, this study is even more reliant on assumptions than usual. Even so, in three cases the cost of the action so far exceeds the benefits that doubts about the reliability of the estimates recede.
This really pegs the stupid meter:
He explains all the district's hydrants, including those in Alexander Ranch, have had their water turned off since just after 9/11 -- something a trade association spokesman tells us is common practice for rural systems.
One, fires are much more common than terrorism -- keeping fire hydrants on makes much more sense than turning them off. Two, what sort of terrorism is possible using working fire hydrants? Three, if the water valves can be "turned back on with a tool," how does turning them off prevent fire-hydrant-related terrorism?
More and more, it seems as if public officials in this country have simply gone insane.
Is it possible that the F.B.I. is right about the statistics it cites, and that there could be 122 nine-out-of-13 matches in Arizona's database?
EDITED TO ADD (9/14): The FBI is trying to suppress the analysis.
Seems that the idea was killed by lawyers under pressure from the credit card industry. Or maybe not; the person who started this rumor has retracted his comments. Or maybe those same lawyers made him retract his comments.
Don't they know that security by gag order never works, except temporarily?
On 60 Minutes, in an interview with Scott Pelley, reporter Bob Woodward claimed that the U.S. military has a new secret technique that's so revolutionary, it's on par with the tank and the airplane:
Woodward: This is very sensitive and very top secret, but there are secret operational capabilities that have been developed by the military to locate, target, and kill leaders of al Qaeda in Iraq, insurgent leaders, renegade militia leaders, that is one of the true breakthroughs.
It's here, 7 minutes and 55 seconds in.
Anyone have any ideas?
EDITED TO ADD (9/11): One idea:
I'm going to make a wager about what I think Woodward is talking about, and I'll be curious to see what Danger Room readers have to say. I believe he is talking about the much ballyhooed (in defense geek circles) "Tagging, Tracking and Locating" program; here's a briefing on it from Special Operations Command. These are newfangled technologies designed to track people from long distances, without the targeted people realizing they are being tracked. That can theoretically include thermal signatures, or some sort of "taggant" placed on a person. Think Will Smith in Enemy of the State. Well, not so many cameras, maybe.
Based in Europe, the Rock Phish group is a criminal collective that has been targeting banks and other financial institutions since 2004. According to RSA, they are responsible for half of the worldwide phishing attacks and have siphoned tens of millions of dollars from individuals' bank accounts. The group got its name from a now discontinued quirk in which the phishers used directory paths that contained the word "rock."
Ignoring the sensationalist headline, this is interesting:
By analysing the movements of human shadows in aerial and satellite footage, JPL engineer Adrian Stoica says it should be possible to identify people from the way they walk -- a technique called gait analysis, whose power lies in the fact that a person's walking style is very hard to disguise.
The article goes on to say that using satellite images would be harder, but that the basic idea is the same.
Of course, this is less useful for finding individuals and more useful for tracking a population as it moves about its day. But some individuals will have more distinctive gaits than others, and will be easier to track. Soon we may all need to walk with rocks in our shoes.
Let me start off by saying that I'm making this whole thing up.
Imagine you're in charge of infiltrating sleeper agents into the United States. The year is 1983, and the proliferation of identity databases is making it increasingly difficult to create fake credentials. Ten years ago, someone could have just shown up in the country and gotten a driver's license, Social Security card and bank account -- possibly using the identity of someone roughly the same age who died as a young child -- but it's getting harder. And you know that trend will only continue. So you decide to grow your own identities.
Call it "identity farming." You invent a handful of infants. You apply for Social Security numbers for them. Eventually, you open bank accounts for them, file tax returns for them, register them to vote, and apply for credit cards in their name. And now, 25 years later, you have a handful of identities ready and waiting for some real people to step into them.
There are some complications, of course. Maybe you need people to sign their name as parents -- or, at least, mothers. Maybe you need doctors to fill out birth certificates. Maybe you need to fill out paperwork certifying that you're home-schooling these children. You'll certainly want to exercise their financial identity: depositing money into their bank accounts and withdrawing it from ATMs, using their credit cards and paying the bills, and so on. And you'll need to establish some sort of addresses for them, even if it is just a mail drop.
You won't be able to get driver's licenses or photo IDs in their name. That isn't critical, though; in the U.S., more than 20 million adult citizens don't have photo IDs. But other than that, I can't think of any reason why identity farming wouldn't work.
Here's the real question: Do you actually have to show up for any part of your life?
Again, I made this all up. I have no evidence that anyone is actually doing this. It's not something a criminal organization is likely to do; twenty-five years is too distant a payoff horizon. The same logic holds true for terrorist organizations; it's not worth it. It might have been worth it to the KGB -- although perhaps harder to justify after the Soviet Union broke up in 1991 -- and might be an attractive option for existing intelligence adversaries like China.
Immortals could also use this trick to perpetuate themselves, inventing their own children and gradually assuming their identity, then killing their parents off. They could even show up for their own driver's license photos, wearing a beard as the father and blue spiked hair as the son. I'm told this is a common idea in Highlander fan fiction.
The point isn't to create another movie plot threat, but to point out the central role that data has taken on in our lives. Previously, I've said that we all have a data shadow that follows us around, and that more and more institutions interact with our data shadows instead of with us. We only intersect with our data shadows once in a while -- when we apply for a driver's license or passport, for example -- and those interactions are authenticated by older, less-secure interactions. The rest of the world assumes that our photo IDs glue us to our data shadows, ignoring the rather flimsy connection between us and our plastic cards. (And, no, REAL-ID won't help.)
It seems to me that our data shadows are becoming increasingly distinct from us, almost with a life of their own. What's important now is our shadows; we're secondary. And as our society relies more and more on these shadows, we might even become unnecessary.
Our data shadows can live a perfectly normal life without us.
EDITED TO ADD (9/9): Interesting commentary.
I have long been enamored with security trade-offs in the natural world:
A 3D video tracking system revealed that although the bees became very accurate at detecting the camouflaged spiders, they also became increasingly wary.
Over the past year I have gotten many requests, both public and private, to comment on the BT and Phorm incident.
I was not involved with BT and Phorm, then or now. Everything I know about Phorm and BT's relationship with Phorm came from the same news articles you read. I have not gotten involved as an employee of BT. But anything I say is -- by definition -- said by a BT executive. That's not good.
So I'm sorry that I can't write about Phorm. But -- honestly -- lots of others have been giving their views on the issue.
Fierce deep-sea predator? Not so much:
"We are looking at something verging on the incredibly bizarre. As she got older she got shorter and broader and was reduced to a giant gelatinous blob, carrying many thousands of eggs," he says.
Cory Doctorow wanted a secret decoder wedding ring, and he asked me to help design it. I wanted something more than the standard secret decoder ring, so this is what I asked for: "I want each wheel to be the alphabet, with each letter having either a dot above, a dot below, or no dot at all. The first wheel should have alternating above, none, below. The second wheel should be the repeating sequence of above, above, none, none, below, below. The third wheel should be the repeating sequence of above, above, above, none, none, none, below, below, below." (I know it sounds confusing, but here's a chart.)
So that's what he asked for, and that's what he got. And now it's time to create some cryptographic applications for the rings. Cory and I are holding an open contest for the cleverest application.
I don't think we can invent any encryption algorithms that will survive computer analysis -- there's just not enough entropy in the system -- but we can come up with some clever pencil-and-paper ciphers that will serve them well if they're ever stuck back in time. And there are certainly other cryptographic uses for the rings.
Here's a way to use the rings as a password mnemonic: First, choose a two-letter key. Align the three wheels according to the key. For example, if the key is "EB" for eBay, align the three wheels AEB. Take the common password "PASSWORD" and encrypt it. For each letter, find it on the top wheel. Count one letter to the left if there is a dot over the letter, and one letter to the right if there is a dot under it. Take that new letter and look at the letter below it (in the middle wheel). Count two letters to the left if there is a dot over it, and two letters to the right if there is a dot under it. Take that new letter (in the middle wheel), and look at the letter below it (in the lower wheel). Count three letters to the left if there is a dot over it, and three letters to the right if there is a dot under it. That's your encrypted letter. Do that with every letter to get your password.
"PASSWORD" and the key "EB" becomes "NXPPVVOF."
It's not very good; can anyone see why? (Ignore for now whether or not publishing this on a blog makes it no longer secure.)
How can I do that better? What else can we do with the rings? Can we incorporate other elements -- a deck of playing cards as in Solitaire, different-sized coins to make the system more secure?
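To make experimenting easier, here's a sketch of the procedure in Python. The dot patterns are an assumption: I've simply repeated each wheel's cycle (periods 3, 6, and 9) around the 26 letters; since 26 isn't a multiple of any of those, the real chart must wrap differently somewhere, so this won't necessarily reproduce "NXPPVVOF" exactly. One property holds regardless of the chart: the scheme is a monoalphabetic substitution -- a given plaintext letter always encrypts to the same ciphertext letter under a given key (note the repeated P's in "NXPPVVOF"), so it falls to ordinary frequency analysis.

```python
# A sketch of the ring cipher. ASSUMPTIONS (the real chart may differ):
# the dot patterns repeat with periods 3, 6, and 9 around each wheel,
# and each dot is attached to the letter printed on that wheel.
A = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# -1 = dot above (count left), 0 = no dot, +1 = dot below (count right)
DOTS1 = [(-1, 0, 1)[i % 3] for i in range(26)]
DOTS2 = [(-1, -1, 0, 0, 1, 1)[i % 6] for i in range(26)]
DOTS3 = [(-1, -1, -1, 0, 0, 0, 1, 1, 1)[i % 9] for i in range(26)]

def encrypt(plaintext: str, key: str) -> str:
    """Letters only. Wheel 1 stays at A; wheels 2 and 3 rotate to the
    two key letters, so the letter 'below' top-wheel X on wheel 2 is
    X plus wheel 2's offset, and similarly from wheel 2 to wheel 3."""
    off2, off3 = A.index(key[0]), A.index(key[1])
    out = []
    for ch in plaintext.upper():
        i = A.index(ch)
        i = (i + 1 * DOTS1[i]) % 26        # top wheel: shift by one
        i = (i + off2) % 26                # read the letter below, wheel 2
        i = (i + 2 * DOTS2[i]) % 26        # middle wheel: shift by two
        i = (i + off3 - off2) % 26         # read the letter below, wheel 3
        i = (i + 3 * DOTS3[i]) % 26        # bottom wheel: shift by three
        out.append(A[i])
    return "".join(out)

print(encrypt("PASSWORD", "EB"))
```

Note that nothing in the loop depends on a letter's position in the message, which is the structural weakness: the whole apparatus reduces to one fixed substitution alphabet per key.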
Post your contest entries as comments to Cory's blog post -- you can post them here, but they're not going to count as contest submissions -- or send them to email@example.com. Deadline is October 1st.
Good luck, and have fun with this.
This seems like a really dumb idea.
New paper: "What Californians Understand About Privacy Online," by Chris Jay Hoofnagle and Jennifer King. From the abstract:
A gulf exists between California consumers' understanding of online rules and common business practices. For instance, Californians who shop online believe that privacy policies prohibit third-party information sharing. A majority of Californians believes that privacy policies create the right to require a website to delete personal information upon request, a general right to sue for damages, a right to be informed of security breaches, a right to assistance if identity theft occurs, and a right to access and correct data.
We spend far more effort defending our countries against specific movie-plot threats, rather than the real, broad threats. In the US during the months after the 9/11 attacks, we feared terrorists with scuba gear, terrorists with crop dusters and terrorists contaminating our milk supply. Both the UK and the US fear terrorists with small bottles of liquid. Our imaginations run wild with vivid specific threats. Before long, we're envisioning an entire movie plot, without Bruce Willis saving the day. And we're scared.
It's not just terrorism; it's any rare risk in the news. The big fear in Canada right now, following a particularly gruesome incident, is random decapitations on intercity buses. In the US, fears of school shootings are much greater than the actual risks. In the UK, it's child predators. And people all over the world mistakenly fear flying more than driving. But the very definition of news is something that hardly ever happens. If an incident is in the news, we shouldn't worry about it. It's when something is so common that it's no longer news -- car crashes, domestic violence -- that we should worry. But that's not the way people think.
Psychologically, this makes sense. We are a species of storytellers. We have good imaginations and we respond more emotionally to stories than to data. We also judge the probability of something by how easy it is to imagine, so stories that are in the news feel more probable -- and ominous -- than stories that are not. As a result, we overreact to the rare risks we hear stories about, and fear specific plots more than general threats.
The problem with building security around specific targets and tactics is that it's only effective if we happen to guess the plot correctly. If we spend billions defending the Underground and terrorists bomb a school instead, we've wasted our money. If we focus on the World Cup and terrorists attack Wimbledon, we've wasted our money.
It's this fetish-like focus on tactics that results in the security follies at airports. We ban guns and knives, and terrorists use box-cutters. We take away box-cutters and corkscrews, so they put explosives in their shoes. We screen shoes, so they use liquids. We take away liquids, and they're going to do something else. Or they'll ignore airplanes entirely and attack a school, church, theatre, stadium, shopping mall, airport terminal outside the security area, or any of the other places where people pack together tightly.
These are stupid games, so let's stop playing. Some high-profile targets deserve special attention and some tactics are worse than others. Airplanes are particularly important targets because they are national symbols and because a small bomb can kill everyone aboard. Seats of government are also symbolic, and therefore attractive, targets. But targets and tactics are interchangeable.
The following three things are true about terrorism. One, the number of potential terrorist targets is infinite. Two, the odds of the terrorists going after any one target are zero. And three, the cost to the terrorist of switching targets is zero.
We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn't require us to guess. We need to focus resources on intelligence and investigation: identifying terrorists, cutting off their funding and stopping them regardless of what their plans are. We need to focus resources on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy.
In 2006, UK police arrested the liquid bombers not through diligent airport security, but through intelligence and investigation. It didn't matter what the bombers' target was. It didn't matter what their tactic was. They would have been arrested regardless. That's smart security. Now we confiscate liquids at airports, just in case another group happens to attack the exact same target in exactly the same way. That's just illogical.
This essay originally appeared in The Guardian. Nothing I haven't already said elsewhere.
Many throughout history.
Don't give someone your phone unless you trust them:
There is a new electronic capture device, developed primarily for law enforcement, surveillance, and intelligence operations, that is also available to the public. It is called the Cellular Seizure Investigation Stick -- CSI Stick, a clever acronym. It is manufactured by a company called Paraben, and is a self-contained module about the size of a BIC lighter. It plugs directly into most Motorola and Samsung cell phones to capture all data that they contain. More phones will be added to the list, including many from Nokia, RIM, LG and others, in the next generation, to be released shortly.
Another news article.
Thanks to a software program called a zapper, even technologically illiterate restaurant and store owners can siphon cash from computer cash registers and cheat tax officials.
Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.
It's become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.
Before I get into the details, there's one point I have to make. "ROI" as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.
But as anyone who has lived through a company's vicious end-of-year budget-slashing exercises knows, when you're trying to make your numbers, cutting costs is the same as increasing revenues. So while security can't produce ROI, loss prevention most certainly affects a company's bottom line.
And a company should implement only security countermeasures that affect its bottom line positively. It shouldn't spend more on a security problem than the problem is worth. Conversely, it shouldn't ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.
The classic methodology is called annualized loss expectancy (ALE), and it's straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10 percent chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you're wasting money. Spend less than that, and you're also wasting money.
Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40 percent -- to 6 percent a year -- then you should spend no more than $400 on it. If another security measure reduces it by 80 percent, it's worth $800. And if two security measures both reduce the chance of being robbed by 50 percent and one costs $300 and the other $700, the first one is worth it and the second isn't.
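The arithmetic above is simple enough to sketch in a few lines of code. This is a minimal illustration using the hypothetical store-robbery figures from the text, not real actuarial data:

```python
# Annualized loss expectancy (ALE): the expected yearly loss
# from an incident, given its cost and annual probability.
def ale(incident_cost, annual_probability):
    return incident_cost * annual_probability

# The most you should spend yearly on a countermeasure that
# reduces the incident probability by `risk_reduction` (a fraction).
def countermeasure_value(incident_cost, base_prob, risk_reduction):
    return ale(incident_cost, base_prob) * risk_reduction

# A 10% yearly chance of a $10,000 robbery: spend at most $1,000/year.
print(ale(10_000, 0.10))  # 1000.0

# A measure cutting robbery risk by 40% is worth at most $400/year;
# one cutting it by 80% is worth at most $800/year.
print(countermeasure_value(10_000, 0.10, 0.40))  # 400.0
print(countermeasure_value(10_000, 0.10, 0.80))  # 800.0

# Two measures each cutting risk by 50% are worth $500 apiece,
# so a $300 measure pays off and a $700 one doesn't.
worth = countermeasure_value(10_000, 0.10, 0.50)
print(worth > 300, worth > 700)  # True False
```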
The Data Imperative
The key to making this work is good data; the term of art is "actuarial tail." If you're doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store's neighborhood and maybe have some idea of how much cameras improve the odds of convincing criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale; maybe you're having trouble hiring salespeople to work the night shift. With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night -- assuming that the closed store won't get robbed as well. And then you can decide whether to install one.
Cybersecurity is considerably harder, because there just isn't enough good data. There aren't good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures -- or specific configurations of countermeasures -- mitigate those risks. We don't even have data on incident costs.
One problem is that the threat moves too quickly. The characteristics of the things we're trying to prevent change so quickly that we can't accumulate data fast enough. By the time we get some data, there's a new threat model for which we don't have enough data. So we can't create ALE models.
But there's another problem, and it's that the math quickly falls apart when it comes to rare and expensive events. Imagine you calculate the cost -- reputational costs, loss of customers, etc. -- of having your company's name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10,000 of that happening in any one year. ALE says you should spend no more than $2,000 mitigating that risk.
So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You can't argue, since we're just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1,000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.
It gets worse when you deal with even more rare and expensive events. Imagine you're in charge of terrorism mitigation at a chlorine plant. What's the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand, 1 in a million, 1 in 10 million? Depending on how you answer those two questions -- and any answer is really just a guess -- you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.
Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by -- and I'm making this up -- 30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means that the extra waiting time at airports has cost us a collective 43,000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has "killed" 620 people per year -- 930 if you calculate the numbers based on 16 hours of awake time per day. So the question is: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
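The back-of-the-envelope numbers above work out as follows; the 30-minute delay is, again, a made-up assumption:

```python
# Collective cost of extra airport waiting time, using the
# figures from the text (the 30-minute delay is assumed).
boardings = 760_000_000          # US passenger boardings, 2007
extra_hours = boardings * 0.5    # 30 minutes per passenger

years_24h = extra_hours / (24 * 365)  # calendar years of waiting
years_16h = extra_hours / (16 * 365)  # counting only awake hours

print(round(years_24h))        # 43379 collective years
print(round(years_24h / 70))   # 620 "lives" per year, at 70 years each
print(round(years_16h / 70))   # 930 "lives" using 16 awake hours/day
```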
This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They've jiggered the numbers so that they do.
This doesn't mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only. So when you get an ROI model from your vendor, take its framework and plug in your own numbers. Don't even show the vendor your improvements; it won't consider any changes that make its product or service less cost-effective to be an "improvement." And use those results as a general guide, along with risk management and compliance analyses, when you're deciding what security products and services to buy.
This essay previously appeared in CSO Magazine.
No-fly lists and photo IDs are supposed to help protect the flying public from terrorists. Except that they don't work.
By Bruce Schneier
August 28, 2008
The TSA is tightening its photo ID rules at airport security. Previously, people with expired IDs or who claimed to have lost their IDs were subjected to secondary screening. Then the Transportation Security Administration realized that meant someone on the government's no-fly list -- the list that is supposed to keep our planes safe from terrorists -- could just fly with no ID.
Now, people without ID must also answer personal questions from their credit history to ascertain their identity. The TSA will keep records of who those ID-less people are, too, in case they're trying to probe the system.
This may seem like an improvement, except that the photo ID requirement is a joke. Anyone on the no-fly list can easily fly whenever he wants. Even worse, the whole concept of matching passenger names against a list of bad guys has negligible security value.
How to fly, even if you are on the no-fly list: Buy a ticket in some innocent person's name. At home, before your flight, check in online and print out your boarding pass. Then, save that web page as a PDF and use Adobe Acrobat to change the name on the boarding pass to your own. Print it again. At the airport, use the fake boarding pass and your valid ID to get through security. At the gate, use the real boarding pass in the fake name to board your flight.
The problem is that it is unverified passenger names that get checked against the no-fly list. At security checkpoints, the TSA just matches IDs to whatever is printed on the boarding passes. The airline checks boarding passes against tickets when people board the plane. But because no one checks ticketed names against IDs, the security breaks down.
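The structural flaw is easy to see when you model the three checks explicitly. This is a toy sketch of the verification gap just described, with hypothetical names; note that the no-fly list never sees a name that has been matched against an ID:

```python
# Toy model of the three independent checks in the system.
NO_FLY = {"Evil Attacker"}

def no_fly_screen(ticket_name):
    # The list is checked against the unverified ticketed name only.
    return ticket_name not in NO_FLY

def checkpoint(id_name, boarding_pass_name):
    # TSA checkpoint: ID must match the (forgeable) printed pass.
    return id_name == boarding_pass_name

def gate(boarding_pass_name, ticket_name):
    # Airline gate: pass must match the ticket; no ID check here.
    return boarding_pass_name == ticket_name

# The attacker buys a ticket in an innocent name and prints two passes.
ticket_name = "John Innocent"
real_pass = "John Innocent"     # shown at the gate
forged_pass = "Evil Attacker"   # shown at the checkpoint, with real ID

assert no_fly_screen(ticket_name)                 # list never sees attacker
assert checkpoint("Evil Attacker", forged_pass)   # ID matches forged pass
assert gate(real_pass, ticket_name)               # real pass matches ticket
# All three checks pass: the listed attacker boards anyway.
print("boarded")
```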
This vulnerability isn't new. It isn't even subtle. I wrote about it in 2003, and again in 2006. I asked Kip Hawley, who runs the TSA, about it in 2007. Today, any terrorist smart enough to Google "print your own boarding pass" can bypass the no-fly list.
This gaping security hole would bother me more if the very idea of a no-fly list weren't so ineffective. The system is based on the faulty notion that the feds have this master list of terrorists, and all we have to do is keep the people on the list off the planes.
That's just not true. The no-fly list -- a list of people so dangerous they are not allowed to fly yet so innocent we can't arrest them -- and the less dangerous "watch list" contain a combined 1 million names representing the identities and aliases of an estimated 400,000 people. There aren't that many terrorists out there; if there were, we would be feeling their effects.
Almost all of the people stopped by the no-fly list are false positives. It catches innocents such as Ted Kennedy, whose name is similar to someone's on the list, and Yusuf Islam (formerly Cat Stevens), who was on the list but no one knew why.
The no-fly list is a Kafkaesque nightmare for the thousands of innocent Americans who are harassed and detained every time they fly. Put on the list by unidentified government officials, they can't get off. They can't challenge the TSA about their status or prove their innocence. (The U.S. 9th Circuit Court of Appeals decided this month that no-fly passengers can sue the FBI, but that strategy hasn't been tried yet.)
But even if these lists were complete and accurate, they wouldn't work. Timothy McVeigh, the Unabomber, the D.C. snipers, the London subway bombers and most of the 9/11 terrorists weren't on any list before they committed their terrorist acts. And if a terrorist wants to know if he's on a list, the TSA has approved a convenient, $100 service that allows him to figure it out: the Clear program, which issues IDs to "trusted travelers" to speed them through security lines. Just apply for a Clear card; if you get one, you're not on the list.
In the end, the photo ID requirement is based on the myth that we can somehow correlate identity with intent. We can't. And instead of wasting money trying, we would be far safer as a nation if we invested in intelligence, investigation and emergency response -- security measures that aren't based on a guess about a terrorist target or tactic.
That's the TSA: Not doing the right things. Not even doing right the things it does.