Schneier on Security
A blog covering security and security technology.
January 2009 Archives
Friday Squid Blogging: Safe Quick Undercarriage Immobilization Device (SQUID)
New security device:
But what if an officer could lay down a road trap in seconds, then activate it from a nearby hiding place? What if—like sea monsters of ancient lore—the trap could reach up from below to ensnare anything from a MINI Cooper to a Ford Expedition? What if this trap were as small as a spare tire, as light as a tire jack, and cost under a grand?
Of course, there's a lot separating a cool idea from reality. But it is a cool idea.
Jon Stewart on Closing Guantanamo and Movie-Plot Threats
Jeffrey Rosen on the Department of Homeland Security
The same elements of psychology lead people to exaggerate the likelihood of terrorist attacks: Images of terrifying but highly unusual catastrophes on television—such as the World Trade Center collapsing—are far more memorable than images of more mundane and more prevalent threats, like dying in car crashes. Psychologists call this the "availability heuristic," in which people estimate the probability of something occurring based on how easy it is to bring examples of the event to mind.
Interview with an Adware Developer
I should probably first speak about how adware works. Most adware targets Internet Explorer (IE) users because obviously they're the biggest share of the market. In addition, they tend to be the less-savvy chunk of the market. If you're using IE, then either you don't care or you don't know about all the vulnerabilities that IE has.
EDITED TO ADD (1/30): Good commentary on the interview, showing how it whitewashes history.
Helping the Terrorists
It regularly comes as a surprise to people that our own infrastructure can be used against us. And in the wake of terrorist attacks or plots, there are fear-induced calls to ban, disrupt or control that infrastructure. According to officials investigating the Mumbai attacks, the terrorists used images from Google Earth to help learn their way around. This isn't the first time Google Earth has been charged with helping terrorists: in 2007, Google Earth images of British military bases were found in the homes of Iraqi insurgents. Incidents such as these have led many governments to demand that Google remove or blur images of sensitive locations: military bases, nuclear reactors, government buildings, and so on. An Indian court has been asked to ban Google Earth entirely.
This isn't the only way our information technology helps terrorists. Last year, a US army intelligence report worried that terrorists could plan their attacks using Twitter, and there are unconfirmed reports that the Mumbai terrorists read the Twitter feeds about their attacks to get real-time information they could use. British intelligence is worried that terrorists might use voice over IP services such as Skype to communicate. Terrorists may train on Second Life and World of Warcraft. We already know they use websites to spread their message and possibly even to recruit.
Of course, all of this is exacerbated by open wireless access, which has been repeatedly labelled a terrorist tool and which has been the object of attempted bans.
Mobile phone networks help terrorists, too. The Mumbai terrorists used them to communicate with each other. This has led some cities, including New York and London, to propose turning off mobile phone coverage in the event of a terrorist attack.
Let's all stop and take a deep breath. By its very nature, communications infrastructure is general. It can be used to plan both legal and illegal activities, and it's generally impossible to tell which is which. When I send and receive email, it looks exactly the same as a terrorist doing the same thing. To the mobile phone network, a call from one terrorist to another looks exactly the same as a mobile phone call from one victim to another. Any attempt to ban or limit infrastructure affects everybody. If India bans Google Earth, a future terrorist won't be able to use it to plan; nor will anybody else. Open Wi-Fi networks are useful for many reasons, the large majority of them positive, and closing them down affects all those reasons. Terrorist attacks are very rare, and it is almost always a bad trade-off to deny society the benefits of a communications technology just because the bad guys might use it too.
Communications infrastructure is especially valuable during a terrorist attack. Twitter was the best way for people to get real-time information about the attacks in Mumbai. If the Indian government shut Twitter down - or London blocked mobile phone coverage - during a terrorist attack, the lack of communications for everyone, not just the terrorists, would increase the level of terror and could even increase the body count. Information lessens fear and makes people safer.
None of this is new. Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then. I haven't seen it talked about yet, but the Mumbai terrorists used boats as well. They also wore boots. They ate lunch at restaurants, drank bottled water, and breathed the air. Society survives all of this because the good uses of infrastructure far outweigh the bad uses, even though the good uses are - by and large - small and pedestrian and the bad uses are rare and spectacular. And while terrorism turns society's very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response - just as we would if we banned cars because bank robbers used them too.
EDITED TO ADD (1/29): Other ways we help the terrorists: we put computers in our libraries, we allow anonymous chat rooms, we permit commercial databases and we engage in biomedical research. Grocery stores, too, sell food to just anyone who walks in.
EDITED TO ADD (2/3): Washington DC wants to jam cell phones too.
EDITED TO ADD (2/9): Another thing that will help the terrorists: in-flight Internet.
The Exclusionary Rule and Security
Earlier this month, the Supreme Court ruled that evidence gathered as a result of errors in a police database is admissible in court. Their narrow decision is wrong, and will only ensure that police databases remain error-filled in the future.
The specifics of the case are simple. A computer database said there was a felony arrest warrant pending for Bennie Herring when there actually wasn't. When the police came to arrest him, they searched his home and found illegal drugs and a gun. The Supreme Court was asked to rule whether the police had the right to arrest him for possessing those items, even though there was no legal basis for the search and arrest in the first place.
What's at issue here is the exclusionary rule, which basically says that unconstitutionally or illegally collected evidence is inadmissible in court. It might seem like a technicality, but excluding what is called "the fruit of the poisonous tree" is a security system designed to protect us all from police abuse.
We have a number of rules limiting what the police can do: rules governing arrest, search, interrogation, detention, prosecution, and so on. And one of the ways we ensure that the police follow these rules is by forbidding the police to receive any benefit from breaking them. In fact, we design the system so that the police actually harm their own interests by breaking them, because all evidence that stems from breaking the rules is inadmissible.
And that's what the exclusionary rule does. If the police search your home without a warrant and find drugs, they can't arrest you for possession. Since the police have better things to do than waste their time, they have an incentive to get a warrant.
The Herring case is more complicated, because the police thought they did have a warrant. The error was not a police error, but a database error. And, in fact, Chief Justice Roberts wrote for the majority: "The exclusionary rule serves to deter deliberate, reckless, or grossly negligent conduct, or in some circumstances recurring or systemic negligence. The error in this case does not rise to that level."
Unfortunately, Roberts is wrong. Government databases are filled with errors. People often can't see data about themselves, and have no way to correct the errors if they do learn of any. And more and more databases are trying to exempt themselves from the Privacy Act of 1974, and specifically the provisions that require data accuracy. The legal argument for excluding this evidence was best made by an amicus curiae brief filed by the Electronic Privacy Information Center, but in short, the court should exclude the evidence because it's the only way to ensure police database accuracy.
We are protected from becoming a police state by limits on police power and authority. This is not a trade-off we make lightly: we deliberately hamper law enforcement's ability to do its job because we recognize that these limits make us safer. Without the exclusionary rule, your only remedy against an illegal search is to bring legal action against the police—and that can be very difficult. We, the people, would rather have you go free than motivate the police to ignore the rules that limit their power.
By not applying the exclusionary rule in the Herring case, the Supreme Court missed an important opportunity to motivate the police to purge errors from their databases. Constitutional lawyers have written many articles about this ruling, but the most interesting idea comes from George Washington University professor Daniel J. Solove, who proposes this compromise: "If a particular database has reasonable protections and deterrents against errors, then the Fourth Amendment exclusionary rule should not apply. If not, then the exclusionary rule should apply. Such a rule would create an incentive for law enforcement officials to maintain accurate databases, to avoid all errors, and would ensure that there would be a penalty or consequence for errors."
Increasingly, we are being judged by the trail of data we leave behind us. Increasingly, data accuracy is vital to our personal safety and security. And if errors made by police databases aren't held to the same legal standard as errors made by policemen, then more and more innocent Americans will find themselves the victims of incorrect data.
This essay originally appeared on the Wall Street Journal website.
EDITED TO ADD (2/1): More on the assault on the exclusionary rule.
A Rational Response to Peanut Allergies and Children
Some parents of children with peanut allergies are not asking their school to ban peanuts. They consider it more important that teachers know which children are likely to have a reaction, and how to deal with it when it happens; i.e., how to use an EpiPen.
This is a much more resilient response to the threat. It works even when the peanut ban fails. It works whether the child has an anaphylactic reaction to nuts, fruit, dairy, gluten, or whatever.
It's so rare to see rational risk management when it comes to children and safety; I just had to blog it.
Related blog post, including a very lively comments section.
Remote Fireworks Launcher
How soon before these people are accused of helping the terrorists?
With around a thousand people in the UK injured every year by fireworks, a new electronic remote control 'Firework Launcher' will put safety first and ensure everyone enjoys the Christmas and new year celebrations. This innovative, compact device dramatically reduces the chance of injury by launching fireworks without a flame and at a safe distance -- so all you need to worry about is how spectacular those fireworks really are!
Do fireworks kill more people than terrorists each year? Probably.
Teaching Risk Analysis in School
"I regard myself as part of a movement we call risk literacy," Professor Spiegelhalter told The Times. "It should be a basic component of discussion about issues in media, politics and in schools.
Reminds me of John Paulos's Innumeracy.
Risk Mismanagement on Wall Street
Long article from the New York Times Magazine on Wall Street's risk management, and where it went wrong.
The most interesting part explains how the incentives for traders encouraged them to take asymmetric risks: trade-offs that would work out well 99% of the time but fail catastrophically the remaining 1%. So of course, this is exactly what happened.
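The incentive problem is easy to see with a toy simulation; the payoff numbers below are invented for illustration, not taken from the article:

```python
import random

def simulate_trader(years, gain=1.0, loss=200.0, p_fail=0.01, seed=0):
    """Cumulative P&L of a bet that wins `gain` with probability 1 - p_fail
    and loses `loss` otherwise. All numbers are illustrative, not market data."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        total += -loss if rng.random() < p_fail else gain
    return total

# The per-bet expected value is 0.99 * 1 - 0.01 * 200 = -1.01: a losing
# strategy overall, yet over a short evaluation window it usually shows a
# steady profit -- and a short window is exactly what bonuses are judged on.
expected_value = (1 - 0.01) * 1.0 - 0.01 * 200.0
```

A trader paid annually on `simulate_trader`-style results collects bonuses in the 99% years and walks away in the 1% year; the firm keeps the tail risk.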
Friday Squid Blogging: Squid Teething Toy
Interview with Me
BitArmor's No-Breach Guarantee
"We think this guarantee is going to encourage others to offer similar ones. Bruce Schneier has been calling on the industry to do something like this for a long time," he [BitArmor's CEO] says.
Sounds good, until you read the fine print:
If your company has to publicly report a breach while your data is protected by BitArmor, we'll refund the purchase price of your software. It's that simple. No gimmicks, no hassles.
So if BitArmor fails and someone steals your data, and then you get ridiculed in the press, sued, and lose your customers to competitors -- BitArmor will refund the purchase price.
Bottom line: PR gimmick, nothing more.
Yes, I think that software vendors need to accept liability for their products, and that we won't see real improvements in security until then. But it has to be real liability, not this sort of token liability. And it won't happen without the insurance companies; that's the industry that knows how to buy and sell liability.
EDITED TO ADD (2/13): BitArmor responds.
When Voting Machine Audit Logs Don't Help
Computer audit logs showing what occurred on a vote tabulation system that lost ballots in the November election are raising more questions not only about how the votes were lost, but also about the general reliability of voting system audit logs to record what occurs during an election and to ensure the integrity of results.
The article gets pretty technical, but is worth reading.
New Police Computer System Impeding Arrests
In Queensland, Australia, policemen are arresting fewer people because their new data-entry system is too annoying:
He said police were growing reluctant to make arrests following the latest phased roll-out of QPRIME, or Queensland Police Records Information Management Exchange.
This is a good example of how non-security incentives affect security decisions.
Identity, Authentication, and Authorization
The Presidential Limousine
EDITED TO ADD (2/16): Just look at that door. It's massive.
Breach Notification Laws
There are three reasons for breach notification laws. One, it's common politeness that when you lose something of someone else's, you tell him. The prevailing corporate attitude before the law—"They won't notice, and if they do notice they won't know it's us, so we are better off keeping quiet about the whole thing"—is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.
That last point needs a bit of explanation. The problem with companies protecting your data is that it isn't in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control—or even knowledge—of the company's security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost—both in bad publicity and the actual notification—of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.
So how has it worked?
Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University—Sasha Romanosky, Rahul Telang and Alessandro Acquisti—tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effects, people in states with notification laws should experience fewer incidences of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2 percent on average.
I think there's a combination of things going on. Identity theft is being reported far more today than five years ago, so it's difficult to compare identity theft rates before and after the state laws were enacted. Most identity theft occurs when someone's home or work computer is compromised, not from theft of large corporate databases, so the effect of these laws is small. Most of the security improvements companies made didn't make much of a difference, reducing the effect of these laws.
The laws rely on public shaming. It's embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there's an attenuation effect going on. The first major breach after the first state disclosure law was in February 2005 in California, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint's stock tanked, and it was shamed into improving its security.
Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million. The law worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. Data breach stories felt more like "crying wolf" and soon, data breaches were no longer news.
Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.
I'm still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it's not going to solve identity theft. As I've written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it's to make it difficult to use.
Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.
The Discovery of TEMPEST
Another recently unclassified NSA document: Jeffrey Friedman, "TEMPEST: A Signal Problem," NSA Cryptologic Spectrum, Summer 1972.
EDITED TO ADD (2/12): Article on the topic.
Dognapping
Dognapping -- or, at least, the fear of dognapping -- is on the rise. So people are no longer leaving their dogs tied up outside stores, and are buying leashes that can't be easily cut through.
In-Person Credit Card Scam
Surely this isn't new:
Suspects entered the business, selected merchandise worth almost $8,000. They handed a credit card with no financial backing to the clerk which when swiped was rejected by the cash register's computer. The suspects then informed the clerk that this rejection was expected and to contact the credit card company by phone to receive a payment approval confirmation code. The clerk was then given a number to call which was answered by another person in the scam who approved the purchase and gave a bogus confirmation number. The suspects then left the store with the unpaid for merchandise.
Anyone reading this blog would know enough not to call a number given to you by the potential purchaser, but presumably many store clerks don't have good security sense.
"The Cost of Fearing Strangers"
As we wrote in Freakonomics, most people are pretty terrible at risk assessment. They tend to overstate the risk of dramatic and unlikely events at the expense of more common and boring (if equally devastating) events. A given person might fear a terrorist attack and mad cow disease more than anything in the world, whereas in fact she'd be better off fearing a heart attack (and therefore taking care of herself) or salmonella (and therefore washing her cutting board thoroughly).
Nothing I haven't said before. Remember, if it's in the news don't worry about it. The very definition of news is "something that almost never happens." When something is so common that it's no longer news—car crashes, domestic violence—that's when you should worry about it.
Friday Squid Blogging: Your Octopus, Squid and Cephalopod Information Center
Podcast with Me
Cato recorded a podcast with me. Nothing you haven't read before.
Top Eleven Reasons Why Lists of Top Ten Bugs Don't Work
Michael Chertoff Claims that Hijackings were Routine Prior to 9/11
I missed this interview with DHS Secretary Michael Chertoff from December. It's all worth reading, but I want to point out where he claims that airplane hijackings were routine prior to 9/11:
What I can tell you is that in the period prior to September 12, 2001, it was a regular, routine issue to have American aircraft hijacked or blown up from time to time, whether it was Lockerbie or TSA or TWA 857 [I believe he meant TWA 847 – Joel] or 9/11 itself. And we haven't had even a serious attempt at a hijacking or bombing on an American plane since then.
BoingBoing provides the actual facts:
According to Airsafe.com, the last flight previous to 9/11 to be hijacked with fatalities from an American destination was a Pacific Southwest Airlines flight on December 7th, 1987. "Lockerbie" refers to Pan Am Flight 103 which was destroyed by a bomb over Scotland after departing from London Heathrow International Airport on its way to JFK, with screening done — as now — by an organization other than the TSA. TWA Flight 847 departed from Athens (Ellinikon) International Airport, also not under TSA oversight.
Economic Distress and Fear
This was the Quotation of the Day from January 12:
Part of the debtor mentality is a constant, frantically suppressed undercurrent of terror. We have one of the highest debt-to-income ratios in the world, and apparently most of us are two paychecks from the street. Those in power -- governments, employers -- exploit this, to great effect. Frightened people are obedient -- not just physically, but intellectually and emotionally. If your employer tells you to work overtime, and you know that refusing could jeopardize everything you have, then not only do you work the overtime, but you convince yourself that you're doing it voluntarily, out of loyalty to the company; because the alternative is to acknowledge that you are living in terror. Before you know it, you've persuaded yourself that you have a profound emotional attachment to some vast multinational corporation: you've indentured not just your working hours, but your entire thought process. The only people who are capable of either unfettered action or unfettered thought are those who -- either because they're heroically brave, or because they're insane, or because they know themselves to be safe -- are free from fear.
Quote is from The Likeness, a novel set in Ireland, by Tana French.
Michael Chertoff Parodied in The Onion
"While 9/11 has historically always fallen on 9/11, we as Americans need to be prepared for a wide range of dates," Chertoff said during a White House press conference. "There's a chance we could all find ourselves living in a post-6/10 world as early as next July. Unless, that is, we're already living in a pre-2/14 world."
Stupid Security Tricks: Key Management
Health bosses today admitted the memory stick was encrypted, but the password had been attached to the device when it went missing.
I'm sure they were so proud that they chose a secure encryption algorithm.
Two Security Camera Studies
From San Francisco:
San Francisco's Community Safety Camera Program was launched in late 2005 with the dual goals of fighting crime and providing police investigators with a retroactive investigatory tool. The program placed more than 70 non-monitored cameras in mainly high-crime areas throughout the city. This report released today (January 9, 2009) consists of a multi-disciplinary collaboration examining the program's technical aspects, management and goals, and policy components, as well as a quasi-experimental statistical evaluation of crime reports in order to provide a comprehensive evaluation of the program's effectiveness. The results find that while the program did result in a 20% reduction in property crime within the view of the cameras, other forms of crime were not affected, including violent crime, one of the primary targets of the program.
From the UK:
The first study of its kind into the effectiveness of surveillance cameras revealed that almost every Scotland Yard murder inquiry uses their footage as evidence.
My own writing on security cameras is here. The question isn't whether they're useful or not, but whether their benefits are worth the costs.
Shaping the Obama Administration's Counterterrorism Strategy
I'm at a two-day conference: Shaping the Obama Administration's Counterterrorism Strategy, sponsored by the Cato Institute in Washington, DC. It's sold out, but you can watch or listen to the event live on the Internet. I'll be on a panel tomorrow at 9:00 AM.
Bad Password Security at Twitter
Twitter fell to a dictionary attack because the site allowed unlimited failed login attempts:
Cracking the site was easy, because Twitter allowed an unlimited number of rapid-fire log-in attempts.
Coding Horror has more, but -- come on, people -- this is basic stuff.
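The fix is equally basic. Here's a minimal sketch of per-account throttling, the control Twitter was missing; the class name, thresholds, and in-memory storage are all invented for illustration, and a real deployment would persist counters and rate-limit by IP as well:

```python
import time

class LoginThrottle:
    """Lock an account for `lockout` seconds after `max_failures` bad
    passwords, making rapid-fire dictionary attacks impractical.
    (Hypothetical API; thresholds chosen for illustration only.)"""

    def __init__(self, max_failures=5, lockout=300.0):
        self.max_failures = max_failures
        self.lockout = lockout
        self.failures = {}  # username -> (failure count, time of last failure)

    def allow_attempt(self, username, now=None):
        now = time.time() if now is None else now
        count, last = self.failures.get(username, (0, 0.0))
        if count >= self.max_failures and now - last < self.lockout:
            return False  # locked out; reject before checking the password
        return True

    def record_failure(self, username, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(username, (0, 0.0))
        self.failures[username] = (count + 1, now)

    def record_success(self, username):
        self.failures.pop(username, None)  # reset on a good login
```

At five guesses per five minutes, an attacker gets about 1,400 tries a day instead of the millions an unthrottled endpoint allows.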
EDITED TO ADD (1/14): Twitter responds.
DHS's Files on Travelers
This is interesting:
I had been curious about what's in my travel dossier, so I made a Freedom of Information Act (FOIA) request for a copy. I'm posting here a few sample pages of what officials sent me.
Movie-Plot Threat: Terrorists Using Insects
Fear sells books:
Terrorists could easily contrive an "insect-based" weapon to import an exotic disease, according to an entomologist who's promoting a book on the subject.
Friday Squid Blogging: Squid Hats
Friday Squid Blogging: Bizarre Squid Reproductive Habits
Lots of them:
Hoving investigated the reproductive techniques of no fewer than ten different squids and related cuttlefish -- from the twelve-metre long giant squid to a mini-squid of no more than twenty-five millimetres in length. Along the way he made a number of remarkable discoveries. Hoving: "Reproduction is no fun if you're a squid. With one species, the Taningia danae, I discovered that the males give the females cuts of at least 5 centimetres deep in their necks with their beaks or hooks -- they don't have suction pads. They then insert their packets of sperm, also called spermatophores, into the cuts."
Impersonation
Impersonation isn't new. In 1556, a Frenchman was executed for impersonating Martin Guerre and this week hackers impersonated Barack Obama on Twitter. It's not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don't know, we interact with people through "narrow" communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.
Traditional impersonation involves people fooling people. It's still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.
These tricks work because we all regularly interact with people we don't know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That's just someone with a badge or a uniform. But badges and ID cards only help if you know how to verify one. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman's badge from a forged one?
Still, it's human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a Web site, we use the professionalism of the page to judge whether or not it's really legitimate -- never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.
Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company's tech support from a hacker trying to break into your network -- or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.
These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with -- and I'm not making this up -- a photograph of a face held in front of their own. Even the most bored policeman wouldn't fall for any of those tricks.
This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn't know any better.
Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as faxed signatures work, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.
This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn't always better. Banks make this trade-off when they don't bother authenticating signatures on checks under amounts like $25,000; it's cheaper to deal with fraud after the fact. Web sites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don't bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.
Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than preventing legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better to not launch the missiles.
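The ATM trade-off above reduces to a one-line expected-cost calculation; every number below is an illustrative assumption, not real banking data:

```python
def expected_cost(false_accept_rate, false_reject_rate,
                  cost_fraud, cost_locked_out, fraction_fraud=0.001):
    """Expected cost per transaction at one operating point of an
    authentication system. All parameters are hypothetical."""
    return (fraction_fraud * false_accept_rate * cost_fraud
            + (1 - fraction_fraud) * false_reject_rate * cost_locked_out)

# Two hypothetical ATM operating points. The strict setting blocks almost
# all fraud but rejects 5% of legitimate customers; the lenient setting
# lets a little fraud through but almost never locks anyone out.
strict = expected_cost(0.0001, 0.05, cost_fraud=500, cost_locked_out=10)
lenient = expected_cost(0.01, 0.001, cost_fraud=500, cost_locked_out=10)
```

With these numbers the lenient setting costs far less per transaction, which is why the ATM tolerates occasional fraud. Swap in a nuclear launch system, where a false accept is catastrophic, and the same arithmetic pushes the operating point the other way.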
Decentralized authentication systems work better than centralized ones. Open your wallet, and you'll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver's license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn't give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.
Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That's why all of a corporation's assets and information aren't available to anyone who can bluff his way into the corporate offices. That's why credit card companies have expert systems analyzing suspicious spending patterns. And it's why identity theft won't be solved by making personal information harder to steal.
We can reduce the risk of impersonation, but it will always be with us; technology cannot "solve" it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security, and when Barack Obama calls to congratulate you on your reelection, you won't believe it's him.
This essay originally appeared in The Wall Street Journal.
Interview with Me
I was interviewed for CSO Magazine.
Allocating Resources: Financial Fraud vs. Terrorism
The FBI has been forced to transfer agents from its counter-terrorism divisions to work on Bernard Madoff's alleged $50 billion fraud scheme as victims of the biggest scam in the world continue to emerge.
The Freakonomics blog discusses this:
This might lead you to ask an obvious counter-question: Has the anti-terror enforcement since 9/11 in the U.S. helped fuel the financial meltdown? That is, has the diversion of resources, personnel, and mindshare toward preventing future terrorist attacks -- including, you'd have to say, the wars in Afghanistan and Iraq -- contributed to a sloppy stewardship of the financial industry?
It quotes a New York Times article:
Federal officials are bringing far fewer prosecutions as a result of fraudulent stock schemes than they did eight years ago, according to new data, raising further questions about whether the Bush administration has been too lax in policing Wall Street.
We've seen this problem over and over again when it comes to counterterrorism: in an effort to defend against the rare threats, we make ourselves more vulnerable to the common threats.
Biometrics
Biometrics may seem new, but they're the oldest form of identification. Tigers recognize each other's scent; penguins recognize calls. Humans recognize each other by sight from across the room, voices on the phone, signatures on contracts and photographs on driver's licenses. Fingerprints have been used to identify people at crime scenes for more than 100 years.
What is new about biometrics is that computers are now doing the recognizing: thumbprints, retinal scans, voiceprints, and typing patterns. There's a lot of technology involved here, in trying to both limit the number of false positives (someone else being mistakenly recognized as you) and false negatives (you being mistakenly not recognized). Generally, a system can choose to have less of one or the other; less of both is very hard.
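The trade-off between false positives and false negatives usually comes down to a single matching threshold: set it low and more impostors get in; set it high and more legitimate users get locked out. Here's a minimal sketch of that trade-off in Python; the similarity scores and threshold values are invented for illustration, not taken from any real matcher:

```python
# Sketch: one matching threshold trades false positives against
# false negatives. All scores and thresholds here are hypothetical.

def matches(similarity_score: float, threshold: float) -> bool:
    """Accept the claimed identity if the biometric similarity
    score meets or exceeds the threshold."""
    return similarity_score >= threshold

# Hypothetical matcher output (0.0 = no resemblance, 1.0 = identical).
genuine_user_score = 0.78   # same person, slightly noisy scan
impostor_score = 0.64       # different person, coincidentally close

# A strict threshold rejects the impostor -- but also rejects the
# real user (a false negative).
assert not matches(impostor_score, threshold=0.80)
assert not matches(genuine_user_score, threshold=0.80)  # false negative

# A lenient threshold accepts the real user -- but also accepts the
# impostor (a false positive).
assert matches(genuine_user_score, threshold=0.60)
assert matches(impostor_score, threshold=0.60)          # false positive
```

No single threshold fixes both failures at once; reducing both requires a better matcher, or a second factor alongside the biometric.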
Biometrics can vastly improve security, especially when paired with another form of authentication such as passwords. But it's important to understand their limitations as well as their strengths. On the strength side, biometrics are hard to forge. It's hard to affix a fake fingerprint to your finger or make your retina look like someone else's. Some people can mimic voices, and make-up artists can change people's faces, but these are specialized skills.
On the other hand, biometrics are easy to steal. You leave your fingerprints everywhere you touch, your iris scan everywhere you look. Hackers have regularly copied officials' fingerprints from objects they've touched and posted them on the Internet. We haven't yet had an example of a large biometric database being hacked into, but the possibility is there. Biometrics are unique identifiers, but they're not secrets.
And a stolen biometric can fool some systems. It can be as easy as cutting out a signature, pasting it onto a contract, and then faxing the page to someone. The person on the other end doesn't know that the signature isn't valid because he didn't see it affixed to the page. Remote logins by fingerprint fail in the same way. If there's no way to verify that the print came from an actual reader, not from a stored computer file, the system is much less secure.
A more secure system is to use a fingerprint to unlock your mobile phone or computer. Because there is a trusted path from the fingerprint reader to the stored fingerprint the system uses to compare, an attacker can't inject a previously stored print as easily as he can cut and paste a signature. A photo on an ID card works the same way: the verifier can compare the face in front of him with the face on the card.
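One way to build that trusted path is for the reader to prove it produced the scan just now, for example by computing a MAC over a fresh server challenge together with the scan. The sketch below is hypothetical, not any real product's protocol; the shared reader key and the message format are assumptions made for illustration:

```python
import hashlib
import hmac
import os

# Assumption: a secret key provisioned into the trusted reader at
# manufacture and known to the verifying server.
READER_KEY = os.urandom(32)

def reader_respond(challenge: bytes, fingerprint_template: bytes) -> bytes:
    """The reader binds the live scan to the server's fresh challenge,
    so a previously stored template can't simply be replayed."""
    return hmac.new(READER_KEY, challenge + fingerprint_template,
                    hashlib.sha256).digest()

def server_verify(challenge: bytes, fingerprint_template: bytes,
                  tag: bytes) -> bool:
    """The server recomputes the MAC and compares in constant time."""
    expected = hmac.new(READER_KEY, challenge + fingerprint_template,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# A fresh challenge per login attempt makes each response one-time.
challenge = os.urandom(16)
template = b"minutiae-data-from-live-scan"  # placeholder scan data
tag = reader_respond(challenge, template)
assert server_verify(challenge, template, tag)

# Replaying the same tag against a new challenge fails.
assert not server_verify(os.urandom(16), template, tag)
```

The point isn't the particular MAC construction; it's that the verifier gets evidence the biometric passed through a trusted reader at the time of verification, which a cut-and-pasted signature or a stored fingerprint file can't provide.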
Fingerprints on ID cards are more problematic, because the attacker can try to fool the fingerprint reader. Researchers have made false fingers out of rubber or glycerin. Manufacturers have responded by building readers that also detect pores or a pulse.
The lesson is that biometrics work best if the system can verify that the biometric came from the person at the time of verification. The biometric identification system at the gates of the CIA headquarters works because there's a guard with a large gun making sure no one is trying to fool the system.
Of course, not all systems need that level of security. At Counterpane, the security company I founded, we installed hand geometry readers at the access doors to the operations center. Hand geometry is a hard biometric to copy, and the system was closed and didn't allow electronic forgeries. It worked very well.
One more problem with biometrics: they don't fail well. Passwords can be changed, but if someone copies your thumbprint, you're out of luck: you can't update your thumb. Passwords can be backed up, but if you alter your thumbprint in an accident, you're stuck. The failures don't have to be this spectacular: a voiceprint reader might not recognize someone with a sore throat, or a fingerprint reader might fail outside in freezing weather. Biometric systems need to be analyzed in light of these possibilities.
Biometrics are easy, convenient, and when used properly, very secure; they're just not a panacea. Understanding how they work and fail is critical to understanding when they improve security and when they don't.
Reporting Unruly Football Fans via Text Message
This system is available in most NFL stadiums:
Fans still are urged to complain to an usher or call a security hotline in the stadium to report unruly behavior. But text-messaging lines -- typically advertised on stadium scoreboards and on signs where fans gather -- are aimed at allowing tipsters to surreptitiously alert security personnel via cellphone without getting involved with rowdies or missing part of a game.
The article talks a lot about false alarms and prank calls, but -- in general -- this seems like a good use of technology.
The NSA on the Origins of the NSA
From its website.
Censorship on Google Maps
"Blurred Out: 51 Things You Aren't Allowed to See on Google Maps." An interesting list.
EDITED TO ADD (1/6): There seem to be a lot of problems with the list. Notably, it includes a story about the Singapore government claiming it had copyrighted its geography, which in fact was an April Fools joke.
The Best Capers of 2008
Kip Hawley Is Starting to Sound Like Me
"In the hurly-burly and the infinite variety of travel, you can end up with nonsensical results in which the T.S.A. person says, 'Well, I'm just following the rules,'" Mr. Hawley said. "But if you have an enemy who is going to study your technology and your process, and if you have something they can figure out a way to get around, and they're always figuring, then you have designed in a vulnerability."
FBI's New Cryptanalysis Contest
From their website.
Trends in Counterfeit Currency
It's getting worse:
More counterfeiters are using today's ink-jet printers, computers and copiers to make money that's just good enough to pass, he said, even though their product is awful.
It's interesting. Counterfeits are becoming easier to detect while people are becoming less skilled at detecting them:
Part of the problem, Green said, is that the government has changed the money so much to foil counterfeiting. With all the new bills out there, citizens and even many police officers don't know what they're supposed to look like.
Another article on the topic.
Friday Squid Blogging: Climate Change Affects Squids
Friday Squid Blogging: Squid Attacks ROV
Video. Looks like a Humboldt squid.
Another Recently Released NSA Document
In response to a declassification request by the National Security Archive, the secretive National Security Agency has declassified large portions of a four-part "top-secret Umbra" study, American Cryptology during the Cold War. Despite major redactions, this history discloses much new information about the agency's history and the role of SIGINT and communications intelligence (COMINT) during the Cold War. Researched and written by NSA historian Thomas Johnson, the three parts released so far provide a frank assessment of the history of the Agency and its forerunners, warts-and-all.
Powered by Movable Type. Photo at top by Geoffrey Stone.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.