Schneier on Security
A blog covering security and security technology.
August 2009 Archives
A recent report has concluded that London's surveillance cameras have solved one crime per thousand cameras per year.
David Davis MP, the former shadow home secretary, said: "It should provoke a long overdue rethink on where the crime prevention budget is being spent."
Earlier this year separate research commissioned by the Home Office suggested that the cameras had done virtually nothing to cut crime, but were most effective in preventing vehicle crimes in car parks.
I haven't seen the report, but I know it's hard to figure out when a crime has been "solved" by a surveillance camera. To me, the crime has to have been unsolvable without the cameras. Repeatedly I see pro-camera lobbyists pointing to the surveillance-camera images that identified the 7/7 London Transport bombers, but it is obvious that they would have been identified even without the cameras.
And it would really help my understanding of that £20,000 figure (I assume it is calculated as £200 million for the cameras, divided by the number of crimes solved: one per thousand cameras per year, over ten years) if I knew what sorts of crimes the cameras "solved." If the £200 million solved 10,000 murders, it might very well be a good security trade-off. But my guess is that most of the crimes were of a much lower level.
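Here's that back-of-the-envelope arithmetic as a sketch. The one-million-camera count is an assumption -- it's the figure commonly cited for London -- and not something from the report:

```python
# Back-of-the-envelope reconstruction of the £20,000-per-crime figure.
# The one-million-camera count is an assumption, not from the report.
cameras = 1_000_000
total_cost_gbp = 200_000_000
crimes_per_camera_per_year = 1 / 1000
years = 10

crimes_solved = cameras * crimes_per_camera_per_year * years   # 10,000
cost_per_crime = total_cost_gbp / crimes_solved
print(f"{crimes_solved:,.0f} crimes solved; £{cost_per_crime:,.0f} per crime")
# -> 10,000 crimes solved; £20,000 per crime
```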
Cameras are largely security theater:
A Home Office spokeswoman said CCTVs "help communities feel safer".
I like to think this isn't a typo.
The U.S. Federal Bureau of Investigation is trying to figure out who is sending laptop computers to state governors across the U.S., including West Virginia Governor Joe Manchin and Wyoming Governor Dave Freudenthal. Some state officials are worried that the laptops may contain malicious software.
During chase scenes, movie protagonists often make their getaway by releasing some sort of decoy to cover their escape or distract their pursuer. But this tactic isn't reserved for action heroes -- some deep-sea animals also evade their predators by releasing decoys -- glowing ones.
Not beer, just the glasses:
The Home Office has commissioned a new design, in an attempt to stop glasses being used as weapons.
I don't think this will go anywhere, but the sheer idiocy is impressive. Reminds me of the call to ban pointy knives. That recommendation also came out of the UK. What's going on over there?
Someone has been charged with stealing 130 million credit card numbers.
Yes, it's a lot, but that's the sort of quantities credit card numbers come in. They come by the millions, in large database files. Even if you only want ten, you have to steal millions. I'm sure every one of us has a credit card in our wallet whose number has been stolen. It'll probably never be used for fraudulent purposes, but it's in some stolen database somewhere.
Years ago, when giving advice on how to avoid identity theft, I would tell people to shred their trash. Today, that advice is completely obsolete. No one steals credit card numbers one by one out of the trash when they can be stolen by the millions from merchant databases.
Interesting video demonstrating how a policeman can manipulate the results of a Breathalyzer.
The sorts of crimes we've been seeing perpetrated against individuals are starting to be perpetrated against small businesses:
In July, a school district near Pittsburgh sued to recover $700,000 taken from it. In May, a Texas company was robbed of $1.2 million. An electronics testing firm in Baton Rouge, La., said it was bilked of nearly $100,000.
This has the potential to grow into a very big problem. Even worse:
Businesses do not enjoy the same legal protections as consumers when banking online. Consumers typically have up to 60 days from the receipt of a monthly statement to dispute any unauthorized charges.
And, of course, the security externality means that the banks care much less:
"The banks spend a lot of money on protecting consumer customers because they owe money if the consumer loses money," Litan said. "But the banks don't spend the same resources on the corporate accounts because they don't have to refund the corporate losses."
As part of their training, federal agents engage in mock exercises in public places. Sometimes, innocent civilians get involved.
Every day, as Washingtonians go about their overt lives, the FBI, CIA, Capitol Police, Secret Service and U.S. Marshals Service stage covert dramas in and around the capital where they train. Officials say the scenarios help agents and officers integrate the intellectual, physical and emotional aspects of classroom instruction. Most exercises are performed inside restricted compounds. But they also unfold in public parks, suburban golf clubs and downtown transit stations.
EDITED TO ADD (9/11): It happened in D.C., in the Potomac River, with the Coast Guard.
It turns out that flipping a coin has all sorts of non-randomness:
Here are the broad strokes of their research:
The math doesn't look good: "When Zombies Attack!: Mathematical Modelling of an Outbreak of Zombie Infection."
An outbreak of zombies infecting humans is likely to be disastrous, unless extremely aggressive tactics are employed against the undead. While aggressive quarantine may eradicate the infection, this is unlikely to happen in practice. A cure would only result in some humans surviving the outbreak, although they will still coexist with zombies. Only sufficiently frequent attacks, with increasing force, will result in eradication, assuming the available resources can be mustered in time.
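For the curious, the paper's basic model is a susceptible-zombie-removed variant of the standard epidemic equations. Here's a minimal sketch of that kind of model; the parameter values are invented for illustration, and the birth term is dropped since the outbreak is short:

```python
# A minimal SZR (susceptible-zombie-removed) sketch of the kind of
# model the paper analyzes. beta: zombies infecting humans; alpha:
# humans destroying zombies; zeta: the removed rising again; delta:
# natural death. Parameter values are invented for illustration.
def szr_step(S, Z, R, dt, beta=0.0095, alpha=0.005, zeta=0.0001, delta=0.0001):
    dS = -beta * S * Z - delta * S
    dZ = beta * S * Z + zeta * R - alpha * S * Z
    dR = delta * S + alpha * S * Z - zeta * R
    return S + dS * dt, Z + dZ * dt, R + dR * dt

S, Z, R = 500.0, 1.0, 0.0           # 500 humans, one zombie
for _ in range(1000):               # simple Euler integration
    S, Z, R = szr_step(S, Z, R, dt=0.01)
print(f"S={S:.1f}, Z={Z:.1f}, R={R:.1f}")
# With infection (beta) outpacing eradication (alpha), the humans lose.
```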
Palaeontologists have drawn a picture using ink extracted from a preserved fossilised squid uncovered during a dig in Trowbridge, Wiltshire.
The calcified ink sac was ground up with an ammonia solution to make usable ink.
From the humor website Cracked: "The 5 Most Embarrassing Failures in the History of Terrorism."
Yes, it's funny. But remember that these are the terrorist masterminds that politicians invoke to keep us scared.
My 2007 essay, "Portrait of the Modern Terrorist as an Idiot," is also relevant. But less funny.
Marc Weber Tobias again:
The new Assa Solo was recently introduced in Europe and we believe is the latest Cliq design. We were provided with samples and were able to show a reporter for Wired’s Threat Level how to completely circumvent the electronic credentials in less than thirty seconds, which she easily accomplished. This is the latest and most current example of a failure in security engineering at Assa.
Me on locks and lockpicking.
Scientists looking for better ways to detect lies have found a promising one: increasing suspects' "cognitive load." For a host of reasons, their theory goes, lying is more mentally taxing than telling the truth. Performing an extra task while lying or telling the truth should therefore affect the liars more.
A pickup truck driver is accused of trying to run over a bicyclist and then coming after him brandishing an ax after a road-rage incident in Burnsville last weekend.
Seems like a normal threat to me. Or assault, with intent to do bodily harm. What's wrong with those criminal statutes?
Let's save the word "terrorism" for things that actually are terrorism.
This isn't good:
The scientists fabricated blood and saliva samples containing DNA from a person other than the donor of the blood and saliva. They also showed that if they had access to a DNA profile in a database, they could construct a sample of DNA to match that profile without obtaining any tissue from that person.
EDITED TO ADD (8/19): A better article.
Let's all be afraid:
But it adds: "Robots that effectively mimic human appearance and movements may be used as human proxies."
I'm sure I've seen this stuff in movies.
Flash has the equivalent of cookies, and they're hard to delete:
Unlike traditional browser cookies, Flash cookies are relatively unknown to web users, and they are not controlled through the cookie privacy controls in a browser. That means even if a user thinks they have cleared their computer of tracking objects, they most likely have not.
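If you're curious what's sitting on your own machine, Flash cookies are just .sol files in a fixed directory. A quick sketch -- the paths below are the usual Flash Player storage locations as I understand them, so treat them as assumptions:

```python
# List Flash cookies (.sol files) on disk. The directories below are
# the usual Flash Player storage locations; they are assumptions and
# may differ by platform or version.
import os
from pathlib import Path

CANDIDATE_DIRS = [
    Path.home() / ".macromedia" / "Flash_Player" / "#SharedObjects",          # Linux
    Path.home() / "Library" / "Preferences" / "Macromedia" / "Flash Player",  # macOS
    Path(os.environ.get("APPDATA", "")) / "Macromedia" / "Flash Player",      # Windows
]

for d in CANDIDATE_DIRS:
    if d.is_dir():
        for sol in d.rglob("*.sol"):
            print(sol)  # deleting these files clears the Flash cookies
```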
Excellent paper: "On Locational Privacy, and How to Avoid Losing it Forever."
Some threats to locational privacy are overt: it's evident how cameras backed by face-recognition software could be misused to track people and record their movements. In this document, we're primarily concerned with threats to locational privacy that arise as a hidden side-effect of clearly useful location-based services.
I've already written about wholesale surveillance.
For over three years the pair hacked into a Department of Transportation website called Safersys.org, which maintains a list of licensed interstate-trucking companies and brokers, according to an affidavit (.pdf) filed by a DOT investigator. There, they would temporarily change the contact information for a legitimate trucking company to an address and phone number under their control.
Actually, not so clever. I'm amazed it went on for three years. You'd think that more than a few of the subcontractors would pick up the phone and call the original customers -- and they'd figure out what happened. Maybe there are just so many trucking companies, and so many people who need cargo shipped places, that they were able to hide for three years.
But this scheme was bound to unravel sooner or later. If the criminal middlemen had legitimately subcontracted the work and just pocketed the difference, they might have remained undiscovered forever. But that's much less profit per contract.
Physical locks aren't very good. They keep the honest out, but any burglar worth his salt can pick the common door lock pretty quickly.
It used to be that most people didn't know this. Sure, we all watched television criminals and private detectives pick locks with an ease only found on television and thought it realistic, but somehow we still held onto the belief that our own locks kept us safe from intruders.
The Internet changed that.
First was the MIT Guide to Lockpicking, written by the late Bob ("Ted the Tool") Baldwin. Then came Matt Blaze's 2003 paper on breaking master key systems. After that, came a flood of lock picking information on the Net: opening a bicycle lock with a Bic pen, key bumping, and more. Many of these techniques were already known in both the criminal and locksmith communities. The locksmiths tried to suppress the knowledge, believing their guildlike secrecy was better than openness. But they've lost: Never has there been more public information about lock picking -- or safecracking, for that matter.
Lock companies have responded with more complicated locks, and more complicated disinformation campaigns.
There seems to be a limit to how secure you can make a wholly mechanical lock, as well as a limit to how large and unwieldy a key the public will accept. As a result, there is increasing interest in other lock technologies.
As a security technologist, I worry that if we don't fully understand these technologies and the new sorts of vulnerabilities they bring, we may be trading a flawed technology for an even worse one. Electronic locks are vulnerable to attack, often in new and surprising ways.
Start with keypads, more and more common on house doors. These have the benefit that you don't have to carry a physical key around, but there's the problem that you can't give someone the code for a day and then take it away when that day is over. As such, the security decays over time -- the longer the keypad is in use, the more people know how to get in. More complicated electronic keypads have a variety of options for dealing with this, but electronic keypads work only when the power is on, and battery-powered locks have their own failure modes. Plus, far too many people never bother to change the default entry code.
Keypads have other security failures, as well. I regularly see keypads where four of the 10 buttons are more worn than the other six. They're worn from use, of course, and instead of 10,000 possible entry codes, I now have to try only 24.
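The math is simple: if the code is four digits long and uses each of the four worn digits exactly once, there are only 4 x 3 x 2 x 1 = 24 orderings left to try:

```python
# With four worn buttons and a four-digit code that uses each worn
# digit once, the search space collapses from 10^4 to 4! = 24.
from itertools import permutations

worn = "2580"  # hypothetical worn digits
candidates = ["".join(p) for p in permutations(worn)]
print(len(candidates))   # 24
print(candidates[:5])    # ['2580', '2508', '2850', ...]
```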
Fingerprint readers are another technology, but there are many known security problems with those. And there are operational problems, too: They're hard to use in the cold or with sweaty hands; and leaving a key with a neighbor to let the plumber in starts having a spy-versus-spy feel.
Some companies are going even further. Earlier this year, Schlage launched a series of locks that can be opened either by a key, a four-digit code, or the Internet. That's right: The lock is online. You can send the lock SMS messages or talk to it via a Website, and the lock can send you messages when someone opens it -- or even when someone tries to open it and fails.
Sounds nifty, but putting a lock on the Internet opens up a whole new set of problems, none of which we fully understand. Even worse: Security is only as strong as the weakest link. Schlage's system combines the inherent "pickability" of a physical lock, the new vulnerabilities of electronic keypads, and the hacking risk of being online. For most applications, that's simply too much risk.
This essay previously appeared on DarkReading.com.
August's Communications of the ACM has an interesting article: "An Ethics Code for U.S. Intelligence Officers," by former NSAers Brian Snow and Clint Brooks. The article is behind a paywall, but here's the code:
Draft Statement of Ethics for the Intelligence Community
It's supposed to be for U.S. intelligence officers, but with one inconsequential modification it could be made international.
There are several ways two people can divide a piece of cake in half. One way is to find someone impartial to do it for them. This works, but it requires another person. Another way is for one person to divide the piece, and the other person to complain (to the police, a judge, or his parents) if he doesn't think it’s fair. This also works, but still requires another person -- at least to resolve disputes. A third way is for one person to do the dividing, and for the other person to choose the half he wants.
That third way, known by kids, pot smokers, and everyone else who needs to divide something up quickly and fairly, is called cut-and-choose. People use it because it’s a self-enforcing protocol: a protocol designed so that neither party can cheat.
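Here's a toy sketch of why it's self-enforcing, with the cake reduced to a number: whatever cut the first person makes, the second person takes the bigger piece, so the cutter's best strategy is an even split.

```python
# Cut-and-choose: player 1 cuts, player 2 picks the piece she prefers.
# Whatever cut player 1 makes, player 2 takes the larger piece, so
# player 1 keeps min(cut, cake - cut) -- maximized by cutting evenly.
def cut_and_choose(cake: float, cut: float) -> tuple[float, float]:
    pieces = sorted([cut, cake - cut])
    return pieces[0], pieces[1]  # (cutter's share, chooser's share)

for cut in (20.0, 50.0, 70.0):
    cutter, chooser = cut_and_choose(100.0, cut)
    print(f"cut at {cut}: cutter gets {cutter}, chooser gets {chooser}")
# Any uneven cut only hurts the cutter; cheating doesn't pay.
```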
Self-enforcing protocols are useful because they don't require trusted third parties. Modern systems for transferring money -- checks, credit cards, PayPal -- require trusted intermediaries like banks and credit card companies to facilitate the transfer. Even cash transfers require a trusted government to issue currency, and they take a cut in the form of seigniorage. Modern contract protocols require a legal system to resolve disputes. Modern commerce wasn't possible until those systems were in place and generally trusted, and complex business contracts still aren't possible in areas where there is no fair judicial system. Barter is a self-enforcing protocol: nobody needs to facilitate the transaction or resolve disputes. It just works.
Self-enforcing protocols are safer than other types because participants don't gain an advantage from cheating. Modern voting systems are rife with the potential for cheating, but an open show of hands in a room -- one that everyone in the room can count for himself -- is self-enforcing. On the other hand, there’s no secret ballot, late voters are potentially subjected to coercion, and it doesn't scale well to large elections. But there are mathematical election protocols that have self-enforcing properties, and some cryptographers have suggested their use in elections.
Here’s a self-enforcing protocol for determining property tax: the homeowner decides the value of the property and calculates the resultant tax, and the government can either accept the tax or buy the home for that price. Sounds unrealistic, but the Greek government implemented exactly that system for the taxation of antiquities. It was the easiest way to motivate people to accurately report the value of antiquities.
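A quick sketch of the incentive, with invented numbers: underdeclaring saves a little tax but risks a forced sale at the lowball price.

```python
# The self-assessed property tax: declare a value; the government
# either taxes you on it, or buys the property at your declared
# price. The tax rate and values are invented for illustration.
TAX_RATE = 0.02
TRUE_VALUE = 500_000

def owner_outcome(declared: int, govt_buys: bool) -> int:
    if govt_buys:
        return declared - TRUE_VALUE       # forced sale at declared price
    return -int(TAX_RATE * declared)       # just pay the tax

print(owner_outcome(500_000, False))  # honest:  -10,000 in tax
print(owner_outcome(300_000, False))  # lowball: -6,000 if it slips through...
print(owner_outcome(300_000, True))   # ...but -200,000 if the state buys
```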
A VAT, or value-added tax, is a self-enforcing alternative to sales tax. Sales tax is collected on the entire value of the thing at the point of retail sale; both the customer and the storeowner want to cheat the government. But VAT is collected at every step between raw materials and that final customer; it's the difference between the price of the materials sold and the materials bought. Buyers want official receipts with as high a purchase price as possible, so each buyer along the chain keeps each seller honest. Yes, there's still an incentive to cheat on the final sale to the customer, but the amount of tax collected at that point is much lower.
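A worked example, with invented prices and rate: each party's VAT bill depends on the receipts from the party before it, so honesty propagates up the chain.

```python
# VAT at each step is levied on the value added: the sale price minus
# the purchase price. Each buyer wants an official receipt showing a
# high purchase price (it reduces his own VAT), which keeps the
# seller's reported sale price honest. Numbers are invented.
VAT_RATE = 0.20
chain = [("raw materials", 0, 10), ("manufacturer", 10, 40),
         ("wholesaler", 40, 70), ("retailer", 70, 100)]

total_vat = 0.0
for name, bought, sold in chain:
    vat = VAT_RATE * (sold - bought)
    total_vat += vat
    print(f"{name}: value added {sold - bought}, VAT {vat:.2f}")
print(f"total VAT: {total_vat:.2f}")
# Same take as a 20% sales tax on the final 100, but only the last 30
# of value is exposed to cheating at the retail sale.
```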
Of course, self-enforcing protocols aren't perfect. For example, someone in a cut-and-choose can punch the other guy and run away with the entire piece of cake. But perfection isn't the goal here; the goal is to reduce cheating by taking away potential avenues of cheating. Self-enforcing protocols improve security not by implementing countermeasures that prevent cheating, but by leveraging economic incentives so that the parties don't want to cheat.
One more self-enforcing protocol. Imagine a pirate ship that encounters a storm. The pirates are all worried about their gold, so they put their personal bags of gold in the safe. During the storm, the safe cracks open, and all the gold mixes up and spills out on the floor. How do the pirates determine who owns what? They each announce to the group how much gold they had. If the total of all the announcements matches what’s in the pile, it’s divided as people announced. If it’s different, then the captain keeps it all. I can think of all kinds of ways this can go wrong -- the captain and one pirate can collude to throw off the total, for example -- but it is self-enforcing against individual misreporting.
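Here's the protocol as a sketch, showing both the property it has and the one it doesn't:

```python
# The pirates' gold protocol: if the announced amounts sum to the
# pile, everyone gets what he announced; otherwise the captain keeps
# it all. A lone over-claimer makes the total exceed the pile and
# loses everything. But the protocol only checks the sum, so
# coordinated misreports that preserve the total go undetected --
# it's self-enforcing against individual misreporting only.
def divide_gold(pile: int, claims: dict[str, int]) -> dict[str, int]:
    if sum(claims.values()) == pile:
        return claims
    return {"captain": pile}  # totals don't match: captain keeps it all

honest = {"anne": 30, "bart": 50, "carl": 20}
print(divide_gold(100, honest))                  # paid as announced
print(divide_gold(100, {**honest, "anne": 40}))  # lone cheater: captain wins
print(divide_gold(100, {**honest, "anne": 40, "bart": 40}))  # sum still 100: undetected
```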
This essay originally appeared on ThreatPost.
EDITED TO ADD (8/12): Shotgun clauses are an example of a self-enforcing protocol.
Here's some complicated advice on securing passwords that -- I'll bet -- no one follows.
I regularly break seven of those rules. How about you? (Here's my advice on choosing secure passwords.)
Humboldt squid feed in surface waters at night, then retreat to great depths during daylight hours. "They spend the day 300 meters deep where oxygen levels are very low," Seibel said. "We wanted to know how they deal with so little oxygen."
I don't trust the research, or the squid.
People have a natural intuition about risk, and in many ways it's very good. It fails at times due to a variety of cognitive biases, but for normal risks that people regularly encounter, it works surprisingly well: often better than we give it credit for.
This struck me as I listened to yet another conference presenter complaining about security awareness training. He was talking about the difficulty of getting employees at his company to actually follow his security policies: encrypting data on memory sticks, not sharing passwords, not logging in from untrusted wireless networks. "We have to make people understand the risks," he said.
It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren't serious.
Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That's what the company rewards, and that's what the company actually wants.
"Fire someone who breaks security procedure, quickly and publicly," I suggested to the presenter. "That'll increase security awareness faster than any of your posters or lectures or newsletters." If the risks are real, people will get it.
You see the same sort of risk intuition on motorways. People are less careful about posted speed limits than they are about the actual speeds police issue tickets for. It's also true on the streets: people respond to real crime rates, not public officials proclaiming that a neighbourhood is safe.
The warning stickers on ladders might make you think the things are considerably riskier than they are, but people have a good intuition about ladders and ignore most of the warnings. (This isn't to say that some people don't do stupid things around ladders, but for the most part they're safe. The warnings are more about the risk of lawsuits to ladder manufacturers than risks to people who climb ladders.)
As a species, we are naturally tuned in to the risks inherent in our environment. Throughout our evolution, our survival depended on making reasonably accurate risk management decisions intuitively, and we're so good at it, we don't even realise we're doing it.
Parents know this. Children have surprisingly perceptive risk intuition. They know when parents are serious about a threat and when their threats are empty. And they respond to the real risks of parental punishment, not the inflated risks based on parental rhetoric. Again, awareness training lectures don't work; there have to be real consequences.
It gets even weirder. The University College London professor John Adams popularised the metaphor of a mental risk thermostat. We tend to seek some natural level of risk, and if something becomes less risky, we tend to make it more risky. Motorcycle riders who wear helmets drive faster than riders who don't.
Our risk thermostats aren't perfect (that newly helmeted motorcycle rider will still decrease his overall risk) and will tend to remain within the same domain (he might drive faster, but he won't increase his risk by taking up smoking), but in general, people demonstrate an innate and finely tuned ability to understand and respond to risks.
Of course, our risk intuition fails spectacularly and often, with regard to rare risks, unknown risks, voluntary risks, and so on. But when it comes to the common risks we face every day -- the kinds of risks our evolutionary survival depended on -- we're pretty good.
So whenever you see someone in a situation who you think doesn't understand the risks, stop first and make sure you understand the risks. You might be surprised.
This essay previously appeared in The Guardian.
EDITED TO ADD (8/12): Commentary on risk thermostat.
Dynamite Found On Track
Imagine if the same thing happened today.
Good essay: "When Security Gets in the Way."
The numerous incidents of defeating security measures prompts my cynical slogan: The more secure you make something, the less secure it becomes. Why? Because when security gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the security. Hence the prevalence of doors propped open by bricks and wastebaskets, of passwords pasted on the fronts of monitors or hidden under the keyboard or in the drawer, of home keys hidden under the mat or above the doorframe or under fake rocks that can be purchased for this purpose.
The New York Times has an editorial on regulating chemical plants:
Since Sept. 11, 2001, experts have warned that an attack on a chemical plant could produce hundreds of thousands of deaths and injuries. Public safety and environmental advocates have fought for strong safety rules, but the chemical industry used its clout in Congress in 2006 to ensure that only a weak law was enacted.
The problem is a classic security externality, which I wrote about in 2007:
Any rational chemical plant owner will only secure the plant up to its value to him. That is, if the plant is worth $100 million, then it makes no sense to spend $200 million on securing it. If the odds of it being attacked are less than 1 percent, it doesn't even make sense to spend $1 million on securing it. The math is more complicated than this, because you have to factor in such things as the reputational cost of having your name splashed all over the media after an incident, but that's the basic idea.
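In code, the simplified version of that math looks like this -- a sketch that ignores the reputational and liability terms, as noted above:

```python
# A rational owner won't spend more on security than his expected
# loss: the plant's value to him times the probability of attack.
# This ignores reputational and liability costs, as the text notes.
def max_rational_spend(plant_value: float, attack_probability: float) -> float:
    return plant_value * attack_probability

print(max_rational_spend(100e6, 1.0))   # certain attack: up to $100M
print(max_rational_spend(100e6, 0.01))  # 1% chance: only $1M
# The gap between that $1M and the social cost of an attack is the externality.
```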
Research that proves what we already knew:
Crying Wolf: An Empirical Study of SSL Warning Effectiveness
China is the world's most successful Internet censor. While the Great Firewall of China isn't perfect, it effectively limits information flowing in and out of the country. But now the Chinese government is taking things one step further.
Under a requirement taking effect soon, every computer sold in China will have to contain the Green Dam Youth Escort software package. Ostensibly a pornography filter, it is government spyware that will watch every citizen on the Internet.
Green Dam has many uses. It can police a list of forbidden Web sites. It can monitor a user's reading habits. It can even enlist the computer in some massive botnet attack, as part of a hypothetical future cyberwar.
China's actions may be extreme, but they're not unique. Democratic governments around the world -- Sweden, Canada and the United Kingdom, for example -- are rushing to pass laws giving their police new powers of Internet surveillance, in many cases requiring communications system providers to redesign products and services they sell.
Many are passing data retention laws, forcing companies to keep information on their customers. Just recently, the German government proposed giving itself the power to censor the Internet.
The United States is no exception. The 1994 CALEA law required phone companies to facilitate FBI eavesdropping, and since 2001, the NSA has built substantial eavesdropping systems in the United States. The government has repeatedly proposed Internet data retention laws, allowing surveillance into past activities as well as present.
Systems like this invite criminal appropriation and government abuse. New police powers, enacted to fight terrorism, are already used in situations of normal crime. Internet surveillance and control will be no different.
Official misuses are bad enough, but the unofficial uses worry me more. Any surveillance and control system must itself be secured. An infrastructure conducive to surveillance and control invites surveillance and control, both by the people you expect and by the people you don't.
China's government designed Green Dam for its own use, but it's been subverted. Why does anyone think that criminals won't be able to use it to steal bank account and credit card information, use it to launch other attacks, or turn it into a massive spam-sending botnet?
Why does anyone think that only authorized law enforcement will mine collected Internet data or eavesdrop on phone and IM conversations?
These risks are not theoretical. After 9/11, the National Security Agency built a surveillance infrastructure to eavesdrop on telephone calls and e-mails within the United States.
Although procedural rules stated that only non-Americans and international phone calls were to be listened to, actual practice didn't always match those rules. NSA analysts collected more data than they were authorized to, and used the system to spy on wives, girlfriends, and famous people such as President Clinton.
But that's not the most serious misuse of a telecommunications surveillance infrastructure. In Greece, between June 2004 and March 2005, someone wiretapped more than 100 cell phones belonging to members of the Greek government -- the prime minister and the ministers of defense, foreign affairs and justice.
Ericsson built this wiretapping capability into Vodafone's products, and enabled it only for governments that requested it. Greece wasn't one of those governments, but someone still unknown -- a rival political party? organized crime? -- figured out how to surreptitiously turn the feature on.
Researchers have already found security flaws in Green Dam that would allow hackers to take over the computers. Of course there are additional flaws, and criminals are looking for them.
Surveillance infrastructure can be exported, which also aids totalitarianism around the world. Western companies like Siemens, Nokia, and Secure Computing built Iran's surveillance infrastructure. U.S. companies helped build China's electronic police state. Twitter's anonymity saved the lives of Iranian dissidents -- anonymity that many governments want to eliminate.
Every year brings more Internet censorship and control -- not just in countries like China and Iran, but in the United States, the United Kingdom, Canada and other free countries.
The control movement is egged on by both law enforcement, trying to catch terrorists, child pornographers and other criminals, and media companies, trying to stop file sharers.
It's bad civic hygiene to build technologies that could someday be used to facilitate a police state. No matter what the eavesdroppers and censors say, these systems put us all at greater risk. Communications systems that have no inherent eavesdropping capabilities are more secure than systems with those capabilities built in.
This essay previously appeared -- albeit with fewer links -- on the Minnesota Public Radio website.