Schneier on Security
A blog covering security and security technology.
June 2008 Archives
This seems like a good idea:
Eager to embrace eggheads and ideas, the Pentagon has started an ambitious and unusual program to recruit social scientists and direct the nation’s brainpower to combating security threats like the Chinese military, Iraq, terrorism and religious fundamentalism.
The article talks a lot about potential conflicts of interest and such, and less on what sorts of insights the social scientists can offer. I think there is a lot of potential value here.
I'm writing from the First Interdisciplinary Workshop on Security and Human Behavior (SHB 08).
Security is both a feeling and a reality, and they're different. There are several different research communities: technologists who study security systems, and psychologists who study people, not to mention economists, anthropologists and others. Increasingly these worlds are colliding.
About a year ago Ross Anderson and I conceived this conference as a way to bring together computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others -- all of whom are studying the human side of security. I've read a lot -- and written some -- on psychology and security over the past few years, and have been continually amazed by some of the research that people outside my field have been doing on topics very relevant to my field. Ross and I both thought that bringing these diverse communities together would be fascinating to everyone. So we convinced behavioral economists Alessandro Acquisti and George Loewenstein to help us organize the workshop, invited the people we all have been reading, and also asked them who else to invite. The response was overwhelming. Almost everyone we wanted was able to attend, and the result was a 42-person conference with 35 speakers.
We're most of the way through the morning, and it's been even more fascinating than I expected. (Here's the agenda.) We've talked about detecting deception in people, organizational biases in making security decisions, building security "intuition" into Internet browsers, different techniques to prevent crime, complexity and failure, and the modeling of security feeling.
I had high hopes of liveblogging this event, but it's far too fascinating to spend time writing posts. If you want to read some of the more interesting papers written by the participants, this is a good page to start with.
I'll write more about the conference later.
EDITED TO ADD (6/30): Ross Anderson has a blog post, where he liveblogs the individual sessions in the comments. And I should add that this was an invitational event -- which is why you haven't heard about it before -- and that the room here at MIT is completely full.
EDITED TO ADD (7/1): Matt Blaze has posted audio. And Ross Anderson -- link above -- is posting paragraph-long summaries for each speaker.
Perhaps this would make a good Movie-Plot Threat Contest for next year.
I think this is the first security vulnerability found in RFC 1149: "Standard for the transmission of IP datagrams on avian carriers." Deep packet inspection seems to be the only way to prevent this attack, although adequate fencing will prevent the protocol from running in the first place.
Pervasive security cameras don't substantially reduce crime. There are exceptions, of course, and that's what gets the press. Most famously, CCTV cameras helped catch James Bulger's murderers in 1993. And earlier this year, they helped convict Steve Wright of murdering five women in the Ipswich area. But these are the well-publicised exceptions. Overall, CCTV cameras aren't very effective.
This fact has been demonstrated again and again: by a comprehensive study for the Home Office in 2005, by several studies in the US, and again with new data announced last month by New Scotland Yard. They actually solve very few crimes, and their deterrent effect is minimal.
Conventional wisdom predicts the opposite. But if that were true, then camera-happy London, with something like 500,000 cameras, would be the safest city on the planet. It isn't, of course, because of technological limitations of cameras, organisational limitations of police and the adaptive abilities of criminals.
To some, it's comforting to imagine vigilant police monitoring every camera, but the truth is very different. Most CCTV footage is never looked at until well after a crime is committed. When it is examined, it's very common for the viewers not to identify suspects. Lighting is bad and images are grainy, and criminals tend not to stare helpfully at the lens. Cameras break far too often. The best camera systems can still be thwarted by sunglasses or hats. Even when they afford quick identification — think of the 2005 London transport bombers and the 9/11 terrorists — police are often able to identify suspects without the cameras. Cameras afford a false sense of security, encouraging laziness when we need police to be vigilant.
The solution isn't for police to watch the cameras. Unlike an officer walking the street, cameras only look in particular directions at particular locations. Criminals know this, and can easily adapt by moving their crimes to someplace not watched by a camera — and there will always be such places. Additionally, while a police officer on the street can respond to a crime in progress, the same officer in front of a CCTV screen can only dispatch another officer to arrive much later. By their very nature, cameras result in underused and misallocated police resources.
Cameras aren't completely ineffective, of course. In certain circumstances, they're effective in reducing crime in enclosed areas with minimal foot traffic. Combined with adequate lighting, they substantially reduce both personal attacks and auto-related crime in car parks. And from some perspectives, simply moving crime around is good enough. If a local Tesco installs cameras in its store, and a robber targets the store next door as a result, that's money well spent by Tesco. But it doesn't reduce the overall crime rate, so is a waste of money to the township.
But the question really isn't whether cameras reduce crime; the question is whether they're worth it. And given their cost (£500m in the past 10 years), their limited effectiveness, the potential for abuse (spying on naked women in their own homes, sharing nude images, selling best-of videos, and even spying on national politicians) and their Orwellian effects on privacy and civil liberties, most of the time they're not. The funds spent on CCTV cameras would be far better spent on hiring experienced police officers.
We live in a unique time in our society: the cameras are everywhere, and we can still see them. Ten years ago, cameras were much rarer than they are today. And in 10 years, they'll be so small you won't even notice them. Already, companies like L-1 Security Solutions are developing police-state CCTV surveillance technologies like facial recognition for China, technologies that will find their way into countries like the UK. The time to address appropriate limits on this technology is before the cameras fade from notice.
This essay was previously published in The Guardian.
EDITED TO ADD (7/3): A rebuttal.
EDITED TO ADD (7/6): More commentary.
I've seen the IR screening guns at several airports, primarily in Asia. The idea is to keep out people with Bird Flu, or whatever the current fever scare is. This essay explains why it won't work:
The bottom line is that this kind of remote fever sensing had poor positive predictive value, meaning that the proportion of people correctly identified as having fever was low, ranging from 10% to 16%. Thus there were a lot of false positives. Negative predictive value, the proportion of people classified by the IR device as not having fever who in fact did not have fever was high (97% to 99%), so not many people with fevers will be missed with the IR device. Predictive values depend not only on the accuracy of the device but also how prevalent fever is in the screened population. In the early days of a pandemic, fever prevalence will be very low, leading to low positive predictive value. The false positives produced at airport security would make the days of only taking off your shoes look good.
Lots more science in the essay.
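The predictive-value arithmetic in the excerpt is just Bayes' theorem. Here's a minimal sketch; the sensitivity and specificity figures are assumed purely for illustration (the essay doesn't give the device's exact numbers), but they reproduce the pattern described: low positive predictive value, high negative predictive value.

```python
# Positive/negative predictive value of a screening test via Bayes' theorem.
# The sensitivity and specificity values below are assumed for illustration only.

def ppv(sensitivity, specificity, prevalence):
    """Fraction of positive results that are true positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sensitivity, specificity, prevalence):
    """Fraction of negative results that are true negatives."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# With only 1% of travelers actually feverish, even a decent sensor is mostly
# wrong when it alarms, yet almost always right when it stays quiet:
print(f"PPV: {ppv(0.90, 0.95, 0.01):.0%}")   # low -- most alarms are false
print(f"NPV: {npv(0.90, 0.95, 0.01):.1%}")   # high -- few fevers slip through
```

Note how the result is driven almost entirely by prevalence: in the early days of a pandemic, when almost nobody screened actually has a fever, nearly every alarm is a false one.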
UK teens are using Google Earth to find swimming pools they can crash.
How long before someone finds a more serious crime that can be aided by Google Earth?
"In a day when you can't bring a large tube of toothpaste on a plane how can you allow guns to wander through Union Station, the biggest transit hub in Canada?" he asked his colleagues on city council.
By that logic, I think we can ban anything from anywhere.
A new study claims that insiders aren't the main threat to network security:
Verizon's 2008 Data Breach Investigations Report, which looked at 500 breach incidents over the last four years, contradicts the growing orthodoxy that insiders, rather than external agents, represent the most serious threat to network security at most organizations.
The whole insiders vs. outsiders debate has always been one of semantics more than anything else. If you count by attacks, there are a lot more outsider attacks, simply because there are orders of magnitude more outsider attackers. If you count incidents, the numbers tend to get closer: 75% vs. 18% in this case. And if you count damages, insiders generally come out on top -- mostly because they have a lot more detailed information and can target their attacks better.
Both insiders and outsiders are security risks, and you have to defend against them both. Trying to rank them isn't all that useful.
Swimming pools around Shanghai are checking liquids:
"Pool guests who bring these items must allow them to be opened and inspected. Security personnel will smell them to see whether they are safe or not," a separate report posted on the city's sport bureau's website said (www.shsports.gov.cn).
The stupidity is beyond words.
"We have found we can potentially detect an incredibly small quantity of material, as small as one dust-speck-sized particle weighing one trillionth of a gram, on an individual's clothing or baggage," Farquar said. "This is important because if a person handles explosives they are likely to have some remaining residue."
We're contaminating the squid:
The toxic chemicals that Vecchione and colleagues from the Virginia Institute of Marine Science found are a rogues' gallery of scary initials: PCBs, TBTs, BDEs, and DDT among them. Scientists classify all of them as POPs, or persistent organic pollutants.
A runner-up in last year's Underhanded C Contest was a flawed implementation of RC4 that eventually just passed plaintext through unencrypted. Plausibly deniable, and very clever.
The other winners are also clever.
A Jura F90 Coffee Machine can be hacked remotely over the Internet.
Traffic analysis works even through the encryption:
The new compression technique, called variable bitrate compression, produces different-size packets of data for different sounds.
The technique isn't good enough to decode entire conversations, but it's pretty impressive.
Sometimes security through obscurity works:
Yes, the New York Police Department provided an escort, but during more than eight hours on Saturday, one of the great hoards of coins and currency on the planet, worth hundreds of millions of dollars, was utterly unalarmed as it was bumped through potholes, squeezed by double-parked cars and slowed by tunnel-bound traffic during the trip to its fortresslike new vault a mile to the north.
From my book Beyond Fear, pp. 211-12:
At 3,106 carats, a little under a pound and a half, the Cullinan Diamond was the largest uncut diamond ever discovered. It was extracted from the earth at the Premier Mine, near Pretoria, South Africa, in 1905. Appreciating the literal enormity of the find, the Transvaal government bought the diamond as a gift for King Edward VII. Transporting the stone to England was a huge security problem, of course, and there was much debate on how best to do it. Detectives were sent from London to guard it on its journey. News leaked that a certain steamer was carrying it, and the presence of the detectives confirmed this. But the diamond on that steamer was a fake. Only a few people knew of the real plan; they packed the Cullinan in a small box, stuck a three-shilling stamp on it, and sent it to England anonymously by unregistered parcel post.
The 'ring of the devil' is capable of attacking this kind of electronic motor lock in two ways.
LifeLock, one of the companies that offers identity-theft protection in the United States, has been taking quite a beating recently. They're being sued by credit bureaus, competitors and lawyers in several states that are launching class action lawsuits. And the stories in the media ... it's like a piranha feeding frenzy.
There are also a lot of errors and misconceptions. With its aggressive advertising campaign and a CEO who publishes his Social Security number and dares people to steal his identity -- Todd Davis, 457-55-5462 -- LifeLock is a company that's easy to hate. But the company's story has some interesting security lessons, and it's worth understanding in some detail.
In December 2003, as part of the Fair and Accurate Credit Transactions Act, or Facta, credit bureaus were forced to allow you to put a fraud alert on their credit reports, requiring lenders to verify your identity before issuing a credit card in your name. This alert is temporary, and expires after 90 days. Several companies have sprung up -- LifeLock, Debix, LoudSiren, TrustedID -- that automatically renew these alerts and effectively make them permanent.
This service pisses off the credit bureaus and their financial customers. The reason lenders don't routinely verify your identity before issuing you credit is that it takes time, costs money and is one more hurdle between you and another credit card. (Buy, buy, buy -- it's the American way.) So in the eyes of credit bureaus, LifeLock's customers are inferior goods; selling their data isn't as valuable. LifeLock also opts its customers out of pre-approved credit card offers, further making them less valuable in the eyes of credit bureaus.
And so began a smear campaign on the part of the credit bureaus. You can read their points of view in this New York Times article, written by a reporter who didn't do much more than regurgitate their talking points. And the class action lawsuits have piled on, accusing LifeLock of deceptive business practices, fraudulent advertising and so on. The biggest smear is that LifeLock didn't even protect Todd Davis, and that his identity was allegedly stolen.
It wasn't. Someone in Texas used Davis's SSN to get a $500 advance against his paycheck. It worked because the loan operation didn't check with any of the credit bureaus before approving the loan -- perfectly reasonable for an amount this small. The payday-loan operation called Davis to collect, and LifeLock cleared up the problem. His credit report remains spotless.
The Experian credit bureau's lawsuit basically claims that fraud alerts are only for people who have been victims of identity theft. This seems spurious; the text of the law states that anyone "who asserts a good faith suspicion that the consumer has been or is about to become a victim of fraud or related crime" can request a fraud alert. It seems to me that includes anybody who has ever received one of those notices about their financial details being lost or stolen, which is everybody.
As to deceptive business practices and fraudulent advertising -- those just seem like class action lawyers piling on. LifeLock's aggressive fear-based marketing doesn't seem any worse than a lot of other similar advertising campaigns. My guess is that the class action lawsuits won't go anywhere.
In reality, forcing lenders to verify identity before issuing credit is exactly the sort of thing we need to do to fight identity theft. Basically, there are two ways to deal with identity theft: Make personal information harder to steal, and make stolen personal information harder to use. We all know the former doesn't work, so that leaves the latter. If Congress wanted to solve the problem for real, one of the things it would do is make fraud alerts permanent for everybody. But the credit industry's lobbyists would never allow that.
LifeLock does a bunch of other clever things. They monitor the national address database, and alert you if your address changes. They look for your credit and debit card numbers on hacker and criminal websites and such, and assist you in getting a new number if they see it. They have a million-dollar service guarantee -- for complicated legal reasons, they can't call it insurance -- to help you recover if your identity is ever stolen.
But even with all of this, I am not a LifeLock customer. At $120 a year, it's just not worth it. You wouldn't know it from the press attention, but dealing with identity theft has become easier and more routine. Sure, it's a pervasive problem. The Federal Trade Commission reported that 8.3 million Americans were identity-theft victims in 2005. But that includes things like someone stealing your credit card and using it, something that rarely costs you any money and that LifeLock doesn't protect against. New account fraud is much less common, affecting 1.8 million Americans per year, or 0.8 percent of the adult population. The FTC hasn't published detailed numbers for 2006 or 2007, but the rate seems to be declining.
New card fraud is also not very damaging. The median amount of fraud the thief commits is $1,350, but you're not liable for that. Some spectacularly horrible identity-theft stories notwithstanding, the financial industry is pretty good at quickly cleaning up the mess. The victim's median out-of-pocket cost for new account fraud is only $40, plus ten hours of grief to clean up the problem. Even assuming your time is worth $100 an hour, LifeLock isn't worth more than $8 a year.
And it's hard to get any data on how effective LifeLock really is. They've been in business three years and have about a million customers, but most of them have joined up in the last year. They've paid out on their service guarantee 113 times, but a lot of those were for things that happened before their customers became customers. (It was easier to pay than argue, I assume.) But they don't know how often the fraud alerts actually catch an identity thief in the act. My guess is that it's less than the 0.8 percent fraud rate above.
LifeLock's business model is based more on the fear of identity theft than the actual risk.
It's pretty ironic of the credit bureaus to attack LifeLock on its marketing practices, since they know all about profiting from the fear of identity theft. Facta also forced the credit bureaus to give Americans a free credit report once a year upon request. Through deceptive marketing techniques, they've turned this requirement into a multimillion-dollar business.
Get LifeLock if you want, or one of its competitors if you prefer. But remember that you can do most of what these companies do yourself. You can put a fraud alert on your own account, but you have to remember to renew it every three months. You can also put a credit freeze on your account, which is more work for the average consumer but more effective if you're a privacy wonk -- and the rules differ by state. And maybe someday Congress will do the right thing and put LifeLock out of business by forcing lenders to verify identity every time they issue credit in someone's name.
This essay originally appeared in Wired.com.
I've never figured out the fuss over ransomware:
Some day soon, you may go in and turn on your Windows PC and find your most valuable files locked up tighter than Fort Knox.
How is this any worse than the old hacker viruses that put a funny message on your screen and erased your hard drive?
Here's how I see it: if someone actually manages to pull this off and put it into circulation, we're looking at malware Armageddon. Instead of losing 'just' your credit card numbers or having your PC turned into a spam factory, you could lose vital files forever.
The single most important thing any company or individual can do to improve security is have a good backup strategy. It's been true for decades, and it's still true today.
Usually, cuttlefish eggs lie in an envelope full of black ink. But this clears as the embryos grow older, leaving them growing within translucent eggs.
From The Star in Malaysia.
A tall tale.
Oops. At least they were found and returned.
Keith Vaz MP, chairman of the powerful Home Affairs select committee told the BBC: "Such confidential documents should be locked away...they should not be read on trains."
We estimate it would take around 15 million modern computers, running for about a year, to crack such a key.
What are they smoking at Kaspersky? We've never factored a 1024-bit number -- at least, not outside any secret government agency -- and it's likely to require a lot more than 15 million computer years of work. The current factoring record is a 1023-bit number, but it was a special number that's easier to factor than a product-of-two-primes number used in RSA. Breaking that Gpcode key will take a lot more mathematical prowess than you can reasonably expect to find by asking nicely on the Internet. You've got to understand the current best mathematical and computational optimizations of the Number Field Sieve, and cleverly distribute the parts that can be distributed. You can't just post the products and hope for the best.
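To get a feel for the scale, the standard heuristic running time of the General Number Field Sieve is L_n[1/3, (64/9)^(1/3)]. The following back-of-the-envelope sketch uses only that asymptotic formula, ignoring the o(1) term and all constant factors, so only the ratio between two key sizes is meaningful, not the absolute numbers:

```python
import math

def gnfs_work(bits):
    """Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)] for an n-bit modulus.

    Asymptotic estimate only: the o(1) term and constant factors are
    deliberately dropped, so compare ratios, not absolute values.
    """
    ln_n = bits * math.log(2)          # natural log of an n-bit number
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Compare a 1024-bit RSA modulus against the 663-bit RSA-200 challenge,
# the general-number factoring record of the era:
ratio = gnfs_work(1024) / gnfs_work(663)
print(f"1024-bit is roughly {ratio:,.0f}x the work of 663-bit")
```

Even this crude estimate puts a 1024-bit modulus tens of thousands of times harder than the hardest general number ever factored at the time, which is why "ask the Internet nicely" isn't a factoring strategy.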
Is this just a way for Kaspersky to generate itself some nice press, or are they confused in Moscow?
EDITED TO ADD (6/15): Kaspersky now says:
The company clarified, however, that it's more interested in getting help in finding flaws in the encryption implementation.
"Clarified" is overly kind. There was nothing confusing about Kaspersky's post that needed clarification, and what they're saying now completely contradicts what they did post. Seems to me like they're trying to pretend it never happened.
EDITED TO ADD (6/30): A Kaspersky virus analyst comments on this entry.
The TSA has a new photo ID requirement:
Beginning Saturday, June 21, 2008 passengers that willfully refuse to provide identification at security checkpoint will be denied access to the secure area of airports. This change will apply exclusively to individuals that simply refuse to provide any identification or assist transportation security officers in ascertaining their identity.
That's right; people who refuse to show ID on principle will not be allowed to fly, but people who claim to have lost their ID will. I feel well-protected against terrorists who can't lie.
EDITED TO ADD (6/11): Daniel Solove comments.
Interesting burglar prevention device: it simulates a television. But why not just leave a real television on?
We're spending money on this?
...a new GPS device enables authorities to remotely control a bus -- slowing it down to 5 mph and preventing it from restarting once it has stopped. The device has been installed on thousands of local commuter and tourist buses.
That's what the rules say:
Sikh passengers are allowed to carry Kirpan with them on board domestic flights. The total length of the 'Kirpan' should not exceed 22.86 CMs (9 inches) and the length of the blade should not exceed 15.24 CMs. (6 inches). It is being reiterated that these instructions should be fully implemented by concerned security personnel so that religious sentiments of the Sikh passengers are not hurt.
How airport security is supposed to recognize a Sikh passenger is not explained.
Is Subivor even real?
Whether it is a train fire, a highrise building fire or worse. People should have more protection than a necktie, their shirt or paper towel to cover their mouth, nose and eyes. As you know an emergency can happen at anytime and in anyplace, leaving one vulnerable. Don't be a sitting duck. The Subivor® Subway Emergency Kit can aid you in seeing and breathing while exiting. This all-in-one compact, portable and easy to use subway emergency kit contains some items never seen before in a kit.
This could have won my Third Movie-Plot Threat Contest.
Researchers from the University of Washington have demonstrated how lousy the MPAA/RIAA/etc. tactics are by successfully framing printers on their network. These printers, which can't download anything, received nine takedown notices:
The researchers rigged the software agents to implicate three laserjet printers, which were then accused in takedown letters by the M.P.A.A. of downloading copies of “Iron Man” and the latest Indiana Jones film.
Research, including the paper, here.
I'm tickled by the idea of a motivational poster with my picture on it, but want a more interesting/amusing/clever/inspirational caption. Ideas?
Nice one from CSO Magazine.
Some expensive and impressive stuff was stolen from the University of British Columbia's Museum of Anthropology:
A dozen pieces of gold jewelry designed by prominent Canadian artist Bill Reid were stolen from the museum sometime on May 23, along with three pieces of gold-plated Mexican jewelry. The pieces that were taken are estimated to be worth close to $2 million.
Of course, it's not the museum's fault:
But museum director Anthony Shelton said that elaborate computer program printouts have determined that the museum's security system did not fail during the heist and that the construction of the building's layout did not compromise security.
Um, isn't having stuff get stolen the very definition of security failing? And does anyone have any idea how "elaborate computer program printouts" can determine that security didn't fail? What in the world is this guy talking about?
A few days later, we learned that security did indeed fail:
Four hours before the break-in on May 23, two or three key surveillance cameras at the Museum of Anthropology mysteriously went off-line.
It's a particular kind of security failure, but it's definitely a failure.
This is clever:
Michael Largent, 22, of Plumas Lake, California, allegedly exploited a loophole in a common procedure both companies follow when a customer links his brokerage account to a bank account for the first time. To verify that the account number and routing information is correct, the brokerages automatically send small "micro-deposits" of between two cents to one dollar to the account, and ask the customer to verify that they've received it.
What is it with photographers these days? Are they really all terrorists, or does everyone just think they are?
Since 9/11, there has been an increasing war on photography. Photographers have been harassed, questioned, detained, arrested or worse, and declared to be unwelcome. We've been repeatedly told to watch out for photographers, especially suspicious ones. Clearly any terrorist is going to first photograph his target, so vigilance is required.
Except that it's nonsense. The 9/11 terrorists didn't photograph anything. Nor did the London transport bombers, the Madrid subway bombers, or the liquid bombers arrested in 2006. Timothy McVeigh didn't photograph the Oklahoma City Federal Building. The Unabomber didn't photograph anything; neither did shoe-bomber Richard Reid. Photographs aren't being found amongst the papers of Palestinian suicide bombers. The IRA wasn't known for its photography. Even those manufactured terrorist plots that the US government likes to talk about -- the Ft. Dix terrorists, the JFK airport bombers, the Miami 7, the Lackawanna 6 -- no photography.
Given that real terrorists, and even wannabe terrorists, don't seem to photograph anything, why is it such pervasive conventional wisdom that terrorists photograph their targets? Why are our fears so great that we have no choice but to be suspicious of any photographer?
Because it's a movie-plot threat.
A movie-plot threat is a specific threat, vivid in our minds like the plot of a movie. You remember them from the months after the 9/11 attacks: anthrax spread from crop dusters, a contaminated milk supply, terrorist scuba divers armed with almanacs. Our imaginations run wild with detailed and specific threats, from the news, and from actual movies and television shows. These movie plots resonate in our minds and in the minds of others we talk to. And many of us get scared.
Terrorists taking pictures is a quintessential detail in any good movie. Of course it makes sense that terrorists will take pictures of their targets. They have to do reconnaissance, don't they? We need 45 minutes of television action before the actual terrorist attack -- 90 minutes if it's a movie -- and a photography scene is just perfect. It's our movie-plot terrorists that are photographers, even if the real-world ones are not.
The problem with movie-plot security is it only works if we guess the plot correctly. If we spend a zillion dollars defending Wimbledon and terrorists blow up a different sporting event, that's money wasted. If we post guards all over the Underground and terrorists bomb a crowded shopping area, that's also a waste. If we teach everyone to be alert for photographers, and terrorists don't take photographs, we've wasted money and effort, and taught people to fear something they shouldn't.
And even if terrorists did photograph their targets, the math doesn't make sense. Billions of photographs are taken by honest people every year, 50 billion by amateurs alone in the US. And the national monuments you imagine terrorists taking photographs of are the same ones tourists like to take pictures of. If you see someone taking one of those photographs, the odds are infinitesimal that he's a terrorist.
Of course, it's far easier to explain the problem than it is to fix it. Because we're a species of storytellers, we find movie-plot threats uniquely compelling. A single vivid scenario will do more to convince people that photographers might be terrorists than all the data I can muster to demonstrate that they're not.
Fear aside, there aren't many legal restrictions on what you can photograph from a public place that's already in public view. If you're harassed, it's almost certainly a law enforcement official, public or private, acting way beyond his authority. There's nothing in any post-9/11 law that restricts your right to photograph.
This is worth fighting. Search "photographer rights" on Google and download one of the several wallet documents that can help you if you get harassed; I found one for the UK, US, and Australia. Don't cede your right to photograph in public. Don't propagate the terrorist photographer story. Remind them that prohibiting photography was something we used to ridicule about the USSR. Eventually sanity will be restored, but it may take a while.
This essay originally appeared in The Guardian.
EDITED TO ADD (6/6): Interesting comment by someone who trains security guards.
EDITED TO ADD (6/13): More on photographers' rights in the U.S.
I already blogged this once: an airplane-seat camera system that tries to detect terrorists before they leap up and do whatever they were planning on doing. Amazingly enough, the EU is "testing" this system:
Each camera tracks passengers' facial expressions, with the footage then analysed by software to detect developing terrorist activity or potential air rage. Six wide-angle cameras are also positioned to monitor the plane’s aisles, presumably to catch anyone standing by the cockpit door with a suspiciously crusty bread roll.
This pegs the stupid meter. All it will do is generate false alarms. No one has any idea what sorts of facial characteristics are unique to terrorists. And how in the world are they "testing" this system without any real terrorists? In any case, what happens when the alarm goes off? How exactly is a ten-second warning going to save people?
Sure, you can invent a terrorist tactic where a system like this, assuming it actually works, saves people -- but that's the very definition of a movie-plot threat. How about we spend this money on something that's effective in more than just a few carefully chosen scenarios?
Yesterday, the Center for American Progress published its paper on identification and identification technologies: "The ID Divide: Addressing the Challenges of Identification and Authentication in American Society." I was one of the participants in the project that created this paper, and it's worth reading.
Among other things, the paper identifies six principles for identification systems:
From the Executive Summary:
How can these principles be honored in practice? That’s where the "due diligence" process comes into play when considering and implementing identification systems. Due diligence in the financial world of mergers and acquisitions and other important corporate transactions is conducted before a company makes a major investment. Proponents of, say, a merger (or in our case, a new identification program) can err on the side of optimism, concluding too readily that the merger (or new ID program) is clearly the way to go. Thorough due diligence protects against such over-optimism.
I participated in the panel discussion announcing this report, along with Jim Harper (Director of Information Policy Studies at the Cato Institute).
This video is priceless. A Washington, DC, news crew goes down to Union Station to interview someone from Amtrak about people who have been stopped from taking pictures, even though there's no policy against it. As the Amtrak spokesperson is explaining that there is no policy against photography, a guard comes up and tries to stop them from filming, saying it is against the rules.
EDITED TO ADD (6/7): More.
Aren't fax signatures the weirdest thing? It's trivial to cut and paste -- with real scissors and glue -- anyone's signature onto a document so that it'll look real when faxed. There is so little security in fax signatures that it's mind-boggling that anyone accepts them.
Yet people do, all the time. I've signed book contracts, credit card authorizations, nondisclosure agreements and all sorts of financial documents -- all by fax. I even have a scanned file of my signature on my computer, so I can virtually cut and paste it into documents and fax them directly from my computer without ever having to print them out. What in the world is going on here?
And, more importantly, why are fax signatures still being used after years of experience? Why aren't there many stories of signatures forged through the use of fax machines?
The answer comes from looking at fax signatures not as an isolated security measure, but in the context of the larger system. Fax signatures work because signed faxes exist within a broader communications context.
In a 2003 paper, "Economics, Psychology, and Sociology of Security," Professor Andrew Odlyzko looks at fax signatures and concludes:
Although fax signatures have become widespread, their usage is restricted. They are not used for final contracts of substantial value, such as home purchases. That means that the insecurity of fax communications is not easy to exploit for large gain. Additional protection against abuse of fax insecurity is provided by the context in which faxes are used. There are records of phone calls that carry the faxes, paper trails inside enterprises and so on. Furthermore, unexpected large financial transfers trigger scrutiny. As a result, successful frauds are not easy to carry out by purely technical means.
He's right. Thinking back, there really aren't ways in which a criminal could use a forged document sent by fax to defraud me. I suppose an unscrupulous consulting client could forge my signature on a nondisclosure agreement and then sue me, but that hardly seems worth the effort. And if my broker received a fax document from me authorizing a money transfer to a Nigerian bank account, he would certainly call me before completing it.
Credit card signatures aren't verified in person, either -- and I can already buy things over the phone with a credit card -- so there are no new risks there, and Visa knows how to monitor transactions for fraud. Lots of companies accept purchase orders via fax, even for large amounts of stuff, but there's a physical audit trail, and the goods are shipped to a physical address -- probably one the seller has shipped to before. Signatures are kind of a business lubricant: mostly, they help move things along smoothly.
Except when they don't.
On October 30, 2004, Tristian Wilson was released from a Memphis jail on the authority of a forged fax message. It wasn't even a particularly good forgery. It wasn't on the standard letterhead of the West Memphis Police Department. The name of the policeman who signed the fax was misspelled. And the time stamp on the top of the fax clearly showed that it was sent from a local McDonald's.
The success of this hack has nothing to do with the fact that it was sent over by fax. It worked because the jail had lousy verification procedures. They didn't notice any discrepancies in the fax. They didn't notice the phone number from which the fax was sent. They didn't call and verify that it was official. The jail was accustomed to getting release orders via fax, and just acted on this one without thinking. Would it have been any different had the forged release form been sent by mail or courier?
Yes, fax signatures always exist in context, but sometimes they are the linchpin within that context. If you can mimic enough of the context, or if those on the receiving end become complacent, you can get away with mischief.
Arguably, this is part of the security process. Signatures themselves are poorly defined. Sometimes a document is valid even if not signed: A person with both hands in a cast can still buy a house. Sometimes a document is invalid even if signed: The signer might be drunk, or have a gun pointed at his head. Or he might be a minor. Sometimes a valid signature isn't enough; in the United States there is an entire infrastructure of notaries public who officially witness signed documents. When I started filing my tax returns electronically, I had to sign a document stating that I wouldn't be signing my income tax documents. And banks don't even bother verifying signatures on checks less than $30,000; it's cheaper to deal with fraud after the fact than prevent it.
Over the course of centuries, business and legal systems have slowly sorted out what types of additional controls are required around signatures, and in which circumstances.
Those same systems will be able to sort out fax signatures, too, but it'll be slow. And that's where there will be potential problems. Already fax is a declining technology. In a few years it'll be largely obsolete, replaced by PDFs sent over e-mail and other forms of electronic documentation. In the past, we've had time to figure out how to deal with new technologies. Now, by the time we institutionalize these measures, the technologies are likely to be obsolete.
What that means is people are likely to treat fax signatures -- or whatever replaces them -- exactly the same way as paper signatures. And sometimes that assumption will get them into trouble.
But it won't cause social havoc. Wilson's story is remarkable mostly because it's so exceptional. And even he was rearrested at his home less than a week later. Fax signatures may be new, but fake signatures have always been a possibility. Our legal and business systems need to deal with the underlying problem -- false authentication -- rather than focus on the technology of the moment. Systems need to defend themselves against the possibility of fake signatures, regardless of how they arrive.
This essay previously appeared on Wired.com.
EDITED TO ADD (6/3): 2005 story, "Federal Jury Convicts N.Y. Attorney of Faking Judge's Order."
It's easy to laugh and move on. How stupid can these people be, we wonder. But there's a more important security lesson here. Security screening is hard, and every false threat the screeners watch out for makes it more likely that real threats slip through. At a party the other night, someone told me about the time he accidentally brought a large knife through airport security. The screener pulled his bag aside, searched it, and pulled out a water bottle.
It's not just the water bottles and the t-shirts and the gun jewelry -- this kind of thing actually makes us all less safe.
It's easy to laugh at the You've Been Left Behind site, which purports to send automatic e-mails to your friends after the Rapture:
The unsaved will be 'left behind' on earth to go through the "tribulation period" after the "Rapture".... We have made it possible for you to send them a letter of love and a plea to receive Christ one last time. You will also be able to give them some help in living out their remaining time. In the encrypted portion of your account you can give them access to your banking, brokerage, hidden valuables, and powers of attorneys' (you won't be needing them any more, and the gift will drive home the message of love). There won't be any bodies, so probate court will take 7 years to clear your assets to your next of Kin. 7 years of course is all the time that will be left. So, basically the Government of the AntiChrist gets your stuff, unless you make it available in another way.
But what if the creator of this site isn't as scrupulous as he implies he is? What if he uses all of that account information, passwords, safe combinations, and whatever before any rapture? And even if he is an honest true believer, this seems like a mighty juicy target for any would-be identity thief.
And -- if you're curious -- this is how the triggering mechanism works:
We have set up a system to send documents by the email, to the addresses you provide, 6 days after the "Rapture" of the Church. This occurs when 3 of our 5 team members scattered around the U.S fail to log in over a 3 day period. Another 3 days are given to fail safe any false triggering of the system.
The site claims that the data can be encrypted, but it looks like the encryption key is stored on the server with the data.
EDITED TO ADD (6/14): Here's a similar site, run by atheists so they can guarantee that they'll be left behind to deliver all the messages.
This article claims that the Chinese People's Liberation Army was behind, among other things, the August 2003 blackout:
Computer hackers in China, including those working on behalf of the Chinese government and military, have penetrated deeply into the information systems of U.S. companies and government agencies, stolen proprietary information from American executives in advance of their business meetings in China, and, in a few cases, gained access to electric power plants in the United States, possibly triggering two recent and widespread blackouts in Florida and the Northeast, according to U.S. government officials and computer-security experts.
This is all so much nonsense I don't even know where to begin.
I wrote about this blackout already: the computer failures were caused by Blaster.
The "Interim Report: Causes of the August 14th Blackout in the United States and Canada," published in November and based on detailed research by a panel of government and industry officials, blames the blackout on an unlucky series of failures that allowed a small problem to cascade into an enormous failure.
The rest of the National Journal article is filled with hysterics and hyperbole about Chinese hackers. I have already written an essay about this -- it'll be the next point/counterpoint between Marcus Ranum and me for Information Security -- and I'll publish it here after they publish it.
EDITED TO ADD (6/2): Wired debunked this claim pretty thoroughly:
"This time, though, they've attached their tale to the most thoroughly investigated power incident in U.S. history," and "It traced the root cause of the outage to the utility company FirstEnergy's failure to trim back trees encroaching on high-voltage power lines in Ohio. When the power lines were ensnared by the trees, they tripped."
Large-scale power outages are never one thing. They're a small problem that cascades into a series of ever-bigger problems. But the triggering problem was those power lines.