Blog: February 2010 Archives
A Washington Post article concludes that small planes are not the next terror threat:
Pilots of private planes fly about 200,000 small and medium-size aircraft in the United States, using 19,000 airports, most of them small. The planes’ owners say the aircraft have little in common with airliners.
“I don’t see a gaping security hole here,” said Tom Walsh, an aviation security consultant. “In terms of aviation security, there are much bigger fish to fry than worrying [about] small aircraft.”
He said most would-be terrorists would draw the same conclusion—that tiny aircraft don’t pack a big enough punch. Planes like the one Stack flew weigh just a few thousand pounds and carry no more than 100 gallons of fuel. A Boeing 767 weighs 400,000 pounds and carries up to 25,000 gallons.
Richard L. Skinner, inspector general of the Department of Homeland Security, reviewed security at several general-aviation airports last year and concluded that general aviation “presents only limited and mostly hypothetical threats to security.”
What this analysis misses is our ability to terrorize ourselves. After all, who thought that a failed terrorist incident—nobody hurt, no plane crash, terrorist in custody—could cause so much terror?
On the face of it, Joseph Stack flying a private plane into the Austin, TX IRS office is no different than Nidal Hasan shooting up Ft. Hood: a lone extremist nutcase. If one is a terrorist and the other is a criminal, the difference is more political or religious than anything else.
Personally, I wouldn’t call either a terrorist. Nor would I call Amy Bishop, who opened fire on her department after she was denied tenure, a terrorist.
I note that the primary counterterrorist measures I advocate—investigation and intelligence—can’t possibly make a difference against any of these people. Lone nuts are pretty much impossible to detect in advance, and thus pretty much impossible to defend against: a point Cato’s Jim Harper made in a smart series of posts. And once they attack, conventional police work is how we capture those that simply don’t care if they’re caught or killed.
This is an excellent technical investigation of what actually happened.
This investigation into the remote spying allegedly being conducted against students at Lower Merion represents an attempt to find proof of spying and a look into the toolchain used to accomplish spying. Taking a look at the LMSD Staff List, Mike Perbix is listed as a Network Tech at LMSD. Mr. Perbix has a large online web forum footprint as well as a personal blog, and a lot of his posts, attributed to his role at Lower Merion, provide insight into the tools, methods, and capabilities deployed against students at LMSD. Of the three network techs employed at LMSD, Mr. Perbix appears to have been the mastermind behind a massive, highly effective digital panopticon.
Just declassified: “A Reference Guide to Selected Historical Documents Relating to the National Security Agency/Central Security Service, 1931–1985.” Formerly “Top Secret UMBRA.” From my quick scan, there are minimal redactions.
They claim to be “one of the nation’s only and most respected security and intelligence providers”—I’ve never heard of them—but their blog consists entirely of entries copied from my blog since December 24. They don’t even cull the ones that are obviously me: posts about interviews I’ve given, for example.
I contacted them last week and asked that they stop stealing my blog posts. I got an apologetic e-mail in response from Karim Hijazi, whose email sig file identifies him as “Principal/Founder,” but nothing has happened since. They haven’t stolen any new posts, but they haven’t taken down the old ones either. I suppose I could sue them, but public ridicule seems more fitting. (If you’re reading this post on the Demiurge site, I’m Bruce Schneier. Hi.)
EDITED TO ADD (2/23): The blog posts are down, and there’s a message to me in its place:
Speaking to the team that handles the blog component of the Demiurge website, I have learned not only have they been able to find at least 23 other websites syndicating content from Mr. Schneier’s blog, but there are more than three websites offering full blog post syndication links including Schneier’s blog.
Further, why would you find it offensive if we find your content very interesting to our clientele? If we really were trying to make it look like our content, don’t you think we would have scrubbed it? Besides all the links went back to your bloody blog… just more viewers for you. You weren’t thinking when you tried to flame us Bruce.
All you had to do was ask us to stop syndicating, which we did.
And for completeness, here’s Hijazi’s original e-mail response to my request that they “stop stealing my blog posts and republishing them as your own”:
Please accept my apologies about the republishing of your blog posts.
Quite honestly our web development team was tasked with finding some interesting content to keep the blog component of our firm’s website compelling and up to date; it is clear that they took my request out of context. Ironically, I rarely even look at my own firm’s website!
I have had them stop the republishing immediately. I know of you by reputation, truly respect your work and thank you for being so gracious in your request; you very well could have been obtuse.
Again, I personally apologize for this situation.
I know, I know. But the posts are down, and that’s what matters.
I hunted up statistics, and was amazed to find that after all the glaring newspaper headings concerning railroad disasters, less than three hundred people had really lost their lives by those disasters in the preceding twelve months. The Erie road was set down as the most murderous in the list. It had killed forty-six—or twenty-six, I do not exactly remember which, but I know the number was double that of any other road. But the fact straightway suggested itself that the Erie was an immensely long road, and did more business than any other line in the country; so the double number of killed ceased to be matter for surprise.
By further figuring, it appeared that between New York and Rochester the Erie ran eight passenger trains each way every day—sixteen altogether; and carried a daily average of 6,000 persons. That is about a million in six months—the population of New York city. Well, the Erie kills from thirteen to twenty-three persons out of its million in six months; and in the same time 13,000 of New York’s million die in their beds! My flesh crept, my hair stood on end. “This is appalling!” I said. “The danger isn’t in travelling by rail, but in trusting to those deadly beds. I will never sleep in a bed again.”
Four hundred and seven votes later, we have a tie. No really; we have a tie. Rhys Gibson and “I love to fly and it shows” have 135 votes each. (It’s still a tie at 141 votes each if I give half credit for all split votes.) Both are well ahead of the third place winner, with 81 votes. There were a few ambiguous comments that could possibly break the tie, but rather than scrutinize the hanging chad any more closely, I’m going to appeal to the judges to cast the deciding votes.
Although both logos are excellent, both Patrick Smith and I vote for Rhys Gibson.
Congratulations. Send me your physical address and we’ll get you your prizes.
There was a big U.S. cyberattack exercise this week. We didn’t do so well:
In a press release issued today, the Bipartisan Policy Center (BPC)—which organized “Cyber Shockwave” using a group of former government officials and computer simulations—concluded the U.S. is “unprepared for cyber threats.”
…the U.S. defenders had difficulty identifying the source of the simulated attack, which in turn made it difficult to take action.
“During the exercise, a server hosting the attack appeared to be based in Russia,” said one report. “However, the developer of the malware program was actually in the Sudan. Ultimately, the source of the attack remained unclear during the event.”
The simulation envisioned an attack that unfolds during a single day in July 2011. When the council convenes to face this crisis, 20 million of the nation’s smartphones have already stopped working. The attack—the result of a malware program that had been planted in phones months earlier through a popular “March Madness” basketball bracket application—disrupts mobile service for millions. The attack escalates, shutting down an electronic energy trading platform and crippling the power grid on the Eastern seaboard.
This is, I think, an eyewitness report.
The January 19th assassination of Mahmoud al-Mabhouh reads like a very professional operation:
Security footage of the killers’ movements during the afternoon, released by police in Dubai yesterday, underlines the professionalism of the operation. The group switched hotels several times and wore disguises including false beards and wigs, while surveillance teams rotated in pairs through the hotel lobby, never hanging around for too long and paying for everything in cash.
Folliard and another member of the party carrying an Irish passport in the name of Kevin Daveron were operating as spotters on the second floor of the hotel when the murder was committed. Both switched hotels that afternoon and dressed smartly to pose as hotel staff. The bald Daveron donned a dark wig and glasses, while Folliard appears to have removed a blonde wig to reveal dark hair.
Throughout the operation, none of the suspects made a direct call to one another. However, Dubai police traced a high volume of calls and text messages between three phones carried by the assassins and four numbers in Austria where a command centre had apparently been established.
To co-ordinate their movements on the ground, the team used discreet, sophisticated short-range communication devices as they tracked their victim.
The Dubai authorities claim there were two teams: one carried out surveillance of the target, while the other—which appears to be a group of younger men, at least as far as the camera shots show—carried out the killing.
Contrary to reports, the squad did not break into Mabhouh’s hotel room, nor did they knock on the door. They entered the room using copies of keys they had somehow acquired.
Read the whole thing—and watch (in three parts) this video compilation of all the CCTV cameras in the hotels and airport. It’s impressive. And the professionalism leads pretty much everyone to suspect Mossad.
There are a few things I wonder about. The team didn’t know what hotel Mabhouh would be staying in, nor whether he would be alone or with others. The team also didn’t use any guns. How much of the operation was preplanned, and how much was created on the fly? Was that why there were so many people involved?
The team booked the hotel room directly across the hallway from Mabhouh. That seems like the part of the plan most likely to arouse suspicion. It’s unusual to reserve a particular room, and not unreasonable to think that the hotel desk staff might wonder who else is booked nearby.
How did they get into Mabhouh’s hotel room? The video shows evidence of them trying to reprogram the door. Given that they didn’t know the hotel until they got there, what kind of general hotel-key reprogramming devices do they have?
I wonder if any of those fake passports had RFID chips?
Dubai’s police chief said six of the suspects had British passports, three were Irish, one French and one German.
The passports are believed to be fakes.
And Mabhouh was discovered in his room, the door locked and barred from the inside. Is it really that easy to do that to a hotel room door?
Note: Please limit comments to the security considerations and lessons of the assassination, and steer clear of the politics.
EDITED TO ADD (2/19): Interesting analysis:
Investigators believe the assassins tried to reprogram the electronic lock on al-Mabhouh’s door to gain entry. Some news reports say the assassins entered the room while the victim was out and waited for him to return, while others say they were thwarted from entering the room when a hotel guest stepped off the elevator on al-Mabhouh’s floor. They then had to resort to tricking al-Mabhouh into opening his door to them after he returned.
He said the number of people involved in the operation indicates that it may have been put together in a rush.
“The less time you have to plan and carry out an operation, the more people you need to carry it out [on the ground],” he said. “The more time you have to plan . . . there’s a lot of things you eliminate.”
If you know that you can stop the elevator in the basement, for example, you don’t then need people guarding the elevator lobby on the victim’s floor to make sure no one steps off the elevator, he said.
He says it was likely that the Mossad’s second in command for operations was in the hotel or the area when the assassination took place and has gone unnoticed by the Dubai authorities.
Ostrovsky said although the operatives scattered to various parts of the world after the operation was completed, he believes they’re all back in Israel now. He says other countries are likely sifting through their airport surveillance tapes now to track the final destination of the team members.
He added that the Mossad was likely surprised by how the Dubai authorities pieced everything together so well and publicized the video and passport photos of the suspects.
Ostrovsky said that despite the Dubai operation’s success, it was amateurish at moments. He points to the bad disguises the suspects used—wigs, glasses and moustaches—and the fact that the suspects seemed to change their disguises in the same place. He also points to two of the suspects who followed the victim to his hotel room while dressed in tennis outfits and didn’t seem to know what they were doing.
The two seemed to confer momentarily while the victim exited the elevator, as if deciding who would follow the victim to his room. A hotel employee accompanying the victim to his room even glanced back at the two, as if noticing their confusion.
“A lot of people in the field make those mistakes and they never come up because they’re never [caught on tape],” he said.
Interesting blog post, with video demonstration, about an improved tool to open high security locks with a key that will just “form itself” if you insert it into the lock and wiggle it a little. The basic technique is a few years old, but the improvements discussed here allow the tool to open a wider variety of locks than before.
I had no idea this was being done, but erased answers are now analyzed on standardized tests. Schools with a high number of wrong-to-right changes across multiple tests are presumed to have cheated: teachers changing the answers after the students are done.
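The statistics behind this kind of screening are simple. Here’s a minimal sketch—all the numbers are hypothetical, and real analyses are surely more careful—of how a classroom’s wrong-to-right erasure count might be flagged against the population rate, using a normal approximation to the binomial:

```python
import math

def erasure_zscore(wrong_to_right, total_erasures, population_rate):
    """z-score of a classroom's wrong-to-right erasure count against
    the population rate, via a normal approximation to the binomial."""
    expected = total_erasures * population_rate
    sd = math.sqrt(total_erasures * population_rate * (1 - population_rate))
    return (wrong_to_right - expected) / sd

# Hypothetical district where ~30% of all erasures go wrong-to-right.
# A classroom with 80 of 100 erasures going wrong-to-right is a
# wildly improbable outlier.
z = erasure_zscore(80, 100, 0.30)
print(round(z, 1))  # ≈ 10.9
```

A classroom at the population rate scores near zero; anything many standard deviations out, across multiple tests, is presumed to be teachers changing answers rather than luck.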
Last month I announced a contest to redesign the TSA logo. Here are the finalists. Clicking on them will bring up a larger, and easier to read, version.
Vote in the comments. The winner will receive a copy of our most recent books, a fake boarding pass on any flight for any date, and an empty 12-ounce bottle labeled “saline” that you can refill and get through any TSA security checkpoint.
Voting will close at noon PST on Sunday, February 21.
EDITED TO ADD (2/22): Winner here.
I don’t know which is more exciting: that someone is trying to break the squid record, or that there is a squid record in the first place.
An Auckland scientist is attempting to break his own world record for rearing deep sea squid in captivity.
Neither, actually. This is what’s exciting:
The project is a warm-up for Dr Steve O’Shea from AUT University, whose main goal is to one day raise a giant squid in a tank.
The record is 150 days, BTW.
This is neat:
The Impressioner consists of a sensor that goes into the lock and sends information back to a computer via USB about the location of the lock’s tumblers—a corresponding computer program comes up with the code, depending on the make of car you’ve entered beforehand. Once you know the code, a key-cutting machine can use it to carve up a key.
Right now, it’s a prototype that only works on Ford car locks. The article points out that both locksmiths and thieves can use this device.
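The decoding step is conceptually simple: snap each measured tumbler depth to the nearest cut in the manufacturer’s depth table. Here’s a rough illustration—the depth table and measurements are made up, not Ford’s actual specifications:

```python
# Hypothetical manufacturer depth table: four cut depths, in inches.
DEPTH_TABLE = [0.250, 0.225, 0.200, 0.175]

def decode_bitting(measured_depths):
    """Snap each sensed wafer depth to the nearest standard cut,
    yielding the bitting code a key-cutting machine would use."""
    code = []
    for d in measured_depths:
        cut = min(range(len(DEPTH_TABLE)),
                  key=lambda i: abs(DEPTH_TABLE[i] - d))
        code.append(cut + 1)  # cuts conventionally numbered from 1
    return code

print(decode_bitting([0.248, 0.201, 0.176, 0.223]))  # [1, 3, 4, 2]
```

The hard part, of course, is the sensor that reads the wafer positions in the first place; the software side is just a lookup.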
EDITED TO ADD (2/16): How it likely works.
Nice attack against the EMV—Eurocard Mastercard Visa—the “chip and PIN” credit card payment system. The attack allows a criminal to use a stolen card without knowing the PIN.
The flaw is that when you put a card into a terminal, a negotiation takes place about how the cardholder should be authenticated: using a PIN, using a signature or not at all. This particular subprotocol is not authenticated, so you can trick the card into thinking it’s doing a chip-and-signature transaction while the terminal thinks it’s chip-and-PIN. The upshot is that you can buy stuff using a stolen card and a PIN of 0000 (or anything you want). We did so, on camera, using various journalists’ cards. The transactions went through fine and the receipts say “Verified by PIN”.
So what went wrong? In essence, there is a gaping hole in the specifications which together create the “Chip and PIN” system. These specs consist of the EMV protocol framework, the card scheme individual rules (Visa, MasterCard standards), the national payment association rules (UK Payments Association aka APACS, in the UK), and documents produced by each individual issuer describing their own customisations of the scheme. Each spec defines security criteria, tweaks options and sets rules—but none take responsibility for listing what back-end checks are needed. As a result, hundreds of issuers independently get it wrong, and gain false assurance that all bases are covered from the common specifications. The EMV specification stack is broken, and needs fixing.
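Here’s a minimal sketch of the wedge attack described above. The class and message names are simplified inventions—real EMV transactions involve many more APDUs—but the structure is the same: the wedge answers the terminal’s PIN check itself, while the card completes what it believes is a chip-and-signature transaction:

```python
class StolenCard:
    """The genuine card: it never sees a PIN verification request,
    so it happily completes a signature-style transaction."""
    def transact(self, cvm):
        assert cvm == "signature"
        return "transaction authorised"

class Wedge:
    """Man-in-the-middle hardware between card and terminal."""
    def __init__(self, card):
        self.card = card

    def verify_pin(self, pin):
        # The VERIFY command is never forwarded to the card; the wedge
        # simply returns success (0x9000) for any PIN whatsoever, so
        # the terminal's receipt says "Verified by PIN".
        return 0x9000

    def transact(self):
        # The card, meanwhile, is told it's a signature transaction.
        return self.card.transact("signature")

wedge = Wedge(StolenCard())
assert wedge.verify_pin("0000") == 0x9000  # any PIN "verifies"
print(wedge.transact())                    # transaction authorised
```

Because the cardholder-verification negotiation is unauthenticated, neither side can detect that it disagrees with the other about which method was used.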
This is big. There are about a gazillion of these in circulation.
EDITED TO ADD (2/12): BBC video of the attack in action.
Scam-Detective: How did you find victims for your scams?
John: First you need to understand how the gangs work. At the bottom are the “foot soldiers”, kids who spend all of their time online to find email addresses and send out the first emails to get people interested. When they receive a reply, the victim is passed up the chain, to someone who has better English to get copies of ID from them like copies of their passport and driving licenses and build up trust. Then when they are ready to ask for money, they are passed further up again to someone who will pretend to be a barrister or shipping agent who will tell the victim that they need to pay charges or even a bribe to get the big cash amount out of the country. When they pay up, the gang master will collect the money from the Western Union office, using fake ID that they have taken from other scam victims.
Scam-Detective: Ok, I also want to talk more about how you managed to get your victims to trust you. I know it can be difficult for legitimate businesses to persuade customers to buy their products, yet you were able to convince people to part with their cash to get their hands on money that never existed in the first place, with at least one taking an international flight on top. That’s quite a skill, how did you learn to do it?
John: Once I had spent some time as a “foot soldier” (* sending out initial approaches and passing serious victims to other scammers) I was promoted to act as either a barrister, shipping agent or bank official. In the early days I had a supervisor who would read my emails and suggest responses, then I was left to do it myself. I had lots of different documents that I would use to convince the victim that I was genuine, including photographs of an official looking man in an office, fake ID and storage manifests, bank statements showing the money, whatever would best convince the victim that I, and the money, was real. I think the English term is to “worm my way” into their trust, taking it slowly and carefully so I didn’t scare them away by asking for too much money too soon.
Scam-Detective: What would you do if a victim had sent money and couldn’t afford to send more, or got cold feet?
John: I would use whatever tactics were needed to get more money. I would send faked letters which stated that the money was about to be taken out of the account by the bank or seized by the government to make them think it was urgent, or tell them that this was definitely the last obstacle to the money being released. I would encourage them to take out loans or borrow money from friends to make the last payment, but tell them that it was important that they didn’t tell anyone what the money was for. I promised them that the expenses would be paid back on top of their share of the money.
John: We had something called the recovery approach. A few months after the original scam, we would approach the victim again, this time pretending to be from the FBI, or the Nigerian Authorities. The email would tell the victim that we had caught a scammer and had found all of the details of the original scam, and that the money could be recovered. Of course there would be fees involved as well. Victims would often pay up again to try and get their money back.
This sounds just like any other confidence game; in fact, it’s a modern variation on a classic con game called the Spanish Prisoner. The only difference is that this one uses the Internet.
The iTunes Store Terms and Conditions prohibits it:
Notice, as I read this clause, not only are terrorists—or at least those on terrorist watch lists—prohibited from using iTunes to manufacture WMD, they are also prohibited from even downloading and using iTunes. So all the Al-Qaeda operatives holed up in the Northwest Frontier Provinces of Pakistan, dodging drone attacks while listening to Britney Spears songs downloaded with iTunes are in violation of the terms and conditions, even if they paid for the music!
And you thought being harassed at airports was bad enough.
This appears not to be a joke:
The state’s “Subversive Activities Registration Act,” passed last year and now officially on the books, states that “every member of a subversive organization, or an organization subject to foreign control, every foreign agent and every person who advocates, teaches, advises or practices the duty, necessity or propriety of controlling, conducting, seizing or overthrowing the government of the United States … shall register with the Secretary of State.”
There’s even a $5 filing fee.
By “subversive organization,” the law means “every corporation, society, association, camp, group, bund, political party, assembly, body or organization, composed of two or more persons, which directly or indirectly advocates, advises, teaches or practices the duty, necessity or propriety of controlling, conducting, seizing or overthrowing the government of the United States [or] of this State.”
Wow, is that idiotic or what?
Here’s the form.
Does the Republican Party count as an organization that “directly … advocates … controlling … the government”? I think it does. I think all political parties count under that definition.
How about we all fill in a copy and send it to them?
EDITED TO ADD (2/9): I misquoted the statute: “(1) ‘Subversive organization’ means every corporation, society, association, camp, group, bund, political party, assembly, body or organization, composed of two or more persons, which directly or indirectly advocates, advises, teaches or practices the duty, necessity or propriety of controlling, conducting, seizing or overthrowing the government of the United States, of this State or of any political subdivision thereof by force or violence or other unlawful means;”
It’s the last clause that rules out most of us.
EDITED TO ADD (2/11): And it seems that this is from the McCarthy era: 1951.
Isn’t it a bit embarrassing for an “expert on counter-terrorism” to be quoted as saying this?
Bill Tupman, an expert on counter-terrorism from Exeter University, told BBC News: “The problem is trying to predict the mind of the al-Qaeda planner; there are so many things they might do.
“And it is also necessary to reassure the public that we are trying to outguess the al-Qaeda planner and we are in the process of protecting them from any threat.”
I think it’s necessary to convince the public to refuse to be terrorized. What frustrates me most about Abdulmutallab is that he caused terror even though his plot failed. I want us to be indomitable enough for the next attack to fail to cause terror, even if it succeeds. Remember: terrorism can’t destroy our country’s way of life; only our reaction to terrorism can.
Target prevalence powerfully influences visual search behavior. In most visual search experiments, targets appear on at least 50% of trials. However, when targets are rare (as in medical or airport screening), observers shift response criteria, leading to elevated miss error rates. Observers also speed target-absent responses and may make more motor errors. This could be a speed/accuracy tradeoff with fast, frequent absent responses producing more miss errors. Disproving this hypothesis, our experiment one shows that very high target prevalence (98%) shifts response criteria in the opposite direction, leading to elevated false alarms in a simulated baggage search. However, the very frequent target-present responses are not speeded. Rather, rare target-absent responses are greatly slowed. In experiment two, prevalence was varied sinusoidally over 1000 trials as observers’ accuracy and reaction times (RTs) were measured. Observers’ criterion and target-absent RTs tracked prevalence. Sensitivity (d’) and target-present RTs did not vary with prevalence. These results support a model in which prevalence influences two parameters: a decision criterion governing the series of perceptual decisions about each attended item, and a quitting threshold that governs the timing of target-absent responses. Models in which target prevalence only influences an overall decision criterion are not supported.
This has implications for searching for contraband at airports.
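The criterion-shift effect the abstract describes is easy to simulate. This is a bare-bones equal-variance signal-detection sketch with invented parameters, not the authors’ model: the same observer, pushed toward a conservative criterion by rare targets, misses far more of them:

```python
import random

def miss_rate(prevalence, criterion, trials=20000, d_prime=2.0, seed=0):
    """On target-present trials the internal response is drawn from
    N(d', 1); the observer says 'present' only above the criterion.
    Only target trials matter for the miss rate, so noise trials
    are skipped."""
    rng = random.Random(seed)
    misses = targets = 0
    for _ in range(trials):
        if rng.random() < prevalence:  # a target appears
            targets += 1
            if rng.gauss(d_prime, 1.0) <= criterion:
                misses += 1
    return misses / targets

# Rare targets push observers toward a conservative criterion
# (say 1.5); the same observer with a neutral criterion (1.0)
# misses far less often.
print(miss_rate(0.02, criterion=1.5) > miss_rate(0.02, criterion=1.0))
```

Nothing about the observer’s sensitivity changes between the two runs—only where the decision threshold sits—which is exactly why rare-target screening at airports is so error-prone.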
The rumor that China used a system Google put in place to enable lawful intercepts, which I used as a news hook for this essay, has not been confirmed. At this point, I doubt that it’s true.
EDITED TO ADD (2/12): Good article.
At FSE 2010 this week, Dmitry Khovratovich and Ivica Nikolic presented a paper where they cryptanalyze ARX algorithms (algorithms that use only addition, rotation, and exclusive-OR operations): “Rotational Cryptanalysis of ARX.” In the paper, they demonstrate their attack against Threefish. Their attack breaks 39 (out of 72) rounds of Threefish-256 with a complexity of 2^252.4, 42 (out of 72) rounds of Threefish-512 with a complexity of 2^507, and 43.5 (out of 80) rounds of Threefish-1024 with a complexity of 2^1014.5. (Yes, that’s over 2^1000. Don’t laugh; it really is a valid attack, even though it—or any of these others—will never be practical.)
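The “rotational” property they exploit is easy to demonstrate: rotation commutes exactly with XOR, but only probabilistically with modular addition, which is what makes ARX designs the natural target. A quick check over random 64-bit words:

```python
import random

MASK = (1 << 64) - 1

def rotl(x, r):
    """Rotate a 64-bit word left by r bits."""
    return ((x << r) | (x >> (64 - r))) & MASK

rng = random.Random(1)
add_preserved = 0
N = 10000
for _ in range(N):
    x, y = rng.getrandbits(64), rng.getrandbits(64)
    # Rotation distributes over XOR exactly...
    assert rotl(x ^ y, 3) == rotl(x, 3) ^ rotl(y, 3)
    # ...but over addition mod 2^64 only with some probability.
    if rotl((x + y) & MASK, 3) == (rotl(x, 3) + rotl(y, 3)) & MASK:
        add_preserved += 1

print(0.2 < add_preserved / N < 0.4)  # holds roughly 1/4 of the time
```

A rotational pair (x, rotl(x, r)) thus survives each XOR and rotation for free and each addition with non-trivial probability, letting the property be traced through many rounds.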
This is excellent work, and represents the best attacks against Threefish to date. (I suspect that the attacks can be extended a few more rounds with some clever cryptanalytic tricks, but no further.) The security of full Threefish isn’t at risk, of course; there’s still plenty of security margin.
We have always stood by the security of Threefish with any set of non-obviously-bad constants. Still, a trivial modification—changing a single constant in the key schedule—dramatically reduces the number of rounds through which this attack can penetrate. If NIST allows another round of tweaks to the SHA-3 candidate algorithms, we will almost certainly take the opportunity to improve Skein’s security; we’ll change this constant to a value that removes the rotational symmetries that this technique exploits. If they don’t, we’re still confident of the security of Threefish and Skein.
And we’re always pleased to see more cryptanalysis against Threefish and Skein.
At Tuesday’s hearing, Senator Dianne Feinstein, Democrat of California and chairwoman of the Senate Intelligence Committee, asked Mr. Blair [the Director of National Intelligence] to assess the possibility of an attempted attack in the United States in the next three to six months.
He replied, “The priority is certain, I would say”—a response that was reaffirmed by the top officials of the C.I.A. and the F.B.I.
I don’t know what “the priority is certain” actually means, but now everyone is reporting that these agencies claim there will be a terrorist attack in the U.S. during the next six months.
Does anyone think this is a good idea?
Under an agreement that is still being finalized, the National Security Agency would help Google analyze a major corporate espionage attack that the firm said originated in China and targeted its computer networks, according to cybersecurity experts familiar with the matter. The objective is to better defend Google—and its users—from future attack.
EPIC has filed a Freedom of Information Act Request, asking for records pertaining to the partnership. That would certainly help, because otherwise we have no idea what’s actually going on.
I’ve already written about why the NSA should not be in charge of our nation’s cyber security.
Security is rarely static. Technology changes both security systems and attackers. But there’s something else that changes security’s cost/benefit trade-off: how the underlying systems being secured are used. Far too often we build security for one purpose, only to find it being used for another purpose—one it wasn’t suited for in the first place. And then the security system has to play catch-up.
Take driver’s licenses, for example. Originally designed to demonstrate a credential—the ability to drive a car—they looked like other credentials: medical licenses or elevator certificates of inspection. They were wallet-sized, of course, but they didn’t have much security associated with them. Then, slowly, driver’s licenses took on a second application: they became age-verification tokens in bars and liquor stores. Of course the security wasn’t up to the task—teenagers can be extraordinarily resourceful if they set their minds to it—and over the decades driver’s licenses got photographs, tamper-resistant features (once, it was easy to modify the birth year), and technologies that made counterfeiting harder. There was little value in counterfeiting a driver’s license, but a lot of value in counterfeiting an age-verification token.
Today, US driver’s licenses are taking on yet another function: security against terrorists. The Real ID Act—the government’s attempt to make driver’s licenses even more secure—has nothing to do with driving or even with buying alcohol, and everything to do with trying to make that piece of plastic an effective way to verify that someone is not on the terrorist watch list. Whether this is a good idea, or actually improves security, is another matter entirely.
You can see this kind of function creep everywhere. Internet security systems designed for informational Web sites are suddenly expected to provide security for banking Web sites. Security systems that are good enough to protect cheap commodities from being stolen are suddenly ineffective once the price of those commodities rises high enough. Application security systems, designed for locally owned networks, are expected to work even when the application is moved to a cloud computing environment. And cloud computing security, designed for the needs of corporations, is expected to be suitable for government applications as well—maybe even military applications.
Sometimes it’s obvious that security systems designed for one environment won’t work in another. We don’t arm our soldiers the same way we arm our policemen, and we can’t take commercial vehicles and easily turn them into ones outfitted for the military. We understand that we might need to upgrade our home security system if we suddenly come into possession of a bag of diamonds. Yet many think the same security that protects our home computers will also protect voting machines, and the same operating systems that run our businesses are suitable for military uses.
But these are all conscious decisions, and we security professionals often know better. The real problems arise when the changes happen in the background, without any conscious thought. We build a network security system that’s perfectly adequate for the threat and—like a driver’s license becoming an age-verification token—the network accrues more and more functions. But because it has already been pronounced “secure,” we can’t get any budget to re-evaluate and improve the security until after the bad guys have figured out the vulnerabilities and exploited them.
I don’t like having to play catch-up in security, but we seem doomed to keep doing so.
This essay originally appeared in the January/February 2010 issue of IEEE Security and Privacy.
Universal identification is portrayed by some as the holy grail of Internet security. Anonymity is bad, the argument goes; and if we abolish it, we can ensure only the proper people have access to their own information. We’ll know who is sending us spam and who is trying to hack into corporate networks. And when there are massive denial-of-service attacks, such as those against Estonia or Georgia or South Korea, we’ll know who was responsible and take action accordingly.
The problem is that it won’t work. Any design of the Internet must allow for anonymity. Universal identification is impossible. Even attribution—knowing who is responsible for particular Internet packets—is impossible. Attempting to build such a system is futile, and will only give criminals and hackers new ways to hide.
Imagine a magic world in which every Internet packet could be traced to its origin. Even in this world, our Internet security problems wouldn’t be solved. There’s a huge gap between proving that a packet came from a particular computer and proving that it was sent by a particular person. This is exactly the problem we have with botnets, or with pedophiles who store child porn on innocents’ computers. In these cases, we know the origins of the DDoS packets and the spam; they’re legitimate machines that have been hacked. Attribution isn’t as valuable as you might think.
Implementing an Internet without anonymity is very difficult, and causes its own problems. In order to have perfect attribution, we’d need agencies—real-world organizations—to provide Internet identity credentials based on other identification systems: passports, national identity cards, driver’s licenses, whatever. Sloppier identification systems, based on things such as credit cards, are simply too easy to subvert. We have nothing that comes close to this global identification infrastructure. Moreover, centralizing information like this actually hurts security because it makes identity theft that much more profitable a crime.
And realistically, any theoretical ideal Internet would need to allow people access even without their magic credentials. People would still use the Internet at public kiosks and at friends’ houses. People would lose their magic Internet tokens just like they lose their driver’s licenses and passports today. The legitimate bypass mechanisms would allow even more ways for criminals and hackers to subvert the system.
On top of all this, the magic attribution technology doesn’t exist. Bits are bits; they don’t come with identity information attached to them. Every software system we’ve ever invented has been successfully hacked, repeatedly. We simply don’t have anywhere near the expertise to build an airtight attribution system.
Not that it really matters. Even if everyone could trace all packets perfectly, to the person of origin and not just the computer, anonymity would still be possible. It would just take one person to set up an anonymity server. If I wanted to send a packet anonymously to someone else, I’d just route it through that server. For even greater anonymity, I could route it through multiple servers. This is called onion routing and, with appropriate cryptography and enough users, it adds anonymity back to any communications system that prohibits it.
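The layering idea behind onion routing is simple enough to sketch in a few lines. The toy below wraps a message in one encryption layer per relay, so each relay can peel exactly one layer and learn only the next hop, never the full path or the sender. The relay names, keys, and the SHA-256-based XOR “cipher” are all illustrative placeholders; a real system such as Tor uses proper authenticated encryption and negotiated session keys.

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. NOT real cryptography --
    # just enough to make the layering round-trip in a demo.
    out = b""
    for i in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice recovers the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(message: bytes, route: list) -> bytes:
    # Encrypt innermost-first: the last relay's layer goes on first,
    # the first relay's layer goes on last (outermost).
    packet = message
    next_hop = b"DEST"
    for name, key in reversed(route):
        packet = xor_crypt(next_hop + b"|" + packet, key)
        next_hop = name.encode()
    return packet

def peel(packet: bytes, key: bytes):
    # A relay removes its layer and learns only the next hop.
    plain = xor_crypt(packet, key)
    next_hop, _, inner = plain.partition(b"|")
    return next_hop, inner

# Hypothetical three-relay circuit.
route = [("relay1", b"key-one"), ("relay2", b"key-two"), ("relay3", b"key-three")]
pkt = wrap(b"hello", route)
hop, pkt = peel(pkt, b"key-one")    # relay1 sees only "relay2"
hop, pkt = peel(pkt, b"key-two")    # relay2 sees only "relay3"
hop, msg = peel(pkt, b"key-three")  # relay3 sees "DEST" and the message
```

The point of the exercise: no single relay ever holds both the sender’s address and the destination, which is why banning anonymity at the network layer can’t defeat this construction.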
Attempts to banish anonymity from the Internet won’t affect those savvy enough to bypass it, would cost billions, and would have only a negligible effect on security. What such attempts would do is affect the average user’s access to free speech, including those who use the Internet’s anonymity to survive: dissidents in Iran, China, and elsewhere.
Mandating universal identity and attribution is the wrong goal. Accept that there will always be anonymous speech on the Internet. Accept that you’ll never truly know where a packet came from. Work on the problems you can solve: software that’s secure in the face of whatever packet it receives, identification systems that are secure enough in the face of the risks. We can do far better at these things than we’re doing, and they’ll do more to improve security than trying to fix insoluble problems.
The whole attribution problem is very similar to the copy-protection/digital-rights-management problem. Just as it’s impossible to make specific bits not copyable, it’s impossible to know where specific bits came from. Bits are bits. They don’t naturally come with restrictions on their use attached to them, and they don’t naturally come with author information attached to them. Any attempts to circumvent this limitation will fail, and will increasingly need to be backed up by the sort of real-world police-state measures that the entertainment industry is demanding in order to make copy-protection work. That’s how China does it: police, informants, and fear.
Just as the music industry needs to learn that the world of bits requires a different business model, law enforcement and others need to understand that the old ideas of identification don’t work on the Internet. For good or for bad, whether you like it or not, there’s always going to be anonymity on the Internet.
This essay originally appeared in Information Security, as part of a point/counterpoint with Marcus Ranum. You can read Marcus’s response below my essay.
EDITED TO ADD (2/5): Microsoft’s Craig Mundie wants to abolish anonymity as well.
What Mundie is proposing is to impose authentication. He draws an analogy to automobile use. If you want to drive a car, you have to have a license (not to mention an inspection, insurance, etc.). If you do something bad with that car, like break a law, there is the chance that you will lose your license and be prevented from driving in the future. In other words, there is a legal and social process for imposing discipline. Mundie imagines three tiers of Internet ID: one for people, one for machines, and one for programs (which often act as proxies for the other two).
The Foreign Policy website has its own list of movie-plot threats: machine-gun wielding terrorists on paragliders, disease-laden insect swarms, a dirty bomb made from smoke detector parts, planning via online games, and botulinum in the food supply. The site fleshes these threats out a bit, but it’s nothing regular readers of this blog can’t imagine for themselves.
Maybe they should have their own movie-plot threat contest.
Ross Anderson reports:
Online transactions with credit cards or debit cards are increasingly verified using the 3D Secure system, which is branded as “Verified by VISA” and “MasterCard SecureCode”. This is now the most widely-used single sign-on scheme ever, with over 200 million cardholders registered. It’s getting hard to shop online without being forced to use it.
In a paper I’m presenting today at Financial Cryptography, Steven Murdoch and I analyse 3D Secure. From the engineering point of view, it does just about everything wrong, and it’s becoming a fat target for phishing. So why did it succeed in the marketplace?
Quite simply, it has strong incentives for adoption. Merchants who use it push liability for fraud back to banks, who in turn push it on to cardholders. Properly designed single sign-on systems, like OpenID and InfoCard, can’t offer anything like this. So this is yet another case where security economics trumps security engineering, but in a predatory way that leaves cardholders less secure. We conclude with a suggestion on what bank regulators might do to fix the problem.