September 2008 Archives

How to Clone and Modify E-Passports

The Hacker's Choice has released a tool allowing people to clone and modify electronic passports.

The problem is self-signed certificates.

A CA is not a great solution:

Using a Certification Authority (CA) could solve the attack but at the same time introduces a new set of attack vectors:

  1. The CA becomes a single point of failure. It becomes the juicy/high-value target for the attacker. Single points of failure are not good. Attractive targets are not good.

    Any person with access to the CA key can undetectably fake passports. Direct attacks, viruses, misplacing the key by accident (the UK government is good at this!) or bribery are just a few ways of getting the CA key.

  2. The single CA would need to be trusted by all governments. This is not practical as this means that passports would no longer be a national matter.

  3. Multiple CAs would not work either. Any country could use its own CA to create a valid passport of any other country. Read this sentence again: Country A can create a passport data set of Country B and sign it with Country A's CA key. The terminal will validate and display the information as data from Country B. This option also multiplies the number of 'juicy' targets. It also makes it more likely for a CA key to leak.

    Revocation lists for certificates only work when a leak/loss is detected. In most cases it will not be detected.

So what's the solution? We know that humans are good at Border Control. In the end, they have protected us well for the last 120 years. We also know that humans are good at pattern matching and image recognition. Humans also do an excellent job 'assessing' the person and not just the passport. Take the human part away and passport security falls apart.
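To see the core problem in miniature, here's a toy sketch -- mine, not THC's tool -- using Python's cryptography package. An attacker generates his own key pair and signs forged passport data with it; a terminal that checks the signature against whatever key accompanies the data, rather than against a trusted list of national signing keys, will happily accept it:

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Anyone can generate a key pair and "self-sign" -- there is no
    # registration step tying the key to a real passport authority.
    attacker_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    forged = b"Surname: DOE; Given names: JOHN; Nationality: Country B"
    signature = attacker_key.sign(forged, padding.PKCS1v15(), hashes.SHA256())

    # A terminal that takes the verification key from the chip itself
    # verifies successfully -- against the attacker's own key:
    attacker_key.public_key().verify(signature, forged,
                                     padding.PKCS1v15(), hashes.SHA256())
    print("signature valid (but meaningless)")

Nothing in the mathematics ties that key to any government; only an out-of-band trust anchor can do that, which is exactly the CA dilemma described above.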

EDITED TO ADD (10/13): More information.

Posted on September 30, 2008 at 12:24 PM • 28 Comments

Your Own Personal Robot Voyeur

Spykee is your own personal robot spy. It takes pictures and movies that you can watch on the Internet in real time or save for later. You can even talk with whoever you're spying on via Skype. More here, and you can buy one here: only $300.

Posted on September 26, 2008 at 7:39 AM • 23 Comments

$20M Cameras at New York's Freedom Tower are Pretty Sophisticated

They're trying to detect anomalies:

If you have ever wondered how security guards can possibly keep an unfailingly vigilant watch on every single one of dozens of television monitors, each depicting a different scene, the answer seems to be (as you suspected): they can't.

Instead, they can now rely on computers to constantly analyze the patterns, sizes, speeds, angles and motion picked up by the camera and determine -- based on how they have been programmed -- whether this constitutes a possible threat. In which case, the computer alerts the security guard whose own eyes may have been momentarily diverted. Or shut.

An alarm can be raised, for instance, if the computer discerns a vehicle that has been standing still for too long (say, a van in the drop-off lane of an airport terminal) or a person who is loitering while everyone else is in motion. By the same token, it will spot the individual who is moving rapidly while everyone else is shuffling along. It can spot a package that has been left behind and identify which figure in the crowd abandoned it. Or pinpoint the individual who is moving the wrong way down a one-way corridor.

Because one person's "abnormal situation" is another person's "hot dog vendor attracting a small crowd," the computers can be programmed to discern between times of the day and days of the week.

Certainly interesting.
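To make the rule-based part concrete, here's a toy sketch -- my own illustration; the actual Freedom Tower software is proprietary -- of the simplest anomaly above: flagging an object that stays put too long.

    # Toy "loitering" rule of the kind the article describes: flag any
    # tracked object that stays within RADIUS of its anchor point for
    # longer than MAX_DWELL. Thresholds are invented for illustration.
    from math import hypot

    RADIUS = 5.0       # metres
    MAX_DWELL = 300.0  # seconds

    def loiterers(tracks):
        """tracks: {object_id: [(t_seconds, x, y), ...]} in time order."""
        flagged = []
        for oid, points in tracks.items():
            t0, x0, y0 = points[0]
            for t, x, y in points:
                if hypot(x - x0, y - y0) > RADIUS:
                    t0, x0, y0 = t, x, y      # it moved: restart the dwell clock
                elif t - t0 > MAX_DWELL:
                    flagged.append(oid)       # stayed put too long
                    break
        return flagged

    # A van parked in the drop-off lane for six minutes trips the rule:
    tracks = {"van-17": [(t, 100.0, 20.0) for t in range(0, 400, 10)]}
    print(loiterers(tracks))  # ['van-17']

The hard part, as the hot-dog-vendor example suggests, isn't writing rules like this; it's choosing thresholds that don't bury the guards in false alarms.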

Posted on September 25, 2008 at 6:32 AM • 53 Comments

Sarah Palin's E-Mail

People have been asking me to comment about Sarah Palin's Yahoo e-mail account being hacked. I've already written about the security problems with "secret questions" back in 2005:

The point of all these questions is the same: a backup password. If you forget your password, the secret question can verify your identity so you can choose another password or have the site e-mail your current password to you. It's a great idea from a customer service perspective -- a user is less likely to forget his first pet's name than some random password -- but terrible for security. The answer to the secret question is much easier to guess than a good password, and the information is much more public. (I'll bet the name of my family's first pet is in some database somewhere.) And even worse, everybody seems to use the same series of secret questions.

The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.

EDITED TO ADD (9/25): Ed Felten on the issue.

Posted on September 24, 2008 at 4:01 PM • 61 Comments

The Two Classes of Airport Contraband

Airport security found a jar of pasta sauce in my luggage last month. It was a 6-ounce jar, above the limit; the official confiscated it, because allowing it on the airplane with me would have been too dangerous. And to demonstrate how dangerous he really thought that jar was, he blithely tossed it in a nearby bin of similar liquid bottles and sent me on my way.

There are two classes of contraband at airport security checkpoints: the class that will get you in trouble if you try to bring it on an airplane, and the class that will cheerily be taken away from you if you try to bring it on an airplane. This difference is important: Making security screeners confiscate anything from that second class is a waste of time. All it does is harm innocents; it doesn't stop terrorists at all.

Let me explain. If you're caught at airport security with a bomb or a gun, the screeners aren't just going to take it away from you. They're going to call the police, and you're going to be stuck for a few hours answering a lot of awkward questions. You may be arrested, and you'll almost certainly miss your flight. At best, you're going to have a very unpleasant day.

This is why articles about how screeners don't catch every gun and bomb that goes through the checkpoints -- or even a majority of them -- don't bother me. The screeners don't have to be perfect; they just have to be good enough. No terrorist is going to base his plot on getting a gun through airport security if there's a decent chance of getting caught, because the consequences of getting caught are too great.

Contrast that with a terrorist plot that requires a 12-ounce bottle of liquid. There's no evidence that the London liquid bombers actually had a workable plot, but assume for the moment they did. If some copycat terrorists try to bring their liquid bomb through airport security and the screeners catch them -- like they caught me with my bottle of pasta sauce -- the terrorists can simply try again. They can try again and again. They can keep trying until they succeed. Because there are no consequences to trying and failing, the screeners have to be 100 percent effective. Even if they slip up one in a hundred times, the plot can succeed.

The same is true for knitting needles, pocketknives, scissors, corkscrews, cigarette lighters and whatever else the airport screeners are confiscating this week. If there's no consequence to getting caught with it, then confiscating it only hurts innocent people. At best, it mildly annoys the terrorists.

To fix this, airport security has to make a choice. If something is dangerous, treat it as dangerous and treat anyone who tries to bring it on as potentially dangerous. If it's not dangerous, then stop trying to keep it off airplanes. Trying to have it both ways just distracts the screeners from actually making us safer.

EDITED TO ADD (10/23): A similar article ran in The Guardian.

Posted on September 23, 2008 at 5:47 AM • 110 Comments

India Using Brain Scans to Prove Guilt in Court

This seems like a whole lot of pseudo-science:

The technologies, generally regarded as promising but unproved, have yet to be widely accepted as evidence -- except in India, where in recent years judges have begun to admit brain scans. But it was only in June, in a murder case in Pune, in Maharashtra State, that a judge explicitly cited a scan as proof that the suspect's brain held "experiential knowledge" about the crime that only the killer could possess, sentencing her to life in prison.

[...]

This latest Indian attempt at getting past criminals' defenses begins with an electroencephalogram, or EEG, in which electrodes are placed on the head to measure electrical waves. The suspect sits in silence, eyes shut. An investigator reads aloud details of the crime -- as prosecutors see it -- and the resulting brain images are processed using software built in Bangalore.

The software tries to detect whether, when the crime's details are recited, the brain lights up in specific regions -- the areas that, according to the technology's inventors, show measurable changes when experiences are relived, their smells and sounds summoned back to consciousness. The inventors of the technology claim the system can distinguish between people's memories of events they witnessed and deeds they committed.

EDITED TO ADD (10/13): An expert committee said it is unscientific, but their findings weren't accepted.

Posted on September 22, 2008 at 6:10 AM • 60 Comments

Friday Squid Blogging: Dissecting a Giant Squid

In Santa Barbara.

Among other dissection highlights, Hochberg pulled out plastic-like pieces, which comprised what could be best described as a backbone, as well as a translucent brownish-yellow piece of the beak, which is made of fingernail-like material. The giant squid's anatomy features a mouth at the top of the head, which means the esophagus travels through the brain. "So you have to get very small chunks of food," said Hochberg, "or you'll blow your brains out." The sharp beaks, then, are used to chomp food into tiny pieces before sending it down the esophagus, through the brain, and into the gut.

Posted on September 19, 2008 at 4:56 PM • 13 Comments

TSA Employees Bypassing Airport Screening

Airport screeners are now able to bypass airport screening:

The Transportation Security Administration (TSA) rolled out the new uniforms and new screening policy at airports nationwide on Sept. 11.

The new policy says screeners can arrive for work and walk behind security lines without any of their belongings examined or X-rayed.

"Lunch or a bomb, you can walk right through with it," said Mike Boyd, an aviation consultant in Evergreen. "This is a major security issue."

Actually, it's not. Screeners have to go in and out of security all the time as they work. Yes, they can smuggle things in and out of the airport. But you have to remember that the airport screeners are trusted insiders for the system: there are a zillion ways they could break airport security.

On the other hand, it's probably a smart idea to screen screeners when they walk through a checkpoint where they aren't working at the time. The reason is the same reason you should screen everyone, including pilots who can crash their plane: you're not screening screeners (or pilots), you're screening people wearing screener (or pilot) uniforms and carrying screener (or pilot) IDs. You can either train your screeners to recognize authentic uniforms and IDs, or you can just screen everybody. The latter is just easier.

But this isn't a big deal.

Posted on September 19, 2008 at 8:01 AM • 47 Comments

The Pentagon's World of Warcraft Movie-Plot Threat

In a presentation that rivals any of my movie-plot threat contest entries, a Pentagon researcher is worried that terrorists might plot using World of Warcraft:

In a presentation late last week at the Director of National Intelligence Open Source Conference in Washington, Dr. Dwight Toavs, a professor at the Pentagon-funded National Defense University, gave a bit of a primer on virtual worlds to an audience largely ignorant about what happens in these online spaces. Then he launched into a scenario, to demonstrate how a meatspace plot might be hidden by in-game chatter.

In it, two World of Warcraft players discuss a raid on the "White Keep" inside the "Stonetalon Mountains." The major objective is to set off a "Dragon Fire spell" inside, and make off with "110 Gold and 234 Silver" in treasure. "No one will dance there for a hundred years after this spell is cast," one player, "war_monger," crows.

Except, in this case, the White Keep is at 1600 Pennsylvania Avenue. "Dragon Fire" is an unconventional weapon. And "110 Gold and 234 Silver" tells the plotters how to align the game's map with one of Washington, D.C.

I don't know why he thinks that the terrorists will use World of Warcraft and not some other online world. Or Facebook. Or Usenet. Or a chat room. Or e-mail. Or the telephone. I don't even know why the particular form of communication is in any way important.

The article ends with this nice paragraph:

Steven Aftergood, the Federation of American Scientists analyst who's been following the intelligence community for years, wonders how realistic these sorts of scenarios are, really. "This concern is out there. But it has to be viewed in context. It's the job of intelligence agencies to anticipate threats and counter them. With that orientation, they're always going to give more weight to a particular scenario than an objective analysis would allow," he tells Danger Room. "Could terrorists use Second Life? Sure, they can use anything. But is it a significant augmentation? That's not obvious. It's a scenario that an intelligence officer is duty-bound to consider. That's all."

My guess is still that some clever Pentagon researchers have figured out how to play World of Warcraft on the job, and they're not giving that perk up anytime soon.

Posted on September 18, 2008 at 1:29 PM • 62 Comments

The NSA Teams Up with the Chinese Government to Limit Internet Anonymity

Definitely strange bedfellows:

A United Nations agency is quietly drafting technical standards, proposed by the Chinese government, to define methods of tracing the original source of Internet communications and potentially curbing the ability of users to remain anonymous.

The U.S. National Security Agency is also participating in the "IP Traceback" drafting group, named Q6/17, which is meeting next week in Geneva to work on the traceback proposal. Members of Q6/17 have declined to release key documents, and meetings are closed to the public.

[...]

A second, apparently leaked ITU document offers surveillance and monitoring justifications that seem well-suited to repressive regimes:

A political opponent to a government publishes articles putting the government in an unfavorable light. The government, having a law against any opposition, tries to identify the source of the negative articles but the articles having been published via a proxy server, is unable to do so protecting the anonymity of the author.

This is being sold as a way to go after the bad guys, but it won't help. Here's Steve Bellovin on that issue:

First, very few attacks these days use spoofed source addresses; the real IP address already tells you where the attack is coming from. Second, in case of a DDoS attack, there are too many sources; you can't do anything with the information. Third, the machine attacking you is almost certainly someone else's hacked machine and tracking them down (and getting them to clean it up) is itself time-consuming.

TraceBack is most useful in monitoring the activities of large masses of people. But of course, that's why the Chinese and the NSA are so interested in this proposal in the first place.

It's hard to figure out what the endgame is; the U.N. doesn't have the authority to impose Internet standards on anyone. In any case, this idea is counter to the U.N. Universal Declaration of Human Rights, Article 19: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." In the U.S., it's counter to the First Amendment, which has long permitted anonymous speech. On the other hand, basic human and constitutional rights have been jettisoned left and right in the years after 9/11; why should this be any different?

But when the Chinese government and the NSA get together to enhance their ability to spy on us all, you have to wonder what's gone wrong with the world.

Posted on September 18, 2008 at 6:34 AM • 71 Comments

NSA Snooping on Cell Phone Calls

From CNet:

A recent article in the London Review of Books revealed that a number of private companies now sell off-the-shelf data-mining solutions to government spies interested in analyzing mobile-phone calling records and real-time location information. These companies include ThorpeGlen, VASTech, Kommlabs, and Aqsacom -- all of which sell "passive probing" data-mining services to governments around the world.

ThorpeGlen, a U.K.-based firm, offers intelligence analysts a graphical interface to the company's mobile-phone location and call-record data-mining software. Want to determine a suspect's "community of interest"? Easy. Want to learn if a single person is swapping SIM cards or throwing away phones (yet still hanging out in the same physical location)? No problem.

In a Web demo (PDF) (mirrored here) to potential customers back in May, ThorpeGlen's vice president of global sales showed off the company's tools by mining a dataset of a single week's worth of call data from 50 million users in Indonesia, which it has crunched in order to try to discover small anti-social groups that only call each other.

Posted on September 17, 2008 at 12:49 PM • 40 Comments

GPS Spoofing

Interesting:

Jon used a desktop computer attached to a GPS satellite simulator to create a fake GPS signal. Portable GPS satellite simulators can fit in the trunk of a car, and are often used for testing. They are available as commercial off-the-shelf products. You can also rent them for less than $1K a week -- peanuts to anyone thinking of hijacking a cargo truck and selling stolen goods.

In his first experiments, Jon placed his desktop computer and GPS satellite simulator in the cab of his small truck, and powered them off an inverter. The VAT (Los Alamos's Vulnerability Assessment Team) used a second truck as the victim cargo truck. "With this setup," Jon said, "we were able to spoof the GPS receiver from about 30 feet away. If our equipment could broadcast a stronger signal, or if we had purchased stronger signal amplifiers, we certainly could have spoofed over a greater distance."

During later experiments, Jon and the VAT were able to easily achieve much greater GPS spoofing ranges. They spoofed GPS signals at ranges over three quarters of a mile. "The farthest distance we achieved was 4586 feet, at Los Alamos," said Jon. "When you radiate an RF signal, you ideally want line of sight, but in this case we were walking around buildings and near power lines. We really had a lot of obstruction in the way. It surprised us." An attacker could drive within a half mile of the victim truck, and still override the truck's GPS signals.
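The reason this works with such modest equipment is that genuine GPS signals arrive from orbit extremely weak -- on the order of -130 dBm at the ground. A back-of-envelope free-space link budget (my numbers, not the experimenters') shows how easily a small transmitter overpowers them:

    # Back-of-envelope link budget: why a weak nearby transmitter drowns
    # out genuine GPS signals from orbit. Numbers are illustrative.
    from math import log10

    F_L1_MHZ = 1575.42        # GPS L1 carrier frequency
    GPS_RX_DBM = -130.0       # rough received GPS signal power at the ground

    def fspl_db(distance_km, freq_mhz):
        """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
        return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

    for feet in (30, 4586):
        km = feet * 0.0003048
        rx = 10.0 - fspl_db(km, F_L1_MHZ)   # a 10 dBm (10 mW) spoofer
        print(f"{feet:5d} ft: spoofer arrives at {rx:6.1f} dBm "
              f"vs {GPS_RX_DBM} dBm from the satellites")

Even at 4,586 feet, a 10-milliwatt spoofer arrives roughly 40 decibels stronger than the real satellite signals.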

EDITED TO ADD (10/13): Argonne National Labs is working on this.

Posted on September 17, 2008 at 7:03 AM • 71 Comments

UK Ministry of Defense Loses Memory Stick with Military Secrets

Oops:

The USB stick, outlining training for 70 soldiers from the 3rd Battalion, Yorkshire Regiment, was found on the floor of The Beach in Newquay in May.

Times, locations and travel and accommodation details for the troops were included in files on the device.

It's not the first time:

More than 120 USB memory sticks, some containing secret information, have been lost or stolen from the Ministry of Defence since 2004, it was reported earlier this year.

Some 26 of those disappeared this year -- including three which contained information classified as "secret", and 19 which were "restricted".

I've written about this general problem before: we're storing ever more data in ever smaller devices.

The point is that it's now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I'd never know it.

The solution? Encrypt them.
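That takes only a few lines of code. A minimal sketch using Python's cryptography package -- in practice you'd derive the key from a passphrase or, better, use full-disk encryption; the file names here are hypothetical:

    # Minimal sketch: encrypt a file before it ever touches the USB stick.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # store this safely -- NOT on the stick
    fernet = Fernet(key)

    # "exercise.doc" and the mount point are hypothetical names.
    with open("exercise.doc", "rb") as f:
        ciphertext = fernet.encrypt(f.read())

    with open("/media/usbstick/exercise.enc", "wb") as f:
        f.write(ciphertext)

    # Whoever finds the stick on a nightclub floor now sees only
    # random-looking bytes; recovery requires the key:
    # fernet.decrypt(ciphertext)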

Posted on September 16, 2008 at 6:21 AM • 33 Comments

Change Your Name and Avoid the TSA Watchlist

Shhhh. Don't tell the terrorists:

The U.S. Department of Homeland Security wrote a letter to Labbé in 2004, saying he had been placed on their watch list after falling victim to identity theft. At the time, the department said there was no way for his name to be removed.

Although Labbé wrote letters to the U.S. department, his efforts were in vain, prompting him to legally change his name.

"So now, my official name is François Mario Labbé," he said.

"Then you have to change everything: driver's license, social insurance, medicare, credit card -- everything."

Although it's not a big change from Mario Labbé, he said it's been enough to foil the U.S. customs computers.

Posted on September 15, 2008 at 1:25 PM • 30 Comments

New Book: Schneier on Security

I have a new book coming out: Schneier on Security. It's a collection of my essays, all written from June 2002 to June 2008. They're all on my website, so regular readers won't have missed anything if they don't buy this book. But for those of you who want my essays in one easy-to-read place, or are planning to be shipwrecked on a desert island without Web access and would like to spend your time there pondering the sorts of questions I discuss in my essays, or want to give copies of my essays to friends and relatives as gifts, this book is for you. There are only 90 shopping days before Christmas.

The hardcover book retails for $30, but Amazon is already selling it for $20. If you want a signed copy, e-mail me. I'll send you a signed copy for $30 including U.S. shipping, or $40 including overseas shipping. Yes, Amazon is cheaper -- and you can always find me at a conference and ask me to sign the book.

Posted on September 15, 2008 at 7:18 AM • 18 Comments

Adi Shamir's Cube Attack Paper is Online

The cube attack paper, discussed here, is online: I. Dinur and A. Shamir, "Cube Attacks on Tweakable Black Box Polynomials," Cryptology ePrint Archive: Report 2008/385.

Posted on September 14, 2008 at 5:21 PM • 26 Comments

Friday Squid Blogging: The Mystery of Humboldt Squid Beaks

They're sharp:

There are many weird things about the giant Humboldt squid, but here's one of the strangest: Its beak. The squid's beak is one of the hardest organic substances in existence -- such that the sharp point can slice through a fish or whale like a Ginsu knife. Yet the beak is attached to squid flesh that itself is the texture of jello. How precisely does a gelatinous animal safely wield such a razor-sharp weapon? Why doesn't it just sort of, y'know, rip off? It's as if you tried to carve a roast with a knife that doesn't have a handle: It would cut into your fingers as much as the roast.

Paper here.

Posted on September 12, 2008 at 4:59 PM • 9 Comments

The Doghouse: Tornado Plus Encrypted USB Drive

Don't buy this:

My first discussion was with a sales guy. I asked about the encryption method. He didn't know. I asked about how the key was protected. Again, no idea. I began to suspect that this was not the person I needed to speak with, and I asked for a "technical" person. After a short wait, another sales guy got on the phone. He knew a little more. For example, the encryption method is to XOR the key with the data. Those of you in the security profession know my reaction to this news. For those of you still coming up to speed, XORing a key with data to encrypt sensitive information is bad. Very bad.
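To see just how bad, here's a toy sketch of repeating-key XOR -- I'm guessing at the key length; that detail doesn't matter. A single stretch of known plaintext hands an attacker the key, and the key decrypts everything:

    # Why XOR "encryption" is very bad (toy sketch; the Tornado Plus
    # internals are a guess). With a repeating key, one stretch of known
    # plaintext reveals the key, which then decrypts everything else.
    from itertools import cycle

    def xor(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, cycle(key)))

    key = b"SECRETKEY!"                      # hypothetical 10-byte device key
    plaintext = b"Quarterly payroll report: CONFIDENTIAL. Totals follow..."
    ciphertext = xor(plaintext, key)

    # The attacker knows (or guesses) the file's standard header:
    known = b"Quarterly "
    recovered_key = xor(ciphertext[:len(known)], known)
    print(recovered_key)                     # b'SECRETKEY!'
    print(xor(ciphertext, recovered_key))    # the whole file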

EDITED TO ADD (9/13): In the comment thread, there's a lot of talk about one-time pads. This is something I wrote on the topic in 2002:

So, let me summarize. One-time pads are useless for all but very specialized applications, primarily historical and non-computer. And almost any system that uses a one-time pad is insecure. It will claim to use a one-time pad, but actually use a two-time pad (oops). Or it will claim to use a one-time pad, but actually use a stream cipher. Or it will use a one-time pad, but won't deal with message re-synchronization and re-transmission attacks. Or it will ignore message authentication, and be susceptible to bit-flipping attacks and the like. Or it will fall prey to keystream reuse attacks. Etc., etc., etc.
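The "two-time pad (oops)" failure is worth a quick demonstration: XOR two ciphertexts encrypted under the same pad and the pad cancels out, leaving the XOR of the two plaintexts, which known or guessed words ("crib dragging") then unravel.

    # The "two-time pad" failure: reusing a pad cancels it out entirely.
    import os

    p1 = b"ATTACK AT DAWN ON THE EASTERN BRIDGE"
    p2 = b"RETREAT AND REGROUP AT THE NORTH CAMP"
    pad = os.urandom(max(len(p1), len(p2)))     # a genuinely random pad...

    c1 = bytes(a ^ b for a, b in zip(p1, pad))
    c2 = bytes(a ^ b for a, b in zip(p2, pad))  # ...fatally used twice

    leak = bytes(a ^ b for a, b in zip(c1, c2))
    # leak == p1 XOR p2: the pad is gone, and a known word in one message
    # exposes the other. For instance:
    crib = b"ATTACK"
    print(bytes(a ^ b for a, b in zip(leak, crib)))   # b'RETREA' -- start of p2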

Posted on September 12, 2008 at 12:05 PM • 55 Comments

Cost/Benefit of Terrorism Security

"The terrifying cost of feeling safer," from the Sydney Morning Herald:

Sandler and his colleagues conducted an analysis of the costs and benefits of five different approaches to combating terrorism. I must warn you that, because of the dearth of information, this study is even more reliant on assumptions than usual. Even so, in three cases the cost of the action so far exceeds the benefits that doubts about the reliability of the estimates recede.

Because the loss of life is so low, they measure the benefits of successful counter-terrorism measures in terms of loss of gross domestic product avoided. Trouble is, terrorism does little to disrupt economic growth, as even September 11 demonstrated.

Using the case of the US, Sandler estimates that simply continuing the present measures involves costs exceeding benefits by a factor of at least 10. Adopting additional defensive measures (such as stepping up security at valuable targets) would, at best, entail costs 3.5 times the benefits. Taking more pro-active measures (such as invading Afghanistan) would have costs at least eight times the benefits.

According to Sandler, only greater international co-operation, or adopting more sensitive foreign policies to project a more positive image abroad, could produce benefits greater than their (minimal) costs.

What's that? You don't care what it costs because no one can put a value on saving a human life? Heard of opportunity cost? Taxpayers' money we waste on excessive counter-terrorism measures is money we can't spend reducing the gap between white and indigenous health -- or, if that doesn't appeal, on buying Olympic medals.

Posted on September 12, 2008 at 6:32 AM • 34 Comments

Turning off Fire Hydrants in the Name of Terrorism

This really pegs the stupid meter:

He explains all the district's hydrants, including those in Alexander Ranch, have had their water turned off since just after 9/11 -- something a trade association spokesman tells us is common practice for rural systems.

"These hydrants need to be cut off in a way to prevent vandalism or any kind of terrorist activity, including something in the water lines," Hodges said.

But Hodges says fire departments know, or should have known, the water valves can be turned back on with a tool.

One, fires are much more common than terrorism -- keeping fire hydrants on makes much more sense than turning them off. Two, what sort of terrorism is possible using working fire hydrants? Three, if the water valves can be "turned back on with a tool," how does turning them off prevent fire-hydrant-related terrorism?

More and more, it seems as if public officials in this country have simply gone insane.

Posted on September 11, 2008 at 1:59 PM • 76 Comments

DNA Matching and the Birthday Paradox

Nice essay:

Is it possible that the F.B.I. is right about the statistics it cites, and that there could be 122 nine-out-of-13 matches in Arizona's database?

Perhaps surprisingly, the answer turns out to be yes. Let's say that the chance of any two individuals matching at any one locus is 7.5 percent. In reality, the frequency of a match varies from locus to locus, but I think 7.5 percent is pretty reasonable. For instance, with a 7.5 percent chance of matching at each locus, the chance that any 2 random people would match at all 13 loci is about 1 in 400 trillion. If you choose exactly 9 loci for 2 random people, the chance that they will match all 9 is 1 in 13 billion. Those are the sorts of numbers the F.B.I. tosses around, I think.

So under these same assumptions, how many pairs would we expect to find matching on at least 9 of 13 loci in the Arizona database? Remarkably, about 100. If you start with 65,000 people and do a pairwise match of all of them, you are actually making over 2 billion separate comparisons (65,000 * 64,999/2). And if you aren't just looking for a match on 9 specific loci, but rather on any 9 of 13 loci, then for each of those pairs of people there are over 700 different combinations that are being searched.

So all told, you end up doing about 1.4 trillion searches! If 1 in 13 billion searches yields a positive match as noted above, this leads to roughly 100 expected matches on 9 of 13 loci in a database the size of Arizona's. (The way I did the calculations, I am allowing for 2 individuals to match on different sets of loci; so to get 100 different pairs of people who match, I need a match rate of slightly higher than 7.5 percent per locus.)
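The arithmetic is easy to check. Here it is in a few lines of Python, using the essay's own simplifying assumptions -- independent loci, a flat 7.5 percent per-locus match rate, and ignoring the requirement that the remaining four loci not match:

    # The essay's back-of-envelope numbers, reproduced.
    from math import comb

    p = 0.075
    print(f"13 of 13: 1 in {1 / p**13:.3g}")   # ~4e14: "1 in 400 trillion"
    print(f" 9 of  9: 1 in {1 / p**9:.3g}")    # ~1.3e10: "1 in 13 billion"

    people = 65_000
    pairs = people * (people - 1) // 2         # over 2 billion comparisons
    combos = comb(13, 9)                       # 715 ways to pick 9 of 13 loci
    searches = pairs * combos                  # ~1.5 trillion searches
    print(f"expected 9-of-13 matches: {searches * p**9:.0f}")   # ~113, i.e. roughly 100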

EDITED TO ADD (9/14): The FBI is trying to suppress the analysis.

Posted on September 11, 2008 at 6:21 AM • 30 Comments

Mythbusters Episode on RFID Security Nixed

Seems that the idea was killed by lawyers under pressure from the credit card industry. Or maybe not; the person who started this rumor has retracted his comments. Or maybe those same lawyers made him retract his comments.

Don't they know that security by gag order never works, except temporarily?

Posted on September 10, 2008 at 2:34 PM • 24 Comments

Secret Military Technology

On 60 Minutes, in an interview with Scott Pelley, reporter Bob Woodward claimed that the U.S. military has a new secret technique that's so revolutionary, it's on par with the tank and the airplane:

Woodward: This is very sensitive and very top secret, but there are secret operational capabilities that have been developed by the military to locate, target, and kill leaders of al Qaeda in Iraq, insurgent leaders, renegade militia leaders, that is one of the true breakthroughs.

Pelley: What are we talking about here? Some kind of surveillance, some kind of targeted way of taking out just the people that you're looking for, the leadership of the enemy?

[...]

Woodward: It is the stuff of which military novels are written.

Pelley: Do you mean to say that this special capability is such an advance in military technique and technology that it reminds you of the advent of the tank and the airplane?

Woodward: Yeah.

It's here, 7 minutes and 55 seconds in.

Anyone have any ideas?

EDITED TO ADD (9/11): One idea:

I'm going to make a wager about what I think Woodward is talking about, and I'll be curious to see what Danger Room readers have to say. I believe he is talking about the much ballyhooed (in defense geek circles) "Tagging, Tracking and Locating" program; here's a briefing on it from Special Operations Command. These are newfangled technologies designed to track people from long distances, without the targeted people realizing they are being tracked. That can theoretically include thermal signatures, or some sort of "taggant" placed on a person. Think Will Smith in Enemy of the State. Well, not so many cameras, maybe.

Posted on September 10, 2008 at 11:35 AM • 191 Comments

News from the Rock Phish Gang

Definitely interesting:

Based in Europe, the Rock Phish group is a criminal collective that has been targeting banks and other financial institutions since 2004. According to RSA, they are responsible for half of the worldwide phishing attacks and have siphoned tens of millions of dollars from individuals' bank accounts. The group got its name from a now discontinued quirk in which the phishers used directory paths that contained the word "rock."

The first sign the group was expanding operations came in April, when it introduced a trojan known alternately as Zeus or WSNPOEM, which steals sensitive financial information in transit from a victim's machine to a bank. Shortly afterward, the gang added more crimeware, including a custom-made botnet client that was spread, among other means, using the Neosploit infection kit.

[...]

Soon, additional signs appeared pointing to a partnership between Rock Phishers and Asprox. Most notably, the command and control server for the custom Rock Phish crimeware had exactly the same directory structure of many of the Asprox servers, leading RSA researchers to believe Rock Phish and Asprox attacks were using at least one common server. (Researchers from Damballa were able to confirm this finding after observing malware samples from each of the respective botnets establish HTTP proxy server connections to a common set of destination IPs.)

Posted on September 10, 2008 at 7:47 AM • 14 Comments

Gait Analysis from Satellite

Ignoring the sensationalist headline, this is interesting:

By analysing the movements of human shadows in aerial and satellite footage, JPL engineer Adrian Stoica says it should be possible to identify people from the way they walk -- a technique called gait analysis, whose power lies in the fact that a person's walking style is very hard to disguise.

Video taken from above shows only people's heads and shoulders, which makes measuring the characteristic length and rhythm of a person's stride impossible. That's not true of shadows, though, Stoica told a security conference in Edinburgh, UK, last month. Shadows, he says, provide enough gait data to deduce a positive ID. To prove it, he has written software that recognises human movement in aerial and satellite video footage. It isolates moving shadows and uses data on the time of day and the camera angle to correct shadows if they are elongated or foreshortened. Regular gait analysis is then applied to identify people. In tests on footage shot from the sixth floor of a building, Stoica says his software was indeed able to extract useful gait data.

The article goes on to say that using satellite images would be harder, but that the basic idea is the same.

Of course, this is less useful for finding individuals and more useful for tracking a population as it moves about its day. But some individuals will have more distinctive gaits than others, and will be easier to track. Soon we may all need to walk with rocks in our shoes.

Posted on September 9, 2008 at 12:22 PM • 47 Comments

Identity Farming

Let me start off by saying that I'm making this whole thing up.

Imagine you're in charge of infiltrating sleeper agents into the United States. The year is 1983, and the proliferation of identity databases is making it increasingly difficult to create fake credentials. Ten years ago, someone could have just shown up in the country and gotten a driver's license, Social Security card and bank account -- possibly using the identity of someone roughly the same age who died as a young child -- but it's getting harder. And you know that trend will only continue. So you decide to grow your own identities.

Call it "identity farming." You invent a handful of infants. You apply for Social Security numbers for them. Eventually, you open bank accounts for them, file tax returns for them, register them to vote, and apply for credit cards in their name. And now, 25 years later, you have a handful of identities ready and waiting for some real people to step into them.

There are some complications, of course. Maybe you need people to sign their names as parents -- or, at least, mothers. Maybe you need doctors to fill out birth certificates. Maybe you need to fill out paperwork certifying that you're home-schooling these children. You'll certainly want to exercise their financial identity: depositing money into their bank accounts and withdrawing it from ATMs, using their credit cards and paying the bills, and so on. And you'll need to establish some sort of addresses for them, even if it is just a mail drop.

You won't be able to get driver's licenses or photo IDs in their name. That isn't critical, though; in the U.S., more than 20 million adult citizens don't have photo IDs. But other than that, I can't think of any reason why identity farming wouldn't work.

Here's the real question: Do you actually have to show up for any part of your life?

Again, I made this all up. I have no evidence that anyone is actually doing this. It's not something a criminal organization is likely to do; twenty-five years is too distant a payoff horizon. The same logic holds true for terrorist organizations; it's not worth it. It might have been worth it to the KGB -- although perhaps harder to justify after the Soviet Union broke up in 1991 -- and might be an attractive option for existing intelligence adversaries like China.

Immortals could also use this trick to perpetuate themselves, inventing their own children and gradually assuming their identity, then killing their parents off. They could even show up for their own driver's license photos, wearing a beard as the father and blue spiked hair as the son. I'm told this is a common idea in Highlander fan fiction.

The point isn't to create another movie plot threat, but to point out the central role that data has taken on in our lives. Previously, I've said that we all have a data shadow that follows us around, and that more and more institutions interact with our data shadows instead of with us. We only intersect with our data shadows once in a while -- when we apply for a driver's license or passport, for example -- and those interactions are authenticated by older, less-secure interactions. The rest of the world assumes that our photo IDs glue us to our data shadows, ignoring the rather flimsy connection between us and our plastic cards. (And, no, REAL-ID won't help.)

It seems to me that our data shadows are becoming increasingly distinct from us, almost with a life of their own. What's important now is our shadows; we're secondary. And as our society relies more and more on these shadows, we might even become unnecessary.

Our data shadows can live a perfectly normal life without us.

This essay previously appeared on Wired.com.

EDITED TO ADD (9/9): Interesting commentary.

Posted on September 9, 2008 at 5:42 AM • 60 Comments

Bumblebees Making Security Trade-Offs

I have long been enamored with security trade-offs in the natural world:

A 3D video tracking system revealed that although the bees became very accurate at detecting the camouflaged spiders, they also became increasingly wary.

"When they come in to inspect flowers, they spend a little bit longer hovering in front of them when they know a camouflaged spider is present," said Dr Ings.

With this "trade-off", the bees may lose valuable foraging time -- but they reduce the risk of becoming the crab spider's next meal.

Posted on September 8, 2008 at 12:52 PM • 7 Comments

BT, Phorm, and Me

Over the past year I have gotten many requests, both public and private, to comment on the BT and Phorm incident.

I was not involved with BT and Phorm, then or now. Everything I know about Phorm and BT's relationship with Phorm came from the same news articles you read. I have not gotten involved as an employee of BT. But anything I say is -- by definition -- said by a BT executive. That's not good.

So I'm sorry that I can't write about Phorm. But -- honestly -- lots of others have been giving their views on the issue.

Posted on September 8, 2008 at 6:23 AM • 40 Comments

Friday Squid Blogging: Colossal Squid was a Lethargic Blob

Fierce deep-sea predator? Not so much:

"We are looking at something verging on the incredibly bizarre. As she got older she got shorter and broader and was reduced to a giant gelatinous blob, carrying many thousands of eggs," he says.

"Her shape was likely to have affected her behaviour and ability to hunt. I can't imagine her jetting herself around in the water at any great speed, and she was too gelatinous to have been a fighting machine.

"It's likely she was just blobbing around the seabed carrying her brood of eggs, living on dead fish, while her mate was off hunting."

Posted on September 5, 2008 at 4:36 PM • 8 Comments

Contest: Cory Doctorow's Cipher Wheel Rings

Cory Doctorow wanted a secret decoder wedding ring, and he asked me to help design it. I wanted something more than the standard secret decoder ring, so this is what I asked for: "I want each wheel to be the alphabet, with each letter having either a dot above, a dot below, or no dot at all. The first wheel should have alternating above, none, below. The second wheel should be the repeating sequence of above, above, none, none, below, below. The third wheel should be the repeating sequence of above, above, above, none, none, none, below, below, below." (I know it sounds confusing, but here's a chart.)

So that's what he asked for, and that's what he got. And now it's time to create some cryptographic applications for the rings. Cory and I are holding an open contest for the cleverest application.

I don't think we can invent any encryption algorithms that will survive computer analysis -- there's just not enough entropy in the system -- but we can come up with some clever pencil-and-paper ciphers that will serve them well if they're ever stuck back in time. And there are certainly other cryptographic uses for the rings.

Here's a way to use the rings as a password mnemonic: First, choose a two-letter key. Align the three wheels according to the key. For example, if the key is "EB" for eBay, align the three wheels AEB. Take the common password "PASSWORD" and encrypt it. For each letter, find it on the top wheel. Count one letter to the left if there is a dot over the letter, and one letter to the right if there is a dot under it. Take that new letter and look at the letter below it (in the middle wheel). Count two letters to the left if there is a dot over it, and two letters to the right if there is a dot under it. Take that new letter (in the middle wheel), and look at the letter below it (in the lower wheel). Count three letters to the left if there is a dot over it, and three letters to the right if there is a dot under it. That's your encrypted letter. Do that with every letter to get your password.

"PASSWORD" and the key "EB" becomes "NXPPVVOF."

It's not very good; can anyone see why? (Ignore for now whether or not publishing this on a blog makes it no longer secure.)
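Here's a rough Python sketch of the procedure as I read it. The dot layout, the wrap-around, and the convention that "left" means toward A are all my assumptions about the physical rings, so its output doesn't quite match the example above:

    A = ord("A")

    def dot(letter, period):
        """Dot beside a letter: thirds of the period are above (-1),
        none (0), below (+1)."""
        return (-1, 0, +1)[(letter % period) * 3 // period]

    def encrypt(plain, key):
        # Align the wheels: top wheel at A, middle and bottom per the key,
        # so key "EB" lines up A (top), E (middle), B (bottom).
        mid_off, bot_off = ord(key[0]) - A, ord(key[1]) - A
        out = []
        for ch in plain:
            pos = (ord(ch) - A + dot(ord(ch) - A, 3)) % 26   # top wheel: shift 1
            letter = (mid_off + pos) % 26                    # letter below, middle wheel
            letter = (letter + 2 * dot(letter, 6)) % 26      # middle wheel: shift 2
            pos = (letter - mid_off) % 26
            letter = (bot_off + pos) % 26                    # letter below, bottom wheel
            letter = (letter + 3 * dot(letter, 9)) % 26      # bottom wheel: shift 3
            out.append(chr(A + letter))
        return "".join(out)

    # With these conventions: 'NXPPVOVY' (the post's example says 'NXPPVVOF')
    print(encrypt("PASSWORD", "EB"))

However the conventions shake out, the sketch makes one structural weakness easy to spot: with the wheels fixed for the whole message, each plaintext letter always encrypts to the same ciphertext letter -- note the repeated S -> P -- so this is a monoalphabetic substitution.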

How can I do that better? What else can we do with the rings? Can we incorporate other elements -- a deck of playing cards as in Solitaire, different-sized coins to make the system more secure?

Post your contest entries as comments to Cory's blog post -- you can post them here, but they're not going to count as contest submissions -- or send them to cryptocontest@craphound.com. Deadline is October 1st.

Good luck, and have fun with this.

Posted on September 5, 2008 at 12:01 PM • 60 Comments

Privacy Policies: Perception vs. Reality

New paper: "What Californians Understand About Privacy Online," by Chris Jay Hoofnagle and Jennifer King. From the abstract:

A gulf exists between California consumers' understanding of online rules and common business practices. For instance, Californians who shop online believe that privacy policies prohibit third-party information sharing. A majority of Californians believes that privacy policies create the right to require a website to delete personal information upon request, a general right to sue for damages, a right to be informed of security breaches, a right to assistance if identity theft occurs, and a right to access and correct data.

These findings show that California consumers overvalue the mere fact that a website has a privacy policy, and assume that websites carrying the label have strong, default rules to protect personal data. In a way, consumers interpret "privacy policy" as a quality seal that denotes adherence to some set of standards. Website operators have little incentive to correct this misperception, thus limiting the ability of the market to produce outcomes consistent with consumers' expectations. Drawing upon earlier work, we conclude that because the term "privacy policy" has taken on a specific meaning in the minds of consumers, its use should be limited to contexts where businesses provide a set of protections that meet consumers' expectations.

Posted on September 4, 2008 at 1:15 PM • 18 Comments

Movie-Plot Threats in the Guardian

We spend far more effort defending our countries against specific movie-plot threats than against the real, broad threats. In the US during the months after the 9/11 attacks, we feared terrorists with scuba gear, terrorists with crop dusters and terrorists contaminating our milk supply. Both the UK and the US fear terrorists with small bottles of liquid. Our imaginations run wild with vivid specific threats. Before long, we're envisioning an entire movie plot, without Bruce Willis saving the day. And we're scared.

It's not just terrorism; it's any rare risk in the news. The big fear in Canada right now, following a particularly gruesome incident, is random decapitations on intercity buses. In the US, fears of school shootings are much greater than the actual risks. In the UK, it's child predators. And people all over the world mistakenly fear flying more than driving. But the very definition of news is something that hardly ever happens. If an incident is in the news, we shouldn't worry about it. It's when something is so common that it's no longer news -- car crashes, domestic violence -- that we should worry. But that's not the way people think.

Psychologically, this makes sense. We are a species of storytellers. We have good imaginations and we respond more emotionally to stories than to data. We also judge the probability of something by how easy it is to imagine, so stories that are in the news feel more probable -- and ominous -- than stories that are not. As a result, we overreact to the rare risks we hear stories about, and fear specific plots more than general threats.

The problem with building security around specific targets and tactics is that it's only effective if we happen to guess the plot correctly. If we spend billions defending the Underground and terrorists bomb a school instead, we've wasted our money. If we focus on the World Cup and terrorists attack Wimbledon, we've wasted our money.

It's this fetish-like focus on tactics that results in the security follies at airports. We ban guns and knives, and terrorists use box-cutters. We take away box-cutters and corkscrews, so they put explosives in their shoes. We screen shoes, so they use liquids. We take away liquids, and they're going to do something else. Or they'll ignore airplanes entirely and attack a school, church, theatre, stadium, shopping mall, airport terminal outside the security area, or any of the other places where people pack together tightly.

These are stupid games, so let's stop playing. Some high-profile targets deserve special attention and some tactics are worse than others. Airplanes are particularly important targets because they are national symbols and because a small bomb can kill everyone aboard. Seats of government are also symbolic, and therefore attractive, targets. But targets and tactics are interchangeable.

The following three things are true about terrorism. One, the number of potential terrorist targets is infinite. Two, the odds of the terrorists going after any one target are zero. And three, the cost to the terrorist of switching targets is zero.

We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn't require us to guess. We need to focus resources on intelligence and investigation: identifying terrorists, cutting off their funding and stopping them regardless of what their plans are. We need to focus resources on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy.

In 2006, UK police arrested the liquid bombers not through diligent airport security, but through intelligence and investigation. It didn't matter what the bombers' target was. It didn't matter what their tactic was. They would have been arrested regardless. That's smart security. Now we confiscate liquids at airports, just in case another group happens to attack the exact same target in exactly the same way. That's just illogical.

This essay originally appeared in The Guardian. Nothing I haven't already said elsewhere.

Posted on September 4, 2008 at 5:56 AM • 49 Comments

Sucking Data off of Cell Phones

Don't give someone your phone unless you trust them:

There is a new electronic capture device that has been developed primarily for law enforcement, surveillance, and intelligence operations that is also available to the public. It is called the Cellular Seizure Investigation Stick, or CSI Stick -- a clever acronym. It is manufactured by a company called Paraben, and is a self-contained module about the size of a BIC lighter. It plugs directly into most Motorola and Samsung cell phones to capture all data that they contain. More phones will be added to the list, including many from Nokia, RIM, LG and others, in the next generation, to be released shortly.

Another news article.

Posted on September 3, 2008 at 6:03 AM • 40 Comments

Software to Facilitate Retail Tax Fraud

Interesting:

Thanks to a software program called a zapper, even technologically illiterate restaurant and store owners can siphon cash from computer cash registers and cheat tax officials.

[...]

Zappers alter the electronic sales records in a cash register. To satisfy tax collectors, the tally of food orders, for example, must match the register's final cash total. To hide the removal of cash from the till, a crooked business owner has to erase the record of food orders equal to the amount of cash taken; otherwise, the imbalance is obvious to any auditor.

[...]

The more sophisticated zappers are easy to use, according to several experts. A dialogue box, which shows the day's tally, pops up on the register's screen.

In a second dialogue box, the thief chooses to take a dollar amount or percentage of the till. The program then calculates which orders to erase to get close to the amount of cash the person wants to remove. Then it suggests how much cash to take, and it erases the entries from the books and a corresponding amount in orders, so the register balances.
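The order-selection step is essentially a subset-sum computation: pick recorded orders whose total comes close to the cash being skimmed. A toy greedy version (my sketch of the general idea, not any actual zapper):

    # Toy subset-sum selection: choose orders whose sum approaches the
    # target amount, then report how much cash can safely be removed.
    def pick_orders_to_erase(order_amounts, target):
        chosen, total = [], 0.0
        for amt in sorted(order_amounts, reverse=True):
            if total + amt <= target:
                chosen.append(amt)
                total += amt
        return chosen, total

    orders = [12.50, 8.75, 23.40, 5.25, 31.00, 17.80, 9.95]
    erase, skimmed = pick_orders_to_erase(orders, target=50.00)
    print(erase, skimmed)   # [31.0, 17.8] totalling 48.80 -- close to $50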

Posted on September 2, 2008 at 12:24 PM • 34 Comments

Security ROI

Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

It's become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

It's a good idea in theory, but it's mostly bunk in practice.

Before I get into the details, there's one point I have to make. "ROI" as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.

But as anyone who has lived through a company's vicious end-of-year budget-slashing exercises knows, when you're trying to make your numbers, cutting costs is the same as increasing revenues. So while security can't produce ROI, loss prevention most certainly affects a company's bottom line.

And a company should implement only security countermeasures that affect its bottom line positively. It shouldn't spend more on a security problem than the problem is worth. Conversely, it shouldn't ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.

The classic methodology is called annualized loss expectancy (ALE), and it's straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10 percent chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you're wasting money. Spend less than that, and you're also wasting money.

Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40 percent -- to 6 percent a year -- then you should spend no more than $400 on it. If another security measure reduces it by 80 percent, it's worth $800. And if two security measures both reduce the chance of being robbed by 50 percent and one costs $300 and the other $700, the first one is worth it and the second isn't.
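The arithmetic in those two paragraphs is the entire method. Here it is as code, a sketch using the store-robbery numbers from the text:

    # ALE in code, using the store-robbery numbers above.
    def ale(incident_cost, annual_probability):
        """Annualized loss expectancy: what the risk costs you per year."""
        return incident_cost * annual_probability

    print(ale(10_000, 0.10))   # $1,000/year: the spending cap

    def countermeasure_value(incident_cost, p_before, p_after):
        """Maximum worthwhile spend = reduction in expected annual loss."""
        return ale(incident_cost, p_before) - ale(incident_cost, p_after)

    print(countermeasure_value(10_000, 0.10, 0.06))   # $400: the 40%-reduction case
    print(countermeasure_value(10_000, 0.10, 0.02))   # $800: the 80%-reduction case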

The Data Imperative

The key to making this work is good data; the term of art is "actuarial tail." If you're doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store's neighborhood and maybe have some idea of how much cameras improve the odds of convincing criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale; maybe you're having trouble hiring salespeople to work the night shift. With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night -- assuming that the closed store won't get robbed as well. And then you can decide whether to install one.

Cybersecurity is considerably harder, because there just isn't enough good data. There aren't good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures -- or specific configurations of countermeasures -- mitigate those risks. We don't even have data on incident costs.

One problem is that the threat moves too quickly. The characteristics of the things we're trying to prevent change so quickly that we can't accumulate data fast enough. By the time we get some data, there's a new threat model for which we don't have enough data. So we can't create ALE models.

But there's another problem, and it's that the math quickly falls apart when it comes to rare and expensive events. Imagine you calculate the cost -- reputational costs, loss of customers, etc. -- of having your company's name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10,000 of that happening in any one year. ALE says you should spend no more than $2,000 mitigating that risk.

So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You can't argue, since we're just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1,000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.

It gets worse when you deal with even more rare and expensive events. Imagine you're in charge of terrorism mitigation at a chlorine plant. What's the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand, 1 in a million, 1 in 10 million? Depending on how you answer those two questions -- and any answer is really just a guess -- you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.

Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by -- and I'm making this up -- 30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means that the extra waiting time at airports has cost us a collective 43,000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has "killed" 620 people per year -- 930 if you calculate the numbers based on 16 hours of awake time per day. So the question is: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
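Those waiting-time numbers are easy to verify:

    # The waiting-time arithmetic from the paragraph above, spelled out.
    boardings = 760e6                   # US passenger boardings, 2007
    extra_hours = boardings * 0.5       # 30 minutes per passenger

    years_waiting = extra_hours / (24 * 365)
    print(f"{years_waiting:,.0f} years of collective waiting")        # ~43,000

    print(f"{years_waiting / 70:,.0f} lives at 70 years each")        # ~620
    awake_hours_per_life = 70 * 365 * 16
    print(f"{extra_hours / awake_hours_per_life:,.0f} lives, awake time only")  # ~930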

Caveat Emptor

This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They've jiggered the numbers so that it does.

This doesn't mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only. So when you get an ROI model from your vendor, take its framework and plug in your own numbers. Don't even show the vendor your improvements; it won't consider any changes that make its product or service less cost-effective to be an "improvement." And use those results as a general guide, along with risk management and compliance analyses, when you're deciding what security products and services to buy.

This essay previously appeared in CSO Magazine.

Posted on September 2, 2008 at 6:05 AM • 48 Comments

My LA Times Op Ed on Photo ID Checks at Airport

Opinion

The TSA's useless photo ID rules

No-fly lists and photo IDs are supposed to help protect the flying public from terrorists. Except that they don't work.

By Bruce Schneier

August 28, 2008

The TSA is tightening its photo ID rules at airport security. Previously, people with expired IDs or who claimed to have lost their IDs were subjected to secondary screening. Then the Transportation Security Administration realized that meant someone on the government's no-fly list -- the list that is supposed to keep our planes safe from terrorists -- could just fly with no ID.

Now, people without ID must also answer personal questions from their credit history to ascertain their identity. The TSA will keep records of who those ID-less people are, too, in case they're trying to probe the system.

This may seem like an improvement, except that the photo ID requirement is a joke. Anyone on the no-fly list can easily fly whenever he wants. Even worse, the whole concept of matching passenger names against a list of bad guys has negligible security value.

How to fly, even if you are on the no-fly list: Buy a ticket in some innocent person's name. At home, before your flight, check in online and print out your boarding pass. Then, save that web page as a PDF and use Adobe Acrobat to change the name on the boarding pass to your own. Print it again. At the airport, use the fake boarding pass and your valid ID to get through security. At the gate, use the real boarding pass in the fake name to board your flight.

The problem is that it is unverified passenger names that get checked against the no-fly list. At security checkpoints, the TSA just matches IDs to whatever is printed on the boarding passes. The airline checks boarding passes against tickets when people board the plane. But because no one checks ticketed names against IDs, the security breaks down.

This vulnerability isn't new. It isn't even subtle. I wrote about it in 2003, and again in 2006. I asked Kip Hawley, who runs the TSA, about it in 2007. Today, any terrorist smart enough to Google "print your own boarding pass" can bypass the no-fly list.

This gaping security hole would bother me more if the very idea of a no-fly list weren't so ineffective. The system is based on the faulty notion that the feds have this master list of terrorists, and all we have to do is keep the people on the list off the planes.

That's just not true. The no-fly list -- a list of people so dangerous they are not allowed to fly yet so innocent we can't arrest them -- and the less dangerous "watch list" contain a combined 1 million names representing the identities and aliases of an estimated 400,000 people. There aren't that many terrorists out there; if there were, we would be feeling their effects.

Almost all of the people stopped by the no-fly list are false positives. It catches innocents such as Ted Kennedy, whose name is similar to someone's on the list, and Yusuf Islam (formerly Cat Stevens), who was on the list but no one knew why.

The no-fly list is a Kafkaesque nightmare for the thousands of innocent Americans who are harassed and detained every time they fly. Put on the list by unidentified government officials, they can't get off. They can't challenge the TSA about their status or prove their innocence. (The U.S. 9th Circuit Court of Appeals decided this month that no-fly passengers can sue the FBI, but that strategy hasn't been tried yet.)

But even if these lists were complete and accurate, they wouldn't work. Timothy McVeigh, the Unabomber, the D.C. snipers, the London subway bombers and most of the 9/11 terrorists weren't on any list before they committed their terrorist acts. And if a terrorist wants to know if he's on a list, the TSA has approved a convenient, $100 service that allows him to figure it out: the Clear program, which issues IDs to "trusted travelers" to speed them through security lines. Just apply for a Clear card; if you get one, you're not on the list.

In the end, the photo ID requirement is based on the myth that we can somehow correlate identity with intent. We can't. And instead of wasting money trying, we would be far safer as a nation if we invested in intelligence, investigation and emergency response -- security measures that aren't based on a guess about a terrorist target or tactic.

That's the TSA: Not doing the right things. Not even doing right the things it does.

Posted on September 1, 2008 at 5:15 AM • 60 Comments
