Blog: July 2006 Archives

ScatterChat

ScatterChat is a secure instant messaging client. From the press release:

ScatterChat is unique in that it is intended for non-technical human rights activists and political dissidents operating behind oppressive national firewalls. It is an instant messaging client that provides end-to-end encryption over the Electronic Frontier Foundation-endorsed Tor network. Its security features include resiliency against partial compromise through perfect forward secrecy, immunity from replay attacks, and limited resistance to traffic analysis, all reinforced through a pro-actively secure design.

A nice application of Tor.

EDITED TO ADD (8/8): There are flaws in the protocol. There’s an advisory on one of them.

Posted on July 31, 2006 at 1:48 PM • 25 Comments

iPod Thefts

What happens if you distribute 50 million small, valuable, and easily sellable objects into the hands of men, women, and children all over the world, and tell them to walk around the streets with them? Why, people steal them, of course.

“Rise in crime blamed on iPods”, yells the front page of London’s Metro. “Muggers targeting iPod users”, says ITV. This is the reaction to the government’s revelation that robberies across the UK have risen by 8 per cent in the last year, from 90,747 to 98,204. The Home Secretary, John Reid, attributes this to the irresistible lure of “young people carrying expensive goods, such as mobile phones and MP3 players”. A separate British Crime Survey, however, suggests robbery has risen by 22 per cent, to 311,000.

This shouldn’t come as a surprise, just as it wasn’t a surprise in the 1990s when there was a wave of high-priced sneaker thefts. Or that there is also a wave of laptop thefts.

What to do about it? Basically, there’s not much you can do except be careful. Muggings have long been a low-risk crime, so it makes sense that we’re seeing an increase in them as the value of what people are carrying on their person goes up. And people carrying portable music players have an unmistakable indicator: those ubiquitous ear buds.

The economics of this crime are such that it will continue until one of three things happens. One, portable music players become much less valuable. Two, the costs of the crime become much higher. Three, society deals with its underclass and gives them a better career option than iPod thief.

And on a related topic, here’s a great essay by Cory Doctorow on how Apple’s iTunes copy protection screws the music industry.

EDITED TO ADD (8/5): Eric Rescorla comments.

Posted on July 31, 2006 at 7:05 AM • 58 Comments

Friday Squid Blogging: Unusually Large Numbers of Squid in the Bering Sea

No explanation given, but it’s annoying fishermen:

The problem took on alarming proportions in early July when fishermen netted more than 500 tons of squid bycatch in one week, Josh Keaton, a resource management specialist with the National Oceanic and Atmospheric Administration, said Friday.

The amount of squid was about four times what might be expected.

“We confirmed that the numbers were real and they really did catch that amount of squid. We then tried to find out where the squid were caught,” Keaton said.

While high rates of squid bycatch had occurred before, this time it set off alarm bells because the squid were caught near the start of the mid-June-through-September pollock season.

“I just about had a heart attack. That is a lot of squid,” said Karl Haflinger, president of Sea State Inc. of Seattle, which helps the industry manage bycatch, the unwanted and often wasted fish caught along with the targeted fish.

Posted on July 28, 2006 at 3:37 PM • 14 Comments

Remote-Control Airplane Software

Does anyone other than me see a problem with this?

Some 30 European businesses and research institutes are working to create software that would make it possible from a distance to regain control of an aircraft from hijackers, according to the German news magazine.

The system “which could only be controlled from the ground would conduct the aircraft posing a problem to the nearest airport whether it liked it or not,” according to extracts from next Monday’s Der Spiegel released Saturday.

“A hijacker would have no chance of reaching his goal,” it said.

Unless his goal were, um, hijacking the aircraft.

It seems to me that by designing remote-control software for airplanes, you open the possibility for someone to hijack the plane without even being on board. Sure, there are going to be computer-security controls protecting this thing, but we all know how well that sort of thing has worked in the past.

The system would be designed in such a way that even a computer hacker on board could not get round it.

But what about computer hackers on the ground?

I’m not saying this is a bad idea; it might be a good idea. But this security countermeasure opens up an entirely new vulnerability, and I hope that someone is studying that new vulnerability.

Posted on July 28, 2006 at 2:09 PM • 106 Comments

Broadening CALEA

In 1994, Congress passed the Communications Assistance for Law Enforcement Act (CALEA). Basically, this is the law that forces the phone companies to make your telephone calls—including cell phone calls—available for government wiretapping.

But now the government wants access to VoIP calls, and SMS messages, and everything else. They’re doing their best to interpret CALEA as broadly as possible, but they’re also pursuing a legal angle. Ars Technica has the story:

The government hopes to shore up the legal basis for the program by passing amended legislation. The EFF took a look at the amendments and didn’t like what it found.

According to the Administration, the proposal would “confirm [CALEA’s] coverage of push-to-talk, short message service, voice mail service and other communications services offered on a commercial basis to the public,” along with “confirm[ing] CALEA’s application to providers of broadband Internet access, and certain types of ‘Voice-Over-Internet-Protocol’ (VOIP).” Many of CALEA’s express exceptions and limitations are also removed. Most importantly, while CALEA’s applicability currently depends on whether broadband and VOIP can be considered “substantial replacements” for existing telephone services, the new proposal would remove this limit.

Posted on July 28, 2006 at 11:09 AM • 18 Comments

Memoirs of an Airport Security Screener

This person worked as an airport security screener years before 9/11, before the TSA, so hopefully things are different now. It’s a pretty fascinating read, though.

Two things pop out at me. One, as I wrote, it’s a mind-numbingly boring task. And two, the screeners were trained not to find weapons, but to find the particular example weapons that the FAA would test them on.

“How do you know it’s a gun?” he asked me.

“It looks like one,” I said, and was immediately pounded on the back.

“Goddamn right it does. You get over here,” yelled Mike to Will.

“How do you know it’s a gun?”

“I look for the outline of the cartridge and the…” Will started.

“What?”

“The barrel you can see right here,” Will continued, oblivious to his pending doom.

“What the hell are you talking about? That’s not how you find this gun.”

“No sir. It’s how you find any gun, sir,” said Will. I knew right then that this was a disaster.

“Any gun? Any gun? I don’t give a fuck about any gun, dipshit. I care about this gun. The FAA will not test you with another gun. The FAA will never put any gun but this one in the machine. I don’t care if you are a fucking gun nut who can tell the caliber by sniffing the barrel, you look for this gun. THIS ONE.” Mike strode to the test bag and dumped it out at the feet of the metal detector, sending the machine into a frenzy.

“THIS bomb. This knife. I don’t care if you miss a goddamn bazooka and some son of a bitch cuts your throat with a knife you let through as long as you find THIS GUN.”

“But we’re supposed to find,” Will insisted.

“You find what I trained you to find. The other shit doesn’t get taken out of my paycheck when you miss it,” said Mike.

Not exactly the result we’re looking for, but one that makes sense given the economic incentives that were at work.

I sure hope things are different today.

Posted on July 28, 2006 at 6:22 AM • 65 Comments

Good Example of Smart Profiling

In Beyond Fear, I wrote about profiling (reprinted here). I talked a lot about how smart behavioral-based profiling is much more effective than dumb characteristic-based profiling, and how well-trained people are much better than computers.

The story I used was about how U.S. customs agent Diana Dean caught Ahmed Ressam in 1999. Here’s another story:

An England football shirt gave away a Senegalese man attempting to enter Cyprus on a forged French passport, police on the Mediterranean island said on Monday.

Suspicions were aroused when the man appeared at a checkpoint supervising crossings from the Turkish Cypriot north to the Greek Cypriot south of the divided island, wearing the England shirt and presenting a French passport.

“Being a football fan, the officer found it highly unlikely that a Frenchman would want to wear an England football jersey,” a police source said.

“That was his first suspicion prior to the proper check on the passport, which turned out to be a fake,” said the source.

That’s just not the kind of thing you’re going to get a computer to pick up on, at least not until artificial intelligence actually produces a working brain.

Posted on July 27, 2006 at 12:46 PM • 43 Comments

Bot Networks

What could you do if you controlled a network of thousands of computers—or, at least, could use the spare processor cycles on those machines? You could perform massively parallel computations: model nuclear explosions or global weather patterns, factor large numbers or find Mersenne primes, or attack cryptographic problems.

All of these are legitimate applications. And you can visit distributed.net and download software that allows you to donate your spare computer cycles to some of these projects. (You can help search for Optimal Golomb Rulers—even if you have no idea what they are.) You’ve got a lot of cycles to spare. There’s no reason that your computer can’t help search for extraterrestrial life as it, for example, sits idly waiting for you to read this essay.
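The work-splitting behind projects like distributed.net can be sketched in a few lines. This is an illustrative toy, not distributed.net's actual protocol: a coordinator divides a large search space into independent work units, each volunteer machine exhaustively tests its own chunk, and the coordinator merges the partial results.

```python
# Illustrative sketch (not distributed.net's actual protocol): a project
# divides a search space into independent work units, hands each volunteer
# machine a disjoint chunk, and merges the results afterward.

def make_work_units(start, stop, chunk_size):
    """Split the search space [start, stop) into independent chunks."""
    return [(lo, min(lo + chunk_size, stop))
            for lo in range(start, stop, chunk_size)]

def volunteer_worker(unit, predicate):
    """Run on one machine: test every candidate in its assigned chunk."""
    lo, hi = unit
    return [n for n in range(lo, hi) if predicate(n)]

# Toy "search": find multiples of 97 in a range, spread over four pretend
# volunteers. Real projects search keyspaces or Golomb-ruler candidates.
units = make_work_units(0, 1000, 250)
results = []
for unit in units:  # in reality, each unit runs on a different host
    results.extend(volunteer_worker(unit, lambda n: n % 97 == 0))

print(sorted(results))  # the coordinator merges partial results
```

Because the chunks are disjoint and independent, adding more volunteers speeds the search up almost linearly—the economy of scale that makes both distributed.net and botnets work.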

The reason these things work is that they are consensual; none of these projects download software onto your computer without your knowledge. None of these projects control your computer without your consent. But there are lots of software programs that do just that.

The term used for a computer remotely controlled by someone else is a “bot”. A group of computers—thousands or even millions—controlled by someone else is a bot network. Estimates are that millions of computers on the internet today are part of bot networks, and the largest bot networks have over 1.5 million machines.

Initially, bot networks were used for just one thing: denial-of-service attacks. Hackers would use them against each other, fighting hacker feuds in cyberspace by attacking each other’s computers. The first widely publicized use of a distributed intruder tool—technically not a botnet, but practically the same thing—was in February 2000, when Canadian hacker Mafiaboy directed an army of compromised computers to flood CNN.com, Amazon.com, eBay, Dell Computer and other sites with debilitating volumes of traffic. Every newspaper carried that story.

These days, bot networks are more likely to be controlled by criminals than by hackers. The important difference is the motive: profit. Networks are being used to send phishing e-mails and other spam. They’re being used for click fraud. They’re being used as an extortion tool: Pay up or we’ll DDoS you!

Mostly, they’re being used to collect personal data for fraud—commonly called “identity theft.” Modern bot software doesn’t just attack other computers; it attacks its hosts as well. The malware is packed with keystroke loggers to steal passwords and account numbers. In fact, many bots automatically hunt for financial information, and some botnets have been built solely for this purpose—to gather credit card numbers, online banking passwords, PayPal accounts, and so on, from compromised hosts.

Swindlers are also using bot networks for click fraud. Google’s anti-fraud systems are sophisticated enough to detect thousands of clicks by one computer; it’s much harder to determine if a single click by each of thousands of computers is fraud, or just popularity.
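That asymmetry is easy to make concrete. The sketch below is a hypothetical per-source counter with a made-up threshold, not Google's actual anti-fraud logic: one machine clicking thousands of times stands out immediately, while the same number of clicks spread one-per-bot across a botnet looks like ordinary popularity.

```python
from collections import Counter

# Hypothetical detector (not Google's actual anti-fraud system): flag any
# source address whose click count on a single ad exceeds a threshold.
def suspicious_sources(clicks, threshold=50):
    """clicks: list of (source_ip, ad_id) pairs. Returns IPs over threshold."""
    counts = Counter(clicks)
    return {ip for (ip, ad), n in counts.items() if n > threshold}

# One machine clicking 5,000 times is trivially caught...
centralized = [("10.0.0.1", "ad-42")] * 5000
print(suspicious_sources(centralized))   # {'10.0.0.1'}

# ...but 5,000 bots clicking once each slip under any per-source threshold.
distributed = [(f"10.0.{i // 256}.{i % 256}", "ad-42") for i in range(5000)]
print(suspicious_sources(distributed))   # set()
```

The defender is forced toward weaker statistical signals—geographic spread, conversion rates, timing—precisely because per-source counting fails against a distributed attacker.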

And, of course, most bots constantly search for other computers that can be infected and added to the bot network. (A 1.5 million-node bot network was discovered in the Netherlands last year. The command-and-control system was dismantled, but some of the bots are still active, infecting other computers and adding them to this defunct network.)

Modern bot networks are remotely upgradeable, so the operators can add new functionality to the bots at any time, or switch from one bot program to another. Bot authors regularly upgrade their botnets during development, or to evade detection by anti-virus and malware cleanup tools.

One application of bot networks that we haven’t seen all that much of is to launch a fast-spreading worm. (Some believe the Witty worm spread this way.) Much has been written about “flash worms” that can saturate the internet in 15 minutes or less. The situation gets even worse if 10 thousand bots synchronize their watches and release the worm at exactly the same time. Why haven’t we seen more of this? My guess is because there isn’t any profit in it.
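The advantage of synchronized seeding is just arithmetic. Under a deliberately crude model—assume each infected host compromises one new host per time step, so the infected population doubles each step—starting from 10,000 bots instead of one machine cuts the number of doublings needed to saturate a vulnerable population dramatically:

```python
import math

# Toy epidemic model (illustrative only): the infected population doubles
# each time step, so saturation takes log2(hosts / seeds) steps.
def steps_to_saturate(initial_bots, vulnerable_hosts):
    """Doubling steps needed before every vulnerable host is infected."""
    if initial_bots >= vulnerable_hosts:
        return 0
    return math.ceil(math.log2(vulnerable_hosts / initial_bots))

# A worm seeded from a single machine vs. 10,000 synchronized bots,
# against a hypothetical population of one million vulnerable hosts:
print(steps_to_saturate(1, 1_000_000))       # 20 doublings
print(steps_to_saturate(10_000, 1_000_000))  # 7 doublings
```

Real worm propagation is messier than clean doubling, but the logarithmic dependence on the number of seeds is why a botnet-launched worm could saturate its targets far faster than a conventionally released one.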

There’s no real solution to the botnet problem, because there’s no single problem. There are many different bot networks, controlled in many different ways, consisting of computers infected through many different vulnerabilities. Really, a bot network is nothing more than an attacker taking advantage of 1) one or more software vulnerabilities, and 2) the economies of scale that computer networks bring. It’s the same thing as distributed.net or SETI@home, only the attacker doesn’t ask your permission first.

As long as networked computers have vulnerabilities—and that’ll be for the foreseeable future—there’ll be bot networks. It’s a natural side-effect of a computer network with bugs.

This essay originally appeared on Wired.com.

EDITED TO ADD (7/27): DDoS extortion is a bigger problem than you might think. Right now it’s primarily targeted against fringe industries—online gaming, online gambling, online porn—located offshore, but we’re seeing more and more of it against mainstream companies in the U.S. and Europe.

EDITED TO ADD (7/27): Seems that Witty was definitely not seeded from a bot network.

Posted on July 27, 2006 at 6:35 AM • 46 Comments

Sloppy CIA Tradecraft

CIA agents exposed due to their use of frequent-flier miles and other mistakes:

The man and woman were pretending to be American business executives on international assignments, so they did what globe-trotting executives do. While traveling abroad they used their frequent-flier cards as often as possible to gain credits toward free flights.

In fact, the pair were covert operatives working for the CIA. Thanks to their diligent use of frequent-flier programs, Italian prosecutors have been able to reconstruct much of their itinerary during 2003, including trips to Brussels, Venice, London, Vienna and Oslo.

[…]

Aides to former CIA Director Porter Goss have used the word “horrified” to describe Goss’ reaction to the sloppiness of the Milan operation, which Italian police were able to reconstruct through the CIA operatives’ imprudent use of cell phones and other violations of basic CIA “tradecraft.”

I’m not sure how collecting frequent-flier miles is a problem, though. Assuming they’re traveling under the cover of being business executives, it makes sense for them to act just like other business executives.

It’s not like there’s no other way to reconstruct their travel.

Posted on July 26, 2006 at 1:22 PM • 35 Comments

Press Security Concerns in Lebanon

Problems of reporting from a war zone:

Among broadcasters there is a concern about how our small convoys of cars full of equipment and personnel look from the air. There is a risk Israelis (eyes in the sky: drones, satellites) could mistake them for a Hezbollah convoy headed closer to the border and within striking distance of Israel. So simply being on the road with several vehicles is a risk.

Plus, when we fire up our broadcast signals it is unclear what we look like to Israeli military monitoring stations. If there are a number of broadcasters firing up signals from the same remote place, the hope is that the Israelis would identify it as media signals, and not Hezbollah rocket electronics, and thus avoid being a target.

Posted on July 26, 2006 at 5:56 AM

Sky Marshals Name Innocents to Meet Quota

One news source is reporting that sky marshals are reporting on innocent people in order to meet a quota:

The air marshals, whose identities are being concealed, told 7NEWS that they’re required to submit at least one report a month. If they don’t, there’s no raise, no bonus, no awards and no special assignments.

“Innocent passengers are being entered into an international intelligence database as suspicious persons, acting in a suspicious manner on an aircraft … and they did nothing wrong,” said one federal air marshal.

[…]

These unknowing passengers who are doing nothing wrong are landing in a secret government document called a Surveillance Detection Report, or SDR. Air marshals told 7NEWS that managers in Las Vegas created and continue to maintain this potentially dangerous quota system.

“Do these reports have real life impacts on the people who are identified as potential terrorists?” 7NEWS Investigator Tony Kovaleski asked.

“Absolutely,” a federal air marshal replied.

[…]

What kind of impact would it have for a flying individual to be named in an SDR?

“That could have serious impact … They could be placed on a watch list. They could wind up on databases that identify them as potential terrorists or a threat to an aircraft. It could be very serious,” said Don Strange, a former agent in charge of air marshals in Atlanta. He lost his job attempting to change policies inside the agency.

This is so insane, it can’t possibly be true. But I have been stunned before by the stupidity of the Department of Homeland Security.

EDITED TO ADD (7/27): This is what Brock Meeks said on David Farber’s IP mailing list:

Well, it so happens that I was the one that BROKE this story… way back in 2004. There were at least two offices, Miami and Las Vegas, that had this quota system for writing up and filing “SDRs.”

The requirement was totally renegade and NOT endorsed by Air Marshal officials in Washington. The Las Vegas Air Marshal field office was run (I think he’s retired now) by a real cowboy at the time, someone that caused a lot of problems for the Washington HQ staff. (That official once grilled an Air Marshal for three hours in an interrogation room because he thought the air marshal was a source of mine on another story. The air marshal was then taken off flight status and made to wash the office cars for two weeks… I broke that story, too. And no, the punished air marshal was never a source of mine.)

Air marshals told me they were filing false reports, as they did below, just to hit the quota.

When my story hit, those in the offices of Las Vegas and Miami were reprimanded and the practice was ordered stopped by Washington HQ.

I suppose the biggest question I have for this story is the HYPE of what happens to these reports. They do NOT place the person mentioned on a “watch list.” These reports, filed on Palm Pilot PDAs, go into an internal Air Marshal database that is rarely seen and pretty much ignored by other intelligence agencies, from all sources I talked to.

Why? Because the air marshals are seen as little more than “sky cops” and these SDRs considered little more than “field interviews” that cops sometimes file when they question someone loitering at a 7-11 too late at night.

The quota system, if it is still going on, is heinous, but it hardly results in the big spooky data collection scare that this cheapjack Denver “investigative” TV reporter makes it out to be.

The quoted former field official from Atlanta, Don Strange, did, in fact, lose his job over trying to change internal policies. He was the most well-liked official among the rank and file, and the Atlanta office, under his command, had the highest morale in the nation.

Posted on July 25, 2006 at 9:55 AM • 73 Comments

Hacked MySpace Server Infects a Million Computers with Malware

According to The Washington Post:

An online banner advertisement that ran on MySpace.com and other sites over the past week used a Windows security flaw to infect more than a million users with spyware when people merely browsed the sites with unpatched versions of Windows….

Clever attack.

EDITED TO ADD (7/27): It wasn’t MySpace that was hacked, but a server belonging to the third-party advertising service that MySpace uses. The ad probably appeared on other websites as well, but MySpace seems to have been the biggest one.

EDITED TO ADD (8/5): Ed Felten comments.

Posted on July 24, 2006 at 6:46 AM • 62 Comments

Friday Squid Blogging: Humboldt Squid

Great article on the Humboldt squid from Outside Magazine:

I worry about these things because Cassell, 44, a world-class diver, underwater cameraman, and Special Operations vet from Escondido, California, is out to convince me—live and up close—that the undersea world’s most intriguing predator is not one of the usual suspects (like the great white shark or killer whale) but a powerful, outsize squid that features eight snakelike arms lined with suckers full of nasty little teeth, a razor-sharp beak that can rapidly rip flesh into bite-size chunks, and an unrelenting hunger. It’s called the Humboldt, or jumbo, squid, and it’s not the sort of calamari you’re used to forking off your dinner plate. This squid grows to seven feet or more and perhaps a couple hundred pounds. It has a rep as the outlaw biker of the marine world: intelligent and opportunistic, a stone-cold cannibal willing to attack divers with a seemingly deliberate hostility.

What about the giant squid, you may ask? “Wimpy,” says Cassell. The giant—which grows to 60-plus feet and is one of only four squid, out of the 400 or so species found in the oceans, that are human-size or bigger—is generally considered to be fairly placid. In any case, it’s so elusive, no modern squid hunter has ever even seen one alive. No, if you want a scary squid, you want a Humboldt. And they’re easy to find, teeming by the millions in Pacific waters from Chile to British Columbia. (It’s named after the Humboldt Current, off South America’s west coast.)

Cassell first heard about the “diablos rojos,” or red devils, in 1995, from some Mexican fishermen as he was filming gray whales for German public television in Baja’s Laguna San Ignacio. Intrigued, he made his way to La Paz, near the southern tip of Baja, to dive under the squid-fishing fleet. It was baptism by tentacle. Humboldts—mostly five-footers—swarmed around him. As Cassell tells it, one attacked his camera, which smashed into his face, while another wrapped itself around his head and yanked hard on his right arm, dislocating his shoulder. A third bit into his chest, and as he tried to protect himself he was gang-dragged so quickly from 30 to 70 feet that he didn’t have time to equalize properly, and his right eardrum ruptured. “I was in the water five minutes and I already had my first injury,” Cassell recalls, shaking his head. “It was like being in a barroom brawl.” Somehow he managed to push the squid-pile off and make his way to the surface, battered and exhilarated. “I was in love with the animal,” he says.

Posted on July 21, 2006 at 3:23 PM • 17 Comments

Firefox 2.0 to Contain Anti-Phishing Features

This is a good idea.

The built-in anti-phishing capability warns users when they come across Web forgeries, and offers to return the user to his or her home page. Meanwhile, microsummaries are regularly updated summaries of Web pages, small enough to fit in the space available to a bookmark label, but large enough to provide more useful information about pages than static page titles, and are regularly updated as new information becomes available.
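At its simplest, this kind of forgery warning is a lookup against a list of known phishing sites. The sketch below is hypothetical (the domain names are made up and Firefox's real implementation consults a regularly updated blacklist service, with far more normalization); it only shows the shape of the check:

```python
from urllib.parse import urlparse

# Made-up forgery domains for illustration; a real browser fetches and
# refreshes this list from a blacklist provider.
KNOWN_FORGERIES = {"paypa1-secure.example", "bank-login.example"}

def warn_on_forgery(url, blacklist=KNOWN_FORGERIES):
    """Return True if the URL's hostname matches a known phishing site."""
    host = urlparse(url).hostname or ""
    return host.lower() in blacklist

print(warn_on_forgery("http://paypa1-secure.example/login"))  # True
print(warn_on_forgery("https://www.example.org/"))            # False
```

The obvious limitation is the same as any blacklist: a freshly registered phishing domain is invisible until the list catches up, which is why frequent updates matter more than the lookup itself.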

Posted on July 21, 2006 at 12:55 PM • 20 Comments

Voice Authentication in Telephone Banking

This seems like a good idea, assuming it is reliable.

The introduction of voice verification was preceded by an extensive period of testing among more than 1,450 people and 25,000 test calls. These were made using both fixed-line and mobile telephones, at all times of day and also by relatives (including six twins). Special attention was devoted to people who were suffering from colds during the test period. ABN AMRO is the first major bank in the world to introduce this technology in this way.

Posted on July 21, 2006 at 7:43 AM • 42 Comments

Security Certifications

I’ve long been hostile to certifications—I’ve met too many bad security professionals with certifications and know many excellent security professionals without certifications. But I’ve come to believe that, while certifications aren’t perfect, they’re a decent way for a security professional to learn some of the things he’s going to need to know, and for a potential employer to assess whether a job candidate has the security expertise he’s going to need.

What’s changed? Both the job requirements and the certification programs.

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

That kind of expertise can’t be found in a certification. It’s a combination of an innate feel for security, extensive knowledge of the academic security literature, extensive experience in existing security systems, and practice. When I’ve hired people to design and evaluate security systems, I’ve paid no attention to certifications. They are meaningless; I need a different set of skills and abilities.

But most organizations don’t need to hire that kind of person. Network security has become standardized; organizations need a practitioner, not a researcher. This is good because there is so much demand for these practitioners that there aren’t enough researchers to go around. Certification programs are good at churning out practitioners.

And over the years, certification programs have gotten better. They really do teach knowledge that security practitioners need. I might not want a graduate designing a security protocol or evaluating a cryptosystem, but certs are fine for any of the handful of network security jobs a large organization needs.

At my company, we encourage our security analysts to take certification courses. We find that it’s the most cost-effective way to give them the skills they need to do ever-more-complex jobs.

Of course, none of this is perfect. I still meet bad security practitioners with certifications, and I still know excellent security professionals without any.

In the end, certifications are like profiling. They work, but they’re sloppy. Just because someone has a particular certification doesn’t mean that he has the security expertise you’re looking for (in other words, there are false positives). And just because someone doesn’t have a security certification doesn’t mean that he doesn’t have the required security expertise (false negatives). But we use them for the same reason we profile: We don’t have the time, patience, or ability to test for what we’re looking for explicitly.
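Those false positives and false negatives can be made concrete with arithmetic. The numbers below are made up purely for illustration (the essay gives none): even a filter that is usually right lets through a meaningful fraction of unqualified candidates and rejects some qualified ones, and how much that matters depends on how common real expertise is in the applicant pool.

```python
# Made-up rates for illustration only; the essay cites no figures.
def screening_outcomes(pool, base_rate, false_pos, false_neg):
    """How a certification filter sorts a hypothetical candidate pool."""
    experts = pool * base_rate
    non_experts = pool - experts
    certified_experts = experts * (1 - false_neg)      # true positives
    certified_non_experts = non_experts * false_pos    # false positives
    missed_experts = experts * false_neg               # false negatives
    precision = certified_experts / (certified_experts + certified_non_experts)
    return certified_experts, certified_non_experts, missed_experts, precision

# 1,000 applicants, 30% genuinely skilled, 20% FP rate, 10% FN rate:
tp, fp, fn, precision = screening_outcomes(1000, 0.30, 0.20, 0.10)
print(round(tp), round(fp), round(fn))  # 270 140 30
print(round(precision, 2))              # 0.66
```

With these assumed rates, roughly a third of certified candidates still lack the expertise—sloppy, as the essay says, but a far better starting point than no filter at all.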

Profiling based on security certifications is the easiest way for an organization to make a good hiring decision, and the easiest way for an organization to train its existing employees. And honestly, that’s usually good enough.

This essay originally appeared as a point-counterpoint with Marcus Ranum in the July 2006 issue of Information Security Magazine. (You have to fill out an annoying survey to read Marcus’s counterpoint, but 1) you can lie, and 2) it’s worth it.)

EDITED TO ADD (7/21): A Guide to Information Security Certifications.

EDITED TO ADD (9/11): Here’s Marcus’s column.

Posted on July 20, 2006 at 7:20 AM • 63 Comments

Top Terrorist Targets from the DHS

It’s a seriously dumb list:

A federal inspector general has analyzed the nation’s database of top terrorist targets. There are more than 77,000 of them—up from 160 a few years ago, before the entire exercise morphed into a congressional porkfest.

And on that list of national assets are … 1,305 casinos! No doubt Muckleshoot made the cut (along with every other casino in our state).

The list has 234 restaurants. I have no idea if Dick’s made it. The particulars are classified. But you have to figure it did.

Why? Because here’s more of what the inspector general found passes for “critical infrastructure.” An ice-cream parlor. A tackle shop. A flea market. An Amish popcorn factory.

Seven hundred mortuaries made the list. Terrorists know no limits if they’re planning attacks on our dead people.

The report says our state has a whopping 3,650 critical sites, sixth in the U.S. It didn’t identify them—remember, we wouldn’t want this list of eateries, zoos and golf courses to fall into the wrong hands.

That number, 3,650, is so high I’m positive we haven’t heard the most farcical of it yet.

What’s going on? Pork barrel funding, that’s what’s going on.

We’re never going to get security right if we continue to make it a parody of itself.

Posted on July 18, 2006 at 7:25 AM • 37 Comments

Zero-Day Microsoft PowerPoint Vulnerability

Symantec is reporting a zero-day PowerPoint exploit. Right now the threat assessment is low, but that could change overnight if someone writes an automatic worm that takes advantage of this vulnerability.

Note that the vulnerability appeared in the wild days after “Patch Tuesday,” presumably to maximize the window of exposure before Microsoft issues a patch.

Posted on July 17, 2006 at 1:38 PM • 27 Comments

Paris Bank Hack at Center of National Scandal

From Wired News:

Among the falsified evidence produced by the conspirators before the fraud unraveled were confidential bank records originating with the Clearstream bank in Luxembourg, which were expertly modified to make it appear that some French politicians had secretly established offshore bank accounts to receive bribes. The falsified records were then sent to investigators, with enough authentic account information left in to make them appear credible.

Posted on July 17, 2006 at 6:42 AM • 15 Comments

Friday Squid Blogging: A Marine Biologist Comments on "Pirates of the Caribbean"

It’s got squid:

Danna: As you can imagine, I was pleased with the strong cephalopod theme.

Charles: I thought you might be upset by the reinforcement of negative squid stereotypes.

Danna: This might be another “take what I can get” moment. I was somewhat upset that the Kraken had all those teeth instead of a beak, though.

Charles: Well, lots of teeth are scarier.

Danna: I’d have to disagree, having spent a couple of weeks getting very personal with jumbo squid beaks. They’re very, very sharp.

Charles: I’ll take your word for it. I’ve never been personal with a squid before.

Danna: That’s probably just as well. Ink and mucus isn’t for everyone.

Posted on July 14, 2006 at 10:12 PM • 3 Comments

Complexity and Terrorism Investigations

Good article on how complexity greatly limits the effectiveness of terror investigations. The stories of wasted resources are all from the UK, but the morals are universal.

The Committee’s report accepts that the increasing number of investigations, together with their increasing complexity, will make longer detention inevitable in the future. The core calculation is essentially the one put forward by the police and accepted by the Government – technology has been an enabler for international terrorism, with email, the Internet and mobile telephony producing wide, diffuse, international networks. The data on hard drives and mobile phones needs to be examined, contacts need to be investigated and their data examined, and in the case of an incident, vast amounts of CCTV records need to be gone through. As more and more of this needs to be done, the time taken to do it will obviously climb, and as it’s ‘necessary’ to detain the new breed of terrorist early in the investigation before he can strike, more time will be needed between arrest and charge in order to build a case.

All of which is, as far as it goes, logical. But take it a little further and the inherent futility of the route becomes apparent – ultimately, probably quite soon, the volume of data overwhelms the investigators and infinite time is needed to analyse all of it. And the less developed the plot is at the time the suspects are pulled in, the greater the number of possible outcomes (things they ‘might’ be planning) that will need to be chased-up. Short of the tech industry making the breakthrough into machine intelligence that will effectively do the analysis for them (which is a breakthrough the snake-oil salesmen suggest, and dopes in Government believe, has been achieved already), the approach itself is doomed. Essentially, as far as data is concerned police try to ‘collar the lot’ and then through analysis, attempt to build the most complete picture of a case that is possible. Use of initiative, experience and acting on probabilities will tend to be pressured out of such systems, and as the data volumes grow the result will tend to be teams of disempowered machine minders chained to a system that has ground to a halt. This effect is manifesting itself visibly across UK Government systems in general, we humbly submit. But how long will it take them to figure this out?

[…]

There is clearly a major problem for the security services in distinguishing disaffected talk from serious planning, and in deciding when an identified group constitutes a real threat. But the current technology-heavy approach to the threat doesn’t make a great deal of sense, because it produces very large numbers of suspects who are not and never will be a serious threat. Quantities of these suspects will nevertheless be found to be guilty of something, and along the way large amounts of investigative resource will have been expended to no useful purpose, aside from filling up 90 days. Overreaction to suggestions of CBRN threats is similarly counter-productive, because it makes it more likely that nascent groups will, just like the police, misunderstand the capabilities of the weapons, and start trying to research and build them. Mischaracterising the threat by inflating early, inexpert efforts as ‘major plots’ meanwhile fosters a climate of fear and ultimately undermines public confidence in the security services.

The oft-used construct, “the public would never forgive us if…” is a cop-out. It’s a spurious justification for taking the ‘collar the lot’ approach, throwing resources at it, ducking out of responsibility and failing to manage. Getting back to basics, taking ownership and telling the public the truth is more honest, and has some merit. A serious terror attack needs intent, attainable target and capability, the latter being the hard bit amateurs have trouble achieving without getting spotted along the way. Buying large bags of fertiliser if you’re not known to the vendor and you don’t look in the slightest bit like a farmer is going to put you onto MI5’s radar, and despite what it says on a lot of web sites, making your own explosives if you don’t know what you’re doing is a good way of blowing yourself up before you intended to. If disaffected youth had a more serious grasp of these realities, and had heard considerably more sense about the practicalities, then it’s quite possible that fewer of them would persist with their terror studies. Similarly, if the general public had better knowledge it would be better placed to spot signs of bomb factories. Bleached hair, dead plants, large numbers of peroxide containers? It could surely have been obvious.

Posted on July 14, 2006 at 7:25 AM35 Comments

Spy Gadgets You Can Buy

Cheap:

This is a collection of “spy equipment” we have found for sale around the internet. Everything here is completely real, is sold at online stores, and almost any item listed here costs less than $500, and oftentimes can be bought for less than $200.

What’s interesting to me is less what’s available commercially today than what we can extrapolate is available to real spies.

Posted on July 13, 2006 at 1:50 PM20 Comments

A Minor Security Lesson from Mumbai Terrorist Bombings

Two quotes:

Authorities had also severely limited the cellular network for fear it could be used to trigger more attacks.

And:

Some of the injured were seen frantically dialing their cell phones. The mobile phone network collapsed adding to the sense of panic.

(Note: The story was changed online, and the second quote was deleted.)

Cell phones are useful to terrorists, but they’re more useful to the rest of us.

Posted on July 13, 2006 at 1:20 PM32 Comments

Click Fraud and the Problem of Authenticating People

Google’s $6 billion-a-year advertising business is at risk because it can’t be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.

With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It’s fraud if you sit at the computer and repeatedly click on the ad or—better yet—write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people’s computers to generate the fake clicks.

The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money—sometimes a lot of money—on nothing. (Here’s a company that will commit click fraud for you.)

Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever … and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn’t doing enough. My guess is that everyone is right: It’s in Google’s interest both to solve and to downplay the importance of the problem.

But the overarching problem is both hard to solve and important: How do you tell if there’s an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn’t automated his responses, and isn’t being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that’s infected with a Trojan.

This problem manifests itself in other areas as well.

For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn’t see.

Playing is less fun if everyone else is computer-assisted, but unless there’s a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players—or even computers playing without a real person at all—have the potential to drive all the human players away from the game.

Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it’s a real person visiting a website, not just a bot on a computer. Standard testing doesn’t work online, because the tester can’t be sure that the test taker doesn’t have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that’s not always practical and negates the benefits of internet testing.

This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant’s computer committed some hacking offense, but the defense argued that it wasn’t the defendant who did it—that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties—and it wasn’t him who’d downloaded the porn.

Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it—in both senses of that word.

And it’s a problem that will get worse as computers get better at imitating people.

Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don’t pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It’s a hard model to make work—Google would become more of a partner in the final sale instead of an indifferent displayer of advertising—but it’s the right security response to click fraud: Change the rules of the game so that click fraud doesn’t matter.

That’s how to solve a security problem.

This essay appeared on Wired.com.

EDITED TO ADD (7/13): Click Monkeys is a hoax site.

EDITED TO ADD (7/25): An evaluation of Google’s anti-click-fraud efforts, as part of the Lane Gifts case. I’m not sure if this expert report was done for Google, for Lane Gifts, or for the judge.

Posted on July 13, 2006 at 5:22 AM39 Comments

Identity Theft and Methamphetamines

New trend or scary rumor?

When methamphetamine proliferated more recently, the police and prosecutors at first did not associate it with a rise in other crimes. There were break-ins at mailboxes and people stealing documents from garbage, Mr. Morales said, but those were handled by different parts of the Police Department.

But finally they connected the two. Meth users—awake for days at a time and able to fixate on small details—were looking for checks or credit card numbers, then converting the stolen identities to money, drugs or ingredients to make more methamphetamine. For these drug users, Mr. Morales said, identity theft was the perfect support system.

Supposedly meth users are ideally suited to be computer hackers:

For example, crack cocaine or heroin dealers usually set up in well-defined urban strips run by armed gangs, which stimulates gun traffic and crimes that are suited to densely populated neighborhoods, including mugging, prostitution, carjacking and robbery. Because cocaine creates a rapid craving for more, addicts commit crimes that pay off instantly, even at high risk.

Methamphetamine, by contrast, can be manufactured in small laboratories that move about suburban or rural areas, where addicts are more likely to steal mail from unlocked boxes. Small manufacturers, in turn, use stolen identities to buy ingredients or pay rent without arousing suspicion. And because the drug has a long high, addicts have patience and energy for crimes that take several steps to pay off.

[…]

“Crack users and heroin users are so disorganized and get in these frantic binges, they’re not going to sit still and do anything in an organized way for very long,” Dr. Rawson said. “Meth users, on the other hand, that’s all they have, is time. The drug stimulates the part of the brain that perseverates on things. So you get people perseverating on things, and if you sit down at a computer terminal you can go for hours and hours.”

And there’s the illegal alien tie-in:

“Look at the states that have the highest rates of identity theft—Arizona, Nevada, California, Texas and Colorado,’’ Mr. Morales said. “The two things they all have in common are illegal immigration and meth.”

I have no idea if any of this is actually true. But I do know that if the drug-user/identity-thief connection story has legs, Congress is likely to start paying much closer attention.

Posted on July 12, 2006 at 1:32 PM45 Comments

Failure of Two-Factor Authentication

Here’s a report of phishers defeating two-factor authentication using a man-in-the-middle attack.

The site asks for your user name and password, as well as the token-generated key. If you visit the site and enter bogus information to test whether the site is legit—a tactic used by some security-savvy people—you might be fooled. That’s because this site acts as the “man in the middle”—it submits data provided by the user to the actual Citibusiness login site. If that data generates an error, so does the phishing site, thus making it look more real.

I predicted this last year.
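The relay trick in the quoted attack is simple enough to sketch in a few lines. This is a toy illustration, not the actual phishing kit: `fake_bank_login` and the credential values are hypothetical stand-ins for the real Citibusiness endpoint, but the logic shows why entering bogus credentials first proves nothing.

```python
captured = []  # credentials harvested by the attacker

def phishing_relay(username, password, token, real_login):
    """Man-in-the-middle phishing page: forward whatever the victim types
    to the real login endpoint and mirror its response. Bogus test
    credentials fail exactly as they would on the real site, so the
    'enter garbage first' legitimacy check is fooled. real_login is a
    hypothetical stand-in for the bank's actual two-factor login."""
    result = real_login(username, password, token)
    if result == "ok":
        # a live username, password, AND one-time token, all still valid
        captured.append((username, password, token))
    return result  # mirror success or error back to the victim

def fake_bank_login(username, password, token):
    # hypothetical genuine endpoint, for the sketch only
    valid = (username, password, token) == ("alice", "hunter2", "123456")
    return "ok" if valid else "error"

print(phishing_relay("test", "test", "000000", fake_bank_login))      # "error" -- site looks legit
print(phishing_relay("alice", "hunter2", "123456", fake_bank_login))  # "ok" -- and the attacker holds the token
```

The point of the sketch: because every response comes from the real site, the phishing page is behaviorally indistinguishable from it, token or no token.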

Posted on July 12, 2006 at 7:31 AM63 Comments

Galileo Satellite Code Cracked

Anyone know more?

Members of Cornell’s Global Positioning System (GPS) Laboratory have cracked the so-called pseudo random number (PRN) codes of Europe’s first global navigation satellite, despite efforts to keep the codes secret. That means free access for consumers who use navigation devices—including handheld receivers and systems installed in vehicles—that need PRNs to listen to satellites.

Security by obscurity: it doesn’t work, and it’s a royal pain to recover when it fails.

Posted on July 11, 2006 at 11:30 AM48 Comments

Unreliable Programming

One response to software liability:

Now suppose that there was a magical wand for taking snapshots of computer states just before crashes. Or that the legal system would permit claims on grounds of only the second part of the proof. Then there would be a strong positive incentive to write software that fails unreproducibly: “If our software’s errors cannot be demonstrated reliably in court, we will never lose money in product liability cases.”

Follow the link for examples.

Posted on July 11, 2006 at 7:47 AM26 Comments

Greek Wiretapping Scandal: Perpetrators' Names

According to The Guardian:

Five senior Vodafone technicians have been accused of being the operational masterminds of an elaborate eavesdropping scandal enveloping the mobile phone giant’s Greek subsidiary.

The employees, named in a report released last week by Greece’s independent telecoms watchdog, ADAE, allegedly installed spy software into Vodafone’s central systems.

Still no word on who the technicians were working for.

I’ve written about this scandal before: here, here, and most recently here.

Posted on July 10, 2006 at 1:28 PM12 Comments

Terrorists, Data Mining, and the Base Rate Fallacy

I have already explained why NSA-style wholesale surveillance data-mining systems are useless for finding terrorists. Here’s a more formal explanation:

Floyd Rudmin, a professor at a Norwegian university, applies the mathematics of conditional probability, known as Bayes’ Theorem, to demonstrate that the NSA’s surveillance cannot successfully detect terrorists unless both the percentage of terrorists in the population and the accuracy rate of their identification are far higher than they are. He correctly concludes that “NSA’s surveillance system is useless for finding terrorists.”

The surveillance is, however, useful for monitoring political opposition and stymieing the activities of those who do not believe the government’s propaganda.

And here’s the analysis:

What is the probability that people are terrorists given that NSA’s mass surveillance identifies them as terrorists? If the probability is zero (p=0.00), then they certainly are not terrorists, and NSA was wasting resources and damaging the lives of innocent citizens. If the probability is one (p=1.00), then they definitely are terrorists, and NSA has saved the day. If the probability is fifty-fifty (p=0.50), that is the same as guessing the flip of a coin. The conditional probability that people are terrorists given that the NSA surveillance system says they are, that had better be very near to one (p=1.00) and very far from zero (p=0.00).

The mathematics of conditional probability were figured out by the Scottish logician Thomas Bayes. If you Google “Bayes’ Theorem”, you will get more than a million hits. Bayes’ Theorem is taught in all elementary statistics classes. Everyone at NSA certainly knows Bayes’ Theorem.

To know if mass surveillance will work, Bayes’ theorem requires three estimations:

  1. The base-rate for terrorists, i.e. what proportion of the population are terrorists;
  2. The accuracy rate, i.e., the probability that real terrorists will be identified by NSA;
  3. The misidentification rate, i.e., the probability that innocent citizens will be misidentified by NSA as terrorists.

No matter how sophisticated and super-duper are NSA’s methods for identifying terrorists, no matter how big and fast are NSA’s computers, NSA’s accuracy rate will never be 100% and their misidentification rate will never be 0%. That fact, plus the extremely low base-rate for terrorists, means it is logically impossible for mass surveillance to be an effective way to find terrorists.

I will not put Bayes’ computational formula here. It is available in all elementary statistics books and is on the web should any readers be interested. But I will compute some conditional probabilities that people are terrorists given that NSA’s system of mass surveillance identifies them to be terrorists.

The US Census shows that there are about 300 million people living in the USA.

Suppose that there are 1,000 terrorists there as well, which is probably a high estimate. The base-rate would be 1 terrorist per 300,000 people. In percentages, that is .00033%, which is way less than 1%. Suppose that NSA surveillance has an accuracy rate of .40, which means that 40% of real terrorists in the USA will be identified by NSA’s monitoring of everyone’s email and phone calls. This is probably a high estimate, considering that terrorists are doing their best to avoid detection. There is no evidence thus far that NSA has been so successful at finding terrorists. And suppose NSA’s misidentification rate is .0001, which means that .01% of innocent people will be misidentified as terrorists, at least until they are investigated, detained and interrogated. Note that .01% of the US population is 30,000 people. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.0132, which is near zero, very far from one. Ergo, NSA’s surveillance system is useless for finding terrorists.

Suppose that NSA’s system is more accurate than .40, let’s say, .70, which means that 70% of terrorists in the USA will be found by mass monitoring of phone calls and email messages. Then, by Bayes’ Theorem, the probability that a person is a terrorist if targeted by NSA is still only p=0.0228, which is near zero, far from one, and useless.

Suppose that NSA’s system is really, really, really good, really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA’s domestic monitoring of everyone’s email and phone calls is useless for finding terrorists.

As an exercise to the reader, you can use the same analysis to show that data mining is an excellent tool for finding stolen credit cards, or stolen cell phones. Data mining is by no means useless; it’s just useless for this particular application.
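Rudmin leaves out the formula, but his three scenarios are easy to reproduce. A minimal sketch, using the base rate, accuracy rate, and misidentification rate exactly as defined above:

```python
def p_terrorist_given_flag(base_rate, hit_rate, false_positive_rate):
    """Bayes' Theorem: P(terrorist | flagged by mass surveillance)."""
    # probability that a random person gets flagged at all
    p_flagged = base_rate * hit_rate + (1 - base_rate) * false_positive_rate
    return base_rate * hit_rate / p_flagged

base = 1000 / 300_000_000  # 1,000 terrorists among 300 million people

print(round(p_terrorist_given_flag(base, 0.40, 0.0001), 4))   # 0.0132
print(round(p_terrorist_given_flag(base, 0.70, 0.0001), 4))   # 0.0228
print(round(p_terrorist_given_flag(base, 0.90, 0.00001), 4))  # 0.2308
```

The numbers match Rudmin's: even the wildly optimistic third scenario leaves a flagged person with less than a one-in-four chance of actually being a terrorist, because the tiny base rate dominates everything else.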

Posted on July 10, 2006 at 7:15 AM145 Comments

U.S. Navy Patents Firewall

At least, that’s what it sounds like to me:

In a communication system having a plurality of networks, a method of achieving network separation between first and second networks is described. First and second networks with respective first and second degrees of trust are defined, the first degree of trust being higher than the second degree of trust. Communication between the first and second networks is enabled via a network interface system having a protocol stack, the protocol stack implemented by the network interface system in an application layer. Data communication from the second network to the first network is enabled while data communication from the first network to the second network is minimized.

Posted on July 7, 2006 at 7:06 AM35 Comments

WiFi Driver Attack

In this attack, you can seize control of someone’s computer using his WiFi interface, even if he’s not connected to a network.

The two researchers used an open-source 802.11 hacking tool called LORCON (Loss of Radio Connectivity) to throw an extremely large number of wireless packets at different wireless cards. Hackers use this technique, called fuzzing, to see if they can cause programs to fail, or perhaps even run unauthorized software when they are bombarded with unexpected data.

Using tools like LORCON, Maynor and Ellch were able to discover many examples of wireless device driver flaws, including one that allowed them to take over a laptop by exploiting a bug in an 802.11 wireless driver. They also examined other networking technologies including Bluetooth, Ev-Do (EVolution-Data Only), and HSDPA (High Speed Downlink Packet Access).

The two researchers declined to disclose the specific details of their attack before the August 2 presentation, but they described it in dramatic terms.

“This would be the digital equivalent of a drive-by shooting,” said Maynor. An attacker could exploit this flaw by simply sitting in a public space and waiting for the right type of machine to come into range.

The victim would not even need to connect to a network for the attack to work.

No details yet. The researchers are presenting their results at BlackHat on August 2.
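The fuzzing technique the researchers describe is conceptually simple: take a well-formed frame, corrupt random bytes, and see what makes the driver fall over. Here's a toy sketch of the idea; `toy_parse`, its length-field bug, and the frame format are all hypothetical, stand-ins for a real 802.11 driver and real malformed packets.

```python
import random

def fuzz(parse, seed_frame, iterations=1000, seed=1337):
    """Throw randomly mutated frames at a parser and collect the inputs
    that make it blow up -- the essence of the fuzzing technique
    described above. parse and seed_frame are hypothetical stand-ins."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        frame = bytearray(seed_frame)
        for _ in range(rng.randint(1, 4)):  # corrupt a few random bytes
            frame[rng.randrange(len(frame))] = rng.randrange(256)
        try:
            parse(bytes(frame))
        except Exception as exc:
            crashes.append((bytes(frame), exc))
    return crashes

def toy_parse(frame):
    """Toy 'driver' that trusts a length field, as many real drivers did.
    The raise stands in for the out-of-bounds read a C driver would do."""
    declared_len = frame[0]
    if declared_len > len(frame) - 1:
        raise IndexError("driver read past the end of the frame")
    return frame[1:1 + declared_len]

valid_frame = bytes([4]) + b"ABCD"  # length byte, then four payload bytes
print(len(fuzz(toy_parse, valid_frame)) > 0)  # random mutations trip the length bug
```

In a real wireless-driver attack the parser runs in the kernel, so the "crash" can be memory corruption rather than a clean exception, which is what makes these bugs exploitable rather than merely annoying.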

Posted on July 6, 2006 at 1:52 PM21 Comments

Annual Report from the Privacy Commissioner of Canada

Excellent reading.

It is my duty, in this Annual Report, to present a solemn and urgent warning to every Member of Parliament and Senator, and indeed to every Canadian:

The fundamental human right of privacy in Canada is under assault as never before. Unless the Government of Canada is quickly dissuaded from its present course by Parliamentary action and public insistence, we are on a path that may well lead to the permanent loss not only of privacy rights that we take for granted but also of important elements of freedom as we now know it.

We face this risk because of the implications, both individual and cumulative, of a series of initiatives that the Government has mounted or is actively moving toward. These initiatives are set against the backdrop of September 11, and anti-terrorism is their purported rationale. But the aspects that present the greatest threat to privacy either have nothing at all to do with anti-terrorism, or they present no credible promise of effectively enhancing security.

The Government is, quite simply, using September 11 as an excuse for new collections and uses of personal information about all of us Canadians that cannot be justified by the requirements of anti-terrorism and that, indeed, have no place in a free and democratic society.

Why doesn’t the United States have a Privacy Commissioner?

And this:

A popular response is: “If you have nothing to hide, you have nothing to fear.”

By that reasoning, of course, we shouldn’t mind if the police were free to come into our homes at any time just to look around, if all our telephone conversations were monitored, if all our mail were read, if all the protections developed over centuries were swept away. It’s only a difference of degree from the intrusions already being implemented or considered.

The truth is that we all do have something to hide, not because it’s criminal or even shameful, but simply because it’s private. We carefully calibrate what we reveal about ourselves to others. Most of us are only willing to have a few things known about us by a stranger, more by an acquaintance, and the most by a very close friend or a romantic partner. The right not to be known against our will – indeed, the right to be anonymous except when we choose to identify ourselves – is at the very core of human dignity, autonomy and freedom.

If we allow the state to sweep away the normal walls of privacy that protect the details of our lives, we will consign ourselves psychologically to living in a fishbowl. Even if we suffered no other specific harm as a result, that alone would profoundly change how we feel. Anyone who has lived in a totalitarian society can attest that what often felt most oppressive was precisely the lack of privacy.

Great stuff.

EDITED TO ADD (7/6): That’s the 2001-2002 report. This is the latest report.

Posted on July 6, 2006 at 7:49 AM34 Comments

Cell Phone Security

No, it’s not what you think. This phone has a built-in Breathalyzer:

Here’s how it works: Users blow into a small spot on the phone, and if they’ve had too much to drink the phone issues a warning and shows a weaving car hitting traffic cones.

You can also configure the phone not to let you dial certain phone numbers if you’re drunk. Think ex-lovers.

Now that’s a security feature I can get behind.

Posted on July 5, 2006 at 2:45 PM27 Comments

The League of Women Voters Supports Voter-Verifiable Paper Trails

For a long time, the League of Women Voters (LWV) had been on the wrong side of the electronic voting machine issue. They were in favor of electronic machines, and didn’t see the need for voter-verifiable paper trails. (They used to have a horrid and misleading Q&A about the issue on their website, but it’s gone now. Barbara Simons published a rebuttal, which includes their original Q&A.)

The politics of the LWV are byzantine, but basically there are local leagues under state leagues, which in turn are under the national (LWVUS) league. There is a national convention once every other year, and all sorts of resolutions are passed by the membership. But the national office can do a lot to undercut the membership and the state leagues. The politics of voting machines is an example of this.

At the 2004 convention, the LWV membership passed a resolution on electronic voting called “SARA,” which stood for “Secure, Accurate, Recountable, and Accessible.” Those in favor of the resolution thought that “recountable” meant auditable, which meant voter-verifiable paper trails. But the national LWV office decided to spin SARA to say that recountable does not imply paper. While they could no longer oppose paper outright, they refused to say that paper was desirable. For example, they held Georgia’s system up as a model, and Georgia uses paperless Diebold DRE machines. It makes you wonder if the LWVUS leadership is in someone’s pocket.

So at the 2006 convention, the LWV membership passed another resolution. This one was much more clearly worded: designed to make it impossible for the national office to pretend that the LWV was not in favor of voter-verified paper trails.

Unfortunately, the League of Women Voters has not issued a press release about this resolution. (There is a press release by VerifiedVoting.org about it.) I’m sure that the national office simply doesn’t want to acknowledge the membership’s position on the issue, and wishes the issue would just go away quietly. It’s a pity; the resolution is a great one and worth publicizing.

Here’s the text of the resolution:

Resolution Related to Program Requiring a Voter-Verifiable Paper Ballot or Paper Record with Electronic Voting Machines

Motion to adopt the following resolution related to program requiring a voter-verified paper ballot or paper record with electronic voting systems.

Whereas: Some LWVs have had difficulty applying the SARA Resolution (Secure, Accurate, Recountable and Accessible) passed at the last Convention, and

Whereas: Paperless electronic voting systems are not inherently secure, can malfunction, and do not provide a recountable audit trail,

Therefore be it resolved that:

The position on the Citizens’ Right to Vote be interpreted to affirm that LWVUS supports only voting systems that are designed so that:

  1. they employ a voter-verifiable paper ballot or other paper record, said paper being the official record of the voter’s intent; and
  2. the voter can verify, either by eye or with the aid of suitable devices for those who have impaired vision, that the paper ballot/record accurately reflects his or her intent; and
  3. such verification takes place while the voter is still in the process of voting; and
  4. the paper ballot/record is used for audits and recounts; and
  5. the vote totals can be verified by an independent hand count of the paper ballot/record; and
  6. routine audits of the paper ballot/record in randomly selected precincts can be conducted in every election, and the results published by the jurisdiction.

By the way, the 2006 LWV membership also voted on a resolution in favor of net neutrality (the Connecticut league issued a press release, because they spearheaded the issue), and one against the death penalty. The national LWV office hasn’t issued a press release about those two issues, either.

Posted on July 5, 2006 at 1:32 PM20 Comments

Brennan Center Report on Security of Voting Systems

I have been participating in the Brennan Center’s Task Force on Voting Security. Last week we released a report on the security of voting systems.

From the Executive Summary:

In 2005, the Brennan Center convened a Task Force of internationally renowned government, academic, and private-sector scientists, voting machine experts and security professionals to conduct the nation’s first systematic analysis of security vulnerabilities in the three most commonly purchased electronic voting systems. The Task Force spent more than a year conducting its analysis and drafting this report. During this time, the methodology, analysis, and text were extensively peer reviewed by the National Institute of Standards and Technology (“NIST”).

[…]

The Task Force examined security threats to the technologies used in Direct Recording Electronic voting systems (“DREs”), DREs with a voter verified auditable paper trail (“DREs w/ VVPT”) and Precinct Count Optical Scan (“PCOS”) systems. The analysis assumes that appropriate physical security and accounting procedures are all in place.

[…]

Three fundamental points emerge from the threat analysis in the Security Report:

  • All three voting systems have significant security and reliability vulnerabilities, which pose a real danger to the integrity of national, state, and local elections.
  • The most troubling vulnerabilities of each system can be substantially remedied if proper countermeasures are implemented at the state and local level.
  • Few jurisdictions have implemented any of the key countermeasures that could make the least difficult attacks against voting systems much more difficult to execute successfully.

[…]

There are a number of steps that jurisdictions can take to address the vulnerabilities identified in the Security Report and make their voting systems significantly more secure. We recommend adoption of the following security measures:

  1. Conduct automatic routine audits comparing voter verified paper records to the electronic record following every election. A voter verified paper record accompanied by a solid automatic routine audit of those records can go a long way toward making the least difficult attacks much more difficult.
  2. Perform “parallel testing” (selection of voting machines at random and testing them as realistically as possible on Election Day.) For paperless DREs, in particular, parallel testing will help jurisdictions detect software-based attacks, as well as subtle software bugs that may not be discovered during inspection and other testing.
  3. Ban use of voting machines with wireless components. All three voting systems are more vulnerable to attack if they have wireless components.
  4. Use a transparent and random selection process for all auditing procedures. For any auditing to be effective (and to ensure that the public is confident in such procedures), jurisdictions must develop and implement transparent and random selection procedures.
  5. Ensure decentralized programming and voting system administration. Where a single entity, such as a vendor or state or national consultant, performs key tasks for multiple jurisdictions, attacks against statewide elections become easier.
  6. Institute clear and effective procedures for addressing evidence of fraud or error. Both automatic routine audits and parallel testing are of questionable security value without effective procedures for action where evidence of machine malfunction and/or fraud is discovered. Detection of fraud without an appropriate response will not prevent attacks from succeeding.
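Recommendation 4 — a transparent and random selection process — is worth making concrete. One common approach (a minimal sketch, not drawn from the report; the function and seed format here are hypothetical) is to derive the sample from a publicly announced random value, such as dice rolls performed in front of observers, so that anyone can re-run the selection and confirm it was not manipulated:

```python
import hashlib
import random

def select_audit_units(unit_ids, sample_size, public_seed):
    """Pick which precincts/machines to audit, verifiably.

    The RNG seed is derived from a publicly announced value, so any
    observer can reproduce the exact same sample after the fact.
    """
    # Hash the announced value so any string commits to a full 256-bit seed.
    digest = hashlib.sha256(public_seed.encode("utf-8")).hexdigest()
    rng = random.Random(int(digest, 16))
    # Sort first so the result does not depend on input ordering.
    return sorted(rng.sample(sorted(unit_ids), sample_size))

precincts = [f"precinct-{i:03d}" for i in range(1, 101)]
sample = select_audit_units(precincts, 5, "2006-07-05 public dice rolls: 4 6 1 3 5")
# Re-running with the same published seed yields the identical sample.
assert sample == select_audit_units(precincts, 5, "2006-07-05 public dice rolls: 4 6 1 3 5")
```

The point of the construction is that the jurisdiction commits to the selection procedure in advance and the randomness is generated in public, so neither officials nor an attacker can quietly steer the audit away from compromised machines.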

The report is long, but I think it’s worth reading. If you’re short on time, though, at least read the Executive Summary.

The report has generated some press. Unfortunately, the news articles recycle some of the lame points that Diebold continues to make in the face of this kind of analysis:

Voting machine vendors have dismissed many of the concerns, saying they are theoretical and do not reflect the real-life experience of running elections, such as how machines are kept in a secure environment.

“It just isn’t the piece of equipment,” said David Bear, a spokesman for Diebold Election Systems, one of the country’s largest vendors. “It’s all the elements of an election environment that make for a secure election.”

“This report is based on speculation rather than an examination of the record. To date, voting systems have not been successfully attacked in a live election,” said Bob Cohen, a spokesman for the Election Technology Council, a voting machine vendors’ trade group. “The purported vulnerabilities presented in this study, while interesting in theory, would be extremely difficult to exploit.”

I wish The Washington Post had found someone to point out that there have been many, many irregularities with electronic voting machines over the years, and the lack of convincing evidence of fraud is exactly the problem with their no-audit-possible systems. Or that the “it’s all theoretical” argument is the same one that software vendors used to use to discredit security vulnerabilities before the full-disclosure movement forced them to admit that their software had problems.

Posted on July 5, 2006 at 6:12 AM28 Comments

Getting a Personal Unlock Code for Your O2 Cell Phone

O2 is a UK cell phone network. The company gives you the option of setting up a PIN on your phone. The idea is that if someone steals your phone, they can’t make calls. If they type the PIN incorrectly three times, the phone is blocked. To deal with the problems of phone owners mistyping their PIN—or forgetting it—they can contact O2 and get a Personal Unlock Code (PUK). Presumably, the operator goes through some authentication steps to ensure that the person calling is actually the legitimate owner of the phone.

So far, so good.

But O2 has decided to automate the PUK process. Now anyone on the Internet can visit this website, type in a valid mobile telephone number, and get a valid PUK to reset the PIN—without any authentication whatsoever.

Oops.
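The lock/unblock behavior described above amounts to a small state machine, which a toy model makes easy to see (a sketch with hypothetical names; real SIMs enforce this logic inside the card itself, and also cap PUK attempts before permanently disabling the SIM):

```python
class SimLock:
    """Toy model of the PIN/PUK mechanism: three wrong PIN entries
    block the phone, and only the PUK can reset the PIN."""

    MAX_PIN_TRIES = 3

    def __init__(self, pin, puk):
        self._pin = pin
        self._puk = puk
        self._failed = 0
        self.blocked = False

    def try_pin(self, pin):
        if self.blocked:
            return False          # once blocked, only the PUK helps
        if pin == self._pin:
            self._failed = 0
            return True
        self._failed += 1
        if self._failed >= self.MAX_PIN_TRIES:
            self.blocked = True   # phone now demands the PUK
        return False

    def unblock(self, puk, new_pin):
        # The whole scheme rests on the PUK being hard to obtain.
        # O2's automated site handed it out for any phone number,
        # so in practice this step authenticated nothing.
        if puk == self._puk:
            self._pin = new_pin
            self._failed = 0
            self.blocked = False
            return True
        return False
```

The model makes the design assumption explicit: the PIN limits guessing, but the PUK is a full bypass, so the security of the lock is only as good as the process that guards PUK distribution.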

EDITED TO ADD (7/4): A representative from O2 sent me the following:

“Yes, it does seem there is a security risk by O2 supplying such a service, but in fact we believe this risk is very small. The risk is when a customer’s phone is lost or stolen. There are two scenarios in that event:

“Scenario 1 – The phone is powered off. A PIN number would be required at next power on. Although the PUK code will indeed allow you to reset the PIN, you need to know the telephone number of the SIM in order to get it – there is no way to determine the telephone number from the SIM or handset itself. Should the telephone number be known the risk is then same as scenario 2.

“Scenario 2 – The phone remains powered on: Here, the thief can use the phone in any case without having to acquire PUK.

“In both scenarios we have taken the view that the principle security measure is for the customer to report the loss/theft as quickly as possible, so that we can remotely disable both the SIM and also the handset (so that it cannot be used with any other SIM).”

Posted on July 3, 2006 at 2:26 PM

Load ActiveX Controls on Vista Without Administrator Privileges

This seems like a bad idea to me:

Microsoft is adding a brand-new feature to Windows Vista to allow businesses to load ActiveX controls on systems running without admin privileges.

The new feature, called ActiveX Installer Service, will be fitted into the next public release of Vista to provide a way for enterprises to cope with the UAC (User Account Control) security mechanism.

UAC, formerly known as LUA (Limited User Account), is enabled by default in Vista to separate Standard User privileges from those that require admin rights to harden the operating system against malware and malicious hacker attacks.

However, because UAC will block the installation of ActiveX controls on Standard User systems, enterprise applications that use the technology will encounter breakages. ActiveX controls are objects used to enhance a user’s interaction with an application.

Posted on July 3, 2006 at 8:31 AM42 Comments
