Blog: October 2010 Archives

Halloween and the Irrational Fear of Stranger Danger

From the Wall Street Journal:

Take “stranger danger,” the classic Halloween horror. Even when I was a kid, back in the “Bewitched” and “Brady Bunch” costume era, parents were already worried about neighbors poisoning candy. Sure, the folks down the street might smile and wave the rest of the year, but apparently they were just biding their time before stuffing us silly with strychnine-laced Smarties.

That was a wacky idea, but we bought it. We still buy it, even though Joel Best, a sociologist at the University of Delaware, has researched the topic and spends every October telling the press that there has never been a single case of any child being killed by a stranger’s Halloween candy. (Oh, yes, he concedes, there was once a Texas boy poisoned by a Pixy Stix. But his dad did it for the insurance money. He was executed.)

Anyway, you’d think that word would get out: poisoned candy not happening. But instead, most Halloween articles to this day tell parents to feed children a big meal before they go trick-or-treating, so they won’t be tempted to eat any candy before bringing it home for inspection.

[…]

Then along came new fears. Parents are warned annually not to let their children wear costumes that are too tight—those could seriously restrict breathing! But not too loose either—kids could trip! Fall! Die!

Treating parents like idiots who couldn’t possibly notice that their kid is turning blue or falling on his face might seem like a losing proposition, but it caught on too.

Halloween taught marketers that parents are willing to be warned about anything, no matter how preposterous, and then they’re willing to be sold whatever solutions the market can come up with. Face paint so no mask will obscure a child’s vision. Purell, so no child touches a germ. And the biggest boondoggle of all: an adult-supervised party, so no child encounters anything exciting, er, “dangerous.”

I remember one year when I filled a few Pixy Stix with garlic powder. But that was a long time ago.

EDITED TO ADD (11/2): Interesting essay:

The precise methods of the imaginary Halloween sadist are especially interesting. Apples and homemade goods occasionally appear in the stories, but the most common culprit is regular candy. This crazed person would purchase candy, open the wrapper, and DO SOMETHING to it, something that would be designed to hurt the unsuspecting child. But also something that would be sufficiently obvious and clumsy that the vigilant parent could spot it (hence the primacy of candy inspection).

The idea that someone, even a greedy child, might consume candies hiding razor blades and needles without noticing seems to strain credulity. And how, exactly, a person might go about coating a jelly bean with arsenic or lacing a molasses chew with Drano has never been clear to me. Yet it is an undisputed fact of Halloween hygiene: Unwrapped candy is the number-one suspect. If Halloween candy is missing a wrapper, or if the wrapper seems loose or flimsy, the candy goes straight into the trash.

Here is where I think we can discover some deeper meanings in the myth of the Halloween sadist. It’s all about the wrappers.

Wrappers are like candy condoms: Safe candy is candy that is covered and sealed. And not just any wrapper will do. Loose, casual, cheap wrappers, the kind of wrappers one might find on locally produced candies or non-brand-name candies, are also liable to send candy to Halloween purgatory. The close, tight factory wrapper says “sealed for your protection.” And the recognized brand name on the wrapper also lends a reassuring aura of corporate responsibility and accountability. It’s a basic axiom of consumer faith: The bigger the brand, the safer the candy.

Ironic, since we know that the most serious food dangers are those that originate from just the kind of large-scale industrial food processing environments that also bring us name-brand, mass-market candies. Salmonella, E. coli, and their bacterial buddies lurking in bagged salads and pre-formed hamburger patties are real food dangers; home-made cookies laced with ground glass are not.

EDITED TO ADD (11/11): Wondermark comments.

Posted on October 31, 2010 at 10:02 AM

Cargo Security

The New York Times writes:

Despite the increased scrutiny of people and luggage on passenger planes since 9/11, there are far fewer safeguards for packages and bundles, particularly when loaded on cargo-only planes.

Well, of course. We’ve always known this. We’ve not worried about terrorism on cargo planes because it isn’t very terrorizing. Packages aren’t people. If a passenger plane blows up, it affects a couple of hundred people. If a cargo plane blows up, it just affects the crew.

Cargo that is loaded onto passenger planes should be subjected to the same level of security as passenger luggage. Cargo that is loaded onto cargo planes should be treated no differently from cargo loaded into ships, trains, trucks, and the trunks of cars.

Of course, now that the media is talking about cargo security, we have to “do something.” (Something must be done. This is something. Therefore, we must do it.) But if we’re so scared that we have to devote resources to this kind of terrorist threat, we’ve well and truly lost.

EDITED TO ADD (10/30): The plot—it’s still unclear how serious it was—wasn’t uncovered by any security screening, but by intelligence gathering:

Intelligence officials were onto the suspected plot for days, officials said. The packages in England and Dubai were discovered after Saudi Arabian intelligence picked up information related to Yemen and passed it on to the U.S., two officials said.

This is how you fight terrorism: not by defending against specific threats, but through intelligence, investigation, and emergency response.

Posted on October 30, 2010 at 9:41 AM

Firesheep

Firesheep is a new Firefox plugin that makes it easy for you to hijack other people’s social network connections. Basically, Facebook authenticates clients with cookies. If someone is using a public WiFi connection, the cookies are sniffable. Firesheep uses WinPcap to capture and display the authentication information for accounts it sees, allowing you to hijack the connection.

Slides from the Toorcon talk.

Protect yourself by forcing the authentication to happen over TLS. Or stop logging in to Facebook from public networks.

EDITED TO ADD (10/27): To protect against this attack, you have to encrypt the entire session—not just the initial authentication.
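
To make that concrete, here is a minimal sketch of the server-side posture, using Flask purely as an illustrative stand-in (Facebook obviously runs nothing like this): mark session cookies Secure and HttpOnly, redirect cleartext requests, and send an HSTS header so the browser never falls back to plain HTTP.

    # Sketch: encrypt the entire session, not just the login.
    from flask import Flask, redirect, request

    app = Flask(__name__)
    app.secret_key = "change-me"  # placeholder; sessions need a real secret

    # Secure: the session cookie is never sent over plain HTTP (what
    # Firesheep sniffs). HttpOnly: it is not readable from JavaScript.
    app.config.update(SESSION_COOKIE_SECURE=True, SESSION_COOKIE_HTTPONLY=True)

    @app.before_request
    def require_tls():
        # Bounce any cleartext request to HTTPS before it carries a cookie.
        if not request.is_secure:
            return redirect(request.url.replace("http://", "https://", 1), code=301)

    @app.after_request
    def add_hsts(response):
        # HSTS tells the browser to refuse plain-HTTP connections entirely.
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response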

EDITED TO ADD (11/4): Foiling Firesheep.

EDITED TO ADD (11/10): More info.

EDITED TO ADD (11/17): Blacksheep detects Firesheep.

Posted on October 27, 2010 at 7:53 AM

Friday Squid Blogging: Steganography in the Longfin Inshore Squid

Really:

While the notion that a few animals produce polarization signals and use them in communication is not new, Mäthger and Hanlon’s findings present the first anatomical evidence for a “hidden communication channel” that can remain masked by typical camouflage patterns. Their results suggest that it might be possible for squid to send concealed polarized signals to one another while staying camouflaged to fish or mammalian predators, most of which do not have polarization vision.

Mäthger notes that these messages could contain information regarding the whereabouts of other squid, for example. “Whether signals could also contain information regarding the presence of predators (i.e., a warning signal) is speculation, but it may be possible,” she adds.

Posted on October 22, 2010 at 4:31 PM

Electronic Car Lock Denial-of-Service Attack

Clever:

Inspector Richard Haycock told local newspapers that the possible use of the car lock jammers would help explain a recent spate of thefts from vehicles that have occurred without leaving any signs of forced entry.

“We do get quite a lot of car crime in the borough where there’s no sign of a break-in and items have been taken from an owner’s car,” Inspector Haycock said. “It’s difficult to get in to a modern car without causing damage and we get a reasonable amount of people who do not report any.

“It is a possibility that central locking jamming is being used,” he added.

Devices that block the frequency used by a car owner’s key fob might be used to thwart an owner’s attempts to lock a car, leaving it open for waiting thieves. A quick search of the internet shows that devices offering to jam car locks are easily available for around $100. Effectiveness at up to 100m is claimed.

I thought car door locks weren’t much of a deterrent to a professional car thief.

EDITED TO ADD (10/22): The thieves are not stealing cars, they’re stealing things left inside the cars.

EDITED TO ADD (11/10): Related paper.

Posted on October 21, 2010 at 2:07 PM

Predator Software Pirated?

This isn’t good:

Intelligent Integration Systems (IISi), a small Boston-based software development firm, alleges that their Geospatial Toolkit and Extended SQL Toolkit were pirated by Massachusetts-based Netezza for use by a government client. Subsequent evidence and court proceedings revealed that the “government client” seeking assistance with Predator drones was none other than the Central Intelligence Agency.

IISi is seeking an injunction that would halt the use of their two toolkits by Netezza for three years. Most importantly, IISi alleges in court papers that Netezza used a “hack” version of their software with incomplete targeting functionality in response to rushed CIA deadlines. As a result, Predator drones could be missing their targets by as much as 40 feet.

The obvious joke is that this is what you get when you go with the low bidder, but it doesn’t have to be that way. And there’s nothing special about this being a government procurement; any bespoke IT procurement needs good contractual oversight.

EDITED TO ADD (11/10): Another article.

Posted on October 20, 2010 at 7:21 AM

Hiding in Plain Sight

Ha!

When he’s out and about near his Denver home, former Broncos quarterback John Elway has come up with a novel way to travel incognito—he wears his own jersey. “I do that all the time here,” the 50-year-old Hall of Famer told me. “I go to the mall that way. They know it’s not me because they say there’s no way Elway would be wearing his own jersey in the mall. So it actually is the safest thing to do.”

Of course, now everybody knows.

Posted on October 19, 2010 at 7:34 AM

Fingerprinting Telephone Calls

This is clever:

The tool is called PinDr0p, and works by analysing the various characteristic noise artifacts left in audio by the different types of voice network—cellular, VoIP etc. For instance, packet loss leaves tiny gaps in audio signals, too brief for the human ear to detect, but quite perceptible to the PinDr0p algorithms. Vishers and others wishing to avoid giving away the origin of a call will often route a call through multiple different network types.

This system can be used to differentiate telephone calls from your bank from telephone calls from someone in Nigeria pretending to be from your bank.

The PinDr0p analysis can’t produce an IP address or geographical location for a given caller, but once it has a few calls via a given route, it can subsequently recognise further calls via the same route with a high degree of accuracy: 97.5 per cent following three calls and almost 100 per cent after five.

Naturally a visher can change routings easily, but even so PinDr0p can potentially reveal details that will expose a given call as being false. A call which has passed through a Russian cell network and P2P VoIP is unlikely to really be from your high-street bank in the UK, for instance.

Unless your bank is outsourcing its customer support to Russia, of course.

The GIT researchers hope to develop a database of different signatures which would let their system provide a geolocation as well as routing information in time.

Statement from the researchers.
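
As a toy illustration of the matching step (my sketch, not the PinDr0p algorithm): average feature vectors from a few known calls into a route profile, then match new calls by cosine similarity. The three features here are invented stand-ins for the noise artifacts the real system measures.

    import numpy as np

    def profile(calls):
        """Average the feature vectors of known calls into a route fingerprint."""
        return np.mean(calls, axis=0)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def classify(call, profiles, threshold=0.95):
        """Return the best-matching known route, or None if nothing is close."""
        best_route, best_score = None, threshold
        for route, fingerprint in profiles.items():
            score = cosine(call, fingerprint)
            if score > best_score:
                best_route, best_score = route, score
        return best_route

    # Invented features: [packet-loss gap rate, codec noise floor, spectral tilt]
    profiles = {
        "uk-landline": profile(np.array([[0.01, 0.20, 1.10], [0.02, 0.21, 1.00]])),
        "cell-then-voip": profile(np.array([[0.30, 0.60, 0.40], [0.28, 0.55, 0.45]])),
    }
    print(classify(np.array([0.29, 0.58, 0.42]), profiles))  # -> "cell-then-voip"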

Posted on October 18, 2010 at 6:23 AM

Indian OS

India is writing its own operating system so it doesn’t have to rely on Western technology:

India’s Defence Research and Development Organisation (DRDO) wants to build an OS, primarily so India can own the source code and architecture. That will mean the country won’t have to rely on Western operating systems that it thinks aren’t up to the job of thwarting cyber attacks. The DRDO specifically wants to design and develop its own OS that is hack-proof to prevent sensitive data from being stolen.

On the one hand, this is great. We could use more competition in the OS market—as more and more applications move into the cloud and are only accessed via an Internet browser, OS compatibility matters less and less—and an OS that brands itself as “more secure” can only help. But this security-by-obscurity thinking just isn’t true:

“The only way to protect it is to have a home-grown system, the complete architecture … source code is with you and then nobody knows what’s that,” he added.

The only way to protect it is to design and implement it securely. Keeping control of your source code didn’t magically make Windows secure, and it won’t make this Indian OS secure.

Posted on October 15, 2010 at 3:12 AM

Picking a Single Voice out of a Crowd

Interesting new technology.

Squarehead’s new system is like bullet-time for sound. 325 microphones sit in a carbon-fiber disk above the stadium, and a wide-angle camera looks down on the scene from the center of this disk. All the operator has to do is pinpoint a spot on the court or field using the screen, and the Audioscope works out how far that spot is from each of the mics, corrects for delay and then synchronizes the audio from all 325 of them. The result is a microphone that can pick out the pop of a bubblegum bubble in the middle of a basketball game….

[…]

Audio from all microphones is stored in separate channels, so you can even go back and listen in on any sounds later. Want to hear the whispered insult that caused one player to lose it and attack the other? You got it.
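
The description amounts to delay-and-sum beamforming: compute each microphone’s distance to the chosen spot, undo each signal’s propagation delay, and average. Sound from the chosen spot adds coherently; everything else tends to cancel. A minimal sketch, with the geometry and sample rate as inputs rather than Squarehead’s actual hardware:

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s at room temperature

    def delay_and_sum(signals, mic_positions, target, sample_rate):
        """Align each mic's signal to a target point and average.

        signals: (n_mics, n_samples) array of recorded audio
        mic_positions: (n_mics, 3) array of coordinates in meters
        target: (3,) coordinates of the spot to listen to
        """
        distances = np.linalg.norm(mic_positions - target, axis=1)
        # Only relative delays matter, so subtract the common minimum.
        delays = (distances - distances.min()) / SPEED_OF_SOUND
        shifts = np.round(delays * sample_rate).astype(int)
        n = signals.shape[1] - shifts.max()
        # A mic farther from the target hears the sound later, so skip
        # that many samples to line all the arrivals up.
        aligned = np.stack([sig[s:s + n] for sig, s in zip(signals, shifts)])
        return aligned.mean(axis=0)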

Posted on October 14, 2010 at 12:10 PM

The FBI is Tracking Whom?

They’re tracking a college student in Silicon Valley. He’s 20, partially Egyptian, and studying marketing at Mission College. He found the tracking device attached to his car. Near as he could tell, what he did to warrant the FBI’s attention was to be the friend of someone who did something to warrant the FBI’s attention.

Afifi retrieved the device from his apartment and handed it over, at which point the agents asked a series of questions: did he know anyone who traveled to Yemen or was affiliated with overseas training? One of the agents produced a printout of a blog post that Afifi’s friend Khaled allegedly wrote a couple of months ago. It had “something to do with a mall or a bomb,” Afifi said. He hadn’t seen it before and doesn’t know the details of what it said. He found it hard to believe Khaled meant anything threatening by the post.

Here’s the Reddit post:

bombing a mall seems so easy to do. i mean all you really need is a bomb, a regular outfit so you arent the crazy guy in a trench coat trying to blow up a mall and a shopping bag. i mean if terrorism were actually a legitimate threat, think about how many fucking malls would have blown up already.. you can put a bag in a million different places, there would be no way to foresee the next target, and really no way to prevent it unless CTU gets some intel at the last minute in which case every city but LA is fucked…so…yea…now i’m surely bugged : /

Here’s the device. Here’s the story, told by the student who found it.

This weird story poses three sets of questions.

  1. Is the FBI’s car surveillance technology that lame? Don’t they have bugs that are a bit smaller and less obtrusive? Or are they surveilling so many people that they’re forced to use the older models as well as the newer, smaller stuff?

    From a former FBI agent:

    The former agent, who asked not to be named, said the device was an older model of tracking equipment that had long ago been replaced by devices that don’t require batteries. Batteries die and need to be replaced if surveillance is ongoing so newer devices are placed in the engine compartment and hardwired to the car’s battery so they don’t run out of juice. He was surprised this one was so easily found.

    “It has to be able to be removed but also stay in place and not be seen,” he said. “There’s always the possibility that the car will end up at a body shop or auto mechanic, so it has to be hidden well. It’s very rare when the guys find them.”

  2. If they’re doing this to someone so tangentially connected to a vaguely bothersome post on an obscure blog, just how many of us have tracking devices on our cars right now—perhaps because of this blog? Really, is that blog post plus this enough to warrant surveillance?

    Afifi’s father, Aladdin Afifi, was a U.S. citizen and former president of the Muslim Community Association here, before his family moved to Egypt in 2003. Yasir Afifi returned to the United States alone in 2008, while his father and brothers stayed in Egypt, to further his education he said. He knows he’s on a federal watchlist and is regularly taken aside at airports for secondary screening.

  3. How many people are being paid to read obscure blogs, looking for more college students to surveil?

Remember, the Ninth Circuit Court recently ruled that the police do not need a warrant to attach one of these things to your car. That ruling holds true only for the Ninth Circuit right now; the Supreme Court will probably rule on this soon.

Meanwhile, the ACLU is getting involved:

Brian Alseth from the American Civil Liberties Union in Washington state contacted Afifi after seeing pictures of the tracking device posted online and told him the ACLU had been waiting for a case like this to challenge the ruling.

“This is the kind of thing we like to throw lawyers at,” Afifi said Alseth told him.

“It seems very frightening that the FBI have placed a surveillance-tracking device on the car of a 20-year-old American citizen who has done nothing more than being half-Egyptian,” Alseth told Wired.com.

Posted on October 13, 2010 at 6:20 AM

The Mahmoud al-Mabhouh Assassination

Remember the Mahmoud al-Mabhouh assassination last January? The police identified 30 suspects, but haven’t been able to find any of them.

Police spent about 10,000 hours poring over footage from some 1,500 security cameras around Dubai. Using face-recognition software, electronic-payment records, receipts and interviews with taxi drivers and hotel staff, they put together a list of suspects and publicized it.

Seems ubiquitous electronic surveillance is no match for a sufficiently advanced adversary.

Posted on October 12, 2010 at 6:12 AM

The Ineffectiveness of Vague Security Warnings

From Slate:

We do nothing, first and foremost, because there is nothing we can do. Unless the State Department gets specific—e.g., “don’t go to the Eiffel Tower tomorrow”—information at that level of generality is completely meaningless. Unless we are talking about weapons of mass destruction, the chances of being hit by a car while crossing the street are still greater than the chances of being on the one plane or one subway car that comes under attack. Besides, nobody living or working in a large European city (or even a small one) can indefinitely avoid coming within close proximity of “official and private” structures affiliated with U.S. interests—a Hilton hotel, an Apple computer store—not to mention subways, trains, airplanes, boats, and all other forms of public transportation.

Second, we do nothing because if the language is that vague, nobody is really sure why the warning has been issued in the first place. Obviously, if the U.S. government knew who the terrorists were and what they were going to attack, it would arrest them and stop them. If it can’t do any better than “tourist infrastructure” and public transportation, it doesn’t really know anything at all.

[…]

In truth, the only people who can profit from such a warning are the officials who have issued it in the first place. If something does happen, they are covered. They warned us, they told us in advance, they won’t be criticized or forced to resign. And if nothing happens, we’ll all forget about it anyway.

Except that we don’t forget about it. Over time, these enigmatic warnings do al-Qaida’s work for them, scaring people without cause. Without so much as lifting a finger, Osama Bin Laden disrupts our sense of security and well-being. At the same time, they put the U.S. government in the position of the boy who cried wolf. The more often general warnings are issued, the less likely we are to heed them. We are perhaps unsettled or unnerved, but we don’t know what to do. So we do nothing—and wish that we’d been told nothing, as well.

I wrote much the same thing in 2004, about the DHS’s vague terrorist warnings and the color-coded threat advisory system.

EDITED TO ADD (10/13): Another article.

Posted on October 8, 2010 at 12:49 PM

Hacking Trial Breaks D.C. Internet Voting System

Sounds like it was easy:

Last week, the D.C. Board of Elections and Ethics opened a new Internet-based voting system for a weeklong test period, inviting computer experts from all corners to prod its vulnerabilities in the spirit of “give it your best shot.” Well, the hackers gave it their best shot—and midday Friday, the trial period was suspended, with the board citing “usability issues brought to our attention.”

[…]

Stenbjorn said a Michigan professor whom the board has been working with on the project had “unleashed his students” during the test period, and one succeeded in infiltrating the system.

My primary worry about contests like this is that people will think a positive result means something. If a bunch of students can break into a system after a week of attempts, we know it’s insecure. But just because a system withstands a test like this doesn’t mean it’s secure. We don’t know who tried. We don’t know what they tried. We don’t know how long they tried. And we don’t know if someone who tries smarter, harder, and longer could break the system.

More links.

Posted on October 8, 2010 at 6:23 AM

Stuxnet

Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it’s obvious. In the case of Stuxnet, there’s a great story.

As the story goes, the Stuxnet worm was designed and released by a government—the U.S. and Israel are the most common suspects—specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that’s a pariah to much of the world. The only problem with the story is that it’s almost entirely speculation.

Here’s what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four “zero-day exploits”: vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)

Stuxnet doesn’t actually do anything on those infected Windows computers, because they’re not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines—and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.

If it doesn’t find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It’s impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location—and that Stuxnet’s authors knew exactly what they were targeting.

It’s already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don’t know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India’s INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.

All the anti-virus programs detect and remove Stuxnet from Windows systems.

Stuxnet was first discovered in late June, although there’s speculation that it was released a year earlier. As worms go, it’s very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own drivers into Windows. These have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.

Over time the attackers swapped out modules that didn’t work and replaced them with new ones—perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.

Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.
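
The kill-date and peer-update behavior described above reduces to a few lines of logic. This is a paraphrase of the reported behavior, not code from the worm:

    from datetime import date

    KILL_DATE = date(2012, 6, 24)  # reported date after which the worm stops

    def active(today):
        """Reportedly, the worm stops spreading and deletes itself on the kill date."""
        return today < KILL_DATE

    def peer_update(my_version, peer_version):
        """P2P update: when two infections meet, both keep the newest version."""
        return max(my_version, peer_version)

    assert active(date(2010, 10, 7))
    assert not active(date(2012, 6, 24))
    assert peer_update((1, 0, 3), (1, 1, 0)) == (1, 1, 0)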

We don’t know who wrote Stuxnet. We don’t know why. We don’t know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.

Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities; a criminal group would be smarter to create different worm variants and use one zero-day in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There’s also the lab setup—surely any organization that goes to all this trouble would test the thing before releasing it—and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They’re hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.

None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory “highly speculative,” and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates—India, Indonesia, and Pakistan—are countries where the same Russian contractor involved in Bushehr is also involved. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.

Once a theory takes hold, though, it’s easy to find more evidence. The word “myrtus” appears in the worm: an artifact that the compiler left, possibly by accident. That’s the myrtle plant. Of course, that doesn’t mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. “Hadassah” means “myrtle” in Hebrew.

Stuxnet also sets a registry value of “19790509” to alert new copies of Stuxnet that the computer has already been infected. It’s rather obviously a date, but instead of looking at the gazillion things—large and small—that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.
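
The marker is just a fixed string that any copy can test for before infecting; that it happens to parse as a date is what invites the speculation. A sketch of the pattern, with a plain dictionary standing in for the Windows registry (the key name is invented, not Stuxnet’s actual value name):

    from datetime import datetime

    MARKER = "19790509"

    # The value reads naturally as a date...
    print(datetime.strptime(MARKER, "%Y%m%d").date())  # 1979-05-09

    # ...but functionally it is only an "already infected" flag.
    def already_infected(registry):
        return registry.get("infection_marker") == MARKER

    def mark(registry):
        registry["infection_marker"] = MARKER

    registry = {}  # stand-in for the Windows registry
    assert not already_infected(registry)
    mark(registry)
    assert already_infected(registry)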

Sure, these markers could point to Israel as the author. On the other hand, Stuxnet’s authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it’s impossible to know when to stop.

Another number found in Stuxnet is 0xDEADF007. Perhaps that means “Dead Fool” or “Dead Foot,” a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to cause a specific sabotage seems to be the most likely explanation.

If that’s the case, why is Stuxnet so sloppily targeted? Why doesn’t Stuxnet erase itself when it realizes it’s not in the targeted network? When it infects a network via USB stick, it’s supposed to only spread to three additional computers and to erase itself after 21 days—but it doesn’t do that. A mistake in programming, or a feature in the code not enabled? Maybe we’re not supposed to reverse engineer the target. By allowing Stuxnet to spread globally, its authors committed collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.

My guess is that Stuxnet’s authors, and its target, will forever remain a mystery.

This essay originally appeared on Forbes.com.

My alternate explanations for Stuxnet were cut from the essay. Here they are:

  • A research project that got out of control. Researchers have accidentally released worms before. But given the press, and the fact that any researcher working on something like this would be talking to friends, colleagues, and his advisor, I would expect someone to have outed him by now, especially if it was done by a team.
  • A criminal worm designed to demonstrate a capability. Sure, that’s possible. Stuxnet could be a prelude to extortion. But I think a cheaper demonstration would be just as effective. Then again, maybe not.
  • A message. It’s hard to speculate any further, because we don’t know who the message is for, or its context. Presumably the intended recipient would know. Maybe it’s a “look what we can do” message. Or an “if you don’t listen to us, we’ll do worse next time” message. Again, it’s a very expensive message, but maybe one of the pieces of the message is “we have so many resources that we can burn four or five man-years of effort and four zero-day vulnerabilities just for the fun of it.” If that message were for me, I’d be impressed.
  • A worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity. Nah, that sort of conspiracy is much more common in fiction than in real life.

Note that some of these alternate explanations overlap.

EDITED TO ADD (10/7): Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn’t a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.

Posted on October 7, 2010 at 9:56 AM

The Politics of Allocating Homeland Security Money to States

From the Journal of Homeland Security and Emergency Management: “Politics or Risks? An Analysis of Homeland Security Grant Allocations to the States.”

Abstract: In the days following the September 11 terrorist attacks on the United States, the nation’s elected officials created the USA Patriot Act. The act included a grant program for the 50 states that was intended to assist them with homeland security and preparedness efforts. However, not long after its passage, critics charged the Department of Homeland Security with allocating the grant funds on the basis of “politics” rather than “risk.” This study analyzes the allocation of funds through all seven of the grant subprograms for the years 2003 through 2006. Conducting a linear regression analysis for each year, our research indicates that the total per capita amounts are inversely related to risk factors but are not related at all to partisan political factors between 2003 and 2005. In 2006, Congress changed the formula with the intention of increasing the relationship between allocations and risk. However, our findings reveal that this change did not produce the intended effect and the allocations were still negatively related to risk and unrelated to partisan politics.

I’m not sure I buy the methodology, but there it is.
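
For readers who want to see the shape of the analysis, here is a toy version of the paper’s regression on synthetic data. The variables are invented stand-ins for the paper’s actual risk and partisanship measures, constructed to mimic the reported finding:

    import numpy as np

    rng = np.random.default_rng(0)
    n_states = 50

    risk = rng.uniform(0, 1, n_states)        # stand-in risk index
    partisan = rng.uniform(-1, 1, n_states)   # stand-in partisan-alignment index
    # Synthetic per-capita allocations: negatively related to risk,
    # unrelated to partisanship, plus noise.
    per_capita = 10.0 - 4.0 * risk + rng.normal(0, 1, n_states)

    X = np.column_stack([np.ones(n_states), risk, partisan])
    coef, *_ = np.linalg.lstsq(X, per_capita, rcond=None)
    print(f"intercept={coef[0]:.2f} risk={coef[1]:.2f} partisan={coef[2]:.2f}")
    # Expect a negative risk coefficient and a partisan coefficient near zero.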

Posted on October 7, 2010 at 7:03 AM

Putting Unique Codes on Objects to Detect Counterfeiting

This will help some.

At least two rival systems plan to put unique codes on packages containing antimalarials and other medications. Buyers will be able to text the code to a phone number on the package and get an immediate reply of “NO” or “OK,” with the drug’s name, expiration date, and other information.

To defeat the system, the counterfeiter has to copy the codes. If the stores selling to customers are in on the scam, every package can carry the same code. If not, there have to be enough different codes that the store doesn’t detect duplications. Presumably, numbers that are known to have been copied are added to the database, so the counterfeiters need to keep updating their codes. And presumably the codes are cryptographically hard to predict, so the only way to keep updating them is to harvest them from legitimate products.
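
One plausible way to make codes cryptographically hard to predict is to derive each from a secret key and the package’s serial number, and to count how often each code is queried. This sketch is my guess at the general approach, not either vendor’s actual scheme:

    import hashlib
    import hmac

    SECRET_KEY = b"factory-secret"  # hypothetical; held by the verification service

    def make_code(serial: str) -> str:
        """Derive an unpredictable code from the package serial number."""
        digest = hmac.new(SECRET_KEY, serial.encode(), hashlib.sha256).hexdigest()
        return digest[:10].upper()

    seen = {}  # code -> number of times queried

    def verify(code: str, valid_codes: set) -> str:
        if code not in valid_codes:
            return "NO"
        seen[code] = seen.get(code, 0) + 1
        # A code queried many times has probably been copied onto fakes.
        return "OK" if seen[code] <= 2 else "SUSPECT: possible duplicate"

    valid = {make_code(f"PKG{i:06d}") for i in range(1000)}
    print(verify(make_code("PKG000042"), valid))  # OK
    print(verify("DEADBEEF00", valid))            # NO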

Another attack would be to intercept the verification system. A man-in-the-middle attack against the phone number or the website would be difficult, but the verification information (the number to text) is printed on the object itself. It would be easy for a counterfeiter to print a fake phone number that would verify anything.

It’ll be interesting to see how the counterfeiters get around this security measure.

Posted on October 6, 2010 at 6:59 AM

Analyzing CAPTCHAs

New research: “Attacks and Design of Image Recognition CAPTCHAs.”

Abstract. We systematically study the design of image recognition CAPTCHAs (IRCs) in this paper. We first review and examine all IRC schemes known to us and evaluate each scheme against the practical requirements in CAPTCHA applications, particularly in large-scale real-life applications such as Gmail and Hotmail. Then we present a security analysis of the representative schemes we have identified. For the schemes that remain unbroken, we present our novel attacks. For the schemes for which known attacks are available, we propose a theoretical explanation why those schemes have failed. Next, we provide a simple but novel framework for guiding the design of robust IRCs. Then we propose an innovative IRC called Cortcha that is scalable to meet the requirements of large-scale applications. Cortcha relies on recognizing an object by exploiting its surrounding context, a task that humans can perform well but computers cannot. An infinite number of types of objects can be used to generate challenges, which can effectively disable the learning process in machine learning attacks. Cortcha does not require the images in its image database to be labeled. Image collection and CAPTCHA generation can be fully automated. Our usability studies indicate that, compared with Google’s text CAPTCHA, Cortcha yields a slightly higher human accuracy rate but on average takes more time to solve a challenge.

The paper attacks IMAGINATION (designed at Penn State around 2005) and ARTiFACIAL (designed at MSR Redmond around 2004).

Posted on October 5, 2010 at 7:22 AM

Sky Marshals Flying First Class

I regularly say that security decisions are primarily made for non-security reasons. This article about the placement of sky marshals on airplanes is an excellent example. Basically, the airlines would prefer they fly coach instead of first class.

Airline CEOs met recently with TSA administrator John Pistole and officials from the Federal Air Marshal Service requesting the TSA to reconsider the placement of marshals based on current security threats.

“Our concern is far less revenue and more that we have defenses appropriate to the threat,” said James May, chief executive of the Air Transport Association, the airline industry’s lobbying group. “We think there needs to be an even distribution, particularly when we have multiple agents on board.”

[…]

By law, airlines must provide seats to marshals at no cost in any cabin requested. With first-class and business-class seats in particular, the revenue loss to airlines can be substantial because they can’t sell last-minute tickets or upgrades, and travelers sometimes get bumped to the back or lose out on upgrade opportunities. When travelers do get bumped, airlines are barred from divulging why the first-class seat was unexpectedly taken away, to keep the presence of a marshal a secret. Bumped travelers—airlines can’t disclose how many passengers are affected—typically get coach seats and refunds on the cash or miles they paid for the better seat.

When I list the few improvements to airline security since 9/11, I don’t include sky marshals.

EDITED TO ADD (10/9): An article from The Economist.

Posted on October 4, 2010 at 1:55 PM

Monitoring Employees' Online Behavior

Not their online behavior at work, but their online behavior in life.

Using automation software that slogs through Facebook, Twitter, Flickr, YouTube, LinkedIn, blogs, and “thousands of other sources,” the company develops a report on the “real you”—not the carefully crafted you in your resume. The service is called Social Intelligence Hiring. The company promises a 48-hour turn-around.

[…]

The reports feature a visual snapshot of what kind of person you are, evaluating you in categories like “Poor Judgment,” “Gangs,” “Drugs and Drug Lingo” and “Demonstrating Potentially Violent Behavior.” The company mines for rich nuggets of raw sewage in the form of racy photos, unguarded commentary about drugs and alcohol and much more.

The company also offers a separate Social Intelligence Monitoring service to watch the personal activity of existing employees on an ongoing basis…. The service provides real-time notification alerts, so presumably the moment your old college buddy tags an old photo of you naked, drunk and armed on Facebook, the boss gets a text message with a link.

This is being sold using fear:

…company spokespeople emphasize liability. What happens if one of your employees freaks out, comes to work and starts threatening coworkers with a samurai sword? You’ll be held responsible because all of the signs of such behavior were clear for all to see on public Facebook pages. That’s why you should scan every prospective hire and run continued scans on every existing employee.

In other words, they make the case that now that people use social networks, companies will be expected (by shareholders, etc.) to monitor those services and protect the company from lawsuits, damage to reputation, and other harm.

Posted on October 4, 2010 at 6:31 AM

My Recording Debut

Okay, so this isn’t a normal blog post.

It’s not about security.

[Photo: Bruce Schneier playing doumbek]

I’ve been playing doumbek with Brother Seamus, a band at the Minneapolis Renaissance Festival. They’ve released a CD, “Hale and Sound,” where I play on three of the tracks.

If you’re interested in a copy, it’s only $15—including shipping anywhere in the world.

If you’re in Minneapolis, come to the Renaissance Festival tomorrow to hear us play—I’m not going to be there on Sunday.


Order Brother Seamus’ “Hale and Sound” here; the PayPal button on the CD’s webpage doesn’t work.

Posted on October 1, 2010 at 2:43 PM

Me on Cyberwar

During the cyberwar debate a few months ago, I said this:

If we frame this discussion as a war discussion, then what you do when there’s a threat of war is you call in the military and you get military solutions. You get lockdown; you get an enemy that needs to be subdued. If you think about these threats in terms of crime, you get police solutions. And as we have this debate, not just on stage but in the country, the way we frame it, the way we talk about it, and the way the headlines read determine what sort of solutions we want and what makes us feel better. And so the threat of cyberwar is being grossly exaggerated and I think it’s being done for a reason. This is a power grab by government. What Mike McConnell didn’t mention is that grossly exaggerating a threat of cyberwar is incredibly profitable.

More of my writings on cyberwar, and the debate, here.

Posted on October 1, 2010 at 12:10 PM
