Blog: October 2007 Archives

House of Lords on the Liquid Ban

From the UK:

“We continuously monitor the effectiveness of, in particular, the liquid security measures…”

How, one might ask? But hold on:

“The fact that there has not been a serious incident involving liquid explosives indicates, I would have thought, that the measures that we have put in place so far have been very effective.”

Ah, that’s how. On which basis the measures against asteroid strike, alien invasion and unexplained nationwide floods of deadly boiling custard have also been remarkably effective.

Posted on October 31, 2007 at 2:52 PM · 35 Comments

Programming for Wholesale Surveillance and Data Mining

AT&T has done the research:

They use high-tech data-mining algorithms to scan through the huge daily logs of every call made on the AT&T network; then they use sophisticated algorithms to analyze the connections between phone numbers: who is talking to whom? The paper literally uses the term “Guilt by Association” to describe what they’re looking for: what phone numbers are in contact with other numbers that are in contact with the bad guys?

When this research was done, back in the last century, the bad guys were people who wanted to rip off AT&T by making fraudulent credit-card calls. (Remember, back in the last century, intercontinental long-distance voice communication actually cost money!) But it’s easy to see how the FBI could use this to chase down anyone who talked to anyone who talked to a terrorist. Or even to a “terrorist.”
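The paper itself isn’t reproduced here, but the “guilt by association” idea is easy to sketch: build a graph out of the call log and flag everything within a few hops of a known bad number. (A sketch only; the phone numbers, hop limit, and function names below are all invented, not taken from the AT&T paper.)

```python
from collections import defaultdict, deque

def flag_by_association(calls, seeds, max_hops=2):
    """Return every number within max_hops call-graph edges of a
    seed ("known bad") number, mapped to its hop distance.

    calls: iterable of (caller, callee) pairs from the call log.
    seeds: set of numbers already under suspicion.
    """
    # Build an undirected call graph: a call links both parties.
    graph = defaultdict(set)
    for a, b in calls:
        graph[a].add(b)
        graph[b].add(a)

    # Breadth-first search outward from all seeds at once.
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        if dist[node] == max_hops:
            continue  # don't expand past the hop limit
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# Invented toy call log: 0004 is three hops out, so it escapes the net.
calls = [("555-0001", "555-0002"), ("555-0002", "555-0003"),
         ("555-0003", "555-0004"), ("555-0009", "555-0010")]
flagged = flag_by_association(calls, {"555-0001"})
assert flagged == {"555-0001": 0, "555-0002": 1, "555-0003": 2}
```

The unsettling part is visible even in the toy version: everyone within two hops gets swept in, guilty or not.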

Posted on October 31, 2007 at 12:03 PM · 12 Comments

Driver's License Printer Stolen and Recovered

A specialized printer used to print Missouri driver’s licenses was stolen and recovered.

It’s a funny story, actually. Turns out the thief couldn’t get access to the software needed to run the printer; a lockout on the control computer apparently thwarted him. When he called tech support, they tipped off the Secret Service.

On the one hand, this probably won’t deter a more sophisticated thief. On the other hand, you can make pretty good forgeries with off-the-shelf equipment.

Posted on October 31, 2007 at 6:11 AM

Stupid Terrorism Overreaction

Oh, the stupid:

State officials have decided not to publicize their list of polling places in Pennsylvania, citing concerns that terrorists could disrupt elections in the commonwealth.

[…]

“The agencies agreed it was appropriate not to release the statewide list to protect the public and the integrity of the voting process,” Amoros said.

Information on individual polling places remains available on the state voter services Web site or by calling the state or county elections bureaus.

A few days later the governor rescinded the order.

Posted on October 30, 2007 at 12:56 PM · 22 Comments

Security by Letterhead

This otherwise amusing story has some serious lessons:

John: Yes, I’m calling to find out why request number 48931258 to transfer somedomain.com was rejected.

ISP: Oh, it was rejected because the request wasn’t submitted on company letterhead.

John: Oh… sure… but… uh, just so we’re on the same page, can you define exactly what you mean by ‘company letterhead?’

ISP: Well, you know, it has the company’s logo, maybe a phone number and web site address… that sort of thing. I mean, your fax looks like it could’ve been typed by anyone!

John: So you know what my company letterhead looks like?

ISP: Ye… no. Not specifically. But, like, we’d know it if we saw it.

John: And what if we don’t have letterhead? What if we’re a startup? What if we’re redesigning our logo?

ISP: Well, you’d have to speak to customer—

John (clicking and typing): I could probably just pick out a semi-professional-looking MS Word template and paste my request in that and resubmit it, right?

ISP: Look, our policy—

John: Oh, it’s ok, I just sent the request back in on letterhead.

Ha ha. The idiot ISP guy doesn’t realize how easy it is for anyone with a word processor and a laser printer to fake a letterhead. But what this story really shows is how hard it is for people to change their security intuition. Security-by-letterhead was fairly robust when printing was hard, and faking a letterhead was real work. Today it’s easy, but people—especially people who grew up under the older paradigm—don’t act as if it is. They would if they thought about it, but most of the time our security runs on intuition and not on explicit thought.

This kind of thing bites us all the time. Mother’s maiden name is no longer a good password. An impressive-looking storefront on the Internet is not the same as an impressive-looking storefront in the real world. The headers on an e-mail are not a good authenticator of its origin. It’s an effect of technology moving faster than our ability to develop a good intuition about that technology.

And, as technology changes ever faster, this will only get worse.

Posted on October 30, 2007 at 6:33 AM · 73 Comments

Understanding the Black Market in Internet Crime

Here’s an interesting paper from Carnegie Mellon University: “An Inquiry into the Nature and Causes of the Wealth of Internet Miscreants.”

The paper focuses on the large illicit market that specializes in the commoditization of activities in support of Internet-based crime. The main goal of the paper was to understand and measure how these markets function, and discuss the incentives of the various market entities. Using a dataset collected over seven months and comprising over 13 million messages, they were able to categorize the market’s participants, the goods and services advertised, and the asking prices for selected interesting goods.

Really cool stuff.

Unfortunately, the data is extremely noisy and so far the authors have no way to cross-validate it, so it is difficult to make any strong conclusions.

The press focused on just one thing: a discussion of general ways to disrupt the market. Contrary to the claims of the article, the authors have not built any tools to disrupt the markets.

Related blog posts: Gozi and Storm.

Posted on October 29, 2007 at 2:23 PM · 5 Comments

Switzerland Protects its Vote with Quantum Cryptography

This is so silly I wasn’t going to even bother blogging about it. But the sheer number of news stories has made me change my mind.

Basically, the Swiss company ID Quantique convinced the Swiss government to use quantum cryptography to protect vote transmissions during their October 21 election. It was a great publicity stunt, and the news articles were filled with hyperbole: how the “unbreakable” encryption will ensure the integrity of the election, how this will protect the election against hacking, and so on.

Complete idiocy. There are many serious security threats to voting systems, especially paperless touch-screen voting systems, but they’re not centered around the transmission of votes from the voting site to the central tabulating office. The software in the voting machines themselves is a much bigger threat, one that quantum cryptography doesn’t solve in the least.

Moving data from point A to point B securely is one of the easiest security problems we have. Conventional encryption works great. PGP, SSL, SSH could all be used to solve this problem, as could pretty much any good VPN software package; there’s no need to use quantum crypto for this at all. Software security, OS security, network security, and user security are much harder security problems; and quantum crypto doesn’t even begin to address them.
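To make the point concrete: even something as boring as a pre-shared key and an HMAC already detects tampering with a vote tally in transit. This sketch covers only the integrity half, not confidentiality, and the key and tally strings are invented for illustration:

```python
import hashlib
import hmac

# Invented: a key shared between the polling site and the tabulating office.
SHARED_KEY = b"pre-shared key, exchanged out of band"

def seal(tally: bytes, key: bytes = SHARED_KEY):
    """Attach an HMAC-SHA256 tag so tampering in transit is detectable."""
    return tally, hmac.new(key, tally, hashlib.sha256).hexdigest()

def verify(tally: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    expected = hmac.new(key, tally, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

msg, tag = seal(b"precinct 12: yes=1042 no=977")
assert verify(msg, tag)
assert not verify(b"precinct 12: yes=2042 no=977", tag)  # altered in transit
```

Twenty-odd lines of conventional cryptography, and the transmission-tampering threat the quantum link was sold against is already handled.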

So, congratulations to ID Quantique for a nice publicity stunt. But did they actually increase the security of the Swiss election? Doubtful.

Posted on October 29, 2007 at 6:02 AM · 46 Comments

Untwirling a Photoshopped Photo

So, this pedophile posts photos of himself with young boys, but obscures his face with the Photoshop “twirl” tool. Turns out that the transformation isn’t lossy, and that you can untwirl his face.

He was caught in Thailand.

Moral: Don’t blindly trust technology; you need to really know what it’s doing.
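For the curious, here’s why untwirling works at the coordinate level: a twirl rotates each point about the center by an angle that depends only on its distance from the center, and rotation preserves that distance, so running the same warp with the angle negated is an exact inverse. (The falloff profile below is invented; Photoshop’s may differ, and real images also lose a little to pixel resampling. But not enough to hide a face.)

```python
import math

def twirl(x, y, strength=2.5, radius=120.0, cx=0.0, cy=0.0):
    """Map one point through a twirl-style warp: rotate it around
    (cx, cy) by an angle that falls off with distance from the center.
    The linear falloff here is an invented stand-in for Photoshop's."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r >= radius:
        return x, y  # outside the twirl region: untouched
    theta = strength * (1 - r / radius)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (cx + dx * cos_t - dy * sin_t,
            cy + dx * sin_t + dy * cos_t)

def untwirl(x, y, strength=2.5, **kw):
    # Same warp, rotation reversed: an exact inverse, because the
    # rotation angle depends only on r, which rotation preserves.
    return twirl(x, y, strength=-strength, **kw)

x, y = untwirl(*twirl(30.0, 40.0))
assert abs(x - 30.0) < 1e-9 and abs(y - 40.0) < 1e-9
```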

Posted on October 26, 2007 at 6:44 AM · 43 Comments

World Series Ticket Website Hacked?

Maybe:

The Colorado Rockies will try again to sell World Series tickets through their Web site starting on Tuesday at noon.

Spokesman Jay Alves said tonight that the failure of Monday’s ticket sales happened because the system was brought down by an “external malicious attack.”

There was a presale that “went well”:

The Colorado Rockies had a chance Sunday to test their online-sales operation in advance.

Season-ticket holders who had previously registered were able to log in with a special password to buy extra tickets.

Alves said the presale went well, with no problems.

But some people found glitches, such as being told to “enable cookies” and to set their computer security to the “lowest level.” And some fans couldn’t log in at all.

Alves explained that those who saw a “page cannot be displayed” message had “IP addresses that we blocked due to suspicious/malicious activity to our website during the last 24 to 48 hours. As an example, if several inquiries came from a single IP address they were blocked.”
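Alves’s rule (block any IP with “several inquiries”) is just a frequency threshold, and a sketch of it shows the false-positive problem immediately: lots of legitimate fans share one IP behind a corporate or ISP NAT, which could explain the people who couldn’t log in at all. (Threshold and addresses below are invented.)

```python
from collections import Counter

def blocked_ips(request_log, threshold=5):
    """Return source IPs whose request count meets the block threshold.

    request_log: iterable of source-IP strings, one per request.
    threshold: invented cutoff for "several inquiries".
    """
    counts = Counter(request_log)
    return {ip for ip, n in counts.items() if n >= threshold}

# One eager fan (or one office NAT) trips the rule; light users don't.
log = ["10.0.0.5"] * 8 + ["10.0.0.9", "10.0.0.7"] * 2
assert blocked_ips(log) == {"10.0.0.5"}
```

The rule can’t tell a scalper’s bot from an office full of Rockies fans; both look like “several inquiries from a single IP address.”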

Certainly scalpers have an incentive to attack this system.

EDITED TO ADD (10/28): The FBI is investigating.

Posted on October 25, 2007 at 11:52 AM · 30 Comments

Partial Fingerprints Barred from Murder Trial

Brandon Mayfield, the Oregon man who was arrested because his fingerprint “matched” that of an Algerian who handled one of the Madrid bombs, now has a legacy: a judge has ruled partial prints cannot be used in a murder case.

“The repercussions are terrifically broad,” said David L. Faigman, a professor at the University of California’s Hastings College of the Law and an editor of Modern Scientific Evidence: The Law and Science of Expert Testimony.

“Fingerprints, before DNA, were always considered the gold standard of forensic science, and it’s turning out that there’s a lot more tin in that field than gold,” he said. “The public needs to understand that. This judge is declaring, not to mix my metaphors, that the emperor has no clothes.”

Posted on October 25, 2007 at 7:03 AM · 36 Comments

Terrorist Insects

Yet another movie-plot threat to worry about:

One of the cheapest and most destructive weapons available to terrorists today is also one of the most widely ignored: insects. These biological warfare agents are easy to sneak across borders, reproduce quickly, spread disease, and devastate crops in an indefatigable march. Our stores of grain could be ravaged by the khapra beetle, cotton and soybean fields decimated by the Egyptian cottonworm, citrus and cotton crops stripped by the false codling moth, and vegetable fields pummeled by the cabbage moth. The costs could easily escalate into the billions of dollars, and the resulting disruption of our food supply – and our sense of well-being – could be devastating. Yet the government focuses on shoe bombs and anthrax while virtually ignoring insect insurgents.

[…]

Seeing the potential, military strategists have been keen to conscript insects during war. In World War II, the French and Germans pursued the mass production and dispersion of Colorado potato beetles to destroy enemy food supplies. The Japanese military, meanwhile, sprayed disease-carrying fleas from low-flying airplanes and dropped bombs packed with flies and a slurry of cholera bacteria. The Japanese killed at least 440,000 Chinese using plague-infected fleas and cholera-coated flies, according to a 2002 international symposium of historians.

During the Cold War, the US military planned a facility to produce 100 million yellow-fever-infected mosquitoes a month, produced an “Entomological Warfare Target Analysis” of vulnerable sites in the Soviet Union and among its allies, and tested the dispersal and biting capacity of (uninfected) mosquitoes by secretly dropping the insects over American cities.

Posted on October 24, 2007 at 6:14 AM · 38 Comments

Declan McCullagh on the Politicization of Security

Good essay:

Politicians of both major parties wield this as the ultimate political threat. Its invocation typically predicts that if a certain piece of legislation is passed (or not passed) Americans will die. Variations may warn that children will die or troops will die. Any version is difficult for the target to combat.

This leads me to propose McCullagh’s Law of Politics:

As the certainty that legislation violates the U.S. Constitution increases, so does the probability of predictions that severe harm or death will come to Americans if the proposal is not swiftly enacted.

McCullagh’s Law describes a promise of political violence. It goes like this: “If you, my esteemed political adversary, are insufficiently wise as to heed my advice, I will direct my staff and members of my political apparatus to unearth examples of dead {Americans|women|children|troops} so I can later accuse you of responsibility for their deaths.”

Posted on October 22, 2007 at 1:13 PM · 40 Comments

Detecting Restaurant Credit Card Fraud with Checksums

Clever technique to put a checksum into the bill total when you add a tip at a restaurant.

I don’t know how common tip fraud is. This thread implies that it’s pretty common, but I use my credit card in restaurants all the time all over the world and I’ve never been the victim of this sort of fraud. On the other hand, I’m not a lousy tipper. And maybe I don’t frequent the right sort of restaurants.
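The linked scheme’s details may differ from this, but one simple variant is to round the tip up by a few pennies so the signed total is divisible by 9; a later statement check then catches most single-digit alterations. A sketch, with the divisibility-by-9 rule chosen for illustration:

```python
def tip_with_checksum(bill_cents, desired_tip_cents):
    """Round the tip up (by at most 8 cents) so the signed total is
    divisible by 9. Divisibility by 9 survives only alterations that
    change the total by a multiple of 9, so most single-digit edits
    to the total are detectable. (Illustrative scheme, not necessarily
    the one in the linked article.)"""
    tip = desired_tip_cents
    while (bill_cents + tip) % 9 != 0:
        tip += 1
    return tip

def total_looks_genuine(total_cents):
    """Check a charged total against the divisibility rule."""
    return total_cents % 9 == 0

bill = 4250                              # $42.50 bill
tip = tip_with_checksum(bill, 800)       # wanted to tip about $8.00
assert total_looks_genuine(bill + tip)
assert not total_looks_genuine(bill + tip + 100)  # a padded dollar shows up
```

When the statement arrives, any total that fails the check is a flag to pull the receipt, at a cost of at most eight cents of extra tip per meal.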

Posted on October 21, 2007 at 2:25 PM · 52 Comments

Hiding Data Behind Attorney-Client Privilege

Interesting advice:

He cites a key advantage to bringing in lawyers up front: “If you hire a law firm to supervise the process, even if there are technical engineers involved, then the process will be covered by attorney-client privilege,” Cunningham said.

He noted that in a lawsuit following a data theft, plaintiffs usually seek a company’s records of “all the [data-security] recommendations that were made [before the breach] and whether or not you followed them. And if you go and hire technical consultants only, all that information gets turned over in discovery. [But] if you have it through a law firm, it’s generally not.”

Gregory Engel has some good comments about this:

This isn’t a “prevention initiative” for data security, it’s a preemptive initiative for corporate irresponsibility.

I’m not sure it will work, though. I don’t think you can run all of your data past your attorney and then magically have it imbued with the un-subpoena-able power of “attorney-client privilege.”

EDITED TO ADD (10/22): This talk from Defcon this year is related.

Posted on October 21, 2007 at 6:39 AM · 27 Comments

"Conceptual Terrorists Encase Sears Tower In Jell-O"

From The Onion:

“Your outdated ideas of what terrorism is have been challenged,” an unidentified, disembodied voice announces following the video’s first 45 minutes of random imagery set to minimalist techno music. “It is not your simple bourgeois notion of destructive explosions and weaponized biochemical agents. True terror lies in the futility of human existence.”

[…]

While officials have yet to determine the purpose of the attack, a number of potential theories have emerged, including the sudden deregulation of the U.S. economy, the destruction of culturally significant landmarks, and maybe the fact that man, in his essence, is no more than a collection of irrational fragments, incapable of finding reason where no reason exists.

Posted on October 20, 2007 at 10:50 AM · 13 Comments

New TSA Report

A classified 2006 TSA report on airport security has been leaked to USA Today. (Other papers are covering the story, but their articles seem to be all derived from the original USA Today article.)

There’s good news:

This year, the TSA for the first time began running covert tests every day at every checkpoint at every airport. That began partly in response to the classified TSA report showing that screeners at San Francisco International Airport were tested several times a day and found about 80% of the fake bombs.

Constant testing makes screeners “more suspicious as well as more capable of recognizing (bomb) components,” the report said. The report does not explain the high failure rates but said O’Hare’s checkpoints were too congested and too wide for supervisors to monitor screeners.

At San Francisco, “everybody realizes they are under scrutiny, being watched and tested constantly,” said Gerald Berry, president of Covenant Aviation Security, which hires and manages the San Francisco screeners. San Francisco is one of eight airports, most of them small, where screeners work for a private company instead of the TSA. The idea for constant testing came from Ed Gomez, TSA security director at San Francisco, Berry said. The tests often involve an undercover person putting a bag with a fake bomb on an X-ray machine belt, he said.

Repeated testing is good, for a whole bunch of reasons.

There’s bad news:

Howe said the increased difficulty explains why screeners at Los Angeles and Chicago O’Hare airports failed to find more than 60% of fake explosives that TSA agents tried to get through checkpoints last year.

The failure rates—about 75% at Los Angeles and 60% at O’Hare—are higher than some tests of screeners a few years ago and equivalent to other previous tests.

Sure, the tests are harder. But those are miserable numbers.

And there’s unexplainable news:

At San Diego International Airport, tests are run by passengers whom local TSA managers ask to carry a fake bomb, said screener Cris Soulia, an official in a screeners union.

Someone please tell me this doesn’t actually happen. “Hi Mr. Passenger. I’m a TSA manager. You know I’m not lying to you because of this official-looking laminated badge I have. We need you to help us test airport security. Here’s a ‘fake’ bomb that we’d like you to carry through security in your luggage. Another TSA manager will, um, meet you at your destination. Give the fake bomb to him when you land. And, by the way, what’s your mother’s maiden name?”

How in the world is this a good idea? And how hard is it to dress real TSA managers up like vacationers?

EDITED TO ADD (10/24): Here’s a story of someone being asked to carry an item through airport security at Dulles Airport.

EDITED TO ADD (10/26): TSA claims that this doesn’t happen:

TSA officials do not ask random passengers to carry fake bombs through checkpoints for testing at San Diego International Airport, or any other airport.

[…]

TSA Traveler Alert: If approached by anyone claiming to be a TSA employee asking you to take something through the checkpoint, please contact a uniformed TSA employee at the checkpoint or a law enforcement officer immediately.

Is there anyone else who has had this happen to them?

Posted on October 19, 2007 at 2:37 PM · 68 Comments

Cheating in Online Poker

Fascinating story of insider cheating:

Some opponents became suspicious of how a certain player was playing. He seemed to know what the opponents’ hole cards were. The suspicious players provided examples of these hands, which were so outrageous that virtually all serious poker players were convinced that cheating had occurred. One of the players who’d been cheated requested that Absolute Poker provide hand histories from the tournament (which is standard practice for online sites). In this case, Absolute Poker “accidentally” did not send the usual hand histories, but instead sent a file that contained all sorts of private information that the poker site would never release. The file contained every player’s hole cards, observations of the tables, and even the IP addresses of every person playing. (I put “accidentally” in quotes because the mistake seems like too great a coincidence when you learn what followed.) I suspect that someone at Absolute knew about the cheating and how it happened, and was acting as a whistleblower by sending these data. If that is the case, I hope whoever “accidentally” sent the file gets their proper hero’s welcome in the end.

Then the poker players went to work analyzing the data—not the hand histories themselves, but other, more subtle information contained in the file. What these players-turned-detectives noticed was that, starting with the third hand of the tournament, there was an observer who watched every subsequent hand played by the cheater. (For those of you who don’t know much about online poker, anyone who wants can observe a particular table, although, of course, the observers can’t see any of the players’ hole cards.) Interestingly, the cheater folded the first two hands before this observer showed up, then did not fold a single hand before the flop for the next 20 minutes, and then folded his hand pre-flop when another player had a pair of kings as hole cards! This sort of cheating went on throughout the tournament.

So the poker detectives turned their attention to this observer. They traced the observer’s IP address and account name to the same set of servers that host Absolute Poker, and also, apparently, to a particular individual named Scott Tom, who seems to be a part-owner of Absolute Poker! If all of this is correct, it shows exactly how the cheating would have transpired: an insider at the Web site had real-time access to all of the hole cards (it is not hard to believe that this capability would exist) and was relaying this information to an outside accomplice.

More details here.

EDITED TO ADD (10/20): More information.

EDITED TO ADD (11/13): This graph of players’ river aggression is a great piece of evidence. Note the single outlying point.
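A single outlying point on a graph like that is exactly what a crude z-score test surfaces: compute each player’s aggression rate, and flag anyone several standard deviations above the field. A sketch with invented player names and rates, not the actual tournament data:

```python
import statistics

def outliers(name_to_rate, z_cutoff=3.0):
    """Flag players whose rate sits more than z_cutoff sample standard
    deviations above the population mean."""
    rates = list(name_to_rate.values())
    mu = statistics.mean(rates)
    sigma = statistics.stdev(rates)
    return [name for name, r in name_to_rate.items()
            if (r - mu) / sigma > z_cutoff]

# Fifty ordinary players clustered around a 30-34% aggression rate...
rates = {f"player{i}": 0.30 + 0.01 * (i % 5) for i in range(50)}
# ...and one account that plays as if it can see everyone's hole cards.
rates["suspect"] = 0.95  # invented, cheater-like rate
assert outliers(rates) == ["suspect"]
```

No statistical test proves cheating by itself, but it tells the detectives exactly whose hands to go read.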

Posted on October 19, 2007 at 11:44 AM

Hacking of 911 Emergency Phone System

There are no details of what the “hacking” was, or whether it was anything more than spoofing the Caller ID:

Randal T. Ellis, 19, allegedly impersonated a caller from the Lake Forest home shortly before midnight March 29, saying he had murdered someone in the house and threatened to shoot others.

Allegedly hacking into systems maintained by America Online and Verizon, Ellis used the couple’s names, which he had confirmed earlier in a prank call to their home, authorities said.

[…]

Authorities spent more than six months tracking down Ellis before arresting him in Mukilteo last week. He was in the process of being extradited to California on Tuesday and was charged with “false imprisonment by violence” and “assault with an assault weapon by proxy.” The crimes carry a possible prison sentence of 18 years.

Elizabeth Henderson, the assistant Orange County district attorney in charge of the economic-crimes unit, said Ellis’ scheme was “fairly difficult to unravel.”

Some more stories, with no more information.

Posted on October 19, 2007 at 6:36 AM · 34 Comments

Chemical Plant Security and Externalities

It’s not true that no one worries about terrorists attacking chemical plants; it’s just that our politics seem to leave us unable to deal with the threat.

Toxins such as ammonia, chlorine, propane and flammable mixtures are constantly being produced or stored in the United States as a result of legitimate industrial processes. Chlorine gas is particularly toxic; in addition to bombing a plant, someone could hijack a chlorine truck or blow up a railcar. Phosgene is even more dangerous. According to the Environmental Protection Agency, there are 7,728 chemical plants in the United States where an act of sabotage—or an accident—could threaten more than 1,000 people. Of those, 106 facilities could threaten more than a million people.

The problem of securing chemical plants against terrorism—or even accidents—is actually simple once you understand the underlying economics. Normally, we leave the security of something up to its owner. The basic idea is that the owner of each chemical plant 1) best understands the risks, and 2) is the one who loses out if security fails. Any outsider—i.e., regulatory agency—is just going to get it wrong. It’s the basic free-market argument, and in most instances it makes a lot of sense.

And chemical plants do have security. They have fences and guards (which might or might not be effective). They have fail-safe mechanisms built into their operations. For example, many large chemical companies use hazardous substances like phosgene, methyl isocyanate and ethylene oxide in their plants, but don’t ship them between locations. They minimize the amounts that are stored as process intermediates. In rare cases of extremely hazardous materials, no significant amounts are stored; instead they are only present in pipes connecting the reactors that make them with the reactors that consume them.

This is all good and right, and what free-market capitalism dictates. The problem is, that isn’t enough.

Any rational chemical plant owner will only secure the plant up to its value to him. That is, if the plant is worth $100 million, then it makes no sense to spend $200 million on securing it. If the odds of it being attacked are less than 1 percent, it doesn’t even make sense to spend $1 million on securing it. The math is more complicated than this, because you have to factor in such things as the reputational cost of having your name splashed all over the media after an incident, but that’s the basic idea.

But to society, the cost of an actual attack can be much, much greater. If a terrorist blows up a particularly toxic plant in the middle of a densely populated area, deaths could be in the tens of thousands and damage could be in the hundreds of millions. Indirect economic damage could be in the billions. The owner of the chlorine plant would pay none of these potential costs.

Sure, the owner could be sued. But he’s not at risk for more than the value of his company, and—in any case—he’d probably be smarter to take the chance. Expensive lawyers can work wonders, courts can be fickle, and the government could step in and bail him out (as it did with airlines after Sept. 11). And a smart company can often protect itself by spinning off the risky asset in a subsidiary company, or selling it off completely. The overall result is that our nation’s chemical plants are secured to a much smaller degree than the risk warrants.

In economics, this is called an externality: an effect of a decision not borne by the decision maker. The decision maker in this case, the chemical plant owner, makes a rational economic decision based on the risks and costs to him.
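The externality is easy to put numbers on (every figure below is invented, matching the rough magnitudes in the essay): a rational actor spends at most the expected loss, and the owner’s expected loss is a small fraction of society’s.

```python
def rational_security_spend(attack_probability, loss_if_attacked):
    """Upper bound on what a rational actor spends on security:
    the expected loss (probability times damage)."""
    return attack_probability * loss_if_attacked

p = 0.01                      # invented probability of attack
owner_loss = 100_000_000      # the plant's value to its owner
society_loss = 5_000_000_000  # invented: deaths, cleanup, indirect damage

owner_cap = rational_security_spend(p, owner_loss)      # $1 million
society_cap = rational_security_spend(p, society_loss)  # $50 million
assert society_cap == 50 * owner_cap
```

Every dollar between the owner’s cap and society’s cap is security nobody has an incentive to buy, and that gap is what the three policy options in the essay are different ways of paying for.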

If we—whether we’re the community living near the chemical plant or the nation as a whole—expect the owner of that plant to spend money for increased security to account for those externalities, we’re going to have to pay for it. And we have three basic ways of doing that. One, we can do it ourselves, stationing government police or military or contractors around the chemical plants. Two, we can pay the owners to do it, subsidizing some sort of security standard.

Or three, we could regulate security and force the companies to pay for it themselves. There’s no free lunch, of course. “We,” as in society, still pay for it in increased prices for whatever the chemical plants are producing, but the cost is paid for by the product’s consumers rather than by taxpayers in general.

Personally, I don’t care very much which method is chosen: that’s politics, not security. But I do know we’ll have to pick one, or some combination of the three. Asking nicely just isn’t going to work. It can’t; not in a free-market economy.

We taxpayers pay for airport security, and not the airlines, because the overall effects of a terrorist attack against an airline are far greater than their effects to the particular airline targeted. We pay for port security because the effects of bringing a large weapon into the country are far greater than the concerns of the port’s owners. And we should pay for chemical plant, train and truck security for exactly the same reasons.

Thankfully, after years of hoping the chemical industry would do it on its own, this April the Department of Homeland Security started regulating chemical plant security. Some complain that the regulations don’t go far enough, but at least it’s a start.

This essay previously appeared on Wired.com.

Posted on October 18, 2007 at 7:26 AM · 59 Comments

Future of Malware

Excellent three-part series on trends in criminal malware:

When Jackson logged in, the genius of 76service became immediately clear. 76service customers weren’t paying for already-stolen credentials. Instead, 76service sold subscriptions or “projects” to Gozi-infected machines. Usually, projects were sold in 30-day increments because that’s a billing cycle, enough time to guarantee that the person who owns the machine with Gozi on it will have logged in to manage their finances, entering data into forms that could be grabbed.

Subscribers could log in with their assigned user name and password any time during the 30-day project. They’d be met with a screen that told them which of their bots was currently active, and a side bar of management options. For example, they could pull down the latest drops—data deposits that the Gozi-infected machines they subscribed to sent to the servers, like the 3.3 GB one Jackson had found.

A project was like an investment portfolio. Individual Gozi-infected machines were like stocks and subscribers bought a group of them, betting they could gain enough personal information from their portfolio of infected machines to make a profit, mostly by turning around and selling credentials on the black market. (In some cases, subscribers would use a few of the credentials themselves).

Some machines, like some stocks, would underperform and provide little private information. But others would land the subscriber a windfall of private data. The point was to subscribe to several infected machines to balance that risk, the way Wall Street fund managers invest in many stocks to offset losses in one company with gains in another.

[…]

That’s why the subscription prices were steep. “Prices started at $1,000 per machine per project,” says Jackson. With some tinkering and thanks to some loose database configuration, Jackson gained a view into other people’s accounts. He mostly saw subscriptions that bought access to only a handful of machines, rarely more than a dozen.

The $1K figure was for “fresh bots”—new infections that hadn’t been part of a project yet. Used bots that were coming off an expired project were available, but worth less (and thus, cost less) because of the increased likelihood that personal information gained from that machine had already been sold. Customers were urged to act quickly to get the freshest bots available.

This was another advantage for the seller. Providing the self-service interface freed up the sellers to create ancillary services. 76service was extremely customer-focused. “They were there to give you services that made it a good experience,” Jackson says. You want us to clean up the reports for you? Sure, for a small fee. You want a report on all the credentials from one bank in your drop? Hundred bucks, please. For another $150 a month, we’ll create secure remote drops for you. Alternative packaging and delivery options? We can do that. Nickel and dime. Nickel and dime.

And about banks not caring:

As much as the HangUp Team has relied on distributed pain for its success, financial institutions have relied on transferred risk to keep the Internet crime problem from becoming a consumer cause and damaging their businesses. So far, it has been cheaper to follow regulations enough to pass audits and then pay for the fraud rather than implement more serious security. “If you look at the volume of loss versus revenue, it’s not horribly bad yet,” says Chris Hoff, with a nod to the criminal hacker’s strategy of distributed pain. “The banks say, ‘Regulations say I need to do these seven things, so I do them and let’s hope the technology to defend against this catches up.'”

“John,” the security executive at the bank and one of the only security professionals from financial services who agreed to speak for this story, says: “If you audited a financial institution, you wouldn’t find many out of compliance. From a legal perspective, banks can spin that around and say there’s nothing else we could do.”

The banks know how much data Lance James at Secure Science is monitoring; some of them are his clients. The researcher with expertise on the HangUp Team calls consumers’ ability to transfer funds online “the dumbest thing I’ve ever seen. You can’t walk into the branch of a bank with a mask on and no ID and make a transfer. So why is it okay online?”

And yet banks push online banking to customers with one hand while the other hand pushes problems like Gozi away, into acceptable loss budgets and insurance—transferred risk.

As long as consumers don’t raise a fuss, and thus far they haven’t in any meaningful way, the banks have little to fear from their strategies.

But perhaps the only reason consumers don’t raise a fuss is because the banks have both overstated the safety and security of online banking and downplayed negative events around it, like the existence of Gozi and 76service.

The whole thing is worth reading.

Posted on October 17, 2007 at 1:07 PM27 Comments

Hacker Firefox Extensions

Have fun:

If I could only install one “offensive” extension, it would absolutely be Tamper Data. In the past, I used Paros Proxy and Burp Suite for intercepting requests and responses between my Web browser and the Web server. These tasks can now be done within Firefox via Tamper Data—without configuring the proxy settings.

If the Website you’re trying to break into requires a unique cookie, referrer, or user-agent, intercept the request with Tamper Data before it gets sent to the Web server. Then, add or modify the attributes you need and send it on. It’s even possible to modify the response from the Web server before the Web browser interprets it. It’s a very nice tool for anyone interested in Web application security.

Paros and Burp both have features not yet available in Tamper Data, such as site spidering and vulnerability scanning. Switching over to one of them as a proxy is much easier with SwitchProxy, which helps you quickly configure Firefox to use Paros or Burp. It’s not a purely “offensive” extension, but SwitchProxy makes configuring proxies for Firefox much quicker.
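The kind of request tampering Tamper Data enables interactively can be sketched in a few lines of Python. This is not Tamper Data itself, just the same idea done by hand; the URL and header values are placeholders, not a real target:

```python
# Sketch: manually setting the headers a site checks, the same idea
# Tamper Data applies interactively inside Firefox. The URL, cookie,
# referrer, and user-agent values below are all invented placeholders.
import urllib.request

req = urllib.request.Request("http://example.com/admin")
req.add_header("Cookie", "session=guessed-or-captured-value")  # required cookie
req.add_header("Referer", "http://example.com/login")          # expected referrer
req.add_header("User-Agent", "InternalAuditTool/1.0")          # expected user-agent

# A real run would send it with urllib.request.urlopen(req);
# here we just show that every attribute is under our control.
print(req.get_header("User-agent"))
```

The point is simply that nothing in HTTP binds these fields to reality: the server sees whatever the client chooses to send.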

Posted on October 17, 2007 at 6:06 AM25 Comments

Security Risks of Online Political Contributing

Security researcher Christopher Soghoian gave a presentation this month warning of the potential phishing risk caused by online political donation sites. The Threat Level blog reported:

The presidential campaigns’ tactic of relying on impulsive giving spurred by controversial news events and hyped-up deadlines, combined with a number of other factors such as inconsistent Web addresses and a muddle of payment mechanisms creates a conducive environment for fraud, says Soghoian.

“Basically, the problem here is that banks are doing their best to promote safe online behavior, but the political campaigns are taking advantage of the exact opposite,” he says. “They send out one million e-mails to people designed to encourage impulsive behavior.”

He characterizes the current state of security of the presidential campaigns’ online payment systems as a “mess.”

“It’s a disaster waiting to happen,” he says.

Fraudsters could easily establish Web sites that mimic the official campaigns’ sites and send out e-mails encouraging people to “donate” money without checking the authenticity of the site.

He has a point, but it’s not new to online contributions. Fake charities and political organizations have long been problems. When you get a solicitation in the mail for “Concerned Citizens for a More Perfect Country”—insert whatever personal definition you have for “more perfect” and “country”—you don’t know if the money is going to your cause or into someone’s pocket. When you give money on the street to someone soliciting contributions for this cause or that one, you have no idea what will happen to the money at the end of the day.

In the end, contributing money requires trust. While the Internet certainly makes frauds like this easier—anyone can set up a webpage that accepts PayPal and send out a zillion e-mails—it’s nothing new.

Posted on October 16, 2007 at 12:20 PM13 Comments

Security Risks of Wholesale Telephone Eavesdropping

A handful of prominent security researchers have published a report on the security risks of the large-scale eavesdropping made temporarily legal by the “Protect America Act” passed in the U.S. in August, and which may be made permanently legal soon. “Risking Communications Security: Potential Hazards of the ‘Protect America Act’“—dated October 1, 2007, and marked “draft”—is well worth reading:

The civil-liberties concern is whether the new law puts Americans at risk of spurious—and invasive—surveillance by their own government. The security concern is whether the new law puts Americans at risk of illegitimate surveillance by others. We focus on security. How will the collection system determine that communications have one end outside the United States? How will the surveillance be secured? We examine the risks and put forth recommendations to address them.

Not surprisingly, the risks are considerable. And difficult to address.

We see three serious security risks that have not been adequately addressed (or perhaps not even addressed at all): the danger that the system can be exploited by unauthorized users, the danger of criminal misuse by a trusted insider, and the danger of misuse by the U.S. government. Our recommendations are based on these concerns.

The group has two basic recommendations: data minimization, and oversight:

Minimization is critical. Allowing collection of calls on U.S. territory necessarily entails greater access to the communications of U.S. persons; the architecture must minimize collection of both the call details and the content of these communications. The best way to prevent problems is to intercept as early as possible: at the cableheads; such a solution, by decreasing the number of interception points, will simplify the security problem. Surveilling at the cableheads will help minimize collection, but it is not sufficient. Intercepted traffic should be studied (by geo-location and any other available techniques) to determine whether it comes from non-targeted U.S. persons and, if so, discarded before any further processing is done.

[…]

Oversight is necessary to prevent abuse and ensure information assurance. Independent oversight of operations is also essential and is a fundamental tenet of security. To assure independence the overseeing authority should be as far removed from the intercepting authority as practical.

More in the report, of course.
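The minimization recommendation—discard traffic between non-targeted U.S. persons before any further processing—reduces to a filter at the collection point. A toy sketch, where the “geolocation” is a stand-in for the real geo-IP databases and routing metadata an actual system would use:

```python
# Toy minimization filter: drop any intercepted record whose endpoints
# are both inside the U.S., before any further processing happens.
# The prefix table is a hypothetical stand-in for real geo-location data.
US_PREFIXES = {"12.", "64.", "128."}   # placeholder "U.S." address prefixes

def is_us(addr: str) -> bool:
    return any(addr.startswith(p) for p in US_PREFIXES)

def minimize(intercepts):
    """Keep only records with at least one non-U.S. endpoint."""
    return [rec for rec in intercepts
            if not (is_us(rec["src"]) and is_us(rec["dst"]))]

records = [
    {"src": "12.1.1.1", "dst": "64.2.2.2"},     # U.S. <-> U.S.: discard
    {"src": "12.1.1.1", "dst": "203.0.113.5"},  # U.S. <-> foreign: keep
]
print(minimize(records))
```

The hard part, as the report notes, is that the filter has to run correctly inside the interception architecture itself—there is no later stage at which discarding can substitute for it.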

EDITED TO ADD (2/4/08): Here’s the final report.

Posted on October 16, 2007 at 7:07 AM29 Comments

Merchants Not Storing Credit Card Data

Now this is a good idea:

In a letter sent Thursday to the Payment Card Industry (PCI) Security Standards Council, the group responsible for setting data-security guidelines for merchants and vendors, the National Retail Federation requested that member companies be allowed to instead keep only the authorization code and a truncated receipt, the NRF said in a statement.

Erasing the data is the easiest way to secure it from theft. But, of course, the issue is more complicated than that, and there’s lots of politics. See the article for details.

Posted on October 15, 2007 at 2:05 PM

More Behavioral Profiling

I’ve seen several articles based on this press release:

Computer and behavioral scientists at the University at Buffalo are developing automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioral indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act.

I am generally in favor of funding all sorts of research, no matter how outlandish—you never know when you’ll discover something really good—and I am generally in favor of this sort of behavioral assessment profiling.

But I wish reporters would approach these topics with something resembling skepticism. The false-positive rate matters far more than the false-negative rate, and I doubt something like this will be ready for fielding any time soon.
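Why the false-positive rate dominates is just Bayes’ theorem. A back-of-the-envelope calculation with made-up but generous numbers—one terrorist per ten million people, a 99-percent-accurate system—shows the problem:

```python
# Base-rate arithmetic: even a 99%-accurate detector looking for a
# one-in-ten-million event produces almost entirely false alarms.
# All numbers here are illustrative, not from the Buffalo research.
base_rate = 1 / 10_000_000   # prior probability a person is a terrorist
sensitivity = 0.99           # P(flagged | terrorist)
false_positive = 0.01        # P(flagged | innocent)

p_flag = sensitivity * base_rate + false_positive * (1 - base_rate)
p_terrorist_given_flag = sensitivity * base_rate / p_flag
print(f"P(terrorist | flagged) = {p_terrorist_given_flag:.6f}")
```

With these assumptions, roughly one flagged person in a hundred thousand is an actual terrorist—everyone else is an innocent traveler pulled aside.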

EDITED TO ADD (10/13): Another comment.

Posted on October 15, 2007 at 6:16 AM27 Comments

Master Forger Sentenced in the UK

Fascinating:

Magic fingers and an unerring eye gave “Hologram Tam,” one of the best forgers in Europe, the skills to produce counterfeit banknotes so authentic that when he was arrested nearly £700,000 worth were in circulation.

Thomas McAnea, 58, who was jailed for six years and four months yesterday, was the kingpin of a professional operation based in Glasgow that, according to police, had the capacity to produce £2 million worth of fake notes a day—enough potentially to destabilise the British economy. More may remain out there undetected.

[…]

“Some of Hologram Tam’s money is still out there. It’s that good that if I gave you one of his notes, you wouldn’t know it,” a police source said.

The detectives also found templates for other forgeries including passports, driving licences, ID cards, bank statements, utility bills, MoT certificates, postage and saving stamps and TV licences.

Posted on October 12, 2007 at 11:34 AM22 Comments

Another Movie-Plot Threat: Poison Gumballs

This is too funny:

Fear that terrorists could poison children has led three Dover aldermen to begin inspecting gumball machines.

They’ve surveyed 103 machines in the Morris County town and expect to report their results on New Year’s Day.

Aldermen Frank Poolas, Jack Delaney and Michael Picciallo have found 100 unlicensed machines filled with gumballs, jawbreakers and other candies. The three feel they’re ripe for terrorists to lace with poisoned products.

Here’s another article.

This is simply too stupid for words.

Posted on October 12, 2007 at 6:40 AM58 Comments

OnStar to Stop Cars Remotely

I’m not sure this is a good idea:

Starting with about 20 models for 2009, the service will be able to slowly halt a car that is reported stolen, and the radio may even speak up and tell the thief to pull over because police are watching.

[…]

Then, if officers see the car in motion and judge it can be stopped safely, they can tell OnStar operators, who will send the car a signal via cell phone to slow it to a halt.

“This technology will basically remove the control of the horsepower from the thief,” Huber said. “Everything else in the vehicle works. The steering works. The brakes work.”

GM is still exploring the possibility of having the car give a recorded verbal warning before it stops moving. A voice would tell the driver through the radio speakers that police will stop the car, Huber said, and the car’s emergency flashers would go on.

Anyone want to take a guess on how soon this system will be hacked?

At least, for now, you can opt out:

Those who want OnStar but don’t like police having the ability to slow down their car can opt out of the service, Huber said. But he said their research shows that 95 percent of subscribers would like that feature.

This is a tough trade-off. Giving the good guys the ability to disable a car, as long as it can be done safely, is a good idea. But giving the bad guys the same ability is a really bad idea. Can we do the former without also doing the latter?

Posted on October 11, 2007 at 1:56 PM71 Comments

UK Police Can Now Demand Encryption Keys

Under a new law that went into effect this month, it is now a crime to refuse to turn a decryption key over to the police.

I’m not sure of the point of this law. Certainly it will have the effect of spooking businesses, who now have to worry about the police demanding their encryption keys and exposing their entire operations.

Cambridge University security expert Richard Clayton said in May of 2006 that such laws would only encourage businesses to house their cryptography operations out of the reach of UK investigators, potentially harming the country’s economy. “The controversy here [lies in] seizing keys, not in forcing people to decrypt. The power to seize encryption keys is spooking big business,” Clayton said.

“The notion that international bankers would be wary of bringing master keys into UK if they could be seized as part of legitimate police operations, or by a corrupt chief constable, has quite a lot of traction,” he added. “With the appropriate paperwork, keys can be seized. If you’re an international banker you’ll plonk your headquarters in Zurich.”

But if you’re guilty of something that can only be proved by the decrypted data, you might be better off refusing to divulge the key (and facing the maximum five-year penalty the statute provides) instead of being convicted of whatever more serious charge you’re actually guilty of.

I think this is just another skirmish in the “war on encryption” that has been going on for the past fifteen years. (Anyone remember the Clipper chip?) The police have long maintained that encryption is an insurmountable obstacle to law and order:

The Home Office has steadfastly proclaimed that the law is aimed at catching terrorists, pedophiles, and hardened criminals—all parties that the UK government contends are rather adept at using encryption to cover up their activities.

We heard the same thing from FBI Director Louis Freeh in 1993. I called them “The Four Horsemen of the Information Apocalypse”—terrorists, drug dealers, kidnappers, and child pornographers—and they have been used to justify all sorts of new police powers.

Posted on October 11, 2007 at 6:40 AM89 Comments

Shoe Scanners at the Orlando Airport

I flew through Orlando today, and saw an automatic shoe-scanner in the lane for Clear passengers.

Poking around on the TSA website, I found this undated page. It seems they didn’t pass the TSA tests, and will be discontinued:

The shoe scanning feature on the machine presented for testing on August 20 does not meet minimum detection standards. While significant improvements were made (in fact, a new machine was submitted), the shoe scanner still does not meet standards to ensure detection of explosives.

GE’s been apprised of these results and TSA and GE have agreed to continue working together. TSA and its partners at the laboratory stand ready to further test the GE shoe scanner feature upon completion of additional detection capability enhancements to meet the agreed upon security requirements.

The machine currently in use in Orlando does not meet minimum detection standards and several additional security measures are required by TSA to mitigate the shortfalls of the shoe scanner feature. Accordingly, the prototype shoe scanner used in Orlando will be discontinued, effective October 10. It had been hoped that an acceptable scanner would be available, but given that the lab prototype does not meet all standards, TSA will not authorize the shoe scanner feature for security purposes in any of the airports where it is currently deployed and awaiting use. The GE Kiosks may be used to read biometric cards associated with the Registered Traveler program but will not provide a security benefit.

Posted on October 10, 2007 at 4:02 PM22 Comments

Directed Acyclic Graphs for Crypto Algorithms

Maybe this work on directed acyclic graphs is a bit too geeky for the blog, but I think it’s interesting.

The idea of drawing cipher DAGs certainly isn’t new; DAGs are common in cryptographic research and even more common in cryptographic education. What’s new here is the level of automation, minimizing the amount of cipher-specific effort required to build a DAG from a cipher (starting from a typical reference implementation in C or C++) and to visualize the DAG.

My tools are only prototypes at this point. I’m planning to put a cipherdag package online, but I haven’t done so yet, and I certainly can’t claim that the tools have saved time in cryptanalysis. But I think that the tools will save time in cryptanalysis, automating several tedious tasks that today are normally done by hand.
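A cipher DAG is just the data-flow graph of the cipher’s operations. A minimal hand-built example for one toy round—not the automated extraction the tools above perform, just an illustration of the structure they produce:

```python
# Hand-built DAG for a toy round: out = ((x ^ k) + y) <<< 3.
# Nodes are operations, edges point from each input to the operation
# that consumes it. The real tools extract such graphs automatically
# from a C reference implementation; this is only a sketch.
dag = {
    "x":   [],            # inputs have no predecessors
    "k":   [],
    "y":   [],
    "xor": ["x", "k"],    # x ^ k
    "add": ["xor", "y"],  # (x ^ k) + y
    "rot": ["add"],       # rotate left by 3
}

def topo_order(graph):
    """Evaluation order: every node appears after all of its inputs."""
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for pred in graph[n]:
            visit(pred)
        order.append(n)
    for node in graph:
        visit(node)
    return order

order = topo_order(dag)
print(order)
```

A topological order like this is exactly what a visualizer or cryptanalysis tool walks when it lays out or analyzes the cipher.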

Posted on October 10, 2007 at 2:59 PM29 Comments

Cheap Cell Phone Jammer

Only $166. It’s the size of a cell phone, has a 5-10 meter range, and blocks GSM 850, 900, 1800, and 1900 MHz.

I want one.

Pity they’re illegal to use in the U.S.:

In the United States, United Kingdom, Australia and many other countries, blocking cell-phone services (as well as any other electronic transmissions) is against the law. In the United States, cell-phone jamming is covered under the Communications Act of 1934, which prohibits people from “willfully or maliciously interfering with the radio communications of any station licensed or authorized” to operate. In fact, the “manufacture, importation, sale or offer for sale, including advertising, of devices designed to block or jam wireless transmissions is prohibited” as well.

EDITED TO ADD (10/12): Here’s an even cheaper model. I’ve been told that Deal Extreme ships the unit with a label that says it’s a LED flashlight—with a value of HKD 45—so it will just slip through customs.

EDITED TO ADD (11/6): A video demo.

Posted on October 10, 2007 at 6:38 AM161 Comments

Mesa Airlines Destroys Evidence

How not to delete evidence. First, do something bad. Then, try to delete the data files that prove it. Finally, blame it on adult content.

Hawaiian alleged that Murnane—who was placed on a 90-day leave by Mesa’s board last week—deleted hundreds of pages of computer records that would have shown that Mesa misappropriated the Hawaiian information.

But Mesa says any deletion was not intentional and they have copies of the deleted files.

“He (Murnane) was cruising on adult Web sites,” said Mesa attorney Max Blecher in a court hearing yesterday. Murnane was just trying to delete the porn sites, he said.

EDITED TO ADD (11/6): In the aftermath, the CFO got fired and Mesa got hit with an $80 million judgment. Ouch.

Posted on October 9, 2007 at 2:02 PM14 Comments

Burmese Government Seizing UN Hard Drives

Wow:

Burma’s ruling junta is attempting to seize United Nations computers containing information on opposition activists in the latest stage of its brutal crackdown on pro-democracy demonstrations, The Times has learnt.

[…]

The discs contain information that could help the dictatorship to identify key members of the opposition movement, many of whom have gone underground. UN staff spent much of the weekend deleting information.

Another reason why law enforcement’s demand that e-mails be traceable is a bad idea.

Posted on October 9, 2007 at 1:14 PM31 Comments

Methanol Fuel Cells on Airplanes

Methanol fuel cells are now allowed on airplanes. This paragraph sums up the inconsistency nicely:

In some sense, though, that’s missing the point. Read the last restriction again. So now, innocuous gels/liquids/shampoos are deemed too hazardous to bring inside the airplane cabin, but a known volatile liquid (however safe it may be) is required to be stored inside your carryon baggage? I’m not criticizing the technology here, but I have a feeling that this DOT logic is going to be questioned repeatedly by frazzled flyers.

Posted on October 9, 2007 at 6:24 AM18 Comments

Weird Terrorist Threat Story from the Raleigh Airport

This is all strange:

In a telephone interview, Fischvogt also told me, “we received word from the pilot about the suspicious activity before the flight landed.” Fischvogt explained that when Flight 518 landed, it sat on the tarmac for 45 minutes before FBI “took jurisdiction,” boarded the plane and arrested two people. DHS and local law enforcement were also present on the tarmac but “FBI took over the site and the situation,” Fischvogt said.

“Wait a minute,” I asked, “The passengers were stuck inside the plane with two bad guys for 45 minutes before law enforcement boarded the aircraft?” I wanted to make sure I heard Fischvogt correctly.

“Yes,” Fischvogt confirmed.

Consider the agencies present 24/7 at the federalized Raleigh-Durham International Airport: FBI, DHS, (TSA & Federal Air Marshal Service), Joint Terrorism Task Force, ICE (Immigrations and Customs Enforcement) and airport police. And yet it took seven law enforcement agencies some forty-five minutes to put a single officer on the plane to counter the threat and secure the aircraft?

My analysis is that the delay was caused by FBI and DHS fighting over who had jurisdiction; protocols covering ‘acts of air piracy’ are a constant source of bickering between the two agencies and have been the subject of at least one DHS Inspector General’s Report.

Of course the threat was a false alarm, but still….

EDITED TO ADD (10/9): Read the comments. The author of this blog seems to be a fear-mongering nutcase. (I should have read more about the source before posting this.)

Posted on October 8, 2007 at 1:56 PM29 Comments

Hacking Security Cameras

Clever:

If you’ve seen a Hollywood caper movie in the last 20 years you know the old video-camera-spoofing trick. That’s where the criminal mastermind taps into a surveillance camera system and substitutes his own video stream, leaving hapless security guards watching an endless loop of absolutely-nothing-happening while the bank robber empties the vault.

Now white-hat hackers have demonstrated a technique that neatly replicates that old standby.

Amir Azam and Adrian Pastor, researchers at London-based security firm ProCheckUp, discovered that they can redirect what video file is played back by an AXIS 2100 surveillance camera, a common industrial security camera that boasts a web interface, allowing guards to monitor a building from anywhere in the world.
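The underlying trick—the web interface chooses which video file to serve based on something the client controls—can be sketched generically. The parameter name and file names below are hypothetical illustrations of the attack class, not the actual ProCheckUp findings:

```python
# Generic sketch of the attack class: a playback endpoint that selects
# the video from a request parameter can be pointed at an
# attacker-supplied file. All names here are invented; this is not
# the specific AXIS 2100 vulnerability.
from urllib.parse import urlencode

camera = "http://camera.example.net"
legit = f"{camera}/playback?" + urlencode({"file": "lobby-live.mjpg"})
spoof = f"{camera}/playback?" + urlencode({"file": "looped-empty-lobby.mjpg"})

# Nothing server-side binds the 'file' parameter to the live feed,
# so the guard's browser plays whatever the request names.
print(legit)
print(spoof)
```

The fix is the usual one: never let an untrusted request parameter select which resource is served without server-side authorization.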

Posted on October 8, 2007 at 6:39 AM27 Comments

200-Meter Tunnel Discovered in Sri Lankan Prison

Wow:

In a startling discovery, officials of the Kalutara Prison on Horana Road have found a tunnel nearly 200 metres long and eight feet below the prison ground leading to the Kalu Ganga complete with electricity and light bulbs, dug by LTTE suspects in custody over a period of one year.

The tunnel was unfinished. And the article fails to answer the most important question about this sort of thing: What did they do with the dirt?

“We also suspect that they would have daubed their bodies with soil and had later washed it away to prevent detection of their clandestine project,” the official said.

I don’t see that method being able to dispose of 200 meters worth of dirt over the course of a year, even assuming a small tunnel.

Posted on October 5, 2007 at 1:47 PM25 Comments

Fraudulent Amber Alerts

Amber Alerts are public notifications broadcast in the first few hours after a child has been abducted. The idea is that if you get the word out quickly, you have a better chance of recovering the child.

There’s an interesting social dynamic here, though. If you issue too many of these, the public starts ignoring them. This is doubly true if the alerts turn out to be false.

That’s why two hoax Amber Alerts in September (one in Miami and the other in North Carolina) are a big deal. And it’s a disturbing trend. Here’s data from 2004:

Out of 233 Amber Alerts issued last year, at least 46 were made for children who were lost, had run away or were the subjects of hoaxes and misunderstandings, according to the Scripps Howard study, which used records from the National Center for Missing and Exploited Children.

Police also violated federal and state guidelines by issuing dozens of vague alerts with little information upon which the public can act. The study found that 23 alerts were issued last year even though police didn’t know the name of the child who supposedly had been abducted. Twenty-five alerts were issued without complete details about the suspect or a description of the vehicle used in the abduction.
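The Scripps Howard numbers above work out to roughly one alert in five being bogus; a one-liner makes the rate explicit:

```python
# Share of 2004 Amber Alerts that were lost children, runaways,
# hoaxes, or misunderstandings, per the Scripps Howard figures.
total, bogus = 233, 46
print(f"{bogus / total:.1%}")  # → 19.7%
```

That is a high enough false-alarm rate to train the public to tune the alerts out.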

Think of it as a denial-of-service attack against the real world.

Posted on October 5, 2007 at 11:00 AM21 Comments

Randomness at Airport Security

Now this seems to be a great idea:

Security officials at Los Angeles International Airport now have a new weapon in their fight against terrorism: complete, baffling randomness. Anxious to thwart future terror attacks in the early stages while plotters are casing the airport, LAX security patrols have begun using a new software program called ARMOR, NEWSWEEK has learned, to make the placement of security checkpoints completely unpredictable. Now all airport security officials have to do is press a button labeled “Randomize,” and they can throw a sort of digital cloak of invisibility over where they place the cops’ antiterror checkpoints on any given day.
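The core of “complete, baffling randomness” is sampling checkpoint locations so that no schedule can be learned by observation. ARMOR actually solves a game-theoretic optimization over weighted targets; the sketch below is only the simplest uniform version, with invented site names:

```python
# Minimal randomized-placement sketch: choose today's checkpoint sites
# at random so surveillance of past days reveals nothing about future
# ones. ARMOR computes game-theoretic weights over targets; the uniform
# choice here is a simplification, and the site names are made up.
import random

SITES = ["Terminal 1", "Terminal 4", "Parking A", "Cargo Road", "Drop-off"]

def todays_checkpoints(k=2, seed=None):
    """Pick k distinct sites for today's checkpoints."""
    rng = random.Random(seed)
    return rng.sample(SITES, k)

print(todays_checkpoints(seed=20071005))
```

The value isn’t in any one day’s placement; it’s that an adversary casing the airport can no longer predict tomorrow’s.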

Posted on October 5, 2007 at 6:52 AM34 Comments

NSA's Public Relations Campaign Targets Reporters

Your tax dollars at work:

Frustrated by press leaks about its most sensitive electronic surveillance work, the secretive National Security Agency convened an unprecedented series of off-the-record “seminars” in recent years to teach reporters about the damage caused by such leaks and to discourage reporting that could interfere with the agency’s mission to spy on America’s enemies.

The half-day classes featured high-ranking NSA officials highlighting objectionable passages in published stories and offering “an innocuous rewrite” that officials said maintained the “overall thrust” of the articles but omitted details that could disclose the agency’s techniques, according to course outlines obtained by The New York Sun.

Posted on October 4, 2007 at 3:11 PM25 Comments

Photo ID Required to Buy Police Uniforms

In California, if you want to buy a police uniform, you’ll need to prove you’re a policeman:

Assembly Bill 1448 by Assemblyman Roger Niello, R-Fair Oaks, makes it a misdemeanor punishable by up to a $1,000 fine for vendors who do not verify the identification of those purchasing law enforcement uniforms. Previous law made it illegal to impersonate police but did not require an ID check at the point of purchase. The measure takes effect Jan. 1.

Niello said AB 1448 is necessary because many law enforcement agencies require officers to purchase uniforms through outside retailers rather than their own departments.

I’ve written a lot about the problem of authenticating uniforms. This isn’t going to solve that problem. But it’s probably a good idea all the same.

Posted on October 4, 2007 at 1:08 PM32 Comments

Remote-Controlled Toys and the TSA

Remote controlled toys are getting more scrutiny:

Airport screeners are giving additional scrutiny to remote-controlled toys because terrorists could use them to trigger explosive devices, the Transportation Security Administration said Monday.

The TSA stopped short of banning the toys in carry-on bags but suggested travelers place them in checked luggage.

Okay, let’s think this through. The one place where you don’t need a modified remote-controlled toy is in the passenger cabin, because you have your hands available to push any required buttons. But a remote-controlled toy in checked luggage, now that’s a clever idea. I put my modified remote-controlled toy bomb in my checked suitcase, and use the controller to detonate it once I’m in the air.

So maybe we want the remote-controlled toy in carry-on luggage, where there’s a greater chance of detecting it (at the security checkpoint). And maybe we want to require the remote controller to be in checked luggage.

Or maybe….

In any case, it’s a great movie plot.

EDITED TO ADD (10/4): Here are two news stories and the DHS press release.

Posted on October 4, 2007 at 10:20 AM42 Comments

The Storm Worm

The Storm worm first appeared at the beginning of the year, hiding in e-mail attachments with the subject line: “230 dead as storm batters Europe.” Those who opened the attachment became infected, their computers joining an ever-growing botnet.

Although it’s most commonly called a worm, Storm is really more: a worm, a Trojan horse and a bot all rolled into one. It’s also the most successful example we have of a new breed of worm, and I’ve seen estimates that between 1 million and 50 million computers have been infected worldwide.

Old style worms—Sasser, Slammer, Nimda—were written by hackers looking for fame. They spread as quickly as possible (Slammer infected 75,000 computers in 10 minutes) and garnered a lot of notice in the process. The onslaught made it easier for security experts to detect the attack, but required a quick response by antivirus companies, sysadmins and users hoping to contain it. Think of this type of worm as an infectious disease that shows immediate symptoms.

Worms like Storm are written by hackers looking for profit, and they’re different. These worms spread more subtly, without making noise. Symptoms don’t appear immediately, and an infected computer can sit dormant for a long time. If it were a disease, it would be more like syphilis, whose symptoms may be mild or disappear altogether, but which will eventually come back years later and eat your brain.

Storm represents the future of malware. Let’s look at its behavior:

  1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.
  2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are C2: command-and-control servers. The rest stand by to receive orders. By only allowing a small number of hosts to propagate the virus and act as command-and-control servers, Storm is resilient against attack. Even if those hosts shut down, the network remains largely intact, and other hosts can take over those duties.
  3. Storm doesn’t cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. This makes it harder to detect, because users and network administrators won’t notice any abnormal behavior most of the time.
  4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. The most common way to disable a botnet is to shut down the centralized control point. Storm doesn’t have a centralized control point, and thus can’t be shut down that way.

    This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn’t show up as a spike. Communications are much harder to detect.

    One standard method of tracking root C2 servers is to put an infected host through a memory debugger and figure out where its orders are coming from. This won’t work with Storm: An infected host may only know about a small fraction of infected hosts—25-30 at a time—and those hosts are an unknown number of hops away from the primary C2 servers.

    And even if a C2 node is taken down, the system doesn’t suffer. Like a hydra with many heads, Storm’s C2 structure is distributed.

  5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called “fast flux.” So even if a compromised host is isolated and debugged, and a C2 server identified through the cloud, by that time it may no longer be active.
  6. Storm’s payload—the code it uses to spread—morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.
  7. Storm’s delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites—anything to entice users to click on a phony link. Storm also started posting blog-comment spam, again trying to trick viewers into clicking infected links. While these sorts of things are pretty standard worm tactics, they do highlight how Storm is constantly shifting at all levels.
  8. The Storm e-mail also changes all the time, leveraging social engineering techniques. There are always new subject lines and new enticing text: “A killer at 11, he’s free at 21 and …,” “football tracking program” on NFL opening weekend, and major storm and hurricane warnings. Storm’s programmers are very good at preying on human nature.
  9. Last month, Storm began attacking anti-spam sites focused on identifying it—spamhaus.org, 419eater and so on—and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy’s reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.
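
The resilience described in point 4 is easy to demonstrate with a toy simulation (a sketch, not Storm's actual protocol): compare a centralized botnet, where every bot reports to one C2 server, against a peer-to-peer overlay where each bot knows only a couple dozen random peers, as Storm's hosts do. Take down the busiest node in each and count how many bots remain reachable. The topology parameters here are illustrative assumptions.

```python
import random
from collections import deque

def reachable(adj, start, removed):
    # BFS over the surviving graph; returns the set of nodes still reachable.
    seen = {start}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in removed and v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def star(n):
    # Centralized C2: every bot talks only to node 0.
    return {0: set(range(1, n)), **{i: {0} for i in range(1, n)}}

def p2p(n, k=25, seed=1):
    # Each bot knows ~k random peers, mimicking Storm's partial peer lists.
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in rng.sample([x for x in range(n) if x != i], k):
            adj[i].add(j)
            adj[j].add(i)
    return adj

n = 1000
results = {}
for name, build in [("star", star), ("p2p", p2p)]:
    adj = build(n)
    hub = max(adj, key=lambda u: len(adj[u]))  # take down the busiest node
    survivors = [u for u in adj if u != hub]
    results[name] = len(reachable(adj, survivors[0], {hub}))
    print(name, results[name])
```

Removing the hub of the star leaves each bot stranded, while the peer-to-peer overlay stays almost fully connected: there is no single point whose loss partitions the network, which is exactly why shutting down a central control point doesn't work against Storm.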

Not that we really have any idea how to mess with Storm. Storm has been around for almost a year, and the antivirus companies are pretty much powerless to do anything about it. Inoculating infected machines individually is simply not going to work, and I can’t imagine forcing ISPs to quarantine infected hosts. A quarantine wouldn’t work in any case: Storm’s creators could easily design another worm—and we know that users can’t keep themselves from clicking on enticing attachments and links.

Redesigning the Microsoft Windows operating system would work, but that’s ridiculous to even suggest. Creating a counterworm would make a great piece of fiction, but it’s a really bad idea in real life. We simply don’t know how to stop Storm, except to find the people controlling it and arrest them.

Unfortunately we have no idea who controls Storm, although there’s some speculation that they’re Russian. The programmers are obviously very skilled, and they’re continuing to work on their creation.

Oddly enough, Storm isn’t doing much, so far, except gathering strength. Aside from continuing to infect other Windows machines and attacking particular sites that are attacking it, Storm has only been implicated in some pump-and-dump stock scams. There are rumors that Storm is leased out to other criminal groups. Other than that, nothing.

Personally, I’m worried about what Storm’s creators are planning for Phase II.

This essay originally appeared on Wired.com.

EDITED TO ADD (10/17): Storm is being partitioned, presumably so parts can be sold off. If that’s true, we should expect more malicious activity out of Storm in the future; anyone buying a botnet will want to use it.

Slashdot thread on Storm.

EDITED TO ADD (10/22): Here’s research that suggests Storm is shrinking.

EDITED TO ADD (10/24): Another article about Storm striking back at security researchers.

Posted on October 4, 2007 at 6:00 AM117 Comments

Government Employee Uses DHS Database to Track Ex-Girlfriend

When you build a surveillance system, you invite trusted insiders to abuse that system:

According to the indictment, Robinson began a relationship with an unidentified woman in 2002 that ended acrimoniously seven months later. After the breakup, federal authorities allege Robinson accessed a government database known as the TECS (Treasury Enforcement Communications System) at least 163 times to track the travel patterns of the woman and her family.

What I want to know is how he got caught. It can be very hard to catch insiders like this; good audit systems are essential, but often overlooked in the design process.
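
A good audit system doesn't need to be elaborate to catch this particular pattern. One minimal sketch (the agent and subject names are hypothetical, and the threshold is an assumed policy choice, not anything from TECS): log every lookup as an (agent, subject) pair and flag any pair whose count exceeds a review threshold. 163 queries against one person's travel records stands out immediately.

```python
from collections import Counter

# Hypothetical audit records: (agent_id, subject_queried) per lookup.
audit_log = (
    [("agent_17", "traveler_A")] * 163   # the pattern alleged in the indictment
    + [("agent_17", "traveler_B")] * 2
    + [("agent_42", "traveler_C")] * 5
)

THRESHOLD = 20  # assumed policy: more lookups of one subject triggers review

counts = Counter(audit_log)
flagged = {pair: n for pair, n in counts.items() if n > THRESHOLD}
print(flagged)  # {('agent_17', 'traveler_A'): 163}
```

The hard part in practice isn't the counting; it's deciding what's anomalous for a given job role, and making sure someone actually reviews the flags. But even a crude threshold like this beats having no audit trail at all.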

Posted on October 3, 2007 at 3:02 PM14 Comments

Blowback from Banning Backpacks

A high school bans backpacks as a security measure. This also includes purses, which inconveniences girls who need to carry menstrual supplies. So now, girls who are carrying purses get asked by police: “Are you on your period?” The predictable uproar follows.

Maybe they should try transparent backpacks or bulletproof backpacks. (If only someone would invent a transparent bulletproof backpack. Then our children would finally be safe!)

Posted on October 3, 2007 at 12:55 PM60 Comments

Latest Terrorist False Alarm: Chili Peppers

In London:

Three streets were closed and people evacuated from the area as the search was carried out. After locating the source at about 7pm, emergency crews smashed their way into the Thai Cottage restaurant in D’Arblay Street only to emerge with a 9lb pot of smouldering dried chillies.

Baffled chef Chalemchai Tangjariyapoon, who had been cooking a spicy dip, was amazed to find himself at the centre of the terror scare.

“We only cook it once a year—it’s a spicy dip with extra hot chillies that are deliberately burned,” he said.

“To us it smells like burned chilli and it is slightly unusual. I can understand why people who weren’t Thai would not know what it was but it doesn’t smell like chemicals. I’m a bit confused.”

Another story.

Were this the U.S., that restaurant would be charged with terrorism, or creating a fake bomb, or anything to make the authorities feel better. On the other hand, at least the cook wasn’t shot.

EDITED TO ADD (10/4): Common sense:

The police spokesman said no arrests were made in the case.

“As far as I’m aware it’s not a criminal offense to cook very strong chili,” he said.

EDITED TO ADD (10/11): The BBC has a recipe, in case you need to create your own chemical weapon scare.

Posted on October 3, 2007 at 10:28 AM47 Comments

Unisys Blamed for DHS Data Breaches

This story has been percolating around for a few days. Basically, Unisys was hired by the U.S. Department of Homeland Security to manage and monitor the department’s network security. After data breaches were discovered, DHS blamed Unisys—and I figured that everyone would be in serious CYA mode and that we’d never know what really happened. But it seems that there was a cover-up at Unisys, and that’s a big deal:

As part of the contract, Unisys, based in Blue Bell, Pa., was to install network-intrusion detection devices on the unclassified computer systems for the TSA and DHS headquarters and monitor the networks. But according to evidence gathered by the House Homeland Security Committee, Unisys’s failure to properly install and monitor the devices meant that DHS was not aware for at least three months of cyber-intrusions that began in June 2006. Through October of that year, Thompson said, 150 DHS computers—including one in the Office of Procurement Operations, which handles contract data—were compromised by hackers, who sent an unknown quantity of information to a Chinese-language Web site that appeared to host hacking tools.

The contractor also allegedly falsely certified that the network had been protected to cover up its lax oversight, according to the committee.

What interests me the most (as someone with a company that does network security management and monitoring) is that there might be some liability here:

“For the hundreds of millions of dollars that have been spent on building this system within Homeland, we should demand accountability by the contractor,” [Congressman] Thompson said in an interview. “If, in fact, fraud can be proven, those individuals guilty of it should be prosecuted.”

And, as an aside, we see how useless certifications can be:

She said that Unisys has provided DHS “with government-certified and accredited security programs and systems, which were in place throughout 2006 and remain so today.”

Posted on October 3, 2007 at 6:50 AM29 Comments

IEDs in Iraq

This article about the arms race between the U.S. military and jihadi Improvised Explosive Device (IED) makers in Iraq illustrates that more technology isn’t always an effective security solution:

Insurgents have deftly leveraged consumer electronics technology to build explosive devices that are simple, cheap and deadly: Almost anything that can flip a switch at a distance can detonate a bomb. In the past five years, bombmakers have developed six principal detonation triggers—pressure plates, cellphones, command wire, low-power radio-controlled, high-power radio-controlled and passive infrared—that have prompted dozens of U.S. technical antidotes, some successful and some not.

[…]

The IED struggle has become a test of national agility for a lumbering military-industrial complex fashioned during the Cold War to confront an even more lumbering Soviet system. “If we ever want to kneecap al-Qaeda, just get them to adopt our procurement system. It will bring them to their knees within a week,” a former Pentagon official said.

[…]

Or, as an officer writing in Marine Corps Gazette recently put it, “The Flintstones are adapting faster than the Jetsons.”

EDITED TO ADD (10/8): That was the introduction. It’s a four-part series: Part 1, Part 2, Part 3, and Part 4.

Posted on October 2, 2007 at 4:23 PM34 Comments

The Economist on Privacy and Surveillance

Great article from The Economist on data collection, privacy, surveillance, and the future.

Here’s the conclusion:

If the erosion of individual privacy began long before 2001, it has accelerated enormously since. And by no means always to bad effect: suicide-bombers, by their very nature, may not be deterred by a CCTV camera (even a talking one), but security wonks say many terrorist plots have been foiled, and lives saved, through increased eavesdropping, computer profiling and “sneak and peek” searches. But at what cost to civil liberties?

Privacy is a modern “right.” It is not even mentioned in the 18th-century revolutionaries’ list of demands. Indeed, it was not explicitly enshrined in international human-rights laws and treaties until after the second world war. Few people outside the civil-liberties community seem to be really worried about its loss now.

That may be because electronic surveillance has not yet had a big impact on most people’s lives, other than (usually) making it easier to deal with officialdom. But with the collection and centralisation of such vast amounts of data, the potential for abuse is huge and the safeguards paltry.

Ross Anderson, a professor at Cambridge University in Britain, has compared the present situation to a “boiled frog”—which fails to jump out of the saucepan as the water gradually heats. If liberty is eroded slowly, people will get used to it. He added a caveat: it was possible the invasion of privacy would reach a critical mass and prompt a revolt.

If there is not much sign of that in Western democracies, this may be because most people rightly or wrongly trust their own authorities to fight the good fight against terrorism, and avoid abusing the data they possess. The prospect is much scarier in countries like Russia and China, which have embraced capitalist technology and the information revolution without entirely exorcising the ethos of an authoritarian state where dissent, however peaceful, is closely monitored.

On the face of things, the information age renders impossible an old-fashioned, file-collecting dictatorship, based on a state monopoly of communications. But imagine what sort of state may emerge as the best brains of a secret police force—a force whose house culture treats all dissent as dangerous—perfect the art of gathering and using information on massive computer banks, not yellowing paper.

Posted on October 2, 2007 at 11:14 AM25 Comments

Staged Attack Causes Generator to Self-Destruct

I assume you’ve all seen the news:

A government video shows the potential destruction caused by hackers seizing control of a crucial part of the U.S. electrical grid: an industrial turbine spinning wildly out of control until it becomes a smoking hulk and power shuts down.

The video, produced for the Homeland Security Department and obtained by The Associated Press on Wednesday, was marked “Official Use Only.” It shows commands quietly triggered by simulated hackers having such a violent reaction that the enormous turbine shudders as pieces fly apart and it belches black-and-white smoke.

The video was produced for top U.S. policy makers by the Idaho National Laboratory, which has studied the little-understood risks to the specialized electronic equipment that operates power, water and chemical plants. Vice President Dick Cheney is among those who have watched the video, said one U.S. official, speaking on condition of anonymity because this official was not authorized to publicly discuss such high-level briefings.

More here. And the video is on CNN.com.

I haven’t written much about SCADA security, except to say that I think the risk is overblown today but is getting more serious all the time—and we need to deal with the security before it’s too late. I didn’t know quite what to make of the Idaho National Laboratory video; it seemed like hype, but I couldn’t find any details. (The CNN headline, “Mouse click could plunge city into darkness, experts say,” was definitely hype.)

Then, I received this anonymous e-mail:

I was one of the industry technical folks the DHS consulted in developing the “immediate and required” mitigation strategies for this problem.

They talked to several industry groups (mostly management not tech folks): electric, refining, chemical, and water. They ignored most of what we said but attached our names to the technical parts of the report to make it look credible. We softened or eliminated quite a few sections that may have had relevance 20 years ago, such as war dialing attacks against modems.

The end product is a work order document from DHS which requires such things as background checks on people who have access to modems and logging their visits to sites with datacom equipment or control systems.

By the way—they were unable to hurt the generator you see in the video but did destroy the shaft that drives it and the power unit. They triggered the event from 30 miles away! Then they extrapolated the theory that a malfunctioning generator can destroy not only generators at the power company but the power glitches on the grid would destroy motors many miles away on the electric grid that pump water or gasoline (through pipelines).

They kept everything very secret (all emails and reports encrypted, high security meetings in DC) until they produced a video and press release for CNN. There was huge concern by DHS that this vulnerability would become known to the bad guys—yet now they release it to the world for their own career reasons. Beyond shameful.

Oh, and they did use a contractor for all the heavy lifting that went into writing/revising the required mitigations document. Could not even produce this work product on their own.

By the way, the vulnerability they hypothesize is completely bogus but I won’t say more about the details. Gitmo is still too hot for me this time of year.

Posted on October 2, 2007 at 6:26 AM60 Comments

TJX Hack Blamed on Poor Encryption

Remember the TJX hack from May 2007?

Seems that the credit card information was stolen by eavesdropping on wireless traffic at two Marshalls stores in Miami. More details from the Canadian privacy commissioner:

“The company collected too much personal information, kept it too long and relied on weak encryption technology to protect it—putting the privacy of millions of its customers at risk,” said Stoddart, who serves as an ombudsman and advocate to protect Canadians’ privacy rights.

[…]

Retail wireless networks collect and transmit data via radio waves so information about purchases and returns can be shared between cash registers and store computers. Wireless transmissions can be intercepted by antennas, and high-power models can sometimes intercept wireless traffic from miles away.

While such data is typically scrambled, Canadian officials said TJX used an encryption method that was outdated and vulnerable. The investigators said it took TJX two years to convert from Wired Equivalent Privacy (WEP) to the more sophisticated Wi-Fi Protected Access (WPA), although many retailers had done so.
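
One of WEP's (Wired Equivalent Privacy's) several weaknesses is easy to show in a few lines. WEP builds its RC4 key by prepending a 24-bit IV to the shared key, and with so few IVs, reuse is inevitable; two packets encrypted under the same IV use the identical keystream, so XORing their ciphertexts cancels the keystream and exposes plaintext structure without recovering the key. This toy sketch (the key and "card data" are made up, and real WEP frames carry more framing than this) demonstrates the effect:

```python
def rc4(key: bytes, n: int) -> bytes:
    # Generate n keystream bytes from RC4, the cipher inside WEP.
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):                        # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv: bytes, key: bytes, plaintext: bytes) -> bytes:
    # WEP prepends the per-packet IV to the shared key; same IV, same keystream.
    ks = rc4(iv + key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

iv, key = b"\x01\x02\x03", b"secretwepkey"     # hypothetical values
c1 = wep_encrypt(iv, key, b"card=4111111111111111")
c2 = wep_encrypt(iv, key, b"card=4000123412341234")

# XOR of the two ciphertexts equals XOR of the plaintexts: the keystream
# cancels, so the shared "card=" prefix shows up as zero bytes.
xored = bytes(a ^ b for a, b in zip(c1, c2))
print(xored[:5])  # b'\x00\x00\x00\x00\x00'
```

Attacks on WEP go much further than this (full key recovery from captured traffic was practical by 2007), which is why sitting on WEP for two years while the industry moved to WPA was such a serious lapse.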

Posted on October 1, 2007 at 2:37 PM26 Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.