October 15, 2007
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0710.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
The Storm worm first appeared at the beginning of the year, hiding in e-mail attachments with the subject line: "230 dead as storm batters Europe." Those who opened the attachment became infected, their computers joining an ever-growing botnet.
Although it's most commonly called a worm, Storm is really more: a worm, a Trojan horse and a bot all rolled into one. It's also the most successful example we have of a new breed of worm, and I've seen estimates that between 1 million and 50 million computers have been infected worldwide.
Old-style worms -- Sasser, Slammer, Nimda -- were written by hackers looking for fame. They spread as quickly as possible (Slammer infected 75,000 computers in 10 minutes) and garnered a lot of notice in the process. The onslaught made it easier for security experts to detect the attack, but required a quick response by antivirus companies, sysadmins, and users hoping to contain it. Think of this type of worm as an infectious disease that shows immediate symptoms.
Worms like Storm are written by hackers looking for profit, and they're different. These worms spread more subtly, without making noise. Symptoms don't appear immediately, and an infected computer can sit dormant for a long time. If it were a disease, it would be more like syphilis, whose symptoms may be mild or disappear altogether, but which will eventually come back years later and eat your brain.
Storm represents the future of malware. Let's look at its behavior:
1. Storm is patient. A worm that attacks all the time is much easier to detect; a worm that attacks and then shuts off for a while hides much more easily.
2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction act as command-and-control (C2) servers; the rest stand by to receive orders. Because only a few hosts propagate the virus or issue commands, Storm is resilient against attack: even if those hosts are shut down, the network remains largely intact, and other hosts can take over their duties.
3. Storm doesn't cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. This makes it harder to detect, because users and network administrators won't notice any abnormal behavior most of the time.
4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. The most common way to disable a botnet is to shut down the centralized control point. Storm doesn't have a centralized control point, and thus can't be shut down that way.
This technique has other advantages, too. Companies that monitor net activity can detect traffic anomalies with a centralized C2 point, but distributed C2 doesn't show up as a spike. Communications are much harder to detect.
One standard method of tracking root C2 servers is to put an infected host through a memory debugger and figure out where its orders are coming from. This won't work with Storm: An infected host may only know about a small fraction of infected hosts -- 25-30 at a time -- and those hosts are an unknown number of hops away from the primary C2 servers.
And even if a C2 node is taken down, the system doesn't suffer. Like a hydra with many heads, Storm's C2 structure is distributed.
5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called "fast flux." So even if a compromised host is isolated and debugged, and a C2 server identified through the cloud, by that time it may no longer be active.
6. Storm's payload -- the code it uses to spread -- morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective.
7. Storm's delivery mechanism also changes regularly. Storm started out as PDF spam, then its programmers started using e-cards and YouTube invites -- anything to entice users to click on a phony link. Storm also started posting blog-comment spam, again trying to trick viewers into clicking infected links. These are pretty standard worm tactics, but they highlight how Storm is constantly shifting at all levels.
8. The Storm e-mail also changes all the time, leveraging social engineering techniques. There are always new subject lines and new enticing text: "A killer at 11, he's free at 21 and ...," "football tracking program" on NFL opening weekend, and major storm and hurricane warnings. Storm's programmers are very good at preying on human nature.
9. Last month, Storm began attacking anti-spam sites focused on identifying it -- spamhaus.org, 419eater and so on -- and the personal website of Joe Stewart, who published an analysis of Storm. I am reminded of a basic theory of war: Take out your enemy's reconnaissance. Or a basic theory of urban gangs and some governments: Make sure others know not to mess with you.
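The resilience described in points 2 and 4 is easy to see in a toy simulation (my own sketch, not Storm's actual protocol): build a random peer-to-peer overlay in which each host knows only a small set of peers, kill off a large fraction of the hosts, and measure how much of the network is still connected.

```python
import random
from collections import deque

def build_overlay(n_hosts, peers_per_host, seed=1):
    # Toy P2P overlay: each host links to a small random set of peers,
    # like the 25-30 neighbors an infected Storm host reportedly tracks.
    rng = random.Random(seed)
    graph = {h: set() for h in range(n_hosts)}
    for h in range(n_hosts):
        others = [x for x in range(n_hosts) if x != h]
        for p in rng.sample(others, peers_per_host):
            graph[h].add(p)
            graph[p].add(h)
    return graph

def largest_component(graph, dead):
    # Size of the biggest cluster of surviving hosts that can still
    # reach one another once the 'dead' hosts are taken down.
    alive = set(graph) - dead
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in graph[node]:
                if nbr in alive and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

graph = build_overlay(n_hosts=1000, peers_per_host=8)
dead = set(random.Random(2).sample(range(1000), 300))  # shut down 30% of hosts
survivors = largest_component(graph, dead)
print(survivors)  # nearly all of the remaining 700 hosts stay connected
```

Even with 30 percent of the hosts gone, the surviving overlay stays almost entirely in one piece -- which is why the decapitation strategies that work against centralized botnets fail here.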
Not that we really have any idea how to mess with Storm. Storm has been around for almost a year, and the antivirus companies are pretty much powerless to do anything about it. Inoculating infected machines individually is simply not going to work, and I can't imagine forcing ISPs to quarantine infected hosts. A quarantine wouldn't work in any case: Storm's creators could easily design another worm -- and we know that users can't keep themselves from clicking on enticing attachments and links.
Redesigning the Microsoft Windows operating system would work, but that's ridiculous to even suggest. Creating a counterworm would make a great piece of fiction, but it's a really bad idea in real life. We simply don't know how to stop Storm, except to find the people controlling it and arrest them.
Unfortunately, we have no idea who controls Storm, although there's some speculation that they're Russian. The programmers are obviously very skilled, and they're continuing to work on their creation.
Oddly enough, Storm isn't doing much, so far, except gathering strength. Aside from continuing to infect other Windows machines and attacking particular sites that are attacking it, Storm has only been implicated in some pump-and-dump stock scams. There are rumors that Storm is leased out to other criminal groups. Other than that, nothing.
Personally, I'm worried about what Storm's creators are planning for Phase II.
This essay originally appeared on Wired.com.
Amber Alerts are general notifications in the first few hours after a child has been abducted. The idea is that if you get the word out quickly, you have a better chance of recovering the child.
There's an interesting social dynamic here, though. If you issue too many of these, the public starts ignoring them. This is doubly true if the alerts turn out to be false.
That's why two hoax Amber Alerts in September (one in Miami and the other in North Carolina) are a big deal. And it's a disturbing trend. Here's data from 2004:
"Out of 233 Amber Alerts issued last year, at least 46 were made for children who were lost, had run away or were the subjects of hoaxes and misunderstandings, according to the Scripps Howard study, which used records from the National Center for Missing and Exploited Children.
"Police also violated federal and state guidelines by issuing dozens of vague alerts with little information upon which the public can act. The study found that 23 alerts were issued last year even though police didn't know the name of the child who supposedly had been abducted. Twenty-five alerts were issued without complete details about the suspect or a description of the vehicle used in the abduction."
Think of it as a denial-of-service attack against the real world.
Under a new law that went into effect this month, it is now a crime to refuse to turn a decryption key over to the police.
I'm not sure of the point of this law. Certainly it will have the effect of spooking businesses, which now have to worry about the police demanding their encryption keys and exposing their entire operations.
From the ArsTechnica article:
"Cambridge University security expert Richard Clayton said in May of 2006 that such laws would only encourage businesses to house their cryptography operations out of the reach of UK investigators, potentially harming the country's economy. 'The controversy here [lies in] seizing keys, not in forcing people to decrypt. The power to seize encryption keys is spooking big business,' Clayton said.
"'The notion that international bankers would be wary of bringing master keys into UK if they could be seized as part of legitimate police operations, or by a corrupt chief constable, has quite a lot of traction,' he added. 'With the appropriate paperwork, keys can be seized. If you're an international banker you'll plonk your headquarters in Zurich.'"
But if you're guilty of something that can be proved only by the decrypted data, you might be better off refusing to divulge the key (and facing the maximum five-year penalty the statute provides) than handing it over and being convicted of the more serious crime you're actually guilty of.
I think this is just another skirmish in the "war on encryption" that has been going on for the past fifteen years. (Anyone remember the Clipper chip?) The police have long maintained that encryption is an insurmountable obstacle to law and order:
"The Home Office has steadfastly proclaimed that the law is aimed at catching terrorists, pedophiles, and hardened criminals -- all parties which the UK government contends are rather adept at using encryption to cover up their activities."
We heard the same thing from FBI Director Louis Freeh in 1993. I called them "The Four Horsemen of the Information Apocalypse" -- terrorists, drug dealers, kidnappers, and child pornographers -- and they have been used to justify all sorts of new police powers.
Microsoft updates both XP and Vista without user permission or notification. Microsoft can do this; that's just stupid company stuff. But what's to stop anyone else from using Microsoft's stealth remote install capability to put anything onto anyone's computer? How long before some smart hacker exploits this, and then writes a program that will allow all the dumb hackers to do it? When you build a capability like this into your system, you decrease your overall security.
Yet another sports spying scandal, this one from Formula One racing:
The Norwegian Ministry of Transportation asked the EU to lift the liquid ban on airplanes.
MediaDefender is a P2P poisoning company. Last week, company e-mail, phone calls, and source code were leaked.
The Chinese are accused of spying on the Danish Women's Cup soccer team:
Multics was an operating system from the 1960s, and had better security than a lot of operating systems today. This article from 2002 talks about Multics security, and the lessons learned that are still relevant today.
A Pakistani Army officer becomes a suicide bomber. There probably isn't any practicable way to prevent these sorts of attacks by trusted insiders.
London's 10,000 security cameras don't reduce crime:
Another "terrorism arrest" based on fear and overreaction. A 19-year-old named Star Simpson went to the Boston airport with an electronic badge and was arrested on terrorism charges:
Homeland security blanket:
Weird story of psychoecology and the DHS:
Idiotic cryptography reporting:
It's easy to eavesdrop on a copper cable; fiber optic cable is much harder. Here's how to eavesdrop on a fiber optic cable. Total hardware cost: less than $1,000.
Chlorine and cholera in Iraq. Basically, we're interdicting chlorine in Iraq, because of attacks against chlorine tankers. As a result, cholera is on the rise.
Reuters has an article on future security technologies. I've already talked about automatic license-plate-capture cameras and aerial surveillance (drones and satellites), but there's some new stuff. Most impressive is the claim of a technology that can read fingerprints at a distance of five meters.
Security considerations in prison food. For example, the corn dogs don't have sticks in them.
A NASA paper from the 1960s talks about using cryptanalysis techniques. Well, sort of. "NiCd Space Battery Test Data Analysis Project, Phase 2 Quarterly Report, 1 Jan. - 30 Apr. 1967," uses "cryptanalytic techniques" -- some sort of tri-gram frequency analysis, I think -- to ferret out hidden clues about battery failures. It's hard to imagine non-NSA cryptography in the U.S. from the 1960s. Basically, it was all alphabetic stuff. Even rotor machines were highly classified, and absolutely nothing was being done in binary.
Oracle 11g password algorithm revealed. It's based on SHA-1.
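Public write-ups of the revealed algorithm describe the 11g verifier as a single SHA-1 over the password bytes concatenated with a 10-byte random salt, stored hex-encoded with an "S:" prefix. Here's a minimal sketch of that construction (my illustration of the published description, not Oracle's code):

```python
import hashlib
import os

def oracle11g_style_hash(password, salt=None):
    # Single SHA-1 over password bytes plus a 10-byte salt, per the
    # published description of the 11g verifier.
    if salt is None:
        salt = os.urandom(10)
    digest = hashlib.sha1(password.encode("utf-8") + salt).hexdigest().upper()
    return "S:" + digest + salt.hex().upper()

h = oracle11g_style_hash("SESAME", salt=bytes(10))  # fixed all-zero salt for illustration
print(len(h))  # 62: 'S:' plus 40 digest hex chars plus 20 salt hex chars
```

One fast hash invocation, even salted, is exactly why disclosure of the algorithm matters: commodity hardware can try an enormous number of SHA-1 candidates per second against a stolen verifier.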
The U.S. has a patchwork of deposit laws on soft drink bottles and cans. Most states don't have deposits, but some states -- Michigan, for example -- do. The cans are the same, so you can make ten cents by buying a can in one state and then returning it for the deposit in Michigan. Ten people have been arrested for making more than $500,000 doing this; they ran grocery stores in Michigan, and as such were semi-insiders.
This is an excellent series of blog posts by Microsoft's Larry Osterman about threat modeling, using the PlaySound API as an example. Long, detailed, and complicated, but well worth reading. The last post is particularly good.
A high school bans backpacks as a security measure. This also includes purses, which inconveniences girls who need to carry menstrual supplies. So now, girls who are carrying purses get asked by police: "Are you on your period?" The predictable uproar follows.
Government employee uses DHS database to track ex-girlfriend.
You can no longer buy a police uniform in California unless you can prove you're a policeman:
NSA's public relations campaign targets reporters:
Randomness in airport security. Seems like a good idea to me.
A 200-meter tunnel was discovered in a Sri Lankan prison, complete with electricity and light bulbs. How did they get rid of the dirt? "We also suspect that they would have daubed their bodies with soil and had later washed it away to prevent detection of their clandestine project." I don't see that method being able to dispose of 200 meters worth of dirt over the course of a year, even assuming a small tunnel.
Weird terrorist threat story from the Raleigh Airport:
Methanol fuel cells are now allowed on airplanes. This paragraph sums up the inconsistency nicely: "So now, innocuous gels/liquids/shampoos are deemed too hazardous to bring inside the airplane cabin, but a known volatile liquid (however safe it may be) is required to be stored inside your carryon baggage? I'm not criticizing the technology here, but I have a feeling that this DOT logic is going to be questioned repeatedly by frazzled flyers."
The Burmese government is seizing UN hard drives, looking for information to identify dissidents. This is another reason why law enforcement's demand that e-mails be traceable is a bad idea.
Meanwhile, Mesa Airlines destroys evidence in a court case, and then blames the data loss on pornography:
Directed acyclic graphs for analyzing crypto algorithms:
I flew through Orlando last week, and saw an automatic shoe-scanner in the lane for Clear passengers. Poking around on the TSA website, I found this undated page. It seems the scanners didn't pass the TSA tests, and will be discontinued.
The police will be able to remotely stop cars with the OnStar navigation system:
Latest idiotic movie-plot threat: poisoned gumball machines. Terrorists might target our children!
Funny SQL injection attack cartoon:
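The joke works because the underlying bug is real: splicing user input directly into a SQL string lets an attacker rewrite the query. The standard fix is parameterized queries, which keep data out of the SQL parser entirely. A minimal illustration using Python's built-in sqlite3 (table and names are mine, for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students VALUES ('Alice')")

hostile = "Robert'); DROP TABLE students; --"

# Unsafe: string formatting hands the attacker control of the statement.
# (Running this through an API that allows multiple statements, like
# sqlite3's executescript, would execute the injected DROP TABLE.)
unsafe_sql = "INSERT INTO students VALUES ('%s')" % hostile

# Safe: a parameterized query treats the input as pure data; the quote
# and semicolon never reach the SQL parser as syntax.
conn.execute("INSERT INTO students VALUES (?)", (hostile,))

names = [row[0] for row in conn.execute("SELECT name FROM students")]
print(names)  # both rows survive; the table was never dropped
```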
I've seen several articles about this behavioral profiling research. I am generally in favor of funding all sorts of research, no matter how outlandish -- you never know when you'll discover something really good -- and I am generally in favor of this sort of behavioral assessment profiling. But I wish reporters would approach these topics with something resembling skepticism. The false-positive rate matters far more than the false-negative rate, and I doubt something like this will be ready for fielding any time soon.
The Palisades Medical Center has suspended 27 people for looking at George Clooney's medical data. This is great news, and I wish places would take the same kind of action when the personal data of non-celebrities is exposed.
Perhaps merchants should not store credit-card data: that way it can't be lost or stolen:
As the name implies, Alcoholics Anonymous meetings are anonymous. You don't have to sign anything, show ID or even reveal your real name. But the meetings are not private. Anyone is free to attend. And anyone is free to recognize you: by your face, by your voice, by the stories you tell. Anonymity is not the same as privacy.
That's obvious and uninteresting, but many of us seem to forget it when we're on a computer. We think "it's secure," and forget that "secure" can mean many different things.
Tor is a free tool that allows people to use the Internet anonymously. Basically, by joining Tor you join a network of computers around the world that pass Internet traffic randomly amongst each other before sending it out to wherever it is going. Imagine a tight huddle of people passing letters around. Once in a while a letter leaves the huddle, sent off to some destination. If you can't see what's going on inside the huddle, you can't tell who sent what letter based on watching letters leave the huddle.
I've left out a lot of details, but that's basically how Tor works. It's called "onion routing," and it was first developed at the Naval Research Laboratory. The communications between Tor nodes are encrypted in a layered protocol -- hence the onion analogy -- but the traffic that leaves the Tor network is in the clear. It has to be.
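The layering can be sketched in a few lines (a toy illustration using a throwaway XOR stream cipher, nothing like Tor's real cryptography): the sender wraps the message once per relay, and each relay peels off exactly one layer with its own key.

```python
import hashlib

def xor_layer(key, data):
    # Throwaway stream cipher: XOR against a SHA-256-derived keystream.
    # XOR is its own inverse, so the same call adds or removes a layer.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def wrap(message, node_keys):
    # The sender encrypts for the exit node first, then adds one layer
    # per earlier hop -- hence the "onion."
    blob = message
    for key in reversed(node_keys):
        blob = xor_layer(key, blob)
    return blob

keys = [b"entry-node-key", b"middle-node-key", b"exit-node-key"]
message = b"GET /mail HTTP/1.1"
blob = wrap(message, keys)
for key in keys:          # each relay strips one layer in turn
    blob = xor_layer(key, blob)
print(blob)  # b'GET /mail HTTP/1.1' -- plaintext at the exit
```

Note what falls out of the structure itself: after the exit node removes the last layer, the traffic is plaintext. That's not a bug in the sketch; it's inherent to the design.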
If you want your Tor traffic to be private, you need to encrypt it. If you want it to be authenticated, you need to sign it as well. The Tor website even says: "Yes, the guy running the exit node can read the bytes that come in and out there. Tor anonymizes the origin of your traffic, and it makes sure to encrypt everything inside the Tor network, but it does not magically encrypt all traffic throughout the Internet."
Tor anonymizes, nothing more.
Dan Egerstad is a Swedish security researcher; he ran five Tor nodes. Last month, he posted a list of 100 e-mail credentials -- server IP addresses, e-mail accounts and the corresponding passwords -- for embassies and government ministries around the globe, all obtained by sniffing exit traffic for usernames and passwords of e-mail servers.
The list contains mostly third-world embassies: Kazakhstan, Uzbekistan, Tajikistan, India, Iran, Mongolia -- but there's a Japanese embassy on the list, as well as the UK Visa Application Center in Nepal, the Russian Embassy in Sweden, the Office of the Dalai Lama, and several Hong Kong human rights groups. And this is just the tip of the iceberg; Egerstad sniffed more than 1,000 corporate accounts this way, too. Scary stuff, indeed.
Presumably, most of these organizations are using Tor to hide their network traffic from their host countries' spies. But because anyone can join the Tor network, Tor users necessarily pass their traffic to organizations they might not trust: various intelligence agencies, hacker groups, criminal organizations and so on.
It's simply inconceivable that Egerstad is the first person to do this sort of eavesdropping; Len Sassaman published a paper on this attack earlier this year. The price you pay for anonymity is exposing your traffic to shady people.
We don't really know whether the Tor users were the accounts' legitimate owners, or if they were hackers who had broken into the accounts by other means and were now using Tor to avoid being caught. But certainly most of these users didn't realize that anonymity doesn't mean privacy. The fact that most of the accounts listed by Egerstad were from small nations is no surprise; that's where you'd expect weaker security practices.
True anonymity is hard. Just as you could be recognized at an AA meeting, you can be recognized on the Internet as well. There's a lot of research on breaking anonymity in general -- and Tor specifically -- but sometimes it doesn't even take much. Last year, AOL made 20,000 anonymous search queries public as a research tool. It wasn't very hard to identify people from the data.
A research project called Dark Web, funded by the National Science Foundation, even tried to identify anonymous writers by their style: "One of the tools developed by Dark Web is a technique called Writeprint, which automatically extracts thousands of multilingual, structural, and semantic features to determine who is creating 'anonymous' content online. Writeprint can look at a posting on an online bulletin board, for example, and compare it with writings found elsewhere on the Internet. By analyzing these certain features, it can determine with more than 95 percent accuracy if the author has produced other content in the past."
And if your name or other identifying information is in just one of those writings, you can be identified.
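A crude version of the idea fits in a dozen lines (my toy sketch; Writeprint extracts thousands of features, this uses exactly one): build character-trigram frequency profiles of two texts and compare them with cosine similarity.

```python
import math
from collections import Counter

def trigram_profile(text):
    # Character trigrams are a classic, language-independent stylometric feature.
    t = " ".join(text.lower().split())
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a, b):
    # Cosine similarity between two sparse frequency vectors.
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known = "The price you pay for anonymity is exposing your traffic to shady people."
same_author = "The price of anonymity is that you expose your traffic to people you do not trust."
other_author = "Quarterly battery telemetry exhibited anomalous discharge plateaus."

p = trigram_profile(known)
print(cosine(p, trigram_profile(same_author))
      > cosine(p, trigram_profile(other_author)))  # True: the paraphrase is closer in style
```

Real stylometry needs far more text and far more features, but even this crumb shows why posting "anonymously" in your own voice is risky.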
Like all security tools, Tor is used by both good guys and bad guys. And perversely, the very fact that something is on the Tor network means that someone -- for some reason -- wants to hide the fact he's doing it.
As long as Tor is a magnet for "interesting" traffic, Tor will also be a magnet for those who want to eavesdrop on that traffic -- especially because more than 90 percent of Tor users don't encrypt.
This essay previously appeared on Wired.com.
Tor server operator shuts down after police raid:
Tools for identifying the source of Tor data:
Remote-controlled toys are getting more scrutiny at airports, because they might be used to trigger bombs.
Okay, let's think this through. The one place where you *don't* need a modified remote-controlled toy is in the passenger cabin, because you have your hands available to push any required buttons. But a remote-controlled toy in checked luggage, now that's a clever idea. I put my modified remote-controlled toy bomb in my checked suitcase, and use the controller to detonate it once I'm in the air.
So maybe we want the remote-controlled toy in carry-on luggage, where there's a greater chance of detecting it (at the security checkpoint). And maybe we want to require the remote controller to be in checked luggage.
In any case, it's a great movie plot.
DHS press release
Schneier is delivering the keynote at InfoSecurity Mexico, in Mexico City, on Oct 15:
Schneier is speaking at the University of Rochester, in Rochester NY, on Oct 20:
Schneier is delivering the keynote at RSA Europe, in London, on Oct 23:
Schneier is speaking at the Educause 2007 Annual Conference, in Seattle, on Oct 26:
Schneier is delivering the keynote at the ICE Technology Conference, in Alberta, on Nov 5:
Schneier is speaking at Information Security Decisions, in Chicago, on Nov 6:
A video of Schneier's talk at Defcon 15:
It made a pretty big news splash last month. It was a video, produced for the DHS by the Idaho National Laboratory, showing an industrial turbine spinning out of control and eventually self-destructing, supposedly caused by a simulated hacker attack.
I haven't written much about SCADA security, except to say that I think the risk is overblown today but is getting more serious all the time -- and we need to deal with the security before it's too late. I didn't know quite what to make of the Idaho National Laboratory video; it seemed like hype, but I couldn't find any details. (The CNN headline, "Mouse click could plunge city into darkness, experts say," was definitely hype.)
Then I received this anonymous e-mail:
"I was one of the industry technical folks the DHS consulted in developing the 'immediate and required' mitigation strategies for this problem.
"They talked to several industry groups (mostly management not tech folks): electric, refining, chemical, and water. They ignored most of what we said but attached our names to the technical parts of the report to make it look credible. We softened or eliminated quite a few sections that may have had relevance 20 years ago, such as war dialing attacks against modems.
"The end product is a work order document from DHS which requires such things as background checks on people who have access to modems and logging their visits to sites with datacom equipment or control systems.
"By the way -- they were unable to hurt the generator you see in the video but did destroy the shaft that drives it and the power unit. They triggered the event from 30 miles away! Then they extrapolated the theory that a malfunctioning generator can destroy not only generators at the power company but the power glitches on the grid would destroy motors many miles away on the electric grid that pump water or gasoline (through pipelines).
"They kept everything very secret (all emails and reports encrypted, high security meetings in DC) until they produced a video and press release for CNN. There was huge concern by DHS that this vulnerability would become known to the bad guys -- yet now they release it to the world for their own career reasons. Beyond shameful.
"Oh, and they did use a contractor for all the heavy lifting that went into writing/revising the required mitigations document. Could not even produce this work product on their own.
"By the way, the vulnerability they hypothesize is completely bogus but I won't say more about the details. Gitmo is still too hot for me this time of year."
There are hundreds of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
BT Counterpane is the world's leading protector of networked information - the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.
Copyright (c) 2007 by Bruce Schneier.