CRYPTO-GRAM

June 15, 2003
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to email@example.com.
Copyright (c) 2003 by Counterpane Internet Security, Inc.
The threat of cyberterrorism is causing much alarm these days. Since 9/11, we have been told to expect attacks: cyberterrorists would try to cripple our power system, disable air traffic control and emergency services, open dams, or disrupt banking and communications. But so far, nothing's happened. Even during the war in Iraq, which was supposed to increase the risk dramatically, nothing happened. The impending cyberwar was a big dud. Don't congratulate our vigilant security, though; the alarm was caused by a misunderstanding of both the attackers and the attacks.
These attacks are very difficult to execute. The software systems controlling our nation's infrastructure are filled with vulnerabilities, but they're generally not the kinds of vulnerabilities that cause catastrophic disruptions. The systems are designed to limit the damage that occurs from errors and accidents. They have manual overrides. These systems have been proven to work; they've experienced disruptions caused by accident and natural disaster. We've been through blackouts, telephone switch failures, and disruptions of air traffic control computers. In 1999, a software bug knocked out a nationwide paging system for a day. The results might be annoying, and engineers might spend days or weeks scrambling, but the effect on the general population has been minimal.
The worry is that a terrorist could cause a problem more serious than a natural disaster, but this kind of thing is surprisingly hard to do. Worms and viruses have caused all sorts of network disruptions, but they happened by accident. In January 2003, the SQL Slammer worm disrupted 13,000 ATMs on the Bank of America's network. But before it happened, you couldn't have found a security expert who knew that those ATMs were vulnerable to that kind of attack. We simply don't understand the interactions well enough to predict which attacks could cause catastrophic results, and terrorist organizations don't have that sort of knowledge either -- even if they tried to hire experts.
The closest example we have of this kind of thing comes from Australia in 2000. Vitek Boden broke into the computer network of a sewage treatment plant along Australia's Sunshine Coast. Over the course of two months, he leaked hundreds of thousands of gallons of putrid sludge into nearby rivers and parks. Among the results were black creek water, dead marine life, and a stench so unbearable that residents complained. This is the only known case of someone hacking a digital control system with the intent of causing environmental harm.
Despite our predilection for calling anything "terrorism," these attacks are not. We know what terrorism is. It's someone blowing himself up in a crowded restaurant, or flying an airplane into a skyscraper. It's not infecting computers with viruses, forcing air traffic controllers to route planes manually, or shutting down a pager network for a day. That causes annoyance and irritation, not terror.
This is a difficult message for some, because these days anyone who causes widespread damage is being given the label "terrorist." But imagine for a minute the leadership of al Qaeda sitting in a cave somewhere, plotting the next move in their jihad against the United States. One of the leaders jumps up and exclaims: "I have an idea! We'll disable their e-mail...." Conventional terrorism -- driving a truckful of explosives into a nuclear power plant, for example -- is still easier and much more effective.
There are lots of hackers in the world -- kids, mostly -- who like to play at politics and dress their own antics in the trappings of terrorism. They hack computers belonging to some other country (generally not government computers) and display a political message. We've often seen this kind of thing when two countries squabble: China vs. Taiwan, India vs. Pakistan, England vs. Ireland, U.S. vs. China (during the 2001 crisis over the U.S. spy plane that crashed in Chinese territory), the U.S. and Israel vs. various Arab countries. It's the equivalent of soccer hooligans taking out national frustrations on another country's fans at a game. It's base and despicable, and it causes real damage, but it's cyberhooliganism, not cyberterrorism.
There are several organizations that track attacks over the Internet. Over the last six months, less than 1% of all attacks originated from countries on the U.S. government's Cyber Terrorist Watch List, while 35% originated from inside the United States. Computer security is still important. People overplay the risks of cyberterrorism, but they underplay the risks of cybercrime. Fraud and espionage are serious problems. Luckily, the same countermeasures aimed at cyberterrorists will also defend against hackers and criminals. If organizations secure their computer networks for the wrong reasons, it will still be the right thing to do.
Crypto-Gram is currently in its sixth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
Fixing Intelligence Failures:
Honeypots and the Honeynet Project
The Data Encryption Standard (DES):
The internationalization of cryptography policy:
The new breeds of viruses, worms, and other malware:
Timing attacks, power analysis, and other "side-channel" attacks against cryptosystems:
Disney is launching a pilot DVD-rental program that uses self-destructing DVDs. The idea is that the DVD has a coating that oxidizes after a few days, rendering the DVD unreadable.
I think this is a very clever security countermeasure. The threat is regular consumers. Disney wants to be able to rent DVDs to them at a price point lower than the sale price. By making a DVD that only lasts a few days after being taken out of the package, Disney has solved the problem of needing an infrastructure to process DVD returns.
Of course this doesn't solve the problem of making illegal copies of the DVD, but that's not the problem that Disney is trying to solve. Self-destructing DVDs are a clever solution for a specific security problem, and if it works well it's likely to be a cheap and effective one. (Compare this to Circuit City's superficially similar DIVX format, which also had expiring DVDs, but required a phone line and special player.)
I got this as spam, no less. It's your typical one-time-pad-that's-really-a-stream-cipher proprietary algorithm. You've got your infinitely long key. You've got your claims of more security than anything else on the market. You've got your weird "independent evaluation" by experts who seem to have no actual expertise in cryptography.
But this is my favorite quote off the Web site: "One of the primary means of testing the solidness of a form of encryption is to test the randomness of the data it creates." Haven't these people ever heard of cryptanalysis?
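The point is easy to demonstrate with a toy example of my own (not the product in question): a "cipher" that XORs the plaintext with the output of a linear congruential generator seeded by a 16-bit key. Its output can look perfectly random to a simple monobit test, yet a known-plaintext brute force recovers the key almost instantly. Randomness tests measure statistics; cryptanalysis measures security.

```python
def keystream(seed, n):
    # LCG keystream: statistically decent output, cryptographically worthless.
    x, out = seed, []
    for _ in range(n):
        x = (1103515245 * x + 12345) % (1 << 31)
        out.append((x >> 16) & 0xFF)
    return bytes(out)

def encrypt(seed, msg):
    return bytes(m ^ k for m, k in zip(msg, keystream(seed, len(msg))))

def crack(ct, known_prefix):
    # Known-plaintext attack: just try every 16-bit seed.
    for seed in range(1 << 16):
        if encrypt(seed, known_prefix) == ct[:len(known_prefix)]:
            return seed

ct = encrypt(0x1234, b"attack at dawn. " * 10)

# The ciphertext "looks random" by a monobit frequency test...
ones = sum(bin(b).count("1") for b in ct) / (8 * len(ct))
# ...but the key falls out of a trivial exhaustive search anyway.
```

The monobit fraction comes out near 0.5, which is exactly the kind of "evidence" the marketing quote appeals to, and exactly the kind of evidence that says nothing about resistance to cryptanalysis.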
This is a clever side-channel attack. An attacker can use memory errors to attack a virtual machine. Here's how it works:
First, he loads two Java applets into the target system's memory. The first applet is large, and consists only of pointers to the second applet. The second applet is the attack code, and can do whatever the attacker wants. The trick is to cause a random memory error to occur. The researchers used a light bulb to heat the target system, but you can imagine the same sort of result from a microwave oven, static electricity, or a host of other environmental factors. It turns out that a random error is likely to cause the system to run the attack code. If, for example, the first applet fills up 60% of the target system's memory, then a random error (a bit flip) will cause execution to pass through one of the pointers to the attack code more than 70% of the time.
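A back-of-the-envelope simulation shows why filling memory with pointers makes a random bit flip so useful to the attacker. This is a toy model of my own, not the researchers' code: it fills 60% of a hypothetical address space with pointers into that same region and measures how often a single random bit flip still leaves a pointer into attacker-controlled memory. Even this crude model gives a better-than-even success rate; the researchers' more careful accounting of which flips help gets above 70%.

```python
import random

MEM_WORDS = 1 << 20                  # toy address space of 2^20 words
FILL = 0.6                           # fraction filled with the big applet
REGION_END = int(MEM_WORDS * FILL)   # [0, REGION_END) is attacker-controlled
PTR = REGION_END // 2                # every word in the region holds this pointer
BITS = 20                            # bits per pointer in this toy model

def flip_lands_in_region(trials=100_000):
    hits = 0
    for _ in range(trials):
        word = random.randrange(MEM_WORDS)   # the flip strikes a random word
        if word >= REGION_END:
            continue                          # flip hit memory we don't control
        flipped = PTR ^ (1 << random.randrange(BITS))
        if flipped < REGION_END:
            hits += 1                         # corrupted pointer still points
                                              # into attacker-controlled data
    return hits / trials
```

Most single-bit flips of a pointer change it by a small amount, so the corrupted pointer usually still lands inside the huge applet; only flips of the highest address bits send it somewhere the attacker doesn't own.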
The attacker needs physical access to the machine being attacked, so the attack's main uses are in breaking smart cards and other devices that attempt to remain secure against the person in possession of them. There are lots of such devices that allow the owner to run any program he wants, and maintain security by internal separation of programs. This attack demonstrates that internal separation isn't as good as people might think.
Now that the attack is known, it can easily be prevented. Simple measures like parity checking or error-correcting codes can defeat this technique. But you can be sure there are other attacks like this. In general, there is no way to secure secrets inside a device from someone who has physical possession of the device.
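The defense mentioned above is simple enough to sketch. The fragment below is a minimal illustration of my own: store a parity bit alongside each word, and any single bit flip changes the parity and is caught before the corrupted pointer can be used.

```python
def parity(word):
    # Even/odd count of 1-bits in the word.
    return bin(word).count("1") & 1

def store(word):
    # Keep a parity bit alongside the data word.
    return (word, parity(word))

def load(cell):
    word, p = cell
    if parity(word) != p:
        # Refuse to use corrupted data instead of executing through it.
        raise RuntimeError("memory error detected")
    return word

cell = store(0xCAFE)
corrupted = (cell[0] ^ (1 << 7), cell[1])  # simulate a single random bit flip
```

Real error-correcting memory goes further and repairs the flipped bit, but even bare parity turns this attack from a 70% success rate into a detected fault.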
Very interesting article on the arrest of three Russian hackers. It isn't a technical article, but it speaks to the socioeconomic conditions and motivations of these criminals, as well as the competence and effectiveness of the FBI.
Getting a fake photo ID in New Jersey:
Another article on the question of whether or not to apply security patches:
Good article on how we might preserve privacy in the face of the Total Information Awareness program:
Essay on the motivations of computer attackers: random attacks versus targeted attacks:
Video cameras in cell phones are a potential tool for buying elections. One of the basic tenets of a good election is that the ballot is secret. Someone can offer to buy a vote, but the buyer has no guarantee that the seller will deliver from the privacy of the voting booth. Video cameras in cell phones have the potential to change that: the buyer can demand video proof of the vote before he pays.
Insider attack at Coca-Cola:
Black box recorders in cars, originally intended to determine the cause of an accident, are increasingly being used in court. People can be sent to jail, or be held liable, based on their contents. But since the system was not designed for use in an adversarial setting, my guess is that the security surrounding these devices is minimal.
Hacking customer privacy in DirecTV:
A new biometric: identifying people by the way they walk. The first article claims that the system "has been 80 to 95 percent successful in identifying people." Be careful about that number, though; it is meaningless without knowing how it was derived: in particular, the false positive and false negative rates and the conditions of the test.
Seattle police needed a DNA sample from a suspect. So they mailed him a letter, and tricked him into mailing a reply back in an envelope he licked. There was enough DNA there to link him to the crime.
The Pentagon's Total Information Awareness program has a new name: Terrorism Information Awareness.
The Department of Homeland Security is setting up a cybersecurity office. I suspect this is basically a political exercise, but it might actually result in something positive.
The problems with some current cyber-insurance policies:
Identity theft insurance offered:
Lots of companies are using "security" as an excuse to get around all sorts of things from government:
A reporter created a fake letterhead and used it to order the recipe for sarin gas, along with enough of the four precursor chemicals to kill tens of thousands of people. There's still the small matter of distribution -- which isn't as easy as it seems -- but it seems that making the stuff requires just a basic chemist's education and some cheap commercial lab equipment.
This research on defeating biometric security isn't new, but I don't remember seeing a translation of the actual article before. It covers fingerprint scanners, facial recognition, and iris scanners.
Student hacker being tried as an adult. This, to me, is a measure of the hysteria today. Hacking your school's computer is the equivalent of spray painting your name in the bathroom. It shouldn't be a felony, and he shouldn't be tried as an adult.
Good comments on U.S. cybersecurity by former czar Richard Clarke.
The manual "Keeping Your Jewish Institution Safe," published by the Anti-Defamation League, is actually a pretty good anti-terrorism and security manual.
I'm sure glad the Idaho police department's wireless network is "using a hard-to-crack proprietary encryption protocol."
Cyber criminals are a bigger worry than cyber terrorists. No, it wasn't me saying this...but it could have been.
A product called "CryptoGram." I have no idea if it's any good, and some of the marketing claims made me wince. But for the record, I have nothing to do with this French company.
Fear causes irrational security decisions (see above).
Vulnerability Disclosure plan (draft) from the industry group called the "Organization for Internet Safety."
The U.S. Department of Homeland Security now has a National Cyber Security Division, which will incorporate the Critical Infrastructure Assurance Office (CIAO), the National Infrastructure Protection Center (NIPC), the Federal Computer Incident Response Center (FedCIRC) and the National Communications System. No word yet on a person to run this thing.
Counterpane has a new VP of Worldwide Sales, and a new VP of Strategy and Development.
Security Q&A with Schneier for Washington Technology magazine:
A difficult problem in law enforcement is forensics: proving that police officers acted properly. Many cases hinge on my-word-against-his, and sometimes untrustworthy policemen are believed when they shouldn't be. One solution is to add auditing features directly into the weapon:
"The weapon [taser] is fully trackable. A computer chip date-stamps every time the trigger is pulled. The cartridges have serial numbers and when fired, they release confetti with the serial numbers on them. Investigators at a scene involving several officers can determine who fired and how many times."
A very common feature of password-protected Web sites is the ability to request that the password be e-mailed to you. The idea is simple: people forget their passwords and need to be reminded of them. It's a reasonable security assumption that the e-mail address of the person is secure, so it is reasonable to e-mail the password to them. (You can argue about the wisdom of e-mailing the password unencrypted, but I don't think eavesdropping is the attack we're worried about here.)
Here's a clever attack to exploit this feature. Step 1: Buy an expired domain. Step 2: Watch all the spam come in, and figure out what e-mail accounts were active for that domain's previous owner. Step 3: Go to an account-based site -- eBay, Amazon, etc. -- and request that the password be sent to those accounts. If the people with those accounts didn't bother to change their e-mail address when the domain expired, you can collect their passwords.
Someone tried that with an expired domain and eBay accounts, and found that -- if he wanted to -- he could have collected a few passwords. Moral: when an e-mail address is deactivated, everything associated with that address should be deactivated as well.
The University of Calgary is offering a course on virus writing, and many are up in arms about it. Wired has published an article on the SQL Slammer worm, including source code, and recriminations ensue.
Get real here. If we have any hope of improving computer security, we need to teach computer security. Teaching computer security includes teaching how attacks work. It includes teaching how viruses work. It includes teaching how worms work.
The bad guys already have all sorts of resources for learning how to write viruses, and SQL Slammer source code has been available on the Internet. Neither of these two actions will help the bad guys. But they probably will help the good guys.
Worms, viruses, exploits, hacking code...they're not infectious diseases. We need to look at them as educational tools, and not things to keep secret.
Wired's article on the SQL Slammer:
From: Eric Tribou <etribou@bridgew.edu>
I think you missed the target on your comments regarding encryption and wiretapping.
First to note is that the report is not exclusive to wiretapping of phone lines. Electronic and oral communications are included. Encrypting phones may not have been encountered at all. The encryption that was encountered could easily (and more likely) have been the use of PGP or some other such method of encrypting e-mail. It could also refer to encounters with encrypted Voice over IP sessions. Both of those can be based on open systems.
Second point is that how, exactly, the plaintext is recovered is not mentioned at all. Using an encrypted phone line is good and all, but if a bug has been planted in the room in which one side of this conversation is taking place then there's little need to worry about decrypting the data going over the phone line. The same holds true for VoIP sessions and encrypted e-mail; in the case of the latter, a key logger could be used.
So while your point about encrypting telephone devices, and the greater point about closed security systems, is certainly correct, I don't believe it should take focus here. Instead I think it's worth discussing how data is (or is not) secured on either end of the communications line and not how it is secured during transmission.
From: Arrigo Triulzi <arrigo@northsea.sevenseas.org>
I am just wondering if you are reading too much into the wiretapping report:
>1) Encryption of phone communications is very uncommon. Sixteen cases
What about these people being on GSM phones? GSM phones are encrypted, using A5 (in theory). It is also true that to wiretap a GSM phone you don't really have to break A5; you simply tap the base stations.
By applying the above to the report it could well be that the "encryption was encountered in 16 wiretaps" simply means "they had GSM phones, we didn't have to worry about encryption 'cos we went and listened to their conversations at the base stations or gateway switches between the mobile operator and the fixed line operator/other mobile operator."
This is how they wiretap mobile phones in Europe...
Of course it doesn't make the argument that people are selling snake oil for phone encryption wrong at all, it simply completes the picture and points out the need to understand where encryption ends in a conversation...
The court's report about encryption and wiretapping was interesting, but not necessarily factual. As you pointed out, it is unlikely that local police organizations could brute-force DES keys. Given that some of the conversations were encrypted but none of that "prevented law enforcement officials from obtaining the plain text of communications intercepted," you assumed that the officials were able to break the crypto systems.
Other possible explanations include:
- The reports of encryption were erroneous. This could be due to the reporting officials misunderstanding what "encrypted" means, or purposely lying to make themselves look good.
- The reports that the encryption didn't prevent them from obtaining the plaintext were erroneous. It is easy to believe that a police officer would lie about this, particularly if they arrested the person on trumped-up charges but wanted it to look like they had evidence.
To me, both of these are much more plausible than assuming that local police departments (or even the feds) are smart enough to circumvent an encryption system.
From: "Israel, Howard M (Howard)" <hisrael@avaya.com>
I think that you have made some assumptions that are critical to the conclusion that you have drawn. Briefly, the quoted text did not specifically indicate that the encryption was actually broken by law enforcement. Maybe: 1) law enforcement brought a legal action (e.g., a subpoena) against the providers of the technology to get the keys? 2) law enforcement had multiple taps that captured the conversation anyway (e.g., the encrypted phone conversation took place in a car, and the car also had a bug in it)? 3) maybe the plaintext was obtained from a recording device of an informant who was present during the conversation? 4) maybe the encrypted conversation wasn't actually germane to the case, and thus not necessary for the prosecution?
Those are only a few hypotheses. Thus, I think that your conclusions regarding openness are not justified.
From: Mike Schiraldi <mgs21@columbia.edu>
I set up an address of the form flowers@foo when I used the services of 1-800-Flowers, and a year or so later I suddenly started receiving a torrent of pornographic spam at this address. The customer service agent assured me that they do not share their address list with anyone, and I actually believe them. I'm certain that a DBA or even a temp worker ran a quick SQL query, saved the results to disk, and sold it all to spammers. So even if you trust a company to behave honorably as a whole, you should still assume that any e-mail address you give them could easily become public knowledge.
From: "Aram Compeau" <aram@tibco.com>
Isn't this just an analog of selecting hard-to-guess passwords? A slightly better scheme is to use <optional name>_counterpane_<dateTime when subscribing>@machine.domain. This also overcomes the problem that if you wish to retire <firstname.lastname@example.org> but still want to subscribe, you must provide another e-mail address. Under the new scheme, you can retire <email@example.com> and generate <firstname.lastname@example.org>. Of course, there are many variations on the hard-to-guess suffix. As long as you use something like it, framing should be a non-issue for mistakes and casual malice.
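The scheme in this letter is straightforward to implement. Here's a sketch of my own; the name, domain, and timestamp format are placeholders, not anything from the letter. Each service gets an address tagged with a subscription timestamp, so an old address can be retired and a fresh one generated without touching the underlying mailbox.

```python
from datetime import datetime, timezone

def tagged_address(service, when=None, name="bruce", domain="machine.example.com"):
    # Pattern from the letter:
    # <optional name>_<service>_<dateTime when subscribing>@machine.domain
    when = when or datetime.now(timezone.utc)
    stamp = when.strftime("%Y%m%dT%H%M%SZ")
    return f"{name}_{service}_{stamp}@{domain}"

# Retiring one address and issuing a replacement is just a new timestamp:
old = tagged_address("counterpane", datetime(2003, 6, 15, tzinfo=timezone.utc))
```

The mail server would need to accept (or catch-all) these tagged addresses and route them to one mailbox; the timestamp suffix is what makes the address hard for a third party to guess.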
From: "Brent J. Nordquist" <brent@nordist.net>
On Wed, May 14, 2003 at 11:57:49PM -0500, Bruce Schneier wrote:
> A common security practice is to put a sign on the
A related scenario I've seen is the danger of the employee telling the customer "That will be $7.73" when it's only $6.73, and pocketing an extra $1. I've thus seen (at the Taco Bell drive-through and other places) a conspicuous LED display with the price, and a warning at the bottom "Please call 1-800-XXX-XXXX if you are asked to pay a different amount than that shown here."
While I was studying at university, I needed extra income to pay my way, so in desperation I took a job working in football stadium security! I even attended an official training course with the Football Stewards Association. The issue of bottles was a significant problem in UK football and field sporting events. The classic attack was to take a fizzy drink bottle into the stadium and once it was empty to re-fill it with bodily fluids. Then the bottle would be hurled at either a static player or the opposing crowd. If the victim was lucky it would just hit the body, but the unlucky victim would get it in the head and the bottle would break releasing its contents.
Cans have not been much of a threat, although in UK stadiums there are issues over alcohol which have been addressed. The main issue I can see with cans would be the possibility of fashioning a sharp offensive weapon from the aluminum.
As Mr Bellovin stated, it doesn't matter if you deal with the issue of larger projectile weapons; smaller implements are always available. There has long been an issue in UK sport with coins being thrown: a particular favorite is the UK 50 pence coin, which is not circular but multi-sided, and was previously much heavier. More recently, with the introduction of the heavy £2 coin, generous thugs have found its weight and aerodynamics very useful.
One aspect of stadium violence that I found the most enlightening during my time was that a lot of inter-club "supporter" violence is coordinated. There are groups of "fans" who enjoy the violence and they arrange when and where to meet for a "ruck." I worked at a modern stadium where there were very few incidents of in-stadium violence due to skilled crowd control and a flexible high-coverage camera system.
Out of the stadium has often been the biggest problem and this modern stadium uses their technology to assist the police by highlighting those in the crowd who are seen organizing with their mobile phones. Coordinated intelligence gathering between civilian security and police is highly important to maintain a decent level of safety.
From: "Robert P. Goldman" <rpgoldman@sift.info>
Seeing those e-mails on this subject reminded me of something I couldn't resist pointing out: the same security restriction is used in New Orleans, except on the streets. You can drink alcoholic bevvies in public, but they have to be in a plastic cup, so you can't hurt anyone with them....
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to email@example.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of "Secrets and Lies" and "Applied Cryptography," and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 1000 companies world-wide.