June 15, 2005
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0506.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- 2005 Internet Attack Trends
- Stupid People Buy Fake Concert Tickets
- Backscatter X-Ray Technology
- Crypto-Gram Reprints
- Insider Attacks
- Accuracy of Commercial Data Brokers
- Eric Schmidt on Secrecy and Security
- U.S. Medical Privacy Law Gutted
- Risks of Cell Phones on Airplanes
- Billions Wasted on Anti-Terrorism Security
- Counterpane News
- Attack on the Bluetooth Pairing Process
- Password Safe 2.11
- Public Disclosure of Personal Data Loss
- Holding Computer Files Hostage
- White Powder Anthrax Hoaxes
- Comments from Readers
Counterpane Internet Security, Inc., monitors more than 450 networks in 35 countries, in every time zone. In 2004 we saw 523 billion network events, and our analysts investigated 648,000 security “tickets.” What follows is an overview of what’s happening on the Internet right now, and what we expect to happen in the coming months.
In 2004, 41 percent of the attacks we saw were unauthorized activity of some kind, 21 percent were scanning, 26 percent were unauthorized access, 9 percent were DoS (denial of service), and 3 percent were misuse of applications.
Over the past few months, the two attack vectors that we saw in volume were against the Windows DCOM (Distributed Component Object Model) interface of the RPC (remote procedure call) service and against the Windows LSASS (Local Security Authority Subsystem Service). These seem to be the current favorites for virus and worm writers, and we expect this trend to continue.
The virus trend doesn’t look good. In the last six months of 2004, we saw a plethora of attacks based on browser vulnerabilities (such as GDI-JPEG image vulnerability and IFRAME) and an increase in sophisticated worm and virus attacks. More than 1,000 new worms and viruses were discovered in the last six months alone.
In 2005, we expect to see ever-more-complex worms and viruses in the wild, incorporating complex behavior: polymorphic worms, metamorphic worms, and worms that make use of entry-point obscuration. For example, SpyBot.KEG is a sophisticated vulnerability assessment worm that reports discovered vulnerabilities back to the author via IRC channels.
We expect to see more blended threats: exploit code that combines malicious code with vulnerabilities in order to launch an attack. We expect Microsoft’s IIS (Internet Information Services) Web server to continue to be an attractive target. As more and more companies migrate to Windows 2003 and IIS 6, however, we expect attacks against IIS to decrease.
We also expect to see peer-to-peer networking as a vector to launch viruses.
Targeted worms are another trend we’re starting to see. Recently there have been worms that use third-party information-gathering techniques, such as Google, for advanced reconnaissance. This leads to a more intelligent propagation methodology; instead of propagating scattershot, these worms are focusing on specific targets. By identifying targets through third-party information gathering, the worms reduce the noise they would normally make when randomly selecting targets, thus increasing the window of opportunity between release and first detection.
Another 2004 trend that we expect to continue in 2005 is crime. Hacking has moved from a hobbyist pursuit with a goal of notoriety to a criminal pursuit with a goal of money. Hackers can sell unknown vulnerabilities — “zero-day exploits” — on the black market to criminals who use them to break into computers. Hackers with networks of hacked machines can make money by selling them to spammers or phishers. They can use them to attack networks. We have started seeing criminal extortion over the Internet: hackers with networks of hacked machines threatening to launch DoS attacks against companies. Most of these attacks are against fringe industries — online gambling, online computer gaming, online pornography — and against offshore networks. The more these extortions are successful, the more emboldened the criminals will become.
We expect to see more attacks against financial institutions, as criminals look for new ways to commit fraud. We also expect to see more insider attacks with a criminal profit motive. Already most of the targeted attacks — as opposed to attacks of opportunity — originate from inside the attacked organization’s network.
We also expect to see more politically motivated hacking, whether against countries, companies in “political” industries (petrochemicals, pharmaceuticals, etc.), or political organizations. Although we don’t expect to see terrorism occur over the Internet, we do expect to see more nuisance attacks by hackers who have political motivations.
The Internet is still a dangerous place, but we don’t foresee people or companies abandoning it. The economic and social reasons for using the Internet are still far too compelling.
This article was originally published in the June 2005 issue of ACM Queue.
At a rock concert in Boston, hundreds of people bought bad concert tickets from scalpers — sometimes paying as much as $2000 for them. You might think this was some fancy counterfeiting scheme, but no. The tickets were printouts bought from scalpers online.
Online tickets are a great convenience. They contain a unique barcode. You can print as many as you like, but the barcode scanners at the concert door will only accept each barcode once.
Only an idiot would buy a printout from a scalper, because there’s no way to verify that he will only sell it once. This is probably obvious to anyone reading this, but it turns out that it’s not obvious to everyone.
I find this fascinating. Online verification of authorization tokens is supposed to make counterfeiting more difficult, because it assumes the physical token can be copied. It certainly works for management; even if a counterfeiter makes copies, only one person per seat gets admitted into the venue. But it won’t work for the public unless they understand how the system works.
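The admission check described above amounts to a one-time-token scheme. Here is a minimal sketch of the door scanner's server-side logic, assuming a shared set of already-redeemed barcodes (the class and method names are illustrative, not from any real ticketing vendor):

```python
# Sketch of one-time barcode redemption at the venue door.
# Assumption: each ticket carries a unique barcode string, and every
# scanner consults one shared record of which codes were already used.

class TicketScanner:
    def __init__(self, valid_barcodes):
        self.valid = set(valid_barcodes)   # barcodes issued for this event
        self.redeemed = set()              # barcodes already presented

    def redeem(self, barcode):
        """Admit the holder only on the first presentation of a valid code."""
        if barcode not in self.valid:
            return False                   # counterfeit, or wrong event
        if barcode in self.redeemed:
            return False                   # a copy of this printout was used
        self.redeemed.add(barcode)
        return True

scanner = TicketScanner(["A123", "B456"])
print(scanner.redeem("A123"))  # True: first presentation admits the holder
print(scanner.redeem("A123"))  # False: every later copy is rejected
```

Note what the design protects: whichever copy of a duplicated printout arrives first wins, so the venue never seats two people in one seat, but the scalper's second customer is simply turned away.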
Backscatter X-ray technology is a method of using X rays to see inside objects. The science is complicated, but the upshot is that you can see people naked. The TSA has recently announced a proposal to use these machines to screen airport passengers.
I’m not impressed with this security trade-off. Yes, backscatter X-ray machines might be able to detect things that conventional screening might miss. But I already think we’re spending too much effort screening airplane passengers at the expense of screening luggage and airport employees…to say nothing of the money we should be spending on non-airport security.
On the other side, these machines are expensive and the technology is incredibly intrusive. I don’t think that people should be subjected to strip searches before they board airplanes. And I believe that most people would be appalled by the prospect of security screeners seeing them naked.
I believe that there will be a groundswell of popular opposition to this idea. Aside from the usual list of pro-privacy and pro-liberty groups, I expect fundamentalist Christian groups to be appalled by this technology. I think we can get a bevy of supermodels to speak out against the invasiveness of the search.
Crypto-Gram is currently in its eighth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
Breaking Iranian Codes:
The Witty Worm:
The Risks Of Cyberterrorism:
Fixing Intelligence Failures:
Honeypots and the Honeynet Project:
The Data Encryption Standard (DES):
The internationalization of cryptography policy:
The new breeds of viruses, worms, and other malware:
Timing attacks, power analysis, and other “side-channel” attacks against cryptosystems:
CERT has published a study on insider threats. It analyzes 49 insider attacks between 1996 and 2002, and draws some conclusions about the attacks and attackers. It says nothing about the prevalence of these attacks; it’s about the particulars of them.
The report is mostly obvious, and isn’t worth more than a skim. But the particular methodology only tells part of the story.
Because the study focuses on insider attacks on information systems rather than attacks using information systems, it’s primarily about destructive acts. Of course the major motive is going to be revenge against the employer.
Near as I can tell, the report ignores attacks that use information systems to otherwise benefit the attacker. These attacks would include embezzlement — which at a guess is much more common than revenge.
The report also doesn’t seem to acknowledge that the researchers are only looking at attacks that were noticed. I’m not impressed by the fact that most of the attackers got caught, since those are the ones that were noticed. This reinforces the same bias: network disruption is far more noticeable than theft.
These are worrisome threats, but I’d be more concerned about insider attacks that aren’t nearly so obvious.
Still, there are some interesting statistics about those who use information systems to get back at their employers. In 62% of the cases, “a negative work-related event triggered most of the insiders’ actions.” In 82% of the cases, those who hacked their company “exhibited unusual behavior in the workplace prior to carrying out their activities.” 84% of attacks were motivated by a desire to seek revenge, and 85% of the attackers had a documented grievance against their employer or a co-worker. 96% of the insiders were men, and 30% had previously been arrested. 18% had been arrested for violent offences, 11% for drug or alcohol-related offences, and 11% for non-financial-fraud related theft.
PrivacyActivism has released a study of ChoicePoint and Acxiom, two of the U.S.’s largest data brokers. The study looks at accuracy of information and responsiveness to requests for reports.
It doesn’t look good.
From the press release: “100% of the eleven participants in the study discovered errors in background check reports provided by ChoicePoint. The majority of participants found errors in even the most basic biographical information: name, social security number, address and phone number (in 67% of Acxiom reports, 73% of ChoicePoint reports). Moreover, over 40% of participants did not receive their reports from Acxiom — and the ones who did had to wait an average of three months from the time they requested their information until they received it.”
I spoke with Deborah Pierce, the Executive Director of PrivacyActivism. She made a couple of interesting points.
First, it was very difficult for them to find a legal way to do this study. There are no mechanisms for any kind of oversight of the industry. They had to find companies who were doing background checks on employees anyway, and who felt that participating in this study with PrivacyActivism was important. Then those companies asked their employees if they wanted to anonymously participate in the study.
Second, they were surprised at just how bad the data is. The most shocking error was that two people out of eleven were listed as corporate directors of companies that they had never heard of. This can’t possibly be statistically meaningful, but it is certainly scary.
New timing attack against AES:
My explanation of timing attacks:
Ridiculous fearmongering about botnets:
Nuclear launch codes? Give me a break.
New paper on phishing by the Honeynet Project:
Clever social engineering attack via voicemail:
One library system is considering adding fingerprints to library cards:
The inside story behind the hacking of Paris Hilton’s T-Mobile cell phone.
Another data theft. This isn’t a loss; it’s a theft by a crime ring. It was also a pretty low-tech attack: “The suspects pulled up the account data while working inside their banks, then printed out screen captures of the information or wrote it out by hand, Lomia said. The data was then provided to a company called DRL Associates Inc., which had been set up as a front for the operation. DRL advertised itself as a deadbeat-locator service and as a collection agency, but was not properly licensed for those activities by the state, police said.”
David Card and Enrico Moretti, both economists at UC Berkeley, have published an interesting analysis of electronic voting machines and the 2004 election: “Does Voting Technology Affect Election Outcomes? Touch-screen Voting and the 2004 Presidential Election.”
An appeals court in Minnesota has ruled that the presence of encryption software on a computer may be viewed as evidence of criminal intent.
Text of the ruling:
Intelligent commentary by Jennifer Granick:
An analysis of the Witty Worm. Among other things, the researchers found the initial infection point (patient 0). They also believe that the attack was, at least in part, a deliberate cyber-attack on the U.S. military; an army base was deliberately targeted in the worm’s hotlist. And they suspect that the worm was written by someone working inside the intrusion-detection company ISS.
A major computer espionage case is breaking in Israel. “The companies suspected of commissioning the espionage, which was carried out by planting Trojan horse software in their competitors’ computers, include the satellite television company Yes, which is suspected of spying on cable television company HOT; cell-phone companies Pelephone and Cellcom, suspected of spying on their mutual rival Partner; and Mayer, which imports Volvos and Hondas to Israel and is suspected of spying on Champion Motors, importer of Audis and Volkswagens. Spy programs were also located in the computers of major companies such as Strauss-Elite, Shekem Electric and the business daily Globes.”
Battlefield RF sensors that look like rocks:
This kind of thing has been discussed for a while. One of the best discussions is still Martin Libicki’s paper from the mid-1990s, “The Mesh and the Net: Speculations on Armed Conflict in a Time of Free Silicon.”
I find the security measures that Mark Felt demanded of Bob Woodward to be fascinating.
Sudanese currency is printed on plain paper with very inconsistent color and image quality, and has no security features — not even serial numbers. How does that work? Because anyone who counterfeits it will be put in front of a firing squad and shot.
Scary TSA abuse of power:
In January, I wrote about the new DHS biometric ID cards.
In April I pointed to an EPIC analysis of the card.
In May, Phil Libin wrote a rather flawed commentary on the EPIC analysis on CNet.
We wrote a response.
And Libin responded to our response.
Two researchers from the Institute for Cryptology and IT-Security have generated PostScript files with identical MD5-sums but entirely different (but meaningful!) content.
Other MD-5 attacks:
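Verifying a claimed collision is mechanical: the two files must differ byte-for-byte yet hash to the same digest. A small checker using Python's standard library (the researchers' actual colliding PostScript files are not reproduced here):

```python
import hashlib

def is_md5_collision(data_a: bytes, data_b: bytes) -> bool:
    """True only when two *different* inputs share one MD5 digest."""
    if data_a == data_b:
        return False  # identical inputs trivially hash alike; not a collision
    return hashlib.md5(data_a).hexdigest() == hashlib.md5(data_b).hexdigest()

# Ordinary distinct inputs do not collide; only crafted pairs do:
print(is_md5_collision(b"contract version A", b"contract version B"))  # False
```

Run against the two PostScript files from the announcement, this check would return True — which is exactly why a matching MD5 sum can no longer prove two documents are the same.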
A fascinating law-journal article about defining access in cyberspace:
Torah security system that conforms with Jewish law:
Physicists often use “137” as the code to lock their briefcases: “Measured to be equal to 1/137.03599976, or approximately 1/137, [the fine-structure constant] has endowed the number 137 with a legendary status among physicists (it usually opens the combination locks on their briefcases).”
The new Pentium D will contain technology that can be used to support DRM.
Intel is denying it, but it sounds like they’re weaseling: “According to Intel VP Donald Whiteside, it is ‘an incorrect assertion that Intel has designed-in embedded DRM technologies into the Pentium D processor and the Intel 945 Express Chipset family.’ Whiteside insists they are simply working with vendors who use DRM to ‘design their products to be compatible with the Intel platforms.'”
I’ve already written about what a bad idea trusted traveler programs are:
The trusted traveler programs at various U.S. airports are all run by the TSA. A new program in Orlando Airport is run by the company Verified Identity Pass Inc.
I’ve already written about this company and what it’s doing.
And I’ve already written about the fallacy of confusing identification with security.
From an interview in Information Week:
“InformationWeek: What about security? Have you been paying as much attention to security as, say, Microsoft? You can debate whether or not they’ve been successful, but they’ve poured a lot of resources into it.
“Schmidt: More people to a bad architecture does not necessarily make a more secure system. Why don’t you define security so I can answer your question better?
“InformationWeek: I suppose it’s an issue of making the technology transparent enough that people can deploy it with confidence.
“Schmidt: Transparency is not necessarily the only way you achieve security. For example, part of the encryption algorithms are not typically made available to the open source community, because you don’t want people discovering flaws in the encryption.”
Actually, he’s wrong. Everything about an encryption algorithm should always be made available to everyone, because otherwise you’ll invariably have exploitable flaws in your encryption.
My essay on the topic:
In the U.S., medical privacy is largely governed by a 1996 law called HIPAA. Among many other provisions, HIPAA regulates the privacy and security surrounding electronic medical records. HIPAA specifies civil penalties against companies that don’t comply with the regulations, as well as criminal penalties against individuals and corporations who knowingly steal or misuse patient data.
The civil penalties have long been viewed as irrelevant by the healthcare industry. Now the criminal penalties have been gutted. The Justice Department has ruled that the criminal penalties apply to insurers, doctors, hospitals, and other providers — but not necessarily their employees or outsiders who steal personal health data. This means that if an employee mishandles personal data, he cannot be prosecuted under HIPAA unless his boss told him to do it. And the provider cannot be prosecuted unless it is official organization policy.
This is a complicated issue. Peter Swire worked extensively on this bill as the President’s Chief Counselor for Privacy, and I am going to quote him extensively. First, a story about someone who was convicted under the criminal part of this statute.
“In 2004 the U.S. Attorney in Seattle announced that Richard Gibson was being indicted for violating the HIPAA privacy law. Gibson was a phlebotomist (a lab assistant) in a hospital. While at work he accessed the medical records of a person with a terminal cancer condition. Gibson then got credit cards in the patient’s name and ran up over $9,000 in charges, notably for video game purchases. In a statement to the court, the patient said he ‘lost a year of life both mentally and physically dealing with the stress’ of dealing with collection agencies and other results of Gibson’s actions. Gibson signed a plea agreement and was sentenced to 16 months in jail.”
According to this Justice Department ruling, Gibson was wrongly convicted. I presume his attorney is working on the matter, and I hope he can be re-tried under our identity theft laws. But because Gibson (or someone else like him) was working in his official capacity, he cannot be prosecuted under HIPAA. And because Gibson (or someone like him) was doing something not authorized by his employer, the hospital cannot be prosecuted under HIPAA.
The healthcare industry has been opposed to HIPAA from the beginning, because it puts constraints on their business in the name of security and privacy. This ruling comes after intense lobbying by the industry at the Department of Health and Human Services and the Justice Department, and is the result of an HHS request for an opinion.
From Swire’s analysis of the Justice Department ruling: “For a law professor who teaches statutory interpretation, the OLC opinion is terribly frustrating to read. The opinion reads like a brief for one side of an argument. Even worse, it reads like a brief that knows it has the losing side but has to come out with a predetermined answer.”
I’ve been to my share of HIPAA security conferences. To the extent that big health is following the HIPAA law — and to a large extent, they’re waiting to see how it’s enforced — they are doing so because of the criminal penalties. They know that the civil penalties aren’t that large, and are a cost of doing business. But the criminal penalties were real. Now that they’re gone, the pressure on big health to protect patient privacy is greatly diminished.
Again Swire: “The simplest explanation for the bad OLC opinion is politics. Parts of the health care industry lobbied hard to cancel HIPAA in 2001. When President Bush decided to keep the privacy rule, quite possibly based on his sincere personal views, the industry efforts shifted direction. Industry pressure has stopped HHS from bringing a single civil case out of the 13,000 complaints. Now, after a U.S. Attorney’s office had the initiative to prosecute Mr. Gibson, senior officials in Washington have clamped down on criminal enforcement. The participation of senior political officials in the interpretation of a statute, rather than relying on staff attorneys, makes this political theory even more convincing.”
This kind of thing is bigger than the security of the healthcare data of Americans. Our administration is trying to collect more data in its attempt to fight terrorism. Part of that is convincing people — both Americans and foreigners — that this data will be protected. When we gut privacy protections because they might inconvenience business, we’re telling the world that privacy isn’t one of our core concerns.
If the administration doesn’t believe that we need to follow its medical data privacy rules, what makes you think they’re following the FISA rules?
Everyone — except those who like peace and quiet — thinks it’s a good idea to allow cell phone calls on airplanes, and the industry is working out the technical details. But the U.S. government is worried that terrorists might make telephone calls from airplanes and coordinate with accomplices on the ground, on another flight or seated elsewhere on the same plane. Or that they could use the system to remotely trigger an explosive device on an airplane.
This is beyond idiotic. Again and again, we hear the argument that a particular technology can be used for bad things, so we have to ban or control it. The problem is that when we ban or control a technology, we also deny ourselves some of the good things it can be used for. Security is always a trade-off. Almost all technologies can be used for both good and evil; in Beyond Fear, I call them “dual use” technologies. Most of the time, the good uses far outweigh the evil uses, and we’re much better off as a society embracing the good uses and dealing with the evil uses some other way.
We don’t ban cars because bank robbers can use them to get away faster. We don’t ban cell phones because drug dealers use them to arrange sales. We don’t ban money because kidnappers use it. And finally, we don’t ban cryptography because the bad guys use it to keep their communications secret. In all of these cases, the benefit to society of having the technology is much greater than the benefit to society of controlling, crippling, or banning the technology.
And, of course, security countermeasures that force the attackers to make a minor modification in their tactics aren’t very good trade-offs. Banning cell phones on airplanes only makes sense if the terrorists are planning to use cell phones on airplanes, and will give up and not bother with their attack because they can’t. If their plan doesn’t involve air-to-ground communications, or if it doesn’t involve air travel at all, then the security measure is a waste. And even worse, we denied ourselves all the good uses of the technology in the process.
Recently there have been a bunch of news articles about how lousy counterterrorism security is in the United States, how billions of dollars have been wasted on security since 9/11, and how much of what was purchased doesn’t work as advertised.
The first is from the May 8 New York Times:
“After spending more than $4.5 billion on screening devices to monitor the nation’s ports, borders, airports, mail and air, the federal government is moving to replace or alter much of the antiterrorism equipment, concluding that it is ineffective, unreliable or too expensive to operate.
“Many of the monitoring tools — intended to detect guns, explosives, and nuclear and biological weapons — were bought during the blitz in security spending after the attacks of Sept. 11, 2001.
“In its effort to create a virtual shield around America, the Department of Homeland Security now plans to spend billions of dollars more. Although some changes are being made because of technology that has emerged in the last couple of years, many of them are planned because devices currently in use have done little to improve the nation’s security, according to a review of agency documents and interviews with federal officials and outside experts.”
From another part of the article:
“Among the problems:
“Radiation monitors at ports and borders that cannot differentiate between radiation emitted by a nuclear bomb and naturally occurring radiation from everyday material like cat litter or ceramic tile.
“Air-monitoring equipment in major cities that is only marginally effective because not enough detectors were deployed and were sometimes not properly calibrated or installed. They also do not produce results for up to 36 hours — long after a biological attack would potentially infect thousands of people.
“Passenger-screening equipment at airports that auditors have found is no more likely than before federal screeners took over to detect whether someone is trying to carry a weapon or a bomb aboard a plane.
“Postal Service machines that test only a small percentage of mail and look for anthrax but no other biological agents.”
The Washington Post had a series of articles. The first lists some more problems:
“The contract to hire airport passenger screeners grew to $741 million from $104 million in less than a year. The screeners are failing to detect weapons at roughly the same rate as shortly after the attacks.
“The contract for airport bomb-detection machines ballooned to at least $1.2 billion from $508 million over 18 months. The machines have been hampered by high false-alarm rates.
“A contract for a computer network called US-VISIT to screen foreign visitors could cost taxpayers $10 billion. It relies on outdated technology that puts the project at risk.
“Radiation-detection machines worth a total of a half-billion dollars deployed to screen trucks and cargo containers at ports and borders have trouble distinguishing between highly enriched uranium and common household products. The problem has prompted costly plans to replace the machines.”
The second is about border security.
And more recently, a New York Times article on how lousy port security is.
There are a lot of reasons why all this is true: the problems of believing companies that have something to sell you, the difficulty of making technological security solutions work, the problems with making major security changes quickly, the mismanagement that comes from any large bureaucracy like the DHS, and the wastefulness of defending potential terrorist targets instead of broadly trying to deal with terrorism.
New York Times on port security:
Counterpane has a new Identity Management Service:
Counterpane’s white paper on identity management: <http://www.counterpane.com/cgi-bin/whitepaper.cgi>
Network World on outsourced security monitoring and Counterpane:
Schneier will be speaking at the World Summit for the Information Society (WSIS) preparatory meeting on Cybersecurity, on June 30th in Geneva:
There’s a new cryptographic result against Bluetooth. Yaniv Shaked and Avishai Wool of Tel Aviv University in Israel have figured out how to recover the PIN by eavesdropping on the pairing process.
Pairing is an important part of Bluetooth. It’s how two devices — a phone and a headset, for example — associate themselves with one another. They generate a shared secret that they use for all future communication. Pairing is why, when on a crowded subway, your Bluetooth devices don’t link up with all the other Bluetooth devices carried by everyone else.
According to the Bluetooth specification, PINs can be up to 128 bits long. Unfortunately, most manufacturers have standardized on a four-decimal-digit PIN. This attack can crack such a 4-digit PIN in less than 0.3 seconds on an old 450 MHz Pentium III computer, and in 0.06 seconds on a 3 GHz Pentium IV HT computer.
And it’s not just the PIN; the entire protocol was badly designed.
At first glance, this attack isn’t a big deal. It only works if you can eavesdrop on the pairing process. Pairing is something that occurs rarely, and generally in the safety of your home or office. But the authors have figured out how to force a pair of Bluetooth devices to repeat the pairing process, allowing them to eavesdrop on it. They pretend to be one of the two devices, and send a message to the other claiming to have forgotten the link key. This prompts the other device to discard the key, and the two then begin a new pairing session.
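The search itself is trivial: four decimal digits give only 10,000 candidate PINs, each testable offline against the recorded pairing exchange. Here is a sketch of the loop, with the cryptographic check abstracted into a callback — the placeholder is my assumption; the real attack rederives link keys from each candidate PIN and the eavesdropped random values using Bluetooth's SAFER+-based key-derivation functions:

```python
def crack_pin(matches_transcript):
    """Exhaust all 10,000 four-digit PINs against a recorded pairing.

    `matches_transcript(pin)` stands in for the real test: rederive the
    initialization and link keys from the candidate PIN plus the captured
    random values, then compare against the observed authentication
    messages. It is an opaque callback here, not the paper's actual code.
    """
    for n in range(10_000):
        pin = f"{n:04d}"          # "0000" through "9999"
        if matches_transcript(pin):
            return pin            # candidate reproduces the transcript
    return None                   # PIN longer than 4 digits, or bad capture

# Toy stand-in for the cryptographic check:
print(crack_pin(lambda pin: pin == "8532"))  # prints "8532"
```

Ten thousand key derivations is nothing for a modern CPU, which is why the reported timings are fractions of a second; a longer alphanumeric PIN would grow the search space exponentially.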
Taken together, this is an impressive result. I can’t be sure, but I believe it would allow an attacker to take control of someone’s Bluetooth devices. Certainly it allows an attacker to eavesdrop on someone’s Bluetooth network.
Combined with the long-range Bluetooth “sniper rifle,” Bluetooth has a serious security problem.
Bluetooth sniper rifle:
Password Safe is a free Windows password-storage utility. These days, anyone who uses the Web regularly needs too many passwords, and it’s impossible to remember them all. I have long advocated writing them all down on a piece of paper and putting it in your wallet.
I designed Password Safe as another solution. It’s a small program that encrypts all of your passwords using one passphrase. The program is easy to use, and isn’t bogged down by lots of unnecessary features. Security through simplicity.
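Conceptually, one passphrase is stretched into a key, and that key encrypts the whole password database. Here is a standard-library sketch of the key-derivation half of that idea — Password Safe itself used Blowfish and its own file format, so the algorithm and parameters below are illustrative, not the program's real ones:

```python
import hashlib
import os

def derive_database_key(passphrase: str, salt: bytes) -> bytes:
    """Stretch a human passphrase into a 256-bit database key.

    Illustrative scheme only: PBKDF2-HMAC-SHA256 with 100,000 rounds,
    which deliberately slows down offline passphrase guessing. This is
    NOT Password Safe's actual key-derivation scheme.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

salt = os.urandom(16)   # random per-database; stored in the clear in the header
key = derive_database_key("correct horse battery staple", salt)
assert len(key) == 32   # 256 bits: the key that encrypts every stored password
```

The per-database random salt is what makes precomputed dictionary tables useless: the same passphrase yields a different key for every database file.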
Password Safe 2.11 is now available.
Currently, Password Safe is an open-source project at SourceForge, and is run by Rony Shapiro. Thank you to him and to all the other programmers who worked on the project.
Password Safe page:
Note that my Password Safe is not the same as these PasswordSafes. (I should have picked a more obscure name for the program.)
It is the same as this, for the PocketPC:
Citigroup announced that it lost personal data on 3.9 million people. The data was on a set of backup tapes that were sent by UPS (a package delivery service) from point A and never arrived at point B.
This is a huge data loss, and even though it is unlikely that any bad guys got their hands on the data, it will have profound effects on the security of all our personal data.
It might seem that there has been an epidemic of personal-data losses recently, but that’s an illusion. What we’re seeing are the effects of a California law that requires companies to disclose losses or thefts of personal data. It’s always been happening; only now companies have to go public with it.
As a security expert, I like the California law for three reasons. One, data on actual intrusions is useful for research. Two, alerting individuals whose data is lost or stolen is a good idea. And three, increased public scrutiny leads companies to spend more effort protecting personal data.
Think of it as public shaming. Companies will spend money to avoid the PR cost of public shaming. Hence, security improves.
This works, but there’s an attenuation effect going on. As more of these events occur, the press is less likely to report them. When there’s less noise in the press, there’s less public shaming. And when there’s less public shaming, the amount of money companies are willing to spend to avoid it goes down.
This data loss has set a new bar for reporters. Data thefts affecting 50,000 individuals will no longer be news. They won’t be reported.
The notification of individuals also has an attenuation effect. I know people in California who have received a dozen notices about the loss of their personal data. When no identity theft follows, people start believing that it isn't really a problem. (By and large, they're right. Most data losses don't result in identity theft. But that doesn't mean it's not a problem.)
Public disclosure is good. But it’s not enough.
This one has been predicted for years. Someone breaks into your network, encrypts your data files, and then demands a ransom to hand over the key.
I don’t know how the attackers did it, but below is probably the best way. A worm could be programmed to do it.
1. Break into a computer.
2. Generate a random 256-bit file-encryption key.
3. Encrypt the file-encryption key with a common RSA public key.
4. Encrypt data files with the file-encryption key.
5. Wipe data files and file-encryption key.
6. Wipe all free space on the drive.
7. Output a file containing the RSA-encrypted file-encryption key.
8. Demand ransom.
9. Receive ransom.
10. Receive encrypted file-encryption key.
11. Decrypt it and send it back.
Step 9 is the hardest, and it’s where you’re likely to get caught. I don’t know much about anonymous money transfer, but I don’t think Swiss bank accounts have the anonymity they used to.
You also might have to prove that you can decrypt the data, so an easy modification is to encrypt a piece of the data with another file-encryption key so you can prove to the victim that you have the RSA private key.
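The structure in steps 2-4 and 10-11 is standard hybrid encryption: a fresh symmetric key per victim, wrapped under one long-lived public key so only the attacker can unwrap it. A toy sketch of that structure, using textbook RSA with tiny primes and a hash-based keystream in place of a real cipher (deliberately insecure, for exposition only):

```python
# Toy illustration of the hybrid-encryption structure in the steps above.
# Textbook RSA with tiny primes -- NOT secure, for exposition only.
import hashlib
import os

# Long-lived RSA key pair (toy parameters): n = 61 * 53.
n = 3233
e = 17      # public exponent
d = 2753    # private exponent: d * e == 1 (mod (61-1)*(53-1))

def keystream(key: int, length: int) -> bytes:
    # Derive a pseudorandom keystream from the file key
    # (a stand-in for a real symmetric cipher such as AES).
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key.to_bytes(4, "big") + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# Steps 2-4: generate a random file key, wrap it under the public key,
# and encrypt the data with it.
file_key = int.from_bytes(os.urandom(2), "big") % n
wrapped_key = pow(file_key, e, n)   # only the holder of d can unwrap this
plaintext = b"victim's data files"
ciphertext = xor(plaintext, keystream(file_key, len(plaintext)))

# Steps 10-11: the key holder unwraps the file key and recovers the data.
recovered_key = pow(wrapped_key, d, n)
assert recovered_key == file_key
assert xor(ciphertext, keystream(recovered_key, len(ciphertext))) == plaintext
```

Because only the wrapped key leaves the machine, the attacker never has to embed a decryption secret in the worm itself.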
Internet attacks have changed over the last couple of years. They’re no longer about hackers. They’re about criminals. And we should expect to see more of this sort of thing in the future.
This kind of thing has been predicted for years:
Earlier this month, there was an anthrax scare at the Indonesian embassy in Australia. Someone sent the embassy some white powder in an envelope, which was scary enough. Then it tested positive for Bacillus. The building was decontaminated, and the staff was quarantined for twelve hours, by which time tests had come back negative for anthrax.
A lot of thought went into this false alarm. The attackers obviously knew that their white powder would be quickly tested for the presence of a bacterium of the genus Bacillus (of which anthrax is a member), but that the bacteria would have to be cultured for a couple of days before a more exact identification could be made. So even without any anthrax, they managed to cause days of terror.
At a guess, this incident had something to do with Schapelle Corby (yet another security-related story). Corby was arrested in Bali for smuggling drugs into Indonesia. Her defense, widely believed in Australia, was that she was an unwitting dupe of the real drug smugglers, who supposedly work as airport baggage handlers, slipping packages into checked baggage and removing them at the far end before the bags reach the claim area. In any case, Bali has very strict drug laws, and Corby was recently convicted in what many Australians consider a miscarriage of justice. There have been news reports saying that there is no connection, but it just seems too obvious.
350 false alarms:
From: “Dave Mortensen” <drmortflash.net>
Subject: Your May 10 Newsday article.
I found your call for a curb on electronic surveillance abuses very interesting, but want to point out a fairly common misconception regarding surveillance and use of evidence.
While it may be illegal to spy on people without a warrant, the fact is that law enforcement officials and private investigators will resort to it if they feel it is necessary, knowing full well that they cannot reveal how they obtained the information and that they simply cannot submit evidence obtained in that manner in a prosecution or civil suit.
Finding out something via these intrusions and not being able to use information gathered “illegally” is not necessarily a serious impediment to an investigation. There can be significant strategic value in simply obtaining information about the existence of other potential evidence or witnesses that provide even better evidence from which an appropriate trail of acquisition and custody can be shown.
As an analogy, consider what happens when an attorney/collection firm is determined to find the "hidden" assets of a person who has an outstanding judgment or who has filed for bankruptcy protection. The lawyer assumes fraud is going on, so the firm hires an investigator. That private investigator doesn't have to get a warrant; all he or she has to do is find the money or the assets. An illegal phone or computer tap is placed, and eventually the investigator confirms that the hidden assets exist. That alone is enough. An "anonymous" tip provides the firm with information that now has the victim over a federal perjury barrel: admit it and pay up, or deny it and the information goes to the U.S. Attorney's office (or the IRS).
More widespread are illegal networks of information brokers, much like Orazio Lembo, Jr., who had bank employees (managers) and a NJ Dept of Employment manager on his string for four years, providing whatever his “clients” asked for and paying the insiders $10 a hit. Lembo’s clients included attorneys and collections firms. He made millions playing detective. His network of contacts face years of prison time. Who knows what the law firms and collections companies that made him rich are up against — maybe nothing.
And since prevention, not just prosecution, is one of the motives in anti-terrorism issues, the balancing act the authorities play in deciding when and if to obtain a warrant leaves an enormous number of privacy violation victims who may never be charged with anything. Given the fact that the information gathered will eventually be put into some civilian data-collection and analysis system (and potentially be accessed without authorization or stolen as in the Seisint case), there really is no prospect for privacy protection in the current climate.
The “corresponding mechanisms to curb abuse” you call for should include severe penalties for not only law enforcement breaches of search warrant law, but civilian “detectives” and firms who routinely ask others to gather information about people.
From: Paul Schumacher <pschoptonline.net>
Subject: Re: Detecting Nuclear Material in Transit
I collect, study and photograph uranium and thorium minerals as a hobby (http://www.uraniumminerals.com). One of the things I do is to measure the radioactivity of each specimen. I use a digital Geiger counter at 2.5 cm from the specimen.
I have some specimens that are quite hot, one reaching 150 microsieverts per hour. Others are barely above background count, and it takes half an hour of averaging to get a good reading on them. Purified uranium is less radioactive, gram for gram, than the uranium in these specimens, because it lacks many of the daughter products of uranium decay (such as radium).
Shielding will help, but the effectiveness of shielding depends on mass. A nuclear weapon will need to be encased in at least ten centimeters of lead to be shielded from casual detection. This will still result in a very heavy package to be shipped.
Instead of shielding the bomb from detection, the adversary will more likely smuggle it in either in the same manner as the Mexican ‘immigrants’ do, or by submarine, like the German agents during WW2. The more sensors we put up, the more alternative methods become attractive.
From this we can conclude:
1. A newly manufactured bomb of good design will produce little radiation.
2. Shielding can make its detection difficult, but the very mass of the shielding will betray the weapon’s presence.
3. Fixed location detectors will simply be bypassed.
4. If we cannot secure our borders from drug and 'immigrant' smugglers, how can we protect against WMDs?
5. If we do succeed in 100% detection of all nuclear material, regardless of how it is shielded, it does nothing to stop an attack using chemical or biological agents.
Now add into all that the number of tons of food, goods and mail coming into even a city of 25,000, and the goods, mail and trash leaving it each day.
Now think of our larger cities. To cause major loss of life and major economic damage, a nuclear terror attack would not have to occur in the heart of a city. And if a weapon is detected and stopped, what is to prevent the attackers from monitoring it remotely and detonating it before it can be disarmed?
It is the same problem as stopping a car bomb. Our ability to prevent that is lacking; a nuclear weapon simply raises the result to a much higher scale of destruction.
While useful, radiation detectors by themselves will have little effect on stopping a nuclear terror attack. They must be a small component of a much better integrated system of security.
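The shielding trade-off in point 2 of the list above can be put in rough numbers. Assuming textbook figures (a linear attenuation coefficient of about 0.8 per cm for ~1 MeV gammas in lead, which in practice varies with energy, and lead's density of 11.34 g/cm^3), a quick calculation shows why effective shielding is so heavy:

```python
# Rough numbers for point 2: shielding works, but its mass gives it away.
import math

MU_LEAD = 0.8      # 1/cm, assumed figure for ~1 MeV gammas in lead
RHO_LEAD = 11.34   # g/cm^3, density of lead

def transmitted_fraction(thickness_cm: float) -> float:
    # Exponential attenuation: I = I0 * exp(-mu * x)
    return math.exp(-MU_LEAD * thickness_cm)

def shell_mass_kg(inner_radius_cm: float, thickness_cm: float) -> float:
    # Mass of a spherical lead shell around a device of the given radius.
    r1 = inner_radius_cm
    r2 = inner_radius_cm + thickness_cm
    volume_cm3 = 4 / 3 * math.pi * (r2**3 - r1**3)
    return volume_cm3 * RHO_LEAD / 1000

# Ten centimeters of lead cuts ~1 MeV gammas by a factor of roughly 3000,
# but a shell that thick around a 20 cm device weighs about 900 kg.
print(transmitted_fraction(10), shell_mass_kg(20, 10))
```

Under these assumed values, the shield that defeats a casual gamma detector weighs the better part of a metric ton, which is exactly the mass signature the letter says would betray the package.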
From: Rich Wilson <wk633yahoo.com>
Subject: Re: REAL ID
It will be interesting to see how this addresses various “fringe” groups:
1) People with no driver’s license. My wife doesn’t have one. I didn’t bother to get one until I was 26. My wife not having one proved interesting with a police officer threatened to give her a jay-walking ticket. When she didn’t seem to care, he pointed out it would affect her driving record. She then cared even less!
2) People with a P.O. box as an address. Santa Barbara has a sizable community of “R.V. dwellers” who roam from parking spot to parking spot, despite city efforts to harass them out of sight.
3) People with revoked licenses. Perhaps that will be handled by changing their status in the magic DB?
4) People with residences in multiple states. REAL ID doesn't allow for licenses from multiple states, and currently at least some states require residents to hold that state's license.
From: Petri Aukia <petriaukia.com>
Subject: Re: REAL ID
There is a subtle difference you have not alluded to with regards to the European and American driver’s licenses and their privacy implications.
Finnish and French driver's licenses (and most likely those of all other EU countries) do not show the home address of the driver. They serve to document your existence, name, photo, signature, Social Security number, and the types of vehicles you are allowed to drive. Pictograms and standardized numbering and placement of each datum are used so that a patrol officer can read a license from any EU country.
Each country has a mechanism to map from the Social Security number or the local equivalent to the current home address of the driver, but this is available only to the government and the companies you have given the right to know of your address (magazines, newspapers, and the like).
Subject: Europe and Identity Theft
In addition to the European legal frameworks protecting data that you mention (which are not as complete or as prevalent across the union as one might wish), the very notion of identity theft is almost unknown in Europe. There are many causes, most linked to the limited benefit you can draw from an "identity" by itself, including the absence of credit ratings as they exist in the U.S. and different procedures for opening bank accounts and accessing their funds.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Comments on CRYPTO-GRAM should be sent to email@example.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.
Copyright (c) 2005 by Bruce Schneier.