June 15, 2002
by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org.
Copyright (c) 2002 by Counterpane Internet Security, Inc.
In this issue:
- Fixing Intelligence Failures
- Crypto-Gram Reprints
- Counterpane News
- More on Secrecy and Security
- Comments from Readers
Fixing Intelligence Failures
Could the intelligence community have connected the dots? Why didn’t anyone connect the dots? How can we make sure we connect the dots next time? Dot connecting is the metaphor of the moment in Washington, as the various politicians scramble to make sure that 1) their pet ideas for improving domestic security are adopted, and 2) they don’t get blamed for any dot connection failures that could have prevented 9/11.
Unfortunately, it’s the wrong metaphor. We all know how to connect the dots. They’re right there on the page, and they’re all numbered. All you have to do is move your crayon from one dot to another, and when you’re done you’ve drawn a lion. It’s so easy a three-year-old could do it; what’s wrong with the FBI and the CIA?
The problem is that the dots can only be numbered after the fact. With the benefit of hindsight, it’s easy to draw lines from people in flight school here, to secret meetings in foreign countries there, over to interesting tips from foreign governments, and then to INS records. Before 9/11 it’s not so easy. Rather than thinking of intelligence as a simple connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Or a random-dot stereogram. Is it a lion, a tree, a cast iron stove, or just an unintelligible mess of dots? You try and figure it out.
This isn’t to say that the United States didn’t have some spectacular failures in analysis leading up to 9/11. Way back in the 30 September 2001 issue of Crypto-Gram, I wrote: “In what I am sure is the mother of all investigations, the CIA, NSA, and FBI have uncovered all sorts of data from their files, data that clearly indicates that an attack was being planned. Maybe it even clearly indicates the nature of the attack, or the date. I’m sure lots of information is there, in files, intercepts, computer memory.” I was guessing there. It seems that there was more than I thought.
Given the bits of information that have been discussed in the press, I would have liked to think that we could have prevented this one, that there was a single Middle Eastern Terrorism desk somewhere inside the intelligence community whose job it was to stay on top of all of this. It seems that we couldn’t, and that there wasn’t. A budget issue, most likely.
Still, I think the “whose fault is it?” witch hunt is a bit much. Not that I mind seeing George Bush on the defensive. I’ve gotten sick of his “we’re at war, and if you criticize me you’re being unpatriotic” nonsense, and I think the enormous damage John Ashcroft has done to our nation’s freedoms and liberties will take a generation and another Warren Court to fix. But all this finger-pointing between the CIA and FBI is childish, and I’m embarrassed by the Democrats who are pushing through their own poorly thought out security proposals so they’re not viewed in the polls as being soft on terrorism.
My preference is for less politics and more intelligent discussion. And I’d rather see the discussion center on how to improve things for next time, rather than on who gets the blame for this time. So, in the spirit of bipartisanship (there are plenty of nitwits in both parties), here are some points for discussion:
1. It’s not about data collection; it’s about data analysis. Again from the 30 September 2001 issue of Crypto-Gram: “Demands for even more surveillance miss the point. The problem is not obtaining data, it’s deciding which data is worth analyzing and then interpreting it. Everyone already leaves a wide audit trail as we go through life, and law enforcement can already access those records with search warrants [and subpoenas]. The FBI quickly pieced together the terrorists’ identities and the last few months of their lives, once they knew where to look. If they had thrown up their hands and said that they couldn’t figure out who did it or how, they might have a case for needing more surveillance data. But they didn’t, and they don’t.”
2. Security decisions need to be made as close to the source as possible. This has all sorts of implications: airport X-ray machines should be right next to the departure gates, like they are in some European airports; bomb target decisions should be made by the generals on the ground in the war zone, not by some bureaucrat in Washington; and investigation approvals should be granted by the FBI office that’s closest to the investigation. This mode of operation has more opportunities for abuse, so oversight is vital. But it is also more robust, and the best way to make things work. (The U.S. Marine Corps understands this principle; it’s the heart of their chain of command rules.)
3. Data correlation needs to happen as far away from the sources as possible. Good intelligence involves finding meaning amongst enormous reams of irrelevant data, and then organizing all those disparate pieces of information into coherent predictions about what will happen next. It requires smart people who can see connections, and access to information from many different branches of government. It can’t be done by the various individual pieces of the bureaucracy, whether it be the CIA, FBI, NSA, INS, Coast Guard, etc. The whole picture is larger than any of them, and each one only has access to a small piece.
4. Intelligence and law enforcement have fundamentally different missions. The FBI’s model of operation—investigation of past crimes—does not lend itself to an intelligence paradigm: prediction of future events. On the other hand, the CIA is prohibited by law from spying on citizens. Expecting the FBI to become a domestic CIA is a terrible idea; the missions are just too different and that’s too much power to consolidate under one roof. Turning the CIA into a domestic intelligence agency is an equally terrible idea; the tactics that they regularly use abroad are unconstitutional here.
5. Don’t forget old-fashioned intelligence gathering. Enough with the Echelon-like NSA programs where everything and anything gets sucked into an enormous electronic maw, never to be looked at again. Lots of Americans managed to become part of Al Qaeda (a 20-year-old Californian did it, for crying out loud); why weren’t any of them feeding intelligence to the CIA? Get out in the field and do your jobs.
6. Organizations with investigative powers require constant oversight. If we want to formalize a domestic intelligence agency, we are going to need to be very careful about how we do it. Many of the checks and balances that Ashcroft is discarding were put in place to prevent abuse. And abuse is rampant—at the federal, state, and local levels. Just because everyone is feeling good about the police today doesn’t mean that things won’t change in the future. They always do.
7. Fundamental changes in how the United States copes with domestic terrorism require, um, fundamental changes. Much as the Bush Administration would like to ignore the constitutional issues surrounding some of their proposals, those issues are real. Much of what the Israeli government does to combat terrorism in its country, even some of what the British government does, is unconstitutional in the United States. Security is never absolute; it always involves tradeoffs. If we’re going to institute domestic passports, arrest people in secret and deny them any rights, place people with Arab last names under continuous harassment, or methodically track everyone’s financial dealings, we’re going to have to rewrite the Constitution. At the very least, we need to have a frank and candid debate about what we’re getting for what we’re giving up. People might want to live in a police state, but let them at least decide willingly to live in a police state. My opinion has been that it is largely unnecessary to trade civil liberties for security, and that the best security measures—reinforcing the airplane cockpit door, putting barricades and guards around important buildings, improving authentication for telephone and Internet banking—have no effect on civil liberties. Broad surveillance is a mark of bad security.
All in all, I’m not sure how the Department of Homeland Security is going to help with any of this. Taking a bunch of ineffectual little bureaucracies and lumping them together into a single galumptious bureaucracy doesn’t seem like a step in the right direction. Leaving the FBI and CIA out of the mix—the largest sources of both valuable information and turf-based problems—doesn’t help, either. And if the individual organizations squabble and refuse to share information, reshuffling the chain of command isn’t really going to make any difference—it’ll just add needless layers of management. And don’t forget the $37 billion this is all supposed to cost, assuming there aren’t the usual massive cost overruns. Couldn’t we better spend that money teaching Arabic to case officers, hiring investigators, and doing various things that actually will make a difference?
The problems are about politics and policy, and not about form and structure. Fix the former, and fixing the latter becomes easy. Change the latter without fixing the former, and nothing will change.
I’m not denying the need for some domestic intelligence capability. We need something to respond to future domestic threats. I’m not happy with this conclusion, but I think it may be the best of a bunch of bad choices. Given this, the thing to do is make sure we approach that choice correctly, paying attention to constitutional protections, respecting privacy and civil liberty, and minimizing the inevitable abuses of power.
My original articles:
Crypto-Gram Reprints

Crypto-Gram is currently in its fifth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
Honeypots and the Honeynet Project
The Data Encryption Standard (DES):
The internationalization of cryptography policy:
The new breeds of viruses, worms, and other malware:
Timing attacks, power analysis, and other “side-channel” attacks against cryptosystems:
More biometric articles:
And yet another:
This is a particularly good article about failures in biometric sensors:
And another one about fingerprints:
The Matsumoto paper I talked about last month is here:
And there are some news stories on my story in Crypto-Gram. (I think this counts as my first “scoop.”)
In response, various fingerprint companies have issued statements claiming that they’re not vulnerable to the attack for a variety of reasons. Remember the point of the story, though. Matsumoto is not a professional forensic scientist. Even so, with a little bit of ingenuity and some common household products, he was able to reliably defeat eleven commercial fingerprint readers from latent fingerprints. If he could do this, what would he do with a budget of $50? What could he do with a few more weeks of thinking? What could a professional in the field do? Fingerprint-sensor vendors have long claimed that their products could not be spoofed. Why would anyone believe them when they say it now?
And as long as we’re on the topic, consider whether Matsumoto’s procedure for making gummi fingers could be used to leave a latent fingerprint that might fool the police.
High-speed worms, and a discussion on defenses:
Virtual credit cards. Randomized numbers for safer online shopping. Done correctly, this is a really good idea.
Remember, the real risks here are not sending credit card numbers in the clear over the Internet; they’re the databases of credit card numbers stored in merchant computers.
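Done correctly, the scheme is simple: the issuer mints a random, Luhn-valid number that maps to the real account and dies after one authorization, so a number lifted from a merchant database is worthless. Here is a minimal sketch of that idea; the prefix, class, and flow are all illustrative assumptions, not any real issuer’s implementation.

```python
import secrets

def luhn_check_digit(payload: str) -> str:
    """Compute the Luhn check digit for the first 15 digits."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:  # these land in the doubled positions of the full number
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

class VirtualCardIssuer:
    """Maps random one-time numbers to the real account; each works once."""

    def __init__(self, real_account: str):
        self.real_account = real_account
        self.active = set()

    def mint(self) -> str:
        payload = "4" + "".join(secrets.choice("0123456789") for _ in range(14))
        number = payload + luhn_check_digit(payload)
        self.active.add(number)
        return number

    def authorize(self, number: str) -> bool:
        # A number stolen from a merchant database has already been spent
        # (or was never minted), so it buys the thief nothing.
        if number in self.active:
            self.active.discard(number)
            return True
        return False
```

The point is that the merchant’s database never holds a reusable number, which addresses exactly the stored-database risk, not the transmission risk.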
Airport face scanners a dismal failure:
The system failed to correctly identify airport employees 53 percent of the time. And this is with standardized, high-quality photographs to start with. Imagine the system trying to work with grainy photographs of suspected terrorists.
A hardware keystroke logger, for computer eavesdropping. This nifty device sits between the keyboard and the computer, where no one will think of looking for it, and captures keystrokes for later analysis. Every spy needs one.
Keystroke logging was used by the FBI to catch two Russian hackers. No details on whether it was hardware or software, or administered locally or remotely.
Video forgery. Make video recordings of people saying things they’ve never said:
NIST Report: The Economic Impact of Role-Based Access Control. Another example of the interplay of economics and security.
Article on the buying and selling of stolen credit card numbers:
I believe that everyone has, in his wallet right now, a credit card number that has been stolen. Almost none of those stolen numbers will ever be bought, sold, or used. There are just too many large databases of credit card numbers on Internet-accessible computers, and they’re probably all insecure. It’s a good thing this isn’t my liability.
“Photo flash” attacks against smart cards. (Yet another way to break smart cards.)
And yet another way to eavesdrop on CRT displays:
Politically motivated hacking:
Microsoft’s “Trustworthy Computing” initiative seems to be just a scam so the company can avoid antitrust remedies by claiming that opening their systems is a security risk. Pity.
A comment on Slashdot sums it up best: “Microsoft has therefore taken the position that their code is so bad that it must be kept secret to keep people from being killed by it. Windows—the Pinto of the 21st century.”
Most interesting of the two articles above is the second eWeek article, where Microsoft’s Allchin mentions “at least one protocol and two APIs that it plans to withhold from public disclosure under the security carve-out.” The protocol is Message Queuing. Apparently, Microsoft has discovered a security flaw in the Message Queuing protocol so bad that it can’t be fixed without breaking existing applications. So, until such time as they can field a backwards-compatible fix, they’re going to hope no one else discovers it. (This is not a wholly unreasonable decision; security researchers have made the same decision in the past.) Of course, Allchin has undermined this decision by publicly naming the protocol. That’s more than enough for someone sufficiently motivated to find the flaw. In the classified world, you can get your clearance pulled for blabbing like that.
Jim Allchin’s testimony:
At the same time, Microsoft tries to convince the Pentagon that open source software threatens security, and using proprietary Microsoft software is much safer (and better for the economy, to boot). This article is hilarious in light of the above.
A white paper that makes the case that open source software is less secure. However, it’s from a group that has received funding from Microsoft.
Good essay on security negligence:
Thousands illegally obtain fraudulent Social Security numbers in the United States. Any reason to believe a national ID card would be any less abuse-prone?
Attacks against Yahoo:
Defeating copy-protected CDs with a magic marker. What bozos design these security systems, anyway?
An analysis of Bernstein’s factoring proposal. I agree with this paper; there’s less in the proposal than there appears to be.
Interview with a senior architect of the Echelon project:
Passwords still suck:
I’ve already written about how UNICODE can evade IDSs. Here’s another essay on the topic:
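The class of bug is easy to sketch: the IDS pattern-matches on raw bytes, but the server behind it decodes permissively, so an “overlong” UTF-8 encoding of a character slips past the signature and still decodes to the dangerous string. The toy IDS and decoder below are my own illustration, not any particular product.

```python
SIGNATURE = b"../"   # what a naive IDS greps for in the raw request bytes

def naive_ids_allows(request: bytes) -> bool:
    return SIGNATURE not in request

def lenient_decode(data: bytes) -> str:
    """A permissive decoder in the style of buggy old servers: it accepts
    'overlong' two-byte UTF-8 sequences that a strict decoder rejects."""
    out = []
    i = 0
    while i < len(data):
        b = data[i]
        if b < 0x80:
            out.append(chr(b))
            i += 1
        elif b & 0xE0 == 0xC0 and i + 1 < len(data):
            # Two-byte sequence -- with no check that it isn't overlong.
            out.append(chr(((b & 0x1F) << 6) | (data[i + 1] & 0x3F)))
            i += 2
        else:
            out.append("\ufffd")
            i += 1
    return "".join(out)

# 0xC0 0xAF is an overlong encoding of '/': the IDS never sees "../",
# but the permissive server does.
attack = b"GET /..\xc0\xaf..\xc0\xafetc/passwd"
```

A strict decoder (Python’s own `bytes.decode("utf-8")`, for instance) rejects `\xc0\xaf` outright, which is why the UTF-8 specification forbids overlong forms.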
FBI agents sell a stock short, and then leak unpleasant personal information about the corporate officers. Other companies are blackmailed with threat of the same.
Thomas Friedman writes in the New York Times: “Silicon Valley staunchly opposed the Clipper Chip, which would have given the government a back-door key to all U.S. encrypted data. Now some wonder whether they shouldn’t have opposed it. John Doerr, the venture capitalist, said, ‘Culturally, the Valley was already maturing before 9/11, but since then it’s definitely developed a deeper respect for leaders and government institutions.'” That’s completely wrong. We opposed Clipper not because we don’t trust the government, but because we don’t trust the technology. Respect is not sufficient to make complex systems secure.
Via Usenet, so no annoying registration:
Interesting paper on the relationship between user interface and security:
Protecting your privacy online, a four-part article:
Japanese hackers arrested for industrial spying.
The story of a rather lame hacking contest gone badly wrong:
A report on something called a Tactical Web Page, which purports to be a secret combat Web site, through which American commanders in the United States can direct troops in Afghanistan. My favorite quote: “‘There have been a few instances when unidentified computers have tried to get in, in which case we throw up additional firewalls,’ Lt. Col. Bryan Dyer said.” Is it just me, or does Lt. Col. Dyer come from I-Don’t-Get-It Land?
Yet more comments from the clueless. In response to the report that someone hacked into Experian’s database and stole 13,000 credit-card numbers, a PR flack said: “Our files are protected by state-of-the-art, Star Wars-style security and encryption technology.”
And this review of PGP. “PGP’s basic encryption is about 10 times higher than that of other apps”—but only if you compare RSA with triple-DES. “Fortunately, PGP includes a primer on encryption principles. Read the primer carefully, and you’ll rev right up to speed.” Maybe the reviewer should read it, too.
VeriSign is offering companies the ability to outsource their compliance with U.S. Internet wiretap regulations. They will comply with government wiretap court orders, so companies don’t have to redesign their infrastructure. I’ve seen a lot of VeriSign-bashing over this, but I think it’s a fine example of a company stepping in and filling a market need. VeriSign wants to be the trusted infrastructure company, and this is just another aspect of that. Complain about the U.S. law, not the vehicle for compliance.
Amusing story of lousy passwords at the New York Times:
NIST wants comments on Part 1 of their Key Management Guidelines:
Hacking for good. Password cracked; Norwegian history saved!
Microsoft’s editorial on government-mandated digital copy protection. I actually agree with a lot of this.
Hacking the Xbox. Interesting stuff, especially the fact that the security is based on a single global secret. Hack one box, hack them all.
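The alternative design is worth spelling out. This sketch (made-up keys and names, not the actual Xbox internals) contrasts a shared global secret with per-device keys derived from a master key the hardware never holds:

```python
import hashlib
import hmac

GLOBAL_SECRET = b"the same secret burned into every console"

def unlock_with_global(extracted_key: bytes, target_console: str) -> bool:
    # The target console is irrelevant: one extracted key opens them all.
    return extracted_key == GLOBAL_SECRET

MASTER_KEY = b"issuer-held master key, never shipped in hardware"

def per_device_key(device_id: str) -> bytes:
    # HMAC-derived per-device key: extracting console A's key reveals
    # nothing about console B's key, and nothing about the master key.
    return hmac.new(MASTER_KEY, device_id.encode(), hashlib.sha256).digest()
```

With the global scheme, one successful hardware attack breaks every console ever shipped; with derivation, the same attack breaks exactly one.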
Using a cell phone as a room bug:
Counterpane News

More news on the VeriSign/Counterpane agreement:
A good article about Counterpane from Computerworld:
Schneier is speaking about Counterpane monitoring in California (Anaheim and Irvine, both near Los Angeles), Dallas, and Cincinnati. For details see:
More on Secrecy and Security
In case you think this is an easy issue, here’s something to mull over.
The New York Times Magazine recently carried a fascinating article on the terrorist threat—myths and realities—of a nuclear bomb. The article talks about one atomic researcher’s nightmare scenario: a homemade nuclear explosive device being detonated in New York. The researcher suggests that the attackers would detonate the device inside a tunnel, because that “would vaporize many tons of concrete and dirt and river water into an enduring cloud of lethal fallout.”
My reactions were 1) now that’s clever, 2) I never thought of that, and 3) should he have published that? I have to admit that I don’t know.
Information about poisons is kept relatively secret. I’m sure there are sites on the Internet—however, we all know how accurate the Internet can be—but doctors are generally reluctant to divulge recipes to the general public. Those in the field know, but there’s little benefit to announcing the information to every would-be killer.
Nuclear information is different. It’s possible that anyone with enough skill to create a home-made nuclear bomb already knows about the “detonate it in a tunnel” trick, and a dozen other tricks that I, without any specialized education, don’t. It’s also possible that the average amateur nuclear warrior would never think of it. I don’t know.
Comments from Readers
From: Bob Chase <bchase@scoreboardinc.com>
Subject: Secrecy, Security, and Obscurity
I accept your general argument wholeheartedly, but this corollary has a gotcha. Secrets can work in parallel or in serial.
For instance, suppose we want to gain root on a box to which we have physical access. There are several “secrets” that we might try (boot from CD, special boot parameters, guessing screen saver passwords, etc). If any of these methods works, then security is compromised; each of these methods works in parallel.
Suppose instead that the box is in a room with a secret combination which is in a building with a secret combination. We have a serial combination of secrets, which will continue to hold—in a weaker state—if any one secret is known.
Again, I think what you are saying is important, and needs to be said! And the systems that you use as examples are good examples of bad combinations of parallel secrets! I simply think that there is a case for strengthening security through multiple serial secrets.
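The letter’s distinction can be made quantitative. With made-up leak probabilities for each secret, parallel secrets multiply the attacker’s chances while serial secrets multiply his work:

```python
from math import prod

def parallel_compromise(p):
    """Parallel secrets: leaking ANY one of them breaks the system."""
    return 1 - prod(1 - x for x in p)

def serial_compromise(p):
    """Serial secrets: the attacker must defeat ALL of them in turn."""
    return prod(p)

leaks = [0.1, 0.1, 0.1]
# three parallel secrets: 1 - 0.9**3 = 0.271 chance of compromise
# three serial secrets:       0.1**3 = 0.001 chance of compromise
```

Same three secrets, same individual leak rates, a 270-fold difference in outcome, which is exactly the gotcha the letter describes.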
From: Pekka Pihlajasaari <pekka@data.co.za>
Subject: Secrecy, Security, and Obscurity
Kerckhoffs’ principle “in a well-designed cryptographic system, only the key needs to be secret” is not necessarily reflexive, and makes no statement about aspects other than the key. While non-disclosure may restrict the validation of the algorithms, it does not necessarily reduce the security of the system.
Whereas a key is an independent value, a cryptographic algorithm may result in artifacts, and information about it could be inferred through analysis of the encrypted text. As such, security based on secrecy of the algorithm may be discounted in an analysis of a system. There is no need to invoke quality or correctness arguments to show the futility of suppressing the choice of the algorithm as a security measure.
This principle can be generalised to other environments when the algorithm forms a part of a cryptographic system where the economic value is exclusively embodied in the key. In many cases competitive advantage is derived from the algorithm in the form of limiting markets through export restrictions, or reduced credit default exposure. Such systems provide benefits outside the scope of pure cryptographic protocols and should be analysed separately.
Your observation “the fewer secrets a system has, the more secure it is” is a corollary. If the secrets are keys, then they may be coalesced into a single key, giving the traditional interpretation of a system with an algorithm and a single key. Otherwise, the system has multiple secrets that are not keys, but components of a business process (the algorithm) that is inferable and not possible to secure. Given sufficient willing participants it is possible to determine the threshold of an airport security sensor, and even the placement of mines in a mine field, allowing reverse-engineering of a security process.
It may be better to state that if a system contains only a single secret, then full disclosure does not affect its security and simply introduces obscurity, whereas if other information needs to be secured, secrecy is necessary, but not sufficient for continued protection against disclosure through inference.
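The letter’s closing point about inferring a sensor’s threshold is just binary search. The detector and numbers below are invented for illustration: each trial is one willing participant walking through with a known amount of metal.

```python
SECRET_THRESHOLD_GRAMS = 137.0  # the "secret" the operators rely on

def alarm(metal_grams: float) -> bool:
    """The sensor: fires at or above its (secret) threshold."""
    return metal_grams >= SECRET_THRESHOLD_GRAMS

def infer_threshold(trials: int = 20) -> float:
    """Binary search over the sensor's response; each trial is one walk
    through the detector."""
    lo, hi = 0.0, 1000.0  # assumed bounds on the threshold
    for _ in range(trials):
        mid = (lo + hi) / 2
        if alarm(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

Twenty walks pin a threshold anywhere in a kilogram range down to about a milligram (1000/2**20 grams), which is why a process whose security rests on such a secret is inferable rather than securable.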
Subject: Fun with Fingerprint Readers
Matsumoto-san is most likely not using what Americans would purchase in groceries as gelatin, which is definitely not stable at room temperature or higher. Have you ever seen a Jell-O salad at a picnic on a hot summer day? Not a pretty sight, even if you fortify it.
Matsumoto-san is more likely using kanten, commonly known in English as “agar-agar.” This is a sea vegetable gelatin which does not need to be refrigerated and is quite hard at room temperature. (Kanten is the Japanese name for the product; I do not know what it is called in other Asian languages.) Kanten can be purchased at Asian groceries in sheets, strips, or powder: cook’s choice as to which you find easiest to work with. Personally, I find the powder easiest.
From: Gunnar Peterson <ngp@woodshole.visi.com>
Subject: Web Services and SOAP
One thing that I have heard you mention before that I disagree with is that Web Services and things like SOAP are bad for security.
First off, any technology will have some security issues and Web Services are no different, but this does not mean that they are “bad,” because you have to look at what they are replacing. If I am writing a distributed app today that needs to traverse the firewall or WAN my choices are RMI-IIOP (J2EE or CORBA) or DCOM (Microsoft) or some type of proprietary messaging system.
As you have often said, complexity is the enemy of security…well…Web Services represent a much simpler approach to distributed app development. For one thing I can use SOAP in either a J2EE or .Net app, so the security team only needs to understand this one protocol to be useful to either style of development team (for the distributed programming part of the project).
For another thing, Web Services and SOAP are a shift away from code and towards semantic meaning (as Don Box says) and this also aids the understanding of a complex system. Given a good architect, development, and security team, a Web Services-based system has a better chance at being secure in development and production than RMI-IIOP- or DCOM-based apps.
So I would say that Web Services and SOAP are an imperfect yet incremental improvement over the current situation.
From: Glenn Welt <glennwelt@checksnet.com>
Subject: 10 Day Notice: Welt vs. Counterpane & Schneier
I hereby request the immediate removal of false & defamatory comments about my software, ChecksNet, from the following website within ten days of this notice:
Be sure to also remove all mirror sites containing the same false and defamatory statements.
My website, checksnet.com offers both secure SSL order pages and non-secure order pages, thus your accusations are totally false. Whereas your defamatory pages appear via numerous search engines, it is damaging to the reputation and sales of my software.
This is not an empty threat. In the past 2 years I have taken people to court and been awarded 6 figure judgments for defamation.
This will be my only notice prior to commencement of legal action.
I suggest removing the “doghouse” editorials unless you wish to personally and corporately appear in a “courthouse.”
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to email@example.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of “Secrets and Lies” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide.