Crypto-Gram

October 15, 2004

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com.

Crypto-Gram also has an RSS feed at <http://www.schneier.com/crypto-gram-rss.xml>.

Or you can read it on the web at <http://www.schneier.com/crypto-gram-0410.html>.


In this issue:
      New Blog, and Changes to Crypto-Gram
      Keeping Network Outages Secret
      RFID Passports
      Disrupting Air Travel with Arabic Writing
      Crypto-Gram Reprints
      News
      Counterpane News
      The Legacy of DES
      The Doghouse: Lexar JumpDrives
      License Plate “Guns” and Privacy
      Aerial Surveillance to Detect Building Code Violations
      Terror Threat Alerts
      Academic Freedom and Security
      Comments from Readers

New Blog, and Changes to Crypto-Gram

I’m in the process of making several changes to Crypto-Gram, all designed to give readers more reading options.

Blog: Crypto-Gram is now available in blog form. Called “Schneier on Security,” the blog will have the same content as Crypto-Gram but it will be posted continually rather than only on the 15th of the month. Initially, blog comments will be turned off. I’ll enable them as soon as my anti-blog-spam software is working.

RSS: The Crypto-Gram RSS feed has been working for about six months now. Current RSS subscribers will receive the blog version of Crypto-Gram instead of the once-a-month version.

E-Mail: Crypto-Gram will still be available as a once-a-month e-mail, and back issues of Crypto-Gram will still be available on the Web.

Between now and the next issue, the mailing list will be moving to schneier.com, with new software, so the instructions for subscribing and unsubscribing will change. If you need to subscribe or unsubscribe during that time, the Crypto-Gram webpage at <http://www.schneier.com/crypto-gram.html> will always have the current instructions.

Many of these changes are based on a 400-person reader survey I conducted (making it more accurate than most political polls). Thank you to those who completed the survey, and to everyone for your continued support.

<http://www.schneier.com/blog>


Keeping Network Outages Secret

<https://www.schneier.com/blog/archives/2004/10/…>

There’s considerable confusion between the concept of secrecy and the concept of security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

In June, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission already requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. But the DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures.

This sounds like the “full disclosure” debate all over again. Is publishing computer and network vulnerability information a good idea, or does it just help the hackers? It arises again and again, as malware takes advantage of software vulnerabilities after they’ve been made public.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

Cryptography is based on secrets—keys—but look at all the work that goes into making them effective. Keys are short and easy to transfer. They’re easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography’s most basic principles is to assume that the algorithm is public.
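
As a minimal illustration, here is a Python sketch (assuming the third-party “cryptography” package is installed): everything below is completely public, and the key is the only secret.

    # Kerckhoffs's principle in practice: the algorithm and this code are
    # fully public; the key is the only secret, and it is short, easy to
    # generate, and easy to replace.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()            # the sole secret in the system
    f = Fernet(key)
    token = f.encrypt(b"attack at dawn")   # secure even though the code is public
    assert f.decrypt(token) == b"attack at dawn"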

That’s the other fallacy with the secrecy argument: the assumption that secrecy works. Do we really think that the physical weak points of networks are such a mystery to the bad guys? Do we really think that the hacker underground never discovers vulnerabilities?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn’t bother fixing them, believing in the security of secrecy. And because customers didn’t know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we’ll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks.

Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose one that best serves their needs. Without public disclosure, companies could hide their reliability performance from the public.

Just look at who supports secrecy. Software vendors such as Microsoft want very much to keep vulnerability information secret. The Department of Homeland Security’s recommendations were loudly echoed by the phone companies. It’s the interests of these companies that are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we’re seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures—and even routine government operations—secret. Information about the infrastructure of plants and government buildings is secret. Profiling information used to flag certain airline passengers is secret. The standards for the Department of Homeland Security’s color-coded terrorism threat levels are secret. Even information about government operations without any terrorism connections is being kept secret.

This keeps terrorists in the dark, especially “dumb” terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry—to whom the government is ultimately accountable—is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can’t improve because there’s no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks. This means they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy among them. Trying to keep it a secret that a network has hubs is futile. Better to identify and protect them.
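
Finding the hubs is trivial once any of the topology leaks. A minimal sketch, with hypothetical node names: given an adjacency list, ranking nodes by degree exposes the hubs immediately.

    # Hubs are not a keepable secret: anyone with a map of the links can
    # rank nodes by degree. Node names here are hypothetical.
    from collections import defaultdict

    links = [("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"), ("B", "C"), ("E", "F")]
    degree = defaultdict(int)
    for u, v in links:
        degree[u] += 1
        degree[v] += 1

    hubs = sorted(degree, key=degree.get, reverse=True)
    print(hubs[:2])  # the highly connected nodes an attacker targets first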

We’re all safer when we have the information we need to exert market pressure on vendors to improve security. We would all be less secure if software vendors didn’t make their security vulnerabilities public, and if telephone companies didn’t have to report network outages. And when government operates without accountability, that serves the security interests of the government, not of the people.

<http://www.securityfocus.com/news/8966>
<http://www.cnn.com/2004/TECH/07/14/…>

Another version of this essay appeared in the October Communications of the ACM.
<http://www.csl.sri.com/neumann/insiderisks04.html>


RFID Passports

<https://www.schneier.com/blog/archives/2004/10/…>

Since 9/11, the Bush administration—specifically, the Department of Homeland Security—has wanted the world to standardize on machine-readable passports. Future U.S. passports, currently being tested, will include an embedded computer chip. This chip will allow the passport to contain much more information than a simple machine-readable character font, and will allow passport officials to quickly and easily read that information. That’s a reasonable requirement, and a good idea for bringing passport technology into the 21st century. But the administration is advocating radio frequency identification (RFID) chips for both U.S. and foreign passports, and that’s a very bad thing.

RFID chips are like smart cards, but they can be read from a distance. A receiving device can “talk” to the chip remotely, without any need for physical contact, and get whatever information is on it. Passport officials envision being able to download the information on the chip simply by bringing it within a few centimeters of a reader.

Unfortunately, RFID chips can be read by any reader, not just the ones at passport control. The upshot of this is that anyone carrying around an RFID passport is broadcasting his identity.

Think about what that means for a minute. It means that a passport holder is continuously broadcasting his name, nationality, age, address, and whatever else is on the RFID chip. It means that anyone with a reader can learn that information, without the passport holder’s knowledge or consent. It means that pickpockets, kidnappers, and terrorists can easily—and surreptitiously—pick Americans out of a crowd.

It’s a clear threat to both privacy and personal safety. Quite simply, it’s a bad idea.

The administration claims that the chips can only be read from a few centimeters away, so there’s no potential for abuse. This is a spectacularly naive claim. All wireless protocols can work at much longer ranges than specified. In tests, RFID chips have been read by receivers 20 meters away. Improvements in technology are inevitable.

Security is always a trade-off. If the benefits of RFID outweigh the risks, then maybe it’s worth it. Certainly there isn’t a significant benefit when people present their passport to a customs official. If that customs official is going to take the passport and bring it near a reader, why can’t he go those extra few centimeters that a contact chip would require?

The administration is deliberately choosing a less secure technology without justification. If there were a good reason to choose that technology, then it might make sense. But there isn’t. There’s a large cost in security and privacy, and no benefit. Any rational analysis will conclude that there isn’t any reason to choose an RFID chip over a conventional chip.

Unfortunately, there is a reason. At least, it’s the only reason I can think of for the administration wanting RFID chips in passports: they want surreptitious access themselves. They want to be able to identify people in crowds. They want to pick out the Americans, and pick out the foreigners. They want to do the very thing that they insist, despite demonstrations to the contrary, can’t be done.

Normally I am very careful before I ascribe such sinister motives to a government agency. Incompetence is the norm, and malevolence is much rarer. But this seems like a clear case of the government putting its own interests above the security and privacy of its citizens, and then lying about it.

<http://news.com.com/…>
<http://www.theregister.co.uk/2004/05/20/us_passports/>


Disrupting Air Travel with Arabic Writing

<https://www.schneier.com/blog/archives/2004/10/…>

In August, I wrote about the stupidity of United Airlines returning a flight from Sydney to Los Angeles back to Sydney because a flight attendant found an airsickness bag with the letters “BOB” written on it in a lavatory. (“BOB” supposedly stood for “Bomb on Board.”)

I received quite a bit of mail about that. Most of it was supportive, but some people argued that the airline should do everything in its power to protect its passengers and that the airline was reasonable and acting prudently.

The problem with that line of reasoning is that it has no limits. In corresponding with people, I asked whether a flight should be diverted if one of the passengers was wearing an orange shirt: orange being the color of the DHS’s heightened alert level. If you believe that the airline should respond drastically to any threat, no matter how small, then they should.

That example was fanciful, and deliberately so. Here’s another, even more fanciful, example. Unfortunately, it’s a real one.

Last month in Milwaukee, a Midwest Airlines flight had already pulled away from the gate when someone, the articles don’t say who, found Arabic writing in his or her copy of the airline’s in-flight magazine.

I have no idea what sort of panic ensued, but the airplane turned around and returned to the gate. Everyone was taken off the plane and inspected. The plane and all the luggage were inspected. Surprise: nothing was found.

The passengers didn’t fly out until the next morning.

This kind of thing is idiotic. Terrorism is a serious problem, and we’re not going to protect ourselves by overreacting every time someone’s overactive imagination kicks in. We need to be alert to the real threats, instead of making up random ones. It simply makes no sense.

News article:
<http://www.cnn.com/2004/TRAVEL/09/21/…>

My original essay:
<http://www.schneier.com/crypto-gram-0408.html#1>


Crypto-Gram Reprints

Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.

The Future of Surveillance:
<http://www.schneier.com/crypto-gram-0310.html#1>

National Strategy to Secure Cyberspace:
<http://www.schneier.com/crypto-gram-0210.html#1>

Cyberterrorism:
<http://www.schneier.com/crypto-gram-0110.html#1>

Dangers of Port 80:
<http://www.schneier.com/crypto-gram-0110.html#9>

Semantic Attacks:
<http://www.schneier.com/crypto-gram-0010.html#1>

NSA on Security:
<http://www.schneier.com/crypto-gram-0010.html#7>

So, You Want to be a Cryptographer:
<http://www.schneier.com/…>

Key Length and Security:
<http://www.schneier.com/…>

Steganography: Truths and Fictions:
<http://www.schneier.com/…>

Memo to the Amateur Cipher Designer:
<http://www.schneier.com/…>


News

<https://www.schneier.com/blog/archives/2004/10/news.html>

Last month I wrote: “Long and interesting review of Windows XP SP2, including a list of missed opportunities for increased security. Worth reading.”
<http://www.theregister.co.uk/2004/09/02/…>
Be sure you read this follow-up as well:
<http://www.theregister.co.uk/2004/09/14/…>

The author of the Sasser worm has been arrested:
<http://www.computerworld.com/printthis/2004/…>
<http://www.theregister.co.uk/2004/09/08/…>
And been offered a job:
<http://australianit.news.com.au/common/print/…>

Interesting essay on the psychology of terrorist alerts:
<http://www.zimbardo.com/downloads/…>

Encrypted e-mail client for the Treo:
<http://discussion.treocentral.com/showthread.php?…>

The Honeynet Project is publishing a bi-annual CD-ROM and newsletter. If you’re involved in honeynets, it’s definitely worth getting. And even if you’re not, it’s worth supporting this endeavor.
<http://www.honeynet.org/funds/cdrom.html>

CIO Magazine has published a survey of corporate information security. I have some issues with the survey, but it’s worth reading.
<http://www.itsecurity.com/tecsnews/sep2004/sep143.htm>

At the Illinois State Capitol, someone shot an unarmed security guard and fled. The security upgrade after the incident is—get ready—to change the building admittance policy from a “check IDs” procedure to a “sign in” procedure. First, identity checking does not increase security. Second, why do they think that an attacker would be willing to forge or steal an identification card, but would be unwilling to sign a name on a clipboard?
<http://www.guardian.co.uk/worldlatest/story/…>

Neat research: a quantum-encrypted TCP/IP network:
<http://www.metrowestdailynews.com/localRegional/…>
<http://science.slashdot.org/article.pl?sid=04/09/15/…>
And NEC has its own quantum cryptography research results:
<http://www.infoworld.com/article/04/09/16/…>

Security story about the U.S. embassy in New Zealand. It’s a good lesson about the pitfalls of not thinking beyond the immediate problem.
<http://www.stuff.co.nz/stuff/dominionpost/…>

The future of worms:
<http://www.computerworld.com/securitytopics/…>

Teacher arrested after a bookmark is called a concealed weapon:
<http://wjz.com/localstories/local_story_261133455.html>
Remember all those other things you can bring on an aircraft that can knock people unconscious: handbags, laptop computers, hardcover books. And that dental floss can be used as a garrote. And, and, oh…you get the idea.

Seems you can open Kryptonite bicycle locks with the cap from a plastic pen. The attack works on what locksmiths call the “impressioning” principle. Tubular locks are especially vulnerable to this because all the pins are exposed, and the tools required are unsophisticated and take little skill to use. There have been commercial locksmithing products to do this to circular locks for a long time. Once you get the feel for how to do it, it’s pretty easy. I find Kryptonite’s proposed solution—swapping for a smaller diameter lock so a particular brand of pen won’t work—to be especially amusing.
<http://www.indystar.com/articles/0/179342-1470-223.html>
<http://www.wired.com/news/culture/0,1284,64987,00.html>
<http://www.bikeforums.net/>

I often talk about how most firewalls are ineffective because they’re not configured properly. Here’s some research on firewall configuration:
<http://www.eng.tau.ac.il/~yash/computer2004.pdf>

Reading RFID tags from three feet away:
<http://www.computerworld.com/printthis/2004/…>

AOL is offering two-factor authentication services. It’s not free: $10 plus $2 per month. It’s an RSA Security token, with a number that changes every 60 seconds.
<http://www.pcworld.com/news/article/0,aid,117873,00.asp>
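
RSA’s SecurID algorithm is proprietary, but the general approach behind a code that changes every 60 seconds can be sketched: an HMAC over a time-based counter, truncated to a few digits. This is a Python illustration of the generic technique, not RSA’s actual design.

    # Illustrative only: a time-varying token code, computed as an HMAC
    # over a 60-second counter with dynamic truncation. Not RSA SecurID's
    # actual algorithm.
    import hashlib, hmac, struct, time

    def token_code(secret, interval=60, digits=6):
        counter = int(time.time()) // interval       # ticks once a minute
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                      # dynamic truncation
        value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    print(token_code(b"shared-seed"))  # server and token derive the same code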

Counterterrorism has its own snake oil:
<http://www.qsleeper.com/>

Great moments in security screening:
<http://images.ucomics.com/comics/nq/2004/nq041005.gif>

The U.S. government’s cybersecurity chief resigned with a day’s notice. I can understand his frustration; the position had no power and could only suggest, plead, and cheerlead.
<http://www.computerworld.com/managementtopics/…>
<http://www.washingtonpost.com/ac2/wp-dyn/…>
<http://www.fcw.com/fcw/articles/2004/0927/…>
<http://news.com.com/2102-7348_3-5392501.html>

North Korea has over 500 trained cyberwarriors, according to the South Korean Defense Ministry. Maybe this is true, and maybe it’s just propaganda—from either the North or the South. Certainly, though, any smart military will train people in the art of attacking enemy computer networks.
<http://www.channelnewsasia.com/stories/…>


Counterpane News

Counterpane just completed a record quarter:
<http://www.counterpane.com/pr-20041014.html>

There is a two-part interview with Bruce Schneier on SearchSecurity.com:
<http://searchsecurity.techtarget.com/…>
<http://searchsecurity.techtarget.com/…>

Schneier is speaking at the following conferences and events:

CSI Asia in Singapore, on 20 October.
<http://www.csi-asia.com/conference2004/sg/>

13th CACR Information Security Workshop & 5th Annual Privacy and Security Workshop, at the University of Toronto, on 28 October.
<http://www.cacr.math.uwaterloo.ca/conferences/2004/…>

RSA Europe in Barcelona, on 4 November.
<http://2004.rsaconference.com/europe/>

Harvey Mudd College, in Claremont, California, on 10 November.


The Legacy of DES

<https://www.schneier.com/blog/archives/2004/10/…>

The Data Encryption Standard, or DES, was a mid-’70s brainchild of the National Bureau of Standards: the first modern, public, freely available encryption algorithm. For over two decades, DES was the workhorse of commercial cryptography.

Over the decades, DES has been used to protect everything from databases in mainframe computers, to the communications links between ATMs and banks, to data transmissions between police cars and police stations. Whoever you are, I can guarantee that many times in your life, the security of your data was protected by DES.

Just last month, the former National Bureau of Standards—the agency is now called the National Institute of Standards and Technology, or NIST—proposed withdrawing DES as an encryption standard, signifying the end of the federal government’s most important technology standard, one more important than ASCII, I would argue.

Today, cryptography is one of the most basic tools of computer security, but 30 years ago it barely existed as an academic discipline. In the days when the Internet was little more than a curiosity, cryptography wasn’t even a recognized branch of mathematics. Secret codes were always fascinating, but they were pencil-and-paper codes based on alphabets. In the secret government labs during World War II, cryptography entered the computer era and became mathematics. But with no professors teaching it, and no conferences discussing it, all the cryptographic research in the United States was conducted at the National Security Agency.

And then came DES.

Back in the early 1970s, it was a radical idea. The National Bureau of Standards decided that there should be a free encryption standard. Because the agency wanted it to be non-military, they solicited encryption algorithms from the public. They got only one serious response—the Data Encryption Standard—from the labs of IBM. In 1976, DES became the government’s standard encryption algorithm for “sensitive but unclassified” traffic. This included things like personal, financial and logistical information. And simply because there was nothing else, companies began using DES whenever they needed an encryption algorithm. Of course, not everyone believed DES was secure.

When IBM submitted DES as a standard, no one outside the National Security Agency had any expertise to analyze it. The NSA made two changes to DES: It tweaked the algorithm, and it cut the key size by more than half.

The strength of an algorithm is based on two things: how good the mathematics is, and how long the key is. A sure way of breaking an algorithm is to try every possible key. Modern algorithms have a key so long that this is impossible; even if you built a computer out of all the silicon atoms on the planet and ran it for millions of years, you couldn’t do it. So cryptographers look for shortcuts. If the mathematics are weak, maybe there’s a way to find the key faster: “breaking” the algorithm.
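
The arithmetic is easy to check. A quick sketch in Python, assuming (purely for illustration) a machine that tests a billion keys per second:

    # Brute-force arithmetic: DES's 56-bit keyspace versus a modern
    # 128-bit one, at an assumed rate of one billion keys per second.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    RATE = 10 ** 9  # assumed keys tested per second

    for bits in (56, 128):
        years = 2 ** bits / RATE / SECONDS_PER_YEAR
        print(f"{bits}-bit key: about {years:.1e} years to try every key")

    # 56-bit:  about 2.3 years on a single such machine -- within reach.
    # 128-bit: about 1.1e22 years -- beyond any conceivable computer.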

The NSA’s changes caused outcry among the few who paid attention, both regarding the “invisible hand” of the NSA—the tweaks were not made public, and no rationale was given for the final design—and the short key length.

But with the outcry came research. It’s not an exaggeration to say that the publication of DES created the modern academic discipline of cryptography. The first academic cryptographers began their careers by trying to break DES, or at least trying to understand the NSA’s tweak. And almost all of the encryption algorithms—public-key cryptography, in particular—can trace their roots back to DES. Papers analyzing different aspects of DES are still being published today.

By the mid-1990s, it became widely believed that the NSA was able to break DES by trying every possible key. This ability was demonstrated in 1998, when a $220,000 machine was built that could brute-force a DES key in a few days. Back in 1985, the academic community had proposed a DES variant with the same mathematics but a longer key, called triple-DES. This variant had been used in more secure applications in place of DES for years, but it was time for a new standard. In 1997, NIST solicited an algorithm to replace DES.
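
The triple-DES construction itself is simple: run the DES operation three times, encrypt-decrypt-encrypt, under a longer key. A minimal sketch, assuming a version of the third-party “cryptography” package that still ships triple-DES for legacy use:

    # Triple-DES: the same DES mathematics applied three times (EDE) under
    # a longer key -- here, three 8-byte DES keys concatenated.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(24)                                     # three 8-byte DES keys
    cipher = Cipher(algorithms.TripleDES(key), modes.ECB())  # ECB for demo only
    ct = cipher.encryptor().update(b"8bytemsg")              # one 8-byte DES block
    assert cipher.decryptor().update(ct) == b"8bytemsg"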

The process illustrates the complete transformation of cryptography from a secretive NSA technology to a worldwide public technology. NIST once again solicited algorithms from the public, but this time the agency got 15 submissions from 10 countries. My own algorithm, Twofish, was one of them. And after two years of analysis and debate, NIST chose a Belgian algorithm, Rijndael, to become the Advanced Encryption Standard.

It’s a different world in cryptography now than it was 30 years ago. We know more about cryptography, and have more algorithms to choose among. AES won’t become a ubiquitous standard in the same way that DES did. But it is finding its way into banking security products, Internet security protocols, even computerized voting machines. A NIST standard is an imprimatur of quality and security, and vendors recognize that.

So, how good is the NSA at cryptography? They’re certainly better than the academic world. They have more mathematicians working on the problems, they’ve been working on them longer, and they have access to everything published in the academic world, while they don’t have to make their own results public. But are they a year ahead of the state of the art? Five years? A decade? No one knows.

It took the academic community two decades to figure out that the NSA “tweaks” actually improved the security of DES. This means that back in the ’70s, the National Security Agency was two decades ahead of the state of the art.

Today, the NSA is still smarter, but the rest of us are catching up quickly. In 1999, the academic community discovered a weakness in another NSA algorithm, SHA, that the NSA claimed to have discovered only four years previously. And just last week there was a published analysis of the NSA’s SHA-1 that demonstrated weaknesses that we believe the NSA didn’t know about at all.

Maybe now we’re just a couple of years behind.

This essay was originally published on CNet.com.
<http://news.com.com/…>


The Doghouse: Lexar JumpDrives

<https://www.schneier.com/blog/archives/2004/10/…>

If you read Lexar’s documentation, their JumpDrive Secure product is secure. “If lost or stolen, you can rest assured that what you’ve saved there remains there with 256-bit AES encryption.” Sounds good, but security professionals are an untrusting sort. @Stake decided to check. They found that “the password can be observed in memory or read directly from the device, without evidence of tampering.” Even worse: the password “is stored in an XOR encrypted form and can be read directly from the device without any authentication.”
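
To see how weak that is, here is a minimal sketch of fixed-key XOR in Python (the key and password are hypothetical, not Lexar’s actual values): applying the same operation twice gets the plaintext back, so anyone who extracts the key once can decrypt every device.

    # Fixed-key XOR "encryption": XOR is its own inverse, so the same
    # operation that hides the password also reveals it. Key and password
    # here are hypothetical, not Lexar's.
    KEY = b"\x5a\x3c\x0f\x99\x5a\x3c\x0f\x99"

    def xor(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    stored = xor(b"hunter2", KEY)   # what gets written to the device
    print(xor(stored, KEY))         # b'hunter2' -- the same operation reverses it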

The moral of the story: don’t trust magic security words like “256-bit AES.” The devil is in the details, and it’s easy to screw up security.

Although screwing it up this badly is impressive.

Lexar’s product:
<http://www.lexar.com/jumpdrive/>

@Stake’s analysis:
<http://www.atstake.com/research/advisories/2004/…>


License Plate “Guns” and Privacy

<https://www.schneier.com/blog/archives/2004/10/…>

New Haven police have a new law enforcement tool: a license-plate scanner. Similar to a radar gun, it reads the license plates of moving or parked cars and links with remote police databases, immediately providing information about the car and owner. Right now the police check if there are any taxes owed on the car, if the car or license plate is stolen, and if the car is unregistered or uninsured. A car that comes up positive is towed.

On the face of it, this is nothing new. The police have always been able to run a license plate. The difference is they would do it manually, and that limited its use. It simply wasn’t feasible for the police to run the plates of every car in a parking garage, or every car that passed through an intersection. What’s different isn’t the police tactic, but the efficiency of the process.

Technology is fundamentally changing the nature of surveillance. Years ago, surveillance meant trench-coated detectives following people down streets. It was laborious and expensive, and was only used when there was reasonable suspicion of a crime. Modern surveillance is the policeman with a license-plate scanner, or even a remote license-plate scanner mounted on a traffic light and a policeman sitting at a computer in the station. It’s the same, but it’s completely different. It’s wholesale surveillance.

And it disrupts the balance between the powers of the police and the rights of the people.

Wholesale surveillance is fast becoming the norm. New York’s E-Z Pass tracks cars at tunnels and bridges with tolls. We can all be tracked by our cell phones. Our purchases are tracked by banks and credit card companies, our telephone calls by phone companies, our Internet surfing habits by Web site operators. Security cameras are everywhere. If they wanted, the police could take the database of vehicles outfitted with the OnStar tracking system, and immediately locate all of those New Haven cars.

Like the license-plate scanners, the electronic footprints we leave everywhere can be automatically correlated with databases. The data can be stored forever, allowing police to conduct surveillance backwards in time.
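
How easy is that correlation? A sketch with hypothetical records: group the stored scanner reads by plate and sort by time, and you have anyone’s movement history after the fact.

    # Retroactive surveillance in a few lines: once reads are stored,
    # anyone's movements fall out of a group-and-sort. All records here
    # are hypothetical.
    from collections import defaultdict

    reads = [
        ("XYZ-123", "2004-10-01 08:02", "I-95 toll plaza"),
        ("ABC-789", "2004-10-01 08:15", "Elm St garage"),
        ("XYZ-123", "2004-10-01 17:40", "Chapel St camera"),
    ]

    history = defaultdict(list)
    for plate, when, where in reads:
        history[plate].append((when, where))

    for when, where in sorted(history["XYZ-123"]):
        print(when, where)   # one car's day, reconstructed after the fact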

The effects of wholesale surveillance on privacy and civil liberties are profound, but unfortunately the debate often gets mischaracterized as a question about how much privacy we need to give up in order to be secure. This is wrong. It’s obvious that we are all safer when the police can use all techniques at their disposal. What we need are corresponding mechanisms that prevent abuse without placing an unreasonable burden on the innocent.

Throughout our nation’s history, we have maintained a balance between the necessary interests of police and the civil rights of the people. The license plate itself is such a balance. Imagine the debate from the early 1900s: The police proposed affixing a plaque to every car with the car owner’s name, so they could better track cars used in crimes. Civil libertarians objected because that would reduce the privacy of every car owner. So a compromise was reached: a random string of letters and numbers that the police could use to determine the car owner. By deliberately designing a more cumbersome system, the needs of law enforcement and the public’s right to privacy were balanced.

The search warrant process, as prescribed in the Fourth Amendment, is another balancing method. So is the minimization requirement for telephone eavesdropping: the police must stop listening to a phone line if the suspect under investigation is not talking.

For license-plate scanners, one obvious protection is to require the police to erase data collected on innocent car owners immediately, and not save it. The police have no legitimate need to collect data on everyone’s driving habits. Another is to allow car owners access to the information about them used in these automated searches, and to allow them to challenge inaccuracies.

We need to go further. Criminal penalties are severe in order to create a deterrent, because it is hard to catch wrongdoers. As they become easier to catch, a realignment is necessary. When the police can automate the detection of wrongdoing, perhaps there should no longer be any criminal penalty attached. For example, red-light cameras and speed-trap cameras both issue citations without any “points” assessed against the driver.

Wholesale surveillance is not simply a more efficient way for the police to do what they’ve always done. It’s a new police power, one made possible with today’s technology and one that will be made easier with tomorrow’s. And with any new police power, we as a society need to take an active role in establishing rules governing its use. To do otherwise is to cede ever more authority to the police.

This essay was originally published in the New Haven Register.
<http://www.nhregister.com/site/news.cfm?…>


Aerial Surveillance to Detect Building Code Violations

<https://www.schneier.com/blog/archives/2004/10/…>

Along similar lines to the essay above, the city of Baltimore is using aerial surveillance photography to detect building code violations. I wrote an op-ed for the Baltimore Sun about it, using much the same words as above.

<http://www.baltimoresun.com/news/opinion/oped/…>


Terror Threat Alerts

I’ve written two essays on terror threat alerts, and how useless they are as a security countermeasure.

Minneapolis Star-Tribune essay:
<http://www.schneier.com/essay-055.html>

The Rake essay:
<https://www.schneier.com/blog/archives/2004/10/…>


Academic Freedom and Security

<https://www.schneier.com/blog/archives/2004/10/…>

Cryptography is the science of secret codes, and it is a primary Internet security tool to fight hackers, cyber crime, and cyber terrorism. CRYPTO is the world’s premier cryptography conference. It’s held every August in Santa Barbara.

This year, 400 people from 30 countries came to listen to dozens of talks. Lu Yi was not one of them. Her paper was accepted at the conference, but because she is a Chinese Ph.D. student in Switzerland, she was not able to get a visa in time to attend.

In the three years since 9/11, the U.S. government has instituted a series of security measures at our borders, all designed to keep terrorists out. One of those measures was to tighten up the rules for foreign visas. Certainly this has hurt the tourism industry in the U.S., but the damage done to academic research is more profound and longer-lasting.

According to a survey by the Association of American Universities, many universities reported a drop of more than 10 percent in foreign student applications from last year. During the 2003 academic year, student visas were down 9 percent. Foreign applications to graduate schools were down 32 percent, according to another study by the Council of Graduate Schools.

There is an increasing trend for academic conferences, meetings and seminars to move outside of the United States simply to avoid visa hassles.

This affects all of high-tech, but ironically it particularly affects the very technologies that are critical in our fight against terrorism.

Also in August, on the other side of the country, the University of Connecticut held the second International Conference on Advanced Technologies for Homeland Security. The attendees came from a variety of disciplines—chemical trace detection, communications compatibility, X-ray scanning, sensors of various types, data mining, HAZMAT clothing, network intrusion detection, bomb defusing, remote-controlled drones—and illustrated the enormous breadth of scientific know-how that can usefully be applied to counterterrorism.

It’s wrong to believe that the U.S. can conduct the research we need alone. At the Connecticut conference, the researchers presenting results included many foreigners studying at U.S. universities. Only 30 percent of the papers at CRYPTO had only U.S. authors. The most important discovery of the conference, a weakness in a mathematical function that protects the integrity of much of the critical information on the Internet, was made by four researchers from China.

Every time a foreign scientist can’t attend a U.S. technology conference, our security suffers. Every time we turn away a qualified technology graduate student, our security suffers. Technology is one of our most potent weapons in the war on terrorism, and we’re not fostering the international cooperation and development that is crucial for U.S. security.

Security is always a trade-off, and specific security countermeasures affect everyone, both the bad guys and the good guys. The new U.S. immigration rules may affect the few terrorists trying to enter the United States on visas, but they also affect honest people trying to do the same.

All scientific disciplines are international, and free and open information exchange—both in conferences and in academic programs at universities—will result in the maximum advance in the technologies vital to homeland security. The Soviet Union tried to restrict academic freedom along national lines, and it didn’t do the country any good. We should try not to follow in those footsteps.

This essay was originally published in the San Jose Mercury News.
<http://www.mercurynews.com/mld/mercurynews/…>


Comments from Readers

From: Bruno Treguier <Bruno.Treguier shom.fr>
Subject: Comment on Finnish Mobile Phone Spoofing

The feature used for the spoofing attack has been known for several years, in fact. Nokia phones compare only the last seven digits when trying to associate a number with a name (I don’t know about other brands, but there’s certainly a similar mechanism). It was even documented in my old Nokia 3110 phone user’s manual, if I remember correctly. This is intended, I guess, to cope with the fact that phone numbers can be stored in international or national form by the user himself, and that caller ID formats can differ from one operator to another. Bouygtel, in France, for instance, displays numbers in their international form, whereas SFR uses the national 10-digit form (beginning with a 0). SFR probably chose that form to display caller IDs because they thought it would be easier for people to remember them.

Seven digits probably sounded like a good compromise, as it is long enough for the collision probability to be rather low (at least as long as you’re not trying to actively spoof the caller ID), and short enough for a varying prefix not to be taken into account…
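
A minimal Python sketch of that matching rule, with hypothetical names and numbers, shows both the convenience and the spoofing hole:

    # Compare only the last seven digits when looking up a caller ID.
    # Names and numbers are hypothetical.
    def digits_only(number):
        return "".join(ch for ch in number if ch.isdigit())

    def lookup(caller_id, phonebook):
        suffix = digits_only(caller_id)[-7:]
        for name, stored in phonebook.items():
            if digits_only(stored)[-7:] == suffix:
                return name
        return None

    book = {"Alice": "+33 6 12 34 56 78"}
    print(lookup("06 12 34 56 78", book))      # Alice: national form still matches
    print(lookup("+99 99 12 34 56 78", book))  # Alice again: the spoofing hole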

Never thought about it, but this seems to be a pretty good reason for operators not to let people choose their whole phone number! While it is no longer the case, Bouygtel, a few years ago, did let you choose the last four digits as a commercial incentive.

From: “Howze, Blair” <BHowze co.gallatin.mt.us>
Subject: Security at the Olympics

There is some intrinsic value in “showing the badge” that you did not give credit to. We have common criminals to worry about also, and it seems that the overwhelming number of badge-toters had some effect in lessening the number of incidents.

Whether the payoff was worth the price is up for debate. Although badges are not a credible solo deterrent to suicidal terrorists, they do make the potential terrorist plan around the additional eyes.

Second, most of the security was probably planned with the knowledge that it couldn’t stop a determined, intelligent, suicidal individual, or at least one who didn’t care about the consequences. The US Secret Service is highly concerned with this type of attack because intelligence operations have decreased effectiveness against solo operators who can keep a secret. Therefore, they spend the bulk of their energy and money on stopping coordinated attacks, where the volume of involved individuals gives a greater chance of something slipping out.

I do agree that there is a point of diminishing returns. Unfortunately, the American public (and to some extent the Western world) seems intent on classifying any single death by terrorists as a failure of the whole system of intelligence, military operations, leadership, etc.

Therefore, public officials and their staffs are more likely to expend exorbitant amounts of money to defend against any incident whatsoever to avoid being castigated.

> Far more effective would have been to spend most of that $1.5
> billion on intelligence and on emergency response. Intelligence
> is an invaluable tool against terrorism, and it works regardless
> of what the

The security WAS effective at the Olympics. There was not a major incident. I’m sure there could have been, but for a variety of reasons (at least one of which is luck) nothing happened. However, the money was spent on a one-time basis that didn’t help the overall long-term goal of reducing the effectiveness of terrorist organizations. Your argument about spending more of the total on processes and the infrastructure of intelligence and response is well-taken. Basically it comes down to opinion. The Olympic security organizers’ opinion was selfish in that their overriding concern was the Olympics. They didn’t really care about next year, because that is somebody else’s problem. So, they concentrated on spending money where it was conspicuous. It did work, though. It might not have been the best plan, but it was effective. You have to remember the sheer number of competing interests involved in the planning of the security environment. Just imagining the politics involved boggles the mind.

From: Anonymous
Subject: Fear and Security

This is in response to the letter you published last month by Wayne Schroeder: Fear and security are closely coupled in simple situations, like riding a motorcycle. The way to reduce the fear is to increase your safety, such as by driving more slowly. Millions of years of evolution gave us fear as a mechanism for keeping us alive, but evolution never had to deal with a 767. Fear evolved for simpler things, like bad weather, high speeds, and scary animals.

When it comes to the more complex security situations of the modern world, our natural instincts are inadequate. People still rely on them for guidance, though, as in the now-notorious Annie Jacobsen freakout. That’s why we have security theater; people are trying to reduce fear, not increase safety, and they don’t realize those aren’t the same anymore.

That is also why people are reluctant to confront their poor choices. When you force them to do so, you are taking them from a place of reduced fear to one of heightened fear; as far as they’re concerned, you’re causing the fear. The rational perspective is clearly that you are making them safer, but they don’t see it that way.

The motorcycle example just doesn’t work because it maps easily to our evolved instincts. Modern security problems are so complicated that the ways to reduce fear have diverged from the ways to increase safety. Trying to map these primitive emotions to modern threats can’t work; the gap is too large. Relying on our fears to guide us won’t make us safer; it will only make it more shocking when our defenses are breached again.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. Back issues are available on <http://www.schneier.com/crypto-gram.html>.

To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide. See <http://www.counterpane.com>.

Copyright (c) 2004 by Bruce Schneier.
