Crypto-Gram

December 15, 2004

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

Or you can read this issue on the web at <http://www.schneier.com/crypto-gram-0412.html>.

Schneier also publishes these same essays in his blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
     Behavioral Assessment Profiling
     Crypto-Gram Reprints
     Google Desktop Search
     News
     Security Notes from All Over: Israeli Airport Security Questioning
     Counterpane News
     The Doghouse: Internet Security Foundation
     Kafka and the Digital Person
     The Electronic Privacy Information Center (EPIC)
     Safe Personal Computing
     Comments from Readers

Behavioral Assessment Profiling

<https://www.schneier.com/blog/archives/2004/11/…>

On Dec. 14, 1999, Ahmed Ressam tried to enter the United States from Canada at Port Angeles, Wash. He had a suitcase bomb in the trunk of his car. A US customs agent, Diana Dean, questioned him at the border. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting “hinky.” Ressam’s car was eventually searched, and he was arrested.

It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But it worked. The reason there wasn’t a bombing at Los Angeles International Airport around Christmas 1999 was that a trained, knowledgeable security person was paying attention.

This is “behavioral assessment” profiling. It’s what customs agents do at borders all the time. It’s what the Israeli police do to protect their airport and airplanes. And it’s a new pilot program in the United States at Boston’s Logan Airport. Behavioral profiling is dangerous because it’s easy to abuse, but it’s also the best thing we can do to improve the security of our air passenger system.

Behavioral profiling is not the same as computerized passenger profiling. The latter has been in place for years. It’s a secret system, and it’s a mess. Airlines decided who would undergo secondary screening, choosing people based on ticket purchase, frequent-flyer status, and similarity to names on government watch lists. CAPPS-II was to follow, evaluating people based on government and commercial databases and assigning each a “risk” score. That system was scrapped after public outcry, but another profiling system, called Secure Flight, will debut next year. Again, details are secret.

The problem with computerized passenger profiling is that it simply doesn’t work. Terrorists don’t fit a profile and cannot be plucked out of crowds by computers. Terrorists are European, Asian, African, Hispanic, and Middle Eastern, male and female, young and old. Richard Reid, the shoe bomber, was British with a Jamaican father. Jose Padilla, arrested in Chicago in 2002 as a “dirty bomb” suspect, was a Hispanic-American. Timothy McVeigh was a white American. So was the Unabomber, who once taught mathematics at the University of California, Berkeley. The Chechens who blew up two Russian planes last August were female. Recent reports indicate that Al Qaeda is recruiting Europeans for further attacks on the United States.

Terrorists can buy plane tickets—either one way or round trip—with cash or credit cards. Mohamed Atta, the leader of the 9/11 plot, had a frequent-flyer gold card. Terrorists are a surprisingly diverse group of people, and any computer profiling system will just make it easier for those who don’t meet the profile to slip through.

Behavioral assessment profiling is different. It cuts through all of those superficial profiling characteristics and centers on the person. State police are trained as screeners to look for suspicious conduct such as furtiveness or undue anxiety. At Logan Airport, the program has already caught 20 people who were either in the country illegally or had outstanding warrants of one kind or another.

Earlier this month the ACLU of Massachusetts filed a lawsuit challenging the constitutionality of behavioral assessment profiling. The lawsuit is unlikely to succeed; the principle of “implied consent” that has been used to uphold the legality of passenger and baggage screening will almost certainly be applied in this case as well.

But the ACLU has it wrong. Behavioral assessment profiling isn’t the problem. Abuse of behavioral profiling is the problem, and the ACLU has correctly identified where it can go wrong. If policemen fall back on naive profiling by race, ethnicity, age, gender—characteristics not relevant to security—they’re little better than a computer. Instead of “driving while black,” the police will face accusations of harassing people for the infraction of “flying while Arab.” Their actions will increase racial tensions and make them less likely to notice the real threats. And we’ll all be less safe as a result.

Behavioral assessment profiling isn’t a “silver bullet.” It needs to be part of a layered security system, one that includes passenger baggage screening, airport employee screening, and random security checks. It’s best implemented not by police but by specially trained federal officers. These officers could be deployed at airports, sports stadiums, political conventions—anywhere terrorism is a risk because the target is attractive. Done properly, this is the best thing to happen to air passenger security since reinforcing the cockpit door.

This article originally appeared in the Boston Globe.
<http://www.boston.com/news/globe/editorial_opinion/…>

<http://news.airwise.com/stories/2004/11/1100157618.html>
<http://www.usatoday.com/travel/news/…>


Crypto-Gram Reprints

Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. Here is a selection of articles that appeared in this calendar month in other years.

Blaster and the August 14th Blackout:
<http://www.schneier.com/crypto-gram-0312.html#1>

Quantum Cryptography:
<http://www.schneier.com/crypto-gram-0312.html#6>

Computerized and Electronic Voting:
<http://www.schneier.com/crypto-gram-0312.html#9>

Counterattack:
<http://www.schneier.com/crypto-gram-0212.html#1>

Comments on the Department of Homeland Security:
<http://www.schneier.com/crypto-gram-0212.html#3>

Crime: The Internet’s Next Big Thing:
<http://www.schneier.com/crypto-gram-0212.html#7>

National ID Cards:
<http://www.schneier.com/crypto-gram-0112.html#1>

Judges Punish Bad Security:
<http://www.schneier.com/crypto-gram-0112.html#2>

Computer Security and Liabilities:
<http://www.schneier.com/crypto-gram-0112.html#4>

Fun with Vulnerability Scanners:
<http://www.schneier.com/crypto-gram-0112.html#9>

Voting and Technology:
<http://www.schneier.com/crypto-gram-0012.html#1>

“Security Is Not a Product; It’s a Process”:
<http://www.schneier.com/crypto-gram-9912.html#1>

Echelon Technology:
<http://www.schneier.com/crypto-gram-9912.html#3>

European Digital Cellular Algorithms:
<http://www.schneier.com/crypto-gram-9912.html#10>

The Fallacy of Cracking Contests:
<http://www.schneier.com/crypto-gram-9812.html#contests>

How to Recognize Plaintext:
<http://www.schneier.com/crypto-gram-9812.html#plaintext>


Google Desktop Search

<https://www.schneier.com/blog/archives/2004/11/…>

Google’s desktop search software is so good that it exposes vulnerabilities on your computer that you didn’t know about.

Last month, Google released a beta version of its desktop search software: Google Desktop Search. Install it on your Windows machine, and it creates a searchable index of your data files, including word processing files, spreadsheets, presentations, e-mail messages, cached Web pages and chat sessions. It’s a great idea. Windows’ searching capability has always been mediocre, and Google fixes the problem nicely.

There are some security issues, though. The problem is that GDS indexes and finds documents that you may prefer not be found. For example, GDS searches your browser’s cache. This allows it to find old Web pages you’ve visited, including online banking summaries, personal messages sent from Web e-mail programs and password-protected personal Web pages.

GDS can also retrieve encrypted files. No, it doesn’t break the encryption or save a copy of the key. However, it searches the Windows cache, which can bypass some encryption programs entirely. And if you install the program on a computer with multiple users, you can search documents and Web pages for all users.
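None of this requires anything clever on the indexer’s part. A desktop search tool just walks directories and records what it finds; if the browser or an encryption program writes plaintext to disk, it gets indexed like everything else. Here is a minimal Python sketch of the core idea; the cache path is illustrative, and this is not GDS’s actual code:

    import os
    import re
    from collections import defaultdict

    def build_index(top):
        """Map each word to the set of files containing it."""
        index = defaultdict(set)
        for root, _dirs, files in os.walk(top):
            for name in files:
                path = os.path.join(root, name)
                try:
                    with open(path, errors="ignore") as f:
                        text = f.read().lower()
                except OSError:
                    continue
                for word in re.findall(r"[a-z]{3,}", text):
                    index[word].add(path)
        return index

    # A browser cache is just another directory of files. Cached
    # banking pages, Web-mail messages, and plaintext temporary
    # files get indexed along with everything else.
    index = build_index(os.path.expanduser("~/.cache/browser"))  # illustrative path
    for path in sorted(index.get("password", [])):
        print(path)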

GDS isn’t doing anything wrong; it’s indexing and searching documents just as it’s supposed to. The vulnerabilities are due to the designs of the underlying operating system and applications software.

First, Web browsers should not store SSL-encrypted pages or pages with personal e-mail. If they do store them, they should at least ask the user first.

Second, an encryption program that leaves copies of decrypted files in the cache is poorly designed. Those files are there whether or not GDS searches for them.

Third, GDS’ ability to search files and Web pages of multiple users on a computer received a lot of press when it was first discovered. This is a complete nonissue. You have to be an administrator on the machine to do this, which gives you access to everyone’s files anyway.
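On the first of these points, the machinery already exists: HTTP lets a server mark a page so that browsers are never supposed to write it to disk. A minimal sketch using Python’s standard http.server module (the handler and page contents are illustrative):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StatementHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>Account balance: $1,234.56</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # "no-store" tells the browser (and any proxy) never to
            # write this response to disk; "no-cache" alone is weaker,
            # since it still permits storage.
            self.send_header("Cache-Control", "no-store")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # A real deployment would sit behind SSL; the header is the point.
        HTTPServer(("localhost", 8000), StatementHandler).serve_forever()

A browser that honors the header leaves nothing in its cache for a desktop indexer to find; the other two problems need the encryption vendors and the operating system to do their part.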

Some people blame Google for these problems and suggest, wrongly, that Google fix them. What if Google were to bow to public pressure and modify GDS to avoid showing confidential information? The underlying problems would remain: The private Web pages would still be in the browser’s cache; the encryption program would still be leaving copies of the plain-text files in the operating system’s cache; and the administrator could still eavesdrop on anyone’s computer to which he or she has access. The only thing that would have changed is that these vulnerabilities once again would be hidden from the average computer user.

In the end, this can only harm security.

GDS is very good at searching. It’s so good that it exposes vulnerabilities on your computer that you didn’t know about. And now that you know about them, pressure your software vendors to fix them. Don’t shoot the messenger.

Articles on GDS:
<http://www.eweek.com/print_article2/…>
<http://www.pcmag.com/article2/0,1759,1710823,00.asp>

A version of this article previously appeared in eWeek:
<http://www.eweek.com/article2/0,1759,1730748,00.asp>


News

Amtrak is now randomly checking IDs:
<http://www.cnn.com/2004/TRAVEL/11/18/…>
I’ve written about this kind of thing before. It’s the kind of program that makes us no safer, and wastes everyone’s time and Amtrak’s money.
<http://www.schneier.com/essay-008.html>

A New Zealand bank is implementing two-factor authentication with cell phones. Works great if all your customers have a cell phone.
<http://www.smh.com.au/news/Breaking/…>
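For readers unfamiliar with the scheme, here is a minimal sketch of how SMS-based two-factor login generally works. The send_sms function is hypothetical, and this is not the bank’s actual protocol: the site generates a short-lived random code, texts it to the registered phone, and the login succeeds only if the user types the code back before it expires.

    import secrets
    import time

    CODE_TTL_SECONDS = 300          # codes expire after five minutes
    pending = {}                    # username -> (code, expiry time)

    def send_sms(phone_number, message):
        # Hypothetical carrier gateway; a real system would call an SMS API.
        print(f"SMS to {phone_number}: {message}")

    def start_login(username, phone_number):
        code = f"{secrets.randbelow(10**6):06d}"    # six random digits
        pending[username] = (code, time.time() + CODE_TTL_SECONDS)
        send_sms(phone_number, f"Your one-time login code is {code}")

    def finish_login(username, typed_code):
        # pop() makes each code single-use, which also blocks replays.
        code, expiry = pending.pop(username, (None, 0))
        return (code is not None and time.time() < expiry
                and secrets.compare_digest(code.encode(), typed_code.encode()))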

An impressive car theft:
<http://www.bloomberg.com/apps/news?…>
There are two types of thieves, whether they be car thieves or otherwise. First, there are the thieves that want a car, any car. And second, there are the thieves that want one particular car. Against the first type, any security measure that makes your car harder to steal than the car next to it is good enough. Against the second type, even a sophisticated GPS tracking system might not be enough.

Bletchley Park is hosting a security conference this year:
<http://www.bletchleypark.org.uk/page.cfm?…>

Lycos is giving away a screen saver that overloads spam sites:
<http://www.theregister.co.uk/2004/11/26/…>
<http://news.bbc.co.uk/2/hi/technology/4051553.stm>
<http://news.com.com/…>
I’ve written about this before, and it’s a bad idea:
<http://www.schneier.com/crypto-gram-0212.html#1>

Sensible security thinking from New Zealand:
<http://www.nzherald.co.nz/index.cfm?ObjectID=3600794>

An illustrated history of safes and safecracking:
<http://www.timhunkin.com/94_illegal_engineering.htm>

A 1959 paper about a hardware random number generator attached to a computer.
<http://phk.freebsd.dk/rc3600/DASK_rng.pdf>

The Information & Privacy Commissioner for the Province of British Columbia, Canada, has published a report titled “Privacy and the USA Patriot Act – Implications for British Columbia Public Sector Outsourcing.”
<http://www.oipc.bc.ca/sector_public/usa_patriot_act/…>

The ANSI X.9 standards group is looking for a key-wrapping algorithm. There are several candidates, and now some good cryptanalysis is required:
<http://eprint.iacr.org/2004/340/>
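For readers who haven’t met the primitive: key wrapping encrypts one key under another so it can be stored or transported safely. One well-known construction of this type is the AES Key Wrap of RFC 3394. A minimal Python sketch using the third-party “cryptography” package, purely to illustrate the idea (it is not one of the X9 candidates specifically):

    import os
    from cryptography.hazmat.primitives.keywrap import (
        aes_key_wrap, aes_key_unwrap)

    kek = os.urandom(32)        # key-encrypting key (the "wrapping" key)
    data_key = os.urandom(32)   # the key we want to protect

    # Wrap the data key under the KEK for storage or transport.
    wrapped = aes_key_wrap(kek, data_key)

    # Anyone holding the KEK can recover the original key; unwrapping
    # also checks an integrity value and fails loudly if the wrapped
    # blob was tampered with.
    assert aes_key_unwrap(kek, wrapped) == data_key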


Security Notes from All Over: Israeli Airport Security Questioning

<https://www.schneier.com/blog/archives/2004/12/…>

In both “Secrets and Lies” and “Beyond Fear,” I discuss a key difference between attackers and defenders: the ability to concentrate resources. The defender must defend against all possible attacks, while the attacker can concentrate his forces on one particular avenue of attack. This precept is fundamental to a lot of security, and can be seen very clearly in counterterrorism. A country is in the position of the interior; it must defend itself against all possible terrorist attacks: airplane terrorism, chemical bombs, threats at the ports, threats through the mails, lone lunatics with automatic weapons, assassinations, etc., etc., etc. The terrorist just needs to find one weak spot in the defenses and exploit it. This concentration versus diffusion of resources is one reason why the defender’s job is so much harder than the attacker’s.

This same principle guides security questioning at the Ben Gurion Airport in Israel. In this example, the attacker is the security screener and the defender is the terrorist. (It’s important to remember that “attacker” and “defender” are not moral labels, but tactical ones. Sometimes the defenders are the good guys and the attackers are the bad guys. In this case, the bad guy is trying to defend his cover story against the good guy who is attacking it.)

Security is impressively tight at the airport, and includes a potentially lengthy interview by a trained security screener. The screener asks each passenger questions, trying to determine if he’s a security risk. But instead of asking a series of unrelated questions—Where do you live? What do you do for a living? Where were you born?—the screener asks questions that follow a storyline: “Where are you going? Who do you know there? How did you meet him? What were you doing there?” And so on.

See the ability to concentrate resources? The defender—the terrorist trying to sneak aboard the airplane—needs a cover story sufficiently broad to be able to respond to any line of questioning. So he might memorize the answers to several hundred questions. The attacker—the security screener—could ask questions scattershot, but instead concentrates his questioning along one particular line. The theory is that eventually the defender will reach the end of his memorized story, and that the attacker will then notice the subtle changes in the defender as he starts to make up answers.


Counterpane News

I have no speaking events between now and the 15th of January. Have a good holiday, everyone.

Schneier was interviewed on universal surveillance for TechWeb:
<http://www.techweb.com/rss/54200987>

Counterpane has been experiencing a great deal of interest in compliance with the data privacy and protection section of the U.S. Securities and Exchange Commission’s Sarbanes-Oxley regulation. Check out the Counterpane website for a whitepaper on the topic:
<http://www.counterpane.com/soxwhitepaper>


The Doghouse: Internet Security Foundation

<https://www.schneier.com/blog/archives/2004/12/…>

This organization wants to sell their tool to view passwords in textboxes “hidden” by asterisks on Windows. They claim it’s “a glaring security hole in Microsoft Windows” and a “grave security risk.” Their webpage is thick with FUD, and warns that criminals and terrorists can easily clean out your bank accounts because of this problem.

Of course the problem isn’t that users type passwords into their computers. The problem is that programs don’t store passwords securely. The problem is that programs pass passwords around in plaintext. The problem is that users choose lousy passwords, and then store them insecurely. The problem is that financial applications are still relying on passwords for security, rather than two-factor authentication.
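To make the first of those points concrete: storing passwords securely means storing a salted, deliberately slow hash, never the password itself. A minimal sketch using Python’s standard library; the record layout and iteration count are illustrative:

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> bytes:
        salt = os.urandom(16)
        # A slow, salted hash: stealing the database doesn't reveal
        # passwords, and identical passwords don't produce identical
        # records.
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt + digest

    def check_password(password: str, record: bytes) -> bool:
        salt, digest = record[:16], record[16:]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

    record = hash_password("correct horse battery staple")
    assert check_password("correct horse battery staple", record)
    assert not check_password("wrong guess", record)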

But the “Internet Security Foundation” is trying to make as much noise as possible. They even have this nasty letter to Bill Gates that you can sign (36 people signed, the last time I looked). I’m not sure what their angle is, but I don’t like it.

<http://www.internetsecurityfoundation.org/>


Kafka and the Digital Person

<https://www.schneier.com/blog/archives/2004/12/…>

Last week I stayed at the St. Regis hotel in Washington DC. It was my first visit, and the management gave me a questionnaire, asking me things like my birthday, my spouse’s name and birthday, my anniversary, and my favorite fruits, drinks, and sweets. The purpose was clear; the hotel wanted to be able to offer me a more personalized service the next time I visited. And it was a purpose I agreed with; I wanted more personalized service. But I was very uneasy about filling out the form.

It wasn’t that the information was particularly private. I make no secret of my birthday, or anniversary, or food preferences. Much of that information is even floating around the Web somewhere. Secrecy wasn’t the issue.

The issue was control. In the United States, information about a person is owned by the person who collects it, not by the person it is about. There are specific exceptions in the law, but they’re few and far between. There are no broad data protection laws, as you find in the European Union. There are no Privacy Commissioners, as you find in Canada. Privacy law in the United States is largely about secrecy: if the information is not secret, then there’s little you can do to control its dissemination.

As a result, enormous databases exist that are filled with personal information. These databases are owned by marketing firms, credit bureaus, and the government. Amazon knows what books we buy. Our supermarket knows what foods we eat. Credit card companies know quite a lot about our purchasing habits. Credit bureaus know about our financial history, and what they don’t know is contained in bank records. Health insurance records contain details about our health and well-being. Government records contain our Social Security numbers, birthdates, addresses, mother’s maiden names, and a host of other things. Many driver’s license records contain digital pictures.

All of this data is being combined, indexed, and correlated. And it’s being used for all sorts of things. Targeted marketing campaigns are just the tip of the iceberg. This information is used by potential employers to judge our suitability as employees, by potential landlords to determine our suitability as renters, and by the government to determine our likelihood of being a terrorist.

Some stores are beginning to use our data to determine whether we are desirable customers or not. If customers take advantage of too many discount offers or make too many returns, they may be profiled as “bad” customers and be treated differently from the “good” customers.

And with alarming frequency, our data is being abused by identity thieves. The businesses that gather our data don’t care much about keeping it secure. So identity theft is a problem where those who suffer from it—the individuals—are not in a position to improve security, and those who are in a position to improve security don’t suffer from the problem.

The issue here is not about secrecy, it’s about control. The issue is that both government and commercial organizations are building “digital dossiers” about us, and that these dossiers are being used to judge and categorize us through some secret process.

A new book by George Washington University Law Professor Daniel Solove examines the problem of the growing accumulation of personal information in enormous databases. The book is called “The Digital Person: Technology and Privacy in the Information Age,” and it is a fascinating read.

Solove’s book explores this problem from a legal perspective, explaining what the problem is, how current U.S. law fails to deal with it, and what we should do to protect privacy today. It’s an unusually perceptive discussion of one of the most vexing problems of the digital age—our loss of control over our personal information. It’s a fascinating journey into the almost surreal ways personal information is hoarded, used, and abused in the digital age.

Solove argues that the common conceptualization of the privacy problem—Big Brother, some faceless organization knowing our most intimate secrets—is only one facet of the issue. A better metaphor can be found in Franz Kafka’s “The Trial.” In the novel, a faceless bureaucracy constructs a vast dossier about a person, who can’t find out what information exists about him in the dossier, why the information has been gathered, or what it will be used for. Privacy is not about intimate secrets; it’s about who has control of the millions of pieces of personal data that we leave like droppings as we go through our daily life. And until the U.S. legal system recognizes this fact, Americans will continue to live in a world where they have little control over their digital person.

In the end, I didn’t complete the questionnaire from the St. Regis Hotel. While I was fine with the St. Regis in Washington DC having that information in order to make my subsequent stays a little more personal, and was probably fine with that information being shared among other St. Regis hotels, I wasn’t comfortable with the St. Regis doing whatever they wanted with that information. I wasn’t comfortable with them selling the information to a marketing database. I wasn’t comfortable with anyone being able to buy that information. I wasn’t comfortable with that information ending up in a database of my habits, my preferences, my proclivities. It wasn’t the primary use of that information that bothered me, it was the secondary uses.

Solove has done much more thinking about this issue than I have. His book provides a clear account of the social problems involving information privacy, and haunting predictions of where current U.S. legal policies will lead. Even more importantly, the legal solutions he provides are compelling and worth serious consideration. I recommend his book highly.

The book’s website:
<http://www.law.gwu.edu/facweb/dsolove/…>

Order the book on Amazon:
<http://www.amazon.com/exec/obidos/ASIN/0814798462/…>


The Electronic Privacy Information Center (EPIC)

For many Americans, the end of the year is charitable contribution time. (The reasons are tax-related.) While there is no shortage of worthy causes around the world, I would like to suggest contributing at least something to EPIC.

Since its founding ten years ago, EPIC has worked to protect privacy, freedom of expression, and democratic values, and to promote the Public Voice in decisions concerning the future of the Internet. They maintain one of the most extensive websites on privacy and free speech issues on the Internet. They litigate Freedom of Information Act, First Amendment, and privacy cases. They publish books on open government and privacy. They train law school students about the Internet and the public interest. They testify frequently before Congress about emerging civil liberties issues. They provide an extensive listing of privacy resources as well as a guide to practical privacy tools.

Remember when it became public that JetBlue (and other airlines) provided passenger information to the U.S. government in violation of their own privacy policies? Or when it was revealed that the CAPPS-II airline passenger profiling system would be used for other, non-terrorism, purposes? EPIC’s FOIA work uncovered those stories.

December 15th is the 213th anniversary of the ratification of the Bill of Rights. Read through it again today, and notice how its different provisions protect the security of Americans. I’m proud to be a member of EPIC’s Advisory Board. They do good work, and we’re all a lot more secure because of it.

EPIC’s website:
<http://www.epic.org/>

U.S. Bill of Rights:
<http://www.archives.gov/…>


Safe Personal Computing

<https://www.schneier.com/blog/archives/2004/12/…>

I am regularly asked what average Internet users can do to ensure their security. My first answer is usually, “Nothing—you’re screwed.”

But that’s not true, and the reality is more complicated. You’re screwed if you do nothing to protect yourself, but there are many things you can do to increase your security on the Internet.

Two years ago, I published a list of PC security recommendations. The idea was to give home users concrete actions they could take to improve security. This is an update of that list: a dozen things you can do to improve your security.

General: Turn off the computer when you’re not using it, especially if you have an “always on” Internet connection.

Laptop security: Keep your laptop with you at all times when not at home; treat it as you would a wallet or purse. Regularly purge unneeded data files from your laptop. The same goes for PDAs. People tend to store more personal data—including passwords and PINs—on PDAs than they do on laptops.

Backups: Back up regularly. Back up to disk, tape or CD-ROM. There’s a lot you can’t defend against; a recent backup will at least let you recover from an attack. Store at least one set of backups off-site (a safe-deposit box is a good place) and at least one set on-site. Remember to destroy old backups. The best way to destroy CD-Rs is to microwave them on high for five seconds. You can also break them in half or run them through better shredders.
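The mechanics don’t have to be fancy. Here is a minimal sketch of a dated-snapshot backup in Python; the source and destination paths are illustrative, and the destination (ideally removable or networked media) is assumed to be mounted:

    import shutil
    import time
    from pathlib import Path

    def snapshot(source="~/Documents", dest="/mnt/backup"):
        """Write a dated, self-contained zip archive of `source`."""
        src = Path(source).expanduser()
        stamp = time.strftime("%Y-%m-%d")
        archive = Path(dest) / f"backup-{stamp}"
        # make_archive appends ".zip" and copies everything under src.
        return shutil.make_archive(str(archive), "zip", root_dir=src)

    if __name__ == "__main__":
        print("Wrote", snapshot())
        # Rotate: keep recent archives locally, move older ones
        # off-site, and destroy media past your retention window.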

Operating systems: If possible, don’t use Microsoft Windows. Buy a Macintosh or use Linux. If you must use Windows, set up Automatic Update so that you automatically receive security patches. And delete the files “command.com” and “cmd.exe.”

Applications: Limit the number of applications on your machine. If you don’t need it, don’t install it. If you no longer need it, uninstall it. Look into one of the free office suites as an alternative to Microsoft Office. Regularly check for updates to the applications you use and install them. Keeping your applications patched is important, but don’t lose sleep over it.

Browsing: Don’t use Microsoft Internet Explorer, period. Limit use of cookies and applets to those few sites that provide services you need. Set your browser to regularly delete cookies. Don’t assume a Web site is what it claims to be, unless you’ve typed in the URL yourself. Make sure the address bar shows the exact address, not a near-miss.
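That last check can even be automated. A minimal sketch that flags hostnames suspiciously similar, but not identical, to sites you trust; the trusted list and similarity threshold are illustrative:

    from difflib import SequenceMatcher
    from urllib.parse import urlparse

    TRUSTED = {"www.mybank.com", "www.paypal.com"}   # illustrative list

    def near_miss(url, threshold=0.85):
        """Return the trusted site a hostname almost matches, if any."""
        host = urlparse(url).hostname or ""
        if host in TRUSTED:
            return None                   # exact match: fine
        for good in TRUSTED:
            if SequenceMatcher(None, host, good).ratio() >= threshold:
                return good               # suspiciously similar
        return None

    print(near_miss("https://www.paypa1.com/login"))   # flags www.paypal.com
    print(near_miss("https://www.paypal.com/login"))   # None: exact match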

Web sites: Secure Sockets Layer (SSL) encryption does not provide any assurance that the vendor is trustworthy or that its database of customer information is secure.

Think before you do business with a Web site. Limit the financial and personal data you send to Web sites—don’t give out information unless you see a value to you. If you don’t want to give out personal information, lie. Opt out of marketing notices. If the Web site gives you the option of not storing your information for later use, take it. Use a credit card for online purchases, not a debit card.

Passwords: You can’t memorize good enough passwords any more, so don’t bother. For high-security Web sites such as banks, create long random passwords and write them down. Guard them as you would your cash: i.e., store them in your wallet, etc.

Never reuse a password for something you care about. (It’s fine to have a single password for low-security sites, such as for newspaper archive access.) Assume that all PINs can be easily broken and plan accordingly.

Never type a password you care about, such as for a bank account, into a page that isn’t SSL-encrypted. If your bank makes it possible to do that, complain to them. When they tell you that it is OK, don’t believe them; they’re wrong.
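Generating the long random passwords recommended above is easy to get right with a cryptographic random source. A minimal Python sketch; the length and alphabet are illustrative:

    import secrets
    import string

    def random_password(length=20, alphabet=string.ascii_letters + string.digits):
        """A high-entropy password from the OS's cryptographic RNG."""
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # Twenty characters from a 62-symbol alphabet is about 119 bits
    # of entropy, far beyond anything memorable. That's the point:
    # write it down and guard the paper.
    print(random_password())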

E-mail: Turn off HTML e-mail. Don’t automatically assume that any e-mail is from the “From” address.

Delete spam without reading it. Don’t open messages with file attachments unless you know what they contain; otherwise, delete them immediately. Don’t open cartoons, videos, and similar “good for a laugh” files forwarded by your well-meaning friends; again, delete them immediately.

Never click links in e-mail unless you’re sure about the e-mail; copy and paste the link into your browser instead. Don’t use Outlook or Outlook Express. If you must use Microsoft Office, enable macro virus protection; in Office 2000, turn the security level to “high” and don’t trust any received files unless you have to. If you’re using Windows, turn off the “hide file extensions for known file types” option; it lets Trojan horses masquerade as other types of files. Uninstall the Windows Scripting Host if you can get along without it. If you can’t, at least change your file associations, so that script files aren’t automatically sent to the Scripting Host if you double-click them.
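The extension-hiding trick is worth making concrete: with extensions hidden, a file named cute.jpg.exe displays as cute.jpg, and double-clicking it runs the program. A minimal sketch of an attachment check along these lines; the extension lists are illustrative, not exhaustive:

    from pathlib import Path

    EXECUTABLE = {".exe", ".scr", ".pif", ".bat", ".com", ".vbs", ".js"}
    HARMLESS_LOOKING = {".jpg", ".gif", ".txt", ".doc", ".mpg"}

    def suspicious_attachment(filename):
        suffixes = [s.lower() for s in Path(filename).suffixes]
        if not suffixes:
            return False
        # The final suffix is the one Windows actually executes by.
        if suffixes[-1] in EXECUTABLE:
            return True
        # A "harmless" extension buried before the real one is the
        # classic disguise (e.g. cute.jpg.exe).
        return any(s in HARMLESS_LOOKING for s in suffixes[:-1])

    assert suspicious_attachment("cute.jpg.exe")
    assert suspicious_attachment("README.vbs")
    assert not suspicious_attachment("vacation.jpg")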

Antivirus and anti-spyware software: Use it—either a combined program or two separate programs. Download and install the updates, at least weekly and whenever you read about a new virus in the news. Some antivirus products automatically check for updates. Enable that feature and set it to “daily.”

Firewall: Spend $50 for a Network Address Translator firewall device; it’s likely to be good enough in default mode. On your laptop, use personal firewall software. If you can, hide your IP address. There’s no reason to allow any incoming connections from anybody.

Encryption: Install an e-mail and file encryptor (like PGP). Encrypting all your e-mail or your entire hard drive is unrealistic, but some mail is too sensitive to send in the clear. Similarly, some files on your hard drive are too sensitive to leave unencrypted.
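As a sketch of the file half of this advice: here is symmetric file encryption using the third-party Python “cryptography” package’s Fernet recipe. PGP is the tool named above; this just illustrates the idea of encrypting selected sensitive files, and the file name and contents are illustrative:

    from cryptography.fernet import Fernet

    # Generate the key once and keep it somewhere safer than the
    # files it protects (written down with your passwords, say).
    key = Fernet.generate_key()
    f = Fernet(key)

    secret = b"1040 draft: adjusted gross income ..."   # illustrative contents
    open("taxes.enc", "wb").write(f.encrypt(secret))

    # Decryption fails loudly if the ciphertext has been modified,
    # so you get integrity checking along with confidentiality.
    assert f.decrypt(open("taxes.enc", "rb").read()) == secret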

None of the measures I’ve described are foolproof. If the secret police want to target your data or your communications, no countermeasure on this list will stop them. But these precautions are all good network-hygiene measures, and they’ll make you a more difficult target than the computer next door. And even if you only follow a few basic measures, you’re unlikely to have any problems.

I’m stuck using Microsoft Windows and Office, but I use Opera for Web browsing and Eudora for e-mail. I use Windows Update to automatically get patches and install other patches when I hear about them. My antivirus software updates itself regularly. I keep my computer relatively clean and delete applications that I don’t need. I’m diligent about backing up my data and about storing data files that are no longer needed offline.

I’m suspicious to the point of near-paranoia about e-mail attachments and Web sites. I delete cookies and spyware. I watch URLs to make sure I know where I am, and I don’t trust unsolicited e-mails. I don’t care about low-security passwords, but try to have good passwords for accounts that involve money. I still don’t do Internet banking. I have my firewall set to deny all incoming connections. And I turn my computer off when I’m not using it.

That’s basically it. Really, it’s not that hard. The hardest part is developing an intuition about e-mail and Web sites. But that just takes experience.

Others have disagreed with these recommendations:
<http://www.getluky.net/archives/000145.html>
<http://www.berylliumsphere.com/security_mentor/2004/…>

My original essay on the topic:
<http://www.schneier.com/crypto-gram-0105.html#8>

This essay previously appeared on CNet:
<http://news.com.com/…>


Comments from Readers

From: Sitaram Chamarty <sitaram@atc.tcs.co.in>
Subject: Electronic Voting Machines

While I agree with much of what you say regarding electronic voting and related aspects, there are a few things that you may or may not be aware of that I would like to add to the discussion.

(1) While Indian voting does not have “propositions” to vote on, it does happen sometimes that a state and federal election are combined. In fact, an election can be called *earlier* if the party in power loses control before their term is up—if it weren’t for this, all elections would go like clockwork (as in the US) and perhaps then all state and federal elections would be at the same time.

(2) More importantly, the EVM technology used in India is, in my humble opinion, far superior to that in the US. There is a very nice technical comparison of the two systems at <http://techaos.blogspot.com/2004/05/…> (skip the first few paragraphs if you like). It’s not a high-profile site; in fact it is someone’s personal blog, but the facts speak for themselves and can be verified independently, so I suppose that’s OK.

(3) Finally, it does help that our election commission <http://www.eci.gov.in/> is fiercely independent and has the constitutional mandate and the guts to often go head to head with the ruling party! It also helps that the machines are made by public sector organizations. CEOs of EVM manufacturers promising to “deliver the vote” to a certain party, and that sort of thing, just don’t seem to happen here 😉

From: Jeremy Epstein <jeremy.epstein@cox.net>
Subject: Electronic Voting Machines

You wrote: “In Fairfax County, VA, in 2003, a programming error in the electronic voting machines caused them to mysteriously subtract 100 votes from one particular candidate’s totals.”

Actually, it was much worse than that. It wasn’t 100 votes, it was one out of 100 votes (i.e., 1%) in a race where the margin was 1%.

Also, as for source code for voting machines being open, I disagree for two reasons.

First, if we’re going to have voter-verified paper audit trails (or whatever synonym you choose), then it doesn’t matter what the software does. If it misbehaves, we can catch it in the recount.

Second, it draws the line in the wrong place. If we demand open source code, it gives people like the ITAA a chance to say we’re a bunch of geeks who don’t believe in property rights. And in fact, there’s no reason why voting machines *shouldn’t* compete on usability, reliability, etc., which might require that the source be closed. Let’s let the free market rule here! I’m not saying the source needs to be proprietary to be secure (I agree with you on that point). You’ve even pointed out in your columns that there’s no accurately discernible difference in the security of Windows and Linux, to take the most popular comparison.

As a matter of getting what we want, we in the security community are better off sticking to what really matters (the paper audit trail), and steering clear of the political minefield of source code access.

From: Thomas Stalzer <electroemporium@yahoo.com>
Subject: Electronic Voting Machines

I have noticed some things in Italian elections here which, while they may seem a bit “hokey,” really, really work. The system has been in use for years and years, and basically works like this:

Ballots are paper, and are separated into various sections, each ballot color-coded and devoted exclusively to one thing, such as representatives, the president, referendums, and the like. You mark your answers with a nice, big “X,” with spaces for writing in a candidate when required. Various color-coded boxes take the various color-coded ballots when you are done. You reduce the risk and impact of spoiled ballots this way, it seems to me.

The various parties even mail you guidelines on just exactly how to mark the ballot. See the picture, follow the picture…even somebody who is barely literate can figure this setup out. And since the ballot system is the same everywhere, the various TV stations devote considerable time in “mini classes” on how everything works. And they air in prime time, too! And towards election day, they’re practically inescapable.

But the real secret is in the counting. My wife has worked as an official herself, and this part amazes me. The various ballots are counted with the direct participation and observation of the various political parties, by hand. Each has to verify and agree on the counts.

Obviously this is labor intensive as hell; talk about voter participation! And yes, the officials get paid, too, so it isn’t exactly cheap. But you can’t tell me this is less cost-effective than all those electronic machines you and everybody have been talking about. And guess what, since practice makes perfect, the results are far quicker than you think.

Is it perfect? Certainly not. I’m sure flaws could be found, but it seems to me that would more involve mutual neglect or collusion of opposing interests, in a country where politics is the second national sport. At least you can recount a box until everyone agrees.

Yes, Italy gets a lot of raps for the 50-some-odd executive governments since WW 2, and certainly Italy isn’t exactly free from corruption; but I really think this is something where they got it right. At least I don’t hear any complaints about this, and Italians are not shy about complaining when their politicians do things they don’t like!

From: Geoff Kuenning <geoff@cs.hmc.edu>
Subject: Computer Security and Liability

If you’ve never read it, you should track down the CACM article on the history of steam boilers that appeared sometime in the ’80s. The brief summary is that after steam power was invented, there were lots of nasty boiler explosions. In the UK, the problem was dealt with by regulation. In the U.S., free-market advocates succeeded in arguing that liability law was sufficient: boiler makers would lose a few lawsuits and would then have an incentive to develop safer boilers.

The result was that boiler-related deaths dropped to near zero in the UK and continued at high rates for 20 more years in the U.S., until finally we broke down and regulated the industry.

The problem with liability as a feedback mechanism is that the negative feedback is strongly disconnected from the original action. Liability can help, but it’s not at all clear to me that simply making MS liable for all the worms in the world would cause them to start making secure software.

From: “Joe Patterson” <jpatterson@asgardgroup.com>
Subject: World Series Security

A few hundred plainclothes policemen will look out of place: they may be nervous, won’t be watching the game, and won’t be cheering, hissing, booing, or waving like sports fans. So there are a few issues here too:

1) It’s hard to know a few hundred people by sight. How will you keep the plainclothes policemen from spending a lot of their time observing each other? What authentication mechanism will there be to allow one cop to trust another one? How easy is that authentication mechanism to fake? This problem can be fixed by assigning areas to each cop, and making sure each cop personally knows and can recognize all of the other cops in his area and adjacent areas (which is probably how it’s done in most cases, but it’s an important consideration nonetheless).

2) What about security-conscious attendees who think the plainclothes cops are acting a little odd and might be terrorists? How many of these attendees will report “suspicious activity”? How many of these reports will it take before *any* report of “suspicious activity” is dismissed as probably just one of the plainclothes cops?

(Note that these two problems are facets of the same issue. Terrorists are rare. Almost certainly there are fewer terrorists than there are cops. We certainly hope so! However, plainclothes cops act fairly similarly to terrorists. Adding more plainclothes cops provides “chaff” for terrorists. It increases the number of false positives. When the noise level gets high enough, small signals are indistinguishable from no signal at all.)

3) Not that it makes things any worse, but terrorists practice behavior recognition too. A skilled terrorist *should* be able to pick out which people are cops. Of course, with the “popularity” of suicide bombings as a terrorist tool, the good guys do have one advantage: a kind of reverse evolution. Terrorists who are bad at their job may end up being captured or deterred. Those who are good at their job remove themselves from the pool of enemies.

My suspicion is that a better method would be to have a smaller number of *uniformed* cops, and a lot of skilled observers staffing the surveillance equipment—skilled minds and trained eyes using technology as a force multiplier. This gives the extra bonus of being able to see which people seem *more* nervous when the uniformed cops are around.

From: Rick Smith <rick@cryptosmith.com>
Subject: Forged Faxes

Faxes are easy to forge, especially if you know how the original documents should appear. The story of the West Memphis jail goes beyond that—the document had obvious elements that marked it as a forgery, according to the news report from the Huntsville Times.

The underlying problem was that nobody at the jail was responsible for verifying release documents. According to the news report, releases involved decisions by both front office and back office people. The back office people assumed the front office people would not forward the release order to them (via fax) unless it was legitimate. Apparently the front office people assumed the back office would verify the release order.

Bogus faxes have released inmates before. In this case, as in most cases, a simple phone call would have uncovered the forgery.

From: John Wilson <tug@wilson.co.uk>
Subject: The Doghouse: Merced County

The wayback machine has the original page:
<http://web.archive.org/web/20040117090053/http://…>

From: Mel Beckman <mel@becknet.com>
Subject: Lexar’s Comment

I thought you’d be interested in my experience with Lexar over their flawed Secure JumpDrive product. I purchased the 1.0 Secure JumpDrive and recommended it to one of my clients, which bought dozens of them. When the @stake story came out, I contacted Lexar, which had recently started having a fire sale on the old 1.0 drives at CompUSA—with the 1.0 version number removed from all the drives but CompUSA’s advertising claiming they are 2.0 drives! CompUSA is still selling those faulty drives today, and nobody in CompUSA management would return my calls.

I asked Lexar to replace my drive with a 2.0 model, since the 1.0 has a fatal hardware flaw (far from “slight”!) and the 2.0 software won’t run on the 1.0 drive. After numerous denials of any problem, including the claim that “only a skilled hacker can get in so we consider the 1.0 drive still secure,” Lexar finally asked me to ship my drive back and replaced it with a 2.0 drive.

The last time I checked, CompUSA was still selling the remaindered 1.0 drives, still labeled as “Secure,” but no longer claiming to have Lexar’s 2.0 software.

Incidentally, neither I nor my client has ever been contacted by Lexar about the security problem, and Lexar is giving my client the runaround on replacing their numerous drives.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide. See <http://www.counterpane.com>.

Sidebar photo of Bruce Schneier by Joe MacInnis.