January 15, 2005

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

Or you can read this issue on the web at <>.

Schneier also publishes these same essays in his blog: <>. An RSS feed is available.

In this issue:

Fingerprinting Students


A nascent security trend in the U.S. is tracking schoolchildren when they get on and off school buses. A school district in Spring, Texas, is using computerized ID badges to record this information, and wirelessly sending it to police headquarters. Another school district, in Phoenix, is doing the same thing with fingerprint readers. The system is supposed to help prevent the loss of a child, whether through kidnapping or accident.

What’s going on here? Have these people lost their minds? Tracking kids as they get on and off school buses is a ridiculous idea. It’s expensive, invasive, and doesn’t increase security very much.

Security is always a trade-off. In “Beyond Fear,” I delineated a five-step process to evaluate security countermeasures. The idea is to be able to determine, rationally, whether a countermeasure is worth it. In the book, I applied the five-step process to everything from home burglar alarms to military action against terrorism. Let’s apply it in this case.

Step 1: What assets are you trying to protect? Children.

Step 2: What are the risks to these assets? Loss of the child, either due to kidnapping or accident. Child kidnapping is a serious problem in the U.S.; the odds of a child being abducted by a family member are 1 in 340, and by a non-family member 1 in 1,200 (per year). (These statistics are for 1999—link below—and include all sorts of incidents that normally wouldn’t be considered kidnappings. Additionally, my guess is that the rates in Spring, Texas, are much lower.) Very few of these kidnappings involve school buses, so it’s unclear how serious the specific risks being addressed here are.

Step 3: How well does the security solution mitigate those risks? Not very well.

Let’s imagine how this system might provide security in the event of a kidnapping. If a kidnapper—assume it’s someone the child knows—goes onto the school bus and takes the child off at the wrong stop, the system would record that. Otherwise—if the kidnapping took place either before the child got on the bus or after the child got off—the system wouldn’t record anything suspicious. Yes, combined with morning attendance records it would tell investigators whether the kidnapping happened before or after the school bus ride, but is that one piece of information worth this entire tracking system? I doubt it.

You could imagine a movie-plot scenario where this kind of tracking system could help the hero recover the kidnapped child, but it hardly seems useful in the general case.

Step 4: What other risks does the security solution cause? The additional risk is the data collected through constant surveillance. Where is this information collected? Who has access to it? How long is it stored? These are important security questions that get no mention.

Step 5: What costs and trade-offs does the security solution impose? There are two. The first is obvious: money. I don’t have exact figures, but it’s expensive to outfit every child with an ID card and every school bus with this system. The second cost is more intangible: a loss of privacy. We are raising children who think it normal that their daily movements are watched and recorded by the police. That feeling of privacy is not something we should give up lightly.

So, finally: is this system worth it? No. The security gained is not worth the money and privacy spent. If the goal is to make children safer, the money would be better spent elsewhere: guards at the schools, education programs for the children, etc.

If this system makes so little sense, why have at least two cities in the U.S. implemented it? The obvious answer is that the school districts didn’t think the problem through. Either they were seduced by the technology, or by the companies that built the system. But there’s another, more interesting, possibility.

In “Beyond Fear” I talk about the notion of agenda. The five-step process is a subjective one, and should be evaluated from the point of view of the person making the trade-off decision. If you imagine that the school officials are making the trade-off, then the system suddenly makes sense.

If a kidnapping occurs on school property, the subsequent investigation could easily hurt school officials. They could even lose their jobs. If you view this security countermeasure as one protecting *them* just as much as it protects children, it suddenly makes more sense. The trade-off might not be worth it in general, but it’s worth it to *them*.

Kidnapping is a real problem, and countermeasures that help reduce the risk are a good thing. But remember that security is always a trade-off, and a good security system is one where the security benefits are worth the money, convenience, and liberties that are being given up. Quite simply, this system isn’t worth it.

News article:

Statistics on kidnappings:

Crypto-Gram Reprints

Crypto-Gram is currently in its eighth year of publication. Back issues cover a variety of security-related topics, and can all be found on <>. These are a selection of articles that appeared in this calendar month in other years.

Diverting Aircraft and National Intelligence:

Fingerprinting Foreigners:

Color-coded Terrorist Threat Levels:

Militaries and Cyber-War:

A cyber Underwriters Laboratories?

Code signing:

Block and stream ciphers:

Easy-to-Remember PINs


The UK is switching to a “chip and pin” system for credit card transactions. It’s been happening slowly, but by January (I’m not sure if it is the beginning of January or the end), every UK credit card will be a smart card.

This kind of system already exists in France and elsewhere. The cards have embedded chips. When you want to make a purchase, you stick your card in a slot and type your four-digit PIN on a keypad. (Presumably they will never turn off the magnetic stripe and signature system required for U.S. cards.)

One consumer fear over this process is about what happens if you forget your PIN. To allay fears, credit card companies have been placing newspaper advertisements suggesting that people change their PINs to an easy-to-remember number: “Keep forgetting your PIN? It’s easy to change with chip and PIN. To something more memorable like a birthday or your lucky numbers.”

Don’t the credit card companies have anyone working on security?

The ad also goes on to say that you can change your PIN by phone, which has its own set of problems.
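Some back-of-the-envelope arithmetic (mine, not from the ad) shows just how bad the advice is: a four-digit PIN that encodes a birthday as day-and-month can take on only as many values as there are dates in a year.

```python
import calendar

# My own back-of-the-envelope arithmetic: count the four-digit PINs that
# encode a birthday as DDMM (using a leap year, so 29 February counts).
birthday_pins = sum(calendar.monthrange(2004, month)[1] for month in range(1, 13))
all_pins = 10_000  # 0000 through 9999

print(birthday_pins)              # 366
print(all_pins // birthday_pins)  # 27
```

A thief who knows the cardholder followed the ad’s advice needs at most 366 guesses instead of 10,000—roughly a 27-fold smaller search space, before even factoring in that birthdays are easy to look up.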

The ad:
(I know that link isn’t a primary source, but I also received the information from at least two readers, and one of them said that the advertisement was printed in the London Times.)

Shutting Down the GPS Network


The U.S. government is considering plans for temporarily disabling the U.S. network of global positioning satellites during a national crisis to prevent terrorists from using the technology.

During a national crisis, GPS technology will help the good guys far more than it will help the bad guys. Disabling the system will almost certainly do much more harm than good.

This reminds me of comments after the Madrid bombings that we should develop ways to shut down the cell phone network after a terrorist attack. (The Madrid bombs were detonated using cell phones, although not by calling cell phones attached to the bombs.) After a terrorist attack, cell phones are critical to both rescue workers and survivors.

All technology has good and bad uses—automobiles, telephones, cryptography, etc. For the most part, you have to accept the bad uses if you want the good uses. This is okay, because the good guys far outnumber the bad guys, and the good uses far outnumber the bad ones.



Safecracking for the computer scientist:
It’s a great paper, and it has completely pissed off the locksmithing community:
There is a reasonable debate to be had about secrecy versus full disclosure, but a lot of these comments are just mean. Blaze is NOT being dishonest. His results are NOT trivial. I believe that the physical security community has a lot to learn from the computer security community, and that the computer security community has a lot to learn from the physical security community. Blaze’s work has important lessons for computer security and, as it turns out, for physical security as well, notwithstanding these people’s attempt to trivialize it in their efforts to attack him.

More mobile phone worms:

A 1959 paper about a hardware random number generator attached to a computer:

Police slipped some plastic explosives into a random passenger’s suitcase as part of a test of sniffer dogs. Four days later, the explosives were still missing.
It’s perfectly reasonable to plant an explosive-filled suitcase in an airport in order to test security. It is not okay to plant it in someone’s bag without his knowledge and permission. (The explosive residue could remain on the suitcase long after the test, and might be picked up by one of those trace mass spectrometers that detects the chemical residue associated with bombs.) But if you are going to plant plastic explosives in the suitcase of some innocent passenger, shouldn’t you at least write down which suitcase it was?

The Irish Commission on Electronic Voting has released a 433-page report. It’s an excellent and detailed analysis of the e-voting system purchased by the Irish government.

EPIC has posted a list of New Year’s Privacy Resolutions.

In a story on a computer glitch that forced Comair to cancel 1,100 flights on Christmas Day, I was quoted in an AP story as saying: “If this kind of thing could happen by accident, what would happen if the bad guys did this on purpose?” I’m sure I said that, but I wish the reporter hadn’t used it. It’s just the sort of fear-mongering that I object to when others do it.

Lots of uniforms belonging to Canadian security screeners have been lost:
I wrote about the security implications of visual authentication tools, like uniforms, in my blog:

Wi-Fi shielding paint:

Good analysis of the security implications of not giving illegal aliens drivers licenses:

Altimeter watches now a terrorist threat:
Someone explain to me why I should worry that a watch being worn by someone might be used as a fuse mechanism. The person himself is a far more effective fuse mechanism. And if the risk is a bomb small enough to fit inside a watch, then we’ve got way bigger problems than a particular brand of watch.

The Honeynet Project released a report saying that Linux is not being hacked. Test systems have an average life expectancy—time before they are successfully hacked—of three months. This is much greater than that of Windows systems, which have average life expectancies on the order of a few minutes. It’s important to remember that this paper focuses on vulnerable systems. The Honeynet researchers deployed almost 20 vulnerable systems to monitor hacker tactics, and found that no one was hacking the systems. That’s the real story: the hackers aren’t bothering with Linux. Two years ago, a vulnerable Linux system would be hacked in less than three days; now it takes three months. Why? My guess is a combination of two reasons. One, Linux is that much more secure than Windows. Two, the bad guys are focusing on Windows—more bang for the buck.

This article titled “Border Patrol hails new ID system” could have just as accurately been titled “No terrorists caught by new ID system.” Notice that terrorism justifies the security expense, and it ends up being used for something else. Look at the numbers of people detained for different sorts of crimes, and you immediately notice how petty most of the arrests really are.

Sad story of an anti-terrorism false positive:

Burglars and “Feeling Secure”


This quote is from “Confessions of a Master Jewel Thief,” by Bill Mason (Villard, 2003): “Nothing works more in a thief’s favor than people feeling secure. That’s why places that are heavily alarmed and guarded can sometimes be the easiest targets. The single most important factor in security—more than locks, alarms, sensors, or armed guards—is attitude. A building protected by nothing more than a cheap combination lock but inhabited by people who are alert and risk-aware is much safer than one with the world’s most sophisticated alarm system whose tenants assume they’re living in an impregnable fortress.”

The author, a burglar, found that luxury condos were an excellent target. Although they had much more security technology than other buildings, they were vulnerable because no one believed a thief could get through the lobby.

The book:

Counterpane News

Schneier is speaking to the Boston Chapter of the Chartered Property Casualty Underwriter (CPCU) Society on January 20th:

Counterpane has recently been certified as an authorized scanning vendor for both MasterCard’s Site Data Protection (SDP) program and Visa’s Cardholder Information Security Program (CISP).

Counterpane also had an impressive Q4. A press release about that is pending.

Short audio interview with Schneier:

Hollywood Sign Security


In Los Angeles, the “HOLLYWOOD” sign is protected by a fence and a locked gate. Because several different agencies need access to the sign for various purposes, the chain locking the gate is formed by several locks linked together. Each of the agencies has the key to its own lock, and not the key to any of the others. Of course, anyone who can open one of the locks can open the gate.

This is a nice example of a multiple-user access-control system. It’s simple, and it works. You can also make it as complicated as you want, with different locks in parallel and in series.
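The logic of the arrangement can be sketched in a few lines. This is an illustrative model only; the agency names and helper functions are mine, not anything the city actually uses:

```python
# Illustrative model of multi-user lock chains; all names are hypothetical.

def make_lock(accepted_key):
    """A padlock that opens only for its own key."""
    return lambda key: key == accepted_key

def parallel(locks):
    """Locks linked in parallel, as on the Hollywood gate: opening ANY
    one lock breaks the chain and opens the gate."""
    return lambda keys: any(lock(k) for lock in locks for k in keys)

def series(locks):
    """Locks in series: EVERY lock must be opened by some key you hold."""
    return lambda keys: all(any(lock(k) for k in keys) for lock in locks)

gate = parallel([make_lock("parks-dept"), make_lock("fire-dept"), make_lock("utility")])
print(gate(["fire-dept"]))  # True -- one agency's key is enough
print(gate(["wrong-key"]))  # False
```

The design trade-off is plain in the code: parallel locks act as an OR, so the gate is only as secure as its weakest lock, while series locks act as an AND, requiring cooperation among keyholders—the arrangement used in dual-control safes.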

Secure Flight Privacy/IT Working Group


I am participating in a working group to help evaluate the effectiveness and privacy implications of the TSA’s Secure Flight program. We’ve had one meeting so far, and it looks like it will be an interesting exercise.

For those who have not been following along, Secure Flight is the follow-on to CAPPS-I. (CAPPS stands for Computer Assisted Passenger Pre-Screening.) CAPPS-I has been in place since 1997, and is a simple system to match airplane passengers to a terrorist watch list. A follow-on system, CAPPS-II, was proposed last year. That complicated system would have given every traveler a risk score based on information in government and commercial databases. There was a huge public outcry over the invasiveness of the system, and it was cancelled over the summer. Secure Flight is the new follow-on system to CAPPS-I.

Many of us believe that Secure Flight is just CAPPS-II with a new name. I hope to learn whether or not that is true.

I hope to learn a lot of things about Secure Flight and airline passenger profiling in general, but I probably won’t be able to write about it. In order to be a member of this working group, I was required to apply for a U.S. government SECRET security clearance and sign an NDA, promising that I would not disclose something called “Sensitive Security Information.”

SSI is one of three new categories of secret information, all of which I think have no reason to exist. There is already a classification scheme—CONFIDENTIAL, SECRET, TOP SECRET, etc.—and information should either fit into that scheme or be public. A new scheme is just confusing. The NDA we were supposed to sign was very general, and included such provisions as allowing the government to conduct warrantless searches of our residences. (Two federal unions have threatened to sue the government over several provisions in that NDA, which applies to many DHS employees. And just recently, the DHS backed down.)

After push-back by myself and several others, we were given a much less onerous NDA to sign.

I am not happy about the secrecy surrounding the working group. NDAs and classified briefings raise serious ethical issues for government oversight committees. My suspicion is that I will be wowed with secret, unverifiable assertions that I will either have to accept or (more likely) question, but not be able to discuss with others. In general, secret deliberations favor the interests of those who impose the rules. They really run against the spirit of the Federal Advisory Committee Act (FACA).

Moreover, I’m not sure why this working group is not in violation of FACA. FACA is a 1972 law intended to govern how the Executive branch uses groups of advisors outside the federal government. Among other rules, it requires that advisory committees announce their meetings, hold them in public, and take minutes that are available to the public. The DHS was given a specific exemption from FACA when it was established: the Secretary of Homeland Security has the authority to exempt any advisory committee from FACA; the only requirement is that the Secretary publish notice of the committee in the Federal Register. I looked, and have not seen any such announcement.

Because of the NDA and the failure to follow FACA, I will not be able to fully exercise my First Amendment rights. That means that the government can stop me from saying things that may be important for the public to know. For example, if I learn that the old CAPPS program failed to identify actual terrorists, or that a lot of people who were not terrorists were wrongfully pulled off planes and the government has tried to keep this quiet—I’m just making these up—I can’t tell you. The government could prosecute me under the NDA by claiming that these facts are SSI, and the public would never know, because this group has none of the open-meeting obligations of a FACA committee.

In other words, the secrecy of this committee could have a real impact on the public understanding of whether or not air passenger screening really works.

In any case, I hope I can help make Secure Flight an effective security tool. I hope I can help minimize the privacy invasions on the program if it continues, and help kill it if it is ineffective. I’m not optimistic, but I’m hopeful.

I’m not hopeful that you will ever learn the results of this working group. We’re preparing our report for the Aviation Security Advisory Committee, and I very much doubt that they will release the report to the public.

Original NDA:

Story about unions objecting to the NDA:

And a recent development that may or may not affect this group:

Cyberwar

The first problem with any discussion about cyberwar is definitional. I’ve been reading about cyberwar for years now, and there seem to be as many definitions of the term as there are people who write about the topic. Some people try to limit cyberwar to military actions taken during wartime, while others are so inclusive that they include the script kiddies who deface websites for fun.

I think the restrictive definition is more useful, and would like to define four different terms as follows:

Cyberwar—Warfare in cyberspace. This includes attacks against a nation’s military—forcing critical communications channels to fail, for example—and attacks against the civilian population.

Cyberterrorism—The use of cyberspace to commit terrorist acts. An example might be hacking into a computer system to cause a nuclear power plant to melt down, a dam to open, or two airplanes to collide. In a previous Crypto-Gram essay, I discussed how realistic the cyberterrorism threat is.

Cybercrime—Crime in cyberspace. This includes much of what we’ve already experienced: theft of intellectual property, extortion based on the threat of DDOS attacks, fraud based on identity theft, and so on.

Cybervandalism—The script kiddies who deface websites for fun are technically criminals, but I think of them more as vandals or hooligans. They’re like the kids who spray paint buses: in it more for the thrill than anything else.

At first glance, there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime, even vandalism are old concepts. That’s correct: the only thing new is the domain; it’s the same old stuff occurring in a new arena. But because the arena of cyberspace is different from other arenas, there are differences worth considering.

One thing that hasn’t changed is that the terms overlap: although the goals are different, many of the tactics used by armies, terrorists, and criminals are the same. Just as all three groups use guns and bombs, all three groups can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime, or even—if it’s done by some fourteen-year-old who doesn’t really understand what he’s doing—cybervandalism. Which it is will depend on the motivations of the attacker and the circumstances surrounding the attack…just as in the real world.

For it to be cyberwar, it must first be war. And in the 21st century, war will inevitably include cyberwar. For just as war moved into the air with the development of kites and balloons and then aircraft, and war moved into space with the development of satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics, and defenses.

The Waging of Cyberwar

There should be no doubt that the smarter and better-funded militaries of the world are planning for cyberwar, both attack and defense. It would be foolish for a military to ignore the threat of a cyberattack and not invest in defensive capabilities, or to disregard the strategic or tactical possibility of launching an offensive cyberattack against an enemy during wartime. And while history has taught us that many militaries are indeed foolish and ignore the march of progress, cyberwar has been discussed too much in military circles to be ignored.

This implies that at least some of our world’s militaries have Internet attack tools that they’re saving in case of wartime. They could be denial-of-service tools. They could be exploits that would allow military intelligence to penetrate military systems. They could be viruses and worms similar to what we’re seeing now, but perhaps country- or network-specific. They could be Trojans that eavesdrop on networks, disrupt network operations, or allow an attacker to penetrate still other networks.

Script kiddies are attackers who run exploit code written by others, but don’t really understand the intricacies of what they’re doing. Conversely, professional attackers spend an enormous amount of time developing exploits: finding vulnerabilities, writing code to exploit them, figuring out how to cover their tracks. The real professionals don’t release their code to the script kiddies; the stuff is much more valuable if it remains secret until it is needed. I believe that militaries have collections of vulnerabilities in common operating systems, generic applications, or even custom military software that their potential enemies are using, and code to exploit those vulnerabilities. I believe that these militaries are keeping these vulnerabilities secret, and that they are saving them in case of wartime or other hostilities. It would be irresponsible for them not to.

The most obvious cyberattack is the disabling of large parts of the Internet, at least for a while. Certainly some militaries have the capability to do this, but in the absence of global war I doubt that they would do so; the Internet is far too useful an asset and far too large a part of the world economy. More interesting is whether they would try to disable national pieces of it. If Country A went to war with Country B, would Country A want to disable Country B’s portion of the Internet, or remove connections between Country B’s Internet and the rest of the world? Depending on the country, a low-tech solution might be the easiest: disable whatever undersea cables they’re using as access. Could Country A’s military turn its own Internet into a domestic-only network if they wanted?

For a more surgical approach, we can also imagine cyberattacks designed to destroy particular organizations’ networks, such as the denial-of-service attack against the Al Jazeera website during the recent Iraqi war, allegedly by pro-American hackers but possibly by the government. We can imagine a cyberattack against the computer networks at a nation’s military headquarters, or the computer networks that handle logistical information.

One important thing to remember is that destruction is the last thing a military wants to do with a communications network. A military only wants to shut an enemy’s network down if they aren’t getting useful information from it. The best thing to do is to infiltrate the enemy’s computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, the next best is to perform traffic analysis: analyze who is talking to whom and the characteristics of that communication. Only if a military can’t do any of that do they consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh all of the advantages.
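Traffic analysis needs nothing but metadata. A toy illustration (the packet log is invented) of how much can be inferred without reading a single message:

```python
from collections import Counter

# Toy illustration of traffic analysis: only (sender, receiver) metadata
# is examined, never message contents. The log below is invented.
packet_log = [
    ("hq", "unit-a"), ("hq", "unit-b"), ("unit-a", "hq"),
    ("hq", "unit-a"), ("hq", "unit-a"), ("unit-b", "depot"),
]

link_counts = Counter(packet_log)
# The heaviest link hints at the chain of command without reading a byte.
print(link_counts.most_common(1))  # [(('hq', 'unit-a'), 3)]
```

Even this trivial tally reveals which node is the hub and which relationship carries the most traffic—one reason an eavesdropper may prefer a live, chatty network to a dead one.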

Properties of Cyberwar

Because attackers and defenders use the same network hardware and software, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows. When a military discovers a vulnerability in a common product, they can either alert the manufacturer and fix the vulnerability, or not tell anyone. It’s not an easy decision. Fixing the vulnerability gives both the good guys and the bad guys a more secure system. Keeping the vulnerability secret means that the good guys can exploit the vulnerability to attack the bad guys, but it also means that the good guys are vulnerable. As long as everyone uses the same microprocessors, operating systems, network protocols, applications software, etc., the equities issue will always be a consideration when planning cyberwar.

Cyberwar can take on aspects of espionage, and does not necessarily involve open warfare. (In military talk, cyberwar is not necessarily “hot.”) Since much of cyberwar will be about seizing control of a network and eavesdropping on it, there may not be any obvious damage from cyberwar operations. This means that the same tactics might be used in peacetime by national intelligence agencies. There’s considerable risk here. Just as U.S. U2 flights over the Soviet Union could have been viewed as an act of war, the deliberate penetration of a country’s computer networks might be as well.

Cyberattacks target infrastructure. In this way they are no different than conventional military attacks against other networks: power, transportation, communications, etc. All of these networks are used by both civilians and the military during wartime, and attacks against them inconvenience both groups of people. For example, when the Allies bombed German railroad bridges during World War II, that affected both civilian and military transport. And when the United States bombed Iraqi communications links in both the First and Second Iraqi Wars, that affected both civilian and military communications. Cyberattacks, even attacks targeted as precisely as today’s smart bombs, are likely to have collateral effects.

Cyberattacks can be used to wage information war. Information war is another topic that’s received considerable media attention of late, although it is not new. Dropping leaflets on enemy soldiers to persuade them to surrender is information war. Broadcasting radio programs to enemy troops is information war. As people get more and more of their information over cyberspace, cyberspace will increasingly become a theater for information war. It’s not hard to imagine cyberattacks designed to co-opt the enemy’s communications channels and use them as a vehicle for information war.

Because cyberwar targets information infrastructure, the waging of it can be more damaging to countries that have significant computer-network infrastructure. The idea is that a technologically poor country might decide that a cyberattack that affects the entire world would disproportionately affect its enemies, because rich nations rely on the Internet much more than poor ones. In some ways this is the dark side of the digital divide, and one of the reasons countries like the United States are so worried about cyberdefense.

Cyberwar is asymmetric, and can be a guerrilla attack. Unlike conventional military offensives involving divisions of men and supplies, cyberattacks are carried out by a few trained operatives. In this way, cyberattacks can be part of a guerrilla warfare campaign.

Cyberattacks also make effective surprise attacks. For years we’ve heard dire warnings of an “electronic Pearl Harbor.” These are largely hyperbole today. I discuss this more in that previous Crypto-Gram essay on cyberterrorism, but right now the infrastructure just isn’t sufficiently vulnerable in that way.

Cyberattacks do not necessarily have an obvious origin. Unlike in other forms of warfare, misdirection is likely to be a feature of a cyberattack. It’s possible to have damage being done but not know where it’s coming from. This is a significant difference; there’s something terrifying about not knowing your opponent—or knowing it, and then being wrong. Imagine if, after Pearl Harbor, we had not known who attacked us.

Cyberwar is a moving target. In the previous paragraph, I said that today the threat of an electronic Pearl Harbor is largely hyperbole. That’s true; but this, like all other aspects of cyberspace, is continually changing. Technological improvements affect everyone, including cyberattack mechanisms. And the Internet is becoming critical to more of our infrastructure, making cyberattacks more attractive. There will be a time in the future, perhaps not too far into the future, when a surprise cyberattack becomes a realistic threat.

And finally, cyberwar is a multifaceted concept. It’s part of a larger military campaign, and attacks are likely to have both real-world and cyber components. A military might target the enemy’s communications infrastructure through both physical attack—bombings of selected communications facilities and transmission cables—and virtual attack. An information warfare campaign might include dropping of leaflets, usurpation of a television channel, and mass sending of e-mail. And many cyberattacks still have easier non-cyber equivalents: A country wanting to isolate another country’s Internet might find a low-tech solution, involving the acquiescence of backbone companies like Cable & Wireless, easier than a targeted worm or virus. Cyberwar doesn’t replace war; it’s just another arena in which the larger war is fought.

People overplay the risks of cyberwar and cyberterrorism. It’s sexy, and it gets media attention. And at the same time, people underplay the risks of cybercrime. Today crime is big business on the Internet, and it’s getting bigger all the time. But luckily, the defenses are the same. The countermeasures aimed at preventing both cyberwar and cyberterrorist attacks will also defend against cybercrime and cybervandalism. So even if organizations secure their networks for the wrong reasons, they’ll do the right thing.

My previous essay on cyberterrorism:

Comments from Readers

From: “David Allsopp” <d.allsopp>
Subject: Re: Behavioral Profiling

Last time I flew, I overheard a lady passenger (white, European) describing how she had been stopped and searched at checkpoints again, and again, and again, until she finally lost her temper with a customs official and accused them of harassment. The problem seemed to be that after the first search she was nervous about checkpoints, and so was stopped the second time for “undue anxiety,” making her even worse the third time, and so on.

Whilst I agree with your argument that trained humans make for good security, it’s hard to see how to distinguish fear of flying from fear of being stopped (again) from fear of being caught.

From: Jimmy Stiefel <jimmy>
Subject: Behavioral Assessment Profiling

In your analysis of Behavioral Assessment, you highlight the advantages over other forms of profiling, and in general, I think your analysis is dead on (as usual). However, I was a little surprised that you took the bait and didn’t challenge the Logan program’s “success” record. (“Already at Logan Airport, the program has caught 20 people who were either in the country illegally or had outstanding warrants of one kind or another.”)

It seems to me that these “catches” could just as easily be collateral damage rather than program successes. They are false positives—people with skeletons in their closets that made them act hinky around security. But were they terrorists? Was any plot foiled? Are we any safer as a result? I think the answer is no.

Is an illegal alien a security threat on domestic flights? Is a person with an outstanding warrant a threat? Certainly computerized passenger screening systems are set up specifically to catch this sort of perceived threat. These are exactly the sort of people who wind up on “lists,” and you’ve discussed at length why those lists are not effective security measures. Catching a *threat* because he’s hinky is a good thing. Catching a non-threat because he is hinky is a false positive.
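[Ed. note: The false-positive argument is, at bottom, a base-rate problem. A minimal sketch, with purely hypothetical numbers chosen for illustration: even a fairly accurate behavioral screen, applied to a population containing almost no actual terrorists, flags overwhelmingly non-threats.]

```python
def positive_predictive_value(base_rate, hit_rate, false_alarm_rate):
    """Fraction of flagged passengers who are actual threats (Bayes' rule)."""
    true_pos = base_rate * hit_rate
    false_pos = (1 - base_rate) * false_alarm_rate
    return true_pos / (true_pos + false_pos)

# Assumed, illustrative numbers only: 1 terrorist per 10 million
# passengers, 90% detection rate, 1% false-alarm rate.
ppv = positive_predictive_value(1e-7, 0.9, 0.01)
print(f"Fraction of flags that are real threats: {ppv:.6%}")
```

Under these assumptions, fewer than one flag in a hundred thousand is a real threat; everyone else caught is, as the letter says, collateral damage.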

I’m disappointed you didn’t highlight this. Catching an illegal alien is an immigration problem, not a transportation security problem. In the absence of further details, these were program failures, not successes. People lost their freedom as a result of these failures.

To me, this is mission creep. It’s an erosion of our civil liberties. Remember, the purpose of airport security is to make travel safe, not to scrutinize the skeletons in the closet of every passenger. This is a hidden program cost, one that every citizen will bear.

From: Richard Barrell <rbarrell>
Subject: Israeli Airport Security

I feel you missed the most important part of the Israeli airport security screening technique. The key part is that the security screeners (attackers) work in teams, and suspected “defenders” will be questioned twice by different “screeners” who then compare notes.

This means the “defender” not only has to remember a complex story, but also has to undergo a second round of questions, whose answers are compared with those from the first.

This cross-check is an effective method of locating holes in a “defender’s” story, and it is also a very public process, increasing the pressure on would-be terrorists.

From: Paul Schumacher <psch>
Subject: re: Israeli Airport Security

This is going so far back in time that I do not remember the details. In the Army I was given training in how to evade an interrogator’s questions.

The trick is not to have built up a good cover story, but to adapt a real story to the occasion. Using the scenario of airport screening:

Q: Where are you going?
A: To Pakistan. (This is where the flight is going, so it’s obvious).

Q: Who do you know there?
A: George Hamilton (I simply change Osama bin Laden’s name to another’s).

Q: How did you meet him?
A: In basic training in the Army (an al Qaeda training camp for the Army of Allah).

Q: What were you doing there?
A: Learning how to be a soldier (one man’s terrorist is another man’s freedom fighter).

As you can see, I have told much of the “truth” from one perspective, but a total falsehood from another. By bending a pattern that exists in reality to form an alternative “truth,” it should be able to withstand considerable questioning.

From: “Charlie Brooks” <linux HBCS.Org>
Subject: Microwaving CDs

You wrote: “The best way to destroy CD-Rs is to microwave them on high for five seconds.”

I once held a party on the theme of “unwise microwave oven experiments,” because I had a working microwave nobody wanted. I asked everyone to bring something to nuke, and the creativity of my friends was rather awe-inspiring. Lots brought CDs.

I (and several of my guests, unfortunately) learned that you should *NOT* breathe the gases released from microwaved CDs!

Since the beam spreader of a microwave is a rather effective fan, you really can’t avoid exposure to the fumes if you are in an enclosed space—like your kitchen, or my back porch. A powerful exhaust fan that vents outside (not one of those crappy modern ones that blows back into the same room) might work if your microwave’s back vent is close to the fan intake.

You probably shouldn’t recommend microwaving CDs without mentioning that this will release nasty gases. The good news is that despite repeated exposure we all recovered after a couple of weeks.

From: David Jefferson <d_jefferson>
Subject: Electronic voting machines

In arguing that we should permit the source code for voting machines to remain secret, Jeremy Epstein wrote: “First, if we’re going to have voter-verified paper audit trails (or whatever synonym you choose), then it doesn’t matter what the software does. If it misbehaves, we can catch it in the recount.”

Unfortunately this is simply not true. There are MANY malicious code attacks that cannot be detected or corrected by any comparison between the VVPAT and the electronic ballot copies. For an easy example, malicious code that systematically and surreptitiously reveals your vote to the next voter, via some secret signal on the screen, cannot be detected using the VVPAT.

Or, for a more complex example, consider an attack that is executed at random, only a small percentage of the time, and which records a vote for Candidate A for president the first time you try to vote, no matter whom the voter intended to vote for; but then clearly displays the incorrect vote on the summary screen and on the paper ballot copy before the vote is committed. If the voter doesn’t notice the error, and simply casts the vote, then the attack succeeds and Candidate A gets a vote he should not have gotten, and the problem is undetectable because the paper trail and electronic records agree.

If the voter DOES notice it, voids the paper ballot, and goes back to correct the vote, the system records the vote properly the second time, on both the paper and electronic copies, and again the electronic and paper records agree, while the voter walks away thinking the original mistake was probably his. In any case, the whole thing happens in the privacy of the voting booth where no one else can see it, and even a suspicious voter cannot demonstrate the problem, because he cannot vote again and because the attack manifests only a small percentage of the time anyway.

While a voter-verified paper trail is vital, it is not a panacea; it simply cannot catch all attacks. There is no substitute for serious source code review, although even that is profoundly difficult, and may or may not detect malicious code even if it is present in the source code.
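[Ed. note: The attack described above can be sketched as a small simulation. The code and its parameters (attack rate, voter vigilance) are hypothetical and purely illustrative; the point is that because the machine misrecords the vote on *both* the paper and electronic records before the voter confirms, the two tallies always match and a recount reveals nothing.]

```python
import random

def vote(intended, attack_rate=0.02, voter_checks=0.5, rng=random):
    """Simulate one ballot on a machine running the attack described above.

    With probability attack_rate the machine records 'A' regardless of
    intent, displaying the wrong choice on both the summary screen and
    the paper record.  A vigilant voter (probability voter_checks)
    notices, voids the ballot, and revotes; the retry is recorded
    honestly.  Returns (electronic_record, paper_record).
    """
    if intended != "A" and rng.random() < attack_rate:
        if rng.random() < voter_checks:
            return intended, intended   # voter caught it; honest retry
        return "A", "A"                 # undetected flip; records still agree
    return intended, intended

rng = random.Random(1)
results = [vote("B", rng=rng) for _ in range(100_000)]
stolen = sum(1 for e, p in results if e == "A")
# Every electronic record matches its paper record, so a recount
# detects nothing, even though some votes were flipped.
assert all(e == p for e, p in results)
print(stolen, "votes flipped; recount discrepancy: 0")
```

In every run, the flipped votes and the honest votes alike show perfect paper/electronic agreement, which is exactly why the VVPAT cannot catch this class of attack.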

To: Thomas Stalzer <electroemporium>
Subject: Re: Electronic Voting Machines

You describe the Italian paper ballot system. Much of it is identical to paper ballots used in the US—if not now, then certainly 25 years ago, the last time I lived in a precinct that had paper ballots. There were separate ballots for different groups of offices, they were color-coded, etc. And yes, both parties participated in tallying the ballots—I was an observer for the party of my choice.

It’s important to realize, though, that this process has its own flaws and its own risks. Apart from the classics—ballot-box stuffing in the most literal sense, registration fraud, double voting, and the like—there are a few non-obvious risks. The first is the precise definition of what constitutes a valid ballot. I don’t recall North Carolina law on the subject (the state I was living in when I was an observer); in New York a few years earlier, the law required two lines, touching or crossing in the box, to select a particular candidate. This encompasses your traditional check mark or X; it also includes a host of other geometric patterns. Naturally, the law also covered things like voting for more than one candidate. It also discussed extraneous marks on the ballot, such as those outside any box. Query: what should be done with ballots that had *invalid* votes, such as a single line, in one box, but a valid vote mark in another box? I don’t recall what the law said, but the party instructions were clear: challenge any questionable ballot that appeared to be countable for your opponents, because they were sure to do the same to you. “Valid” or “invalid” isn’t nearly as clear-cut as one would like; each party tried for a local optimization. Presumably, it all balanced….

But there’s a more subtle issue, one I learned about when hearing the local politicians disagree with the suggestion that voting machines be used: paper ballots leak information. For example, I recall a race where you were supposed to vote for three of six candidates for town council. (The race was nominally non-partisan.) The pattern of which three were selected on any given ballot was very useful to the professional politicians in town. Lever voting machines don’t record that sort of thing; electronic machines may or may not. If you’re concerned with that sort of analysis, it’s a question worth asking. Note, too, that even if your local newspaper doesn’t report such things, ballots and detailed voting information are (in the US) public information; assorted political parties may be checking this on their own. You may or may not consider this to be a threat, but you should be aware of the information leakage.

The moral? Security isn’t any one thing, be it paper ballots, lever machines, voter-verifiable audit trails, or what have you. It’s a system property—and that includes the registration, voting, and tallying processes.

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Comments on CRYPTO-GRAM should be sent to <>. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <>.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.

Sidebar photo of Bruce Schneier by Joe MacInnis.