Blog: January 2005 Archives

TSA's Secure Flight

As I wrote previously, I am participating in a working group to study the security and privacy of Secure Flight, the U.S. government’s program to match airline passengers with a terrorist watch list. In the end, I signed the NDA allowing me access to SSI (Sensitive Security Information) documents, but managed to avoid filling out the paperwork for a SECRET security clearance.

Last week the group had its second meeting.

So far, I have four general conclusions. One, assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place. (And by this I mean the matching program, not any potential uses of commercial or other third-party data.)

Two, the security system surrounding Secure Flight is riddled with security holes. There are security problems with false IDs, ID verification, the ability to fly on someone else’s ticket, airline procedures, etc.

Three, the urge to use this system for other things will be irresistible. It’s just too easy to say: “As long as you’ve got this system that watches out for terrorists, how about also looking for this list of drug dealers…and by the way, we’ve got the Super Bowl to worry about too.” Once Secure Flight gets built, all it’ll take is a new law and we’ll have a nationwide security checkpoint system.

And four, a program of matching airline passengers with names on terrorism watch lists is not making us appreciably safer, and is a lousy way to spend our security dollars.

Unfortunately, Congress has mandated that Secure Flight be implemented, so it is unlikely that the program will be killed. And analyzing the effectiveness of the program in general, potential mission creep, and whether the general idea is a worthwhile one, is beyond the scope of our little group. In other words, my first conclusion is basically all that they’re interested in hearing.

But that means I can write about everything else.

To speak to my fourth conclusion: Imagine for a minute that Secure Flight is perfect. That is, we can ensure that no one can fly under a false identity, that the watch lists have perfect identity information, and that Secure Flight can perfectly determine if a passenger is on the watch list: no false positives and no false negatives. Even if we could do all that, Secure Flight wouldn’t be worth it.

Secure Flight is a passive system. It waits for the bad guys to buy an airplane ticket and try to board. If the bad guys don’t fly, it’s a waste of money. If the bad guys try to blow up shopping malls instead of airplanes, it’s a waste of money.

If I had some millions of dollars to spend on terrorism security, and I had a watch list of potential terrorists, I would spend that money investigating those people. I would try to determine whether or not they were a terrorism threat before they got to the airport, or even if they had no intention of visiting an airport. I would try to prevent their plot regardless of whether it involved airplanes. I would clear the innocent people, and I would go after the guilty. I wouldn’t build a complex computerized infrastructure and wait until one of them happened to wander into an airport. It just doesn’t make security sense.

That’s my usual metric when I think about a terrorism security measure: Would it be more effective than taking that money and funding intelligence, investigation, or emergency response—things that protect us regardless of what the terrorists are planning next? Money spent on security measures that only work against a particular terrorist tactic, forgetting that terrorists are adaptable, is largely wasted.

Posted on January 31, 2005 at 9:26 AM • 22 Comments

Iraqi Election Security

This is so ridiculous I have trouble believing it’s true:

Election security chiefs in Iraq will set up decoy polling centres in an attempt to outwit insurgents who have vowed to target voters with suicide bombs and mortar rounds on Sunday.

Everyone has to vote, right? This means one of two things will happen. One, everyone will know about the decoy sites, and the insurgents will know to avoid them. Or two, no one will know about the decoy sites, voters will flock to them, and it won’t matter to the insurgents that they are decoys.

Posted on January 30, 2005 at 2:00 AM • 17 Comments

PS2 Cheat Codes Hacked

From Adam Fields’ weblog:

Some guy tore apart his PS2 controller, connected it to the parallel port on his computer, and wrote a script to press a large number of button combinations. He used it to figure out all of the cheat codes for GTA San Andreas (including some not released by Rockstar, apparently).

http://games.slashdot.org/article.pl?sid=05/01/17/1411251

This is a great example of a “class break” in systems security—the creation of a tool means that this same technique can be easily used on all games, and game developers can no longer rely (if they did before) on the codes being secret because it’s hard to try them all.
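To make the class-break point concrete, here is a rough Python sketch of the enumeration idea (not the actual tool): the hardware layer and the detection step are stubbed out as hypothetical press_sequence() and cheat_activated() functions, since those depend on the wired-up controller and the game being tested.

```python
# A minimal sketch of the brute-force idea, with assumed helper names.
from itertools import product

BUTTONS = ["up", "down", "left", "right", "square", "triangle",
           "circle", "cross", "L1", "L2", "R1", "R2"]

def press_sequence(seq):
    """Stub for the hardware layer: send each button press to the console."""
    # In the real rig this would toggle parallel-port pins; here it does nothing.
    pass

def cheat_activated():
    """Stub for detecting a reaction from the game (on-screen text, sound, etc.)."""
    return False

def brute_force(length):
    """Try every button sequence of the given length and report any hits."""
    hits = []
    for seq in product(BUTTONS, repeat=length):
        press_sequence(seq)
        if cheat_activated():
            hits.append(seq)
    return hits

if __name__ == "__main__":
    # Even short codes add up fast: 12 buttons at length 6 is 12**6, about
    # 3 million sequences -- which is exactly why automating the presses
    # is the whole point.
    print(f"{len(BUTTONS) ** 6:,} sequences of length 6")
```

Once a tool like this exists, the per-game cost of finding every code drops to nearly nothing; that is what makes it a class break rather than a one-off hack.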

Posted on January 29, 2005 at 8:00 AM

Airplane Defense Security Trade-Off

It’s nice to see the government actually making security trade-offs. From the Associated Press:

Outfitting every U.S. commercial passenger plane with anti-missile systems would be a costly and impractical defense against terrorists armed with shoulder-fired rockets, according to a study released Tuesday.

Researchers said it could cost nearly $40 billion over 20 years to deploy defense technology on the country’s 6,800 passenger jets. By comparison, the federal government currently spends roughly $4.4 billion a year on all transportation security.

The Rand study also cited the unreliability of the system, and the problems of false alarms.

Identifying terrorism security countermeasures that aren’t worth it…maybe it’s the start of a trend.

Posted on January 26, 2005 at 8:42 AM • 19 Comments

Telephone Monitoring While on Hold

When we telephone a customer support line, we all hear the recording saying that the call may be monitored. What we don’t realize is that we may be monitored even when we’re on hold.

Monitoring is intended to track the performance of call center operators, but the professional snoops are inadvertently monitoring callers, too. Most callers do not realize that they may be taped even while they are on hold.

It is at these times that monitors hear husbands arguing with their wives, mothers yelling at their children, and dog owners throwing fits at disobedient pets, all when they think no one is listening. Most times, the only way a customer can avoid being recorded is to hang up.

There’s an easy defense for those in offices and with full-featured phones: the “mute” button. But people believe their calls are being monitored “for quality or training purposes,” and assume that it’s only the part of the call where they’re actually talking to someone. Even easy defenses don’t work if people don’t know that they have to implement them.

Posted on January 25, 2005 at 8:00 AM • 14 Comments

FBI Retires Carnivore

According to SecurityFocus:

FBI surveillance experts have put their once-controversial Carnivore Internet surveillance tool out to pasture, preferring instead to use commercial products to eavesdrop on network traffic, according to documents released Friday.

Of course, they’re not giving up on Internet surveillance. They’ve just realized that commercial tools are better, cheaper, or both.

Posted on January 24, 2005 at 8:00 AM • 9 Comments

American Airlines Data Collection

From BoingBoing:

Last week on a trip from London to the US, American Airlines demanded that I write out a list of the names and addresses of all the friends I would be staying with in the USA. They claimed that this was due to a TSA regulation, but refused to state which regulation required them to gather this information, nor what they would do with it once they’d gathered it. I raised a stink, and was eventually told that I wouldn’t have to give them the requested dossier because I was a Platinum AAdvantage Card holder (i.e., because I fly frequently with AA).

The whole story is worth reading. It’s hard to know what’s really going on, because there’s so much information I don’t have. But it’s chilling nonetheless.

Posted on January 20, 2005 at 9:28 AM • 25 Comments

DHS Biometric ID Cards

The Department of Homeland Security is considering a biometric identification card for transportation workers:

TWIC is a tamper-resistant credential that contains biometric information about the holder which renders the card useless to anyone other than the rightful owner. Using this biometric data, each transportation facility can verify the identity of a worker and help prevent unauthorized individuals from accessing secure areas. Currently, many transportation workers must carry a different identification card for each facility they access. A standard TWIC would improve the flow of commerce by eliminating the need for redundant credentials and streamlining the identity verification process.

I’ve written extensively about the uses and abuses of biometrics (Beyond Fear, pages 197-200). The short summary is that biometrics are great as a local authentication tool and terrible as an identification tool. For a whole bunch of reasons, this DHS project is a good use of biometrics.

Posted on January 19, 2005 at 8:55 AM • 10 Comments

Microsoft RC4 Flaw

One of the most important rules of stream ciphers is to never use the same keystream to encrypt two different documents. If someone does, you can break the encryption by XORing the two ciphertext streams together. The keystream drops out, and you end up with plaintext XORed with plaintext—and you can easily recover the two plaintexts using letter frequency analysis and other basic techniques.

It’s an amateur crypto mistake. The easy way to prevent this attack is to use a unique initialization vector (IV) in addition to the key whenever you encrypt a document.

Microsoft uses the RC4 stream cipher in both Word and Excel. And they make this mistake. Hongjun Wu has details (link is a PDF).

In this report, we point out a serious security flaw in Microsoft Word and Excel. The stream cipher RC4 [9] with key length up to 128 bits is used in Microsoft Word and Excel to protect the documents. But when an encrypted document gets modified and saved, the initialization vector remains the same and thus the same keystream generated from RC4 is applied to encrypt the different versions of that document. The consequence is disastrous since a lot of information of the document could be recovered easily.

This isn’t new. Microsoft made the same mistake in 1999 with RC4 in WinNT Syskey. Five years later, Microsoft has the same flaw in other products.
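Here is a minimal Python sketch of why keystream reuse is fatal, using textbook RC4 rather than Microsoft’s actual file format: encrypt two versions of a document under the same key and IV, XOR the two ciphertexts, and the keystream cancels out, leaving plaintext XORed with plaintext.

```python
# A sketch of the keystream-reuse attack with textbook RC4 (not Microsoft's code).

def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key-scheduling, then PRGA output XORed with the data."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # pseudo-random generation algorithm
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"same key, same IV"                    # the flaw: keystream is reused
v1 = b"Original version of the document."
v2 = b"Modified version of the document."

c1 = rc4(key, v1)
c2 = rc4(key, v2)

# XOR of the two ciphertexts: the keystream drops out entirely.
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in zip(v1, v2))
print(xored)   # plaintext XOR plaintext -- recoverable with frequency analysis
```

The defense is as simple as the rule it violates: derive a fresh keystream for every save, for example by mixing a unique IV into the key, so no two versions of a document are ever encrypted with the same RC4 output.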

Posted on January 18, 2005 at 9:00 AM • 23 Comments

Safecracking

Matt Blaze has written an excellent paper: “Safecracking for the computer scientist.”

It has completely pissed off the locksmithing community.

There is a reasonable debate to be had about secrecy versus full disclosure, but a lot of these comments are just mean. Blaze is not being dishonest. His results are not trivial. I believe that the physical security community has a lot to learn from the computer security community, and that the computer security community has a lot to learn from the physical security community. Blaze’s work in physical security has important lessons for computer security—and, as it turns out, physical security—notwithstanding these people’s attempt to trivialize it in their efforts to attack him.

Posted on January 14, 2005 at 8:18 AM • 11 Comments

Secure Flight Privacy/IT Working Group

I am participating in a working group to help evaluate the effectiveness and privacy implications of the TSA’s Secure Flight program. We’ve had one meeting so far, and it looks like it will be an interesting exercise.

For those who have not been following along, Secure Flight is the follow-on to CAPPS-I. (CAPPS stands for Computer Assisted Passenger Pre-Screening.) CAPPS-I has been in place since 1997, and is a simple system to match airplane passengers to a terrorist watch list. A follow-on system, CAPPS-II, was proposed last year. That complicated system would have given every traveler a risk score based on information in government and commercial databases. There was a huge public outcry over the invasiveness of the system, and it was cancelled over the summer. Secure Flight is the new follow-on system to CAPPS-I.

Many of us believe that Secure Flight is just CAPPS-II with a new name. I hope to learn whether or not that is true.

I hope to learn a lot of things about Secure Flight and airline passenger profiling in general, but I probably won’t be able to write about it. In order to be a member of this working group, I was required to apply for a U.S. government SECRET security clearance and sign an NDA, promising that I would not disclose something called “Sensitive Security Information.”

SSI is one of three new categories of secret information, all of which I think have no reason to exist. There is already a classification scheme—CONFIDENTIAL, SECRET, TOP SECRET, etc.—and information should either fit into that scheme or be public. A new scheme is just confusing. The NDA we were supposed to sign was very general, and included such provisions as allowing the government to conduct warrantless searches of our residences. (Two federal unions have threatened to sue the government over several provisions in that NDA, which applies to many DHS employees. And just recently, the DHS backed down.)

After push-back by myself and several others, we were given a much less onerous NDA to sign.

I am not happy about the secrecy surrounding the working group. NDAs and classified briefings raise serious ethical issues for government oversight committees. My suspicion is that I will be wowed with secret, unverifiable assertions that I will either have to accept or (more likely) question, but not be able to discuss with others. In general, secret deliberations favor the interests of those who impose the rules. They really run against the spirit of the Federal Advisory Committee Act (FACA).

Moreover, I’m not sure why this working group is not in violation of FACA. FACA is a 1972 law intended to govern how the Executive branch uses groups of advisors outside the federal government. Among other rules, it requires that advisory committees announce their meetings, hold them in public, and take minutes that are available to the public. The DHS was given a specific exemption from FACA when it was established: the Secretary of Homeland Security has the authority to exempt any advisory committee from FACA; the only requirement is that the Secretary publish notice of the committee in the Federal Register. I looked, and have not seen any such announcement.

Because of the NDA and the failure to follow FACA, I will not be able to fully exercise my First Amendment rights. That means that the government can stop me from saying things that may be important for the public to know. For example, if I learn that the old CAPPS program failed to identify actual terrorists, or that a lot of people who were not terrorists were wrongfully pulled off planes and the government tried to keep this quiet—I’m just making these up—I can’t tell you. The government could prosecute me under the NDA by claiming these facts are SSI, and the public would never learn them, because this committee has none of the open-meeting obligations that FACA committees do.

In other words, the secrecy of this committee could have a real impact on the public understanding of whether or not air passenger screening really works.

In any case, I hope I can help make Secure Flight an effective security tool. If the program continues, I hope I can help minimize its privacy invasions; if it is ineffective, I hope I can help kill it. I’m not optimistic, but I’m hopeful.

I’m not hopeful that you will ever learn the results of this working group. We’re preparing our report for the Aviation Security Advisory Committee, and I very much doubt that they will release the report to the public.

Original NDA

Story about unions objecting to the NDA

And a recent development that may or may not affect this group

Posted on January 13, 2005 at 9:08 AM • 19 Comments

British Pub Hours and Crime

The Economist website (only subscribers can read the article) has an article dated January 6 that illustrates nicely the interplay between security trade-offs and economic agendas.

In the 1990s, local councils were scratching around for ideas about how to revive Britain’s inner cities. Part of the problem was that the cities were dead after their few remaining high-street shops had shut in the evening. Bringing night-life back, it was felt, would bring back young people, and the cheerful social and economic activity they would attract would revive depressed urban areas. The “24-hour city” thus became the motto of every forward-thinking local authority.

For councils to fulfil their plans, Britain’s antiquated drinking laws needed to be liberalised. That has been happening, in stages. The liberalisation culminates in 24-hour drinking licences….

This has worked: “As an urban redevelopment policy, the liberalisation has been tremendously successful. Cities which once relied on a few desultory pubs for entertainment now have centres thumping with activity from early evening all through the night.”

On the other hand, the change comes with a cost. “That is probably why, when crime as a whole has fallen since the late 1990s, violent crime has gone up; and it is certainly why the police have joined the doctors in opposing the 24-hour licences.”

This is all perfectly reasonable. All security is a trade-off, and a community should be able to trade off the economic benefits of a revitalized urban center with the economic costs of an increased police force. Maybe they can issue 24-hour licenses to only a few pubs. Or maybe they can issue 22-hour licenses, or licenses for some other number of hours. Certainly there is a solution that balances the two issues.

But the organization that has to pay the security costs for the program (the police) is not the same as the organization that reaps the benefits (the local governments).

Over the past hundred years, central government’s thirst for power has weakened the local authorities. As a result, policing, which should be a local issue, is largely paid for by central government. So councils, who are largely responsible for licensing, do not pay for the negative consequences of liberalisation.

The result is that the local councils don’t care about the police costs, and consequently make bad security trade-offs.

Posted on January 12, 2005 at 9:01 AM • 31 Comments

Fingerprinting Students

A nascent security trend in the U.S. is tracking schoolchildren when they get on and off school buses.

Hoping to prevent the loss of a child through kidnapping or more innocent circumstances, a few schools have begun monitoring student arrivals and departures using technology similar to that used to track livestock and pallets of retail shipments.

A school district in Spring, Texas, is using computerized ID badges to record this information and wirelessly send it to police headquarters. Another school district, in Phoenix, is doing the same thing with fingerprint readers. The system is supposed to help prevent the loss of a child, whether through kidnapping or accident.

What’s going on here? Have these people lost their minds? Tracking kids as they get on and off school buses is a ridiculous idea. It’s expensive, invasive, and doesn’t increase security very much.

Security is always a trade-off. In Beyond Fear, I delineated a five-step process to evaluate security countermeasures. The idea is to be able to determine, rationally, whether a countermeasure is worth it. In the book, I applied the five-step process to everything from home burglar alarms to military action against terrorism. Let’s apply it in this case.

Step 1: What assets are you trying to protect? Children.

Step 2: What are the risks to these assets? Loss of the child, either due to kidnapping or accident. Child kidnapping is a serious problem in the U.S.; the odds of a child being abducted by a family member are one in 340, and by a non-family member one in 1,200 (per year). (These statistics are for 1999, and are from NISMART-2, U.S. Department of Justice. My guess is that the current rates in Spring, Texas, are much lower.) Very few of these kidnappings involve school buses, so it’s unclear how serious the specific risks being addressed here are.

Step 3: How well does the security solution mitigate those risks? Not very well.

Let’s imagine how this system might provide security in the event of a kidnapping. If a kidnapper—assume it’s someone the child knows—goes onto the school bus and takes the child off at the wrong stop, the system would record that. Otherwise—if the kidnapping took place either before the child got on the bus or after the child got off—the system wouldn’t record anything suspicious. Yes, it would tell investigators if the kidnapping happened before morning attendance and either before or after the school bus ride, but is that one piece of information worth this entire tracking system? I doubt it.

You could imagine a movie-plot scenario where this kind of tracking system could help the hero recover the kidnapped child, but it hardly seems useful in the general case.

Step 4: What other risks does the security solution cause? The additional risk is the data collected through constant surveillance. Where is this information collected? Who has access to it? How long is it stored? These are important security questions that get no mention.

Step 5: What costs and trade-offs does the security solution impose? There are two. The first is obvious: money. I don’t have exact figures, but it’s expensive to outfit every child with an ID card and every school bus with this system. The second cost is more intangible: a loss of privacy. We are raising children who think it normal that their daily movements are watched and recorded by the police. That feeling of privacy is not something we should give up lightly.

So, finally: is this system worth it? No. The security gained is not worth the money and privacy spent. If the goal is to make children safer, the money would be better spent elsewhere: guards at the schools, education programs for the children, etc.

If this system makes so little sense, why have at least two cities in the U.S. implemented it? The obvious answer is that the school districts didn’t think the problem through. Either they were seduced by the technology, or by the companies that built the system. But there’s another, more interesting, possibility.

In Beyond Fear, I talk about the notion of agenda. The five-step process is a subjective one, and should be evaluated from the point of view of the person making the trade-off decision. If you imagine that the school officials are making the trade-off, then the system suddenly makes sense.

If a kidnapping occurs on school property, the subsequent investigation could easily hurt school officials. They could even lose their jobs. If you view this security countermeasure as one protecting them just as much as it protects children, it suddenly makes more sense. The trade-off might not be worth it in general, but it’s worth it to them.

Kidnapping is a real problem, and countermeasures that help reduce the risk are a good thing. But remember that security is always a trade-off, and a good security system is one where the security benefits are worth the money, convenience, and liberties that are being given up. Quite simply, this system isn’t worth it.

Posted on January 11, 2005 at 9:49 AM • 31 Comments

Terrorism False Positives

Security systems fail in two different ways. The first is the obvious one: they fail to detect, stop, catch, or whatever, the bad guys. The second is more common, and often more important: they wrongly detect, stop, catch, or whatever, an innocent person. This story is from the New Zealand Herald:

A New Zealand resident who sent $5000 to his ill uncle in India had the money frozen for nearly a month because his name matched that of several men on a terrorist watch list.

Because there are far more innocent people than guilty ones, this second type of error is far more common than the first. Security is always a trade-off, and when you’re weighing the two kinds of errors against each other, you have to consider how often each occurs and what each one costs.
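A rough back-of-the-envelope calculation makes the base-rate problem concrete. The numbers below are illustrative assumptions of mine, not figures from the article:

```python
# Illustrative (assumed) numbers showing why false positives dominate when
# almost everyone being screened is innocent.

population = 1_000_000       # people screened against the watch list
true_bad_guys = 10           # actual terrorists in that population (assumed)
detection_rate = 0.99        # chance the system flags a real terrorist
false_positive_rate = 0.001  # chance it wrongly flags an innocent person

true_positives = true_bad_guys * detection_rate
false_positives = (population - true_bad_guys) * false_positive_rate

print(f"correctly flagged:  {true_positives:.0f}")     # about 10
print(f"innocents flagged:  {false_positives:.0f}")    # about 1,000
print("chance a flagged person is actually a terrorist: "
      f"{true_positives / (true_positives + false_positives):.1%}")  # about 1%
```

Even with a very accurate system, almost everyone it stops is innocent, simply because the innocent population is so much larger.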

Posted on January 8, 2005 at 8:00 AM • 6 Comments

Terrorists and Border ID Systems

This Washington Times article titled “Border Patrol hails new ID system” could have just as accurately been titled “No terrorists caught by new ID system.”

Border Patrol agents assigned to U.S. Customs and Border Protection (CBP) identified and arrested 23,502 persons with criminal records nationwide through a new biometric integrated fingerprint system during a three-month period beginning in September, CBP officials said yesterday.

Terrorism justifies the security expense, and it ends up being used for something else.

During the three-month period this year, the agents identified and detained 84 homicide suspects, 37 kidnapping suspects, 151 sexual assault suspects, 212 robbery suspects, 1,238 suspects for assaults of other types, and 2,630 suspects implicated in dangerous narcotics-related charges.

Posted on January 7, 2005 at 7:58 AM • 19 Comments

Linux Security

I’m a big fan of the Honeynet Project (and a member of their board of directors). They don’t have a security product; they do security research. Basically, they wire computers up with sensors, put them on the Internet, and watch hackers attack them.

They just released a report about the security of Linux:

Recent data from our honeynet sensor grid reveals that the average life expectancy to compromise for an unpatched Linux system has increased from 72 hours to 3 months. This means that an unpatched Linux system with commonly used configurations (such as server builds of RedHat 9.0 or Suse 6.2) has an online mean life expectancy of 3 months before being successfully compromised.

This is much greater than that of Windows systems, which have average life expectancies on the order of a few minutes.

It’s also important to remember that this paper focuses on vulnerable systems. The Honeynet researchers deployed almost 20 vulnerable systems to monitor hacker tactics, and found that no one was hacking the systems. That’s the real story: the hackers aren’t bothering with Linux. Two years ago, a vulnerable Linux system would be hacked in less than three days; now it takes three months.

Why? My guess is a combination of two reasons. One, Linux is that much more secure than Windows. Two, the bad guys are focusing on Windows—more bang for the buck.

See also here and here.

Posted on January 6, 2005 at 1:45 PM • 18 Comments

Altimeter Watches Now a Terrorism Threat

This story is so idiotic that I have trouble believing it’s true. According to MSNBC:

An advisory issued Monday by the Department of Homeland Security and the FBI urges the Transportation Security Administration to have airport screeners keep an eye out for wristwatches containing cigarette lighters or altimeters.

The notice says “recent intelligence suggests al-Qaida has expressed interest in obtaining wristwatches with a hidden butane-lighter function and Casio watches with an altimeter function. Casio watches have been extensively used by al-Qaida and associated organizations as timers for improvised explosive devices. The Casio brand is likely chosen due to its worldwide availability and inexpensive price.”

Clocks and watches definitely make good device timers for remotely triggered bombs. In this scenario, the person carrying the watch is an innocent. (Otherwise he wouldn’t need a remote triggering device; he could set the bomb off himself.) This implies that the bomb is stuffed inside the functional watch. But if you assume a bomb as small as the non-functioning space in a wristwatch can blow up an airplane, you’ve got problems far bigger than one particular brand of wristwatch. This story simply makes no sense.

And, like most of the random “alerts” from the DHS, it’s not based on any real facts:

The advisory notes that there is no specific information indicating any terrorist plans to use the devices, but it urges screeners to watch for them.

I wish the DHS were half as good at keeping people safe as they are at scaring people. (I’ve written more about that here.)

Posted on January 5, 2005 at 12:34 PM • 28 Comments

Shutting Down the GPS Network

More stupid security from our government. From an AP story:

President Bush has ordered plans for temporarily disabling the U.S. network of global positioning satellites during a national crisis to prevent terrorists from using the navigational technology, the White House said Wednesday.

During a national crisis, GPS technology will help the good guys far more than it will help the bad guys. Disabling the system will almost certainly do much more harm than good.

This reminds me of comments after the Madrid bombings that we should develop ways to shut down the cell phone network after a terrorist attack. (The Madrid bombs were detonated using cell phones, although not by calling cell phones attached to the bombs.) After a terrorist attack, cell phones are critical to both rescue workers and survivors.

All technology has good and bad uses—automobiles, telephones, cryptography, etc. For the most part, you have to accept the bad uses if you want the good uses. This is okay, because the good guys far outnumber the bad guys, and the good uses far outnumber the bad ones.

Posted on January 5, 2005 at 8:49 AM • 25 Comments

Illegal Aliens and Driver's Licenses

Has anyone heard of the Center for Advanced Studies in Science and Technology Policy? They released a statement saying that not issuing driver’s licenses to illegal aliens is bad for security. Their analysis is good, and worth reading:

As part of the legislative compromise to pass the intelligence reform bill signed into law by the President today, the administration and Congressional leaders have promised to attach to the first ‘must pass’ legislation of the new year a controversial provision that was rightly dropped from the intelligence reform bill—this provision would effectively prevent the states from issuing driver’s licenses to illegal aliens by requiring ‘legal presence’ status for holders of licenses to be used as ‘national ID.’

Although this provision is being touted by its supporters as a security measure, its implementation in practice will be to undermine national security because it ignores three widely-recognized principles of counter-terrorism security: the shrinking perimeter of defense; the need to allocate resources to more likely targets; and the economics of fraud.

First, the very fact that 13 million illegal aliens are already within our borders means that a perimeter-based defense is porous. The proposed policy would eliminate another opportunity to screen this large pool of people and to separate ‘otherwise law abiding’ illegal aliens from terrorists or criminals by confirming identity when licenses are issued or when such licenses are presented or used for identity screening at checkpoints.

Recognizing the porous nature of perimeter defense does not mean that border security should not be improved or that additional steps to prevent illegal immigration should not be taken; however, not recognizing its porous nature is unrealistic, counter to current trends in security practice, and undermines national security. Rather than excluding 13 million people already within our borders, we should encourage non-terrorist illegal aliens to participate in internal security screening systems.

This leads to the second point. Contrary to the argument made by its supporters that denying illegal aliens licenses would prevent terrorists from ‘melting’ into society, this legislation would guarantee a larger haystack in which terrorists can hide, thus making it more difficult for law enforcement to identify them. Counter-terrorism strategy is based on reducing the suspect population so that security resources can be focused on more likely suspects. Denying identity legitimacy to 13 million illegal aliens—the vast majority of whom are not terrorists or otherwise threats to national security—just increases the size of the suspect pool for law enforcement to have to sort through. Since law enforcement resources are already unable to effectively cope with the large illegal alien population, why further complicate their task?

Third, the proposed legislation would increase the incentives for fraud by greatly inflating the value of a driver’s license and by creating significant new demand for fraudulent licenses by making the driver’s license actual proof of citizenship or legal status. Arguments in support of the legislation are based in part on denying illegal aliens the de facto legitimacy that a driver’s license currently confers, yet the legislation would actually make such legitimacy a matter of law, thus increasing the demand for fraudulent licenses not only among those illegal aliens wishing to drive but among all 13 million who may now see it as a way to get jobs or otherwise prove their legitimate status.

If 13 million people living within our borders can’t drive, fly, travel on a train or bus, or otherwise participate in society without a driver’s license and they cannot get a legitimate one, then the market will supply them an illegal fraudulent one. State DMV bureaucracies, no matter how well-intentioned, do not have the resources, training, or skill to prevent fraud driven by this additional demand and no federal mandate will be able to prevent organized criminal elements from responding.

On the other hand, if illegal aliens are allowed to get legitimate licenses upon thorough vetting of their identity, then the only ones who will be trying to get fraudulent documents will be terrorists or criminals—who will face increased costs and more opportunities for mistakes if there is less overall demand—and law enforcement resources can be focused on these activities.

Fourteen states currently allow driver’s licenses to be obtained without showing ‘legal presence.’ These laws were enacted for public safety reasons—to ensure that drivers meet some standard to drive and to lower insurance premiums by decreasing the pool of unlicensed and uninsured drivers. In most cases, these laws were passed with the strong support of state law enforcement officials who recognized the advantages of being able to identify drivers and discourage unlicensed drivers from fleeing from minor traffic infractions or accidents because they were fearful of being caught without a license. The analogous arguments hold for national security—the more we can encourage otherwise law abiding people within our borders to participate in the system the easier it will be to identify those that pose a true threat.

There may be legitimate reasons for cracking down on illegal immigration, there may even be reasons to deny illegal aliens driver’s licenses, but counter-terrorism security is not one. This provision was appropriately dropped from the intelligence reform bill and it should not be resurrected in the 109th Congress.

Posted on January 4, 2005 at 8:00 AM

Easy-to-Remember PINs

The UK is switching to a “chip and pin” system for credit card transactions. It’s been happening slowly, but by January (I’m not sure if it is the beginning of January or the end), every UK credit card will be a smart card.

This kind of system already exists in France and elsewhere. The cards have embedded chips. When you want to make a purchase, you stick your card in a slot and type your four-digit PIN on a keypad. (Presumably they will never turn off the magnetic stripe and signature system required for U.S. cards.)

One consumer fear over this process is about what happens if you forget your PIN. To allay fears, credit card companies have been placing newspaper advertisements suggesting that people change their PINs to an easy-to-remember number:

Keep forgetting your PIN?
It’s easy to change with chip and PIN.
To something more memorable like a birthday or your lucky numbers.

Don’t the credit card companies have anyone working on security?
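Some rough arithmetic (my own illustration, not from the ad) shows how much a birthday shrinks an attacker’s search space:

```python
# How much does a "memorable" PIN help a thief who finds or steals the card?

random_pins = 10 ** 4          # all four-digit PINs: 0000 through 9999
birthday_pins = 366            # day-and-month of a birthday: at most 366 values

print(f"random PIN space:   {random_pins}")
print(f"birthday PIN space: {birthday_pins}")
print(f"reduction factor:   {random_pins / birthday_pins:.0f}x")

# Cards typically allow a handful of wrong guesses before locking; assume three.
print(f"guess-success odds, random PIN:   {3 / random_pins:.2%}")
print(f"guess-success odds, birthday PIN: {3 / birthday_pins:.2%}")
```

And that is the best case for the cardholder: a thief who knows the victim’s birthday, which is often printed on other cards in the same wallet, needs essentially one guess.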

The ad also goes on to say that you can change your PIN by phone, which has its own set of problems.

(I know that the cite I give doesn’t quote a primary source, but I also received the information from at least two readers, and one of them said that the advertisement was printed in the London Times.)

Posted on January 3, 2005 at 10:36 AM • 42 Comments
