Blog: January 2007 Archives

Cameras Protecting Other Cameras

There is a proposal in Scotland to protect automatic speed-trap cameras from vandals by monitoring them with other cameras.

Then, I suppose we need still other cameras to protect the camera-watching cameras.

I am reminded of a certain building corner in York. Centuries ago it was getting banged up by carts and whatnot, so the owners stuck a post in the ground a couple of feet away from the corner to protect it. Time passed, and the post itself became historically significant. So now there is another post a couple of feet away from the first one to protect it.

When will it end?

Posted on January 31, 2007 at 2:05 PM • 67 Comments

Real-ID: Costs and Benefits

The argument was so obvious it hardly needed repeating. Some thought we would all be safer—from terrorism, from crime, even from inconvenience—if we had a better ID card. A good, hard-to-forge national ID is a no-brainer (or so the argument goes), and it’s ridiculous that a modern country like the United States doesn’t have one.

Still, most Americans have been and continue to be opposed to a national ID card. Even just after 9/11, polls showed a bare majority (51%) in favor—and that quickly became a minority opinion again. As such, both political parties came out against the card, which meant that the only way it could become law was to sneak it through.

Republican Congressman F. James Sensenbrenner of Wisconsin did just that. In February 2005, he attached the Real ID Act to a defense appropriations bill. No one was willing to risk not supporting the troops by holding up the bill, and it became law. No hearings. No floor debate. With nary a whisper, the United States had a national ID.

By forcing all states to conform to common and more stringent rules for issuing driver’s licenses, the Real ID Act turns these licenses into a de facto national ID. It’s a massive, unfunded mandate imposed on the states, and—naturally—the states have resisted. The detailed rules and timetables are still being worked out by the Department of Homeland Security, and it’s the details that will determine exactly how expensive and onerous the program actually is.

It is against this backdrop that the National Governors Association, the National Conference of State Legislatures, and the American Association of Motor Vehicle Administrators together tried to estimate the cost of this initiative. “The Real ID Act: National Impact Analysis” is a methodical and detailed report, and everything after the executive summary is likely to bore anyone but the most dedicated bean counters. But rigor is important because states want to use this document to influence both the technical details and timetable of Real ID. The estimates are conservative, leaving no room for problems, delays, or unforeseen costs, and yet the total cost is $11 billion over the first five years of the program.

If anything, it’s surprisingly cheap: Only $37 each for an estimated 295 million people who would get a new ID under this program. But it’s still an enormous amount of money. The question to ask is, of course: Is the security benefit we all get worth the $11 billion price tag? We have a cost estimate; all we need now is a security estimate.
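A quick back-of-envelope check of that $37 figure, using only the two numbers quoted above (a minimal sketch, nothing authoritative):

    total_cost = 11e9       # the report's five-year cost estimate, in dollars
    cardholders = 295e6     # estimated number of people who would get a new ID
    print(f"${total_cost / cardholders:.2f} per person")   # roughly $37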

I’m going to take a crack at it.

When most people think of ID cards, they think of a small plastic card with their name and photograph. This isn’t wrong, but it’s only a small piece of any ID program. What starts out as a seemingly simple security device—a card that binds a photograph with a name—rapidly becomes a complex security system.

It doesn’t really matter how well a Real ID works when used by the hundreds of millions of honest people who would carry it. What matters is how the system might fail when used by someone intent on subverting that system: how it fails naturally, how it can be made to fail, and how failures might be exploited.

The first problem is the card itself. No matter how unforgeable we make it, it will be forged. We can raise the price of forgery, but we can’t make it impossible. Real IDs will be forged.

Even worse, people will get legitimate cards in fraudulent names. Two of the 9/11 terrorists had valid Virginia driver’s licenses in fake names. And even if we could guarantee that everyone who issued national ID cards couldn’t be bribed, cards are issued based on other identity documents—all of which are easier to forge.

And we can’t assume that everyone will always have a Real ID. Currently about 20% of all identity documents are lost per year. An entirely separate security system would have to be developed for people who lost their card, a system that itself would be susceptible to abuse.

Additionally, any ID system involves people: people who regularly make mistakes. We’ve all heard stories of bartenders falling for obviously fake IDs, or sloppy ID checks at airports and government buildings. It’s not simply a matter of training; checking IDs is a mind-numbingly boring task, one that is guaranteed to have failures. Biometrics such as thumbprints could help, but bring with them their own set of exploitable failure modes.

All of these problems demonstrate that identification checks based on Real ID won’t be nearly as secure as we might hope. But the main problem with any strong identification system is that it requires the existence of a database. In this case, it would have to be 50 linked databases of private and sensitive information on every American—one widely and instantaneously accessible from airline check-in stations, police cars, schools, and so on.

The security risks of this database are enormous. It would be a kludge of existing databases that are incompatible, full of erroneous data, and unreliable. Computer scientists don’t know how to keep a database of this magnitude secure, whether from outside hackers or the thousands of insiders authorized to access it.

But even if we could solve all these problems, and within the putative $11 billion budget, we still wouldn’t be getting very much security. A reliance on ID cards is based on a dangerous security myth, that if only we knew who everyone was, we could pick the bad guys out of the crowd.

In an ideal world, what we would want is some kind of ID that denoted intention. We’d want all terrorists to carry a card that said “evildoer” and everyone else to carry a card that said “honest person who won’t try to hijack or blow up anything.” Then security would be easy. We could just look at people’s IDs, and, if they were evildoers, we wouldn’t let them on the airplane or into the building.

This is, of course, ridiculous; so we rely on identity as a substitute. In theory, if we know who you are, and if we have enough information about you, we can somehow predict whether you’re likely to be an evildoer. But that’s almost as ridiculous.

Even worse, as soon as you divide people into two categories—more trusted and less trusted people—you create a third, and very dangerous, category: untrustworthy people whom we have no reason to mistrust. Oklahoma City bomber Timothy McVeigh; the Washington, DC, snipers; the London subway bombers; and many of the 9/11 terrorists had no previous links to terrorism. Evildoers can also steal the identity—and profile—of an honest person. Profiling can result in less security by giving certain people an easy way to skirt security.

There’s another, even more dangerous, failure mode for these systems: honest people who fit the evildoer profile. Because evildoers are so rare, almost everyone who fits the profile will turn out to be a false alarm. Think of all the problems with the government’s no-fly list. That list, which is what Real IDs will be checked against, not only wastes investigative resources that might be better spent elsewhere, but it also causes grave harm to those innocents who fit the profile.

Enough of terrorism; what about more mundane concerns like identity theft? Perversely, a hard-to-forge ID card can actually increase the risk of identity theft. A single ubiquitous ID card will be trusted more and used in more applications. Therefore, someone who does manage to forge one—or get one issued in someone else’s name—can commit much more fraud with it. A centralized ID system is a far greater security risk than a decentralized one with various organizations issuing ID cards according to their own rules for their own purposes.

Security is always a trade-off; it must be balanced with the cost. We all do this intuitively. Few of us walk around wearing bulletproof vests. It’s not because they’re ineffective, it’s because for most of us the trade-off isn’t worth it. It’s not worth the cost, the inconvenience, or the loss of fashion sense. If we were living in a war-torn country like Iraq, we might make a different trade-off.

Real ID is another lousy security trade-off. It’ll cost the United States at least $11 billion, and we won’t get much security in return. The report suggests a variety of measures designed to ease the financial burden on the states: extend compliance deadlines, allow manual verification systems, and so on. But what it doesn’t suggest is the simple change that would do the most good: scrap the Real ID program altogether. For the price, we’re not getting anywhere near the security we should.

This essay will appear in the March/April issue of the Bulletin of the Atomic Scientists.

EDITED TO ADD (1/30): There’s REAL-ID news this week. Maine became the first state to reject REAL-ID. This means that a Maine state driver’s license will not be recognized as valid for federal purposes, although I’m sure the Feds will back down over this. And other states will follow:

“As Maine goes, so goes the nation,” said Charlie Mitchell, director of the ACLU State Legislative Department. “Already bills have been filed in Montana, New Hampshire, New Mexico, Georgia and Washington, which would follow Maine’s lead in saying no to Real ID, with many more states on the verge of similar action. Across the nation, local lawmakers are rejecting the federal government’s demand that they curtail their constituents’ privacy through this giant unfunded boondoggle.”

More info on REAL-ID here.

EDITED TO ADD (1/31): More information on Montana. My guess is that Montana will become the second state to reject REAL-ID, and New Mexico will be the third.

Posted on January 30, 2007 at 6:33 AM • 94 Comments

Iraqi Gunmen Dressing Up in American Military Uniforms

I’ve previously written about how official uniforms are inherent authentication tokens, even though they shouldn’t be (see also this and this for some less deadly anecdotes).

Now we see this tactic being used in Baghdad:

The armored sport utility vehicles whisked into a government compound in the city of Karbala with speed and urgency, the way most Americans and foreign dignitaries travel along Iraq’s treacherous roads these days.

Iraqi guards at checkpoints waved them through Saturday afternoon because the men wore what appeared to be legitimate U.S. military uniforms and badges, and drove cars commonly used by foreigners, the provincial governor said.

Once inside, however, the men unleashed one of the deadliest and most brazen ambushes of U.S. forces in a secure, official area. Five American service members were killed in a hail of grenades and gunfire in a breach of security that Iraqi officials called unprecedented.

Uniforms are no substitute for real authentication. They’re just too easy to steal or forge.

Posted on January 29, 2007 at 1:37 PM • 43 Comments

Islam on Trial

“Prophetic Justice,” by Amy Waldman (The Atlantic Monthly, Oct 2006) is a fascinating article about terrorism trials in the U.S. where the prosecution attempts to prove that the defendant was planning on committing an act of terrorism. Very often, the trials hinge on different interpretations of Islam, Islamic scripture, and Islamic belief—and often we are essentially putting the religion on trial.

Reading it, I was struck by the eliminationist rhetoric coming out of the Christian Right in the U.S. today, and wondered how it would fare under the same level of scrutiny.

It’s a long article, but well worth reading. There are many problems with prosecuting people for thoughtcrimes, and the article discusses some of them.

Posted on January 29, 2007 at 6:55 AM • 118 Comments

Friday Squid Blogging: "Squid-Inspired Design"

From the University of Colorado: “Squid-inspired design could mean better handling of underwater vehicles”:

Inspired by the sleek and efficient propulsion of squid, jellyfish and other cephalopods, a University of Colorado at Boulder researcher has designed a new generation of compact vortex generators that could make it easier for scientists to maneuver and dock underwater vehicles at low speeds and with greater precision.

Another article here.

Posted on January 26, 2007 at 4:28 PM • 8 Comments

Blu-ray Cracked

The Blu-ray DRM system has been broken, although details are scant. It’s the same person who broke the HD DVD system last month. (Both use AACS.)

As I’ve written previously, both of these systems are supposed to be designed in such a way as to recover from hacks like this. We’re going to find out if the recovery feature works.

Blu-ray and HD DVD both allow for decryption keys to be updated in reaction to attacks, for example by making it impossible to play high-definition movies via playback software known to be weak or flawed. So muslix64’s work has effectively sparked off a cat-and-mouse game between hackers and the entertainment industry, where consumers are likely to face compatibility problems while footing the bill for the entertainment industry’s insistence on pushing ultimately flawed DRM technology on an unwilling public.

EDITED TO ADD (1/29): You should read this seven-part series on the topic.

Posted on January 26, 2007 at 12:47 PM • 26 Comments

On the "War on Terror" Rhetoric

Echoing what I said in my previous post, Sir Ken Macdonald—the UK’s “Director of Public Prosecutions”—has spoken out against the “war on terror”:

He said: “London is not a battlefield. Those innocents who were murdered on July 7 2005 were not victims of war. And the men who killed them were not, as in their vanity they claimed on their ludicrous videos, ‘soldiers’. They were deluded, narcissistic inadequates. They were criminals. They were fantasists. We need to be very clear about this. On the streets of London, there is no such thing as a ‘war on terror’, just as there can be no such thing as a ‘war on drugs’.

“The fight against terrorism on the streets of Britain is not a war. It is the prevention of crime, the enforcement of our laws and the winning of justice for those damaged by their infringement.”

Sir Ken, head of the Crown Prosecution Service, told members of the Criminal Bar Association it should be an article of faith that crimes of terrorism are dealt with by criminal justice and that a “culture of legislative restraint in the area of terrorist crime is central to the existence of an efficient and human rights compatible process”.

He said: “We wouldn’t get far in promoting a civilising culture of respect for rights amongst and between citizens if we set about undermining fair trials in the simple pursuit of greater numbers of inevitably less safe convictions. On the contrary, it is obvious that the process of winning convictions ought to be in keeping with a consensual rule of law and not detached from it. Otherwise we sacrifice fundamental values critical to the maintenance of the rule of law – upon which everything else depends.”

Exactly. This is not a job for the military, it’s a job for the police.

Posted on January 26, 2007 at 6:56 AM • 51 Comments

SAS Troops Stationed in London

British special forces are now stationed in London:

An SAS unit is now for the first time permanently based in London on 24-hour standby for counter-terrorist operations, The Times has learnt.

The basing of a unit from the elite special forces regiment “in the metropolitan area” is intended to provide the police with a combat-proven ability to deal with armed terrorists in the capital.

The small unit also includes surveillance specialists and bomb-disposal experts.

Although the Metropolitan Police has its own substantial firearms capability, the fatal shooting of Jean Charles de Menezes, the Brazilian electrician who was mistakenly identified as a terrorist bomber on the run, has underlined the need to have military expertise on tap.

While I agree that the British police completely screwed up the Menezes shooting, I’m not at all convinced the SAS can do better. The police are trained to work within a lawful society; military units are primarily trained for military combat operations. Which group do you think will be more restrained?

This kind of thing is a result of the “war on terror” rhetoric. We don’t need military operations, we need police protection.

I think people have been watching too many seasons of 24.

Posted on January 25, 2007 at 3:34 PM • 67 Comments

In Praise of Security Theater

While visiting some friends and their new baby in the hospital last week, I noticed an interesting bit of security. To prevent infant abduction, all babies had RFID tags attached to their ankles by a bracelet. There are sensors on the doors to the maternity ward, and if a baby passes through, an alarm goes off.

Infant abduction is rare, but still a risk. In the last 22 years, about 233 such abductions have occurred in the United States. About 4 million babies are born each year, which means that a baby has a 1-in-375,000 chance of being abducted. Compare this with the infant mortality rate in the U.S.—one in 145—and it becomes clear where the real risks are.
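Here is a minimal sketch of that arithmetic, using only the figures quoted above (the inputs are rounded, so treat the output as rough):

    abductions_per_year = 233 / 22          # about 233 abductions over 22 years
    births_per_year = 4_000_000             # roughly 4 million U.S. births a year
    abduction_odds = births_per_year / abductions_per_year
    print(f"abduction risk: about 1 in {abduction_odds:,.0f}")               # ~1 in 378,000
    print(f"infant mortality is ~{abduction_odds / 145:.0f}x more likely")   # vs. the 1-in-145 figure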

And the 1-in-375,000 chance is not today’s risk. Infant abduction rates have plummeted in recent years, mostly due to education programs at hospitals.

So why are hospitals bothering with RFID bracelets? I think they’re primarily to reassure the mothers. Many times during my friends’ stay at the hospital the doctors had to take the baby away for this or that test. Millions of years of evolution have forged a strong bond between new parents and new baby; the RFID bracelets are a low-cost way to ensure that the parents are more relaxed when their baby is out of their sight.

Security is both a reality and a feeling. The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We know the infant abduction rates and how well the bracelets reduce those rates. We also know the cost of the bracelets, and can thus calculate whether they’re a cost-effective security measure or not. But security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: You can be secure even though you don’t feel secure, and you can feel secure even though you’re not really secure.

The RFID bracelets are what I’ve come to call security theater: security primarily designed to make you feel more secure. I’ve regularly maligned security theater as a waste, but it’s not always, and not entirely, so.

It’s only a waste if you consider the reality of security exclusively. There are times when people feel less secure than they actually are. In those cases—like with mothers and the threat of baby abduction—a palliative countermeasure that primarily increases the feeling of security is just what the doctor ordered.

Tamper-resistant packaging for over-the-counter drugs started to appear in the 1980s, in response to some highly publicized poisonings. As a countermeasure, it’s largely security theater. It’s easy to poison many foods and over-the-counter medicines right through the seal—with a syringe, for example—or to open and replace the seal well enough that an unwary consumer won’t detect it. But in the 1980s, there was a widespread fear of random poisonings in over-the-counter medicines, and tamper-resistant packaging brought people’s perceptions of the risk more in line with the actual risk: minimal.

Much of the post-9/11 security can be explained by this as well. I’ve often talked about the National Guard troops in airports right after the terrorist attacks, and the fact that they had no bullets in their guns. As a security countermeasure, it made little sense for them to be there. They didn’t have the training necessary to improve security at the checkpoints, or even to be another useful pair of eyes. But to reassure a jittery public that it’s OK to fly, it was probably the right thing to do.

Security theater also addresses the ancillary risk of lawsuits. Lawsuits are ultimately decided by juries, or settled because of the threat of jury trial, and juries are going to decide cases based on their feelings as well as the facts. It’s not enough for a hospital to point to infant abduction rates and rightly claim that RFID bracelets aren’t worth it; the other side is going to put a weeping mother on the stand and make an emotional argument. In these cases, security theater provides real security against the legal threat.

Like real security, security theater has a cost. It can cost money, time, concentration, freedoms and so on. It can come at the cost of reducing the things we can do. Most of the time security theater is a bad trade-off, because the costs far outweigh the benefits. But there are instances when a little bit of security theater makes sense.

We make smart security trade-offs—and by this I mean trade-offs for genuine security—when our feeling of security closely matches the reality. When the two are out of alignment, we get security wrong. Security theater is no substitute for security reality, but, used correctly, security theater can be a way of raising our feeling of security so that it more closely matches the reality of security. It makes us feel more secure handing our babies off to doctors and nurses, buying over-the-counter medicines, and flying on airplanes—closer to how secure we should feel if we had all the facts and did the math correctly.

Of course, too much security theater and our feeling of security becomes greater than the reality, which is also bad. And others—politicians, corporations and so on—can use security theater to make us feel more secure without doing the hard work of actually making us secure. That’s the usual way security theater is used, and why I so often malign it.

But to write off security theater completely is to ignore the feeling of security. And as long as people are involved with security trade-offs, that’s never going to work.

This essay appeared on Wired.com, and is dedicated to my new godson, Nicholas Quillen Perry.

EDITED TO ADD: This essay has been translated into Portuguese.

Posted on January 25, 2007 at 5:50 AM • 78 Comments

NSA Hiring Data Miners

Certainly looks that way:

The Algorithm Developer will work with massive amounts of inter-related data and develop and implement algorithms to search, sort and find patterns and hidden relationships in the data. The preferred candidate would be required to be able to work closely with Analysts to develop Rapid Operational Prototypes. The candidate would have the availability of existing algorithms as a model to begin.

Posted on January 24, 2007 at 2:57 PM • 21 Comments

Kansas City Loses IRS Tapes

Second in our series of stupid comments to the press, here’s Kansas City’s assistant city manager commenting on the fact that they lost 26 computer tapes containing personal information:

“It’s not a situation that if you had a laptop you could access,” Noll said. “You would need some specialized equipment and some specialized knowledge in order to read these tapes.”

While you may be concerned the missing tapes contain your personal information, Cindy Richey, a financial planner, said don’t be too alarmed.

“I think people might be surprised at how much of that is already floating around out there,” Richey said.

Got that? Don’t worry because 1) someone would need a tape drive to read those tapes, and 2) your personal information is all over the net anyway.

Posted on January 24, 2007 at 1:04 PM • 25 Comments

Huge Online Bank Heist

Wow:

Swedish bank Nordea has told ZDNet UK that it has been stung for between seven and eight million Swedish krona—up to £580,000—in what security company McAfee is describing as the “biggest ever” online bank heist.

Over the last 15 months, Nordea customers have been targeted by emails containing a tailormade Trojan, said the bank.

Nordea believes that 250 customers have been affected by the fraud, after falling victim to phishing emails containing the Trojan. According to McAfee, Swedish police believe Russian organised criminals are behind the attacks. Currently, 121 people are suspected of being involved.

This is my favorite line:

Ehlin blamed successful social engineering for the heist, rather than any deficiencies in Nordea security procedures.

Um…hello? Are you an idiot, or what?

Posted on January 23, 2007 at 12:54 PM • 106 Comments

Debating Full Disclosure

Full disclosure—the practice of making the details of security vulnerabilities public—is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See The Vulnerability Disclosure Game: Are We More Secure?). The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.

But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

To understand why the second assumption isn’t true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you—the user—much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.

Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.

So a bunch of software companies, and some security researchers, banded together and invented “responsible disclosure” (See “The Chilling Effect”). The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

The moral here doesn’t just apply to software; it’s very general. Public scrutiny is how security improves, whether we’re talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us—unless, of course, they knew about it beforehand—but most of the time the benefits far outweigh the disadvantages.

Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn’t improve security; it stifles it.

I’d rather have as much information as I can to make an informed decision about security, whether it’s a buying decision about a software product or an election decision about two political parties. I’d rather have the information I need to pressure vendors to improve security.

I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.

This essay originally appeared on CSOOnline, as part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities, and Mark Miller of Microsoft wrote in favor of responsible disclosure. These are on-line-only sidebars to a very interesting article in CSO Magazine, “The Chilling Effect,” about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:

“Laws say you can’t access computers without permission,” she [attorney Jennifer Granick] explains. “Permission on a website is implied. So far, we’ve relied on that. The Internet couldn’t work if you had to get permission every time you wanted to access something. But what if you’re using a website in a way that’s possible but that the owner didn’t intend? The question is whether the law prohibits you from exploring all the ways a website works,” including through vulnerabilities.

All the links are worth reading in full.

A Simplified Chinese translation by Xin LI is available on Delphij’s Chaos.

Posted on January 23, 2007 at 6:45 AM • 36 Comments

RFID Tattoos

Great idea for livestock. Dumb idea for soldiers:

The ink also could be used to track and rescue soldiers, Pydynowski said.

“It could help identify friends or foes, prevent friendly fire, and help save soldiers’ lives,” he said. “It’s a very scary proposition when you’re dealing with humans, but with military personnel, we’re talking about saving soldiers’ lives and it may be something worthwhile.”

Posted on January 22, 2007 at 12:27 PM • 42 Comments

"Clear" Registered Traveller Program

CLEAR, a private service that prescreens travelers for a $100 annual fee, has come to Kennedy International Airport. To benefit from the Clear Registered Traveler program, which is run by Verified Identity Pass, a person must fill out an application, let the service capture his fingerprints and iris pattern and present two forms of identification. If the traveler passes a federal background check, he will be given a card that allows him to pass quickly through airport security.

Sounds great, but it’s actually two ideas rolled into one: one clever and one very stupid.

The clever idea is allowing people to pay for better service. Clear has been in operation at the Orlando International Airport since July 2005, and members have passed through security checkpoints faster simply because they are segregated from less experienced fliers who don’t know the drill.

Now, at Kennedy and other airports, Clear is purchasing and installing federally approved technology that will further speed up the screening process: scanners that will eliminate the need for cardholders to remove their shoes, and explosives detection machines that will eliminate the need for them to remove their coats and jackets. There are also Clear employees at the checkpoints who, although they can’t screen cardholders, can guide members through the security process. Clear has not yet paid airports for an extra security lane or the Transportation Security Administration for extra screening personnel, but both of those enhancements are on the table if enough people sign up.

I fly more than 200,000 miles per year and would gladly pay $100 a year to get through airport security faster.

But the stupid idea is the background check. When first conceived, traveler programs focused on prescreening. Pre-approved travelers would pass through security checkpoints with less screening, and resources would be focused on everyone else. Sounds reasonable, but it would leave us all less safe.

Background checks are based on the dangerous myth that if only we could identify everyone, we could somehow pick the terrorists out of the crowd. Unfortunately, there isn’t any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn’t even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.

And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.

The truth is that whenever you create two paths through security—a high-security path and a low-security path—you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.

I think of Clear as a $100 service that tells terrorists if the F.B.I. is on to them or not. Why in the world would we provide terrorists with this ability?

We don’t have to. Clear cardholders are not scrutinized less when they go through checkpoints, they’re scrutinized more efficiently. So why not get rid of the background checks altogether? We should all be able to walk into the airport, pay $10, and use the Clear lanes when it’s worth it to us.

This essay originally appeared in The New York Times.

I’ve already written about trusted traveller programs, and have also written about Verified Identity Card, Inc., the company that runs Clear. Note that these two essays were from 2004. This is the Clear website, and this is the website for Verified Identity Pass, Inc.

Posted on January 22, 2007 at 7:11 AM • 55 Comments

Sending Photos to 911 Operators

On Wednesday, Mayor Bloomberg announced that New York will be the first city with 911 call centers able to receive images and videos from cell phones and computers. If you witness a crime, you can not only call in—you can send in a picture or video as well.

This is a great idea that can make us all safer. Often the biggest problem a 911 operator has is getting enough good information from the caller. Sometimes the caller is emotionally distraught. Sometimes there’s confusion and background noise. Sometimes there’s a language barrier. Giving callers the opportunity to use all the communications tools at their disposal will help operators dispatch the right help faster.

Still images and videos can also help identify and prosecute criminals. Memories are notoriously inaccurate. Photos aren’t perfect, but they provide a different sort of evidence—one that, with the right safeguards, can be used in court.

The worry is that New York will become a city of amateur sleuths and snitches, turning each other in to settle personal scores or because of cultural misunderstandings. But the 911 service has long avoided such hazards. Falsely reporting a crime is itself a serious crime, which discourages people from using 911 for anything other than a true emergency.

Since 1968, the 911 system has evolved smartly with the times. Calls are now automatically recorded. Callers are now automatically located by phone number or cell phone location.

Bloomberg’s plan is the next logical evolution—one that all of us should welcome. Smile, suspected criminals: you’re on candid camphone.

This essay appeared today in The New York Daily News.

Another blog comments.

Posted on January 19, 2007 at 12:22 PM • 38 Comments

No-Fly List to Be Scrubbed

After over five years of harassing innocents and not catching any terrorists, the no-fly list is finally being checked for accuracy, and probably cut in half.

Yes, it’s great to see that even the threat of oversight by a Democratic Congress is enough to get these things done, but it’s nowhere near enough.

The no-fly list doesn’t work. And, of course, you can easily bypass it. You can 1) print a boarding pass under an assumed name or buy a ticket under an assumed name, or 2) fly without ID. In fact, the whole notion of checking ID as a security measure is fraught with problems. And the list itself is just awful.

My favorite sound bite:

Imagine a list of suspected terrorists so dangerous that we can’t ever let them fly, yet so innocent that we can’t arrest them – even under the draconian provisions of the Patriot Act.

Even with a better list, it’s a waste of money.

Posted on January 19, 2007 at 7:14 AM • 40 Comments

Information Security and Externalities

Information insecurity is costing us billions. There are many different ways in which we pay for information insecurity. We pay for it in theft, such as information theft, financial theft and theft of service. We pay for it in productivity loss, both when networks stop functioning and in the dozens of minor security inconveniences we all have to endure on a daily basis. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for the lack of security, year after year.

Fundamentally, the issue is insecure software. It is a result of bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the myriad effects of insecure software. Unfortunately, the money spent does not improve the security of that software. We are paying to mitigate the risk rather than fix the problem.

The only way to fix the problem is for vendors to improve their software. They need to design security into their products from the start and not as an add-on feature. Software vendors also need to institute good security practices and improve the overall quality of their products. But they will not do this until it is in their financial best interests to do so. And so far, it is not.

The reason is easy to explain. In a capitalist society, businesses are profit-making ventures, so they make decisions based on both short- and long-term profitability. This holds true for decisions about product features and sale prices, but it also holds true for software. Vendors try to balance the costs of more secure software—extra developers, fewer features, longer time to market—against the costs of insecure software: expense to patch, occasional bad press, potential loss of sales.

So far, so good. But what the vendors do not look at is the total costs of insecure software; they only look at what insecure software costs them. And because of that, they miss a lot of the costs: all the money we, the software product buyers, are spending on security. In economics, this is known as an externality: the cost of a decision that is borne by people other than those taking the decision.

Normally, you would expect users to respond by favouring secure products over insecure products—after all, users are also making their buying decisions based on the same capitalist model. Unfortunately, that is not generally possible. In some cases software monopolies limit the available product choice; in other cases, the ‘lock-in effect’ created by proprietary file formats or existing infrastructure or compatibility requirements makes it harder to switch; and in still other cases, none of the competing companies have made security a differentiating characteristic. In all cases, it is hard for an average buyer to distinguish a truly secure product from an insecure product with a ‘trust us’ marketing campaign.

Because of all these factors, there are no real consequences to the vendors for having insecure or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality. The result is what we have all witnessed: insecure software. Companies find that it is cheaper to weather the occasional press storm, spend money on PR campaigns touting good security and fix public problems after the fact, than to design security in from the beginning.

And so the externality remains…

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is one way to make it in those organisations’ best interests. If end users could sue software manufacturers for product defects, then the cost of those defects to the software manufacturers would rise. Manufacturers would then pay the true economic cost for poor software, and not just a piece of it. So when they balance the cost of making their software secure versus the cost of leaving their software insecure, there would be more costs on the latter side. This would provide an incentive for them to make their software more secure.

Basically, we have to tweak the risk equation in such a way that the Chief Executive Officer (CEO) of a company cares about actually fixing the problem—and putting pressure on the balance sheet is the best way to do that. Security is risk management; liability fiddles with the risk equation.

Clearly, liability is not all or nothing. There are many parties involved in a typical software attack. The list includes:

  • the company that sold the software with the vulnerability in the first place
  • the person who wrote the attack tool
  • the attacker himself, who used the tool to break into a network
  • and finally, the owner of the network, who was entrusted with defending that network.

100% of the liability should not fall on the shoulders of the software vendor, just as 100% should not fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

Certainly, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: costs of third-party security products, costs of consultants and security services companies, direct and indirect costs of losses. But as long as one is going to pay anyway, it would be better to pay to fix the problem. Forcing the software vendor to pay to fix the problem and then passing those costs on to users means that the actual problem might get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature, without any regard to security. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they are entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security is not a technological problem. It is an economics problem. And the way to improve information security is to fix the economics problem. If this is done, the right technological solutions will follow, and vendors will happily implement them. Fail to solve the economics problem, and vendors will not bother implementing or researching any security technologies, regardless of how effective they are.

This essay previously appeared in the European Network and Information Security Agency quarterly newsletter, and is an update to a 2004 essay I wrote for Computerworld.

Posted on January 18, 2007 at 7:04 AM • 65 Comments

FedEx Refuses to Ship Empty Containers

Security theater at its finest:

Me [going into post-9/11, TSA-style super-dumbfounded mode]: So what you’re saying is you can’t ship any sort of containers, even if they’re empty? You know that we originally ordered these empty cans and jars from a company, and *they* shipped them to *us*.

FedEx guy: They must have used a different vendor [“vendor”? I can’t remember, some word like that, like a “service”].

Which I imagine he said because he couldn’t bring himself to say, “It’s the *words* that are *on* the containers that are dangerous”—even after I had opened them all and demonstrated the utter harmlessness/emptiness of the containers themselves.

Posted on January 17, 2007 at 1:35 PM • 64 Comments

Do Terrorists Lie?

Terrorists might bomb airplanes, take and kill hostages, and otherwise terrorize innocents. But there’s one thing they just won’t do: lie on government forms. And that’s why the State of Ohio requires applicants for certain licenses (including private pilot licenses) to certify that they’re not terrorists. Because if we can’t lock them up long enough for terrorism, we’ve got the additional charge of lying on a government form to throw at them.

Okay, it’s actually slightly less silly than that. You have to certify that you are not a member of, a funder of, a solicitor for, or a hirer of members of any of these organizations, which someone—presumably the Department of Homeland Security—has decided are terrorist organizations.

The Aircraft Owners and Pilots Association is pissed off, as they should well be.

More security theater.

I assume Ohio isn’t the only state doing this. Does anyone know anything about other states?

EDITED TO ADD (1/18): Here’s a Pennsylvania application for a license to carry firearms that asks: “Is your character and reputation such that you would be likely to act in a manner dangerous to public safety?” I agree that Pennsylvania shouldn’t issue carry permits to people for whom this is true, but I’m not sure that asking them is the best way to find out.

Posted on January 17, 2007 at 7:34 AM • 87 Comments

Architecture and Airport Security

Good essay by Matt Blaze:

Somehow, for all the attention to minutiae in the guidelines, everything ends up just slightly wrong by the time it gets put together at an airport. Even if we accept some form of passenger screening as a necessary evil these days, today’s checkpoints seem like case studies in basic usability failure designed to inflict maximum frustration on everyone involved. The tables aren’t quite at the right height to smoothly enter the X-ray machines, bins slide off the edges of tables, there’s never enough space or seating for putting shoes back on as you leave the screening area, basic instructions have to be yelled across crowded hallways. According to the TSA’s manual, there are four models of standard approved X-ray machines, from two different manufacturers. All four have slightly different heights, and all are different from the heights of the standard approved tables. Do the people setting this stuff up ever actually fly? And if they can’t even get something as simple as the furniture right, how confident should we be in the less visible but more critical parts of the system that we don’t see every time we fly?

Yes, Matt Blaze now has a blog. See also his essay on making your own fake boarding pass.

Posted on January 12, 2007 at 7:08 AM • 41 Comments

Wholesale Surveillance

I had an op-ed published in the Arizona Star today:

Technology is fundamentally changing the nature of surveillance. Years ago, surveillance meant trench-coated detectives following people down streets. It was laborious and expensive and was used only when there was reasonable suspicion of a crime. Modern surveillance is the policeman with a license-plate scanner, or even a remote license-plate scanner mounted on a traffic light and a policeman sitting at a computer in the station.

It’s the same, but it’s completely different. It’s wholesale surveillance. And it disrupts the balance between the powers of the police and the rights of the people.

The news hook I used was this article, about the police testing a vehicle-mounted automatic license plate scanner. Unfortunately, I got the police department wrong. It’s the Arizona State Police, not the Tucson Police.

Posted on January 11, 2007 at 1:00 PM • 34 Comments

Radio Transmitters Found in Canadian Coins

Radio transmitters have been found in Canadian coins:

Canadian coins containing tiny transmitters have mysteriously turned up in the pockets of at least three American contractors who visited Canada, says a branch of the U.S. Defense Department.

Security experts believe the miniature devices could be used to track the movements of defence industry personnel dealing in sensitive military technology.

Sounds implausible, really. There are far easier ways to track someone than to give him something he’s going to give away the next time he buys a cup of coffee. Like, maybe, by his cell phone.

And then we have this:

A report that some Canadian coins have been compromised by secretly embedded spy transmitters is overblown, according to a U.S. official familiar with the case.

“There is no story there,” the official, who asked not to be named, told The Globe and Mail.

He said that while some odd-looking Canadian coins briefly triggered suspicions in the United States, the fears proved groundless: “We have no evidence to indicate anything connected with these coins poses a risk or danger.”

Take your pick. Either the original story was overblown, or those involved are trying to spin the news to cover their tracks. We definitely don’t have very many facts here.

EDITED TO ADD (1/18): The U.S. retracts the story.

Posted on January 11, 2007 at 12:07 PM • 41 Comments

Choosing Secure Passwords

Ever since I wrote about the 34,000 MySpace passwords I analyzed, people have been asking how to choose secure passwords.

My piece aside, there’s been a lot written on this topic over the years—both serious and humorous—but most of it seems to be based on anecdotal suggestions rather than actual analytic evidence. What follows is some serious advice.

The attack I’m evaluating against is an offline password-guessing attack. This attack assumes that the attacker either has a copy of your encrypted document, or a server’s encrypted password file, and can try passwords as fast as he can. There are instances where this attack doesn’t make sense. ATM cards, for example, are secure even though they only have a four-digit PIN, because you can’t do offline password guessing. And the police are more likely to get a warrant for your Hotmail account than to bother trying to crack your e-mail password. Your encryption program’s key-escrow system is almost certainly more vulnerable than your password, as is any “secret question” you’ve set up in case you forget your password.

Offline password guessers have gotten both fast and smart. AccessData sells Password Recovery Toolkit, or PRTK. Depending on the software it’s attacking, PRTK can test up to hundreds of thousands of passwords per second, and it tests more common passwords sooner than obscure ones.

So the security of your password depends on two things: any details of the software that slow down password guessing, and in what order programs like PRTK guess different passwords.

Some software includes routines deliberately designed to slow down password guessing. Good encryption software doesn’t use your password as the encryption key; there’s a process that converts your password into the encryption key. And the software can make this process as slow as it wants.

The results are all over the map. Microsoft Office, for example, has a simple password-to-key conversion, so PRTK can test 350,000 Microsoft Word passwords per second on a 3-GHz Pentium 4, which is a reasonably current benchmark computer. WinZip used to be even worse—well over a million guesses per second for version 7.0—but with version 9.0, the cryptosystem’s ramp-up function has been substantially increased: PRTK can only test 900 passwords per second. PGP also makes things deliberately hard for programs like PRTK, also only allowing about 900 guesses per second.
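Here is a minimal sketch of why a slow password-to-key conversion matters, using PBKDF2 as a stand-in (the products named above use their own conversion schemes; the iteration counts, salt, and password here are illustrative only):

    import hashlib, time

    def derive_key(password: str, salt: bytes, iterations: int) -> bytes:
        # Stretch the password into a key by iterating a hash; more iterations
        # means every guess costs the attacker proportionally more time.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

    salt = b"per-file-random-salt"
    for iterations in (1, 1_000, 100_000):
        start = time.perf_counter()
        derive_key("letmein1", salt, iterations)
        print(f"{iterations:>7} iterations: {(time.perf_counter() - start) * 1000:.2f} ms per guess")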

When attacking programs with deliberately slow ramp-ups, it’s important to make every guess count. A simple six-character lowercase exhaustive character attack, “aaaaaa” through “zzzzzz,” has more than 308 million combinations. And it’s generally unproductive, because the program spends most of its time testing improbable passwords like “pqzrwj.”

According to Eric Thompson of AccessData, a typical password consists of a root plus an appendage. A root isn’t necessarily a dictionary word, but it’s something pronounceable. An appendage is either a suffix (90 percent of the time) or a prefix (10 percent of the time).

So the first attack PRTK performs is to test a dictionary of about 1,000 common passwords, things like “letmein,” “password,” “123456” and so on. Then it tests them each with about 100 common suffix appendages: “1,” “4u,” “69,” “abc,” “!” and so on. Believe it or not, it recovers about 24 percent of all passwords with these 100,000 combinations.
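A minimal sketch of that first pass, with tiny made-up lists standing in for PRTK’s real dictionaries (the names and entries here are illustrative, not AccessData’s):

    from itertools import product

    common_roots = ["letmein", "password", "123456", "qwerty", "monkey"]   # stand-in for ~1,000 roots
    common_suffixes = ["", "1", "4u", "69", "abc", "!"]                    # stand-in for ~100 appendages

    def first_pass(roots, suffixes):
        # Cheapest guesses first: every root, bare and with each common suffix.
        for root, suffix in product(roots, suffixes):
            yield root + suffix

    for candidate in first_pass(common_roots, common_suffixes):
        print(candidate)

With full-sized root and suffix lists, the same loop yields the 100,000 guesses described above.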

Then, PRTK goes through a series of increasingly complex root dictionaries and appendage dictionaries. The root dictionaries include:

  • Common word dictionary: 5,000 entries
  • Names dictionary: 10,000 entries
  • Comprehensive dictionary: 100,000 entries
  • Phonetic pattern dictionary: 1/10,000 of an exhaustive character search

The phonetic pattern dictionary is interesting. It’s not really a dictionary; it’s a Markov-chain routine that generates pronounceable English-language strings of a given length. For example, PRTK can generate and test a dictionary of very pronounceable six-character strings, or just-barely pronounceable seven-character strings. They’re working on generation routines for other languages.
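A minimal sketch of the idea (not AccessData’s code): a character-level Markov chain trained on a tiny, made-up word list, emitting strings whose letter transitions look English-like:

    import random
    from collections import defaultdict

    training_words = ["letmein", "password", "dragon", "monkey", "sunshine", "princess"]

    transitions = defaultdict(list)          # which letters tend to follow which
    for word in training_words:
        for a, b in zip(word, word[1:]):
            transitions[a].append(b)

    def pronounceable(length: int) -> str:
        out = [random.choice(training_words)[0]]
        while len(out) < length:
            followers = transitions.get(out[-1]) or list(transitions)   # restart on a dead end
            out.append(random.choice(followers))
        return "".join(out)

    print([pronounceable(6) for _ in range(5)])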

PRTK also runs a four-character-string exhaustive search. It runs the dictionaries with lowercase (the most common), initial uppercase (the second most common), all uppercase and final uppercase. It runs the dictionaries with common substitutions: “$” for “s,” “@” for “a,” “1” for “l” and so on. Anything that’s “leet speak” is included here, like “3” for “e.”
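A minimal sketch of those variant expansions for a single root (the substitution table is a small, illustrative subset of the “leet speak” mappings described above):

    LEET = {"a": "@", "e": "3", "l": "1", "s": "$"}

    def case_variants(root: str):
        yield root.lower()                              # all lowercase (most common)
        yield root.capitalize()                         # initial uppercase
        yield root.upper()                              # all uppercase
        yield root[:-1].lower() + root[-1].upper()      # final uppercase

    def leet(root: str) -> str:
        # Substitute every mapped (lowercase) character at once; a real cracker
        # would also try partial substitutions.
        return "".join(LEET.get(c, c) for c in root)

    for variant in case_variants("password"):
        print(variant, leet(variant))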

The appendage dictionaries include things like:

  • All two-digit combinations
  • All dates from 1900 to 2006
  • All three-digit combinations
  • All single symbols
  • All single digit, plus single symbol
  • All two-symbol combinations

AccessData’s secret sauce is the order in which it runs the various root and appendage dictionary combinations. The company’s research indicates that the password sweet spot is a seven- to nine-character root plus a common appendage, and that it’s much more likely for someone to choose a hard-to-guess root than an uncommon appendage.

Normally, PRTK runs on a network of computers. Password guessing is a trivially distributable task, and it can easily run in the background. A large organization like the Secret Service can easily have hundreds of computers chugging away at someone’s password. A company called Tableau is building a specialized FPGA hardware add-on to speed up PRTK for slow programs like PGP and WinZip: roughly a 150- to 300-times performance increase.

How good is all of this? Eric Thompson estimates that with a couple of weeks’ to a month’s worth of time, his software breaks 55 percent to 65 percent of all passwords. (This depends, of course, very heavily on the application.) Those results are good, but not great.

But that assumes no biographical data. Whenever it can, AccessData collects whatever personal information it can on the subject before beginning. If it can see other passwords, it can make guesses about what types of passwords the subject uses. How big a root is used? What kind of root? Does he put appendages at the end or the beginning? Does he use substitutions? ZIP codes are common appendages, so those go into the file. So do addresses, names from the address book, other passwords and any other personal information. This data ups PRTK’s success rate a bit, but more importantly it reduces the time from weeks to days or even hours.

So if you want your password to be hard to guess, you should choose something not on any of the root or appendage lists. You should mix upper and lowercase in the middle of your root. You should add numbers and symbols in the middle of your root, not as common substitutions. Or drop your appendage in the middle of your root. Or use two roots with an appendage in the middle.

Even something lower down on PRTK’s dictionary list—the seven-character phonetic pattern dictionary—together with an uncommon appendage, is not going to be guessed. Neither is a password made up of the first letters of a sentence, especially if you throw numbers and symbols in the mix. And yes, these passwords are going to be hard to remember, which is why you should use a program like the free and open-source Password Safe to store them all in. (PRTK can test only 900 Password Safe 3.0 passwords per second.)
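The low guess rate against Password Safe comes from deliberate key stretching: the passphrase is run through many hash iterations before a guess can even be tested, so every guess costs real CPU time. A minimal sketch of the general idea, using PBKDF2 as a stand-in rather than Password Safe’s exact key-derivation scheme:

    import hashlib, os, time

    # Key stretching: force each guess through many hash iterations. The
    # iteration count is tuned so a legitimate user barely notices the
    # delay, but a guesser pays it for every candidate.

    SALT = os.urandom(16)
    ITERATIONS = 100_000

    def stretch(password):
        return hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, ITERATIONS)

    start = time.perf_counter()
    for guess in ("letmein", "password1", "qwerty99"):
        stretch(guess)
    elapsed = time.perf_counter() - start
    print(f"{3 / elapsed:.0f} guesses per second on this machine")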

Even so, none of this might actually matter. AccessData sells another program, Forensic Toolkit, that, among other things, scans a hard drive for every printable character string. It looks in documents, in the Registry, in e-mail, in swap files, in deleted space on the hard drive … everywhere. And it creates a dictionary from that, and feeds it into PRTK.

And PRTK breaks more than 50 percent of passwords from this dictionary alone.
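The harvesting step is essentially the Unix strings utility applied to the whole disk. A simplified sketch that collects printable runs from a single file (a real forensic tool walks the raw device, swap space, and unallocated sectors, and handles Unicode as well):

    import re

    # Scan a binary blob for runs of printable ASCII and collect the unique
    # ones as a candidate dictionary. Runs split across chunk boundaries are
    # missed here; that is fine for a sketch.

    PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{6,}")   # runs of six or more printable bytes

    def harvest_strings(path):
        dictionary = set()
        with open(path, "rb") as f:
            while chunk := f.read(1 << 20):           # read 1 MB at a time
                for match in PRINTABLE_RUN.findall(chunk):
                    dictionary.add(match.decode("ascii"))
        return dictionary

    # Example: feed the harvested strings to the guesser as one more dictionary.
    # candidates = harvest_strings("pagefile.sys")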

What’s happening is that the Windows operating system’s memory management leaves data all over the place in the normal course of operations. You’ll type your password into a program, and it gets stored in memory somewhere. Windows swaps the page out to disk, and it becomes the tail end of some file. It gets moved to some far out portion of your hard drive, and there it’ll sit forever. Linux and Mac OS aren’t any better in this regard.

I should point out that none of this has anything to do with the encryption algorithm or the key length. A weak 40-bit algorithm doesn’t make this attack easier, and a strong 256-bit algorithm doesn’t make it harder. These attacks simulate the process of the user entering the password into the computer, so the size of the resultant key is never an issue.

For years, I have said that the easiest way to break a cryptographic product is almost never by breaking the algorithm, that almost invariably there is a programming error that allows you to bypass the mathematics and break the product. A similar thing is going on here. The easiest way to guess a password isn’t to guess it at all, but to exploit the inherent insecurity in the underlying operating system.

This essay originally appeared on Wired.com.

Posted on January 11, 2007 at 8:04 AM169 Comments

Surveillance Cameras Catch a Cold-Blooded Killer

I’m in the middle of writing a long essay on the psychology of security. One of the things I’m writing about is the “availability heuristic,” which basically says that the human brain tends to assess the frequency of a class of events based on how easy it is to bring an instance of that class to mind. It explains why people tend to be afraid of the risks that are discussed in the media, or why people are afraid to fly but not afraid to drive.

One of the effects of this heuristic is that people are more persuaded by a vivid example than they are by statistics. The latter might be more useful, but the former is easier to remember.

That’s the context in which I want you to think about this very gripping story about a cold-blooded killer caught by city-wide surveillance cameras.

Federal agents showed Peterman the recordings from that morning. One camera captured McDermott, 48, getting off the bus. A man wearing a light jacket and dark pants got off the same bus, and followed a few steps behind her.

Another camera caught them as they rounded the corner. McDermott didn’t seem to notice the man following her. Halfway down the block, the man suddenly raised his arm and shot her once in the back of the head.

“I’ve seen shooting incidents on video before,” Peterman said, “but the suddenness, and that he did it for no reason at all, was really scary.”

I can write essay after essay about the inefficacy of security cameras. I can talk about trade-offs, and the better ways to spend the money. I can cite statistics and experts and whatever I want. But—used correctly—stories like this one will do more to move public opinion than anything I can do.

Posted on January 10, 2007 at 11:36 AM64 Comments

MI5 Terror Alerts by E-mail

Sounds like security theater to me:

But he added that one of the difficult questions was what people should do about the information when they receive it: “There’s not necessarily that much information on the website about how you should act and how you should respond other than being vigilant and calling a hotline if you see anything suspicious.”

The first, called Threat Level Only, will inform the recipient if the nationwide terror threat level changes. The condition is currently listed as severe.

The second more inclusive service is called What’s New, and will be a digest of the latest information from MI5, including speeches made by the director general and links to relevant websites.

I’ve written about terror threat alerts in the UK before.

EDITED TO ADD (1/15): System is in shambles and being overhauled:

Digital detective work by campaigners revealed that the alerting system did little to protect the identities of anyone signing up.

They found that data gathered was being stored in the US, leading to questions about who would have access to the list of names and e-mail addresses.

Posted on January 10, 2007 at 6:31 AM28 Comments

NSA Helps Microsoft with Windows Vista

Is this a good idea or not?

For the first time, the giant software maker is acknowledging the help of the secretive agency, better known for eavesdropping on foreign officials and, more recently, U.S. citizens as part of the Bush administration’s effort to combat terrorism. The agency said it has helped in the development of the security of Microsoft’s new operating system—the brains of a computer—to protect it from worms, Trojan horses and other insidious computer attackers.

[…]

The NSA declined to comment on its security work with other software firms, but Sager said Microsoft is the only one “with this kind of relationship at this point where there’s an acknowledgment publicly.”

The NSA, which provided its service free, said it was Microsoft’s idea to acknowledge the spy agency’s role.

It’s called the “equities issue.” Basically, the NSA has two roles: eavesdrop on their stuff, and protect our stuff. When both sides use the same stuff—Windows Vista, for example—the agency has to decide whether to exploit vulnerabilities to eavesdrop on their stuff or close the same vulnerabilities to protect our stuff. In its partnership with Microsoft, it could have decided to go either way: to deliberately introduce vulnerabilities that it could exploit, or deliberately harden the OS to protect its own interests.

A few years ago I was ready to believe the NSA recognized we’re all safer with more secure general-purpose computers and networks, but in the post-9/11 take-the-gloves-off eavesdrop-on-everybody environment, I simply don’t trust the NSA to do the right thing.

“I kind of call it a Good Housekeeping seal” of approval, said Michael Cherry, a former Windows program manager who now analyzes the product for Directions on Microsoft, a firm that tracks the software maker.

Cherry says the NSA’s involvement can help counter the perception that Windows is not entirely secure and help create a perception that Microsoft has solved the security problems that have plagued it in the past. “Microsoft also wants to make the case that [the new Windows] is more secure than its earlier versions,” he said.

For some of us, the result is the exact opposite.

EDITED TO ADD (1/11): Another opinion.

Posted on January 9, 2007 at 12:43 PM82 Comments

Wi-Fi Eavesdropping

New York Times blog post on how easy it is to eavesdrop on an open Wi-Fi session:

Turns out there was absolutely nothing to it. John sat a few feet away with his PowerBook; I fired up my Fujitsu laptop and began doing some e-mail and Web surfing.

That’s all it took. He turned his laptop around to reveal all of this:

* Every copy of every e-mail message I sent *and* received.

* A list of the Web sites I visited.

* Even, incredibly, the graphics that had appeared on the Web sites I had visited.

None of this took any particular effort, hacker skill or fancy software. Anyone could do it. You could do it.

Nice to see this getting some popular attention.

Posted on January 8, 2007 at 6:20 AM42 Comments

Song Parody

“Strangers on my Flight.”

EDITED TO ADD (1/8): This post has generated much more controversy than I expected. Yes, it’s in very poor taste. No, I don’t agree with the sentiment in the words. And no, I don’t know anything about the provenance of the lyrics or the sentiment of the person who wrote or sang them.

I probably should have said that, instead of just posting the link.

I apologize to anyone I offended by including this link. And I am going to close comments on this thread.

Posted on January 5, 2007 at 12:22 PM

Licensing Boaters

The U.S. Coast Guard is talking about licensing boaters. It’s being talked about as an antiterrorism measure, in typical incoherent ways:

The United States already has endured terrorism using small civilian craft, albeit overseas: In 2000, suicide bombers in the port of Aden, Yemen, used an inflatable boat to blow themselves up next to the U.S. Navy destroyer USS Cole, killing 17 sailors and wounding 39 others.

Terrorism experts point to other ways small boats potentially could assist in attacks—for example, a speedboat could deposit saboteurs at the outlet pipes of a nuclear power plant, or hijackers aboard a cruise ship. In a nightmare scenario, suicide bombers in a crowded harbor could use small watercraft to detonate a tanker carrying ultra-volatile liquefied natural gas, causing a powerful explosion that could kill thousands.

And how exactly is licensing watercraft supposed to help?

There are lots of good reasons to license boats and boaters, just as there are to license cars and drivers. But counterterrorism is not one of them.

Posted on January 4, 2007 at 2:35 PM66 Comments

Ensuring the Accuracy of Electronic Voting Machines

A Florida judge ruled (text of the ruling) that the defeated candidate has no right to examine the source code in the voting machines that determined the winner in a disputed Congressional race.

Meanwhile:

A laboratory that has tested most of the nation’s electronic voting systems has been temporarily barred from approving new machines after federal officials found that it was not following its quality-control procedures and could not document that it was conducting all the required tests.

That company is Ciber Inc.

Is it just me, or are things starting to make absolutely no sense?

Posted on January 4, 2007 at 12:06 PM37 Comments

ID Cards to Stop Bullying

No, really:

“Introducing photo ID cards will help bring an end to bullying over use of ‘cash free’ cards for school meals, will assist with access to school bus services and, ultimately, can be used to add security to school examinations,” he said.

“SSTA members report frequently that young people are bullied into handing over their cards for school meals to others, thus leaving them without their meal entitlement.

“With non-identified cards this will remain a problem. If photo ID is introduced widely, then the problem will dramatically reduce.”

He said that introducing such a system would also help prepare young people for “the realities of identity management in the 21st Century”.

I agree with this:

However, Green MSP Patrick Harvie said the suggestion was troubling.

“We should be preparing young people for the reality of defending their privacy and civil liberties against ever-more intrusive government systems,” he argued.

“We’ve heard proposals for airport-style scanners and random drug testing in schools; fingerprinting is already in place in some schools. There’s a risk of creating environments which feel more like penal institutions than places of learning.

“These ID cards will do absolutely nothing to address the causes of bullying. Instead they will teach the next generation that an ID card culture is ‘normal’, and that they should have to prove their entitlement to services.”

It’s important that schools teach the right lessons, and “we’re all living in a surveillance society, and we should just get used to it” is not the right lesson.

Posted on January 4, 2007 at 6:17 AM54 Comments

U.S. Government to Encrypt All Laptops

This is a good idea:

To address the issue of data leaks of the kind we’ve seen so often in the last year because of stolen or missing laptops, writes Saqib Ali, the Feds are planning to use Full Disk Encryption (FDE) on all Government-owned computers.

“On June 23, 2006 a Presidential Mandate was put in place requiring all agency laptops to fully encrypt data on the HDD. The U.S. Government is currently conducting the largest single side-by-side comparison and competition for the selection of a Full Disk Encryption product. The selected product will be deployed on Millions of computers in the U.S. federal government space. This implementation will end up being the largest single implementation ever, and all of the information regarding the competition is in the public domain. The evaluation will come to an end in 90 days. You can view all the vendors competing and list of requirements.”

Certainly, encrypting everything is overkill, but it’s much easier than figuring out what to encrypt and what not to. And I really like that there is an open competition to choose which encryption program to use. It’s certainly a high-stakes competition among the vendors, but one that is likely to improve the security of all products. I’ve long said that one of the best things the government can do to improve computer security is to use its vast purchasing power to pressure vendors to improve their security. I would expect the winner to make a lot of sales outside of the contract, and for the losers to correct their deficiencies so they’ll do better next time.

Side note: Key escrow is a requirement, something that makes sense in a government or corporate application:

Capable of secure escrow and recovery of the symetric [sic] encryption key
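In practice, escrow for full-disk encryption usually means wrapping the disk’s data-encryption key under a separate recovery key held by the organization. A concept sketch using the Python cryptography library’s Fernet wrapper (an illustration of the idea, not any vendor’s actual scheme):

    import os
    from cryptography.fernet import Fernet

    # The random data-encryption key (DEK) that encrypts the disk is wrapped
    # under an organizational recovery key, so an administrator can recover
    # the disk if the user's credentials are lost. Real FDE products add
    # user-passphrase wrapping, hardware key storage, and audit logging.

    disk_key = os.urandom(32)                 # the symmetric key protecting the disk

    recovery_key = Fernet.generate_key()      # held by the organization
    escrowed_blob = Fernet(recovery_key).encrypt(disk_key)   # stored with the machine record

    # Later, during recovery:
    recovered = Fernet(recovery_key).decrypt(escrowed_blob)
    assert recovered == disk_key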

I wonder if the NSA is involved in the evaluation at all, and if its analysis will be made public.

Posted on January 3, 2007 at 2:00 PM74 Comments

DHS Privacy Office Report on MATRIX

The Privacy Office of the Department of Homeland Security has issued a report on MATRIX: The Multistate Anti-Terrorism Information Exchange. MATRIX is a now-defunct data mining and data sharing program among federal, state, and local law enforcement agencies, one of the many data-mining programs going on in government (TIA—Total Information Awareness—being the most famous, and Tangram being the newest).

The report is short, and very critical of the program’s inattention to privacy and lack of transparency. That’s probably why it was released to the public just before Christmas, burying it in the media.

Posted on January 3, 2007 at 11:58 AM6 Comments

More on the Unabomber's Code

Last month I posted about Ted Kaczynski’s pencil-and-paper cryptography. It seems that he invented his own cipher, which the police couldn’t crack until they found a description of the code amongst his personal papers.

The link I found was from KPIX, a CBS affiliate in the San Francisco area. Some time after writing it, I was contacted by the station and asked to comment on some other pieces of the Unabomber’s cryptography for a future story (video online).

There were five new pages of Unabomber evidence that I talked about (1, 2, 3, 4, and 5). All five pages were presented to me as being pages written by the Unabomber, but it seems pretty obvious to me that pages 4 and 5, rather than being Kaczynski’s own key, are notes written by a cryptanalyst trying to break the Unabomber’s code.

In any case, it’s all fascinating.

Posted on January 3, 2007 at 6:59 AM27 Comments

New Congress: Changes at the U.S. Borders

Item #1: US-VISIT, the program to keep better track of people coming in and out of the U.S. (more information here, here, here, and here), is running into all sorts of problems.

In a major blow to the Bush administration’s efforts to secure borders, domestic security officials have for now given up on plans to develop a facial or fingerprint recognition system to determine whether a vast majority of foreign visitors leave the country, officials say.

[…]

But in recent days, officials at the Homeland Security Department have conceded that they lack the financing and technology to meet their deadline to have exit-monitoring systems at the 50 busiest land border crossings by next December. A vast majority of foreign visitors enter and exit by land from Mexico and Canada, and the policy shift means that officials will remain unable to track the departures.

A report released on Thursday by the Government Accountability Office, the nonpartisan investigative arm of Congress, restated those findings, reporting that the administration believes that it will take 5 to 10 years to develop technology that might allow for a cost-effective departure system.

Domestic security officials, who have allocated $1.7 billion since the 2003 fiscal year to track arrivals and departures, argue that creating the program with the existing technology would be prohibitively expensive.

They say it would require additional employees, new buildings and roads at border crossings, and would probably hamper the vital flow of commerce across those borders.

Congress ordered the creation of such a system in 1996.

In an interview last week, the assistant secretary for homeland security policy, Stewart A. Baker, estimated that an exit system at the land borders would cost “tens of billions of dollars” and said the department had concluded that such a program was not feasible, at least for the time being.

“It is a pretty daunting set of costs, both for the U.S. government and the economy,” Mr. Baker said. “Congress has said, ‘We want you to do it.’ We are not going to ignore what Congress has said. But the costs here are daunting.

“There are a lot of good ideas and things that would make the country safer. But when you have to sit down and compare all the good ideas people have developed against each other, with a limited budget, you have to make choices that are much harder.”

I like the trade-off sentiment of that quote.

My guess is that the program will be completely killed by Congress in 2007. (More articles here and here, and an editorial here.)

Item #2: The new Congress is—wisely, I should add—unlikely to fund the 700-mile fence along the Mexican border.

Item #3: I hope they examine the Coast Guard’s security failures and cost overruns.

Item #4: Note this paragraph from the last article:

During a drill in which officials pretended that a ferry had been hijacked by terrorists, the Coast Guard and the Federal Bureau of Investigation competed for the right to take charge, a contest that became so intense that the Coast Guard players manipulated the war game to cut the F.B.I. out, government auditors say.

Seems that there are still serious turf battles among government agencies involved with terrorism. It would be nice if Congress spent some time on this (actually important) problem.

Posted on January 2, 2007 at 12:26 PM28 Comments

OneDOJ

Yet another massive U.S. government database—OneDOJ:

The Justice Department is building a massive database that allows state and local police officers around the country to search millions of case files from the FBI, Drug Enforcement Administration and other federal law enforcement agencies, according to Justice officials.

The system, known as “OneDOJ,” already holds approximately 1 million case records and is projected to triple in size over the next three years, Justice officials said. The files include investigative reports, criminal-history information, details of offenses, and the names, addresses and other information of criminal suspects or targets, officials said.

The database is billed by its supporters as a much-needed step toward better information-sharing with local law enforcement agencies, which have long complained about a lack of cooperation from the federal government.

But civil-liberties and privacy advocates say the scale and contents of such a database raise immediate privacy and civil rights concerns, in part because tens of thousands of local police officers could gain access to personal details about people who have not been arrested or charged with crimes.

The little-noticed program has been coming together over the past year and a half. It already is in use in pilot projects with local police in Seattle, San Diego and a handful of other areas, officials said. About 150 separate police agencies have access, officials said.

But in a memorandum sent last week to the FBI, U.S. attorneys and other senior Justice officials, Deputy Attorney General Paul J. McNulty announced that the program will be expanded immediately to 15 additional regions and that federal authorities will “accelerate . . . efforts to share information from both open and closed cases.”

Eventually, the department hopes, the database will be a central mechanism for sharing federal law enforcement information with local and state investigators, who now run checks individually, and often manually, with Justice’s five main law enforcement agencies: the FBI, the DEA, the U.S. Marshals Service, the Bureau of Prisons and the Bureau of Alcohol, Tobacco, Firearms and Explosives.

Within three years, officials said, about 750 law enforcement agencies nationwide will have access.

Computerizing this stuff is a good idea, but any new systems need privacy safeguards built in. We need to ensure that:

  • Inaccurate data can be corrected.
  • Data is deleted when it is no longer needed, especially investigative data on people who have turned out to be innocent.
  • Protections are in place to prevent abuse of the data, both by people in their official capacity and people acting unofficially or fraudulently.

In our rush to computerize these records, we’re ignoring these safeguards and building systems that will make us all less secure.

Posted on January 2, 2007 at 11:55 AM17 Comments

Secure Flight Privacy Report

The Department of Homeland Security’s own Privacy Office released a report on privacy issues with Secure Flight, the new airline passenger matching program. It’s not good, which is why the government tried to bury it by releasing it to the public the Friday before Christmas. And that’s why I’m waiting until after New Year’s Day before posting this.

Secure Flight Report: “DHS Privacy Office Report to the Public on the Transportation Security Administration’s Secure Flight Program and Privacy Recommendations”:

Summary:

The Department of Homeland Security (DHS) Privacy Office conducted a review of the Transportation Security Administration’s (TSA) collection and use of commercial data during initial testing for the Secure Flight program that occurred in the fall 2004 through spring 2005. The Privacy Office review was undertaken following notice by the TSA Privacy Officer of preliminary concerns raised by the Government Accountability Office (GAO) that, contrary to published privacy notices and public statements, TSA may have accessed and stored personally identifying data from commercial sources as part of its efforts to fashion a passenger prescreening program.

These new concerns followed much earlier public complaints that TSA collected passenger name record data from airlines to test the developmental passenger prescreening program without giving adequate notice to the public. Thus, the Privacy Office’s review of the Secure Flight commercial data testing also sought to determine whether the data collection from air carriers and commercial data brokers about U.S. persons was consistent with published privacy documents.

The Privacy Office appreciates the cooperation in this review by TSA management, staff, and contractors involved in the commercial data testing. The Privacy Office wishes to recognize that, with the best intentions, TSA undertook considerable efforts to address information privacy and security in the development of the Secure Flight Program. Notwithstanding these efforts, we are concerned that shortcomings identified in this report reflect what appear to be largely unintentional, yet significant privacy missteps that merit the careful attention and privacy leadership that TSA Administrator Kip Hawley is giving to the development of the Secure Flight program and, in support of which, the DHS Acting Chief Privacy Officer has committed to provide Privacy Office staff resources and privacy guidance.

I’ve written about Secure Flight many times. I suppose this is a good summary post. This is a post about the Secure Flight Privacy/IT Working Group, which I was a member of, and its final report. That link also includes links to my other posts on the program.

Posted on January 2, 2007 at 7:24 AM15 Comments
