In Praise of Security Theater

While visiting some friends and their new baby in the hospital last week, I noticed an interesting bit of security. To prevent infant abduction, all babies had RFID tags attached to their ankles by a bracelet. There are sensors on the doors to the maternity ward, and if a baby passes through, an alarm goes off.

Infant abduction is rare, but still a risk. In the last 22 years, about 233 such abductions have occurred in the United States. About 4 million babies are born each year, which means that a baby has a 1-in-375,000 chance of being abducted. Compare this with the infant mortality rate in the U.S.—one in 145—and it becomes clear where the real risks are.
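
The arithmetic behind that 1-in-375,000 figure is easy to check with the essay's own round numbers:

```python
# Back-of-the-envelope check of the abduction odds, using the
# essay's round numbers: 233 abductions over 22 years, and
# roughly 4 million births per year.
abductions = 233
years = 22
births_per_year = 4_000_000

abductions_per_year = abductions / years       # ~10.6 per year
odds = births_per_year / abductions_per_year   # ~1 in 377,000

print(f"Roughly 1 in {round(odds):,}")
```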

And the 1-in-375,000 chance is not today’s risk. Infant abduction rates have plummeted in recent years, mostly due to education programs at hospitals.

So why are hospitals bothering with RFID bracelets? I think they’re primarily to reassure the mothers. Many times during my friends’ stay at the hospital the doctors had to take the baby away for this or that test. Millions of years of evolution have forged a strong bond between new parents and new baby; the RFID bracelets are a low-cost way to ensure that the parents are more relaxed when their baby is out of their sight.

Security is both a reality and a feeling. The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We know the infant abduction rates and how well the bracelets reduce those rates. We also know the cost of the bracelets, and can thus calculate whether they’re a cost-effective security measure or not. But security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: You can be secure even though you don’t feel secure, and you can feel secure even though you’re not really secure.

The RFID bracelets are what I’ve come to call security theater: security primarily designed to make you feel more secure. I’ve regularly maligned security theater as a waste, but it’s not always, and not entirely, so.

It’s only a waste if you consider the reality of security exclusively. There are times when people feel less secure than they actually are. In those cases—like with mothers and the threat of baby abduction—a palliative countermeasure that primarily increases the feeling of security is just what the doctor ordered.

Tamper-resistant packaging for over-the-counter drugs started to appear in the 1980s, in response to some highly publicized poisonings. As a countermeasure, it’s largely security theater. It’s easy to poison many foods and over-the-counter medicines right through the seal—with a syringe, for example—or to open and replace the seal well enough that an unwary consumer won’t detect it. But in the 1980s, there was a widespread fear of random poisonings in over-the-counter medicines, and tamper-resistant packaging brought people’s perceptions of the risk more in line with the actual risk: minimal.

Much of the post-9/11 security can be explained by this as well. I’ve often talked about the National Guard troops in airports right after the terrorist attacks, and the fact that they had no bullets in their guns. As a security countermeasure, it made little sense for them to be there. They didn’t have the training necessary to improve security at the checkpoints, or even to be another useful pair of eyes. But to reassure a jittery public that it’s OK to fly, it was probably the right thing to do.

Security theater also addresses the ancillary risk of lawsuits. Lawsuits are ultimately decided by juries, or settled because of the threat of jury trial, and juries are going to decide cases based on their feelings as well as the facts. It’s not enough for a hospital to point to infant abduction rates and rightly claim that RFID bracelets aren’t worth it; the other side is going to put a weeping mother on the stand and make an emotional argument. In these cases, security theater provides real security against the legal threat.

Like real security, security theater has a cost. It can cost money, time, concentration, freedoms and so on. It can come at the cost of reducing the things we can do. Most of the time security theater is a bad trade-off, because the costs far outweigh the benefits. But there are instances when a little bit of security theater makes sense.

We make smart security trade-offs—and by this I mean trade-offs for genuine security—when our feeling of security closely matches the reality. When the two are out of alignment, we get security wrong. Security theater is no substitute for security reality, but, used correctly, security theater can be a way of raising our feeling of security so that it more closely matches the reality of security. It makes us feel more secure handing our babies off to doctors and nurses, buying over-the-counter medicines, and flying on airplanes—closer to how secure we should feel if we had all the facts and did the math correctly.

Of course, too much security theater and our feeling of security becomes greater than the reality, which is also bad. And others—politicians, corporations and so on—can use security theater to make us feel more secure without doing the hard work of actually making us secure. That’s the usual way security theater is used, and why I so often malign it.

But to write off security theater completely is to ignore the feeling of security. And as long as people are involved with security trade-offs, that’s never going to work.

This essay appeared on Wired.com, and is dedicated to my new godson, Nicholas Quillen Perry.

EDITED TO ADD: This essay has been translated into Portuguese.

Posted on January 25, 2007 at 5:50 AM

Debating Full Disclosure

Full disclosure—the practice of making the details of security vulnerabilities public—is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See The Vulnerability Disclosure Game: Are We More Secure?). The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.

But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

To understand why the second assumption isn’t true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you—the user—much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.

Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.

So a bunch of software companies, and some security researchers, banded together and invented “responsible disclosure” (See “The Chilling Effect”). The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

The moral here doesn’t just apply to software; it’s very general. Public scrutiny is how security improves, whether we’re talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us—unless, of course, they knew about it beforehand—but most of the time the benefits far outweigh the disadvantages.

Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn’t improve security; it stifles it.

I’d rather have as much information as I can to make an informed decision about security, whether it’s a buying decision about a software product or an election decision about two political parties. I’d rather have the information I need to pressure vendors to improve security.

I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.

This essay originally appeared on CSOOnline, as part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities, and Mark Miller of Microsoft wrote in favor of responsible disclosure. These are on-line-only sidebars to a very interesting article in CSO Magazine, “The Chilling Effect,” about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:

“Laws say you can’t access computers without permission,” she [attorney Jennifer Granick] explains. “Permission on a website is implied. So far, we’ve relied on that. The Internet couldn’t work if you had to get permission every time you wanted to access something. But what if you’re using a website in a way that’s possible but that the owner didn’t intend? The question is whether the law prohibits you from exploring all the ways a website works,” including through vulnerabilities.

All the links are worth reading in full.

A Simplified Chinese translation by Xin LI is available on Delphij’s Chaos.

Posted on January 23, 2007 at 6:45 AM

"Clear" Registered Traveller Program

CLEAR, a private service that prescreens travelers for a $100 annual fee, has come to Kennedy International Airport. To benefit from the Clear Registered Traveler program, which is run by Verified Identity Pass, a person must fill out an application, let the service capture his fingerprints and iris pattern and present two forms of identification. If the traveler passes a federal background check, he will be given a card that allows him to pass quickly through airport security.

Sounds great, but it’s actually two ideas rolled into one: one clever and one very stupid.

The clever idea is allowing people to pay for better service. Clear has been in operation at the Orlando International Airport since July 2005, and members have passed through security checkpoints faster simply because they are segregated from less experienced fliers who don’t know the drill.

Now, at Kennedy and other airports, Clear is purchasing and installing federally approved technology that will further speed up the screening process: scanners that will eliminate the need for cardholders to remove their shoes, and explosives detection machines that will eliminate the need for them to remove their coats and jackets. There are also Clear employees at the checkpoints who, although they can’t screen cardholders, can guide members through the security process. Clear has not yet paid airports for an extra security lane or the Transportation Security Administration for extra screening personnel, but both of those enhancements are on the table if enough people sign up.

I fly more than 200,000 miles per year and would gladly pay $100 a year to get through airport security faster.

But the stupid idea is the background check. When first conceived, traveler programs focused on prescreening. Pre-approved travelers would pass through security checkpoints with less screening, and resources would be focused on everyone else. Sounds reasonable, but it would leave us all less safe.

Background checks are based on the dangerous myth that we can somehow pick terrorists out of a crowd if we could identify everyone. Unfortunately, there isn’t any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could have Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn’t even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.

And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.

The truth is that whenever you create two paths through security—a high-security path and a low-security path—you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.

I think of Clear as a $100 service that tells terrorists whether or not the F.B.I. is on to them. Why in the world would we provide terrorists with this ability?

We don’t have to. Clear cardholders are not scrutinized less when they go through checkpoints, they’re scrutinized more efficiently. So why not get rid of the background checks altogether? We should all be able to walk into the airport, pay $10, and use the Clear lanes when it’s worth it to us.

This essay originally appeared in The New York Times.

I’ve already written about trusted traveller programs, and have also written about Verified Identity Card, Inc., the company that runs Clear. Note that these two essays were from 2004. This is the Clear website, and this is the website for Verified Identity Pass, Inc.

Posted on January 22, 2007 at 7:11 AM

Information Security and Externalities

Information insecurity is costing us billions. There are many different ways in which we pay for information insecurity. We pay for it in theft, such as information theft, financial theft and theft of service. We pay for it in productivity loss, both when networks stop functioning and in the dozens of minor security inconveniences we all have to endure on a daily basis. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for the lack of security, year after year.

Fundamentally, the issue is insecure software. It is a result of bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the myriad effects of insecure software. Unfortunately, the money spent does not improve the security of that software. We are paying to mitigate the risk rather than fix the problem.

The only way to fix the problem is for vendors to improve their software. They need to design security in their products from the start and not as an add-on feature. Software vendors need also to institute good security practices and improve the overall quality of their products. But they will not do this until it is in their financial best interests to do so. And so far, it is not.

The reason is easy to explain. In a capitalist society, businesses are profit-making ventures, so they make decisions based on both short- and long-term profitability. This holds true for decisions about product features and sale prices, and it holds equally true for decisions about security. Vendors try to balance the costs of more secure software—extra developers, fewer features, longer time to market—against the costs of insecure software: expense to patch, occasional bad press, potential loss of sales.

So far, so good. But what the vendors do not look at is the total costs of insecure software; they only look at what insecure software costs them. And because of that, they miss a lot of the costs: all the money we, the software product buyers, are spending on security. In economics, this is known as an externality: the cost of a decision that is borne by people other than those taking the decision.

Normally, you would expect users to respond by favouring secure products over insecure products—after all, users are also making their buying decisions based on the same capitalist model. Unfortunately, that is not generally possible. In some cases software monopolies limit the available product choice; in other cases, the ‘lock-in effect’ created by proprietary file formats or existing infrastructure or compatibility requirements makes it harder to switch; and in still other cases, none of the competing companies have made security a differentiating characteristic. In all cases, it is hard for an average buyer to distinguish a truly secure product from an insecure product with a ‘trust us’ marketing campaign.

Because of all these factors, there are no real consequences to the vendors for having insecure or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality. The result is what we have all witnessed: insecure software. Companies find that it is cheaper to weather the occasional press storm, spend money on PR campaigns touting good security and fix public problems after the fact, than to design security in from the beginning.

And so the externality remains…

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is one way to make it in those organisations’ best interests. If end users could sue software manufacturers for product defects, then the cost of those defects to the software manufacturers would rise. Manufacturers would then pay the true economic cost for poor software, and not just a piece of it. So when they balance the cost of making their software secure versus the cost of leaving their software insecure, there would be more costs on the latter side. This would provide an incentive for them to make their software more secure.

Basically, we have to tweak the risk equation in such a way that the Chief Executive Officer (CEO) of a company cares about actually fixing the problem—and putting pressure on the balance sheet is the best way to do that. Security is risk management; liability fiddles with the risk equation.

Clearly, liability is not all or nothing. There are many parties involved in a typical software attack. The list includes:

  • the company that sold the software with the vulnerability in the first place
  • the person who wrote the attack tool
  • the attacker himself, who used the tool to break into a network
  • and finally, the owner of the network, who was entrusted with defending that network.

100% of the liability should not fall on the shoulders of the software vendor, just as 100% should not fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

Certainly, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: costs of third-party security products, costs of consultants and security services companies, direct and indirect costs of losses. But as long as one is going to pay anyway, it would be better to pay to fix the problem. Forcing the software vendor to pay to fix the problem and then passing those costs on to users means that the actual problem might get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature, without any regard to security. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they are entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security is not a technological problem. It is an economics problem. And the way to improve information security is to fix the economics problem. If this is done, companies will come up with the right technological solutions that vendors will happily implement. Fail to solve the economics problem, and vendors will not bother implementing or researching any security technologies, regardless of how effective they are.

This essay previously appeared in the European Network and Information Security Agency quarterly newsletter, and is an update to a 2004 essay I wrote for Computerworld.

Posted on January 18, 2007 at 7:04 AM

Wholesale Surveillance

I had an op-ed published in the Arizona Daily Star today:

Technology is fundamentally changing the nature of surveillance. Years ago, surveillance meant trench-coated detectives following people down streets. It was laborious and expensive and was used only when there was reasonable suspicion of a crime. Modern surveillance is the policeman with a license-plate scanner, or even a remote license-plate scanner mounted on a traffic light and a policeman sitting at a computer in the station.

It’s the same, but it’s completely different. It’s wholesale surveillance. And it disrupts the balance between the powers of the police and the rights of the people.

The news hook I used was this article, about the police testing a vehicle-mounted automatic license plate scanner. Unfortunately, I got the police department wrong. It’s the Arizona State Police, not the Tucson Police.

Posted on January 11, 2007 at 1:00 PM

Choosing Secure Passwords

Ever since I wrote about the 34,000 MySpace passwords I analyzed, people have been asking how to choose secure passwords.

My piece aside, there’s been a lot written on this topic over the years—both serious and humorous—but most of it seems to be based on anecdotal suggestions rather than actual analytic evidence. What follows is some serious advice.

The attack I’m evaluating against is an offline password-guessing attack. This attack assumes that the attacker either has a copy of your encrypted document, or a server’s encrypted password file, and can try passwords as fast as he can. There are instances where this attack doesn’t make sense. ATM cards, for example, are secure even though they only have a four-digit PIN, because you can’t do offline password guessing. And the police are more likely to get a warrant for your Hotmail account than to bother trying to crack your e-mail password. Your encryption program’s key-escrow system is almost certainly more vulnerable than your password, as is any “secret question” you’ve set up in case you forget your password.

Offline password guessers have gotten both fast and smart. AccessData sells Password Recovery Toolkit, or PRTK. Depending on the software it’s attacking, PRTK can test up to hundreds of thousands of passwords per second, and it tests more common passwords sooner than obscure ones.

So the security of your password depends on two things: any details of the software that slow down password guessing, and in what order programs like PRTK guess different passwords.

Some software includes routines deliberately designed to slow down password guessing. Good encryption software doesn’t use your password as the encryption key; there’s a process that converts your password into the encryption key. And the software can make this process as slow as it wants.

The results are all over the map. Microsoft Office, for example, has a simple password-to-key conversion, so PRTK can test 350,000 Microsoft Word passwords per second on a 3-GHz Pentium 4, which is a reasonably current benchmark computer. WinZip used to be even worse—well over a million guesses per second for version 7.0—but with version 9.0, the cryptosystem’s ramp-up function has been substantially increased: PRTK can only test 900 passwords per second. PGP also makes things deliberately hard for programs like PRTK, also only allowing about 900 guesses per second.
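
That password-to-key slowdown is a standard technique, and it is visible directly in Python's standard library. PBKDF2 is one common password-based key derivation function (not necessarily the one WinZip or PGP uses); the implementer dials up the iteration count, and each extra iteration multiplies the cost of every one of the attacker's guesses. A minimal sketch:

```python
import hashlib
import time

def derive_key(password: str, salt: bytes, iterations: int) -> bytes:
    # PBKDF2-HMAC-SHA256: repeatedly hashes the password, so each
    # guess costs the attacker `iterations` hash computations.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = b"per-user-salt"
for iters in (1_000, 100_000):
    start = time.perf_counter()
    derive_key("letmein1", salt, iters)
    elapsed = time.perf_counter() - start
    print(f"{iters:>7} iterations: {elapsed * 1000:.1f} ms per guess")
```

The defender pays the conversion cost once per login; the attacker pays it once per guess, which is why a slow conversion drops PRTK from hundreds of thousands of guesses per second to hundreds.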

When attacking programs with deliberately slow ramp-ups, it’s important to make every guess count. A simple six-character lowercase exhaustive character attack, “aaaaaa” through “zzzzzz,” has more than 308 million combinations. And it’s generally unproductive, because the program spends most of its time testing improbable passwords like “pqzrwj.”
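
That 308 million figure is just 26⁶, and combining it with the 900-guesses-per-second rate quoted above shows why every wasted guess hurts:

```python
# Six lowercase letters, 26 choices per position.
combinations = 26 ** 6
print(f"{combinations:,} candidates")   # 308,915,776

# At ~900 guesses per second (the PGP rate quoted above),
# exhausting the space takes about four days.
days = combinations / 900 / 86_400
print(f"~{days:.1f} days of guessing")
```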

According to Eric Thompson of AccessData, a typical password consists of a root plus an appendage. A root isn’t necessarily a dictionary word, but it’s something pronounceable. An appendage is either a suffix (90 percent of the time) or a prefix (10 percent of the time).

So the first attack PRTK performs is to test a dictionary of about 1,000 common passwords, things like “letmein,” “password,” “123456” and so on. Then it tests them each with about 100 common suffix appendages: “1,” “4u,” “69,” “abc,” “!” and so on. Believe it or not, it recovers about 24 percent of all passwords with these 100,000 combinations.
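
The structure of that first pass is simple: take the product of a small root dictionary and a small suffix dictionary. A sketch, using tiny illustrative stand-ins rather than AccessData's actual lists:

```python
from itertools import product

# Tiny stand-ins for PRTK's real lists; ~1,000 common passwords
# crossed with ~100 common suffixes would give ~100,000 candidates.
common_roots = ["letmein", "password", "123456", "qwerty"]
common_suffixes = ["", "1", "4u", "69", "abc", "!"]

candidates = [root + suffix
              for root, suffix in product(common_roots, common_suffixes)]
print(len(candidates))   # 4 roots x 6 suffixes = 24 candidates
print(candidates[:3])
```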

Then, PRTK goes through a series of increasingly complex root dictionaries and appendage dictionaries. The root dictionaries include:

  • Common word dictionary: 5,000 entries
  • Names dictionary: 10,000 entries
  • Comprehensive dictionary: 100,000 entries
  • Phonetic pattern dictionary: 1/10,000 of an exhaustive character search

The phonetic pattern dictionary is interesting. It’s not really a dictionary; it’s a Markov-chain routine that generates pronounceable English-language strings of a given length. For example, PRTK can generate and test a dictionary of very pronounceable six-character strings, or just-barely pronounceable seven-character strings. AccessData is working on generation routines for other languages.
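
A Markov-chain generator of that kind takes only a few lines: learn which letter tends to follow which from a sample of English words, then walk the chain to emit pronounceable strings. This is a toy version of the idea, not AccessData's routine:

```python
import random
from collections import defaultdict

# Toy phonetic-pattern generator: learn letter-to-letter transitions
# from sample words, then walk the chain to build candidate strings.
sample_words = ["secure", "banana", "window", "letter", "monitor",
                "garden", "pillow", "candle", "hammer", "silver"]

transitions = defaultdict(list)
for word in sample_words:
    for a, b in zip(word, word[1:]):
        transitions[a].append(b)

def pronounceable(length: int, rng: random.Random) -> str:
    word = rng.choice(sample_words)[0]   # seed with a plausible first letter
    while len(word) < length:
        choices = transitions.get(word[-1])
        if not choices:                  # dead end: reseed and keep going
            word += rng.choice(sample_words)[0]
        else:
            word += rng.choice(choices)
    return word

rng = random.Random(0)
print([pronounceable(6, rng) for _ in range(5)])
```

A generator like this covers a far denser slice of the passwords people actually pick than brute force does, which is why it is worth 1/10,000 of an exhaustive search.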

PRTK also runs a four-character-string exhaustive search. It runs the dictionaries with lowercase (the most common), initial uppercase (the second most common), all uppercase and final uppercase. It runs the dictionaries with common substitutions: “$” for “s,” “@” for “a,” “1” for “l” and so on. Anything that’s “leet speak” is included here, like “3” for “e.”
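
Enumerating those substitution variants is mechanical: for each character, allow both the original and its "leet" replacement, and take the product. A minimal sketch with just a handful of the common substitutions (the real list is longer):

```python
from itertools import product

# A few common substitutions; PRTK's actual table is larger.
SUBS = {"a": "@", "s": "$", "l": "1", "e": "3", "o": "0"}

def substitution_variants(root: str) -> list[str]:
    options = [(ch, SUBS[ch]) if ch in SUBS else (ch,) for ch in root]
    return ["".join(chars) for chars in product(*options)]

variants = substitution_variants("sale")
print(len(variants))   # 2*2*2*2 = 16 variants, from "sale" to "$@13"
print(variants[:4])
```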

The appendage dictionaries include things like:

  • All two-digit combinations
  • All dates from 1900 to 2006
  • All three-digit combinations
  • All single symbols
  • All single digit, plus single symbol
  • All two-symbol combinations

AccessData’s secret sauce is the order in which it runs the various root and appendage dictionary combinations. The company’s research indicates that the password sweet spot is a seven- to nine-character root plus a common appendage, and that it’s much more likely for someone to choose a hard-to-guess root than an uncommon appendage.

Normally, PRTK runs on a network of computers. Password guessing is a trivially distributable task, and it can easily run in the background. A large organization like the Secret Service can easily have hundreds of computers chugging away at someone’s password. A company called Tableau is building a specialized FPGA hardware add-on to speed up PRTK for slow programs like PGP and WinZip: roughly a 150- to 300-times performance increase.

How good is all of this? Eric Thompson estimates that with a couple of weeks’ to a month’s worth of time, his software breaks 55 percent to 65 percent of all passwords. (This depends, of course, very heavily on the application.) Those results are good, but not great.

But that assumes no biographical data. Whenever it can, AccessData collects whatever personal information it can on the subject before beginning. If it can see other passwords, it can make guesses about what types of passwords the subject uses. How big a root is used? What kind of root? Does he put appendages at the end or the beginning? Does he use substitutions? ZIP codes are common appendages, so those go into the file. So do addresses, names from the address book, other passwords and any other personal information. This data ups PRTK’s success rate a bit, but more importantly it reduces the time from weeks to days or even hours.

So if you want your password to be hard to guess, you should choose something not on any of the root or appendage lists. You should mix upper and lowercase in the middle of your root. You should add numbers and symbols in the middle of your root, not as common substitutions. Or drop your appendage in the middle of your root. Or use two roots with an appendage in the middle.

Even something lower down on PRTK’s dictionary list—the seven-character phonetic pattern dictionary—together with an uncommon appendage, is not going to be guessed. Neither is a password made up of the first letters of a sentence, especially if you throw numbers and symbols in the mix. And yes, these passwords are going to be hard to remember, which is why you should use a program like the free and open-source Password Safe to store them all in. (PRTK can test only 900 Password Safe 3.0 passwords per second.)
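
The first-letters-of-a-sentence trick is simple enough to express in code. A sketch; keeping the sentence's own digits, symbols and capitalization is one arbitrary choice among many:

```python
def sentence_password(sentence: str) -> str:
    # Take the first character of each word, preserving case and
    # any digits or symbols the sentence already starts words with.
    return "".join(word[0] for word in sentence.split())

print(sentence_password("This little piggy went to market"))  # "Tlpwtm"
print(sentence_password("My son Aiden is 3 years old!"))      # "MsAi3yo"
```

The resulting string is in no root dictionary, yet the sentence itself is easy to remember.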

Even so, none of this might actually matter. AccessData sells another program, Forensic Toolkit, that, among other things, scans a hard drive for every printable character string. It looks in documents, in the Registry, in e-mail, in swap files, in deleted space on the hard drive … everywhere. And it creates a dictionary from that, and feeds it into PRTK.

And PRTK breaks more than 50 percent of passwords from this dictionary alone.

What’s happening is that the Windows operating system’s memory management leaves data all over the place in the normal course of operations. You’ll type your password into a program, and it gets stored in memory somewhere. Windows swaps the page out to disk, and it becomes the tail end of some file. It gets moved to some far out portion of your hard drive, and there it’ll sit forever. Linux and Mac OS aren’t any better in this regard.

I should point out that none of this has anything to do with the encryption algorithm or the key length. A weak 40-bit algorithm doesn’t make this attack easier, and a strong 256-bit algorithm doesn’t make it harder. These attacks simulate the process of the user entering the password into the computer, so the size of the resultant key is never an issue.

For years, I have said that the easiest way to break a cryptographic product is almost never by breaking the algorithm, that almost invariably there is a programming error that allows you to bypass the mathematics and break the product. A similar thing is going on here. The easiest way to guess a password isn’t to guess it at all, but to exploit the inherent insecurity in the underlying operating system.

This essay originally appeared on Wired.com.

Posted on January 11, 2007 at 8:04 AM

Automated Targeting System

If you’ve traveled abroad recently, you’ve been investigated. You’ve been assigned a score indicating what kind of terrorist threat you pose. That score is used by the government to determine the treatment you receive when you return to the U.S. and for other purposes as well.

Curious about your score? You can’t see it. Interested in what information was used? You can’t know that. Want to clear your name if you’ve been wrongly categorized? You can’t challenge it. Want to know what kind of rules the computer is using to judge you? That’s secret, too. So is when and how the score will be used.

U.S. customs agencies have been quietly operating this system for several years. Called Automated Targeting System, it assigns a “risk assessment” score to people entering or leaving the country, or engaging in import or export activity. This score, and the information used to derive it, can be shared with federal, state, local and even foreign governments. It can be used if you apply for a government job, grant, license, contract or other benefit. It can be shared with nongovernmental organizations and individuals in the course of an investigation. In some circumstances private contractors can get it, even those outside the country. And it will be saved for 40 years.

Little is known about this program. Its bare outlines were disclosed in the Federal Register in October. We do know that the score is partially based on details of your flight record—where you’re from, how you bought your ticket, where you’re sitting, any special meal requests—or on motor vehicle records, as well as on information from crime, watch-list and other databases.

Civil liberties groups have called the program Kafkaesque. But I have an even bigger problem with it. It’s a waste of money.

The idea of feeding a limited set of characteristics into a computer, which then somehow divines a person’s terrorist leanings, is farcical. Uncovering terrorist plots requires intelligence and investigation, not large-scale processing of everyone.

Additionally, any system like this will generate so many false alarms as to be completely unusable. In 2005 Customs & Border Protection processed 431 million people. Assuming an unrealistic model that identifies terrorists (and innocents) with 99.9% accuracy, that’s still 431,000 false alarms annually.
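That back-of-the-envelope figure is easy to verify:

```python
# False-alarm arithmetic from the paragraph above.
travelers = 431_000_000   # people processed by Customs & Border Protection in 2005
accuracy = 0.999          # the (unrealistically generous) assumed accuracy
false_alarms = travelers * (1 - accuracy)
# Roughly 431,000 innocent people flagged per year.
```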

The number of false alarms will be much higher than that. The no-fly list is filled with inaccuracies; we’ve all read about innocent people named David Nelson who can’t fly without hours-long harassment. Airline data, too, are riddled with errors.

The odds of this program’s being implemented securely, with adequate privacy protections, are not good. Last year I participated in a government working group to assess the security and privacy of a similar program developed by the Transportation Security Administration, called Secure Flight. After five years and $100 million spent, the program still can’t achieve the simple task of matching airline passengers against terrorist watch lists.

In 2002 we learned about yet another program, called Total Information Awareness, for which the government would collect information on every American and assign him or her a terrorist risk score. Congress found the idea so abhorrent that it halted funding for the program. Two years ago, and again this year, Secure Flight was also banned by Congress until it could pass a series of tests for accuracy and privacy protection.

In fact, the Automated Targeting System is arguably illegal, as well (a point several congressmen made recently); all recent Department of Homeland Security appropriations bills specifically prohibit the department from using profiling systems against persons not on a watch list.

There is something un-American about a government program that uses secret criteria to collect dossiers on innocent people and shares that information with various agencies, all without any oversight. It’s the sort of thing you’d expect from the former Soviet Union or East Germany or China. And it doesn’t make us any safer from terrorism.

This essay, without the links, was published in Forbes. They also published a rebuttal by William Baldwin, although it doesn’t seem to rebut any of the actual points.

Here’s an odd division of labor: a corporate data consultant argues for more openness, while a journalist favors more secrecy.

It’s only odd if you don’t understand security.

Posted on December 22, 2006 at 11:38 AM

Real-World Passwords

How good are the passwords people are choosing to protect their computers and online accounts?

It’s a hard question to answer because data is scarce. But recently, a colleague sent me some spoils from a MySpace phishing attack: 34,000 actual user names and passwords.

The attack was pretty basic. The attackers created a fake MySpace login page, and collected login information when users thought they were accessing their own account on the site. The data was forwarded to various compromised web servers, where the attackers would harvest it later.

MySpace estimates that more than 100,000 people fell for the attack before it was shut down. The data I have is from two different collection points, and was cleaned of the small percentage of people who realized they were responding to a phishing attack. I analyzed the data, and this is what I learned.

Password Length: While 65 percent of passwords contain eight characters or less, 17 percent are made up of six characters or less. The average password is eight characters long.

Specifically, the length distribution looks like this:

1-4 0.82 percent
5 1.1 percent
6 15 percent
7 23 percent
8 25 percent
9 17 percent
10 13 percent
11 2.7 percent
12 0.93 percent
13-32 0.93 percent

Yes, there’s a 32-character password: “1ancheste23nite41ancheste23nite4.” Other long passwords are “fool2thinkfool2thinkol2think” and “dokitty17darling7g7darling7.”
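The length buckets in the table above can be reproduced with a few lines of code — run here on a toy sample, not the actual MySpace data set:

```python
from collections import Counter

def length_distribution(passwords: list[str]) -> dict[str, float]:
    """Bucket passwords by length, as percentages, using the same
    buckets as the table above."""
    def bucket(n: int) -> str:
        if n <= 4:
            return "1-4"
        if n >= 13:
            return "13-32"
        return str(n)

    counts = Counter(bucket(len(p)) for p in passwords)
    total = len(passwords)
    return {b: 100 * c / total for b, c in counts.items()}

# Toy sample for illustration.
sample = ["abc", "password", "monkey1", "qwerty1",
          "1ancheste23nite41ancheste23nite4"]
dist = length_distribution(sample)
```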

Character Mix: While 81 percent of passwords are alphanumeric, 28 percent are just lowercase letters plus a single final digit—and two-thirds of those have the single digit 1. Only 3.8 percent of passwords are a single dictionary word, and another 12 percent are a single dictionary word plus a final digit—once again, two-thirds of the time that digit is 1.

numbers only 1.3 percent
letters only 9.6 percent
alphanumeric 81 percent
non-alphanumeric 8.3 percent

Only 0.34 percent of users have the user name portion of their e-mail address as their password.
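The four character-mix categories map directly onto Python's built-in string predicates; a minimal classifier for reproducing the table above might look like this:

```python
def char_mix(password: str) -> str:
    """Classify a password into the four categories used in the
    character-mix table above."""
    if password.isdigit():
        return "numbers only"
    if password.isalpha():
        return "letters only"
    if password.isalnum():
        return "alphanumeric"
    return "non-alphanumeric"
```

For example, `char_mix("password1")` falls into the "alphanumeric" bucket, while `char_mix("p@ss1")` is "non-alphanumeric".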

Common Passwords: The top 20 passwords are (in order): password1, abc123, myspace1, password, blink182, qwerty1, fuckyou, 123abc, baseball1, football1, 123456, soccer, monkey1, liverpool1, princess1, jordan23, slipknot1, superman1, iloveyou1 and monkey. (A different analysis is here.)

The most common password, “password1,” was used in 0.22 percent of all accounts. The frequency drops off pretty fast after that: “abc123” and “myspace1” were only used in 0.11 percent of all accounts, “soccer” in 0.04 percent and “monkey” in 0.02 percent.

For those who don’t know, Blink 182 is a band. Presumably lots of people use the band’s name because it contains numbers, which makes it seem like a good password. The band Slipknot doesn’t have any numbers in its name, which explains the 1. The password “jordan23” refers to basketball player Michael Jordan and his number. And, of course, “myspace” and “myspace1” are easy-to-remember passwords for a MySpace account. I don’t know what the deal is with monkeys.

We used to quip that “password” is the most common password. Now it’s “password1.” Who said users haven’t learned anything about security?

But seriously, passwords are getting better. I’m impressed that less than 4 percent were dictionary words and that the great majority were at least alphanumeric. Writing in 1989, Daniel Klein was able to crack (.gz) 24 percent of his sample passwords with a small dictionary of just 63,000 words, and found that the average password was 6.4 characters long.

And in 1992 Gene Spafford cracked (.pdf) 20 percent of passwords with his dictionary, and found an average password length of 6.8 characters. (Both studied Unix passwords, with a maximum length at the time of 8 characters.) And they both reported a much greater percentage of all lowercase, and only upper- and lowercase, passwords than emerged in the MySpace data. The concept of choosing good passwords is getting through, at least a little.

On the other hand, the MySpace demographic is pretty young. Another password study (.pdf) in November looked at 200 corporate employee passwords: 20 percent letters only, 78 percent alphanumeric, 2.1 percent with non-alphanumeric characters, and a 7.8-character average length. Better than 15 years ago, but not as good as MySpace users. Kids really are the future.

None of this changes the reality that passwords have outlived their usefulness as a serious security device. Over the years, password crackers have been getting faster and faster. Current commercial products can test tens—even hundreds—of millions of passwords per second. At the same time, there’s a maximum complexity to the passwords average people are willing to memorize (.pdf). Those lines crossed years ago, and typical real-world passwords are now software-guessable. AccessData’s Password Recovery Toolkit—at 200,000 guesses per second—would have been able to crack 23 percent of the MySpace passwords in 30 minutes, 55 percent in 8 hours.
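The guess budget behind those cracking figures follows directly from the rate:

```python
guesses_per_second = 200_000  # PRTK's rate, from the paragraph above

def guesses_in(hours: float) -> int:
    """Total guesses an attacker gets in the given time window."""
    return int(guesses_per_second * hours * 3600)

half_hour = guesses_in(0.5)   # 360 million guesses in 30 minutes
eight_hours = guesses_in(8)   # 5.76 billion guesses in 8 hours
```

A few hundred million tries is more than enough to exhaust the common root-plus-appendage patterns most people choose.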

Of course, this analysis assumes that the attacker can get his hands on the encrypted password file and work on it offline, at his leisure; i.e., that the same password was used to encrypt an e-mail, file or hard drive. Passwords can still work if you can prevent offline password-guessing attacks, and watch for online guessing. They’re also fine in low-value security situations, or if you choose really complicated passwords and use something like Password Safe to store them. But otherwise, security by password alone is pretty risky.

This essay originally appeared on Wired.com.

Posted on December 14, 2006 at 7:39 AM

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him utilize whatever time, equipment and expertise he needs to break it. It’s difficult to secure because breaks are generally “class breaks.” The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the software can break every other device in the same class.

This means that the system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back—the real data, and the security, are in the bank’s databases.

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in its scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like, who stole end-of-week draw tickets from unsuspecting customers. The customer would hand their ticket over the counter to be scanned to see if it was a winner. The clerk, knowing the winning numbers, would palm a non-winning ticket into the machine, tell the customer “sorry, better luck next time,” and claim the prize themselves at a later date.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM

