Schneier on Security
A blog covering security and security technology.
February 2005 Archives
A Pennsylvania Supreme Court Justice faces a fine -- although no criminal charges at the moment -- for trying to sneak a knife aboard an aircraft.
Saylor, 58, and his wife entered a security checkpoint Feb. 4 on a trip to Philadelphia when screeners found a small Swiss Army-style knife attached to his key chain.
There are two points worth making here. One: ridiculous rules have a way of turning people into criminals. And two: this is an example of a security failure, not a security success.
Security systems fail in one of two ways. They can fail to stop the bad guy, and they can mistakenly stop the good guy. The TSA likes to measure its success by counting the forbidden items it has prevented from being carried onto aircraft, but that's wrong. Every time the TSA takes a pocketknife from an innocent person, that's a security failure. It's a false alarm. The system has prevented access where no prevention was required. This, coupled with the widespread belief that the bad guys will find a way around the system, demonstrates what a colossal waste of money it is.
For a couple of years I have been arguing that liability is a way to solve the economic problems underlying our computer security problems. At the RSA conference this year, I was on a panel on that very topic.
This essay argues that regulation, not liability, is the correct way to solve the underlying economic problems, using the analogy of high-pressure steam engines in the 1800s.
Definitely worth thinking about some more.
A very nicely written analysis of the recent DMCA-related court decisions in the Lexmark and Chamberlain cases.
According to the San Francisco Chronicle:
The private firm in charge of security at San Francisco International Airport cheated to pass tests aimed at ensuring it could stop terrorists from smuggling weapons onto flights, a former employee contends.
All security systems require trusted people: people who must be trusted in order for the security to work. If the trusted people turn out not to be trustworthy, security fails.
The ChoicePoint fiasco has been news for over a week now, and there are only a few things I can add. For those who haven't been following along, ChoicePoint mistakenly sold personal credit reports for about 145,000 Americans to criminals.
This story would have never been made public if it were not for SB 1386, a California law requiring companies to notify California residents if any of a specific set of personal information is leaked.
ChoicePoint's behavior is a textbook example of how to be a bad corporate citizen. The information leakage occurred in October, and it didn't tell any victims until February. First, ChoicePoint notified 30,000 Californians and said that it would not notify anyone who lived outside California (since the law didn't require it). Finally, after public outcry, it announced that it would notify everyone affected.
The clear moral here is that first, SB 1386 needs to be a national law, since without it ChoicePoint would have covered up their mistakes forever. And second, the national law needs to force companies to disclose these sorts of privacy breaches immediately, and not allow them to hide for four months behind the "ongoing FBI investigation" shield.
More is required. Compare the difference in ChoicePoint's public marketing slogans with its private reality.
From "Identity Theft Puts Pressure on Data Sellers," by Evan Perez, in the 18 Feb 2005 Wall Street Journal:
The current investigation involving ChoicePoint began in October when the company found the 50 accounts it said were fraudulent. According to the company and police, criminals opened the accounts, posing as businesses seeking information on potential employees and customers. They paid fees of $100 to $200, and provided fake documentation, gaining access to a trove of personal data including addresses, phone numbers, and social security numbers.
From ChoicePoint Chairman and CEO Derek V. Smith:
ChoicePoint's core competency is verifying and authenticating individuals and their credentials.
The reason there is a difference is purely economic. Identity theft is the fastest-growing crime in the U.S., and an enormous problem elsewhere in the world. It's expensive -- both in money and time -- to the victims. And there's not much people can do to stop it, as much of their personal identifying information is not under their control: it's in the computers of companies like ChoicePoint.
ChoicePoint protects its data, but only to the extent that it values it. The hundreds of millions of people in ChoicePoint's databases are not ChoicePoint's customers. They have no power to switch credit agencies. They have no economic pressure that they can bring to bear on the problem. Maybe they should rename the company "NoChoicePoint."
The upshot of this is that ChoicePoint doesn't bear the costs of identity theft, so ChoicePoint doesn't take those costs into account when figuring out how much money to spend on data security. In economic terms, it's an "externality."
The point of regulation is to make externalities internal. SB 1386 did that to some extent, since ChoicePoint now must figure the cost of public humiliation when they decide how much money to spend on security. But the actual cost of ChoicePoint's security failure is much, much greater.
Until ChoicePoint feels those costs -- whether through regulation or liability -- it has no economic incentive to reduce them. Capitalism works, not through corporate charity, but through the free market. I see no other way of solving the problem.
Really excellent article from the Economist.
...despite the belief that biometrics will make crossing a border more efficient and secure, it could well have the opposite effect, as false alarms become the norm.
A high-school student used a hardware keystroke logger -- the undetectable kind that sits between the keyboard and the computer -- to steal exams in order to sell them.
Officials said the 16-year-old boy hooked up a keystroke decoder to a teacher's computer and downloaded exams in November.
"Security," by Hunter S. Thompson
CallABike offers bicycles to rent in several German cities. You register with the company, find a bike parked somewhere, and phone the company for an unlock key. You enter the key, use the bike, then park it wherever you want and lock it. The bike displays a code, and you phone the company once again, telling them this code. Thereafter, the bike is available for the next person to use it. You get charged for the time between unlock and lock.
Now read this site, from a group of hackers who claim to have changed the code in 10% of all the bikes in Berlin, which they now can use for free.
On Tuesday, I blogged about a new cryptanalytic result -- the first attack faster than brute-force against SHA-1. I wrote about SHA, and the need to replace it, last September. Aside from the details of the new attack, everything I said then still stands. I'll quote from that article, adding new material where appropriate.
One-way hash functions are a cryptographic construct used in many applications. They are used in conjunction with public-key algorithms for both encryption and digital signatures. They are used in integrity checking. They are used in authentication. They have all sorts of applications in a great many different protocols. Much more than encryption algorithms, one-way hash functions are the workhorses of modern cryptography.
Earlier this week, three Chinese cryptographers showed that SHA-1 is not collision-free. That is, they developed an algorithm for finding collisions faster than brute force.
SHA-1 produces a 160-bit hash. That is, every message hashes down to a 160-bit number. Given that there are an infinite number of messages that hash to each possible value, there are an infinite number of possible collisions. But because the number of possible hashes is so large, the odds of finding one by chance are negligibly small (one in 2^80, to be exact). If you hashed 2^80 random messages, you'd find one pair that hashed to the same value. That's the "brute force" way of finding collisions, and it depends solely on the length of the hash value. "Breaking" the hash function means being able to find collisions faster than that. And that's what the Chinese did.
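To make the birthday brute-force search concrete, here is a toy Python sketch that finds collisions in a SHA-1 hash truncated to 24 bits, so the search finishes in seconds. The message format is arbitrary; the point is that for an n-bit hash, random hashing finds a colliding pair after roughly 2^(n/2) attempts: 2^80 for full SHA-1, only about 2^12 here.

```python
# Toy birthday-attack collision search against SHA-1 truncated to 24
# bits. For an n-bit hash, a colliding pair turns up after roughly
# 2^(n/2) random messages -- 2^80 for full SHA-1, ~2^12 here.
import hashlib
import itertools

def truncated_sha1(message: bytes, bits: int = 24) -> int:
    """Return the first `bits` bits of SHA-1(message) as an integer."""
    digest = hashlib.sha1(message).digest()
    return int.from_bytes(digest, "big") >> (160 - bits)

def find_collision(bits: int = 24):
    """Hash distinct messages until two of them collide."""
    seen = {}  # truncated hash -> message that produced it
    for i in itertools.count():
        msg = f"message-{i}".encode()
        h = truncated_sha1(msg, bits)
        if h in seen:
            return seen[h], msg, i + 1
        seen[h] = msg

m1, m2, attempts = find_collision()
assert m1 != m2 and truncated_sha1(m1) == truncated_sha1(m2)
print(f"Collision after {attempts} messages: {m1!r} vs {m2!r}")
```

The attack by Wang et al. is not this brute-force search; it exploits SHA-1's internal structure to beat the 2^(n/2) bound. The sketch only illustrates the baseline that "breaking" is measured against.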
They can find collisions in SHA-1 in 2^69 calculations, about 2,000 times faster than brute force. Right now, that is just on the far edge of feasibility with current technology. Two comparable massive computations illustrate that point.
In 1999, a group of cryptographers built a DES cracker. It was able to perform 2^56 DES operations in 56 hours. The machine cost $250K to build, although duplicates could be made in the $50K-$75K range. Extrapolating that machine using Moore's Law, a similar machine built today could perform 2^60 calculations in 56 hours, and 2^69 calculations in three and a quarter years. Or, a machine that cost $25M-$38M could do 2^69 calculations in the same 56 hours.
On the software side, the main comparable is a 2^64 keysearch done by distributed.net that finished in 2002. One article put it this way: "Over the course of the competition, some 331,252 users participated by allowing their unused processor cycles to be used for key discovery. After 1,757 days (4.81 years), a participant in Japan discovered the winning key." Moore's Law means that today the calculation would have taken one quarter the time -- or have required one quarter the number of computers -- so today a 2^69 computation would take eight times as long, or require eight times the computers.
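The Moore's Law arithmetic above can be checked in a few lines of Python, assuming the conventional doubling period of 18 months:

```python
# Back-of-the-envelope check of the Moore's Law extrapolations above,
# assuming computing power doubles every 18 months.
def speedup(years: float, doubling_months: float = 18.0) -> float:
    return 2 ** (years * 12 / doubling_months)

# Hardware: the 1999 DES cracker did 2^56 operations in 56 hours.
# Six years later (2005) a comparable machine is 2^4 = 16x faster,
# i.e. 2^60 operations in 56 hours. Then 2^69 is 2^9 = 512 such runs.
assert speedup(6) == 16
hours_for_2_69 = 56 * 2 ** (69 - 60)
print(f"2^69 ops on 2005 hardware: {hours_for_2_69 / (24 * 365):.2f} years")

# Software: distributed.net's 2^64 keysearch ended in 2002. Three
# years of Moore's Law is a 4x speedup, but 2^69 is 32x more work
# than 2^64 -- a net factor of 8.
assert speedup(3) == 4
assert 2 ** (69 - 64) / speedup(3) == 8
```

The first calculation lands at about 3.3 years, matching the "three and a quarter years" figure in the text.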
The magnitude of these results depends on who you are. If you're a cryptographer, this is a huge deal. While not revolutionary, these results are substantial advances in the field. The techniques described by the researchers are likely to have other applications, and we'll be better able to design secure systems as a result. This is how the science of cryptography advances: we learn how to design new algorithms by breaking other algorithms. Additionally, algorithms from the NSA are considered a sort of alien technology: they come from a superior race with no explanations. Any successful cryptanalysis against an NSA algorithm is an interesting data point in the eternal question of how good they really are in there.
For the average Internet user, this news is not a cause for panic. No one is going to be breaking digital signatures or reading encrypted messages anytime soon. The electronic world is no less secure after these announcements than it was before.
But there's an old saying inside the NSA: "Attacks always get better; they never get worse." Just as this week's attack builds on other papers describing attacks against simplified versions of SHA-1, SHA-0, MD4, and MD5, other researchers will build on this result. The attack against SHA-1 will continue to improve, as others read about it and develop faster tricks, optimizations, etc. And Moore's Law will continue to march forward, making even the existing attack faster and more affordable.
Jon Callas, PGP's CTO, put it best: "It's time to walk, but not run, to the fire exits. You don't see smoke, but the fire alarms have gone off." That's basically what I said last August.
It's time for us all to migrate away from SHA-1.
Hash functions are the least-well-understood cryptographic primitive, and hashing techniques are much less developed than encryption techniques. Regularly there are surprising cryptographic results in hashing. I have a paper, written with John Kelsey, that describes an algorithm to find second preimages with SHA-1 -- a technique that generalizes to almost all other hash functions -- in 2^106 calculations: much less than the 2^160 calculations for brute force. This attack is completely theoretical and not even remotely practical, but it demonstrates that we still have a lot to learn about hashing.
It is clear from rereading what I wrote last September that I expected this to happen, but not nearly this quickly and not nearly this impressively. The Chinese cryptographers deserve a lot of credit for their work, and we need to get to work replacing SHA.
This is from Richard M. Smith:
Tukwila, Washington firefighter Philip Scott Lyons found out the hard way that supermarket loyalty cards come with a huge price. Lyons was arrested last August and charged with attempted arson. Police alleged at the time that Lyons tried to set fire to his own house while his wife and children were inside. According to KOMO-TV and the Seattle Times, a major piece of evidence used against Lyons in his arrest was the record of his supermarket purchases that he made with his Safeway Club Card. Police investigators had discovered that his Club Card was used to buy fire starters of the same type used in the arson attempt.
From the Associated Press:
Microsoft Corp. plans to severely curtail the ways in which people running pirated copies of its dominant Windows operating system can receive software updates, including security fixes.
I've written about this before. Unpatched Windows systems on the Internet are a security risk to everyone. I understand Microsoft wanting to fight piracy, but reducing the security of its paying customers is not a good way to go about it.
A long time ago I wrote about the security risks of Unicode. This is an example of the problem.
Here's a demo: it's a Web page that appears to be www.paypal.com but is not PayPal. Everything from the address bar to the hover-over status on the link says www.paypal.com.
It works by substituting a Unicode character for the second "a" in PayPal. That Unicode character happens to look like an English "a," but it's not an "a." The attack works even under SSL.
Here's the source code of the link: http://www.p&#1072;ypal.com/
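A minimal Python illustration of the substitution. The character names come straight from the Unicode database; the mixed-script check at the end is just one simple heuristic for spotting homograph labels, not a description of any particular browser's defense:

```python
# The homograph trick above: Cyrillic "a" (U+0430) is visually
# identical to Latin "a" (U+0061), but the strings compare unequal,
# so "www.p\u0430ypal.com" is a different domain from www.paypal.com.
import unicodedata

real = "www.paypal.com"        # all Latin letters
fake = "www.p\u0430ypal.com"   # second "a" is Cyrillic U+0430

assert real != fake                  # different strings...
print(unicodedata.name("\u0430"))    # CYRILLIC SMALL LETTER A
print(unicodedata.name("a"))         # LATIN SMALL LETTER A

# One simple detection heuristic: flag labels that mix scripts.
def scripts(label: str) -> set:
    return {unicodedata.name(c).split()[0] for c in label if c.isalpha()}

assert scripts("p\u0430ypal") == {"CYRILLIC", "LATIN"}  # suspicious
assert scripts("paypal") == {"LATIN"}                    # fine
```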
SHA-1 has been broken. Not a reduced-round version. Not a simplified version. The real thing.
The research team of Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu (mostly from Shandong University in China) have been quietly circulating a paper describing their results:
This attack builds on previous attacks on SHA-0 and SHA-1, and is a major, major cryptanalytic result. It pretty much puts a bullet into SHA-1 as a hash function for digital signatures (although it doesn't affect applications such as HMAC where collisions aren't important).
The paper isn't generally available yet. At this point I can't tell if the attack is real, but the paper looks good and this is a reputable research team.
More details when I have them.
Update: See here
I'm at the RSA Conference here in San Francisco. Is anyone else here? What's interesting on the show floor? Anything?
And what did you all think of Bill Gates's speech this morning?
This is a really interesting technical report from Microsoft. It describes a clever prototype -- called GhostBuster -- they developed for detecting arbitrary persistent and stealthy software, such as rootkits, Trojans, and software keyloggers. It's a really elegant idea, based on a simple observation: the rootkit must exist on disk to be persistent, but must lie to programs running within the infected OS in order to hide.
Here's how it works: The user has the GhostBuster program on a CD. He sticks the CD in the drive, and from within the (possibly corrupted) OS, the checker program runs: stopping all other user programs, flushing the caches, and then doing a complete checksum of all files on the disk and a scan of any registry keys that could autostart the system, writing out the results to a file on the hard drive.
Then the user is instructed to press the reset button, the CD boots its own OS, and the scan is repeated. Any differences indicate a rootkit or other stealth software, without the need for knowing what particular rootkits are or the proper checksums for the programs installed on disk.
Simple. Clever. Elegant.
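The cross-view diff at the heart of the idea can be sketched in a few lines of Python. This is an illustration of the concept, not Microsoft's implementation, and it covers only the file-checksum half (not the registry scan):

```python
# A minimal sketch of GhostBuster's "cross-view diff": checksum every
# file once from inside the (possibly lying) OS and once from a clean
# boot, then compare the two views. A persistent rootkit must exist on
# disk, so it shows up in the clean scan but not the infected one.
import hashlib
import os

def scan(root: str) -> dict:
    """Map each file path under `root` to its SHA-1 checksum."""
    result = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    result[path] = hashlib.sha1(f.read()).hexdigest()
            except OSError:
                pass  # unreadable file; skip it
    return result

def cross_view_diff(inside_scan: dict, clean_scan: dict) -> set:
    """Files hidden from, or misreported to, the running OS."""
    suspicious = set()
    for path, checksum in clean_scan.items():
        if inside_scan.get(path) != checksum:
            suspicious.add(path)  # hidden or altered in the infected view
    return suspicious
```

Any path that appears in the clean scan but is missing or altered in the inside scan is exactly the signature of a stealthy, persistent file.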
In order to fool GhostBuster, the rootkit must 1) detect that such a checking program is running and either not lie to it or change the output as it's written to disk (in the limit this becomes the halting problem for the rootkit designer), 2) integrate into the BIOS rather than the OS (tricky, platform specific, and not always possible), or 3) give up on either being persistent or stealthy. Thus this doesn't eliminate rootkits entirely, but is a pretty mortal blow to persistent rootkits.
Of course, the concept could be adopted for any other operating system as well.
This is a great idea, but there's a huge problem. GhostBuster is only a research prototype, so you can't get a copy. And, even worse, Microsoft has no plans to turn it into a commercial tool.
This is too good an idea to abandon. Microsoft, if you're listening, you should release this tool to the world. Make it public domain. Make it open source, even. It's a great idea, and you deserve credit for coming up with it.
Any other security companies listening? Make and sell one of these. Anyone out there looking for an open source project? Here's a really good one.
Note: I have no idea if Microsoft patented this idea. If they did and they don't release it, shame on them. If they didn't, good for them.
For at least seven months last year, a hacker had access to T-Mobile's customer network. He's known to have accessed information belonging to 400 customers -- names, Social Security numbers, voicemail messages, SMS messages, photos -- and probably had the ability to access data belonging to any of T-Mobile's 16.3 million U.S. customers. But in its fervor to report on the security of cell phones, and T-Mobile in particular, the media missed the most important point of the story: The security of much of our data is not under our control.
This is new. A dozen years ago, if someone wanted to look through your mail, they would have to break into your house. Now they can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your house; now it's on a computer owned by a telephone company. Your financial data is on Websites protected only by passwords. The list of books you browse, and the books you buy, is stored in the computers of some online bookseller. Your affinity card allows your supermarket to know what food you like. Data that used to be under your direct control is now controlled by others.
We have no choice but to trust these companies with our privacy, even though the companies have little incentive to protect that privacy. T-Mobile suffered some bad press for its lousy security, nothing more. It'll spend some money improving its security, but it'll be security designed to protect its reputation from bad PR, not security designed to protect the privacy of its customers.
This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as that data is held by others. The police need a warrant to read the e-mail on your computer; but they don't need one to read it off the backup tapes at your ISP. According to the Supreme Court, that's not a search as defined by the Fourth Amendment.
This isn't a technology problem, it's a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don't have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant -- even though it occurred at the phone company switching office -- the Supreme Court must recognize that reading e-mail at an ISP is no different.
It's happened to all of us: We sign up for some online account, choose a difficult-to-remember and hard-to-guess password, and are then presented with a "secret question" to answer. Twenty years ago, there was just one secret question: "What's your mother's maiden name?" Today, there are more: "What street did you grow up on?" "What's the name of your first pet?" "What's your favorite color?" And so on.
The point of all these questions is the same: a backup password. If you forget your password, the secret question can verify your identity so you can choose another password or have the site e-mail your current password to you. It's a great idea from a customer service perspective -- a user is less likely to forget his first pet's name than some random password -- but terrible for security. The answer to the secret question is much easier to guess than a good password, and the information is much more public. (I'll bet the name of my family's first pet is in some database somewhere.) And even worse, everybody seems to use the same series of secret questions.
The result is the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.
What can one do? My usual technique is to type a completely random answer -- I madly slap at my keyboard for a few seconds -- and then forget about it. This ensures that some attacker can't bypass my password and try to guess the answer to my secret question, but is pretty unpleasant if I forget my password. The one time this happened to me, I had to call the company to get my password and question reset. (Honestly, I don't remember how I authenticated myself to the customer service rep at the other end of the phone line.)
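For the curious, here's a slightly more disciplined version of the keyboard-mashing trick, sketched in Python; the function name and answer length are arbitrary choices:

```python
# A tidier version of the "mash the keyboard" trick above: generate a
# high-entropy throwaway answer for each secret question, so the
# question can't be used to bypass the real password.
import secrets
import string

def throwaway_answer(length: int = 24) -> str:
    """A random alphanumeric answer: ~6 bits of entropy per character."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

answer = throwaway_answer()
assert len(answer) == 24
assert answer != throwaway_answer()  # vanishingly unlikely to repeat
```

Of course, this has exactly the drawback described above: if you forget the real password, the random answer won't help you either.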
Which is maybe what should have happened in the first place. I like to think that if I forget my password, it should be really hard to gain access to my account. I want it to be so hard that an attacker can't possibly do it. I know this is a customer service issue, but it's a security issue too. And if the password is controlling access to something important -- like my bank account -- then the bypass mechanism should be harder, not easier.
Passwords have reached the end of their useful life. Today, they only work for low-security applications. The secret question is just one manifestation of that fact.
This essay originally appeared on Computerworld.
No, really. It's liquid with a unique identifier that is linked to a particular owner.
Forensic Coding combined with microdot technology.
The idea is for me to paint this stuff on my valuables as proof of ownership. I think a better idea would be for me to paint it on your valuables, and then call the police.
There's a security problem with many Internet authentication systems that's never talked about: there's no way to terminate the authentication.
A couple of months ago, I bought something from an e-commerce site. At the checkout page, I wasn't able to just type in my credit-card number and make my purchase. Instead, I had to choose a username and password. Usually I don't like doing that, but in this case I wanted to be able to access my account at a later date. In fact, the password was useful because I needed to return an item I purchased.
Months have passed, and I no longer want an ongoing relationship with the e-commerce site. I don't want a username and password. I don't want them to have my credit-card number on file. I've received my purchase, I'm happy, and I'm done. But because that username and password have no expiration date associated with them, they never end. It's not a subscription service, so there's no mechanism to sever the relationship. I will have access to that e-commerce site for as long as it remembers that username and password.
In other words, I am liable for that account forever.
Traditionally, passwords have indicated an ongoing relationship between a user and some computer service. Sometimes it's a company employee and the company's servers. Sometimes it's an account and an ISP. In both cases, both parties want to continue the relationship, so expiring a password and then forcing the user to choose another is a matter of security.
In cases with this ongoing relationship, the security consideration is damage minimization. Nobody wants some bad guy to learn the password, and everyone wants to minimize the amount of damage he can do if he does. Regularly changing your password is a solution to that problem.
This approach works because both sides want it to; they both want to keep the authentication system working correctly, and minimize attacks.
There's nothing wrong with that, but a username and password that never expire are another matter entirely. The e-commerce site wants me to establish an account because it increases the chances that I'll use them again. But I want a way to terminate the business relationship, a way to say: "I am no longer taking responsibility for items purchased using that username and password."
Near as I can tell, the username and password I typed into that e-commerce site puts my credit card at risk until it expires. If the e-commerce site uses a system that debits amounts from my checking account whenever I place an order, I could be at risk forever. (The US has legal liability limits, but they're not that useful. According to Regulation E, the electronic transfers regulation, a fraudulent transaction must be reported within two days to cap liability at US$50; within 60 days, it's capped at $500. Beyond that, you're out of luck.)
This is wrong. Every e-commerce site should have a way to purchase items without establishing a username and password. I like sites that allow me to make a purchase as a "guest," without setting up an account.
But just as importantly, every e-commerce site should have a way for customers to terminate their accounts and should allow them to delete their usernames and passwords from the system. It's okay to market to previous customers. It's not okay to needlessly put them at financial risk.
This essay also appeared in the Jan/Feb 05 issue of IEEE Security & Privacy.
This story is interesting:
A Miami businessman is suing Bank of America over $90,000 he says was stolen from his online banking account in a case that highlights the thorny question of who is responsible when a customer's computer is hacked into.
The typical press coverage of this story is along the lines of "Bank of America sued because customer's PC was hacked." But that's not it. Bank of America is being sued because they allowed an unauthorized transaction to occur, and they're not making good on that mistake. The transaction happened to occur because the customer's PC was hacked.
I know nothing about the actual suit and its merits, but this is a problem that is not going away. And while I think that banks should not be held responsible for what's on their customers' machines, they should be held responsible for allowing unauthorized transactions to occur. The bank's internal systems, however set up, for whatever reason, permitted the fraudulent transaction.
There is a simple economic incentive problem here. As long as the banks are not responsible for financial losses from fraudulent transactions over the Internet, banks have no incentive to improve security. But if banks are held responsible for these transactions, you can bet that they won't allow such shoddy security.
Slate has published a method for anyone to fly on anyone else's ticket.
I wrote about this exact vulnerability a year and a half ago.
The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass, and the computer record. Think of them as three points on the triangle. Under the current system, the boarding pass is compared to the traveler's identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record -- the third leg of the triangle -- it's possible to create two different boarding passes and have no one notice. That's why the attack works.
In an attempt to protect us from terrorism, there are new restrictions on fertilizer sales in Kansas (and elsewhere):
Under the rules, retailers would have to obtain the name, address and telephone and driver's license number of purchasers of ammonium nitrate fertilizer and maintain records, including the date of the sale and the amount purchased, for at least two years.
The Australian bank Suncorp has just updated its terms and conditions for Internet banking. They have a maximum withdrawal limit, hint about a physical access token, and require customers to use the most vulnerability-laden browser:
"suitable software" means Internet Explorer 5.5 Service Pack 2 or above or Netscape Navigator 6.1 or above running on Windows 98/ME/NT/2000/XP with anti-virus software or other software approved by us.
I have no idea if this is real or not. But even if it's not real, it's just a matter of time before it becomes real. How long before people can surreptitiously have RFID tags injected into them?
What is the ID SNIPER rifle?
Edited to add: This is a hoax.
There's a conference in Washington, DC, in March that explores technologies for intelligence and terrorism prevention.
The 4th Annual Government Convention on Emerging Technologies will focus on the impact of the Intelligence Reform and Terrorism Prevention Act signed into law by President Bush in December 2004.
There's a lot of interesting stuff on the agenda, including some classified sessions. I'm especially interested in this track:
Track Two: Attaining Tailored Persistence
What does "persistent surveillance" mean, anyway?
SC Magazine is reporting on a virus that infects Lexus cars:
Lexus cars may be vulnerable to viruses that infect them via mobile phones. Landcruiser 100 models LX470 and LS430 have been discovered with infected operating systems that transfer within a range of 15 feet.
It seems that no one has done this yet, and the story is based on speculation that a cell phone can transfer a virus to the Lexus using Bluetooth. But it's only a matter of time before something like this actually works.
As this photo shows, it doesn't matter what kind of security you implement if it's easy to get around.