Blog: February 2005 Archives

Sneaking Items Aboard Aircraft

A Pennsylvania Supreme Court Justice faces a fine—although no criminal charges at the moment—for trying to sneak a knife aboard an aircraft.

Saylor, 58, and his wife entered a security checkpoint Feb. 4 on a trip to Philadelphia when screeners found a small Swiss Army-style knife attached to his key chain.

A police report said he was told the item could not be carried onto a plane and that he needed to place the knife into checked luggage or make other arrangements.

When Saylor returned a short time later to be screened a second time, an X-ray machine detected a knife inside his carry-on luggage, police said.

There are two points worth making here. One: ridiculous rules have a way of turning people into criminals. And two: this is an example of a security failure, not a security success.

Security systems fail in one of two ways. They can fail to stop the bad guy, and they can mistakenly stop the good guy. The TSA likes to measure its success by looking at the forbidden items they have prevented from being carried onto aircraft, but that’s wrong. Every time the TSA takes a pocketknife from an innocent person, that’s a security failure. It’s a false alarm. The system has prevented access where no prevention was required. This, coupled with the widespread belief that the bad guys will find a way around the system, demonstrates what a colossal waste of money it is.

Posted on February 28, 2005 at 8:00 AM • 48 Comments

Regulation, Liability, and Computer Security

For a couple of years I have been arguing that liability is a way to solve the economic problems underlying computer security. At the RSA Conference this year, I was on a panel on that very topic.

This essay argues that regulation, not liability, is the correct way to solve the underlying economic problems, using the analogy of high-pressure steam engines in the 1800s.

Definitely worth thinking about some more.

Posted on February 25, 2005 at 8:00 AM • 13 Comments

Airport Screeners Cheat to Pass Tests

According to the San Francisco Chronicle:

The private firm in charge of security at San Francisco International Airport cheated to pass tests aimed at ensuring it could stop terrorists from smuggling weapons onto flights, a former employee contends.

All security systems require trusted people: people who must be trusted in order for the security to work. If the trusted people turn out not to be trustworthy, security fails.

Posted on February 24, 2005 at 8:00 AM • 15 Comments

ChoicePoint

The ChoicePoint fiasco has been news for over a week now, and there are only a few things I can add. For those who haven’t been following along, ChoicePoint mistakenly sold the personal credit reports of about 145,000 Americans to criminals.

This story would never have been made public were it not for SB 1386, a California law requiring companies to notify California residents if any of a specific set of personal information is leaked.

ChoicePoint’s behavior is a textbook example of how to be a bad corporate citizen. The information leakage occurred in October, but the company didn’t tell any victims until February. Even then, ChoicePoint notified only 30,000 Californians and said that it would not notify anyone who lived outside California (since the law didn’t require it). Only after public outcry did it announce that it would notify everyone affected.

The clear moral here is that, first, SB 1386 needs to be a national law, since without it ChoicePoint would have covered up its mistakes forever. And second, the national law needs to force companies to disclose these sorts of privacy breaches immediately, and not allow them to hide for four months behind the “ongoing FBI investigation” shield.

More is required. Compare ChoicePoint’s public marketing slogans with its private reality.

From “Identity Theft Puts Pressure on Data Sellers,” by Evan Perez, in the 18 Feb 2005 Wall Street Journal:

The current investigation involving ChoicePoint began in October when the company found the 50 accounts it said were fraudulent. According to the company and police, criminals opened the accounts, posing as businesses seeking information on potential employees and customers. They paid fees of $100 to $200, and provided fake documentation, gaining access to a trove of personal data including addresses, phone numbers, and Social Security numbers.

From ChoicePoint Chairman and CEO Derek V. Smith:

ChoicePoint’s core competency is verifying and authenticating individuals and their credentials.

The reason there is a difference is purely economic. Identity theft is the fastest-growing crime in the U.S., and an enormous problem elsewhere in the world. It’s expensive—both in money and time—to the victims. And there’s not much people can do to stop it, as much of their personal identifying information is not under their control: it’s in the computers of companies like ChoicePoint.

ChoicePoint protects its data, but only to the extent that it values it. The hundreds of millions of people in ChoicePoint’s databases are not ChoicePoint’s customers. They have no power to switch credit agencies. They have no economic pressure that they can bring to bear on the problem. Maybe they should rename the company “NoChoicePoint.”

The upshot of this is that ChoicePoint doesn’t bear the costs of identity theft, so ChoicePoint doesn’t take those costs into account when figuring out how much money to spend on data security. In economic terms, it’s an “externality.”

The point of regulation is to make externalities internal. SB 1386 did that to some extent, since ChoicePoint now must figure the cost of public humiliation when it decides how much money to spend on security. But the actual cost of ChoicePoint’s security failure is much, much greater.

Until ChoicePoint feels those costs—whether through regulation or liability—it has no economic incentive to reduce them. Capitalism works, not through corporate charity, but through the free market. I see no other way of solving the problem.

Posted on February 23, 2005 at 3:19 PM • 40 Comments

Hacking a Bicycle Rental System

CallABike offers bicycles for rent in several German cities. You register with the company, find a bike parked somewhere, and phone the company for an unlock key. You enter the key, use the bike, then park it wherever you want and lock it. The bike displays a code, and you phone the company once again to report that code. Thereafter, the bike is available for the next person. You get charged for the time between unlock and lock.

Clever system.
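The flow is easy to model. Here is a minimal sketch in Python (all names are hypothetical; this is my reading of the scheme, not CallABike’s code):

    import secrets

    class RentalService:
        def __init__(self):
            self.active = {}  # bike_id -> (customer, unlock key)

        def request_unlock(self, bike_id, customer):
            # Customer phones in; the service issues an unlock key
            # and starts the clock.
            key = secrets.token_hex(3)
            self.active[bike_id] = (customer, key)
            return key

        def report_lock(self, bike_id, lock_code):
            # Customer phones in the code the bike displayed after
            # locking; the service stops the clock and bills.
            customer, _ = self.active.pop(bike_id)
            print(f"billing {customer} for bike {bike_id} (code {lock_code})")

Note that the company never talks to the bike directly: everything it knows comes from codes generated and displayed by the bike’s own firmware. That trust assumption is what makes the following attack possible.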

Now read this site, from a group of hackers who claim to have changed the code in 10% of all the bikes in Berlin, which they can now use for free.

Posted on February 21, 2005 at 8:00 AM • 13 Comments

Cryptanalysis of SHA-1

On Tuesday, I blogged about a new cryptanalytic result—the first attack faster than brute-force against SHA-1. I wrote about SHA, and the need to replace it, last September. Aside from the details of the new attack, everything I said then still stands. I’ll quote from that article, adding new material where appropriate.

One-way hash functions are a cryptographic construct used in many applications. They are used in conjunction with public-key algorithms for both encryption and digital signatures. They are used in integrity checking. They are used in authentication. They have all sorts of applications in a great many different protocols. Much more than encryption algorithms, one-way hash functions are the workhorses of modern cryptography.

In 1990, Ron Rivest invented the hash function MD4. In 1992, he improved on MD4 and developed another hash function: MD5. In 1993, the National Security Agency published a hash function very similar to MD5, called SHA (Secure Hash Algorithm). Then, in 1995, citing a newly discovered weakness that it refused to elaborate on, the NSA made a change to SHA. The new algorithm was called SHA-1. Today, the most popular hash function is SHA-1, with MD5 still being used in older applications.

One-way hash functions are supposed to have two properties. One, they’re one way. This means that it is easy to take a message and compute the hash value, but it’s impossible to take a hash value and recreate the original message. (By “impossible” I mean “can’t be done in any reasonable amount of time.”) Two, they’re collision free. This means that it is impossible to find two messages that hash to the same hash value. The cryptographic reasoning behind these two properties is subtle, and I invite curious readers to learn more in my book Applied Cryptography.
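Both properties are easy to illustrate in Python with the standard hashlib module (SHA-1 here only because it is the subject of this essay):

    import hashlib

    # Computing a hash is fast and deterministic.
    print(hashlib.sha1(b"Attack at dawn").hexdigest())

    # One way: recovering "Attack at dawn" from that 160-bit digest
    # should be infeasible.  Collision free: finding any two distinct
    # messages with the same digest should be infeasible.  Note how a
    # one-word change produces an unrelated-looking digest:
    print(hashlib.sha1(b"Attack at dusk").hexdigest())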

Breaking a hash function means showing that either—or both—of those properties are not true.

Earlier this week, three Chinese cryptographers showed that SHA-1 is not collision-free. That is, they developed an algorithm for finding collisions faster than brute force.

SHA-1 produces a 160-bit hash. That is, every message hashes down to a 160-bit number. Given that there are an infinite number of messages that hash to each possible value, there are an infinite number of possible collisions. But because the number of possible hashes is so large, the odds of finding one by chance are negligibly small (one in 2^80, to be exact). If you hashed 2^80 random messages, you’d find one pair that hashed to the same value. That’s the “brute force” way of finding collisions, and it depends solely on the length of the hash value. “Breaking” the hash function means being able to find collisions faster than that. And that’s what the Chinese did.
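That brute-force figure is the “birthday bound”: for an n-bit hash, collecting roughly 2^(n/2) random hashes gives an even chance that some pair collides. A quick sanity check of the number:

    from math import isqrt

    # Birthday bound: hashing about sqrt(N) random messages yields an
    # even chance of a collision among N = 2**160 possible digests.
    print(isqrt(2 ** 160) == 2 ** 80)  # True: hence brute force = 2**80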

They can find collisions in SHA-1 in 2^69 calculations, about 2,000 times faster than brute force. Right now, that is just on the far edge of feasibility with current technology. Two comparable massive computations illustrate that point.

In 1999, a group of cryptographers built a DES cracker. It was able to perform 2^56 DES operations in 56 hours. The machine cost $250K to build, although duplicates could be made in the $50K-$75K range. Extrapolating that machine using Moore’s Law, a similar machine built today could perform 2^60 calculations in 56 hours, and 2^69 calculations in three and a quarter years. Or, a machine that cost $25M-$38M could do 2^69 calculations in the same 56 hours.

On the software side, the main comparable is a 2^64 keysearch done by distributed.net that finished in 2002. One article put it this way: “Over the course of the competition, some 331,252 users participated by allowing their unused processor cycles to be used for key discovery. After 1,757 days (4.81 years), a participant in Japan discovered the winning key.” Moore’s Law means that today the calculation would have taken one quarter the time—or have required one quarter the number of computers—so today a 2^69 computation would take eight times as long, or require eight times the computers.
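The arithmetic behind all of these comparisons is just powers of two. A back-of-the-envelope check of the figures quoted above:

    # Work factors from the discussion above, in log2 units.
    brute_force = 80  # birthday bound for a 160-bit hash
    attack      = 69  # the new collision attack
    cracker_05  = 60  # a 2005-era DES-cracker-style machine: 2**60 per 56 hours
    keysearch   = 64  # distributed.net's keysearch, finished in 2002

    print(2 ** (brute_force - attack))     # 2048: "about 2,000 times faster"
    hours = 2 ** (attack - cracker_05) * 56
    print(hours / (24 * 365))              # ~3.27: "three and a quarter years"
    print(2 ** (attack - keysearch) / 4)   # 8.0: "eight times as long" after
                                           # two Moore's Law doublings since 2002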

The magnitude of these results depends on who you are. If you’re a cryptographer, this is a huge deal. While not revolutionary, these results are substantial advances in the field. The techniques described by the researchers are likely to have other applications, and we’ll be better able to design secure systems as a result. This is how the science of cryptography advances: we learn how to design new algorithms by breaking other algorithms. Additionally, algorithms from the NSA are considered a sort of alien technology: they come from a superior race with no explanations. Any successful cryptanalysis against an NSA algorithm is an interesting data point in the eternal question of how good they really are in there.

For the average Internet user, this news is not a cause for panic. No one is going to be breaking digital signatures or reading encrypted messages anytime soon. The electronic world is no less secure after these announcements than it was before.

But there’s an old saying inside the NSA: “Attacks always get better; they never get worse.” Just as this week’s attack builds on other papers describing attacks against simplified versions of SHA-1, SHA-0, MD4, and MD5, other researchers will build on this result. The attack against SHA-1 will continue to improve, as others read about it and develop faster tricks, optimizations, etc. And Moore’s Law will continue to march forward, making even the existing attack faster and more affordable.

Jon Callas, PGP’s CTO, put it best: “It’s time to walk, but not run, to the fire exits. You don’t see smoke, but the fire alarms have gone off.” That’s basically what I said last August.

It’s time for us all to migrate away from SHA-1.

Luckily, there are alternatives. The National Institute of Standards and Technology already has standards for longer—and harder to break—hash functions: SHA-224, SHA-256, SHA-384, and SHA-512. They’re already government standards, and can already be used. This is a good stopgap, but I’d like to see more.

I’d like to see NIST orchestrate a worldwide competition for a new hash function, like they did for the new encryption algorithm, AES, to replace DES. NIST should issue a call for algorithms, and conduct a series of analysis rounds, where the community analyzes the various proposals with the intent of establishing a new standard.

Most of the hash functions we have, and all the ones in widespread use, are based on the general principles of MD4. Clearly we’ve learned a lot about hash functions in the past decade, and I think we can start applying that knowledge to create something even more secure.

Hash functions are the least-well-understood cryptographic primitive, and hashing techniques are much less developed than encryption techniques. Regularly there are surprising cryptographic results in hashing. I have a paper, written with John Kelsey, that describes an algorithm to find second preimages with SHA-1—a technique that generalizes to almost all other hash functions—in 2^106 calculations: much less than the 2^160 calculations for brute force. This attack is completely theoretical and not even remotely practical, but it demonstrates that we still have a lot to learn about hashing.

It is clear from rereading what I wrote last September that I expected this to happen, but not nearly this quickly and not nearly this impressively. The Chinese cryptographers deserve a lot of credit for their work, and we need to get to work replacing SHA.

Posted on February 18, 2005 at 11:24 PM • 105 Comments

Security Risks of Frequent-Shopper Cards

This is from Richard M. Smith:

Tukwila, Washington, firefighter Philip Scott Lyons found out the hard way that supermarket loyalty cards come with a huge price. Lyons was arrested last August and charged with attempted arson. Police alleged at the time that Lyons tried to set fire to his own house while his wife and children were inside. According to KOMO-TV and the Seattle Times, a major piece of evidence used against Lyons in his arrest was the record of the supermarket purchases he made with his Safeway Club Card. Police investigators had discovered that his Club Card was used to buy fire starters of the same type used in the arson attempt.

For Lyons, the story did have a happy ending. All charges against him were dropped in January 2005 because another person stepped forward, saying that he, not Lyons, had set the fire. Lyons is now back at work after more than 5 months of administrative leave from his firefighter job.

The moral of this story is that even the most innocent database can be used against a person in a criminal investigation, turning his life completely upside down.

Safeway needs to be more up-front with customers about the potential downsides of shopper cards. They should also provide the details of their role in the arrest of Mr. Lyons and other criminal cases in which the company provided Club Card purchase information to police investigators.

Here is how Safeway currently describes their Club Card program in the Club Card application:

We respect your privacy. Safeway does not sell or lease personally identifying information (i.e., your name, address, telephone number, and bank and credit card account numbers) to non-affiliated companies or entities. We do record information regarding the purchases made with your Safeway Club Card to help us provide you with special offers and other information. Safeway also may use this information to provide you with personally tailored coupons, offers or other information that may be provided to Safeway by other companies. If you do not wish to receive personally tailored coupons, offers or other information, please check the box below. Must be at least 18 years of age.

Links:

Firefighter Arrested For Attempted Arson

Fireman attempted to set fire to house, charges say

Tukwila Firefighter Cleared Of Arson Charges

Posted on February 18, 2005 at 8:00 AM • 20 Comments

Pirated Windows to Remain Unpatched

From the Associated Press:

Microsoft Corp. plans to severely curtail the ways in which people running pirated copies of its dominant Windows operating system can receive software updates, including security fixes.

The new authentication system, announced Tuesday and due to arrive by midyear, will still allow people with pirated copies of Windows to obtain security fixes, but their options will be limited. The move allows Microsoft to use one of its sharpest weapons—access to security patches that can prevent viruses, worms and other crippling attacks—to thwart a costly and meddlesome piracy problem.

I’ve written about this before. Unpatched Windows systems on the Internet are a security risk to everyone. I understand Microsoft wanting to fight piracy, but reducing the security of its paying customers is not a good way to go about it.

Posted on February 17, 2005 at 8:00 AM • 43 Comments

Unicode URL Hack

A long time ago I wrote about the security risks of Unicode. This is an example of the problem.

Here’s a demo: it’s a Web page that appears to be www.paypal.com but is not PayPal. Everything from the address bar to the hover-over status on the link says www.paypal.com.

It works by substituting a Unicode character for the second “a” in PayPal. That Unicode character happens to look like an English “a,” but it’s not an “a.” The attack works even under SSL.

Here’s the source code of the link: http://www.p&#1072;ypal.com/
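The substitution is easy to demonstrate; a few lines of Python (\u0430 is the Cyrillic character from the link above):

    real = "www.paypal.com"
    fake = "www.p\u0430ypal.com"  # U+0430, CYRILLIC SMALL LETTER A

    print(fake)                   # displays just like the real thing
    print(real == fake)           # False: the code points differ
    print(fake.encode("idna"))    # the punycode name actually resolved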

Secunia has some information on how to fix this vulnerability. So does BoingBoing.

Posted on February 16, 2005 at 9:17 AM • 28 Comments

SHA-1 Broken

SHA-1 has been broken. Not a reduced-round version. Not a simplified version. The real thing.

The research team of Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu (mostly from Shandong University in China) have been quietly circulating a paper describing their results:

  • collisions in the full SHA-1 in 2**69 hash operations, much less than the brute-force attack of 2**80 operations based on the hash length.
  • collisions in SHA-0 in 2**39 operations.
  • collisions in 58-round SHA-1 in 2**33 operations.

This attack builds on previous attacks on SHA-0 and SHA-1, and is a major, major cryptanalytic result. It pretty much puts a bullet into SHA-1 as a hash function for digital signatures (although it doesn’t affect applications such as HMAC where collisions aren’t important).
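The HMAC caveat is worth spelling out: HMAC’s security rests on the hash behaving like a keyed pseudorandom function, not on collision resistance, so an attacker who can manufacture colliding message pairs still can’t forge a tag without the secret key. A sketch in Python:

    import hashlib, hmac

    key = b"shared secret"
    tag = hmac.new(key, b"pay Alice $100", hashlib.sha1).hexdigest()
    print(tag)

    # A collision attack lets an adversary craft two messages that
    # SHA-1 hashes identically, but predicting or matching this tag
    # still requires the key, which is why HMAC-SHA1 is not (yet)
    # endangered by this result.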

The paper isn’t generally available yet. At this point I can’t tell if the attack is real, but the paper looks good and this is a reputable research team.

More details when I have them.

Update: See here

Posted on February 15, 2005 at 7:15 PM • 136 Comments

GhostBuster

This is a really interesting technical report from Microsoft. It describes a clever prototype—called GhostBuster—they developed for detecting arbitrary persistent and stealthy software, such as rootkits, Trojans, and software keyloggers. It’s a really elegant idea, based on a simple observation: the rootkit must exist on disk to be persistent, but must lie to programs running within the infected OS in order to hide.

Here’s how it works: The user has the GhostBuster program on a CD. He sticks the CD in the drive, and from within the (possibly corrupted) OS, the checker program runs: stopping all other user programs, flushing the caches, and then doing a complete checksum of all files on the disk and a scan of any registry keys that could autostart the system, writing out the results to a file on the hard drive.

Then the user is instructed to press the reset button, the CD boots its own OS, and the scan is repeated. Any differences indicate a rootkit or other stealth software, without the need for knowing what particular rootkits are or the proper checksums for the programs installed on disk.

Simple. Clever. Elegant.
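In outline, the scan-and-diff is only a few lines of code. A simplified sketch in Python (mine, not Microsoft’s; the real tool also walks the registry and runs the second scan from a trusted boot):

    import hashlib, os

    def checksum_tree(root):
        # Map every file under root to its SHA-1 checksum.
        sums = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        sums[path] = hashlib.sha1(f.read()).hexdigest()
                except OSError:
                    pass  # unreadable here; the real tool is more careful
        return sums

    # Scan 1 runs inside the (possibly lying) OS; scan 2 runs after a
    # clean boot from CD.  Anything the rootkit hid from the infected
    # OS shows up only in the clean scan:
    def hidden_files(inside_view, clean_view):
        return [p for p in clean_view
                if inside_view.get(p) != clean_view[p]]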

In order to fool GhostBuster, the rootkit must 1) detect that such a checking program is running and either not lie to it or change the output as it’s written to disk (in the limit this becomes the halting problem for the rootkit designer), 2) integrate into the BIOS rather than the OS (tricky, platform specific, and not always possible), or 3) give up on either being persistent or stealthy. Thus this doesn’t eliminate rootkits entirely, but is a pretty mortal blow to persistent rootkits.

Of course, the concept could be adapted to any other operating system as well.

This is a great idea, but there’s a huge problem. GhostBuster is only a research prototype, so you can’t get a copy. And, even worse, Microsoft has no plans to turn it into a commercial tool.

This is too good an idea to abandon. Microsoft, if you’re listening, you should release this tool to the world. Make it public domain. Make it open source, even. It’s a great idea, and you deserve credit for coming up with it.

Any other security companies listening? Make and sell one of these. Anyone out there looking for an open source project? Here’s a really good one.

Note: I have no idea if Microsoft patented this idea. If they did and they don’t release it, shame on them. If they didn’t, good for them.

Posted on February 15, 2005 at 8:00 AM • 38 Comments

T-Mobile Hack

For at least seven months last year, a hacker had access to T-Mobile’s customer network. He’s known to have accessed information belonging to 400 customers—names, Social Security numbers, voicemail messages, SMS messages, photos—and probably had the ability to access data belonging to any of T-Mobile’s 16.3 million U.S. customers. But in its fervor to report on the security of cell phones, and T-Mobile in particular, the media missed the most important point of the story: The security of much of our data is not under our control.

This is new. A dozen years ago, if someone wanted to look through your mail, they would have to break into your house. Now they can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your house; now it’s on a computer owned by a telephone company. Your financial data is on Websites protected only by passwords. The list of books you browse, and the books you buy, is stored in the computers of some online bookseller. Your affinity card allows your supermarket to know what food you like. Data that used to be under your direct control is now controlled by others.

We have no choice but to trust these companies with our privacy, even though the companies have little incentive to protect that privacy. T-Mobile suffered some bad press for its lousy security, nothing more. It’ll spend some money improving its security, but it’ll be security designed to protect its reputation from bad PR, not security designed to protect the privacy of its customers.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as that data is held by others. The police need a warrant to read the e-mail on your computer; but they don’t need one to read it off the backup tapes at your ISP. According to the Supreme Court, that’s not a search as defined by the 4th Amendment.

This isn’t a technology problem, it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant—even though it occurred at the phone company switching office—the Supreme Court must recognize that reading e-mail at an ISP is no different.

This essay appeared in eWeek.

Posted on February 14, 2005 at 4:26 PM • 30 Comments

The Curse of the Secret Question

It’s happened to all of us: We sign up for some online account, choose a difficult-to-remember and hard-to-guess password, and are then presented with a “secret question” to answer. Twenty years ago, there was just one secret question: “What’s your mother’s maiden name?” Today, there are more: “What street did you grow up on?” “What’s the name of your first pet?” “What’s your favorite color?” And so on.

The point of all these questions is the same: a backup password. If you forget your password, the secret question can verify your identity so you can choose another password or have the site e-mail your current password to you. It’s a great idea from a customer service perspective—a user is less likely to forget his first pet’s name than some random password—but terrible for security. The answer to the secret question is much easier to guess than a good password, and the information is much more public. (I’ll bet the name of my family’s first pet is in some database somewhere.) And even worse, everybody seems to use the same series of secret questions.

The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.

What can one do? My usual technique is to type a completely random answer—I madly slap at my keyboard for a few seconds—and then forget about it. This ensures that some attacker can’t bypass my password and try to guess the answer to my secret question, but is pretty unpleasant if I forget my password. The one time this happened to me, I had to call the company to get my password and question reset. (Honestly, I don’t remember how I authenticated myself to the customer service rep at the other end of the phone line.)

Which is maybe what should have happened in the first place. I like to think that if I forget my password, it should be really hard to gain access to my account. I want it to be so hard that an attacker can’t possibly do it. I know this is a customer service issue, but it’s a security issue too. And if the password is controlling access to something important—like my bank account—then the bypass mechanism should be harder, not easier.
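Incidentally, a tidier version of the keyboard-mashing trick above is to generate the throwaway answer programmatically. A one-liner using Python’s secrets module:

    import secrets

    # An unguessable answer for the "secret question" field.  Paste it
    # in, then throw it away: the reset-by-question bypass becomes
    # useless to an attacker -- and, deliberately, to you.
    print(secrets.token_urlsafe(24))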

Passwords have reached the end of their useful life. Today, they only work for low-security applications. The secret question is just one manifestation of that fact.

This essay originally appeared on Computerworld.

Posted on February 11, 2005 at 8:00 AM • 70 Comments

Smart Water

No, really. It’s a liquid with a unique identifier, linked to a particular owner.

Forensic Coding combined with microdot technology.

SmartWater has been designed to protect household property and motor vehicles. Each bottle of SmartWater solution contains a unique forensic code, which is assigned to a household or vehicle.

An additional feature of SmartWater Instant is the inclusion of tiny micro-dot particles which enable Police to quickly identify the true owner of the property.

The idea is for me to paint this stuff on my valuables as proof of ownership. I think a better idea would be for me to paint it on your valuables, and then call the police.

Posted on February 10, 2005 at 9:20 AM • 40 Comments

Authentication and Expiration

There’s a security problem with many Internet authentication systems that’s never talked about: there’s no way to terminate the authentication.

A couple of months ago, I bought something from an e-commerce site. At the checkout page, I wasn’t able to just type in my credit-card number and make my purchase. Instead, I had to choose a username and password. Usually I don’t like doing that, but in this case I wanted to be able to access my account at a later date. In fact, the password was useful because I needed to return an item I purchased.

Months have passed, and I no longer want an ongoing relationship with the e-commerce site. I don’t want a username and password. I don’t want them to have my credit-card number on file. I’ve received my purchase, I’m happy, and I’m done. But because that username and password have no expiration date associated with them, they never end. It’s not a subscription service, so there’s no mechanism to sever the relationship. I will have access to that e-commerce site for as long as it remembers that username and password.

In other words, I am liable for that account forever.

Traditionally, passwords have indicated an ongoing relationship between a user and some computer service. Sometimes it’s a company employee and the company’s servers. Sometimes it’s an account and an ISP. In both cases, both parties want to continue the relationship, so expiring a password and then forcing the user to choose another is a matter of security.

In cases with this ongoing relationship, the security consideration is damage minimization. Nobody wants some bad guy to learn the password, and everyone wants to minimize the amount of damage he can do if he does. Regularly changing your password is a solution to that problem.

This approach works because both sides want it to; they both want to keep the authentication system working correctly, and minimize attacks.

In the case of the e-commerce site, the interests are much more one-sided. The e-commerce site wants me to live in their database forever. They want to market to me, and entice me to come back. They want to sell my information. (This is the kind of information that might be buried in the privacy policy or terms of service, but no one reads those because they’re unreadable. And all bets are off if the company changes hands.)

There’s nothing I can do about this, but a username and password that never expire is another matter entirely. The e-commerce site wants me to establish an account because it increases the chances that I’ll use them again. But I want a way to terminate the business relationship, a way to say: “I am no longer taking responsibility for items purchased using that username and password.”

Near as I can tell, the username and password I typed into that e-commerce site puts my credit card at risk until it expires. If the e-commerce site uses a system that debits amounts from my checking account whenever I place an order, I could be at risk forever. (The US has legal liability limits, but they’re not that useful. According to Regulation E, the electronic transfers regulation, a fraudulent transaction must be reported within two days to cap liability at US$50; within 60 days, it’s capped at $500. Beyond that, you’re out of luck.)

This is wrong. Every e-commerce site should have a way to purchase items without establishing a username and password. I like sites that allow me to make a purchase as a “guest,” without setting up an account.

But just as importantly, every e-commerce site should have a way for customers to terminate their accounts and should allow them to delete their usernames and passwords from the system. It’s okay to market to previous customers. It’s not okay to needlessly put them at financial risk.

This essay also appeared in the Jan/Feb 05 issue of IEEE Security & Privacy.

Posted on February 10, 2005 at 7:55 AM • 41 Comments

Bank Sued for Unauthorized Transaction

This story is interesting:

A Miami businessman is suing Bank of America over $90,000 he says was stolen from his online banking account in a case that highlights the thorny question of who is responsible when a customer’s computer is hacked into.

The typical press coverage of this story is along the lines of “Bank of America sued because customer’s PC was hacked.” But that’s not it. Bank of America is being sued because they allowed an unauthorized transaction to occur, and they’re not making good on that mistake. The transaction happened to occur because the customer’s PC was hacked.

I know nothing about the actual suit and its merits, but this is a problem that is not going away. And while I think that banks should not be held responsible for what’s on their customers’ machines, they should be held responsible for allowing unauthorized transactions to occur. The bank’s internal systems, however set up, for whatever reason, permitted the fraudulent transaction.

There is a simple economic incentive problem here. As long as the banks are not responsible for financial losses from fraudulent transactions over the Internet, banks have no incentive to improve security. But if banks are held responsible for these transactions, you can bet that they won’t allow such shoddy security.

Posted on February 9, 2005 at 8:00 AM • 43 Comments

Flying on Someone Else's Airline Ticket

Slate has published a method for anyone to fly on anyone else’s ticket.

I wrote about this exact vulnerability a year and a half ago.

The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass, and the computer record. Think of them as three points on the triangle. Under the current system, the boarding pass is compared to the traveler’s identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record—the third leg of the triangle—it’s possible to create two different boarding passes and have no one notice. That’s why the attack works.
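A sketch makes the missing comparison concrete (hypothetical code; the attacker uses one boarding pass per check, and flight "UA 100" and the names are invented for illustration):

    class Doc:
        def __init__(self, name, flight):
            self.name, self.flight = name, flight

    id_doc  = Doc("Attacker", None)          # real government ID
    record  = Doc("Someone Else", "UA 100")  # reservation bought in another name
    fake_bp = Doc("Attacker", "UA 100")      # home-printed pass, shown with ID
    real_bp = Doc("Someone Else", "UA 100")  # genuine pass, scanned at the gate

    assert id_doc.name == fake_bp.name       # checkpoint: ID vs. boarding pass
    assert real_bp.flight == record.flight   # gate: boarding pass vs. record
    # Never performed: id_doc.name == record.name -- the third leg of
    # the triangle -- so the name mismatch is never noticed.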

Posted on February 8, 2005 at 9:11 AM • 14 Comments

Fertilizer as a Weapon

In an attempt to protect us from terrorism, there are new restrictions on fertilizer sales in Kansas (and elsewhere):

Under the rules, retailers would have to obtain the name, address and telephone and driver’s license number of purchasers of ammonium nitrate fertilizer and maintain records, including the date of the sale and the amount purchased, for at least two years.

The administrative guidelines would authorize retailers to refuse to sell ammonium nitrate when it was being purchased out of season, in unusual quantities or in other suspicious circumstances.

The proposal, similar to rules in place in South Carolina and Nevada, is designed to make ammonium nitrate more secure and keep it out of the hands of terrorists….

Posted on February 8, 2005 at 7:58 AM • 28 Comments

Bank Mandates Insecure Browser

The Australian bank Suncorp has just updated its terms and conditions for Internet banking. They have a maximum withdrawal limit, hint about a physical access token, and require customers to use the most vulnerability-laden browser:

“suitable software” means Internet Explorer 5.5 Service Pack 2 or above or Netscape Navigator 6.1 or above running on Windows 98/ME/NT/2000/XP with anti-virus software or other software approved by us.

Posted on February 7, 2005 at 8:00 AM • 33 Comments

Implanting Chips in People at a Distance

I have no idea if this is real or not. But even if it’s not real, it’s just a matter of time before it becomes real. How long before people can surreptitiously have RFID tags injected into them?

What is the ID SNIPER rifle?

It is used to implant a GPS-microchip in the body of a human being, using a high powered sniper rifle as the long distance injector. The microchip will enter the body and stay there, causing no internal damage, and only a very small amount of physical pain to the target. It will feel like a mosquito bite lasting a fraction of a second. At the same time a digital camcorder with a zoom lens fitted within the scope will take a high-resolution picture of the target. This picture will be stored on a memory card for later image-analysis.

Edited to add: This is a hoax.

Posted on February 4, 2005 at 8:00 AM • 34 Comments

GovCon

There’s a conference in Washington, DC, in March that explores technologies for intelligence and terrorism prevention.

The 4th Annual Government Convention on Emerging Technologies will focus on the impact of the Intelligence Reform and Terrorism Prevention Act signed into law by President Bush in December 2004.

The departments and agencies of the National Security Community are currently engaged in the most comprehensive transformation of policy, structure, doctrine, and capabilities since the National Security Act of 1947.

Many of the legal, policy, organizational, and cultural challenges to manage the National Security Community as an enterprise and provide a framework for fielding new capabilities are being addressed. However, there are many emerging technologies and commercial best practices available to help the National Security Community achieve its critical mission of keeping America safe and secure.

There’s a lot of interesting stuff on the agenda, including some classified sessions. I’m especially interested in this track:

Track Two: Attaining Tailored Persistence

Explore the technologies required to attain persistent surveillance and tailored persistence.

What does “persistent surveillance” mean, anyway?

Posted on February 3, 2005 at 9:07 AM • 6 Comments

Automobile Virus

SC Magazine is reporting on a virus that infects Lexus cars:

Lexus cars may be vulnerable to viruses that infect them via mobile phones. Landcruiser 100 models LX470 and LS430 have been discovered with infected operating systems that transfer within a range of 15 feet.

It seems that no one has done this yet, and the story is based on speculation that a cell phone can transfer a virus to the Lexus using Bluetooth. But it’s only a matter of time before something like this actually works.

Posted on February 2, 2005 at 8:00 AM • 16 Comments
