Entries Tagged "essays"


Mitigating Identity Theft

Identity theft is the new crime of the information age. A criminal collects enough personal data on someone to impersonate a victim to banks, credit card companies, and other financial institutions. Then he racks up debt in the person’s name, collects the cash, and disappears. The victim is left holding the bag. While some of the losses are absorbed by financial institutions—credit card companies in particular—the credit-rating damage is borne by the victim. It can take years for the victim to clear his name.

Unfortunately, the solutions being proposed in Congress won’t help. To see why, we need to start with the basics. The very term “identity theft” is an oxymoron. Identity is not a possession that can be acquired or lost; it’s not a thing at all. Someone’s identity is the one thing about a person that cannot be stolen.

The real crime here is fraud; more specifically, impersonation leading to fraud. Impersonation is an ancient crime, but the rise of information-based credentials gives it a modern spin. A criminal impersonates a victim online and steals money from his account. He impersonates a victim in order to deceive financial institutions into granting credit to the criminal in the victim’s name. He impersonates a victim to the Post Office and gets the victim’s address changed. He impersonates a victim in order to fool the police into arresting the wrong man. No one’s identity is stolen; identity information is being misused to commit fraud.

The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud. This is what’s been in the news recently: ChoicePoint, LexisNexis, Bank of America, and so on. But data privacy is more than just fraud. Whether it is the books we take out of the library, the websites we visit, or the contents of our text messages, most of us have personal data on third-party computers that we don’t want made public. The posting of Paris Hilton’s phone book on the Internet is a celebrity example of this.

The second issue is the ease with which a criminal can use personal data to commit fraud. It doesn’t take much personal information to apply for a credit card in someone else’s name. It doesn’t take much to submit fraudulent bank transactions in someone else’s name. It’s surprisingly easy to get an identification card in someone else’s name. Our current culture, where identity is verified simply and sloppily, makes it easier for a criminal to impersonate his victim.

Proposed fixes tend to concentrate on the first issue—making personal data harder to steal—whereas the real problem is the second. If we’re ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.

Fraudulent transactions have nothing to do with the legitimate account holders. Criminals impersonate legitimate users to financial institutions. That means that any solution can’t involve the account holders. That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

They can’t claim that the user must keep his password secure or his machine virus free. They can’t require the user to monitor his accounts for fraudulent activity, or his credit reports for fraudulently obtained credit cards. Those aren’t reasonable requirements for most users. The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions. They’ve pushed most of the actual costs onto the merchants. And almost no security centers around trying to authenticate the cardholder.

That’s an important lesson. Identity theft solutions focus much too much on authenticating the person. Whether it’s two-factor authentication, ID cards, biometrics, or whatever, there’s a widespread myth that authenticating the person is the way to prevent these crimes. But once you understand that the problem is fraudulent transactions, you quickly realize that authenticating the person isn’t the way to proceed.

Again, think about credit cards. Store clerks barely verify signatures when people use cards. People can use credit cards to buy things by mail, phone, or Internet, where no one verifies the signature or even that you have possession of the card. Even worse, no credit card company mandates secure storage requirements for credit cards. They don’t demand that cardholders secure their wallets in any particular way. Credit card companies simply don’t worry about verifying the cardholder or putting requirements on what he does. They concentrate on verifying the transaction.

This same sort of thinking needs to be applied to other areas where criminals use impersonation to commit fraud. I don’t know what the final solutions will look like, but I do know that once financial institutions are liable for losses due to these types of fraud, they will find solutions. Maybe there’ll be a daily withdrawal limit, like there is on ATMs. Maybe large transactions will be delayed for a period of time, or will require a call-back from the bank or brokerage company. Maybe people will no longer be able to open a credit card account by simply filling out a bunch of information on a form. Likely the solution will be a combination of solutions that reduces fraudulent transactions to a manageable level, but we’ll never know until the financial institutions have the financial incentive to put them in place.

Right now, the economic incentives result in financial institutions that are so eager to allow transactions—new credit cards, cash transfers, whatever—that they’re not paying enough attention to fraudulent transactions. They’ve pushed the costs for fraud onto the merchants. But if they’re liable for losses and damages to legitimate users, they’ll pay more attention. And they’ll mitigate the risks. Security can do all sorts of things, once the economic incentives to apply them are there.

By focusing on the fraudulent use of personal data, I do not mean to minimize the harm caused by third-party data and violations of privacy. I believe that the U.S. would be well-served by a comprehensive Data Protection Act like the European Union’s. However, I do not believe that a law of this type would significantly reduce the risk of fraudulent impersonation. To mitigate that risk, we need to concentrate on detecting and preventing fraudulent transactions. We need to make the entity that is in the best position to mitigate the risk responsible for that risk. And that means making the financial institutions liable for fraudulent transactions.

Doing anything less simply won’t work.

Posted on April 15, 2005 at 9:17 AM

More on Two-Factor Authentication

Recently I published an essay arguing that two-factor authentication is an ineffective defense against identity theft. For example, issuing tokens to online banking customers won’t reduce fraud, because new attack techniques simply ignore the countermeasure. Unfortunately, some took my essay as a condemnation of two-factor authentication in general. This is not true. It’s simply a matter of understanding the threats and the attacks.

Passwords just don’t work anymore. As computers have gotten faster, password guessing has gotten easier. Ever-more-complicated passwords are required to evade password-guessing software. At the same time, there’s an upper limit to how complex a password users can be expected to remember. About five years ago, these two lines crossed: It is no longer reasonable to expect users to have passwords that can’t be guessed. For anything that requires reasonable security, the era of passwords is over.
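
To make the point about those two crossing lines concrete, here is a rough back-of-the-envelope sketch in Python. The guesses-per-second figure is an assumed number chosen only for illustration, not a benchmark of any particular password-cracking tool.

```python
# Rough estimate of how long offline password guessing takes.
# GUESSES_PER_SECOND is an assumed figure for illustration only.
GUESSES_PER_SECOND = 10_000_000

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Seconds needed to try every password of this alphabet and length."""
    return (alphabet_size ** length) / GUESSES_PER_SECOND

for label, alphabet, length in [
    ("8 lowercase letters", 26, 8),
    ("8 mixed-case letters and digits", 62, 8),
    ("12 mixed-case letters and digits", 62, 12),
]:
    days = seconds_to_exhaust(alphabet, length) / 86_400
    print(f"{label}: roughly {days:,.1f} days to exhaust")
```

At that assumed rate, an eight-character lowercase password falls in well under a day; the passwords that hold up are exactly the ones no one can be expected to memorize.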

Two-factor authentication solves this problem. It works against passive attacks: eavesdropping and password guessing. It protects against users choosing weak passwords, telling their passwords to their colleagues or writing their passwords on pieces of paper taped to their monitors. For an organization trying to improve access control for its employees, two-factor authentication is a great idea. Microsoft is integrating two-factor authentication into its operating system, another great idea.

What two-factor authentication won’t do is prevent identity theft and fraud. It’ll prevent certain tactics of identity theft and fraud, but criminals simply will switch tactics. We’re already seeing fraud tactics that completely ignore two-factor authentication. As banks roll out two-factor authentication, criminals simply will switch to these new tactics.

Security is always an arms race, and you could argue that this situation is simply the cost of treading water. The problem with this reasoning is that it ignores countermeasures that permanently reduce fraud. By concentrating on authenticating the individual rather than authenticating the transaction, banks are forced to defend against criminal tactics rather than the crime itself.

Credit cards are a perfect example. Notice how little attention is paid to cardholder authentication. Clerks barely check signatures. People use their cards over the phone and on the Internet, where the card’s existence isn’t even verified. The credit card companies spend their security dollar authenticating the transaction, not the cardholder.

Two-factor authentication is a long-overdue solution to the problem of passwords. I welcome its increasing popularity, but identity theft and bank fraud are not results of password problems; they stem from poorly authenticated transactions. The sooner people realize that, the sooner they’ll stop advocating stronger authentication measures and the sooner security will actually improve.

This essay previously appeared in Network World as a “Face Off.” Joe Uniejewski of RSA Security wrote an opposing position. Another article on the subject was published at SearchSecurity.com.

One way to think about this—a phrasing I didn’t think about until after writing the above essay—is that two-factor authentication solves security problems involving authentication. The current wave of attacks against financial systems is not exploiting vulnerabilities in the authentication system, so two-factor authentication doesn’t help.

Posted on April 12, 2005 at 11:02 AM

The Failure of Two-Factor Authentication

Two-factor authentication isn’t our savior. It won’t defend against phishing. It’s not going to prevent identity theft. It’s not going to secure online accounts from fraudulent transactions. It solves the security problems we had ten years ago, not the security problems we have today.

The problem with passwords is that they’re too easy to lose control of. People give them to other people. People write them down, and other people read them. People send them in e-mail, and that e-mail is intercepted. People use them to log into remote servers, and their communications are eavesdropped on. They’re also easy to guess. And once any of that happens, the password no longer works as an authentication token because you can’t be sure who is typing that password in.

Two-factor authentication mitigates this problem. If your password includes a number that changes every minute, or a unique reply to a random challenge, then it’s harder for someone else to intercept. You can’t write down the ever-changing part. An intercepted password won’t be good the next time it’s needed. And a two-factor password is harder to guess. Sure, someone can always give his password and token to his secretary, but no solution is foolproof.
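
For readers who want to see the mechanics, here is a minimal sketch of how a code that “changes every minute” can be derived from a secret shared between the token and the server. It is illustrative only; it is not any particular vendor’s algorithm, and the shared secret shown is a made-up placeholder.

```python
# Minimal sketch of a time-varying authentication code: the token and the
# server share a secret and both derive the same 6-digit code from the
# current minute. Not any vendor's actual algorithm.
import hashlib
import hmac
import struct
import time

SHARED_SECRET = b"example-secret-shared-with-server"  # hypothetical placeholder

def current_code(secret: bytes, interval: int = 60) -> str:
    """Derive a 6-digit code from the secret and the current time window."""
    counter = int(time.time()) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    return f"{int.from_bytes(digest[-4:], 'big') % 1_000_000:06d}"

print(current_code(SHARED_SECRET))  # a new value each minute
```

Because the code depends on the current minute, an eavesdropper who captures one value can’t reuse it later, which is exactly the property described above.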

These tokens have been around for at least two decades, but it’s only recently that they have gotten mass-market attention. AOL is rolling them out. Some banks are issuing them to customers, and even more are talking about doing it. It seems that corporations are finally waking up to the fact that passwords don’t provide adequate security, and are hoping that two-factor authentication will fix their problems.

Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses.

Here are two new active attacks we’re starting to see:

  • Man-in-the-Middle attack. An attacker puts up a fake bank website and entices a user to that website. The user types in his password, and the attacker in turn uses it to access the bank’s real website. Done right, the user will never realize that he isn’t at the bank’s website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.

  • Trojan attack. The attacker gets a Trojan installed on the user’s computer. When the user logs into his bank’s website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.

See how two-factor authentication doesn’t solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.
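
To make the first case concrete, here is a toy simulation of the relay. Every name and value in it is invented for illustration; the point is only that the attacker never needs to defeat the token, he just forwards whatever the victim types.

```python
# Toy simulation of the man-in-the-middle attack described above. All values
# are made up; the point is that the one-time code is simply passed along.

def victim_submits_credentials() -> dict:
    # The victim believes the fake site is his bank and types everything in,
    # including the current value from his two-factor token.
    return {"username": "victim", "password": "hunter2", "one_time_code": "492871"}

def real_bank_login(credentials: dict) -> str:
    # The real bank sees a valid username, password, and fresh one-time code.
    return "session-granted"

def phishing_site() -> str:
    credentials = victim_submits_credentials()   # victim -> fake site
    session = real_bank_login(credentials)       # fake site -> real bank
    return session                               # attacker now holds a live session

print(phishing_site())
```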

The real threat is fraud due to impersonation, and the tactics of impersonation will change in response to the defenses. Two-factor authentication will force criminals to modify their tactics, that’s all.

Recently I’ve seen examples of two-factor authentication using two different communications paths: call it “two-channel authentication.” One bank sends a challenge to the user’s cell phone via SMS and expects a reply via SMS. If you assume that all your customers have cell phones, then this results in a two-factor authentication process without extra hardware. And even better, the second authentication piece goes over a different communications channel than the first; eavesdropping is much, much harder.
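
As a sketch of how such a scheme might be wired up: the bank generates a random challenge, pushes it over SMS, and compares the reply. The send_sms and wait_for_sms_reply functions below are hypothetical placeholders standing in for a carrier gateway; this is not any particular bank’s implementation.

```python
# Minimal sketch of the "two-channel authentication" idea: the challenge and
# the reply both travel over SMS. send_sms() and wait_for_sms_reply() are
# hypothetical placeholders for a real SMS gateway.
import secrets

def send_sms(phone_number: str, message: str) -> None:
    print(f"[SMS to {phone_number}] {message}")   # placeholder

def wait_for_sms_reply(phone_number: str) -> str:
    return input(f"Reply from {phone_number}: ")  # placeholder

def two_channel_login(phone_number: str) -> bool:
    challenge = f"{secrets.randbelow(1_000_000):06d}"
    send_sms(phone_number, f"Your login code is {challenge}")
    return wait_for_sms_reply(phone_number) == challenge

# two_channel_login("+1-555-0100")  # example invocation
```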

But in this new world of active attacks, no one cares. An attacker using a man-in-the-middle attack is happy to have the user deal with the SMS portion of the log-in, since he can’t do it himself. And a Trojan attacker doesn’t care, because he’s relying on the user to log in anyway.

Two-factor authentication is not useless. It works for local login, and it works within some corporate networks. But it won’t work for remote authentication over the Internet. I predict that banks and other financial institutions will spend millions outfitting their users with two-factor authentication tokens. Early adopters of this technology may very well experience a significant drop in fraud for a while as attackers move to easier targets, but in the end there will be a negligible drop in the amount of fraud and identity theft.

This essay will appear in the April issue of Communications of the ACM.

Posted on March 15, 2005 at 7:54 AM

T-Mobile Hack

For at least seven months last year, a hacker had access to T-Mobile’s customer network. He’s known to have accessed information belonging to 400 customers—names, Social Security numbers, voicemail messages, SMS messages, photos—and probably had the ability to access data belonging to any of T-Mobile’s 16.3 million U.S. customers. But in its fervor to report on the security of cell phones, and T-Mobile in particular, the media missed the most important point of the story: The security of much of our data is not under our control.

This is new. A dozen years ago, if someone wanted to look through your mail, they would have to break into your house. Now they can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your house; now it’s on a computer owned by a telephone company. Your financial data is on Websites protected only by passwords. The list of books you browse, and the books you buy, is stored in the computers of some online bookseller. Your affinity card allows your supermarket to know what food you like. Data that used to be under your direct control is now controlled by others.

We have no choice but to trust these companies with our privacy, even though the companies have little incentive to protect that privacy. T-Mobile suffered some bad press for its lousy security, nothing more. It’ll spend some money improving its security, but it’ll be security designed to protect its reputation from bad PR, not security designed to protect the privacy of its customers.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as that data is held by others. The police need a warrant to read the e-mail on your computer; but they don’t need one to read it off the backup tapes at your ISP. According to the Supreme Court, that’s not a search as defined by the Fourth Amendment.

This isn’t a technology problem, it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant—even though it occurred at the phone company switching office—the Supreme Court must recognize that reading e-mail at an ISP is no different.

This essay appeared in eWeek.

Posted on February 14, 2005 at 4:26 PM

The Curse of the Secret Question

It’s happened to all of us: We sign up for some online account, choose a difficult-to-remember and hard-to-guess password, and are then presented with a “secret question” to answer. Twenty years ago, there was just one secret question: “What’s your mother’s maiden name?” Today, there are more: “What street did you grow up on?” “What’s the name of your first pet?” “What’s your favorite color?” And so on.

The point of all these questions is the same: a backup password. If you forget your password, the secret question can verify your identity so you can choose another password or have the site e-mail your current password to you. It’s a great idea from a customer service perspective—a user is less likely to forget his first pet’s name than some random password—but terrible for security. The answer to the secret question is much easier to guess than a good password, and the information is much more public. (I’ll bet the name of my family’s first pet is in some database somewhere.) And even worse, everybody seems to use the same series of secret questions.

The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.

What can one do? My usual technique is to type a completely random answer—I madly slap at my keyboard for a few seconds—and then forget about it. This ensures that some attacker can’t bypass my password and try to guess the answer to my secret question, but is pretty unpleasant if I forget my password. The one time this happened to me, I had to call the company to get my password and question reset. (Honestly, I don’t remember how I authenticated myself to the customer service rep at the other end of the phone line.)

Which is maybe what should have happened in the first place. I like to think that if I forget my password, it should be really hard to gain access to my account. I want it to be so hard that an attacker can’t possibly do it. I know this is a customer service issue, but it’s a security issue too. And if the password is controlling access to something important—like my bank account—then the bypass mechanism should be harder, not easier.

Passwords have reached the end of their useful life. Today, they only work for low-security applications. The secret question is just one manifestation of that fact.

This essay originally appeared on Computerworld.

Posted on February 11, 2005 at 8:00 AM

Authentication and Expiration

There’s a security problem with many Internet authentication systems that’s never talked about: there’s no way to terminate the authentication.

A couple of months ago, I bought something from an e-commerce site. At the checkout page, I wasn’t able to just type in my credit-card number and make my purchase. Instead, I had to choose a username and password. Usually I don’t like doing that, but in this case I wanted to be able to access my account at a later date. In fact, the password was useful because I needed to return an item I purchased.

Months have passed, and I no longer want an ongoing relationship with the e-commerce site. I don’t want a username and password. I don’t want them to have my credit-card number on file. I’ve received my purchase, I’m happy, and I’m done. But because that username and password have no expiration date associated with them, they never end. It’s not a subscription service, so there’s no mechanism to sever the relationship. I will have access to that e-commerce site for as long as it remembers that username and password.

In other words, I am liable for that account forever.

Traditionally, passwords have indicated an ongoing relationship between a user and some computer service. Sometimes it’s a company employee and the company’s servers. Sometimes it’s an account and an ISP. In both cases, both parties want to continue the relationship, so expiring a password and then forcing the user to choose another is a matter of security.

In cases with this ongoing relationship, the security consideration is damage minimization. Nobody wants some bad guy to learn the password, and everyone wants to minimize the amount of damage he can do if he does. Regularly changing your password is a solution to that problem.

This approach works because both sides want it to; they both want to keep the authentication system working correctly, and minimize attacks.

In the case of the e-commerce site, the interests are much more one-sided. The e-commerce site wants me to live in their database forever. They want to market to me, and entice me to come back. They want to sell my information. (This is the kind of information that might be buried in the privacy policy or terms of service, but no one reads those because they’re unreadable. And all bets are off if the company changes hands.)

There’s nothing I can do about this, but a username and password that never expire is another matter entirely. The e-commerce site wants me to establish an account because it increases the chances that I’ll use them again. But I want a way to terminate the business relationship, a way to say: “I am no longer taking responsibility for items purchased using that username and password.”

Near as I can tell, the username and password I typed into that e-commerce site puts my credit card at risk until it expires. If the e-commerce site uses a system that debits amounts from my checking account whenever I place an order, I could be at risk forever. (The US has legal liability limits, but they’re not that useful. According to Regulation E, the electronic transfers regulation, a fraudulent transaction must be reported within two days to cap liability at US$50; within 60 days, it’s capped at $500. Beyond that, you’re out of luck.)

This is wrong. Every e-commerce site should have a way to purchase items without establishing a username and password. I like sites that allow me to make a purchase as a “guest,” without setting up an account.

But just as importantly, every e-commerce site should have a way for customers to terminate their accounts and should allow them to delete their usernames and passwords from the system. It’s okay to market to previous customers. It’s not okay to needlessly put them at financial risk.

This essay also appeared in the Jan/Feb 05 issue of IEEE Security & Privacy.

Posted on February 10, 2005 at 7:55 AM

Safe Personal Computing

I am regularly asked what average Internet users can do to ensure their security. My first answer is usually, “Nothing—you’re screwed.”

But that’s not true, and the reality is more complicated. You’re screwed if you do nothing to protect yourself, but there are many things you can do to increase your security on the Internet.

Two years ago, I published a list of PC security recommendations. The idea was to give home users concrete actions they could take to improve security. This is an update of that list: a dozen things you can do to improve your security.

General: Turn off the computer when you’re not using it, especially if you have an “always on” Internet connection.

Laptop security: Keep your laptop with you at all times when not at home; treat it as you would a wallet or purse. Regularly purge unneeded data files from your laptop. The same goes for PDAs. People tend to store more personal data—including passwords and PINs—on PDAs than they do on laptops.

Backups: Back up regularly. Back up to disk, tape or CD-ROM. There’s a lot you can’t defend against; a recent backup will at least let you recover from an attack. Store at least one set of backups off-site (a safe-deposit box is a good place) and at least one set on-site. Remember to destroy old backups. The best way to destroy CD-Rs is to microwave them on high for five seconds. You can also break them in half or run them through better shredders.

Operating systems: If possible, don’t use Microsoft Windows. Buy a Macintosh or use Linux. If you must use Windows, set up Automatic Update so that you automatically receive security patches. And delete the files “command.com” and “cmd.exe.”

Applications: Limit the number of applications on your machine. If you don’t need it, don’t install it. If you no longer need it, uninstall it. Look into one of the free office suites as an alternative to Microsoft Office. Regularly check for updates to the applications you use and install them. Keeping your applications patched is important, but don’t lose sleep over it.

Browsing: Don’t use Microsoft Internet Explorer, period. Limit use of cookies and applets to those few sites that provide services you need. Set your browser to regularly delete cookies. Don’t assume a Web site is what it claims to be, unless you’ve typed in the URL yourself. Make sure the address bar shows the exact address, not a near-miss.

Web sites: Secure Sockets Layer (SSL) encryption does not provide any assurance that the vendor is trustworthy or that its database of customer information is secure.

Think before you do business with a Web site. Limit the financial and personal data you send to Web sites—don’t give out information unless you see a value to you. If you don’t want to give out personal information, lie. Opt out of marketing notices. If the Web site gives you the option of not storing your information for later use, take it. Use a credit card for online purchases, not a debit card.

Passwords: You can’t memorize good enough passwords any more, so don’t bother. For high-security Web sites such as banks, create long random passwords and write them down. Guard them as you would your cash: i.e., store them in your wallet, etc.
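
If you want a programmatic way to produce such a password, the following sketch uses Python’s standard secrets module; the length and character set are arbitrary choices for illustration.

```python
# One way to generate the kind of long random password recommended above.
# The length and character set are arbitrary illustrative choices.
import secrets
import string

def random_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # write it down and guard it like cash
```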

Never reuse a password for something you care about. (It’s fine to have a single password for low-security sites, such as for newspaper archive access.) Assume that all PINs can be easily broken and plan accordingly.

Never type a password you care about, such as for a bank account, into a non-SSL encrypted page. If your bank makes it possible to do that, complain to them. When they tell you that it is OK, don’t believe them; they’re wrong.

E-mail: Turn off HTML e-mail. Don’t automatically assume that any e-mail is from the “From” address.

Delete spam without reading it. Don’t open messages with file attachments, unless you know what they contain; immediately delete them. Don’t open cartoons, videos and similar “good for a laugh” files forwarded by your well-meaning friends; again, immediately delete them.

Never click links in e-mail unless you’re sure about the e-mail; copy and paste the link into your browser instead. Don’t use Outlook or Outlook Express. If you must use Microsoft Office, enable macro virus protection; in Office 2000, turn the security level to “high” and don’t trust any received files unless you have to. If you’re using Windows, turn off the “hide file extensions for known file types” option; it lets Trojan horses masquerade as other types of files. Uninstall the Windows Scripting Host if you can get along without it. If you can’t, at least change your file associations, so that script files aren’t automatically sent to the Scripting Host if you double-click them.

Antivirus and anti-spyware software: Use it—either a combined program or two separate programs. Download and install the updates, at least weekly and whenever you read about a new virus in the news. Some antivirus products automatically check for updates. Enable that feature and set it to “daily.”

Firewall: Spend $50 for a Network Address Translator firewall device; it’s likely to be good enough in default mode. On your laptop, use personal firewall software. If you can, hide your IP address. There’s no reason to allow any incoming connections from anybody.

Encryption: Install an e-mail and file encryptor (like PGP). Encrypting all your e-mail or your entire hard drive is unrealistic, but some mail is too sensitive to send in the clear. Similarly, some files on your hard drive are too sensitive to leave unencrypted.

None of the measures I’ve described are foolproof. If the secret police wants to target your data or your communications, no countermeasure on this list will stop them. But these precautions are all good network-hygiene measures, and they’ll make you a more difficult target than the computer next door. And even if you only follow a few basic measures, you’re unlikely to have any problems.

I’m stuck using Microsoft Windows and Office, but I use Opera for Web browsing and Eudora for e-mail. I use Windows Update to automatically get patches and install other patches when I hear about them. My antivirus software updates itself regularly. I keep my computer relatively clean and delete applications that I don’t need. I’m diligent about backing up my data and about storing data files that are no longer needed offline.

I’m suspicious to the point of near-paranoia about e-mail attachments and Web sites. I delete cookies and spyware. I watch URLs to make sure I know where I am, and I don’t trust unsolicited e-mails. I don’t care about low-security passwords, but try to have good passwords for accounts that involve money. I still don’t do Internet banking. I have my firewall set to deny all incoming connections. And I turn my computer off when I’m not using it.

That’s basically it. Really, it’s not that hard. The hardest part is developing an intuition about e-mail and Web sites. But that just takes experience.

This essay previously appeared on CNet.

Posted on December 13, 2004 at 9:59 AM

Desktop Google Finds Holes

Google’s desktop search software is so good that it exposes vulnerabilities on your computer that you didn’t know about.

Last month, Google released a beta version of its desktop search software: Google Desktop Search. Install it on your Windows machine, and it creates a searchable index of your data files, including word processing files, spreadsheets, presentations, e-mail messages, cached Web pages and chat sessions. It’s a great idea. Windows’ searching capability has always been mediocre, and Google fixes the problem nicely.

There are some security issues, though. The problem is that GDS indexes and finds documents that you may prefer not be found. For example, GDS searches your browser’s cache. This allows it to find old Web pages you’ve visited, including online banking summaries, personal messages sent from Web e-mail programs and password-protected personal Web pages.

GDS can also retrieve encrypted files. No, it doesn’t break the encryption or save a copy of the key. However, it searches the Windows cache, which can bypass some encryption programs entirely. And if you install the program on a computer with multiple users, you can search documents and Web pages for all users.

GDS isn’t doing anything wrong; it’s indexing and searching documents just as it’s supposed to. The vulnerabilities are due to the design of Internet Explorer, Opera, Firefox, PGP and other programs.

First, Web browsers should not store SSL-encrypted pages or pages with personal e-mail. If they do store them, they should at least ask the user first.

Second, an encryption program that leaves copies of decrypted files in the cache is poorly designed. Those files are there whether or not GDS searches for them.

Third, GDS’ ability to search files and Web pages of multiple users on a computer received a lot of press when it was first discovered. This is a complete nonissue. You have to be an administrator on the machine to do this, which gives you access to everyone’s files anyway.

Some people blame Google for these problems and suggest, wrongly, that Google fix them. What if Google were to bow to public pressure and modify GDS to avoid showing confidential information? The underlying problems would remain: The private Web pages would still be in the browser’s cache; the encryption program would still be leaving copies of the plain-text files in the operating system’s cache; and the administrator could still eavesdrop on anyone’s computer to which he or she has access. The only thing that would have changed is that these vulnerabilities once again would be hidden from the average computer user.

In the end, this can only harm security.

GDS is very good at searching. It’s so good that it exposes vulnerabilities on your computer that you didn’t know about. And now that you know about them, pressure your software vendors to fix them. Don’t shoot the messenger.

This article originally appeared in eWeek.

Posted on November 29, 2004 at 11:15 AM

Behavioral Assessment Profiling

On Dec. 14, 1999, Ahmed Ressam tried to enter the United States from Canada at Port Angeles, Wash. He had a suitcase bomb in the trunk of his car. A US customs agent, Diana Dean, questioned him at the border. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting “hinky.” Ressam’s car was eventually searched, and he was arrested.

It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But it worked. The reason there wasn’t a bombing at Los Angeles International Airport around Christmas 1999 was because a trained, knowledgeable security person was paying attention.

This is “behavioral assessment” profiling. It’s what customs agents do at borders all the time. It’s what the Israeli police do to protect their airport and airplanes. And it’s a new pilot program in the United States at Boston’s Logan Airport. Behavioral profiling is dangerous because it’s easy to abuse, but it’s also the best thing we can do to improve the security of our air passenger system.

Behavioral profiling is not the same as computerized passenger profiling. The latter has been in place for years. It’s a secret system, and it’s a mess. Sometimes airlines decided who would undergo secondary screening, and they would choose people based on ticket purchase, frequent-flyer status, and similarity to names on government watch lists. CAPPS-2 was to follow, evaluating people based on government and commercial databases and assigning a “risk” score. This system was scrapped after public outcry, but another profiling system called Secure Flight will debut next year. Again, details are secret.

The problem with computerized passenger profiling is that it simply doesn’t work. Terrorists don’t fit a profile and cannot be plucked out of crowds by computers. Terrorists are European, Asian, African, Hispanic, and Middle Eastern, male and female, young and old. Richard Reid, the shoe bomber, was British with a Jamaican father. Jose Padilla, arrested in Chicago in 2002 as a “dirty bomb” suspect, was a Hispanic-American. Timothy McVeigh was a white American. So was the Unabomber, who once taught mathematics at the University of California, Berkeley. The Chechens who blew up two Russian planes last August were female. Recent reports indicate that Al Qaeda is recruiting Europeans for further attacks on the United States.

Terrorists can buy plane tickets—either one way or round trip—with cash or credit cards. Mohamed Atta, the leader of the 9/11 plot, had a frequent-flyer gold card. They are a surprisingly diverse group of people, and any computer profiling system will just make it easier for those who don’t fit the profile to slip through.

Behavioral assessment profiling is different. It cuts through all of those superficial profiling characteristics and centers on the person. State police are trained as screeners in order to look for suspicious conduct such as furtiveness or undue anxiety. Already at Logan Airport, the program has caught 20 people who were either in the country illegally or had outstanding warrants of one kind or another.

Earlier this month the ACLU of Massachusetts filed a lawsuit challenging the constitutionality of behavioral assessment profiling. The lawsuit is unlikely to succeed; the principle of “implied consent” that has been used to uphold the legality of passenger and baggage screening will almost certainly be applied in this case as well.

But the ACLU has it wrong. Behavioral assessment profiling isn’t the problem. Abuse of behavioral profiling is the problem, and the ACLU has correctly identified where it can go wrong. If policemen fall back on naive profiling by race, ethnicity, age, gender—characteristics not relevant to security—they’re little better than a computer. Instead of “driving while black,” the police will face accusations of harassing people for the infraction of “flying while Arab.” Their actions will increase racial tensions and make them less likely to notice the real threats. And we’ll all be less safe as a result.

Behavioral assessment profiling isn’t a “silver bullet.” It needs to be part of a layered security system, one that includes passenger baggage screening, airport employee screening, and random security checks. It’s best implemented not by police but by specially trained federal officers. These officers could be deployed at airports, sports stadiums, political conventions—anywhere terrorism is a risk because the target is attractive. Done properly, this is the best thing to happen to air passenger security since reinforcing the cockpit door.

This article originally appeared in the Boston Globe.

Posted on November 24, 2004 at 9:33 AM

The Problem with Electronic Voting Machines

In the aftermath of the U.S.’s 2004 election, electronic voting machines are again in the news. Computerized machines lost votes, subtracted votes instead of adding them, and doubled votes. Because many of these machines have no paper audit trails, a large number of votes will never be counted. And while it is unlikely that deliberate voting-machine fraud changed the result of the presidential election, the Internet is buzzing with rumors and allegations of fraud in a number of different jurisdictions and races. It is still too early to tell if any of these problems affected any individual elections. Over the next several weeks we’ll see whether any of the information crystallizes into something significant.

The U.S. has been here before. After 2000, voting machine problems made international headlines. The government appropriated money to fix the problems nationwide. Unfortunately, electronic voting machines—although presented as the solution—have largely made the problem worse. This doesn’t mean that these machines should be abandoned, but they need to be designed to increase both their accuracy and people’s trust in that accuracy. This is difficult, but not impossible.

Before I can discuss electronic voting machines, I need to explain why voting is so difficult. Basically, a voting system has four required characteristics:

  1. Accuracy. The goal of any voting system is to establish the intent of each individual voter, and translate those intents into a final tally. To the extent that a voting system fails to do this, it is undesirable. This characteristic also includes security: It should be impossible to change someone else’s vote, stuff the ballot box, destroy votes, or otherwise affect the accuracy of the final tally.

  2. Anonymity. Secret ballots are fundamental to democracy, and voting systems must be designed to facilitate voter anonymity.

  3. Scalability. Voting systems need to be able to handle very large elections. One hundred million people vote for president in the United States. About 372 million people voted in India’s June elections, and over 115 million in Brazil’s October elections. The complexity of an election is another issue. Unlike many countries where the national election is a single vote for a person or a party, a United States voter is faced with dozens of individual elections: national, local, and everything in between.

  4. Speed. Voting systems should produce results quickly. This is particularly important in the United States, where people expect to learn the results of the day’s election before bedtime. It’s less important in other countries, where people don’t mind waiting days—or even weeks—before the winner is announced.

Through the centuries, different technologies have done their best. Stones and pot shards dropped in Greek vases gave way to paper ballots dropped in sealed boxes. Mechanical voting booths, punch cards, and then optical scan machines replaced hand-counted ballots. New computerized voting machines promise even more efficiency, and Internet voting even more convenience.

But in the rush to improve speed and scalability, accuracy has been sacrificed. And to reiterate: accuracy is not how well the ballots are counted by, for example, a punch-card reader. It’s not how the tabulating machine deals with hanging chads, pregnant chads, or anything like that. Accuracy is how well the process translates voter intent into properly counted votes.

Technologies get in the way of accuracy by adding steps. Each additional step means more potential errors, simply because no technology is perfect. Consider an optical-scan voting system. The voter fills in ovals on a piece of paper, which is fed into an optical-scan reader. The reader senses the filled-in ovals and tabulates the votes. This system has several steps: voter to ballot to ovals to optical reader to vote tabulator to centralized total.

At each step, errors can occur. If the ballot is confusing, then some voters will fill in the wrong ovals. If a voter doesn’t fill them in properly, or if the reader is malfunctioning, then the sensor won’t sense the ovals properly. Mistakes in tabulation—either in the machine or when machine totals get aggregated into larger totals—also cause errors. A manual system—tallying the ballots by hand, and then doing it again to double-check—is more accurate simply because there are fewer steps.
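
As a rough illustration of why fewer steps means more accuracy: if each step has some small independent chance of going wrong, the chances compound. The per-step error rate below is a made-up number, not a measurement of any real voting system.

```python
# Illustration of how errors compound across the steps of a voting pipeline
# (voter -> ballot -> oval -> reader -> tabulator -> total). The 1% per-step
# error rate is a made-up figure for illustration only.

def overall_error(per_step_error: float, steps: int) -> float:
    """Probability that at least one of the independent steps goes wrong."""
    return 1 - (1 - per_step_error) ** steps

for steps in (2, 4, 6):
    print(f"{steps} steps at 1% error each -> "
          f"{overall_error(0.01, steps):.1%} of votes affected")
```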

The error rates in modern systems can be significant. Some voting technologies have a 5% error rate: one in twenty people who vote using the system don’t have their votes counted properly. This system works anyway because most of the time errors don’t matter. If you assume that the errors are uniformly distributed—in other words, that they affect each candidate with equal probability—then they won’t affect the final outcome except in very close races. So we’re willing to sacrifice accuracy to get a voting system that will more quickly handle large and complicated elections. In close races, errors can affect the outcome, and that’s the point of a recount. A recount is an alternate system of tabulating votes: one that is slower (because it’s manual), simpler (because it just focuses on one race), and therefore more accurate.
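
The uniform-distribution assumption can also be illustrated with a toy simulation. All of the numbers below (turnout, true vote share, error rate, and how strongly errors favor one side) are invented for illustration.

```python
# Toy simulation: uniformly distributed errors barely move a close race,
# while errors that favor one candidate can flip it. All numbers are made up.
import random

random.seed(1)
VOTES = 100_000
TRUE_SHARE_A = 0.505      # a close race: 50.5% of voters intend to vote for A
ERROR_RATE = 0.05         # 5% of ballots are miscounted

def counted_share_for_a(error_favors_b: float) -> float:
    """A's counted share when miscounted ballots go to B with this probability."""
    counted_a = 0
    for _ in range(VOTES):
        intends_a = random.random() < TRUE_SHARE_A
        if random.random() < ERROR_RATE:
            counted_a += random.random() >= error_favors_b   # error: ballot misrecorded
        else:
            counted_a += intends_a                           # counted as intended
    return counted_a / VOTES

print(f"uniform errors: A's counted share = {counted_share_for_a(0.5):.3f}")
print(f"skewed errors:  A's counted share = {counted_share_for_a(0.9):.3f}")
```

With uniform errors, candidate A still wins narrowly; skew the same 5% error rate toward B and the result flips.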

Note that this is only true if everyone votes using the same machines. If parts of town that tend to support candidate A use a voting system with a higher error rate than the voting system used in parts of town that tend to support candidate B, then the results will be skewed against candidate A. This is an important consideration in voting accuracy, although tangential to the topic of this essay.

With this background, the issue of computerized voting machines becomes clear. Actually, “computerized voting machines” is a bad choice of words. Many of today’s voting technologies involve computers. Computers tabulate the ballots from both punch-card and optical-scan machines. The current debate centers around all-computer voting systems, primarily touch-screen systems, called Direct Record Electronic (DRE) machines. (The voting system used in India’s most recent election—a computer with a series of buttons—is subject to the same issues.) In these systems the voter is presented with a list of choices on a screen, perhaps multiple screens if there are multiple elections, and he indicates his choice by touching the screen. These machines are easy to use, produce final tallies immediately after the polls close, and can handle very complicated elections. They also can display instructions in different languages and allow the blind or otherwise handicapped to vote without assistance.

They’re also more error-prone. The very same software that makes touch-screen voting systems so friendly also makes them inaccurate. And even worse, they’re inaccurate in precisely the worst possible way.

Bugs in software are commonplace, as any computer user knows. Computer programs regularly malfunction, sometimes in surprising and subtle ways. This is true for all software, including the software in computerized voting machines. For example:

  • In Fairfax County, VA, in 2003, a programming error in the electronic voting machines caused them to mysteriously subtract 100 votes from one particular candidate’s totals.

  • In San Bernardino County, CA, in 2001, a programming error caused the computer to look for votes in the wrong portion of the ballot in 33 local elections, which meant that no votes registered on those ballots for that election. A recount was done by hand.

  • In Volusia County, FL, in 2000, an electronic voting machine gave Al Gore a final vote count of negative 16,022 votes.

  • In Boone County, IA, in 2003, the electronic vote-counting equipment showed that more than 140,000 votes had been cast in the Nov. 4 municipal elections. The county has only 50,000 residents, and fewer than half of them were eligible to vote in that election.

There are literally hundreds of similar stories.

What’s important about these problems is not that they resulted in a less accurate tally, but that the errors were not uniformly distributed; they affected one candidate more than the other. This means that you can’t assume that errors will cancel each other out and not affect the election; you have to assume that any error will skew the results significantly.

Another issue is that software can be hacked. That is, someone can deliberately introduce an error that modifies the result in favor of his preferred candidate. This has nothing to do with whether the voting machines are hooked up to the Internet on election day. The threat is that the computer code could be modified while it is being developed and tested, either by one of the programmers or a hacker who gains access to the voting machine company’s network. It’s much easier to surreptitiously modify a software system than a hardware system, and it’s much easier to make these modifications undetectable.

A third issue is that these problems can have further-reaching effects in software. A problem with a manual machine just affects that machine. A software problem, whether accidental or intentional, can affect many thousands of machines—and skew the results of an entire election.

Some have argued in favor of touch-screen voting systems, citing the millions of dollars that are handled every day by ATMs and other computerized financial systems. That argument ignores another vital characteristic of voting systems: anonymity. Computerized financial systems get most of their security from audit. If a problem is suspected, auditors can go back through the records of the system and figure out what happened. And if the problem turns out to be real, the transaction can be unwound and fixed. Because elections are anonymous, that kind of security just isn’t possible.

None of this means that we should abandon touch-screen voting; the benefits of DRE machines are too great to throw away. But it does mean that we need to recognize its limitations, and design systems that can be accurate despite them.

Computer security experts are unanimous on what to do. (Some voting experts disagree, but I think we’re all much better off listening to the computer security experts. The problems here are with the computer, not with the fact that the computer is being used in a voting application.) And they have two recommendations:

  1. DRE machines must have a voter-verifiable paper audit trail (sometimes called a voter-verified paper ballot). This is a paper ballot printed out by the voting machine, which the voter is allowed to look at and verify. He doesn’t take it home with him. Either he looks at it on the machine behind a glass screen, or he takes the paper and puts it into a ballot box. The point of this is twofold. One, it allows the voter to confirm that his vote was recorded in the manner he intended. And two, it provides the mechanism for a recount if there are problems with the machine.

  2. Software used on DRE machines must be open to public scrutiny. This also has two functions. One, it allows any interested party to examine the software and find bugs, which can then be corrected. This public analysis improves security. And two, it increases public confidence in the voting process. If the software is public, no one can insinuate that the voting system has unfairness built into the code. (Companies that make these machines regularly argue that they need to keep their software secret for security reasons. Don’t believe them. In this instance, secrecy has nothing to do with security.)

Computerized systems with these characteristics won’t be perfect—no piece of software is—but they’ll be much better than what we have now. We need to start treating voting software like we treat any other high-reliability system. The auditing that is conducted on slot machine software in the U.S. is significantly more meticulous than what is done to voting software. The development process for mission-critical airplane software makes voting software look like a slapdash affair. If we care about the integrity of our elections, this has to change.

Proponents of DREs often point to successful elections as “proof” that the systems work. That completely misses the point. The fear is that errors in the software—either accidental or deliberately introduced—can undetectably alter the final tallies. An election without any detected problems is no more proof that the system is reliable and secure than a night in which no one broke into your house is proof that your door locks work. Maybe no one tried, or maybe someone tried and succeeded…and you don’t know it.

Even if we get the technology right, we still won’t be done. If the goal of a voting system is to accurately translate voter intent into a final tally, the voting machine is only one part of the overall system. In the 2004 U.S. election, problems with voter registration, untrained poll workers, ballot design, and procedures for handling problems resulted in far more votes not being counted than problems with the technology. But if we’re going to spend money on new voting technology, it makes sense to spend it on technology that makes the problem easier instead of harder.

This article originally appeared on openDemocracy.com.

Posted on November 10, 2004 at 9:15 AM
