Blog: November 2004 Archives
Google's desktop search software is so good that it exposes vulnerabilities on your computer that you didn't know about.
Last month, Google released a beta version of its desktop search software: Google Desktop Search. Install it on your Windows machine, and it creates a searchable index of your data files, including word processing files, spreadsheets, presentations, e-mail messages, cached Web pages and chat sessions. It's a great idea. Windows' searching capability has always been mediocre, and Google fixes the problem nicely.
There are some security issues, though. The problem is that GDS indexes and finds documents that you might prefer remain private. For example, GDS searches your browser's cache. This allows it to find old Web pages you've visited, including online banking summaries, personal messages sent from Web e-mail programs and password-protected personal Web pages.
GDS can also retrieve encrypted files. No, it doesn't break the encryption or save a copy of the key. However, it searches the Windows cache, which can bypass some encryption programs entirely. And if you install the program on a computer that multiple people use, it indexes and searches the documents and Web pages of all users.
GDS isn't doing anything wrong; it's indexing and searching documents just as it's supposed to. The vulnerabilities are due to the design of Internet Explorer, Opera, Firefox, PGP and other programs.
First, Web browsers should not store SSL-encrypted pages or pages with personal e-mail. If they do store them, they should at least ask the user first. (Until browsers do this by default, Web sites can instruct them not to cache sensitive pages; see the sketch after the third point below.)
Second, an encryption program that leaves copies of decrypted files in the cache is poorly designed. Those files are there whether or not GDS searches for them.
Third, GDS' ability to search files and Web pages of multiple users on a computer received a lot of press when it was first discovered. This is a complete nonissue. You have to be an administrator on the machine to do this, which gives you access to everyone's files anyway.
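To make the first point concrete, here is a minimal sketch of the server-side counterpart, using the Flask microframework purely as an example -- the route and page contents are hypothetical, not anything from the essay. A site can mark sensitive pages so that a well-behaved browser never writes them to the on-disk cache that tools like GDS index.

```python
# Minimal sketch: mark a sensitive page as uncacheable.
# Flask and the /account/summary route are illustrative assumptions.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/account/summary")
def account_summary():
    resp = make_response("<html>...online banking summary...</html>")
    # "no-store" tells the browser to keep no copy of this page, in
    # memory or on disk; "Pragma: no-cache" covers old HTTP/1.0 caches.
    resp.headers["Cache-Control"] = "no-store, no-cache, must-revalidate"
    resp.headers["Pragma"] = "no-cache"
    return resp

if __name__ == "__main__":
    app.run()
```

None of this fixes a browser that ignores the headers, which is why the real fix belongs in the browser itself.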
Some people blame Google for these problems and suggest, wrongly, that Google fix them. What if Google were to bow to public pressure and modify GDS to avoid showing confidential information? The underlying problems would remain: The private Web pages would still be in the browser's cache; the encryption program would still be leaving copies of the plain-text files in the operating system's cache; and the administrator could still eavesdrop on anyone's computer to which he or she has access. The only thing that would have changed is that these vulnerabilities once again would be hidden from the average computer user.
In the end, this can only harm security.
GDS is very good at searching. It's so good that it exposes vulnerabilities on your computer that you didn't know about. And now that you know about them, pressure your software vendors to fix them. Don't shoot the messenger.
This article originally appeared in eWeek.
On Dec. 14, 1999, Ahmed Ressam tried to enter the United States from Canada at Port Angeles, Wash. He had a suitcase bomb in the trunk of his car. A U.S. customs agent, Diana Dean, questioned him at the border. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean's own words, he was acting "hinky." Ressam's car was eventually searched, and he was arrested.
It wasn't any one thing that tipped Dean off; it was everything encompassed in the slang term "hinky." But it worked. The reason there wasn't a bombing at Los Angeles International Airport around Christmas 1999 was because a trained, knowledgeable security person was paying attention.
This is "behavioral assessment" profiling. It's what customs agents do at borders all the time. It's what the Israeli police do to protect their airport and airplanes. And it's a new pilot program in the United States at Boston's Logan Airport. Behavioral profiling is dangerous because it's easy to abuse, but it's also the best thing we can do to improve the security of our air passenger system.
Behavioral profiling is not the same as computerized passenger profiling. The latter has been in place for years. It's a secret system, and it's a mess. Airlines decided who would undergo secondary screening, choosing people based on ticket purchase, frequent-flyer status, and similarity to names on government watch lists. CAPPS-2 was to follow, evaluating people based on government and commercial databases and assigning a "risk" score. This system was scrapped after public outcry, but another profiling system called Secure Flight will debut next year. Again, details are secret.
The problem with computerized passenger profiling is that it simply doesn't work. Terrorists don't fit a profile and cannot be plucked out of crowds by computers. Terrorists are European, Asian, African, Hispanic, and Middle Eastern, male and female, young and old. Richard Reid, the shoe bomber, was British with a Jamaican father. Jose Padilla, arrested in Chicago in 2002 as a "dirty bomb" suspect, was a Hispanic-American. Timothy McVeigh was a white American. So was the Unabomber, who once taught mathematics at the University of California, Berkeley. The Chechens who blew up two Russian planes last August were female. Recent reports indicate that Al Qaeda is recruiting Europeans for further attacks on the United States.
Terrorists can buy plane tickets -- either one way or round trip -- with cash or credit cards. Mohamed Atta, the leader of the 9/11 plot, had a frequent-flyer gold card. They are a surprisingly diverse group of people, and any computer profiling system will just make it easier for terrorists who don't fit the profile to slip through.
Behavioral assessment profiling is different. It cuts through all of those superficial profiling characteristics and centers on the person. Trained screeners -- at Logan, state police -- look for suspicious conduct such as furtiveness or undue anxiety. Already, the program at Logan Airport has caught 20 people who were either in the country illegally or had outstanding warrants of one kind or another.
Earlier this month the ACLU of Massachusetts filed a lawsuit challenging the constitutionality of behavioral assessment profiling. The lawsuit is unlikely to succeed; the principle of "implied consent" that has been used to uphold the legality of passenger and baggage screening will almost certainly be applied in this case as well.
But the ACLU has it wrong. Behavioral assessment profiling isn't the problem. Abuse of behavioral profiling is the problem, and the ACLU has correctly identified where it can go wrong. If policemen fall back on naive profiling by race, ethnicity, age, gender -- characteristics not relevant to security -- they're little better than a computer. Instead of "driving while black," the police will face accusations of harassing people for the infraction of "flying while Arab." Their actions will increase racial tensions and make them less likely to notice the real threats. And we'll all be less safe as a result.
Behavioral assessment profiling isn't a "silver bullet." It needs to be part of a layered security system, one that includes passenger baggage screening, airport employee screening, and random security checks. It's best implemented not by police but by specially trained federal officers. These officers could be deployed at airports, sports stadiums, political conventions -- anywhere terrorism is a risk because the target is attractive. Done properly, this is the best thing to happen to air passenger security since reinforcing the cockpit door.
This article originally appeared in the Boston Globe.
Here's a good idea:
ASB and Bank Direct's internet banking customers will need to have their cellphone close to hand if they want to use the net to transfer more than $2500 into another account from December.
ASB technology and operations group general manager Clayton Wakefield announced the banks would be the first in New Zealand to implement a "two factor authentication" system to shut out online fraudsters, unveiling details of the service on Friday.
After logging on to internet banking, customers who want to remit more than $2500 into a third party account will receive an eight-digit code by text message on their cellphone, which they will need to enter online within three minutes to complete the transaction.
It's more secure than a simple username and password. It's easy to implement, with no extra hardware required (assuming your customers already have cellphones). It's easy for the customers to understand and to do. What's not to like?
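The mechanics are simple enough to sketch. The following is a toy model of the scheme as the article describes it -- the function names, the in-memory storage, and the send_sms hook are my own assumptions, not ASB's actual implementation:

```python
# Toy sketch of SMS-based second-factor confirmation for large transfers.
# All names here (start_transfer, send_sms, pending) are hypothetical.
import secrets
import time

THRESHOLD = 2500               # NZ$2,500 trigger, per the article
CODE_TTL_SECONDS = 3 * 60      # three-minute window, per the article
pending = {}                   # session_id -> (code, expiry timestamp)

def start_transfer(session_id, amount, send_sms):
    if amount <= THRESHOLD:
        return "approved"      # small transfers need no second factor
    code = f"{secrets.randbelow(10**8):08d}"    # eight random digits
    pending[session_id] = (code, time.time() + CODE_TTL_SECONDS)
    send_sms(code)             # out-of-band channel: the cellphone
    return "code_required"

def confirm_transfer(session_id, entered_code):
    code, expiry = pending.pop(session_id, (None, 0.0))
    if code is None or time.time() > expiry:
        return False           # no pending code, or the window expired
    return secrets.compare_digest(code, entered_code)
```

The security win is the second channel: an attacker who has phished the username and password still can't move serious money without also controlling the customer's phone.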
I can't make heads or tails of this story:
A security loophole at a bank allowed easy access to sensitive credit card information, the BBC has found.
The Morgan Stanley website allowed users to access account details after entering just the first digit of a credit card number.
The shortcut would only work if the account holder had set up the computer to automatically save passwords.
It seems to me that if you set up your computer to automatically save passwords and autofill them onto webpages, you shouldn't be surprised when your computer does exactly that.
Amtrak will now randomly check IDs:
Amtrak conductors have begun random checks of passengers' IDs as a precaution against terrorist attacks.
This works because, somehow, terrorists don't have IDs.
I've written about this kind of thing before. It's the kind of program that makes us no safer, and wastes everyone's time and Amtrak's money.
In the aftermath of the U.S.’s 2004 election, electronic voting machines are again in the news. Computerized machines lost votes, subtracted votes instead of adding them, and doubled votes. Because many of these machines have no paper audit trails, a large number of votes will never be counted. And while it is unlikely that deliberate voting-machine fraud changed the result of the presidential election, the Internet is buzzing with rumors and allegations of fraud in a number of different jurisdictions and races. It is still too early to tell if any of these problems affected any individual elections. Over the next several weeks we'll see whether any of the information crystallizes into something significant.
The U.S. has been here before. After 2000, voting machine problems made international headlines. The government appropriated money to fix the problems nationwide. Unfortunately, electronic voting machines -- although presented as the solution -- have largely made the problem worse. This doesn't mean that these machines should be abandoned, but they need to be designed to increase both their accuracy and people's trust in that accuracy. This is difficult, but not impossible.
Before I can discuss electronic voting machines, I need to explain why voting is so difficult. Basically, a voting system has four required characteristics:
- Accuracy. The goal of any voting system is to establish the intent of each individual voter, and translate those intents into a final tally. To the extent that a voting system fails to do this, it is undesirable. This characteristic also includes security: It should be impossible to change someone else's vote, stuff the ballot box, destroy votes, or otherwise affect the accuracy of the final tally.
- Anonymity. Secret ballots are fundamental to democracy, and voting systems must be designed to facilitate voter anonymity.
- Scalability. Voting systems need to be able to handle very large elections. One hundred million people vote for president in the United States. About 372 million people voted in India's June elections, and over 115 million in Brazil's October elections. The complexity of an election is another issue. Unlike many countries where the national election is a single vote for a person or a party, a United States voter is faced with dozens of individual elections: national, local, and everything in between.
- Speed. Voting systems should produce results quickly. This is particularly important in the United States, where people expect to learn the results of the day’s election before bedtime. It’s less important in other countries, where people don’t mind waiting days -- or even weeks -- before the winner is announced.
Through the centuries, different technologies have done their best to meet these requirements. Stones and pot shards dropped in Greek vases gave way to paper ballots dropped in sealed boxes. Mechanical voting booths, punch cards, and then optical scan machines replaced hand-counted ballots. New computerized voting machines promise even more efficiency, and Internet voting even more convenience.
But in the rush to improve speed and scalability, accuracy has been sacrificed. And to reiterate: accuracy is not how well the ballots are counted by, for example, a punch-card reader. It’s not how the tabulating machine deals with hanging chads, pregnant chads, or anything like that. Accuracy is how well the process translates voter intent into properly counted votes.
Technologies get in the way of accuracy by adding steps. Each additional step means more potential errors, simply because no technology is perfect. Consider an optical-scan voting system. The voter fills in ovals on a piece of paper, which is fed into an optical-scan reader. The reader senses the filled-in ovals and tabulates the votes. This system has several steps: voter to ballot to ovals to optical reader to vote tabulator to centralized total.
At each step, errors can occur. If the ballot is confusing, then some voters will fill in the wrong ovals. If a voter doesn’t fill them in properly, or if the reader is malfunctioning, then the sensor won’t sense the ovals properly. Mistakes in tabulation -- either in the machine or when machine totals get aggregated into larger totals -- also cause errors. A manual system -- tallying the ballots by hand, and then doing it again to double-check -- is more accurate simply because there are fewer steps.
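To see how quickly the steps add up, here is some back-of-the-envelope arithmetic; the per-step rates are invented for illustration, not measured:

```python
# Hypothetical per-step error rates for an optical-scan pipeline.
# Even small independent error rates compound across the steps.
steps = {
    "voter fills in ovals": 0.02,       # confusing ballot, stray marks
    "reader senses ovals": 0.01,        # miscalibrated or dirty scanner
    "tabulation and aggregation": 0.001,  # software or totaling mistake
}
p_ok = 1.0
for p_err in steps.values():
    p_ok *= 1 - p_err                   # must survive every step
print(f"ballots surviving every step without error: {p_ok:.1%}")  # ~96.9%
```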
The error rates in modern systems can be significant. Some voting technologies have a 5% error rate: one in twenty people who vote using the system don’t have their votes counted properly. This system works anyway because most of the time errors don’t matter. If you assume that the errors are uniformly distributed -- in other words, that they affect each candidate with equal probability -- then they won’t affect the final outcome except in very close races. So we’re willing to sacrifice accuracy to get a voting system that will more quickly handle large and complicated elections. In close races, errors can affect the outcome, and that’s the point of a recount. A recount is an alternate system of tabulating votes: one that is slower (because it’s manual), simpler (because it just focuses on one race), and therefore more accurate.
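A quick simulation makes the close-race point concrete. This is my own toy model, not anything from the essay: each miscounted ballot records a uniformly random choice, and we ask how often the leading candidate still wins.

```python
# Toy model: uniformly distributed errors and their effect on outcomes.
import random

def simulate(n_voters, support_a, error_rate, trials=200):
    """Return the fraction of trials in which candidate A wins.
    An 'error' makes a ballot record a uniformly random choice."""
    a_wins = 0
    for _ in range(trials):
        a = 0
        for _ in range(n_voters):
            intended_a = random.random() < support_a
            if random.random() < error_rate:        # this ballot miscounts
                recorded_a = random.random() < 0.5  # uniform error
            else:
                recorded_a = intended_a
            a += recorded_a
        if a > n_voters - a:
            a_wins += 1
    return a_wins / trials

print(simulate(10_000, 0.52, 0.05))    # ~1.0: errors don't change the outcome
print(simulate(10_000, 0.5005, 0.05))  # ~0.5: errors can decide the race
```

A 52-48 race shrugs off a 5 percent error rate; a race within a tenth of a point becomes a coin flip, which is exactly when recounts matter.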
Note that this is only true if everyone votes using the same machines. If parts of town that tend to support candidate A use a voting system with a higher error rate than the voting system used in parts of town that tend to support candidate B, then the results will be skewed against candidate A. This is an important consideration in voting accuracy, although tangential to the topic of this essay.
With this background, the issue of computerized voting machines becomes clear. Actually, "computerized voting machines" is a bad choice of words. Many of today's voting technologies involve computers. Computers tabulate the ballots from both punch-card and optical-scan machines. The current debate centers around all-computer voting systems, primarily touch-screen systems, called Direct Recording Electronic (DRE) machines. (The voting system used in India's most recent election -- a computer with a series of buttons -- is subject to the same issues.) In these systems the voter is presented with a list of choices on a screen, perhaps multiple screens if there are multiple elections, and he indicates his choice by touching the screen. These machines are easy to use, produce final tallies immediately after the polls close, and can handle very complicated elections. They also can display instructions in different languages and allow the blind or otherwise handicapped to vote without assistance.
They’re also more error-prone. The very same software that makes touch-screen voting systems so friendly also makes them inaccurate. And even worse, they’re inaccurate in precisely the worst possible way.
Bugs in software are commonplace, as any computer user knows. Computer programs regularly malfunction, sometimes in surprising and subtle ways. This is true for all software, including the software in computerized voting machines. For example:
- In Fairfax County, VA, in 2003, a programming error in the electronic voting machines caused them to mysteriously subtract 100 votes from one particular candidate's totals.
- In San Bernardino County, CA, in 2001, a programming error caused the computer to look for votes in the wrong portion of the ballot in 33 local elections, which meant that no votes registered on those ballots for that election. A recount was done by hand.
- In Volusia County, FL, in 2000, an electronic voting machine gave Al Gore a final vote count of negative 16,022 votes.
- In the 2003 election in Boone County, IA, the electronic vote-counting equipment showed that more than 140,000 votes had been cast in the Nov. 4 municipal elections. The county has only 50,000 residents, and fewer than half of them were eligible to vote in this election.
There are literally hundreds of similar stories.
What’s important about these problems is not that they resulted in a less accurate tally, but that the errors were not uniformly distributed; they affected one candidate more than the other. This means that you can’t assume that errors will cancel each other out and not affect the election; you have to assume that any error will skew the results significantly.
Another issue is that software can be hacked. That is, someone can deliberately introduce an error that modifies the result in favor of his preferred candidate. This has nothing to do with whether the voting machines are hooked up to the Internet on election day. The threat is that the computer code could be modified while it is being developed and tested, either by one of the programmers or a hacker who gains access to the voting machine company’s network. It’s much easier to surreptitiously modify a software system than a hardware system, and it’s much easier to make these modifications undetectable.
A third issue is that software problems have much further-reaching effects. A problem with a manual machine affects just that machine. A software problem, whether accidental or intentional, can affect many thousands of machines -- and skew the results of an entire election.
Some have argued in favor of touch-screen voting systems, citing the millions of dollars that are handled every day by ATMs and other computerized financial systems. That argument ignores another vital characteristic of voting systems: anonymity. Computerized financial systems get most of their security from audit. If a problem is suspected, auditors can go back through the records of the system and figure out what happened. And if the problem turns out to be real, the transaction can be unwound and fixed. Because elections are anonymous, that kind of security just isn’t possible.
None of this means that we should abandon touch-screen voting; the benefits of DRE machines are too great to throw away. But it does mean that we need to recognize their limitations, and design systems that can be accurate despite them.
Computer security experts are unanimous on what to do. (Some voting experts disagree, but I think we’re all much better off listening to the computer security experts. The problems here are with the computer, not with the fact that the computer is being used in a voting application.) And they have two recommendations:
- DRE machines must have a voter-verifiable paper audit trail (sometimes called a voter-verified paper ballot). This is a paper ballot printed out by the voting machine, which the voter is allowed to look at and verify. He doesn't take it home with him. Either he looks at it on the machine behind a glass screen, or he takes the paper and puts it into a ballot box. The point of this is twofold. One, it allows the voter to confirm that his vote was recorded in the manner he intended. And two, it provides the mechanism for a recount if there are problems with the machine. (A toy illustration of such a recount follows the second recommendation.)
- Software used on DRE machines must be open to public scrutiny. This also has two functions. One, it allows any interested party to examine the software and find bugs, which can then be corrected. This public analysis improves security. And two, it increases public confidence in the voting process. If the software is public, no one can insinuate that the voting system has unfairness built into the code. (Companies that make these machines regularly argue that they need to keep their software secret for security reasons. Don’t believe them. In this instance, secrecy has nothing to do with security.)
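Here is the toy illustration promised above. The machine, the bug, and the names are all mine, invented to show why the paper trail matters: the electronic tally can be silently corrupted, but a recount of the voter-verified paper records still recovers the true result.

```python
# Toy DRE with a voter-verifiable paper trail. Everything here is a
# hypothetical illustration, not a real voting system.
class DREMachine:
    def __init__(self):
        self.electronic_tally = {}
        self.paper_ballots = []    # printed records the voters verified

    def cast_vote(self, candidate):
        # A hypothetical bug of the Fairfax County variety: the electronic
        # count silently drops every hundredth vote for one candidate.
        buggy = candidate == "Smith" and len(self.paper_ballots) % 100 == 99
        if not buggy:
            self.electronic_tally[candidate] = (
                self.electronic_tally.get(candidate, 0) + 1)
        self.paper_ballots.append(candidate)   # the copy the voter checked

    def recount_from_paper(self):
        tally = {}
        for ballot in self.paper_ballots:
            tally[ballot] = tally.get(ballot, 0) + 1
        return tally
```

When the electronic tally and the paper recount disagree, the paper -- the record each voter actually verified -- is what should govern.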
Computerized systems with these characteristics won’t be perfect -- no piece of software is -- but they’ll be much better than what we have now. We need to start treating voting software like we treat any other high-reliability system. The auditing that is conducted on slot machine software in the U.S. is significantly more meticulous than what is done to voting software. The development process for mission-critical airplane software makes voting software look like a slapdash affair. If we care about the integrity of our elections, this has to change.
Proponents of DREs often point to successful elections as "proof" that the systems work. That completely misses the point. The fear is that errors in the software -- either accidental or deliberately introduced -- can undetectably alter the final tallies. An election without any detected problems is no more proof that the system is reliable and secure than a night when no one broke into your house is proof that your door locks work. Maybe no one tried, or maybe someone tried and succeeded...and you don't know it.
Even if we get the technology right, we still won’t be done. If the goal of a voting system is to accurately translate voter intent into a final tally, the voting machine is only one part of the overall system. In the 2004 U.S. election, problems with voter registration, untrained poll workers, ballot design, and procedures for handling problems resulted in far more votes not being counted than problems with the technology. But if we’re going to spend money on new voting technology, it makes sense to spend it on technology that makes the problem easier instead of harder.
This article originally appeared on openDemocracy.
Prisoner is freed from jail based on a forged fax:
In West Memphis District Court yesterday, Tristian Wilson was set to appear on the docket for a bond hearing on the charges. When he did not appear, Judge William "Pal" Rainey inquired about his release and found that a jail staff member released Wilson by the authority of a fax sent to the jail late Saturday night.
According to Assistant Chief Mike Allen, a fax was sent to the jail which stated "Upon decision between Judge Rainey and the West Memphis Police Department CID Division Tristian Wilson is to be released immediately on this date of October 30, 2004 with a waiver of all fines, bonds and settlements per Judge Rainey and Detective McDugle."
Jail Administrator Mickey Thornton said that these faxes are part of a normal routine for the jail when it comes to releasing prisoners; however, this fax was different.
Faxes are fascinating. They're treated like original documents, but lack any of the authentication mechanisms that we've developed for original documents: letterheads, watermarks, signatures. Most of the time there's no problem, but sometimes you can exploit people's innate trust in faxes to good effect.
Recently I talked about a security vulnerability in Lexar's JumpDrives. I received this e-mail from the company:
From: Diane Carlini
Subject: Lexar's JumpDrive
@stake's findings revealed a slight security exposure in scenarios where an experienced hacker could potentially monitor and gain access to the secure area. This was only the case in version 1.0 which included SafeGuard. Lexar's JumpDrive Secure 2.0 device now includes software based on 256-bit AES Encryption Technology. With this new product, JumpDrive Secure 2.0 offers the highest level of data protection that is commonly available today.
Registered JumpDrive Secure customers will be contacted to inform them of this Security Advisory found in version 1.
I have no technical information, either from Lexar or @Stake, that verifies or refutes this claim.
Yet another one-time pad system. Not a lot of detail on the website, but this bit says it all:
"Based on patent-pending technology and 18 years of exhaustive research, Vadium's AlphaCipher Encryption System (tm), implements a true digital One-Time-Pad ("OTP") cipher. The One-Time Pad is the only method of encrypting data where the strength of protection is immune to the mounting threats posed by breakthroughs in advanced mathematics and the ever-increasing processing power of computers. The consistently accelerated increases in computing power are proven to be a present and severe threat to all the other prevalent encryption methods."
I am continually amazed at the never-ending stream of one-time pad systems. Every few months another company believes that they have finally figured out how to make a commercial one-time pad system. They announce it, are uniformly laughed at, and then disappear. It's cryptography's perpetual motion machine.
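For readers who haven't seen the argument: the cipher itself is trivial; what can't be productized is the key management. A sketch (mine, not Vadium's) in a dozen lines of Python:

```python
# One-time pad: XOR the message with a truly random key of equal length.
# The cipher is the easy part; generating, delivering, and never reusing
# the key is the part no commercial product can wish away.
import secrets

def otp_encrypt(plaintext: bytes):
    key = secrets.token_bytes(len(plaintext))  # key as long as the message
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key   # now deliver this key securely...

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, key) == b"attack at dawn"
```

If you already have a secure channel for a key as long as the message, you could have sent the message over that channel instead -- which is the commercial one-time pad pitch in a nutshell.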
Vadium Technology's website.
My essay on one-time pads.
Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for security, year after year.
The problem is that all the money we spend isn't fixing the problem. We're paying, but we still end up with insecurities.
The problem is insecure software. It's bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the effects of insecure software.
And that's the problem. We're not paying to improve the security of the underlying software. We're paying to deal with the problem rather than to fix it.
The only way to fix this problem is for vendors to fix their software, and they won't do it until it's in their financial best interests to do so.
Today, the costs of insecure software aren't borne by the vendors that produce the software. In economics, this is known as an externality, the cost of a decision that's borne by people other than those making the decision.
There are no real consequences to the vendors for having bad security or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.
If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security -- especially the security of their customers -- it also needs to be in their financial best interests.
Liability law is a way to make it in those organizations' best interests. Raising the risk of liability raises the costs of doing it wrong and therefore increases the amount of money a CEO is willing to spend to do it right. Security is risk management; liability fiddles with the risk equation.
Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.
Clearly, this isn't all or nothing. There are many parties involved in a typical software attack. There's the company that sold the software with the vulnerability in the first place. There's the person who wrote the attack tool. There's the attacker himself, who used the tool to break into a network. There's the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn't fall on the shoulders of the software vendor, just as 100% shouldn't fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.
We will always pay for security. If software vendors have liability costs, they'll pass those on to us. It might not be cheaper than what we're paying today. But as long as we're going to pay, we might as well pay to fix the problem. Forcing the software vendor to pay to fix the problem and then pass those costs on to us means that the problem might actually get fixed.
Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they're entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.
Information security isn't a technological problem. It's an economics problem. And the way to improve information security is to fix the economics problem. Do that, and everything else will follow.
This essay originally appeared in Computerworld.
An interesting rebuttal of this piece is here.
Just received this e-mail message, with an attachment entitled "firstname.lastname@example.org." The file is really an executable .com file, presumably one harboring a virus. Clever social engineering attack, and one I had not seen before.
From: ((some fake address))
Subject: Message could not be delivered
Dear user email@example.com,
Your email account has been used to send a huge amount of spam messages during the last week. Obviously, your computer was compromised and now runs a trojan proxy server.
Please follow our instruction in the attached file in order to keep your computer safe.
counterpane.com user support team.