Blog: April 2005 Archives

The Emergence of a Global Infrastructure for Mass Registration and Surveillance

The International Campaign Against Mass Surveillance has issued a report (dated April 2005): “The Emergence of a Global Infrastructure for Mass Registration and Surveillance.” It’s a chilling assessment of the current international trends towards global surveillance. Most of it you will have seen before, although it’s good to have everything in one place. I am particularly pleased that the report explicitly states that these measures do not make us any safer, but only create the illusion of security.

The global surveillance initiatives that governments have embarked upon do not make us more secure. They create only the illusion of security.

Sifting through an ocean of information with a net of bias and faulty logic, they yield outrageous numbers of false positives and false negatives. The dragnet approach might make the public feel that something is being done, but the dragnet is easily circumvented by determined terrorists who are either not known to authorities, or who use identity theft to evade them.

For the statistically large number of people that will be wrongly identified or wrongly assessed as a risk under the system, the consequences can be dire.

At the same time, the democratic institutions and protections, which would be the safeguards of individuals’ personal security, are being weakened. And national sovereignty and the ability of national governments to protect citizens against the actions of other states (when they are willing) are being compromised as security functions become more and more deeply integrated.

The global surveillance dragnet diverts crucial resources and efforts away from the kind of investments that would make people safer. What is required is good information about specific threats, not crude racial profiling and useless information on the nearly 100 percent of the population that poses no threat whatsoever.
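
The false-positive problem the report alludes to is a base-rate effect, and it’s easy to see with a toy calculation. Here’s a quick sketch in Python; every number in it is invented for illustration, not taken from the report:

    # Base-rate arithmetic with invented numbers: even an accurate
    # screening system is swamped by false positives when what it is
    # looking for is rare.
    population = 300_000_000     # people screened (assumed)
    terrorists = 1_000           # actual bad actors (assumed)
    sensitivity = 0.99           # P(flagged | terrorist)
    false_positive_rate = 0.01   # P(flagged | innocent)

    true_pos = terrorists * sensitivity
    false_pos = (population - terrorists) * false_positive_rate
    print(f"innocents flagged: {false_pos:,.0f}")
    print(f"chance a flagged person is a terrorist: {true_pos / (true_pos + false_pos):.3%}")

With these made-up but not unreasonable numbers, about three million innocent people get flagged, and a flagged person is a terrorist roughly 0.03 percent of the time.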

Posted on April 29, 2005 at 8:54 AM • 11 Comments

RFID Passport Security

According to a Wired article, the State Department is reconsidering a security measure to protect privacy that it previously rejected.

The solution would require an RFID reader to provide a key or password before it could read data embedded on an RFID passport’s chip. It would also encrypt data as it’s transmitted from the chip to a reader so that no one could read the data if they intercepted it in transit.

The devil is in the details, but this is a great idea. It means that only readers that know a secret data string can query the RFID chip inside the passport. Of course, this is a systemwide global secret and will be in the hands of every country, but it’s still a great idea.
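
In outline, the approach might look something like the following sketch: keyed access plus an encrypted channel. This is my illustration of the general idea, not the actual e-passport protocol; the key handling, the message flow, and the class are all invented, and it assumes AES-GCM from the third-party “cryptography” package.

    # A minimal sketch of keyed access plus an encrypted channel; NOT
    # the actual e-passport design. Everything here is invented for
    # illustration.
    import hashlib
    import hmac
    import os

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    ACCESS_KEY = os.urandom(32)   # secret provisioned to chip and readers

    class PassportChip:
        def __init__(self, key, data):
            self.key = key
            self.data = data
            self.nonce = None

        def challenge(self):
            # Chip issues a fresh nonce; the reader must prove it knows
            # the access key before any data is released.
            self.nonce = os.urandom(16)
            return self.nonce

        def read(self, proof):
            expected = hmac.new(self.key, self.nonce, hashlib.sha256).digest()
            if not hmac.compare_digest(proof, expected):
                raise PermissionError("reader does not know the access key")
            # Encrypt the response so an eavesdropper on the radio link
            # learns nothing in transit.
            iv = os.urandom(12)
            return iv + AESGCM(self.key).encrypt(iv, self.data, None)

    def reader_session(chip, key):
        nonce = chip.challenge()
        proof = hmac.new(key, nonce, hashlib.sha256).digest()
        blob = chip.read(proof)
        return AESGCM(key).decrypt(blob[:12], blob[12:], None)

    chip = PassportChip(ACCESS_KEY, b"name=DOE/JOHN;passport=12345678")
    print(reader_session(chip, ACCESS_KEY))   # authorized read succeeds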

It’s nice to read that the State Department is taking privacy concerns seriously.

Frank Moss, deputy assistant secretary for passport services, told Wired News on Monday that the government was “taking a very serious look” at the privacy solution in light of the 2,400-plus comments the department received about the e-passport rule and concerns expressed last week in Seattle by participants at the Computers, Freedom and Privacy conference. Moss said recent work on the passports conducted with the National Institute of Standards and Technology had also led him to rethink the issue.

“Basically what changed my mind was a recognition that the read rates may have actually been able to be more than 10 centimeters, and also recognition that we had to do everything possible to protect the security of people,” Moss said.

The next step is for them to actually implement this countermeasure, and not just consider it. And the step after that is for us to get our hands on some test passports to see if they’ve implemented it well.

Posted on April 28, 2005 at 8:30 AM • 37 Comments

Blowfish on "24"

Two nights ago, my encryption algorithm Blowfish was mentioned on the Fox show “24.” An alleged computer expert from the fictional anti-terror agency CTU was trying to retrieve some files from a terrorist’s laptop. This is the exchange between the agent and the terrorist’s girlfriend:

They used Blowfish algorithm.

How can you tell?

By the tab on the file headers.

Can you decrypt it?

CTU has a proprietary algorithm. It shouldn’t take that long. We’ll start by trying to hack the password. Let’s start with the basics. Write down nicknames, birthdays, pets—anything you think he might have used.
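
Identifying the algorithm “by the tab on the file headers” is actually the one realistic detail here: many encryption tools really do label their output. A toy fingerprinter, in Python; the “Salted__” magic genuinely is what OpenSSL’s enc command writes, while the Blowfish tag and the file name are invented for the sketch:

    # Toy file fingerprinting: many encryption tools mark their output,
    # so identifying a cipher from file headers isn't pure Hollywood.
    MAGIC = {
        b"Salted__": "OpenSSL enc output (cipher chosen on the command line)",
        b"BFSH": "hypothetical Blowfish file container",
        b"PK\x03\x04": "ZIP archive (entries may be encrypted)",
    }

    def identify(path):
        with open(path, "rb") as f:
            header = f.read(8)
        for magic, label in MAGIC.items():
            if header.startswith(magic):
                return label
        return "unknown format"

    print(identify("suspect_files.bin"))  # hypothetical evidence file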

Posted on April 27, 2005 at 12:26 PM • 114 Comments

The PITAC Report on CyberSecurity

I finally got around to reading the President’s Information Technology Advisory Committee (PITAC) report entitled “Cyber Security: A Crisis of Prioritization” (dated February 2005). The report looks at the current state of federal involvement in cybersecurity research, and makes recommendations for the future. It’s a good report, and one which the administration would do well to listen to.

The report’s recommendations rest on two observations: 1) cybersecurity research is focused primarily on current threats rather than long-term ones, and 2) there simply aren’t enough cybersecurity researchers, and no good mechanism for producing them. The federal government isn’t doing enough to foster cybersecurity research, and the effects of this shortfall will be felt more in the long term than in the short term.

To remedy this problem, the report makes four specific recommendations (in much more detail than I summarize here). One, the government needs to increase funding for basic cybersecurity research. Two, the government needs to increase the number of researchers working in cybersecurity. Three, the government needs to better foster the transfer of technology from research to product development. And four, the government needs to improve its own cybersecurity coordination and oversight. Four good recommendations.

More specifically, the report lists ten technologies that need more research. They are (not in any priority order):

Authentication Technologies
Secure Fundamental Protocols
Secure Software Engineering and Software Assurance
Holistic System Security
Monitoring and Detection
Mitigation and Recovery Methodologies
Cyber Forensics
Modeling and Testbeds for New Technologies
Metrics, Benchmarks, and Best Practices
Non-Technology Issues that Can Compromise Cyber Security

It’s a good list, and I am especially pleased to see the tenth item—one that is usually forgotten. I would add something on the order of “Dynamic Cyber Security Systems”—I think we need serious basic research into how systems should react to new threats and how to update the security of already-fielded systems—but that’s all I would change.

The report itself is a bit repetitive, but it’s definitely worth skimming.

Posted on April 27, 2005 at 8:52 AM • 12 Comments

Ants Staging Ambushes

From Nature via BoingBoing:

Using a home-made trap, a tiny species of ant is capable of ensnaring prey much larger than itself and tearing it to pieces.

The ants (Allomerus decemarticulatus), which live in Amazonian plants called Hirtella physophora, construct a honeycomb-like structure out of their host plant’s fibres from which they can stage an ambush.

The worker ants hide in the holes of this death trap with their mouths open wide, waiting for locusts, butterflies or other insects to land. When prey arrives they quickly seize its extremities, pulling on legs, arms and antennae until the hostage is rendered immobile. Once trapped, other ants from the colony arrive to sting and bite the prey until it is paralyzed.

Posted on April 26, 2005 at 9:52 AM • 18 Comments

New Risks of Automatic Speedtraps

Every security system brings about new threats. Here’s an example:

The RAC Foundation yesterday called for an urgent review of the first fixed motorway speed cameras.

Far from improving drivers’ behaviour, motorists are now bunching at high speeds between junctions 14-18 on the M4 in Wiltshire, said Edmund King, the foundation’s executive director.

The cameras were introduced by the Wiltshire and Swindon Safety Camera Partnership in an attempt to reduce accidents on a stretch of the motorway. But most motorists are now travelling at just under 79mph, the speed at which they face being fined.

In response to automated speedtraps, drivers are adopting the obvious tactic of driving just below the trigger speed for the cameras, presumably on cruise control. So instead of cars on the road traveling at a spectrum of speeds with reasonable gaps between them, we are seeing “pelotons” of cars traveling closely bunched together at the same high speed, presenting unfamiliar hazards to each other and to law-abiding slower road-users.

The result is that average speeds are going up, and not down.

Posted on April 25, 2005 at 3:12 PM • 44 Comments

Security Trade-Offs

An essay by an anonymous CSO. This is how it begins:

On any given day, we CSOs come to work facing a multitude of security risks. They range from a sophisticated hacker breaching the network to a common thug picking a lock on the loading dock and making off with company property. Each of these scenarios has a probability of occurring and a payout (in this case, a cost to the company) should it actually occur. To guard against these risks, we have a finite budget of resources in the way of time, personnel, money and equipment—poker chips, if you will.

If we’re good gamblers, we put those chips where there is the highest probability of winning a high payout. In other words, we guard against risks that are most likely to occur and that, if they do occur, will cost the company the most money. We could always be better, but as CSOs, I think we’re getting pretty good at this process. So lately I’ve been wondering—as I watch spending on national security continue to skyrocket, with diminishing marginal returns—why we as a nation can’t apply this same logic to national security spending. If we did this, the war on terrorism would look a lot different. In fact, it might even be over.

The whole thing is worth reading.

Posted on April 22, 2005 at 12:32 PM • 20 Comments

Universal Automobile Surveillance

Universal automobile surveillance comes to the United Arab Emirates:

IBM will begin installing a “Smart Box” system in vehicles in the United Arab Emirates next year, potentially generating millions in traffic fines for the Gulf state. The UAE signed a $125 million contract with IBM today to provide the high-tech traffic monitoring and speed-enforcing system in which a GPS-enabled “Smart Box” would be installed in cars to provide a voice warning if the driver exceeds the local speed limit for wherever he may be driving. If the voice warning is ignored, the system would use a GSM/GPRS link to beam the car’s speed, identity and location to the police so that a ticket could be issued. The system would also track and monitor any other driving violations, including “reckless behavior.”

This kind of thing is also being implemented in the UK, for insurance purposes.

Posted on April 22, 2005 at 8:30 AM • 36 Comments

Biometric Passports in the UK

The UK government tried, and failed, to get a national ID. Now they’re adding biometrics to their passports.

Financing for the Passport Office is planned to rise from £182 million a year to £415 million a year by 2008 to cope with the introduction of biometric information such as fingerprints.

A Home Office spokesman said the aim was to cut out the 1,500 fraudulent applications found through the postal system last year alone.

Okay, let’s do the math. Eliminating 1,500 instances of fraud will cost £233 million a year. That comes to £155,000 per instance of fraud.
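
Or, as a two-line sanity check of that arithmetic:

    # The trade-off arithmetic from above, spelled out.
    annual_increase = (415 - 182) * 1_000_000   # pounds per year
    fraud_prevented = 1_500                     # postal fraud cases per year
    print(annual_increase / fraud_prevented)    # about 155,000 pounds each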

Does this kind of security trade-off make sense to anyone? Is there absolutely nothing better the UK government can do to ensure security and safety with £233 million a year?

Yes, adding additional biometrics to passports—there’s already a picture—will make them more secure. But I don’t think that the additional security is worth the money and the additional risks. It’s a bad security trade-off.

And I’m not a fan of national IDs.

Posted on April 21, 2005 at 1:18 PM • 23 Comments

Wi-Fi Liabilities

Interesting law review article:

Suppose you turn on your laptop while sitting at the kitchen table at home and respond OK to a prompt about accessing a nearby wireless Internet access point owned and operated by a neighbor. What potential liability may ensue from accessing someone else’s wireless access point? How about intercepting wireless connection signals? What about setting up an open or unsecured wireless access point in your house or business? Attorneys can expect to grapple with these issues and other related questions as the popularity of wireless technology continues to increase.

This paper explores several theories of liability involving both the accessing and operating of wireless Internet, including the Computer Fraud and Abuse Act, wiretap laws, as well as trespass to chattels and other areas of common law. The paper concludes with a brief discussion of key policy considerations.

Posted on April 21, 2005 at 9:16 AM • 36 Comments

Lighters Banned on Airplanes

Lighters are now banned on U.S. commercial flights, but not matches.

The Senators who proposed the bill point to Richard Reid, who unsuccessfully tried to light explosives on an airplane with matches. They were worried that a lighter might have worked.

That, of course, is silly. The reason Reid failed is that he tried to light the explosives in his seat, so he could watch the faces of those around him. If he’d gone into the lavatory and lit them in private, he would have succeeded.

Hence, the ban is silly.

But there’s a serious problem here. Airport security screeners are much better at detecting explosives when the detonation mechanism is attached. Explosives without any detonation mechanism—like Richard Reid’s—are much harder to detect. So are explosives carried by one person and a detonation device carried by another. I’ve heard that this was the technique the Chechen women used to blow up a Russian airplane.

Posted on April 20, 2005 at 4:21 PM • 35 Comments

Processing Exit Visas

From Federal Computer Week:

The Homeland Security Department will choose in the next 60 days which of three procedures it will use to track international visitors leaving the United States, department officials said today.

A report evaluating the three methods under consideration is due in the next few weeks, said Anna Hinken, spokeswoman for US-VISIT, the program that screens foreign nationals entering and exiting the country to weed out potential terrorists.

The first process uses kiosks located throughout an airport or seaport. An “exit attendant”—who would be a contract worker, Hinken said—checks the traveler’s documents. The traveler then steps to the station, scans both index fingers and has a digital photo taken. The station prints out a receipt that verifies the passenger has checked out.

The second method requires the passenger to present the receipt when reaching the departure gate. An exit attendant will scan the receipt and one of the passenger’s index fingers using a wireless handheld device. If the passenger’s fingerprint matches the identity on the receipt, the attendant returns the receipt and the passenger can board.

The third procedure uses just the wireless device at the gate. The screening officer scans the traveler’s fingerprints and takes a picture with the device, which is similar in size to tools that car-rental companies use, Hinken said. The device wirelessly checks the US-VISIT database. Once the traveler’s identity is confirmed as safe, the officer prints out a receipt and the visitor can pass.

Properly evaluating this trade-off would look at the relative ease of attacking the three systems, the relative costs of the three systems, and the relative speed and convenience—to the traveller—of the three systems. My guess is that the system that requires the least amount of interaction with a person when boarding the plane is best.

Posted on April 20, 2005 at 8:16 AM • 28 Comments

A Taxonomy of Privacy

Interesting law review paper by Daniel Solove. Here’s the abstract:

Privacy is a concept in disarray. Nobody can articulate what it means. As one commentator has observed, privacy suffers from “an embarrassment of meanings.” Privacy is far too vague a concept to guide adjudication and lawmaking, as abstract incantations of the importance of “privacy” do not fare well when pitted against more concretely-stated countervailing interests.

In 1960, the famous torts scholar William Prosser attempted to make sense of the landscape of privacy law by identifying four different interests. But Prosser focused only on tort law, and the law of information privacy is significantly more vast and complex, extending to Fourth Amendment law, the constitutional right to information privacy, evidentiary privileges, dozens of federal privacy statutes, and hundreds of state statutes. Moreover, Prosser wrote over 40 years ago, and new technologies have given rise to a panoply of new privacy harms.

A new taxonomy to understand privacy violations is thus sorely needed. This article develops a taxonomy to identify privacy problems in a comprehensive and concrete manner. It endeavors to guide the law toward a more coherent understanding of privacy and to serve as a framework for the future development of the field of privacy law.

The paper is a follow-on to his previous paper, “Conceptualizing Privacy.”

Posted on April 19, 2005 at 1:32 PM • 3 Comments

Failures of Airport Screening

According to the AP:

Security at American airports is no better under federal control than it was before the Sept. 11 attacks, a congressman says two government reports will conclude.

The Government Accountability Office, the investigative arm of Congress, and the Homeland Security Department’s inspector general are expected to release their findings soon on the performance of Transportation Security Administration screeners.

This finding will not surprise anyone who has flown recently. How does anyone expect competent security from screeners who don’t know the difference between books and books of matches? Only two books of matches are now allowed on flights; you can take as many reading books as you can carry.

The solution isn’t to privatize the screeners, just as the solution in 2001 wasn’t to make them federal employees. It’s a much more complex problem.

I wrote about it in Beyond Fear (pages 153-4):

No matter how much training they get, airport screeners routinely miss guns and knives packed in carry-on luggage. In part, that’s the result of human beings having developed the evolutionary survival skill of pattern matching: the ability to pick out patterns from masses of random visual data. Is that a ripe fruit on that tree? Is that a lion stalking quietly through the grass? We are so good at this that we see patterns in anything, even if they’re not really there: faces in inkblots, images in clouds, and trends in graphs of random data. Generating false positives helped us stay alive; maybe that wasn’t a lion that your ancestor saw, but it was better to be safe than sorry.

Unfortunately, that survival skill also has a failure mode. As talented as we are at detecting patterns in random data, we are equally terrible at detecting exceptions in uniform data. The quality-control inspector at Spacely Sprockets, staring at a production line filled with identical sprockets looking for the one that is different, can’t do it. The brain quickly concludes that all the sprockets are the same, so there’s no point paying attention. Each new sprocket confirms the pattern. By the time an anomalous sprocket rolls off the assembly line, the brain simply doesn’t notice it. This psychological problem has been identified in inspectors of all kinds; people can’t remain alert to rare events, so they slip by.

The tendency for humans to view similar items as identical makes it clear why airport X-ray screening is so difficult. Weapons in baggage are rare, and the people studying the X-rays simply lose the ability to see the gun or knife. (And, at least before 9/11, there was enormous pressure to keep the lines moving rather than double-check bags.) Steps have been put in place to try to deal with this problem: requiring the X-ray screeners to take frequent breaks, artificially imposing the image of a weapon onto a normal bag in the screening system as a test, slipping a bag with a weapon into the system so that screeners learn it can happen and must expect it. Unfortunately, the results have not been very good.

This is an area where the eventual solution will be a combination of machine and human intelligence. Machines excel at detecting exceptions in uniform data, so it makes sense to have them do the boring repetitive tasks, eliminating many, many bags while having a human sort out the final details. Think about the sprocket quality-control inspector: If he sees 10,000 negatives, he’s going to stop seeing the positives. But if an automatic system shows him only 100 negatives for every positive, there’s a greater chance he’ll see them.
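
The numbers behind that last point are worth making concrete. A sketch with invented rates shows how a machine prefilter changes the inspector’s job:

    # Invented rates: what a machine prefilter does to the human
    # screener's workload. Without it, true positives are one in a
    # hundred thousand; with it, roughly one in a thousand, a rate a
    # person can plausibly stay alert to.
    bags = 1_000_000
    threats = 10                 # assumed real weapons in the stream
    machine_clear_rate = 0.99    # machine confidently clears 99% of
                                 # harmless bags (and, by assumption,
                                 # never clears a real threat)

    escalated = (bags - threats) * (1 - machine_clear_rate) + threats
    print(f"human reviews {escalated:,.0f} bags instead of {bags:,}")
    print(f"one threat per {escalated / threats:,.0f} bags reviewed")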

Paying the screeners more will attract a smarter class of worker, but it won’t solve the problem.

Posted on April 19, 2005 at 9:22 AM • 23 Comments

State-Sponsored Identity Theft

In an Ohio sting operation at a strip bar, a 22-year-old student intern with the United States Marshals Service was given a fake identity so she could work undercover at the club. But instead of giving her a fabricated identity, the police gave her the identity of another woman living in another Ohio city. And they didn’t tell the other woman.

Oddly enough, this is legal. According to Ohio’s identity theft law, the police are allowed to do it. More specifically, the crime cannot be prosecuted if:

The person or entity using the personal identifying information is a law enforcement agency, authorized fraud personnel, or a representative of or attorney for a law enforcement agency or authorized fraud personnel and is using the personal identifying information in a bona fide investigation, an information security evaluation, a pretext calling evaluation, or a similar matter.

I have to admit that I’m stunned. I naively assumed that the police would have a list of Social Security numbers that would never be given to real people, numbers that could be used for purposes such as this. Or at least that they would use identities of people from other parts of the country after asking for permission. (I’m sure people would volunteer to help out the police.) It never occurred to me that they would steal the identity of random citizens. What could they be thinking?

Posted on April 18, 2005 at 3:02 PM • 33 Comments

Wi-Fi Minefield

The U.S. is laying a minefield in Iraq that can be controlled by a soldier with a wi-fi-enabled laptop. Details via AP.

Put aside arguments about the ethics and efficacy of landmines. Assume they exist and are being used. Given that, the question is whether radio-controlled landmines are better or worse than regular landmines. This comment, for example, seems to get it wrong:

“We’re concerned the United States is going to field something that has the capability of taking the man out of the loop when engaging the target,” said senior researcher Mark Hiznay of Human Rights Watch. “Or that we’re putting a 19-year-old soldier in the position of pushing a button when a blip shows up on a computer screen.”

With conventional landmines, the man is out of the loop as soon as he lays the mine. Even a 19-year-old seeing a blip on a computer screen is better than a completely automatic system.

Were I the U.S. military, I would worry more about whether the mines could be triggered accidentally by radio interference, and about the enemy jamming the radio control mechanism.

Posted on April 18, 2005 at 11:15 AM • 19 Comments

Brandeis Quote on Openness

Here is the definitive citation—and text—of this often-used Brandeis quote.

US Supreme Court Justice Louis Brandeis wrote in Harper’s Weekly, Dec 20 1913:

Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.

Edited to add:

Apparently the authoritative cite is to his book, not the magazine—in legal writing, books are more authoritative than magazines.

Louis D. Brandeis, Other People’s Money and How the Bankers Use It 92 (1914): “Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.”

Posted on April 18, 2005 at 8:40 AM • 6 Comments

License-Plate Scanning by Helicopter

From TheNewspaper.com:

The fictional police spy helicopter from the movie Blue Thunder is taking a big step toward becoming a reality. Police in the UK have successfully tested a 160 MPH helicopter that can read license plates from as much as 2,000 feet in the air. The Eurocopter EC135 is equipped with a camera capable of scanning 5 cars every second. Essex Police Inspector Paul Moor told the Daily Star newspaper: “This is all about denying criminals the use of the road. Using a number plate recognition camera from the air means crooks will have nowhere to hide.”

The use of Automated Plate Number Recognition (ANPR) is growing. ANPR devices photograph vehicles and then use optical character recognition to extract license plate numbers and match them with any selected databases. The devices use infrared sensors to avoid the need for a flash and to operate in all weather conditions.

This is an example of wholesale surveillance, and something I’ve written about before.

Of course, once the system is in place it will be used for privacy violations that we can’t even conceive of.

One of the companies that sells the camera scanning equipment touts its potential for marketing applications. “Once the number plate has been successfully ‘captured’ applications for its use are limited only by imagination and almost anything is possible,” Westminister International says on its website. UK police also envision a national database that holds time and location data on every vehicle scanned. “This data warehouse would also hold ANPR reads and hits as a further source of vehicle intelligence, providing great benefits to major crime and terrorism enquiries,” a Home Office proposal explains.

The only way to maintain security is not to field this sort of system in the first place.

Posted on April 15, 2005 at 12:10 PM • 27 Comments

Mitigating Identity Theft

Identity theft is the new crime of the information age. A criminal collects enough personal data on someone to impersonate a victim to banks, credit card companies, and other financial institutions. Then he racks up debt in the person’s name, collects the cash, and disappears. The victim is left holding the bag. While some of the losses are absorbed by financial institutions—credit card companies in particular—the credit-rating damage is borne by the victim. It can take years for the victim to clear his name.

Unfortunately, the solutions being proposed in Congress won’t help. To see why, we need to start with the basics. The very term “identity theft” is an oxymoron. Identity is not a possession that can be acquired or lost; it’s not a thing at all. Someone’s identity is the one thing about a person that cannot be stolen.

The real crime here is fraud; more specifically, impersonation leading to fraud. Impersonation is an ancient crime, but the rise of information-based credentials gives it a modern spin. A criminal impersonates a victim online and steals money from his account. He impersonates a victim in order to deceive financial institutions into granting credit to the criminal in the victim’s name. He impersonates a victim to the Post Office and gets the victim’s address changed. He impersonates a victim in order to fool the police into arresting the wrong man. No one’s identity is stolen; identity information is being misused to commit fraud.

The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud. This is what’s been in the news recently: ChoicePoint, LexisNexis, Bank of America, and so on. But data privacy is more than just fraud. Whether it is the books we take out of the library, the websites we visit, or the contents of our text messages, most of us have personal data on third-party computers that we don’t want made public. The posting of Paris Hilton’s phone book on the Internet is a celebrity example of this.

The second issue is the ease with which a criminal can use personal data to commit fraud. It doesn’t take much personal information to apply for a credit card in someone else’s name. It doesn’t take much to submit fraudulent bank transactions in someone else’s name. It’s surprisingly easy to get an identification card in someone else’s name. Our current culture, where identity is verified simply and sloppily, makes it easier for a criminal to impersonate his victim.

Proposed fixes tend to concentrate on the first issue—making personal data harder to steal—whereas the real problem is the second. If we’re ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.

Fraudulent transactions have nothing to do with the legitimate account holders. Criminals impersonate legitimate users to financial institutions. That means that any solution can’t involve the account holders. That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

They can’t claim that the user must keep his password secure or his machine virus free. They can’t require the user to monitor his accounts for fraudulent activity, or his credit reports for fraudulently obtained credit cards. Those aren’t reasonable requirements for most users. The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions. They’ve pushed most of the actual costs onto the merchants. And almost no security centers around trying to authenticate the cardholder.

That’s an important lesson. Identity theft solutions focus much too much on authenticating the person. Whether it’s two-factor authentication, ID cards, biometrics, or whatever, there’s a widespread myth that authenticating the person is the way to prevent these crimes. But once you understand that the problem is fraudulent transactions, you quickly realize that authenticating the person isn’t the way to proceed.

Again, think about credit cards. Store clerks barely verify signatures when people use cards. People can use credit cards to buy things by mail, phone, or Internet, where no one verifies the signature or even that you have possession of the card. Even worse, no credit card company mandates secure storage requirements for credit cards. They don’t demand that cardholders secure their wallets in any particular way. Credit card companies simply don’t worry about verifying the cardholder or putting requirements on what he does. They concentrate on verifying the transaction.

This same sort of thinking needs to be applied to other areas where criminals use impersonation to commit fraud. I don’t know what the final solutions will look like, but I do know that once financial institutions are liable for losses due to these types of fraud, they will find solutions. Maybe there’ll be a daily withdrawal limit, like there is on ATMs. Maybe large transactions will be delayed for a period of time, or will require a call-back from the bank or brokerage company. Maybe people will no longer be able to open a credit card account by simply filling out a bunch of information on a form. Likely the solution will be a combination of solutions that reduces fraudulent transactions to a manageable level, but we’ll never know until the financial institutions have the financial incentive to put them in place.
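
To make the idea concrete, here’s a minimal sketch of what transaction-side checks like those might look like. All the thresholds and rules are invented; real fraud-detection systems would be far more sophisticated:

    # Invented, minimal transaction-side checks of the kind suggested
    # above: a daily limit, a hold on large transfers, and a call-back
    # for the largest.
    from dataclasses import dataclass

    DAILY_LIMIT = 2_000       # assumed dollars per day
    HOLD_ABOVE = 10_000       # large transfers are delayed for review
    CALLBACK_ABOVE = 50_000   # the largest require a call-back

    @dataclass
    class Account:
        spent_today: float = 0.0

    def screen(account, amount):
        if amount >= CALLBACK_ABOVE:
            return "hold: confirm by call-back before executing"
        if amount >= HOLD_ABOVE:
            return "hold: delayed pending review"
        if account.spent_today + amount > DAILY_LIMIT:
            return "deny: daily withdrawal limit exceeded"
        account.spent_today += amount
        return "approve"

    acct = Account()
    for amount in (500, 1_800, 12_000, 60_000):
        print(amount, "->", screen(acct, amount))

Note that nothing in the sketch authenticates the person; every rule looks only at the transaction.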

Right now, the economic incentives result in financial institutions that are so eager to allow transactions—new credit cards, cash transfers, whatever—that they’re not paying enough attention to fraudulent transactions. They’ve pushed the costs for fraud onto the merchants. But if they’re liable for losses and damages to legitimate users, they’ll pay more attention. And they’ll mitigate the risks. Security can do all sorts of things, once the economic incentives to apply them are there.

By focusing on the fraudulent use of personal data, I do not mean to minimize the harm caused by third-party data collection and violations of privacy. I believe that the U.S. would be well served by a comprehensive Data Protection Act like the European Union’s. However, I do not believe that a law of this type would significantly reduce the risk of fraudulent impersonation. To mitigate that risk, we need to concentrate on detecting and preventing fraudulent transactions. We need to make the entity that is in the best position to mitigate the risk responsible for that risk. And that means making the financial institutions liable for fraudulent transactions.

Doing anything less simply won’t work.

Posted on April 15, 2005 at 9:17 AM • 49 Comments

Passwords Alone Don't Protect Trade Secrets

A court ruled that simply password-protecting a file isn’t enough to make it a trade secret.

To establish that information is a trade secret under the ITSA, two requirements must be met: (1) the plaintiff must show the information was sufficiently secret to give the plaintiff a competitive advantage, and (2) the plaintiff must show that it took affirmative measures to prevent others from acquiring or using the information. Although the court determined in this case that the customer lists met the first requirement, it denied trade secret protection based on the second requirement.

The court held that “[r]estricting access to sensitive information by assigning employees passwords on a need-to-know basis is a step in the right direction.” This precaution in and of itself, however, was not enough. The court was “troubled by the failure to either require employees to sign confidentiality agreements, advise employees that its records were confidential, or label the information as confidential.” There was insufficient evidence in the record to show the employees understood the information to be confidential; thus the trial court’s finding that the customer lists were not trade secrets was not against the manifest weight of the evidence.

Posted on April 14, 2005 at 1:05 PM • 19 Comments

Hacking the Papal Election

As the College of Cardinals prepares to elect a new pope, people like me wonder about the election process. How does it work, and just how hard is it to hack the vote?

Of course I’m not advocating voter fraud in the papal election. Nor am I insinuating that a cardinal might perpetrate fraud. But people who work in security can’t look at a system without trying to figure out how to break it; it’s an occupational hazard.

The rules for papal elections are steeped in tradition, and were last codified on 22 Feb 1996: “Universi Dominici Gregis on the Vacancy of the Apostolic See and the Election of the Roman Pontiff.” The document is well-thought-out, and filled with details.

The election takes place in the Sistine Chapel, directed by the Church Chamberlain. The ballot is entirely paper-based, and all ballot counting is done by hand. Votes are secret, but everything else is done in public.

First there’s the “pre-scrutiny” phase. “At least two or three” paper ballots are given to each cardinal (115 will be voting), presumably so that a cardinal has extras in case he makes a mistake. Then nine election officials are randomly selected: three “Scrutineers” who count the votes, three “Revisers,” who verify the results of the Scrutineers, and three “Infirmarii” who collect the votes from those too sick to be in the room. (These officials are chosen randomly for each ballot.)

Each cardinal writes his selection for Pope on a rectangular ballot paper “as far as possible in handwriting that cannot be identified as his.” He then folds the paper lengthwise and holds it aloft for everyone to see.

When everyone is done voting, the “scrutiny” phase of the election begins. The cardinals proceed to the altar one by one. On the altar is a large chalice with a paten (the shallow metal plate used to hold communion wafers during mass) resting on top of it. Each cardinal places his folded ballot on the paten. Then he picks up the paten and slides his ballot into the chalice.

If a cardinal cannot walk to the altar, one of the Scrutineers—in full view of everyone—does this for him. If any cardinals are too sick to be in the chapel, the Scrutineers give the Infirmarii a locked empty box with a slot, and the three Infirmarii together collect those votes. (If a cardinal is too sick to write, he asks one of the Infirmarii to do it for him.) The box is opened and the ballots are placed onto the paten and into the chalice, one at a time.

When all the ballots are in the chalice, the first Scrutineer shakes it several times in order to mix them. Then the third Scrutineer transfers the ballots, one by one, from one chalice to another, counting them in the process. If the total number of ballots is not correct, the ballots are burned and everyone votes again.

To count the votes, each ballot is opened and the vote is read by each Scrutineer in turn, the third one aloud. Each Scrutineer writes the vote on a tally sheet. This is all done in full view of the cardinals. The total number of votes cast for each person is written on a separate sheet of paper.

Then there’s the “post-scrutiny” phase. The Scrutineers tally the votes and determine if there’s a winner. Then the Revisers verify the entire process: ballots, tallies, everything. And then the ballots are burned. (That’s where the smoke comes from: white if a Pope has been elected, black if not.)
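
The count-and-verify loop is simple enough to simulate. Here’s a sketch of the procedure as described above—obviously mine, not the Vatican’s, with the names and vote totals invented:

    # A sketch of the scrutiny count as described above: shake the
    # chalice, pre-count the ballots against the number of electors,
    # burn and revote on a mismatch, then tally three times over and
    # cross-check, as the Scrutineers and Revisers do.
    import random

    def run_ballot(electors, votes):
        random.shuffle(votes)              # the chalice shake
        if len(votes) != electors:         # the third Scrutineer's pre-count
            return None                    # burn the ballots; vote again
        tallies = []
        for _ in range(3):                 # each Scrutineer tallies independently
            tally = {}
            for ballot in votes:
                tally[ballot] = tally.get(ballot, 0) + 1
            tallies.append(tally)
        assert tallies[0] == tallies[1] == tallies[2]   # the Revisers' check
        return tallies[0]

    votes = ["Cardinal A"] * 77 + ["Cardinal B"] * 38
    result = run_ballot(115, votes)
    if result is None:
        print("count mismatch: burn and revote")
    else:
        for name, count in sorted(result.items(), key=lambda kv: -kv[1]):
            print(name, count)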

How hard is this to hack? The first observation is that the system is entirely manual, making it immune to the sorts of technological attacks that make modern voting systems so risky. The second observation is that the small group of voters—all of whom know each other—makes it impossible for an outsider to affect the voting in any way. The chapel is cleared and locked before voting. No one is going to dress up as a cardinal and sneak into the Sistine Chapel. In effect, the voter verification process is about as perfect as you’re ever going to find.

Eavesdropping on the process is certainly possible, although the rules explicitly state that the chapel is to be checked for recording and transmission devices “with the help of trustworthy individuals of proven technical ability.” I read that the Vatican is worried about laser microphones, as there are windows near the chapel’s roof.

That leaves us with insider attacks. Can a cardinal influence the election? Certainly the Scrutineers could potentially modify votes, but it’s difficult. The counting is conducted in public, and there are multiple people checking every step. It’s possible for the first Scrutineer, if he’s good at sleight of hand, to swap one ballot paper for another before recording it. Or for the third Scrutineer to swap ballots during the counting process.

A cardinal can’t stuff ballots when he votes. The complicated paten-and-chalice ritual ensures that each cardinal votes once—his ballot is visible—and also keeps his hand out of the chalice holding the other votes.

Making the ballots large would make these attacks harder. So would controlling the blank ballots better, and only distributing one to each cardinal per vote. Presumably cardinals sometimes change their minds or spoil a ballot while writing, so distributing extra blank ballots makes sense.

Ballots from previous votes are burned, which makes it harder to use one to stuff the ballot box. But there’s one wrinkle: “If however a second vote is to take place immediately, the ballots from the first vote will be burned only at the end, together with those from the second vote.” I assume that’s done so there’s only one plume of smoke for the two elections, but it would be more secure to burn each set of ballots before the next round of voting.

And lastly, the cardinals wear “choir dress” during the voting, which has translucent lace sleeves under a short red cape, making sleight-of-hand tricks much harder.

It’s possible for one Scrutineer to misrecord the votes, but with three Scrutineers, the discrepancy would be quickly detected. I presume a recount would take place, and the correct tally would be verified. Two or three Scrutineers in cahoots with each other could do more mischief, but since the Scrutineers are chosen randomly, the probability of a cabal being selected is very low. And then the Revisers check everything.

More interesting is to try to attack the system for selecting Scrutineers, which isn’t well defined in the document. Influencing the selection of Scrutineers and Revisers seems a necessary first step towards influencing the election.

Ballots with more than one name (overvotes) are void, and I assume the same is true for ballots with no name written on them (undervotes). Illegible or ambiguous ballots are much more likely, and I presume they are discarded. The rules do have a provision for multiple ballots by the same cardinal: “If during the opening of the ballots the Scrutineers should discover two ballots folded in such a way that they appear to have been completed by one elector, if these ballots bear the same name they are counted as one vote; if however they bear two different names, neither vote will be valid; however, in neither of the two cases is the voting session annulled.” This surprises me, although I suppose it has happened by accident.

If there’s a weak step, it’s the counting of the ballots. There’s no real reason to do a pre-count, and it gives the Scrutineer doing the transfer a chance to swap legitimate ballots with others he previously stuffed up his sleeve. I like the idea of randomizing the ballots, but putting the ballots in a wire cage and spinning it around would accomplish the same thing more securely, albeit with less reverence.

And if I were improving the process, I would add some kind of white-glove treatment to prevent a Scrutineer from hiding a pencil lead or pen tip under his fingernails. Although the requirement to write out the candidate’s name in full gives more resistance against this sort of attack.

The recent change in the process that lets the cardinals go back and forth from the chapel into their dorm rooms—instead of being locked in the chapel the whole time as was done previously—makes the process slightly less secure. But I’m sure it makes it a lot more comfortable.

Lastly, there’s the potential for one of the Infirmarii to do what he wants when transcribing the vote of an infirm cardinal, but there’s no way to prevent that. If the cardinal is concerned, he could ask all three Infirmarii to witness the ballot.

There are also enormous social—religious, actually—disincentives to hacking the vote. The election takes place in a chapel, and at an altar. The electors also swear an oath as they cast their ballots—further discouragement. And the cardinal electors are explicitly exhorted not to form any sort of cabal or make any plans to sway the election, under pain of excommunication: “The Cardinal electors shall further abstain from any form of pact, agreement, promise or other commitment of any kind which could oblige them to give or deny their vote to a person or persons.”

I’m sure there are negotiations and deals and influencing—cardinals are mortal men, after all, and such things are part of how humans come to agreement.

What are the lessons here? First, open systems conducted within a known group make voting fraud much harder. Every step of the election process is observed by everyone, and everyone knows everyone, which makes it harder for someone to get away with anything. Second, small and simple elections are easier to secure. This kind of process works to elect a Pope or a club president, but quickly becomes unwieldy for a large-scale election. The only way manual systems work is through a pyramid-like scheme, with small groups reporting their manually obtained results up the chain to more central tabulating authorities.

And a third and final lesson: when an election process is left to develop over the course of a couple thousand years, you end up with something surprisingly good.

Rules for a papal election

There’s a picture of choir dress on this page

Edited to add: The stack of used ballots is pierced with a needle and thread and tied together, which 1) marks them as used, and 2) makes them harder to reuse.

Posted on April 14, 2005 at 9:59 AM • 33 Comments

The Doghouse: ExeShield

Yes, there are companies that believe that keeping cryptographic algorithms secret makes them more secure.

ExeShield uses the latest advances in software protection and encryption technology, to give your applications even more protection. Of course, for your security and ours, we won’t divulge the encryption scheme to anyone.

If anyone reading this needs a refresher on exactly why secret cryptography algorithms are invariably snake oil, I wrote about it three years ago.

Posted on April 13, 2005 at 9:19 AM • 23 Comments

More on Two-Factor Authentication

Recently I published an essay arguing that two-factor authentication is an ineffective defense against identity theft. For example, issuing tokens to online banking customers won’t reduce fraud, because new attack techniques simply ignore the countermeasure. Unfortunately, some took my essay as a condemnation of two-factor authentication in general. This is not true. It’s simply a matter of understanding the threats and the attacks.

Passwords just don’t work anymore. As computers have gotten faster, password guessing has gotten easier. Ever-more-complicated passwords are required to evade password-guessing software. At the same time, there’s an upper limit to how complex a password users can be expected to remember. About five years ago, these two lines crossed: It is no longer reasonable to expect users to have passwords that can’t be guessed. For anything that requires reasonable security, the era of passwords is over.
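
To see where those two lines cross, work out the brute-force arithmetic. A sketch with round numbers—the guessing rate is an assumption; real attack speeds vary enormously with the hash and the hardware:

    # Round-number sketch: time to exhaust all-lowercase passwords at
    # an assumed offline guessing rate.
    guesses_per_second = 1e9   # assumed offline attack rate
    for length in (6, 8, 10, 12):
        seconds = 26 ** length / guesses_per_second
        days = seconds / 86_400
        print(f"{length} lowercase chars: {seconds:,.0f} s ({days:,.1f} days)")

At this rate, anything under ten lowercase characters falls in minutes; to stay ahead, passwords must grow longer than anyone can reliably remember.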

Two-factor authentication solves this problem. It works against passive attacks: eavesdropping and password guessing. It protects against users choosing weak passwords, telling their passwords to their colleagues or writing their passwords on pieces of paper taped to their monitors. For an organization trying to improve access control for its employees, two-factor authentication is a great idea. Microsoft is integrating two-factor authentication into its operating system, another great idea.

What two-factor authentication won’t do is prevent identity theft and fraud. It’ll prevent certain tactics of identity theft and fraud, but criminals simply will switch tactics. We’re already seeing fraud tactics that completely ignore two-factor authentication. As banks roll out two-factor authentication, criminals simply will switch to these new tactics.

Security is always an arms race, and you could argue that this situation is simply the cost of treading water. The problem with this reasoning is that it ignores countermeasures that permanently reduce fraud. By concentrating on authenticating the individual rather than authenticating the transaction, banks are forced to defend against criminal tactics rather than the crime itself.

Credit cards are a perfect example. Notice how little attention is paid to cardholder authentication. Clerks barely check signatures. People use their cards over the phone and on the Internet, where the card’s existence isn’t even verified. The credit card companies spend their security dollar authenticating the transaction, not the cardholder.

Two-factor authentication is a long-overdue solution to the problem of passwords. I welcome its increasing popularity, but identity theft and bank fraud are not results of password problems; they stem from poorly authenticated transactions. The sooner people realize that, the sooner they’ll stop advocating stronger authentication measures and the sooner security will actually improve.

This essay previously appeared in Network World as a “Face Off.” Joe Uniejewski of RSA Security wrote an opposing position. Another article on the subject was published at SearchSecurity.com.

One way to think about this—a phrasing I didn’t think of until after writing the above essay—is that two-factor authentication solves security problems involving authentication. The current wave of attacks against financial systems is not exploiting vulnerabilities in the authentication system, so two-factor authentication doesn’t help.

Posted on April 12, 2005 at 11:02 AM • 14 Comments

Security as a Trade-Off

The Economist has an excellent editorial on security trade-offs. You need to subscribe to read the whole thing, but here’s my favorite paragraph:

The second point is that all technologies have both good and bad uses. There is currently a debate about whether it is safe to install mobile antennas in underground stations, for example, for fear that terrorists will use mobile phones to detonate bombs. Last year’s bombs in Madrid were detonated by mobile phones, but it was the phones’ internal alarm-clock function, not a call, that was used as the trigger mechanism. Nobody is suggesting that alarm clocks be outlawed, however; nor does anyone suggest banning telephones, even though kidnappers can use them to make ransom demands. Rather than demonising new technologies, their legitimate uses by good people must always be weighed against their illegitimate uses by bad ones. New technologies are inevitable, but by learning the lessons of history, needless scares need not be.

Posted on April 11, 2005 at 1:05 PM • 11 Comments

Insider Attack Against Citibank

Insiders are the biggest threat:

The Pune police have unearthed a major siphoning racket involving former and serving callcentre employees.

They allegedly transferred a total of [15 million rupees (US $350,000)] from a multinational bank into their own accounts, opened under fictitious names. The money was used to splurge on luxuries like cars and mobile phones.

The call center was in India. The victim was Citibank.

Posted on April 11, 2005 at 9:14 AM • 9 Comments

More Uses for Airline Passenger Data

I’ve been worried about the government getting comprehensive data on airline passengers in order to check their names against a terrorist “watch list.” Turns out that the government has another reason for wanting passenger data.

Although privacy experts worry about the government gathering personal information on airline travelers, Delta Airlines is handing over electronic lists of passengers from some flights to help stop the spread of deadly infectious diseases.

The lists will allow health officials to notify more quickly those travelers who might have been exposed to illnesses such as dengue fever, flu, plague, SARS and biological agents, the Centers for Disease Control and Prevention told a congressional panel on Wednesday.

It’s the same story: a massive privacy violation of everybody just in case something happens to a few.

As an example of the CDC’s notification efforts, Schuchat cited the case of a New Jersey resident who returned from a trip to Sierra Leone in September with Lassa fever. The patient flew to Newark via London and took a train home. Only after he died a few days later did the CDC confirm the disease.

CDC worked with the state, the airline, the railroad, the hospital and others to identify 188 people who had been near the patient. Nineteen were deemed at-risk and 16 were contacted; none of those contacted came down with the disease. It took more than five days to notify some passengers, Schuchat said.

It’s unclear how this program would reduce that five-day delay. I think it’s a better trade-off for the airlines to be ready to send the CDC the data in the event of a problem, rather than sending the CDC all the data—just in case—before there is any problem.

Posted on April 8, 2005 at 9:14 AM • 7 Comments

Secrecy and Security

Nice op-ed on the security problems with secrecy.

Some information that previously was open no doubt needs to be classified now. Terrorism alters perspectives. But the terrorist threat also has provided cover for bureaucrats who instinctively opt for secrecy and public officials who would prefer to keep the public in the dark to avoid accountability.

Posted on April 7, 2005 at 9:40 AM • 12 Comments

Finding Nuclear Power Plants

Recently I wrote about the government requiring pilots not to fly near nuclear power plants, and then not telling them where those plants are, because of security concerns. Here’s a story about how someone found the exact location of the nuclear power plant in Oyster Creek, N.J., using only publicly available information.

But of course a terrorist would never be able to do that.

Posted on April 6, 2005 at 9:05 AM • 29 Comments

UK National IDs

The London School of Economics recently published a report on the UK government’s national ID proposals. Definitely worth reading.

From the summary:

The Report concludes that the establishment of a secure national identity system has the potential to create significant, though limited, benefits for society. However, the proposals currently being considered by Parliament are neither safe nor appropriate. There was an overwhelming view expressed by stakeholders involved in this Report that the proposals are too complex, technically unsafe, overly prescriptive and lack a foundation of public trust and confidence. The current proposals miss key opportunities to establish a secure, trusted and cost-effective identity system and the Report therefore considers alternative models for an identity card scheme that may achieve the goals of the legislation more effectively. The concept of a national identity system is supportable, but the current proposals are not feasible.

Posted on April 5, 2005 at 12:14 PM • 21 Comments

Sandia on Terrorism Security

I have very mixed feelings about this report:

Anticipating attacks from terrorists, and hardening potential targets against them, is a wearying and expensive business that could be made simpler through a broader view of the opponents’ origins, fears, and ultimate objectives, according to studies by the Advanced Concepts Group (ACG) of Sandia National Laboratories.

“Right now, there are way too many targets considered and way too many ways to attack them,” says ACG’s Curtis Johnson. “Any thinking person can spin up enemies, threats, and locations it takes billions [of dollars] to fix.”

That makes a lot of sense, and this way of thinking is sorely needed. As is this kind of thing:

“The game really starts when the bad guys are getting together to plan something, not when they show up at your door,” says Johnson. “Can you ping them to get them to reveal their hand, or get them to turn against themselves?”

Better yet is to bring the battle to the countries from which terrorists spring, and beat insurgencies before they have a foothold.

“We need to help win over the as-yet-undecided populace to the view it is their government that is legitimate and not the insurgents,” says the ACG’s David Kitterman. Data from Middle East polls suggest, perhaps surprisingly, that most respondents are favorable to Western values. Turbulent times, however, put that liking under stress.

A nation’s people and media can be won over, says Yonas, through global initiatives that deal with local problems such as the need for clean water and affordable energy.

Says Johnson, “U.S. security already is integrated with global security. We’re always helping victims of disaster like tsunami victims, or victims of oppressive governments. Perhaps our ideas on national security should be redefined to reflect the needs of these people.”

Remember right after 9/11, when that kind of thinking would get you vilified?

But the article also talks about security mechanisms that won’t work, cost too much in freedoms and liberties, and have dangerous side effects.

People in airports voluntarily might carry smart cards if the cards could be sweetened to perform additional tasks like helping the bearer get through security, or to the right gate at the right time.

Mall shoppers might be handed a sensing card that also would help locate a particular store, a special sale, or find the closest parking space through cheap distributed-sensor networks.

“Suppose every PDA had a sensor on it,” suggests ACG researcher Laura McNamara. “We would achieve decentralized surveillance.” These sensors could report by radio frequency to a central computer any signal from contraband biological, chemical, or nuclear material.

Universal surveillance to improve our security? Seems unlikely.

But the most chilling quote of all:

“The goal here is to abolish anonymity, the terrorist’s friend,” says Sandia researcher Peter Chew. “We’re not talking about abolishing privacy—that’s another issue. We’re only considering the effect of setting up an electronic situation where all the people in a mall, subway, or airport ‘know’ each other—via, say, Bluetooth—as they would have, personally, in a small town. This would help malls and communities become bad targets.”

Anonymity is now the terrorist’s friend? I like to think of it as democracy’s friend.

Security against terrorism is important, but it’s equally important to remember that terrorism isn’t the only threat. Criminals, police, and governments are also threats, and security needs to be viewed as a trade-off with respect to all the threats. When you analyze terrorism in isolation, you end up with all sorts of weird answers.

Posted on April 5, 2005 at 9:26 AM • 13 Comments

Police Foil Bank Electronic Theft

From the BBC:

Police in London say they have foiled one of the biggest attempted bank thefts in Britain.

The plan was to steal £220m ($423m) from the London offices of the Japanese bank Sumitomo Mitsui.

Computer experts are believed to have tried to transfer the money electronically after hacking into the bank’s systems.

Not a lot of detail here, but it seems that the thieves got in using a keyboard recorder. It’s the simple attacks that you have to worry about….

Posted on April 4, 2005 at 12:51 PM • 18 Comments

The Price of Restricting Vulnerability Information

Interesting law article:

There are calls from some quarters to restrict the publication of information about security vulnerabilities in an effort to limit the number of people with the knowledge and ability to attack computer systems. Scientists in other fields have considered similar proposals and rejected them, or adopted only narrow, voluntary restrictions. As in other fields of science, there is a real danger that publication restrictions will inhibit the advancement of the state of the art in computer security. Proponents of disclosure restrictions argue that computer security information is different from other scientific research because it is often expressed in the form of functioning software code. Code has a dual nature, as both speech and tool. While researchers readily understand the information expressed in code, code enables many more people to do harm more readily than with the non-functional information typical of most research publications. Yet, there are strong reasons to reject the argument that code is different, and that restrictions are therefore good policy. Code’s functionality may help security as much as it hurts it and the open distribution of functional code has valuable effects for consumers, including the ability to pressure vendors for more secure products and to counteract monopolistic practices.

Posted on April 4, 2005 at 7:25 AM • 13 Comments

ChoicePoint Feeling the Heat

AP says:

An executive of embattled data broker ChoicePoint Inc. says the company is developing a system that would allow people to review their personal information that is sold to law enforcement agencies, employers, landlords and businesses. ChoicePoint’s announcement comes a month after it disclosed that thieves used previously stolen identities to create what appeared to be legitimate businesses seeking personal records.

Posted on April 2, 2005 at 9:09 AM • 18 Comments

Student Hacks System to Alter Grades

This is an interesting story:

A UCSB student is being charged with four felonies after she allegedly stole the identity of two professors and used the information to change her own and several other students’ grades, police said.

The University of California, Santa Barbara has a custom program, eGrades, through which faculty can submit and alter grades. It’s password protected, of course. But there’s a backup system: faculty who forget their password can reset it using their Social Security number and date of birth.

The student worked for an insurance company, where she was able to obtain the SSNs and dates of birth of two faculty members. She used that information to reset their passwords and change grades.

Police, university officials and campus computer specialists said Ramirez’s alleged illegal access to the computer grading system was not the result of a deficiency or flaw in the program.

Sounds like a flaw in the program to me. It’s even one I’ve written about: a primary security mechanism that fails to a less-secure secondary mechanism.

Posted on April 1, 2005 at 2:36 PM • 24 Comments

Sybase Practices Dumb Security

From Computerworld:

A threat by Sybase Inc. to sue a U.K.-based security research firm if it publicly discloses the details of eight holes it found in Sybase’s database software last year is evoking sharp criticism from some IT managers but sympathetic comments from others.

I can see why Sybase would prefer it if people didn’t know about vulnerabilities in its software—it’s bad for business—but disclosure is the reason companies fix them at all. If researchers are prohibited from publishing, then software developers are free to ignore security problems.

Posted on April 1, 2005 at 1:24 PM • 12 Comments

Security Risks of Biometrics

From the BBC:

Police in Malaysia are hunting for members of a violent gang who chopped off a car owner’s finger to get round the vehicle’s hi-tech security system.

The car, a Mercedes S-class, was protected by a fingerprint recognition system.

What interests me about this story is the interplay between attacker and defender. The defender implements a countermeasure that causes the attacker to change his tactics. Sometimes the new tactics are more harmful, and it’s not obvious whether or not the countermeasure was worth it.

I wrote about something similar in Beyond Fear (p. 113):

Someone might think: “I am worried about car theft, so I will buy an expensive security device that makes ignitions impossible to hot-wire.” That seems like a reasonable thought, but countries such as Russia, where these security devices are commonplace, have seen an increase in carjackings. A carjacking puts the driver at a much greater risk; here the security countermeasure has caused the weakest link to move from the ignition switch to the driver. Total car thefts may have declined, but drivers’ safety did, too.

Posted on April 1, 2005 at 9:12 AM • 30 Comments
