Blog: March 2009 Archives

Privacy and the Fourth Amendment

In the United States, the concept of “expectation of privacy” matters because it’s the constitutional test, based on the Fourth Amendment, that governs when and how the government can invade your privacy.

Based on the 1967 Katz v. United States Supreme Court decision, this test actually has two parts. First, the government’s action can’t contravene an individual’s subjective expectation of privacy; and second, that expectation of privacy must be one that society in general recognizes as reasonable. That second part isn’t based on anything like polling data; it is more of a normative idea of what level of privacy people should be allowed to expect, given the competing importance of personal privacy on one hand and the government’s interest in public safety on the other.

The problem is that, in today’s information society, this test will rapidly leave us with no privacy at all.

In Katz, the Court ruled that the police could not eavesdrop on a phone call without a warrant: Katz expected his phone conversations to be private and this expectation resulted from a reasonable balance between personal privacy and societal security. Given the NSA’s large-scale warrantless eavesdropping, and the previous administration’s continual insistence that it was necessary to keep America safe from terrorism, is it still reasonable to expect that our phone conversations are private?

Between the NSA’s massive internet eavesdropping program and Gmail’s content-dependent advertising, does anyone actually expect their e-mail to be private? Between calls for ISPs to retain user data and companies serving content-dependent web ads, does anyone expect their web browsing to be private? Between the various computer-infecting malware, and world governments increasingly demanding to see laptop data at borders, hard drives are barely private. I certainly don’t believe that my SMSes, any of my telephone data, or anything I say on LiveJournal or Facebook—regardless of the privacy settings—is private.

Aerial surveillance, data mining, automatic face recognition, terahertz radar that can “see” through walls, wholesale surveillance, brain scans, RFID, “life recorders” that save everything: Even if society still has some small expectation of digital privacy, that will change as these and other technologies become ubiquitous. In short, the problem with a normative expectation of privacy is that it changes with perceived threats, technology and large-scale abuses.

Clearly, something has to change if we are to be left with any privacy at all. Three legal scholars have written law review articles that wrestle with the problems of applying the Fourth Amendment to cyberspace and to our computer-mediated world in general.

George Washington University’s Daniel Solove, who blogs at Concurring Opinions, has tried to capture the byzantine complexities of modern privacy. He points out, for example, that the following privacy violations—all real—are very different: A company markets a list of 5 million elderly incontinent women; reporters deceitfully gain entry to a person’s home and secretly photograph and record the person; the government uses a thermal sensor device to detect heat patterns in a person’s home; and a newspaper reports the name of a rape victim. Going beyond simple definitions such as the divulging of a secret, Solove has developed a taxonomy of privacy, and of the harms that result from its violation.

His 16 categories are: surveillance, interrogation, aggregation, identification, insecurity, secondary use, exclusion, breach of confidentiality, disclosure, exposure, increased accessibility, blackmail, appropriation, distortion, intrusion and decisional interference. Solove’s goal is to provide a coherent and comprehensive understanding of what is traditionally an elusive and hard-to-explain concept: privacy violations. (This taxonomy is also discussed in Solove’s book, Understanding Privacy.)

Orin Kerr, also a law professor at George Washington University, and a blogger at Volokh Conspiracy, has attempted to lay out general principles for applying the Fourth Amendment to the internet. First, he points out that the traditional inside/outside distinction—the police can watch you in a public place without a warrant, but not in your home—doesn’t work very well with regard to cyberspace. Instead, he proposes a distinction between content and non-content information: the body of an e-mail versus the header information, for example. The police should be required to get a warrant for the former, but not for the latter. Second, he proposes that search warrants should be written for particular individuals and not for particular internet accounts.

Meanwhile, Jed Rubenfeld of Yale Law School has tried to reinterpret the Fourth Amendment not in terms of privacy, but in terms of security. Pointing out that the whole “expectations” test is circular—what the government does affects what the government can do—he redefines everything in terms of security: the security that our private affairs are private.

This security is violated when, for example, the government makes widespread use of informants, or engages in widespread eavesdropping—even if no one’s privacy is actually violated. This neatly bypasses the whole individual privacy versus societal security question—a balancing that the individual usually loses—by framing both sides in terms of personal security.

I have issues with all of these articles. Solove’s taxonomy is excellent, but the sense of outrage that accompanies a privacy violation—“How could they know/do/say that!?”—is an important part of the resulting harm. The non-content information that Kerr believes should be collectible without a warrant can be very private and personal: URLs can be very revealing, and it’s possible to figure out browsed content just from the size of encrypted SSL traffic. Also, the ease with which the government can collect all of it—the calling and called party of every phone call in the country—makes the balance very different. I believe this information needs to be protected by a warrant requirement. Rubenfeld’s reframing is interesting, but the devil is in the details. Reframing privacy in terms of security still results in a balancing of competing rights. I’d rather take the approach of stating the—obvious to me—individual and societal value of privacy, and giving privacy its rightful place as a fundamental human right. (There’s additional commentary on Rubenfeld’s thesis at ArsTechnica.)

The trick here is to realize that a normative definition of the expectation of privacy doesn’t need to depend on threats or technology, but rather on what we—as society—decide it should be. Sure, today’s technology makes it easier than ever to violate privacy. But it doesn’t necessarily follow that we have to violate privacy. Today’s guns make it easier than ever to shoot virtually anyone for any reason. That doesn’t mean our laws have to change.

No one knows how this will shake out legally. These three articles are from law professors; they’re not judicial opinions. But clearly something has to change, and ideas like these may someday form the basis of new Supreme Court decisions that bring legal notions of privacy into the 21st century.

This essay originally appeared on Wired.com.

Posted on March 31, 2009 at 6:30 AM • 42 Comments

Massive Chinese Espionage Network

The story broke in The New York Times yesterday:

In a report to be issued this weekend, the researchers said that the system was being controlled from computers based almost exclusively in China, but that they could not say conclusively that the Chinese government was involved.

[…]

Their sleuthing opened a window into a broader operation that, in less than two years, has infiltrated at least 1,295 computers in 103 countries, including many belonging to embassies, foreign ministries and other government offices, as well as the Dalai Lama’s Tibetan exile centers in India, Brussels, London and New York.

The researchers, who have a record of detecting computer espionage, said they believed that in addition to the spying on the Dalai Lama, the system, which they called GhostNet, was focused on the governments of South Asian and Southeast Asian countries.

The Chinese government denies involvement. It’s probably true; these networks tend to be run by amateur hackers with the tacit approval of the government, not the government itself. I wrote this on the topic last year.

It’s only circumstantial evidence that the hackers are Chinese: as the Times notes, the system was controlled from computers based almost exclusively in China, but the researchers could not say conclusively that the Chinese government was involved.

And here’s the report, from the University of Toronto.

Good commentary by James Fallows:

My guess is that the “convenient instruments” hypothesis will eventually prove to be true (versus the “centrally controlled plot” scenario), if the “truth” of the case is ever fully determined. For reasons the Toronto report lays out, the episode looks more like the effort of groups of clever young hackers than a concentrated project of the People’s Liberation Army cyberwar division. But no one knows for certain, and further information about the case is definitely worth following.

An excellent article on Wired.com, and another on ArsTechnica.

There’s another paper, released at the same time on the same topic, from Cambridge University. It makes more pointed claims about the attackers and their origins, claims I’m not sure can be supported from the evidence.

In this note we described how agents of the Chinese government compromised the computing infrastructure of the Office of His Holiness the Dalai Lama.

EDITED TO ADD (3/30): More information on the tools the hackers used.

EDITED TO ADD (3/30): An interview with the University of Toronto researchers.

EDITED TO ADD (4/1): The Chinese government denies involvement.

EDITED TO ADD (4/1): My essay from last year on Chinese hacking.

Posted on March 30, 2009 at 12:43 PM • 24 Comments

The Zone of Essential Risk

Bob Blakley makes an interesting point. It’s in the context of eBay fraud, but it’s more general than that.

If you conduct infrequent transactions which are also small, you’ll never lose much money and it’s not worth it to try to protect yourself – you’ll sometimes get scammed, but you’ll have no trouble affording the losses.

If you conduct large transactions, regardless of frequency, each transaction is big enough that it makes sense to insure the transactions or pay an escrow agent. You’ll have occasional experiences of fraud, but you’ll be reimbursed by the insurer or the transactions will be reversed by the escrow agent and you don’t lose anything.

If you conduct small or medium-sized transactions frequently, you can amortize fraud losses using the gains from your other transactions. This is how casinos work; they sometimes lose a hand, but they make it up in volume.

But if you conduct medium-sized transactions rarely, you’re in trouble. The transactions are big enough so that you care about losses, you don’t have enough transaction volume to amortize those losses, and the cost of insurance or escrow is high enough compared to the value of your transactions that it doesn’t make economic sense to protect yourself.
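
To make the four regimes concrete, here is a toy sketch in Python of the trade-off Blakley describes. All of the numbers (loss threshold, escrow fee, margin) are invented for illustration; nothing here comes from his post.

# A toy model of the four regimes described above. Every number is an
# invented illustration, not data from Blakley's post.

AFFORDABLE_LOSS = 100.0   # assumed: a single loss below this just doesn't hurt
ESCROW_FEE = 100.0        # assumed: flat cost to insure/escrow one transaction
MARGIN = 0.05             # assumed: gain made on each honest transaction

def classify(tx_value, tx_per_year):
    gains = tx_value * tx_per_year * MARGIN   # yearly profit available to absorb a loss
    if tx_value <= AFFORDABLE_LOSS:
        return "small: just eat the occasional loss"
    if ESCROW_FEE / tx_value <= 0.02:
        return "large: insurance or escrow is cheap relative to the stakes"
    if tx_value <= gains:
        return "frequent: one loss is covered by the gains on the other transactions"
    return "zone of essential risk: too big to ignore, too rare to amortize, too costly to insure"

for value, count in [(20, 10), (50_000, 3), (300, 500), (2_000, 3)]:
    print(f"${value:>6} x {count:>3}/yr -> {classify(value, count)}")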

Posted on March 30, 2009 at 6:50 AM • 30 Comments

Security Fears Drive Iran to Linux

According to The Age in Australia:

“We would have to pay a lot of money,” said Sephery-Rad, noting that most of the government’s estimated one million PCs and the country’s total of six to eight million computers were being run almost exclusively on the Windows platform.

“Secondly, Microsoft software has a lot of backdoors and security weaknesses that are always being patched, so it is not secure. We are also under US sanctions. All this makes us think we need an alternative operating system.”

[…]

“Microsoft is a national security concern. Security in an operating system is an important issue, and when it is on a computer in the government it is of even greater importance,” said the official.

Posted on March 27, 2009 at 5:52 AM • 35 Comments

A Solar Plasma Movie-Plot Threat

This is impressive:

It is midnight on 22 September 2012 and the skies above Manhattan are filled with a flickering curtain of colourful light. Few New Yorkers have seen the aurora this far south but their fascination is short-lived. Within a few seconds, electric bulbs dim and flicker, then become unusually bright for a fleeting moment. Then all the lights in the state go out. Within 90 seconds, the entire eastern half of the US is without power.

A year later and millions of Americans are dead and the nation’s infrastructure lies in tatters. The World Bank declares America a developing nation. Europe, Scandinavia, China and Japan are also struggling to recover from the same fateful event—a violent storm, 150 million kilometres away on the surface of the sun.

[…]

It is hard to conceive of the sun wiping out a large amount of our hard-earned progress. Nevertheless, it is possible. The surface of the sun is a roiling mass of plasma—charged high-energy particles—some of which escape the surface and travel through space as the solar wind. From time to time, that wind carries a billion-tonne glob of plasma, a fireball known as a coronal mass ejection (see “When hell comes to Earth“). If one should hit the Earth’s magnetic shield, the result could be truly devastating.

The incursion of the plasma into our atmosphere causes rapid changes in the configuration of Earth’s magnetic field which, in turn, induce currents in the long wires of the power grids. The grids were not built to handle this sort of direct current electricity. The greatest danger is at the step-up and step-down transformers used to convert power from its transport voltage to domestically useful voltage. The increased DC current creates strong magnetic fields that saturate a transformer’s magnetic core. The result is runaway current in the transformer’s copper wiring, which rapidly heats up and melts. This is exactly what happened in the Canadian province of Quebec in March 1989, and six million people spent 9 hours without electricity. But things could get much, much worse than that.
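
A rough way to see why long wires are the weak point (a standard back-of-the-envelope simplification, not a calculation from the article): the storm produces a slowly varying geoelectric field $E$ at the surface, which drives a quasi-DC current through a transmission line of length $L$ and total resistance $R$ of roughly

$$V \;\approx\; \int \vec{E}\cdot d\vec{\ell} \;\approx\; E\,L, \qquad I_{\mathrm{GIC}} \;\approx\; \frac{E\,L}{R}.$$

With an illustrative field of even a volt or two per kilometre over a line hundreds of kilometres long, the driving voltage reaches hundreds to thousands of volts; and because it varies over minutes rather than at 50/60 Hz, the transformer cores see it as DC and saturate, which is exactly the failure mode described above.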

Posted on March 26, 2009 at 12:44 PM • 56 Comments

Surviving a Suicide Bombing

Where you stand matters:

The two researchers have developed accurate physics-based models of a suicide bombing attack, including casualty levels and explosive composition. Their work also describes human shields available in the crowd with partial and full coverage in both two- and three-dimensional environments.

Their virtual simulation tool assesses the impact of crowd formation patterns and their densities on the magnitude of injury and number of casualties of a suicide bombing attack. For a typical attack, the writers suggest that they can reduce the number of fatalities by 12 percent and the number of injuries by 7 percent if their recommendations are followed.

Simulation results were compared and validated by real-life incidents in Iraq. Line-of-sight with the attacker, rushing toward the exit and stampede were found to be the victims’ most lethal choices both during and after the attack.

Presumably they also discovered where the attacker should stand to be as lethal as possible, but there’s no indication that they published those results.

Posted on March 26, 2009 at 8:08 AM • 31 Comments

Sniffing Keyboard Keystrokes with a Laser

Interesting:

Chief Security Engineer Andrea Barisani and hardware hacker Daniele Bianco used a handmade laser microphone device and a photo diode to measure the vibrations, software for analyzing the spectrograms of frequencies from different keystrokes, as well as technology to apply the data to a dictionary to try to guess the words. They used a technique called dynamic time warping that’s typically used for speech recognition applications, to measure the similarity of signals.

Line-of-sight on the laptop is needed, but it works through a glass window, they said. Using an infrared laser would prevent a victim from knowing they were being spied on.
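
For those unfamiliar with it, dynamic time warping aligns two signals that may be locally stretched or compressed in time and returns a distance, which is why it suits comparing a noisy keystroke vibration to reference recordings. Below is the textbook dynamic-programming version in Python; it is not the researchers' code, and real keystroke matching would operate on spectral features rather than raw samples.

# Minimal dynamic time warping (DTW) distance between two 1-D signals.
# Textbook formulation only; not Barisani and Bianco's actual tool.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance between samples
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Toy example: the same "keystroke" signal, slightly time-stretched,
# scores much closer than a different signal.
reference = [0, 1, 3, 2, 0]
stretched = [0, 1, 1, 3, 3, 2, 0]
different = [0, 5, 0, 5, 0]
print(dtw_distance(reference, stretched))   # small
print(dtw_distance(reference, different))   # larger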

Another article.

Posted on March 25, 2009 at 6:59 AM • 38 Comments

Election Fraud in Kentucky

I think this is the first documented case of election fraud in the U.S. using electronic voting machines (there have been lots of documented cases of errors and voting problems, but this one involves actual maliciousness):

Five Clay County officials, including the circuit court judge, the county clerk, and election officers were arrested Thursday after they were indicted on federal charges accusing them of using corrupt tactics to obtain political power and personal gain.

The 10-count indictment, unsealed Thursday, accused the defendants of a conspiracy from March 2002 until November 2006 that violated the Racketeer Influenced and Corrupt Organizations Act (RICO). RICO is a federal statute that prosecutors use to combat organized crime. The defendants were also indicted for extortion, mail fraud, obstruction of justice, conspiracy to injure voters’ rights and conspiracy to commit voter fraud.

According to the indictment, these alleged criminal actions affected the outcome of federal, local, and state primary and general elections in 2002, 2004, and 2006.

From BradBlog:

Clay County uses the horrible ES&S iVotronic system for all of its votes at the polling place. The iVotronic is a touch-screen Direct Recording Electronic (DRE) device, offering no evidence, of any kind, that any vote has ever been recorded as per the voter’s intent. If the allegations are correct here, there would likely have been no way to discover, via post-election examination of machines or election results, that votes had been manipulated on these machines.

ES&S is the largest distributor of voting systems in America and its iVotronic system—which is well-documented to have lost and flipped votes on many occasions—is likely the most widely-used DRE system in the nation. It’s currently in use in some 419 jurisdictions in 18 states including Arkansas, Colorado, Florida, Indiana, Kansas, Kentucky, Missouri, Mississippi, North Carolina, New Jersey, Ohio, Pennsylvania, South Carolina, Tennessee, Texas, Virginia, Wisconsin, and West Virginia.

ArsTechnica has more, and here’s the actual indictment; BradBlog has excerpts.

The fraud itself is very low-tech, and didn’t make use of any of the documented vulnerabilities in the ES&S iVotronic machines; it was basic social engineering. Matt Blaze explains:

The iVotronic is a popular Direct Recording Electronic (DRE) voting machine. It displays the ballot on a computer screen and records voters’ choices in internal memory. Voting officials and machine manufacturers cite the user interface as a major selling point for DRE machines—it’s already familiar to voters used to navigating touchscreen ATMs, computerized gas pumps, and so on, and thus should avoid problems like the infamous “butterfly ballot”. Voters interact with the iVotronic primarily by touching the display screen itself. But there’s an important exception: above the display is an illuminated red button labeled “VOTE” (see photo at right). Pressing the VOTE button is supposed to be the final step of a voter’s session; it adds their selections to their candidates’ totals and resets the machine for the next voter.

The Kentucky officials are accused of taking advantage of a somewhat confusing aspect of the way the iVotronic interface was implemented. In particular, the behavior (as described in the indictment) of the version of the iVotronic used in Clay County apparently differs a bit from the behavior described in ES&S’s standard instruction sheet for voters [pdf – see page 2]. A flash-based iVotronic demo available from ES&S here shows the same procedure, with the VOTE button as the last step. But evidently there’s another version of the iVotronic interface in which pressing the VOTE button is only the second to last step. In those machines, pressing VOTE invokes an extra “confirmation” screen. The vote is only actually finalized after a “confirm vote” box is touched on that screen. (A different flash demo that shows this behavior with the version of the iVotronic equipped with a printer is available from ES&S here). So the iVotronic VOTE button doesn’t necessarily work the way a voter who read the standard instructions might expect it to.

The indictment describes a conspiracy to exploit this ambiguity in the iVotronic user interface by having pollworkers systematically (and incorrectly) tell voters that pressing the VOTE button is the last step. When a misled voter would leave the machine with the extra “confirm vote” screen still displayed, a pollworker would quietly “correct” the not-yet-finalized ballot before casting it. It’s a pretty elegant attack, exploiting little more than a poorly designed, ambiguous user interface, printed instructions that conflict with actual machine behavior, and public unfamiliarity with equipment that most citizens use at most once or twice each year. And once done, it leaves behind little forensic evidence to expose the deed.
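
The ambiguity Blaze describes is easy to see if you model the two interface variants as tiny state machines. This is only an illustrative sketch based on the behavior described in the indictment and the ES&S demos, not ES&S code.

# Toy model of the two iVotronic interface variants described above.
# Purely illustrative; not based on actual ES&S software.

class IVotronic:
    def __init__(self, has_confirm_screen):
        self.has_confirm_screen = has_confirm_screen
        self.state = "ballot"        # ballot -> (confirm) -> cast

    def press_vote(self):
        if self.state == "ballot":
            # On the variant with the extra screen, VOTE only advances to
            # a confirmation screen; the ballot is not yet cast.
            self.state = "confirm" if self.has_confirm_screen else "cast"

    def press_confirm(self):
        if self.state == "confirm":
            self.state = "cast"

# A voter follows the printed instructions: mark choices, press VOTE, walk away.
machine = IVotronic(has_confirm_screen=True)
machine.press_vote()
print(machine.state)       # "confirm": the ballot is still editable by a pollworker

machine_std = IVotronic(has_confirm_screen=False)
machine_std.press_vote()
print(machine_std.state)   # "cast": what the printed instructions led the voter to expect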

Read the rest of Blaze’s post for some good analysis on the attack and what it says about iVotronic. He led the team that analyzed the security of that very machine:

We found numerous exploitable security weaknesses in these machines, many of which would make it easy for a corrupt voter, pollworker, or election official to tamper with election results (see our report for details).

[…]

On the one hand, we might be comforted by the relatively “low tech” nature of the attack—no software modifications, altered electronic records, or buffer overflow exploits were involved, even though the machines are, in fact, quite vulnerable to such things. But a close examination of the timeline in the indictment suggests that even these “simple” user interface exploits might well portend more technically sophisticated attacks sooner, rather than later.

Count 9 of the Kentucky indictment alleges that the Clay County officials first discovered and conspired to exploit the iVotronic “confirm screen” ambiguity around June 2004. But Kentucky didn’t get iVotronics until at the earliest late 2003; according to the state’s 2003 HAVA Compliance Plan [pdf], no Kentucky county used the machines as of mid-2003. That means that the officials involved in the conspiracy managed to discover and work out the operational details of the attack soon after first getting the machines, and were able to use it to alter votes in the next election.

[…]

But that’s not the worst news in this story. Even more unsettling is the fact that none of the published security analyses of the iVotronic—including the one we did at Penn—had noticed the user interface weakness. The first people to have discovered this flaw, it seems, didn’t publish or report it. Instead, they kept it to themselves and used it to steal votes.

Me on electronic voting machines, from 2004.

Posted on March 24, 2009 at 6:41 AM • 49 Comments

Fear and the Availability Heuristic

Psychology Today on fear and the availability heuristic:

We use the availability heuristic to estimate the frequency of specific events. For example, how often are people killed by mass murderers? Because higher frequency events are more likely to occur at any given moment, we also use the availability heuristic to estimate the probability that events will occur. For example, what is the probability that I will be killed by a mass murderer tomorrow?

We are especially reliant upon the availability heuristic when we do not have solid evidence from which to base our estimates. For example, what is the probability that the next plane you fly on will crash? The true probability of any particular plane crashing depends on a huge number of factors, most of which you’re not aware of and/or don’t have reliable data on. What type of plane is it? What time of day is the flight? What is the weather like? What is the safety history of this particular plane? When was the last time the plane was examined for problems? Who did the examination and how thorough was it? Who is flying the plane? How much sleep did they get last night? How old are they? Are they taking any medications? You get the idea.

The chances are excellent that you do not have access to all or even most of the information needed to make accurate estimates for just about anything. Indeed, you probably have little or no data from which to base your estimate. Well, that’s not exactly true. In fact, there is one piece of evidence that you always have access to: your memory. Specifically, how easily can you recall previous incidents of the event in question? The easier time we have recalling prior incidents, the greater probability the event has of occurring—at least as far as our minds are concerned. In a nutshell, this is the availability heuristic.

[…]

Although there are many problems associated with the availability heuristic, perhaps the most concerning one is that it often leads people to lose sight of life’s real dangers. Psychologist Gerd Gigerenzer, for example, conducted a fascinating study that showed in the months following September 11, 2001, Americans were less likely to travel by air and more likely to instead travel by car. While it is understandable why Americans would have been fearful of air travel following the incredibly high profile attacks on New York and Washington, the unfortunate result is that Americans died on the highways at alarming rates following 9/11. This is because highway travel is far more dangerous than air travel. More than 40,000 Americans are killed every year on America’s roads. Fewer than 1,000 people die in airplane accidents, and even fewer people are killed aboard commercial airlines.

[…]

Consider, for example, that the 2009 budget for homeland security (the folks that protect us from terrorists) will likely be about $50 billion. Don’t get us wrong, we like the fact that people are trying to prevent terrorism, but even at its absolute worst, terrorists killed about 3,000 Americans in a single year. And less than 100 Americans are killed by terrorists in most years. By contrast, the budget for the National Highway Traffic Safety Administration (the folks who protect us on the road) is about $1 billion, even though more than 40,000 people will die this year on the nation’s roads. In terms of dollars spent per fatality, we fund terrorism prevention at about $17,000,000/fatality (i.e., $50 billion/3,000 fatalities) and accident prevention at about $25,000/fatality (i.e., $1 billion/40,000 fatalities).

I’ve written about this sort of thing here.

Posted on March 23, 2009 at 12:31 PM • 42 Comments

Research in Explosive Detection

Interesting:

Much of this research focuses on “micromechanical” devices—tiny sensors that have microscopic probes on which airborne chemical vapors deposit. When the right chemicals find the surface of the sensors, they induce tiny mechanical motions, and those motions create electronic signals that can be measured.

These devices are relatively inexpensive to make and can sensitively detect explosives, but they often have the drawback that they cannot discriminate between similar chemicals—the dangerous and the benign. They may detect a trace amount of TNT, for instance, but they may not be able to distinguish that from a trace amount of gasoline.

Seeking to make a better micromechanical sensor, Thundat and his colleagues realized they could detect explosives selectively and with extremely high sensitivity by building sensors that probed the thermal signatures of chemical vapors.

They started with standard micromechanical sensors—devices with microscopic cantilevers, which are beams supported at one end. They modified the cantilevers so that they could be electronically heated by passing a current through them. Next they allowed air to flow over the sensors. If explosive vapors were present in the air, they could be detected when molecules in the vapor clung to the cantilevers.

Then by heating the cantilevers in a fraction of a second, they could discriminate between explosives and non-explosives. All the explosives they tested responded with unique and reproducible thermal response patterns within a split second of heating. In their paper, Thundat and his colleagues demonstrate that they could detect very small amounts of adsorbed explosives—with a limit of 600 picograms (a picogram is a trillionth of a gram). They are now improving the sensitivity and making a prototype device, which they expect to be ready for field testing later this year.
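
To picture the final matching step, here is one simple (and entirely hypothetical) way a measured thermal-response curve could be compared against a library of reference signatures, by normalized correlation. The paper's actual signal processing is not described in the excerpt, and the curves below are invented.

# Sketch of matching a measured thermal-response curve against reference
# signatures by normalized correlation. Illustrative only; the curves are
# made up and the paper's real method may differ.
import numpy as np

def normalized_correlation(x, y):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

t = np.linspace(0, 1, 200)          # time after the heating pulse, arbitrary units
library = {
    "TNT-like":      np.exp(-t / 0.10) * np.sin(40 * t),   # made-up signature
    "gasoline-like": np.exp(-t / 0.30),                     # made-up signature
}

measured = np.exp(-t / 0.11) * np.sin(40 * t) + 0.05 * np.random.randn(t.size)

best = max(library, key=lambda name: normalized_correlation(measured, library[name]))
print("closest signature:", best)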

Here’s the paper, behind a paywall.

Posted on March 23, 2009 at 6:55 AM • 33 Comments

Holy Hand Grenade of Antioch Bomb Scare

You just can’t make this stuff up:

Buildings were evacuated, a street was cordoned off and a bomb disposal team called in after workmen spotted a suspicious object.

But the dangerous-looking weapon turned out to be the Holy Hand Grenade of Antioch, made famous in the 1975 film Monty Python And The Holy Grail.

[…]

They evacuated a pub and another building in Tabernacle Street, while office staff in another building were stopped from leaving.

But when the bomb squad arrived, they quickly established there was no danger and the street was declared safe. In the film, the grenade was used to slaughter a killer rabbit. …

Alberto Romanelli, who owns the Windmill pub nearby, said the police action in ordering his pub to be evacuated had been as ridiculous as the film scene. “They evacuated the pub while they were doing X-rays and stuff,” he said.

“It all lasted about 45 minutes before they decided it was nothing—which I thought was pretty obvious from the start. I lost a good hour’s worth of business.”

I used to catalog examples of the war on the unexpected, but stopped because there were just too many of them (see also here and here). This one, though, is just too funny to ignore.

EDITED TO ADD (3/20): Lest you think this is tabloid hyperbole, here’s the story in a more respectable newspaper.

Posted on March 20, 2009 at 3:10 PM • 44 Comments

Why People Steal Rare Books

Interesting analysis:

“Book theft is very hard to quantify because very often pages are cut and it’s not noticed for years,” says Rapley. “Often we come across pages from books [in hauls of recovered property] and we work back from there.” The Museum Security Network, a Dutch-based, not-for-profit organisation devoted to co-ordinating efforts to combat this type of theft, estimates that only 2 to 5 per cent of stolen books are recovered, compared with about half of stolen paintings.

“Books are extremely difficult to identify,” Rapley continues. “That means they can be sold commercially at near to market value rather than black-market value.” Thieves know that single pages cut from books to be sold as prints are easier to steal and even harder to trace, so they are often even more desirable than books themselves.

Most thieves simply cut out pages with razor blades and then hide them about their person. High bookshelves, quiet stacks or storage areas, or any lavatories located within reading rooms, are obvious places for such nefarious activities.

Regular users will have noticed that libraries have tightened up security in recent years. Among the strategies employed are CCTV cameras, improved sightlines for librarians, ID and bag checks at entrances and exits, and more floorwalking by security, uniformed or otherwise.

Posted on March 20, 2009 at 6:24 AM • 29 Comments

Blowfish on 24, Again

Three nights ago, my encryption algorithm Blowfish was mentioned on the Fox show 24. The clip is available here, or streaming on Hulu. This is the exchange:

Janis Gold: I isolated the data Renee uploaded to Bauer but I can’t get past the file header.

Larry Moss: What does that mean?

JG: She encrypted the name and address she used and I can’t seem to crack it.

LM: Who can?

JG: She used her personal computer. This is very serious encryption. I mean, there are some high-level people who can do it.

LM: Like who?

JG: Chloe O’Brian, but from what you told me earlier she’s too loyal to Bauer.

LM: Is her husband still here?

JG: Yes, he’s waiting to see you.

LM: He’s a level 6 analyst too.

JG: Mr. O’Brian, a short time ago one of our agents was in touch with Jack Bauer. She sent a name and address that we assume is his next destination. Unfortunately, it’s encrypted with Blowfish 148 and no one here knows how to crack that. Therefore, we need your help, please.

Morris O’Brian: Show me the file.

MO: Where’s your information? 16 or 32 bit word length?

JG: 32.

MO: Native or modified data points?

JG: Native.

MO: The designer of this algorithm built a backdoor into his code. Decryption’s a piece of cake if you know the override codes.

LM: And you do?

MO: Yeah.

LM: Will this take long?

MO: Course not.

LM: Mr. O’Brian, can you tell me specifically when you’ll have the file decrypted?

MO: Yes.

MO: Now.

O’Brian spends just over 30 seconds at the keyboard.

This is the second time Blowfish has appeared on the show. It was broken the first time, too.
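
For the record, real Blowfish takes a variable-length key of up to 448 bits and has no backdoor or "override codes"; there is no such thing as "Blowfish 148." Here is what ordinary use actually looks like, sketched with the third-party PyCryptodome package (an assumption; any mainstream crypto library would do):

# Minimal Blowfish-CBC example using PyCryptodome (pip install pycryptodome).
# Just a sketch of normal use; decryption requires the key, not an "override code."
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)                 # 256-bit key; Blowfish accepts 32-448 bits
plaintext = b"name and address of the next destination"

cipher = Blowfish.new(key, Blowfish.MODE_CBC)
ciphertext = cipher.encrypt(pad(plaintext, Blowfish.block_size))
iv = cipher.iv

# Decrypting is only "a piece of cake" if you already have the key.
decipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
recovered = unpad(decipher.decrypt(ciphertext), Blowfish.block_size)
assert recovered == plaintext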

EDITED TO ADD (4/14): Avi Rubin comments.

Posted on March 19, 2009 at 12:18 PM • 134 Comments

Fingerprinting Paper

Interesting paper:

Fingerprinting Blank Paper Using Commodity Scanners

William Clarkson, Tim Weyrich, Adam Finkelstein, Nadia Heninger, J. Alex Halderman, and Edward W. Felten

Abstract: This paper presents a novel technique for authenticating physical documents based on random, naturally occurring imperfections in paper texture. We introduce a new method for measuring the three-dimensional surface of a page using only a commodity scanner and without modifying the document in any way. From this physical feature, we generate a concise fingerprint that uniquely identifies the document. Our technique is secure against counterfeiting and robust to harsh handling; it can be used even before any content is printed on a page. It has a wide range of applications, including detecting forged currency and tickets, authenticating passports, and halting counterfeit goods. Document identification could also be applied maliciously to de-anonymize printed surveys and to compromise the secrecy of paper ballots.
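
As a rough intuition for the matching step (a toy illustration only, not the authors' algorithm): reduce the scanned surface to a binary feature vector and compare documents by Hamming distance, so that repeated scans of the same page match despite noise while different pages do not.

# Toy illustration of texture fingerprinting: reduce a scanned patch to a
# binary vector and compare by Hamming distance. Not the Clarkson et al.
# algorithm, just the general flavor of noise-tolerant matching.
import numpy as np

rng = np.random.default_rng(0)

def fingerprint(surface):
    """Threshold each cell against the patch mean to get a bit vector."""
    return (surface > surface.mean()).astype(np.uint8).ravel()

def hamming(a, b):
    return int(np.count_nonzero(a != b))

paper = rng.normal(size=(32, 32))                  # stand-in for real paper texture
rescan = paper + 0.2 * rng.normal(size=(32, 32))   # same page, new scan, some noise
other = rng.normal(size=(32, 32))                  # a different page

fp = fingerprint(paper)
print(hamming(fp, fingerprint(rescan)))   # small: same document
print(hamming(fp, fingerprint(other)))    # roughly half the bits: different document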

Posted on March 19, 2009 at 6:07 AM • 17 Comments

Hiding Behind Terrorism Law

The Bayer company is refusing to talk about a fatal accident at a West Virginia plant, citing a 2002 terrorism law.

CSB had intended to hear community concerns, gather more information on the accident, and inform residents of the status of its investigation. However, Bayer attorneys contacted CSB Chairman John Bresland and set up a Feb. 12 conference at the board’s Washington, D.C., headquarters. There, they warned CSB not to reveal details of the accident or the facility’s layout at the community meeting.

“This is where it gets a little strange,” Bresland tells C&EN. To justify their request, Bayer attorneys cited the Maritime Transportation Security Act of 2002, an antiterrorism law that requires companies with plants on waterways to develop security plans to minimize the threat of a terrorist attack. Part of the plans can be designated as “sensitive security information” that can be disseminated only on a “need-to-know basis.” Enforcement of the act is overseen by the Coast Guard and covers some 3,200 facilities, including 320 chemical and petrochemical facilities. Among those facilities is the Bayer plant.

Bayer argued that CSB’s planned public meeting could reveal sensitive plant-specific security information, Bresland says, and therefore would be a violation of the maritime transportation law. The board got cold feet and canceled the meeting.

Bresland contends that CSB wasn’t agreeing with Bayer, but says it was better to put off the meeting than to hold it and be unable to answer questions posed by the public.

The board then met with Coast Guard officials, Bresland says, and formally canceled the community meeting. The outcome of the Coast Guard meeting remains murky. It is unclear what role the Coast Guard might have in editing or restricting release of future CSB reports of accidents at covered facilities, the board says. “This could really cause difficulties for us,” Bresland says. “We could find ourselves hemming and hawing about what actually happened in an accident.”

This isn’t the first time that the specter of terrorism has been used to keep embarrassing information secret.

EDITED TO ADD (3/20): The meeting has been rescheduled. No word on how forthcoming Bayer will be.

Posted on March 18, 2009 at 12:45 PM • 27 Comments

Leaving Infants in the Car

It happens; sometimes they die.

“Death by hyperthermia” is the official designation. When it happens to young children, the facts are often the same: An otherwise loving and attentive parent one day gets busy, or distracted, or upset, or confused by a change in his or her daily routine, and just… forgets a child is in the car. It happens that way somewhere in the United States 15 to 25 times a year, parceled out through the spring, summer and early fall.

It’s a fascinating piece of reporting, with some interesting security aspects. We protect against a common risk, and increase the chances of a rare risk:

Two decades ago, this was relatively rare. But in the early 1990s, car-safety experts declared that passenger-side front airbags could kill children, and they recommended that child seats be moved to the back of the car; then, for even more safety for the very young, that the baby seats be pivoted to face the rear.

There is a theory of why we forget something so important, and it centers on the fact that dropping off the baby is routine:

The human brain, he says, is a magnificent but jury-rigged device in which newer and more sophisticated structures sit atop a junk heap of prototype brains still used by lower species. At the top of the device are the smartest and most nimble parts: the prefrontal cortex, which thinks and analyzes, and the hippocampus, which makes and holds on to our immediate memories. At the bottom is the basal ganglia, nearly identical to the brains of lizards, controlling voluntary but barely conscious actions.

Diamond says that in situations involving familiar, routine motor skills, the human animal presses the basal ganglia into service as a sort of auxiliary autopilot. When our prefrontal cortex and hippocampus are planning our day on the way to work, the ignorant but efficient basal ganglia is operating the car; that’s why you’ll sometimes find yourself having driven from point A to point B without a clear recollection of the route you took, the turns you made or the scenery you saw.

There are technical solutions:

In 2000, Chris Edwards, Terry Mack and Edward Modlin began to work on just such a product after one of their colleagues, Kevin Shelton, accidentally left his 9-month-old son to die in the parking lot of NASA Langley Research Center in Hampton, Va. The inventors patented a device with weight sensors and a keychain alarm. Based on aerospace technology, it was easy to use; it was relatively cheap, and it worked.

Janette Fennell had high hopes for this product: The dramatic narrative behind it, she felt, and the fact that it came from NASA, created a likelihood of widespread publicity and public acceptance.

That was five years ago. The device still isn’t on the shelves. The inventors could not find a commercial partner willing to manufacture it. One big problem was liability. If you made it, you could face enormous lawsuits if it malfunctioned and a child died. But another big problem was psychological: Marketing studies suggested it wouldn’t sell well.

The problem is this simple: People think this could never happen to them.

There’s talk of making this a mandatory safety feature, but nothing about the cost per lives saved. (In general, a regulatory goal is between $1 million and $10 million per life saved.)

And there’s the question of whether someone who accidentally leaves a baby in the car, resulting in the baby’s death, should be prosecuted as a criminal.

EDITED TO ADD (4/14): Tips to prevent this kind of tragedy.

Posted on March 17, 2009 at 1:10 PM • 152 Comments

The Doghouse: Sentex Keypads

Many can be opened with a default admin password:

Here’s a fun little tip: You can open most Sentex key pad-access doors by typing in the following code:

***00000099#*

The first *** are to enter into the admin mode, 000000 (six zeroes) is the factory-default password, 99# opens the door, and * exits the admin mode (make sure you press this or the access box will be left in admin mode!)

Posted on March 13, 2009 at 1:46 PM • 48 Comments

The Kindness of Strangers

When I was growing up, children were commonly taught: “don’t talk to strangers.” Strangers might be bad, we were told, so it’s prudent to steer clear of them.

And yet most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.

These two pieces of advice may seem to contradict each other, but they don’t. The difference is that in the second instance, the child is choosing which stranger to talk to. Given that the overwhelming majority of people will help, the child is likely to get help if he chooses a random stranger. But if a stranger comes up to a child and talks to him or her, it’s not a random choice. It’s more likely, although still unlikely, that the stranger is up to no good.

As a species, we tend to help each other, and a surprising amount of our security and safety comes from the kindness of strangers. During disasters: floods, earthquakes, hurricanes, bridge collapses. In times of personal tragedy. And even in normal times.

If you’re sitting in a café working on your laptop and need to get up for a minute, ask the person sitting next to you to watch your stuff. He’s very unlikely to steal anything. Or, if you’re nervous about that, ask the three people sitting around you. Those three people don’t know each other, and will not only watch your stuff, but they’ll also watch each other to make sure no one steals anything.

Again, this works because you’re selecting the people. If three people walk up to you in the café and offer to watch your computer while you go to the bathroom, don’t take them up on that offer. Your odds of getting three honest people are much lower.

Some computer systems rely on the kindness of strangers, too. The Internet works because nodes benevolently forward packets to each other without any recompense from either the sender or receiver of those packets. Wikipedia works because strangers are willing to write for, and edit, an encyclopedia—with no recompense.

Collaborative spam filtering is another example. Basically, once someone notices a particular e-mail is spam, he marks it, and everyone else in the network is alerted that it’s spam. Marking the e-mail is a completely altruistic task; the person doing it gets no benefit from the action. But he receives benefit from everyone else doing it for other e-mails.
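
A minimal sketch of that mechanism, as hypothetical code rather than any particular product: the first recipient to notice spam publishes a hash of the message to a shared blocklist, and everyone else checks incoming mail against it.

# Toy collaborative spam filter: one user's altruistic "mark as spam" action
# protects everyone else. Hypothetical sketch, not a real product's protocol.
import hashlib

shared_blocklist = set()   # stands in for a server every participant can query

def canonical_hash(message: str) -> str:
    # Normalize lightly so trivial whitespace changes don't evade the filter.
    normalized = " ".join(message.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def mark_as_spam(message: str) -> None:
    """Called by the first recipient who notices the message is spam."""
    shared_blocklist.add(canonical_hash(message))

def is_spam(message: str) -> bool:
    """Called by everyone else before the message hits their inbox."""
    return canonical_hash(message) in shared_blocklist

mark_as_spam("Buy   CHEAP watches now!!!")
print(is_spam("buy cheap watches now!!!"))   # True: someone else already flagged it
print(is_spam("Lunch on Tuesday?"))          # False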

Tor is a system for anonymous Web browsing. The details are complicated, but basically, a network of Tor servers passes Web traffic among each other in such a way as to anonymize where it came from. Think of it as a giant shell game. As a Web surfer, I put my Web query inside a shell and send it to a random Tor server. That server knows who I am but not what I am doing. It passes that shell to another Tor server, which passes it to a third. That third server—which knows what I am doing but not who I am—processes the Web query. When the Web page comes back to that third server, the process reverses itself and I get my Web page. Assuming enough Web surfers are sending enough shells through the system, even someone eavesdropping on the entire network can’t figure out what I’m doing.

It’s a very clever system, and it protects a lot of people, including journalists, human rights activists, whistleblowers, and ordinary people living in repressive regimes around the world. But it only works because of the kindness of strangers. No one gets any benefit from being a Tor server; it uses up bandwidth to forward other people’s packets around. It’s more efficient to be a Tor client and use the forwarding capabilities of others. But if there are no Tor servers, then there’s no Tor. Tor works because people are willing to set themselves up as servers, at no benefit to them.
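
The shell game can be sketched as layered encryption: the client wraps its query once per relay, and each relay peels off exactly one layer, so no single relay sees both who is asking and what is being asked. The toy below uses the third-party Python cryptography package purely for illustration; real Tor's protocol is far more elaborate.

# Toy onion routing: wrap a query in one encryption layer per relay.
# Illustration only (uses the "cryptography" package); real Tor is much
# more sophisticated than this.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # one key per relay

def wrap(query: bytes, keys) -> bytes:
    # Encrypt for the last relay first, so the first relay's layer is outermost.
    for key in reversed(keys):
        query = Fernet(key).encrypt(query)
    return query

def relay(onion: bytes, key: bytes) -> bytes:
    # Each relay can remove only its own layer.
    return Fernet(key).decrypt(onion)

onion = wrap(b"GET https://example.com/", relay_keys)
for key in relay_keys:
    onion = relay(onion, key)
    # The first relay knows the client but sees only another encrypted blob;
    # only the last relay sees the actual query, and it doesn't know the client.
print(onion)   # b'GET https://example.com/'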

Alibi clubs work along similar lines. You can find them on the Internet, and they’re loose collections of people willing to help each other out with alibis. Sign up, and you’re in. You can ask someone to pretend to be your doctor and call your boss. Or someone to pretend to be your boss and call your spouse. Or maybe someone to pretend to be your spouse and call your boss. Whatever you want, just ask and some anonymous stranger will come to your rescue. And because your accomplice is an anonymous stranger, it’s safer than asking a friend to participate in your ruse.

There are risks in these sorts of systems. Regularly, marketers and other people with agendas try to manipulate Wikipedia entries to suit their interests. Intelligence agencies can, and almost certainly have, set themselves up as Tor servers to better eavesdrop on traffic. And a do-gooder could join an alibi club just to expose other members. But for the most part, strangers are willing to help each other, and systems that harvest this kindness work very well on the Internet.

This essay originally appeared on the Wall Street Journal website.

Posted on March 13, 2009 at 7:41 AM • 38 Comments

IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don’t patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.

People regularly don’t do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it’s unlikely to matter. No feedback, no learning.

We’ve tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren’t generally felt immediately.

This makes security primarily a hindrance to the user. It’s a recurring obstacle: something that interferes with the seamless performance of the user’s task. And it’s human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren’t obvious, then people will naturally do it.

This is the problem with Microsoft‘s User Account Control (UAC). Introduced in Vista, the idea is to improve security by limiting the privileges applications have when they’re running. But the security prompts pop up too frequently, and there’s rarely any ill-effect from ignoring them. So people do ignore them.

This doesn’t mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren’t really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.

For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it’s not defenceless.

Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

This essay originally appeared in The Guardian.

Posted on March 12, 2009 at 12:39 PM • 49 Comments

The Story of the World's Largest Diamond Heist

Read the whole thing:

He took the elevator, descending two floors underground to a small, claustrophobic room—the vault antechamber. A 3-ton steel vault door dominated the far wall. It alone had six layers of security. There was a combination wheel with numbers from 0 to 99. To enter, four numbers had to be dialed, and the digits could be seen only through a small lens on the top of the wheel. There were 100 million possible combinations.

Power tools wouldn’t do the trick. The door was rated to withstand 12 hours of nonstop drilling. Of course, the first vibrations of a drill bit would set off the embedded seismic alarm anyway.

The door was monitored by a pair of abutting metal plates, one on the door itself and one on the wall just to the right. When armed, the plates formed a magnetic field. If the door were opened, the field would break, triggering an alarm. To disarm the field, a code had to be typed into a nearby keypad. Finally, the lock required an almost-impossible-to-duplicate foot-long key.

During business hours, the door was actually left open, leaving only a steel grate to prevent access. But Notarbartolo had no intention of muscling his way in when people were around and then shooting his way out. Any break-in would have to be done at night, after the guards had locked down the vault, emptied the building, and shuttered the entrances with steel roll-gates. During those quiet midnight hours, nobody patrolled the interior—the guards trusted their technological defenses.

Notarbartolo pressed a buzzer on the steel grate. A guard upstairs glanced at the videofeed, recognized Notarbartolo, and remotely unlocked the steel grate. Notarbartolo stepped inside the vault.

It was silent—he was surrounded by thick concrete walls. The place was outfitted with motion, heat, and light detectors. A security camera transmitted his movements to the guard station, and the feed was recorded on videotape. The safe-deposit boxes themselves were made of steel and copper and required a key and combination to open. Each box had 17,576 possible combinations.

Notarbartolo went through the motions of opening and closing his box and then walked out. The vault was one of the hardest targets he’d ever seen.

Definitely a movie plot.
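
The combination counts quoted above check out, assuming four dial numbers each drawn from 0–99 and (presumably) a three-letter box combination:

$$100^4 = 100{,}000{,}000 \qquad\text{and}\qquad 26^3 = 17{,}576.$$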

Posted on March 12, 2009 at 6:36 AM • 81 Comments

Google Maps Spam

There are zillions of locksmiths in New York City.

Not really; this is the latest attempt by phony locksmiths to steer business to themselves:

This is one of the scary parts they have a near monopoly on the cell phone 411 system. They have filled the data bases with so many phony address listings in most major citys that when you call 411 on your cell phone ( which most people do now) you will get the same counterfiet locksmiths over and over again. you could ask for 10 listings and they will all be one of these scammers or another with some local adress that is phony. they use thousands of different names also. It is always the same 55.00 service qouted for a lockout and after they unlock your stuff the price goes much higher. These companys are really not in the rural areas but the are in just about all major citys from coast to coast and from top to bottom. [sic]

More here:

Google wasn’t their first target. The “blackhats” in the industry have used whatever marketing vehicle was “au courant,” whether it was the phone books, 411 or now Google and Yahoo.

Here is a BBB alert from 2007, BBB Warns Consumers of Nationwide Locksmith Swindle and a recent ABC news article and video. The Associated Locksmiths of America provides a list of over 110 news reports over the past several years from across the nation detailing the abuses. As you can see, consumers have paid the price of these many scams with high prices, rip-off installs and even theft.

Posted on March 11, 2009 at 12:38 PM

The Techniques for Distributing Child Porn

Fascinating history of an illegal industry:

Today’s schemes are technologically very demanding and extremely complex. It starts with the renting of computer servers in several countries. First, carders obtain stolen credit cards and client identities. These data are then passed to forgers, who manufacture convincing official documents that can be used for identification. The identities and credit card details are then sold as “credit card kits” to the operators. There is an alternative where no credit card is needed: in the U.S. one can buy prepaid Visa or MasterCard gift cards charged with a certain amount of money, although these are usually only usable in the U.S. Since the gift cards can be bought anonymously, they are used to pay over the Internet under fake identities. Using a false identity and a working credit card, servers are then rented and domains purchased in the name of an existing, unsuspecting person. Most of the time an ID is required, in which case they simply send a forged document. There is yet another alternative: a payment system called WebMoney (webmoney.ru), which is as widespread in Eastern Europe as PayPal is in Western Europe. Again, accounts are opened with false identities. Then the business is very simple in Eastern Europe: one buys domains and rents servers via WebMoney and pays with it.

As soon as the server is available, a qualified server admin connects to the new server over SSH, via a chain of servers in various countries. Today complete partitions are encrypted with TrueCrypt and all of the operating system logs are turned off. Because people consider the servers in Germany very reliable, fast and inexpensive, these are usually configured as HIDDEN CONTENT SERVERS. In other words, all the illegal files such as pictures, videos, etc. are uploaded to these servers – naturally via various proxies (and since you are still wondering what these proxies can be – I’ll explain that later). These servers are firewalled, completely sealed off and made inaccessible except to a few servers all over the world – so-called PROXY SERVERS or FORWARD SERVERS. If the server is shut down or someone logs in from the console, the TrueCrypt partition is unmounted. Just as was done on the content servers, logs are turned off and TrueCrypt is installed on the so-called proxy servers or forward servers. The Russians have developed very clever software that can be used as a proxy server (in addition to the possibilities of SSL tunneling and IP forwarding). These proxy servers accept incoming connections from the retail customers and route them to the content servers in Germany – COMPLETELY ANONYMOUSLY AND UNIDENTIFIABLY. The communication link can even be configured to be encrypted. Result: the server in Germany ATTRACTS NO ATTENTION AND STAYS COMPLETELY ANONYMOUS because its IP is not used by anyone except for the proxy server that uses it to route the traffic back and forth through a tunnel – using similar technology as is used with large enterprise VPNs. I stress that these proxy servers are everywhere in the world and only consume a lot of traffic, have no special demands, and above all are completely empty.

Networks of servers around the world are also used at the DNS level. The DNS setup has several special features: the records have a TTL (Time To Live) of only about 10 minutes, and each entry usually contains multiple IP addresses served round-robin, so each request rotates the visitor to a different forward proxy server. But what is really special are the different DNS zones linked with extensive GeoIP databases… By the way, there are pedophiles inside authorities and hosting providers, giving the Russian server administrators access to valuable information about IP blocks, etc., that can be used in conjunction with the DNS. Anyone with a little technical knowledge will understand the importance and implications of this… But what I have to report to you is much more significant than this, and maybe then you will finally understand to what extent the public is being cheated by the greedy politicians who CANNOT DO ANYTHING against child pornography but use it as a means to justify total monitoring.

Posted on March 11, 2009 at 5:49 AM

Security Theater Scare Mongering

We need more security in hotels and churches:

First Baptist Church in Maryville, Illinois, had a security plan in place when a gunman walked into services Sunday morning and killed Pastor Fred Winters, said Tim Lawson, another pastor at the church.

Lawson told CNN he was not prepared to disclose details of his church’s security plan on Monday.

But Maryville police Chief Rich Schardam said Winters was keenly aware of the security issues, had sought out police advice and had identified police and medical personnel in the congregation who could help in an emergency.

“They did have plans on what to do,” Schardam said Monday.

Schardam said neither of the men who subdued the gunman had a law enforcement background.

“Those parishioners were just real-life heroes,” Pastor Lawson said.

Sounds like those plans didn’t make much of a difference.

And does anyone really believe that security checkpoints at hotel entrances will make any difference at all?

Posted on March 10, 2009 at 7:52 AM

Choosing a Bad Password Has Real-World Consequences

Oops:

Wikileaks has cracked the encryption to a key document relating to the war in Afghanistan. The document, titled “NATO in Afghanistan: Master Narrative”, details the “story” NATO representatives are to give to, and to avoid giving to, journalists.

An unrelated leaked photo from the war: a US soldier poses with a dead Afghan man in the hills of Afghanistan

The encrypted document, which is dated October 6, and believed to be current, can be found on the Pentagon Central Command (CENTCOM) website.

Posted on March 9, 2009 at 1:19 PM

History and Ethics of Military Robots

This article gives an overview of U.S. military robots and discusses some of the issues surrounding their use in war:

As military robots gain more and more autonomy, the ethical questions involved will become even more complex. The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties. Autonomous robots could, in theory, follow the rules of engagement; they could be programmed with a list of criteria for determining appropriate targets and when shooting is permissible. The robot might be programmed to require human input if any civilians were detected. An example of such a list at work might go as follows: “Is the target a Soviet-made T-80 tank? Identification confirmed. Is the target located in an authorized free-fire zone? Location confirmed. Are there any friendly units within a 200-meter radius? No friendlies detected. Are there any civilians within a 200-meter radius? No civilians detected. Weapons release authorized. No human command authority required.”

Such an “ethical” killing machine, though, may not prove so simple in the reality of war. Even if a robot has software that follows all the various rules of engagement, and even if it were somehow absolutely free of software bugs and hardware failures (a big assumption), the very question of figuring out who an enemy is in the first place—that is, whether a target should even be considered for the list of screening questions—is extremely complicated in modern war. It essentially is a judgment call. It becomes further complicated as the enemy adapts, changes his conduct, and even hides among civilians. If an enemy is hiding behind a child, is it okay to shoot or not? Or what if an enemy is plotting an attack but has not yet carried it out? Politicians, pundits, and lawyers can fill pages arguing these points. It is unreasonable to expect robots to find them any easier.

The legal questions related to autonomous systems are also extremely sticky. In 2002, for example, an Air National Guard pilot in an F-16 saw flashing lights underneath him while flying over Afghanistan at twenty-three thousand feet and thought he was under fire from insurgents. Without getting required permission from his commanders, he dropped a 500-pound bomb on the lights. They instead turned out to be troops from Canada on a night training mission. Four were killed and eight wounded. In the hearings that followed, the pilot blamed the ubiquitous “fog of war” for his mistake. It didn’t matter and he was found guilty of dereliction of duty.

Change this scenario to an unmanned system and military lawyers aren’t sure what to do. Asks a Navy officer, “If these same Canadian forces had been attacked by an autonomous UCAV, determining who is accountable proves difficult. Would accountability lie with the civilian software programmers who wrote the faulty target identification software, the UCAV squadron’s Commanding Officer, or the Combatant Commander who authorized the operational use of the UCAV? Or are they collectively held responsible and accountable?”
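
The hypothetical criteria list in the first excerpt is essentially a boolean checklist. Here is a toy sketch of what such a checklist might look like in code (purely illustrative; the function and its inputs are invented, not drawn from any real system). It also makes the article’s point visible: the code can only evaluate criteria that sensors hand it, and nothing in it addresses the much harder question of whether a target belongs on the list at all.

```python
# Toy sketch of the rules-of-engagement checklist quoted above.
# All inputs are hypothetical sensor outputs; the hard problem -- deciding
# whether something is an enemy in the first place -- happens before this
# function is ever called.

def weapons_release_authorized(target_id: str,
                               in_free_fire_zone: bool,
                               friendlies_within_200m: bool,
                               civilians_within_200m: bool) -> bool:
    """Return True only if every quoted criterion is satisfied."""
    if target_id != "T-80":        # "Is the target a Soviet-made T-80 tank?"
        return False
    if not in_free_fire_zone:      # "Is the target located in an authorized free-fire zone?"
        return False
    if friendlies_within_200m:     # "Are there any friendly units within a 200-meter radius?"
        return False
    if civilians_within_200m:      # "Are there any civilians within a 200-meter radius?"
        return False               # (or escalate to a human operator)
    return True
```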

The article was adapted from the author’s book Wired for War: The Robotics Revolution and Conflict in the 21st Century, published this year. I bought the book, but I have not read it yet.

Related is this paper on the ethics of autonomous military robots.

Posted on March 9, 2009 at 6:59 AM

New eBay Fraud

Here’s a clever attack, exploiting relative delays in eBay, PayPal, and UPS shipping:

The buyer reported the item as “destroyed” and demanded and got a refund from Paypal. When the buyer shipped it back to Chad and he opened it, he found there was nothing wrong with it—except that the scammer had removed the memory, processor and hard drive. Now Chad is out $500 and left with a shell of a computer, and since the item was “received” Paypal won’t do anything.

Very clever. The seller accepted the return from UPS after a visual inspection, so UPS considered the matter closed. PayPal and eBay both considered the matter closed. If the amount was large enough, the seller could sue, but how could he prove that the computer was functional when he sold it?

It seems to me that the only way to solve this is for PayPal not to process refunds until the seller confirms that what he received back is the same as what he shipped. Yes, the seller could then commit similar fraud, but sellers (certainly professional ones) have a greater reputational risk.
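
One way to picture that change is as an extra state in the refund workflow: the buyer’s return reaching the seller no longer triggers the payout; the seller’s inspection does. A minimal sketch of the idea, with hypothetical states and names (this is not PayPal’s actual process):

```python
# Minimal sketch of a refund workflow with a seller-confirmation hold.
# States and method names are hypothetical, not PayPal's actual system.

from enum import Enum, auto

class RefundState(Enum):
    REQUESTED = auto()              # buyer claims the item arrived "destroyed"
    RETURN_IN_TRANSIT = auto()      # buyer ships the item back
    AWAITING_SELLER_CHECK = auto()  # carrier delivered the return
    REFUNDED = auto()               # seller confirmed the return matches what was sold
    DISPUTED = auto()               # seller reports parts missing; escalate to arbitration

class RefundCase:
    def __init__(self) -> None:
        self.state = RefundState.REQUESTED

    def mark_return_shipped(self) -> None:
        self.state = RefundState.RETURN_IN_TRANSIT

    def mark_return_delivered(self) -> None:
        # Key difference from the scam described above: delivery alone
        # does not release the money.
        self.state = RefundState.AWAITING_SELLER_CHECK

    def seller_inspects(self, item_matches_original: bool) -> None:
        if self.state is not RefundState.AWAITING_SELLER_CHECK:
            raise ValueError("no returned item awaiting inspection")
        self.state = (RefundState.REFUNDED if item_matches_original
                      else RefundState.DISPUTED)
```

The DISPUTED state still has to be resolved by someone, which is where the reputational argument comes in: a professional seller has more to lose from abusing that hold than a one-off buyer does from faking a return.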

Posted on March 6, 2009 at 1:30 PM

More European Chip and Pin Insecurity

“Optimised to Fail: Card Readers for Online Banking,” by Saar Drimer, Steven J. Murdoch, and Ross Anderson.

Abstract

The Chip Authentication Programme (CAP) has been introduced by banks in Europe to deal with the soaring losses due to online banking fraud. A handheld reader is used together with the customer’s debit card to generate one-time codes for both login and transaction authentication. The CAP protocol is not public, and was rolled out without any public scrutiny. We reverse engineered the UK variant of card readers and smart cards and here provide the first public description of the protocol. We found numerous weaknesses that are due to design errors such as reusing authentication tokens, overloading data semantics, and failing to ensure freshness of responses. The overall strategic error was excessive optimisation. There are also policy implications. The move from signature to PIN for authorising point-of-sale transactions shifted liability from banks to customers; CAP introduces the same problem for online banking. It may also expose customers to physical harm.
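
To see why “failing to ensure freshness of responses” matters, here is a generic challenge-response sketch (this is not the CAP protocol, which is proprietary and not reproduced here; the names and message format are invented for illustration). If the verifier includes an unpredictable nonce in each challenge and binds the transaction details into the response, a captured code is useless for any other login or payment; drop either ingredient and replay or substitution attacks become possible.

```python
# Generic challenge-response sketch illustrating "freshness" -- NOT the CAP
# protocol. The verifier sends a random nonce; the token computes a MAC over
# the nonce plus the transaction details, so a recorded response cannot be
# replayed against a different nonce or attached to a different transaction.

import hmac, hashlib, os

def bank_issue_challenge() -> bytes:
    return os.urandom(16)  # fresh, unpredictable nonce

def card_response(card_key: bytes, nonce: bytes, amount: str, payee: str) -> str:
    msg = nonce + amount.encode() + payee.encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:8]  # short code the user retypes

def bank_verify(card_key: bytes, nonce: bytes, amount: str, payee: str, code: str) -> bool:
    return hmac.compare_digest(card_response(card_key, nonce, amount, payee), code)

# A code generated for one (nonce, amount, payee) triple fails verification
# for any other -- exactly the property a protocol loses when it reuses
# authentication tokens or omits a fresh challenge.
key = os.urandom(32)
nonce = bank_issue_challenge()
code = card_response(key, nonce, "150.00", "ACME Ltd")
assert bank_verify(key, nonce, "150.00", "ACME Ltd", code)
assert not bank_verify(key, bank_issue_challenge(), "150.00", "ACME Ltd", code)
```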

EDITED TO ADD (3/12): More info.

Posted on March 5, 2009 at 12:45 PM

Commentary on the UK Government National Security Strategy

This is scary:

Sir David Omand, the former Whitehall security and intelligence co-ordinator, sets out a blueprint for the way the state will mine data—including travel information, phone records and emails—held by public and private bodies and admits: “Finding out other people’s secrets is going to involve breaking everyday moral rules.”

In short: it’s immoral, but we’re going to do it anyway.

Posted on March 4, 2009 at 12:32 PM

Michael Froomkin on Identity Cards

University of Miami law professor Michael Froomkin writes about ID cards and society in “Identity Cards and Identity Romanticism.”

This book chapter for “Lessons from the Identity Trail: Anonymity, Privacy and Identity in a Networked Society” (New York: Oxford University Press, 2009)—a forthcoming comparative examination of approaches to the regulation of anonymity edited by Ian Kerr—discusses the sources of hostility to National ID Cards in common law countries. It traces that hostility in the United States to a romantic vision of free movement and in England to an equally romantic vision of the ‘rights of Englishmen’.

Governments in the United Kingdom, United States, Australia, and other countries are responding to perceived security threats by introducing various forms of mandatory or nearly mandatory domestic civilian national identity documents. This chapter argues that these ID cards pose threats to privacy and freedom, especially in countries without strong data protection rules. The threats created by weak data protection in these new identification schemes differ significantly from previous threats, making the romantic vision a poor basis from which to critique (highly flawed) contemporary proposals.

One small excerpt:

…it is important to note that each ratchet up in an ID card regime—the introduction of a non-mandatory ID card scheme, improvements to authentication, the transition from an optional regime to a mandatory one, or the inclusion of multiple biometric identifiers—increases the need for attention to how the data collected at the time the card is created will be stored and accessed. Similarly, as ID cards become ubiquitous, a de facto necessity even when not required de jure, the card becomes the visible instantiation of a large, otherwise unseen, set of databases. If each use of the card also creates a data trail, the resulting profile becomes an ongoing temptation to both ordinary and predictive profiling.

Posted on March 4, 2009 at 7:25 AM

Three Security Anecdotes from the Insect World

Beet armyworm caterpillars react to the sound of a passing wasp by freezing in place, or even dropping off the plant. Unfortunately, armyworm intelligence isn’t good enough to tell the difference between enemy aircraft (the wasps that prey on them) and harmless commercial flights (bees); they react the same way to either. So by producing nectar for bees, plants not only get pollinated, but also gain some protection against being eaten by caterpillars.

The small hive beetle lives by entering beehives to steal combs and honey. It homes in on the hives by detecting the bees’ own alarm pheromones. It also tracks in a yeast that ferments the pollen and releases chemicals that spoof the alarm pheromones, attracting more beetles and more yeast. Eventually the bees abandon the hive, leaving their store of pollen and honey to the beetles and the yeast.

Mountain alcon blue caterpillars get ants to feed them by spoofing a biometric: the sounds made by the queen ant.

Posted on March 3, 2009 at 1:20 PM

Judge Orders Defendant to Decrypt Laptop

This is an interesting case:

At issue in this case is whether forcing Boucher to type in that PGP passphrase—which would be shielded from and remain unknown to the government—is “testimonial,” meaning that it triggers Fifth Amendment protections. The counterargument is that since defendants can be compelled to turn over a key to a safe filled with incriminating documents, or provide fingerprints, blood samples, or voice recordings, unlocking a partially-encrypted hard drive is no different.

Posted on March 2, 2009 at 12:30 PM

Perverse Security Incentives

An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.

I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision. All security decisions are trade-offs, but the motivations behind them are not always obvious: They’re often subjective, and driven by external incentives. And often security trade-offs are made for nonsecurity reasons.

Almost certainly, Whole Foods has a no-touching-the-customer policy because its attorneys recommended it. “No touching” is a security measure as well, but it’s security against customer lawsuits. The cost of these lawsuits would be much, much greater than the $346 worth of groceries stolen in this instance. Even applied to suspected shoplifters, the policy makes sense: The cost of a lawsuit resulting from tackling an innocent shopper by mistake would be far greater than the cost of letting actual shoplifters get away. As perverse as it may seem, the result is completely reasonable given the corporate incentives—Whole Foods wrote a corporate policy that benefited itself.

At least, it works as long as the police and other factors keep society’s shoplifter population down to a reasonable level.

Incentives explain much that is perplexing about security trade-offs. Why does King County, Washington, require one form of ID to get a concealed-carry permit, but two forms of ID to pay for the permit by check? Making a mistake on a gun permit is an abstract problem, but a bad check actually costs some department money.

In the decades before 9/11, why did the airlines fight every security measure except the photo-ID check? Increased security annoys their customers, but the photo-ID check solved a security problem of a different kind: the resale of nonrefundable tickets. So the airlines were on board for that one.

And why does the TSA confiscate liquids at airport security, on the off chance that a terrorist will try to make a liquid explosive instead of using the more common solid ones? Because the officials in charge of the decision used CYA security measures to defend against specific, known tactics rather than the broad, general threat.

The same misplaced incentives explain the ongoing problem of innocent prisoners spending years in places like Guantanamo and Abu Ghraib. The solution might seem obvious: Release the innocent ones, keep the guilty ones, and figure out whether the ones we aren’t sure about are innocent or guilty. But the incentives are more perverse than that. Who is going to sign the order releasing one of those prisoners? Which military officer is going to accept the risk, no matter how small, of being wrong?

I read almost five years ago that prisoners were being held by the United States far longer than they should be, because “no one wanted to be responsible for releasing the next Osama bin Laden.” That incentive to do nothing hasn’t changed. It might have even gotten stronger, as these innocents languish in prison.

In all these cases, the best way to change the trade-off is to change the incentives. Look at why the Whole Foods case works. Store employees don’t have to apprehend shoplifters, because society created a special organization specifically authorized to lay hands on people the grocery store points to as shoplifters: the police. If we want more rationality out of the TSA, there needs to be someone with a broader perspective willing to deal with general threats rather than specific targets or tactics.

For prisoners, society has created a special organization specifically entrusted with the role of judging the evidence against them and releasing them if appropriate: the judiciary. It’s only because the George W. Bush administration decided to remove the Guantanamo prisoners from the legal system that we are now stuck with these perverse incentives. Our country would be smart to move as many of these people through the court system as we can.

This essay originally appeared on Wired.com.

Posted on March 2, 2009 at 7:10 AM
