Blog: April 2008 Archives

Microsoft Has Developed Windows Forensic Analysis Tool for Police


The COFEE, which stands for Computer Online Forensic Evidence Extractor, is a USB “thumb drive” that was quietly distributed to a handful of law-enforcement agencies last June. Microsoft General Counsel Brad Smith described its use to the 350 law-enforcement experts attending a company conference Monday.

The device contains 150 commands that can dramatically cut the time it takes to gather digital evidence, which is becoming more important in real-world crime, as well as cybercrime. It can decrypt passwords and analyze a computer’s Internet activity, as well as data stored in the computer.

It also eliminates the need to seize a computer itself, which typically involves disconnecting from a network, turning off the power and potentially losing data. Instead, the investigator can scan for evidence on site.

More news here. Commentary here.

How long before this device is in the hands of the hacker community? Days? Months? Or did they have it before it was even released?

EDITED TO ADD (4/30): Seems that these are not Microsoft-developed tools:

COFEE, according to forensic folk who have used it, is simply a suite of 150 bundled off-the-shelf forensic tools that run from a script. None of the tools are new or were created by Microsoft. Microsoft simply combined existing programs into a portable tool that can be used in the field before agents bring a computer back to their forensic lab.

Microsoft wouldn’t disclose which tools are in the suite other than that they’re all publicly available, but a forensic expert told me that when he tested the product last year it included standard forensic products like Windows Forensic Toolchest (WFT) and RootkitRevealer.

With COFEE, a forensic agent can select, through the interface, which of the 150 investigative tools he wants to run on a targeted machine. COFEE creates a script and copies it to the USB device which is then plugged into the targeted machine. The advantage is that instead of having to run each tool separately, a forensic investigator can run them all through the script much more quickly and can also grab information (such as data temporarily stored in RAM or network connection information) that might otherwise be lost if he had to disconnect a machine and drag it to a forensics lab before he could examine it.
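COFEE itself is proprietary, but the workflow described above, picking tools from a catalog and generating one script that runs them all from the USB drive, is easy to sketch. In this toy Python version the catalog entries are placeholders (RootkitRevealer is mentioned above; the rest are generic Windows commands), not the actual COFEE tool list:

```python
# Toy sketch of the COFEE-style workflow: an investigator selects tools
# from a catalog, and the front end emits one script that runs them in
# order, logging each tool's output to the USB drive. The catalog below
# is invented for illustration.

CATALOG = {
    "netstat": "netstat.exe -ano",             # open network connections
    "tasklist": "tasklist.exe /v",             # running processes
    "rootkit_scan": "RootkitRevealer.exe -a",  # example off-the-shelf tool
}

def build_collection_script(selected, catalog=CATALOG, log_dir=r"E:\evidence"):
    """Generate a Windows batch script that runs each selected tool
    and appends its output to a per-tool log file on the USB drive."""
    lines = ["@echo off"]
    for name in selected:
        if name not in catalog:
            raise KeyError(f"unknown tool: {name}")
        lines.append(f"{catalog[name]} >> {log_dir}\\{name}.log 2>&1")
    return "\r\n".join(lines) + "\r\n"

script = build_collection_script(["netstat", "tasklist"])
print(script)
```

The time savings come entirely from batching: one insert-and-run instead of launching 150 tools by hand on a live machine.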

And it’s certainly not a back door, as TechDirt claims.

But given that a Federal court has ruled that border guards can search laptop computers without cause, this tool might see wider use than Microsoft anticipated.

Posted on April 30, 2008 at 1:54 PM • 57 Comments

Virtual Kidnapping

A real crime in Mexico:

“We’ve got your child,” he says in rapid-fire Spanish, usually adding an expletive for effect and then rattling off a list of demands that might include cash or jewels dropped off at a certain street corner or a sizable deposit made to a local bank.

The twist is that little Pablo or Teresa is safe and sound at school, not duct-taped to a chair in a rundown flophouse somewhere or stuffed in the back of a pirate taxi. But when the cellphone call comes in, that is not at all clear.


But identifying the phone numbers—they are now listed on a government Web site—has done little to slow the extortion calls. Nearly all the calls are from cellphones, most of them stolen, authorities say.

On top of that, many extortionists are believed to be pulling off the scams from prisons.

Authorities say hundreds of different criminal gangs are engaged in various telephone scams. Besides the false kidnappings, callers falsely tell people they have won cars or money. Sometimes, people are told to turn off their cellphones for an hour so the service can be repaired; then, relatives are called and told that the cellphone’s owner has been kidnapped. Ransom demands have even been made by text message.

Posted on April 29, 2008 at 5:29 AM • 39 Comments

Cyber Espionage

Interesting investigative article from Business Week on Chinese cyber espionage against the U.S. government, and the government’s reaction.

When the deluge began in 2006, officials scurried to come up with software “patches,” “wraps,” and other bits of triage. The effort got serious last summer when top military brass discreetly summoned the chief executives or their representatives from the 20 largest U.S. defense contractors to the Pentagon for a “threat briefing.” BusinessWeek has learned the U.S. government has launched a classified operation called Byzantine Foothold to detect, track, and disarm intrusions on the government’s most critical networks. And President George W. Bush on Jan. 8 quietly signed an order known as the Cyber Initiative to overhaul U.S. cyber defenses, at an eventual cost in the tens of billions of dollars, and establishing 12 distinct goals, according to people briefed on its contents. One goal in particular illustrates the urgency and scope of the problem: By June all government agencies must cut the number of communication channels, or ports, through which their networks connect to the Internet from more than 4,000 to fewer than 100. On Apr. 8, Homeland Security Dept. Secretary Michael Chertoff called the President’s order a cyber security “Manhattan Project.”

It can only help for the U.S. government to get its own cybersecurity house in order.

Posted on April 28, 2008 at 6:45 AM • 27 Comments

Boring Jobs Dull the Mind

We already knew this, but it’s good to reinforce the lesson:

In the study, Dr Eichele and his colleagues asked participants to repeatedly perform a “flanker task”—an experiment in which individuals must quickly respond to visual cues.

As they did so, brain scans were performed using functional magnetic resonance imaging (fMRI).

They found the participants’ mistakes were “foreshadowed” by a particular pattern of brain activity.

“To our surprise, up to 30 seconds before the mistake we could detect a distinct shift in activity,” said Dr Stefan Debener, of Southampton University, UK.

“The brain begins to economise, by investing less effort to complete the same task.

“We see a reduction in activity in the prefrontal cortex. At the same time, we see an increase in activity in an area which is more active in states of rest, known as the Default Mode Network (DMN).”

This has security implications whenever you have people watching the same thing over and over again, looking for anomalies: airport screeners looking at X-ray scans, casino dealers looking for cheaters, building guards looking for bad guys. It’s hard to do it correctly, because the brain doesn’t work that way.

EDITED TO ADD (4/28): This video demonstrates the point nicely.

Posted on April 26, 2008 at 6:37 AM • 18 Comments

Identity Theft from the Dead

List of deaths, intended to prevent identity theft, is used for identity theft:

Ironically, the government produces the monthly Death Index so that banks and other lenders can prevent people from applying for credit using a dead person’s information—the index is made public by the Department of Commerce under the Freedom of Information Act. The caper Kirkland’s accused of mastering apparently exploits a loophole, by taking over accounts that are already open.

Posted on April 25, 2008 at 6:01 AM • 21 Comments

Designing Processors to Support Hacking

This won best-paper award at the First USENIX Workshop on Large-Scale Exploits and Emergent Threats: “Designing and implementing malicious hardware,” by Samuel T. King, Joseph Tucek, Anthony Cozzie, Chris Grier, Weihang Jiang, and Yuanyuan Zhou.

Hidden malicious circuits provide an attacker with a stealthy attack vector. As they occupy a layer below the entire software stack, malicious circuits can bypass traditional defensive techniques. Yet current work on trojan circuits considers only simple attacks against the hardware itself, and straightforward defenses. More complex designs that attack the software are unexplored, as are the countermeasures an attacker may take to bypass proposed defenses.

We present the design and implementation of Illinois Malicious Processors (IMPs). There is a substantial design space in malicious circuitry; we show that an attacker, rather than designing one specific attack, can instead design hardware to support attacks. Such flexible hardware allows powerful, general purpose attacks, while remaining surprisingly low in the amount of additional hardware. We show two such hardware designs, and implement them in a real system. Further, we show three powerful attacks using this hardware, including a login backdoor that gives an attacker complete and high-level access to the machine. This login attack requires only 1341 additional gates: gates that can be used for other attacks as well. Malicious processors are more practical, more flexible, and harder to detect than an initial analysis would suggest.
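The login backdoor in the abstract is a hardware mechanism, but its logic is simple enough to model in software. The following is a toy illustration only (the class, interface, and trigger value are all invented, and nothing here corresponds to the paper's actual gate-level design): a hidden watcher scans data flowing through the machine for a magic byte sequence, and once it appears, any password is accepted.

```python
# Toy software model of a hardware login backdoor: a hidden "circuit"
# watches every chunk of data the machine handles for a magic trigger
# sequence; once seen, the login check silently passes for any password.
# Purely illustrative -- not the IMP design itself.

class BackdooredMachine:
    def __init__(self, real_password, trigger):
        self._password = real_password
        self._trigger = trigger
        self._window = b""      # bytes carried over to catch split triggers
        self._unlocked = False  # the hidden privilege flag

    def observe(self, data):
        """Called on all data the machine processes, e.g. network packets."""
        self._window += data
        if self._trigger in self._window:
            self._unlocked = True
        # keep only enough bytes to detect a trigger split across chunks
        self._window = self._window[-(len(self._trigger) - 1):]

    def check_login(self, password):
        return self._unlocked or password == self._password

machine = BackdooredMachine("s3cret", trigger=b"\xde\xad\xbe\xef")
machine.observe(b"ordinary traffic \xde\xad")  # trigger split across packets
machine.observe(b"\xbe\xef more traffic")
print(machine.check_login("anything"))         # True: backdoor is armed
```

Because the watcher sits below the software stack, no amount of auditing the operating system or login program would reveal it, which is the paper's point.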

Theoretical? Sure. But combine this with stories of counterfeit computer hardware from China, and you’ve got yourself a potentially serious problem.

Posted on April 24, 2008 at 1:52 PM • 32 Comments

Hacking ISP Error Pages

This is a big deal:

At issue is a growing trend in which ISPs subvert the Domain Name System, or DNS, which translates website names into numeric addresses.

When users visit a website, the DNS system maps its domain name into a numeric IP address. But if a particular site does not exist, the DNS server tells the browser that there’s no such listing and a simple error message should be displayed.

But starting in August 2006, Earthlink instead intercepts that Non-Existent Domain (NXDOMAIN) response and sends the IP address of ad-partner Barefruit’s server as the answer. When the browser visits that page, the user sees a list of suggestions for what site the user might have actually wanted, along with a search box and Yahoo ads.

The rub comes when a user asks for a nonexistent subdomain of a real website, such as a “webmale” subdomain of Google’s domain, where webmale doesn’t exist (unlike, say, mail, which does). In this case, the Earthlink/Barefruit ads appear in the browser, while the title bar suggests that it’s the official Google site.

As a result, all those subdomains are only as secure as Barefruit’s servers, which turned out to be not very secure at all. Barefruit neglected basic web programming techniques, making its servers vulnerable to a malicious JavaScript attack. That meant hackers could have crafted special links to unused subdomains of legitimate websites that, when visited, would serve any content the attacker wanted.

The hacker could, for example, send spam e-mails to Earthlink subscribers with a link to a webpage on a nonexistent PayPal subdomain. Visiting that link would take the victim to the hacker’s site, and it would look as though they were on a real PayPal page.

Kaminsky demonstrated the vulnerability by finding a way to insert a YouTube video from 80s pop star Rick Astley into Facebook and PayPal domains. But a black hat hacker could instead embed a password-stealing Trojan. The attack might also allow hackers to pretend to be a logged-in user, or to send e-mails and add friends to a Facebook account.

Earthlink isn’t alone in substituting ad pages for error messages, according to Kaminsky, who has seen similar behavior from other major ISPs including Verizon, Time Warner, Comcast and Qwest.
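A toy resolver makes the mechanics concrete. The zone data and addresses below are invented; the point is only the substitution: an honest resolver admits a name doesn’t exist, while the ad-injecting one hands back the ad partner’s address, so content for any nonexistent subdomain of a real site is actually served by the ad server:

```python
# Toy model of the NXDOMAIN substitution described above. The zone and
# addresses are made up for illustration; real DNS is far more involved.

ZONE = {
    "example.com": "93.184.216.34",
    "mail.example.com": "93.184.216.34",
}

AD_SERVER_IP = "10.9.9.9"  # stand-in for the ad partner's server

def honest_resolve(name):
    """Return the record, or None for NXDOMAIN (no such name)."""
    return ZONE.get(name)

def ad_injecting_resolve(name):
    """An ISP resolver that never admits a name doesn't exist:
    NXDOMAIN answers are replaced with the ad server's address."""
    return ZONE.get(name, AD_SERVER_IP)

# A nonexistent subdomain of a real site now "resolves" -- so whatever
# the ad server sends back lives inside that site's name space, and is
# only as trustworthy as the ad server itself.
print(honest_resolve("webmale.example.com"))        # None: NXDOMAIN
print(ad_injecting_resolve("webmale.example.com"))  # the ad server's IP
```

That last point is the security problem: browsers scope cookies and scripting trust by domain, so a vulnerable ad server effectively becomes part of every domain it impersonates.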

Another article.

Posted on April 24, 2008 at 6:43 AM • 42 Comments

Reverse-Engineering Exploits from Patches

This is interesting research: given a security patch, can you automatically reverse-engineer the security vulnerability that is being patched and create exploit code to exploit it?

Turns out you can.

What does this mean?

Attackers can simply wait for a patch to be released, use these techniques, and with reasonable chance, produce a working exploit within seconds. Coupled with a worm, all vulnerable hosts could be compromised before most are even aware a patch is available, let alone download it. Thus, Microsoft should redesign Windows Update. We propose solutions which prevent several possible schemes, some of which could be done with existing technology.
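The paper does this automatically on binaries, but the intuition survives in a source-level toy: diff the pre- and post-patch versions of a routine, and the added check points directly at the inputs the old version mishandled. Everything below (the function, the 16-byte limit) is invented for illustration:

```python
import difflib

# Illustration only: diffing a pre-patch and post-patch routine to see
# what check the patch added. The paper's technique works on binaries,
# automatically, with constraint solving -- not on source text.

before = """\
def copy_name(buf):
    dest = bytearray(16)
    dest[:len(buf)] = buf
    return dest
"""

after = """\
def copy_name(buf):
    if len(buf) > 16:
        raise ValueError("name too long")
    dest = bytearray(16)
    dest[:len(buf)] = buf
    return dest
"""

added = [line[2:]
         for line in difflib.ndiff(before.splitlines(), after.splitlines())
         if line.startswith("+ ")]
print(added)
# The added guard exposes the vulnerable condition: any input with
# len(buf) > 16 violated the old code's implicit assumption, so that
# condition itself is the recipe for an exploit candidate.
```

That is why the window between patch release and patch installation matters so much: the patch is also a map of the hole.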

Full paper here.

Posted on April 23, 2008 at 1:35 PM • 65 Comments

Software that Assesses Security Risks to Transportation Networks

The TSA wants a tool that will assess risks against transportation networks:

“The tool will assist in prioritization of security measures based on their risk reduction potential,” said the statement of work accompanying TSA’s formal solicitation, which was posted April 18.

The software tool would help TSA gather and organize information about specific transport modes and assist agency officials to make risk management decisions.

The contract, which will be issued by TSA’s office of operational process and technology, envisions a one-year base period plus four one-year options. The chosen vendor will be expected to install the software, troubleshoot any hardware or software problems, consult on building risk assessment modules, attend classified intelligence meetings at TSA headquarters and maintain the software.
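At its core, “prioritization of security measures based on their risk reduction potential” is a ranking computation: estimate the expected loss each measure avoids per dollar spent, and sort. A minimal sketch; the measures, probabilities, and dollar figures are all invented:

```python
# Minimal sketch of risk-based prioritization: score each proposed
# security measure by the expected loss it removes per dollar of cost,
# then rank. All figures below are invented for illustration.

measures = [
    # (name, annual incident probability reduced, loss per incident, cost)
    ("harden control systems", 0.010, 50_000_000, 2_000_000),
    ("more perimeter cameras", 0.001, 10_000_000, 1_500_000),
    ("employee vetting",       0.005, 20_000_000,   500_000),
]

def risk_reduction_per_dollar(measure):
    name, prob_reduced, loss, cost = measure
    return (prob_reduced * loss) / cost

ranked = sorted(measures, key=risk_reduction_per_dollar, reverse=True)
for name, *_ in ranked:
    print(name)
```

The hard part, of course, is not the sort; it's producing threat probabilities and consequence estimates that mean anything, which is where such tools tend to fail.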

I don’t think you have to be very good to qualify here. This automated system put Boise, ID, at the top of its list of most vulnerable cities. The bar isn’t very high; I’m just saying.

Posted on April 23, 2008 at 6:16 AM • 25 Comments

The RSA Conference

Last week was the RSA Conference, easily the largest information security conference in the world. Over 17,000 people descended on San Francisco’s Moscone Center to hear some of the over 250 talks, attend I-didn’t-try-to-count parties, and try to evade over 350 exhibitors vying to sell them stuff.

Talk to the exhibitors, though, and the most common complaint is that the attendees aren’t buying.

It’s not the quality of the wares. The show floor is filled with new security products, new technologies, and new ideas. Many of these are products that will make the attendees’ companies more secure in all sorts of different ways. The problem is that most of the people attending the RSA Conference can’t understand what the products do or why they should buy them. So they don’t.

I spoke with one person whose trip was paid for by a smallish security firm. He was one of the company’s first customers, and the company was proud to parade him in front of the press. I asked him if he walked through the show floor, looking at the company’s competitors to see if there was any benefit to switching.

“I can’t figure out what any of those companies do,” he replied.

I believe him. The booths are filled with broad product claims, meaningless security platitudes, and unintelligible marketing literature. You could walk into a booth, listen to a five-minute sales pitch by a marketing type, and still not know what the company does. Even seasoned security professionals are confused.

Commerce requires a meeting of minds between buyer and seller, and it’s just not happening. The sellers can’t explain what they’re selling to the buyers, and the buyers don’t buy because they don’t understand what the sellers are selling. There’s a mismatch between the two; they’re so far apart that they’re barely speaking the same language.

This is a bad thing in the near term—some good companies will go bankrupt and some good security technologies won’t get deployed—but it’s a good thing in the long run. It demonstrates that the computer industry is maturing: IT is getting complicated and subtle, and users are starting to treat it like infrastructure.

For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure—power, water, cleaning service, tax preparation—customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.

No one wants to buy security. They want to buy something truly useful—database management systems, Web 2.0 collaboration tools, a company-wide network—and they want it to be secure. They don’t want to have to become IT security experts. They don’t want to have to go to the RSA Conference. This is the future of IT security.

You can see it in the large IT outsourcing contracts that companies are signing—not security outsourcing contracts, but more general IT contracts that include security. You can see it in the current wave of industry consolidation: not large security companies buying small security companies, but non-security companies buying security companies. And you can see it in the new popularity of software as a service: Customers want solutions; who cares about the details?

Imagine if the inventor of antilock brakes—or any automobile safety or security feature—had to sell them directly to the consumer. It would be an uphill battle convincing the average driver that he needed to buy them; maybe that technology would have succeeded and maybe it wouldn’t. But that’s not what happens. Antilock brakes, airbags, and that annoying sensor that beeps when you’re backing up too close to another object are sold to automobile companies, and those companies bundle them together into cars that are sold to consumers. This doesn’t mean that automobile safety isn’t important, and often these new features are touted by the car manufacturers.

The RSA Conference won’t die, of course. Security is too important for that. There will still be new technologies, new products, and new start-ups. But it will become inward-facing, slowly turning into an industry conference. It’ll be security companies selling to the companies who sell to corporate and home users—and will no longer be a 17,000-person user conference.

This essay originally appeared on

EDITED TO ADD (5/1): Commentary.

Posted on April 22, 2008 at 6:35 AM • 34 Comments

Chertoff Says Fingerprints Aren't Personal Data

Homeland Security Secretary Michael Chertoff says:

QUESTION: Some are raising that the privacy aspects of this thing, you know, sharing of that kind of data, very personal data, among four countries is quite a scary thing.

SECRETARY CHERTOFF: Well, first of all, a fingerprint is hardly personal data because you leave it on glasses and silverware and articles all over the world, they’re like footprints. They’re not particularly private.

Sounds like he’s confusing “secret” data with “personal” data. Lots of personal data isn’t particularly secret.

Posted on April 21, 2008 at 6:54 AM • 55 Comments

Oklahoma Data Leak

Usually I don’t bother blogging about these, but this one is particularly bad. Anyone with basic SQL knowledge could have registered anyone he wanted as a sex offender.

One of the cardinal rules of computer programming is to never trust your input. This holds especially true when your input comes from users, and even more so when it comes from the anonymous, general public. Apparently, the developers at Oklahoma’s Department of Corrections slept through that day in computer science class, and even managed to skip all of Common Sense 101. You see, not only did they trust anonymous user input on their public-facing website, but they blindly executed it and displayed whatever came back.

The result of this negligently bad coding has some rather serious consequences: the names, addresses, and social security numbers of tens of thousands of Oklahoma residents were made available to the general public for a period of at least three years. Up until yesterday, April 13, 2008, anyone with a web browser and the knowledge from Chapter One of SQL For Dummies could have easily accessed—and possibly, changed—any data within the DOC’s databases. It took me all of a minute to figure out how to download 10,597 records—SSNs and all—from their website.
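The flaw described, executing raw user input as SQL, is textbook SQL injection, and the fix is equally textbook: parameterized queries. A minimal demonstration using Python’s built-in sqlite3 module (the table and data are invented):

```python
import sqlite3

# Minimal demonstration of the flaw described above and its standard fix.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE offenders (name TEXT, town TEXT)")
db.execute("INSERT INTO offenders VALUES ('John Doe', 'Tulsa')")

def search_vulnerable(town):
    # Never do this: user input is pasted directly into the SQL text,
    # so crafted input changes the query's meaning.
    return db.execute(
        f"SELECT name FROM offenders WHERE town = '{town}'").fetchall()

def search_safe(town):
    # Parameterized query: the input is bound as data, never parsed as SQL.
    return db.execute(
        "SELECT name FROM offenders WHERE town = ?", (town,)).fetchall()

evil = "x' OR '1'='1"
print(search_vulnerable(evil))  # returns every row in the table
print(search_safe(evil))        # returns nothing: no town has that name
```

The same binding mechanism exists in every serious database API, which is what makes a public-facing site that skips it so inexcusable.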

Posted on April 18, 2008 at 6:16 AM • 39 Comments

Risk Preferences in Chimpanzees and Bonobos

I’ve already written about prospect theory, which explains how people approach risk. People tend to be risk averse when it comes to gains, and risk seeking when it comes to losses:

Evolutionarily, presumably it is a better survival strategy to—all other things being equal, of course—accept small gains rather than risking them for larger ones, and risk larger losses rather than accepting smaller losses. Lions chase young or wounded wildebeest because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow.

Similarly, it is evolutionarily better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad. That is, both can result in death. If that’s true, the best option is to risk everything for the chance at no loss at all.

This behavior has been demonstrated in animals as well: “species of insects, birds and mammals range from risk neutral to risk averse when making decisions about amounts of food, but are risk seeking towards delays in receiving food.”
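The gains/losses asymmetry above is exactly what prospect theory’s value function captures. A sketch using the commonly cited parameter estimates from Tversky and Kahneman’s 1992 paper (curvature 0.88, loss aversion 2.25):

```python
# Sketch of the prospect-theory value function behind the pattern above,
# using Tversky and Kahneman's 1992 parameter estimates
# (alpha = beta = 0.88, lambda = 2.25).

ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x):
    """Subjective value of an outcome x relative to the status quo."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

# Gains: a sure $500 feels better than a 50/50 shot at $1000 -> risk averse.
sure_gain = value(500)
gamble_gain = 0.5 * value(1000) + 0.5 * value(0)
print(sure_gain > gamble_gain)   # True

# Losses: a 50/50 shot at losing $1000 feels better than surely losing
# $500 -> risk seeking.
sure_loss = value(-500)
gamble_loss = 0.5 * value(-1000) + 0.5 * value(0)
print(gamble_loss > sure_loss)   # True
```

Both gambles have the same expected dollar value as the sure outcome; only the curvature of the value function flips the preference between the two domains.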

A recent study examines the relative risk preferences in two closely related species: chimpanzees and bonobos.


Human and non-human animals tend to avoid risky prospects. If such patterns of economic choice are adaptive, risk preferences should reflect the typical decision-making environments faced by organisms. However, this approach has not been widely used to examine the risk sensitivity in closely related species with different ecologies. Here, we experimentally examined risk-sensitive behaviour in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus), closely related species whose distinct ecologies are thought to be the major selective force shaping their unique behavioural repertoires. Because chimpanzees exploit riskier food sources in the wild, we predicted that they would exhibit greater tolerance for risk in choices about food. Results confirmed this prediction: chimpanzees significantly preferred the risky option, whereas bonobos preferred the fixed option. These results provide a relatively rare example of risk-prone behaviour in the context of gains and show how ecological pressures can sculpt economic decision making.

The basic argument is that in the natural environment of the chimpanzee, if you don’t take risks you don’t get any of the high-value rewards (e.g., monkey meat). Bonobos “rely more heavily than chimpanzees on terrestrial herbaceous vegetation, a more temporally and spatially consistent food source.” So chimpanzees are less likely to avoid taking risks.

Fascinating stuff, but there are at least two problems with this study. The first one, the researchers explain in their paper. The animals studied—five of each species—were from the Wolfgang Koehler Primate Research Center at the Leipzig Zoo, and the experimenters were unable to rule out differences in the “experiences, cultures and conditions of the two specific groups tested here.”

The second problem is more general: we know very little about the life of bonobos in the wild. There are a lot of popular stereotypes about bonobos, but they’re sloppy at best.

Even so, I like seeing this kind of research. It’s fascinating.

EDITED TO ADD (5/13): Response to that last link.

Posted on April 17, 2008 at 6:20 AM • 14 Comments

Comparing Cybersecurity to Early 1800s Security on the High Seas

This article in CSO compares modern cybersecurity to open seas piracy in the early 1800s. After a bit of history, the article talks about current events:

In modern times, the nearly ubiquitous availability of powerful computing systems, along with the proliferation of high-speed networks, have converged to create a new version of the high seas—the cyber seas. The Internet has the potential to significantly impact the United States’ position as a world leader. Nevertheless, for the last decade, U.S. cybersecurity policy has been inconsistent and reactionary. The private sector has often been left to fend for itself, and sporadic policy statements have left U.S. government organizations, private enterprises and allies uncertain of which tack the nation will take to secure the cyber frontier.

This should be a surprise to no one.

What to do?

With that goal in mind, let us consider how the United States could take a Jeffersonian approach to the cyber threats faced by our economy. The first step would be for the United States to develop a consistent policy that articulates America’s commitment to assuring the free navigation of the “cyber seas.” Perhaps most critical to the success of that policy will be a future president’s support for efforts that translate rhetoric to actions—developing initiatives to thwart cyber criminals, protecting U.S. technological sovereignty, and balancing any defensive actions to avoid violating U.S. citizens’ constitutional rights. Clearly articulated policy and consistent actions will assure a stable and predictable environment where electronic commerce can thrive, continuing to drive U.S. economic growth and avoiding the possibility of the U.S. becoming a cyber-colony subject to the whims of organized criminal efforts on the Internet.

I am reminded of comments comparing modern terrorism with piracy on the high seas.

Posted on April 16, 2008 at 2:27 PM • 24 Comments

Our Inherent Capability for Evil

This is interesting:

What took place on a peaceful Californian university campus nearly four decades ago still has the power to disturb. Eager to explore the way that “situation” can impact on behaviour, the young psychologist enrolled students to spend two weeks in a simulated jail environment, where they would randomly be assigned roles as either prisoners or guards.

Zimbardo’s volunteers were bright, liberal young men of good character, brimming with opposition to the Vietnam war and authority in general. All expressed a preference to be prisoners, a role they could relate to better. Yet within days the strong, rebellious “prisoners” had become depressed and hopeless. Two broke down emotionally, crushed by the behaviour of the “guards”, who had embraced their authoritarian roles in full, some becoming ever-more sadistic, others passively accepting the abuses taking place in front of them.

Transcripts of the experiment, published in Zimbardo’s book The Lucifer Effect: Understanding How Good People Turn Evil, record in terrifying detail the way reality slipped away from the participants. On the first day, Sunday, it is all self-conscious play-acting between college buddies. On Monday the prisoners start a rebellion, and the guards clamp down, using solitary confinement, sleep deprivation and intimidation. One refers to “these dangerous prisoners”. They have to be prevented from using physical force.

Control techniques become more creative and sadistic. The prisoners are forced to repeat their numbers over and over at roll call, and to sing them. They are woken repeatedly in the night. Their blankets are rolled in dirt and they are ordered painstakingly to pick them clean of burrs. They are harangued and pitted against one another, forced to humiliate each other, pulled in and out of solitary confinement.

On day four, a priest visits. Prisoner 819 is in tears, his hands shaking. Rather than question the experiment, the priest tells him, “You’re going to have to get less emotional.” Later, a guard leads the inmates in chanting “Prisoner 819 did a bad thing!” and blaming him for their poor conditions.

Zimbardo finds 819 covering his ears, “a quivering mess, hysterical”, and says it is time to go home. But 819 refuses to leave until he has proved to his fellow prisoners that he isn’t “bad”. “Listen carefully to me, you’re not 819,” says Zimbardo. “You are Stewart and my name is Dr Zimbardo. I am a psychologist not a prison superintendent, and this is not a real prison.” 819 stops sobbing “and looks like a small child awakening from a nightmare”, according to Zimbardo. But it doesn’t seem to occur to him that things are going too far.

Guard Hellmann, leader of the night shift, plumbs new depths. He wakes up the prisoners to shout abuse in their faces. He forces them to play leapfrog dressed only in smocks, their genitals exposed. A new prisoner, 416, replaces 819, and brings fresh perspective. “I was terrified by each new shift of guards,” he says. “I knew by the first evening that I had done something foolish to volunteer for this study.”

The study is scheduled to run for two weeks. On the evening of Thursday, the fifth day, Zimbardo’s girlfriend, Christina Maslach, also a psychologist, comes to meet him for dinner. She is confronted by a line of prisoners en route to the lavatory, bags over their heads, chained together by the ankles. “What you’re doing to these boys is a terrible thing,” she tells Zimbardo. “Don’t you understand this is a crucible of human behaviour?” he asks. “We are seeing things no one has witnessed before in such a situation.” She tells him this has made her question their relationship, and the person he is.

Downstairs, Guard Hellmann is yelling at the prisoners. “See that hole in the ground? Now do 25 push-ups, fucking that hole. You hear me?” Three prisoners are forced to be “female camels”, bent over, their naked bottoms exposed. Others are told to “hump” them and they simulate sodomy. Zimbardo ends the experiment the following morning.

To read the transcripts or watch the footage is to follow a rapid and dramatic collapse of human decency, resilience and perspective. And so it should be, says Zimbardo. “Evil is a slippery slope,” he says. “Each day is a platform for the abuses of the next day. Each day is only slightly worse than the previous day. Once you don’t object to those first steps it is easy to say, ‘Well, it’s only a little worse then yesterday.’ And you become morally acclimatised to this kind of evil.”

EDITED TO ADD (5/13): The website is worth visiting, especially the section on resisting influence.

Posted on April 16, 2008 at 6:40 AM • 70 Comments

More RIPA Creep

I previously blogged about the UK’s Regulation of Investigatory Powers Act (RIPA), which was sold as a means to tackle terrorism and other serious crimes but has since been used against animal rights protestors. The latest news from the UK is that a local council has used provisions of the act to put a couple and their children under surveillance, for “suspected fraudulent school place applications”:

Poole council said it used the legislation to watch a family at home and in their daily movements because it wanted to know if they lived in the catchment area for a school, which they wanted their three-year-old daughter to attend.

This kind of thing happens again and again. When campaigning for a law’s passage, the authorities invoke the most heinous of criminals—terrorists, kidnappers, drug dealers, child pornographers—but after the law is passed, they start using it in more mundane situations.

Another article. And this follow-up.

Posted on April 15, 2008 at 1:04 PM • 37 Comments

Pentagon May Issue Pocket Lie Detectors to Afghan Soldiers

This is just ridiculous. Lie detectors are pseudo-science at best, and even the Pentagon knows it:

The Pentagon, in a PowerPoint presentation released through a Freedom of Information Act request, says the PCASS is 82 to 90 percent accurate. Those are the only accuracy numbers that were sent up the chain of command at the Pentagon before the device was approved.

But Pentagon studies obtained through the same request show a more complicated picture: In calculating its accuracy, the scientists conducting the tests discarded the yellow screens, or inconclusive readings.

That practice was criticized in the 2003 National Academy study, which said the “inconclusives” have to be included to measure accuracy. If you take into account the yellow screens, the PCASS accuracy rate in the three Pentagon-funded tests drops to the level of 63 to 79 percent.
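The inflation from dropping inconclusives is simple arithmetic. With invented counts (the real per-test numbers weren’t published), the same device scores very differently depending on what goes in the denominator:

```python
# How dropping "inconclusive" results inflates an accuracy figure.
# The counts below are invented for illustration; only the arithmetic
# mirrors the criticism in the National Academy study.

correct, incorrect, inconclusive = 64, 16, 20   # 100 examinations total

reported = correct / (correct + incorrect)               # yellow screens dropped
honest   = correct / (correct + incorrect + inconclusive)

print(f"excluding inconclusives: {reported:.0%}")  # 80%
print(f"including inconclusives: {honest:.0%}")    # 64%
```

An inconclusive screen is still a screening that failed to produce an answer, which is why the National Academy insisted it belongs in the denominator.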

Posted on April 14, 2008 at 12:57 PM • 45 Comments

People and Security Rules

In this article analyzing a security failure resulting in live nuclear warheads being flown over the U.S., there’s an interesting commentary on people and security rules:

Indeed, the gaff [sic] that allowed six nukes out over three major American cities (Omaha, Neb., Kansas City, Mo., and Little Rock, Ark.) could have been avoided if the Air Force personnel had followed procedure.

“Let’s not forget that the existing rules were pretty tight,” says Hans Kristensen, director of the Nuclear Information Project for the Federation of American Scientists. “Much of what went wrong occurred because people didn’t follow these tight rules. You can have all sorts of rules and regulations, but they still won’t do any good if the people don’t follow them.”

Procedures are a tough balancing act. If they’re too lax, there will be security problems. If they’re too tight, people will get around them and there will be security problems.

Posted on April 14, 2008 at 6:47 AM • 25 Comments

Seat Belt Usage and Compensating Behavior

There is a theory that people have an inherent risk thermostat that seeks out an optimal level of risk. When something becomes inherently safer—a law is passed requiring motorcycle riders to wear helmets, for example—people compensate by riding more recklessly. I first read this theory in a 1999 paper by John Adams at the University of Reading, although it seems to have originated with Sam Peltzman.

In any case, this paper presents data that contradicts that thesis:

Abstract—This paper investigates the effects of mandatory seat belt laws on driver behavior and traffic fatalities. Using a unique panel data set on seat belt usage in all U.S. jurisdictions, we analyze how such laws, by influencing seat belt use, affect the incidence of traffic fatalities. Allowing for the endogeneity of seat belt usage, we find that such usage decreases overall traffic fatalities. The magnitude of this effect, however, is significantly smaller than the estimate used by the National Highway Traffic Safety Administration. In addition, we do not find significant support for the compensating-behavior theory, which suggests that seat belt use also has an indirect adverse effect on fatalities by encouraging careless driving. Finally, we identify factors, especially the type of enforcement used, that make seat belt laws more effective in increasing seat belt usage.

Posted on April 11, 2008 at 1:44 PM • 48 Comments

Bulk Text Messaging

This seems very worrisome:

Federal regulators approved a plan on Wednesday to create a nationwide emergency alert system using text messages delivered to cellphones.

The real question is whether the benefits outweigh the risks. I could certainly imagine scenarios where getting short text messages out to everyone in a particular geographic area is a good thing, but I can also imagine the hacking possibilities.

And once this system is developed for emergency use, can a bulk SMS business be far behind?

Posted on April 11, 2008 at 6:22 AM • 54 Comments

Overestimating Threats Against Children

This is a great essay by a mom who let her 9-year-old son ride the New York City subway alone:

No, I did not give him a cell phone. Didn’t want to lose it. And no, I didn’t trail him, like a mommy private eye. I trusted him to figure out that he should take the Lexington Avenue subway down, and the 34th Street crosstown bus home. If he couldn’t do that, I trusted him to ask a stranger. And then I even trusted that stranger not to think, “Gee, I was about to catch my train home, but now I think I’ll abduct this adorable child instead.”

Long story short: My son got home, ecstatic with independence.

Long story longer, and analyzed, to boot: Half the people I’ve told this episode to now want to turn me in for child abuse. As if keeping kids under lock and key and helmet and cell phone and nanny and surveillance is the right way to rear kids. It’s not. It’s debilitating—for us and for them.

It’s amazing how our fears blind us. The mother and son appeared on The Today Show, where they both continued to explain why it wasn’t an unreasonable thing to do:

And that was Skenazy’s point in her column: The era is long past when Times Square was a fetid sump and taking a walk in Central Park after dark was tantamount to committing suicide. Recent federal statistics show New York to be one of the safest cities in the nation—right up there with Provo, Utah, in fact.

“Times are back to 1963,” Skenazy said. “It’s safe. It’s a great time to be a kid in the city.”

The problem is that people read about children who are abducted and murdered and fear takes over, she said. And she doesn’t think fear should rule our lives.

Of course, The Today Show interviewer didn’t get it:

Dr. Ruth Peters, a parenting expert and TODAY Show contributor, agreed that children should be allowed independent experiences, but felt there are better—and safer—ways to have them than the one Skenazy chose.

“I’m not so much concerned that he’s going to be abducted, but there’s a lot of people who would rough him up,” she said. “There’s some bullies and things like that. He could have gotten the same experience in a safer manner.”

“It’s safe to go on the subway,” Skenazy replied. “It’s safe to be a kid. It’s safe to ride your bike on the streets. We’re like brainwashed because of all the stories we hear that it isn’t safe. But those are the exceptions. That’s why they make it to the news. This is like, ‘Boy boils egg.’ He did something that any 9-year-old could do.”

Here’s an audio interview with Skenazy.

I am reminded of this great graphic depicting childhood independence diminishing over four generations.

Posted on April 10, 2008 at 1:00 PM • 205 Comments

Tracking Vehicles through Tire Pressure Monitors

Just another example of our surveillance future:

Each wheel of the vehicle transmits a unique ID, easily readable using an off-the-shelf receiver. Although the transmitter’s power is very low, the signal is still readable from a fair distance using a good directional antenna.

Remember the paper that discussed how Bluetooth radios in cell phones can be used to track their owners? The problem with TPMS is incomparably bigger, because the lifespan of a typical cell phone is around 2 years and you can turn the Bluetooth radio off in most of them. On the contrary, TPMS cannot be turned off. It comes with a built-in battery that lasts 7 to 10 years, and the battery-less TPMS sensors are ready to hit the market in 2010. It does not matter how long you own the vehicle; transportation authorities keep up-to-date information about vehicle ownership.
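A minimal sketch of why a persistent broadcast ID enables tracking: receivers at different locations log the IDs they hear, and joining the logs by ID reconstructs a vehicle's movements. All IDs, places, and times below are invented:

```python
from collections import defaultdict

# (tpms_sensor_id, location, time) tuples logged by hypothetical
# roadside receivers -- entirely made-up data for illustration.
sightings = [
    ("3fa9c2", "garage-entrance", "08:02"),
    ("77b1e0", "garage-entrance", "08:05"),
    ("3fa9c2", "downtown-bridge", "08:31"),
    ("3fa9c2", "office-lot",      "08:47"),
]

# Group sightings by sensor ID to recover each vehicle's track.
tracks = defaultdict(list)
for sensor_id, place, t in sightings:
    tracks[sensor_id].append((t, place))

print(tracks["3fa9c2"])
# [('08:02', 'garage-entrance'), ('08:31', 'downtown-bridge'), ('08:47', 'office-lot')]
```

The point is that no cooperation from the driver is needed: because the ID never changes and can't be turned off, the join is trivial, and linking an ID to an owner only requires the vehicle-registration database.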

Posted on April 10, 2008 at 6:29 AM • 48 Comments

The Feeling and Reality of Security

Security is both a feeling and a reality, and they’re different. You can feel secure even though you’re not, and you can be secure even though you don’t feel it. There are two different concepts mapped onto the same word—the English language isn’t working very well for us here—and it can be hard to know which one we’re talking about when we use the word.

There is considerable value in separating out the two concepts: in explaining how the two are different, and understanding when we’re referring to one and when the other. There is value as well in recognizing when the two converge, understanding why they diverge, and knowing how they can be made to converge again.

Some fundamentals first. Viewed from the perspective of economics, security is a trade-off. There’s no such thing as absolute security, and any security you get has some cost: in money, in convenience, in capabilities, in insecurities somewhere else, whatever. Every time someone makes a decision about security—computer security, community security, national security—he makes a trade-off.

People make these trade-offs as individuals. We all get to decide, individually, if the expense and inconvenience of having a home burglar alarm is worth the security. We all get to decide if wearing a bulletproof vest is worth the cost and tacky appearance. We all get to decide if we’re getting our money’s worth from the billions of dollars we’re spending combating terrorism, and if invading Iraq was the best use of our counterterrorism resources. We might not have the power to implement our opinion, but we get to decide if we think it’s worth it.

Now we may or may not have the expertise to make those trade-offs intelligently, but we make them anyway. All of us. People have a natural intuition about security trade-offs, and we make them, large and small, dozens of times throughout the day. We can’t help it: It’s part of being alive.

Imagine a rabbit, sitting in a field eating grass. And he sees a fox. He’s going to make a security trade-off: Should he stay or should he flee? Over time, the rabbits that are good at making that trade-off will tend to reproduce, while the rabbits that are bad at it will tend to get eaten or starve.

So, as a successful species on the planet, you’d expect that human beings would be really good at making security trade-offs. Yet, at the same time, we can be hopelessly bad at it. We spend more money on terrorism than the data warrants. We fear flying and choose to drive instead. Why?

The short answer is that people make most trade-offs based on the feeling of security and not the reality.

I’ve written a lot about how people get security trade-offs wrong, and the cognitive biases that cause us to make mistakes. Humans have developed these biases because they make evolutionary sense. And most of the time, they work.

Most of the time—and this is important—our feeling of security matches the reality of security. Certainly, this is true of prehistory. Modern times are harder. Blame technology, blame the media, blame whatever. Our brains are much better optimized for the security trade-offs endemic to living in small family groups in the East African highlands in 100,000 B.C. than to those endemic to living in 2008 New York.

If we make security trade-offs based on the feeling of security rather than the reality, we choose security that makes us feel more secure over security that actually makes us more secure. And that’s what governments, companies, family members and everyone else provide. Of course, there are two ways to make people feel more secure. The first is to make people actually more secure and hope they notice. The second is to make people feel more secure without making them actually more secure, and hope they don’t notice.

The key here is whether we notice. The feeling and reality of security tend to converge when we take notice, and diverge when we don’t. People notice when 1) there are enough positive and negative examples to draw a conclusion, and 2) there isn’t too much emotion clouding the issue.

Both elements are important. If someone tries to convince us to spend money on a new type of home burglar alarm, we as society will know pretty quickly if he’s got a clever security device or if he’s a charlatan; we can monitor crime rates. But if that same person advocates a new national antiterrorism system, and there weren’t any terrorist attacks before it was implemented, and there weren’t any after it was implemented, how do we know if his system was effective?

People are more likely to realistically assess these incidents if they don’t contradict preconceived notions about how the world works. For example: It’s obvious that a wall keeps people out, so arguing against building a wall across America’s southern border to keep illegal immigrants out is harder to do.

The other thing that matters is agenda. There are lots of people, politicians, companies and so on who deliberately try to manipulate your feeling of security for their own gain. They try to cause fear. They invent threats. They take minor threats and make them major. And when they talk about rare risks with only a few incidents to base an assessment on—terrorism is the big example here—they are more likely to succeed.

Unfortunately, there’s no obvious antidote. Information is important. We can’t make good security trade-offs unless we understand the risks. But that’s not enough: Few of us really understand cancer, yet we regularly make security decisions based on its risk. What we do is accept that there are experts who understand the risks of cancer, and trust them to make the security trade-offs for us.

There are some complex feedback loops going on here, between emotion and reason, between reality and our knowledge of it, between feeling and familiarity, and between the understanding of how we reason and feel about security and our analyses and feelings. We’re never going to stop making security trade-offs based on the feeling of security, and we’re never going to completely prevent those with specific agendas from trying to manipulate us. But the more we know, the better trade-offs we’ll make.

This article originally appeared on

Posted on April 8, 2008 at 5:50 AM • 38 Comments

Third Annual Movie-Plot Threat Contest

I can’t believe I let April 1 come and go without posting the rules to the Third Annual Movie-Plot Threat Contest. Well, better late than never.

For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn’t worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.

Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk—whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.

Here’s an example, pulled from page 25 of the Late Spring 2008 Skymall catalog I’m reading on my airplane right now:

A Turtle is Safe in Water, A Child is Not!

Even with the most vigilant supervision a child can disappear in seconds and not be missed until it’s too late. Our new wireless pool safety alarm system is a must for pool owners and parents of young children. The Turtle Wristband locks on the child’s wrist (a special key is required to remove it) and instantly detects immersion in water and sounds a shrill alarm at the Base Station located in the house or within 100 feet of the pool, spa, or backyard pond. Keep extra wristbands on hand for guests or to protect the family dog.

Entries are limited to 150 words—the example above had 97 words—because fear doesn’t require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.

Entries will be judged on creativity, originality, persuasiveness, and plausibility. It’s okay if the product you invent doesn’t actually exist, but this isn’t a science fiction contest.

Portable salmonella detectors for salad bars. Acoustical devices that estimate tiger proximity based on roar strength. GPS-enabled wallets for use when you’ve been pickpocketed. Wrist cuffs that emit fake DNA to fool DNA detectors. The Quantum Sleeper. Fear offers endless business opportunities. Good luck.

Entries due by May 1.

The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner.

EDITED TO ADD (4/7): Submit your entry in the comments.

EDITED TO ADD (4/8): You people are frighteningly creative.

Posted on April 7, 2008 at 3:50 PM • 336 Comments

The Ineffectiveness of Security Cameras

Data from San Francisco:

Researchers examined data from the San Francisco Police Department detailing the 59,706 crimes committed within 1,000 feet of the camera locations between Jan. 1, 2005, and Jan. 28, 2008.

These were the total number of crimes for which police had reports—regardless of whether the crimes were caught on video. The idea was to look at whether criminals stopped committing crimes at those locations because they knew cameras were there.

Using a complicated method, researchers were able to come up with an average daily crime rate at each location broken out by type of crime and distance from the cameras. They then compared it with the average daily crime rate from the period before the cameras were installed.

They looked at seven types of crime: larcenies, burglaries, motor vehicle theft, assault, robbery, homicide and forcible sex offenses.

The only positive deterrent effect was the reduction of larcenies within 100 feet of the cameras. No other crimes were affected—except for homicides, which had an interesting pattern.

Murders went down within 250 feet of the cameras, but the reduction was completely offset by an increase 250 to 500 feet away, suggesting people moved down the block before killing each other.

The final report is expected to analyze the figures in more depth and to include other crimes, including prostitution and drug offenses.
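The before/after comparison the researchers describe can be sketched in a few lines: count the incidents near a camera in each period and divide by the length of that period. The incident data and dates below are invented for illustration:

```python
from datetime import date

install = date(2006, 1, 1)  # hypothetical camera installation date

# (incident_date, crime_type) within 100 feet of the camera -- made up.
incidents = [
    (date(2005, 3, 14), "larceny"),
    (date(2005, 7, 2),  "larceny"),
    (date(2005, 11, 20), "assault"),
    (date(2006, 5, 9),  "larceny"),
    (date(2007, 8, 1),  "assault"),
]

# Study windows mirroring the article's Jan 2005 - Jan 2008 span.
before_days = (install - date(2005, 1, 1)).days
after_days = (date(2008, 1, 28) - install).days

# Average daily crime rate before vs. after installation.
before = sum(1 for d, _ in incidents if d < install) / before_days
after = sum(1 for d, _ in incidents if d >= install) / after_days
print(f"before: {before:.4f}/day  after: {after:.4f}/day")
# before: 0.0082/day  after: 0.0026/day
```

The real study does this per crime type and per distance band from each camera, but the core measurement is this simple rate comparison.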

This quote is instructive:

Mayor Gavin Newsom called the report “conclusively inconclusive” on Thursday but said he still wants to install more cameras around the city because they make residents feel safer.

That’s right: the cameras aren’t about security, they’re about security theater. More comments on the general issue here.

Posted on April 7, 2008 at 1:33 PM • 77 Comments

Internet Censorship

A review of Access Denied, edited by Ronald Deibert, John Palfrey, Rafal Rohozinski and Jonathan Zittrain, MIT Press: 2008.

In 1993, Internet pioneer John Gilmore said “the net interprets censorship as damage and routes around it”, and we believed him. In 1996, cyberlibertarian John Perry Barlow issued his ‘Declaration of the Independence of Cyberspace’ at the World Economic Forum at Davos, Switzerland, and online. He told governments: “You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear.”

At the time, many shared Barlow’s sentiments. The Internet empowered people. It gave them access to information and couldn’t be stopped, blocked or filtered. Give someone access to the Internet, and they have access to everything. Governments that relied on censorship to control their citizens were doomed.

Today, things are very different. Internet censorship is flourishing. Organizations selectively block employees’ access to the Internet. At least 26 countries—mainly in the Middle East, North Africa, Asia, the Pacific and the former Soviet Union—selectively block their citizens’ Internet access. Even more countries legislate to control what can and cannot be said, downloaded or linked to. “You have no sovereignty where we gather,” said Barlow. Oh yes we do, the governments of the world have replied.

Access Denied is a survey of the practice of Internet filtering, and a sourcebook of details about the countries that engage in the practice. It is written by researchers of the OpenNet Initiative (ONI), an organization dedicated to documenting Internet filtering around the world.

The first half of the book comprises essays written by ONI researchers on the politics, practice, technology, legality and social effects of Internet filtering. There are three basic rationales for Internet censorship: politics and power; social norms, morals and religion; and security concerns.

Some countries, such as India, filter only a few sites; others, such as Iran, extensively filter the Internet. Saudi Arabia tries to block all pornography (social norms and morals). Syria blocks everything from the Israeli domain “.il” (politics and power). Some countries filter only at certain times. During the 2006 elections in Belarus, for example, the website of the main opposition candidate disappeared from the Internet.

The effectiveness of Internet filtering is mixed; it depends on the tools used and the granularity of filtering. It is much easier to block particular URLs or entire domains than it is to block information on a particular topic. Some countries block specific sites or URLs based on some predefined list but new URLs with similar content appear all the time. Other countries—notably China—try to filter on the basis of keywords in the actual web pages. A halfway measure is to filter on the basis of URL keywords: names of dissidents or political parties, or sexual words.
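The difference in granularity can be sketched with a toy filter. This is a minimal illustration, not any country's actual system; the blocklist entries, keywords, and URLs are all invented:

```python
from urllib.parse import urlparse

# List-based filtering: precise but brittle -- new domains escape it.
BLOCKED_DOMAINS = {"example-dissident.org"}
# URL-keyword filtering: catches mirrors and new sites, but overblocks.
BLOCKED_URL_KEYWORDS = {"opposition", "protest"}

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Domain blocklist: match the domain and its subdomains.
    if host in BLOCKED_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return True
    # Keyword filter on the URL string itself.
    return any(kw in url.lower() for kw in BLOCKED_URL_KEYWORDS)

print(is_blocked("http://example-dissident.org/news"))   # True  (domain list)
print(is_blocked("http://mirror-site.net/opposition"))   # True  (keyword)
print(is_blocked("http://unrelated.net/cooking"))        # False
```

Filtering on page content, as China does, requires inspecting the response body rather than just the URL, which is why it is both more powerful and far more expensive to deploy.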

Much of the technology has other applications. Software for filtering is a legitimate product category, purchased by schools to limit access by children to objectionable material and by corporations trying to prevent their employees from being distracted at work. One chapter discusses the ethical implications of companies selling products, services and technologies that enable Internet censorship.

Some censorship is legal, not technical. Countries have laws against publishing certain content, registration requirements that prevent anonymous Internet use, liability laws that force Internet service providers to filter themselves, or surveillance. Egypt does not engage in technical Internet filtering; instead, its laws discourage the publishing and reading of certain content—it has even jailed people for their online activities.

The second half of Access Denied consists of detailed descriptions of Internet use, regulations and censorship in eight regions of the world, and in each of 40 different countries. The ONI found evidence of censorship in 26 of those 40. For the other 14 countries, it summarizes the legal and regulatory framework surrounding Internet use, and the test results that indicated no censorship. This leads to 200 pages of rather dry reading, but it is vitally important to have this information well-documented and easily accessible. The book’s data are from 2006, but the authors promise frequent updates on the ONI website.

No set of Internet censorship measures is perfect. It is often easy to find the same information on uncensored URLs, and relatively easy to get around the filtering mechanisms and to view prohibited web pages if you know what you’re doing. But most people don’t have the computer skills to bypass controls, and in a country where doing so is punishable by jail—or worse—few take the risk. So even porous and ineffective attempts at censorship can become very effective socially and politically.

In 1996, Barlow said: “You are trying to ward off the virus of liberty by erecting guard posts at the frontiers of cyberspace. These may keep out the contagion for some time, but they will not work in a world that will soon be blanketed in bit-bearing media.”

Brave words, but premature. Certainly, there is much more information available to many more people today than there was in 1996. But the Internet is made up of physical computers and connections that exist within national boundaries. Today’s Internet still has borders and, increasingly, countries want to control what passes through them. In documenting this control, the ONI has performed an invaluable service.

This was originally published in Nature.

Posted on April 7, 2008 at 5:00 AM • 45 Comments

Friday Squid Blogging: Squid Beaks for Artificial Limbs?

Scientists are considering it:

The beak, made of hard chitin and other materials, changes density gradually from the hard tip to a softer, more flexible base where it attaches to the muscle around the squid’s mouth, the researchers found.

That means the tough beak can chomp away at fish for dinner, but the hard material doesn’t press or rub directly against the squid’s softer tissues.

Herbert Waite, a professor in the university’s department of molecular, cellular & developmental biology and co-author of the paper, said such graduated materials could have broad applications in biomedical materials.

“Lots of useful information could come out of this for implant materials, for example. Interfaces between soft and hard materials occur everywhere,” he said in a telephone interview.

Frank Zok, professor and associate chair of the department of materials, said he had always been skeptical of whether there is any real advantage to materials that change their properties gradually from one part to another, “but the squid beak turned me into a believer.”

“If we could reproduce the property gradients that we find in squid beak, it would open new possibilities for joining materials,” Zok said in a statement. “For example, if you graded an adhesive to make its properties match one material on one side and the other material on the other side, you could potentially form a much more robust bond.”

The researchers are learning lessons that can be applied to medical materials in the future, said Phillip B. Messersmith of the department of biomedical engineering at Northwestern University.

Messersmith, who was not part of the research team, noted that hard medical implants made of metal or ceramic are often imbedded in soft tissues.

“The lessons here from nature might be useful in transitions between devices and the tissues they are imbedded in,” he said in a telephone interview.

More on squid beaks.

Posted on April 4, 2008 at 4:38 PM • 4 Comments

Terroristic Threatening

What in the world is “terroristic threatening”?

The woman was also charged with one count of terroristic threatening for pointing a handgun at an officer, said university police Maj. Kenny Brown. The woman gave her handgun to a counselor at the health services building, he said.

We are all hurt by the application of the word “terrorist” to everything we don’t like. Terrorism does not equal criminality.

Posted on April 4, 2008 at 11:19 AM • 50 Comments

KeeLoq Still Broken

That’s the keyless entry system used by Chrysler, Daewoo, Fiat, General Motors, Honda, Toyota, Lexus, Volvo, Volkswagen, Jaguar, and probably others. It’s broken:

The KeeLoq encryption algorithm is widely used for security relevant applications, e.g., in the form of passive Radio Frequency Identification (RFID) transponders for car immobilizers and in various access control and Remote Keyless Entry (RKE) systems, e.g., for opening car doors and garage doors.

We present the first successful DPA (Differential Power Analysis) attacks on numerous commercially available products employing KeeLoq. These so-called side-channel attacks are based on measuring and evaluating the power consumption of a KeeLoq device during its operation. Using our techniques, an attacker can reveal not only the secret key of remote controls in less than one hour, but also the manufacturer key of the corresponding receivers in less than one day. Knowing the manufacturer key allows for creating an arbitrary number of valid new keys and generating new remote controls.

We further propose a new eavesdropping attack for which monitoring of two ciphertexts, sent from a remote control employing KeeLoq code hopping (car key, garage door opener, etc.), is sufficient to recover the device key of the remote control. Hence, using the methods described by us, an attacker can clone a remote control from a distance and gain access to a target that is protected by the claimed to be “highly secure” KeeLoq algorithm.

We consider our attacks to be of serious practical interest, as commercial KeeLoq access control systems can be overcome with modest effort.

I’ve written about this before, but the above link has much better data.

EDITED TO ADD (4/4): A good article.

Posted on April 4, 2008 at 6:03 AM • 24 Comments

The Liquid Bomb

We finally have some actual information about the “liquid bomb” that was planned by that London group arrested in 2006:

The court heard the bombers intended to use hydrogen peroxide and mix it with a product called Tang, used in soft drinks, to turn it into an explosive.

They intended to carry it on board disguised as 500ml bottles of Oasis or Lucozade by using food dye to recreate the drinks’ distinctive colour.

The detonator would have been disguised as 1.5-volt AA batteries. The contents of the batteries would have been removed and an electric element such as a lightbulb or wiring would have been inserted.

A disposable camera would have provided a power source.

Any chemists want to take a crack at this one?

Posted on April 3, 2008 at 5:11 PM • 145 Comments

Would-Be Bomber Caught at Orlando Airport

Oddly enough, I flew into Orlando Airport on Tuesday night, hours after TSA and police caught Kevin Brown—not the baseball player—with bomb-making equipment in his checked luggage. (Yes, checked luggage. He was bringing it to Jamaica, not planning on blowing up the plane he was on.) Seems like someone trained in behavioral profiling singled him out, probably for stuff like this:

“He was rocking left to right, bouncing up and down … he was there acting crazy,” passenger Jason Doyle said.

But that was a passenger remembering Brown after the fact, so I wouldn’t put too much credence in it.

There are a bunch of articles about Brown and potential motives. Note that he is not an Islamic terrorist; he’s a U.S. Army veteran who served in Iraq:

“This is not him,” she said in a phone interview. “It has to be a mental issue for him. I know if they looked through his medical records … I’m sure they will see … He’s not a terrorist.”

Brown married Holt’s daughter, Kamishia, 25, about three years ago. They met while serving in the Army and separated a year later. Brown wasn’t the same after returning from Iraq, her daughter told her.

“When he doesn’t take it [medication], he’s off the chain,” Holt said. “When you don’t take it and drink alcohol, it makes it worse.”

Doesn’t sound like a terrorist, but this does:

According to the affidavit, Brown admitted he had the items because he wanted to make pipe bombs in Jamaica. It also indicated he wanted to show friends how to make pipe bombs like he made while in Iraq.

Federal agents said they found two vodka bottles filled with nitromethane, a highly explosive liquid, as well as galvanized pipes, end caps with holes, BBs, a model-rocket igniter, AA batteries, a lighter and lighter fluid, plus other items used to make pipe bombs, along with detailed instructions and diagrams. He indicated the items were purchased in Gainesville, where he lived at one time.

Ignore the hyperbole; nitromethane is a liquid fuel, not a high explosive. Here’s the whole affidavit, if you want to read it.

Even with all this news, the truth is that we just don’t know what happened. It looks like a great win for behavioral profiling (which, when done well, I think is a good idea) and the TSA. The TSA is certainly pleased. But we’ve seen apparent TSA wins before that turn out to be bogus when the details finally come out. Right now I’m cautiously pleased with the TSA’s performance, and offer them a tentative congratulations, especially for not over-reacting. I read—but can’t find the link now—that only 11 flights were delayed because of the event. The TSA claims that no flights were delayed, and also says that no security checkpoints were closed. Either way, it’s certainly something to congratulate the TSA about.

Posted on April 3, 2008 at 9:02 AM • 39 Comments

1967 Article on Data Privacy and Security

An eerily prescient article from The Atlantic in 1967 about the future of data privacy. It presents all of the basic arguments for strict controls on the collection of personal data, and it’s remarkably accurate in its predictions of the future development and importance of computers, as well as all of the ways the government would abuse them.

Well worth reading.

Posted on April 3, 2008 at 6:35 AM • 12 Comments

Outsourcing Passports

The U.S. is outsourcing the manufacture of its RFID passports to some questionable companies.

This is a great illustration of the maxim “security trade-offs are often made for non-security reasons.” I can imagine the manager in charge: “Yes, it’s insecure. But think of the savings!”

The Government Printing Office’s decision to export the work has proved lucrative, allowing the agency to book more than $100 million in recent profits by charging the State Department more money for blank passports than it actually costs to make them, according to interviews with federal officials and documents obtained by The Times.

Another story.

Posted on April 2, 2008 at 6:08 AM • 40 Comments

German Minister's Fingerprint Published

This is 1) a good demonstration that a fingerprint is not a secret, and 2) a great political hack. Wolfgang Schäuble, Germany’s interior minister, is a strong supporter of collecting biometric data on everyone as an antiterrorist measure. Because, um, because it sounds like a good idea.

Here’s the story directly from the Chaos Computer Club (in German), and its English-language guide to lifting and using fingerprints. And me on biometrics from 10 years ago.

Posted on April 1, 2008 at 2:37 PM • 35 Comments

For a Safe Night's Sleep

This is just insane:

The Quantum Sleeper Unit is a high-level security system designed for maximum protection in various hostile environments.

Quantum Sleepers can also be fitted to provide protection from destructive forces of nature such as tornados, hurricanes, earthquakes and floods.

The Quantum Sleeper is the ultimate in protection, entertainment and communications, “ALL ROLLED UP IN ONE.”

Posted on April 1, 2008 at 1:10 PM • 49 Comments
