Blog: March 2008 Archives

N-DEx National Intelligence System

An article from The Washington Post:

Federal authorities hope N-DEx will become what one called a “one-stop shop” enabling federal law enforcement, counterterrorism and intelligence analysts to automatically examine the enormous caches of local and state records for the first time.

[…]

The expanding police systems illustrate the prominent roles that private companies play in homeland security and counterterrorism efforts. They also underscore how the use of new data—and data surveillance—technology to fight crime and terrorism is evolving faster than the public’s understanding or the laws intended to check government power and protect civil liberties, authorities said.

Three decades ago, Congress imposed limits on domestic intelligence activity after revelations that the FBI, Army, local police and others had misused their authority for years to build troves of personal dossiers and monitor political activists and other law-abiding Americans.

Since those reforms, police and federal authorities have observed a wall between law enforcement information-gathering, relating to crimes and prosecutions, and more open-ended intelligence that relates to national security and counterterrorism. That wall is fast eroding following the passage of laws expanding surveillance authorities, the push for information-sharing networks, and the expectation that local and state police will play larger roles as national security sentinels.

Law enforcement and federal security authorities said these developments, along with a new willingness by police to share information, hold out the promise of fulfilling post-Sept. 11, 2001, mandates to connect the dots and root out signs of threats before attacks can occur.

Posted on March 31, 2008 at 6:13 AM • 23 Comments

Friday Squid Blogging: Plastinated Squid

In Paris:

France’s National Museum of Natural History on Tuesday unveiled the world’s first “plastinated” squid—a 6.5-metre-long (21.25-feet) deep-sea beast donated by New Zealand and named in honour of a creature featuring in Maori legend.

Plastination entails replacing the animal’s water, fat and other liquids with a polymer that hardens.

It means the specimen can be appreciated in three dimensions in a dry, solid state, rather than in a jar filled with formalin or alcohol, whose glass distorts the view.

The squid was hauled up in January 2000 at a depth of 615 metres (2,000 feet) by fishermen off New Zealand.

[…]

The 65,000-euro (100,000-dollar) plastination, carried out by Italian lab VisDocta Research, took two and a half years, during which the specimen of Architeuthis sanctipauli lost 2.5 metres (seven feet) of its length through drying out.

Wheke is being given pride of place in the Paris museum’s Great Gallery of Evolution, its centrepiece exhibit on biodiversity.

The giant squid, Architeuthis, of which there are three sub-species, is a potent source of maritime tales of tentacled monsters able to grab a ship and pull it down to its doom. The critter memorably featured in Jules Verne’s “20,000 Leagues Under the Sea,” trying to engulf the submarine Nautilus.

In real life, though, the species is rather less gigantic—about 13 metres (42.25 feet) from the caudal fin to the tip of its suckered tentacles. Females are larger than males.

Posted on March 28, 2008 at 4:29 PM • 11 Comments

Speeding Tickets and Agenda

If you ever need an example to demonstrate that security is a function of agenda, use this story about speed cameras. Cities that have installed speed cameras are discovering that motorists are driving slower, which is decreasing revenue from fines. So they’re turning the cameras off.

Perhaps a better solution would be to raise the fines on the remaining speeders to make up for the lost revenue?

EDITED TO ADD (3/31): Too many people thought that above comment was serious. It’s not. The whole incident illustrates why fines should never be considered part of a revenue stream: it gives the police a whole new agenda.

Posted on March 28, 2008 at 1:42 PM

Web Entrapment

Frightening sting operation by the FBI. They posted links to supposed child porn videos on boards frequented by those types, and obtained search warrants based on access attempts.

This seems like incredibly flimsy evidence. Someone could post the link as an embedded image, or send out e-mail with the link embedded, and completely mess with the FBI’s data—and the poor innocents’ lives. Such are the problems when merely clicking on a link is justification for a warrant.
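The embedded-image point is worth spelling out: browsers fetch every image URL on a page automatically, so the bait server logs a “visit” from anyone who merely viewed a page or e-mail containing the tag. A minimal illustration in Python (the URL is a placeholder, not the real bait link):

```python
# Any web page or HTML e-mail carrying this tag makes every viewer's
# browser request the bait URL without a single click -- server-side,
# the hit is indistinguishable from a deliberate visit.
bait_url = "http://example.com/bait"  # placeholder URL
html_snippet = f'<img src="{bait_url}" width="1" height="1" alt="">'
print(html_snippet)
```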

See also this Slashdot thread and this article.

Posted on March 27, 2008 at 2:46 PM • 61 Comments

NSA's Domestic Spying

This article from The Wall Street Journal outlines how the NSA is increasingly engaging in domestic surveillance, data collection, and data mining. The result is essentially the same as Total Information Awareness.

According to current and former intelligence officials, the spy agency now monitors huge volumes of records of domestic emails and Internet searches as well as bank transfers, credit-card transactions, travel and telephone records. The NSA receives this so-called “transactional” data from other agencies or private companies, and its sophisticated software programs analyze the various transactions for suspicious patterns. Then they spit out leads to be explored by counterterrorism programs across the U.S. government, such as the NSA’s own Terrorist Surveillance Program, formed to intercept phone calls and emails between the U.S. and overseas without a judge’s approval when a link to al Qaeda is suspected.

[…]

Two former officials familiar with the data-sifting efforts said they work by starting with some sort of lead, like a phone number or Internet address. In partnership with the FBI, the systems then can track all domestic and foreign transactions of people associated with that item—and then the people who associated with them, and so on, casting a gradually wider net. An intelligence official described more of a rapid-response effect: If a person suspected of terrorist connections is believed to be in a U.S. city—for instance, Detroit, a community with a high concentration of Muslim Americans—the government’s spy systems may be directed to collect and analyze all electronic communications into and out of the city.

The haul can include records of phone calls, email headers and destinations, data on financial transactions and records of Internet browsing. The system also would collect information about other people, including those in the U.S., who communicated with people in Detroit.

The information doesn’t generally include the contents of conversations or emails. But it can give such transactional information as a cellphone’s location, whom a person is calling, and what Web sites he or she is visiting. For an email, the data haul can include the identities of the sender and recipient and the subject line, but not the content of the message.

Intelligence agencies have used administrative subpoenas issued by the FBI—which don’t need a judge’s signature—to collect and analyze such data, current and former intelligence officials said. If that data provided “reasonable suspicion” that a person, whether foreign or from the U.S., was linked to al Qaeda, intelligence officers could eavesdrop under the NSA’s Terrorist Surveillance Program.

[…]

The NSA uses its own high-powered version of social-network analysis to search for possible new patterns and links to terrorism. The Pentagon’s experimental Total Information Awareness program, later renamed Terrorism Information Awareness, was an early research effort on the same concept, designed to bring together and analyze as much and as many varied kinds of data as possible. Congress eliminated funding for the program in 2003 before it began operating. But it permitted some of the research to continue and TIA technology to be used for foreign surveillance.

Some of it was shifted to the NSA—which also is funded by the Pentagon—and put in the so-called black budget, where it would receive less scrutiny and bolster other data-sifting efforts, current and former intelligence officials said. “When it got taken apart, it didn’t get thrown away,” says a former top government official familiar with the TIA program.

Two current officials also said the NSA’s current combination of programs now largely mirrors the former TIA project. But the NSA offers less privacy protection. TIA developers researched ways to limit the use of the system for broad searches of individuals’ data, such as requiring intelligence officers to get leads from other sources first. The NSA effort lacks those controls, as well as controls that it developed in the 1990s for an earlier data-sweeping attempt.

Barry Steinhardt of the ACLU comments:

I mean, when we warn about a “surveillance society,” this is what we’re talking about. This is it, this is the ballgame. Mass data from a wide variety of sources—including the private sector—is being collected and scanned by a secretive military spy agency. This represents nothing less than a major change in American life—and unless stopped the consequences of this system for everybody will grow in magnitude along with the rivers of data that are collected about each of us—and that’s more and more every day.

More commentary.
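The “wider net” process the Journal describes is generally called contact chaining, and a toy version shows why it sweeps in so many people. A minimal sketch, with made-up records (the real systems obviously operate on billions of records):

```python
from collections import defaultdict, deque

# Hypothetical call-detail records: (caller, callee) pairs.
records = [
    ("555-0001", "555-0002"),
    ("555-0002", "555-0003"),
    ("555-0003", "555-0004"),
    ("555-0005", "555-0006"),
]

# Build an undirected contact graph from the records.
graph = defaultdict(set)
for a, b in records:
    graph[a].add(b)
    graph[b].add(a)

def contact_chain(seed, hops):
    """Return every identifier within `hops` links of the seed."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

# Two hops from one seed number already pulls in people who never
# communicated with the target directly.
print(contact_chain("555-0001", hops=2))
```

Each additional hop multiplies the number of people whose records get swept in, which is why “and then the people who associated with them, and so on” covers so much ground so quickly.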

Posted on March 26, 2008 at 6:02 AM • 52 Comments

Craigslist Scam

This is a weird story: someone posted a hoax Craigslist ad saying that the owner of a home had to leave suddenly and that his belongings were free for the taking. People believed the ad and started coming by and taking his stuff.

But Robert Salisbury had no plans to leave. The independent contractor was at Emigrant Lake when he got a call from a woman who had stopped by his house to claim his horse.

On his way home he stopped a truck loaded down with his work ladders, lawn mower and weed eater.

“I informed them I was the owner, but they refused to give the stuff back,” Salisbury said. “They showed me the Craigslist printout and told me they had the right to do what they did.”

The driver sped away after rebuking Salisbury. On his way home he spotted other cars filled with his belongings.

Once home he was greeted by close to 30 people rummaging through his barn and front porch.

The trespassers, armed with printouts of the ad, tried to brush him off. “They honestly thought that because it appeared on the Internet it was true,” Salisbury said. “It boggles the mind.”

This doesn’t surprise me at all. People just don’t think of authenticating this sort of thing. And what if they did call a phone number listed on a hoax ad? How do they know the phone number is real? On the other hand, a phone number on the hoax ad would give the police something to find the hoaxer with.

At least this guy is getting some of his stuff back.

EDITED TO ADD (3/26): In comments, Karl pointed out a previous example of this hoax.

EDITED TO ADD (4/1): A couple have been charged with posting the ad; they allegedly used it to cover up their own thefts.

Posted on March 25, 2008 at 7:33 PM

Martin Hellman on the Invention of Public-Key Cryptography

At the DISI conference last December, Martin Hellman gave a lecture on the invention of public-key cryptography. A video is online (it’s hard to find; search for his name), along with PowerPoint slides.

(Unfortunately, the video isn’t set up for streaming; in order to view it, you’ll have to download the ten files, then use a fairly recent version of WinZip to concatenate them.)

EDITED TO ADD (3/26): Now on Google Video.

Posted on March 25, 2008 at 1:21 PM • 10 Comments

The Security Mindset

Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail.

I replied: “What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.”

Security requires a particular mindset. Security professionals—at least the good ones—see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice. They just can’t help it.

SmartWater is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I wrote when I first learned about the idea. “I think a better idea would be for me to paint it on your valuables, and then call the police.”

Really, we can’t help it.

This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.

I’ve often speculated about how much of this is innate, and how much is teachable. In general, I think it’s a particular way of looking at the world, and that it’s far easier to teach someone domain expertise—cryptography or software security or safecracking or document forgery—than it is to teach someone a security mindset.

Which is why CSE 484, an undergraduate computer-security course taught this quarter at the University of Washington, is so interesting to watch. Professor Tadayoshi Kohno is trying to teach a security mindset.

You can see the results in the blog the students are keeping. They’re encouraged to post security reviews about random things: smart pill boxes, Quiet Care Elder Care monitors, Apple’s Time Capsule, GM’s OnStar, traffic lights, safe deposit boxes, and dorm room security.

A recent one is about an automobile dealership. The poster described how she was able to retrieve her car after service just by giving the attendant her last name. Now any normal car owner would be happy about how easy it was to get her car back, but someone with a security mindset immediately thinks: “Can I really get a car just by knowing the last name of someone whose car is being serviced?”

The rest of the blog post speculates on how someone could steal a car by exploiting this security vulnerability, and whether it makes sense for the dealership to have this lax security. You can quibble with the analysis—I’m curious about the liability that the dealership has, and whether their insurance would cover any losses—but that’s all domain expertise. The important point is to notice, and then question, the security in the first place.

The lack of a security mindset explains a lot of bad security out there: voting machines, electronic payment cards, medical devices, ID cards, internet protocols. The designers are so busy making these systems work that they don’t stop to notice how they might fail or be made to fail, and then how those failures might be exploited. Teaching designers a security mindset will go a long way toward making future technological systems more secure.

That part’s obvious, but I think the security mindset is beneficial in many more ways. If people can learn how to think outside their narrow focus and see a bigger picture, whether in technology or politics or their everyday lives, they’ll be more sophisticated consumers, more skeptical citizens, less gullible people.

If more people had a security mindset, services that compromise privacy wouldn’t have such a sizable market share—and Facebook would be totally different. Laptops wouldn’t be lost with millions of unencrypted Social Security numbers on them, and we’d all learn a lot fewer security lessons the hard way. The power grid would be more secure. Identity theft would go way down. Medical records would be more private. If people had the security mindset, they wouldn’t have tried to look at Britney Spears’ medical records, since they would have realized that they would be caught.

There’s nothing magical about this particular university class; anyone can exercise his security mindset simply by trying to look at the world from an attacker’s perspective. If I wanted to evade this particular security device, how would I do it? Could I follow the letter of this law but get around the spirit? If the person who wrote this advertisement, essay, article or television documentary were unscrupulous, what could he have done? And then, how can I protect myself from these attacks?

The security mindset is a valuable skill that everyone can benefit from, regardless of career path.

This essay originally appeared on Wired.com.

EDITED TO ADD (3/31): Comments from Ed Felten. And another comment.

EDITED TO ADD (4/30): Another comment.

Posted on March 25, 2008 at 5:27 AM • 92 Comments

Security Perception: Fear vs Anger

If you’re fearful, you think you’re more at risk than if you’re angry:

In the aftermath of September 11th, we realized that, tragically, we were presented with an opportunity to find out whether our lab research could predict how the country as a whole would react to the attacks and how U.S. citizens would perceive future risks of terrorism. We did a nationwide field experiment, the first of its kind. As opposed to the participants in our lab studies, the participants in our nationwide field study did have strong feelings about the issues at stake—September 11th and possible future attacks—and they also had a lot of information about these issues as well. We wondered whether the same emotional carryover that we found in our lab studies would occur—whether fear and anger would still have opposing effects.

In pilot tests, we identified some media coverage of the attacks (video clips) that triggered a sense of fear, and some coverage that triggered a sense of anger. We randomly assigned participants from around the country to be exposed to one of those two conditions—media reports that were known to trigger fear or reports that were known to trigger anger. Next, we asked participants to predict how much risk, if any, they perceived in a variety of different events. For example, they were asked to predict the likelihood of another terrorist attack on the United States within the following 12 months and whether they themselves expected to be victims of potential future attacks. They made many other risk judgments about themselves, the country, and the world as a whole. They also rated their policy preferences.

The results mirrored those of our lab studies. Specifically, people who saw the anger-inducing video clip were subsequently more optimistic on a whole series of judgments about the future—their own future, the country’s future, and the future of the world. In contrast, the people who saw the fear-inducing video clip were less optimistic about their own future, the country’s future, and the world’s future. Policy preferences also differed as a function of exposure to the different media/emotion conditions. Participants who saw the fear-inducing clip subsequently endorsed less aggressive and more conciliatory policies than did participants who saw the anger-inducing clip, even though the clip was only a few minutes long and participants had had weeks to form their own policy opinions regarding responses to terrorism.

So, to summarize: we should not be fearful of future terrorist attacks; we should be angry that our government has done such a poor job safeguarding our liberties. And if we take this second approach, we are more likely to respond effectively to future terrorist attacks.

Posted on March 23, 2008 at 12:42 PM • 30 Comments

Quantum Computing: Hype vs. Reality

Really good blog post on the future potential of quantum computing and its effects on cryptography:

To factor a 4096-bit number, you need 72*4096^3 or 4,947,802,324,992 quantum gates. Let’s just round that up to an even 5 trillion. Five trillion is a big number. We’re only now getting to the point that we can put about that many normal bits on a disk drive. The first thing this tells me is that we aren’t going to wake up one day and find out that someone’s put that many q-gates on something you can buy from Fry’s or from a white-box Taiwanese special.
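The arithmetic in the quote is easy to check (the 72·n³ gate-count formula is the quoted post’s estimate, not mine):

```python
# Sanity-check the quoted gate-count estimate: 72 * n^3 gates to
# factor an n-bit number.
n = 4096
gates = 72 * n ** 3
print(f"{gates:,}")             # 4,947,802,324,992 -- about 5 trillion
print(f"{gates / 8e9:.0f} GB")  # ~618 GB if each gate were a stored bit,
                                # roughly a 2008-era disk drive
```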

Posted on March 23, 2008 at 6:29 AM • 41 Comments

Fraud Due to a Credit Card Breach

This sort of story is nothing new:

Hannaford said credit and debit card numbers were stolen during the card authorization process and about 4.2 million unique account numbers were exposed.

But it’s rare that we see statistics about the actual risk of fraud:

The company is aware of about 1,800 cases of fraud reported so far relating to the breach.
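Put the two figures together and you get a rough fraud rate; the reported cases are a lower bound, so the true rate is presumably somewhat higher:

```python
exposed = 4_200_000  # unique account numbers exposed
fraud = 1_800        # fraud cases reported so far
print(f"{fraud / exposed:.3%}")  # about 0.043% of exposed accounts
```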

And this is interesting:

“Visa and MasterCard have stipulated in their contracts with retailers that they will not divulge who the source is when a data breach occurs,” Spitzer said. “We’ve been engaged in a dialogue for a couple years now about changing this rule…. Without knowing who the retailer is that caused the breach, it’s hard for banks to conduct a good investigation on behalf of their consumers. And it’s a problem for consumers as well, because if they know which retailer is responsible, they can rule themselves out for being at risk if they don’t shop at that retailer.”

Posted on March 21, 2008 at 6:39 AM • 28 Comments

Detecting Gunshots

Minneapolis—the city I live in—has an acoustic system that automatically detects and locates gunshots. It’s been in place for a year and a half.

The main system being considered by Minneapolis is called ShotSpotter. It could cost up to $350,000, and some community groups are hoping to pitch in.

That seems like a bargain to me.

Recently, I was asked about this system on Winnipeg radio. Actually, I kind of like it. I like it because it’s finely tuned to one particular problem: detecting gunfire. It doesn’t record everything. It doesn’t invade privacy. If there’s no gunfire, it’s silent. But if there is a gunshot, it figures out the location of the noise and automatically tells police.

From a privacy and liberties perspective, it’s a good system. Now all that has to be demonstrated is that it’s cost effective.
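Systems like this generally locate a gunshot by comparing when the bang reaches different microphones. Here’s a minimal sketch of that time-difference-of-arrival idea with made-up sensor positions; ShotSpotter’s actual algorithms are proprietary, so treat this purely as illustration:

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # metres per second

# Hypothetical sensor positions (metres) on a 500 m grid.
sensors = [(0, 0), (500, 0), (0, 500), (500, 500)]

def arrival_times(source):
    """Simulate when a bang at `source` reaches each sensor."""
    return [math.dist(source, s) / SPEED_OF_SOUND for s in sensors]

def locate(times, step=5):
    """Brute-force grid search for the point whose predicted
    arrival-time differences best match the measured ones.
    Differences are taken relative to sensor 0, so the unknown
    emission time cancels out."""
    best, best_err = None, float("inf")
    for x, y in itertools.product(range(0, 501, step), repeat=2):
        predicted = arrival_times((x, y))
        err = sum(((p - predicted[0]) - (t - times[0])) ** 2
                  for p, t in zip(predicted, times))
        if err < best_err:
            best, best_err = (x, y), err
    return best

print(locate(arrival_times((120, 345))))  # recovers (120, 345)
```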

Posted on March 20, 2008 at 7:27 AM • 74 Comments

The Continuing Slide Towards Thoughtcrime

A suggestion from the UK: put primary-school children in a DNA database if they “exhibit behaviour indicating they may become criminals in later life.”

Pugh’s call for the government to consider options such as placing primary school children who have not been arrested on the database is supported by elements of criminological theory. A well-established pattern of offending involves relatively trivial offences escalating to more serious crimes. Senior Scotland Yard criminologists are understood to be confident that techniques are able to identify future offenders.

A recent report from the think-tank Institute for Public Policy Research (IPPR) called for children to be targeted between the ages of five and 12 with cognitive behavioural therapy, parenting programmes and intensive support. Prevention should start young, it said, because prolific offenders typically began offending between the ages of 10 and 13. Julia Margo, author of the report, entitled ‘Make me a Criminal’, said: ‘You can carry out a risk factor analysis where you look at the characteristics of an individual child aged five to seven and identify risk factors that make it more likely that they would become an offender.’ However, she said that placing young children on a database risked stigmatising them by identifying them in a ‘negative’ way.

Thankfully, the article contains some reasonable reactions:

Shami Chakrabarti, director of the civil rights group Liberty, denounced any plan to target youngsters. ‘Whichever bright spark at Acpo thought this one up should go back to the business of policing or the pastime of science fiction novels,’ she said. ‘The British public is highly respectful of the police and open even to eccentric debate, but playing politics with our innocent kids is a step too far.’

Chris Davis, of the National Primary Headteachers’ Association, said most teachers and parents would find the suggestion an ‘anathema’ and potentially very dangerous. ‘It could be seen as a step towards a police state,’ he said. ‘It is condemning them at a very young age to something they have not yet done. They may have the potential to do something, but we all have the potential to do things. To label children at that stage and put them on a register is going too far.’

Posted on March 18, 2008 at 2:12 PM • 77 Comments

Risk and the Brain

New research on how the brain estimates risk:

Using functional imaging in a simple gambling task in which risk was constantly changed, the researchers discovered that an early activation of the anterior insula of the brain was associated with mistakes in predicting risk.

The time course of the activation also indicated a role in rapid updating, suggesting that this area is involved in how we learn to modify our risk predictions. The finding was particularly interesting, notes lead author and EPFL professor Peter Bossaerts, because the anterior insula is the locus of where we integrate and process emotions.

“This represents an important advance in our understanding of the neurological underpinnings of risk, in analogy with an earlier discovery of a signal for forecast error in the dopaminergic system,” says Bossaerts, “and indicates that we need to update our understanding of the neural basis of reward anticipation in uncertain conditions to include risk assessment.”

Posted on March 18, 2008 at 6:51 AM • 11 Comments

Security in Montana

Three items.

The first is about the difficulty of implementing REAL ID in areas so remote they don’t have a permanent DMV. The second is about airport security at airports so remote they average only two passengers per flight. The third—and this is the best—is Brian Schweitzer, Montana’s governor, speaking about his opposition to REAL ID.

EDITED TO ADD (3/24): More on Montana and REAL ID.

Posted on March 17, 2008 at 1:17 PM • 22 Comments

Camera that Sees Under Clothes

Interesting:

A British company has developed a camera that can detect weapons, drugs or explosives hidden under people’s clothes from up to 25 meters away in what could be a breakthrough for the security industry.

The T5000 camera, created by a company called ThruVision, uses what it calls “passive imaging technology” to identify objects by the natural electromagnetic rays—known as Terahertz or T-rays—that they emit.

The high-powered camera can detect hidden objects from up to 80 feet away and is effective even when people are moving. It does not reveal physical body details and the screening is harmless, the company says.

If this is real, it seems much less invasive than backscatter X-ray.

Posted on March 17, 2008 at 6:30 AM • 35 Comments

London Tube Smartcard Cracked

Looks like lousy cryptography.

Details here. When will people learn not to invent their own crypto?

Note that this is the same card—maybe a different version—that was used in the Dutch transit system, and was hacked back in January. There’s another hack of that system (press release here, and a video demo), and many companies—and government agencies—are scrambling in the wake of all these revelations.

Seems like the Mifare system (especially the version called Mifare Classic—and there are billions out there) was really badly designed, in all sorts of ways. I’m sure there are many more serious security vulnerabilities waiting to be discovered.
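One concrete measure of how bad: Mifare Classic’s proprietary Crypto1 cipher uses 48-bit keys, a keyspace within reach of plain brute force even before exploiting any of the cipher’s structural weaknesses. A back-of-the-envelope estimate (the key-test rate is my assumption):

```python
keyspace = 2 ** 48  # Crypto1 uses 48-bit keys

# Assume an attacker testing ten million keys per second -- a made-up
# but modest figure for dedicated hardware.
rate = 10_000_000
days = keyspace / rate / 86_400
print(f"{days:.0f} days to exhaust the keyspace")  # about 326 days
```

And the published attacks reportedly cut the effective search space far below that.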

Posted on March 14, 2008 at 7:27 AM • 64 Comments

Stealing from Bookstores

Fascinating:

There’s an underground economy of boosted books. These values are commonly understood and roundly agreed upon through word of mouth, and the values always seem to be true. Once, a scruffy, large man approached me, holding a folded-up piece of paper. “Do you have any Buck?” He paused and looked at the piece of paper. “Any books by Buckorsick?” I suspected that he meant Bukowski, but I played dumb, and asked to see the piece of paper he was holding. It was written in crisp handwriting that clearly didn’t belong to him, and it read:

  1. Charles Bukowski
  2. Jim Thompson
  3. Philip K. Dick
  4. William S. Burroughs
  5. Any Graphic Novel

This is pretty much the authoritative top five, the New York Times best-seller list of stolen books. Its origins still mystify me. It might have belonged to an unscrupulous used bookseller who sent the homeless out, Fagin-like, to do his bidding, or it might have been another book thief helping a semi-illiterate friend identify the valuable merchandise.

Posted on March 13, 2008 at 1:06 PM • 31 Comments

Physically Hacking Windows Computers via FireWire

This is impressive:

With Winlockpwn, the attacker connects a Linux machine to the Firewire port on the victim’s machine. The attacker then gets full read-and-write memory access and the tool deactivates Windows’s password protection that resides in local memory. Then he or she has carte blanche to steal passwords or drop rootkits and keyloggers onto the machine.
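The underlying trick is conceptually simple: with raw read/write access to RAM, you find the password-check routine by its byte signature and overwrite it so it always succeeds. A sketch of the idea in Python, operating on a saved memory image rather than live FireWire DMA (the signature and patch bytes are placeholders, not Winlockpwn’s actual ones):

```python
# Conceptual only: pattern-patching a memory image. The real tool
# performs the same search-and-overwrite against live RAM over the
# FireWire bus.
SIGNATURE = bytes.fromhex("deadbeefcafe")  # placeholder routine bytes
PATCH = bytes.fromhex("909090909090")      # placeholder "always OK" stub

with open("memory.dump", "r+b") as mem:
    image = mem.read()
    offset = image.find(SIGNATURE)
    if offset != -1:
        mem.seek(offset)
        mem.write(PATCH)  # the password check now accepts anything
```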

Full disk encryption seems like the only defense here.

Posted on March 13, 2008 at 11:54 AM • 58 Comments

Chip and PIN Vulnerable

This both is and isn’t news. In the security world, we knew that replacing credit card signatures with chip and PIN created new vulnerabilities. In this paper (see also the press release and FAQ), researchers demonstrated some pretty basic attacks against the system—one using a paper clip, a needle, and a small recording device. This BBC article is a good summary of the research.

And also, there’s also this leaked chip and PIN report from APACS, the UK trade association that has been pushing chip and PIN.

Posted on March 12, 2008 at 2:12 PM • 16 Comments

Hacking Medical Devices

Okay, so this could be big news:

But a team of computer security researchers plans to report Wednesday that it had been able to gain wireless access to a combination heart defibrillator and pacemaker.

They were able to reprogram it to shut down and to deliver jolts of electricity that would potentially be fatal—if the device had been in a person. In this case, the researchers were hacking into a device in a laboratory.

The researchers said they had also been able to glean personal patient data by eavesdropping on signals from the tiny wireless radio that Medtronic, the device’s maker, had embedded in the implant as a way to let doctors monitor and adjust it without surgery.

There’s only a little bit of hyperbole in the New York Times article. The research is being conducted by the Medical Device Security Center, with researchers from Beth Israel Deaconess Medical Center, Harvard Medical School, the University of Massachusetts Amherst, and the University of Washington. They have two published papers:

This is from the FAQ for the second paper (an ICD is an implantable cardiac defibrillator):

As part of our research we evaluated the security and privacy properties of a common ICD. We investigate whether a malicious party could create his or her own equipment capable of wirelessly communicating with this ICD.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could violate the privacy of patient information and medical telemetry. The ICD wirelessly transmits patient information and telemetry without observable encryption. The adversary’s computer could intercept wireless signals from the ICD and learn information including: the patient’s name, the patient’s medical history, the patient’s date of birth, and so on.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could also turn off or modify therapy settings stored on the ICD. Such a person could render the ICD incapable of responding to dangerous cardiac events. A malicious person could also make the ICD deliver a shock that could induce ventricular fibrillation, a potentially lethal arrhythmia.

Of course, we all know how this happened. It’s a story we’ve seen a zillion times before: the designers didn’t think about security, so the design wasn’t secure.

The researchers are making it very clear that this doesn’t mean people shouldn’t get pacemakers and ICDs. Again, from the FAQ:

We strongly believe that nothing in our report should deter patients from receiving these devices if recommended by their physician. The implantable cardiac defibrillator is a proven, life-saving technology. We believe that the risk to patients is low and that patients should not be alarmed. We do not know of a single case where an IMD patient has ever been harmed by a malicious security attack. To carry out the attacks we discuss in our paper would require: malicious intent, technical sophistication, and the ability to place electronic equipment close to the patient. Our goal in performing this study is to improve the security, privacy, safety, and effectiveness of future IMDs.

For all our experiments our antenna, radio hardware, and PC were near the ICD. Our experiments were conducted in a computer laboratory and utilized simulated patient data. We did not experiment with extending the distance between the antenna and the ICD.

I agree with this answer. The risks are there, but the benefits of these devices are much greater. The point of this research isn’t to help people hack into pacemakers and commit murder, but to enable medical device companies to design better implantable equipment in the future. I think it’s great work.

Of course, that will only happen if the medical device companies don’t react like idiots:

Medtronic, the industry leader in cardiac regulating implants, said Tuesday that it welcomed the chance to look at security issues with doctors, regulators and researchers, adding that it had never encountered illegal or unauthorized hacking of its devices that have telemetry, or wireless control, capabilities.

“To our knowledge there has not been a single reported incident of such an event in more than 30 years of device telemetry use, which includes millions of implants worldwide,” a Medtronic spokesman, Robert Clark, said. Mr. Clark added that newer implants with longer transmission ranges than Maximo also had enhanced security.

[…]

St. Jude Medical, the third major defibrillator company, said it used “proprietary techniques” to protect the security of its implants and had not heard of any unauthorized or illegal manipulation of them.

Just because you have no knowledge of something happening does not mean it’s not a risk.

Another article.

The general moral here: more and more, computer technology is becoming intimately embedded into our lives. And with each new application comes new security risks. And we have to take those risks seriously.

Posted on March 12, 2008 at 10:39 AM • 46 Comments

German Courts Rule on Spying in Cyberspace

Good ruling:

The Federal Constitutional Court in Karlsruhe said cyber spying violated individuals’ right to privacy and could be used only in exceptional cases.

More info:

Germany’s Federal Constitutional Court has rejected provisions adopted by the State of North Rhine-Westphalia that allowed investigators to covertly search PCs online. In its ruling, the court creates a new right to confidentiality and integrity of personal data stored on IT systems; the ruling expands the current protection provided by the country’s constitutional rights for telecommunications privacy and the personal right to control private information under the German constitution.

In line with an earlier ruling on censuses, the judges found that the modern digital world requires a new right, but not one which is absolute; exceptions can be made if there is just cause. The judges did not feel that the blanket covert online searches that North Rhine-Westphalia’s (NRW) provisions allowed fell under that category; rather, these searches were found to be a severe violation of privacy.

The court explained that strict legal provisions apply for covert online searches of PCs, as with exceptional cases of telephone tapping or other exceptions to the right to privacy. Specifically, the judges say that private PCs can only be covertly searched “if there is evidence that an important overriding right would otherwise be violated.”

More articles. Commentary. And here’s the ruling—in German, of course.

Posted on March 12, 2008 at 6:18 AM • 29 Comments

Searching for Terrorists in World of Warcraft

So, you’re sitting around the house with your buddies, playing World of Warcraft. One of you wonders: “How can we get paid for doing this?” Another says: “I know; let’s pretend we’re fighting terrorism, and then get a government grant.”

Having eliminated all terrorism in the real world, the U.S. intelligence community is working to develop software that will detect violent extremists infiltrating World of Warcraft and other massive multiplayer games, according to a data-mining report from the Director of National Intelligence.

Another article.

You just can’t make this stuff up.

EDITED TO ADD (3/13): Funny.

Posted on March 11, 2008 at 2:42 PM • 43 Comments

FAA Badges Missing

I don’t know how big a deal this really is, but it is amusing nonetheless:

According to the investigation, 122 Federal Aviation Administration safety inspector badges have been stolen or lost in the past five years. The credentials are one of the few forms of identification that give complete and unfettered access to airport facilities, including the cockpits of planes in flight.

“The FAA badge is probably of all the badges just as dangerous if not more so than any other,” aviation expert Denny Kelly said.

Kelly, a former commercial pilot and a private investigator, said the badge can give a person free access to nearly every secure area of an airport.

“The FAA badge allows you not only on one airline, plus getting through security, it allows you to get on any airline, any airplane, anyplace,” he said.

Posted on March 11, 2008 at 11:14 AM • 31 Comments

Privacy and Power

When I write and speak about privacy, I am regularly confronted with the mutual disclosure argument. Explained in books like David Brin’s The Transparent Society, the argument goes something like this: In a world of ubiquitous surveillance, you’ll know all about me, but I will also know all about you. The government will be watching us, but we’ll also be watching the government. This is different than before, but it’s not automatically worse. And because I know your secrets, you can’t use my secrets as a weapon against me.

This might not be everybody’s idea of utopia—and it certainly doesn’t address the inherent value of privacy—but this theory has a glossy appeal, and could easily be mistaken for a way out of the problem of technology’s continuing erosion of privacy. Except it doesn’t work, because it ignores the crucial dissimilarity of power.

You cannot evaluate the value of privacy and disclosure unless you account for the relative power levels of the discloser and the disclosee.

If I disclose information to you, your power with respect to me increases. One way to address this power imbalance is for you to similarly disclose information to me. We both have less privacy, but the balance of power is maintained. But this mechanism fails utterly if you and I have different power levels to begin with.

An example will make this clearer. You’re stopped by a police officer, who demands to see identification. Divulging your identity will give the officer enormous power over you: He or she can search police databases using the information on your ID; he or she can create a police record attached to your name; he or she can put you on this or that secret terrorist watch list. Asking to see the officer’s ID in return gives you no comparable power over him or her. The power imbalance is too great, and mutual disclosure does not make it OK.

You can think of your existing power as the exponent in an equation that determines the value, to you, of more information. The more power you have, the more additional power you derive from the new data.

Another example: When your doctor says “take off your clothes,” it makes no sense for you to say, “You first, doc.” The two of you are not engaging in an interaction of equals.

This is the principle that should guide decision-makers when they consider installing surveillance cameras or launching data-mining programs. It’s not enough to open the efforts to public scrutiny. All aspects of government work best when the relative power between the governors and the governed remains as small as possible—when liberty is high and control is low. Forced openness in government reduces the relative power differential between the two, and is generally good. Forced openness in laypeople increases the relative power, and is generally bad.

Seventeen-year-old Erik Crespo was arrested in 2005 in connection with a shooting in a New York City elevator. There’s no question that he committed the shooting; it was captured on surveillance-camera videotape. But he claimed that while being interrogated, Detective Christopher Perino tried to talk him out of getting a lawyer, and told him that he had to sign a confession before he could see a judge.

Perino denied, under oath, that he ever questioned Crespo. But Crespo had received an MP3 player as a Christmas gift, and surreptitiously recorded the questioning. The defense brought a transcript and CD into evidence. Shortly thereafter, the prosecution offered Crespo a better deal than originally proffered (seven years rather than 15). Crespo took the deal, and Perino was separately indicted on charges of perjury.

Without that recording, it was the detective’s word against Crespo’s. And who would believe a murder suspect over a New York City detective? That power imbalance was reduced only because Crespo was smart enough to press the “record” button on his MP3 player. Why aren’t all interrogations recorded? Why don’t defendants have the right to those recordings, just as they have the right to an attorney? Police routinely record traffic stops from their squad cars for their own protection; that video record shouldn’t stop once the suspect is no longer a threat.

Cameras make sense when trained on police, and in offices where lawmakers meet with lobbyists, and wherever government officials wield power over the people. Open-government laws, giving the public access to government records and meetings of governmental bodies, also make sense. These all foster liberty.

Ubiquitous surveillance programs that affect everyone without probable cause or warrant, like the National Security Agency’s warrantless eavesdropping programs or various proposals to monitor everything on the internet, foster control. And no one is safer in a political system of control.

This essay originally appeared on Wired.com.

Commentary by David Brin.

Posted on March 11, 2008 at 6:09 AM • 83 Comments

Israel Implementing IFF System for Commercial Aircraft

Israel is implementing an IFF (identification, friend or foe) system for commercial aircraft, designed to differentiate legitimate planes from terrorist-controlled planes.

The news article implies that it’s a basic challenge-and-response system. Ground control issues some kind of alphanumeric challenge to the plane. The pilot types the challenge into some hand-held computer device, and reads back the reply. Authentication is achieved by 1) physical possession of the device, and 2) typing a legitimate PIN into the device to activate it.

The article talks about a distress mode, where the pilot signals that a terrorist is holding a gun to his head. Likely, that’s done by typing a special distress PIN into the device, and reading back whatever the screen displays.

The military has had this sort of system—first paper-based, and eventually computer-based—for decades. The critical issue with using this on commercial aircraft is how to deal with user error. The system has to be easy enough to use, and the parts hard enough to lose, that there won’t be a lot of false alarms.
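We don’t know the design details, but a natural construction derives the read-back code from both the challenge and the PIN, with the distress PIN producing a reply that verifies correctly while raising a silent flag. A minimal sketch under those assumptions; every value here is invented:

```python
import hashlib
import hmac

DEVICE_SECRET = b"per-aircraft key loaded before departure"  # hypothetical

def response(challenge: str, pin: str) -> str:
    """Derive a short read-back code from the challenge and the PIN."""
    mac = hmac.new(DEVICE_SECRET, f"{pin}:{challenge}".encode(),
                   hashlib.sha256)
    return mac.hexdigest()[:6].upper()

challenge = "K7Q2F9"  # issued by ground control

normal = response(challenge, pin="4271")  # everyday PIN
duress = response(challenge, pin="4272")  # distress PIN: the reply still
                                          # verifies, but flags a hijacking

# Ground control computes both candidate replies itself; which one the
# pilot reads back reveals whether the cockpit is under duress.
print(normal, duress)
```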

Posted on March 10, 2008 at 12:24 PM • 34 Comments

Security Products: Suites vs. Best-of-Breed

We know what we don’t like about buying consolidated product suites: one great product and a bunch of mediocre ones. And we know what we don’t like about buying best-of-breed: multiple vendors, multiple interfaces, and multiple products that don’t work well together. The security industry has gone back and forth between the two, as a new generation of IT security professionals rediscovers the downsides of each solution.

The real problem is that neither solution really works, and we continually fool ourselves into believing whatever we don’t have is better than what we have at the time. And the real solution is to buy results, not products.

Honestly, no one wants to buy IT security. People want to buy whatever they want—connectivity, a Web presence, email, networked applications, whatever—and they want it to be secure. That they’re forced to spend money on IT security is an artifact of the youth of the computer industry. And sooner or later the need to buy security will disappear.

It will disappear because IT vendors are starting to realize they have to provide security as part of whatever they’re selling. It will disappear because organizations are starting to buy services instead of products, and demanding security as part of those services. It will disappear because the security industry will disappear as a consumer category, and will instead market to the IT industry.

The critical driver here is outsourcing. Outsourcing is the ultimate consolidator, because the customer no longer cares about the details. If I buy my network services from a large IT infrastructure company, I don’t care if it secures things by installing the hot new intrusion prevention systems, by configuring the routers and servers so as to obviate the need for network-based security, or by using magic security dust given to it by elven kings. I just want a contract that specifies a level and quality of service, and my vendor can figure it out.

IT is infrastructure. Infrastructure is always outsourced. And the details of how the infrastructure works are left to the companies that provide it.

This is the future of IT, and when that happens we’re going to start to see a type of consolidation we haven’t seen before. Instead of large security companies gobbling up small security companies, both large and small security companies will be gobbled up by non-security companies. It’s already starting to happen. In 2006, IBM bought ISS. The same year BT bought my company, Counterpane, and last year it bought INS. These aren’t large security companies buying small security companies; these are non-security companies buying large and small security companies.

If I were Symantec or McAfee, I would be preparing myself for a buyer.

This is good consolidation. Instead of having to choose between a single product suite that isn’t very good or a best-of-breed set of products that don’t work well together, we can ignore the issue completely. We can just find an infrastructure provider that will figure it out and make it work—who cares how?

This essay originally appeared as the second half of a point/counterpoint with Marcus Ranum in Information Security. Here’s Marcus’s half.

Posted on March 10, 2008 at 6:33 AM • 29 Comments

TSA's Ideal Laptop Bag

This seems not to be a joke.

The Transportation Security Administration is interested in evaluating—and eventually approving—the design of certain laptop bags, so travelers would be permitted to pass through security checkpoints without having to remove their laptops.

[…]

To accomplish this, the TSA RFI pointed out that the laptop bag would need to meet the following requirements:

  • The carrying bag cannot exceed any one of the proposed dimensions – 16 inches in height, 24 inches wide and 36 inches long.
  • The materials that make up the bag cannot degrade the quality of the X-ray image of the laptop.
  • No straps, pockets, zippers, handles or closures of the bag can interfere with the image of the laptop.
  • No electronics, chargers, batteries, wires, paper products, pens or other contents of the bag can shield the image of the laptop.

TSA is inviting bag designers and manufacturers to come up with creative ways to meet these design requirements, but it has also suggested three concepts of its own:

  • A bag that would open completely, and lie horizontally on the X-ray belt, such that one side would hold only the laptop.
  • A bag that would open completely, leaving the laptop standing vertically, supported by clips.
  • A bag that would pull apart in separate compartments, with one compartment containing only the laptop.

Doesn’t sound like a particularly useful laptop bag.

Posted on March 7, 2008 at 10:42 AM • 72 Comments

Risk of Knowing Too Much About Risk

Interesting:

Dread is a powerful force. The problem with dread is that it leads to terrible decision-making.

Slovic says all of this results from how our brains process risk, which is in two ways. The first is intuitive, emotional and experience based. Not only do we fear more what we can’t control, but we also fear more what we can imagine or what we experience. This seems to be an evolutionary survival mechanism. In the presence of uncertainty, fear is a valuable defense. Our brains react emotionally, generate anxiety and tell us, “Remember the news report that showed what happened when those other kids took the bus? Don’t put your kids on the bus.”

The second way we process risk is analytical: we use probability and statistics to override, or at least prioritize, our dread. That is, our brain plays devil’s advocate with its initial intuitive reaction, and tries to say, “I know it seems scary, but eight times as many people die in cars as they do on buses. In fact, only one person dies on a bus for every 500 million miles buses travel. Buses are safer than cars.”

Unfortunately for us, that’s often not the voice that wins. Intuitive risk processors can easily overwhelm analytical ones, especially in the presence of those etched-in images, sounds and experiences. Intuition is so strong, in fact, that if you presented someone who had experienced a bus accident with factual risk analysis about the relative safety of buses over cars, it’s highly possible that they’d still choose to drive their kids to school, because their brain washes them in those dreadful images and reminds them that they control a car but don’t control a bus. A car just feels safer. “We have to work real hard in the presence of images to get the analytical part of risk response to work in our brains,” says Slovic. “It’s not easy at all.”

And we’re making it harder by disclosing more risks than ever to more people than ever. Not only does all of this disclosure make us feel helpless, but it also gives us ever more of those images and experiences that trigger the intuitive response without analytical rigor to override the fear. Slovic points to several recent cases where reason has lost to fear: The sniper who terrorized Washington D.C.; pathogenic threats like MRSA and brain-eating amoeba. Even the widely publicized drunk-driving death of a baseball player this year led to decisions that, from a risk perspective, were irrational.

Posted on March 6, 2008 at 6:24 AM • 33 Comments

Creating and Entrapping Terrorists

When I wrote this essay—“Portrait of the Modern Terrorist as an Idiot”—I thought a lot about the government inventing terrorist plotters and entrapping them, to make the world seem scarier. Since then, it’s been on my list of topics to write about someday.

Rolling Stone has this excellent article on the topic, about the Joint Terrorism Task Forces in the U.S.:

But a closer inspection of the cases brought by JTTFs reveals that most of the prosecutions had one thing in common: The defendants posed little if any demonstrable threat to anyone or anything. According to a study by the Center on Law and Security at the New York University School of Law, only ten percent of the 619 “terrorist” cases brought by the federal government have resulted in convictions on “terrorism-related” charges—a category so broad as to be meaningless. In the past year, none of the convictions involved jihadist terror plots targeting America. “The government releases selective figures,” says Karen Greenberg, director of the center. “They have never even defined ‘terrorism.’ They keep us in the dark over statistics.”

Indeed, Shareef is only one of many cases where the JTTFs have employed dubious means to reach even more dubious ends. In Buffalo, the FBI spent eighteen months tracking the “Lackawanna Six”—a half-dozen men from the city’s large Muslim population who had been recruited by an Al Qaeda operative in early 2001 to undergo training in Afghanistan. Only two lasted the six-week course; the rest pretended to be hurt or left early. Despite extensive surveillance, the FBI found no evidence that the men ever discussed, let alone planned, an attack—but that didn’t stop federal agents from arresting the suspects with great fanfare and accusing them of operating an “Al Qaeda-trained terrorist cell on American soil.” Fearing they would be designated as “enemy combatants” and disappeared into the legal void created by the Patriot Act, all six pleaded guilty to aiding Al Qaeda and were sentenced to at least seven years in prison.

In other cases, the use of informants has led the government to flirt with outright entrapment. In Brooklyn, a Guyanese immigrant and former cargo handler named Russell Defreitas was arrested last spring for plotting to blow up fuel tanks at JFK International Airport. In fact, before he encountered the might of the JTTF, Defreitas was a vagrant who sold incense on the streets of Queens and spent his spare time checking pay phones for quarters. He had no hope of instigating a terrorist plot of the magnitude of the alleged attack on JFK—until he received the help of a federal informant known only as “Source,” a convicted drug dealer who was cooperating with federal agents to get his sentence reduced. Backed by the JTTF, Defreitas suddenly obtained the means to travel to the Caribbean, conduct Google Earth searches of JFK’s grounds and build a complex, multifaceted, international terror conspiracy—albeit one that was impossible to actually pull off. After Defreitas was arrested, U.S. Attorney Roslynn Mauskopf called it “one of the most chilling plots imaginable.”

Using informants to gin up terrorist conspiracies is a radical departure from the way the FBI has traditionally used cooperating sources against organized crime or drug dealers, where a pattern of crime is well established before the investigation begins. Now, in new-age terror cases, the JTTFs simply want to establish that suspects are predisposed to be terrorists—even if they are completely unable or ill-equipped to act on that predisposition. High-tech video and audio evidence, coupled with anti-terror hysteria, has made it effectively impossible for suspects to use the legal defense of entrapment. The result in many cases has been guilty pleas—and no scrutiny of government conduct.

In most cases, because no trial is ever held, few details emerge beyond the spare and slanted descriptions in the indictments. When facts do come to light during a trial, they cast doubt on the seriousness of the underlying case. The “Albany Pizza” case provides a stark example. Known as a “sting case,” the investigation began in June 2003 when U.S. soldiers raided an “enemy camp” in Iraq and seized a notebook containing the name of an imam in Albany—one Yassin Aref. To snare Aref, the JTTF dispatched a Pakistani immigrant named Shahed “Malik” Hussain, who was facing years in prison for a driver’s-license scam. Instead of approaching Aref directly, federal agents sent Malik to befriend Mohammed Hossain, a Bangladeshi immigrant who went to the same mosque as Aref. Hossain, an American citizen who ran a place called Little Italy Pizzeria in Albany, had no connections whatsoever to terrorism or any form of radical Islam. After the attacks on 9/11, he had been quoted in the local paper saying, “I am proud to be an American.” But enticed by Malik, Hossain soon found himself caught up in a government-concocted terror plot. Posing as an arms dealer, Malik told Hossain that a surface-to-air missile was needed for an attack on a Pakistani diplomat in New York. He offered Hossain $5,000 in cash to help him launder $50,000—a deal Hossain claims he never properly grasped. According to Muslim tradition, a witness is needed for significant financial transactions. Thus, the JTTF reached out for Hossain’s imam and the true target of the sting—Aref.

Posted on March 5, 2008 at 6:25 AM • 30 Comments
