Blog: February 2008 Archives

Kids and Lying

How kids learn to lie. (Maybe it’s a bit off the security topic, but with all my reading on the psychology of security, I don’t think so.)

So when do the 98 percent who think lying is wrong become the 98 percent who lie?

It starts very young. Indeed, bright kids—those who do better on other academic indicators—are able to start lying at 2 or 3. “Lying is related to intelligence,” explains Dr. Victoria Talwar, an assistant professor at Montreal’s McGill University and a leading expert on children’s lying behavior.

Although we think of truthfulness as a young child’s paramount virtue, it turns out that lying is the more advanced skill. A child who is going to lie must recognize the truth, intellectually conceive of an alternate reality, and be able to convincingly sell that new reality to someone else. Therefore, lying demands both advanced cognitive development and social skills that honesty simply doesn’t require. “It’s a developmental milestone,” Talwar has concluded.

Posted on February 29, 2008 at 7:09 AM

Why Some Terrorist Attacks Succeed and Others Fail

In “Underlying Reasons for Success and Failure of Terrorist Attacks: Selected Case Studies” (Homeland Security Institute, June 2007), the authors examine eight recent terrorist plots against commercial aviation and passenger rail, and come to some interesting conclusions.

From the “Executive Summary”:

The analytic results indicated that the most influential factors determining the success or failure of a terrorist attack are those that occur in the pre-execution phases. While safeguards and controls at airports and rail stations are critical, they are most effective when coupled with factors that can be leveraged to detect the plot in the planning stages. These factors include:

  • Poor terrorist operational security (OPSEC). The case studies indicate that even plots that are otherwise well-planned and operationally sound will fail if there is a lack of attention to OPSEC. Security services cannot “cause” poor OPSEC, but they can create the proper conditions to capitalize on it when it occurs.
  • Observant public and vigilant security services. OPSEC breaches are a significant factor only if they are noticed. In cases where the public was sensitive to suspicious behavior, lapses in OPSEC were brought to the attention of authorities by ordinary citizens. However, the authorities must likewise be vigilant and recognize the value of unexpected information that may seem unimportant, but actually provides the opening to interdict a planned attack.
  • Terrorist profile indicators. Awareness of and sensitivity to behavioral indicators, certain activities, or past involvement with extremist elements can help alert an observant public and help a vigilant security apparatus recognize a potential cell of terrorist plotters.
  • Law enforcement or intelligence information sharing. Naturally, if security services are aware of an impending attack they will be better able to interdict it. The key, as stated above, is to recognize the value of information that may seem unimportant but warrants further investigation. Security services may not recognize the context into which a certain piece of information fits, but by sharing with other organizations more parts of the puzzle can be pieced together. Information should be shared laterally, with counterpart organizations; downward, with local law enforcement, who can serve as collectors of information; and with higher elements capable of conducting detailed analysis. Intelligence collection and analysis are relatively new functions for law enforcement. Training is a key element in their ability to recognize and respond to indicators.
  • International cooperation. Nearly all terrorist plots, including most of those studied for this project, have an international connection. This could include overseas support elements, training camps, or movement of funds. The sharing of information among allies appears from our analysis to have a positive impact on interdicting attack plans as well as apprehending members of larger networks.

I especially like this quote, which echoes what I’ve been saying for a long time now:

One phenomenon stands out: terrorists are rarely caught in the act during the execution phase of an operation, other than instances in which their equipment or weapons fail. Rather, plots are most often foiled during the pre-execution phases.

Intelligence, investigation, and emergency response: that’s where we should be spending our counterterrorism dollar. Defending the targets is rarely the right answer.

Posted on February 28, 2008 at 6:25 AM

Third Parties Controlling Information

Wine Therapy is a web bulletin board for serious wine geeks. It’s been active since 2000, and its database of back posts and comments is a wealth of information: tasting notes, restaurant recommendations, stories and so on. Late last year someone hacked the board software, got administrative privileges and deleted the database. There was no backup.

Of course the board’s owner should have been making backups all along, but he has been very sick for the past year and wasn’t able to. And the Internet Archive has been only somewhat helpful.

More and more, information we rely on—either created by us or by others—is out of our control. It’s out there on the internet, on someone else’s website and being cared for by someone else. We use those websites, sometimes daily, and don’t even think about their reliability.

Bits and pieces of the web disappear all the time. It’s called “link rot,” and we’re all used to it. A friend saved 65 links in 1999 when he planned a trip to Tuscany; only half of them still work today. In my own blog, essays and news articles and websites that I link to regularly disappear—sometimes within a few days of my linking to them.

It may be because of a site’s policies—some newspapers only have a couple of weeks on their website—or it may be more random: Position papers disappear off a politician’s website after he changes his mind on an issue, corporate literature disappears from the company’s website after an embarrassment, etc. The ultimate link rot is “site death,” where entire websites disappear: Olympic and World Cup events after the games are over, political candidates’ websites after the elections are over, corporate websites after the funding runs out and so on.

Mostly, we ignore the issue. Sometimes I save a copy of a good recipe I find, or an article relevant to my research, but mostly I trust that whatever I want will be there next time. Were I planning a trip to Tuscany, I would rather search for relevant articles today than rely on a nine-year-old list anyway. Most of the time, link rot and site death aren’t really a problem.

This is changing in a Web 2.0 world, with websites that are less about information and more about community. We help build these sites, with our posts or our comments. We visit them regularly and get to know others who also visit regularly. They become part of our socialization on the internet and the loss of them affects us differently, as Greatest Journal users discovered in January when their site died.

Few, if any, of the people who made Wine Therapy their home kept backup copies of their own posts and comments. I’m sure they didn’t even think of it. I don’t think of it, when I post to the various boards and blogs and forums I frequent. Of course I know better, but I think of these forums as extensions of my own computer—until they disappear.

As we rely on others to maintain our writings and our relationships, we lose control over their availability. Of course, we also lose control over their security, as MySpace users learned last month when a 17-GB file of half a million supposedly private photos was uploaded to a BitTorrent site.

In the early days of the web, I remember feeling giddy over the wealth of information out there and how easy it was to get to. “The internet is my hard drive,” I told newbies. It’s even more true today; I don’t think I could write without so much information so easily accessible. But it’s a pretty damned unreliable hard drive.

The internet is my hard drive, but only if my needs are immediate and my requirements can be satisfied inexactly. It was easy for me to search for information about the MySpace photo hack. And it will be easy to look up, and respond to, comments to this essay, both on Wired.com and on my own blog. Wired.com is a commercial venture, so there is advertising value in keeping everything accessible. My site is not at all commercial, but there is personal value in keeping everything accessible. By that analysis, all sites should be up on the internet forever, although that’s certainly not true. What is true is that there’s no way to predict what will disappear when.

Unfortunately, there’s not much we can do about it. The security measures largely aren’t in our hands. We can save copies of important web pages locally, and copies of anything important we post. The Internet Archive is remarkably valuable in saving bits and pieces of the internet. And recently, we’ve started seeing tools for archiving information and pages from social networking sites. But what’s really important is the whole community, and we don’t know which bits we want until they’re no longer there.
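
Saving local copies doesn’t have to be elaborate, either. Here is a rough sketch (my own illustration, nothing the essay itself describes) that fetches a page and files it away under a dated name before link rot can take it:

```python
# A minimal page-archiving sketch using only the standard library.
# Assumption: the URL passed in is something you have reason to keep a copy of.
import os
import urllib.request
from datetime import datetime
from urllib.parse import urlparse

def archive_page(url, archive_dir="web_archive"):
    os.makedirs(archive_dir, exist_ok=True)
    with urllib.request.urlopen(url, timeout=30) as resp:
        content = resp.read()
    # Build a filename from the host, the path, and the date the copy was made.
    parsed = urlparse(url)
    slug = (parsed.netloc + parsed.path).strip("/").replace("/", "_") or "index"
    stamp = datetime.now().strftime("%Y%m%d")
    path = os.path.join(archive_dir, f"{stamp}_{slug}.html")
    with open(path, "wb") as f:
        f.write(content)
    return path

if __name__ == "__main__":
    # Example: keep a local copy of a recipe or article you might want later.
    print(archive_page("https://example.com/some-article"))
```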

And about Wine Therapy, I think it started in 2000. It might have been 2001. I can’t check, because someone erased the archives.

This essay originally appeared on Wired.com.

Posted on February 27, 2008 at 5:46 AM

Liquid Bomb

I’d love to get details on this:

A television documentary team said it had made a bomb by mixing a series of odourless and colourless chemicals that could be brought into an aircraft by passengers.

The liquids that were mixed to make the explosive cocktail were all contained in bottles of less than 100ml, which is the limit enforced at most airports around the world at present and was introduced shortly after British authorities thwarted an alleged attempt to blow up transatlantic aircraft in August 2006.

[…]

It blew a gaping hole in a decommissioned aircraft, snapping the ribs of the fuselage.

EDITED TO ADD (3/8): More info.

EDITED TO ADD (3/13): Here’s the Channel 4 documentary. And this is well worth reading.

Posted on February 26, 2008 at 3:16 PM

Fear of Internet Predators Largely Unfounded

Does this really come as a surprise?

“There’s been some overreaction to the new technology, especially when it comes to the danger that strangers represent,” said Janis Wolak, a sociologist at the Crimes against Children Research Center at the University of New Hampshire in Durham.

“Actually, Internet-related sex crimes are a pretty small proportion of sex crimes that adolescents suffer,” Wolak added, based on three nationwide surveys conducted by the center.

[…]

In an article titled “Online ‘Predators’ and Their Victims,” which appears Tuesday in American Psychologist, the journal of the American Psychological Association, Wolak and co-researchers examined several fears that they concluded are myths:

  • Internet predators are driving up child sex crime rates.

    Finding: Sex assaults on teens fell 52 percent from 1993 to 2005, according to the Justice Department’s National Crime Victimization Survey, the best measure of U.S. crime trends. “The Internet may not be as risky as a lot of other things that parents do without concern, such as driving kids to the mall and leaving them there for two hours,” Wolak said.

  • Internet predators are pedophiles.

    Finding: Internet predators don’t hit on the prepubescent children whom pedophiles target. They target adolescents, who have more access to computers, more privacy and more interest in sex and romance, Wolak’s team determined from interviews with investigators.

  • Internet predators represent a new dimension of child sexual abuse.

    Finding: The means of communication is new, according to Wolak, but most Internet-linked offenses are essentially statutory rape: nonforcible sex crimes against minors too young to consent to sexual relationships with adults.

  • Internet predators trick or abduct their victims.

    Finding: Most victims meet online offenders face-to-face and go to those meetings expecting to engage in sex. Nearly three-quarters have sex with partners they met on the Internet more than once.

  • Internet predators meet their victims by posing online as other teens.

    Finding: Only 5 percent of predators did that, according to the survey of investigators.

  • Online interactions with strangers are risky.

    Finding: Many teens interact online all the time with people they don’t know. What’s risky, according to Wolak, is giving out names, phone numbers and pictures to strangers and talking online with them about sex.

  • Internet predators go after any child.

    Finding: Usually their targets are adolescent girls or adolescent boys of uncertain sexual orientation, according to Wolak. Youths with histories of sexual abuse, sexual orientation concerns and patterns of off- and online risk-taking are especially at risk.

In January, I said this:

…there isn’t really any problem with child predators—just a tiny handful of highly publicized stories—on MySpace. It’s just security theater against a movie-plot threat. But we humans have a well-established cognitive bias that overestimates threats against our children, so it all makes sense.

EDITED TO ADD (3/7): A good essay.

Posted on February 26, 2008 at 6:30 AM

Research on Malware Distribution

Interesting:

Among their conclusions are that the majority of malware distribution sites are hosted in China, and that 1.3% of Google searches return at least one link to a malicious site. The lead author, Niels Provos, wrote, ‘It has been over a year and a half since we started to identify web pages that infect vulnerable hosts via drive-by downloads, i.e. web pages that attempt to exploit their visitors by installing and running malware automatically. During that time we have investigated billions of URLs and found more than three million unique URLs on over 180,000 web sites automatically installing malware. During the course of our research, we have investigated not only the prevalence of drive-by downloads but also how users are being exposed to malware and how it is being distributed.’

Draft paper, and some data.

Posted on February 26, 2008 at 6:23 AM

Friday Squid Blogging: Camouflage in Squids

How squids and other cephalopods camouflage themselves:

A clue to how cephalopods disguise themselves so quickly came to Dr. Hanlon when he and his colleagues reviewed thousands of images of cuttlefish, trying to sort their patterns into categories. “It finally dawned on me there aren’t dozens of camouflage patterns,” he said. “I can squeeze them into three categories.”

One category is a uniform color. Cephalopods take on this camouflage to match a smooth-textured background. The second category consists of mottled patterns that help them hide in busier environments. Dr. Hanlon calls the third category disruptive patterning. A cuttlefish creates large blocks of light and dark on its skin. This camouflage disrupts the body outlines.

It’s not often you can find research on the intersection of security and squid.

Posted on February 22, 2008 at 4:09 PM

Amtrak to Start Passenger Screening

Amtrak is going to start randomly screening passengers, in an effort to close the security-theater gap between trains and airplanes.

It’s kind of random:

The teams will show up unannounced at stations and set up baggage screening areas in front of boarding gates. Officers will randomly pull people out of line and wipe their bags with a special swab that is then put through a machine that detects explosives. If the machine detects anything, officers will open the bag for visual inspection.

Anybody who is selected for screening and refuses will not be allowed to board and their ticket will be refunded.

In addition to the screening, counterterrorism officers with bomb-sniffing dogs will patrol platforms and walk through trains, and sometimes will ride the trains, officials said.

This is the most telling comment:

“There is no new or different specific threat,” [Amtrak chief executive Alex] Kummant said. “This is just the correct step to take.”

Why is it the correct step to take? Because it makes him feel better. That’s the very definition of security theater.

Posted on February 22, 2008 at 12:17 PM

Cryptanalysis of A5/1

There have been a lot of articles about the new attack against the GSM cell phone encryption algorithm, A5/1. In some ways, this isn’t real news; we’ve seen A5/1 cryptanalysis papers as far back as ten years ago.

What’s new about this attack is: 1) it’s completely passive, 2) its total hardware cost is around $1,000, and 3) the total time to break the key is about 30 minutes. That’s impressive.

The cryptanalysis of A5/1 demonstrates an important cryptographic maxim: attacks always get better; they never get worse. This is why we tend to abandon algorithms at the first sign of weakness; we know that with time, the weaknesses will be exploited more effectively to yield better and faster attacks.

Posted on February 22, 2008 at 6:31 AM

Cold Boot Attacks Against Disk Encryption

Nice piece of research:

We show that disk encryption, the standard approach to protecting sensitive data on laptops, can be defeated by relatively simple methods. We demonstrate our methods by using them to defeat three popular disk encryption products: BitLocker, which comes with Windows Vista; FileVault, which comes with MacOS X; and dm-crypt, which is used with Linux.

[…]

The root of the problem lies in an unexpected property of today’s DRAM memories. DRAMs are the main memory chips used to store data while the system is running. Virtually everybody, including experts, will tell you that DRAM contents are lost when you turn off the power. But this isn’t so. Our research shows that data in DRAM actually fades out gradually over a period of seconds to minutes, enabling an attacker to read the full contents of memory by cutting power and then rebooting into a malicious operating system.

Interestingly, if you cool the DRAM chips, for example by spraying inverted cans of “canned air” dusting spray on them, the chips will retain their contents for much longer. At these temperatures (around -50 °C) you can remove the chips from the computer and let them sit on the table for ten minutes or more, without appreciable loss of data. Cool the chips in liquid nitrogen (-196 °C) and they hold their state for hours at least, without any power. Just put the chips back into a machine and you can read out their contents.

This is deadly for disk encryption products because they rely on keeping master decryption keys in DRAM. This was thought to be safe because the operating system would keep any malicious programs from accessing the keys in memory, and there was no way to get rid of the operating system without cutting power to the machine, which “everybody knew” would cause the keys to be erased.

Our results show that an attacker can cut power to the computer, then power it back up and boot a malicious operating system (from, say, a thumb drive) that copies the contents of memory. Having done that, the attacker can search through the captured memory contents, find any crypto keys that might be there, and use them to start decrypting hard disk contents. We show very effective methods for finding and extracting keys from memory, even if the contents of memory have faded somewhat (i.e., even if some bits of memory were flipped during the power-off interval). If the attacker is worried that memory will fade too quickly, he can chill the DRAM chips before cutting power.

There seems to be no easy fix for these problems. Fundamentally, disk encryption programs now have nowhere safe to store their keys. Today’s Trusted Computing hardware does not seem to help; for example, we can defeat BitLocker despite its use of a Trusted Platform Module.
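
The key-finding step is worth making concrete. Here is a rough sketch (my own illustration in Python, not the researchers’ released tool; the error threshold is an arbitrary assumption) of the underlying idea: an AES-128 key sitting in RAM is normally accompanied by its 176-byte key schedule, so you can slide over a captured memory image, expand each 16-byte window as if it were a key, and flag windows whose derived schedule nearly matches the bytes that follow, even after some bits have decayed:

```python
# Scan a raw memory dump for likely AES-128 keys by exploiting the redundancy
# of the key schedule. Slow and simplistic, but it shows why partially decayed
# memory is still dangerous: schedule bytes that disagree can be treated as
# bit errors rather than disqualifiers.
import sys

def build_sbox():
    # Generate the AES S-box: multiplicative inverse in GF(2^8), then the affine map.
    def gmul(a, b):
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            hi = a & 0x80
            a = (a << 1) & 0xFF
            if hi:
                a ^= 0x1B
            b >>= 1
        return p

    inv = [0] * 256
    for a in range(1, 256):
        for b in range(1, 256):
            if gmul(a, b) == 1:
                inv[a] = b
                break

    def affine(x):
        r = 0x63
        for i in range(5):
            r ^= ((x << i) | (x >> (8 - i))) & 0xFF
        return r

    return [affine(inv[x]) for x in range(256)]

SBOX = build_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key16):
    # Standard AES-128 key expansion: 44 four-byte words, 176 bytes in total.
    words = [list(key16[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(words[i - 1])
        if i % 4 == 0:
            t = t[1:] + t[:1]                # RotWord
            t = [SBOX[b] for b in t]         # SubWord
            t[0] ^= RCON[i // 4 - 1]         # round constant
        words.append([a ^ b for a, b in zip(words[i - 4], t)])
    return bytes(b for w in words for b in w)

def hamming(a, b):
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def scan(image, max_flipped_bits=16):
    # max_flipped_bits is a made-up tolerance for decay; tune it to taste.
    for off in range(len(image) - 175):
        candidate = image[off:off + 16]
        if hamming(expand_key(candidate), image[off:off + 176]) <= max_flipped_bits:
            yield off, candidate.hex()

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:       # e.g. a raw dump of captured RAM
        dump = f.read()
    for off, key in scan(dump):
        print(f"possible AES-128 key at offset {off:#x}: {key}")
```

Run against a real dump this would be slow and noisy, but it illustrates the point: the redundancy in the key schedule makes flipped bits correctable rather than fatal.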

The paper is here; more info is here. Articles here.

There is a general security problem illustrated here: it is very difficult to secure data when the attacker has physical control of the machine the data is stored on. I talk about the general problem here, and it’s a hard problem.

EDITED TO ADD (2/26): How-to, with pictures.

Posted on February 21, 2008 at 1:29 PM

U.S. Post Office to Enable Wholesale Surveillance of Mail

The post office is launching a new barcode on first class mail that will enable the sender to track mail through the system:

With the new bar code, companies will be able to track mail delivery and know when their customers got a bill, solicitation or product, and the Postal Service will have another way of checking that mail is being delivered on time.

Companies also will be given a chance to buy data collected by the post office that will give them insights into how customers respond to advertising and marketing. A company, for instance, can buy a television or newspaper ad to tout a new product, follow up with an announcement in the mail and get a sense of how well the ad is connecting with customers.

So now the government will have a database of who sends mail to whom. Of course, there’s no discussion of this in the news article.

EDITED TO ADD: The plan only applies to commercial mail, like ad mailers and magazines, not to letters that individual people send each other.

Posted on February 21, 2008 at 6:26 AM

Foreign Hackers Stealing American Health Care Records

What in the world is going on here?

Foreign hackers, primarily from Russia and China, are increasingly seeking to steal Americans’ health care records, according to a Department of Homeland Security analyst.

Mark Walker, who works in DHS’ Critical Infrastructure Protection Division, told a workshop audience at the National Institute of Standards and Technology that the hackers’ primary motive seems to be espionage.

Espionage? Um, how?

Walker said the hackers are seeking to exfiltrate health care data. “We don’t know why,” he added. “We want to know why.” At the same time, he said, it’s clear that “medical information can be used against us from a national security standpoint.”

How? It’s not at all clear to me.

Any health problems among the nation’s leaders would be of interest to potential enemies, he said.

This just has to be another joke.

EDITED TO ADD (3/13): More.

Posted on February 20, 2008 at 12:30 PM

Hijacking in New Zealand

There are a couple of interesting things about the hijacking in New Zealand two weeks ago. First, it was a traditional hijacking. Remember after 9/11 when people said that the era of airplane hijacking was over, that it would no longer be possible to hijack an airplane and demand a ransom or demand passage to some exotic location? Turns out that’s just not true; there still can be traditional non-terrorist hijackings.

And even more interesting, the media coverage reflected that. Read the links above. They’re calm and reasoned. There’s no mention of the T-word. We’re not all cautioned that we’re going to die. If anything, they’re recommending that everyone not overreact.

Refreshing, really.

EDITED TO ADD (2/25): And this:

Mr Williamson today said the idea behind anything involving transport was “safety at reasonable cost”.

He said the Government needed to weigh up the cost of x-ray screening every passenger on a small plane against the risk of such an attempted hijacking happening again.

“I just think it’s over the top, sledgehammer to crack a nut stuff and my advice to the Cabinet this morning is just make sure you’re very careful … to consider what the costs are.”

Posted on February 20, 2008 at 7:26 AM

Spending Money on the Wrong Security Threats

This story is a year and a half old, but the lessons are still good:

Kim Hyten, emergency management director in Putnam County, said he didn’t realize homeland security grants can now be used to prepare for tornados. As a result, Putnam County is using its grant money to prepare for something else.

“Weapons of mass destruction,” Hyten said.

That’s right—weapons of mass destruction. This year, Putnam County spent most of its $58,000 homeland security grant to buy dozens of gas masks, boxes full of chemical suits, a plutonium-detecting gamma and neutron ray radiological monitor and, for good measure, this rural county about fifty miles west of Indianapolis also ordered plenty of weapons of mass destruction test strips.

But asked whether weapons of mass destruction are a concern, Hyten replied: “The weapons of mass destruction—I don’t believe this county has ever, when we did our terrorism protection plan, ever looked at that we’d be a targeted site.”

Posted on February 19, 2008 at 7:18 AM

Benevolent Worms

This is a stupid idea:

Milan Vojnovic and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnovic, because they waste time exploring groups or “subnets” of computers that contain few uninfected hosts.

This idea pops up every few years. This is what I wrote back in 2003, updating something I wrote in 2000:

This is tempting for several reasons. One, it’s poetic: turning a weapon against itself. Two, it lets ethical programmers share in the fun of designing worms. And three, it sounds like a promising technique to solve one of the nastiest online security problems: patching or repairing computers’ vulnerabilities.

Everyone knows that patching is in shambles. Users, especially home users, don’t do it. The best patching techniques involve a lot of negotiation, pleading, and manual labor…things that nobody enjoys very much. Beneficial worms look like a happy solution. You turn a Byzantine social problem into a fun technical problem. You don’t have to convince people to install patches and system updates; you use technology to force them to do what you want.

And that’s exactly why it’s a terrible idea. Patching other people’s machines without annoying them is good; patching other people’s machines without their consent is not. A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, allowing for uninstallation—all of these make worms harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain, all make for bad software distribution.

EDITED TO ADD (2/19): This is worth reading on the topic.

EDITED TO ADD (2/19): Microsoft is trying to dispel the rumor that it is working on this technology.

EDITED TO ADD (2/21): Using benevolent worms to test Internet censorship.

EDITED TO ADD (3/13): The benevolent W32.Welchia.Worm, intended to fix Blaster-infected systems, just created havoc.

Posted on February 19, 2008 at 6:57 AM

Sonic Weapon

Story of a sonic blaster:

Here’s how it works: Inferno uses four frequencies spread out over 2 to 5 kHz. The idea behind it is that unlike a regular siren, these particular frequencies have a uniquely disturbing effect on people (and presumably cats, dogs and any other living thing). At 123 dB, it’s loud, but not significantly louder than any other alarm system. The advantage, according to Dr. Goldman, is the combination of frequencies. The human ear just doesn’t like it. I agree, I really didn’t like it.

Note to the TSA: Dr. Goldman has had no problems bringing this thing onto airplanes.

Posted on February 18, 2008 at 6:16 AM

Credentica

Cryptographer Stefan Brands has a new company, Credentica, that allows people to disclose personal information while maintaining privacy and minimizing the threat of identity theft.

I know Stefan; he’s good. The cryptography behind this system is almost certainly impeccable. I like systems like this, and I want them to succeed. I just don’t see a viable business model.

I’d like to be proven wrong.

Posted on February 15, 2008 at 5:02 AM

DHS Warns of Female Suicide Bombers

First paragraph:

Terrorists increasingly favor using women as suicide bombers to thwart security and draw attention to their causes, a new FBI-Department of Homeland Security assessment concludes.

Photo caption:

Female suicide bombers can use devices to make them appear pregnant, a security assessment says.

Second paragraph:

The assessment said the agencies “have no specific, credible intelligence indicating that terrorist organizations intend to utilize female suicide bombers against targets in the homeland.”

Does the DHS think we’re idiots or something?

Posted on February 13, 2008 at 12:35 PM

Giving Drivers Licenses to Illegal Immigrants

Many people say that allowing illegal aliens to obtain state driver’s licenses helps them and encourages them to remain illegally in this country. Michigan Attorney General Mike Cox late last year issued an opinion that licenses could be issued only to legal state residents, calling it “one more tool in our initiative to bolster Michigan’s border and document security.”

In reality, we are a much more secure nation if we do issue driver’s licenses and/or state IDs to every resident who applies, regardless of immigration status. Issuing them doesn’t make us any less secure, and refusing puts us at risk.

The state driver’s license databases are the only comprehensive databases of U.S. residents. They’re more complete, and contain more information – including photographs and, in some cases, fingerprints – than the IRS database, the Social Security database, or state birth certificate databases. As such, they are an invaluable police tool – for investigating crimes, tracking down suspects, and proving guilt.

Removing the 8 to 15 million illegal immigrants from these databases would only make law enforcement harder. Of course, the unlicensed won’t pack up and leave. They will drive without licenses, increasing insurance premiums for everyone. They will use fake IDs, buy real IDs from crooked DMV employees – as several of the 9/11 terrorists did – forge “breeder documents” to get real IDs (another 9/11 terrorist trick), or resort to identity theft. These millions of people will continue to live and work in this country, invisible to any government database and therefore to the police.

Assuming that denying licenses to illegals will make them leave is head-in-the-sand thinking.

Of course, even an attempt to deny licenses to illegal immigrants puts DMV clerks in the impossible position of verifying immigration status. This is expensive and time-consuming; furthermore, it won’t work. The law is complicated, and it can take hours to verify someone’s status only to get it wrong. Paperwork can be easy to forge, far easier than driver’s licenses, meaning many illegal immigrants will get these licenses that now “prove” immigrant status.

Even more legal immigrants will be mistakenly denied licenses, resulting in lawsuits and additional government expense.

Some states have considered a tiered license system, one that explicitly lists immigration status on the licenses. Of course, this won’t work either. Illegal immigrants are far more likely to take their chances being caught than admit their immigration status to the DMV.

We are all safer if everyone in society trusts and respects law enforcement. A society where illegal immigrants are afraid to talk to police because of fear of deportation is a society where fewer people come forward to report crimes, aid police investigations, and testify as witnesses.

And finally, denying driver’s licenses to illegal immigrants will not protect us from terrorism. Contrary to popular belief, a driver’s license is not required to board a plane. You can use any government-issued photo ID, including a foreign passport. And if you’re willing to undergo secondary screening, you can board a plane without an ID at all. This is probably how anybody on the “no fly” list gets around these days.

A 2003 American Association of Motor Vehicle Administrators report concludes: “Digital images from driver’s licenses have significantly aided law enforcement agencies charged with homeland security. The 19 (9/11) terrorists obtained driver licenses from several states, and federal authorities relied heavily on these images for the identification of the individuals responsible.”

Whether it’s the DHS trying to protect the nation from terrorism, or local, state and national law enforcement trying to protect the nation from crime, we are all safer if we encourage every adult in America to get a driver’s license.

This op-ed originally appeared in the Detroit Free Press.

Posted on February 13, 2008 at 5:57 AM

U.S. Customs Seizing Laptops

I’ve heard many anecdotal stories about U.S. Customs and Border Protection seizing, copying data from, or otherwise accessing laptops of people entering the country. But this is very mainstream:

Today, the Electronic Frontier Foundation and Asian Law Caucus, two civil liberties groups in San Francisco, plan to file a lawsuit to force the government to disclose its policies on border searches, including which rules govern the seizing and copying of the contents of electronic devices. They also want to know the boundaries for asking travelers about their political views, religious practices and other activities potentially protected by the First Amendment. The question of whether border agents have a right to search electronic devices at all without suspicion of a crime is already under review in the federal courts.

The lawsuit was inspired by two dozen cases, 15 of which involved searches of cellphones, laptops, MP3 players and other electronics. Almost all involved travelers of Muslim, Middle Eastern or South Asian background, many of whom, including Mango and the tech engineer, said they are concerned they were singled out because of racial or religious profiling.

Some of this seems pretty severe:

“I was assured that my laptop would be given back to me in 10 or 15 days,” said [Maria] Udy, who continues to fly into and out of the United States. She said the federal agent copied her log-on and password, and asked her to show him a recent document and how she gains access to Microsoft Word. She was asked to pull up her e-mail but could not because of lack of Internet access. With ACTE’s help, she pressed for relief. More than a year later, Udy has received neither her laptop nor an explanation.

[…]

Kamran Habib, a software engineer with Cisco Systems, has had his laptop and cellphone searched three times in the past year. Once, in San Francisco, an officer “went through every number and text message on my cellphone and took out my SIM card in the back,” said Habib, a permanent U.S. resident. “So now, every time I travel, I basically clean out my phone. It’s better for me to keep my colleagues and friends safe than to get them on the list as well.”

Privacy? There’s no need to worry:

Hollinger said customs officers “are trained to protect confidential information.”

I know I feel better.

I strongly recommend the two-tier encryption strategy I described here. And I even more strongly recommend cleaning out your laptop and BlackBerry regularly; if you don’t have it on your computer, no one else can get his hands on it. This defense not only works against U.S. customs, but against the much more likely threat of you losing the damn thing.

And the TSA wants you to know that it’s not them.

Posted on February 12, 2008 at 12:23 PM

Lock-In

Buying an iPhone isn’t the same as buying a car or a toaster. Your iPhone comes with a complicated list of rules about what you can and can’t do with it. You can’t install unapproved third-party applications on it. You can’t unlock it and use it with the cellphone carrier of your choice. And Apple is serious about these rules: A software update released in September 2007 erased unauthorized software and—in some cases—rendered unlocked phones unusable.

“Bricked” is the term, and Apple isn’t the least bit apologetic about it.

Computer companies want more control over the products they sell you, and they’re resorting to increasingly draconian security measures to get that control. The reasons are economic.

Control allows a company to limit competition for ancillary products. With Mac computers, anyone can sell software that does anything. But Apple gets to decide who can sell what on the iPhone. It can foster competition when it wants, and reserve itself a monopoly position when it wants. And it can dictate terms to any company that wants to sell iPhone software and accessories.

This increases Apple’s bottom line. But the primary benefit of all this control for Apple is that it increases lock-in. “Lock-in” is an economic term for the difficulty of switching to a competing product. For some products—cola, for example—there’s no lock-in. I can drink a Coke today and a Pepsi tomorrow: no big deal. But for other products, it’s harder.

Switching word processors, for example, requires installing a new application, learning a new interface and a new set of commands, converting all the files (which may not convert cleanly) and custom software (which will certainly require rewriting), and possibly even buying new hardware. If Coke stops satisfying me for even a moment, I’ll switch: something Coke learned the hard way in 1985 when it changed the formula and started marketing New Coke. But my word processor has to really piss me off for a good long time before I’ll even consider going through all that work and expense.

Lock-in isn’t new. It’s why all gaming-console manufacturers make sure that their game cartridges don’t work on any other console, and how they can price the consoles at a loss and make the profit up by selling games. It’s why Microsoft never wants to open up its file formats so other applications can read them. It’s why music purchased from Apple for your iPod won’t work on other brands of music players. It’s why every U.S. cellphone company fought against phone number portability. It’s why Facebook sues any company that tries to scrape its data and put it on a competing website. It explains airline frequent flyer programs, supermarket affinity cards and the new My Coke Rewards program.

With enough lock-in, a company can protect its market share even as it reduces customer service, raises prices, refuses to innovate and otherwise abuses its customer base. It should be no surprise that this sounds like pretty much every experience you’ve had with IT companies: Once the industry discovered lock-in, everyone started figuring out how to get as much of it as they can.

Economists Carl Shapiro and Hal Varian even proved that the value of a software company is the total lock-in. Here’s the logic: Assume, for example, that you have 100 people in a company using MS Office at a cost of $500 each. If it cost the company less than $50,000 to switch to Open Office, they would. If it cost the company more than $50,000, Microsoft would increase its prices.
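
To make that arithmetic concrete, here is a toy version of the calculation (my own simplification, not Shapiro and Varian’s actual model):

```python
# The vendor can charge a premium right up to the point where paying it
# is still cheaper for the customer than paying the one-time switching cost.
def max_sustainable_premium(seats, switching_cost):
    # The total extractable premium is bounded by the switching cost;
    # spread over the installed base it becomes a per-seat markup.
    return switching_cost / seats

seats = 100
switching_cost = 50_000      # what it would cost this company to move off the product

premium = max_sustainable_premium(seats, switching_cost)
print(f"Sustainable markup over a competitor: ${premium:.0f} per seat")
print(f"Total value of the lock-in: ${premium * seats:,.0f}")
```

With the essay’s numbers, the markup works out to the full $500 per seat, which is the point: the lock-in is worth exactly what it would cost the customers to escape it.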

Mostly, companies increase their lock-in through security mechanisms. Sometimes patents preserve lock-in, but more often it’s copy protection, digital rights management (DRM), code signing or other security mechanisms. These security features aren’t what we normally think of as security: They don’t protect us from some outside threat, they protect the companies from us.

Microsoft has been planning this sort of control-based security mechanism for years. First called Palladium and now NGSCB (Next-Generation Secure Computing Base), the idea is to build a control-based security system into the computing hardware. The details are complicated, but the results range from only allowing a computer to boot from an authorized copy of the OS to prohibiting the user from accessing “unauthorized” files or running unauthorized software. The competitive benefits to Microsoft are enormous (.pdf).

Of course, that’s not how Microsoft advertises NGSCB. The company has positioned it as a security measure, protecting users from worms, Trojans and other malware. But control does not equal security; and this sort of control-based security is very difficult to get right, and sometimes makes us more vulnerable to other threats. Perhaps this is why Microsoft is quietly killing NGSCB—we’ve gotten BitLocker, and we might get some other security features down the line—despite the huge investment hardware manufacturers made when incorporating special security hardware into their motherboards.

In my last column, I talked about the security-versus-privacy debate, and how it’s actually a debate about liberty versus control. Here we see the same dynamic, but in a commercial setting. By confusing control and security, companies are able to force control measures that work against our interests by convincing us they are doing it for our own safety.

As for Apple and the iPhone, I don’t know what they’re going to do. On the one hand, there’s this analyst report that claims there are over a million unlocked iPhones, costing Apple between $300 million and $400 million in revenue. On the other hand, Apple is planning to release a software development kit this month, reversing its earlier restriction and allowing third-party vendors to write iPhone applications. Apple will attempt to keep control through a secret application key that will be required by all “official” third-party applications, but of course it’s already been leaked.

And the security arms race goes on …

This essay previously appeared on Wired.com.

EDITED TO ADD (2/12): Slashdot thread.

And critical commentary, which is oddly political:

This isn’t lock-in, it’s called choosing a product that meets your needs. If you don’t want to be tied to a particular phone network, don’t buy an iPhone. If installing third-party applications (between now and the end of February, when officially-sanctioned ones will start to appear) is critically important to you, don’t buy an iPhone.

It’s one thing to grumble about an otherwise tempting device not supporting some feature you would find useful; it’s another entirely to imply that this represents anti-libertarian lock-in. The fact remains, you are free to buy one of the many other devices on the market that existed before there ever was an iPhone.

Actually, lock-in is one of the factors you have to consider when choosing a product to meet your needs. It’s not one thing or the other. And lock-in is certainly not “anti-libertarian.” Lock-in is what you get when you have an unfettered free market competing for customers; it’s libertarian utopia. Government regulations that limit lock-in tactics—something I think would be very good for society—are what’s anti-libertarian.

Here’s a commentary on that previous commentary. This is some good commentary, too.

Posted on February 12, 2008 at 6:08 AM

How the MPAA Might Enforce Copyright on the Internet

Interesting speculation from Nicholas Weaver:

All that is necessary is that the MPAA or their contractor automatically spiders for torrents. When it finds torrents, it connects to each torrent with manipulated clients. The client would first transfer enough content to verify copyright, and then attempt to map the participants in the Torrent.

Now the MPAA has a “map” of the participants, a graph of all clients of a particular stream. Simply send this as an automated message to the ISP saying “This current graph is bad, block it”. All the ISP has to do is put in a set of short lived (10 minute) router ACLs which block all pairs that cross its network, killing all traffic for that torrent on the ISP’s network. By continuing to spider the Torrent, the MPAA can find new users as they are added and dropped, updating the map to the ISP in near-real-time.

Note that this requires no wiretapping, and nicely minimizes false positives.
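
To make the ISP side of this concrete, here is a rough sketch (my own illustration, not Weaver’s code; the address ranges and peer list are hypothetical) of turning a swarm map into short-lived block rules:

```python
# Given the rights holder's "map" of swarm participants, keep only the pairs
# whose traffic would cross this ISP's network and emit temporary deny rules.
import ipaddress
from itertools import combinations

ISP_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]   # hypothetical customer range

def on_net(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ISP_PREFIXES)

def acl_rules(swarm_peers, ttl_minutes=10):
    """Yield (peer_a, peer_b, ttl) for every pair that involves one of our customers."""
    for a, b in combinations(sorted(set(swarm_peers)), 2):
        if on_net(a) or on_net(b):
            yield a, b, ttl_minutes

if __name__ == "__main__":
    # Peer list as reported by the crawler (made-up addresses).
    peers = ["203.0.113.5", "203.0.113.77", "198.51.100.20", "192.0.2.9"]
    for a, b, ttl in acl_rules(peers):
        print(f"deny ip host {a} host {b}   ! expires in {ttl} minutes")
```

The rules would be refreshed as the crawler reports peers joining and leaving, which is what keeps the blocking near-real-time.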

Debate on idea here.

Posted on February 11, 2008 at 1:24 PM

Improvements in Face Recognition

Ignore the laughable “100% accurate” claim; this is an interesting idea:

Mike Burton, Professor of Psychology at Glasgow, and lecturer Rob Jenkins say they achieved their hugely-improved results by eliminating the variable effects of age, hairstyle, expression, lighting, different camera equipment etc. This was done by producing a composite “average face” for a person, synthesised from twenty different pictures across a range of ages, lighting and so on.

Not useful when you only have one grainy photograph of your target, but interesting research nonetheless.
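
The averaging step itself is almost trivial. Here is a minimal sketch (my own illustration; the real work of aligning the faces is glossed over, and the file paths are hypothetical):

```python
# Average several aligned, same-sized photos of one person into a composite
# "average face"; per-pixel averaging washes out lighting, expression,
# and camera differences.
import numpy as np
from PIL import Image   # assumes Pillow is installed

def average_face(image_paths, size=(128, 128)):
    stack = []
    for path in image_paths:
        img = Image.open(path).convert("L").resize(size)   # grayscale, common size
        stack.append(np.asarray(img, dtype=np.float64))
    composite = np.mean(stack, axis=0)
    return Image.fromarray(composite.astype(np.uint8))

if __name__ == "__main__":
    # Twenty photos of the same person across ages and lighting conditions.
    photos = [f"person/photo_{i:02d}.jpg" for i in range(20)]
    average_face(photos).save("person/average_face.png")
```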

Posted on February 11, 2008 at 7:18 AM

The Onion on Terror

Excellent:

We must all do whatever we can to preserve America by refocusing our priorities back on the contemplation of lethal threats—invisible nightmarish forces plotting to destroy us in a number of horrific ways. It is only through the vigilance and determination of every patriot that we can maintain the sense of total dread vital to the prolonged existence of a thriving, quivering America.

Our country deserves no less than every citizen living in apprehension.

Fear has always made America strong. Were we ever more determined than during the Yellow Scare? When every Christian gentleman lived in mortal terror of his daughter being doped up on opium and raped by pagan, mustachioed Chinamen? What about the Red Scare, when citizens from all walks of life showed their pride by turning in their friends and associates to rabid anticommunists? Has America ever been more resolute?

The whole thing is funny, and far too real.

Posted on February 8, 2008 at 1:28 PM

Mujahideen Secrets 2

Mujahideen Secrets 2 is a new version of an encryption tool, ostensibly written to help Al Qaeda members encrypt secrets as they communicate on the Internet.

A bunch of sites have covered this story, and a couple of security researchers are quoted in the various articles. But quotes like this make you wonder if they have any idea what they’re talking about:

Mujahideen Secrets 2 is a very compelling piece of software, from an encryption perspective, according to Henry. He said the new tool is easy to use and provides 2,048-bit encryption, an improvement over the 256-bit AES encryption supported in the original version.

No one has explained why a terrorist would use this instead of PGP—perhaps they simply don’t trust anything coming from a U.S. company. But honestly, this isn’t a big deal at all: strong encryption software has been around for over fifteen years now, either cheap or free. And the NSA probably breaks most of the stuff by guessing the password, anyway. Unless the whole program is an NSA plant, that is.

My question: the articles claim that the program uses several encryption algorithms, including RSA and AES. Does it use Blowfish or Twofish?

Posted on February 8, 2008 at 5:39 AM

Cyber Storm Details

Recently the Associated Press obtained hundreds of pages of documents related to the 2006 “Cyber Storm” exercise. Most interesting is the part where the participants attacked the game computers and pissed the referees off:

However, the government’s files hint at a tantalizing mystery: In the middle of the war game, someone quietly attacked the very computers used to conduct the exercise. Perplexed organizers traced the incident to overzealous players and sent everyone an urgent e-mail marked “IMPORTANT!” reminding them not to probe or attack the game computers.

“Any time you get a group of (information technology) experts together, there’s always a desire, ‘Let’s show them what we can do,'” said George Foresman, a former senior Homeland Security official who oversaw Cyber Storm. “Whether its intent was embarrassment or a prank, we had to temper the enthusiasm of the players.”

See also this. CyberStorm report here.

Posted on February 7, 2008 at 2:30 PM

Heavily Armed Officers on New York City Subways

Why does anyone think this is a good idea?

In the first counterterrorism strategy of its kind in the nation, roving teams of New York City police officers armed with automatic rifles and accompanied by bomb-sniffing dogs will patrol the city’s subway system daily, beginning next month, officials said on Friday.

Under a tactical plan called Operation Torch, the officers will board trains and patrol platforms, focusing on sites like Pennsylvania Station, Herald Square, Columbus Circle, Rockefeller Center and Times Square in Manhattan, and Atlantic Avenue in Brooklyn.

What does it accomplish besides intimidating innocent commuters?

Posted on February 7, 2008 at 6:06 AM

Cloned Trucks

Criminals are using cloned trucks to bypass security:

Savvy criminals are using some of the country’s most credible logos, including FedEx, Wal-Mart, DirecTV and the U.S. Border Patrol, to create fake trucks to smuggle drugs, money and illegal aliens across the border, according to a report by the Florida Department of Law Enforcement.

[…]

In August 2006, the Texas Department of Public Safety, on a routine traffic stop, found 3,058 pounds of marijuana and 204 kilograms of cocaine in a “cloned” Wal-Mart semi-trailer, driven by a man wearing a Wal-Mart uniform.

In another case, a truck painted with DirecTV and other markings was pulled over in a routine traffic stop in Mississippi and discovered to be carrying 786 pounds of cocaine.

This is the same problem as fake uniforms, and the more general problem of fake credentials. It’s very hard to solve.

EDITED TO ADD (2/6): Here’s someone who puts on a red shirt and pretends to be a Target employee so he can steal stuff:

Police in North Miami Beach are looking for a man they say likes to pose as a Target employee while stealing pricey iPods, and the man allegedly knows so much about the store, he’s even helped customers who thought he was a real employee.

[…]

Investigators say McKenzie simply walks into the stores, wearing a red polo shirt, and pretends he works there. North Miami Beach police officials say he has extensive knowledge of Target procedures and has even assisted customers.

Posted on February 6, 2008 at 12:37 PM

Fourth Undersea Cable Failure in Middle East

The first two affected India, Pakistan, Egypt, Qatar, Saudi Arabia, the United Arab Emirates, Kuwait, and Bahrain. The third one is between the UAE and Oman. The fourth one connected Qatar and the UAE. This one may not have been cut, but taken offline due to power issues.

The first three have been blamed on ships’ anchors, but there is some dispute about that. And that’s two in the Mediterranean and two in the Persian Gulf.

There have been no official reports of malice, but to me it’s an awfully big coincidence. The fact that Iran has lost Internet connectivity only makes this weirder.

EDITED TO ADD (2/5): The International Herald Tribune has more. And a comment below questions whether Iran being offline has anything to do with this.

EDITED TO ADD (2/5): A fifth cut? What the hell is going on out there?

EDITED TO ADD (2/5): More commentary from Steve Bellovin.

EDITED TO ADD (2/5): Just to be clear: Iran is not offline. That rumor was never true.

Posted on February 5, 2008 at 8:28 PM

UK Two-Tier Tax Security System

Poor security for everyone except the rich and powerful:

The security of the online computer system used by more than three million people to file tax returns is in doubt after HM Revenue and Customs admitted it was not secure enough to be used by MPs, celebrities and the Royal Family.

Thousands of “high profile” people have been secretly barred from using the online tax return system amid concerns that their confidential details would be put at risk.

Posted on February 5, 2008 at 2:38 PM

A Good Security Investment by DHS

They’re paying for open source software to be scanned for security bugs, and then fixing them.

All the software scrutinized was found to have significant numbers of security flaws, Coverity said on Wednesday. Since 2006 the project has helped fix 7,826 open source flaws in 250 projects, out of 50 million lines of code scanned, the company said.

They find, on average, one security flaw per 1,000 lines of code. And when the flaw is fixed, everyone’s security improves.

Posted on February 5, 2008 at 6:30 AM

Little People Hiding in Luggage

This is both clever and very weird:

Swedish police are quizzing “people of limited stature” with criminal records following a spate of robberies from the cargo holds of coaches – possibly carried out by dwarves smuggled onboard in sports bags.

[…]

National coach operator Swebus confirmed it’d been hit by the audacious crims, who have over the last few months lifted “thousands of pounds” in cash, jewellery and other valuables.

The company’s sales manager, Ingvar Ryggasjo, said that one short person had been put aboard a coach in a hockey bag. A female passenger said she’d seen some men squeezing the “large, heavy bag” into the cargo hold, and that she later found she’d been relieved of stuff including a camera and purse.

Posted on February 4, 2008 at 1:19 PM

NSA Monitoring U.S. Government Internet Traffic

I have mixed feelings about this, but in general I think it is a good idea:

President Bush signed a directive this month that expands the intelligence community’s role in monitoring Internet traffic to protect against a rising number of attacks on federal agencies’ computer systems.

The directive, whose content is classified, authorizes the intelligence agencies, in particular the National Security Agency, to monitor the computer networks of all federal agencies—including ones they have not previously monitored.

[…]

The classified joint directive, signed Jan. 8 and called the National Security Presidential Directive 54/Homeland Security Presidential Directive 23, has not been previously disclosed. Plans to expand the NSA’s role in cyber-security were reported in the Baltimore Sun in September.

According to congressional aides and former White House officials with knowledge of the program, the directive outlines measures collectively referred to as the “cyber initiative,” aimed at securing the government’s computer systems against attacks by foreign adversaries and other intruders. It will cost billions of dollars, which the White House is expected to request in its fiscal 2009 budget.

[…]

Under the initiative, the NSA, CIA and the FBI’s Cyber Division will investigate intrusions by monitoring Internet activity and, in some cases, capturing data for analysis, sources said.

The Pentagon can plan attacks on adversaries’ networks if, for example, the NSA determines that a particular server in a foreign country needs to be taken down to disrupt an attack on an information system critical to the U.S. government. That could include responding to an attack against a private-sector network, such as the telecom industry’s, sources said.

Also, as part of its attempt to defend government computer systems, the Department of Homeland Security will collect and monitor data on intrusions, deploy technologies for preventing attacks and encrypt data. It will also oversee the effort to reduce Internet portals across government to 50 from 2,000, to make it easier to detect attacks.

My concern is that the NSA is doing the monitoring. I simply don’t like them monitoring domestic traffic, even domestic government traffic.

EDITED TO ADD: Commentary.

Posted on February 4, 2008 at 6:30 AM

Detecting Nuclear Weapons Using the Cell Phone Network

Okay, this is clever:

Such a system could blanket the nation with millions of cell phones equipped with radiation sensors able to detect even light residues of radioactive material. Because cell phones already contain global positioning locators, the network of phones would serve as a tracking system, said physics professor Ephraim Fischbach. Fischbach is working with Jere Jenkins, director of Purdue’s radiation laboratories within the School of Nuclear Engineering.

[…]

Tiny solid-state radiation sensors are commercially available. The detection system would require additional circuitry and would not add significant bulk to portable electronic products, Fischbach said.

I’m not convinced it’s a good idea to deploy such a system, but I like the idea of piggy-backing a nationwide sensor network on top of our already existing cell phone infrastructure.
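
Here is a back-of-the-envelope sketch (my own, not Purdue’s design) of how geotagged count rates from many phones could be combined into a crude estimate of where a source is:

```python
# Weight each phone's reported GPS position by how far its count rate exceeds
# background; the weighted centroid is a rough source-location estimate.
def estimate_source(readings, background=0.0):
    """readings: list of (lat, lon, counts_per_second) tuples reported by phones."""
    weighted = [(max(cps - background, 0.0), lat, lon) for lat, lon, cps in readings]
    total = sum(w for w, _, _ in weighted)
    if total == 0:
        return None   # nothing above background, so no estimate
    lat = sum(w * la for w, la, _ in weighted) / total
    lon = sum(w * lo for w, _, lo in weighted) / total
    return lat, lon

# Example: three phones near a source, one farther away seeing only background.
readings = [
    (40.4237, -86.9212, 9.0),
    (40.4241, -86.9220, 6.0),
    (40.4230, -86.9205, 4.0),
    (40.4400, -86.9500, 0.2),
]
print(estimate_source(readings, background=0.2))
```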

Posted on February 1, 2008 at 12:54 PM
