June 2010 Archives

Data at Rest vs. Data in Motion

For a while now, I've pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.

Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper. Even when cryptography was used to protect stored data -- data at rest -- it was viewed as a form of communication. In "Applied Cryptography," I described encrypting stored data in this way: "a stored message is a way for someone to communicate with himself through time." Data storage was just a subset of data communication.

In modern networks, the difference is much more profound. Communications are immediate and instantaneous. Encryption keys can be ephemeral, and systems like the STU-III telephone can be designed such that encryption keys are created at the beginning of a call and destroyed as soon as the call is completed. Data storage, on the other hand, occurs over time. Any encryption keys must exist as long as the encrypted data exists. And storing those keys becomes as important as storing the unencrypted data was. In a way, encryption doesn't reduce the number of secrets that must be stored securely; it just makes them much smaller.

Historically, the reason key management worked for stored data was that the key could be stored in a secure location: the human brain. People would remember keys and, barring physical and emotional attacks on the people themselves, would not divulge them. In a sense, the keys were stored in a "computer" that was not attached to any network. And there they were safe.

This whole model falls apart on the Internet. Much of the data stored on the Internet is only peripherally intended for use by people; it's primarily intended for use by other computers. And therein lies the problem. Keys can no longer be stored in people's brains. They need to be stored on the same computer, or at least the network, that the data resides on. And that is much riskier.

Let's take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn't make any sense. The whole point of storing credit card numbers on a website is so they're accessible -- so each time I buy something, I don't have to type them in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.

The same reasoning holds true elsewhere on the Internet as well. Much of the Internet's infrastructure happens automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography.

Cryptography has inherent mathematical properties that greatly favor the defender. Adding a single bit to the length of a key adds only a slight amount of work for the defender, but doubles the amount of work the attacker has to do. Doubling the key length doubles the amount of work the defender has to do (if that -- I'm being approximate here), but increases the attacker's workload exponentially. For many years, we have exploited that mathematical imbalance.
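A toy model of exhaustive key search (ignoring attacks better than brute force) makes the asymmetry concrete:

```python
# Worst-case number of keys a brute-force attacker must try for an
# n-bit key. The defender's cost of using a longer key, by contrast,
# grows roughly linearly at worst.

def attacker_trials(bits: int) -> int:
    """Size of the key space an attacker must search exhaustively."""
    return 2 ** bits

# One extra bit doubles the attacker's work...
assert attacker_trials(129) == 2 * attacker_trials(128)

# ...and doubling the key length squares it -- exponential growth.
assert attacker_trials(256) == attacker_trials(128) ** 2
```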

Computer security is much more balanced. There'll be a new attack, and a new defense, and a new attack, and a new defense. It's an arms race between attacker and defender. And it's a very fast arms race. New vulnerabilities are discovered all the time. The balance can tip from defender to attacker overnight, and back again the night after. Computer security defenses are inherently very fragile.

Unfortunately, this is the model we're stuck with. No matter how good the cryptography is, there is some other way to break into the system. Recall how the FBI read the PGP-encrypted email of a suspected Mafia boss several years ago. They didn't try to break PGP; they simply installed a keyboard sniffer on the target's computer. Notice that SSL- and TLS-encrypted web communications are increasingly irrelevant in protecting credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.

On the Internet, communications security is much less important than the security of the endpoints. And increasingly, we can't rely on cryptography to solve our security problems.

This essay originally appeared on DarkReading. I wrote it in 2006, but lost it on my computer for four years. I hate it when that happens.

EDITED TO ADD (7/14): As several readers pointed out, I overstated my case when I said that encrypting credit card databases, or any database in constant use, is useless. In fact, there is value in encrypting those databases, especially if the encryption appliance is separate from the database server. In this case, the attacker has to steal both the encryption key and the database. That's a harder hacking problem, and this is why credit-card database encryption is mandated within the PCI security standard. Given how good encryption performance is these days, it's a smart idea. But while encryption makes it harder to steal the data, it is only harder in a computer-security sense and not in a cryptography sense.
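The "steal both the key and the database" property can be sketched with a toy construction. The SHA-256 counter-mode keystream and every name below are illustrative stand-ins; a real deployment would use a vetted cipher such as AES-GCM behind a hardware key-management appliance.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream: hash(key || nonce || counter).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# The key lives on a separate "appliance"; only ciphertext sits in the DB.
appliance_key = b"\x01" * 32          # held by the encryption appliance
nonce = b"row-0001"                   # stored alongside the ciphertext
card = b"4111111111111111"
db_row = xor_cipher(appliance_key, nonce, card)

assert db_row != card                                    # stolen DB alone is opaque
assert xor_cipher(appliance_key, nonce, db_row) == card  # key + DB recovers it
```

The point of the sketch: an attacker who exfiltrates only `db_row` learns nothing; recovering the plaintext requires also compromising the machine holding `appliance_key` -- a computer-security problem, not a cryptographic one.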

Posted on June 30, 2010 at 12:53 PM | 44 Comments

Cryptography Success Story

From Brazil: the moral, of course, is to choose a strong key and to encrypt the entire drive, not just key files.

Posted on June 30, 2010 at 9:16 AM | 38 Comments

Space Terrorism

Space terrorism? Yes, space terrorism. This article, by someone at the European Space Policy Institute, hypes a terrorist threat I've never seen hyped before. The author waves a bunch of scare stories around, and then concludes that "the threat of 'Space Terrorism' is both real and latent," then talks about countermeasures. Certainly securing our satellites is a good idea, but this is just silly.

Posted on June 29, 2010 at 11:42 AM | 57 Comments

Baby Terrorists

This, from Congressman Louie Gohmert of Texas, is about as dumb as it gets:

I talked to a retired FBI agent who said that one of the things they were looking at were terrorist cells overseas who had figured out how to game our system. And it appeared they would have young women, who became pregnant, would get them into the United States to have a baby. They wouldn't even have to pay anything for the baby. And then they would turn back where they could be raised and coddled as future terrorists. And then one day, twenty...thirty years down the road, they can be sent in to help destroy our way of life. 'Cause they figured out how stupid we are being in this country to allow our enemies to game our system, hurt our economy, get setup in a position to destroy our way of life.

This is simply too stupid to even bother refuting. But this sort of fear mongering is nothing new for Gohmert.

Posted on June 29, 2010 at 6:28 AM | 92 Comments

Third SHB Workshop

I'm at SHB 2010, the Third Interdisciplinary Workshop on Security and Human Behavior, at Cambridge University. This is a two-day gathering of computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others -- all of whom are studying the human side of security -- organized by Ross Anderson, Alessandro Acquisti, and myself.

Here is the program. The list of attendees contains links to readings from each of them -- definitely a good place to browse for more information on this topic.

Here are links to my posts on the first and second SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops. I may liveblog this workshop -- I did it last year -- but I may just pay attention. Ross Anderson has liveblogged the previous two years, and is very likely to do so again. There will also be audio.

EDITED TO ADD (6/28): Ross is liveblogging the workshop here. I'm not; I find I pay better attention when I'm not trying to take coherent and accessible notes.

Posted on June 28, 2010 at 4:02 AM | 5 Comments

Hacker Scare Story

"10 Everyday Items Hackers Are Targeting Right Now"

5. Your Blender. Yes, Your Blender

That's right: your blender is under attack! Most mixers are self-contained and not hackable, but Siciliano says many home automation systems tap into appliances such as blenders and coffee machines. These home networks are then open to attack in surprising ways: A hacker might turn on the blender from outside your home to distract you as he sneaks in a back window, he warns.

Yeah, and Richard Clarke thinks hackers can set your printer on fire.

Posted on June 25, 2010 at 1:47 PM | 78 Comments

Security Trade-Offs in Crayfish

Interesting:

The experiments offered the crayfish stark decisions -- a choice between finding their next meal and becoming a meal for an apparent predator. In deciding on a course of action, they carefully weighed the risk of attack against the expected reward, Herberholz says.

Using a non-invasive method that allowed the crustaceans to freely move, the researchers offered juvenile Louisiana Red Swamp crayfish a simultaneous threat and reward: ahead lay the scent of food, but also the apparent approach of a predator.

In some cases, the "predator" (actually a shadow) appeared to be moving swiftly, in others slowly. To up the ante, the researchers also varied the intensity of the odor of food.

How would the animals react? Did the risk of being eaten outweigh their desire to feed? Should they "freeze" -- in effect, play dead, hoping the predator would pass by, while the crayfish remained close to its meal -- or move away from both the predator and food?

To make a quick escape, the crayfish flip their tails and swim backwards, an action preceded by a strong, measurable electric neural impulse. The specially designed tanks could non-invasively pick up and record these electrical signals. This allowed the researchers to identify the activation patterns of specific neurons during the decision-making process.

Although tail-flipping is a very effective escape strategy against natural predators, it adds critical distance between a foraging animal and its next meal.

The crayfish took decisive action in a matter of milliseconds. When faced with very fast shadows, they were significantly more likely to freeze than tail-flip away.

The researchers conclude that there is little incentive for retreat when the predator appears to be moving too rapidly for escape, and the crayfish would lose its own opportunity to eat. This was also true when the food odor was the strongest, raising the benefit of staying close to the expected reward. A strong predator stimulus, however, was able to override an attractive food signal, and crayfish decided to flip away under these conditions.

It's not that this surprises anyone; it's that researchers can now try to figure out the exact brain processes that enable the crayfish to make these decisions.

Posted on June 25, 2010 at 6:53 AM | 15 Comments

TacSat-3 "Hyperspectral" Spy Satellite

It's operational:

The idea of hyperspectral sensing is not, however, merely to "see" in the usual sense of optical telescopes, infrared nightscopes and/or thermal imagers. This kind of detection is used on spy satellites and other surveillance systems, but it suffers from the so-called "drinking straw effect" -- that is, you can only view a small area in enough detail to pick out information of interest. It's impossible to cover an entire nation or region in any length of time by such means; you have to know where to look in advance.

Hyperspectral imaging works differently. It's based on the same principle as the spectrometry used in astronomy and other scientific fields -- that some classes of objects and substances will emit a unique set of wavelengths when stimulated by energy. In this case, everything on the surface below the satellite is being stimulated by sunlight to emit its unique spectral fingerprint.

By scanning across a wide spectrum all at once across a wide area, it's then possible to use a powerful computer to crunch through all wavelengths coming from all points on the surface below (the so-called "hyperspectral cube", made up of the full spectrum coming from all points on a two-dimensional surface).

If the sensor is good enough and the computer crunching powerful and discriminating enough, the satellite can then identify a set of points on the surface where substances or objects of interest are to be found, and supply map coordinates for these. This is a tiny amount of data compared to the original "hyperspectral cube" generated by ARTEMIS and crunched by the satellite's onboard processors, and as such it can be downloaded to a portable ground terminal (rather than one with a big high-bandwidth dish). Within ten minutes of the TacSat passing overhead, laptop-sized ROVER ground terminals can be marking points of interest on a map for combat troops nearby.
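The cube-to-coordinates reduction described above can be sketched as a nearest-spectrum search. The band values, scene size, and distance threshold below are invented for illustration:

```python
# A hyperspectral "cube": for each (x, y) pixel, a spectrum (one
# intensity per wavelength band). Here, 3 bands and a 2x2 scene.
cube = {
    (0, 0): [0.9, 0.1, 0.2],
    (0, 1): [0.1, 0.8, 0.3],
    (1, 0): [0.88, 0.12, 0.19],  # close to the target fingerprint
    (1, 1): [0.2, 0.2, 0.9],
}
target_fingerprint = [0.9, 0.1, 0.2]   # spectral signature of interest

def distance(a, b):
    # Euclidean distance between two spectra.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Crunch the whole cube down to a short list of matching coordinates;
# this small list is the downlink payload, not the cube itself.
THRESHOLD = 0.05
matches = [xy for xy, spectrum in cube.items()
           if distance(spectrum, target_fingerprint) < THRESHOLD]

assert sorted(matches) == [(0, 0), (1, 0)]
```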

Posted on June 24, 2010 at 1:21 PM | 26 Comments

WikiLeaks

Long, but interesting, profile of WikiLeaks's Julian Assange from The New Yorker.

Assange is an international trafficker, of sorts. He and his colleagues collect documents and imagery that governments and other institutions regard as confidential and publish them on a Web site called WikiLeaks.org. Since it went online, three and a half years ago, the site has published an extensive catalogue of secret material, ranging from the Standard Operating Procedures at Camp Delta, in Guantánamo Bay, and the “Climategate” e-mails from the University of East Anglia, in England, to the contents of Sarah Palin’s private Yahoo account.

This is only peripherally related, but Bradley Manning -- an American soldier -- has been arrested for leaking classified documents to WikiLeaks.

Another article from The Guardian, directly related to Manning.

EDITED TO ADD (7/13): More links.

Posted on June 24, 2010 at 1:13 PM

Popsicle Makers a Security Threat

Chicago chef Rick Bayless photographed this security sign, posted before airport security as people were returning home from the Aspen Food & Wine Festival:

No popsicle makers are allowed through security.

Anyone have any idea why something like this is so dangerous? Is the TSA prohibiting random things to toy with us? Their blog is silent on this question.

EDITED TO ADD (6/23): Seems that it's not all popsicle makers, but the Zoku Quick Pop Maker that Williams Sonoma was selling at the festival. It has a small amount of liquid inside. And remember, if there isn't a printed label stating the volume of liquid, the TSA assumes that it's over 3 ounces. (Terrorists take note: print official-looking labels on your larger-than-three-ounce bottles and you'll have no trouble at airport security.)

Posted on June 23, 2010 at 1:16 PM | 35 Comments

How Much Counterterrorism Can We Afford?

In an article on using terahertz rays (is that different from terahertz radar?) to detect biological agents, we find this quote:

"High-tech, low-tech, we can't afford to overlook any possibility in dealing with mass casualty events," according to center head Donald Sebastian. "You need multiple methods of detection and response. Terrorism comes in many forms; you have to see, smell, taste and analyze everything."

He's got it completely backwards. I think we can easily afford not to do what he's saying, and can't afford to do it.

The technology to detect traces of chemical and biological agents is neat, though. And I am very much in favor of research along these lines.

Posted on June 23, 2010 at 6:00 AM | 19 Comments

Buying an ATM Skimmer

Interesting:

ATM skimmers -- or fraud devices that criminals attach to cash machines in a bid to steal and ultimately clone customer bank card data -- are marketed on a surprisingly large number of open forums and Web sites. For example, ATMbrakers operates a forum that claims to sell or even rent ATM skimmers. Tradekey.com, a place where you can find truly anything for sale, also markets these devices on the cheap.

The truth is that most of these skimmers openly advertised are little more than scams designed to separate clueless crooks from their ill-gotten gains. Start poking around on some of the more exclusive online fraud forums for sellers who have built up a reputation in this business and chances are eventually you will hit upon the real deal.

Generally, these custom-made devices are not cheap, and you won't find images of them plastered all over the Web.

EDITED TO ADD (6/23): Another post.

Posted on June 22, 2010 at 6:49 AM | 44 Comments

Cheating on Tests, by the Teachers

If you give people enough incentive to cheat, people will cheat:

Of all the forms of academic cheating, none may be as startling as educators tampering with children's standardized tests. But investigations in Georgia, Indiana, Massachusetts, Nevada, Virginia and elsewhere this year have pointed to cheating by educators. Experts say the phenomenon is increasing as the stakes over standardized testing ratchet higher -- including, most recently, taking student progress on tests into consideration in teachers' performance reviews.

Posted on June 21, 2010 at 12:01 PM | 50 Comments

AT&T's iPad Security Breach

I didn't write about the recent security breach that disclosed tens of thousands of e-mail addresses and ICC-IDs of iPad users because, well, there was nothing terribly interesting about it. It was yet another web security breach.

Right after the incident, though, I was being interviewed by a reporter who wanted to know what the ramifications of the breach were. He specifically wanted to know if anything could be done with those ICC-IDs, and if the disclosure of that information was worse than people thought. He didn't like the answer I gave him, which is that no one knows yet: that it's too early to know the full effects of that information disclosure, and that both the good guys and the bad guys would be figuring it out in the coming weeks. And that it's likely that there were further security implications of the breach.

Seems like there were:

The problem is that ICC-IDs—unique serial numbers that identify each SIM card—can often be converted into IMSIs. While the ICC-ID is nonsecret—it's often found printed on the boxes of cellphone/SIM bundles—the IMSI is somewhat secret. In theory, knowing an ICC-ID shouldn't be enough to determine an IMSI. The phone companies do need to know which IMSI corresponds to which ICC-ID, but this should be done by looking up the values in a big database.

In practice, however, many phone companies simply calculate the IMSI from the ICC-ID. This calculation is often very simple indeed, being little more complex than "combine this hard-coded value with the last nine digits of the ICC-ID." So while the leakage of AT&T's customers' ICC-IDs should be harmless, in practice, it could reveal a secret ID.

What can be done with that secret ID? Quite a lot, it turns out. The IMSI is sent by the phone to the network when first signing on to the network; it's used by the network to figure out which call should be routed where. With someone else's IMSI, an attacker can determine the person's name and phone number, and even track his or her position. It also opens the door to active attacks—creating fake cell towers that a victim's phone will connect to, enabling every call and text message to be eavesdropped.
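A sketch of the kind of weak derivation quoted above. The "last nine digits" rule comes from the quoted description; the carrier prefix and the sample ICC-ID are hypothetical stand-ins, since real carriers' formulas vary and are not public:

```python
def imsi_from_iccid(iccid: str, carrier_prefix: str = "310410") -> str:
    # Hypothetical derivation: a carrier-specific hard-coded prefix
    # concatenated with the last nine digits of the ICC-ID, per the
    # pattern described in the quoted passage.
    return carrier_prefix + iccid[-9:]

# An attacker holding only a leaked (nonsecret) ICC-ID could compute
# the supposedly secret IMSI directly -- no database lookup required.
leaked_iccid = "89014103211118510720"   # illustrative 20-digit ICC-ID
assert imsi_from_iccid(leaked_iccid) == "310410118510720"
```

This is why a calculated mapping is so much weaker than a database lookup: the lookup forces the attacker to compromise the carrier's systems, while the calculation turns public data into secret data for free.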

More to come, I'm sure.

And that's really the point: we all want to know -- right away -- the effects of a security vulnerability, but often we don't and can't. It takes time before the full effects are known, sometimes a lot of time.

And in related news, the image redaction that went along with some of the breach reporting wasn't very good.

Posted on June 21, 2010 at 5:27 AM | 31 Comments

Remote Printing to an E-Mail Address

This is cool technology from HP:

Each printer with the ePrint capability will be assigned its own e-mail address. If someone wants to print a document from an iPhone, the document will go to HP's data center, where it is rendered into the correct format, and then sent to the person's printer. The process takes about 25 seconds.

Maybe this feature was designed with robust security, but I'm not betting on it. The first people to hack the system will certainly be spammers. (For years I've gotten more spam on my fax machine than legitimate faxes.) And why would HP fix the spam problem when it will just enable them to sell overpriced ink cartridges faster?

Any other illegitimate uses for this technology?

EDITED TO ADD (7/13): Location-sensitive advertising to your printer.

Posted on June 18, 2010 at 1:37 PM | 67 Comments

The Continuing Incompetence of Terrorists

The Atlantic on stupid terrorists:

Nowhere is the gap between sinister stereotype and ridiculous reality more apparent than in Afghanistan, where it's fair to say that the Taliban employ the world's worst suicide bombers: one in two manages to kill only himself. And this success rate hasn't improved at all in the five years they've been using suicide bombers, despite the experience of hundreds of attacks -- or attempted attacks. In Afghanistan, as in many cultures, a manly embrace is a time-honored tradition for warriors before they go off to face death. Thus, many suicide bombers never even make it out of their training camp or safe house, as the pressure from these group hugs triggers the explosives in suicide vests. According to several sources at the United Nations, as many as six would-be suicide bombers died last July after one such embrace in Paktika.

Many Taliban operatives are just as clumsy when suicide is not part of the plan. In November 2009, several Talibs transporting an improvised explosive device were killed when it went off unexpectedly. The blast also took out the insurgents' shadow governor in the province of Balkh.

When terrorists do execute an attack, or come close, they often have security failures to thank, rather than their own expertise. Consider Umar Farouk Abdulmutallab -- the Nigerian "Jockstrap Jihadist" who boarded a Detroit-bound jet in Amsterdam with a suicidal plan in his head and some explosives in his underwear. Although the media colored the incident as a sophisticated al-Qaeda plot, Abdulmutallab showed no great skill or cunning, and simple safeguards should have kept him off the plane in the first place. He was, after all, traveling without luggage, on a one-way ticket that he purchased with cash. All of this while being on a U.S. government watch list.

Fortunately, Abdulmutallab, a college-educated engineer, failed to detonate his underpants. A few months later another college grad, Faisal Shahzad, is alleged to have crudely rigged an SUV to blow up in Times Square. That plan fizzled and he was quickly captured, despite the fact that he was reportedly trained in a terrorist boot camp in Pakistan. Indeed, though many of the terrorists who strike in the West are well educated, their plots fail because they lack operational know-how. On June 30, 2007, two men -- one a medical doctor, the other studying for his Ph.D. -- attempted a brazen attack on Glasgow Airport. Their education did them little good. Planning to crash their propane-and-petrol-laden Jeep Cherokee into an airport terminal, the men instead steered the SUV, with flames spurting out its windows, into a security barrier. The fiery crash destroyed only the Jeep, and both men were easily apprehended; the driver later died from his injuries. (The day before, the same men had rigged two cars to blow up near a London nightclub. That plan was thwarted when one car was spotted by paramedics and the other, parked illegally, was removed by a tow truck. As a bonus for investigators, the would-be bombers' cell phones, loaded with the phone numbers of possible accomplices, were salvaged from the cars.)

Reminds me of my own "Portrait of the Modern Terrorist as an Idiot."

Posted on June 18, 2010 at 5:49 AM | 43 Comments

Hot Dog Security

A nice dose of risk reality:

Last week, the American Academy of Pediatrics issued a statement calling for large-type warning labels on the foods that kids most commonly choke on—grapes, nuts, carrots, candy and public enemy No. 1: the frank. Then the lead author of the report, pediatric emergency room doctor Gary Smith, went one step further.

He called for a redesign of the hot dog.

The reason, he said, is that hot dogs are "high-risk." But are they? I mean, I certainly diced my share of Oscar Mayers when my kids were younger, but if once in a while we stopped for a hot dog and I gave it to 'em whole, was I really taking a crazy risk?

Here are the facts: About 61 children each year choke to death on food, or one in a million. Of them, 17 percent—or about 10—choke on franks. So now we are talking 1 in 6 million. This is still tragic; the death of any child is. But to call it "high-risk" means we would have to call pretty much all of life "high-risk." Especially getting in a car! About 1,300 kids younger than 14 die each year as car passengers, compared with 10 a year from hot dogs.

What's happening is that the concept of "risk" is broadening to encompass almost everything a kid ever does, from running to sitting to sleeping. Literally!
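The quoted rates can be checked in a couple of lines, taking "one in a million" to imply a base of roughly 61 million children:

```python
choking_deaths_per_year = 61
implied_child_population = 61_000_000   # from the "one in a million" rate
frank_share = 0.17                      # fraction of chokings due to franks

frank_deaths = choking_deaths_per_year * frank_share   # about 10 per year
frank_rate = implied_child_population / frank_deaths   # about 1 in 6 million

assert round(frank_deaths) == 10
assert 5_500_000 < frank_rate < 6_500_000
```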

There's a lot of good stuff on this website about how to raise children without being crazy paranoid. She comments on my worst-case thinking essay, too.

Posted on June 17, 2010 at 2:28 PM | 65 Comments

Patrolling the U.S./Canada Border

Doesn't the DHS have anything else to do?

As someone who believes that our nation has a right to enforce its borders, I should have been gratified when the Immigrations official at the border saw the canoe on our car and informed us that anyone who crossed the nearby international waterway illegally would be arrested and fined as much as $5,000.

Trouble is, the river wasn't the Rio Grande, but the St. Croix, which defines the border between Maine and New Brunswick, Canada. And the threat of arrest wasn't aimed at illegal immigrants or terrorists but at canoeists like myself.

The St. Croix is a wild river that flows through unpopulated country. Primitive campsites are maintained on both shores, some accessible by logging roads, but most reached only by water or by bushwhacking for miles through thick forest and marsh. There are easier ways to sneak into the U.S. from Canada. According to Homeland Security regulations, however, canoeists who begin their trip in Canada cannot step foot on American soil, thus putting half the campsites off limits. It is not an idle threat; the U.S. Border Patrol makes regular helicopter flights down the river.

Posted on June 17, 2010 at 6:57 AM | 82 Comments

Filming the Police

In at least three U.S. states, it is illegal to film an active duty policeman:

The legal justification for arresting the "shooter" rests on existing wiretapping or eavesdropping laws, with statutes against obstructing law enforcement sometimes cited. Illinois, Massachusetts, and Maryland are among the 12 states in which all parties must consent for a recording to be legal unless, as with TV news crews, it is obvious to all that recording is underway. Since the police do not consent, the camera-wielder can be arrested. Most all-party-consent states also include an exception for recording in public places where "no expectation of privacy exists" (Illinois does not) but in practice this exception is not being recognized.

Massachusetts attorney June Jensen represented Simon Glik who was arrested for such a recording. She explained, "[T]he statute has been misconstrued by Boston police. You could go to the Boston Common and snap pictures and record if you want." Legal scholar and professor Jonathan Turley agrees, "The police are basing this claim on a ridiculous reading of the two-party consent surveillance law -- requiring all parties to consent to being taped. I have written in the area of surveillance law and can say that this is utter nonsense."

The courts, however, disagree. A few weeks ago, an Illinois judge rejected a motion to dismiss an eavesdropping charge against Christopher Drew, who recorded his own arrest for selling one-dollar artwork on the streets of Chicago. Although the misdemeanor charges of not having a peddler's license and peddling in a prohibited area were dropped, Drew is being prosecuted for illegal recording, a Class I felony punishable by 4 to 15 years in prison.

This is a horrible idea, and will make us all less secure. I wrote in 2008:

You cannot evaluate the value of privacy and disclosure unless you account for the relative power levels of the discloser and the disclosee.

If I disclose information to you, your power with respect to me increases. One way to address this power imbalance is for you to similarly disclose information to me. We both have less privacy, but the balance of power is maintained. But this mechanism fails utterly if you and I have different power levels to begin with.

An example will make this clearer. You're stopped by a police officer, who demands to see identification. Divulging your identity will give the officer enormous power over you: He or she can search police databases using the information on your ID; he or she can create a police record attached to your name; he or she can put you on this or that secret terrorist watch list. Asking to see the officer's ID in return gives you no comparable power over him or her. The power imbalance is too great, and mutual disclosure does not make it OK.

You can think of your existing power as the exponent in an equation that determines the value, to you, of more information. The more power you have, the more additional power you derive from the new data.

Another example: When your doctor says "take off your clothes," it makes no sense for you to say, "You first, doc." The two of you are not engaging in an interaction of equals.

This is the principle that should guide decision-makers when they consider installing surveillance cameras or launching data-mining programs. It's not enough to open the efforts to public scrutiny. All aspects of government work best when the relative power between the governors and the governed remains as small as possible -- when liberty is high and control is low. Forced openness in government reduces the relative power differential between the two, and is generally good. Forced openness in laypeople increases the relative power, and is generally bad.

EDITED TO ADD (7/13): Another article. One jurisdiction in Pennsylvania has explicitly ruled the opposite: that it's legal to record police officers no matter what.

Posted on June 16, 2010 at 1:36 PM | 98 Comments

Dating Recordings by Power Line Fluctuations

Interesting:

The capability, called "electrical network frequency analysis" (ENF), is now attracting interest from the FBI and is considered the exciting new frontier in digital forensics, with power lines acting as silent witnesses to crime.

In the "high profile" murder trial, which took place earlier this year, ENF meant prosecutors were able to show that a seized voice recording that became vital to their case was authentic. Defence lawyers suggested it could have been concocted by a witness to incriminate the accused.

[...]

ENF relies on frequency variations in the electricity supplied by the National Grid. Digital devices such as CCTV recorders, telephone recorders and camcorders that are plugged in to or located near the mains pick up these deviations in the power supply, which are caused by peaks and troughs in demand. Battery-powered devices are not immune to ENF analysis, as grid frequency variations can be induced in their recordings from a distance.

At the Metropolitan Police's digital forensics lab in Penge, south London, scientists have created a database that has recorded these deviations once every one and a half seconds for the last five years. Over a short period they form a unique signature of the electrical frequency at that time, which research has shown is the same in London as it is in Glasgow.

On receipt of recordings made by the police or public, the scientists are able to detect the variations in mains electricity occurring at the time the recording was made. This signature is extracted and automatically matched against their ENF database, which indicates when it was made.

The technique can also uncover covert editing—or rule it out, as in the recent murder trial—because a spliced recording will register more than one ENF match.
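As a rough sketch of how such matching might work -- this is hypothetical illustrative code, not the Metropolitan Police's actual system -- the frequency signature extracted from a recording can be slid along the reference database, and the alignment with the smallest error identifies when the recording was made:

```python
import numpy as np

def best_match_offset(reference: np.ndarray, signature: np.ndarray) -> int:
    """Return the index into `reference` where `signature` best matches.

    `reference` is a long series of mains-frequency deviations (sampled
    every 1.5 seconds or so, as in the database described above);
    `signature` is the much shorter series extracted from a recording.
    Matching here is done by minimum mean-squared error over every
    possible alignment.
    """
    n = len(signature)
    errors = [
        np.mean((reference[i:i + n] - signature) ** 2)
        for i in range(len(reference) - n + 1)
    ]
    return int(np.argmin(errors))

# Toy demonstration: embed a known signature in synthetic "grid" data.
rng = np.random.default_rng(0)
reference = 50.0 + 0.05 * rng.standard_normal(10_000)  # nominal 50 Hz grid
true_offset = 4_321
signature = reference[true_offset:true_offset + 200]   # a short recording

print(best_match_offset(reference, signature))  # recovers 4321
```

A spliced recording would show up under this scheme as two (or more) segments whose best matches fall at different, inconsistent offsets in the database.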

Posted on June 16, 2010 at 7:00 AM • 45 Comments

Reading Me

The number of different ways to read my essays, commentaries, and links has grown recently. Here's the rundown:

I think that about covers it for useful distribution formats right now.

EDITED TO ADD (6/20): One more; there's a Crypto-Gram podcast.

Posted on June 15, 2010 at 1:05 PM • 25 Comments

Fifth Annual Movie-Plot Threat Contest Winner

On April 1, I announced the Fifth Annual Movie Plot Threat Contest:

Your task, ye Weavers of Tales, is to create a fable or fairy tale suitable for instilling the appropriate level of fear in children so they grow up appreciating all the lords do to protect them.

On May 15, I announced the five semi-finalists. Voting continued through the end of the month, and the winner (improved by the author, with help from blog comments) is:

The Gashlycrumb Terrors, by Laura

A is for anthrax, so deadly and white.
B is for burglars who break in at night.
C is for cars that, with minds of their own,
accelerate suddenly in a school zone.
D is for dynamite lit with a fuse.
E is for everything we have to lose.
F is for foreigners, different and strange.
G is for gangs and the crimes they arrange.
H is for hand lotion, more than three ounces;
pray some brave agent sees it and pounces.
I is for ingenious criminal plans.
J is for jury-rigged pipe-bombs in vans.
K is for kids who would recklessly play
in playgrounds and parks with their friends every day.
L is for lead in our toys and our food.
M is for Mom’s cavalier attitude.
N is for neighbors — you never can tell:
is that a book club or terrorist cell?
O is for ostrich, with head in the sand.
P is for plots to blow up Disneyland.
Q is for those who would question authorities.
R is for radical sects and minorities.
S is for Satanists, who have been seen
giving kids razor blades on Halloween.
T is for terrorists, by definition.
U is for uncensored acts of sedition.
V is for vigilance, our leaders’ tool,
keeping us safe, both at home and at school.
W is for warnings with colors and levels.
X is for x-raying bags at all revels.
Y is for *you*, my dear daughter or son
Z is for Zero! No tolerance! None!

Laura, contact me with your address so I can send you your prize. Anyone interested in illustrating this, preferably in Edward Gorey's style, should e-mail me first.

History: The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner. The Fourth Movie-Plot Threat Contest rules and winner.

Posted on June 15, 2010 at 6:02 AM • 22 Comments

Protecting Cars with The Club

From the Freakonomics blog:

At some point, the Club was mentioned. The professional thieves laughed and exchanged knowing glances. What we knew was that the Club is a hardened steel device that attaches to the steering wheel and the brake pedal to prevent steering and/or braking. What we found out was that a pro thief would carry a short piece of a hacksaw blade to cut through the plastic steering wheel in a couple seconds. They were then able to release The Club and use it to apply a huge amount of torque to the steering wheel and break the lock on the steering column (which most cars were already equipped with). The pro thieves actually sought out cars with The Club on them because they didn't want to carry a long pry bar that was too hard to conceal.

Posted on June 14, 2010 at 1:46 PM • 80 Comments

Behavioral Profiling at Airports

There's a long article in Nature on the practice:

It remains unclear what the officers found anomalous about George's behaviour, and why he was detained. The TSA's parent agency, the Department of Homeland Security (DHS), has declined to comment on his case because it is the subject of a federal lawsuit that was filed on George's behalf in February by the American Civil Liberties Union. But the incident has brought renewed attention to a burgeoning controversy: is it possible to know whether people are being deceptive, or planning hostile acts, just by observing them?

Some people seem to think so. At London's Heathrow Airport, for example, the UK government is deploying behaviour-detection officers in a trial modelled in part on SPOT. And in the United States, the DHS is pursuing a programme that would use sensors to look at nonverbal behaviours, and thereby spot terrorists as they walk through a corridor. The US Department of Defense and intelligence agencies have expressed interest in similar ideas.

Yet a growing number of researchers are dubious -- not just about the projects themselves, but about the science on which they are based. "Simply put, people (including professional lie-catchers with extensive experience of assessing veracity) would achieve similar hit rates if they flipped a coin," noted a 2007 report from a committee of credibility-assessment experts who reviewed research on portal screening.

"No scientific evidence exists to support the detection or inference of future behaviour, including intent," declares a 2008 report prepared by the JASON defence advisory group. And the TSA had no business deploying SPOT across the nation's airports "without first validating the scientific basis for identifying suspicious passengers in an airport environment", stated a two-year review of the programme released on 20 May by the Government Accountability Office (GAO), the investigative arm of the US Congress.

Commentary from the MindHacks blog.

Also, the GAO has published a report on the U.S. DHS's SPOT program: "Aviation Security: Efforts to Validate TSA’s Passenger Screening Behavior Detection Program Underway, but Opportunities Exist to Strengthen Validation and Address Operational Challenges."

As of March 2010, TSA deployed about 3,000 BDOs at an annual cost of about $212 million; this force increased almost fifteen-fold between March 2007 and July 2009. BDOs have been selectively deployed to 161 of the 457 TSA-regulated airports in the United States at which passengers and their property are subject to TSA-mandated screening procedures.

It seems pretty clear that the program only catches criminals, and not terrorists. You'd think there would be more important things to spend $200 million a year on.

EDITED TO ADD (6/14): In the comments, a couple of people asked how this compares with the Israeli model of airport security -- concentrate on the person -- and the idea that trained officers notice if someone is acting "hinky": both things that I have written favorably about.

The difference is the experience of the detecting officer and the amount of time they spend with each person. If you read about the programs described above, they're supposed to "spot terrorists as they walk through a corridor," or possibly after a few questions. That's very different from what happens when you check into a flight at Ben Gurion Airport.

The problem with fast detection programs is that they don't work, and the problem with the Israeli security model is that it doesn't scale.

Posted on June 14, 2010 at 6:23 AM • 70 Comments

Mainstream Cost-Benefit Security Analysis

This essay in The New York Times is refreshingly cogent:

You've seen it over and over. At a certain intersection in a certain town, there'll be an unfortunate accident. A child is hit by a car.

So the public cries out, the town politicians band together, and the next thing you know, they've spent $60,000 to install speed bumps, guardrails and a stoplight at that intersection—even if it was clearly an accident, say, a drunk driver, that had nothing to do with the design of the intersection.

I understand the concept; people want to DO something to channel their grief. But rationally, turning that single intersection into a teeming jungle of safety features, while doing nothing for all the other intersections in town, in the state, across the country, doesn't make a lot of sense.

Another essay from the BBC website:

That poses a difficult ethical dilemma: should government decisions about risk reflect the often irrational foibles of the populace or the rational calculations of sober risk assessment? Should our politicians opt for informed paternalism or respect for irrational preferences?

The volcanic ash cloud is a classic case study. Were the government to allow flights to go ahead when the risks were equal to those of road travel, it is almost certain that, over the course of the year, hundreds of people would die in resulting air accidents, since around 2,500 die on the roads each year.

This is politically unimaginable, not for good, rational reasons, but because people are much more risk averse when it comes to plane travel than they are to driving their own cars.

So, in practice, governments do not make fully rational risk assessments. Their calculations are based partly on cost-benefit analyses, and partly on what the public will tolerate.

Posted on June 11, 2010 at 12:08 PM • 34 Comments

Ninth Workshop on Economics and Information Security

Earlier this week, the Ninth Workshop on Economics and Information Security (WEIS 2010) was held at Harvard. As always, it was a great workshop with some very interesting papers. Ross Anderson liveblogged the event.

EDITED TO ADD (6/10): The papers are all on the conference website.

Posted on June 10, 2010 at 12:56 PM • 7 Comments

The "Quake" Simulation and Risk Perception

Read this.

EDITED TO ADD (6/10): Another article.

Posted on June 10, 2010 at 7:10 AM • 40 Comments

Hiring Hackers

Any essay on hiring hackers quickly gets bogged down in definitions. What is a hacker, and how is he different from a cracker? I have my own definitions, but I'd rather define the issue more specifically: Would you hire someone convicted of a computer crime to fill a position of trust in your computer network? Or, more generally, would you hire someone convicted of a crime for a job related to that crime?

The answer, of course, is "it depends." It depends on the specifics of the crime. It depends on the ethics involved. It depends on the recidivism rate of the type of criminal. It depends a whole lot on the individual.

Would you hire a convicted pedophile to work at a day care center? Would you hire Bernie Madoff to manage your investment fund? The answer is almost certainly no to those two -- but you might hire a convicted bank robber to consult on bank security. You might hire someone who was convicted of false advertising to write ad copy for your next marketing campaign. And you might hire someone who ran a chop shop to fix your car. It depends on the person and the crime.

It can get even murkier. Would you hire a CIA-trained assassin to be a bodyguard? Would you put a general who led a successful attack in charge of defense? What if they were both convicted of crimes in whatever country they were operating in? There are different legal and ethical issues, to be sure, but in both cases the people learned a certain set of skills regarding offense that could be transferable to defense.

Which brings us back to computers. Hacking is primarily a mindset: a way of thinking about security. Its primary focus is in attacking systems, but it's invaluable to the defense of those systems as well. Because computer systems are so complex, defending them often requires people who can think like attackers.

Admittedly, there's a difference between thinking like an attacker and acting like a criminal, and between researching vulnerabilities in fielded systems and exploiting those vulnerabilities for personal gain. But there is a huge variability in computer crime convictions, and -- at least in the early days -- many hacking convictions were unjust and unfair. And there's also a difference between someone's behavior as a teenager and his behavior later in life. Additionally, there might very well be a difference between someone's behavior before and after a hacking conviction. It all depends on the person.

An employer's goal should be to hire moral and ethical people with the skill set required to do the job. And while a hacking conviction is certainly a mark against a person, it isn't always grounds for complete non-consideration.

"We don't hire hackers" and "we don't hire felons" are coarse generalizations, in the same way that "we only hire people with this or that security certification" is. They work -- you're less likely to hire the wrong person if you follow them -- but they're both coarse and flawed. Just as all potential employees with certifications aren't automatically good hires, all potential employees with hacking convictions aren't automatically bad hires. Sure, it's easier to hire people based on things you can learn from checkboxes, but you won't get the best employees that way. It's far better to look at the individual, and put those checkboxes into context. But we don't always have time to do that.

Last winter, a Minneapolis attorney who works to get felons a fair shake after they've served their time told of a sign he saw: "Snow shovelers wanted. Felons need not apply." It's not good for society if felons who have served their time can't even get jobs shoveling snow.

This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. Marcus's half is here.

Posted on June 10, 2010 at 6:34 AM • 52 Comments

DARPA Research into Clean-Slate Network Security Redesign

This looks like a good research direction:

Is it possible that given a clean slate and likely millions of dollars, engineers could come up with the ultimate in secure network technology? The scientists at the Defense Advanced Research Projects Agency (DARPA) think so and this week announced the Clean Slate Design of Resilient, Adaptive, Secure Hosts (CRASH) program that looks to lean heavily on human biology to develop super-smart, highly adaptive, supremely secure networks.

For example, the CRASH program looks to translate human immune system strategies into computational terms. In the human immune system multiple independent mechanisms constantly monitor the body for pathogens. Even at the cellular level, multiple redundant mechanisms monitor and repair the structure of the DNA. These mechanisms consume tons of resources, but let the body continue functioning and repair the damage caused by malfunctions and infectious agents, DARPA stated.

Posted on June 9, 2010 at 12:59 PM • 53 Comments

Terrorists Placing Fake Bombs in Public Places

Supposedly, the latest terrorist tactic is to place fake bombs -- suspicious-looking bags, backpacks, boxes, and coolers -- in public places in an effort to paralyze the city and probe our defenses. The article doesn't say whether or not this has actually ever happened, only that the FBI is warning of the tactic.

Citing an FBI informational document, ABC News reports a so-called "battle of suspicious bags" is being encouraged on a jihadist website.

I have no doubt that this may happen, but I'm sure these are not actual terrorists doing the planting. We're so easy to terrorize that anyone can play; this is the equivalent of hacking in the real world. One solution is to continue to overreact, and spend even more money on these fake threats. The other is to refuse to be terrorized.

Posted on June 9, 2010 at 6:24 AM • 46 Comments

Fear in a Political Ad

Carly Fiorina wants to scare Californians into voting for her.

Yes, terrorists kill -- about as often as home appliances.

EDITED TO ADD (6/12): PolitiFact breaks down the ad. And we need to fear demon sheep (the best part starts around 2:20).

Posted on June 8, 2010 at 1:04 PM • 54 Comments

Bletchley Park Archives to Go Online

This is good:

Simon Greenish, chief executive officer of the Bletchley Park Trust, said the plan was for the centre's entire archive to be digitised.

[...]

He said since the archive is so big nobody knows exactly what each individual document stored there contains.

However, the information they expect to dig out will definitely include communication transcripts, communiques, memoranda, photographs, maps and other material relating to key events that took place during the war.

He said: "We have many boxes full of index cards, which have lots of different messages on them. But this will be our chance to follow a trail and put the messages together so we can find out what they really mean."

It'll be years before any documents actually get online, but it's still a good thing.

Another article.

The Bletchley Park Museum really needs donations, if you're so inclined.

Posted on June 8, 2010 at 6:30 AM • 18 Comments

How to Spot a CIA Officer

How to spot a CIA officer, at least in the mid-1970s.

The reason the CIA office was located in the embassy -- as it is in most of the other countries in the world -- is that by presidential order the State Department is responsible for hiding and housing the CIA. Like the intelligence services of most other countries, the CIA has been unwilling to set up foreign offices under its own name. So American embassies -- and, less frequently, military bases -- provide the needed cover. State confers respectability on the Agency's operatives, dressing them up with the same titles and calling cards that give legitimate diplomats entree into foreign government circles. Protected by diplomatic immunity, the operatives recruit local officials as CIA agents to supply secret intelligence and, especially in the Third World, to help in the Agency's manipulation of a country's internal affairs.

Posted on June 7, 2010 at 5:43 AM • 25 Comments

The Four Stages of Fear

Interesting:

In the throes of intense fear, we suddenly find ourselves operating in a different and unexpected way. The psychological tools that we normally use to navigate the world -- reasoning and planning before we act -- get progressively shut down. In the grip of the brain's subconscious fear centers, we behave in ways that to our rational mind seem nonsensical or worse. We might respond automatically, with preprogrammed motor routines, or simply melt down. We lose control.

In this unfamiliar realm, it can seem like we’re in the grip of utter chaos. But although the preconscious fear centers of the brain are not capable of deliberation and reason, they do have their own logic, a simplified suite of responses keyed to the nature of the threat at hand. There is a structure to panic.

When the danger is far away, or at least not immediately imminent, the instinct is to freeze. When danger is approaching, the impulse is to run away. When escape is impossible, the response is to fight back. And when struggling is futile, the animal will become immobilized in the grip of fright. Although it doesn't slide quite as smoothly off the tongue, a more accurate description than "fight or flight" would be "fight, freeze, flight, or fright" -- or, for short, "the four fs."

I'm in the middle of reading Dave Grossman's book On Killing: The Psychological Cost of Learning to Kill in War and Society. He writes that "fight or flight" is actually "fight, flight, posture, or submit."

Posted on June 4, 2010 at 3:30 PM • 27 Comments

Intelligence Can Never Be Perfect

Go read this article -- "Setting impossible standards on intelligence" -- on laying blame for the intelligence "failure" that allowed the Underwear Bomber to board an airplane on Christmas Day.

Although the CIA, FBI, and Defense, State, Treasury and Homeland Security departments have counterterrorism analytic units -- some even with information-gathering operations -- the assumption is that all of the data are passed on to NCTC.

The law, by the way, specifically says that the NCTC director "may not direct the execution of counterterrorism operations."

The Senate committee's list identifying "points of failure" shows that not all relevant information from some agencies landed at the NCTC.

Perhaps the leading example was the State Department's failure to notify the NCTC in its initial reporting that Abdulmutallab -- whose father had reported him missing in November and suspected "involvement with Yemeni-based extremists" -- had an outstanding U.S. visa.

This initial fact, if contained in State's first notice to the NCTC, would have raised the importance of his status. Instead, Abdulmutallab became one of hundreds of new names sent to the NCTC that day. The Senate panel blurs this in its report by focusing on State's failure -- as well as NCTC's -- to revoke the visa. Neither the department nor NCTC discovered the visa until it was too late.

Two other agencies also failed to report important relevant information.

[...]

How can the NCTC perform its role, which by law is "to serve as the central and shared knowledge bank on known and suspected terrorists and international terror groups," if its analysts are unaware that additional intelligence exists at other agencies? The committee's answer to that, listed as failure 10, was that the "NCTC's watchlisting office did not conduct additional research to find additional derogatory information to place Abdulmutallab on a watchlist."

True, NCTC analysts have access to most agency databases. But with hundreds of names arriving each day, which name does the NCTC select to then begin its search of 16 other agency databases? Especially when the expectation is that each agency has searched its own.

I've never been impressed with the "dots" that should have been connected regarding Abdulmutallab. On closer examination, they mostly evaporate. Nor do I consider Christmas Day a security failure. Plane lands safely, terrorist captured, no one hurt; what more do people want?

Posted on June 2, 2010 at 6:39 AM • 63 Comments

Voluntary Security Inspections

What could possibly be the point of this?

Cars heading to Austin-Bergstrom International Airport will see random, voluntary inspections Monday.

The searches are part of an increase in security at the airport.

It's a joint operation between the U.S. Department of Homeland Security, Austin Police, and airport security.

The enhancements are not a response to specific threats, and the security level has not changed.

Officials say the searches are voluntary and drivers can opt out if they want.

Training? Reassuring a jittery public? Looking busy? This can't possibly be done for security reasons.

Posted on June 1, 2010 at 1:00 PM • 59 Comments

Terrorizing Ourselves

Who needs actual terrorists?

How’s this for an ill-conceived emergency preparedness drill? An off-duty cop pretending to be a terrorist stormed into a hospital intensive care unit brandishing a handgun, which he pointed at nurses while herding them down a corridor and into a room.

There, after harrowing moments, he explained that the whole caper was a training exercise.

[...]

The staff at St. Rose Dominican Hospitals-Siena Campus, where the incident took place Monday morning, found the exercise more traumatizing than instructive.

Perhaps a better way to phrase it is that they learned to be terrorized.

Posted on June 1, 2010 at 5:54 AM • 64 Comments

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.