Blog: June 2009 Archives

Cryptography Spam

I think this is a first.

Information security, and protection of your e-money. Electronic payments and calculations, on means of a network the Internet or by means of bank credit cards, continue to win the world market. Electronic payments, it quickly, conveniently, but is not safely. Now there is a real war, between users and hackers. Your credit card can be forgery. The virus can get into your computer. Most not pleasant, what none, cannot give you guarantees, safety.

But, this disgrace can put an end.

I have developed the program which, does impossible the fact of abduction of a passwords, countersign, and personal data of the users. In the program the technology of an artificial intellect is used. As you cannot, guess about what the person thinks. As and not possible to guess, algorithm of the program. This system to crack it is impossible.

I assure that this system, will be most popular in the near future. I wish to create the company, with branches in the different countries of the world, and I invite all interested persons.

Together we will construct very profitable business.

Posted on June 30, 2009 at 1:36 PM | 52 Comments

Protecting Against the Snatched Laptop Data Theft

Almost two years ago, I wrote about my strategy for encrypting my laptop. One of the things I said was:

There are still two scenarios you aren’t secure against, though. You’re not secure against someone snatching your laptop out of your hands as you’re typing away at the local coffee shop. And you’re not secure against the authorities telling you to decrypt your data for them.

Here’s a free program that defends against that first threat: it locks the computer unless a key is pressed every n seconds.
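For intuition, here is a minimal sketch of how such a dead-man's-switch lock could work. This is my own illustration, not the linked program; it assumes Windows (for the LockWorkStation call) and the third-party pynput package for the global keyboard hook.

```python
# Rough sketch, not the actual program: lock the workstation unless some key
# is pressed every N seconds.
import ctypes
import threading
import time

from pynput import keyboard

TIMEOUT_SECONDS = 10  # the "n seconds" from the description; tune to taste

_lock = threading.Lock()
_last_press = time.monotonic()

def _on_press(key):
    """Any keypress resets the dead-man's timer."""
    global _last_press
    with _lock:
        _last_press = time.monotonic()

def _watchdog():
    """Lock the workstation if no key has been pressed recently."""
    global _last_press
    while True:
        time.sleep(1)
        with _lock:
            idle = time.monotonic() - _last_press
        if idle > TIMEOUT_SECONDS:
            ctypes.windll.user32.LockWorkStation()  # standard WinAPI call
            with _lock:
                _last_press = time.monotonic()  # avoid re-locking in a tight loop

if __name__ == "__main__":
    keyboard.Listener(on_press=_on_press).start()  # background key hook
    _watchdog()
```

A real tool would also have to survive being killed or suspended by the thief, which is harder than this sketch suggests.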

Honestly, this would be too annoying for me to use, but you’re welcome to try it.

Posted on June 29, 2009 at 6:51 AM | 45 Comments

The Problem with Password Masking

I agree with this:

It’s time to show most passwords in clear text as users type them. Providing feedback and visualizing the system’s status have always been among the most basic usability principles. Showing undifferentiated bullets while users enter complex codes definitely fails to comply.

Most websites (and many other applications) mask passwords as users type them, and thereby theoretically prevent miscreants from looking over users’ shoulders. Of course, a truly skilled criminal can simply look at the keyboard and note which keys are being pressed. So, password masking doesn’t even protect fully against snoopers.

More importantly, there’s usually nobody looking over your shoulder when you log in to a website. It’s just you, sitting all alone in your office, suffering reduced usability to protect against a non-issue.

Shoulder surfing isn’t very common, and cleartext passwords greatly reduce errors. It has long annoyed me when I can’t see what I type: in Windows logins, in PGP, and so on.
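The same trade-off shows up even in command-line tools. As a trivial illustration (mine, not from the article), here is the difference between a masked and an unmasked prompt in Python:

```python
# Minimal illustration of the masking trade-off: getpass gives no visual
# feedback at all, while input() echoes what you type.
import getpass

def prompt_password(show: bool = False) -> str:
    if show:
        return input("Password (visible): ")  # usable, but shoulder-surfable
    return getpass.getpass("Password: ")       # hidden, and easy to mistype
```

A friendlier middle ground, as some interfaces now offer, is to mask by default and provide a "show password" toggle.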

EDITED TO ADD (6/26): To be clear, I’m not talking about PIN masking on public terminals like ATMs. I’m talking about password masking on personal computers.

EDITED TO ADD (6/30): Two articles on the subject.

Posted on June 26, 2009 at 6:17 AM | 180 Comments

Clear Shuts Down Operation

Clear, the company that sped people through airport security, has ceased operations. My first question: what happened to all that personal information it collected on its members? An answer appeared on its website:

Applicant and Member data is currently secured in accordance with the Transportation Security Administration’s Security, Privacy and Compliance Standards. Verified Identity Pass, Inc. will continue to secure such information and will take appropriate steps to delete the information.

Some are not reassured:

The disturbing part is that everyone who joined the Clear program had to give this private company (and the TSA) fingerprint and iris scans. I never joined Clear. But if I had, I would be extremely concerned about what happens to this information now that the company has gone defunct.

I can hear it now—they’ll surely say all the biometric and fingerprint data is secure, you don’t need to worry. But how much can you trust a company that shuts down with little notice while being hounded by creditors?

Details matter here. Nowhere do the articles say that Clear, or its parent company Verified Identity Pass, Inc., has declared bankruptcy. But if that does happen, does the company’s biggest asset—the personal information of the quarter of a million Clear members—become the property of Clear’s creditors?

I previously wrote about Clear here.

More commentary.

Posted on June 25, 2009 at 12:36 PM | 31 Comments

Authenticating Paperwork

It’s a sad, horrific story. Homeowner returns to find his house demolished. The demolition company was hired legitimately but there was a mistake and it demolished the wrong house. The demolition company relied on GPS co-ordinates, but requiring street addresses isn’t a solution. A typo in the address is just as likely, and it would have demolished the house just as quickly.

The problem is less how the demolishers knew which house to knock down, and more how they confirmed that knowledge. They trusted the paperwork, and the paperwork was wrong. Informality works when everybody knows everybody else. When merchants and customers know each other, government officials and citizens know each other, and people know their neighbours, people know what’s going on. In that sort of milieu, if something goes wrong, people notice.

In our modern anonymous world, paperwork is how things get done. Traditionally, signatures, forms, and watermarks all made paperwork official. Forgeries were possible but difficult. Today, there’s still paperwork, but for the most part it only exists until the information makes its way into a computer database. Meanwhile, modern technology—computers, fax machines and desktop publishing software—has made it easy to forge paperwork. Every case of identity theft has, at its core, a paperwork failure. Fake work orders, purchase orders, and other documents are used to steal computers, equipment, and stock. Occasionally, fake faxes result in people being sprung from prison. Fake boarding passes can get you through airport security. This month hackers officially changed the name of a Swedish man.

A reporter even changed the ownership of the Empire State Building. Sure, it was a stunt, but this is a growing form of crime. Someone pretends to be you—preferably when you’re away on holiday—and sells your home to someone else, forging your name on the paperwork. You return to find someone else living in your house, someone who thinks he legitimately bought it. In some senses, this isn’t new. Paperwork mistakes and fraud have happened ever since there was paperwork. And the problem hasn’t been fixed yet for several reasons.

One, our sloppy systems generally work fine, and it’s how we get things done with minimum hassle. Most people’s houses don’t get demolished and most people’s names don’t get maliciously changed. As common as identity theft is, it doesn’t happen to most of us. These stories are news because they are so rare. And in many cases, it’s cheaper to pay for the occasional blunder than ensure it never happens.

Two, sometimes the incentives aren’t in place for paperwork to be properly authenticated. The people who demolished that family home were just trying to get a job done. The same is true for government officials processing title and name changes. Banks get paid when money is transferred from one account to another, not when they find a paperwork problem. We’re all irritated by forms stamped 17 times, and other mysterious bureaucratic processes, but these are actually designed to detect problems.

And three, there’s a psychological mismatch: it is easy to fake paperwork, yet for the most part we act as if it has magical properties of authenticity.

What’s changed is scale. Fraud can be perpetrated against hundreds of thousands, automatically. Mistakes can affect that many people, too. What we need are laws that penalise people or companies—criminally or civilly—who make paperwork errors. This raises the cost of mistakes, making authenticating paperwork more attractive, which changes the incentives of those on the receiving end of the paperwork. And that will cause the market to devise technologies to verify the provenance, accuracy, and integrity of information: telephone verification, addresses and GPS co-ordinates, cryptographic authentication, systems that double- and triple-check, and so on.

We can’t reduce society’s reliance on paperwork, and we can’t eliminate errors based on it. But we can put economic incentives in place for people and companies to authenticate paperwork more.

This essay originally appeared in The Guardian.

Posted on June 25, 2009 at 6:11 AM | 50 Comments

Fixing Airport Security

It’s been months since the Transportation Security Administration has had a permanent director. If, during the job interview (no, I didn’t get one), President Obama asked me how I’d fix airport security in one sentence, I would reply: “Get rid of the photo ID check, and return passenger screening to pre-9/11 levels.”

Okay, that’s a joke. While showing ID, taking your shoes off and throwing away your water bottles aren’t making us much safer, I don’t expect the Obama administration to roll back those security measures anytime soon. Airport security is more about CYA than anything else: defending against what the terrorists did last time.

But the administration can’t risk appearing as if it facilitated a terrorist attack, no matter how remote the possibility, so those annoyances are probably here to stay.

This would be my real answer: “Establish accountability and transparency for airport screening.” And if I had another sentence: “Airports are one of the places where Americans, and visitors to America, are most likely to interact with a law enforcement officer – and yet no one knows what rights travelers have or how to exercise those rights.”

Obama has repeatedly talked about increasing openness and transparency in government, and it’s time to bring transparency to the Transportation Security Administration (TSA).

Let’s start with the no-fly and watch lists. Right now, everything about them is secret: You can’t find out if you’re on one, or who put you there and why, and you can’t clear your name if you’re innocent. This Kafkaesque scenario is so un-American it’s embarrassing. Obama should make the no-fly list subject to judicial review.

Then, move on to the checkpoints themselves. What are our rights? What powers do the TSA officers have? If we’re asked “friendly” questions by behavioral detection officers, are we allowed not to answer? If we object to the rough handling of ourselves or our belongings, can the TSA official retaliate against us by putting us on a watch list? Obama should make the rules clear and explicit, and allow people to bring legal action against the TSA for violating those rules; otherwise, airport checkpoints will remain a Constitution-free zone in our country.

Next, Obama should refuse to use unfunded mandates to sneak expensive security measures past Congress. The Secure Flight program is the worst offender. Airlines are being forced to spend billions of dollars redesigning their reservations systems to accommodate the TSA’s demands to preapprove every passenger before he or she is allowed to board an airplane. These costs are borne by us, in the form of higher ticket prices, even though we never see them explicitly listed.

Maybe Secure Flight is a good use of our money; maybe it isn’t. But let’s have debates like that in the open, as part of the budget process, where it belongs.

And finally, Obama should mandate that airport security be solely about terrorism, and not a general-purpose security checkpoint to catch everyone from pot smokers to deadbeat dads.

The Constitution provides us, both Americans and visitors to America, with strong protections against invasive police searches. Two exceptions come into play at airport security checkpoints. The first is “implied consent,” which means that you cannot refuse to be searched; your consent is implied when you purchase your ticket. And the second is “plain view,” which means that if the TSA officer happens to see something unrelated to airport security while screening you, he is allowed to act on that.

Both of these principles are well established and make sense, but it’s their combination that turns airport security checkpoints into police-state-like checkpoints.

The TSA should limit its searches to bombs and weapons and leave general policing to the police – where we know courts and the Constitution still apply.

None of these changes will make airports any less safe, but they will go a long way to de-ratcheting the culture of fear, restoring the presumption of innocence and reassuring Americans, and the rest of the world, that – as Obama said in his inauguration speech – “we reject as false the choice between our safety and our ideals.”

This essay originally appeared, without hyperlinks, in the New York Daily News.

Posted on June 24, 2009 at 6:40 AM | 68 Comments

John Walker and the Fleet Broadcasting System

Ph.D. thesis from 2001:

An Analysis of the Systemic Security Weaknesses of the U.S. Navy Fleet Broadcasting System, 1967-1974, as exploited by CWO John Walker, by MAJ Laura J. Heath

Abstract: CWO John Walker led one of the most devastating spy rings ever unmasked in the US. Along with his brother, son, and friend, he compromised US Navy cryptographic systems and classified information from 1967 to 1985. This research focuses on just one of the systems compromised by John Walker himself: the Fleet Broadcasting System (FBS) during the period 1967-1975, which was used to transmit all US Navy operational orders to ships at sea. Why was the communications security (COMSEC) system so completely defenseless against one rogue sailor, acting alone? The evidence shows that FBS was designed in such a way that it was effectively impossible to detect or prevent rogue insiders from compromising the system. Personnel investigations were cursory, frequently delayed, and based more on hunches than hard scientific criteria. Far too many people had access to the keys and sensitive materials, and the auditing methods were incapable, even in theory, of detecting illicit copying of classified materials. Responsibility for the security of the system was distributed between many different organizations, allowing numerous security gaps to develop. This has immediate implications for the design of future classified communications systems.

EDITED TO ADD (9/23): I blogged about this in 2005. Apologies; I forgot.

Posted on June 23, 2009 at 1:30 PM | 20 Comments

Eavesdropping on Dot-Matrix Printers by Listening to Them

Interesting research.

First, we develop a novel feature design that borrows from commonly used techniques for feature extraction in speech recognition and music processing. These techniques are geared towards the human ear, which is limited to approx. 20 kHz and whose sensitivity is logarithmic in the frequency; for printers, our experiments show that most interesting features occur above 20 kHz, and a logarithmic scale cannot be assumed. Our feature design reflects these observations by employing a sub-band decomposition that places emphasis on the high frequencies, and spreading filter frequencies linearly over the frequency range. We further add suitable smoothing to make the recognition robust against measurement variations and environmental noise.

Second, we deal with the decay time and the induced blurring by resorting to a word-based approach instead of decoding individual letters. A word-based approach requires additional upfront effort such as an extended training phase as the dictionary grows larger, and it does not permit us to increase recognition rates by using, e.g., spell-checking. Recognition of words based on training the sound of individual letters (or pairs/triples of letters), however, is infeasible because the sound emitted by printers blurs so strongly over adjacent letters.

Third, we employ speech recognition techniques to increase the recognition rate: we use Hidden Markov Models (HMMs) that rely on the statistical frequency of sequences of words in text in order to rule out incorrect word combinations. The presence of strong blurring, however, requires to use at least 3-grams on the words of the dictionary to be effective, causing existing implementations for this task to fail because of memory exhaustion. To tame memory consumption, we implemented a delayed computation of the transition matrix that underlies HMMs, and in each step of the search procedure, we adaptively removed the words with only weakly matching features from the search space.

We built a prototypical implementation that can bootstrap the recognition routine from a database of featured words that have been trained using supervised learning. Afterwards, the prototype automatically recognizes text with recognition rates of up to 72 %.
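To make the first step concrete, here is a rough sketch (my own, not the authors’ code) of the kind of sub-band feature extraction the excerpt describes: linearly spaced bands, extra weight on the ultrasonic range, log energies, and smoothing. The sample rate, band count, window size, and weighting are all assumptions.

```python
# Sketch of sub-band energy features for printer-sound recordings.
import numpy as np
from scipy.signal import medfilt, stft

def subband_features(audio, sample_rate=96_000, n_bands=32, nperseg=1024):
    # Short-time Fourier transform of the recorded printer sound.
    freqs, _, spec = stft(audio, fs=sample_rate, nperseg=nperseg)
    power = np.abs(spec) ** 2

    # Linearly spaced sub-bands over the full range (no mel/log warping,
    # since the interesting energy sits above the audible ~20 kHz).
    edges = np.linspace(0.0, freqs[-1], n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        bands.append(np.log(power[mask, :].sum(axis=0) + 1e-12))
    features = np.array(bands)  # shape: (n_bands, n_frames)

    # Emphasize high-frequency bands and smooth over time to damp
    # measurement noise and environmental variation.
    weights = np.linspace(0.5, 1.5, n_bands)[:, None]
    return np.apply_along_axis(medfilt, 1, features * weights, kernel_size=5)
```

These per-frame feature vectors would then feed the word-based HMM recognizer the authors describe; this sketch covers only the front end.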

Researchers have done lots of work on eavesdropping on remote devices. (One example.) And we know the various intelligence organizations of the world have been doing this sort of thing for decades.

Posted on June 23, 2009 at 6:16 AM | 23 Comments

John Mueller on Nuclear Disarmament

The New York Times website has a blog called “Room for Debate,” where a bunch of people—experts in their areas—write short essays commenting on a news item. (I participated a few weeks ago.) Earlier this month, there was a post on nuclear disarmament, following President Obama’s speech in Cairo that mentioned the subject. One of the commentators was John Mueller, Ohio State University political science professor and longtime critic of the terrorism hype. (I recommend his book, Overblown.) His commentary was very good; I especially liked the first sentence. An excerpt:

The notion that the world should rid itself of nuclear weapons has been around for over six decades—during which time they have been just about the only instrument of destruction that hasn’t killed anybody. The abolition idea has been dismissed by most analysts because, since inspection of any arms reduction cannot be perfect, the measure could potentially put wily cheaters in a commanding position.

There may be another approach to the same end, one that, while also imperfect, would require far less effort while greatly reducing the amount of sanctimonious huffing and puffing we would have to endure.

Just let it happen.

While it may not be entirely fair to characterize disarmament as an effort to cure a fever by destroying the thermometer, the analogy is instructive when it is reversed: when fever subsides, the instrument designed to measure it loses its usefulness and is often soon misplaced.

Indeed, a fair amount of nuclear arms reduction, requiring little in the way of formal agreement, has already taken place between the former cold war contestants.

Posted on June 22, 2009 at 1:46 PM | 61 Comments

This Week's Movie-Plot Threat: Fungus

I had been wondering whether to post this, since it’s not really a security threat—there’s no intelligence behind the attack:

Crop scientists fear the Ug99 fungus could wipe out more than 80% of worldwide wheat crops as it spreads from eastern Africa. It has already jumped the Red Sea and traveled as far as Iran. Experts say it is poised to enter the breadbasket of northern India and Pakistan, and the wind will inevitably carry it to Russia, China and even North America—if it doesn’t hitch a ride with people first.

“It’s a time bomb,” said Jim Peterson, a professor of wheat breeding and genetics at Oregon State University in Corvallis. “It moves in the air, it can move in clothing on an airplane. We know it’s going to be here. It’s a matter of how long it’s going to take.”

Posted on June 19, 2009 at 2:03 PM | 29 Comments

Fraud on eBay

I expected selling my computer on eBay to be easy.

Attempt 1: I listed it. Within hours, someone bought it—from a hacked account, as eBay notified me, cancelling the sale.

Attempt 2: I listed it again. Within hours, someone bought it, and asked me to send it to her via FedEx overnight. The buyer sent payment via PayPal immediately, and then—near as I could tell—immediately opened a dispute with PayPal so that the funds were put on hold. And then she sent me an e-mail saying “I paid you, now send me the computer.” But PayPal was faster than she expected, I think. At the same time, I received an e-mail from PayPal saying that I might have received a payment that the account holder did not authorize, and that I shouldn’t ship the item until the investigation is complete.

I’m willing to make Attempt 3, if just to see what kind of scam happens this time. But I still want to sell the computer, and I am pissed off at what is essentially a denial-of-service attack. The facts from this listing are accurate; does anyone want it? List price is over $3K. Send me e-mail.

EDITED TO ADD (6/19): It’s not just me.

EDITED TO ADD (6/24): The computer is sold, to someone who reads my blog.

EDITED TO ADD (6/25): I’m not entirely sure, but it looks like the payment from the second eBay buyer has gone through PayPal. I don’t trust it—just because I can’t figure out the scam doesn’t mean there isn’t one. And, anyway, the computer is sold.

EDITED TO ADD (7/3): For the record: despite articles to the contrary, I was not scammed on eBay. I was the victim of two scam attempts, both of which I detected and did not fall for.

Posted on June 19, 2009 at 11:55 AM | 116 Comments

Imagining Threats

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats—which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Recently, I read a paper by Magne Jørgensen that provides some insight into why this is so. Titled More Risk Analysis Can Lead to Increased Over-Optimism and Over-Confidence, the paper isn’t about terrorism at all. It’s about software projects.

Most software development project plans are overly optimistic, and most planners are overconfident about their overoptimistic plans. Jørgensen studied how risk analysis affected this. He conducted four separate experiments on software engineers, and concluded (though there are lots of caveats in the paper, and more research needs to be done) that performing more risk analysis can make engineers more overoptimistic instead of more realistic.

Potential explanations all come from behavioral economics: cognitive biases that affect how we think and make decisions. (I’ve written about some of these biases and how they affect security decisions, and there’s a great book on the topic as well.)

First, there’s a control bias. We tend to underestimate risks in situations where we are in control, and overestimate risks in situations when we are not in control. Driving versus flying is a common example. This bias becomes stronger with familiarity, involvement and a desire to experience control, all of which increase with increased risk analysis. So the more risk analysis, the greater the control bias, and the greater the underestimation of risk.

The second explanation is the availability heuristic. Basically, we judge the importance or likelihood of something happening by the ease of bringing instances of that thing to mind. So we tend to overestimate the probability of a rare risk that is seen in a news headline, because it is so easy to imagine. Likewise, we underestimate the probability of things occurring that don’t happen to be in the news.

A corollary of this phenomenon is that, if we’re asked to think about a series of things, we overestimate the probability of the last thing thought about because it’s more easily remembered.

According to Jørgensen’s reasoning, people tend to do software risk analysis by thinking of the severe risks first, and then the more manageable risks. So the more risk analysis that’s done, the less severe the last risk imagined, and thus the greater the underestimation of the total risk.

The third explanation is similar: the peak end rule. When thinking about a total experience, people tend to place too much weight on the last part of the experience. In one experiment, people had to hold their hands under cold water for one minute. Then, they had to hold their hands under cold water for one minute again, then keep their hands in the water for an additional 30 seconds while the temperature was gradually raised. When asked about it afterwards, most people preferred the second option to the first, even though the second had more total discomfort. (An intrusive medical device was redesigned along these lines, resulting in a longer period of discomfort but a relatively comfortable final few seconds. People liked it a lot better.) This means, like the second explanation, that the least severe last risk imagined gets greater weight than it deserves.

Fascinating stuff. But the biases produce the reverse effect when it comes to movie-plot threats. The more you think about far-fetched terrorism possibilities, the more outlandish and scary they become, and the less control you think you have. This causes us to overestimate the risks.

Think about this in the context of terrorism. If you’re asked to come up with threats, you’ll think of the significant ones first. If you’re pushed to find more, if you hire science-fiction writers to dream them up, you’ll quickly get into the low-probability movie plot threats. But since they’re the last ones generated, they’re more available. (They’re also more vivid—science fiction writers are good at that—which also leads us to overestimate their probability.) They also suggest we’re even less in control of the situation than we believed. Spending too much time imagining disaster scenarios leads people to overestimate the risks of disaster.

I’m sure there’s also an anchoring effect in operation. This is another cognitive bias, where people’s numerical estimates of things are affected by numbers they’ve most recently thought about, even random ones. People who are given a list of three risks will think the total number of risks is lower than people who are given a list of 12 risks. So if the science fiction writers come up with 137 risks, people will believe that the number of risks is higher than they otherwise would—even if they recognize the 137 number is absurd.

Jørgensen does not believe risk analysis is useless in software projects, and I don’t believe scenario brainstorming is useless in counterterrorism. Both can lead to new insights and, as a result, a more intelligent analysis of both specific risks and general risk. But an over-reliance on either can be detrimental.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

This essay originally appeared on Wired.com.

Posted on June 19, 2009 at 6:49 AM | 34 Comments

New Computer Snooping Tool

From the press release:

Unlike existing computer forensics solutions, EnCase Portable runs on a USB drive, rather than a laptop, and enables the user to easily and rapidly boot a target computer to the USB drive, and run a pre-configured data search and collection job. The ease-of-use and ultra-portability of EnCase Portable creates exciting new possibilities in data acquisition. Even personnel untrained in computer forensics can forensically acquire documents, Internet history and artifacts, images, and other digital evidence, including entire hard drives, with a few simple keyboard clicks.

Posted on June 18, 2009 at 7:08 AM | 37 Comments

The Psychology of Being Scammed

Fascinating research on the psychology of con games. “The psychology of scams: Provoking and committing errors of judgement” was prepared for the UK Office of Fair Trading by the University of Exeter School of Psychology.

From the executive summary, here’s some stuff you may know:

Appeals to trust and authority: people tend to obey authorities so scammers use, and victims fall for, cues that make the offer look like a legitimate one being made by a reliable official institution or established reputable business.

Visceral triggers: scams exploit basic human desires and needs—such as greed, fear, avoidance of physical pain, or the desire to be liked—in order to provoke intuitive reactions and reduce the motivation of people to process the content of the scam message deeply. For example, scammers use triggers that make potential victims focus on the huge prizes or benefits on offer.

Scarcity cues. Scams are often personalised to create the impression that the offer is unique to the recipient. They also emphasise the urgency of a response to reduce the potential victim’s motivation to process the scam content objectively.

Induction of behavioural commitment. Scammers ask their potential victims to make small steps of compliance to draw them in, and thereby cause victims to feel committed to continue sending money.

The disproportionate relation between the size of the alleged reward and the cost of trying to obtain it. Scam victims are led to focus on the alleged big prize or reward in comparison to the relatively small amount of money they have to send in order to obtain their windfall; a phenomenon called ‘phantom fixation’. The high value reward (often life-changing, medically, financially, emotionally or physically) that scam victims thought they could get by responding, makes the money to be paid look rather small by comparison.

Lack of emotional control. Compared to non-victims, scam victims report being less able to regulate and resist emotions associated with scam offers. They seem to be unduly open to persuasion, or perhaps unduly undiscriminating about who they allow to persuade them. This creates an extra vulnerability in those who are socially isolated, because social networks often induce us to regulate our emotions when we otherwise might not.

And some stuff that surprised me:

…it was striking how some scam victims kept their decision to respond private and avoided speaking about it with family members or friends. It was almost as if with some part of their minds, they knew that what they were doing was unwise, and they feared the confirmation of that that another person would have offered. Indeed to some extent they hide their response to the scam from their more rational selves.

Another counter-intuitive finding is that scam victims often have better than average background knowledge in the area of the scam content. For example, it seems that people with experience of playing legitimate prize draws and lotteries are more likely to fall for a scam in this area than people with less knowledge and experience in this field. This also applies to those with some knowledge of investments. Such knowledge can increase rather than decrease the risk of becoming a victim.

…scam victims report that they put more cognitive effort into analysing scam content than non-victims. This contradicts the intuitive suggestion that people fall victim to scams because they invest too little cognitive energy in investigating their content, and thus overlook potential information that might betray the scam. This may, however, reflect the victim being ‘drawn in’ to the scam whilst non-victims include many people who discard scams without giving them a second glance.

Related: the psychology of con games.

Posted on June 17, 2009 at 2:05 PM | 38 Comments

Carrot-Bomb Art Project Bombs in Sweden

Not the best idea:

The carrot bombs had been placed around the city at the request of a local art gallery, as part of an open-air arts festival.

They had only been in place for an hour before police received their first call.

“We received a call … from a person who said they saw two real bombs placed outside the public library,” Ronny Hoerman from the Orebro police force, was quoted as saying by the AFP news agency.

“It was hard to tell if they were real or not. We find this inappropriate,” he said.

Mr Blom described it as a harmless stunt.

“After all, it is just carrots with an alarm clock and nothing else… this is just a caricature of a bomb,” he said.

Posted on June 17, 2009 at 6:49 AM | 62 Comments

Ever Better Cryptanalytic Results Against SHA-1

The SHA family (which, I suppose, should really be called the MD4 family) of cryptographic hash functions has been under attack for a long time. In 2005, we saw the first cryptanalysis of SHA-1 that was faster than brute force: collisions in 2^69 hash operations, later improved to 2^63 operations. A great result, but not devastating. But remember the great truism of cryptanalysis: attacks always get better, they never get worse. Last week, devastating got a whole lot closer. A new attack can, at least in theory, find collisions in 2^52 hash operations—well within the realm of computational possibility. Assuming the cryptanalysis is correct, we should expect to see an actual SHA-1 collision within the year.
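For context, the brute-force baseline here is the birthday bound (standard arithmetic, not from the article): SHA-1 produces a 160-bit digest, so a generic collision search costs about 2^80 hash operations.

```latex
% Generic birthday bound for a 160-bit hash vs. the reported attack cost.
\[
  \text{brute force: } 2^{160/2} = 2^{80}
  \qquad\longrightarrow\qquad
  \text{new attack: } 2^{52},
  \qquad \text{speedup: } 2^{80-52} = 2^{28}.
\]
```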

Note that this is a collision attack, not a pre-image attack. Most uses of hash functions don’t care about collision attacks. But if yours does, switch to SHA-2 immediately. (This has more information, written for the 2^69 attack.)
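For what it’s worth, the switch itself is trivial in most code. A hedged example using Python’s standard hashlib (the API calls are real; the surrounding context is invented for illustration):

```python
# Moving a digest from SHA-1 to the SHA-2 family with the standard library.
# Anything that depends on collision resistance (signatures or certificates
# over data an attacker can influence) should make this change.
import hashlib

data = b"attacker-influenced message"

legacy = hashlib.sha1(data).hexdigest()    # collisions now ~2^52 operations
better = hashlib.sha256(data).hexdigest()  # SHA-2 (SHA-256) replacement

print(legacy)
print(better)
```

The hard part is usually not the call site but the protocols and storage formats that assume a 160-bit digest.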

This is why NIST is administering a SHA-3 competition for a new hash standard. And whatever algorithm is chosen, it will look nothing like anything in the SHA family (which is why I think it should be called the Advanced Hash Standard, or AHS).

Posted on June 16, 2009 at 12:21 PM | 50 Comments

Prairie Dogs Hack Baltimore Zoo

Fun story, with a lot of echoes of our own security problems:

It took just 10 minutes for a dozen prairie dogs to outwit the creators of the Maryland Zoo’s new $500,000 habitat.

Aircraft wire, poured concrete and slick plastic walls proved no match for the fast-footed rodents, the stars of a new exhibit that opens today.

As officials were promoting the return of the zoo’s 28 prairie dogs—their former digs had been out of sight in a closed section of the animal preserve for more than four years—some of the critters found ways to jump, climb and get over the walls of their prairie paradise, a centerpiece exhibit just inside the zoo’s main entrance.

[…]

But a few intrepid prairie dogs tried to find their way out, sending keepers scrambling to plug escape routes.

An hour later, just as zookeepers thought everything was under control, one rodent made it to the top of the wall. A dozen workers closed in. The prairie dog seemed to think better of it and jumped back into the enclosure.

“They find all the weak spots and exploit them,” said Karl Kranz, the zoo’s vice president for animal programs and chief operating officer.

[…]

Zoo staff members say the animals cannot burrow their way out because the former Kodiak bear environment is essentially a large concrete swimming bowl. The soil depth at Prairie Dog Town ranges from 6 feet to 8 feet.

“The dirt must be deeper than 36 inches in order for the prairie dogs to make their burrows under the frost line,” Kranz said. “We took soil samples from the old exhibit so the soils could be matched exactly to what they were used to having.”

After foiling the escape attempt, zoo workers adjusted wire fencing and installed more slippery plastic on the walls.

Posted on June 16, 2009 at 7:24 AM | 32 Comments

Did a Public Twitter Post Lead to a Burglary?

No evidence one way or the other:

Like a lot of people who use social media, Israel Hyman and his wife Noell went on Twitter to share real-time details of a recent trip. Their posts said they were “preparing to head out of town,” that they had “another 10 hours of driving ahead,” and that they “made it to Kansas City.”

While they were on the road, their home in Mesa, Ariz., was burglarized. Hyman has an online video business called IzzyVideo.com, with 2,000 followers on Twitter. He thinks his Twitter updates tipped the burglars off.

“My wife thinks it could be a random thing, but I just have my suspicions,” he said. “They didn’t take any of our normal consumer electronics.” They took his video editing equipment.

I’m not saying that there isn’t a connection, but people have a propensity for seeing these sorts of connections.

Posted on June 15, 2009 at 2:26 PM | 47 Comments

The "Hidden Cost" of Privacy

Forbes ran an article talking about the “hidden” cost of privacy. Basically, the point was that privacy regulations are expensive to comply with, and a lot of that expense gets eaten up by the mechanisms of compliance and doesn’t go toward improving anyone’s actual privacy. This is a valid point, and one that I make in talks about privacy all the time. It’s particularly bad in the United States, because we have a patchwork of different privacy laws covering different types of information and different situations and not a single comprehensive privacy law.

The meta-problem is simple to describe: those entrusted with our privacy often don’t have much incentive to respect it. Examples include: credit bureaus such as TransUnion and Experian, who don’t have any business relationship at all with the people whose data they collect and sell; companies such as Google who give away services—and collect personal data as a part of that—as an incentive to view ads, and make money by selling those ads to other companies; medical insurance companies, who are chosen by a person’s employer; and computer software vendors, who can have monopoly powers over the market. Even worse, it can be impossible to connect an effect of a privacy violation with the violation itself—if someone opens a bank account in your name, how do you know who was to blame for the privacy violation?—so even when there is a business relationship, there’s no clear cause-and-effect relationship.

What this all means is that protecting individual privacy remains an externality for many companies, and that basic market dynamics won’t work to solve the problem. Because the efficient market solution won’t work, we’re left with inefficient regulatory solutions. So now the question becomes: how do we make regulation as efficient as possible? I have some suggestions:

  1. Broad privacy regulations are better than narrow ones.
  2. Simple and clear regulations are better than complex and confusing ones.
  3. It’s far better to regulate results than methodology.
  4. Penalties for bad behavior need to be expensive enough to make good behavior the rational choice.

We’ll never get rid of the inefficiencies of regulation—that’s the nature of the beast, and why regulation only makes sense when the market fails—but we can reduce them.

Posted on June 15, 2009 at 6:45 AM | 65 Comments

Friday Squid Blogging: Squid Also See Through Non-Eye Organ

Weird:

The UW-Madison researchers have been intrigued by the light organ’s “counterillumination” ability—this capacity to give off light to make squids as bright as the ocean surface above them, so that predators below can’t see them.

“Until now, scientists thought that illuminating tissues in the light organ functioned exclusively for the control of the intensity and direction of light output from the organ, with no role in light perception,” says McFall-Ngai. “Now we show that the E. scolopes squid has additional light-detecting tissue that is an integral component of the light organ.”

The researchers demonstrated that the squid light organ has the molecular machinery to respond to light cues. Molecular analysis showed that genes that produce key visual proteins are expressed in light-organ tissues, including genes similar to those that occur in the retina. They also showed that, as in the retina, these visual proteins respond to light, producing a physiological response.

“We found that the light organ in the squid is capable of sensing light as well as emitting and controlling the intensity of luminescence,” says co-author Nansi Jo Colley, SMPH professor of ophthalmology and visual sciences and of genetics.

Posted on June 12, 2009 at 6:46 PM | 7 Comments

Second SHB Workshop Liveblogging (9)

The eighth, and final, session of SHB09 was optimistically titled “How Do We Fix the World?” I moderated, which meant that my liveblogging was more spotty, especially in the discussion section.

David Mandel, Defense Research and Development Canada (suggested reading: Applied Behavioral Science in Support of Intelligence Analysis, Radicalization: What does it mean?; The Role of Instigators in Radicalization to Violent Extremism), is part of the Thinking, Risk, and Intelligence Group at DRDC Toronto. His first observation: “Be wary of purported world-fixers.” His second observation: when you claim that something is broken, it is important to specify the respects in which it’s broken and what fixed looks like. His third observation: it is also important to analyze the consequences of any potential fix. An analysis of the way things are is perceptually based, but an analysis of the way things should be is value-based. He also presented data showing that predictions made by intelligence analysts (at least in one Canadian organization) were pretty good.

Ross Anderson, Cambridge University (suggested reading: Database State; book chapters on psychology and terror), asked “Where’s the equilibrium?” Both privacy and security are moving targets, but he expects that someday soon there will be a societal equilibrium. Incentives to price discriminate go up, and the cost to do so goes down. He gave several examples of database systems that reached very different equilibrium points, depending on corporate lobbying, political realities, public outrage, etc. He believes that privacy will be regulated, the only question being when and how. “Where will the privacy boundary end up, and why? How can we nudge it one way or another?”

Alma Whitten, Google (suggested reading: Why Johnny can’t encrypt: A usability evaluation of PGP 5.0), presented a set of ideals about privacy (very European like) and some of the engineering challenges they present. “Engineering challenge #1: How to support access and control to personal data that isn’t authenticated? Engineering challenge #2: How to inform users about both authenticated and unauthenticated data? Engineering challenge #3: How to balance giving users control over data collection versus detecting and stopping abuse? Engineering challenge #4: How to give users fine-grained control over their data without overwhelming them with options? Engineering challenge #5: How to link sequential actions while preventing them from being linkable to a person? Engineering challenge #6: How to make the benefits of aggregate data analysis apparent to users? Engineering challenge #7: How to avoid or detect inadvertent recording of data that can be linked to an individual?” (Note that Alma requested not to be recorded.)

John Mueller, Ohio State University (suggested reading: Reacting to Terrorism: Probabilities, Consequences, and the Persistence of Fear; Evaluating Measures to Protect the Homeland from Terrorism; Terrorphobia: Our False Sense of Insecurity), talked about terrorism and the Department of Homeland Security. Terrorism isn’t a threat; it’s a problem and a concern, certainly, but the word “threat” is still extreme. Al Qaeda isn’t a threat, even though it’s the most serious potential attacker against the U.S. and Western Europe. And terrorists are overwhelmingly stupid. Meanwhile, the terrorism issue “has become a self-licking ice cream cone.” In other words, it’s now an ever-perpetuating government bureaucracy. There are virtually an infinite number of targets; the odds of any one target being targeted are effectively zero; terrorists pick targets largely at random; if you protect one target, it makes other targets less safe; most targets are vulnerable in the physical sense, but invulnerable in the sense that they can be rebuilt relatively cheaply (even something like the Pentagon); some targets simply can’t be protected; if you’re going to protect some targets, you need to determine if they should really be protected. (I recommend his book, Overblown.)

Adam Shostack, Microsoft (his blog), pointed out that even the problem of figuring out what part of the problem to work on first is difficult. One of the issues is shame. We don’t want to talk about what’s wrong, so we can’t use that information to determine where we want to go. We make excuses—customers will flee, people will sue, stock prices will go down—even though we know that those excuses have been demonstrated to be false.

During the discussion, there was a lot of talk about the choice between informing users and bombarding them with information they can’t understand. And lots more that I couldn’t transcribe.

And that’s it. SHB09 was a fantastic workshop, filled with interesting people and interesting discussion. Next year in the other Cambridge.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 4:55 PM | 7 Comments

Second SHB Workshop Liveblogging (8)

The penultimate session of the conference was “Privacy,” moderated by Tyler Moore.

Alessandro Acquisti, Carnegie Mellon University (suggested reading: What Can Behavioral Economics Teach Us About Privacy?; Privacy in Electronic Commerce and the Economics of Immediate Gratification), presented research on how people value their privacy. He started by listing a variety of cognitive biases that affect privacy decisions: illusion of control, overconfidence, optimism bias, endowment effect, and so on. He discussed two experiments. The first demonstrated a “herding effect”: if a subject believes that others reveal sensitive behavior, the subject is more likely to also reveal sensitive behavior. The second examined the “frog effect”: do privacy intrusions alert or desensitize people to revealing personal information? What he found is that people tend to set their privacy level at the beginning of a survey, and don’t respond well to being asked easy questions at first and then sensitive questions at the end. In the discussion, Joe Bonneau asked him about the notion that people’s privacy protections tend to ratchet up over time; he didn’t have conclusive evidence, but gave several possible explanations for the phenomenon.

Adam Joinson, University of Bath (suggested reading: Privacy, Trust and Self-Disclosure Online; Privacy concerns and privacy actions), also studies how people value their privacy. He talked about expressive privacy—privacy that allows people to express themselves and form interpersonal relationships. His research showed that differences between how people use Facebook in different countries depend on how much people trust Facebook as a company, rather than how much people trust other Facebook users. Another study looked at posts from Secret Tweet and Twitter. He found 16 markers that allowed him to automatically determine which tweets contain sensitive personal information and which do not, with high probability. Then he tried to determine if people with large Twitter followings post fewer secrets than people who are only twittering to a few people. He found absolutely no difference.

Peter Neumann, SRI (suggested reading: Holistic systems; Risks; Identity and Trust in Context), talked about lack of medical privacy (too many people have access to your data), about voting (the privacy problem makes the voting problem a lot harder, and the end-to-end voting security/privacy problem is much harder than just securing voting machines), and privacy in China (the government is requiring all computers sold in China to be sold with software allowing them to eavesdrop on the users). Any would-be solution needs to reflect the ubiquity of the threat. When we design systems, we need to anticipate what the privacy problems will be. Privacy problems are everywhere you look, and ordinary people have no idea of the depth of the problem.

Eric Johnson, Dartmouth College (suggested reading: Access Flexibility with Escalation and Audit; Security through Information Risk Management), studies the information access problem from a business perspective. He’s been doing field studies in companies like retail banks and investment banks, and found that role-based access control fails because companies can’t determine who has what role. Even worse, roles change quickly, especially in large complex organizations. For example, one business group of 3000 people experiences 1000 role changes within three months. The result is that organizations do access control badly, either over-entitling or under-entitling people. But since getting the job done is the most important thing, organizations tend to over-entitle: give people more access than they need. His current work is to find the right set of incentives and controls to set access more properly. The challenge is to do this without making people risk averse. In the discussion, he agreed that a perfect access control system is not possible, and that organizations should probably allow a certain amount of access control violations—similar to the idea of posting a 55 mph speed limit but not ticketing people unless they go over 70 mph.

Christine Jolls, Yale Law School (suggested reading: Rationality and Consent in Privacy Law, Employee Privacy), made the point that people regularly share their most private information with their intimates—so privacy is not about secrecy, it’s more about control. There are moments when people make pretty big privacy decisions. For example, they grant employers the rights to monitor their e-mail, or test their urine without notice. In general, courts hold that blanket signing away of privacy rights—“you can test my urine on any day in the future”—is not valid, but immediate signing away of privacy rights—“you can test my urine today”—is. Jolls believes that this is reasonable for several reasons, such as optimism bias and an overfocus on the present at the expense of the future. Without realizing it, the courts have implemented the system that behavioral economics would find optimal. During the discussion, she talked about how coercion figures into this; the U.S. legal system tends not to be concerned with it.

Andrew Adams, University of Reading (suggested reading: Regulating CCTV), also looks at attitudes of privacy on social networking services. His results are preliminary, and based on interviews with university students in Canada, Japan, and the UK, and are very concordant with what danah boyd and Joe Bonneau said earlier. From the UK: People join social networking sites to increase their level of interaction with people they already know in real life. Revealing personal information is okay, but revealing too much is bad. Even more interestingly, it’s not okay to reveal more about others than they reveal themselves. From Japan: People are more open to making friends online. There’s more anonymity. It’s not okay to reveal information about others, but “the fault of this lies as much with the person whose data was revealed in not choosing friends wisely.” This victim responsibility is a common theme with other privacy and security elements in Japan. Data from Canada is still being compiled.

Great phrase: the “laundry belt”—close enough for students to go home on weekends with their laundry, but far enough away so they don’t feel as if their parents are looking over their shoulder—typically two hours by public transportation (in the UK).

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 3:01 PM | 2 Comments

Second SHB Workshop Liveblogging (7)

Session Six—”Terror”—chaired by Stuart Schechter.

Bill Burns, Decision Research (suggested reading: The Diffusion of Fear: Modeling Community Response to a Terrorist Strike), studies social reaction to risk. He discussed his theoretical model of how people react to fear events, and data from the 9/11 attacks, the 7/7 bombings in the UK, and the 2008 financial collapse. Basically, we can’t remain fearful. No matter what happens, fear spikes immediately after and recovers 45 or so days afterwards. He believes that the greatest mistake we made after 9/11 was labeling the event as terrorism instead of an international crime.

Chris Cocking, London Metropolitan University (suggested reading: Effects of social identity on responses to emergency mass evacuation), looks at the group behavior of people responding to emergencies. Traditionally, most emergency planning is based on the panic model: people in crowds are prone to irrational behavior and panic. There’s also a social attachment model that predicts that social norms don’t break down in groups. He prefers a self-categorization approach: disasters create a common identity, which results in orderly and altruistic behavior among strangers. The greater the threat, the greater the common identity, and spontaneous resilience can occur. He displayed a photograph of “panic” in New York on 9/11 and showed how it wasn’t panic at all. Panic seems to be more a myth than a reality. This has policy implications during an event: provide people with information, and people are more likely to underreact than overreact. If there is overreaction, it’s because people are acting as individuals rather than as groups, so those in authority should encourage a sense of collective identity. “Crowds can be part of the solution rather than part of the problem.”

Richard John, University of Southern California (suggested reading: Decision Analysis by Proxy for the Rational Terrorist), talked about the process of social amplification of risk (with respect to terrorism). Events result in relatively small losses; it’s the changes in behavior following an event that result in much greater losses. There’s a dynamic of risk perception, and it’s very contextual. He uses vignettes to study how risk perception changes over time, and discussed some of the studies he’s conducting and ideas for future studies.

Mark Stewart, University of Newcastle, Australia (suggested reading: A risk and cost-benefit assessment of United States aviation security measures; Risk and Cost-Benefit Assessment of Counter-Terrorism Protective Measures to Infrastructure), examines infrastructure security and whether the costs exceed the benefits. He talked about cost/benefit trade-off, and how to apply probabilistic terrorism risk assessment; then, he tried to apply this model to the U.S. Federal Air Marshal Service. His result: they’re not worth it. You can quibble with his data, but the real value is a transparent process. During the discussion, I said that it is important to realize that risks can’t be taken in isolation, that anyone making a security trade-off is balancing several risks: terrorism risks, political risks, the personal risks to his career, etc.

John Adams, University College London (suggested reading: Deus e Brasileiro?; Can Science Beat Terrorism?; Bicycle bombs: a further inquiry), applies his risk thermostat model to terrorism. He presented a series of amusing photographs of overreactions to risk, most of them not really about risk aversion but more about liability aversion. He talked about bureaucratic paranoia, as well as bureaucratic incitements to paranoia, and how this is beginning to backfire. People treat risks differently, depending on whether they are voluntary, impersonal, or imposed, and whether people have total control, diminished control, or no control.

Dan Gardner, Ottawa Citizen (suggested reading: The Science of Fear: Why We Fear the Things We Shouldn’t—and Put Ourselves in Greater Danger), talked about how the media covers risks, threats, attacks, etc. He talked about the various ways the media screws up, all of which were familiar to everyone. His thesis is not that the media gets things wrong in order to increase readership/viewership and therefore profits, but that the media gets things wrong because reporters are human. Bad news bias is not a result of the media hyping bad news, but the natural human tendency to remember the bad more than the good. The evening news is centered around stories because people—including reporters—respond to stories, and stories with novelty, emotion, and drama are better stories.

Some of the discussion was about the nature of panic: whether and where it exists, and what it looks like. Someone from the audience questioned whether panic was related to proximity to the event; someone else pointed out that people very close to the 7/7 bombings took pictures and made phone calls—and that there was no evidence of panic. Also, on 9/11 pretty much everyone below where the airplanes struck the World Trade Center got out safely, while everyone above couldn’t get out, and died. Angela Sasse pointed out that the previous terrorist attack against the World Trade Center, and the changes made in evacuation procedures afterwards, contributed to the lack of panic on 9/11. Bill Burns said that the purest form of panic is a drowning person. Jean Camp asked whether the recent attacks against women’s health providers should be classified as terrorism, or whether we are better off framing them as crime. There was also talk about sky marshals and their effectiveness. I said that it isn’t sky marshals that are a deterrent, but the idea of sky marshals. Terence Taylor said that increasing uncertainty on the part of the terrorists is, in itself, a security measure. There was also a discussion about how risk-averse terrorists are; they seem to want to believe they have an 80% or 90% chance of success before they will launch an attack.

Next, lunch—and two final sessions this afternoon.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 12:01 PM5 Comments

Second SHB Workshop Liveblogging (6)

The first session of the morning was “Foundations,” which is kind of a catch-all for a variety of things that didn’t really fit anywhere else. Rachel Greenstadt moderated.

Terence Taylor, International Council for the Life Sciences (suggested video to watch: Darwinian Security; Natural Security), talked about the lessons evolution teaches about living with risk. Successful species didn’t survive by eliminating the risks of their environment; they survived by adaptation. Adaptation isn’t always what you think. For example, you could view the collapse of the Soviet Union as a failure to adapt, but you could also view it as successful adaptation. Risk is good. Risk is essential for the survival of a society, because risk-takers are the drivers of change. In the discussion phase, John Mueller pointed out a key difference between human and biological systems: humans tend to respond dramatically to anomalous events (the anthrax attacks), while biological systems respond to sustained change. And David Livingstone Smith asked about the difference between biological adaptation, which affects the reproductive success of an organism’s genes even at the expense of the organism, and security adaptation. (I recommend the book he edited: Natural Security: A Darwinian Approach to a Dangerous World.)

Andrew Odlyzko, University of Minnesota (suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict between Efficiency and Fairness in Markets, Economics, Psychology, and Sociology of Security), discussed human-space vs. cyberspace. People cannot build secure systems—we know that—but people also cannot live with secure systems. We require a certain amount of flexibility in our systems. And finally, people don’t need secure systems. We survive with an astounding amount of insecurity in our world. The problem with cyberspace is that it was originally conceived as separate from the physical world, and as something that could correct for the inadequacies of the physical world. Really, the two are intertwined, and human space more often corrects for the inadequacies of cyberspace. Lessons: build messy systems, not clean ones; create a web of ties to other systems; create permanent records.

danah boyd, Microsoft Research (suggested reading: Taken Out of Context—American Teen Sociality in Networked Publics), does ethnographic studies of teens in cyberspace. Teens tend not to lie to their friends in cyberspace, but they lie to the system. From an early age, they’ve been taught that they need to lie online to be safe. Teens regularly share their passwords: with their parents when forced, or with their best friend or significant other. This is a way of demonstrating trust. It’s part of the social protocol for this generation. In general, teens don’t use social media in the same way as adults do. And when they grow up, they won’t use social media in the same way as today’s adults do. Teens view privacy in terms of control, and take their cues about privacy from celebrities and how they use social media. And their sense of privacy is much more nuanced and complicated. In the discussion phase, danah wasn’t sure whether the younger generation would be more or less susceptible to Internet scams than the rest of us—they’re not nearly as technically savvy as we might think they are. “The only thing that saves teenagers is fear of their parents”; they try to lock their parents out, and lock others out in the process. Socio-economic status matters a lot, in ways that she is still trying to figure out. There are three different types of social networks: personal networks, articulated networks, and behavioral networks, and the three shouldn’t be conflated.

Mark Levine, Lancaster University (suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence), does social psychology. He argued against the common belief that groups are bad (mob violence, mass hysteria, peer group pressure). He collects data from UK CCTV cameras, searches it for aggressive behavior, and studies when and how bystanders either help escalate or de-escalate the situations. Results: as groups get bigger, there is no increase in anti-social acts, but there is a significant increase in pro-social acts. He has much more analysis and results, too complicated to summarize here. One key finding: when a third party intervenes in an aggressive interaction, the interaction is much more likely to de-escalate. Basically, groups can act against violence. “When it comes to violence (and security), group processes are part of the solution—not part of the problem?”

Jeff MacKie-Mason, University of Michigan (suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security), is an economist: “Security problems are incentive problems.” He discussed motivation, and how to design systems to take motivation into account. Humans are smart devices; they can’t be programmed, but they can be influenced through the sciences of motivational behavior: microeconomics, game theory, social psychology, psychodynamics, and personality psychology. He gave a couple of general examples of how these theories can inform security system design.

Joe Bonneau, Cambridge University, talked about social networks like Facebook, and privacy. People misunderstand why privacy and security are important in social networking sites like Facebook. People underestimate what Facebook really is; it really is a reimplementation of the entire Internet. “Everything on the Internet is becoming social,” and that makes security different. Phishing is different, 419-style scams are different. Social context makes some scams easier; social networks are fun, noisy, and unpredictable. “People use social networking systems with their brain turned off.” But social context can be used to spot frauds and anomalies, and can be used to establish trust.

Three more sessions to go. (I am enjoying liveblogging the event. It’s helping me focus and pay closer attention.)

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 9:54 AM6 Comments

Second SHB Workshop Liveblogging (5)

David Livingstone Smith moderated the fourth session, about (more or less) methodology.

Angela Sasse, University College London (suggested reading: The Compliance Budget: Managing Security Behaviour in Organisations; Human Vulnerabilities in Security Systems), has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply—this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

Bashar Nuseibeh, Open University (suggested reading: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis), talked about mobile phone security; specifically, Facebook privacy on mobile phones. He did something clever in his experiments. Because he wasn’t able to interview people at the moment they did something—he worked with mobile users—he asked them to provide a “memory phrase” that allowed him to effectively conduct detailed interviews at a later time. This worked very well, and resulted in all sorts of information about why people made privacy decisions at that earlier time.

James Pita, University of Southern California (suggested reading: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport), studies security personnel who have to guard a physical location. In his analysis, there are limited resources—guards, cameras, etc.—and a set of locations that need to be guarded. An example would be the Los Angeles airport, where a finite number of K-9 units need to guard eight terminals. His model uses a Stackelberg game to minimize predictability (otherwise, the adversary will learn the schedule and exploit it) while maximizing security. There are complications—observational uncertainty and bounded rationality on the part of the attackers—which he tried to capture in his model.
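
As a rough illustration of the underlying idea (not the ARMOR system itself), here is a brute-force sketch of a tiny Stackelberg security game: the defender commits to randomized coverage of three hypothetical targets with one guard, and the attacker observes that commitment and attacks his best target. All payoffs are invented.

# Toy Stackelberg security game: the defender commits to randomized coverage
# of a few targets with one guard; the attacker observes the mixed strategy
# and attacks the target that maximizes his expected payoff. A brute-force
# sketch of the concept only, with invented payoffs.

from itertools import product

targets = ["T1", "T2", "T3"]
# (defender payoff if covered, defender payoff if uncovered,
#  attacker payoff if covered, attacker payoff if uncovered)
payoffs = {
    "T1": (0, -10, -1, 10),
    "T2": (0, -5,  -1, 5),
    "T3": (0, -2,  -1, 2),
}

def attacker_best_response(coverage):
    """Target the attacker picks, given the coverage probabilities."""
    def attacker_value(t):
        _, _, cov_a, unc_a = payoffs[t]
        return coverage[t] * cov_a + (1 - coverage[t]) * unc_a
    return max(targets, key=attacker_value)

def defender_value(coverage, attacked):
    cov_d, unc_d, _, _ = payoffs[attacked]
    return coverage[attacked] * cov_d + (1 - coverage[attacked]) * unc_d

best = None
steps = 20  # discretize coverage probabilities in steps of 0.05
for c1, c2 in product(range(steps + 1), repeat=2):
    c3 = steps - c1 - c2
    if c3 < 0:
        continue  # one guard: coverage probabilities must sum to 1
    coverage = {"T1": c1 / steps, "T2": c2 / steps, "T3": c3 / steps}
    attacked = attacker_best_response(coverage)
    value = defender_value(coverage, attacked)
    if best is None or value > best[0]:
        best = (value, coverage, attacked)

print("Defender value %.2f with coverage %s (attacker hits %s)" % best)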

Markus Jakobsson, Palo Alto Research Center (suggested reading: Male, late with your credit card payment, and like to speed? You will be phished!; Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication), pointed out that auto insurers ask people if they smoke in order to get a feeling for whether they engage in high-risk behaviors. In his experiment, he selected 100 people who were the victim of online fraud and 100 people who were not. He then asked them to complete a survey about different physical risks such as mountain climbing and parachute jumping, financial risks such as buying stocks and real estate, and Internet risks such as visiting porn sites and using public wi-fi networks. He found significant correlation between different risks, but I didn’t see an overall pattern emerge. And in the discussion phase, several people had questions about the data. More analysis, and probably more data, is required. To be fair, he was still in the middle of his analysis.

Rachel Greenstadt, Drexel University (suggested reading: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections), discussed ways in which humans and machines can collaborate in making security decisions. These decisions are hard for several reasons: because they are context dependent, require specialized knowledge, are dynamic, and require complex risk analysis. And humans and machines are good at different sorts of tasks. Machine-style authentication: This guy I’m standing next to knows Jake’s private key, so he must be Jake. Human-style authentication: This guy I’m standing next to looks like Jake and sounds like Jake, so he must be Jake. The trick is to design systems that get the best of these two authentication styles and not the worst. She described two experiments examining two decisions: should I log into this website (the phishing problem), and should I publish this anonymous essay or will my linguistic style betray me?

Mike Roe, Microsoft, talked about crime in online games, particularly in Second Life and Metaplace. There are four classes of people in online games: explorers, socializers, achievers, and griefers. Griefers try to annoy socializers in social worlds like Second Life, or annoy achievers in competitive worlds like World of Warcraft. Crime is not necessarily economic; criminals trying to steal money are much less of a problem in these games than people just trying to be annoying. In the question session, Dave Clark said that griefers are a constant, but economic fraud grows over time. I responded that the two types of attackers are different people, with different personality profiles. I also pointed out that there is another kind of attacker: achievers who use illegal mechanisms to assist themselves.

In the discussion, Peter Neumann pointed out that safety is an emergent property, and requires security, reliability, and survivability. Others weren’t so sure.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Conference dinner tonight at Legal Seafoods. And four more sessions tomorrow.

Posted on June 11, 2009 at 4:50 PM5 Comments

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disneyworld, at international borders. The government of Canada is evaluating using iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that to defend ourselves against this proliferation of biometric authentication, individuals should publish their own biometrics. The rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and what the system can do, embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example: phishing is a mismatch problem between what’s in the user’s head and where the URL is actually going. SSL doesn’t work, but how should websites authenticate themselves to users?) Her solution is protected links: a set of secure bookmarks in protected browsers. She went on to describe a prototype and tests run with user subjects.
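
A minimal sketch of how I understand the secure-bookmarks idea, assuming hypothetical names and URLs; the real prototype surely does much more (certificate checks, isolated browsing contexts, and so on).

# Minimal sketch of the "secure bookmarks" idea: the user never types or
# clicks URLs for sensitive sites; a protected store maps friendly names to
# vetted destinations, and anything else is refused. Names and URLs here are
# hypothetical, not part of the actual prototype.

PROTECTED_BOOKMARKS = {
    "my-bank": "https://www.example-bank.com/login",
    "benefits": "https://benefits.example.org/portal",
}

def open_protected(name):
    url = PROTECTED_BOOKMARKS.get(name)
    if url is None:
        raise ValueError(f"No protected bookmark named {name!r}; refusing to guess")
    # A real protected browser would also verify the site's certificate and
    # isolate the session; here we just return the vetted URL.
    return url

print(open_protected("my-bank"))    # vetted destination
# open_protected("my-bankk")        # look-alike name: raises, never navigates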

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.
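
Here is a generic k-of-n sketch of trustee-based recovery, to show the shape of the idea; it is not Reeder’s actual protocol, and the names, threshold, and code format are all assumptions.

# Generic sketch of trustee-based ("social") account recovery: the user
# designates n trustees in advance; to recover a lost password, at least k
# of them must independently vouch for the request. This illustrates the
# k-of-n idea only -- a real protocol needs recovery codes delivered out of
# band, rate limits, and defenses against impersonation of the user.

import secrets

class SocialRecovery:
    def __init__(self, trustees, threshold):
        self.trustees = set(trustees)
        self.threshold = threshold
        self.vouchers = {}  # trustee -> one-time code issued to them

    def start_recovery(self):
        """Issue a one-time code to each trustee (out of band: phone, in person)."""
        self.vouchers = {t: secrets.token_hex(4) for t in self.trustees}
        return self.vouchers

    def recover(self, presented_codes):
        """The account owner collects codes from trustees and presents them."""
        valid = sum(1 for t, code in presented_codes.items()
                    if self.vouchers.get(t) == code)
        return valid >= self.threshold

r = SocialRecovery(["alice", "bob", "carol", "dave"], threshold=3)
codes = r.start_recovery()
print(r.recover({t: codes[t] for t in ["alice", "bob", "carol"]}))  # True
print(r.recover({"alice": codes["alice"]}))                         # False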

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. But since the warned-about hazards are generally not very hazardous, most people just ignore the warnings. “Often, software asks the user and provides little or no information to help user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.
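
A toy version of that triage policy might look like the sketch below; the thresholds and the danger score are placeholders, and a real system would derive the score from a classifier or reputation service.

# Toy version of the warning-triage policy described above: block when the
# estimated danger is high, stay quiet when it is low, and only interrupt
# the user in the grey area. Thresholds and scores are hypothetical.

HIGH = 0.9
LOW = 0.1

def handle_site(danger_score):
    if danger_score >= HIGH:
        return "block"   # don't even ask; too likely to be harmful
    if danger_score <= LOW:
        return "allow"   # don't interrupt; warning fatigue is real
    return "warn"        # grey area: give the user context and a choice

for score in (0.95, 0.02, 0.5):
    print(score, handle_site(score))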

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM5 Comments

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies to deal with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but made them more likely to be afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again those people were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.
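
As an example of the kind of contextual knowledge she means, here is how the host can be pulled out of a deceptive URL with Python’s standard library; the URL itself is invented.

# One concrete piece of "contextual understanding": the part that matters in
# a URL is the host, not whatever familiar brand name appears elsewhere in
# the string. The example URL is made up for illustration.

from urllib.parse import urlparse

url = "http://www.paypal.com.account-update.example.net/login?id=123"
parsed = urlparse(url)

print(parsed.hostname)   # www.paypal.com.account-update.example.net
# The registrable domain here is example.net; the "paypal.com" prefix is just
# a subdomain label chosen by the attacker, so the page has nothing to do
# with PayPal.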

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture in picture” attacks, where a hostile site embedded a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, the user experience, and mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”
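
A quick back-of-the-envelope sketch shows why guessability matters; the answer frequencies below are invented, not Schechter’s data.

# Back-of-the-envelope illustration of why popular secret-question answers
# are dangerous: if an attacker gets a handful of guesses per account, the
# few most common answers cover a surprising share of users. The frequencies
# below are hypothetical.

answer_frequencies = {
    "smith": 0.08, "blue": 0.06, "pizza": 0.05, "toyota": 0.04, "max": 0.03,
}

guesses_allowed = 3
top_answers = sorted(answer_frequencies.values(), reverse=True)[:guesses_allowed]
coverage = sum(top_answers)

print(f"An attacker with {guesses_allowed} guesses succeeds against "
      f"about {coverage:.0%} of accounts")   # ~19% with these made-up numbers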

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions—except for the couple of presenters who would rather not be taped—I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM9 Comments

Second SHB Workshop Liveblogging (2)

The first session was about deception, moderated by David Clark.

Frank Stajano, Cambridge University (suggested reading: Understanding victims: Six principles for systems security), presented research with Paul Wilson, who films actual scams for “The Real Hustle.” His point is that we build security systems based on our “logic,” but users don’t always follow our logic. It’s fraudsters who really understand what people do, so we need to understand what the fraudsters understand. Things like distraction, greed, unknown accomplices, and social compliance are important.

David Livingstone Smith, University of New England (suggested reading: Less than human: self-deception in the imagining of others; Talk on Lying at La Ciudad de Las Ideas; a subsequent discussion; Why War?), is a philosopher by training, and goes back to basics: “What are we talking about?” A theoretical definition—“that which something has to have to fall under a term”—of deception is difficult to construct. “Cause to have a false belief,” from the Oxford English Dictionary, is inadequate. “To deceive is to intentionally cause someone to have a false belief” also doesn’t work. “Intentionally causing someone to have a false belief that the speaker knows to be false” still isn’t good enough. The fundamental problem is that these are anthropocentric definitions. Deception is not unique to humans; it gives organisms an evolutionary edge. For example, the mirror orchid fools a wasp into landing on it by looking like a female wasp and giving off chemicals that mimic her. This example shows that we need a broader definition of “purpose.” His formal definition: “For systems A and B, A deceives B iff A possesses some character C with proper function F, and B possesses a mechanism C* with the proper function F* of producing representations, such that the proper function of C is to cause C* to fail to perform F* by causing C* to form false representations, and C does so in virtue of performing F, and B’s falsely representing enables some feature of A to perform its proper function.”

I spoke next, about the psychology of Conficker, how the human brain buys security, and why science fiction writers shouldn’t be hired to think about terrorism risks (to be published on Wired.com next week).

Dominic Johnson, University of Edinburgh (suggested reading: Paradigm Shifts in Security Strategy; Perceptions of victory and defeat), talked about his chapter in the book Natural Security: A Darwinian Approach to a Dangerous World. Life has 3.5 billion years of experience in security innovation; let’s look at how biology approaches security. Biomimicry, ecology, paleontology, animal behavior, evolutionary psychology, immunology, epidemiology, selection, and adaptation are all relevant. Redundancy is a very important survival tool for species. Here’s an adaptation example: the 9/11 threat was real and we knew about it, but we didn’t do anything. His thesis: adaptation to novel security threats tends to occur after major disasters. There are many historical examples of this; Pearl Harbor, for example. Causes include sensory biases, psychological biases, leadership biases, organizational biases, and political biases—all pushing us towards maintaining the status quo. So it’s natural for us to adapt poorly to security threats in the modern world. A questioner from the audience asked whether control theory had any relevance to this model.

Jeff Hancock, Cornell University (suggested reading: On Lying and Being Lied To: A Linguistic Analysis of Deception in Computer-Mediated Communication; Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in Online Dating Profiles), studies interpersonal deception: how the way we lie to each other intersects with communications technologies, how technologies change the way we lie, and whether technology can be used to detect lying. Despite new technology, people lie for traditional reasons. For example: on dating sites, men tend to lie about their height and women tend to lie about their weight. The recordability of the Internet also changes how we lie. The use of the first person singular tends to go down the more people lie. He verified this in many spheres, such as how people describe themselves in chat rooms, and true versus false statements that the Bush administration made about 9/11 and Iraq. The effect was more pronounced when administration officials were answering questions than when they were reading prepared remarks.
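
A crude sketch of that linguistic cue, measuring the rate of first-person-singular pronouns in a text; real deception research uses much richer feature sets and proper tokenization, so this is only illustrative.

# Crude sketch of the linguistic cue mentioned above: the rate of
# first-person-singular pronouns in a piece of text.

import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FIRST_PERSON_SINGULAR)
    return hits / len(words)

print(first_person_rate("I told my friend I was stuck at the office."))
print(first_person_rate("The report was finished and sent on time."))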

EDITED TO ADD (6/11): Adam Shostack liveblogged this session, too. And Ross’s liveblogging is in his blog post’s comments.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 9:37 AM8 Comments

Second SHB Workshop Liveblogging (1)

I’m at SHB09, the Second Interdisciplinary Workshop on Security and Human Behavior, at MIT. This is a two-day gathering of computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others—all of whom are studying the human side of security—organized by Ross Anderson, Alessandro Acquisti, and myself.

Here’s the schedule. Last year’s link will give you a good idea what the event is about—be sure to read Ross’s summaries of the individual talks in the comments of his blog entry—and I’ll be posting talk summaries in subsequent posts this year, hopefully as the event progresses.

EDITED TO ADD (6/14): My liveblogging of the eight sessions: 1, 2, 3, 4, 5, 6, 7, 8. Ross Anderson’s liveblogging is in the first eight comments of this blog post. Adam Shostack’s liveblogging: 1, 2, 3, 4, 5, 6, 7, 8. Matt Blaze recorded audio.

EDITED TO ADD (6/28): Attendee John Adams comments on the workshop.

Posted on June 11, 2009 at 6:46 AM7 Comments

Malware Steals ATM Data

One of the risks of using a commercial OS for embedded systems like ATMs: it’s easier to write malware against it:

The report does not detail how the ATMs are infected, but it seems likely that the malware is encoded on a card that can be inserted in an ATM card reader to mount a buffer overflow attack. The machine is compromised by replacing the isadmin.exe file to infect the system.

The malicious isadmin.exe program then uses the Windows API to install the functional attack code by replacing a system file called lsass.exe in the C:\WINDOWS directory.

Once the malicious lsass.exe program is installed, it collects users account numbers and PIN codes and waits for a human controller to insert a specially crafted control card to take over the ATM.

After the ATM is put under control of a human attacker, they can perform various functions, including harvesting the purloined data or even ejecting the cash box.

EDITED TO ADD (6/14): Seems like the story I quoted was jumping to conclusions. The actual report says “the malware is installed and activated through a dropper file (a file that an attacker can use to deploy tools onto a compromised system) by the name of isadmin.exe,” which doesn’t really sound like it’s referring to a buffer overflow attack carried out through a card emulator. Also, The Register says “[the] malicious programs can be installed only by people with physical access to the machines, making some level of insider cooperation necessary.”

Posted on June 10, 2009 at 1:51 PM49 Comments

Industry Differences in Types of Security Breaches

Interhack has been working on a taxonomy of security breaches, and has an interesting conclusion:

The Health Care and Social Assistance sector reported a larger than average proportion of lost and stolen computing hardware, but reported an unusually low proportion of compromised hosts. Educational Services reported a disproportionally large number of compromised hosts, while insider conduct and lost and stolen hardware were well below the proportion common to the set as a whole. Public Administration’s proportion of compromised host reports was below average, but their proportion of processing errors was well above the norm. The Finance and Insurance sector experienced the smallest overall proportion of processing errors, but the highest proportion of insider misconduct. Other sectors showed no statistically significant difference from the average, either due to a true lack of variance, or due to an insignificant number of samples for the statistical tests being used

Full study is here.

Posted on June 10, 2009 at 6:18 AM14 Comments

Teaching Children to Spot Terrorists

You can’t make this stuff up:

More than 2,000 10 and 11-year-olds [in the UK] will see a short film, which urges them to tell the police, their parents or a teacher if they hear anyone expressing extremist views.

[…]

A lion explains that terrorists can look like anyone, while a cat tells pupils that [they] should get help if they are being bullied and a toad tells them how to cross the road.

The terrorism message is also illustrated with a re-telling of the story of Guy Fawkes, saying that his strong views began forming when he was at school in York. It has been designed to deliver the message of fighting terrorism in [an] accessible way for children.

I’ve said this before:

If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.

Posted on June 9, 2009 at 2:45 PM62 Comments

Corrupted Word Files for Sale

On one hand, this is clever:

We offer a wide array of corrupted Word files that are guaranteed not to open on a Mac or PC. A corrupted file is a file that contains scrambled and unrecoverable data due to hardware or software failure. Files may become corrupted when something goes wrong while a file is being saved e.g. the program saving the file might crash. Files may also become corrupted when being sent via email. The perfect excuse to buy you that extra time!

This download includes a 2, 5, 10, 20, 30 and 40 page corrupted Word file. Use the appropriate file size to match each assignment. Who’s to say your 10 page paper didn’t get corrupted? Exactly! No one can! Its the perfect excuse to buy yourself extra time and not hand in a garbage paper.

Only $3.95. Cheap. Although for added verisimilitude, they should have an additional service where you send them a file—a draft of your paper, for example—and they corrupt it and send it back.

But on the other hand, it’s services like these that will force professors to treat corrupted attachments as work not yet turned in, and harm innocent homework submitters.

EDITED TO ADD (6/9): Here’s how to make a corrupted pdf file for free.

Posted on June 9, 2009 at 6:46 AM76 Comments

British High Schoolers Write About CCTV in School

If you think that under-20-year-olds don’t care about privacy, this is an eloquent op-ed by two students about why CCTV cameras have no place in their UK school:

Adults are often quick to define the youth of today as stereotypical troublemakers and violent offenders—­ generalisations which are prompted by the media—­ when in fact the majority of students at our school are as responsible and arguably better behaved then the majority of adults. Some commentators insinuated that we overheard adults talking about rights and repeated it. That notion isn’t worth the space it was typed upon. We are A-level politics students who have been studying civil liberties as part of the curriculum for the last two years. Sam campaigned for David Davis when he resigned over the issue of civil liberties and spoke at speakers’ corner about the issue. The criticism of our campaign only serves to illustrate the ignorance of adults who have surrendered within only the last few years our right to protest in parliament, our right to go about our business without being stopped and questioned by police about our identity and our affairs, and our personal privacy.

Eroding standards in schools and deteriorating discipline are down to a broken society and the failure of the education system. The truth is that we are whatever the generation before us has created. If you criticise us, we are your failures; and if you applaud us we are your successes, and we reflect the imperfections of society and of human life. If you want to reform the education system, if you want to raise education standards, then watching children every hour of every day isn’t the answer. The answer is to encourage students to learn by creating an environment in which they can express their ideas freely and without intimidation.

Posted on June 8, 2009 at 1:38 PM35 Comments

Fear of Aerial Images

Time for some more fear about terrorists using maps and images on the Internet.

But the more striking images come when Portzline clicks on the “bird’s-eye” option offered by the map service. The overhead views, which come chiefly from satellites, are replaced with strikingly clear oblique-angle photos, chiefly shot from aircraft. By clicking another button, he can see the same building from all four sides.

“What we’re seeing here is a guard shack,” Portzline said, pointing to a rooftop structure. “This is a communications device for the nuclear plant.”

He added, “This particular building is the air intake for the control room. And there’s some nasty thing you could do to disable the people in the control room. So this type of information should not be available. I look at this and just say, ‘Wow.’ ”

Terror expert and author Brian Jenkins agreed that the pictures are “extraordinarily impressive.”

“If I were a terrorist planning an attack, I would want that imagery. That would facilitate that mission,” he said. “And given the choice between renting an airplane or trying some other way to get it, versus tapping in some things on my computer, I certainly want to do the latter. (It will) reduce my risk, and the first they’re going to know about my attack is when it takes place.”

Gadzooks, people, enough with the movie plots.

Joel Anderson, a member of the California Assembly, has more expansive goals. He has introduced a bill in the state Legislature that would prohibit “virtual globe” services from providing unblurred pictures of schools, churches and government or medical facilities in California. It also would prohibit those services from providing street-view photos of those buildings.

“It struck me that a person in a tent halfway around the world could target an attack like that with a laptop computer,” said Anderson, a Republican legislator who represents San Diego’s East County. Anderson said he doesn’t want to limit technology, but added, “There’s got to be some common sense.”

I wonder why he thinks that “schools, churches and government or medical facilities” are terrorist targets worth protecting, and movie theaters, stadiums, concert halls, restaurants, train stations, shopping malls, Toys-R-Us stores on the day after Thanksgiving, and theme parks are not. After all, “there’s got to be some common sense.”

Now, both have launched efforts to try to get Internet map services to remove or blur images of sensitive sites, saying the same technology that allows people to see a neighbor’s swimming pool can be used by terrorists to choose targets and plan attacks.

Yes, and the same technology that allows people to call their friends can be used by terrorists to choose targets and plan attacks. And the same technology that allows people to commute to work can be used by terrorists to plan and execute attacks. And the same technology that allows you to read this blog post…repeat until tired.

Of course, this is nothing I haven’t said before:

Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then. I haven’t seen it talked about yet, but the Mumbai terrorists used boats as well. They also wore boots. They ate lunch at restaurants, drank bottled water, and breathed the air. Society survives all of this because the good uses of infrastructure far outweigh the bad uses, even though the good uses are—by and large—small and pedestrian and the bad uses are rare and spectacular. And while terrorism turns society’s very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response—just as we would if we banned cars because bank robbers used them too.

You’re not going to stop terrorism by deliberately degrading our infrastructure. Refuse to be terrorized, everyone.

Posted on June 8, 2009 at 6:15 AM65 Comments

Secret Government Communications Cables Buried Around Washington, DC

Interesting:

This part happens all the time: A construction crew putting up an office building in the heart of Tysons Corner a few years ago hit a fiber optic cable no one knew was there.

This part doesn’t: Within moments, three black sport-utility vehicles drove up, a half-dozen men in suits jumped out and one said, “You just hit our line.”

Whose line, you may ask? The guys in suits didn’t say, recalled Aaron Georgelas, whose company, the Georgelas Group, was developing the Greensboro Corporate Center on Spring Hill Road. But Georgelas assumed that he was dealing with the federal government and that the cable in question was “black” wire—a secure communications line used for some of the nation’s most secretive intelligence-gathering operations.

Black wire is one of the looming perils of the massive construction that has come to Tysons, where miles and miles of secure lines are thought to serve such nearby agencies as the Office of the Director of National Intelligence, the National Counterterrorism Center and, a few miles away in McLean, the Central Intelligence Agency. After decades spent cutting through red tape to begin work on a Metrorail extension and the widening of the Capital Beltway, crews are now stirring up tons of dirt where the black lines are located.

“Yeah, we heard about the black SUVs,” said Paul Goguen, the engineer in charge of relocating electric, gas, water, sewer, cable, telephone and other communications lines to make way for Metro through Tysons. “We were warned that if they were hit, the company responsible would show up before you even had a chance to make a phone call.”

EDITED TO ADD (6/4): In comments, Angel one gives a great demonstration of the security mindset:

So if I want to stop a construction project in the DC area, all I need to do is drive up in a black SUV, wear a suit and sunglasses, and refuse to identify myself.

Posted on June 4, 2009 at 1:07 PM45 Comments

Cloud Computing

This year’s overhyped IT concept is cloud computing. Also called software as a service (SaaS), cloud computing is when you run software over the internet and access it via a browser. The Salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

But, hype aside, cloud computing is nothing new. It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s what social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs do. Any IT outsourcing—network infrastructure, security monitoring, remote hosting—is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors—and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

SaaS moves the trust boundary out one step further—you now have to also trust your software service vendors—but it doesn’t fundamentally change anything. It’s just another vendor we need to trust.

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if the vendors you have to trust aren’t as trustworthy as you’d like. With any outsourcing model, whether it be cloud computing or something else, you can’t. You have to trust your outsourcer completely. You not only have to trust the outsourcer’s security, but also its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt. You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight. Or raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.

There are two different types of cloud computing customers. The first pays at most a nominal fee for these services—or uses them for free in exchange for ads: e.g., Gmail and Facebook. These customers have no leverage with their outsourcers. You can lose everything. Companies like Google and Amazon won’t spend a lot of time caring. The second type of customer pays considerably for these services: to Salesforce.com, MessageLabs, managed network companies, and so on. These customers have more leverage, providing they write their service contracts correctly. Still, nothing is guaranteed.

Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

This essay originally appeared in The Guardian.

EDITED TO ADD (6/4): Another opinion.

EDITED TO ADD (6/5): A rebuttal. And an apology for the tone of the rebuttal. The reason I am talking so much about cloud computing is that reporters and interviewers keep asking me about it. I feel kind of dragged into this whole thing.

EDITED TO ADD (6/6): At the Computers, Freedom, and Privacy conference last week, Bob Gellman said (this, by him, is worth reading) that the nine most important words in cloud computing are: “terms of service,” “location, location, location,” and “provider, provider, provider”—basically making the same point I did. You need to make sure the terms of service you sign up to are ones you can live with. You need to make sure the location of the provider doesn’t subject you to any laws that you can’t live with. And you need to make sure your provider is someone you’re willing to work with. Basically, if you’re going to give someone else your data, you need to trust them.

Posted on June 4, 2009 at 6:14 AM

Why Is Terrorism so Hard?

I don’t know how I missed this great series from Slate in February. It’s eight essays exploring why there have been no follow-on terrorist attacks in the U.S. since 9/11 (not counting the anthrax mailings, I guess). Some excerpts:

Al-Qaida’s successful elimination of the Twin Towers, part of the Pentagon, four jetliners, and nearly 3,000 innocent lives makes the terror group seem, in hindsight, diabolically brilliant. But when you review how close the terrorists came to being exposed by U.S. intelligence, 9/11 doesn’t look like an ingenious plan that succeeded because of shrewd planning. It looks like a stupid plan that succeeded through sheer dumb luck.

[…]

Even when it isn’t linked directly to terrorism, Muslim radicalism seems more prevalent—and certainly more visible—inside the United Kingdom, and in Western Europe generally, than it is inside the United States.

Why the difference? Economics may be one reason. American Muslims are better-educated and wealthier than the average American.

[…]

According to [one] theory, the 9/11 attacks were so stunning a success that they left al-Qaida’s leadership struggling to conceive and carry out an even more fearsome and destructive plan against the United States. In his 2006 book The One Percent Doctrine, journalist Ron Suskind attributes to the U.S. intelligence community the suspicion that “Al Qaeda wouldn’t want to act unless it could top the World Trade Center and the Pentagon with something even more devastating, creating an upward arc of rising and terrible expectation as to what, then, would follow.”

[…]

From a broader policy viewpoint, the Bush administration’s most significant accomplishment, terrorism experts tend to agree, was the 2001 defeat of Afghanistan’s Taliban regime and the destruction of Bin Laden’s training camps. As noted in “The Terrorists-Are-Dumb Theory” and “The Melting Pot Theory,” two-thirds of al-Qaida’s leadership was captured or killed. Journalist Lawrence Wright estimates that nearly 80 percent of al-Qaida’s Afghanistan-based membership was killed in the U.S. invasion, and intelligence estimates suggest al-Qaida’s current membership may be as low as 200 or 300.

[…]

The departing Bush administration’s claim that deposing Saddam Hussein helped prevent acts of terror in the United States has virtually no adherents, except to the extent that it drew some jihadis into Iraq. The Iraq war reduced U.S. standing in the Muslim world, especially when evidence surfaced that U.S. military officials had tortured and humiliated prisoners at the Abu Ghraib prison.

[…]

When Schelling, Abrams, and Sageman argue that terrorists are irrational, what they mean is that terror groups seldom realize their big-picture strategic goals. But Berrebi says you can’t pronounce terrorists irrational until you know what they really want. “We don’t know what are the real goals of each organization,” he says. Any given terror organization is likely to have many competing and perhaps even contradictory goals. Given these groups’ inherently secret nature, outsiders aren’t likely to learn which of these goals is given priority.

Read the whole thing.

Posted on June 3, 2009 at 1:35 PM37 Comments

Arming the Boston Police with Assault Rifles

Whose idea is this?

The Boston Police Department is preparing a plan to arm as many as 200 patrol officers with semiautomatic assault rifles, a significant boost in firepower that department leaders believe is necessary to counter terrorist threats, according to law enforcement officials briefed on the plan.

The initiative calls for equipping specialized units, such as the bomb squad and harbor patrol, with the high-powered long-range M16 rifles first, the officials said. The department would then distribute the weapons to patrol officers in neighborhood precincts over the next several months, according to the two law enforcement officials, who spoke on the condition of anonymity because they did not have permission to speak publicly.

Remember, the “terrorist threats” that plague Boston include blinking signs, blinking name badges, and Linux. Would you trust the police there with automatic weapons?

And anyway, how exactly does a police force armed with automatic weapons protect against terrorism? Does it make it harder for the terrorists to plant bombs? To hijack aircraft? Sure, you can invent a movie-plot scenario involving a Mumbai-like attack and have a Bruce Willis-like armed policeman save the day, but—realistically—is this really the best way for us to be spending our counterterrorism dollar?

Luckily, people seem to be coming to their senses.

EDITED TO ADD: These are semi-automatic rifles, not fully automatic. I think the point is more about the militarization of the police than the exact specifications of the weapons in this case.

Posted on June 3, 2009 at 5:57 AM110 Comments

Update on Computer Science Student's Computer Seizure

In April, I blogged about the Boston police seizing a student’s computer for, among other things, running Linux. (Anyone who runs Linux instead of Windows is obviously a scary bad hacker.)

Last week, the Massachusetts Supreme Court threw out the search warrant:

Massachusetts Supreme Judicial Court Associate Justice Margot Botsford on Thursday said that Boston College and Massachusetts State Police had insufficient evidence to search the dorm room of BC senior Riccardo Calixte. During the search, police confiscated a variety of electronic devices, including three laptop computers, two iPod music players, and two cellphones.

Police obtained a warrant to search Calixte’s dorm after a roommate accused him of breaking into the school’s computer network to change other students’ grades, and of spreading a rumor via e-mail that the roommate is gay.

Botsford said the search warrant affidavit presented considerable evidence that the e-mail came from Calixte’s laptop computer. But even if it did, she said, spreading such rumors is probably not illegal. Botsford also said that while breaking into BC’s computer network would be criminal activity, the affidavit supporting the warrant presented little evidence that such a break-in had taken place.

Posted on June 2, 2009 at 12:01 PM41 Comments

Research on Movie-Plot Threats

This could be interesting:

Emerging Threats and Security Planning: How Should We Decide What Hypothetical Threats to Worry About?

Brian A. Jackson, David R. Frelinger

Concerns about how terrorists might attack in the future are central to the design of security efforts to protect both individual targets and the nation overall. In thinking about emerging threats, security planners are confronted by a panoply of possible future scenarios coming from sources ranging from the terrorists themselves to red-team brainstorming efforts to explore ways adversaries might attack in the future. This paper explores an approach to assessing emerging and/or novel threats and deciding whether—or how much—they should concern security planners by asking two questions: (1) Are some of the novel threats “niche threats” that should be addressed within existing security efforts? (2) Which of the remaining threats are attackers most likely to execute successfully and should therefore be of greater concern for security planners? If threats can reasonably be considered niche threats, they can be prudently addressed in the context of existing security activities. If threats are unusual enough, suggest significant new vulnerabilities, or their probability or consequences means they cannot be considered lesser included cases within other threats, prioritizing them based on their ease of execution provides a guide for which threats merit the greatest concern and most security attention. This preserves the opportunity to learn from new threats yet prevents security planners from being pulled in many directions simultaneously by attempting to respond to every threat at once.

Full paper available here.

Posted on June 1, 2009 at 3:29 PM17 Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.