Blog: March 2012 Archives

Friday Squid Blogging: How Squid Hear

Interesting research:

The squid use two closely spaced organs called statocysts to sense sound.

“I think of a statocyst as an inside-out tennis ball,” explains Dr Mooney.

“It’s got hairs on the inside and this little dense calcium stone that sits on those hair cells.

“What happens is that the sound wave actually moves the squid back and forth, and this dense object stays relatively still. It bends the hair cells and generates a nerve response to the brain.”

[…]

“They react in about 10 milliseconds,” he says. “That’s really fast; it’s essentially a reflex. That’s really important in terms of behavioural responses because they’re not thinking about processing it; they’re not deciding whether they should react—they’re just doing it.

And he adds: “The responses can be really dynamic. They can be a change in colour; they can be jetting (moving quickly) or inking responses. Squid are also very cool because you can look at a range of colour changes—is it a really startling colour change or a more subtle change?

“Squid can probably use their hearing to find their way around the environment—to sense the soundscape of the environment; for example, to find their way towards a reef or away from a reef, towards the surface or away from the surface.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on March 30, 2012 at 4:28 PM • 40 Comments

Harms of Post-9/11 Airline Security

As I posted previously, I have been debating former TSA Administrator Kip Hawley on the Economist website. I didn’t bother reposting my opening statement and rebuttal, because—even though I thought I did a really good job with them—they were largely things I’ve said before. In my closing statement, I talked about specific harms post-9/11 airport security has caused. This is mostly new, so here it is, British spelling and punctuation and all.


In my previous two statements, I made two basic arguments about post-9/11 airport security. One, we are not doing the right things: the focus on airports at the expense of the broader threat is not making us safer. And two, the things we are doing are wrong: the specific security measures put in place since 9/11 do not work. Kip Hawley doesn’t argue with the specifics of my criticisms, but instead provides anecdotes and asks us to trust that airport security—and the Transportation Security Administration (TSA) in particular—knows what it’s doing.

He wants us to trust that a 400-ml bottle of liquid is dangerous, but transferring it to four 100-ml bottles magically makes it safe. He wants us to trust that the butter knives given to first-class passengers are nevertheless too dangerous to be taken through a security checkpoint. He wants us to trust the no-fly list: 21,000 people so dangerous they’re not allowed to fly, yet so innocent they can’t be arrested. He wants us to trust that the deployment of expensive full-body scanners has nothing to do with the fact that the former secretary of homeland security, Michael Chertoff, lobbies for one of the companies that makes them. He wants us to trust that there’s a reason to confiscate a cupcake (Las Vegas), a 3-inch plastic toy gun (London Gatwick), a purse with an embroidered gun on it (Norfolk, VA), a T-shirt with a picture of a gun on it (London Heathrow) and a plastic lightsaber that’s really a flashlight with a long cone on top (Dallas/Fort Worth).

At this point, we don’t trust America’s TSA, Britain’s Department for Transport, or airport security in general. We don’t believe they’re acting in the best interests of passengers. We suspect their actions are the result of politicians and government appointees making decisions based on their concerns about the security of their own careers if they don’t act tough on terror, and capitulating to public demands that “something must be done”.

In this final statement, I promised to discuss the broader societal harms of post-9/11 airport security. This loss of trust—in both airport security and counterterrorism policies in general—is the first harm. Trust is fundamental to society. There is an enormous amount written about this; high-trust societies are simply happier and more prosperous than low-trust societies. Trust is essential for both free markets and democracy. This is why open-government laws are so important; trust requires government transparency. The secret policies implemented by airport security harm society because of their very secrecy.

The humiliation, the dehumanisation and the privacy violations are also harms. That Mr Hawley dismisses these as mere “costs in convenience” demonstrates how out of touch the TSA is with the people it claims to be protecting. Additionally, there’s actual physical harm: the radiation from full-body scanners, still not publicly tested for safety; and the mental harm suffered by both abuse survivors and children: the things screeners tell them as they touch their bodies are uncomfortably similar to what child molesters say.

In 2004, the average extra waiting time due to TSA procedures was 19.5 minutes per person. That’s a total economic loss, in America, of $10 billion per year, more than the TSA’s entire budget. The increased automobile deaths due to people deciding to drive instead of fly number 500 per year. Both of these numbers are for America only, and by themselves demonstrate that post-9/11 airport security has done more harm than good.

The current TSA measures create an even greater harm: loss of liberty. Airports are effectively rights-free zones. Security officers have enormous power over you as a passenger. You have limited rights to refuse a search. Your possessions can be confiscated. You cannot make jokes, or wear clothing, that airport security does not approve of. You cannot travel anonymously. (Remember when we would mock Soviet-style “show me your papers” societies? That we’ve become inured to the very practice is a harm.) And if you’re on a certain secret list, you cannot fly, and you enter a Kafkaesque world where you cannot face your accuser, protest your innocence, clear your name, or even get confirmation from the government that someone, somewhere, has judged you guilty. These police powers would be illegal anywhere but in an airport, and we are all harmed—individually and collectively—by their existence.

In his first statement, Mr Hawley related a quote predicting “blood running in the aisles” if small scissors and tools were allowed on planes. That was said by Corey Caldwell, an Association of Flight Attendants spokesman, in 2005. It was not the statement of someone who is thinking rationally about airport security; it was the voice of irrational fear.

Increased fear is the final harm, and its effects are both emotional and physical. By sowing mistrust, by stripping us of our privacy—and in many cases our dignity—by taking away our rights, by subjecting us to arbitrary and irrational rules, and by constantly reminding us that this is the only thing between us and death at the hands of terrorists, the TSA and its ilk are sowing fear. And by doing so, they are playing directly into the terrorists’ hands.

The goal of terrorism is not to crash planes, or even to kill people; the goal of terrorism is to cause terror. Liquid bombs, PETN, planes as missiles: these are all tactics designed to cause terror by killing innocents. But terrorists can only do so much. They cannot take away our freedoms. They cannot reduce our liberties. They cannot, by themselves, cause that much terror. It’s our reaction to terrorism that determines whether or not their actions are ultimately successful. That we allow governments to do these things to us—to effectively do the terrorists’ job for them—is the greatest harm of all.

Return airport security checkpoints to pre-9/11 levels. Get rid of everything that isn’t needed to protect against random amateur terrorists and won’t work against professional al-Qaeda plots. Take the savings thus earned and invest them in investigation, intelligence, and emergency response: security outside the airport, security that does not require us to play guessing games about plots. Recognise that 100% safety is impossible, and also that terrorism is not an “existential threat” to our way of life. Respond to terrorism not with fear but with indomitability. Refuse to be terrorized.

EDITED TO ADD (3/20): Cory Doctorow on the exchange:

All of Hawley’s best arguments sum up to “Someone somewhere did something bad, and if he’d tried it on us, we would have caught him.” His closing clincher? They heard a bad guy was getting on a plane somewhere. They figured out which plane, stopped it from taking off and “resolved” the situation. Seeing as there were no recent reports of foiled terrorist plots, I’m guessing the “resolution” was “it turned out we made a mistake.” But Hawley’s takeaway is: “look at how fast our mistake was!”

EDITED TO ADD (4/19): German translation of the closing statement.

Posted on March 29, 2012 at 6:53 AM • 121 Comments

The Effects of Data Breach Litigation

“Empirical Analysis of Data Breach Litigation,” by Sasha Romanosky, David Hoffman, and Alessandro Acquisti:

Abstract: In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff’s chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
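The headline numbers come from logistic (“binary outcome”) regression, where exponentiated coefficients are read as odds ratios. Here is a minimal sketch of that style of analysis on synthetic data; this is not the authors’ code or dataset, and the variable names and the statsmodels dependency are my assumptions:

```python
# Sketch: how a binary outcome regression yields statements like
# "odds of being sued are 3.5 times greater". Synthetic data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
financial_harm = rng.integers(0, 2, n)     # did individuals lose money?
credit_monitoring = rng.integers(0, 2, n)  # did the firm offer free monitoring?

# Simulate lawsuits: harm raises the log-odds, free monitoring lowers them.
log_odds = -1.0 + 1.25 * financial_harm - 1.8 * credit_monitoring
sued = rng.random(n) < 1 / (1 + np.exp(-log_odds))

X = sm.add_constant(np.column_stack([financial_harm, credit_monitoring]))
fit = sm.Logit(sued.astype(int), X).fit(disp=False)
print(np.exp(fit.params))  # exp(coefficient) = odds ratio; exp(1.25) ≈ 3.5
```

With these planted coefficients, the fitted odds ratios come out near 3.5 for financial harm and near 1/6 for credit monitoring, which is the shape of the relationship the abstract describes.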

The full paper is available by using the one-click download button.

Posted on March 27, 2012 at 6:46 AM • 2 Comments

Congressional Testimony on the TSA

I was supposed to testify today about the TSA in front of the House Committee on Oversight and Government Reform. I was informally invited a couple of weeks ago, and formally invited last Tuesday:

The hearing will examine the successes and challenges associated with Advanced Imaging Technology (AIT), the Screening of Passengers by Observation Techniques (SPOT) program, the Transportation Worker Identification Credential (TWIC), and other security initiatives administered by the TSA.

On Friday, at the request of the TSA, I was removed from the witness list. The excuse was that I am involved in a lawsuit against the TSA, trying to get them to suspend their full-body scanner program. But it’s pretty clear that the TSA is afraid of public testimony on the topic, and especially of being challenged in front of Congress. They want to control the story, and it’s easier for them to do that if I’m not sitting next to them pointing out all the holes in their position. Unfortunately, the committee went along with them. (They tried to pull the same thing last year and it failed; see the video at the 10:50 mark.)

The committee said it would try to invite me back for another hearing, but with my busy schedule, I don’t know if I will be able to make it. And it would be far less effective for me to testify without forcing the TSA to respond to my points.

I’m there in spirit, though. The title of the hearing is “TSA Oversight Part III: Effective Security or Security Theater?”

Posted on March 26, 2012 at 1:02 PM • 92 Comments

Rare Spanish Enigma Machine

This is a neat story:

A pair of rare Enigma machines used in the Spanish Civil War have been given to the head of GCHQ, Britain’s communications intelligence agency. The machines – only recently discovered in Spain – fill in a missing chapter in the history of British code-breaking, paving the way for crucial successes in World War II.

Fun paragraphs:

A non-commissioned officer found the machines almost by chance, only a few years ago, in a secret room at the Spanish Ministry of Defence in Madrid.

“Nobody entered there because it was very secret,” says Felix Sanz, the director of Spain’s intelligence service.

“And one day somebody said ‘Well if it is so secret, perhaps there is something secret inside.’ They entered and saw a small office where all the encryption was produced during not only the civil war but in the years right afterwards.”

EDITED TO ADD (4/13): Blog comments from someone actually involved in the process.

Posted on March 26, 2012 at 6:38 AM • 23 Comments

The Economist Debate on Airplane Security

On The Economist website, I am currently debating Kip Hawley on airplane security. On Tuesday we posted our initial statements, and today (London time) we posted our rebuttals. We have one more round to go.

I’ve set it up to talk about the myriad harms airport security has caused: loss of trust in government, increased fear, creeping police state, loss of liberty in the “rights-free zone,” and so on. Suggestions of what to say next are appreciated.

Posted on March 23, 2012 at 6:33 AM • 80 Comments

Can the NSA Break AES?

In an excellent article in Wired, James Bamford talks about the NSA’s codebreaking capability.

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

Bamford has been writing about the NSA for decades, and people tell him all sorts of confidential things. Reading the above, the obvious question to ask is: can the NSA break AES?

My guess is that they can’t. That is, they don’t have a cryptanalytic attack against the AES algorithm that allows them to recover a key from known or chosen ciphertext with a reasonable time and memory complexity. I believe that what the “top official” was referring to is attacks that focus on the implementation and bypass the encryption algorithm: side-channel attacks, attacks against the key generation systems (either exploiting bad random number generators or sloppy password creation habits), attacks that target the endpoints of the communication system and not the wire, attacks that exploit key leakage, attacks against buggy implementations of the algorithm, and so on. These attacks are likely to be much more effective against computer encryption.
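To make the “bad random number generator” case concrete, here is a minimal sketch, not anything attributed to the NSA, of recovering an AES key whose only weakness is that it was generated from a time-seeded, non-cryptographic PRNG. The pycryptodome package is an assumed dependency:

```python
# Sketch: bypass AES by attacking key generation, not the algorithm.
# A key derived from a time-seeded Mersenne Twister falls to a seed search.
import random, time
from Crypto.Cipher import AES

def weak_keygen(seed):
    rng = random.Random(seed)  # Mersenne Twister: fine for games, not for keys
    return bytes(rng.getrandbits(8) for _ in range(16))

# Victim generates a key seeded with the current time (a real-world mistake).
t0 = int(time.time())
key = weak_keygen(t0)
ciphertext = AES.new(key, AES.MODE_ECB).encrypt(b"attack at dawn!!")

# Attacker knows roughly when the key was made: try every nearby seed.
for guess in range(t0 - 3600, t0 + 1):
    candidate = weak_keygen(guess)
    if AES.new(candidate, AES.MODE_ECB).decrypt(ciphertext) == b"attack at dawn!!":
        print("key recovered from seed", guess)
        break
```

The attacker never touches AES itself; the key-generation process leaks everything, which is the general pattern all of these implementation attacks share.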

EDITED TO ADD (3/22): Another option is that the NSA has built dedicated hardware capable of factoring 1024-bit numbers. There’s quite a lot of RSA-1024 out there, so that would be a fruitful project. So, maybe.

EDITED TO ADD (4/13): The NSA denies everything.

Posted on March 22, 2012 at 7:17 AM • 130 Comments

Unprinter

A way to securely erase paper:

“The key idea was to find a laser energy level that is high enough to ablate – or vaporise – the toner that at the same time is lower than the destruction threshold of the paper substrate. It turns out the best wavelength is 532 nanometres – that’s green visible light – with a pulse length of 4 nanoseconds, which is quite long,” Leal-Ayala told New Scientist.

“We have repeated the printing/unprinting process three times on the same piece of paper with good results. The more you do it, though, the more likely it is for the laser to damage the paper, perhaps yellowing it,” he says. The team have found toner-paper combinations in which almost no appreciable traces of toner can be seen after lasing and in which the paper suffers “no significant mechanical damage.”

EDITED TO ADD (3/21): More than one reader has pointed out that this system is not secure, nor do its inventors make any claims of security.

Posted on March 21, 2012 at 6:26 AM • 33 Comments

Hacking Critical Infrastructure

An otherwise uninteresting article on Internet threats to public infrastructure contains this paragraph:

At a closed-door briefing, the senators were shown how a power company employee could derail the New York City electrical grid by clicking on an e-mail attachment sent by a hacker, and how an attack during a heat wave could have a cascading impact that would lead to deaths and cost the nation billions of dollars.

Why isn’t the obvious solution to this to take those critical electrical grid computers off the public Internet?

Posted on March 20, 2012 at 8:52 AM • 66 Comments

Australian Security Theater

I like the quote at the end of this excerpt:

Aviation officials have questioned the need for such a strong permanent police presence at airports, suggesting they were there simply “to make the government look tough on terror”.

One senior executive said in his experience, the officers were expensive window-dressing.

“When you add the body scanners, the ritual humiliation of old ladies with knitting needles and the farcical air marshals, it all adds up to billions of dollars to prevent what? A politician being called soft on terror, that’s what,” he said.

Posted on March 19, 2012 at 6:38 AM • 38 Comments

On Cyberwar Hype

Good article by Thomas Rid on the hype surrounding cyberwar. It’s well worth reading.

And in a more academic paper, published in the RUSI Journal, Thomas Rid and Peter McBurney argue that cyber-weapons aren’t all that destructive and that we’ve been misled by some bad metaphors.

Some fundamental questions on the use of force in cyberspace are still unanswered. Worse, they are still unexplored: What are cyber ‘weapons’ in the first place? How is weaponised code different from physical weaponry? What are the differences between various cyber-attack tools? And do the same dynamics and norms that govern the use of weapons on the conventional battlefield apply in cyberspace?

Cyber-weapons span a wide spectrum. That spectrum, we argue, reaches from generic but low-potential tools to specific but high-potential weaponry. To illustrate this polarity, we use a didactically helpful comparison. Low-potential ‘cyber-weapons’ resemble paintball guns: they may be mistaken for real weapons, are easily and commercially available, used by many to ‘play,’ and getting hit is highly visible—but at closer inspection these ‘weapons’ will lose some of their threatening character. High-potential cyber-weapons could be compared with sophisticated fire-and-forget weapon systems such as modern anti-radiation missiles: they require specific target intelligence that is programmed into the weapon system itself, major investments for R&D, significant lead-time, and they open up entirely new tactics but also novel limitations. This distinction brings into relief a two-pronged hypothesis that stands in stark contrast to some of the debate’s received wisdoms. Maximising the destructive potential of a cyber-weapon is likely to come with a double effect: it will significantly increase the resources, intelligence and time required to build and to deploy such weapons—and more destructive potential will significantly decrease the number of targets, the risk of collateral damage and the coercive utility of cyber-weapons.

And from the conclusion:

Two findings contravene the debate’s received wisdom. One insight concerns the dominance of the offence. Most weapons may be used defensively and offensively. But the information age, the argument goes since at least 1996, has ‘offence-dominant attributes.’ A 2011 Pentagon report on cyberspace again stressed ‘the advantage currently enjoyed by the offense in cyberwarfare.’ But when it comes to cyber-weapons, the offence has higher costs, a shorter shelf-life than the defence, and a very limited target set. All this drastically reduces the coercive utility of cyber-attacks. Any threat relies on the offender’s credibility to attack, or to repeat a successful attack. Even if a potent cyber-weapon could be launched successfully once, it would be highly questionable if an attack, or even a salvo, could be repeated in order to achieve a political goal. At closer inspection cyber-weapons do not seem to favour the offence.

A second insight concerns the risk of electronic arms markets. One concern is that sophisticated malicious actors could resort to asymmetric methods, such as employing the services of criminal groups, rousing patriotic hackers, and potentially redeploying generic elements of known attack tools. Worse, more complex malware is likely to be structured in a modular fashion. Modular design could open up new business models for malware developers. In the car industry, for instance, modularity translates into a possibility of a more sophisticated division of labour. Competitors can work simultaneously on different parts of a more complex system. Modules could be sold on underground markets. But if our analysis is correct, potential arms markets pose a more limited risk: the highly specific target information and programming design needed for potent weapons is unlikely to be traded generically. To go back to our imperfect analogy: paintball pistols will continue to be commercially available, but probably not pre-programmed warheads of smart missiles.

The use of this weapon analogy points to a larger and dangerous problem: the militarisation of cyber-security. William J Lynn, the Pentagon’s number two, responded to critics by pointing out that the Department of Defense would not ‘militarise’ cyberspace. ‘Indeed,’ Lynn wrote, ‘establishing robust cyberdefenses no more militarizes cyberspace than having a navy militarizes the ocean.’ Lynn may be right that the Pentagon is not militarising cyberspace—but his agency is unwittingly militarising the ideas and concepts to analyse security in cyberspace. We hope that this article, by focusing not on war but on weapons, will help bring into relief the narrow limits and the distractive quality of most martial analogies.

Here’s an article on the paper.

One final paper by Rid: “Cyber-War Will Not Take Place” (2012), Journal of Strategic Studies. I have not read it yet.

Posted on March 14, 2012 at 6:22 AM • 32 Comments

The Security of Multi-Word Passphrases

Interesting research on the security of passphrases. From a blog post on the work:

We found about 8,000 phrases using a 20,000 phrase dictionary. Using a very rough estimate for the total number of phrases and some probability calculations, this produced an estimate that passphrase distribution provides only about 20 bits of security against an attacker trying to compromise 1% of available accounts. This is far better than passwords, which are usually under 10 bits by this same metric, but not high enough to make online guessing impractical without proper rate-limiting. Curiously, it’s close to estimates made using Kuo et al.’s published numbers on mnemonic phrases. It also shows that significant numbers of people will blatantly ignore security advice about choosing nonsense phrases and choose things like “Manchester United” or “Harry Potter.”

[…]

This led us to ask, if in the worst case users chose multi-word passphrases with a distribution identical to English speech, how secure would this be? Using the large Google n-gram corpus we can answer this question for phrases of up to 5 words. The results are discouraging: by our metrics, even 5-word phrases would be highly insecure against offline attacks, with fewer than 30 bits of work compromising over half of users. The returns appear to rapidly diminish as more words are required. This has potentially serious implications for applications like PGP private keys, which are often encrypted using a passphrase. Users are clearly more random in “passphrase English” than in actual English, but unless it’s dramatically more random the underlying natural language simply isn’t random enough.
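For intuition, here is a simplified sketch of this kind of partial-guessing metric; it is not the authors’ exact formula, just the general idea of counting how many of the most popular phrases an optimal attacker must try to break a fraction of accounts, expressed in bits:

```python
# Sketch: a simplified "bits of security against an attacker who wants to
# compromise a fraction alpha of accounts" calculation. Toy data only.
from math import log2

def bits_against_partial_attacker(counts, alpha=0.01):
    """counts: observed frequency of each distinct passphrase."""
    total = sum(counts)
    covered, guesses = 0, 0
    for c in sorted(counts, reverse=True):  # attacker tries popular phrases first
        covered += c
        guesses += 1
        if covered / total >= alpha:
            break
    return log2(guesses / alpha)  # scale up to a full-coverage equivalent

# Toy skewed distribution: two very popular phrases plus a long tail.
toy = [50000, 30000] + [10] * 100000
print(f"{bits_against_partial_attacker(toy):.1f} bits vs. a 1% attacker")
```

The toy distribution scores only a handful of bits because a couple of phrases cover the attacker’s 1% target immediately; the real datasets behind the post land around the 20-bit figure quoted above.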

Posted on March 13, 2012 at 6:22 AM • 81 Comments

Video Shows TSA Full-Body Scanner Failure

The Internet is buzzing about this video, showing a blogger walking through two different types of full-body scanners with metal objects. Basically, by placing the object on your side, the black image is hidden against the scanner’s black background. This isn’t new, by the way. This vulnerability was discussed in a paper published last year by the Journal of Transportation Security. And here’s a German TV news segment from 2010 that shows someone sneaking explosives past a full-body scanner.

The TSA’s response is pretty uninformative. I’d include a quote, but it really doesn’t say anything. And the original blogger is now writing that the TSA is pressuring journalists not to cover the story.

These full-body scanners have been a disaster since they were introduced. But, as I wrote in 2010, I don’t think the TSA will back down. It would be too embarrassing if they did.

Posted on March 12, 2012 at 4:30 PM • 52 Comments

Jamming Speech with Recorded Speech

This is cool:

The idea is simple. Psychologists have known for some years that it is almost impossible to speak when your words are replayed to you with a delay of a fraction of a second.

Kurihara and Tsukada have simply built a handheld device consisting of a microphone and a speaker that does just that: it records a person’s voice and replays it to them with a delay of about 0.2 seconds. The microphone and speaker are directional so the device can be aimed at a speaker from a distance, like a gun.

In tests, Kurihara and Tsukada say their speech jamming gun works well: “The system can disturb remote people’s speech without any physical discomfort.”
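The mechanism is simple enough to sketch. Assuming the third-party sounddevice package (my choice, not the researchers’), a delayed-auditory-feedback loop is just a ring buffer between microphone and speaker:

```python
# Sketch: delayed auditory feedback, replaying the microphone ~0.2 s late.
import numpy as np
import sounddevice as sd

RATE = 44100
DELAY = int(0.2 * RATE)                      # ~0.2 s, the delay reported above
buf = np.zeros((DELAY, 1), dtype="float32")  # ring buffer holding the delay
pos = 0

def callback(indata, outdata, frames, time, status):
    global pos
    idx = (pos + np.arange(frames)) % DELAY
    outdata[:] = buf[idx]   # play what was recorded DELAY samples ago
    buf[idx] = indata       # overwrite those slots with fresh input
    pos = (pos + frames) % DELAY

with sd.Stream(samplerate=RATE, channels=1, callback=callback):
    sd.sleep(10_000)        # run for ten seconds
```

The delay itself is this trivial; the directional microphone and speaker are what make the research device usable against a speaker at a distance.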

Posted on March 12, 2012 at 6:35 AM • 39 Comments

Friday Squid Blogging: Humboldt Squid Can Dive to 1.5 km

Yet another impressive Humboldt squid feat:

“We’ve seen them make really impressive dives up to a kilometre and a half deep, swimming straight through a zone where there’s really low oxygen,” the Hopkins Marine Station researcher said.

“They’re able to spend several hours at this kilometre-and-a-half-deep, and then they go back up and continue their normal daily swimming behaviour. It’s just a really impressive, really fast, deep dive through what is quite a harsh environment.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on March 9, 2012 at 4:01 PM • 44 Comments

NSA’s Secure Android Spec

The NSA has released its specification for a secure Android.

One of the interesting things it’s requiring is that all data be tunneled through a secure VPN:

Inter-relationship to Other Elements of the Secure VoIP System

The phone must be a commercial device that supports the ability to pass data over a commercial cellular network. Standard voice phone calls, with the exception of emergency 911 calls, shall not be allowed. The phone must function on US CDMA & GSM networks and OCONUS on GSM networks with the same functionality.

All data communications to/from the mobile device must go through the VPN tunnel to the VPN gateway in the infrastructure; no other communications in or out of the mobile device are permitted.

Applications on the phone additionally encrypt their communications to servers in the infrastructure, or to other phones; all those communications must be tunneled through the VPN.

The more I look at mobile security, the more I think a secure tunnel is essential.
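As an illustration of what enforcing that requirement might look like, here is a fail-closed sketch; it is not from the NSA document, and the tun0 interface name is an assumption:

```python
# Sketch: refuse to proceed unless the default route points at the VPN tunnel.
import subprocess, sys

def default_route_device():
    # Typical 'ip route show default' output: "default via 10.0.0.1 dev tun0 ..."
    out = subprocess.check_output(["ip", "route", "show", "default"], text=True)
    fields = out.split()
    return fields[fields.index("dev") + 1] if "dev" in fields else None

if default_route_device() != "tun0":
    sys.exit("VPN tunnel is not the default route; refusing to send traffic")
print("all traffic is routed through the VPN tunnel")
```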

Posted on March 7, 2012 at 1:35 PM • 52 Comments

How Changing Technology Affects Security

Security is a tradeoff, a balancing act between attacker and defender. Unfortunately, that balance is never static. Changes in technology affect both sides. Society uses new technologies to decrease what I call the scope of defection—what attackers can get away with—and attackers use new technologies to increase it. What’s interesting is the difference between how the two groups incorporate new technologies.

Changes in security systems can be slow. Society has to implement any new security technology as a group, which implies agreement and coordination and—in some instances—a lengthy bureaucratic procurement process. Meanwhile, an attacker can just use the new technology. For example, at the end of the horse-and-buggy era, it was easier for a bank robber to use his new motorcar as a getaway vehicle than it was for a town’s police department to decide it needed a police car, get the budget to buy one, choose which one to buy, buy it, and then develop training and policies for it. And if only one police department did this, the bank robber could just move to another town. Defectors are more agile and adaptable, making them much better at being early adopters of new technology.

We saw it in law enforcement’s initial inability to deal with Internet crime. Criminals were simply more flexible. Traditional criminal organizations like the Mafia didn’t immediately move onto the Internet; instead, new Internet-savvy criminals sprang up. They set up websites like CardersMarket and DarkMarket, and established new organized crime groups within a decade or so of the Internet’s commercialization. Meanwhile, law enforcement simply didn’t have the organizational fluidity to adapt as quickly. Cities couldn’t fire their old-school detectives and replace them with people who understood the Internet. The detectives’ natural inertia and tendency to sweep problems under the rug slowed things even more. They spent the better part of a decade playing catch-up.

There’s one more problem: defenders are in what military strategist Carl von Clausewitz calls “the position of the interior.” They have to defend against every possible attack, while the defector only has to find one flaw that allows one way through the defenses. As systems get more complicated due to technology, more attacks become possible. This means defectors have a first-mover advantage; they get to try the new attack first. Consequently, society is constantly responding: shoe scanners in response to the shoe bomber, harder-to-counterfeit money in response to better counterfeiting technologies, better antivirus software to combat new computer viruses, and so on. The attacker’s clear advantage increases the scope of defection even further.

Of course, there are exceptions. There are technologies that immediately benefit the defender and are of no use at all to the attacker—for example, fingerprint technology allowed police to identify suspects after they left the crime scene and didn’t provide any corresponding benefit to criminals. The same thing happened with immobilizing technology for cars, alarm systems for houses, and computer authentication technologies. Some technologies benefit both but still give more advantage to the defenders. The radio allowed street policemen to communicate remotely, which increased our level of safety more than the corresponding downside of criminals communicating remotely endangers us.

Still, we tend to be reactive in security, and only implement new measures in response to an increased scope of defection. We’re slow about doing it and even slower about getting it right.

This essay originally appeared in IEEE Security & Privacy. It was adapted from Chapter 16 of Liars and Outliers.

Posted on March 7, 2012 at 6:14 AM • 29 Comments

The Keywords the DHS Is Using to Analyze Your Social Media Posts

According to this document, received by EPIC under the Freedom of Information Act, the U.S. Department of Homeland Security is combing through the gazillions of social media postings looking for terrorists. A partial list of keywords is included in the document (pages 20–23), and is reprinted in this blog post.

EDITED TO ADD (3/13): It’s hard to tell what the DHS is doing with this program. EPIC says that they’re monitoring “dissent,” and the document talks about monitoring news stories.

Posted on March 6, 2012 at 1:22 PM • 55 Comments

Themes from the RSA Conference

Last week was the big RSA Conference in San Francisco: something like 20,000 people. From what I saw, these were the major themes on the show floor:

  • Companies that deal with “Advanced Persistent Threat.”
  • Companies that help you recover after you’ve been hacked.
  • Companies that deal with “Bring Your Own Device” at work, also known as consumerization.

Who else went to RSA? What did you notice?

Posted on March 5, 2012 at 1:30 PM • 26 Comments

Liars and Outliers: The Big Idea

My big idea is a big question. Every cooperative system contains parasites. How do we ensure that society’s parasites don’t destroy society’s systems?

It’s all about trust, really. Not the intimate trust we have in our close friends and relatives, but the more impersonal trust we have in the various people and systems we interact with in society. I trust airline pilots, hotel clerks, ATMs, restaurant kitchens, and the company that built the computer I’m writing this short essay on. I trust that they have acted and will act in the ways I expect them to. This type of trust is more a matter of consistency or predictability than of intimacy.

Of course, all of these systems contain parasites. Most people are naturally trustworthy, but some are not. There are hotel clerks who will steal your credit card information. There are ATMs that have been hacked by criminals. Some restaurant kitchens serve tainted food. There was even an airline pilot who deliberately crashed his Boeing 767 into the Atlantic Ocean in 1999.

My central metaphor is the Prisoner’s Dilemma, which nicely exposes the tension between group interest and self-interest. And the dilemma even gives us a terminology to use: cooperators act in the group interest, and defectors act in their own selfish interest, to the detriment of the group. Too many defectors, and everyone suffers—often catastrophically.
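For readers who haven’t seen the dilemma spelled out, its structure fits in a few lines; the payoff numbers below are the standard illustrative ones, not anything from the book:

```python
# Sketch of the Prisoner's Dilemma: defecting is individually rational, yet
# mutual defection leaves both players worse off than mutual cooperation.
PAYOFF = {  # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

for theirs in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda mine: PAYOFF[(mine, theirs)])
    print(f"if they {theirs}, my best reply is to {best}")
# Both players reason this way and defect, each earning 1 instead of 3.
```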

The Prisoner’s Dilemma is not only useful in describing the problem, but also serves as a way to organize solutions. We humans have developed four basic mechanisms for limiting defectors: what I call societal pressure. We use morals, reputation, laws, and security systems. It’s all coercion, really, although we don’t call it that. I’ll spare you the details; it would require a book to explain. And it did.

This book marks another chapter in my career’s endless series of generalizations. From mathematical security—cryptography—to computer and network security; from there to security technology in general; then to the economics of security and the psychology of security; and now to—I suppose—the sociology of security. The more I try to understand how security works, the more of the world I need to encompass within my model.

When I started out writing this book, I thought I’d be talking a lot about the global financial crisis of 2008. It’s an excellent example of group interest vs. self-interest, and how a small minority of parasites almost destroyed the planet’s financial system. I even had a great quote by former Federal Reserve Chairman Alan Greenspan, where he admitted a “flaw” in his worldview. The exchange, which took place when he was being questioned by Congressman Henry Waxman at a 2008 Congressional hearing, was once the opening paragraphs of my book. I called the defectors “the dishonest minority,” which was my original title.

That unifying example eventually faded into the background, to be replaced by a lot of separate examples. I talk about overfishing, childhood immunizations, paying taxes, voting, stealing, airplane security, gay marriage, and a whole lot of other things. I dumped the phrase “dishonest minority” entirely, partly because I didn’t need it and partly because a vocal few early readers were reading it not as “the small percentage of us that are dishonest” but as “the minority group that is dishonest”—not at all the meaning I was trying to convey.

I didn’t even realize I was talking about trust until most of the way through. It was a couple of early readers who—coincidentally, on the same day—told me my book wasn’t about security, it was about trust. More specifically, it was about how different societal pressures, security included, induce trust. This interplay between cooperators and defectors, trust and security, compliance and coercion, affects everything having to do with people.

In the book, I wander through a dizzying array of academic disciplines: experimental psychology, evolutionary psychology, sociology, economics, behavioral economics, evolutionary biology, neuroscience, game theory, systems dynamics, anthropology, archeology, history, political science, law, philosophy, theology, cognitive science, and computer security. It sometimes felt as if I were blundering through a university, kicking down doors and demanding answers. “You anthropologists: what can you tell me about early human transgressions and punishments?” “Okay neuroscientists, what’s the brain chemistry of cooperation? And you evolutionary psychologists, how can you explain that?” “Hey philosophers, what have you got?” I downloaded thousands—literally—of academic papers. In pre-Internet days I would have had to move into an academic library.

What’s really interesting to me is what this all means for the future. We’ve never been able to eliminate defections. No matter how much societal pressure we bring to bear, we can’t bring the murder rate in society to zero. We’ll never see the end of bad corporate behavior, or embezzlement, or rude people who make cell phone calls in movie theaters. That’s fine, but it starts getting interesting when technology makes each individual defection more dangerous. That is, fishermen will survive even if a few of them defect and overfish—until defectors can deploy driftnets and single-handedly collapse the fishing stock. The occasional terrorist with a machine gun isn’t a problem for society in the overall scheme of things; but a terrorist with a nuclear weapon could be.

Also—and this is the final kicker—not all defectors are bad. If you think about the notions of cooperating and defecting, they’re defined in terms of the societal norm. Cooperators are people who follow the formal or informal rules of society. Defectors are people who, for whatever reason, break the rules. That definition says nothing about the absolute morality of the society or its rules. When society is in the wrong, it’s defectors who are in the vanguard for change. So it was defectors who helped escaped slaves in the antebellum American South. It’s defectors who are agitating to overthrow repressive regimes in the Middle East. And it’s defectors who are fueling the Occupy Wall Street movement. Without defectors, society stagnates.

We simultaneously need more societal pressure to deal with the effects of technology, and less societal pressure to ensure an open, free, and evolving society. This is our big challenge for the coming decade.

This essay originally appeared on John Scalzi’s blog, Whatever.

Posted on March 2, 2012 at 1:21 PM • 43 Comments

GPS Spoofers

Great movie-plot threat:

Financial institutions depend on timing that is accurate to the microsecond on a global scale so that stock exchanges in, say, London and New York are perfectly synchronised. One of the main ways of doing this is through GPS, and major financial institutions will have a GPS antenna on their main buildings. “They are always visible because they need a clear view of the sky,” Humphreys told Wired.co.uk.

He explains that someone who directed a spoofer towards the antenna could cause two different problems which could have a major impact on the largely automated high-frequency trading systems. The first is simply causing confusion by manipulating the times—a process called “time sabotage”—on one of the global stock exchanges. This sort of confusion can be very damaging.

Posted on March 2, 2012 at 6:11 AM • 46 Comments

State Department Redacts Wikileaks Cables

The ACLU filed a FOIA request for a bunch of cables that Wikileaks had already released complete versions of. This is what happened:

The agency released redacted versions of 11 and withheld the other 12 in full.

The five excerpts below show the government’s selective and self-serving decisions to withhold information. Because the leaked versions of these cables have already been widely distributed, the redacted releases provide unique insight into the government’s selective decisions to hide information from the American public.

Click on the link to see what was redacted.

EDITED TO ADD (3/2): Commentary:

The Freedom of Information Act provides exceptions for a number of classes of information, but the State Department’s declassification decisions appear to be based not on the criteria specified in the statute, but rather on whether the documents embarrass the US or portray the US in a negative light.

Posted on March 1, 2012 at 1:32 PM • 27 Comments
