Blog: February 2011 Archives

Anonymous vs HBGary

One of the effects of writing a book is that I don’t have the time to devote to other writing. So while I’ve been wanting to write about Anonymous vs HBGary, I don’t think I will have time. Here’s an excellent series of posts on the topic from Ars Technica.

In cyberspace, the balance of power is on the side of the attacker. Attacking a network is much easier than defending a network. That may change eventually—there might someday be the cyberspace equivalent of trench warfare, where the defender has the natural advantage—but not anytime soon.

EDITED TO ADD (3/14): Stephen Colbert on HBGary. Another article.

Posted on February 28, 2011 at 5:58 AM | 109 Comments

HBGary and the Future of the IT Security Industry

This is a really good piece by Paul Roberts on Anonymous vs. HBGary: not the tactics or the politics, but what HBGary demonstrates about the IT security industry.

But I think the real lesson of the hack – and of the revelations that followed it – is that the IT security industry, having finally gotten the attention of law makers, Pentagon generals and public policy establishment wonks in the Beltway, is now in mortal danger of losing its soul. We’ve convinced the world that the threat is real – omnipresent and omnipotent. But in our desire to combat it, we are becoming indistinguishable from the folks with the black hats.

[…]

…While “scare ’em and snare ’em” may be business as usual in the IT security industry, other HBGary Federal skunk works projects clearly crossed a line: a proposal for a major U.S. bank, allegedly Bank of America, to launch offensive cyber attacks on the servers that host the whistle blower site Wikileaks. HBGary was part of a triumvirate of firms that also included Palantir Inc and Berico Technologies, that was working with the law firm of the U.S. Chamber of Commerce to develop plans to target progressive groups, labor unions and other left-leaning non profits who the Chamber opposed with a campaign of false information and entrapment. Other leaked e-mail messages reveal work with General Dynamics and a host of other firms to develop custom, stealth malware and collaborations with other firms selling offensive cyber capabilities including knowledge of previously undiscovered (“zero day”) vulnerabilities.

[…]

What’s more disturbing is the way that the folks at HBGary – mostly Aaron Barr, but others as well – came to view the infowar tactics they were pitching to the military and its contractors as applicable in the civilian context, as well. How effortlessly and seamlessly the focus on “advanced persistent threats” shifted from government backed hackers in China and Russia to encompass political foes like ThinkProgress or the columnist Glenn Greenwald. Anonymous may have committed crimes that demand punishment – but it’s up to the FBI to handle that, not “a large U.S. bank” or its attorneys.

Read the whole thing.

Posted on February 25, 2011 at 6:14 AM | 73 Comments

Good Article About the Terrorist Non-Threat

From Reason:

Know thy enemy is an ancient principle of warfare. And if America had heeded it, it might have refrained from a full-scale “war” on terrorism whose price tag is touching $2 TRILLION. That’s because the Islamist enemy it is confronting is not some hyper-power capable of inflicting existential—or even grave—harm. It is, rather, a rag-tag band of peasants whose malevolent ambitions are far beyond the capacity of their shallow talent pool to deliver.

Posted on February 24, 2011 at 6:44 AM | 53 Comments

Terrorist-Catching Con Man

Interesting story about a con man who conned the U.S. government, and how the government is trying to hide its dealings with him.

For eight years, government officials turned to Dennis Montgomery, a California computer programmer, for eye-popping technology that he said could catch terrorists. Now, federal officials want nothing to do with him and are going to extraordinary lengths to ensure that his dealings with Washington stay secret.

Posted on February 22, 2011 at 7:21 AM | 38 Comments

Friday Squid Blogging: Research into Squid Hearing

Interesting:

Squid can hear, scientists have confirmed. But they don’t detect the changes in pressure associated with sound waves, like we do. They have another, more primitive, technique for listening: They sense the motion generated by sound waves.

[…]

Squid have two sac-like organs called statocysts near the base of their brains. Hair cells line the sac and project into it, while a tiny grain of calcium carbonate, called a statolith, resides inside the sac. When the squid moves, the hair cells rub against the statolith, bending the hair cells inside the sac. This generates electrical signals that get sent to the animal’s brain telling the squid it has detected a sound.

[…]

Their results showed that squid can only listen in at low frequencies of up to 500 hertz. (By comparison, humans hear frequencies from about 20 to 20,000 hertz.) This means squid can probably detect wind, waves and reef sounds, but not the high-frequency sounds emitted by the dolphins and toothed whales that eat them, Mooney said.

Posted on February 18, 2011 at 4:17 PM | 17 Comments

Biometric Wallet

Not an electronic wallet, a physical one:

Virtually indestructible, the dunhill Biometric Wallet will open only with the touch of your fingerprint.

It can be linked via Bluetooth to the owner’s mobile phone, sounding an alarm if the two are separated by more than 5 metres! This provides a brilliant warning if either the phone or wallet is stolen or misplaced. The exterior of the wallet is constructed from highly durable carbon fibre that will resist all but the most concerted effort to open it, while the interior features a luxurious leather credit card holder and a strong stainless steel money clip.

Only $825. News article.

I don’t think I understand the threat model. If your wallet is stolen, you’re going to replace all your ID cards and credit cards and you’re not going to get your cash back—whether it’s a normal wallet or this wallet. I suppose this wallet makes it less likely that someone will use your stolen credit cards quickly, before you cancel them. But you’re not going to be liable for that delay in any case.

Posted on February 18, 2011 at 1:45 PM | 68 Comments

NIST Defines New Versions of SHA-512

NIST has just defined two new versions of SHA-512. They’re SHA-512/224 and SHA-512/256: 224- and 256-bit truncations of SHA-512, each with a new IV. They’ve done this because SHA-512 is faster than SHA-256 on 64-bit CPUs, so these new SHA variants will be faster.

This is a good thing, and exactly what we did in the design of Skein. We defined different outputs for the same state size, because it makes sense to decouple the internal workings of the hash function from the output size.
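
Here’s a quick illustration of why the new IV matters (a minimal sketch in TypeScript on Node; whether the "sha512-256" digest name is available depends on the OpenSSL build Node links against):

    import { createHash, getHashes } from "node:crypto";

    // A minimal sketch. "sha512-256" is the OpenSSL name for SHA-512/256;
    // availability depends on the OpenSSL build Node links against.
    const data = "The quick brown fox jumps over the lazy dog";

    if (getHashes().includes("sha512-256")) {
      // SHA-512's 64-bit internals, a 256-bit output, and its own IV.
      console.log("SHA-512/256:", createHash("sha512-256").update(data).digest("hex"));
    }

    // Chopping a plain SHA-512 digest to 256 bits yields a DIFFERENT value;
    // the fresh IV deliberately separates the truncated variants from SHA-512.
    console.log("truncated SHA-512:", createHash("sha512").update(data).digest("hex").slice(0, 64));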

Posted on February 18, 2011 at 6:22 AM | 31 Comments

Societal Security

Humans have a natural propensity to trust non-kin, even strangers. We do it so often, so naturally, that we don’t even realize how remarkable it is. But except for a few simplistic counterexamples, it’s unique among life on this planet. Because we are intelligently calculating and value reciprocity (that is, fairness), we know that humans will be honest and nice: not for any immediate personal gain, but because that’s how they are. We also know that doesn’t work perfectly; most people will be dishonest some of the time, and some people will be dishonest most of the time. How does society—the honest majority—prevent the dishonest minority from taking over, or ruining society for everyone? How is the dishonest minority kept in check? The answer is security—in particular, something I’m calling societal security.

I want to divide security into two types. The first is individual security. It’s basic. It’s direct. It’s what normally comes to mind when we think of security. It’s cops vs. robbers, terrorists vs. the TSA, Internet worms vs. firewalls. And this sort of security is as old as life itself or—more precisely—as old as predation. And humans have brought an incredible level of sophistication to individual security.

Societal security is different. At the tactical level, it also involves attacks, countermeasures, and entire security systems. But instead of A vs. B, or even Group A vs. Group B, it’s Group A vs. members of Group A. It’s security for individuals within a group from members of that group. It’s how Group A protects itself from the dishonest minority within Group A. And it’s where security really gets interesting.

There are many types—I might try to estimate the number someday—of societal security systems that enforce our trust of non-kin. They’re things like laws prohibiting murder, taxes, traffic laws, pollution control laws, religious intolerance, Mafia codes of silence, and moral codes. They enable us to build a society that the dishonest minority can’t exploit and destroy. Originally, these security systems were informal. But as society got more complex, the systems became more formalized, and eventually were embedded into technologies.

James Madison famously wrote: “If men were angels, no government would be necessary.” Government is just the beginning of what wouldn’t be necessary. Currency, that paper stuff that’s deliberately made hard to counterfeit, wouldn’t be necessary, as people could just keep track of how much money they had. Angels never cheat, so nothing more would be required. Door locks, and any barrier that isn’t designed to protect against accidents, wouldn’t be necessary, since angels never go where they’re not supposed to go. Police forces wouldn’t be necessary. Armies: I suppose that’s debatable. Would angels—not the fallen ones—ever go to war against one another? I’d like to think they would be able to resolve their differences peacefully. If people were angels, every security measure that isn’t designed to be effective against accident, animals, forgetfulness, or legitimate differences between scrupulously honest angels could be dispensed with.

Security isn’t just a tax on the honest; it’s a very expensive tax on the honest. It’s the most expensive tax we pay, regardless of the country we live in. If people were angels, just think of the savings!

It wasn’t always like this. Security—especially societal security—used to be cheap. It used to be an incidental cost of society.

In a primitive society, informal systems are generally good enough. When you’re living in a small community, and objects are both scarce and hard to make, it’s pretty easy to deal with the problem of theft. If Alice loses a bowl, and at the same time, Bob shows up with an identical bowl, everyone knows Bob stole it from Alice, and the community can then punish Bob as it sees fit. But as communities get larger, as social ties weaken and anonymity increases, this informal system of theft prevention—detection and punishment leading to deterrence—fails. As communities get more technological and as the things people might want to steal get more interchangeable and harder to identify, it also fails. In short, as our ancestors made the move from small family groups to larger groups of unrelated families, and then to a modern form of society, the informal societal security systems started failing and more formal systems had to be invented to take their place. We needed to put license plates on cars and audit people’s tax returns.

We had no choice. Anything larger than a very primitive society couldn’t exist without societal security.

I’m writing a book about societal security. I will discuss human psychology: how we make security trade-offs, why we routinely trust non-kin (an evolutionary puzzle, to be sure), how the majority of us are honest, and how a minority of us are dishonest. That dishonest minority are the free riders of societal systems, and security is how we protect society from them. I will model the fundamental trade-off of societal security—individual self-interest vs. societal group interest—as a group prisoner’s dilemma problem, and use that metaphor to examine the basic mechanics of societal security. A lot falls out of this: free riders, the Tragedy of the Commons, the subjectivity of both morals and risk trade-offs.

Using this model, I will explore the security systems that protect—and fail to protect—market economics, corporations and other organizations, and a variety of national systems. I think there’s a lot we can learn about security by applying the prisoner’s dilemma model, and I’ve only recently started. Finally, I want to discuss modern changes to our millennia-old systems of societal security. The Information Age has changed a number of paradigms, and it’s not clear that our old security systems are working properly now or will work in the future. I’ve got a lot of work to do yet, and the final book might look nothing like this short outline. That sort of thing happens.
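
To make that group prisoner’s dilemma concrete, here’s a toy simulation (a sketch only; every parameter is invented for illustration, and none of this is from the book). Cooperators pay into a shared pot that benefits everyone, defectors free-ride, and a punishment mechanism plays the role of societal security:

    // A toy public goods game. Cooperators pay COST into a pot that is
    // multiplied and shared equally; defectors pay nothing. "Societal
    // security" is an audit that catches a defector with probability
    // DETECT and fines them FINE. All numbers are illustrative.
    function play(punish: boolean): number {
      const N = 100, ROUNDS = 200, COST = 1, MULT = 3, FINE = 3, DETECT = 0.7;
      let coop = Array.from({ length: N }, () => Math.random() < 0.5);
      for (let round = 0; round < ROUNDS; round++) {
        const share = (coop.filter(Boolean).length * COST * MULT) / N;
        const payoff = coop.map((c) => {
          let p = share - (c ? COST : 0);
          if (punish && !c && Math.random() < DETECT) p -= FINE;
          return p;
        });
        // Imitation dynamics: each player copies a random better-off player.
        coop = coop.map((c, i) => {
          const j = Math.floor(Math.random() * N);
          return payoff[j] > payoff[i] ? coop[j] : c;
        });
      }
      return coop.filter(Boolean).length / N; // final fraction of cooperators
    }

    console.log("without security:", play(false)); // free riding takes over
    console.log("with security:   ", play(true));  // cooperation survives

Without punishment, a defector always out-earns a cooperator, so imitation drives cooperation to zero; with auditing and fines, defection doesn’t pay on average and cooperation persists. That’s the free-rider problem, and its security fix, in miniature.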

Tentative title: The Dishonest Minority: Security and its Role in Modern Society. I’ve written several books on the how of security. This book is about the why of security.

I expect to finish my first draft before summer. Throughout 2011, expect to see bits from the book here. They might not make sense as a coherent whole at first—especially because I don’t write books in strict order—but by the time the book is published, it’ll all be part of a coherent and (hopefully) compelling narrative.

And if I write fewer extended blog posts and essays in the coming year, you’ll know why.

Posted on February 15, 2011 at 5:43 AM | 137 Comments

Julian Sanchez on Balancing Privacy and Security

From a blog post:

In my own area of study, the familiar trope of “balancing privacy and security” is a source of constant frustration to privacy advocates, because while there are clearly sometimes tradeoffs between the two, it often seems that the zero-sum rhetoric of “balancing” leads people to view them as always in conflict. This is, I suspect, the source of much of the psychological appeal of “security theater”: If we implicitly think of privacy and security as balanced on a scale, a loss of privacy is ipso facto a gain in security. It sounds silly when stated explicitly, but the power of frames is precisely that they shape our thinking without being stated explicitly.

I’ve written about the false trade-off between security and privacy.

Posted on February 11, 2011 at 12:48 PM | 24 Comments

Hacking Scratch Lottery Tickets

Design failure means you can pick winning tickets before scratching the coatings off. Most interesting is that there’s statistical evidence that this sort of attack has been occurring in the wild: not necessarily this particular attack, but some way to separate winners from losers without voiding the tickets.

Since this article was published in Wired, another technique for hacking scratch lottery tickets has surfaced: store clerks capitalizing on losing streaks. If you assume any given pack of lottery tickets has a similar number of winners, wait until most of the pack has been sold without those winners appearing, and then buy the rest.
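
A quick Monte Carlo sketch of the clerk’s strategy (pack size, winner count, and cutoff all invented for illustration):

    // Each pack carries a fixed number of winners, so a pack that is mostly
    // sold with no winners yet concentrates them in the remaining tickets.
    const PACK = 50, WINNERS = 5, SOLD = 30, TRIALS = 200_000;

    let qualifying = 0, winnersLeft = 0;
    for (let t = 0; t < TRIALS; t++) {
      const pack = Array.from({ length: PACK }, (_, i) => i < WINNERS);
      for (let i = PACK - 1; i > 0; i--) { // Fisher-Yates shuffle
        const j = Math.floor(Math.random() * (i + 1));
        [pack[i], pack[j]] = [pack[j], pack[i]];
      }
      if (!pack.slice(0, SOLD).some(Boolean)) { // a losing streak so far
        qualifying++;
        winnersLeft += pack.slice(SOLD).filter(Boolean).length;
      }
    }

    // Unconditionally, a ticket wins 5/50 = 10% of the time. Conditioned on
    // 30 straight losers, each of the 20 remaining tickets wins ~25% of the time.
    console.log(winnersLeft / (qualifying * (PACK - SOLD)));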

Posted on February 10, 2011 at 6:42 AM | 77 Comments

Bomb-Sniffing Mice

I was interviewed for this story on a mouse-powered explosives detector. Animal senses are better than any detection machine current technology can build, which makes this a good idea. But the challenges of using animals in this sort of situation are considerable. The neat thing about the technology profiled in the article, which the article didn’t make as clear as I would have liked, is how far it goes in making the mice just another interchangeable part of the system. They’re encased in cartridges, which can be swapped in and out of the system. They don’t need regular handling:

Unlike dogs, which are often trained for explosives and drugs detection, mice don’t require constant interaction with their trainers or treats to keep them motivated. As a result, they can live in comfortable cages with unlimited access to food and water. Each mouse would work two 4-hour shifts a day, and would have a working life of 18 months.

If we are ever going to see animals in a mass-produced system, it’s going to look something like this.

Posted on February 9, 2011 at 11:39 AM | 42 Comments

Micromorts

I’d never heard the term “micromort” before. It’s a probability: a one-in-a-million probability of death. For example, one-micromort activities are “travelling 230 miles (370 km) by car (accident),” and “living 2 days in New York or Boston (air pollution).”

I don’t know if that data is accurate; it’s from the Wikipedia entry. In any case, I think it’s a useful term.
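
The conversions are simple enough (a sketch using the Wikipedia figures above):

    // 1 micromort = a one-in-a-million chance of death.
    const MILES_PER_MICROMORT = 230; // driving (accident risk)
    const DAYS_PER_MICROMORT = 2;    // living in New York or Boston (air pollution)

    const roadTrip = 1000 / MILES_PER_MICROMORT; // a 1,000-mile drive ~ 4.3 micromorts
    const cityWeek = 7 / DAYS_PER_MICROMORT;     // a week in the city = 3.5 micromorts
    console.log(roadTrip.toFixed(1), cityWeek.toFixed(1));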

EDITED TO ADD (2/12): Discussion here.

Posted on February 8, 2011 at 5:46 AM | 55 Comments

Scareware: How Crime Pays

Scareware is fraudulent software that uses deceptive advertising to trick users into believing they’re infected with some variety of malware, then convinces them to pay money to protect themselves. The infection isn’t real, and the software they buy is fake, too. It’s all a scam.

Here’s one scareware operator who sold “more than 1 million software products” at “$39.95 or more,” and now has to pay $8.2 million to settle a Federal Trade Commission complaint.

Seems to me that $40 per customer, minus $8.20 to pay off the FTC, is still a pretty good revenue model. Their operating costs can’t be very high, since the software doesn’t actually do anything. Yes, a court ordered them to close down their business, but certainly there are other creative entrepreneurs that can recognize a business opportunity when they see it.
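
The back-of-the-envelope arithmetic, using the figures quoted above:

    const units = 1_000_000;      // "more than 1 million software products"
    const price = 39.95;          // "$39.95 or more"
    const settlement = 8_200_000; // the $8.2 million FTC settlement

    console.log(`revenue:          $${(units * price).toLocaleString()}`);        // ~$39,950,000
    console.log(`settlement/unit:  $${(settlement / units).toFixed(2)}`);         // $8.20
    console.log(`net per customer: $${(price - settlement / units).toFixed(2)}`); // $31.75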

Posted on February 7, 2011 at 8:45 AM | 55 Comments

UK Immigration Officer Puts Wife on the No-Fly List

A UK immigration officer decided to get rid of his wife by putting her on the no-fly list, ensuring that she could not return to the UK from abroad. This worked for three years, until he put in for a promotion and—during the routine background check—someone investigated why his wife was on the no-fly list.

Okay, so he’s an idiot. And a bastard. But the real piece of news here is how easy it is for a UK immigration officer to put someone on the no-fly list with absolutely no evidence that that person belongs there. And how little auditing is done on that list. Once someone is on, they’re on for good.

That’s simply no way to run a free country.

Posted on February 4, 2011 at 1:35 PM | 49 Comments

Terrorist Targets of Choice

This makes sense.

Generally, militants prefer to attack soft targets where there are large groups of people, that are symbolic and recognizable around the world and that will generate maximum media attention when attacked. Some past examples include the World Trade Center in New York, the Taj Mahal Hotel in Mumbai and the London Underground. The militants’ hope is that if the target meets these criteria, terror magnifiers like the media will help the attackers produce a psychological impact that goes far beyond the immediate attack site, a process we refer to as “creating vicarious victims.” The best-case scenario for the attackers is that this psychological impact will also produce an adverse economic impact against the targeted government.

Unlike hard targets, which frequently require attackers to use large teams of operatives with elaborate attack plans or very large explosive devices in order to breach defenses, soft targets offer militant planners an advantage in that they can frequently be attacked by a single operative or small team using a simple attack plan. The failed May 1, 2010, attack against New York’s Times Square and the July 7, 2005, London Underground attacks are prime examples of this, as was the Jan. 24 attack at Domodedovo airport. Such attacks are relatively cheap and easy to conduct and can produce a considerable propaganda return for very little investment.

Posted on February 4, 2011 at 6:00 AM | 42 Comments

Hacking HTTP Status Codes

One website can learn if you’re logged into other websites.

When you visit my website, I can automatically and silently determine if you’re logged into Facebook, Twitter, Gmail and Digg. There are almost certainly thousands of other sites with this issue too, but I picked a few vulnerable well known ones to get your attention. You may not care that I can tell you’re logged into Gmail, but would you care if I could tell you’re logged into one or more porn or warez sites? Perhaps http://oppressive-regime.example.org/ would like to collect a list of their users who are logged into http://controversial-website.example.com/?
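
Here’s a minimal sketch of such a probe (the target URL is hypothetical). Many sites answer a login-only URL with HTTP 200 when the visitor has a session and with an error status otherwise; loaded cross-origin in a script tag, that difference surfaces as onload vs. onerror:

    // Probe a login-only resource on another site (hypothetical URL).
    const probe = document.createElement("script");
    probe.src = "https://social-site.example.com/account/settings"; // 200 only when logged in (assumption)
    probe.onload = () => console.log("visitor appears to be logged in");
    probe.onerror = () => console.log("visitor appears to be logged out");
    document.head.appendChild(probe);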

Posted on February 2, 2011 at 2:26 PM | 59 Comments

Me on Color-Coded Terrorist Threat Levels

I wrote an op-ed for CNN.com on the demise of the color-coded terrorist threat level system. It’s nothing I haven’t said before, so I won’t reprint it here.

The best thing about the system was the jokes it inspired from late-night comedians and others. In memoriam, people should post the funniest of those jokes here.

Posted on February 1, 2011 at 7:40 AM | 47 Comments
