Essays: 2010 Archives
A heavily edited version of this essay appeared in the New York Daily News.
Securing the Washington Monument from terrorism has turned out to be a surprisingly difficult job. The concrete fence around the building protects it from attacking vehicles, but there's no visually appealing way to house the airport-level security mechanisms the National Park Service has decided are a must for visitors. It is considering several options, but I think we should close the monument entirely. Let it stand, empty and inaccessible, as a monument to our fears.
Organizers of National Opt Out Day -- the Wednesday before Thanksgiving, when air travelers were urged to opt out of the full-body scanners at security checkpoints and instead submit to full-body pat-downs -- were outfoxed by the TSA. The government pre-empted the protest by turning off the machines in most airports during the Thanksgiving weekend. Everyone went through the metal detectors, just as before.
Now that Thanksgiving is over, the machines are back on and the "enhanced" pat-downs have resumed.
The world is gearing up for cyberwar. The US Cyber Command became operational in November. NATO has enshrined cyber security among its new strategic priorities. The head of Britain's armed forces said recently that boosting cyber capability is now a huge priority for the UK.
A short history of airport security: We screen for guns and bombs, so the terrorists use box cutters. We confiscate box cutters and corkscrews, so they put explosives in their sneakers. We screen footwear, so they try to use liquids. We confiscate liquids, so they put PETN bombs in their underwear.
Keeping infected computers at bay is great in theory, but there are all sorts of complicating factors to consider.
Last month Scott Charney of Microsoft proposed that infected computers be quarantined from the Internet. Using a public health model for Internet security, the idea is that infected computers spreading worms and viruses are a risk to the greater community and thus need to be isolated. Internet service providers would administer the quarantine, and would also clean up and update users' computers so they could rejoin the greater Internet.
This isn't a new idea.
How often should you change your password? I get asked that question a lot, usually by people annoyed at their employer's or bank's password expiration policy -- people who finally memorized their current password and are realizing they'll have to write down their new one. How could that possibly be more secure, they want to know.
The answer depends on what the password is used for.
Internet Eyes is a U.K. startup designed to crowdsource digital surveillance. People pay a small fee to become a "Viewer." Once they do, they can log onto the site and view live anonymous feeds from surveillance cameras at retail stores. If they notice someone shoplifting, they can alert the store owner.
This essay appeared as the first half of a point-counterpoint with Marcus Ranum. Marcus's half is here.
In 2003, a group of security experts -- myself included -- published a paper saying that 1) software monocultures are dangerous and 2) Microsoft, being the largest creator of monocultures out there, is the most dangerous. Marcus Ranum responded with an essay that basically said we were full of it. Now, seven years later, Marcus and I thought it would be interesting to revisit the debate.
On Monday, The New York Times reported that President Obama will seek sweeping laws enabling law enforcement to more easily eavesdrop on the Internet. Technologies are changing, the administration argues, and modern digital systems aren't as easy to monitor as traditional telephones.
The government wants to force companies to redesign their communications systems and information networks to facilitate surveillance, and to provide law enforcement with back doors that enable them to bypass any security measures.
The proposal may seem extreme, but -- unfortunately -- it's not unique.
This essay appeared as the second half of a point/counterpoint with Marcus Ranum.
If you're a typical wired American, you've got a bunch of tech tools you like and a bunch more you covet. You have a cell phone that can easily text. You've got a laptop configured just the way you want it. Maybe you have a Kindle for reading, or an iPad. And when the next new thing comes along, some of you will line up on the first day it's available.
As social networking sites become ubiquitous, it is long past time to look at the types of data we put on those sites. We're using social networking websites for more private and more intimate interactions, often without thinking through the privacy implications of what we're doing.
The issues are hard and the solutions to them harder still, but I'm seeing a lot of confusion in even forming the questions.
Social networking sites deal with several different types of user data, and it's essential to separate them.
Last month, Sen. Joe Lieberman, I-Conn., introduced a bill that might -- we're not really sure -- give the president the authority to shut down all or portions of the Internet in the event of an emergency. It's not a new idea. Sens. Jay Rockefeller, D-W.Va., and Olympia Snowe, R-Maine, proposed the same thing last year, and some argue that the president can already do something like this. If this or a similar bill ever passes, the details will change considerably and repeatedly.
There's a power struggle going on in the U.S. government right now.
It's about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.
Lately I've been reading about user security and privacy -- control, really -- on social networking sites. The issues are hard and the solutions harder, but I'm seeing a lot of confusion in even forming the questions. Social networking sites deal with several different types of user data, and it's essential to separate them.
Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again -- revised -- at an OECD workshop on the role of Internet intermediaries in June.
For a while now, I've pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.
Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper.
This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. Marcus's half is here.
Any essay on hiring hackers quickly gets bogged down in definitions. What is a hacker, and how is he different from a cracker? I have my own definitions, but I'd rather define the issue more specifically: Would you hire someone convicted of a computer crime to fill a position of trust in your computer network?
This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. Marcus's half is here.
Universal identification is portrayed by some as the holy grail of Internet security. Anonymity is bad, the argument goes; and if we abolish it, we can ensure only the proper people have access to their own information. We'll know who is sending us spam and who is trying to hack into corporate networks.
At a security conference recently, the moderator asked the panel of distinguished cybersecurity leaders what their nightmare scenario was. The answers were the predictable array of large-scale attacks: against our communications infrastructure, against the power grid, against the financial system, in combination with a physical attack.
I didn't get to give my answer until the afternoon, which was: "My nightmare scenario is that people keep talking about their nightmare scenarios."
There's a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty.
As the details of the Times Square car bomb attempt emerge in the wake of Faisal Shahzad's arrest Monday night, one thing has already been made clear: Terrorism is fairly easy. All you need is a gun or a bomb, and a crowded target. Guns are easy to buy. Bombs are easy to make.
In the wake of Saturday's failed Times Square car bombing, it's natural to ask how we can prevent this sort of thing from happening again. The answer is stop focusing on the specifics of what actually happened, and instead think about the threat in general.
Think about the security measures commonly proposed. Cameras won't help.
Security technologist and author Bruce Schneier looks at the age-old problem of the insider threat.
Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organisation's network. The bomb would have "detonated" on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything, and then replicate itself on all 4,000 Fannie Mae servers.
We'll spend millions on new technology, and terrorists will just adapt
People intent on preventing a Moscow-style terrorist attack against the New York subway system are proposing a range of expensive new underground security measures, some temporary and some permanent.
They should save their money -- and instead invest every penny they're considering pouring into new technologies into intelligence and old-fashioned policing.
Intensifying security at specific stations only works against terrorists who aren't smart enough to move to another station. Cameras are useful only if all the stars align: The terrorists happen to walk into the frame, the video feeds are being watched in real time and the police can respond quickly enough to be effective.
These companies and others say privacy erosion is inevitable -- but they're making it so.
In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy's and Larry Ellison's comments from a few years earlier, and you've got a whole lot of tech CEOs proclaiming the death of privacy -- especially when it comes to young people.
It's just not true.
This essay appeared as the second half of a point/counterpoint with Marcus Ranum. Marcus's half is here.
Information technology is increasingly everywhere, and it's the same technologies everywhere. The same operating systems are used in corporate and government computers. The same software controls critical infrastructure and home shopping.
On January 19, a team of at least 15 people assassinated Hamas leader Mahmoud al-Mabhouh. The Dubai police released video footage of 11 of them. While it was obviously a very professional operation, the 27 minutes of video is fascinating in its banality. Team members walk through the airport, check in and out of hotels, get in and out of taxis.
Google made headlines when it went public with the fact that Chinese hackers had penetrated some of its services, such as Gmail, in a politically motivated attempt at intelligence gathering. The news here isn't that Chinese hackers engage in these activities or that their attempts are technically sophisticated -- we knew that already -- it's that the U.S. government inadvertently aided the hackers.
In order to comply with government search warrants on user data, Google created a backdoor access system into Gmail accounts.
President Obama in his speech last week rightly focused on fixing the intelligence failures that resulted in Umar Farouk Abdulmutallab being ignored, rather than on technologies targeted at the details of his underwear-bomb plot. But while Obama's instincts are right, reforming intelligence for this new century and its new threats is a more difficult task than he might like.
We don't need new technologies, new laws, new bureaucratic overlords, or -- for heaven's sake -- new agencies. What prevents information sharing among intelligence organizations is the culture of the generation that built those organizations.
In the headlong rush to "fix" security after the Underwear Bomber's unsuccessful Christmas Day attack, there's far too little discussion about what worked and what didn't, and what will and will not make us safer in the future.
The security checkpoints worked. Because we screen for obvious bombs, Umar Farouk Abdulmutallab -- or, more precisely, whoever built the bomb -- had to construct a far less reliable bomb than he would have otherwise. Instead of using a timer or a plunger or a reliable detonation mechanism, as would any commercial user of PETN, he had to resort to an ad hoc and much more inefficient homebrew mechanism: one involving a syringe and 20 minutes in the lavatory and we don't know exactly what else.
The Underwear Bomber failed. And our reaction to the failed plot is failing as well, by focusing on the specifics of this made-for-a-movie plot rather than the broad threat. While our reaction is predictable, it's not going to make us safer.
We're going to beef up airport security, because Umar Farouk Abdulmutallab allegedly snuck a bomb through a security checkpoint.
An unidentified man breached airport security at Newark Airport on Sunday, walking into the secured area through the exit, prompting an evacuation of a terminal and flight delays that continued into the next day. This problem isn't common, but it happens regularly. The result is always the same, and it's not obvious that fixing the problem is the right solution.
This kind of security breach is inevitable, simply because human guards are not perfect.
There are two kinds of profiling. There's behavioral profiling based on how someone acts, and there's automatic profiling based on name, nationality, method of ticket purchase, and so on. The first one can be effective, but is very hard to do right. The second one makes us all less safe.
Security is rarely static. Technology changes the capabilities of both security systems and attackers. But there's something else that changes security's cost/benefit trade-off: how the underlying systems being secured are used. Far too often we build security for one purpose, only to find it being used for another purpose -- one it wasn't suited for in the first place.