Entries Tagged "essays"


The Death of Ephemeral Conversation

The political firestorm over former U.S. Rep. Mark Foley’s salacious instant messages hides another issue, one about privacy. We are rapidly turning into a society where our intimate conversations can be saved and made public later. This represents an enormous loss of freedom and liberty, and the only way to solve the problem is through legislation.

Everyday conversation used to be ephemeral. Whether face-to-face or by phone, we could be reasonably sure that what we said disappeared as soon as we said it. Of course, organized crime bosses worried about phone taps and room bugs, but that was the exception. Privacy was the default assumption.

This has changed. We now type our casual conversations. We chat in e-mail, with instant messages on our computer and SMS messages on our cellphones, and in comments on social networking Web sites like Friendster, LiveJournal, and MySpace. These conversations—with friends, lovers, colleagues, fellow employees—are not ephemeral; they leave their own electronic trails.

We know this intellectually, but we haven’t truly internalized it. We type on, engrossed in conversation, forgetting that we’re being recorded.

Foley’s instant messages were saved by the young men he talked to, but they could have also been saved by the instant messaging service. There are tools that allow both businesses and government agencies to monitor and log IM conversations. E-mail can be saved by your ISP or by the IT department in your corporation. Gmail, for example, saves everything, even if you delete it.

And these conversations can come back to haunt people—in criminal prosecutions, divorce proceedings or simply as embarrassing disclosures. During the 1998 Microsoft anti-trust trial, the prosecution pored over masses of e-mail, looking for a smoking gun. Of course they found things; everyone says things in conversation that, taken out of context, can prove anything.

The moral is clear: If you type it and send it, prepare to explain it in public later.

And voice is no longer a refuge. Face-to-face conversations are still safe, but we know that the National Security Agency is monitoring everyone’s international phone calls. (They said nothing about SMS messages, but one can assume they were monitoring those too.) Routine recording of phone conversations is still rare—certainly the NSA has the capability—but will become more common as telephone calls continue migrating to the IP network.

If you find this disturbing, you should. Fewer conversations are ephemeral, and we’re losing control over the data. We trust our ISPs, employers and cellphone companies with our privacy, but again and again they’ve proven they can’t be trusted. Identity thieves routinely gain access to these repositories of our information. Paris Hilton and other celebrities have been the victims of hackers breaking into their cellphone providers’ networks. Google reads our Gmail and inserts context-dependent ads.

Even worse, normal constitutional protections don’t apply to much of this. The police need a court-issued warrant to search our papers or eavesdrop on our communications, but can simply issue a subpoena—or ask nicely or threateningly—for data of ours that is held by a third party, including stored copies of our communications.

The Justice Department wants to make this problem even worse, by forcing ISPs and others to save our communications—just in case we’re someday the target of an investigation. This is not only bad privacy and security, it’s a blow to our liberty as well. A world without ephemeral conversation is a world without freedom.

We can’t turn back technology; electronic communications are here to stay. But as technology makes our conversations less ephemeral, we need laws to step in and safeguard our privacy. We need a comprehensive data privacy law, protecting our data and communications regardless of where it is stored or how it is processed. We need laws forcing companies to keep it private and to delete it as soon as it is no longer needed.

And we need to remember, whenever we type and send, we’re being watched.

Foley is an anomaly. Most of us do not send instant messages in order to solicit sex with minors. Law enforcement might have a legitimate need to access Foley’s IMs, e-mails and cellphone calling logs, but that’s why there are warrants supported by probable cause—they help ensure that investigations are properly focused on suspected pedophiles, terrorists and other criminals. We saw this in the recent UK terrorist arrests; focused investigations on suspected terrorists foiled the plot, not broad surveillance of everyone without probable cause.

Without legal privacy protections, the world becomes one giant airport security area, where the slightest joke—or comment made years before—lands you in hot water. The world becomes one giant market-research study, where we are all life-long subjects. The world becomes a police state, where we all are assumed to be Foleys and terrorists in the eyes of the government.

This essay originally appeared on Forbes.com.

Posted on October 18, 2006 at 3:30 PM

Screening People with Clearances

Why should we waste time at airport security, screening people with U.S. government security clearances? This perfectly reasonable question was asked recently by Robert Poole, director of transportation studies at The Reason Foundation, as he and I were interviewed by WOSU Radio in Ohio.

Poole argued that people with government security clearances, people who are entrusted with U.S. national security secrets, are trusted enough to be allowed through airport security with only a cursory screening. They’ve already gone through background checks, he said, and it would be more efficient to concentrate screening resources on everyone else.

To someone not steeped in security, it makes perfect sense. But it’s a terrible idea, and understanding why teaches us some important security lessons.

The first lesson is that security is a system. Identifying someone’s security clearance is a complicated process. People with clearances don’t have special ID cards, and they can’t just walk into any secured facility. A clearance is held by a particular organization—usually the organization the person works for—and is transferred by a classified message to other organizations when that person travels on official business.

Airport security checkpoints are not set up to receive these clearance messages, so some other system would have to be developed.

Of course, it makes no sense for the cleared person to have his office send a message to every airport he’s visiting, at the time of travel. Far easier is to have a centralized database of people who are cleared. But now you have to build this database. And secure it. And ensure that it’s kept up to date.

Or maybe we can create a new type of ID card: one that identifies people with security clearances. But that also requires a backend database and a card that can’t be forged. And clearances can be revoked at any time, so there needs to be some way of invalidating cards automatically and remotely.

Whatever you do, you need to implement a new set of security procedures at airport security checkpoints to deal with these people. The procedures need to be good enough that people can’t spoof them. Screeners need to be trained. The system needs to be tested.

What starts out as a simple idea—don’t waste time searching people with government security clearances—rapidly becomes a complicated security system with all sorts of new vulnerabilities.

The second lesson is that security is a trade-off. We don’t have infinite dollars to spend on security. We need to choose where to spend our money, and we’re best off if we spend it in ways that give us the most security for our dollar.

Given that very few Americans have security clearances, and that speeding them through security wouldn’t make much of a difference to anyone else standing in line, wouldn’t it be smarter to spend the money elsewhere? Even if you’re just making trade-offs about airport security checkpoints, I would rather take the hundreds of millions of dollars this kind of system could cost and spend it on more security screeners and better training for existing security screeners. We could both speed up the lines and make them more effective.

The third lesson is that security decisions are often based on subjective agendas. My guess is that Poole has a security clearance—he was a member of the Bush-Cheney transition team in 2000—and is annoyed that he is being subjected to the same screening procedures as the other (clearly less trusted) people he is forced to stand in line with. From his perspective, not screening people like him is obvious. But objectively it’s not.

This issue is no different than searching airplane pilots, something that regularly elicits howls of laughter among amateur security watchers. What they don’t realize is that the issue is not whether we should trust pilots, airplane maintenance technicians or people with clearances. The issue is whether we should trust people who are dressed as pilots, wear airplane-maintenance-tech IDs or claim to have clearances.

We have two choices: Either build an infrastructure to verify their claims, or assume that they’re false. And with apologies to pilots, maintenance techs and people with clearances, it’s cheaper, easier and more secure to search you all.

This is my twenty-eighth essay for Wired.com.

Posted on October 5, 2006 at 8:27 AM

Facebook and Data Control

Earlier this month, the popular social networking site Facebook learned a hard lesson in privacy. It introduced a new feature called “News Feeds” that shows an aggregation of everything members do on the site: added and deleted friends, a change in relationship status, a new favorite song, a new interest, etc. Instead of a member’s friends having to go to his page to view any changes, these changes are all presented to them automatically.

The outrage was enormous. One group, Students Against Facebook News Feeds, amassed over 700,000 members. Members planned to protest at the company’s headquarters. Facebook’s founder was completely stunned, and the company scrambled to add some privacy options.

Welcome to the complicated and confusing world of privacy in the information age. Facebook didn’t think there would be any problem; all it did was take available data and aggregate it in a novel way for what it perceived was its customers’ benefit. Facebook members instinctively understood that making this information easier to display was an enormous difference, and that privacy is more about control than about secrecy.

But on the other hand, Facebook members are just fooling themselves if they think they can control information they give to third parties.

Privacy used to be about secrecy. Someone defending himself in court against the charge of revealing someone else’s personal information could use as a defense the fact that it was not secret. But clearly, privacy is more complicated than that. Just because you tell your insurance company something doesn’t mean you don’t feel violated when that information is sold to a data broker. Just because you tell your friend a secret doesn’t mean you’re happy when he tells others. Same with your employer, your bank, or any company you do business with.

But as the Facebook example illustrates, privacy is much more complex. It’s about who you choose to disclose information to, how, and for what purpose. And the key word there is “choose.” People are willing to share all sorts of information, as long as they are in control.

When Facebook unilaterally changed the rules about how personal information was revealed, it reminded people that they weren’t in control. Its eight million members put their personal information on the site based on a set of rules about how that information would be used. It’s no wonder those members—high school and college kids who traditionally don’t care much about their own privacy—felt violated when Facebook changed the rules.

Unfortunately, Facebook can change the rules whenever it wants. Its Privacy Policy is 2,800 words long, and ends with a notice that it can change at any time. How many members ever read that policy, let alone read it regularly and check for changes? Not that a Privacy Policy is the same as a contract. Legally, Facebook owns all data members upload to the site. It can sell the data to advertisers, marketers, and data brokers. (Note: there is no evidence that Facebook does any of this.) It can allow the police to search its databases upon request. It can add new features that change who can access what personal data, and how.

But public perception is important. The lesson here for Facebook and other companies—for Google and MySpace and AOL and everyone else who hosts our e-mails and webpages and chat sessions—is that people believe they own their data. Even though the user agreement might technically give companies the right to sell the data, change the access rules to that data, or otherwise own that data, we—the users—believe otherwise. And when we who are affected by those actions start expressing our views—watch out.

What Facebook should have done was add the feature as an option, and allow members to opt in if they wanted to. Then, members who wanted to share their information via News Feeds could do so, and everyone else wouldn’t have felt that they had no say in the matter. This is definitely a gray area, and it’s hard to know beforehand which changes need to be implemented slowly and which won’t matter. Facebook, and other companies, need to talk to their members openly about new features. Remember: members want control.

The lesson for Facebook members might be even more jarring: if they think they have control over their data, they’re only deluding themselves. They can rebel against Facebook for changing the rules, but the rules have changed, regardless of what the company does.

Whenever you put data on a computer, you lose some control over it. And when you put it on the internet, you lose a lot of control over it. News Feeds brought Facebook members face to face with the full implications of putting their personal information on Facebook. It had just been an accident of the user interface that it was difficult to aggregate the data from multiple friends into a single place. And even if Facebook eliminates News Feeds entirely, a third party could easily write a program that does the same thing. Facebook could try to block the program, but would lose that technical battle in the end.
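
To see how little code such a third-party program would need, here is a minimal sketch of an aggregator in Python, assuming friends’ profile pages are publicly fetchable at some URL; the endpoint, usernames, and page format below are invented for illustration and are not Facebook’s actual interface.

    # Hypothetical sketch: rebuild a "news feed" by polling friends' public
    # profile pages and diffing them against saved copies. The URL pattern
    # and friend list are placeholders, not a real interface.
    import difflib
    import urllib.request
    from pathlib import Path

    FRIENDS = ["alice", "bob", "carol"]          # assumed friend usernames
    PROFILE_URL = "http://example.com/profile/"  # placeholder endpoint
    CACHE_DIR = Path("profile_cache")
    CACHE_DIR.mkdir(exist_ok=True)

    def fetch_profile(username):
        """Download the current public profile page for one friend."""
        with urllib.request.urlopen(PROFILE_URL + username) as response:
            return response.read().decode("utf-8", errors="replace")

    def feed_items():
        """Yield one feed entry per friend whose page changed since last poll."""
        for name in FRIENDS:
            current = fetch_profile(name)
            cache_file = CACHE_DIR / (name + ".html")
            previous = cache_file.read_text() if cache_file.exists() else ""
            if current != previous:
                # Keep only the added/removed lines, as an aggregated feed would.
                changes = [line for line in difflib.unified_diff(
                               previous.splitlines(), current.splitlines(),
                               lineterm="")
                           if line.startswith(("+", "-"))
                           and not line.startswith(("+++", "---"))]
                yield name, changes
            cache_file.write_text(current)

    if __name__ == "__main__":
        for friend, changes in feed_items():
            print(friend + " updated their profile:")
            for line in changes[:5]:   # show the first few changed lines
                print("    " + line)

Anything a browser can display, a script like this can poll, store, and compare; blocking one such program only invites the next.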

We’re all still wrestling with the privacy implications of the Internet, but the balance has tipped in favor of more openness. Digital data is just too easy to move, copy, aggregate, and display. Companies like Facebook need to respect the social rules of their sites, to think carefully about their default settings—they have an enormous impact on the privacy mores of the online world—and to give users as much control over their personal information as they can.

But we all need to remember that much of that control is illusory.

This essay originally appeared on Wired.com.

Posted on September 21, 2006 at 5:57 AM

University Networks and Data Security

In general, the problems of securing a university network are no different than those of securing any other large corporate network. But when it comes to data security, universities have their own unique problems. It’s easy to point fingers at students—a large number of potentially adversarial transient insiders. Yet that’s really no different from a corporation dealing with an assortment of employees and contractors—the difference is the culture.

Universities are edge-focused; central policies tend to be weak, by design, with maximum autonomy for the edges. This means they have natural tendencies against centralization of services. Departments and individual professors are used to being semiautonomous. Because these institutions were established long before the advent of computers, when networking did begin to infuse universities, it developed within existing administrative divisions. Some universities have academic departments with separate IT departments, budgets, and staff, with a central IT group providing bandwidth but little or no oversight. Unfortunately, these smaller IT groups don’t generally count policy development and enforcement as part of their core competencies.

The lack of central authority makes enforcing uniform standards challenging, to say the least. Most university CIOs have much less power than their corporate counterparts; university mandates can be a major obstacle in enforcing any security policy. This leads to an uneven security landscape.

There’s also a cultural tendency for faculty and staff to resist restrictions, especially in the area of research. Because most research is now done online—or, at least, involves online access—restricting the use of or deciding on appropriate uses for information technologies can be difficult. This resistance also leads to a lack of centralization and an absence of IT operational procedures such as change control, change management, patch management, and configuration control.

The result is that there’s rarely a uniform security policy. The centralized servers—the core where the database servers live—are generally more secure, whereas the periphery is a hodgepodge of security levels.

So, what to do? Unfortunately, solutions are easier to describe than implement. First, universities should take a top-down approach to securing their infrastructure. Rather than fighting an established culture, they should concentrate on the core infrastructure.

Then they should move personal, financial, and other comparable data into that core. Leave information important to departments and research groups to them, and centrally store information that’s important to the university as a whole. This can be done under the auspices of the CIO. Laws and regulations can help drive consolidation and standardization.

Next, enforce policies for departments that need to connect to the sensitive data in the core. This can be difficult with older legacy systems, but establishing a standard for best practices is better than giving up. All legacy technology is upgraded eventually.

Finally, create distinct segregated networks within the campus. Treat networks that aren’t under the IT department’s direct control as untrusted. Student networks, for example, should be firewalled to protect the internal core from them. The university can then establish levels of trust commensurate with the segregated networks’ adherence to policies. If a research network claims it can’t have any controls, then let the university create a separate virtual network for it, outside the university’s firewalls, and let it live there. Note, though, that if something or someone on that network wants to connect to sensitive data within the core, it’s going to have to agree to whatever security policies that level of data access requires.

Securing university networks is an excellent example of the social problems surrounding network security being harder than the technical ones. But harder doesn’t mean impossible, and there is a lot that can be done to improve security.

This essay originally appeared in the September/October issue of IEEE Security & Privacy.

Posted on September 20, 2006 at 7:37 AM

Renew Your Passport Now!

If you have a passport, now is the time to renew it—even if it’s not set to expire anytime soon. If you don’t have a passport and think you might need one, now is the time to get it. In many countries, including the United States, passports will soon be equipped with RFID chips. And you don’t want one of these chips in your passport.

RFID stands for “radio-frequency identification.” Passports with RFID chips store an electronic copy of the passport information: your name, a digitized picture, etc. And in the future, the chip might store fingerprints or digital visas from various countries.

By itself, this is no problem. But RFID chips don’t have to be plugged in to a reader to operate. Like the chips used for automatic toll collection on roads or automatic fare collection on subways, these chips operate via proximity. The risk to you is the possibility of surreptitious access: Your passport information might be read without your knowledge or consent by a government trying to track your movements, a criminal trying to steal your identity or someone just curious about your citizenship.

At first the State Department belittled those risks, but in response to criticism from experts it has implemented some security features. Passports will come with a shielded cover, making it much harder to read the chip when the passport is closed. And there are now access-control and encryption mechanisms, making it much harder for an unauthorized reader to collect, understand and alter the data.

Although those measures help, they don’t go far enough. The shielding does no good when the passport is open. Travel abroad and you’ll notice how often you have to show your passport: at hotels, banks, Internet cafes. Anyone intent on harvesting passport data could set up a reader at one of those places. And although the State Department insists that the chip can be read only by a reader that is inches away, the chips have been read from many feet away.

The other security mechanisms are also vulnerable, and several security researchers have already discovered flaws. One found that he could identify individual chips via unique characteristics of the radio transmissions. Another successfully cloned a chip. The State Department called this a “meaningless stunt,” pointing out that the researcher could not read or change the data. But the researcher spent only two weeks trying; the security of your passport has to be strong enough to last 10 years.

This is perhaps the greatest risk. The security mechanisms on your passport chip have to last the lifetime of your passport. It is as ridiculous to think that passport security will remain secure for that long as it would be to think that you won’t see another security update for Microsoft Windows in that time. Improvements in antenna technology will certainly increase the distance at which they can be read and might even allow unauthorized readers to penetrate the shielding.

Whatever happens, if you have a passport with an RFID chip, you’re stuck. Although popping your passport in the microwave will disable the chip, the shielding will cause all kinds of sparking. And although the United States has said that a nonworking chip will not invalidate a passport, it is unclear if one with a deliberately damaged chip will be honored.

The Colorado passport office is already issuing RFID passports, and the State Department expects all U.S. passport offices to be doing so by the end of the year. Many other countries are in the process of changing over. So get a passport before it’s too late. With your new passport you can wait another 10 years for an RFID passport, when the technology will be more mature, when we will have a better understanding of the security risks and when there will be other technologies we can use to cut the risks. You don’t want to be a guinea pig on this one.

This op ed appeared on Saturday in the Washington Post.

I’ve written about RFID passports many times before (that last link is an op-ed from The International Herald-Tribune), although last year I—mistakenly—withdrew my objections based on the security measures the State Department was taking. I’ve since realized that they won’t be enough.

EDITED TO ADD (9/29): This op ed has appeared in about a dozen newspapers. The San Jose Mercury News published a rebuttal. Kind of lame, I think.

EDITED TO ADD (12/30): Here’s how to disable an RFID passport.

Posted on September 18, 2006 at 6:06 AM

What is a Hacker?

A hacker is someone who thinks outside the box. It’s someone who discards conventional wisdom, and does something else instead. It’s someone who looks at the edge and wonders what’s beyond. It’s someone who sees a set of rules and wonders what happens if you don’t follow them. A hacker is someone who experiments with the limitations of systems for intellectual curiosity.

I wrote that last sentence in the year 2000, in my book Secrets and Lies. And I’m sticking to that definition.

This is what else I wrote in Secrets and Lies (pages 43-44):

Hackers are as old as curiosity, although the term itself is modern. Galileo was a hacker. Mme. Curie was one, too. Aristotle wasn’t. (Aristotle had some theoretical proof that women had fewer teeth than men. A hacker would have simply counted his wife’s teeth. A good hacker would have counted his wife’s teeth without her knowing about it, while she was asleep. A good bad hacker might remove some of them, just to prove a point.)

When I was in college, I knew a group similar to hackers: the key freaks. They wanted access, and their goal was to have a key to every lock on campus. They would study lockpicking and learn new techniques, trade maps of the steam tunnels and where they led, and exchange copies of keys with each other. A locked door was a challenge, a personal affront to their ability. These people weren’t out to do damage—stealing stuff wasn’t their objective—although they certainly could have. Their hobby was the power to go anywhere they wanted to.

Remember the phone phreaks of yesteryear, the ones who could whistle into payphones and make free phone calls. Sure, they stole phone service. But it wasn’t like they needed to make eight-hour calls to Manila or McMurdo. And their real work was secret knowledge: The phone network was a vast maze of information. They wanted to know the system better than the designers, and they wanted the ability to modify it to their will. Understanding how the phone system worked—that was the true prize. Other early hackers were ham-radio hobbyists and model-train enthusiasts.

Richard Feynman was a hacker; read any of his books.

Computer hackers follow these evolutionary lines. Or, they are the same genus operating on a new system. Computers, and networks in particular, are the new landscape to be explored. Networks provide the ultimate maze of steam tunnels, where a new hacking technique becomes a key that can open computer after computer. And inside is knowledge, understanding. Access. How things work. Why things work. It’s all out there, waiting to be discovered.

Computers are the perfect playground for hackers. Computers, and computer networks, are vast treasure troves of secret knowledge. The Internet is an immense landscape of undiscovered information. The more you know, the more you can do.

And it should be no surprise that many hackers have focused their skills on computer security. Not only is it often the obstacle between the hacker and knowledge, and therefore something to be defeated, but also the very mindset necessary to be good at security is exactly the same mindset that hackers have: thinking outside the box, breaking the rules, exploring the limitations of a system. The easiest way to break a security system is to figure out what the system’s designers hadn’t thought of: that’s security hacking.

Hackers cheat. And breaking security regularly involves cheating. It’s figuring out a smart card’s RSA key by looking at the power fluctuations, because the designers of the card never realized anyone could do that. It’s self-signing a piece of code, because the signature-verification system didn’t think someone might try that. It’s using a piece of a protocol to break a completely different protocol, because all previous security analysis only looked at protocols individually and not in pairs.

That’s security hacking: breaking a system by thinking differently.

It all sounds criminal: recovering encrypted text, fooling signature algorithms, breaking protocols. But honestly, that’s just the way we security people talk. Hacking isn’t criminal. All the examples two paragraphs above were performed by respected security professionals, and all were presented at security conferences.

I remember one conversation I had at a Crypto conference, early in my career. It was outside amongst the jumbo shrimp, chocolate-covered strawberries, and other delectables. A bunch of us were talking about some cryptographic system, including Brian Snow of the NSA. Someone described an unconventional attack, one that didn’t follow the normal rules of cryptanalysis. I don’t remember any of the details, but I remember my response after hearing the description of the attack.

“That’s cheating,” I said.

Because it was.

I also remember Brian turning to look at me. He didn’t say anything, but his look conveyed everything. “There’s no such thing as cheating in this business.”

Because there isn’t.

Hacking is cheating, and it’s how we get better at security. It’s only after someone invents a new attack that the rest of us can figure out how to defend against it.

For years I have refused to play the semantic “hacker” vs. “cracker” game. There are good hackers and bad hackers, just as there are good electricians and bad electricians. “Hacker” is a mindset and a skill set; what you do with it is a different issue.

And I believe the best computer security experts have the hacker mindset. When I look to hire people, I look for someone who can’t walk into a store without figuring out how to shoplift. I look for someone who can’t test a computer security program without trying to get around it. I look for someone who, when told that things work in a particular way, immediately asks how things stop working if you do something else.

We need these people in security, and we need them on our side. Criminals are always trying to figure out how to break security systems. Field a new system—an ATM, an online banking system, a gambling machine—and criminals will try to make an illegal profit off it. They’ll figure it out eventually, because some hackers are also criminals. But if we have hackers working for us, they’ll figure it out first—and then we can defend ourselves.

It’s our only hope for security in this fast-moving technological world of ours.

This essay appeared in the Summer 2006 issue of 2600.

Posted on September 14, 2006 at 7:13 AM

Is There Strategic Software?

If you define “critical infrastructure” as “things essential for the functioning of a society and economy,” then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.

It’s a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines’ weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?

And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.

It’s perfectly rational to assume that some programmers—a tiny minority I’m sure—are deliberately adding vulnerabilities and back doors into the code they write. I’m actually kind of amazed that back doors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don’t conflict with each other. Even if these groups aren’t infiltrating software companies with back doors, you can be sure they’re scouring products for vulnerabilities they can exploit, if necessary. On the other hand, we’re already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we’re not talking about this month’s worm from Asia or new phishing software from the Russian mafia—we’re talking national intelligence organizations. “Infowar” is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn’t be doing their jobs if they weren’t preparing for it.

Marcus is 100 percent correct when he says it’s simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.

So, what to do? The key here is to remember the real problem: current commercial software practices are not secure enough to reliably detect and delete deliberately inserted malicious code. Once you understand this, you’ll drop the red-herring arguments that led to Check Point not being able to buy Sourcefire and concentrate on the real solution: defense in depth.

In theory, security software is an after-the-fact kludge, necessary only because the underlying OS and apps are riddled with vulnerabilities. If your software were written properly, you wouldn’t need a firewall—right?

If we were to get serious about critical infrastructure, we’d recognize it’s all critical and start building security software to protect it. We’d build our security based on the principles of safe failure; we’d assume security would fail and make sure it’s OK when it does. We’d use defense in depth and compartmentalization to minimize the effects of failure. Basically, we’d do everything we’re supposed to do now to secure our networks.

It’d be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.

This is the second half of a point/counterpoint I did with Marcus Ranum (here’s his half) for the September 2006 issue of Information Security Magazine.

Posted on September 12, 2006 at 10:38 AM

Educating Users

I’ve met users, and they’re not fluent in security. They might be fluent in spreadsheets, eBay, or sending jokes over e-mail, but they’re not technologists, let alone security people. Of course, they’re making all sorts of security mistakes. I too have tried educating users, and I agree that it’s largely futile.

Part of the problem is generational. We’ve seen this with all sorts of technologies: electricity, telephones, microwave ovens, VCRs, video games. Older generations approach newfangled technologies with trepidation, distrust and confusion, while the children who grew up with them understand them intuitively.

But while the don’t-get-it generation will die off eventually, we won’t suddenly enter an era of unprecedented computer security. Technology moves too fast these days; there’s no time for any generation to become fluent in anything.

Earlier this year, researchers ran an experiment in London’s financial district. Someone stood on a street corner and handed out CDs, saying they were a “special Valentine’s Day promotion.” Many people, some working at sensitive bank workstations, ran the program on the CDs on their work computers. The program was benign—all it did was alert some computer on the Internet that it was running—but it could just as easily have been malicious. The researchers concluded that users don’t care about security. That’s simply not true. Users care about security—they just don’t understand it.

I don’t see a failure of education; I see a failure of technology. It shouldn’t have been possible for those users to run that CD, or for a random program stuffed into a banking computer to “phone home” across the Internet.

The real problem is that computers don’t work well. The industry has convinced everyone that people need a computer to survive, and at the same time it’s made computers so complicated that only an expert can maintain them.

If I try to repair my home heating system, I’m likely to break all sorts of safety rules. I have no experience in that sort of thing, and honestly, there’s no point in trying to educate me. But the heating system works fine without my having to learn anything about it. I know how to set my thermostat and to call a professional if anything goes wrong.

Punishment isn’t something you do instead of education; it’s a form of education—a very primal form of education best suited to children and animals (and experts aren’t so sure about children). I say we stop punishing people for failures of technology, and demand that computer companies market secure hardware and software.

This originally appeared in the April 2006 issue of Information Security Magazine, as the second part of a point/counterpoint with Marcus Ranum. You can read Marcus’s essay here, if you are a subscriber. (Subscriptions are free to “qualified” people.)

EDITED TO ADD (9/11): Here’s Marcus’s half.

Posted on August 22, 2006 at 12:35 PM

Last Week's Terrorism Arrests

Hours-long waits in the security line. Ridiculous prohibitions on what you can carry onboard. Last week’s foiling of a major terrorist plot and the subsequent airport security measures graphically illustrate the difference between effective security and security theater.

None of the airplane security measures implemented because of 9/11—no-fly lists, secondary screening, prohibitions against pocket knives and corkscrews—had anything to do with last week’s arrests. And they wouldn’t have prevented the planned attacks, had the terrorists not been arrested. A national ID card wouldn’t have made a difference, either.

Instead, the arrests are a victory for old-fashioned intelligence and investigation. Details are still secret, but police in at least two countries were watching the terrorists for a long time. They followed leads, figured out who was talking to whom, and slowly pieced together both the network and the plot.

The new airplane security measures focus on that plot, because authorities believe they have not captured everyone involved. It’s reasonable to assume that a few lone plotters, knowing their compatriots are in jail and fearing their own arrest, would try to finish the job on their own. The authorities are not being public with the details—much of the “explosive liquid” story doesn’t hang together—but the excessive security measures seem prudent.

But only temporarily. Banning box cutters since 9/11, or taking off our shoes since Richard Reid, has not made us any safer. And a long-term prohibition against liquid carry-ons won’t make us safer, either. It’s not just that there are ways around the rules, it’s that focusing on tactics is a losing proposition.

It’s easy to defend against what the terrorists planned last time, but it’s shortsighted. If we spend billions fielding liquid-analysis machines in airports and the terrorists use solid explosives, we’ve wasted our money. If they target shopping malls, we’ve wasted our money. Focusing on tactics simply forces the terrorists to make a minor modification in their plans. There are too many targets—stadiums, schools, theaters, churches, the long line of densely packed people before airport security—and too many ways to kill people.

Security measures that require us to guess correctly don’t work, because invariably we will guess wrong. It’s not security, it’s security theater: measures designed to make us feel safer but not actually safer.

Airport security is the last line of defense, and not a very good one at that. Sure, it’ll catch the sloppy and the stupid—and that’s a good enough reason not to do away with it entirely—but it won’t catch a well-planned plot. We can’t keep weapons out of prisons; we can’t possibly keep them off airplanes.

The goal of a terrorist is to cause terror. Last week’s arrests demonstrate how real security doesn’t focus on possible terrorist tactics, but on the terrorists themselves. It’s a victory for intelligence and investigation, and a dramatic demonstration of how investments in these areas pay off.

And if you want to know what you can do to help? Don’t be terrorized. They terrorize more of us if they kill some of us, but the dead are beside the point. If we give in to fear, the terrorists achieve their goal even if they were arrested. If we refuse to be terrorized, then they lose—even if their attacks succeed.

This op ed appeared today in the Minneapolis Star-Tribune.

EDITED TO ADD (8/13): The Department of Homeland Security declares an entire state of matter a security risk. And here’s a good commentary on being scared.

Posted on August 13, 2006 at 8:15 AM

Doping in Professional Sports

The big news in professional bicycle racing is that Floyd Landis may be stripped of his Tour de France title because he tested positive for a banned performance-enhancing drug. Sidestepping the entire issue of whether professional athletes should be allowed to take performance-enhancing drugs, how dangerous those drugs are, and what constitutes a performance-enhancing drug in the first place, I’d like to talk about the security and economic issues surrounding the issue of doping in professional sports.

Drug testing is a security issue. Various sports federations around the world do their best to detect illegal doping, and players do their best to evade the tests. It’s a classic security arms race: improvements in detection technologies lead to improvements in drug detection evasion, which in turn spur the development of better detection capabilities. Right now, it seems that the drugs are winning; in places, these drug tests are described as “intelligence tests”: if you can’t get around them, you don’t deserve to play.

But unlike many security arms races, the detectors have the ability to look into the past. Last year, a laboratory tested Lance Armstrong’s urine and found traces of the banned substance EPO. What’s interesting is that the urine sample tested wasn’t from 2005; it was from 1999. Back then, there weren’t any good tests for EPO in urine. Today there are, and the lab took a frozen urine sample—who knew that labs save urine samples from athletes?—and tested it. He was later cleared—the lab procedures were sloppy—but I don’t think the real ramifications of the episode were ever well understood. Testing can go back in time.

This has two major effects. One, doctors who develop new performance-enhancing drugs may know exactly what sorts of tests the anti-doping laboratories are going to run, and they can test their ability to evade drug detection beforehand. But they cannot know what sorts of tests will be developed in the future, and athletes cannot assume that just because a drug is undetectable today it will remain so years later.

Two, athletes accused of doping based on years-old urine samples have no way of defending themselves. They can’t resubmit to testing; it’s too late. If I were an athlete worried about these accusations, I would deposit my urine “in escrow” on a regular basis to give me some ability to contest an accusation.

The doping arms race will continue because of the incentives. It’s a classic Prisoner’s Dilemma. Consider two competing athletes: Alice and Bob. Both Alice and Bob have to individually decide if they are going to take drugs or not.

Imagine Alice evaluating her two options:

“If Bob doesn’t take any drugs,” she thinks, “then it will be in my best interest to take them. They will give me a performance edge against Bob. I have a better chance of winning.

“Similarly, if Bob takes drugs, it’s also in my interest to take them. At least that way Bob won’t have an advantage over me.

“So even though I have no control over what Bob chooses to do, taking drugs gives me the better outcome, regardless of his action.”

Unfortunately, Bob goes through exactly the same analysis. As a result, they both take performance-enhancing drugs and neither has the advantage over the other. If they could just trust each other, they could refrain from taking the drugs and maintain the same non-advantage status—without any legal or physical danger. But competing athletes can’t trust each other, and everyone feels he has to dope—and continues to search out newer and more undetectable drugs—in order to compete. And the arms race continues.
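
To make the structure of that dilemma concrete, here is a minimal sketch in Python with invented payoff numbers; the specific values are illustrative assumptions, and all that matters is their ordering, which makes doping the dominant strategy for each athlete even though both would be better off clean.

    # Prisoner's Dilemma sketch with invented payoff numbers (higher is better).
    # Only the ordering matters: doping strictly dominates for each athlete,
    # yet mutual abstention would leave both better off.
    PAYOFFS = {
        # (Alice's choice, Bob's choice): (Alice's payoff, Bob's payoff)
        ("clean", "clean"): (3, 3),  # level field, no health or legal risk
        ("clean", "dope"):  (0, 4),  # Bob gains an edge over a clean Alice
        ("dope",  "clean"): (4, 0),  # Alice gains an edge over a clean Bob
        ("dope",  "dope"):  (1, 1),  # level field again, but with all the risk
    }

    def alices_best_response(bob_choice):
        """Return the choice that maximizes Alice's payoff given Bob's choice."""
        return max(("clean", "dope"),
                   key=lambda alice: PAYOFFS[(alice, bob_choice)][0])

    for bob in ("clean", "dope"):
        print("If Bob chooses '" + bob + "', Alice's best response is '"
              + alices_best_response(bob) + "'.")
    # Both lines print "dope": whatever Bob does, Alice does best by doping,
    # and by symmetry so does Bob, so they end up at (1, 1) instead of the
    # mutually better (3, 3).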

Some sports are more vigilant about drug detection than others. European bicycle racing is particularly vigilant; so are the Olympics. American professional sports are far more lenient, often trying to give the appearance of vigilance while still allowing athletes to use performance-enhancing drugs. They know that their fans want to see beefy linebackers, powerful sluggers, and lightning-fast sprinters. So, with a wink and a nod, they only test for the easy stuff.

For example, look at baseball’s current debate on human growth hormone: HGH. They have serious tests, and penalties, for steroid use, but everyone knows that players are now taking HGH because there is no urine test for it. There’s a blood test in development, but it’s still some time away from working. The way to stop HGH use is to take blood tests now and store them for future testing, but the players’ union has refused to allow it and the baseball commissioner isn’t pushing it.

In the end, doping is all about economics. Athletes will continue to dope because the Prisoner’s Dilemma forces them to do so. Sports authorities will either improve their detection capabilities or continue to pretend to do so—depending on their fans and their revenues. And as technology continues to improve, professional athletes will become more like deliberately designed racing cars.

This essay originally appeared on Wired.com.

Posted on August 10, 2006 at 5:18 AM

