Entries Tagged "essays"


Revoting

In the world of voting, automatic recount laws are not uncommon. Virginia, where George Allen lost to James Webb in the Senate race by 7,800 out of over 2.3 million votes, or 0.33 percent, is an example. If the margin of victory is 1 percent or less, the loser is allowed to ask for a recount. If the margin is 0.5 percent or less, the government pays for it. If the margin is between 0.5 percent and 1 percent, the loser pays for it.
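For concreteness, here is a minimal sketch of those thresholds in Python. The total vote count is an approximation consistent with the essay's "over 2.3 million," and the function name is purely illustrative:

```python
def recount_rule(margin_votes: int, total_votes: int) -> str:
    """Which Virginia recount option applies, per the thresholds described above."""
    margin_pct = 100.0 * margin_votes / total_votes
    if margin_pct <= 0.5:
        return f"{margin_pct:.2f}%: loser may request a recount; the government pays"
    if margin_pct <= 1.0:
        return f"{margin_pct:.2f}%: loser may request a recount, but pays for it"
    return f"{margin_pct:.2f}%: above the automatic-recount threshold"

# Allen-Webb margin: roughly 7,800 votes out of over 2.3 million cast (~0.33%).
print(recount_rule(7_800, 2_370_000))
```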

We have recounts because vote counting is—to put it mildly—sloppy. Americans like their election results fast, before they go to bed at night. So we’re willing to put up with inaccuracies in our tallying procedures, and ignore the fact that the numbers we see on television correlate only roughly with reality.

Traditionally, it didn’t matter very much, because most voting errors were “random errors.”

There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random—equally likely to happen to anyone. In a close race, random errors won’t change the result because votes intended for candidate A that mistakenly go to candidate B happen at the same rate as votes intended for B that mistakenly go to A. (Mathematically, as candidate A’s margin of victory increases, random errors slightly decrease it.)
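A quick simulation makes the cancellation concrete. This is only an illustrative sketch with made-up vote totals and error rates, not data from any real election:

```python
# Random miscounts from A to B and from B to A happen at the same rate,
# so the recorded margin stays close to the true one, shrinking only
# slightly in proportion to the leader's margin.
import numpy as np

rng = np.random.default_rng(0)

def recorded_margin(votes_a: int, votes_b: int, error_rate: float, trials: int = 10_000):
    """Average recorded margin for A after random miscounts in both directions."""
    a_to_b = rng.binomial(votes_a, error_rate, trials)  # A's votes recorded as B
    b_to_a = rng.binomial(votes_b, error_rate, trials)  # B's votes recorded as A
    return np.mean((votes_a - a_to_b + b_to_a) - (votes_b - b_to_a + a_to_b))

true_margin = 520_000 - 480_000                      # A leads by 40,000
print(true_margin, recorded_margin(520_000, 480_000, 0.01))
# -> 40000  ~39200: the errors cancel almost exactly; the small residual
#    shrinks the margin by roughly 2 * error_rate * true_margin.
```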

This is why, historically, recounts in close elections rarely change the result. The recount will find the few percent of the errors in each direction, and they’ll cancel each other out. In an extremely close election, a careful recount will yield a different result—but that’s a rarity.

The other kind of voting error is a systemic error. These are errors in the voting process—the voting machines, the procedures—that cause votes intended for A to go to B at a different rate than the reverse.

An example would be a voting machine that mysteriously recorded more votes for A than there were voters. (Sadly, this kind of thing is not uncommon with electronic voting machines.) Another example would be a random error that only occurs in voting equipment used in areas with strong A support. Systemic errors can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A.

Even worse, systemic errors can introduce errors out of proportion to any actual randomness in the vote-counting process. That is, the closeness of an election is not any indication of the presence or absence of systemic errors.

When a candidate has evidence of systemic errors, a recount can fix a wrong result—but only if the recount can catch the error. With electronic voting machines, all too often there simply isn’t the data: there are no votes to recount.

This year’s election in Florida’s 13th Congressional District is such an example. The winner won by a margin of 373 out of 237,861 total votes, but as many as 18,000 votes were not recorded by the electronic voting machines. These votes came from areas where the loser was favored over the winner, and would have likely changed the result.

Or imagine this—as far as we know—hypothetical situation: After the election, someone discovers rogue software in the voting machines that flipped some votes from A to B. Or someone gets caught vote tampering—changing the data on electronic memory cards. The problem is that the original data is lost forever; all we have is the hacked vote.

Faced with problems like this, we can do one of two things. We can certify the result anyway, regretful that people were disenfranchised but knowing that we can’t undo that wrong. Or, we can tell everyone to come back and vote again.

To be sure, the very idea of revoting is rife with problems. Elections are a snapshot in time—election day—and a revote will not reflect that. If Virginia revoted for the Senate this year, the election would not just be for the junior senator from Virginia, but for control of the entire Senate. Similarly, in the 2000 presidential election in Florida, or the 2004 presidential election in Ohio, single-state revotes would have decided the presidency.

And who should be allowed to revote? Should only people in those precincts where there were problems revote, or should the entire election be rerun? In either case, it is certain that more voters will find their way to the polls, possibly changing the demographic and swaying the result in a direction different than that of the initial set of voters. Is that a bad thing, or a good thing?

Should only people who actually voted—records are kept—or who could demonstrate that they were erroneously turned away from the polls be allowed to revote? In this case, the revote will almost certainly have fewer voters, as some of the original voters will be unable to vote a second time. That’s probably a bad thing—but maybe it’s not.

The only analogies we have for this are run-off elections, which are required in some jurisdictions if the winning candidate didn’t get 50 percent of the vote. But it’s easy to know when you need to have a run-off. Who decides, and based on what evidence, that you need to have a revote?

I admit that I don’t have the answers here. They require some serious thinking about elections, and what we’re trying to achieve. But smart election security not only tries to prevent vote hacking—or even systemic electronic voting-machine errors—it prepares for recovery after an election has been hacked. We have to start discussing these issues now, when they’re non-partisan, instead of waiting for the inevitable situation, and the pre-drawn battle lines those results dictate.

This essay originally appeared on Wired.com.

Posted on November 16, 2006 at 6:07 AM

Voting Technology and Security

Last week in Florida’s 13th Congressional district, the victory margin was only 386 votes out of 153,000. There’ll be a mandatory lawyered-up recount, but it won’t include the almost 18,000 votes that seem to have disappeared. The electronic voting machines didn’t include them in their final tallies, and there’s no backup to use for the recount. The district will pick a winner to send to Washington, but it won’t be because they are sure the majority voted for him. Maybe the majority did, and maybe it didn’t. There’s no way to know.

Electronic voting machines represent a grave threat to fair and accurate elections, a threat that every American—Republican, Democrat or independent—should be concerned about. Because they’re computer-based, the deliberate or accidental actions of a few can swing an entire election. The solution: Paper ballots, which can be verified by voters and recounted if necessary.

To understand the security of electronic voting machines, you first have to consider election security in general. The goal of any voting system is to capture the intent of each voter and collect them all into a final tally. In practice, this occurs through a series of transfer steps. When I voted last week, I transferred my intent onto a paper ballot, which was then transferred to a tabulation machine via an optical scan reader; at the end of the night, the individual machine tallies were transferred by election officials to a central facility and combined into a single result I saw on television.

All election problems are errors introduced at one of these steps, whether it’s voter disenfranchisement, confusing ballots, broken machines or ballot stuffing. Even in normal operations, each step can introduce errors. Voting accuracy, therefore, is a matter of 1) minimizing the number of steps, and 2) increasing the reliability of each step.
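One way to see why both factors matter is to treat each transfer step as preserving a vote with some probability; the step counts and accuracies below are hypothetical, but the multiplication is the point:

```python
# End-to-end accuracy is roughly the product of per-step accuracies,
# so both fewer steps and more reliable steps help.
from math import prod

def end_to_end_accuracy(step_accuracies):
    """Probability a single vote survives every transfer step uncorrupted."""
    return prod(step_accuracies)

print(end_to_end_accuracy([0.999, 0.999, 0.999]))     # 3 steps      -> ~0.9970
print(end_to_end_accuracy([0.999] * 6))               # 6 steps      -> ~0.9940
print(end_to_end_accuracy([0.9999, 0.9999, 0.9999]))  # better steps -> ~0.9997
```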

Much of our election security is based on “security by competing interests.” Every step, with the exception of voters completing their single anonymous ballots, is witnessed by someone from each major party; this ensures that any partisan shenanigans—or even honest mistakes—will be caught by the other observers. This system isn’t perfect, but it’s worked pretty well for a couple hundred years.

Electronic voting is like an iceberg; the real threats are below the waterline where you can’t see them. Paperless electronic voting machines bypass that security process, allowing a small group of people—or even a single hacker—to affect an election. The problem is software—programs that are hidden from view and cannot be verified by a team of Republican and Democrat election judges, programs that can drastically change the final tallies. And because all that’s left at the end of the day are those electronic tallies, there’s no way to verify the results or to perform a recount. Recounts are important.

This isn’t theoretical. In the U.S., there have been hundreds of documented cases of electronic voting machines distorting the vote to the detriment of candidates from both political parties: machines losing votes, machines swapping the votes for candidates, machines registering more votes for a candidate than there were voters, machines not registering votes at all. I would like to believe these are all mistakes and not deliberate fraud, but the truth is that we can’t tell the difference. And these are just the problems we’ve caught; it’s almost certain that many more problems have escaped detection because no one was paying attention.

This is both new and terrifying. For the most part, and throughout most of history, election fraud on a massive scale has been hard; it requires very public actions or a highly corrupt government—or both. But electronic voting is different: a lone hacker can affect an election. He can do his work secretly before the machines are shipped to the polling stations. He can affect an entire area’s voting machines. And he can cover his tracks completely, writing code that deletes itself after the election.

And that assumes well-designed voting machines. The actual machines being sold by companies like Diebold, Sequoia Voting Systems and Election Systems & Software are much worse. The software is badly designed. Machines are “protected” by hotel minibar keys. Vote tallies are stored in easily changeable files. Machines can be infected with viruses. Some voting software runs on Microsoft Windows, with all the bugs and crashes and security vulnerabilities that introduces. The list of inadequate security practices goes on and on.

The voting machine companies counter that such attacks are impossible because the machines are never left unattended (they’re not), the memory cards that hold the votes are carefully controlled (they’re not), and everything is supervised (it isn’t). Yes, they’re lying, but they’re also missing the point.

We shouldn’t—and don’t—have to accept voting machines that might someday be secure only if a long list of operational procedures is followed precisely. We need voting machines that are secure regardless of how they’re programmed, handled and used, and that can be trusted even if they’re sold by a partisan company, or a company with possible ties to Venezuela.

Sounds like an impossible task, but in reality, the solution is surprisingly easy. The trick is to use electronic voting machines as ballot-generating machines. Vote by whatever automatic touch-screen system you want: a machine that keeps no records or tallies of how people voted, but only generates a paper ballot. The voter can check it for accuracy, then process it with an optical-scan machine. The second machine provides the quick initial tally, while the paper ballot provides for recounts when necessary. And absentee and backup ballots can be counted the same way.
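A minimal sketch of that division of labor, with hypothetical interfaces: the first machine only prints a ballot the voter can check, and the second machine both tallies and retains the paper for recounts:

```python
def generate_paper_ballot(selections: dict) -> str:
    """Ballot-generating machine: prints the voter's choices, keeps no tally."""
    return "\n".join(f"{race}: {choice}" for race, choice in selections.items())

class OpticalScanTabulator:
    def __init__(self):
        self.tallies = {}           # fast, unofficial election-night count
        self.retained_ballots = []  # the paper record available for any recount

    def scan(self, ballot: str):
        self.retained_ballots.append(ballot)
        for line in ballot.splitlines():
            race, choice = line.split(": ")
            self.tallies.setdefault(race, {}).setdefault(choice, 0)
            self.tallies[race][choice] += 1

ballot = generate_paper_ballot({"US Senate": "Candidate A"})
# The voter reads the printed ballot, confirms it, then feeds it to the scanner.
tabulator = OpticalScanTabulator()
tabulator.scan(ballot)
print(tabulator.tallies)  # {'US Senate': {'Candidate A': 1}}
```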

You can even do away with the electronic vote-generation machines entirely and hand-mark your ballots like we do in Minnesota. Or run a 100% mail-in election like Oregon does. Again, paper ballots are the key.

Paper? Yes, paper. A stack of paper is harder to tamper with than a number in a computer’s memory. Voters can see their vote on paper, regardless of what goes on inside the computer. And most important, everyone understands paper. We get into hassles over our cellphone bills and credit card mischarges, but when was the last time you had a problem with a $20 bill? We know how to count paper. Banks count it all the time. Both Canada and the U.K. count paper ballots with no problems, as do the Swiss. We can do it, too. In today’s world of computer crashes, worms and hackers, a low-tech solution is the most secure.

Secure voting machines are just one component of a fair and honest election, but they’re an increasingly important part. They’re where a dedicated attacker can most effectively commit election fraud (and we know that changing the results can be worth millions). But we shouldn’t forget other voter suppression tactics: telling people the wrong polling place or election date, taking registered voters off the voting rolls, having too few machines at polling places, or making it onerous for people to register. (Oddly enough, ineligible people voting isn’t a problem in the U.S., despite political rhetoric to the contrary; every study shows their numbers to be so small as to be insignificant. And photo ID requirements actually cause more problems than they solve.)

Voting is as much a perception issue as it is a technological issue. It’s not enough for the result to be mathematically accurate; every citizen must also be confident that it is correct. Around the world, people protest or riot after an election not when their candidate loses, but when they think their candidate lost unfairly. It is vital for a democracy that an election both accurately determine the winner and adequately convince the loser. In the U.S., we’re losing the perception battle.

The current crop of electronic voting machines fails on both counts. The results from Florida’s 13th Congressional district are neither accurate nor convincing. As a democracy, we deserve better. We need to refuse to vote on electronic voting machines without a voter-verifiable paper ballot, and to continue to pressure our legislatures to implement voting technology that works.

This essay originally appeared on Forbes.com.

Avi Rubin wrote a good essay on voting for Forbes as well.

Posted on November 13, 2006 at 5:47 AM

Forge Your Own Boarding Pass

Last week Christopher Soghoian created a Fake Boarding Pass Generator website, allowing anyone to create a fake Northwest Airlines boarding pass: any name, airport, date, flight. This action got him visited by the FBI, who later came back, smashed open his front door, and seized his computers and other belongings. It resulted in calls for his arrest—the most visible by Rep. Edward Markey (D-Massachusetts)—who has since recanted. And it’s gotten him more publicity than he ever dreamed of.

All for demonstrating a known and obvious vulnerability in airport security involving boarding passes and IDs.

This vulnerability is nothing new. There was an article on CSOonline from February 2006. There was an article on Slate from February 2005. Sen. Chuck Schumer spoke about it as well. I wrote about it in the August 2003 issue of Crypto-Gram. It’s possible I was the first person to publish it, but I certainly wasn’t the first person to think of it.

It’s kind of obvious, really. If you can make a fake boarding pass, you can get through airport security with it. Big deal; we know.

You can also use a fake boarding pass to fly on someone else’s ticket. The trick is to have two boarding passes: one legitimate, in the name the reservation is under, and another phony one that matches the name on your photo ID. Use the fake boarding pass in your name to get through airport security, and the real boarding pass in someone else’s name to board the plane.

This means that a terrorist on the no-fly list can get on a plane: He buys a ticket in someone else’s name, perhaps using a stolen credit card, and uses his own photo ID and a fake boarding pass to get through airport security. Since the ticket is in an innocent’s name, it won’t raise a flag on the no-fly list.

You can also use a fake boarding pass instead of your real one if you have the “SSSS” mark and want to avoid secondary screening, or if you don’t have a ticket but want to get into the gate area.

Historically, forging a boarding pass was difficult. It required special paper and equipment. But since Alaska Airlines started the trend in 1999, most airlines now allow you to print your boarding pass using your home computer and bring it with you to the airport. This program was temporarily suspended after 9/11, but was quickly brought back because of pressure from the airlines. People who print the boarding passes at home can go directly to airport security, and that means fewer airline agents are required.

Airline websites generate boarding passes as graphics files, which means anyone with a little bit of skill can modify them in a program like Photoshop. All Soghoian’s website did was automate the process with a single airline’s boarding passes.

Soghoian claims that he wanted to demonstrate the vulnerability. You could argue that he went about it in a stupid way, but I don’t think what he did is substantively worse than what I wrote in 2003. Or what Schumer described in 2005. Why is it that the person who demonstrates the vulnerability is vilified while the person who describes it is ignored? Or, even worse, the organization that causes it is ignored? Why are we shooting the messenger instead of discussing the problem?

As I wrote in 2005: “The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass and the computer record. Think of them as three points on the triangle. Under the current system, the boarding pass is compared to the traveler’s identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record—the third leg of the triangle—it’s possible to create two different boarding passes and have no one notice. That’s why the attack works.”

The way to fix it is equally obvious: Verify the accuracy of the boarding passes at the security checkpoints. If passengers had to scan their boarding passes as they went through screening, the computer could verify that the boarding pass already matched to the photo ID also matched the data in the computer. Close the authentication triangle and the vulnerability disappears.
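Here is a minimal sketch of that missing check; the names and record format are hypothetical, but it shows why the forged-pass trick works today and fails once the third leg of the triangle is verified:

```python
# Hypothetical stand-in for the airline's computer record: name -> flight.
RESERVATIONS = {"ALICE SMITH": "NW123"}

def checkpoint_today(id_name: str, pass_name: str) -> bool:
    # Current practice: compare the boarding pass only against the photo ID.
    return id_name == pass_name

def checkpoint_fixed(id_name: str, pass_name: str, pass_flight: str) -> bool:
    # Proposed fix: also verify the pass (and hence the ID) against the record.
    return id_name == pass_name and RESERVATIONS.get(pass_name) == pass_flight

# Attacker holds an ID in his own name plus a forged pass printed with that name.
print(checkpoint_today("BOB JONES", "BOB JONES"))           # True: forgery passes
print(checkpoint_fixed("BOB JONES", "BOB JONES", "NW123"))  # False: no matching record
```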

But before we start spending time and money and Transportation Security Administration agents, let’s be honest with ourselves: The photo ID requirement is no more than security theater. Its only security purpose is to check names against the no-fly list, which would still be a joke even if it weren’t so easy to circumvent. Identification is not a useful security measure here.

Interestingly enough, while the photo ID requirement is presented as an antiterrorism security measure, it is really an airline-business security measure. It was first implemented after the explosion of TWA Flight 800 over the Atlantic in 1996. The government originally thought a terrorist bomb was responsible, but the explosion was later shown to be an accident.

Unlike every other airplane security measure—including reinforcing cockpit doors, which could have prevented 9/11—the airlines didn’t resist this one, because it solved a business problem: the resale of non-refundable tickets. Before the photo ID requirement, these tickets were regularly advertised in classified pages: “Round trip, New York to Los Angeles, 11/21-30, male, $100.” Since the airlines never checked IDs, anyone of the correct gender could use the ticket. Airlines hated that, and tried repeatedly to shut that market down. In 1996, the airlines were finally able to solve that problem and blame it on the FAA and terrorism.

So business is why we have the photo ID requirement in the first place, and business is why it’s so easy to circumvent it. Instead of going after someone who demonstrates an obvious flaw that is already public, let’s focus on the organizations that are actually responsible for this security failure and have failed to do anything about it for all these years. Where’s the TSA’s response to all this?

The problem is real, and the Department of Homeland Security and TSA should either fix the security or scrap the system. What we’ve got now is the worst security system of all: one that annoys everyone who is innocent while failing to catch the guilty.

This essay—my 30th for Wired.com—appeared today.

EDITED TO ADD (11/4): More news and commentary.

EDITED TO ADD (1/10): Great essay by Matt Blaze.

Posted on November 2, 2006 at 6:21 AM

Architecture and Security

You’ve seen them: those large concrete blocks in front of skyscrapers, monuments and government buildings, designed to protect against car and truck bombs. They sprang up like weeds in the months after 9/11, but the idea is much older. The prettier ones doubled as planters; the uglier ones just stood there.

Form follows function. From medieval castles to modern airports, security concerns have always influenced architecture. Castles appeared during the reign of King Stephen of England because they were the best way to defend the land and there wasn’t a strong king to put any limits on castle-building. But castle design changed over the centuries in response to both innovations in warfare and politics, from motte-and-bailey to concentric design in the late medieval period to entirely decorative castles in the 19th century.

These changes were expensive. The problem is that architecture tends toward permanence, while security threats change much faster. Something that seemed a good idea when a building was designed might make little sense a century—or even a decade—later. But by then it’s hard to undo those architectural decisions.

When Syracuse University built a new campus in the mid-1970s, the student protests of the late 1960s were fresh on everybody’s mind. So the architects designed a college without the open greens of traditional college campuses. It’s now 30 years later, but Syracuse University is stuck defending itself against an obsolete threat.

Similarly, hotel entries in Montreal were elevated above street level in the 1970s, in response to security worries about Quebecois separatists. Today the threat is gone, but those older hotels continue to be maddeningly difficult to navigate.

Also in the 1970s, the Israeli consulate in New York built a unique security system: a two-door vestibule that allowed guards to identify visitors and control building access. Now this kind of entryway is widespread, and buildings with it will remain unwelcoming long after the threat is gone.

The same thing can be seen in cyberspace as well. In his book, Code and Other Laws of Cyberspace, Lawrence Lessig describes how decisions about technological infrastructure—the architecture of the internet—become embedded and then impracticable to change. Whether it’s technologies to prevent file copying, limit anonymity, record our digital habits for later investigation or reduce interoperability and strengthen monopoly positions, once technologies based on these security concerns become standard it will take decades to undo them.

It’s dangerously shortsighted to make architectural decisions based on the threat of the moment without regard to the long-term consequences of those decisions.

Concrete building barriers are an exception: They’re removable. They started appearing in Washington, D.C., in 1983, after the truck bombing of the Marine barracks in Beirut. After 9/11, they were a sort of bizarre status symbol: They proved your building was important enough to deserve protection. In New York City alone, more than 50 buildings were protected in this fashion.

Today, they’re slowly coming down. Studies have found they impede traffic flow, turn into giant ashtrays and can pose a security risk by turning into flying shrapnel in an explosion.

We should be thankful they can be removed, and did not end up as permanent aspects of our cities’ architecture. We won’t be so lucky with some of the design decisions we’re seeing about internet architecture.

This essay originally appeared (my 29th column) in Wired.com.

EDITED TO ADD (11/3): Activism-restricting architecture at the University of Texas. And commentary from the Architectures of Control in Design Blog.

Posted on October 19, 2006 at 9:27 AM

The Death of Ephemeral Conversation

The political firestorm over former U.S. Rep. Mark Foley’s salacious instant messages hides another issue, one about privacy. We are rapidly turning into a society where our intimate conversations can be saved and made public later. This represents an enormous loss of freedom and liberty, and the only way to solve the problem is through legislation.

Everyday conversation used to be ephemeral. Whether face-to-face or by phone, we could be reasonably sure that what we said disappeared as soon as we said it. Of course, organized crime bosses worried about phone taps and room bugs, but that was the exception. Privacy was the default assumption.

This has changed. We now type our casual conversations. We chat in e-mail, with instant messages on our computer and SMS messages on our cellphones, and in comments on social networking Web sites like Friendster, LiveJournal, and MySpace. These conversations—with friends, lovers, colleagues, fellow employees—are not ephemeral; they leave their own electronic trails.

We know this intellectually, but we haven’t truly internalized it. We type on, engrossed in conversation, forgetting that we’re being recorded.

Foley’s instant messages were saved by the young men he talked to, but they could have also been saved by the instant messaging service. There are tools that allow both businesses and government agencies to monitor and log IM conversations. E-mail can be saved by your ISP or by the IT department in your corporation. Gmail, for example, saves everything, even if you delete it.

And these conversations can come back to haunt people—in criminal prosecutions, divorce proceedings or simply as embarrassing disclosures. During the 1998 Microsoft anti-trust trial, the prosecution pored over masses of e-mail, looking for a smoking gun. Of course they found things; everyone says things in conversation that, taken out of context, can prove anything.

The moral is clear: If you type it and send it, prepare to explain it in public later.

And voice is no longer a refuge. Face-to-face conversations are still safe, but we know that the National Security Agency is monitoring everyone’s international phone calls. (They said nothing about SMS messages, but one can assume they were monitoring those too.) Routine recording of phone conversations is still rare—certainly the NSA has the capability—but will become more common as telephone calls continue migrating to the IP network.

If you find this disturbing, you should. Fewer conversations are ephemeral, and we’re losing control over the data. We trust our ISPs, employers and cellphone companies with our privacy, but again and again they’ve proven they can’t be trusted. Identity thieves routinely gain access to these repositories of our information. Paris Hilton and other celebrities have been the victims of hackers breaking into their cellphone providers’ networks. Google reads our Gmail and inserts context-dependent ads.

Even worse, normal constitutional protections don’t apply to much of this. The police need a court-issued warrant to search our papers or eavesdrop on our communications, but can simply issue a subpoena—or ask nicely or threateningly—for data of ours that is held by a third party, including stored copies of our communications.

The Justice Department wants to make this problem even worse, by forcing ISPs and others to save our communications—just in case we’re someday the target of an investigation. This is not only bad privacy and security, it’s a blow to our liberty as well. A world without ephemeral conversation is a world without freedom.

We can’t turn back technology; electronic communications are here to stay. But as technology makes our conversations less ephemeral, we need laws to step in and safeguard our privacy. We need a comprehensive data privacy law, protecting our data and communications regardless of where they are stored or how they are processed. We need laws forcing companies to keep them private and to delete them as soon as they are no longer needed.

And we need to remember, whenever we type and send, we’re being watched.

Foley is an anomaly. Most of us do not send instant messages in order to solicit sex with minors. Law enforcement might have a legitimate need to access Foley’s IMs, e-mails and cellphone calling logs, but that’s why there are warrants supported by probable cause—they help ensure that investigations are properly focused on suspected pedophiles, terrorists and other criminals. We saw this in the recent UK terrorist arrests; focused investigations on suspected terrorists foiled the plot, not broad surveillance of everyone without probable cause.

Without legal privacy protections, the world becomes one giant airport security area, where the slightest joke—or comment made years before—lands you in hot water. The world becomes one giant market-research study, where we are all life-long subjects. The world becomes a police state, where we all are assumed to be Foleys and terrorists in the eyes of the government.

This essay originally appeared on Forbes.com.

Posted on October 18, 2006 at 3:30 PM

Screening People with Clearances

Why should we waste time at airport security, screening people with U.S. government security clearances? This perfectly reasonable question was asked recently by Robert Poole, director of transportation studies at The Reason Foundation, as he and I were interviewed by WOSU Radio in Ohio.

Poole argued that people with government security clearances, people who are entrusted with U.S. national security secrets, are trusted enough to be allowed through airport security with only a cursory screening. They’ve already gone through background checks, he said, and it would be more efficient to concentrate screening resources on everyone else.

To someone not steeped in security, it makes perfect sense. But it’s a terrible idea, and understanding why teaches us some important security lessons.

The first lesson is that security is a system. Identifying someone’s security clearance is a complicated process. People with clearances don’t have special ID cards, and they can’t just walk into any secured facility. A clearance is held by a particular organization—usually the organization the person works for—and is transferred by a classified message to other organizations when that person travels on official business.

Airport security checkpoints are not set up to receive these clearance messages, so some other system would have to be developed.

Of course, it makes no sense for the cleared person to have his office send a message to every airport he’s visiting, at the time of travel. Far easier is to have a centralized database of people who are cleared. But now you have to build this database. And secure it. And ensure that it’s kept up to date.

Or maybe we can create a new type of ID card: one that identifies people with security clearances. But that also requires a backend database and a card that can’t be forged. And clearances can be revoked at any time, so there needs to be some way of invalidating cards automatically and remotely.

Whatever you do, you need to implement a new set of security procedures at airport security checkpoints to deal with these people. The procedures need to be good enough that people can’t spoof them. Screeners need to be trained. The system needs to be tested.

What starts out as a simple idea—don’t waste time searching people with government security clearances—rapidly becomes a complicated security system with all sorts of new vulnerabilities.

The second lesson is that security is a trade-off. We don’t have infinite dollars to spend on security. We need to choose where to spend our money, and we’re best off if we spend it in ways that give us the most security for our dollar.

Given that very few Americans have security clearances, and that speeding them through security wouldn’t make much of a difference to anyone else standing in line, wouldn’t it be smarter to spend the money elsewhere? Even if you’re just making trade-offs about airport security checkpoints, I would rather take the hundreds of millions of dollars this kind of system could cost and spend it on more security screeners and better training for existing security screeners. We could both speed up the lines and make them more effective.

The third lesson is that security decisions are often based on a subjective agenda. My guess is that Poole has a security clearance—he was a member of the Bush-Cheney transition team in 2000—and is annoyed that he is being subjected to the same screening procedures as the other (clearly less trusted) people he is forced to stand in line with. From his perspective, not screening people like him is obvious. But objectively it’s not.

This issue is no different than searching airplane pilots, something that regularly elicits howls of laughter among amateur security watchers. What they don’t realize is that the issue is not whether we should trust pilots, airplane maintenance technicians or people with clearances. The issue is whether we should trust people who are dressed as pilots, wear airplane-maintenance-tech IDs or claim to have clearances.

We have two choices: Either build an infrastructure to verify their claims, or assume that they’re false. And with apologies to pilots, maintenance techs and people with clearances, it’s cheaper, easier and more secure to search you all.

This is my twenty-eighth essay for Wired.com.

Posted on October 5, 2006 at 8:27 AM

Facebook and Data Control

Earlier this month, the popular social networking site Facebook learned a hard lesson in privacy. It introduced a new feature called “News Feeds” that shows an aggregation of everything members do on the site: added and deleted friends, a change in relationship status, a new favorite song, a new interest, etc. Instead of a member’s friends having to go to his page to view any changes, these changes are all presented to them automatically.

The outrage was enormous. One group, Students Against Facebook News Feeds, amassed over 700,000 members. Members planned to protest at the company’s headquarters. Facebook’s founder was completely stunned, and the company scrambled to add some privacy options.

Welcome to the complicated and confusing world of privacy in the information age. Facebook didn’t think there would be any problem; all it did was take available data and aggregate it in a novel way for what it perceived was its customers’ benefit. Facebook members instinctively understood that making this information easier to display was an enormous difference, and that privacy is more about control than about secrecy.

But on the other hand, Facebook members are just fooling themselves if they think they can control information they give to third parties.

Privacy used to be about secrecy. Someone defending himself in court against the charge of revealing someone else’s personal information could use as a defense the fact that it was not secret. But clearly, privacy is more complicated than that. Just because you tell your insurance company something doesn’t mean you don’t feel violated when that information is sold to a data broker. Just because you tell your friend a secret doesn’t mean you’re happy when he tells others. Same with your employer, your bank, or any company you do business with.

But as the Facebook example illustrates, privacy is much more complex. It’s about who you choose to disclose information to, how, and for what purpose. And the key word there is “choose.” People are willing to share all sorts of information, as long as they are in control.

When Facebook unilaterally changed the rules about how personal information was revealed, it reminded people that they weren’t in control. Its eight million members put their personal information on the site based on a set of rules about how that information would be used. It’s no wonder those members—high school and college kids who traditionally don’t care much about their own privacy—felt violated when Facebook changed the rules.

Unfortunately, Facebook can change the rules whenever it wants. Its Privacy Policy is 2,800 words long, and ends with a notice that it can change at any time. How many members ever read that policy, let alone read it regularly and check for changes? Not that a Privacy Policy is the same as a contract. Legally, Facebook owns all data members upload to the site. It can sell the data to advertisers, marketers, and data brokers. (Note: there is no evidence that Facebook does any of this.) It can allow the police to search its databases upon request. It can add new features that change who can access what personal data, and how.

But public perception is important. The lesson here for Facebook and other companies—for Google and MySpace and AOL and everyone else who hosts our e-mails and webpages and chat sessions—is that people believe they own their data. Even though the user agreement might technically give companies the right to sell the data, change the access rules to that data, or otherwise own that data, we—the users—believe otherwise. And when we who are affected by those actions start expressing our views—watch out.

What Facebook should have done was add the feature as an option, and allow members to opt in if they wanted to. Then, members who wanted to share their information via News Feeds could do so, and everyone else wouldn’t have felt that they had no say in the matter. This is definitely a gray area, and it’s hard to know beforehand which changes need to be implemented slowly and which won’t matter. Facebook, and others, need to talk to their members openly about new features. Remember: members want control.

The lesson for Facebook members might be even more jarring: if they think they have control over their data, they’re only deluding themselves. They can rebel against Facebook for changing the rules, but the rules have changed, regardless of what the company does.

Whenever you put data on a computer, you lose some control over it. And when you put it on the internet, you lose a lot of control over it. News Feeds brought Facebook members face to face with the full implications of putting their personal information on Facebook. It had just been an accident of the user interface that it was difficult to aggregate the data from multiple friends into a single place. And even if Facebook eliminates News Feeds entirely, a third party could easily write a program that does the same thing. Facebook could try to block the program, but would lose that technical battle in the end.

We’re all still wrestling with the privacy implications of the Internet, but the balance has tipped in favor of more openness. Digital data is just too easy to move, copy, aggregate, and display. Companies like Facebook need to respect the social rules of their sites, to think carefully about their default settings—they have an enormous impact on the privacy mores of the online world—and to give users as much control over their personal information as they can.

But we all need to remember that much of that control is illusory.

This essay originally appeared on Wired.com.

Posted on September 21, 2006 at 5:57 AM

University Networks and Data Security

In general, the problems of securing a university network are no different than those of securing any other large corporate network. But when it comes to data security, universities have their own unique problems. It’s easy to point fingers at students—a large number of potentially adversarial transient insiders. Yet that’s really no different from a corporation dealing with an assortment of employees and contractors—the difference is the culture.

Universities are edge-focused; central policies tend to be weak, by design, with maximum autonomy for the edges. This means they have natural tendencies against centralization of services. Departments and individual professors are used to being semiautonomous. Because these institutions were established long before the advent of computers, when networking did begin to infuse universities, it developed within existing administrative divisions. Some universities have academic departments with separate IT departments, budgets, and staff, with a central IT group providing bandwidth but little or no oversight. Unfortunately, these smaller IT groups don’t generally count policy development and enforcement as part of their core competencies.

The lack of central authority makes enforcing uniform standards challenging, to say the least. Most university CIOs have much less power than their corporate counterparts; university mandates can be a major obstacle in enforcing any security policy. This leads to an uneven security landscape.

There’s also a cultural tendency for faculty and staff to resist restrictions, especially in the area of research. Because most research is now done online—or, at least, involves online access—restricting the use of or deciding on appropriate uses for information technologies can be difficult. This resistance also leads to a lack of centralization and an absence of IT operational procedures such as change control, change management, patch management, and configuration control.

The result is that there’s rarely a uniform security policy. The centralized servers—the core where the database servers live—are generally more secure, whereas the periphery is a hodgepodge of security levels.

So, what to do? Unfortunately, solutions are easier to describe than implement. First, universities should take a top-down approach to securing their infrastructure. Rather than fighting an established culture, they should concentrate on the core infrastructure.

Then they should move personal, financial, and other comparable data into that core. Leave information important to departments and research groups to them, and centrally store information that’s important to the university as a whole. This can be done under the auspices of the CIO. Laws and regulations can help drive consolidation and standardization.

Next, enforce policies for departments that need to connect to the sensitive data in the core. This can be difficult with older legacy systems, but establishing a standard for best practices is better than giving up. All legacy technology is upgraded eventually.

Finally, create distinct segregated networks within the campus. Treat networks that aren’t under the IT department’s direct control as untrusted. Student networks, for example, should be firewalled to protect the internal core from them. The university can then establish levels of trust commensurate with the segregated networks’ adherence to policies. If a research network claims it can’t have any controls, then let the university create a separate virtual network for it, outside the university’s firewalls, and let it live there. Note, though, that if something or someone on that network wants to connect to sensitive data within the core, it’s going to have to agree to whatever security policies that level of data access requires.
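As a rough illustration of that tiered-trust idea (the zone names, levels, and policy bar below are all hypothetical):

```python
# Each campus network zone gets a trust level based on how well it follows
# central policy; only sufficiently trusted zones may reach the core's data.
TRUST_LEVELS = {
    "core-servers": 3,        # centrally managed, fully policy-compliant
    "admin-offices": 2,       # departmental, but accepts core security policies
    "student-dorms": 1,       # firewalled off from the internal core
    "unmanaged-research": 0,  # lives outside the university's firewalls
}

REQUIRED_LEVEL_FOR_SENSITIVE_DATA = 2

def may_access_core(zone: str) -> bool:
    """A zone reaches sensitive core data only if it meets the policy bar."""
    return TRUST_LEVELS.get(zone, 0) >= REQUIRED_LEVEL_FOR_SENSITIVE_DATA

for zone in TRUST_LEVELS:
    print(zone, may_access_core(zone))
```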

Securing university networks is an excellent example of the social problems surrounding network security being harder than the technical ones. But harder doesn’t mean impossible, and there is a lot that can be done to improve security.

This essay originally appeared in the September/October issue of IEEE Security & Privacy.

Posted on September 20, 2006 at 7:37 AM

Renew Your Passport Now!

If you have a passport, now is the time to renew it—even if it’s not set to expire anytime soon. If you don’t have a passport and think you might need one, now is the time to get it. In many countries, including the United States, passports will soon be equipped with RFID chips. And you don’t want one of these chips in your passport.

RFID stands for “radio-frequency identification.” Passports with RFID chips store an electronic copy of the passport information: your name, a digitized picture, etc. And in the future, the chip might store fingerprints or digital visas from various countries.

By itself, this is no problem. But RFID chips don’t have to be plugged in to a reader to operate. Like the chips used for automatic toll collection on roads or automatic fare collection on subways, these chips operate via proximity. The risk to you is the possibility of surreptitious access: Your passport information might be read without your knowledge or consent by a government trying to track your movements, a criminal trying to steal your identity or someone just curious about your citizenship.

At first the State Department belittled those risks, but in response to criticism from experts it has implemented some security features. Passports will come with a shielded cover, making it much harder to read the chip when the passport is closed. And there are now access-control and encryption mechanisms, making it much harder for an unauthorized reader to collect, understand and alter the data.

Although those measures help, they don’t go far enough. The shielding does no good when the passport is open. Travel abroad and you’ll notice how often you have to show your passport: at hotels, banks, Internet cafes. Anyone intent on harvesting passport data could set up a reader at one of those places. And although the State Department insists that the chip can be read only by a reader that is inches away, the chips have been read from many feet away.

The other security mechanisms are also vulnerable, and several security researchers have already discovered flaws. One found that he could identify individual chips via unique characteristics of the radio transmissions. Another successfully cloned a chip. The State Department called this a “meaningless stunt,” pointing out that the researcher could not read or change the data. But the researcher spent only two weeks trying; the security of your passport has to be strong enough to last 10 years.

This is perhaps the greatest risk. The security mechanisms on your passport chip have to last the lifetime of your passport. It is as ridiculous to think that passport security will remain secure for that long as it would be to think that you won’t see another security update for Microsoft Windows in that time. Improvements in antenna technology will certainly increase the distance at which they can be read and might even allow unauthorized readers to penetrate the shielding.

Whatever happens, if you have a passport with an RFID chip, you’re stuck. Although popping your passport in the microwave will disable the chip, the shielding will cause all kinds of sparking. And although the United States has said that a nonworking chip will not invalidate a passport, it is unclear if one with a deliberately damaged chip will be honored.

The Colorado passport office is already issuing RFID passports, and the State Department expects all U.S. passport offices to be doing so by the end of the year. Many other countries are in the process of changing over. So get a passport before it’s too late. With your new passport you can wait another 10 years for an RFID passport, when the technology will be more mature, when we will have a better understanding of the security risks and when there will be other technologies we can use to cut the risks. You don’t want to be a guinea pig on this one.

This op ed appeared on Saturday in the Washington Post.

I’ve written about RFID passports many times before (that last link is an op-ed from The International Herald-Tribune), although last year I—mistakenly—withdrew my objections based on the security measures the State Department was taking. I’ve since realized that they won’t be enough.

EDITED TO ADD (9/29): This op ed has appeared in about a dozen newspapers. The San Jose Mercury News published a rebuttal. Kind of lame, I think.

EDITED TO ADD (12/30): Here’s how to disable a RFID passport.

Posted on September 18, 2006 at 6:06 AM

What is a Hacker?

A hacker is someone who thinks outside the box. It’s someone who discards conventional wisdom, and does something else instead. It’s someone who looks at the edge and wonders what’s beyond. It’s someone who sees a set of rules and wonders what happens if you don’t follow them. A hacker is someone who experiments with the limitations of systems for intellectual curiosity.

I wrote that last sentence in the year 2000, in my book Secrets and Lies. And I’m sticking to that definition.

This is what else I wrote in Secrets and Lies (pages 43-44):

Hackers are as old as curiosity, although the term itself is modern. Galileo was a hacker. Mme. Curie was one, too. Aristotle wasn’t. (Aristotle had some theoretical proof that women had fewer teeth than men. A hacker would have simply counted his wife’s teeth. A good hacker would have counted his wife’s teeth without her knowing about it, while she was asleep. A good bad hacker might remove some of them, just to prove a point.)

When I was in college, I knew a group similar to hackers: the key freaks. They wanted access, and their goal was to have a key to every lock on campus. They would study lockpicking and learn new techniques, trade maps of the steam tunnels and where they led, and exchange copies of keys with each other. A locked door was a challenge, a personal affront to their ability. These people weren’t out to do damage—stealing stuff wasn’t their objective—although they certainly could have. Their hobby was the power to go anywhere they wanted to.

Remember the phone phreaks of yesteryear, the ones who could whistle into payphones and make free phone calls? Sure, they stole phone service. But it wasn’t like they needed to make eight-hour calls to Manila or McMurdo. And their real work was secret knowledge: The phone network was a vast maze of information. They wanted to know the system better than the designers, and they wanted the ability to modify it to their will. Understanding how the phone system worked—that was the true prize. Other early hackers were ham-radio hobbyists and model-train enthusiasts.

Richard Feynman was a hacker; read any of his books.

Computer hackers follow these evolutionary lines. Or, they are the same genus operating on a new system. Computers, and networks in particular, are the new landscape to be explored. Networks provide the ultimate maze of steam tunnels, where a new hacking technique becomes a key that can open computer after computer. And inside is knowledge, understanding. Access. How things work. Why things work. It’s all out there, waiting to be discovered.

Computers are the perfect playground for hackers. Computers, and computer networks, are vast treasure troves of secret knowledge. The Internet is an immense landscape of undiscovered information. The more you know, the more you can do.

And it should be no surprise that many hackers have focused their skills on computer security. Not only is it often the obstacle between the hacker and knowledge, and therefore something to be defeated, but also the very mindset necessary to be good at security is exactly the same mindset that hackers have: thinking outside the box, breaking the rules, exploring the limitations of a system. The easiest way to break a security system is to figure out what the system’s designers hadn’t thought of: that’s security hacking.

Hackers cheat. And breaking security regularly involves cheating. It’s figuring out a smart card’s RSA key by looking at the power fluctuations, because the designers of the card never realized anyone could do that. It’s self-signing a piece of code, because the signature-verification system didn’t think someone might try that. It’s using a piece of a protocol to break a completely different protocol, because all previous security analysis only looked at protocols individually and not in pairs.

That’s security hacking: breaking a system by thinking differently.

It all sounds criminal: recovering encrypted text, fooling signature algorithms, breaking protocols. But honestly, that’s just the way we security people talk. Hacking isn’t criminal. All the examples two paragraphs above were performed by respected security professionals, and all were presented at security conferences.

I remember one conversation I had at a Crypto conference, early in my career. It was outside amongst the jumbo shrimp, chocolate-covered strawberries, and other delectables. A bunch of us were talking about some cryptographic system, including Brian Snow of the NSA. Someone described an unconventional attack, one that didn’t follow the normal rules of cryptanalysis. I don’t remember any of the details, but I remember my response after hearing the description of the attack.

“That’s cheating,” I said.

Because it was.

I also remember Brian turning to look at me. He didn’t say anything, but his look conveyed everything. “There’s no such thing as cheating in this business.”

Because there isn’t.

Hacking is cheating, and it’s how we get better at security. It’s only after someone invents a new attack that the rest of us can figure out how to defend against it.

For years I have refused to play the semantic “hacker” vs. “cracker” game. There are good hackers and bad hackers, just as there are good electricians and bad electricians. “Hacker” is a mindset and a skill set; what you do with it is a different issue.

And I believe the best computer security experts have the hacker mindset. When I look to hire people, I look for someone who can’t walk into a store without figuring out how to shoplift. I look for someone who can’t test a computer security program without trying to get around it. I look for someone who, when told that things work in a particular way, immediately asks how things stop working if you do something else.

We need these people in security, and we need them on our side. Criminals are always trying to figure out how to break security systems. Field a new system—an ATM, an online banking system, a gambling machine—and criminals will try to make an illegal profit off it. They’ll figure it out eventually, because some hackers are also criminals. But if we have hackers working for us, they’ll figure it out first—and then we can defend ourselves.

It’s our only hope for security in this fast-moving technological world of ours.

This essay appeared in the Summer 2006 issue of 2600.

Posted on September 14, 2006 at 7:13 AM

