Entries Tagged "secrecy"


Getting Out the Vote: Why is it so hard to run an honest election?

Four years after the Florida debacle of 2000 and two years after Congress passed the Help America Vote Act, voting problems are again in the news: confusing ballots, malfunctioning voting machines, problems over who’s registered and who isn’t. All this brings up a basic question: Why is it so hard to run an election?

A fundamental requirement for a democratic election is a secret ballot, and that’s the first reason. Computers regularly handle multimillion-dollar financial transactions, but much of their security comes from the ability to audit the transactions after the fact and correct problems that arise. And much of what they do can simply be done the next day if the system is down. Neither of these solutions works for elections: a secret ballot can’t be traced back to the voter for an after-the-fact audit, and voting can’t be postponed until tomorrow.

American elections are particularly difficult because they’re so complicated. One ballot might have 50 different things to vote on, all but one different in each state and many different in each district. It’s much easier to hold national elections in India, where everyone casts a single vote, than in the United States. Additionally, American election systems need to be able to handle 100 million voters in a single day—an immense undertaking in the best of circumstances.

Speed is another factor. Americans demand election results before they go to sleep; we won’t stand for waiting more than two weeks before knowing who won, as happened in India and Afghanistan this year.

To make matters worse, voting systems are used infrequently, at most a few times a year. Systems that are used every day improve because people familiarize themselves with them, discover mistakes and figure out improvements. It seems as if we all have to relearn how to vote every time we do it.

It should be no surprise that there are problems with voting. What’s surprising is that there aren’t more problems. So how do we make the system work better?

Simplicity: This is the key to making voting better. Registration should be as simple as possible. The voting process should be as simple as possible. Ballot designs should be simple, and they should be tested. The computer industry understands the science of user-interface design; that knowledge should be applied to ballot design.

Uniformity: Simplicity leads to uniformity. The United States doesn’t have one set of voting rules or one voting system. It has 51 different sets of voting rules—one for every state and the District of Columbia—and even more systems. The more systems are standardized around the country, the more we can learn from each other’s mistakes.

Verifiability: Computerized voting machines might have a simple user interface, but complexity hides behind the screen and keyboard. To avoid even more problems, these machines should have a voter-verifiable paper ballot. This isn’t a receipt; it’s not something you take home with you. It’s a paper “ballot” with your votes—one that you verify for accuracy and then put in a ballot box. The machine provides quick tallies, but the paper is the basis for any recounts.

Transparency: All computer code used in voting machines should be public. This allows interested parties to examine the code and point out errors, resulting in continually improving security. Any voting-machine company that claims its code must remain secret for security reasons is lying. Security in computer systems comes from transparency—open systems that pass public scrutiny—and not secrecy.

But those are all solutions for the future. If you’re a voter this year, your options are fewer. My advice is to vote carefully. Read the instructions carefully, and ask questions if you are confused. Follow the instructions carefully, checking every step as you go. Remember that it might be impossible to correct a problem once you’ve finished voting. In many states—including California—you can request a paper ballot if you have any worries about the voting machine.

And be sure to vote. This year, thousands of people are watching and waiting at the polls to help voters make sure their vote counts.

This essay originally appeared in the San Francisco Chronicle.

Also read Avi Rubin’s op-ed on the subject.

Posted on October 31, 2004 at 9:13 AM

Keeping Network Outages Secret

There’s considerable confusion between the concept of secrecy and the concept of security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

In June, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission already requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. But the DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures.

This sounds like the “full disclosure” debate all over again. Is publishing computer and network vulnerability information a good idea, or does it just help the hackers? It arises again and again, as malware takes advantage of software vulnerabilities after they’ve been made public.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

Cryptography is based on secrets—keys—but look at all the work that goes into making them effective. Keys are short and easy to transfer. They’re easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography’s most basic principles is to assume that the algorithm is public.
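To make that principle concrete, here is a minimal Python sketch; the third-party cryptography package and its Fernet construction are my choice of example, not something the essay names. The algorithm is entirely public, the only secret is a short key, and replacing a compromised key is a one-line operation.

```python
# A minimal sketch of the idea that the key, not the algorithm, is the secret.
# Assumes the third-party "cryptography" package; Fernet is a public, documented
# construction (AES in CBC mode plus an HMAC), so only the key needs protecting.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                     # the secret: short, easy to store or transfer
token = Fernet(key).encrypt(b"network outage report")

# If the key leaks, security is recoverable: generate a fresh key and re-encrypt.
new_key = Fernet.generate_key()
token = Fernet(new_key).encrypt(Fernet(key).decrypt(token))

print(Fernet(new_key).decrypt(token))           # b'network outage report'
```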

That’s the other fallacy with the secrecy argument: the assumption that secrecy works. Do we really think that the physical weak points of networks are such a mystery to the bad guys? Do we really think that the hacker underground never discovers vulnerabilities?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn’t bother fixing them, believing in the security of secrecy. And because customers didn’t know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we’ll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks.

Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose one that best serves their needs. Without public disclosure, companies could hide their reliability performance from the public.

Just look at who supports secrecy. Software vendors such as Microsoft want very much to keep vulnerability information secret. The Department of Homeland Security’s recommendations were loudly echoed by the phone companies. It’s the interests of these companies that are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we’re seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures—and even routine government operations—secret. Information about the infrastructure of plants and government buildings is secret. Profiling information used to flag certain airline passengers is secret. The standards for the Department of Homeland Security’s color-coded terrorism threat levels are secret. Even information about government operations without any terrorism connections is being kept secret.

This keeps terrorists in the dark, especially “dumb” terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry—to whom the government is ultimately accountable—is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can’t improve because there’s no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks. This means they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy among them. Trying to keep it a secret that a network has hubs is futile. Better to identify and protect them.
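For a rough sense of why hub secrecy is futile, here is a small Python sketch; the networkx library and the specific parameters are assumptions of mine, not part of the essay. It generates a scale-free network with the Barabási-Albert model and lists the most connected nodes, which are exactly the hubs an attacker would guess at and a defender should harden.

```python
# A small sketch (assumes the networkx package) of how obvious hubs are in a
# scale-free network: anyone with the topology, or a decent guess at it, can
# find the highly connected nodes in a few lines.
import networkx as nx

# Barabasi-Albert preferential attachment produces a scale-free network.
net = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

# Rank nodes by degree; the top few are the hubs to protect and make redundant.
hubs = sorted(net.degree, key=lambda node_deg: node_deg[1], reverse=True)[:5]
print("Top hubs (node, degree):", hubs)
```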

We’re all safer when we have the information we need to exert market pressure on vendors to improve security. We would all be less secure if software vendors didn’t make their security vulnerabilities public, and if telephone companies didn’t have to report network outages. And when government operates without accountability, that serves the security interests of the government, not of the people.

Security Focus article
CNN article

Another version of this essay appeared in the October Communications of the ACM.

Posted on October 1, 2004 at 9:36 PM

