The Non-Security of Secrecy

Bruce Schneier
Communications of the ACM, October 2004

Considerable confusion exists between the different concepts of secrecy and security, which often causes bad security and surprising political arguments. Secrecy usually contributes only to a false sense of security.

In June 2004, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission requires telephone companies to report large disruptions of telephone service, and wants to extend that to high-speed data lines and wireless networks. DHS fears that such information would give cyberterrorists a "virtual road map" to target critical infrastructures.

Is publishing computer and network vulnerability information useful, or does it just help the hackers? This is a common question, as malware takes advantage of software vulnerabilities after they become known.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is beneficial to security only in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they're lost, they're lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there's no way to recover security. Trying to base security on secrecy is simply bad design.

Cryptography is based on secrets -- keys -- but look at all the work that goes into making keys effective. Keys are short and easy to transfer. They're easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography's most basic principles is to assume that the algorithm is public.
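That principle -- the algorithm is public, only the key is secret -- can be made concrete with a short sketch (this example is mine, not the essay's; it uses Python's standard hmac module, and the messages and key sizes are illustrative):

```python
# Sketch of Kerckhoffs's principle: the algorithm (HMAC-SHA256) is
# completely public; security rests in the key alone. The key has the
# properties the essay describes: short, easy to transfer, easy to change.
import hmac
import hashlib
import secrets

def sign(key: bytes, message: bytes) -> bytes:
    # Anyone can read this code; without the key it is useless to an attacker.
    return hmac.new(key, message, hashlib.sha256).digest()

key = secrets.token_bytes(32)  # a short secret, trivial to store and transmit
tag = sign(key, b"network outage report")

# Verification requires the same secret key.
assert hmac.compare_digest(tag, sign(key, b"network outage report"))

# Recovering from a compromised key is cheap: just generate a new one.
# Contrast with a secret *algorithm*, which cannot be replaced overnight.
new_key = secrets.token_bytes(32)
assert not hmac.compare_digest(tag, sign(new_key, b"network outage report"))
```

The design point is the one the essay makes: because the secret is confined to a small, replaceable key, losing it does not destroy the system -- you rotate the key and security is restored.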

A fallacy of the secrecy argument is the assumption that secrecy works. Do we really believe that the physical weak points of networks are a mystery to the bad guys, and that they are unable to discover these vulnerabilities on their own?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies denied their existence and wouldn't bother fixing them, believing in the security of secrecy. And because customers didn't know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we'll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks. Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose those that best serve their needs. Without public disclosure, companies can hide their weaknesses.

Who supports secrecy? Software vendors such as Microsoft want to keep vulnerability information secret. The Department of Homeland Security's recommendations were loudly echoed by the phone companies. The interests of these companies are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we're seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures -- and even routine government operations -- secret: information about the infrastructure of plants, government buildings, and profiling information used to flag certain airline passengers; standards for the Department of Homeland Security's color-coded terrorism threat levels; even information about government operations without any terrorism connections.

This keeps terrorists in the dark, especially "dumb" terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry -- to whom the government is ultimately accountable -- is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can't improve because there's no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks: they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy. Trying to hide that a network has hubs is futile. It's better to identify and protect them.

We're all safer when we have the information we need to exert market pressure on vendors to improve security. We are all less secure if software vendors don't make their security vulnerabilities public, and if telephone companies don't have to report network outages. Governments operating without accountability serve their own security interests, not the people's.

