Entries Tagged "Microsoft"

Page 14 of 14

Linux Security

I’m a big fan of the Honeynet Project (and a member of their board of directors). They don’t have a security product; they do security research. Basically, they wire computers up with sensors, put them on the Internet, and watch hackers attack them.

They just released a report about the security of Linux:

Recent data from our honeynet sensor grid reveals that the average life expectancy to compromise for an unpatched Linux system has increased from 72 hours to 3 months. This means that an unpatched Linux system with commonly used configurations (such as server builds of RedHat 9.0 or Suse 6.2) has an online mean life expectancy of 3 months before being successfully compromised.

This is much greater than that of Windows systems, which have average life expectancies on the order of a few minutes.

It’s also important to remember that this paper focuses on vulnerable systems. The Honeynet researchers deployed almost 20 vulnerable systems to monitor hacker tactics, and found that no one was hacking the systems. That’s the real story: the hackers aren’t bothering with Linux. Two years ago, a vulnerable Linux system would be hacked in less than three days; now it takes three months.

Why? My guess is a combination of two reasons. One, Linux is that much more secure than Windows. Two, the bad guys are focusing on Windows — more bang for the buck.

See also here and here.

Posted on January 6, 2005 at 1:45 PM

The Doghouse: Internet Security Foundation

This organization wants to sell their tool to view passwords in textboxes “hidden” by asterisks on Windows. They claim it’s “a glaring security hole in Microsoft Windows” and a “grave security risk.” Their webpage is thick with FUD, and warns that criminals and terrorists can easily clean out your bank accounts because of this problem.

Of course the problem isn’t that users type passwords into their computers. The problem is that programs don’t store passwords securely. The problem is that programs pass passwords around in plaintext. The problem is that users choose lousy passwords, and then store them insecurely. The problem is that financial applications are still relying on passwords for security, rather than two-factor authentication.

But the “Internet Security Foundation” is trying to make as much noise as possible. They even have this nasty letter to Bill Gates that you can sign (36 people had signed, the last time I looked). I’m not sure what their angle is, but I don’t like it.

Posted on December 13, 2004 at 1:32 PM

Safe Personal Computing

I am regularly asked what average Internet users can do to ensure their security. My first answer is usually, “Nothing–you’re screwed.”

But that’s not true, and the reality is more complicated. You’re screwed if you do nothing to protect yourself, but there are many things you can do to increase your security on the Internet.

Two years ago, I published a list of PC security recommendations. The idea was to give home users concrete actions they could take to improve security. This is an update of that list: a dozen things you can do to improve your security.

General: Turn off the computer when you’re not using it, especially if you have an “always on” Internet connection.

Laptop security: Keep your laptop with you at all times when not at home; treat it as you would a wallet or purse. Regularly purge unneeded data files from your laptop. The same goes for PDAs. People tend to store more personal data–including passwords and PINs–on PDAs than they do on laptops.

Backups: Back up regularly. Back up to disk, tape or CD-ROM. There’s a lot you can’t defend against; a recent backup will at least let you recover from an attack. Store at least one set of backups off-site (a safe-deposit box is a good place) and at least one set on-site. Remember to destroy old backups. The best way to destroy CD-Rs is to microwave them on high for five seconds. You can also break them in half or run them through better shredders.
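A "back up regularly" habit is easy to automate. Here is a minimal sketch in Python of a dated-backup routine; the directory names are hypothetical, and a real setup would run this on a schedule and copy one archive off-site:

```python
import shutil
from datetime import date
from pathlib import Path

def make_backup(source_dir: str, backup_dir: str) -> str:
    """Create a dated .zip archive of source_dir inside backup_dir."""
    Path(backup_dir).mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()
    archive_base = str(Path(backup_dir) / f"backup-{stamp}")
    # shutil.make_archive appends the .zip extension itself
    return shutil.make_archive(archive_base, "zip", source_dir)

# Hypothetical usage: make_backup("/home/me/documents", "/mnt/backups")
```

Pair this with a calendar reminder to rotate the oldest archive to a safe-deposit box and destroy the one it replaces.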

Operating systems: If possible, don’t use Microsoft Windows. Buy a Macintosh or use Linux. If you must use Windows, set up Automatic Update so that you automatically receive security patches. And delete the files “command.com” and “cmd.exe.”

Applications: Limit the number of applications on your machine. If you don’t need it, don’t install it. If you no longer need it, uninstall it. Look into one of the free office suites as an alternative to Microsoft Office. Regularly check for updates to the applications you use and install them. Keeping your applications patched is important, but don’t lose sleep over it.

Browsing: Don’t use Microsoft Internet Explorer, period. Limit use of cookies and applets to those few sites that provide services you need. Set your browser to regularly delete cookies. Don’t assume a Web site is what it claims to be, unless you’ve typed in the URL yourself. Make sure the address bar shows the exact address, not a near-miss.

Web sites: Secure Sockets Layer (SSL) encryption does not provide any assurance that the vendor is trustworthy or that its database of customer information is secure.

Think before you do business with a Web site. Limit the financial and personal data you send to Web sites–don’t give out information unless you see a value to you. If you don’t want to give out personal information, lie. Opt out of marketing notices. If the Web site gives you the option of not storing your information for later use, take it. Use a credit card for online purchases, not a debit card.

Passwords: You can’t memorize good enough passwords any more, so don’t bother. For high-security Web sites such as banks, create long random passwords and write them down. Guard them as you would your cash: i.e., store them in your wallet, etc.

Never reuse a password for something you care about. (It’s fine to have a single password for low-security sites, such as for newspaper archive access.) Assume that all PINs can be easily broken and plan accordingly.

Never type a password you care about, such as for a bank account, into a non-SSL encrypted page. If your bank makes it possible to do that, complain to them. When they tell you that it is OK, don’t believe them; they’re wrong.
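The "long random passwords" advice above is straightforward to follow with any decent random generator. A minimal sketch using Python's standard `secrets` module (the length and character set here are illustrative choices, not a recommendation for any particular site):

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    """Generate a long random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Write the result down and guard the paper as you would cash; the point is that the password is long and random, not that you can remember it.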

E-mail: Turn off HTML e-mail. Don’t automatically assume that any e-mail is from the “From” address.

Delete spam without reading it. Don’t open messages with file attachments, unless you know what they contain; immediately delete them. Don’t open cartoons, videos and similar “good for a laugh” files forwarded by your well-meaning friends; again, immediately delete them.

Never click links in e-mail unless you’re sure about the e-mail; copy and paste the link into your browser instead. Don’t use Outlook or Outlook Express. If you must use Microsoft Office, enable macro virus protection; in Office 2000, turn the security level to “high” and don’t trust any received files unless you have to. If you’re using Windows, turn off the “hide file extensions for known file types” option; it lets Trojan horses masquerade as other types of files. Uninstall the Windows Scripting Host if you can get along without it. If you can’t, at least change your file associations, so that script files aren’t automatically sent to the Scripting Host if you double-click them.

Antivirus and anti-spyware software: Use it–either a combined program or two separate programs. Download and install the updates, at least weekly and whenever you read about a new virus in the news. Some antivirus products automatically check for updates. Enable that feature and set it to “daily.”

Firewall: Spend $50 for a Network Address Translator firewall device; it’s likely to be good enough in default mode. On your laptop, use personal firewall software. If you can, hide your IP address. There’s no reason to allow any incoming connections from anybody.
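The "no incoming connections" stance is just a stateful default-deny policy. A toy sketch (not a real firewall; the `Packet` fields are hypothetical stand-ins for what a NAT box or personal firewall tracks) shows the decision logic:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    direction: str     # "inbound" or "outbound"
    established: bool  # part of a connection we initiated?

def allow(packet: Packet) -> bool:
    """Default-deny policy: permit outbound traffic and replies to it;
    drop every unsolicited inbound connection."""
    if packet.direction == "outbound":
        return True
    # Inbound traffic is allowed only if it belongs to an
    # outbound connection we already opened (stateful filtering).
    return packet.established
```

A NAT device in default mode gives you roughly this behavior for free, which is why it is good enough for most home users.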

Encryption: Install an e-mail and file encryptor (like PGP). Encrypting all your e-mail or your entire hard drive is unrealistic, but some mail is too sensitive to send in the clear. Similarly, some files on your hard drive are too sensitive to leave unencrypted.

None of the measures I’ve described are foolproof. If the secret police wants to target your data or your communications, no countermeasure on this list will stop them. But these precautions are all good network-hygiene measures, and they’ll make you a more difficult target than the computer next door. And even if you only follow a few basic measures, you’re unlikely to have any problems.

I’m stuck using Microsoft Windows and Office, but I use Opera for Web browsing and Eudora for e-mail. I use Windows Update to automatically get patches and install other patches when I hear about them. My antivirus software updates itself regularly. I keep my computer relatively clean and delete applications that I don’t need. I’m diligent about backing up my data and about storing data files that are no longer needed offline.

I’m suspicious to the point of near-paranoia about e-mail attachments and Web sites. I delete cookies and spyware. I watch URLs to make sure I know where I am, and I don’t trust unsolicited e-mails. I don’t care about low-security passwords, but try to have good passwords for accounts that involve money. I still don’t do Internet banking. I have my firewall set to deny all incoming connections. And I turn my computer off when I’m not using it.

That’s basically it. Really, it’s not that hard. The hardest part is developing an intuition about e-mail and Web sites. But that just takes experience.

This essay previously appeared on CNet

Posted on December 13, 2004 at 9:59 AM

Schneier: Security outsourcing widespread by 2010

Bruce Schneier is founder and chief technology officer of Mountain View, Calif.-based MSSP Counterpane Internet Security Inc. and author of Applied Cryptography, Secrets and Lies, and Beyond Fear. He also publishes Crypto-Gram, a free monthly newsletter, and writes op-ed pieces for various publications. Schneier spoke to SearchSecurity.com about the latest threats, Microsoft’s ongoing security struggles and other topics in a two-part interview that took place by e-mail and phone last week. In this installment, he talks about the safety of open source vs. closed source, the future of security management and the spread of blogs.

Are open source products more secure than closed source?

Schneier: It’s more complicated than that. To analyze the security of a software product you need to have software security experts analyze the code. You can do that in the closed-source model by hiring them, or you can do that in the open-source model by making the code public and hoping that they do so for free. Both work, but obviously the latter is cheaper. It’s also not guaranteed. There’s lots of open-source software out there that no one has analyzed and is no more secure than all the closed-source products that no one has analyzed. But then there are things like Linux, Apache or OpenBSD that get a lot of analysis. When open-source code is properly analyzed, there’s nothing better. But just putting the code out in public is no guarantee.

A recent Yankee Group report said enterprises will outsource 90% of their security management by 2010; that more businesses have made security a priority to meet growing threats and comply with laws like HIPAA and Sarbanes-Oxley. Do you agree?

Schneier: I think that network security will largely be outsourced by 2010 regardless of compliance issues. It’s infrastructure, and infrastructure is always outsourced … eventually. I say eventually because it often takes years for companies to come to terms with it. But Internet security is no different than tax preparation, legal services, food services, cleaning services or phone service. It will be outsourced. I do believe that the various compliance issues, like the laws you mention, are causing companies to increase their security budgets. It’s the same economic driver that I talked about in your question about Microsoft. By increasing the penalties to companies if they don’t have adequate security, the laws induce companies to spend more on security. That’s good for everyone.

How is Crypto-Gram doing?

Schneier: Crypto-Gram currently has about 100,000 readers; 75,000 get it in e-mail every month and another 25,000 read it on the Web. When I started it in 1998, I had no idea it would get this big. I actually thought about charging for it, which would have been a colossal mistake. I think the key to Crypto-Gram’s success is that it’s both interesting and honest. Security is an amazingly rich topic, and there are always things in the news to talk about. Last month I talked about airline security, the Olympics and cellphones. This month I’m going to talk about academic freedom, the security of elections, and RFID chips in passports.

Some people compare Crypto-Gram to a blog. Is that a reasonable comparison?

Schneier: It’s reasonable in the sense that it’s one person writing on topics that interest him. But the form-factor is different. Blogs are Web-based journals, updated regularly. Crypto-Gram is a monthly e-mail newsletter. Sometimes I wish I had the immediacy of a blog, but I like the discipline of a regular publishing schedule. And I think I have more readers because I push the content to my readers’ e-mail boxes.

Do you think blogs have become more useful than traditional media as a way to get the latest security news to IT managers?

Schneier: Blogs are faster, but they’re unfiltered. They’re definitely the fastest way to get the latest news — on security or any other topic — as long as you’re not too concerned about accuracy. Traditional news sources are slower, but there’s higher quality. So they’re both useful, as long as you understand their relative strengths and weaknesses.

By Bill Brenner, News Writer
05 Oct 2004 | SearchSecurity.com

Posted on October 8, 2004 at 4:52 PM

Schneier: Microsoft still has work to do

Bruce Schneier is founder and chief technology officer of Mountain View, Calif.-based MSSP Counterpane Internet Security Inc. and author of Applied Cryptography, Secrets and Lies, and Beyond Fear. He also publishes Crypto-Gram, a free monthly newsletter, and writes op-ed pieces for various publications. Schneier spoke to SearchSecurity.com about the latest threats, Microsoft’s ongoing security struggles and other topics in a two-part interview that took place by e-mail and phone last month. In this installment, he talks about the “hype” of SP2 and explains why it’s “foolish” to use Internet Explorer.

What’s the biggest threat to information security at the moment?

Schneier: Crime. Criminals have discovered IT in a big way. We’re seeing a huge increase in identity theft and associated financial theft. We’re seeing a rise in credit card fraud. We’re seeing a rise in blackmail. Years ago, the people breaking into computers were mostly kids participating in the information-age equivalent of spray painting. Today there’s a profit motive, as those same hacked computers become launching pads for spam, phishing attacks and Trojans that steal passwords. Right now we’re seeing a crime wave against Internet consumers that has the potential to radically change the way people use their computers. When enough average users complain about having money stolen, the government is going to step in and do something. The results are unlikely to be pretty.

Which threats are overly hyped?

Schneier: Cyberterrorism. It’s not much of a threat. These attacks are very difficult to execute. The software systems controlling our nation’s infrastructure are filled with vulnerabilities, but they’re generally not the kinds of vulnerabilities that cause catastrophic disruptions. The systems are designed to limit the damage that occurs from errors and accidents. They have manual overrides. These systems have been proven to work; they’ve experienced disruptions caused by accident and natural disaster. We’ve been through blackouts, telephone switch failures and disruptions of air traffic control computers. The results might be annoying, and engineers might spend days or weeks scrambling, but it doesn’t spread terror. The effect on the general population has been minimal.

Microsoft has made much of the added security muscle in SP2. Has it measured up to the hype?

Schneier: SP2 is much more hype than substance. It’s got some cool things, but I was unimpressed overall. It’s a pity, though. They had an opportunity to do more, and I think they could have done more. But even so, this stuff is hard. I think the fact that SP2 was largely superficial speaks to how the poor security choices Microsoft made years ago are deeply embedded inside the operating system.

Is Microsoft taking security more seriously?

Schneier: Microsoft is certainly taking it more seriously than three years ago, when they ignored it completely. But they’re still not taking security seriously enough for me. They’ve made some superficial changes in the way they approach security, but they still treat it more like a PR problem than a technical problem. To me, the problem is economic. Microsoft — or any other software company — is not a charity, and we should not expect them to do something that hurts their bottom line. As long as we all are willing to buy insecure software, software companies don’t have much incentive to make their products secure. For years I have been advocating software liability as a way of changing that balance. If software companies could get sued for defective products, just as automobile manufacturers are, then they would spend much more money making their products secure.

After the Download.ject attack in June, voices advocating alternatives to Internet Explorer grew louder. Which browser do you use?

Schneier: I think it’s foolish to use Internet Explorer. It’s filled with security holes, and it’s too hard to configure it to have decent security. Basically, it seems to be written in the best interests of Microsoft and not in the best interests of the customer. I have used the Opera browser for years, and I am very happy with it. It’s much better designed, and I never have to worry about Explorer-based attacks.

By Bill Brenner, News Writer
4 Oct 2004 | SearchSecurity.com

Posted on October 8, 2004 at 4:45 PM

News

Last month I wrote: “Long and interesting review of Windows XP SP2, including a list of missed opportunities for increased security. Worth reading: The Register.” Be sure you read this follow-up as well:
The Register

The author of the Sasser worm has been arrested:
Computerworld
The Register
And been offered a job:
Australian IT

Interesting essay on the psychology of terrorist alerts:
Philip Zimbardo

Encrypted e-mail client for the Treo:
Treo Central

The Honeynet Project is publishing a bi-annual CD-ROM and newsletter. If you’re involved in honeynets, it’s definitely worth getting. And even if you’re not, it’s worth supporting this endeavor.
Honeynet

CIO Magazine has published a survey of corporate information security. I have some issues with the survey, but it’s worth reading.
IT Security

At the Illinois State Capitol, someone shot an unarmed security guard and fled. The security upgrade after the incident is — get ready — to change the building admittance policy from a “check IDs” procedure to a “sign in” procedure. First off, identity checking does not increase security. And secondly, why do they think that an attacker would be willing to forge/steal an identification card, but would be unwilling to sign their name on a clipboard?
The Guardian

Neat research: a quantum-encrypted TCP/IP network:
MetroWest Daily News
Slashdot
And NEC has its own quantum cryptography research results:
InfoWorld

Security story about the U.S. embassy in New Zealand. It’s a good lesson about the pitfalls of not thinking beyond the immediate problem.
The Dominion

The future of worms:
Computerworld

Teacher arrested after a bookmark is called a concealed weapon:
St. Petersburg Times
Remember all those other things you can bring on an aircraft that can knock people unconscious: handbags, laptop computers, hardcover books. And that dental floss can be used as a garrote. And, and, oh…you get the idea.

Seems you can open Kryptonite bicycle locks with the cap from a plastic pen. The attack works on what locksmiths call the “impressioning” principle. Tubular locks are especially vulnerable because all the pins are exposed, and the tools needed to open them are unsophisticated and require little skill to use. There have been commercial locksmithing products to do this to circular locks for a long time. Once you get the feel for how to do it, it’s pretty easy. I find Kryptonite’s proposed solution — swapping for a smaller-diameter lock so a particular brand of pen won’t work — to be especially amusing.
Indystar.com
Wired
Bikeforums

I often talk about how most firewalls are ineffective because they’re not configured properly. Here’s some research on firewall configuration:
IEEE Computer

Reading RFID tags from three feet away:
Computerworld

AOL is offering two-factor authentication services. It’s not free: $10 plus $2 per month. It’s an RSA Security token, with a number that changes every 60 seconds.
PC World

Counter-terrorism has its own snake oil:
Quantum Sleeper

Posted on October 1, 2004 at 9:40 PM

Keeping Network Outages Secret

There’s considerable confusion between the concept of secrecy and the concept of security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

In June, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission already requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. But the DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures.

This sounds like the “full disclosure” debate all over again. Is publishing computer and network vulnerability information a good idea, or does it just help the hackers? It arises again and again, as malware takes advantage of software vulnerabilities after they’ve been made public.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

Cryptography is based on secrets — keys — but look at all the work that goes into making them effective. Keys are short and easy to transfer. They’re easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography’s most basic principles is to assume that the algorithm is public.
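The principle that the key is the only secret is visible in any standard construction. A sketch using Python's stdlib HMAC: the algorithm (HMAC-SHA256) is completely public and analyzed by everyone, yet the tags are useless without the key, and swapping in a new key is trivial (the key value here is, of course, just an illustration):

```python
import hmac
import hashlib

# The algorithm is public; this short, easily replaced key is the only secret.
key = b"short, easy-to-change secret"

def tag(message: bytes) -> str:
    """Authenticate a message with HMAC-SHA256 under the shared key."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

# Rotating the key is one line; "rotating" a secret algorithm would mean
# redesigning and redeploying the whole system.
```

That asymmetry, easy-to-change keys versus impossible-to-change algorithms, is exactly why basing security on a secret design is bad engineering.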

That’s the other fallacy with the secrecy argument: the assumption that secrecy works. Do we really think that the physical weak points of networks are such a mystery to the bad guys? Do we really think that the hacker underground never discovers vulnerabilities?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn’t bother fixing them, believing in the security of secrecy. And because customers didn’t know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we’ll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks.

Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose one that best serves their needs. Without public disclosure, companies could hide their reliability performance from the public.

Just look at who supports secrecy. Software vendors such as Microsoft want very much to keep vulnerability information secret. The Department of Homeland Security’s recommendations were loudly echoed by the phone companies. It’s the interests of these companies that are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we’re seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures — and even routine government operations — secret. Information about the infrastructure of plants and government buildings is secret. Profiling information used to flag certain airline passengers is secret. The standards for the Department of Homeland Security’s color-coded terrorism threat levels are secret. Even information about government operations without any terrorism connections is being kept secret.

This keeps terrorists in the dark, especially “dumb” terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry — to whom the government is ultimately accountable — is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can’t improve because there’s no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks. This means they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy among them. Trying to keep it a secret that a network has hubs is futile. Better to identify and protect them.

We’re all safer when we have the information we need to exert market pressure on vendors to improve security. We would all be less secure if software vendors didn’t make their security vulnerabilities public, and if telephone companies didn’t have to report network outages. And when government operates without accountability, that serves the security interests of the government, not of the people.

Security Focus article
CNN article

Another version of this essay appeared in the October Communications of the ACM.

Posted on October 1, 2004 at 9:36 PM


Sidebar photo of Bruce Schneier by Joe MacInnis.