Entries Tagged "debates"


Cyberwar: Myth or Reality?

The biggest problems in discussing cyberwar are the definitions. The things most often described as cyberwar are really cyberterrorism, and the things most often described as cyberterrorism are more like cybercrime, cybervandalism or cyberhooliganism–or maybe cyberespionage.

At first glance there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime and vandalism are old concepts. What’s new is the domain; it’s the same old stuff occurring in a new arena. But because cyberspace is different, there are differences worth considering.

Of course, the terms overlap. Although the goals are different, many tactics used by armies, terrorists and criminals are the same. Just as they use guns and bombs, they can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime or even–if done by some 14-year-old who doesn’t really understand what he’s doing–cyberhooliganism. Which it is depends on the attacker’s motivations and the surrounding circumstances–just as in the real world.

For it to be cyberwar, it must first be war. In the 21st century, war will inevitably include cyberwar. Just as war moved into the air with the development of kites, balloons and aircraft, and into space with satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics and defenses.

I have no doubt that smarter and better-funded militaries are planning for cyberwar. They have Internet attack tools: denial-of-service tools; exploits that would allow military intelligence to penetrate military systems; viruses and worms similar to what we see now, but perhaps country- or network-specific; and Trojans that eavesdrop on networks, disrupt operations, or allow an attacker to penetrate other networks. I believe militaries know of vulnerabilities in operating systems and in generic or custom military applications, and have code to exploit those vulnerabilities. It would be irresponsible for them not to.

The most obvious attack is the disabling of large parts of the Internet, although in the absence of global war, I doubt a military would do so; the Internet is too useful an asset and too large a part of the world economy. More interesting is whether militaries would disable national pieces of it. For a surgical approach, we can imagine a cyberattack against a military headquarters, or networks handling logistical information.

Destruction is the last thing a military wants to accomplish with a communications network. A military only wants to shut down an enemy’s network if it isn’t acquiring useful information. The best thing is to infiltrate enemy computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, perform traffic analysis: analyze the characteristics of communications. Only if a military can’t do any of this would it consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh the advantages of eavesdropping on it.

Cyberwar is certainly not a myth. But you haven’t seen it yet, despite the attacks on Estonia. Cyberwar is warfare in cyberspace. And warfare involves massive death and destruction. When you see it, you’ll know it.

This is the second half of a point/counterpoint with Marcus Ranum; it appeared in the November issue of Information Security Magazine. Marcus’s half is here.

I wrote a longer essay on cyberwar here.

Posted on November 12, 2007 at 7:38 AM

Home Users: A Public Health Problem?

To the average home user, security is an intractable problem. Microsoft has made great strides improving the security of their operating system “out of the box,” but there are still a dizzying array of rules, options, and choices that users have to make. How should they configure their anti-virus program? What sort of backup regime should they employ? What are the best settings for their wireless network? And so on and so on and so on.

How is it possible that we in the computer industry have created such a shoddy product? How have we foisted on people a product that is so difficult to use securely, that requires so many add-on products?

It’s even worse than that. We have sold the average computer user a bill of goods. In our race for an ever-increasing market, we have convinced every person that he needs a computer. We have provided application after application — IM, peer-to-peer file sharing, eBay, Facebook — to make computers both useful and enjoyable to the home user. At the same time, we’ve made them so hard to maintain that only a trained sysadmin can do it.

And then we wonder why home users have such problems with their buggy systems, why they can’t seem to do even the simplest administrative tasks, and why their computers aren’t secure. They’re not secure because home users don’t know how to secure them.

At work, I have an entire IT department I can call on if I have a problem. They filter my net connection so that I don’t see spam, and most attacks are blocked before they even get to my computer. They tell me which updates to install on my system and when. And they’re available to help me recover if something untoward does happen to my system. Home users have none of this support. They’re on their own.

This problem isn’t simply going to go away as computers get smarter and users get savvier. The next generation of computers will be vulnerable to all sorts of different attacks, and the next generation of attack tools will fool users in all sorts of different ways. The security arms race isn’t going away any time soon, but it will be fought with ever more complex weapons.

This isn’t simply an academic problem; it’s a public health problem. In the hyper-connected world of the Internet, everyone’s security depends in part on everyone else’s. As long as there are insecure computers out there, hackers will use them to eavesdrop on network traffic, send spam, and attack other computers. We are all more secure if all those home computers attached to the Internet via DSL or cable modems are protected against attack. The only question is: what’s the best way to get there?

I wonder about those who say “educate the users.” Have they tried? Have they ever met an actual user? It’s unrealistic to expect home users to be responsible for their own security. They don’t have the expertise, and they’re not going to learn. And it’s not just user actions we need to worry about; these computers are insecure right out of the box.

The only possible way to solve this problem is to force the ISPs to become IT departments. There’s no reason why they can’t provide home users with the same level of support my IT department provides me with. There’s no reason why they can’t provide “clean pipe” service to the home. Yes, it will cost home users more. Yes, it will require changes in the law to make this mandatory. But what’s the alternative?

In 1991, Walter S. Mossberg debuted his “Personal Technology” column in The Wall Street Journal with the words: “Personal computers are just too hard to use, and it isn’t your fault.” Sixteen years later, the statement is still true — and doubly true when it comes to computer security.

If we want home users to be secure, we need to design computers and networks that are secure out of the box, without any work by the end users. There simply isn’t any other way.

This essay is the first half of a point/counterpoint with Marcus Ranum in the September issue of Information Security. You can read his reply here.

Posted on September 14, 2007 at 2:01 PM

Is Penetration Testing Worth It?

There are security experts who insist penetration testing is essential for network security, and you have no hope of being secure unless you do it regularly. And there are contrarian security experts who tell you penetration testing is a waste of time; you might as well throw your money away. Both of these views are wrong. The reality of penetration testing is more complicated and nuanced.

Penetration testing is a broad term. It might mean breaking into a network to demonstrate you can. It might mean trying to break into a network to document vulnerabilities. It might involve a remote attack, physical penetration of a data center or social engineering attacks. It might use commercial or proprietary vulnerability scanning tools, or rely on skilled white-hat hackers. It might just evaluate software version numbers and patch levels, and make inferences about vulnerabilities.

It’s going to be expensive, and you’ll get a thick report when the testing is done.

And that’s the real problem. You really don’t want a thick report documenting all the ways your network is insecure. You don’t have the budget to fix them all, so the document will sit around waiting to make someone look bad. Or, even worse, it’ll be discovered in a breach lawsuit. Do you really want an opposing attorney to ask you to explain why you paid to document the security holes in your network, and then didn’t fix them? Probably the safest thing you can do with the report, after you read it, is shred it.

Given enough time and money, a pen test will find vulnerabilities; there’s no point in proving it. And if you’re not going to fix all the uncovered vulnerabilities, there’s no point uncovering them. But there is a way to do penetration testing usefully. For years I’ve been saying security consists of protection, detection and response–and you need all three to have good security. Before you can do a good job with any of these, you have to assess your security. And done right, penetration testing is a key component of a security assessment.

I like to restrict penetration testing to the most commonly exploited critical vulnerabilities, like those found on the SANS Top 20 list. If you have any of those vulnerabilities, you really need to fix them.

If you think about it, penetration testing is an odd business. Is there an analogue to it anywhere else in security? Sure, militaries run these exercises all the time, but how about in business? Do we hire burglars to try to break into our warehouses? Do we attempt to commit fraud against ourselves? No, we don’t.

Penetration testing has become big business because systems are so complicated and poorly understood. We know about burglars and kidnapping and fraud, but we don’t know about computer criminals. We don’t know what’s dangerous today, and what will be dangerous tomorrow. So we hire penetration testers in the belief they can explain it.

There are two reasons why you might want to conduct a penetration test. One, you want to know whether a certain vulnerability is present because you’re going to fix it if it is. And two, you need a big, scary report to persuade your boss to spend more money. If neither is true, I’m going to save you a lot of money by giving you this free penetration test: You’re vulnerable.

Now, go do something useful about it.

This essay appeared in the March issue of Information Security, as the first half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 15, 2007 at 7:05 AM

Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the 1968 Truth in Lending Act limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures–real-time transaction verification, expert systems patrolling the transaction database and so on–to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. What this will do is enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.
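
To make the distinction concrete, here is a minimal, hypothetical Python sketch of what authenticating the transaction rather than the person looks like: the decision is based on how a purchase compares with the account's own recent activity, not on personal identifiers like a Social Security number or mother's maiden name. The fields and thresholds are invented for illustration; real card networks are far more sophisticated.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Transaction:
    amount: float
    country: str
    merchant_category: str

def risk_score(txn: Transaction, recent: list[Transaction]) -> float:
    """Score 0 (routine) to 1 (suspicious) using only transaction context."""
    score = 0.0
    if recent:
        typical = mean(t.amount for t in recent)
        if txn.amount > 5 * typical:
            score += 0.5  # much larger than this account's usual purchases
        if txn.country not in {t.country for t in recent}:
            score += 0.3  # country never seen on this account before
        if txn.merchant_category not in {t.merchant_category for t in recent}:
            score += 0.2  # unfamiliar type of merchant
    return min(score, 1.0)

history = [Transaction(40.0, "US", "grocery"), Transaction(25.0, "US", "fuel")]
suspect = Transaction(900.0, "RO", "electronics")
print(risk_score(suspect, history))  # 1.0 -- flag for review; no identity data consulted
```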

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common–and more financially motivated–fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 14, 2007 at 12:24 PM

Is Big Brother a Big Deal?

Big Brother isn’t what he used to be. George Orwell extrapolated his totalitarian state from the 1940s. Today’s information society looks nothing like Orwell’s world, and watching and intimidating a population today isn’t anything like what Winston Smith experienced.

Data collection in 1984 was deliberate; today’s is inadvertent. In the information society, we generate data naturally. In Orwell’s world, people were naturally anonymous; today, we leave digital footprints everywhere.

1984’s police state was centralized; today’s is decentralized. Your phone company knows who you talk to, your credit card company knows where you shop and Netflix knows what you watch. Your ISP can read your email, your cell phone can track your movements and your supermarket can monitor your purchasing patterns. There’s no single government entity bringing this together, but there doesn’t have to be. As Neal Stephenson said, the threat is no longer Big Brother, but instead thousands of Little Brothers.

1984’s Big Brother was run by the state; today’s Big Brother is market-driven. Data brokers like ChoicePoint and credit bureaus like Experian aren’t trying to build a police state; they’re just trying to turn a profit. Of course these companies will take advantage of a national ID; they’d be stupid not to. And the correlations, data mining and precise categorizing they can do are why the U.S. government buys commercial data from them.

1984-style police states required lots of people. East Germany employed one informant for every 66 citizens. Today, there’s no reason to have anyone watch anyone else; computers can do the work of people.

1984-style police states were expensive. Today, data storage is constantly getting cheaper. If some data is too expensive to save today, it’ll be affordable in a few years.

And finally, the police state of 1984 was deliberately constructed, while today’s is naturally emergent. There’s no reason to postulate a malicious police force and a government trying to subvert our freedoms. Computerized processes naturally throw off personalized data; companies save it for marketing purposes, and even the most well-intentioned law enforcement agency will make use of it.

Of course, Orwell’s Big Brother had a ruthless efficiency that’s hard to imagine in a government today. But that completely misses the point. A sloppy and inefficient police state is no reason to cheer; watch the movie Brazil and see how scary it can be. You can also see hints of what it might look like in our completely dysfunctional “no-fly” list and useless projects to secretly categorize people according to potential terrorist risk. Police states are inherently inefficient. There’s no reason to assume today’s will be any more effective.

The fear isn’t an Orwellian government deliberately creating the ultimate totalitarian state, although with the U.S.’s programs of phone-record surveillance, illegal wiretapping, massive data mining, a national ID card no one wants and Patriot Act abuses, one can make that case. It’s that we’re doing it ourselves, as a natural byproduct of the information society. We’re building the computer infrastructure that makes it easy for governments, corporations, criminal organizations and even teenage hackers to record everything we do, and — yes — even change our votes. And we will continue to do so unless we pass laws regulating the creation, use, protection, resale and disposal of personal data. It’s precisely the attitude that trivializes the problem that creates it.

This essay appeared in the May issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 11, 2007 at 9:19 AM

Debating Full Disclosure

Full disclosure — the practice of making the details of security vulnerabilities public — is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See The Vulnerability Disclosure Game: Are We More Secure?). The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.

But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

To understand why the second assumption isn’t true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you — the user — much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.

Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies — who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability — and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.

So a bunch of software companies, and some security researchers, banded together and invented “responsible disclosure” (See “The Chilling Effect”). The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.

This was a good idea — and these days it’s normal procedure — but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

The moral here doesn’t just apply to software; it’s very general. Public scrutiny is how security improves, whether we’re talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us — unless, of course, they knew about it beforehand — but most of the time the benefits far outweigh the disadvantages.

Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn’t improve security; it stifles it.

I’d rather have as much information as I can to make an informed decision about security, whether it’s a buying decision about a software product or an election decision about two political parties. I’d rather have the information I need to pressure vendors to improve security.

I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.

This essay originally appeared on CSOOnline, as part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities, and Mark Miller of Microsoft wrote in favor of responsible disclosure. These are on-line-only sidebars to a very interesting article in CSO Magazine, “The Chilling Effect,” about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:

“Laws say you can’t access computers without permission,” she [attorney Jennifer Granick] explains. “Permission on a website is implied. So far, we’ve relied on that. The Internet couldn’t work if you had to get permission every time you wanted to access something. But what if you’re using a website in a way that’s possible but that the owner didn’t intend? The question is whether the law prohibits you from exploring all the ways a website works,” including through vulnerabilities.

All the links are worth reading in full.

A Simplified Chinese translation by Xin LI is available on Delphij’s Chaos.

Posted on January 23, 2007 at 6:45 AM

Is There Strategic Software?

If you define “critical infrastructure” as “things essential for the functioning of a society and economy,” then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.

It’s a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines’ weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?

And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.

It’s perfectly rational to assume that some programmers — a tiny minority I’m sure — are deliberately adding vulnerabilities and back doors into the code they write. I’m actually kind of amazed that back doors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don’t conflict with each other. Even if these groups aren’t infiltrating software companies with back doors, you can be sure they’re scouring products for vulnerabilities they can exploit, if necessary. On the other hand, we’re already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we’re not talking about this month’s worm from Asia or new phishing software from the Russian mafia — we’re talking national intelligence organizations. “Infowar” is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn’t be doing their jobs if they weren’t preparing for it.

Marcus is 100 percent correct when he says it’s simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.

So, what to do? The key here is to remember the real problem: current commercial software practices are not secure enough to reliably detect and delete deliberately inserted malicious code. Once you understand this, you’ll drop the red herring arguments that led to CheckPoint not being able to buy Sourcefire and concentrate on the real solution: defense in depth.

In theory, security software is an after-the-fact kludge because the underlying OS and apps are riddled with vulnerabilities. If your software were written properly, you wouldn’t need a firewall — right?

If we were to get serious about critical infrastructure, we’d recognize it’s all critical and start building security software to protect it. We’d build our security based on the principles of safe failure; we’d assume security would fail and make sure it’s OK when it does. We’d use defense in depth and compartmentalization to minimize the effects of failure. Basically, we’d do everything we’re supposed to do now to secure our networks.
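
As a rough illustration of the safe-failure principle, here is a hypothetical Python sketch: when the security check itself breaks, the system denies access instead of quietly granting it. The check_credentials function is an invented stand-in, not any real API.

```python
def check_credentials(user: str, token: str) -> bool:
    """Invented stand-in for a real authentication backend."""
    raise ConnectionError("auth service unreachable")

def allow_access(user: str, token: str) -> bool:
    """Fail closed: if the check cannot run, treat the request as denied."""
    try:
        return check_credentials(user, token)
    except Exception:
        return False  # a broken security check must not mean open access

print(allow_access("alice", "t0k3n"))  # False -- the failure is safe
```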

It’d be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.

This is the second half of a point/counterpoint I did with Marcus Ranum (here’s his half) for the September 2006 issue of Information Security Magazine.

Posted on September 12, 2006 at 10:38 AM

Educating Users

I’ve met users, and they’re not fluent in security. They might be fluent in spreadsheets, eBay, or sending jokes over e-mail, but they’re not technologists, let alone security people. Of course, they’re making all sorts of security mistakes. I too have tried educating users, and I agree that it’s largely futile.

Part of the problem is generational. We’ve seen this with all sorts of technologies: electricity, telephones, microwave ovens, VCRs, video games. Older generations approach newfangled technologies with trepidation, distrust and confusion, while the children who grew up with them understand them intuitively.

But while the don’t-get-it generation will die off eventually, we won’t suddenly enter an era of unprecedented computer security. Technology moves too fast these days; there’s no time for any generation to become fluent in anything.

Earlier this year, researchers ran an experiment in London’s financial district. Someone stood on a street corner and handed out CDs, saying they were a “special Valentine’s Day promotion.” Many people, some working at sensitive bank workstations, ran the program on the CDs on their work computers. The program was benign — all it did was alert some computer on the Internet that it was running — but it could just as easily have been malicious. The researchers concluded that users don’t care about security. That’s simply not true. Users care about security — they just don’t understand it.

I don’t see a failure of education; I see a failure of technology. It shouldn’t have been possible for those users to run that CD, or for a random program stuffed into a banking computer to “phone home” across the Internet.
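
For context, “phoning home” can be as little as one outbound web request announcing that the program ran; the sketch below, with an invented URL, is roughly all the experiment’s benign CD needed to do. The essay’s point stands: nothing on the machine or in the network stopped it.

```python
import urllib.request

def phone_home(beacon_url: str = "https://example.com/beacon") -> None:
    """Send a single, harmless request saying this program executed."""
    try:
        urllib.request.urlopen(beacon_url + "?status=ran", timeout=5)
    except OSError:
        pass  # no network, or the request was blocked

if __name__ == "__main__":
    phone_home()
```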

The real problem is that computers don’t work well. The industry has convinced everyone that people need a computer to survive, and at the same time it’s made computers so complicated that only an expert can maintain them.

If I try to repair my home heating system, I’m likely to break all sorts of safety rules. I have no experience in that sort of thing, and honestly, there’s no point in trying to educate me. But the heating system works fine without my having to learn anything about it. I know how to set my thermostat and to call a professional if anything goes wrong.

Punishment isn’t something you do instead of education; it’s a form of education — a very primal form of education best suited to children and animals (and experts aren’t so sure about children). I say we stop punishing people for failures of technology, and demand that computer companies market secure hardware and software.

This originally appeared in the April 2006 issue of Information Security Magazine, as the second part of a point/counterpoint with Marcus Ranum. You can read Marcus’s essay here, if you are a subscriber. (Subscriptions are free to “qualified” people.)

EDITED TO ADD (9/11): Here’s Marcus’s half.

Posted on August 22, 2006 at 12:35 PM

Security Certifications

I’ve long been hostile to certifications — I’ve met too many bad security professionals with certifications and know many excellent security professionals without certifications. But I’ve come to believe that, while certifications aren’t perfect, they’re a decent way for a security professional to learn some of the things he’s going to need to know, and for a potential employer to assess whether a job candidate has the security expertise he’s going to need.

What’s changed? Both the job requirements and the certification programs.

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

That kind of expertise can’t be found in a certification. It’s a combination of an innate feel for security, extensive knowledge of the academic security literature, extensive experience in existing security systems, and practice. When I’ve hired people to design and evaluate security systems, I’ve paid no attention to certifications. They are meaningless; I need a different set of skills and abilities.

But most organizations don’t need to hire that kind of person. Network security has become standardized; organizations need a practitioner, not a researcher. This is good because there is so much demand for these practitioners that there aren’t enough researchers to go around. Certification programs are good at churning out practitioners.

And over the years, certification programs have gotten better. They really do teach knowledge that security practitioners need. I might not want a graduate designing a security protocol or evaluating a cryptosystem, but certs are fine for any of the handful of network security jobs a large organization needs.

At my company, we encourage our security analysts to take certification courses. We find that it’s the most cost-effective way to give them the skills they need to do ever-more-complex jobs.

Of course, none of this is perfect. I still meet bad security practitioners with certifications, and I still know excellent security professionals without any.

In the end, certifications are like profiling. They work, but they’re sloppy. Just because someone has a particular certification doesn’t mean that he has the security expertise you’re looking for (in other words, there are false positives). And just because someone doesn’t have a security certification doesn’t mean that he doesn’t have the required security expertise (false negatives). But we use them for the same reason we profile: We don’t have the time, patience, or ability to test for what we’re looking for explicitly.

Profiling based on security certifications is the easiest way for an organization to make a good hiring decision, and the easiest way for an organization to train its existing employees. And honestly, that’s usually good enough.

This essay originally appeared as a point-counterpoint with Marcus Ranum in the July 2006 issue of Information Security Magazine. (You have to fill out an annoying survey to read Marcus’s counterpoint, but 1) you can lie, and 2) it’s worth it.)

EDITED TO ADD (7/21): A Guide to Information Security Certifications.

EDITED TO ADD (9/11): Here’s Marcus’s column.

Posted on July 20, 2006 at 7:20 AM

Security in the Cloud

One of the basic philosophies of security is defense in depth: overlapping systems designed to provide security even if one of them fails. An example is a firewall coupled with an intrusion-detection system (IDS). Defense in depth provides security, because there’s no single point of failure and no assumed single vector for attacks.

It is for this reason that a choice between implementing network security in the middle of the network — in the cloud — or at the endpoints is a false dichotomy. No single security system is a panacea, and it’s far better to do both.

This kind of layered security is precisely what we’re seeing develop. Traditionally, security was implemented at the endpoints, because that’s what the user controlled. An organization had no choice but to put its firewalls, IDSs, and anti-virus software inside its network. Today, with the rise of managed security services and other outsourced network services, additional security can be provided inside the cloud.

I’m all in favor of security in the cloud. If we could build a new Internet today from scratch, we would embed a lot of security functionality in the cloud. But even that wouldn’t substitute for security at the endpoints. Defense in depth beats a single point of failure, and security in the cloud is only part of a layered approach.

For example, consider the various network-based e-mail filtering services available. They do a great job of filtering out spam and viruses, but it would be folly to consider them a substitute for anti-virus security on the desktop. Many e-mails are internal only, never entering the cloud at all. Worse, an attacker might open up a message gateway inside the enterprise’s infrastructure. Smart organizations build defense in depth: e-mail filtering inside the cloud plus anti-virus on the desktop.
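
A toy Python sketch of that layering argument, with invented message fields and filter logic rather than any real product: internal-only mail never reaches the cloud filter, so only an endpoint layer can catch it.

```python
def cloud_filter(message: dict) -> bool:
    """Cloud layer: sees only mail that crosses the organization's boundary."""
    return "malware-marker" not in message["body"]

def desktop_antivirus(message: dict) -> bool:
    """Endpoint layer: sees everything that reaches the user's machine."""
    return "malware-marker" not in message["body"]

def deliver(message: dict) -> bool:
    layers = [desktop_antivirus]
    if message["external"]:
        layers.insert(0, cloud_filter)  # internal-only mail skips the cloud layer
    return all(layer(message) for layer in layers)

internal_attack = {"external": False, "body": "greeting card malware-marker"}
print(deliver(internal_attack))  # False only because the endpoint layer exists
```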

The same reasoning applies to network-based firewalls and intrusion-prevention systems (IPS). Security would be vastly improved if the major carriers implemented cloud-based solutions, but they’re no substitute for traditional firewalls, IDSs, and IPSs.

This should not be an either/or decision. At Counterpane, for example, we offer cloud services and more traditional network and desktop services. The real trick is making everything work together.

Security is about technology, people, and processes. Regardless of where your security systems are, they’re not going to work unless human experts are paying attention. Real-time monitoring and response is what’s most important; where the equipment goes is secondary.

Security is always a trade-off. Budgets are limited and economic considerations regularly trump security concerns. Traditional security products and services are centered on the internal network, because that’s the target of attack. Compliance focuses on that for the same reason. Security in the cloud is a good addition, but it’s not a replacement for more traditional network and desktop security.

This was published as a “Face-Off” in Network World.

The opposing view is here.

Posted on February 15, 2006 at 8:18 AM
