Entries Tagged "network security"


Cloud Computing

This year’s overhyped IT concept is cloud computing. Also called software as a service (SaaS), cloud computing is when you run software over the internet and access it via a browser. The Salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

But, hype aside, cloud computing is nothing new. It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs. Any IT outsourcing—network infrastructure, security monitoring, remote hosting—is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors—and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

SaaS moves the trust boundary out one step further—you now have to also trust your software service vendors—but it doesn’t fundamentally change anything. It’s just another vendor we need to trust.

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if the vendors you have to trust aren’t as trustworthy as you’d like. With any outsourcing model, whether it be cloud computing or something else, you can’t. You have to trust your outsourcer completely. You not only have to trust the outsourcer’s security, but also its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt. You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight. Or raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.

There are two different types of cloud computing customers. The first pays only a nominal fee for these services, or uses them for free in exchange for ads: e.g., Gmail and Facebook. These customers have no leverage with their outsourcers. You can lose everything. Companies like Google and Amazon won’t spend a lot of time caring. The second type of customer pays considerably for these services: to Salesforce.com, MessageLabs, managed network companies, and so on. These customers have more leverage, provided they write their service contracts correctly. Still, nothing is guaranteed.

Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

This essay originally appeared in The Guardian.

EDITED TO ADD (6/4): Another opinion.

EDITED TO ADD (6/5): A rebuttal. And an apology for the tone of the rebuttal. The reason I am talking so much about cloud computing is that reporters and interviewers keep asking me about it. I feel kind of dragged into this whole thing.

EDITED TO ADD (6/6): At the Computers, Freedom, and Privacy conference last week, Bob Gellman said (this, by him, is worth reading) that the nine most important words in cloud computing are: “terms of service,” “location, location, location,” and “provider, provider, provider”—basically making the same point I did. You need to make sure the terms of service you sign up to are ones you can live with. You need to make sure the location of the provider doesn’t subject you to any laws that you can’t live with. And you need to make sure your provider is someone you’re willing to work with. Basically, if you’re going to give someone else your data, you need to trust them.

Posted on June 4, 2009 at 6:14 AM

Obama's Cybersecurity Speech

I am optimistic about President Obama’s new cybersecurity policy and the appointment of a new “cybersecurity coordinator,” though much depends on the details. What we do know is that the threats are real, from identity theft to Chinese hacking to cyberwar.

His principles were all welcome—securing government networks, coordinating responses, working to secure the infrastructure in private hands (the power grid, the communications networks, and so on), although I think he’s overly optimistic that legislation won’t be required. I was especially heartened to hear his commitment to funding research. Much of the technology we currently use to secure cyberspace was developed from university research, and the more of it we finance today the more secure we’ll be in a decade.

Education is also vital, although sometimes I think my parents need more cybersecurity education than my grandchildren do. I also appreciate the president’s commitment to transparency and privacy, both of which are vital for security.

But the details matter. Centralizing security responsibilities has the downside of making security more brittle by instituting a single approach and a uniformity of thinking. Unless the new coordinator distributes responsibility, cybersecurity won’t improve.

As the administration moves forward on the plan, two principles should apply. One, security decisions need to be made as close to the problem as possible. Protecting networks should be done by people who understand those networks, and threats need to be assessed by people close to the threats. But distributed responsibility has more risk, so oversight is vital.

Two, security coordination needs to happen at the highest level possible, whether that’s evaluating information about different threats, responding to an Internet worm or establishing guidelines for protecting personal information. The whole picture is larger than any single agency.

This essay originally appeared on The New York Times website, along with several others commenting on Obama’s speech. All the essays are worth reading, although I want to specifically quote James Bamford making an important point I’ve repeatedly made:

The history of White House czars is not a glorious one as anyone who has followed the rise and fall of the drug czars can tell. There is a lot of hype, a White House speech, and then things go back to normal. Power, the ability to cause change, depends primarily on who controls the money and who is closest to the president’s ear.

Because the new cyber czar will have neither a checkbook nor direct access to President Obama, the role will be more analogous to a traffic cop than a czar.

Gus Hosein wrote a good essay on the need for privacy:

Of course raising barriers around computer systems is certainly a good start. But when these systems are breached, our personal information is left vulnerable. Yet governments and companies are collecting more and more of our information.

The presumption should be that all data collected is vulnerable to abuse or theft. We should therefore collect only what is absolutely required.

As I said, they’re all worth reading. And here are some more links.

I wrote something similar in 2002 about the creation of the Department of Homeland Security:

The human body defends itself through overlapping security systems. It has a complex immune system specifically to fight disease, but disease fighting is also distributed throughout every organ and every cell. The body has all sorts of security systems, ranging from your skin to keep harmful things out of your body, to your liver filtering harmful things from your bloodstream, to the defenses in your digestive system. These systems all do their own thing in their own way. They overlap each other, and to a certain extent one can compensate when another fails. It might seem redundant and inefficient, but it’s more robust, reliable, and secure. You’re alive and reading this because of it.

EDITED TO ADD (6/2): Gene Spafford’s opinion.

EDITED TO ADD (6/4): Good commentary from Bob Blakley.

Posted on May 29, 2009 at 3:01 PM

Computer Virus Epidemiology

“WiFi networks and malware epidemiology,” by Hao Hu, Steven Myers, Vittoria Colizza, and Alessandro Vespignani.

Abstract

In densely populated urban areas WiFi routers form a tightly interconnected proximity network that can be exploited as a substrate for the spreading of malware able to launch massive fraudulent attacks. In this article, we consider several scenarios for the deployment of malware that spreads over the wireless channel of major urban areas in the US. We develop an epidemiological model that takes into consideration prevalent security flaws on these routers. The spread of such a contagion is simulated on real-world data for georeferenced wireless routers. We uncover a major weakness of WiFi networks in that most of the simulated scenarios show tens of thousands of routers infected in as little as 2 weeks, with the majority of the infections occurring in the first 24–48 h. We indicate possible containment and prevention measures and provide computational estimates for the rate of encrypted routers that would stop the spreading of the epidemics by placing the system below the percolation threshold.

Honestly, I’m not sure I understood most of the article. And I don’t think that their model is all that great. But I like to see these sorts of methods applied to malware and infection rates.
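
To give a feel for the kind of simulation the paper runs, here is a toy sketch in Python. It is not the authors' model: the contact graph is random rather than georeferenced, and the infection probability, average degree, and fraction of "encrypted" (immune) routers are made-up parameters. It does, however, show the percolation effect they describe: push the encrypted fraction high enough and the outbreak fizzles instead of spreading.

```python
import random

def simulate(n_routers=10000, avg_degree=4, p_infect=0.3,
             encrypted_fraction=0.5, steps=50, seed=1):
    rng = random.Random(seed)
    # Random contact graph standing in for geographic WiFi proximity.
    neighbors = [[] for _ in range(n_routers)]
    for _ in range(n_routers * avg_degree // 2):
        a, b = rng.randrange(n_routers), rng.randrange(n_routers)
        if a != b:
            neighbors[a].append(b)
            neighbors[b].append(a)
    # Routers using encryption are treated as immune; the rest are susceptible.
    immune = [rng.random() < encrypted_fraction for _ in range(n_routers)]
    # Seed the infection at one vulnerable router.
    infected = set()
    for start in range(n_routers):
        if not immune[start]:
            infected.add(start)
            break
    for _ in range(steps):
        newly = set()
        for node in infected:
            for nb in neighbors[node]:
                if nb not in infected and not immune[nb] and rng.random() < p_infect:
                    newly.add(nb)
        if not newly:
            break
        infected |= newly
    return len(infected)

if __name__ == "__main__":
    # Sweep the encrypted fraction to see the epidemic die out past a threshold.
    for frac in (0.2, 0.4, 0.6, 0.8):
        print(f"encrypted={frac:.1f}  infected={simulate(encrypted_fraction=frac)}")
```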

EDITED TO ADD (3/13): Earlier—but free—version of the paper.

Posted on February 18, 2009 at 5:53 AM

Monster.com Data Breach

Monster.com was hacked, and people’s personal data was stolen. Normally I wouldn’t bother even writing about this—it happens all the time—but an AP reporter called me yesterday to comment. I said:

Monster’s latest breach “shouldn’t have happened,” said Bruce Schneier, chief security technology officer for BT Group. “But you can’t understand a company’s network security by looking at public events—that’s a bad metric. All the public events tell you are, these are attacks that were successful enough to steal data, but were unsuccessful in covering their tracks.”

Thinking about it, it’s even more complex than that. To assess an organization’s network security, you need to actually analyze it. You can’t get a lot of information from the list of attacks that were successful enough to steal data but not successful enough to cover their tracks, and which the company’s attorneys couldn’t figure out a reason not to disclose to the public.

Posted on February 9, 2009 at 6:47 AM

NSA Patent on Network Tampering Detection

The NSA has patented a technique to detect network tampering:

The NSA’s software does this by measuring the amount of time the network takes to send different types of data from one computer to another and raising a red flag if something takes too long, according to the patent filing.

Other researchers have looked into this problem in the past and proposed a technique called distance bounding, but the NSA patent takes a different tack, comparing different types of data travelling across the network. “The neat thing about this particular patent is that they look at the differences between the network layers,” said Tadayoshi Kohno, an assistant professor of computer science at the University of Washington.

The technique could be used for purposes such as detecting a fake phishing Web site that was intercepting data between users and their legitimate banking sites, he said. “This whole problem space has a lot of potential, [although] I don’t know if this is going to be the final solution that people end up using.”
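
As a very rough illustration of the timing idea, and emphatically not the NSA's patented technique, one could compare how long two different kinds of traffic to the same host take and flag a large discrepancy between layers. The host, ports, and the 3x threshold below are arbitrary assumptions.

```python
import socket
import ssl
import time

def time_tcp_connect(host, port=80, timeout=5):
    # Time a bare TCP three-way handshake.
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.perf_counter() - start

def time_tls_handshake(host, port=443, timeout=5):
    # Time a TCP connect plus a full TLS handshake.
    ctx = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host):
            pass
    return time.perf_counter() - start

def check(host, ratio_threshold=3.0):
    tcp = time_tcp_connect(host)
    tls = time_tls_handshake(host)
    # A TLS handshake adds round trips, so it should take longer than a bare
    # TCP connect, but not wildly longer. A large gap might indicate something
    # extra (an interception proxy, say) handling one layer and not the other.
    suspicious = tls > ratio_threshold * max(tcp, 1e-4)
    print(f"{host}: tcp={tcp*1000:.1f} ms  tls={tls*1000:.1f} ms  suspicious={suspicious}")

if __name__ == "__main__":
    check("www.example.com")
```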

Posted on December 30, 2008 at 12:07 PM

A Security Assessment of the Internet Protocol

Interesting:

Preface

The TCP/IP protocols were conceived during a time that was quite different from the hostile environment they operate in now. Yet a direct result of their effectiveness and widespread early adoption is that much of today’s global economy remains dependent upon them.

While many textbooks and articles have created the myth that the Internet Protocols (IP) were designed for warfare environments, the top level goal for the DARPA Internet Program was the sharing of large service machines on the ARPANET. As a result, many protocol specifications focus only on the operational aspects of the protocols they specify and overlook their security implications.

Though Internet technology has evolved, the building blocks are basically the same core protocols adopted by the ARPANET more than two decades ago. During the last twenty years many vulnerabilities have been identified in the TCP/IP stacks of a number of systems. Some were flaws in protocol implementations which affect only a reduced number of systems. Others were flaws in the protocols themselves affecting virtually every existing implementation. Even in the last couple of years researchers were still working on security problems in the core protocols.

The discovery of vulnerabilities in the TCP/IP protocols led to reports being published by a number of CSIRTs (Computer Security Incident Response Teams) and vendors, which helped to raise awareness about the threats as well as the best mitigations known at the time the reports were published.

Much of the effort of the security community on the Internet protocols did not result in official documents (RFCs) being issued by the IETF (Internet Engineering Task Force) leading to a situation in which “known” security problems have not always been addressed by all vendors. In many cases vendors have implemented quick “fixes” to protocol flaws without a careful analysis of their effectiveness and their impact on interoperability.

As a result, any system built in the future according to the official TCP/IP specifications might reincarnate security flaws that have already hit our communication systems in the past.

Producing a secure TCP/IP implementation nowadays is a very difficult task, partly because there is no single document that can serve as a security roadmap for the protocols.

There is clearly a need for a companion document to the IETF specifications that discusses the security aspects and implications of the protocols, identifies the possible threats, proposes possible counter-measures, and analyses their respective effectiveness.

This document is the result of an assessment of the IETF specifications of the Internet Protocol from a security point of view. Possible threats were identified and, where possible, counter-measures were proposed. Additionally, many implementation flaws that have led to security vulnerabilities have been referenced in the hope that future implementations will not incur the same problems. This document does not limit itself to performing a security assessment of the relevant IETF specification but also offers an assessment of common implementation strategies.

Whilst not aiming to be the final word on the security of the IP, this document aims to raise awareness about the many security threats based on the IP protocol that have been faced in the past, those that we are currently facing, and those we may still have to deal with in the future. It provides advice for the secure implementation of the IP, and also insights about the security aspects of the IP that may be of help to the Internet operations community.

Feedback from the community is more than encouraged to help this document be as accurate as possible and to keep it updated as new threats are discovered.

Posted on August 20, 2008 at 7:48 AM

The DNS Vulnerability

Despite the best efforts of the security community, the details of a critical internet vulnerability discovered by Dan Kaminsky about six months ago have leaked. Hackers are racing to produce exploit code, and network operators who haven’t already patched the hole are scrambling to catch up. The whole mess is a good illustration of the problems with researching and disclosing flaws like this.

The details of the vulnerability aren’t important, but basically it’s a form of DNS cache poisoning. The DNS system is what translates domain names people understand, like www.schneier.com, to IP addresses computers understand: 204.11.246.1. There is a whole family of vulnerabilities where the DNS system on your computer is fooled into thinking that the IP address for www.badsite.com is really the IP address for www.goodsite.com—there’s no way for you to tell the difference—and that allows the criminals at www.badsite.com to trick you into doing all sorts of things, like giving up your bank account details. Kaminsky discovered a particularly nasty variant of this cache-poisoning attack.
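
Concretely, that translation is a single call from a program's point of view; if a resolver's cache has been poisoned, the very same call quietly returns the attacker's address instead. A minimal sketch in Python:

```python
import socket

def resolve(name):
    # Ask the system's stub resolver, which in turn queries the (possibly
    # cached, possibly poisoned) DNS infrastructure.
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    print(resolve("www.schneier.com"))
```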

Here’s the way the timeline was supposed to work: Kaminsky discovered the vulnerability about six months ago, and quietly worked with vendors to patch it. (There’s a fairly straightforward fix, although the implementation nuances are complicated.) Of course, this meant describing the vulnerability to them; why would companies like Microsoft and Cisco believe him otherwise? On July 8, he held a press conference to announce the vulnerability—but not the details—and reveal that a patch was available from a long list of vendors. We would all have a month to patch, and Kaminsky would release details of the vulnerability at the BlackHat conference early next month.

Of course, the details leaked. How isn’t important; it could have leaked a zillion different ways. Too many people knew about it for it to remain secret. Others who knew the general idea were too smart not to speculate on the details. I’m kind of amazed the details remained secret for this long; undoubtedly it had leaked into the underground community before the public leak two days ago. So now everyone who back-burnered the problem is rushing to patch, while the hacker community is racing to produce working exploits.

What’s the moral here? It’s easy to condemn Kaminsky: If he had shut up about the problem, we wouldn’t be in this mess. But that’s just wrong. Kaminsky found the vulnerability by accident. There’s no reason to believe he was the first one to find it, and it’s ridiculous to believe he would be the last. Don’t shoot the messenger. The problem is with the DNS protocol; it’s insecure.

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure—and cheaper—than the patch treadmill we’re all on now.

What a security engineer brings to the problem is a particular mindset. He thinks about systems from a security perspective. It’s not that he discovers all possible attacks before the bad guys do; it’s more that he anticipates potential types of attacks, and defends against them even if he doesn’t know their details. I see this all the time in good cryptographic designs. It’s over-engineering based on intuition, but if the security engineer has good intuition, it generally works.

Kaminsky’s vulnerability is a perfect example of this. Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that Source Port Randomization was a smart design choice. That’s exactly the work-around being rolled out now following Kaminsky’s discovery. Bernstein didn’t discover Kaminsky’s attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn’t need to be patched; it’s already immune to Kaminsky’s attack.
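
Here is the rough arithmetic behind the defense, under simplifying assumptions (every unpredictable field independent and uniform, one outstanding query, no birthday-style shortcuts): an off-path attacker forging a response must blindly match the 16-bit DNS transaction ID, and randomizing the source port adds roughly another 16 bits to guess.

```python
# Back-of-the-envelope arithmetic. Assumes the attacker must match every
# unpredictable field and, on average, needs about half the search space.
TXID_BITS = 16   # the DNS transaction ID is a 16-bit field
PORT_BITS = 16   # roughly, if the source port is drawn from ~64k ports

def expected_forgeries(bits):
    return 2 ** bits // 2

print("transaction ID only:          ~%d forged responses" % expected_forgeries(TXID_BITS))
print("transaction ID + source port: ~%d forged responses" % expected_forgeries(TXID_BITS + PORT_BITS))
```

About 33,000 expected forgeries becomes about two billion: the work-around buys a lot of time, even though it does not change the underlying protocol.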

That’s what a good design looks like. It’s not just secure against known attacks; it’s also secure against unknown attacks. We need more of this, not just on the internet but in voting machines, ID cards, transportation payment cards … everywhere. Stop assuming that systems are secure unless demonstrated insecure; start assuming that systems are insecure unless designed securely.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/7): Seems like the flaw is much worse than we thought.

EDITED TO ADD (8/13): Someone else discovered the vulnerability first.

Posted on July 29, 2008 at 6:01 AM
