Entries Tagged "network security"


IT Attacks: Insiders vs. Outsiders

A new study claims that insiders aren’t the main threat to network security:

Verizon’s 2008 Data Breach Investigations Report, which looked at 500 breach incidents over the last four years, contradicts the growing orthodoxy that insiders, rather than external agents, represent the most serious threat to network security at most organizations.

Seventy-three percent of the breaches involved outsiders and 18 percent resulted from the actions of insiders, with business partners blamed for 39 percent—the percentages exceed 100 percent because some breaches involved multiple parties, with varying degrees of internal or external involvement.

“The relative infrequency of data breaches attributed to insiders may be surprising to some. It is widely believed and commonly reported that insider incidents outnumber those caused by other sources,” the report states.

The whole insiders vs. outsiders debate has always been one of semantics more than anything else. If you count by attacks, there are a lot more outsider attacks, simply because there are orders of magnitude more outsider attackers. If you count incidents, the numbers tend to get closer: 73% vs. 18% in this case. And if you count damages, insiders generally come out on top—mostly because they have a lot more detailed information and can target their attacks better.

Both insiders and outsiders are security risks, and you have to defend against them both. Trying to rank them isn’t all that useful.

Posted on June 24, 2008 at 6:55 AM

Comparing Cybersecurity to Early 1800s Security on the High Seas

This article in CSO compares modern cybersecurity to open seas piracy in the early 1800s. After a bit of history, the article talks about current events:

In modern times, the nearly ubiquitous availability of powerful computing systems, along with the proliferation of high-speed networks, have converged to create a new version of the high seas—the cyber seas. The Internet has the potential to significantly impact the United States’ position as a world leader. Nevertheless, for the last decade, U.S. cybersecurity policy has been inconsistent and reactionary. The private sector has often been left to fend for itself, and sporadic policy statements have left U.S. government organizations, private enterprises and allies uncertain of which tack the nation will take to secure the cyber frontier.

This should be a surprise to no one.

What to do?

With that goal in mind, let us consider how the United States could take a Jeffersonian approach to the cyber threats faced by our economy. The first step would be for the United States to develop a consistent policy that articulates America’s commitment to assuring the free navigation of the “cyber seas.” Perhaps most critical to the success of that policy will be a future president’s support for efforts that translate rhetoric to actions—developing initiatives to thwart cyber criminals, protecting U.S. technological sovereignty, and balancing any defensive actions to avoid violating U.S. citizens’ constitutional rights. Clearly articulated policy and consistent actions will assure a stable and predictable environment where electronic commerce can thrive, continuing to drive U.S. economic growth and avoiding the possibility of the U.S. becoming a cyber-colony subject to the whims of organized criminal efforts on the Internet.

I am reminded of comments comparing modern terrorism with piracy on the high seas.

Posted on April 16, 2008 at 2:27 PM

Security Products: Suites vs. Best-of-Breed

We know what we don’t like about buying consolidated product suites: one great product and a bunch of mediocre ones. And we know what we don’t like about buying best-of-breed: multiple vendors, multiple interfaces, and multiple products that don’t work well together. The security industry has gone back and forth between the two, as a new generation of IT security professionals rediscovers the downsides of each solution.

The real problem is that neither solution really works, and we continually fool ourselves into believing whatever we don’t have is better than what we have at the time. And the real solution is to buy results, not products.

Honestly, no one wants to buy IT security. People want to buy whatever they want—connectivity, a Web presence, email, networked applications, whatever—and they want it to be secure. That they’re forced to spend money on IT security is an artifact of the youth of the computer industry. And sooner or later the need to buy security will disappear.

It will disappear because IT vendors are starting to realize they have to provide security as part of whatever they’re selling. It will disappear because organizations are starting to buy services instead of products, and demanding security as part of those services. It will disappear because the security industry will disappear as a consumer category, and will instead market to the IT industry.

The critical driver here is outsourcing. Outsourcing is the ultimate consolidator, because the customer no longer cares about the details. If I buy my network services from a large IT infrastructure company, I don’t care if it secures things by installing the hot new intrusion prevention systems, by configuring the routers and servers so as to obviate the need for network-based security, or by using magic security dust given to it by elven kings. I just want a contract that specifies a level and quality of service, and my vendor can figure it out.

IT is infrastructure. Infrastructure is always outsourced. And the details of how the infrastructure works are left to the companies that provide it.

This is the future of IT, and when that happens we’re going to start to see a type of consolidation we haven’t seen before. Instead of large security companies gobbling up small security companies, both large and small security companies will be gobbled up by non-security companies. It’s already starting to happen. In 2006, IBM bought ISS. The same year BT bought my company, Counterpane, and last year it bought INS. These aren’t large security companies buying small security companies; these are non-security companies buying large and small security companies.

If I were Symantec or McAfee, I would be preparing myself for a buyer.

This is good consolidation. Instead of having to choose between a single product suite that isn’t very good or a best-of-breed set of products that don’t work well together, we can ignore the issue completely. We can just find an infrastructure provider that will figure it out and make it work—who cares how?

This essay originally appeared as the second half of a point/counterpoint with Marcus Ranum in Information Security. Here’s Marcus’s half.

Posted on March 10, 2008 at 6:33 AM

My Open Wireless Network

Whenever I talk or write about my own security setup, the one thing that surprises people—and attracts the most criticism—is the fact that I run an open wireless network at home. There’s no password. There’s no encryption. Anyone with wireless capability who can see my network can use it to access the internet.

To me, it’s basic politeness. Providing internet access to guests is kind of like providing heat and electricity, or a hot cup of tea. But to some observers, it’s both wrong and dangerous.

I’m told that uninvited strangers may sit in their cars in front of my house, and use my network to send spam, eavesdrop on my passwords, and upload and download everything from pirated movies to child pornography. As a result, I risk all sorts of bad things happening to me, from seeing my IP address blacklisted to having the police crash through my door.

While this is technically true, I don’t think it’s much of a risk. I can count five open wireless networks in coffee shops within a mile of my house, and any potential spammer is far more likely to sit in a warm room with a cup of coffee and a scone than in a cold car outside my house. And yes, if someone did commit a crime using my network the police might visit, but what better defense is there than the fact that I have an open wireless network? If I enabled wireless security on my network and someone hacked it, I would have a far harder time proving my innocence.

This is not to say that the new wireless security protocol, WPA, isn’t very good. It is. But there are going to be security flaws in it; there always are.

I spoke to several lawyers about this, and in their lawyerly way they outlined several other risks with leaving your network open.

While none thought you could be successfully prosecuted just because someone else used your network to commit a crime, any investigation could be time-consuming and expensive. You might have your computer equipment seized, and if you have any contraband of your own on your machine, it could be a delicate situation. Also, prosecutors aren’t always the most technically savvy bunch, and you might end up being charged despite your innocence. The lawyers I spoke with say most defense attorneys will advise you to reach a plea agreement rather than risk going to trial on child-pornography charges.

In a less far-fetched scenario, the Recording Industry Association of America is known to sue copyright infringers based on nothing more than an IP address. The accuser’s chance of winning is higher than in a criminal case, because in civil litigation the burden of proof is lower. And again, lawyers argue that even if you win it’s not worth the risk or expense, and that you should settle and pay a few thousand dollars.

I remain unconvinced of this threat, though. The RIAA has conducted about 26,000 lawsuits, and there are more than 15 million music downloaders. Mark Mulligan of Jupiter Research said it best: “If you’re a file sharer, you know that the likelihood of you being caught is very similar to that of being hit by an asteroid.”

I’m also unmoved by those who say I’m putting my own data at risk, because hackers might park in front of my house, log on to my open network and eavesdrop on my internet traffic or break into my computers. This is true, but my computers are much more at risk when I use them on wireless networks in airports, coffee shops and other public places. If I configure my computer to be secure regardless of the network it’s on, then it simply doesn’t matter. And if my computer isn’t secure on a public network, securing my own network isn’t going to reduce my risk very much.

Yes, computer security is hard. But if your computers leave your house, you have to solve it anyway. And any solution will apply to your desktop machines as well.

Finally, critics say someone might steal bandwidth from me. Despite isolated court rulings that this is illegal, my feeling is that they’re welcome to it. I really don’t mind if neighbors use my wireless network when they need it, and I’ve heard several stories of people who have been rescued from connectivity emergencies by open wireless networks in the neighborhood.

Similarly, I appreciate an open network when I am otherwise without bandwidth. If someone were using my network to the point that it affected my own traffic or if some neighbor kid was dinking around, I might want to do something about it; but as long as we’re all polite, why should this concern me? Pay it forward, I say.

Certainly this does concern ISPs. Running an open wireless network will often violate your terms of service. But despite the occasional cease-and-desist letter and providers getting pissy at people who exceed some secret bandwidth limit, this isn’t a big risk either. The worst that will happen to you is that you’ll have to find a new ISP.

A company called Fon has an interesting approach to this problem. Fon wireless access points have two wireless networks: a secure one for you, and an open one for everyone else. You can configure your open network in either “Bill” or “Linus” mode: In the former, people pay you to use your network, and you have to pay to use any other Fon wireless network. In Linus mode, anyone can use your network, and you can use any other Fon wireless network for free. It’s a really clever idea.

Security is always a trade-off. I know people who rarely lock their front door, who drive in the rain (and while using a cell phone) and who talk to strangers. In my opinion, securing my wireless network isn’t worth it. And I appreciate everyone else who keeps an open wireless network, including all the coffee shops, bars and libraries I have visited in the past, the Dayton International Airport where I started writing this and the Four Points Sheraton where I finished. You all make the world a better place.

This essay originally appeared on Wired.com, and has since generated a lot of controversy. There’s a Slashdot thread. And here are three opposing essays and three supporting essays. Presumably there will be a lot of back and forth in the comments section here as well.

EDITED TO ADD (1/15): There has been lots more commentary.

EDITED TO ADD (1/16): Even more commentary. And still more.

EDITED TO ADD (1/17): Two more.

EDITED TO ADD (1/18): Another. In the beginning, comments agreeing with me and disagreeing with me were about tied. By now, those that disagree with me are firmly in the lead.

Posted on January 15, 2008 at 3:33 AM

MI5 Sounds Alarm on Internet Spying from China

Someone in MI5 is pissed off at China:

In an unprecedented alert, the Director-General of MI5 sent a confidential letter to 300 chief executives and security chiefs at banks, accountants and legal firms this week warning them that they were under attack from “Chinese state organisations.”

[…]

Firms known to have been compromised recently by Chinese attacks are one of Europe’s largest engineering companies and a large oil company, The Times has learnt. Another source familiar with the MI5 warning said, however, that known attacks had not been limited to large firms based in the City of London. Law firms and other businesses in the regions that deal even with only small parts of Chinese-linked deals are being probed as potential weak spots, he said.

A security expert who has also seen the letter said that among the techniques used by Chinese groups were “custom Trojans”, software designed to hack into the network of a particular firm and feed back confidential data. The MI5 letter includes a list of known “signatures” that can be used to identify Chinese Trojans and a list of internet addresses known to have been used to launch attacks.

A big study gave warning this week that Government and military computer systems in Britain are coming under sustained attack from China and other countries. It followed a report presented to the US Congress last month describing Chinese espionage in the US as so extensive that it represented “the single greatest risk to the security of American technologies.”

EDITED TO ADD (12/13): The Onion comments.

EDITED TO ADD (12/14): At first, I thought that someone in MI5 was pissed off at China. But now I think that someone in MI5 was pissed that he wasn’t getting any budget.

Posted on December 4, 2007 at 12:34 PM

How to Secure Your Computer, Disks, and Portable Drives

Computer security is hard. Software, computer and network security are all ongoing battles between attacker and defender. And in many cases the attacker has an inherent advantage: He only has to find one flaw, while the defender has to find and fix every flaw.

Cryptography is an exception. As long as you don’t write your own algorithm, secure encryption is easy. And the defender has an inherent mathematical advantage: Longer keys increase the amount of work the defender has to do linearly, while geometrically increasing the amount of work the attacker has to do.
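To make that asymmetry concrete, here is a rough back-of-the-envelope sketch (my own illustration, not taken from any product documentation): brute-forcing an n-bit key takes about 2^(n-1) guesses on average, so every additional bit doubles the attacker's expected work, while the defender's cost of handling a longer key grows only modestly. The guessing rate below is a made-up round number.

```python
# Back-of-the-envelope illustration (assumption: a brute-force attacker
# testing 10**12 keys per second, a deliberately generous invented figure).
SECONDS_PER_YEAR = 365 * 24 * 3600
GUESSES_PER_SECOND = 10**12

for bits in (40, 56, 64, 80, 128):
    expected_guesses = 2 ** (bits - 1)   # on average, half the keyspace
    years = expected_guesses / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: roughly {years:.2e} years to brute-force")

# Each added bit doubles the attacker's expected work; the defender merely
# has to handle a slightly longer key.
```

The exact numbers don't matter; the point is the shape of the curve.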

Unfortunately, cryptography can’t solve most computer-security problems. The one problem cryptography can solve is the security of data when it’s not in use. Encrypting files, archives—even entire disks—is easy.

All of this makes it even more amazing that Her Majesty’s Revenue & Customs in the United Kingdom lost two disks with personal data on 25 million British citizens, including dates of birth, addresses, bank-account information and national insurance numbers. On the one hand, this is no bigger a deal than any of the thousands of other exposures of personal data we’ve read about in recent years—the U.S. Veterans Administration loss of personal data of 26 million American veterans is an obvious similar event. But this has turned into Britain’s privacy Chernobyl.

Perhaps encryption isn’t so easy after all, and some people could use a little primer. This is how I protect my laptop.

There are several whole-disk encryption products on the market. I use PGP Disk’s Whole Disk Encryption tool for two reasons. It’s easy, and I trust both the company and the developers to write it securely. (Disclosure: I’m also on PGP Corp.’s Technical Advisory Board.)

Setup only takes a few minutes. After that, the program runs in the background. Everything works like before, and the performance degradation is negligible. Just make sure you choose a secure password—PGP’s encouragement of passphrases makes this much easier—and you’re secure against leaving your laptop in the airport or having it stolen out of your hotel room.

The reason you encrypt your entire disk, and not just key files, is so you don’t have to worry about swap files, temp files, hibernation files, erased files, browser cookies or whatever. You don’t need to enforce a complex policy about which files are important enough to be encrypted. And you have an easy answer to your boss or to the press if the computer is stolen: no problem; the laptop is encrypted.

PGP Disk can also encrypt external disks, which means you can also secure that USB memory device you’ve been using to transfer data from computer to computer. When I travel, I use a portable USB drive for backup. Those devices are getting physically smaller—but larger in capacity—every year, and by encrypting I don’t have to worry about losing them.

I recommend one more complication. Whole-disk encryption means that anyone at your computer has access to everything: someone at your unattended computer, a Trojan that infected your computer and so on. To deal with these and similar threats I recommend a two-tier encryption strategy. Encrypt anything you don’t need access to regularly—archived documents, old e-mail, whatever—separately, with a different password. I like to use PGP Disk’s encrypted zip files, because it also makes secure backup easier (and lets you secure those files before you burn them on a DVD and mail them across the country), but you can also use the program’s virtual-encrypted-disk feature to create a separately encrypted volume. Both options are easy to set up and use.
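If you want to experiment with the two-tier idea yourself, here is a minimal sketch of encrypting an archive under its own passphrase, separate from whatever protects the whole disk. It uses the third-party Python cryptography package and a passphrase-derived key purely as an illustration; it is not how PGP Disk works internally, and the file names and parameters are my own.

```python
# A minimal sketch of "two-tier" encryption: an archive gets its own
# passphrase, separate from the whole-disk one. Illustration only; uses the
# third-party 'cryptography' package (pip install cryptography), not PGP Disk.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def _key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    # Derive a 32-byte key from the passphrase; Fernet expects it base64-encoded.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))


def encrypt_archive(path: str, passphrase: bytes) -> str:
    salt = os.urandom(16)
    with open(path, "rb") as f:
        token = Fernet(_key_from_passphrase(passphrase, salt)).encrypt(f.read())
    out = path + ".enc"
    with open(out, "wb") as f:
        f.write(salt + token)  # store the salt alongside the ciphertext
    return out


def decrypt_archive(enc_path: str, passphrase: bytes) -> bytes:
    with open(enc_path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    return Fernet(_key_from_passphrase(passphrase, salt)).decrypt(token)
```

Usage would be something like encrypt_archive("old-mail.tar", b"a long, separate passphrase"). The point is simply that rarely used data can live under a different key than the one you type every day.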

There are still two scenarios you aren’t secure against, though. You’re not secure against someone snatching your laptop out of your hands as you’re typing away at the local coffee shop. And you’re not secure against the authorities telling you to decrypt your data for them.

The latter threat is becoming more real. I have long been worried that someday, at a border crossing, a customs official will open my laptop and ask me to type in my password. Of course I could refuse, but the consequences might be severe—and permanent. And some countries—the United Kingdom, Singapore, Malaysia—have passed laws giving police the authority to demand that you divulge your passwords and encryption keys.

To defend against both of these threats, minimize the amount of data on your laptop. Do you really need 10 years of old e-mails? Does everyone in the company really need to carry around the entire customer database? One of the most incredible things about the Revenue & Customs story is that a low-level government employee mailed a copy of the entire national child database to the National Audit Office in London. Did he have to? Doubtful. The best defense against data loss is to not have the data in the first place.

Failing that, you can try to convince the authorities that you don’t have the encryption key. This works better if it’s a zipped archive than the whole disk. You can argue that you’re transporting the files for your boss, or that you forgot the key long ago. Make sure the time stamp on the files matches your claim, though.

There are other encryption programs out there. If you’re a Windows Vista user, you might consider BitLocker. This program, embedded in the operating system, also encrypts the computer’s entire drive. But it only works on the C: drive, so it won’t help with external disks or USB tokens. And it can’t be used to make encrypted zip files. But it’s easy to use, and it’s free.

This essay previously appeared on Wired.com.

EDITED TO ADD (12/14): Lots of people have pointed out that the free and open-source program TrueCrypt is a good alternative to PGP Disk. I haven’t used or reviewed the program at all.

Posted on December 4, 2007 at 6:40 AM

Home Users: A Public Health Problem?

To the average home user, security is an intractable problem. Microsoft has made great strides improving the security of their operating system “out of the box,” but there are still a dizzying array of rules, options, and choices that users have to make. How should they configure their anti-virus program? What sort of backup regime should they employ? What are the best settings for their wireless network? And so on and so on and so on.

How is it possible that we in the computer industry have created such a shoddy product? How have we foisted on people a product that is so difficult to use securely, that requires so many add-on products?

It’s even worse than that. We have sold the average computer user a bill of goods. In our race for an ever-increasing market, we have convinced every person that he needs a computer. We have provided application after application—IM, peer-to-peer file sharing, eBay, Facebook—to make computers both useful and enjoyable to the home user. At the same time, we’ve made them so hard to maintain that only a trained sysadmin can do it.

And then we wonder why home users have such problems with their buggy systems, why they can’t seem to do even the simplest administrative tasks, and why their computers aren’t secure. They’re not secure because home users don’t know how to secure them.

At work, I have an entire IT department I can call on if I have a problem. They filter my net connection so that I don’t see spam, and most attacks are blocked before they even get to my computer. They tell me which updates to install on my system and when. And they’re available to help me recover if something untoward does happen to my system. Home users have none of this support. They’re on their own.

This problem isn’t simply going to go away as computers get smarter and users get savvier. The next generation of computers will be vulnerable to all sorts of different attacks, and the next generation of attack tools will fool users in all sorts of different ways. The security arms race isn’t going away any time soon, but it will be fought with ever more complex weapons.

This isn’t simply an academic problem; it’s a public health problem. In the hyper-connected world of the Internet, everyone’s security depends in part on everyone else’s. As long as there are insecure computers out there, hackers will use them to eavesdrop on network traffic, send spam, and attack other computers. We are all more secure if all those home computers attached to the Internet via DSL or cable modems are protected against attack. The only question is: what’s the best way to get there?

I wonder about those who say “educate the users.” Have they tried? Have they ever met an actual user? It’s unrealistic to expect home users to be responsible for their own security. They don’t have the expertise, and they’re not going to learn. And it’s not just user actions we need to worry about; these computers are insecure right out of the box.

The only possible way to solve this problem is to force the ISPs to become IT departments. There’s no reason why they can’t provide home users with the same level of support my IT department provides me with. There’s no reason why they can’t provide “clean pipe” service to the home. Yes, it will cost home users more. Yes, it will require changes in the law to make this mandatory. But what’s the alternative?

In 1991, Walter S. Mossberg debuted his “Personal Technology” column in The Wall Street Journal with the words: “Personal computers are just too hard to use, and it isn’t your fault.” Sixteen years later, the statement is still true—and doubly true when it comes to computer security.

If we want home users to be secure, we need to design computers and networks that are secure out of the box, without any work by the end users. There simply isn’t any other way.

This essay is the first half of a point/counterpoint with Marcus Ranum in the September issue of Information Security. You can read his reply here.

Posted on September 14, 2007 at 2:01 PM

"Cyberwar" in Estonia

I had been thinking about writing about the massive distributed-denial-of-service attack against the Estonian government last April. It’s been called the first cyberwar, although it is unclear that the Russian government was behind the attacks. And while I’ve written about cyberwar in general, I haven’t really addressed the Estonian attacks.

Now I don’t have to. Kevin Poulsen has written an excellent article on both the reality and the hype surrounding the attacks on Estonia’s networks, commenting on a story in the magazine Wired:

Writer Joshua Davis was dispatched to the smoking ruins of Estonia to assess the damage wrought by last spring’s DDoS attacks against the country’s web, e-mail and DNS servers. Josh is a talented writer, and he returned with a story that offers some genuine insights—a few, though, are likely unintentional.

We see, for example, that Estonia’s computer emergency response team responded to the junk packets with technical aplomb and coolheaded professionalism, while Estonia’s leadership … well, didn’t. Faced with DDoS and nationalistic, cross-border hacktivism—nuisances that have plagued the rest of the wired world for the better part of a decade—Estonia’s leaders lost perspective.

Here’s the best quote, from the speaker of the Estonian parliament, Ene Ergma: “When I look at a nuclear explosion, and the explosion that happened in our country in May, I see the same thing.”

[…]

While cooler heads were combating the first wave of Estonia’s DDoS attacks with packet filters, we learn, the country’s defense minister was contemplating invoking NATO Article 5, which considers an “armed attack” against any NATO country to be an attack against all. That might have obliged the U.S. and other signatories to go to war with Russia, if anyone was silly enough to take it seriously.

Fortunately, nobody important really is that silly. The U.S. has known about DDoS attacks since our own Web War One in 2000, when some of our most trafficked sites—Yahoo, Amazon.com, E-Trade, eBay, and CNN.com—were attacked in rapid succession by Canada. (The culprit was a 15-year-old boy in Montreal.)

As in Estonia years later, the attack took America’s leaders by surprise. President Clinton summoned some of the United States’ most respected computer security experts to the White House to meet and discuss options for shoring up the internet. At a photo op afterwards, a reporter lobbed Clinton a cyberwar softball: was this the “electronic Pearl Harbor?”

Estonia’s leaders, among others, could learn from the restraint of Clinton’s response. “I think it was an alarm,” he said. “I don’t think it was Pearl Harbor.

“We lost our Pacific fleet at Pearl Harbor.”

Read the whole thing.

Posted on August 23, 2007 at 1:18 PM

House of Lords on Computer Security

The Science and Technology Committee of the UK House of Lords has issued a report (pdf here) on “Personal Internet Security.” It’s 121 pages long. Richard Clayton, who helped the committee, has a good summary of the report on his blog. Among other things, the Lords recommend various consumer notification standards, a data-breach disclosure law, and a liability regime for software.

Another summary lists:

  • Increase the resources and skills available to the police and criminal justice system to catch and prosecute e-criminals.
  • Establish a centralised and automated system, administered by law enforcement, for the reporting of e-crime.
  • Provide incentives to banks and other companies trading online to improve their data security by establishing a data security breach notification law.
  • Improve standards of new software and hardware by moving towards legal liability for damage resulting from security flaws.
  • Encourage Internet Service Providers to improve the security offered to customers by establishing a “kite mark” for internet services.

If that sounds like a lot of the things I’ve been saying for years, there’s a reason for that. Earlier this year, I testified before the committee (transcript here), where I recommended some of these things. (Sadly, I didn’t get to wear a powdered wig.)

This report is a long way from anything even closely resembling a law, but it’s a start. Clayton writes:

The Select Committee reports are the result of in-depth study of particular topics, by people who reached the top of their professions (who are therefore quick learners, even if they start by knowing little of the topic), and their careful reasoning and endorsement of convincing expert views, carries considerable weight. The Government is obliged to formally respond, and there will, at some point, be a few hours of debate on the report in the House of Lords.

If you’re interested, the entire body of evidence the committee considered is here (pdf version here). I don’t recommend reading it; it’s absolutely huge, and a lot of it is corporate drivel.

EDITED TO ADD (8/13): I have written about software liabilities before, here and here.

EDITED TO ADD (8/22): Good article here:

They agreed ‘wholeheartedly’ with security guru, and successful author, Bruce Schneier, that the activities of ‘legitimate researchers’ trying to ‘break things to learn to think like the bad guys’ should not be criminalized in forthcoming UK legislation, and they supported the pressing need for a data breach reporting law; in drafting such a law, the UK government could learn from lessons learnt in the US states that have such laws. Such a law should cover the banks, and other sectors, and not simply apply to “communication providers”—a proposal presently under consideration by the EU Commission, which the peers clearly believed would be ineffective in creating incentives to improve security across the board.

Posted on August 13, 2007 at 6:35 AM

