Entries Tagged "Internet"

A Security Assessment of the Internet Protocol

Interesting:

Preface

The TCP/IP protocols were conceived during a time that was quite different from the hostile environment they operate in now. Yet a direct result of their effectiveness and widespread early adoption is that much of today’s global economy remains dependent upon them.

While many textbooks and articles have created the myth that the Internet protocols were designed for warfare environments, the top-level goal for the DARPA Internet Program was the sharing of large service machines on the ARPANET. As a result, many protocol specifications focus only on the operational aspects of the protocols they specify and overlook their security implications.

Though Internet technology has evolved, the building blocks are basically the same core protocols adopted by the ARPANET more than two decades ago. During the last twenty years, many vulnerabilities have been identified in the TCP/IP stacks of a number of systems. Some were flaws in protocol implementations, affecting only a limited number of systems; others were flaws in the protocols themselves, affecting virtually every existing implementation. Even in the last couple of years, researchers were still working on security problems in the core protocols.

The discovery of vulnerabilities in the TCP/IP protocols led to reports being published by a number of CSIRTs (Computer Security Incident Response Teams) and vendors, which helped to raise awareness about the threats as well as the best mitigations known at the time the reports were published.

Much of the effort of the security community on the Internet protocols did not result in official documents (RFCs) being issued by the IETF (Internet Engineering Task Force), leading to a situation in which “known” security problems have not always been addressed by all vendors. In many cases vendors have implemented quick “fixes” to protocol flaws without a careful analysis of their effectiveness and their impact on interoperability.

As a result, any system built in the future according to the official TCP/IP specifications might reincarnate security flaws that have already hit our communication systems in the past.

Producing a secure TCP/IP implementation nowadays is a very difficult task, partly because no single document exists that can serve as a security roadmap for the protocols.

There is clearly a need for a companion document to the IETF specifications that discusses the security aspects and implications of the protocols, identifies the possible threats, proposes possible counter-measures, and analyses their respective effectiveness.

This document is the result of an assessment of the IETF specifications of the Internet Protocol from a security point of view. Possible threats were identified and, where possible, counter-measures were proposed. Additionally, many implementation flaws that have led to security vulnerabilities have been referenced in the hope that future implementations will not incur the same problems. This document does not limit itself to performing a security assessment of the relevant IETF specification but also offers an assessment of common implementation strategies.

Whilst not aiming to be the final word on the security of IP, this document aims to raise awareness of the many IP-based security threats that have been faced in the past, those that we are currently facing, and those we may still have to deal with in the future. It provides advice for the secure implementation of IP, and also insights about the security aspects of IP that may be of help to the Internet operations community.

Feedback from the community is more than encouraged to help this document be as accurate as possible and to keep it updated as new threats are discovered.

Posted on August 20, 2008 at 7:48 AM

Cyberattack Against Georgia Preceded Real Attack

This is interesting:

Exactly who was behind the cyberattack is not known. The Georgian government blamed Russia for the attacks, but the Russian government said it was not involved. In the end, Georgia, with a population of just 4.6 million and a relative latecomer to the Internet, saw little effect beyond the inaccessibility of many of its government Web sites, which limited the government’s ability to spread its message online and to connect with sympathizers around the world during the fighting with Russia.

[…]

In Georgia, media, communications and transportation companies were also attacked, according to security researchers. Shadowserver saw the attack against Georgia spread to computers throughout the government after Russian troops entered the Georgian province of South Ossetia. The National Bank of Georgia’s Web site was defaced at one point. Images of 20th-century dictators as well as an image of Georgia’s president, Mr. Saakashvili, were placed on the site. “Could this somehow be indirect Russian action? Yes, but considering Russia is past playing nice and uses real bombs, they could have attacked more strategic targets or eliminated the infrastructure kinetically,” said Gadi Evron, an Israeli network security expert. “The nature of what’s going on isn’t clear,” he said.

[…]

In addition to D.D.O.S. attacks that crippled Georgia’s limited Internet infrastructure, researchers said there was evidence of redirection of Internet traffic through Russian telecommunications firms beginning last weekend. The attacks continued on Tuesday, controlled by software programs that were located in hosting centers controlled by a Russian telecommunications firm. A Russian-language Web site, stopgeorgia.ru, also continued to operate and offer software for download used for D.D.O.S. attacks.

Welcome to 21st century warfare.

“It costs about 4 cents per machine,” Mr. Woodcock said. “You could fund an entire cyberwarfare campaign for the cost of replacing a tank tread, so you would be foolish not to.”

Posted on August 18, 2008 at 1:11 PM

The DNS Vulnerability

Despite the best efforts of the security community, the details of a critical internet vulnerability discovered by Dan Kaminsky about six months ago have leaked. Hackers are racing to produce exploit code, and network operators who haven’t already patched the hole are scrambling to catch up. The whole mess is a good illustration of the problems with researching and disclosing flaws like this.

The details of the vulnerability aren’t important, but basically it’s a form of DNS cache poisoning. The DNS system is what translates domain names people understand, like www.schneier.com, to IP addresses computers understand: 204.11.246.1. There is a whole family of vulnerabilities where the DNS system on your computer is fooled into thinking that the IP address for www.badsite.com is really the IP address for www.goodsite.com—there’s no way for you to tell the difference—and that allows the criminals at www.badsite.com to trick you into doing all sorts of things, like giving up your bank account details. Kaminsky discovered a particularly nasty variant of this cache-poisoning attack.
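
As a concrete illustration of how little a program sees of this process, here is a minimal sketch of an ordinary lookup (the hostname is just the example above). The application receives only an address; whether it came from an honest cache or a poisoned one is invisible at this layer.

```python
import socket

# Minimal sketch: resolve a hostname the way most applications do.
# The application simply trusts whatever address the resolver returns;
# if the resolver's cache has been poisoned, the right name yields the
# wrong address and nothing at this layer can tell the difference.
def resolve(hostname: str) -> list[str]:
    results = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry ends with an (address, port) tuple; collect the addresses.
    return sorted({sockaddr[0] for *_, sockaddr in results})

print(resolve("www.schneier.com"))
```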

Here’s the way the timeline was supposed to work: Kaminsky discovered the vulnerability about six months ago, and quietly worked with vendors to patch it. (There’s a fairly straightforward fix, although the implementation nuances are complicated.) Of course, this meant describing the vulnerability to them; why would companies like Microsoft and Cisco believe him otherwise? On July 8, he held a press conference to announce the vulnerability—but not the details—and reveal that a patch was available from a long list of vendors. We would all have a month to patch, and Kaminsky would release details of the vulnerability at the BlackHat conference early next month.

Of course, the details leaked. How isn’t important; it could have leaked a zillion different ways. Too many people knew about it for it to remain secret. Others who knew the general idea were too smart not to speculate on the details. I’m kind of amazed the details remained secret for this long; undoubtedly it had leaked into the underground community before the public leak two days ago. So now everyone who back-burnered the problem is rushing to patch, while the hacker community is racing to produce working exploits.

What’s the moral here? It’s easy to condemn Kaminsky: If he had shut up about the problem, we wouldn’t be in this mess. But that’s just wrong. Kaminsky found the vulnerability by accident. There’s no reason to believe he was the first one to find it, and it’s ridiculous to believe he would be the last. Don’t shoot the messenger. The problem is with the DNS protocol; it’s insecure.

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure—and cheaper—than the patch treadmill we’re all on now.

What a security engineer brings to the problem is a particular mindset. He thinks about systems from a security perspective. It’s not that he discovers all possible attacks before the bad guys do; it’s more that he anticipates potential types of attacks, and defends against them even if he doesn’t know their details. I see this all the time in good cryptographic designs. It’s over-engineering based on intuition, but if the security engineer has good intuition, it generally works.

Kaminsky’s vulnerability is a perfect example of this. Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that source port randomization was a smart design choice. That’s exactly the work-around being rolled out now following Kaminsky’s discovery. Bernstein didn’t discover Kaminsky’s attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn’t need to be patched; it’s already immune to Kaminsky’s attack.
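
To get a rough feel for why that enhancement helps, consider what an off-path attacker must guess to forge a reply. The following sketch uses illustrative numbers, not a model of any particular resolver:

```python
# Illustrative numbers only. A forged DNS reply is accepted just when it
# matches the query's 16-bit transaction ID and, with source port
# randomization, the randomly chosen UDP source port as well.
TXID_SPACE = 2 ** 16            # 16-bit DNS transaction ID
EPHEMERAL_PORTS = 65535 - 1024  # approximate usable ephemeral port range

def spoof_search_space(randomized_port: bool) -> int:
    """(txid, port) combinations an off-path attacker must cover."""
    return TXID_SPACE * (EPHEMERAL_PORTS if randomized_port else 1)

print(spoof_search_space(False))  # 65,536 -- feasible to brute-force
print(spoof_search_space(True))   # ~4.2 billion -- far harder to hit
```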

That’s what a good design looks like. It’s not just secure against known attacks; it’s also secure against unknown attacks. We need more of this, not just on the internet but in voting machines, ID cards, transportation payment cards … everywhere. Stop assuming that systems are secure unless demonstrated insecure; start assuming that systems are insecure unless designed securely.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/7): Seems like the flaw is much worse than we thought.

EDITED TO ADD (8/13): Someone else discovered the vulnerability first.

Posted on July 29, 2008 at 6:01 AM

Comparing Cybersecurity to Early 1800s Security on the High Seas

This article in CSO compares modern cybersecurity to open seas piracy in the early 1800s. After a bit of history, the article talks about current events:

In modern times, the nearly ubiquitous availability of powerful computing systems and the proliferation of high-speed networks have converged to create a new version of the high seas—the cyber seas. The Internet has the potential to significantly impact the United States’ position as a world leader. Nevertheless, for the last decade, U.S. cybersecurity policy has been inconsistent and reactionary. The private sector has often been left to fend for itself, and sporadic policy statements have left U.S. government organizations, private enterprises and allies uncertain of which tack the nation will take to secure the cyber frontier.

This should be a surprise to no one.

What to do?

With that goal in mind, let us consider how the United States could take a Jeffersonian approach to the cyber threats faced by our economy. The first step would be for the United States to develop a consistent policy that articulates America’s commitment to assuring the free navigation of the “cyber seas.” Perhaps most critical to the success of that policy will be a future president’s support for efforts that translate rhetoric to actions—developing initiatives to thwart cyber criminals, protecting U.S. technological sovereignty, and balancing any defensive actions to avoid violating U.S. citizens’ constitutional rights. Clearly articulated policy and consistent actions will assure a stable and predictable environment where electronic commerce can thrive, continuing to drive U.S. economic growth and avoiding the possibility of the U.S. becoming a cyber-colony subject to the whims of organized criminal efforts on the Internet.

I am reminded of comments comparing modern terrorism with piracy on the high seas.

Posted on April 16, 2008 at 2:27 PM

Internet Censorship

A review of Access Denied, edited by Ronald Deibert, John Palfrey, Rafal Rohozinski and Jonathan Zittrain, MIT Press, 2008.

In 1993, Internet pioneer John Gilmore said “the net interprets censorship as damage and routes around it”, and we believed him. In 1996, cyberlibertarian John Perry Barlow issued his ‘Declaration of the Independence of Cyberspace’ at the World Economic Forum in Davos, Switzerland, and online. He told governments: “You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear.”

At the time, many shared Barlow’s sentiments. The Internet empowered people. It gave them access to information and couldn’t be stopped, blocked or filtered. Give someone access to the Internet, and they have access to everything. Governments that relied on censorship to control their citizens were doomed.

Today, things are very different. Internet censorship is flourishing. Organizations selectively block employees’ access to the Internet. At least 26 countries—mainly in the Middle East, North Africa, Asia, the Pacific and the former Soviet Union—selectively block their citizens’ Internet access. Even more countries legislate to control what can and cannot be said, downloaded or linked to. “You have no sovereignty where we gather,” said Barlow. Oh yes we do, the governments of the world have replied.

Access Denied is a survey of the practice of Internet filtering, and a sourcebook of details about the countries that engage in the practice. It is written by researchers of the OpenNet Initiative (ONI), an organization dedicated to documenting Internet filtering around the world.

The first half of the book comprises essays written by ONI researchers on the politics, practice, technology, legality and social effects of Internet filtering. There are three basic rationales for Internet censorship: politics and power; social norms, morals and religion; and security concerns.

Some countries, such as India, filter only a few sites; others, such as Iran, extensively filter the Internet. Saudi Arabia tries to block all pornography (social norms and morals). Syria blocks everything from the Israeli domain “.il” (politics and power). Some countries filter only at certain times. During the 2006 elections in Belarus, for example, the website of the main opposition candidate disappeared from the Internet.

The effectiveness of Internet filtering is mixed; it depends on the tools used and the granularity of filtering. It is much easier to block particular URLs or entire domains than it is to block information on a particular topic. Some countries block specific sites or URLs based on a predefined list, but new URLs with similar content appear all the time. Other countries—notably China—try to filter on the basis of keywords in the actual web pages. A halfway measure is to filter on the basis of URL keywords: names of dissidents or political parties, or sexual words.
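
The difference in granularity is easy to see in code. Here is a toy sketch of two of these approaches; the blocklists and URLs are invented for illustration, and real national filters are of course far more elaborate:

```python
from urllib.parse import urlparse

# Toy blocklists, invented for illustration.
BLOCKED_DOMAINS = {"badsite.example"}               # whole-domain blocking
BLOCKED_URL_KEYWORDS = {"dissident", "opposition"}  # the halfway measure

def is_blocked(url: str) -> bool:
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    # Easiest: block an entire domain and everything under it.
    if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return True
    # Halfway measure: match keywords in the URL itself. Filtering on
    # keywords in page content would require fetching the page too.
    target = f"{parsed.path}?{parsed.query}".lower()
    return any(word in target for word in BLOCKED_URL_KEYWORDS)

print(is_blocked("http://news.badsite.example/story"))        # True (domain)
print(is_blocked("http://example.org/profiles/dissident-x"))  # True (keyword)
print(is_blocked("http://example.org/recipes/pasta"))         # False
```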

Much of the technology has other applications. Software for filtering is a legitimate product category, purchased by schools to limit access by children to objectionable material and by corporations trying to prevent their employees from being distracted at work. One chapter discusses the ethical implications of companies selling products, services and technologies that enable Internet censorship.

Some censorship is legal, not technical. Countries have laws against publishing certain content, registration requirements that prevent anonymous Internet use, liability laws that force Internet service providers to filter themselves, or surveillance. Egypt does not engage in technical Internet filtering; instead, its laws discourage the publishing and reading of certain content—it has even jailed people for their online activities.

The second half of Access Denied consists of detailed descriptions of Internet use, regulations and censorship in eight regions of the world, and in each of 40 different countries. The ONI found evidence of censorship in 26 of those 40. For the other 14 countries, it summarizes the legal and regulatory framework surrounding Internet use, and the test results that indicated no censorship. This leads to 200 pages of rather dry reading, but it is vitally important to have this information well-documented and easily accessible. The book’s data are from 2006, but the authors promise frequent updates on the ONI website.

No set of Internet censorship measures is perfect. It is often easy to find the same information on uncensored URLs, and relatively easy to get around the filtering mechanisms and to view prohibited web pages if you know what you’re doing. But most people don’t have the computer skills to bypass controls, and in a country where doing so is punishable by jail—or worse—few take the risk. So even porous and ineffective attempts at censorship can become very effective socially and politically.

In 1996, Barlow said: “You are trying to ward off the virus of liberty by erecting guard posts at the frontiers of cyberspace. These may keep out the contagion for some time, but they will not work in a world that will soon be blanketed in bit-bearing media.”

Brave words, but premature. Certainly, there is much more information available to many more people today than there was in 1996. But the Internet is made up of physical computers and connections that exist within national boundaries. Today’s Internet still has borders and, increasingly, countries want to control what passes through them. In documenting this control, the ONI has performed an invaluable service.

This was originally published in Nature.

Posted on April 7, 2008 at 5:00 AM

Third Parties Controlling Information

Wine Therapy is a web bulletin board for serious wine geeks. It’s been active since 2000, and its database of back posts and comments is a wealth of information: tasting notes, restaurant recommendations, stories and so on. Late last year someone hacked the board software, got administrative privileges and deleted the database. There was no backup.

Of course the board’s owner should have been making backups all along, but he has been very sick for the past year and wasn’t able to. And the Internet Archive has been only somewhat helpful.

More and more, information we rely on—either created by us or by others—is out of our control. It’s out there on the internet, on someone else’s website and being cared for by someone else. We use those websites, sometimes daily, and don’t even think about their reliability.

Bits and pieces of the web disappear all the time. It’s called “link rot,” and we’re all used to it. A friend saved 65 links in 1999 when he planned a trip to Tuscany; only half of them still work today. In my own blog, essays and news articles and websites that I link to regularly disappear—sometimes within a few days of my linking to them.
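
Checking a saved list for rot is simple enough to script. Here is a minimal sketch with placeholder URLs: a HEAD request is usually enough to learn whether a page still answers, and errors, timeouts, and 4xx/5xx responses all count as rot.

```python
import urllib.request
import urllib.error

def still_alive(url: str, timeout: float = 10.0) -> bool:
    """True if the URL still answers with a non-error response."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout):
            return True
    # urlopen raises HTTPError (a URLError subclass) for 4xx/5xx.
    except (urllib.error.URLError, OSError, ValueError):
        return False

saved_in_1999 = ["https://example.com/", "https://example.org/tuscany"]
dead = [u for u in saved_in_1999 if not still_alive(u)]
print(f"{len(dead)} of {len(saved_in_1999)} saved links have rotted")
```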

It may be because of a site’s policies—some newspapers keep only a couple of weeks of articles on their websites—or it may be more random: Position papers disappear off a politician’s website after he changes his mind on an issue, corporate literature disappears from the company’s website after an embarrassment, etc. The ultimate link rot is “site death,” where entire websites disappear: Olympic and World Cup events after the games are over, political candidates’ websites after the elections are over, corporate websites after the funding runs out and so on.

Mostly, we ignore the issue. Sometimes I save a copy of a good recipe I find, or an article relevant to my research, but mostly I trust that whatever I want will be there next time. Were I planning a trip to Tuscany, I would rather search for relevant articles today than rely on a nine-year-old list anyway. Most of the time, link rot and site death aren’t really a problem.

This is changing in a Web 2.0 world, with websites that are less about information and more about community. We help build these sites, with our posts or our comments. We visit them regularly and get to know others who also visit regularly. They become part of our socialization on the internet, and the loss of them affects us differently, as Greatest Journal users discovered in January when their site died.

Few, if any, of the people who made Wine Therapy their home kept backup copies of their own posts and comments. I’m sure they didn’t even think of it. I don’t think of it, when I post to the various boards and blogs and forums I frequent. Of course I know better, but I think of these forums as extensions of my own computer—until they disappear.

As we rely on others to maintain our writings and our relationships, we lose control over their availability. Of course, we also lose control over their security, as MySpace users learned last month when a 17-GB file of half a million supposedly private photos was uploaded to a BitTorrent site.

In the early days of the web, I remember feeling giddy over the wealth of information out there and how easy it was to get to. “The internet is my hard drive,” I told newbies. It’s even more true today; I don’t think I could write without so much information so easily accessible. But it’s a pretty damned unreliable hard drive.

The internet is my hard drive, but only if my needs are immediate and my requirements can be satisfied inexactly. It was easy for me to search for information about the MySpace photo hack. And it will be easy to look up, and respond to, comments to this essay, both on Wired.com and on my own blog. Wired.com is a commercial venture, so there is advertising value in keeping everything accessible. My site is not at all commercial, but there is personal value in keeping everything accessible. By that analysis, all sites should be up on the internet forever, although that’s certainly not true. What is true is that there’s no way to predict what will disappear when.

Unfortunately, there’s not much we can do about it. The security measures largely aren’t in our hands. We can save copies of important web pages locally, and copies of anything important we post. The Internet Archive is remarkably valuable in saving bits and pieces of the internet. And recently, we’ve started seeing tools for archiving information and pages from social networking sites. But what’s really important is the whole community, and we don’t know which bits we want until they’re no longer there.

And about Wine Therapy, I think it started in 2000. It might have been 2001. I can’t check, because someone erased the archives.

This essay originally appeared on Wired.com.

Posted on February 27, 2008 at 5:46 AM

Fear of Internet Predators Largely Unfounded

Does this really come as a surprise?

“There’s been some overreaction to the new technology, especially when it comes to the danger that strangers represent,” said Janis Wolak, a sociologist at the Crimes against Children Research Center at the University of New Hampshire in Durham.

“Actually, Internet-related sex crimes are a pretty small proportion of sex crimes that adolescents suffer,” Wolak added, based on three nationwide surveys conducted by the center.

[…]

In an article titled “Online ‘Predators’ and Their Victims,” which appears Tuesday in American Psychologist, the journal of the American Psychological Association, Wolak and co-researchers examined several fears that they concluded are myths:

  • Internet predators are driving up child sex crime rates.

    Finding: Sex assaults on teens fell 52 percent from 1993 to 2005, according to the Justice Department’s National Crime Victimization Survey, the best measure of U.S. crime trends. “The Internet may not be as risky as a lot of other things that parents do without concern, such as driving kids to the mall and leaving them there for two hours,” Wolak said.

  • Internet predators are pedophiles.

    Finding: Internet predators don’t hit on the prepubescent children whom pedophiles target. They target adolescents, who have more access to computers, more privacy and more interest in sex and romance, Wolak’s team determined from interviews with investigators.

  • Internet predators represent a new dimension of child sexual abuse.

    Finding: The means of communication is new, according to Wolak, but most Internet-linked offenses are essentially statutory rape: nonforcible sex crimes against minors too young to consent to sexual relationships with adults.

  • Internet predators trick or abduct their victims.

    Finding: Most victims meet online offenders face-to-face and go to those meetings expecting to engage in sex. Nearly three-quarters have sex with partners they met on the Internet more than once.

  • Internet predators meet their victims by posing online as other teens.

    Finding: Only 5 percent of predators did that, according to the survey of investigators.

  • Online interactions with strangers are risky.

    Finding: Many teens interact online all the time with people they don’t know. What’s risky, according to Wolak, is giving out names, phone numbers and pictures to strangers and talking online with them about sex.

  • Internet predators go after any child.

    Finding: Usually their targets are adolescent girls or adolescent boys of uncertain sexual orientation, according to Wolak. Youths with histories of sexual abuse, sexual orientation concerns and patterns of off- and online risk-taking are especially at risk.

In January, I said this:

…there isn’t really any problem with child predators—just a tiny handful of highly publicized stories—on MySpace. It’s just security theater against a movie-plot threat. But we humans have a well-established cognitive bias that overestimates threats against our children, so it all makes sense.

EDITED TO ADD (3/7): A good essay.

Posted on February 26, 2008 at 6:30 AM

Research on Malware Distribution

Interesting:

Among their conclusions are that the majority of malware distribution sites are hosted in China, and that 1.3% of Google searches return at least one link to a malicious site. The lead author, Niels Provos, wrote, ‘It has been over a year and a half since we started to identify web pages that infect vulnerable hosts via drive-by downloads, i.e. web pages that attempt to exploit their visitors by installing and running malware automatically. During that time we have investigated billions of URLs and found more than three million unique URLs on over 180,000 web sites automatically installing malware. During the course of our research, we have investigated not only the prevalence of drive-by downloads but also how users are being exposed to malware and how it is being distributed.’

Draft paper, and some data.
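
One hallmark of drive-by pages from this era was a hidden or zero-sized iframe silently pulling content from an exploit host. This toy sketch flags that signature; it is not the paper's actual detector, and the pattern and sample page are invented for illustration:

```python
import re

# Match iframes whose src points somewhere but which are rendered
# invisible: zero width/height or CSS hiding. Invented for illustration.
HIDDEN_IFRAME = re.compile(
    r'<iframe[^>]+src=["\'](?P<src>[^"\']+)["\'][^>]*'
    r'(?:width=["\']?0|height=["\']?0|visibility:\s*hidden|display:\s*none)',
    re.IGNORECASE,
)

def suspicious_iframes(html: str) -> list[str]:
    """Return src attributes of hidden or zero-sized iframes."""
    return [m.group("src") for m in HIDDEN_IFRAME.finditer(html)]

sample = '<iframe src="http://exploit.example/x" width="0" height="0"></iframe>'
print(suspicious_iframes(sample))  # ['http://exploit.example/x']
```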

Posted on February 26, 2008 at 6:23 AM
