Security Externalities and DDOS Attacks

Ed Felten has a really good blog post about the externalities that the recent Spamhaus DDOS attack exploited:

The attackers’ goal was to flood Spamhaus or its network providers with Internet traffic, to overwhelm their capacity to handle incoming network packets. The main technical problem faced by a DoS attacker is how to amplify the attacker’s traffic-sending capacity, so that the amount of traffic arriving at the target is much greater than the attacker can send himself. To do this, the attacker typically tries to induce many computers around the Internet to send large amounts of traffic to the target.

The first stage of the attack involved the use of a botnet, consisting of a large number of software agents surreptitiously installed on the computers of ordinary users. These bots were commanded to send attack traffic. Notice how this amplifies the attacker’s traffic-sending capability: by sending a few commands to the botnet, the attacker can induce the botnet to send large amounts of attack traffic. This step exploits our first externality: the owners of the bot-infected computers might have done more to prevent the infection, but the harm from this kind of attack activity falls onto strangers, so the computer owners had a reduced incentive to prevent it.

Rather than having the bots send traffic directly to Spamhaus, the attackers used another step to further amplify the volume of traffic. They had the bots send queries to DNS proxies across the Internet (which answer questions about how machine names like www.freedom-to-tinker.com relate to IP addresses like 209.20.73.44). This amplifies traffic because the bots can send a small query that elicits a large response message from the proxy.
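A rough sketch of the arithmetic involved (the byte counts, botnet size and per-bot bandwidth below are illustrative assumptions, not figures from this attack):

    # Back-of-the-envelope estimate of DNS reflection/amplification (Python).
    query_bytes = 64         # small spoofed query sent by a bot
    response_bytes = 3000    # large answer the open DNS proxy sends back
    amplification = response_bytes / query_bytes   # ~47x in this example

    bots = 10_000            # assumed number of bots
    per_bot_mbps = 1         # assumed upload capacity per bot
    attack_gbps = bots * per_bot_mbps * amplification / 1000
    print(f"~{amplification:.0f}x amplification, ~{attack_gbps:.0f} Gbps arriving at the target")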

Here is our second externality: the existence of open DNS proxies that will respond to requests from anywhere on the Internet. Many organizations run DNS proxies for use by their own people. A well-managed DNS proxy is supposed to check that requests are coming from within the same organization; but many proxies fail to check this—they’re “open” and will respond to requests from anywhere. This can lead to trouble, but the resulting harm falls mostly on people outside the organization (e.g. Spamhaus) so there isn’t much incentive to take even simple steps to prevent it.
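A minimal sketch of the missing check, in Python; the address ranges are placeholders for whatever networks the organization actually serves (a real resolver would enforce this in its configuration, for instance with an allow-recursion ACL in BIND):

    import ipaddress

    # Placeholder: the networks this organization's resolver is meant to serve.
    INTERNAL_NETS = [ipaddress.ip_network(n) for n in ("192.0.2.0/24", "2001:db8::/32")]

    def should_answer(client_ip: str) -> bool:
        """Answer recursive queries only for clients inside our own networks."""
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in INTERNAL_NETS)

    # An open proxy effectively skips this check and answers everyone,
    # which is what makes it usable as a reflector.
    assert should_answer("192.0.2.10")        # internal client: answered
    assert not should_answer("203.0.113.5")   # outside client: refused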

To complete the attack, the DNS requests were sent with false return addresses, saying that the queries had come from Spamhaus—which causes the DNS proxies to direct their large response messages to Spamhaus.

Here is our third externality: the failure to detect packets with forged return addresses. When a packet with a false return address is injected, it’s fairly easy for the originating network to detect this: if a packet comes from inside your organization, but it has a return address that is outside your organization, then the return address must be forged and the packet should be discarded. But many networks fail to check this. This causes harm but—you guessed it—the harm falls outside the organization, so there isn’t much incentive to check. And indeed, this kind of packet filtering has long been considered a best practice but many networks still fail to do it.
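The check itself is simple to sketch; the best-practice document usually cited here is BCP 38 (RFC 2827), and the prefix below is a placeholder for the organization's own address space:

    import ipaddress

    # Placeholder: the address space assigned to this organization.
    OUR_PREFIX = ipaddress.ip_network("198.51.100.0/24")

    def permit_outbound(src_ip: str) -> bool:
        """Egress rule: a packet leaving our network whose source address is
        not one of ours must have a forged return address, so drop it."""
        return ipaddress.ip_address(src_ip) in OUR_PREFIX

    assert permit_outbound("198.51.100.7")     # legitimate local source: passed
    assert not permit_outbound("203.0.113.9")  # forged source, e.g. the attack target: dropped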

I’ve been writing about security externalities for years. They’re often much harder to solve than technical problems.

By the way, a lot of the hype surrounding this attack was media manipulation.

Posted on April 10, 2013 at 12:46 PM

Comments

Clive Robinson April 10, 2013 8:21 PM

Yes, there was a lot of apparent hype, and at the time it looked like few people were affected (I have an interest in the London Internet Exchange (LINX), which was hit, but I did not see much in the way of problems). A journalist for the UK’s Guardian newspaper called it “hype” for what appeared to be fairly sound reasons.

But it appears that in some respects a bullet was dodged, because the attackers actually performed two separate DDoS attacks.

The first attack, against Spamhaus itself, initially worked but then failed thanks to some nifty background work virtualising their IP address across many places (anycast), which divided the actual DDoS traffic into smaller regional portions, each carrying only a fraction of the total.

Apparently the second attack was then directed not at Spamhaus but against the organisation helping Spamhaus virtualise its IP addresses across regions. Basically it was directed against IP addresses of the Tier-1 network suppliers that could not be virtualised in the same way, and this did do some significant harm. From some of what has been said it appears that this latter attack maxed out some Tier-1 links, and it was only other peering at Tier-2 that enabled traffic to be carried around the maxed-out Tier-1 links.

I’ve yet to hear the full story, as those who know appear to be saying not much at the moment.

Apparently part of the problem is that although Tier-1 and Tier-2 organisations have the capability in their routers etc. to carry considerably more traffic, they don’t keep spare capacity at the interface level; they only add the required hardware to provide network capacity on demand, which takes time and planning. Whether this policy will now change or not is something we will have to wait and see.

One thing this episode has highlighted is how few Tier-1 exchanges there are, which is where the Spooks want to put their mass-surveillance hoovering points. But it has also revealed that with careful planning and placement of nodes you could probably route your traffic through Tier-2 exchanges, which might enable you to avoid the mass surveillance points…

Clive Robinson April 10, 2013 9:08 PM

Links to other comments on the issue,

Firstly, the UK’s Guardian newspaper, claiming (not unreasonably) that it was hype,

http://www.guardian.co.uk/commentisfree/2013/mar/29/cyberwar-spun-shoddy-journalism

Secondly, a more technical view from Ars Technica, saying basically that there was a “Swan Effect” [1] happening in places, and explaining why the Gizmodo article was itself more hype…

http://arstechnica.com/security/2013/04/can-a-ddos-break-the-internet-sure-just-not-all-of-it/

[1] The “Swan Effect” is based on the observation that, to a person watching from above the water, the swan appears to move effortlessly, with grace and serenity, whilst below the water, hidden from sight, its feet are paddling madly in an undignified, sometimes chaotic and almost disturbing way.

El Reg reader April 11, 2013 4:38 AM

If you haven’t seen it, here’s an autopsy by a sysadmin whose DNS server was used in the attack. It provides some insight into how these problems came to exist despite someone attempting to do the right thing. And, essentially, it shows that software vendors attempting to address the problem by making sensible default configurations actually contributed.

Peter A. April 11, 2013 6:34 AM

I don’t understand why the DNS amplification attack was so difficult to cope with (the LINX attack was another beast).

  1. Spamhaus’s operations consist for the most part of receiving DNS queries and sending out DNS responses.
  2. The DNS amplification attack causes the victim to be flooded with DNS responses from open recursive resolvers.
  3. Routers serving major links have quite powerful filtering features.
  4. Considering the above, it should be easy to fend off the attack by dropping DNS responses addressed to Spamhaus at the affected interconnects (sketched after this list).
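A minimal sketch of the kind of filter item 4 has in mind, written in Python rather than as a router ACL; the victim prefix here is a placeholder, not Spamhaus’s real address space:

    import ipaddress

    VICTIM_PREFIX = ipaddress.ip_network("203.0.113.0/24")   # placeholder for the victim's prefix
    DNS_PORT = 53

    def drop_packet(dst_ip: str, src_port: int, proto: str) -> bool:
        """Proposed interconnect filter: discard DNS responses
        (UDP packets from source port 53) addressed to the victim."""
        return (proto == "udp" and src_port == DNS_PORT
                and ipaddress.ip_address(dst_ip) in VICTIM_PREFIX)

    assert drop_packet("203.0.113.10", 53, "udp")        # DNS response to the victim: dropped
    assert not drop_packet("203.0.113.10", 443, "tcp")   # ordinary traffic: passed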

Where’s the flaw in my logic?

Johnston April 11, 2013 10:49 AM

@El Reg reader

There’s an interesting tidbit at the end of the article. The sysadmin whose open resolver was abused writes: “The second [flaw in my server design] is that DNSSEC isn’t enabled.” No, this actually helped: DNSSEC greatly increases the size of DNS responses, making DNSSEC amplification attacks about 5x worse than regular DNS amplification attacks. And that is besides the numerous other failures that DNSSEC introduces through its design-by-committee construction.

Tom Boettcher April 11, 2013 2:54 PM

Here’s a link to the (fascinating) article from CloudFlare, the organization responsible for DDoS mitigation in this attack. It’s worth a read if you have a few minutes.

neill April 11, 2013 6:28 PM

Thankfully there are open DNS servers – we have been “manipulated” a few times in the past, when ISPs had their DNS answer invalid requests (e.g. typos) with the IPs of sites of their liking…

… and of course Google is always very happy to know what you’re up to at its 8.8.8.8 server.

MeMeD April 12, 2013 8:33 AM

From the history of these DDoSes I would assume that “open DNS” won’t go away; it is simply needed to keep services working.
But I see no reason why AS operators should be allowed to send faked traffic at all.
IMHO, sending packets with faked sender IPs is the only real “cause” of the amplification problem.
And I also do not understand why “the others” accept traffic with a sender IP from the wrong AS.
Whilst live detection might be too resource-hungry, a simple sFlow-like check (every 1024th packet inspected, or so) would be no big cost factor (a rough sketch follows below). And when you detect false sender IPs coming from some AS, you have a valid reason to drop it BGP-wise. If you accept it, you are as liable as the one sending the packets with the wrong sender IPs.
In the telephone world it is unimaginable that carrier A forwards a call with source numbers from carrier C into the network of innocent carrier B; every edge switch has a filter against that.
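A rough sketch of that sampling idea in Python; the ASN, prefixes and sampling rate are illustrative placeholders (a real deployment would take expected prefixes from routing registry data):

    import ipaddress

    SAMPLE_EVERY = 1024  # roughly "every 1024th packet", as suggested above

    # Placeholder: source prefixes we expect to see from a given peer AS.
    EXPECTED_PREFIXES = {
        64496: [ipaddress.ip_network("203.0.113.0/24")],   # documentation ASN and prefix
    }

    def spoofed_sources(src_ips, peer_asn, sample_every=SAMPLE_EVERY):
        """Check every Nth source address against the peer's expected prefixes
        and return the ones that fall outside them (likely spoofed)."""
        nets = EXPECTED_PREFIXES.get(peer_asn, [])
        sampled = src_ips[::sample_every]
        return [src for src in sampled
                if not any(ipaddress.ip_address(src) in net for net in nets)]

    # Tiny demo with a sampling rate of 2: addresses outside AS64496's expected
    # space get flagged.
    print(spoofed_sources(["192.0.2.1", "203.0.113.5", "192.0.2.7", "203.0.113.9"],
                          64496, sample_every=2))   # -> ['192.0.2.1', '192.0.2.7']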

BR

Me April 12, 2013 9:52 AM

@Peter A: I think the flaw in your logic is that a DDoS attack works simply by flooding the target with data. Filtering is a moot point, because filtering out the bad traffic still consumes the capacity you would rather spend on handling the good traffic.

Consider if your job was to inspect red ants for having the right number of legs. They come in small boxes; you open each box, count legs, and place the ant in one of two piles (right number, wrong number). Now suppose someone sends you 1000 times as many black ants as there are red ones. It doesn’t matter that you can quickly see that an ant is black and not red: you still have to pick up the box, open it and look at it, and by the time you see that it is black, most of the damage is done.

Jarda April 13, 2013 4:43 AM

I hope some brain-dead politician isn’t reading your blog. All I need is someone pushing a lockdown of open DNS servers, so that everybody has to stick with their provider’s DNS, which is often not great (e.g. slow or even censored) and which, for that reason, I’m not using.

Roger Wolff April 16, 2013 2:37 AM

As long as it doesn’t bother ME too much, I like to provide services to the internet at large. If someone’s mail server is not working, I don’t mind them sending their mail through mine to get it delivered. But that has been a bad idea for a long time, and of course it has been disabled for over a decade.

Similarly, when faced with the question “should the DNS server recurse for requests from the internet?”, I had been providing a “service” to everybody. When the system was upgraded about five years ago I didn’t bother to change the defaults, so nowadays it’s off.

Of course, to those working in the email field, that “forwarding for strangers” has been a death-sentence crime for ages. How long does it take for a small sysop to be notified of such problems? And how long until he has time to handle them?
