The DNS Vulnerability

Despite the best efforts of the security community, the details of a critical internet vulnerability discovered by Dan Kaminsky about six months ago have leaked. Hackers are racing to produce exploit code, and network operators who haven’t already patched the hole are scrambling to catch up. The whole mess is a good illustration of the problems with researching and disclosing flaws like this.

The details of the vulnerability aren’t important, but basically it’s a form of DNS cache poisoning. The DNS system is what translates domain names people understand, like www.schneier.com, to IP addresses computers understand: 204.11.246.1. There is a whole family of vulnerabilities where the DNS system on your computer is fooled into thinking that the IP address for www.badsite.com is really the IP address for www.goodsite.com—there’s no way for you to tell the difference—and that allows the criminals at www.badsite.com to trick you into doing all sorts of things, like giving up your bank account details. Kaminsky discovered a particularly nasty variant of this cache-poisoning attack.
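
To make the trust relationship concrete, here is a minimal Python sketch (the hostname is just an example) of how blindly an application accepts whatever the resolver hands back:

    import socket

    # The application asks the local resolver for an address and trusts
    # whatever comes back; nothing here independently checks that the
    # answer really came from the domain's legitimate nameservers.
    hostname = "www.schneier.com"  # any name works as an example
    address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {address}")

    # If an attacker has poisoned the resolver's cache, 'address' now
    # points at the attacker's server, and nothing in this code can tell.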

Here’s the way the timeline was supposed to work: Kaminsky discovered the vulnerability about six months ago, and quietly worked with vendors to patch it. (There’s a fairly straightforward fix, although the implementation nuances are complicated.) Of course, this meant describing the vulnerability to them; why would companies like Microsoft and Cisco believe him otherwise? On July 8, he held a press conference to announce the vulnerability—but not the details—and reveal that a patch was available from a long list of vendors. We would all have a month to patch, and Kaminsky would release details of the vulnerability at the BlackHat conference early next month.

Of course, the details leaked. How isn’t important; it could have leaked a zillion different ways. Too many people knew about it for it to remain secret. Others who knew the general idea were too smart not to speculate on the details. I’m kind of amazed the details remained secret for this long; undoubtedly it had leaked into the underground community before the public leak two days ago. So now everyone who back-burnered the problem is rushing to patch, while the hacker community is racing to produce working exploits.

What’s the moral here? It’s easy to condemn Kaminsky: If he had shut up about the problem, we wouldn’t be in this mess. But that’s just wrong. Kaminsky found the vulnerability by accident. There’s no reason to believe he was the first one to find it, and it’s ridiculous to believe he would be the last. Don’t shoot the messenger. The problem is with the DNS protocol; it’s insecure.

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure—and cheaper—than the patch treadmill we’re all on now.

What a security engineer brings to the problem is a particular mindset. He thinks about systems from a security perspective. It’s not that he discovers all possible attacks before the bad guys do; it’s more that he anticipates potential types of attacks, and defends against them even if he doesn’t know their details. I see this all the time in good cryptographic designs. It’s over-engineering based on intuition, but if the security engineer has good intuition, it generally works.

Kaminsky’s vulnerability is a perfect example of this. Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that Source Port Randomization was a smart design choice. That’s exactly the work-around being rolled out now following Kaminsky’s discovery. Bernstein didn’t discover Kaminsky’s attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn’t need to be patched; it’s already immune to Kaminsky’s attack.
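
A rough Python sketch of the idea behind source port randomization (not djbdns’s actual code): draw both the UDP source port and the 16-bit query ID from a cryptographically strong random source, so a blind spoofer has to guess roughly 32 bits instead of 16.

    import random
    import socket

    rng = random.SystemRandom()  # OS entropy, not a predictable PRNG

    def open_query_socket():
        # Bind the outgoing UDP socket to a randomly chosen source port
        # instead of letting the OS hand out a predictable one.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            port = rng.randint(1024, 65535)
            try:
                sock.bind(("", port))
                return sock, port
            except OSError:
                continue  # port already in use; pick another

    def random_query_id():
        # The 16-bit DNS transaction ID, also drawn from strong randomness.
        return rng.randint(0, 0xFFFF)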

That’s what a good design looks like. It’s not just secure against known attacks; it’s also secure against unknown attacks. We need more of this, not just on the internet but in voting machines, ID cards, transportation payment cards … everywhere. Stop assuming that systems are secure unless demonstrated insecure; start assuming that systems are insecure unless designed securely.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/7): Seems like the flaw is much worse than we thought.

EDITED TO ADD (8/13): Someone else discovered the vulnerability first.

Posted on July 29, 2008 at 6:01 AM

Comments

Lewis Donofrio July 29, 2008 7:00 AM

I thought, believe it or not, that Microsoft solved this problem years ago with the joining of Active Directory and MSDNS by using only authenticated hosts for zone transfers.

–someone correct me if I’m mistaken.

Security4all July 29, 2008 7:05 AM

Well said. Security by design. The internet used to be a safer place, back when security wasn’t that important a feature. Even this patch just makes exploitation more difficult; it’s not the final solution.

There has been a lot of talk about DNSSEC and how it’s now the time to adopt it. Since we seem to trust other mechanisms (such as PKI/SSL) on DNS, maybe it’s time to reconsider?

On the other hand, someone said “there is no business model for DNSSEC”.
DNSSEC needs more storage, bandwidth, administration, network resources, and so on.

More here
http://www.research.att.com/~trevor/papers/dnssec-incentives.pdf

Security4all July 29, 2008 7:14 AM

First of all, authoritative nameservers were not the issue. The resolving/caching nameservers are.

Second of all, that post of the Internet Storm Center got it wrong. This DNS vulnerability wasn’t previously discovered. This wasn’t an issue of Birthday attacks and waiting till the TTL (Time To Live) of the domain expired to poison the cache. This exploit can overwrite any existing domain at any time. And it can be done in 10 seconds (using a high bandwidth connection). Although the current public exploits aren’t there yet, they are being refined.

Just listen to the webcast that Dan Kaminsky gave explaining the issue.
http://security4all.blogspot.com/2008/07/recorded-blackhat-webcast-with-dan.html

Bruno Maury July 29, 2008 7:22 AM

You say a good security designer “anticipates potential types of attacks”. Can you name a computer protocol or system that was designed from the ground up with security in mind and that has never, ever, been compromised, leaving its users waiting for a patch as in this case? When both a good and a bad design are compromised, what is the difference, if the system is so massively deployed that it will take weeks to patch in full?

And even if one designs a system virtually immune to any attack, isn’t there a chance for the implementation to be faulty (like OpenSSL back in May)? You quote port randomization as a good design, but isn’t port randomization an implementation detail in a fundamentally flawed design?

Thanks for writing this blog and sharing your insights with us 🙂

Nicholas Weaver July 29, 2008 7:25 AM

The reason it leaked is easy: it’s incredibly STUPID:

“DNS resolvers will cache glue records, and will overwrite the existing cache entries for these glue records”.

This is, IIRC, another thing that DJBDNS doesn’t do: cache glue records.
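
As a toy illustration of the policy difference (simplified records, not real resolver code), the two behaviors might look like this:

    class ResolverCache:
        # Toy cache illustrating the policy difference, not a real resolver.
        def __init__(self):
            self.answers = {}  # name -> address (TTL handling omitted)

        def add_answer(self, name, address):
            # Records from the answer section we actually asked about.
            self.answers[name] = address

        def add_glue_unsafe(self, name, address):
            # The vulnerable behavior: glue from the additional section is
            # cached and may silently overwrite an existing entry.
            self.answers[name] = address

        def add_glue_safe(self, name, address, per_query_scratch):
            # The safer behavior: glue only helps finish the current lookup
            # and is thrown away (with the scratch dict) when the query ends.
            per_query_scratch[name] = address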

John July 29, 2008 7:50 AM

It’s another example. Security has to be designed in. Of course there can still be security problems either due to the original design or the implementation being faulty but designing in security at the beginning means it is a lot less likely and easier to fix.

Look at the difference between an operating system like VMS (designed to be a secure and robust commercial platform) and unix. I know modern unix kernels have been substantially re-designed but the original was not designed with security in mind.

This DNS problem was a design flaw in the original DNS which was not designed for security.

Trichinosis USA July 29, 2008 8:00 AM

How do you address the greed factor?

There are thousands of Cisco switches and routers out there which will never be patched because Cisco insists on a contract or warranty before issuing the patch.

When does it become appropriate to order a vendor to provide no-cost fixes for their vulnerabilities in the interests of greater security? How else should this problem be solved?

Eam July 29, 2008 8:02 AM

@Security4y’all: “Although the current public exploits aren’t there yet, they are being refined.”

I may be mistaken (can’t check at work), but I do believe there has been a Metasploit module for this vulnerability out for around a week now.

Eam July 29, 2008 8:13 AM

@Bruno Maury:
I’m really not sure what you’re getting at here. You’re saying that if a system is so widely deployed that it takes weeks to patch most installations, then it doesn’t matter how often it needs to be patched? Seriously?

Anyway, you should check out Microsoft SQL Server 2005’s track record. It was designed using the Secure Development Lifecycle, and the first serious vulnerability for it was only discovered a few weeks ago. Because of this, life as a SQL Server 2005 admin is notably devoid of headaches (especially when compared to previous SQL Server versions!).

SteveJ July 29, 2008 9:02 AM

@Eam:

Here’s a flash video of someone at least claiming to combine the Metasploit DNS cache poisoning tools with his own tools which imitate software update servers for Java and other applications:

http://www.infobyte.com.ar/demo/evilgrade.htm

Ironically, the “patch treadmill” mentioned by Bruce not only doesn’t work, it creates its own problems.

Users are in a double-bind, where they can’t run old versions of apps because they’re vulnerable, and they can’t reliably acquire new versions because the vendors don’t make them available via secure mechanisms.

Allen July 29, 2008 9:10 AM

djbdns is not immune to the attack; having randomized source ports simply makes attacks harder.

How much harder? Not much. Kaminsky’s own test suite and the exploits that have recently made their way into the light can compromise a DNS server that does source port randomization in a matter of minutes.

The problem isn’t the DNS protocol either, it’s an implementation issue that can be easily fixed.

Don’t cache NS/A records at the same level as the query you’re sending. When you ask for ‘amazon.com’ and get a ‘.com’ NS list from the TLDs, that’s ok. It’s in bailiwick and it’s relevant to your question.

When you ask for “12345.amazon.com” from “ns1.amazon.com” and you get a response back saying “dunno the address, but here’s another NS server at the same level as me that may know”, trash the response.

Problem solved.
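
A rough sketch of that depth rule (label counting only; a real resolver would also check that the referral stays inside the bailiwick):

    def depth(name):
        # Number of labels in a domain name; trailing dot ignored.
        return len(name.lower().rstrip(".").split("."))

    def accept_referral(zone_asked, zone_referred):
        # Only follow a referral that delegates to a zone strictly deeper
        # than the zone of the server we just asked; a "referral" back to
        # nameservers at the same level gets dropped.
        return depth(zone_referred) > depth(zone_asked)

    print(accept_referral("com", "amazon.com"))         # True: downward delegation
    print(accept_referral("amazon.com", "amazon.com"))  # False: same level, drop it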

Nuno July 29, 2008 9:27 AM

Allen is right,

and there are other ways to make DNS safer; still, the exploit works where the conditions can be met.

But DNS can be implemented in very different ways that protect you from these exploits. In fact, protecting yourself from unknown vulnerabilities can be as simple as setting up your DNS servers to trust only the DNS servers you want. Many exploits in the wild target native flaws in protocols, but that means the exploit still needs to reach the application, and that’s where the real security implementation comes into place.

I can’t be too sure that what was developed in djbdns was developed with unknown vulnerabilities in mind, as it can always have its own vulns… so the implementation is the way to think about security.

FP July 29, 2008 9:27 AM

“There is a whole family of vulnerabilities where the DNS system on your computer is fooled into thinking that the IP address for www.badsite.com is really the IP address for www.goodsite.com — there’s no way for you to tell the difference.”

True, although the vulnerability is mitigated in the presence of an SSL certificate. www.badsite.com would not be able to present you with a valid, signed SSL certificate for www.goodsite.com. So an attentive user would have some warning before handing over her account details.

Nicholas Weaver July 29, 2008 9:30 AM

No, Allen. The problem is what happens when you cache “in-bailiwick” glue!

E.g., you ask for aoeui.amazon.com and you get a response back saying:

aoeui.amazon.com CNAME: www.amazon.com
www.amazon.com A: 66.66.66.66

(because the attacker won the race), and the DNS server then happily overwrites the cached entry for www.amazon.com.

It’s all “in bailiwick”; it all seems proper.

What it comes down to: you should NEVER cache glue, in bailiwick or not!

Bruno Maury July 29, 2008 9:31 AM

@Eam
“You’re saying that if a system is so widely deployed that it takes weeks to patch most installations, then it doesn’t matter how often it needs to be patched?”

I don’t think I said that. I’m asking whether it makes a difference waiting for and deploying a patch when the system being patched is poorly designed, as opposed to well designed. You mention SQL Server 2005, of which I have no knowledge, but maybe you can answer this: suppose there were as many SQL Server 2005 instances as DNS servers (I don’t know either number; this question is hypothetical) and that an exploit of the same impact was found for both. Would it make a difference that SQL Server is well designed and that DNS is not?

When Bruce says: “This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete” and then advocates for putting security at the core of systems/protocols design, it makes me wonder whether good systems are subject to the same cycle and what’s the difference in patch release frequency between both kinds.

To come back to the actual DNS exploits, I have read elsewhere that port randomization does not solve the issue but merely makes it harder to exploit. Hence my question about this solution being an implementation detail rather than a good design.

Nicholas Weaver July 29, 2008 9:34 AM

And I BELIEVE that DJBDNS will only use glue records for the duration of the query, not beyond (which IS safe), so, if so, djbdns is immune, period.

Nicholas Weaver July 29, 2008 9:41 AM

Bruno: Correct, but a very big difference on exploitability.

Without port randomization, you can do it “blind” with on the order of 2^16 packets: generate a query, send 100 tries, query the next invalid host, send 100 tries, repeat…

With port randomization, the blind attack takes 2^32 packets, which is infeasible.

But if you put your DNS server behind a NAT, you lose the port randomization in most cases, as NATs will often sequentialize UDP ports.

If glue records are not retained beyond the lifespan of the current query, this attack fails to work because instead of being able to do a bunch of races and have any single race win, you have to win the FIRST race to poison a name, because after that the result is cached and the server won’t generate queries that you can attempt to poison for that name.
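
As a back-of-the-envelope illustration of those numbers (the packet size and attacker bandwidth below are assumptions, not measurements):

    # Rough numbers only: each spoofed reply is a small UDP packet, and the
    # attacker must on average cover half the search space before winning.
    PACKET_BYTES = 100                    # assumed size of one spoofed reply
    LINK_BPS = 100_000_000                # assumed 100 Mbit/s attacker uplink
    pps = LINK_BPS / (PACKET_BYTES * 8)   # spoofed replies per second

    for label, space in [("16-bit query ID only", 2 ** 16),
                         ("query ID + random source port", 2 ** 32)]:
        expected = space / 2              # average guesses before a match
        print(f"{label}: ~{expected:,.0f} packets, ~{expected / pps:,.1f} s per race")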

Anonymous July 29, 2008 9:48 AM

@john, early versions of VMS had problems as well, for example their way of handling privilege escalation: no authentication was needed to raise your privilege level to the maximum allowed. This was a major hole for privileged programs that called nonprivileged code (such as mail using a foreign terminal handler, which could be a user program). It also meant that users with privileges (such as physical I/O) had to be really, really careful about running trojans. They couldn’t just rely on not having the privilege enabled, as being allowed to turn it on was enough.

Mister Paul July 29, 2008 9:58 AM

I found this whole debacle odd. Anyone who has done ARP-cache poisoning and is familiar with typical UDP transaction schemes would come upon this easily. I know of people who surmised this weakness years ago and heard it bandied about for some time. I’m not taking anything away from Kaminsky — his work is always good — but I very, very much doubt his actions in any way made things worse. I think any knowledgeable, motivated attacker had already probably happened upon this or would have soon enough. The “leak” furor is laughable and seriously misplaced.

Nuno July 29, 2008 9:59 AM

And I BELIEVE that DJBDNS will only use glue records for the duration of the query, not beyond (which IS safe), so, if so, djbdns is immune, period.

Posted by: Nicholas Weaver at July 29, 2008 9:34 AM

So you can’t use the DNS server as a caching-only server? And it’s safe because it’s limited in the way you can use it?

Davi Ottenheimer July 29, 2008 10:39 AM

Nice article. Would have liked to see a tie into physical vulnerabilities and disclosure habits.

Locksport continues to grow, while lock companies do little to patch vulnerable locks:
http://www.slate.com/id/2195862

And NXP Semiconductor is suing to gag the team that researched Oyster card flaws:

http://www.theregister.co.uk/2008/07/08/nxp_sues_oyster_researchers/

You seem to refer only to software security here, but there are many more dimensions to the issue.

2LongAgo July 29, 2008 10:40 AM

People knew about this and other flaws a long time ago.
Programmers can automate tests to check for flaws…
Flaws are worth money, and inside knowledge gives one a job. It’s hard to negotiate money for perfection in public open source.
Funny: all this money on Coverity, ‘NSA guidelines,’ Cisco, etc., and only the ones who knew run the good stuff, with the good configurations.
Full disclosure is dead; who wants to be thought of as a refugee maid?
Even *BSD uses the BIND and Sendmail crap. Putting in other programs is all a crock; you then have to handle other issues that some are not up for. Looks like the time to migrate to DJB’s stuff is coming.
If only DJB would help bring his stuff to *BSD. It would be worth money. Heck, Congress should pay. They waste a billion here and there on nothing.
The money wasted on bad IT is insane! And what do we get from Congress but more War-on-X disaster racketeering?

oldami July 29, 2008 10:54 AM

I really wish that the understanding of this concept (Bruce’s rant, not the vulnerability) could be beaten into the heads of every member of Congress, and of all the foreign governments as well. Software is developed on an economic business model, not a security model. Until laws force a change or governments fund the proper changes we will continue down this dark path.

Thank you Mr Schneier for advocating as you do. I consider you the greatest hero of the digital age.
– Oldami

t July 29, 2008 11:00 AM

Now you can attack DNS with a DDoS, since the ISC patch poorly implemented port randomization. You can send a bunch of rand.domain.com queries at the server and slow it to a grinding halt.

Remember, it has to keep track of all the open ports, so you can mess with how it handles timeouts, while it waits for answers to non-existent domains.

Hardware requirements for the current randomized-port version of BIND need to be re-evaluated.

If you cannot poison it, shut it down or make it slow to respond.

Maybe it’s time to not use BIND until a good solution is found. Switching to djbdns in the short term might make more sense. Of course, then pound djbdns to make sure it can handle the load.

A fan July 29, 2008 11:46 AM

“Of course, then pound djbdns to make sure it can handle the load.”

t, you kill me. DJBDNS is /way/ more efficient.

antibozo July 29, 2008 12:14 PM

Security4all> There has been a lot of talk about DNSSEC and how it’s now the time to adopt it.

Actually, there’s been very little talk of DNSSEC, which is unfortunate because this is yet another reason we should have had the infrastructure in place years ago.

Security4all> Since we seem to trust other mechanisms (such as PKI/SSL) on DNS, maybe it’s time to reconsider?

Reconsider the braindead X.509 PKI we currently use? Definitely. The status quo of CAs performing authentication of domain ownership using cleartext email is pathetic, and the very existence of CAs (for most purposes, anyway) is an unnecessary expense. Furthermore, the ability of any CA to sign any subject is a fundamental design flaw in the X.509-based PKI; anyone who imports a CA certificate for example.com’s enterprise PKI ends up trusting the managers of that PKI not to forge a cert for, say, bankofamerica.com.

Security4all> On the other hand, someone said “there is no business model for DNSSEC”.

Whoever said that was deeply confused. If we can build a DNSSEC-based PKI, the business model of not paying through the nose for certificates is obvious. Not only that, but we could distribute public keys for as many hosts and services as we like, and deploy opportunistic encryption using IPsec without having to set up keys ahead of time.

Moreover, DNSSEC-based PKI can be mutual. In the broken status quo, we are accustomed to think not only that each certificate requires paying a third party, but also that we can’t rely on users to have verifiable client certificates. With DNSSEC this is no longer the case, since with the appropriate naming schema, DNSSEC can distribute secure public keys for both parties in communication. This means we have all the infrastructure needed for message encryption and message authentication, in both directions, all for the one-time cost of setting up DNSSEC and getting your KSK signed by your domain parent.

Elaborate revocation protocols, also involving a third party, are no longer necessary. Just remove the signature from the DNS, and the certificate expires when the DNS TTL elapses.

DNSSEC-based PKI also appropriately constrains signing authority along the domain axis, which is the axis being verified in the first place (i.e. the CN in the certificate subject). This means that the example.com PKI’s administrators no longer have the ability to forge unrelated certs that the PKI’s users end up trusting.

Security4all> More here
http://www.research.att.com/~trevor/papers/dnssec-incentives.pdf

Yet another outdated report making the misguided comparison between DNSSEC and SSL. Of course DNSSEC doesn’t replace SSL; it doesn’t address the same problem at all. It can, however, replace the CAs in the certificate issuance process, which is the magical hand-waving part the DNSSEC detractors always seem to overlook.

Moreover, DNSSEC provides a general, secure, distributed, redundant, hierarchical database. The possibilities for this are endless and not limited to distribution of any sort of public key–consider identity management, for example.

Oh, and it solves DNS cache poisoning.

Todd Knarr July 29, 2008 12:32 PM

There’s one thing that can be done to make attacks more difficult that I don’t think is being implemented yet: ignore all additional records in the response that aren’t applicable to the RR name being queried. If you query for the A records for www.example.com, an additional MX or TXT record for it is acceptable, but an A record for database.example.com should be ignored. In a delegation response, only A records for the responsive NS records should be accepted. That closes most of the exposure, leaving only poisoning of delegation responses open.
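
A simplified sketch of that filtering rule (the record objects and their fields are invented for illustration):

    def filter_additional(query_name, response):
        # Keep additional records only if they are for the queried name
        # itself, or are A records for nameservers listed in this response's
        # authority (NS) section; drop everything else, e.g. a stray A record
        # for database.example.com on a query for www.example.com.
        ns_targets = {rr.target for rr in response.authority if rr.rtype == "NS"}
        response.additional = [
            rr for rr in response.additional
            if rr.name == query_name
            or (rr.rtype == "A" and rr.name in ns_targets)
        ]
        return response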

DNSSEC removes the problem entirely, and maybe this will motivate people to get the proper records in place for it to work.

Security4all July 29, 2008 12:34 PM

@Eam

Yes, besides Metasploit, there were even two exploits on milw0rm. What I meant is that the exploits as developed take several minutes, whereas Dan Kaminsky said it could be done in 10 seconds.

@antibozo

Interesting remarks. I found that pdf by looking for more information. You are right. Everybody is focused mainly on patching and the discussion about DNSSEC has stopped. Maybe it shouldn’t have.

Antonio July 29, 2008 1:19 PM

Without knowing the details of this specific attack, I feel very sympathetic about preventing problems compared to patch-treadmilling security issues.

The line of thinking that I’m considering may not apply to all kinds of security problems, but will add to the prevention considerably by controlling software changes much more than in the past.

In the past, we needed fast (software) update cycles to make wonderful new innovations possible and to correct errors. Instant update gratification; but today that same update/software-modification capability allows infections to take place.

Infections can occur because the computer does allow (or does not prevent) changes to the software to take place when online. On a desktop one needs to update the OS, firewall, virus scanner, spyware scanner and phishing warning to prevent an unwanted modification of software from occurring. How many preventive measures could I get rid of if this software couldn’t physically be modified (for some time)?

I personally can live with a guaranteed half-year cycle for my software updates as long as it allows me to safely interact with the internet the rest of the time.
In set-top boxes the way to achieve this is by checking for modification of the software image and reloading in case of minor problems, and making the box unusable when more major problems are found.
And isn’t running applications in a virtual machine a similar way of controlling these kinds of risk?

Could some kind of “software freeze” (like the physical switch on my SD card) in a PC, in combination with a secure (e.g. once-every-half-year) update, be the kind of preventive security design that changes the rules of the patch-treadmill game?

Any suggestions, comments or thoughts?

antibozo July 29, 2008 2:24 PM

Steve> what about Thomas Ptacek’s arguments against? Making DoS much easier against the DNS infrastructure doesn’t sound like a good idea to me.

Well, Mr. Ptacek’s arguments, like the other paper already cited, are directed primarily at the red herring of end-to-end encryption, which DNSSEC doesn’t attempt to solve any more than certificate signing on its own does. Meanwhile, he blows the complexity of DNSSEC operations out of proportion, in my opinion; I’ve actually practiced working with DNSSEC and it’s really not that hard. Like anything else, you need some new tools to help out, and once you have them you wonder what all the fuss was about.

So most of Ptacek’s series is moot, in my opinion, and you’ll note that he never completed it. But, as you suggest, he does raise a couple of other issues that are not without validity.

I don’t dispute that there are computational requirements for DNSSEC that make DoS more feasible. But computers get faster and cheaper, and bandwidth gets higher, so generally redundancy can be applied to deal with this problem, if it even turns out to be a practical problem. At the same time, virtualization is making deployment of redundant infrastructure much easier.

In contrast, DNS cache poisoning, which is a real and immediate threat, gets continually easier as computational capacity and bandwidth go up. I don’t think the 15 or 16 bits of entropy we get out of source port randomization (while depleting available entropy for other things and increasing memory requirements for firewalls) are going to stop someone with a large botnet from poisoning a cache, and within a few years the benefit will have eroded yet again and pretty much anyone will be able to do it. There’s no other evasion for cache poisoning I’m aware of once that happens, other than modifying the DNS infrastructure. DNSSEC is the only contender I’m aware of for doing that, and it comes with all these potential side benefits I mentioned earlier.

So, yes, I agree with Ptacek that DNS operations and infrastructure requirements are indeed somewhat more demanding under DNSSEC, but I think people will find that in practice, it’s really not that hard. I would point interested users toward the tools written by Sparta as a first cut for making things like key rollover easy to manage. In my estimation, the real and immediate benefit of DNSSEC (no cache poisoning) outweighs the theoretical cost (easier DoS), and the potential benefits, once we have DNSSEC, blow all other considerations out of the water. Management costs of doing DNSSEC are the biggest unknown right now, but that’s because there’s no infrastructure for most of us to play with. It’s a chicken and egg problem, unfortunately. There will be an initial outlay to get significant deployment, but after that, reduced management costs in other areas, such as PKI, will compensate.

I think that if, as a community, we had put the same cumulative effort into deploying DNSSEC that has been wasted on trying to mitigate against cache poisoning over the past six years, we would be a lot farther along in overall network security. Unfortunately, DNSSEC has had few proponents, and a few vocal opponents. And here we are, trying to deal with cache poisoning yet again.

Tim July 29, 2008 2:28 PM

These security problems are more a reflection of the mindset of the community than a technical one. Technical problems, once found, are generally relatively easy to solve. Overcoming the inertia of establishment is a different task altogether. Of course, the community can’t seem to come to terms with a security protocol for DNS either.

We have all known for YEARS that software such as Sendmail and BIND (The Buggy Internet Daemon, as DJB likes to call it) has a long history of security problems. The question is: why does a large portion of the community still run them (Sendmail is less of an issue due to the popularity of Postfix)?

I blame most of it on the institutional “cheerleaders” for these products. If you examine the history of this software, you will find an amazing amount of FUD (much of which DJB debunks on his website) spread about moving away from it. The trouble is, some of these same cheerleaders are respected technical community leaders. It can be hard to get a true assessment amongst all the noise. Personally, it took some hard lessons in the trenches to decide for myself.

I don’t want to debate the merit (or lack thereof) of other software (personally I am a DJB fan), but what I want to know is: why is BIND (and its descendants) still running at the majority of the core and at the infrastructure level, given its clear and numerous past security issues?

Tim

Mister Paul July 29, 2008 2:37 PM

@antibozo: My last analysis of DNSSEC (admittedly a few years ago) was that it was easy to effectively DoS. Not only that, it was reasonably obvious how to do so, and an attack probably could be cobbled together by any intermediate-skill attacker. I recommended against it (it was my job role to make that kind of judgement). Unless they have done some serious fixes, I still would.

In the current situation, randomized ports would, at the minimum, force an attacker to send a very large number of unsuccessful packets for every given poisoning attempt. This would be trivial to detect; at the minimum you could develop a system that would alert to the attempt and take corrective action if possible (blacklisting an IP, locking down the record, ceasing to cache that record, or other possible applicable measures). This would probably result in a reduction of service, possibly a limited DoS, but is doable without client changes. More importantly, this kind of defence should be present regardless of how “secure” the protocol is; no single line of defense is good enough to even be acceptable. Lacking rudimentary defences beyond crossing your fingers is irresponsible, particularly for major players in the system.

Of course, they are not incented to action, as alluded to.

antibozo July 29, 2008 3:08 PM

Mister Paul> My last analysis of DNSSEC (admittedly a few years ago) was that it was easy to effective DoS.

Well, traditional DNS is pretty easy to DoS also.

In any case, if you are aware of a specific cryptographic attack that will cripple recursive DNSSEC servers that isn’t addressed in the current implementations, I would encourage you to publish your findings so that people can fix the problem.

Also note that as bandwidth goes up, and clients begin to do DNSSEC RR validation in the local resolver, recursive nameservers become less necessary. And once clients do validation locally, recursive nameservers don’t have to do it any more and can revert to plain caches.

Mister Paul> In the current situation, randomized ports would, at the minimum, force an attacker to send a very large number of unsuccessful packets for every given poisoning attempt.

In the current situation, someone with a large botnet is in a position to do this. And in the current situation, there are plenty of other more limited vectors for cache poisoning (e.g. arp spoofing) that don’t rely on volume at all.

Mister Paul> at the minimum you could develop a system that would alert to the attempt and take corrective action

What corrective action? Dump all caches? Will you be able to tell whether one of those poison packets was successful?

I appreciate your effort, but I think you are devising a very complex system that has to be deployed for every nameserver (not to mention client resolver) in order to stop cache poisoning, and, by your own admission, still results in limited DoS. Do you really think that’s easier than building a DNSSEC infrastructure that is resistant to denial of service? And in the long run, doesn’t it ultimately fail anyway? Or is IPv6 going to save us by giving us a bigger source port range?

antibozo July 29, 2008 3:14 PM

Mister Paul> Of course, they are not incented to action, as alluded to.

Oh, yes, forgot to mention: here you’re definitely on to something.

What large company makes millions selling X.509 certs and also happens to operate the .com and .net registries? Three guesses. Conflict of interest? Nah… ;^)

t July 29, 2008 3:18 PM

@mister paul:

Since you are already crafting raw UDP packets for the exploit, you can easily turn around and craft those to DoS with random IPs and MAC addresses. Someone could then use your firewall rules to have you effectively block whomever they wanted.

But if you want to use a botnet, then you’re pretty much limited to Windows, which doesn’t support raw socket creation, so then you end up blocking legit IPs, since the IP normally cannot be spoofed.

How do you prevent caching of:

rand.somedomain.com

where rand is a random string of letters and numbers? You would end up forcing a cache flush.

So is the real answer to get away from using cache servers as bandwidth continues to increase? What’s the performance loss with today’s bandwidth of not caching records?

Bruce Schneier July 29, 2008 3:47 PM

“You say a good security designer ‘anticipates potential types of attacks.’ Can you name a computer protocol or system that was designed from the ground up with security in mind and that has never –ever– been compromised….”

Of course not. What in the world does that have to do with the first sentence? If perfection is your security goal, you’ll never succeed — not in computer security, not in physical security, not in any kind of security.

Andy July 29, 2008 4:04 PM

We could use some security mindset on the implementation as well.

  1. I send out a request for a DNS record.
  2. I get a reply, I cache the result.
  3. I get a second reply with a different result, delete both replies (clear cache) and request again!

The issue seems to be accepting the first reply and just ignoring the second.
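
A minimal sketch of that policy (the data structures are invented for illustration):

    pending = {}  # query key -> first answer seen

    def handle_reply(key, answer, cache, requery):
        # If a second, different answer shows up for the same outstanding
        # query, trust neither: flush the cached entry and ask again.
        first = pending.get(key)
        if first is None:
            pending[key] = answer
            cache[key] = answer
        elif answer != first:
            cache.pop(key, None)
            pending.pop(key, None)
            requery(key)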

Mister Paul July 29, 2008 5:13 PM

@t : “Since you are all ready crafting raw udp packets for the exploit, you can easily turn around and craft those to DoS with random IPs and MAC addresses.”

Don’t assume I am an idiot. Clearly, if you write some kind of generic “oh, a bad request, block that IP” you actually enable DoS instead of preventing it. As I said, take some corrective action. It has to be careful and considered. Going to DNSSEC is not going to make the general problem of additional defences go away; it will need defending too, as no system can be relied on alone. And, as I said, it will be creating a limited DoS situation. And yes, DNSSEC is a nightmare to deploy. And still needs additional protection.

As to secret cryptographic attacks, I have none. Most cryptographic systems tend to have DoS weaknesses, simply because the attacker can spend minimal resources generating things like bogus signatures which cost the defender a lot more resources to check. Anytime you create a large attacker/defender resource imbalance, you tend to leave yourself wide open to DoS attacks, and DNSSEC doesn’t have any particularly clever methods to reduce load, in my recollection.

Allen July 29, 2008 6:12 PM

Wow, a lot of comments on this one; I haven’t read them all, but this one was directed back at me.

Nicholas, I agree with you generally speaking, but I am taking the alternative approach. I say cache glue and ONLY cache glue.

Your example of using a CNAME rather than an NS/A combo is interesting, and in that case, I would agree with you — throw it away, and go look up ‘www.amazon.com’ if you got a CNAME pointing to it.

If you get an NS/A record fine, go there and ask, but don’t cache it since it’s at the same depth you are asking about.

As a stopgap until they straighten this out in BIND, I’ve written a DNS shim that you configure in BIND as a forwarder for zones you want to protect. It never caches anything itself, always asks at least twice (to compare answers), is recursive, and strips all additional/authority records off the final answer it gives back to BIND.

Problem solved. 🙂

I’ll be getting it up somewhere soon under a BSD license or similar for people to play with.
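
The “ask at least twice and compare” idea might look something like this with the dnspython library; this is a sketch under obvious assumptions, not Allen’s actual shim, and the server address below is a placeholder:

    import dns.message
    import dns.query
    import dns.rdatatype

    def ask_twice(name, server, timeout=2.0):
        # Send the same question twice and only believe matching A answers;
        # a blind spoofer who wins one race is unlikely to win both.
        def one_lookup():
            q = dns.message.make_query(name, dns.rdatatype.A)
            r = dns.query.udp(q, server, timeout=timeout)
            return sorted(rd.address
                          for rrset in r.answer if rrset.rdtype == dns.rdatatype.A
                          for rd in rrset)
        first, second = one_lookup(), one_lookup()
        if first != second:
            raise RuntimeError(f"inconsistent answers for {name}; possible spoofing")
        return first

    # addrs = ask_twice("www.example.com", "192.0.2.53")  # placeholder server IP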

antibozo July 29, 2008 6:32 PM

Mister Paul> And yes, DNSSEC is a nightmare to deploy.

This is a common misconception and is simply not true. You should try actually playing with the current tools. For example, take a look at:

http://www.dnssec-tools.org/

It also should be said that specific DNSSEC trust islands already exist, and lookaside validation can be used to extend those into other TLDs without getting the root or even the gTLDs signed (see RFC 5074). This is supported by BIND. Hopefully it won’t be necessary, but that depends on possibly uncooperative TLD registries.

See also ICANN’s most recent statements here, planning for signature of the DNS root by late this year:

http://www.icann.org/en/announcements/announcement-24jul08-en.htm

No one forces you to use DNSSEC on your own systems, so if you perceive potential crypto-based denial of service as a more serious problem than the untrustworthiness of DNS as a whole, you’re free to operate your own nameservers in the traditional way.

Allen July 29, 2008 8:10 PM

@antibozo

Unfortunately, DNSSEC won’t protect us against attacks like these. Even if the root, the gTLDs, and even the target zone were signed, the attacker’s unsigned spoofed response would still have to be honored.

The query going out can request DNSSEC, but it cannot demand it, exactly because it is not widely deployed and it is certainly not mandatory.

Of course, if you know a sensitive site you use (like your bank) is signed (it probably isn’t), you can set up your own caching recursive BIND server and tell it to only trust responses from that domain that are signed, but that’s not blanket coverage by any stretch.

antibozo July 29, 2008 8:18 PM

Allen> Even if the root, the GTLDs, and even the target zone were signed — the attackers unsigned spoofed response would still have to be honored.

Not sure what you mean. If the attacker’s spoofed response cannot be validated it will be discarded. The DNSSEC-aware recursive nameserver knows that the zone is signed already from the zone’s parent, so it can disregard answers that have no signatures.

Where a parent delegates to an unsigned zone, naturally, the unsigned zone is still vulnerable to poisoning.

If the attack is directed at a stub resolver, also, yes, it will still work, with the scope of a single client. This gets resolved when the stub resolvers are replaced with validating resolvers, which has to happen eventually anyway.

Allen July 29, 2008 9:53 PM

antibozo, you got me there, I mixed up my gripes. If we had full end-to-end DNSSEC then yes, it could prevent these attacks.

Since we don’t, you need to configure it in “lookaside” mode, which doesn’t require the key to be there but just trusts it if it’s found. Correct me if I’m wrong; I just set up my first DNSSEC-enabled zone last weekend. 🙂

antibozo July 29, 2008 10:15 PM

Allen> Since we don’t, you need to configure it in “lookaside” mode, which doesn’t require the key to be there but just trusts it if it’s found. Correct me if I’m wrong, I just setup my first DNSSEC enabled zone last weekend. 🙂

Close. Lookaside (i.e. DLV) works around the lack of signing on the root or parent by using another domain as the root for looking up DS records. So if a DS record can’t be found for secure.example.com, the resolver will try secure.example.com.dlv.example.com, where dlv.example.com is the lookaside domain configured in the resolver.

As I mentioned before, we can hope that lookaside won’t be necessary; it’s proposed as a workaround if we can’t get a signed root and signed TLDs.

This is a concise document that explains DLV, among other things:

http://alan.clegg.com/files/DNSSEC_in_6_minutes.pdf

Glad to hear you’re trying zone signing. Not really that difficult, is it?

Joe in Australia July 29, 2008 10:18 PM

It strikes me that part of the problem is that we set up our computers to have an algorithmic approach to security: apply this test, that test, and another test; and process the packet if it passes all three tests. That would be like a documented protocol for airport security that read “search every person who is bearded, a Moslem, or who comes from a non-European country.” There are two problems with this protocol: first, it gives potential attackers a model for designing their attacks; and second, a successful attack against one target will be very likely to work against other targets.

We need two things to solve these problems. The first thing we need is a lot of different organic protocols that will detect suspicious behavior. Any particular site may be running one, many, or all of these protocols. In fact, an ideal protocol would be non-deterministic so that no attacker could be sure that a particular attack would work. The other thing we need is a way to share information on attackers so that the same attack can’t be run sequentially on many different sites.

These techniques wouldn’t stop every conceivable attack. That’s not even desirable: you want attackers to try exploits in the wild so that you have a chance to detect their failures. The great benefit of an organic approach to security is that even successful attacks will be successful against only a random handful of sites; and failed attacks will alert the defenders so that they can test for data corruption caused by successful attacks elsewhere.

lewls July 29, 2008 10:43 PM

I find it funny how people are so quick to point out that Dan wasn’t the first to discover this vulnerability. Sure, he didn’t come up with the idea of cache poisoning, but that’s not what he’s releasing (the concept of cache poisoning). Dan found a novel way of actually exploiting a bug that has been around for many years, so give him credit for that.

(this directed at post #2 (D))

David July 29, 2008 11:13 PM

@FP:

It would be easier to use the presence or absence of a certificate to notice www.badsite.com if one could count on a valid certificate whenever one connects to www.goodsite.com.

Last I looked, while commercial sites tended to have certificates, they didn’t necessarily say anything related to the site. If people would check the certificates and refuse to deal with anybody whose certificate doesn’t match the site, certificates might actually become useful for identification.

Chel van Gennip July 30, 2008 3:18 AM

The DNS protocol was designed in 1983, when networks were slow and the value of information on the network was lower. Better protocols are possible, but migration to a new protocol is difficult. We will have to live with this protocol for some time.

I think it is time to strip DNS of optimizations built for 2400-baud networks, and to review implementations so they use the current protocol at its best.

First, any DNS system should stop using unrequested “extra” information. If you ask for information about www.somedomain.net, the DNS should only use the direct answer: either www.somedomain.net has address 123.45.67.89, or dns.some.server.net is the nameserver for www.somedomain.net. It should ignore all extra information like “and dns.some.server.net has IP address nnn.nnn.nnn.nnn”, and resolve dns.some.server.net by itself. This does take extra queries, but we aren’t on 2400-baud networks anymore.

Using only direct answers to direct questions would have helped against almost every DNS attack until now.

Secondly, the DNS system should use the possibilities in the protocol: e.g., use real random numbers, real random ports, check every bit of information received, drop a query after the first answer even if it is a wrong answer (so a wrong sequence number should result in NXDOMAIN rather than waiting for a better sequence number!), handle incoming requests and outgoing requests on different networks, etc.
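
A sketch of the first suggestion (the helper names here are placeholders, not a real API):

    def follow_referral(ns_name, glue_address, lookup_address):
        # ns_name is the direct answer ("dns.some.server.net is the nameserver");
        # glue_address is the unrequested extra ("...and it has IP n.n.n.n").
        # Ignore the glue and re-resolve the nameserver's address ourselves:
        # one extra query, but we aren't on a 2400-baud network anymore.
        _ = glue_address                   # deliberately unused
        return lookup_address(ns_name)     # fresh, independent A-record lookup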

Bill July 30, 2008 3:50 AM

Finally I disagree with Bruce gasp!

Being on the “patching treadmill” gives you less residual risk than not being on it.

I think what you mean to say is that strategically it’s ineffective, but operationally it can be highly effective.

Ross Snider July 30, 2008 6:24 AM

We need to fix these problems with DNS and with SNMP, and fix those overflows and free() bugs. That’s really important.

However, I think the most important thing the security industry should be doing is creating security awareness. We could stop every type of software exploit and fix all of our protocols completely (not that we could prove that) and still people around the world would be owned. Companies would still be owned. Governments would still be owned. It’s a wonderfully trivial thing to send resume.exe to a company, and it works. And even if it doesn’t, chances are profile.pif will.

More here: http://procrast-nation.com/?p=134

Jason R. Coombs July 30, 2008 1:02 PM

Lewis Donofrio > I thought, believe it or not, that Microsoft solved this problem years ago with the joining of Active Directory and MSDNS by using only authenticated hosts for zone transfers.

–someone correct me if I’m mistaken.

Microsoft released patches for their Windows Server 2000 and Windows Server 2003 products. No patches were released for their Server 2008 product. I’ve been unable to confirm, but I believe this indicates the problem didn’t exist in Server 2008 DNS for the same reason as djbdns – security was built in.

VMSBoy July 30, 2008 2:02 PM

VMS shipped for its first ten years with a rather nasty security hole in the default install: google vms field account. It’s not a security poster child.

John July 30, 2008 2:41 PM

The NY Times has a write-up by John Markoff on this problem in today’s issue. See http://www.nytimes.com/2008/07/30/technology/30flaw.html?ref=technology for the story.

The problem has me concerned. I don’t really understand the glitch, but I do not like the idea of some of my important web sites being spoofed so that I give my important data to a criminal.

The article suggests OpenDNS as a solution. Is it really a solution?

My own idea is to use the IP directly, such as http://199.239.136.245 instead of http://www.nytimes.com. Would this avoid the security problem?

Please advise.

t July 30, 2008 2:53 PM

@John

The “glitch” is pretty straightforward. Imagine you open up the phone book and someone has overwritten people’s numbers with different numbers. Having never seen the original numbers, you wouldn’t be any wiser.

Your solution would not work 100% of the time. Sites like google have multiple IPs pointing to their name. This is so if one server goes down, you will find another one instead. This redundancy helps keep their site active, and the event is transparent to the user.

Also, sites will load content from other sites by name, which could have had their domain poisoned. You would have to turn off in your browser all content not from the IP you provided.

DNS really keeps the gritty details behind the scenes to make the Internet easier to use.

Sites like openDNS, however, capture your Internet activity to mine later. This information is really valuable to third parties.

One issue that has not been touched on is the impact of DNS poisoning on ISP proxies. I would like to hear more information about that.

a July 30, 2008 6:10 PM

John:

Where do you get the IP? Sure, if you have a source that you know to be trustworthy that will tell you nytimes.com is 199.239.136.245, that should work, but as I understand it the entire problem is that you can’t trust the DNS servers that provide those mappings.

And the suggestion for OpenDNS would be a good one, if you could trust that site wasn’t being spoofed. Seems to me that getting everyone to hardcode in references to your DNS server would be a clever move, if you knew the vulnerability was rapidly being patched. Or is that going beyond security-conscious and well into paranoia?

Bub July 30, 2008 8:38 PM

On the flip side, certificates are generally tied to the host/domain name, not the IP, so if you go to http://www.mybank.com, check that it is secure, and check that you are still at www.mybank.com (in the URL bar), in theory you should be OK; if they redirected you to their own site, they should not have a valid certificate (signed by a root authority) that matches the host/domain name. You do have to be careful that the certificate is good (don’t be clicking yes to those self-signed cert warnings, but y’all should know better), and that you haven’t been redirected somewhere (is that URL bar showing where you expected to be?). It might be slight comfort, but relying on DNS for your security is kinda silly anyway.

bub July 30, 2008 8:39 PM

Should have added, I am assuming you are using HTTPS for your logins. If you’re not, then you shouldn’t be worrying about DNS hacks.

John July 31, 2008 12:08 PM

a asked where I get the IP. I use TraceRoute lookup.

However, for many sites, using the IP directly doesn’t work. I have only a few sites for which I need assurance. For some of these, a direct IP works, and for some it doesn’t.

I used http://www.doxpara.com to check my ISP, charter.net. Doxpara says it may be vulnerable.

I don’t think I have a solution.

Ugh.

David Keech July 31, 2008 10:03 PM

The trouble with designing security in from the start is that too many security-aware programmers want their 15 minutes of fame.

If you were working on a DNS project and, say, added port randomisation and non-caching of glue records, nobody would ever know. The vulnerability never existed and the exploit will never happen.

It’s like the guy who is stopped by a policeman in London because he is sprinkling a white powder from a little box everywhere he walks. The policeman asks: “What are you doing there?”
“I’m sprinkling this special dust. It’s to keep the elephants away”
“But there are no elephants in London”
“You see ? Elephant dust works really well !”

Even if the elephant dust is actually working, the guy sprinkling it gets no credit. The elephant hunter who tracks down and traps the rogue elephant once it gets loose in London is hailed as a hero and put on the front page of all the newspapers.

You could develop a secure DNS program and then discover a vulnerability that it wasn’t vulnerable to… but it’s more time-effective to just look for the vulnerability. You will still get just as much fame.

The other half of this issue is that you can’t tell the difference between someone who is sprinkling fake elephant dust and someone who is sprinkling real elephant dust until an elephant shows up.

@John: Traceroute gets its IP addresses from DNS, the same as any domain to IP address lookup does. The only solution is to find a trustworthy DNS server you can use.

I understand that OpenDNS have patched their servers so you can use them if you want. I have issues with some of their other practices but they should be fine from a security point of view. DNS is their entire business whereas DNS is only a tiny part of your ISP’s business and not a part that makes them any money. OpenDNS have the right incentive to make sure their DNS servers are up to scratch.

John David Galt August 4, 2008 11:29 PM

Perhaps DNS servers, and financial sites, should inform their users of their immediate “neighbors” in Internet topography. This would allow detection of spoofing using traceroute.

John Day August 11, 2008 4:43 PM

Thank you for saying (yet again) that patching doesn’t work.

In the 70s, we had the same situation with Operating Systems. Finally there was a definitive government report that concluded that no amount of patching would fix an OS not designed to be secure from the outset. Why do we have to learn this lesson again? It is far more expensive this time than the last time.

Whether we like it or not, a network is nothing more than a very loose distributed OS. Not helped by the fact that the Internet architecture is more an analog to DOS than to a real OS, like UNIX or VMS. Not only was it not designed to be secure, but it is only half a network architecture. It is far from clear that half an architecture can ever be secured.

If DNS is to be secure, it must be considered from the standpoint of a complete architecture. Protecting against even classes of attacks is only patching by another name.

Take care,
John Day
