CallMeLateForSupper June 12, 2015 2:21 PM

Refreshing reprise of discussion of BGP.

That said, contemplating the balancing act that is BGP is no less frightening now than it was twenty years ago. (from the wings): “Yes, it’s a house of cards, but look… it’s still standing! Can’t argue with success!”

From 2nd article (emphasis mine):
“Many networking engineers say that BGP, even after a quarter-century and countless hijackings, REMAINS FAR MORE NOTABLE FOR ITS SUCCESSES than its failures.”

Recalling the subject of Bruce’s recent post,
perhaps what is needed to air-start movement toward replacing BGP is a really, REALLY big BGP failure. I’m thinking along the lines of 99% of world internet traffic routed to sinkholes for name-your-time-interval.

CallMeLateForSupper June 12, 2015 2:24 PM

system ate my URL again. I was pointed to Bruce’s article
“The Effects of Near Misses on Risk-Decision Making”

paranoia destroys ya June 12, 2015 4:18 PM

The articles mention when Pakistan accidentally took down YouTube in 2008 instead of just blocking access in the country.
I already knew the first computer virus was also accidentally created there to combat software piracy but got loose.
We shouldn’t view that part of the world as backwards; after all, both of these were accidental nuisances, not deliberate attacks.
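For context, the 2008 YouTube incident worked because BGP routers prefer the most specific matching prefix: Pakistan Telecom announced a /24 inside YouTube’s /22, and much of the world followed it. A minimal sketch of longest-prefix matching (the prefixes match those reported in 2008, but the routing table itself is illustrative):

```python
import ipaddress

# Hypothetical routing table: the legitimate /22 announcement
# alongside a hijacker's more-specific /24.
routes = {
    ipaddress.ip_network("208.65.152.0/22"): "legitimate origin",
    ipaddress.ip_network("208.65.153.0/24"): "hijacker origin",
}

def best_route(dst):
    """Longest-prefix match: the most specific covering prefix wins."""
    candidates = [net for net in routes if dst in net]
    return routes[max(candidates, key=lambda net: net.prefixlen)]

# Any address inside the hijacked /24 follows the bogus announcement,
# even though the legitimate /22 also covers it.
print(best_route(ipaddress.ip_address("208.65.153.238")))  # hijacker origin
print(best_route(ipaddress.ip_address("208.65.152.1")))    # legitimate origin
```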

Online attacks from other nation states make the news.
Many officials consider this area important enough to justify going to war over.
Maybe those in charge should pay more attention to what damage can be done without someone leaving their country and facing only a 5% chance that the TSA would catch them.

Nick P June 12, 2015 5:18 PM

Such design issues are why the Internet, plus every protocol on top of it, should be considered untrustworthy in any networking design. The old solution was network subsystems that labeled data, had it checked via guards, and ran through link encryptors over untrustworthy networks. This solution can still work. Only a limited subset of the Internet protocols need to be trusted and many can be avoided using proxies on clients/servers. The guards or encryption systems need the strongest security in such schemes. There are COTS and GOTS products doing these sorts of things.
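A minimal sketch of the label-checking idea behind such guards (the label hierarchy and policy here are illustrative, not taken from any real COTS/GOTS product):

```python
# Data carries a classification label; the guard releases a message
# onto a link only if the link is cleared for that label.
ORDER = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET"]  # low to high

def guard_allows(message_label: str, link_clearance: str) -> bool:
    """Release a message only onto links cleared at or above its label."""
    return ORDER.index(message_label) <= ORDER.index(link_clearance)

# Unclassified traffic may cross a SECRET-cleared link...
assert guard_allows("UNCLASSIFIED", "SECRET")
# ...but SECRET data must never leak onto an unclassified link.
assert not guard_allows("SECRET", "UNCLASSIFIED")
```

In a real deployment the guard would also validate message formats and the link would sit behind a link encryptor; this sketch shows only the label comparison.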

In parallel, we can create schemes that are inherently better wherever possible while operating on top of existing physical infrastructure. This includes Internet lines, leased lines, wireless point-to-point links, wireless mesh networks, and even dial-up. They can be tested and analyzed to work out their issues. Over time, interested parties can deploy such alternatives in parallel with or in replacement of Internet tech. Such a strategy will accomplish more than waiting for the Internet to get secure.

Rolf June 13, 2015 5:52 AM

“People don’t break into banks because they’re not secure. They break into banks because that’s where the money is,”

This is not true. People break into any computer that isn’t secure. It’s not a network problem. It’s a software problem.

Grace Nilon June 13, 2015 6:45 AM

The history is really threatened when we were seen the difficulties of website hacked easily, then we are using limitation work and it will be done on late hours due to delay in downloading, where we talk about the confirmation of papers done then please buy custom essay uk which will done your assignments on time.

CallMeLateForSupper June 13, 2015 7:41 AM

Ah-h-h… a refreshingly retro post, from “Grace Nilon” this time. Had not seen one of these abominations here for quite some time. (ESAD, “Grace”)

Kyle Rose June 13, 2015 9:33 AM

I have mostly come to the conclusion that trying to secure the routing layer of the internet is hopeless. Protocols like DNSSEC impose pretty significant costs on service providers and clients/recursive resolvers, and at the same time don’t provide any substantive security guarantees that I’d be willing to rely on. I still want end-to-end security—confidentiality and integrity, in particular—of the data stream, something provided by TLS and support systems around TLS (e.g., certificate transparency).

DNSSEC doesn’t even try to give me the one thing I’d really like out of the routing layer, which is privacy: it provides a base level of integrity…and that’s it.

“Securing” BGP in a similar way would be worth even less, because routing decisions at that level can’t really be checked by clients. Clients don’t have the context to understand why paths are configured with particular costs, nor do they know where state-level actor tap points exist, so it’s not as if they could sound an alarm on the existence of a suspicious route. How would you even define or detect “suspicious”?
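One narrow notion of “suspicious” that is mechanically checkable is origin validation, the idea behind RPKI route origin authorizations: compare an announcement’s origin AS against a published allowlist for the prefix. A hedged sketch (the prefix and ASNs below are illustrative, not real authorizations):

```python
# Hypothetical ROA-style table: prefix -> set of authorized origin ASNs.
authorized = {
    "208.65.152.0/22": {64500},  # 64500 is a documentation/example ASN
}

def origin_valid(prefix: str, origin_asn: int) -> str:
    """Classify an announcement as valid, invalid, or unknown."""
    allowed = authorized.get(prefix)
    if allowed is None:
        return "unknown"   # no authorization covers this prefix
    return "valid" if origin_asn in allowed else "invalid"
```

Note how little this covers: it catches a wrong origin AS, but says nothing about path manipulation, tap points, or intent, which is exactly the gap described above.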

It’s also worth considering that all widely-used protocols on the internet are built in layers: protocols at each layer are designed to create resiliency and add features beyond those which are provided by lower layers. Just as TCP takes IP and adds reliability and flow control, TLS takes TCP and adds a degree of end-to-end security. This is not simply chance or expediency: there’s a reason things are built this way.

Note that I’m not claiming a perfect TLS provides us with all the security we want. The one gaping hole in TLS security that would be solved by a proper security design for the routing layer is that TLS provides confidentiality only of the actual bits being moved around: the metadata (routing information, size of data transferred, etc.) is wide open for analysis. But until something like TOR can accomplish this in a demonstrable (and preferably provable) way in an overlay network subject to constant surveillance, it’s actually counterproductive to add some crypto at the lower layers, declare “we have routing layer security!”, and call it a day.

Not only that, but the biggest problem with TLS—the trust model and how it is implemented in browsers—is not solved by adding DNSSEC-like security to the routing layer.

So, just to be clear, I’m not opposed to adding security to the routing layer… but it has to provide enough value to balance out the cost. What the TOR/Silk Road debacle tells me is that routing layer privacy is a hard problem: a protocol designed explicitly for privacy over untrusted networks nonetheless was vulnerable to targeted analysis. What makes me hopeless on this subject is that at some level even TOR relies on systems that know where packets are going to end up, so it’s not clear that routing layer security of the sort I want is even possible.

LR June 13, 2015 10:36 AM

The article is unfair to Paul Baran, mentioning only his concern for nuclear survivability and then (by implication) grouping him with other early researchers who thought little about security.

Baran’s 1964 RAND series “On Distributed Communications” includes his ninth memo, “IX. On Security, Secrecy, and Tamper-Free Considerations.” It is quite forward-thinking on many aspects of system-level security and on the meta-topic of the need for public discussion of secure design, and it deserves more notice on this blog. (Maybe Mr. Schneier has reviewed this memorandum in one of his books, but I haven’t seen it here or in Crypto-Gram.)

thatnastytruth June 14, 2015 2:49 AM

Signature-based antivirus and three-quarter-decade-interval responses to memory corruption to keep revenue models alive.. done

Oh, and the people who do oddball lectures on security practices that look and sound like the real estate and investment seminars of the ’70s and ’80s, but that suggest oddball social-psychology solutions that apparently don’t work..

Anyone want to use the same method of propagation the Morris Worm did in the 80s in 2015? Guess what: That’s what most botnets and governments are doing as I write this.. Don’t forget to pay for that security software and training.. suckers..
