Pervasive Monitoring as Network Attack

New IETF RFC: "RFC 7258: Pervasive Monitoring Is an Attack," which declares that pervasive monitoring is a technical attack that protocol designers must mitigate where possible.

Slashdot thread.

EDITED TO ADD (6/7): Hacker News thread.

Posted on May 19, 2014 at 1:44 PM • 6 Comments

Comments

tz • May 19, 2014 3:06 PM

How about rewriting the crypto in IPv6 so it might actually be implemented, using PFS and other best practices, and marking them SHOULD. There is suspicion that the NSA made it too complex so no one would bother. We need something ssh/ssl-like as a socket option.
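Something like tz's "ssh/ssl as a socket option" can be approximated today with Python's standard `ssl` module. A minimal sketch (the function names are mine, and the cipher string is one illustrative PFS-only policy, not a recommendation from the post) that pins TLS 1.2+ and ephemeral-key cipher suites:

```python
import socket
import ssl

def pfs_client_context() -> ssl.SSLContext:
    """Build a client TLS context restricted to forward-secret key exchange."""
    ctx = ssl.create_default_context()            # sane defaults + certificate verification
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Only ECDHE suites for TLS 1.2: every session uses an ephemeral key (PFS).
    # (All TLS 1.3 suites are forward-secret by design.)
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
    return ctx

def open_secure(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TCP connection and wrap it -- roughly 'encryption as a socket option'."""
    raw = socket.create_connection((host, port))
    return pfs_client_context().wrap_socket(raw, server_hostname=host)
```

The point of the sketch is that the hard policy decisions (minimum version, PFS-only suites, hostname verification) live in one place, which is what a SHOULD-level default in a standard would effectively mandate.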

Thoth • May 19, 2014 9:22 PM

Well, it's a good attempt to reform the IETF requirements to disallow pervasive monitoring, but not a perfect one. I still believe that the key to preventing security disasters is a well-informed public, but alas, security is always the boring and hard part that only "people with specs behind huge screens and 10101010" do. Another option is to package security into a very usable and open interface with high portability and compatibility.

The one thing we cannot anticipate is the subtle subversion of standards, which had already occurred long before the RSA BSafe libraries were found to be subverted by the NSA. What we can do is choose algorithms that have been published and selected in an open and transparent manner (like the AES/SHA-3/eSTREAM competitions). Having more options is also a good thing (i.e. if we don't trust NIST-AES, we can use the original Rijndael or other AES finalists).

What we have in abundance in the crypto world is large data-encipherment algorithms (64/128-bit block ciphers and above). What we lack are small ciphers (16/32-bit block sizes) for tiny devices, and recent papers on IACR's ePrint reflect the growing amount of research into tiny ciphers. The fact that we have so many internet-connected objects and so little data security is probably due to a deficiency in this area (tiny ciphers). Having a variety of different-sized ciphers would help a lot.
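As a concrete illustration of how small such ciphers get: Speck32/64, a published lightweight cipher (NSA-designed, so trust it or not as you wish), encrypts a 32-bit block under a 64-bit key using nothing but 16-bit additions, rotations, and XORs. A sketch following the published specification (22 rounds, rotation amounts 7 and 2):

```python
MASK = 0xFFFF   # 16-bit words
ROUNDS = 22

def _ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def _rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def speck32_64_encrypt(x, y, key):
    """Encrypt the 32-bit block (x, y) under key = (l2, l1, l0, k0): four 16-bit words."""
    l = [key[2], key[1], key[0]]   # l0, l1, l2
    k = key[3]                     # k0
    for i in range(ROUNDS):
        # Round function: add-rotate-xor on two 16-bit halves.
        x = ((_ror(x, 7) + y) & MASK) ^ k
        y = _rol(y, 2) ^ x
        # Key schedule reuses the same round function, with i as the round constant.
        l.append(((k + _ror(l[i], 7)) & MASK) ^ i)
        k = _rol(k, 2) ^ l[-1]
    return x, y
```

The published test vector (key 1918 1110 0908 0100, plaintext 6574 694c) should encrypt to a868 42f2; a cipher this small fits comfortably on the kind of tiny devices Thoth is describing.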

Programmers and coders must be taught proper secure coding from their first day in school, instead of schools simply rushing all students through their degrees and diplomas and out of the school gates ASAP.

Clive Robinson • May 19, 2014 11:55 PM

This RFC is very likely to cause some very significant issues down the line, due to the "Law of Unintended Consequences".

Mitigating PM requires protecting both the traffic content (message/payload etc.) and the traffic itself (metadata: routing, type, size, etc.).

These normally fall into two fields of endeavour: cryptography for the message, and various anonymising techniques to render traffic analysis impotent.

Cryptographic techniques can be used at any level of the computing stack, but are generally considered to fall into "end to end" or "point to point"/"link" encryption. Unfortunately, whilst encrypting a message is comparatively easy, and likewise encrypting a link, encrypting the other layers of the network stack is not. Cryptography at the other layers has all sorts of difficult-to-address issues, and due to the lack of open-community research in these areas, we really do not know how many there are, of what type, or what effects they will have on both usage and privacy.

Traffic analysis, or metadata analysis, was started in WWII by just a couple of people at Bletchley Park, who applied statistical analysis not to the message/payload of a communication but to the communication itself. It quickly became clear that it was often more important to know about the communications than their content.

The fundamental nature of DOD IP is "packet" switching rather than the older and more traditional "circuit" switching; whilst this is good for network utilisation, it is very bad for anti-PM for a whole host of reasons. One of them is that the traditional method of preventing hostile traffic analysis is fixed-bandwidth, link-encrypted communications channels between network nodes, where "content stuffing" is used to provide traffic at the full channel bandwidth continuously. One primary assumption of this model is that the nodes are under your control, not the enemy's or under the enemy's influence — an assumption the Ed Snowden revelations have shown does not hold for the Internet. The current Internet structure is not suitable for anti-traffic-analysis operation, and even if it were restructured to allow it, the likes of the NSA et al. will own the nodes one way or another, as they will continue to find ways into router technology either covertly or overtly.
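The "content stuffing" idea Clive describes amounts to padding every transmission to identical fixed-size frames, so an observer sees only a constant stream of indistinguishable cells whether the link is busy or idle. A toy sketch (the 512-byte cell size is an arbitrary choice echoing Tor's cells, not anything from the post):

```python
CELL = 512           # fixed frame size on the wire (arbitrary choice)
BODY = CELL - 2      # 2 bytes reserved for a big-endian length prefix

def to_cells(payload: bytes) -> list[bytes]:
    """Chop payload into fixed-size cells. An empty payload still yields one
    all-padding cell, which is what keeps an idle link 'stuffed'."""
    chunks = [payload[i:i + BODY] for i in range(0, len(payload), BODY)] or [b""]
    return [len(c).to_bytes(2, "big") + c.ljust(BODY, b"\x00") for c in chunks]

def from_cells(cells: list[bytes]) -> bytes:
    """Strip the length prefixes and padding back off at the receiving node."""
    return b"".join(c[2:2 + int.from_bytes(c[:2], "big")] for c in cells)
```

With cells emitted at a fixed rate and link-encrypted, frame size and timing leak nothing; the cost, as the comment notes, is paying for the full channel bandwidth continuously — exactly what packet switching was designed to avoid.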

The above sets out the current "playing field" of the Internet; what we should consider next is what will happen with this added requirement of anti-PM in new protocols etc.

Put simply, it will push developers into prescriptive behaviour, or abdication to other layers, or both. The end result will be a mess, like IPv6 and IPsec being abused into the base of new protocols as the first step to acceptance.

For a whole host of reasons, both IPv6 and IPsec have "failed in the market", and the reasons for this should be investigated, otherwise lessons will not be learned and future protocols will be condemned to the same fate.

Further, IPsec is really only aimed at "confidentiality of the payload on a single link", and thus ignores metadata PM entirely, so it is at best only a small part of dealing with PM. Furthermore, the way it is designed makes it impractical for use on high-connectivity hosts, and it is grossly inefficient in other usages.

Such a problematic protocol is not a good place to start building other protocols on, and if it is used en masse with current protocols it will break the Internet badly.

But even if it's not IPsec, there is the issue of "a single point of attack": we have recently seen a series of problems with SSL/TLS at both the protocol layer and the implementation layer. Such single points of security failure are very bad news, and by comparison to what is needed, SSL/TLS is a very simple system, which does not bode well for any implementation required by this PM-prevention initiative.

That is not to say I'm against the idea in principle; I'm not, I'm actually all for it. It's just that I can see significant problems ahead, and the way the IETF currently produces specifications is not the best way to go about this task. NIST has similar issues, as do many other standards organisations, and industry has difficulties dealing with other standards systems, which again does not bode well.

But even if all of the above issues get effectively resolved, the PM-preventative stance may well be "stillborn"...

Recent changes at the FCC, as a nod to commercial interests, are going to remove/ease the current requirements for agnostic behaviour from ISPs and carriers towards their customers. Thus preferential carrying is back on the table, which means it does not matter how much PM prevention you build into new protocols if carriers, for "commercial" or other interests, will not carry them. The only solution is thus either to keep agnostic behaviour or to make all use of protocols lacking PM prevention obsolete...



Schneier on Security is a personal website. Opinions expressed are not necessarily those of Resilient Systems, Inc.