Circumventing Communications Blackouts

Rangzen looks like a really interesting ad hoc mesh networking system to circumvent government-imposed communications blackouts. I am particularly interested in how it uses reputation to determine who can be trusted, while maintaining some level of anonymity.

Academic paper:

Abstract: A challenging problem in dissent networking is that of circumventing large-scale communication blackouts imposed by oppressive governments. Although prior work has not focused on the need for user anonymity, we contend that it is essential. Without anonymity, governments can use communication networks to track and persecute users. A key challenge for decentralized networks is that of resource allocation and control. Network resources must be shared in a manner that deprioritizes unwanted traffic and abusive users. This task is typically addressed through reputation systems that conflict with anonymity. Our work addresses this paradox: We prioritize resources in a privacy-preserving manner to create an attack-resilient, anonymity-preserving, mobile ad-hoc network. Our prioritization mechanism exploits the properties of a social trust graph to promote messages relayed via trusted nodes. We present Rangzen, a microblogging solution that uses smartphones to opportunistically relay messages among citizens in a delay-tolerant network (DTN) that is independent of government or corporate-controlled infrastructure.

This is exactly the sort of thing I was thinking about in this essay.

Posted on August 14, 2013 at 7:43 AM • 17 Comments

Comments

CallMeLateForSupper • August 14, 2013 8:11 AM

The subject paper is all about trust, yet it resides on a server that does not play nice - i.e. does not play at all - with HTTPS. That's not the same Cal Berkeley "sauce" I liked back in the '60s.

CitizenNothing • August 14, 2013 9:02 AM

It's (partially/completely?) funded by a US Federal grant (from the US State Department’s Bureau of Democracy, Human Rights and Labor (DRL)). Is there an issue with this? Circumventing oppressive governments, while being funded by one?

Clive Robinson • August 14, 2013 9:04 AM

One of my major complaints about all web browsers is their lack of support for human roles.

A fairly easy way to anonymously build reputation was proposed years ago. Basically you create a self-signed PKcert which has anonymous data fields. A user then signs communications with the private key and appends the public key in some manner.

The recipient can easily check the signed message against the appended PKpubcert and store this away with what is in effect a reputation counter.

As time goes on the PubCert increases or decreases its reputation depending on how it was used.
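Sketched in code, the counter idea might look something like this (a toy Python model; the actual signature verification is abstracted to a boolean, and all names here are illustrative, not from any real system):

```python
import hashlib
from collections import defaultdict

class ReputationStore:
    """Per-key reputation counters, keyed by a fingerprint of the
    appended public key; the key itself carries no identifying fields."""

    def __init__(self):
        self.scores = defaultdict(int)

    @staticmethod
    def fingerprint(pubkey_bytes: bytes) -> str:
        # A short hash of the public cert stands in for a pseudonymous identity.
        return hashlib.sha256(pubkey_bytes).hexdigest()[:16]

    def record(self, pubkey_bytes: bytes, signature_valid: bool, useful: bool) -> int:
        """Update and return the reputation of the key that signed a message."""
        fp = self.fingerprint(pubkey_bytes)
        if not signature_valid:
            self.scores[fp] -= 5      # forged signature: heavy penalty
        elif useful:
            self.scores[fp] += 1      # good traffic raises reputation
        else:
            self.scores[fp] -= 1      # spam or abuse lowers it
        return self.scores[fp]
```

A real implementation would verify an actual signature (e.g. Ed25519) over the message before touching the counter; the point here is only that the counter attaches to the cert, not to a person.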

The problem is that to assist anonymous behaviour the human needs a different cert set for each "role they have", which you would think was fairly obvious?

Apparently not to browser and similar app-level developers...

They appear to want to remove any hope of role-based usage, which obviously allows anonymous behaviour to be stripped away...

SparkyGSX • August 14, 2013 9:40 AM

@Clive: if I understand correctly, your proposal would mean that users must use the same certificate for a prolonged period of time in order to build a positive reputation. This means that the authorities would still be able to determine which messages have been sent using the same certificate, and therefore presumably by the same person.

At the same time, a "fresh" certificate must have a small but positive reputation in order to be able to communicate and begin building a reputation, so what would stop abusive users from just generating new certificates all the time?

If I understand correctly, if the available bandwidth is limited, a client which acts as a router would prioritize packets with high-reputation certificates, and thus drop packets with low or even negative reputation values, correct? This would also imply that it would be very difficult or impossible to begin building a reputation on a new certificate if the network is heavily used.
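The prioritization described here (relay high-reputation traffic, shed the rest when bandwidth runs out) can be modeled as a bounded relay buffer that evicts the lowest-reputation message first. This is a toy Python sketch, not anything from the paper:

```python
import heapq
import itertools

class RelayQueue:
    """Bounded relay buffer: when full, the message whose sender has
    the worst reputation is dropped to make room."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []                 # min-heap of (reputation, seq, message)
        self._seq = itertools.count()   # tie-breaker for equal reputations

    def offer(self, reputation: int, message: str):
        """Enqueue a message; return the evicted message, or None."""
        heapq.heappush(self._heap, (reputation, next(self._seq), message))
        if len(self._heap) > self.capacity:
            return heapq.heappop(self._heap)[2]  # evict lowest reputation
        return None
```

Note that this exhibits exactly the cold-start problem raised above: a new certificate with reputation near zero is always the first to be shed on a busy relay.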

I was thinking about a system that would use a "proof of effort" to determine which packets will be prioritized, perhaps with a routing model that finds the route to the destination requiring the minimum effort to "pay" for the traffic. As the load on a router increases, it would increase the effort it requires from a client in order to relay the packets.
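A minimal hashcash-style "proof of effort" along these lines (a sketch of the general technique, not the commenter's actual design): the sender searches for a nonce whose hash has a required number of leading zero hex digits, and a loaded relay simply raises the difficulty it demands.

```python
import hashlib
import itertools

def prove(payload: bytes, difficulty: int) -> int:
    """Find a nonce such that sha256(payload || nonce) begins with
    `difficulty` zero hex digits. Expected cost grows ~16x per step."""
    for nonce in itertools.count():
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify(payload: bytes, nonce: int, difficulty: int) -> bool:
    """Checking a proof costs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: proving is expensive, verifying is one hash, which is what makes flooding costly for the attacker while staying cheap for relays.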

Such a system might have the property (which may or may not be desirable) to favor small units of data (such as plain text) over large amounts, such as video. From a technological standpoint, this might be an advantage, because it reduces the network load and would provide many people with a limited amount of bandwidth each, but from a social standpoint, pictures and video might be desirable because they are more objective and much more useful as evidence.

I would think such a system might be more resilient to DoS attacks, which a repressive government would certainly attempt, because an attack would require a massive amount of processing power, and a large number of clients in the network, in order to block enough routers to make communication by other participants impossible.

On the other hand, it does require the participants to have a relatively large amount of processing power, which could be a severely limiting factor when mobile and/or relatively old devices are used by the participants.

Thunderbird • August 14, 2013 9:48 AM

A fairly easy way to anonymously build reputation was proposed years ago. Basically you create a self-signed PKcert which has anonymous data fields. A user then signs communications with the private key and appends the public key in some manner.

The recipient can easily check the signed message against the appended PKpubcert and store this away with what is in effect a reputation counter.

As time goes on the PubCert increases or decreases its reputation depending on how it was used.


Interesting idea. It makes the reputation certificate a high-value item (assuming the user actually builds a high reputation), so it would have to be protected carefully. A drawback would seem to be that there's no distributed scoring method. I can only read and score so much stuff myself. I suppose it would be simple enough to layer on some kind of distributed reputation thing on top of this, presumably using reputation-signed messages to aggregate trust through some trusted third party?

dbCooper • August 14, 2013 9:55 AM

Well. At some level this appears to be one department of the US Gov't attempting to circumvent the surveillance activities of another department.

If I remember correctly there was a similar situation with GPS signals in the 1990's. The US Dept of Defense degraded GPS accuracy using "Selective Availability" whilst the US Coast Guard, in the interest of navigation for mariners, overrode the SA signal.

secret police • August 14, 2013 11:07 AM

Creating a mesh darknet using phone wifi is a good idea, unless you live in Syria, where the regime mortared any signal they found. You would also need to build a custom Android with Mobiflauge to create plausible deniability, since regime thugs would stop you on the street and demand to see your phone, looking for evidence of this mesh app. Additionally you would have to spoof the wifi MAC address while in hidden PDE/Mobiflauge mode, as that is easy to harvest.

Joe Buck • August 14, 2013 11:17 AM

If a social trust measure is used to determine which anonymous users are trustworthy, it seems that the government side can use the traditional tactic: infiltrators who pretend to be committed activists, or agents provocateurs who aim to discredit the movement by committing vile acts and convincing others to do the same. If they're good, they can wind up being the dominant voices on the darknet.

Figureitout • August 14, 2013 11:41 AM

Don't trust this solution, based on the backers, as someone stated; and you need a smartphone, which I gave up over a year and a half ago. And like Joe Buck said, bringing in new faces might seem like the movement is expanding when it's really just being infiltrated... I've seen it way too many times.

Sorry Bruce, these types of networks are going to be very small and not really talked about much. And they aren't going to be running on smartphones if the people are really serious about it.

Clive Robinson • August 14, 2013 4:48 PM

@ SparkyGSX,

    "If I understand correctly, your proposal would mean that users must use the same certificate for a prolonged period of time in order to build a positive reputation. This means that the authorities would still be able to determine which pieces have been sent using the same certificate and therefore presumably by the same person."

If you stick with the basic idea yes, but it's fairly easy to extend it such that you can limit the number of messages sent under one key.

I should make clear that the key is used for just one role of the many an individual might have, and as such the key can be disposed of at any time. If no other steps are taken, then all that's lost is the reputation that has been built around that single role, not the user's overall reputation. Look at it this way: one of most adults' roles in life is "employee", and an individual's reputation should not be harmed if the organisation they work for goes out of business (unless they were complicit in it ceasing to trade); likewise another role, such as private bank account holder, should not be affected.

The method used to transfer reputation from one PubKey to another can be done in a number of ways. However, most PKI ways are directly traceable and thus not appropriate. Another way is via tokens: if a user wishes to replace their current key with a new one, something along the following lines can be done,

1, User requests server for reputation token signed by their current private key.
2, The server sends back the token encrypted with the users current pub key.
3, The user decrypts with the private key to get the token.
4, The user tells the server with their current key to erase the copy of the current key held on the server.
5, The user then destroys the current key pair.
6, The user encrypts the token with the server public key, signs it with the new private key, appends the new public key, and sends it to the server.
7, The server uses the token to copy the old reputation value into the new reputation counter and then deletes the old counter.

Obviously there needs to be a bit more in the nitty-gritty details of the protocols (especially step 6) to prevent various attacks, and the user needs to trust that the server will not store the old information.
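The seven steps above might be modeled roughly like this (a toy sketch: HMAC under a server-side secret stands in for the public-key encryption and signing of steps 2 and 6, and the key IDs are placeholders):

```python
import hashlib
import hmac
import secrets

class ReputationServer:
    """Toy model of the token exchange: a one-time token carries the
    reputation value across a key change, and the old counter is erased."""

    def __init__(self):
        self._secret = secrets.token_bytes(32)  # server-side MAC key
        self.reputation = {}                    # key id -> reputation counter

    def issue_token(self, key_id: str) -> bytes:
        # Steps 1-2 and 4: bind the current score into a token and
        # erase the old counter, so the server keeps no copy of it.
        score = self.reputation.pop(key_id)
        payload = str(score).encode()
        tag = hmac.new(self._secret, payload, hashlib.sha256).hexdigest().encode()
        return payload + b"." + tag

    def redeem_token(self, token: bytes, new_key_id: str) -> int:
        # Steps 6-7: verify the token and credit the new key.
        payload, tag = token.rsplit(b".", 1)
        expected = hmac.new(self._secret, payload, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("forged token")
        self.reputation[new_key_id] = int(payload.decode())
        return self.reputation[new_key_id]
```

Even in this toy form the caveats above apply: the user must trust the server to really forget the old key, and a real protocol would have to prevent token replay and stop the server from linking the issue and redeem events.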

That being said a central server is a single point of failure so a more distributed approach is required.

And @ Thunderbird,

With regards to the penalty of a bad reputation or an unknown reputation, you need to think not just of bandwidth but also forwarding delay and data caps.

Whilst this will not stop DoS attacks on any given entry node (nothing can reliably), it does protect other downstream nodes which are not visible to the attacker (think about mix nets etc).

I've been trying to remember a paper written by some bods over at the UK's Cambridge Labs about setting up ad-hoc networks between sensor nodes in a hostile environment, with the ability to limit an attacker's ability to compromise the entire network. If I remember correctly one of the authors was Ross J. Anderson.

aboniks • August 15, 2013 7:13 PM

http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/...

Looking at how Rangzen is supposed to work, it seems like the reputation filtering to deprioritize infiltration has a problem.

The longer the network is in use, the more opportunities there are to build a false reputation.

So you can build your network before it's actually needed (a comms blackout), but then it's more likely to have been compromised. Or, you can try to build it in real time after a blackout, but then you have bandwidth problems while disseminating the code, and you're starting from scratch building your reputation web in a hostile environment where the consequences for individuals trusting the wrong people might be literally fatal, depending on the adversary.

Add to that the fact that once a device has built up a trust reputation, the device itself becomes a liability to the network if/when the adversary gets their hands on it.

All that aside, it's interesting that the app is named for the Tibetan word for "liberty", and that the funding through the DRLIF grant documents specifically mention applications tailored for local uses. I don't know much about the infrastructure or uptake of smartphone technology in Tibet...do any of you?

Is there anything on the technical front that makes this sort of networking particularly applicable there? Obviously the political climate makes this an attractive target locality if you're trying to get grant dollars from the State Department.

I can see why it'd be an attractive region for State to dump funds in, in order to create an information-gathering network that would stay up long enough to provide a tactical or strategic edge after a blackout. One government's dissident is another government's disposable surveillance node, after all. Provide the illusion of security and anonymity, and they'll keep providing intelligence when they might otherwise have done the sensible thing and gone to ground.

Dirk Praet • August 16, 2013 10:13 AM

I would very much like to see such a technology being developed outside the US, and preferably under GPL.

Nick P • August 16, 2013 4:18 PM

@ Dirk Praet

"I would very much like to see such a technology being developed outside the US, and preferably under GPL."

I prefer a BSD-like license for something like this. It also allows me to do little obfuscations on my systems without releasing them in the source. A few examples include swapping ciphers, hidden port-knocking schemes, swapping module implementations for code diversity, and making small OS-level modifications for syscall restrictions. Tricks like these have prevented quite a few attacks from working well.

Of course, the main issue is integration. There's quite a bit of good work out there, in both academia and commercial products, that has non-GPL licensing. Integrating that with GPL code can be impossible, difficult, or something in between. It's enough of an issue that one of the OKL4 microkernel's use cases is running proprietary and GPL code side by side on a phone without license contamination. Imagine if that wasn't an issue and the microkernel layer could be focused on what it's designed to do.

I get the idea behind the GPL. It's just that its restrictions seem to get in the way of its success in INFOSEC. Now, a GPL-"like" license with reasonable exemptions might work. Many groups use LGPL or GPL with specific components licensed differently for that reason. BSD still allows for the most flexibility and widespread corporate adoption.

Scott "SFITCS" Ferguson • August 18, 2013 6:41 AM

@Nick P

I prefer a BSD-like license for something like this. It also allows me to do little obfuscations on my systems without releasing it in the source.

If you're not releasing modified code then you are under no legal obligation to distribute the modified source, regardless of which version of the GPL the original was released under.

The key being releasing.

I'm presuming that if you're adding obscurity to increase security to Open Source code you wouldn't be releasing the changes, as it'd defeat the purpose. e.g. modifying LUKS so that it will take two keys - one being a duress key that shreds and overwrites critical data with convincing but basically innocuous data, so that a major problem is substituted for a minor problem.

NOTE: that's just an example - I'm sure you could come up with something better. I'd hope so, because it's as useless as BIOS passwords in "critical" situations, i.e. if your encrypted drive is seized by competent authorities your bootloader isn't used to decrypt the drive (a trusted bootloader is used to boot an imaged version of the drive on trusted hardware).

Fortunately there aren't many competent authorities - their intake screening measures often ensure that :)

Nick P • August 18, 2013 9:34 AM

@ Scott

"The key being releasing. I'm presuming that if you're adding obscurity to increase security to Open Source code you wouldn't be releasing the changes, as it'd defeat the purpose."

GPL FAQ: "The GPL does not require you to release your modified version. You are free to make modifications and use them privately, without ever releasing them. But if you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL."

You were spot on. :) I appreciate that. So, if I'm sure I won't have to ever distribute outside the organization, I can use the GPL code. The other areas are still risky.

Scott "SFITCS" Ferguson • August 18, 2013 11:14 AM

@Nick P

So, if I'm sure I won't have to ever distribute outside the organization, I can use the GPL code. The other areas are still risky.

I "suspect" that the added security of obscurity is negated by distribution - so licensing obligations wouldn't come into play. In the example I gave of a modified LUKS with an "obscurity" (it doesn't work the way it would be expected to work, and has a hidden function), its strength is that it's not expected. But the "strength" is at its maximum when only you know that two passphrases can be accepted. Call it the Benjamin Franklin asymmetric knowledge advantage assurance theory ;)
Someone with more time and interest could probably invent a formula to determine how much the strength of that same obscurity is decreased by the number of people who know it's possible (and factor in how much I've just reduced it). Additionally, once it's known that obscurity has been used before, the value of obscurity is reduced in the future (although that has other major factors involved).

The other areas are always risky, regardless of license. Fortunately (for you) it's late so I'll spare you a protracted reasoning on why I believe Open security (in all things) is more reliable in the long-term than Obscure (closed).


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.