Evading Internet Censorship

This research project by Brandon Wiley — the tool is called “Dust” — looks really interesting. Here’s the description of his Defcon talk:

Abstract: The greatest danger to free speech on the Internet today is filtering of traffic using protocol fingerprinting. Protocols such as SSL, Tor, BitTorrent, and VPNs are being summarily blocked, regardless of their legal and ethical uses. Fortunately, it is possible to bypass this filtering by reencoding traffic into a form which cannot be correctly fingerprinted by the filtering hardware. I will be presenting a tool called Dust which provides an engine for reencoding traffic into a variety of forms. By developing a good model of how filtering hardware differentiates traffic into different protocols, a profile can be created which allows Dust to reencode arbitrary traffic to bypass the filters.

Dust is different from other approaches because it is not simply another obfuscated protocol. It is an engine which can encode traffic according to the given specifications. As the filters change their algorithms for protocol detection, rather than developing a new protocol, Dust can simply be reconfigured to use different parameters. In fact, Dust can be automatically reconfigured using examples of what traffic is blocked and what traffic gets through. Using machine learning, a new profile is created which will reencode traffic so that it resembles the traffic that gets through and not the traffic that is blocked. Dust has been created with the goal of defeating real filtering hardware currently deployed for the purpose of censoring free speech on the Internet. In this talk I will discuss how real filtering hardware works and how to defeat it effectively.
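The abstract doesn’t spell out Dust’s wire format, but the core idea — reshaping traffic so its observable statistics match those of traffic the filter lets through — can be illustrated with a toy packet-length shaper. Everything below (function names, the two-byte length header, the padding scheme) is an illustrative assumption, not Dust’s actual protocol:

```python
import os
import random
import struct

def shape(payload, allowed_lengths, seed=0):
    """Split payload into packets whose sizes are drawn from a list of
    packet lengths observed in traffic the filter allows (toy model)."""
    rng = random.Random(seed)
    packets, i = [], 0
    while i < len(payload):
        size = rng.choice(allowed_lengths)        # mimic an allowed packet size
        chunk = payload[i:i + size - 2]           # reserve 2 bytes for a length header
        i += len(chunk)
        padding = os.urandom(size - 2 - len(chunk))  # random fill to the target size
        packets.append(struct.pack(">H", len(chunk)) + chunk + padding)
    return packets

def unshape(packets):
    """Strip the length headers and padding to recover the payload."""
    out = bytearray()
    for p in packets:
        (n,) = struct.unpack(">H", p[:2])
        out += p[2:2 + n]
    return bytes(out)
```

A real shaper would also match byte-value distributions and timing, and would learn `allowed_lengths` from observed traffic rather than taking a hard-coded list — that learning step is what the talk’s machine-learning profiles provide.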

EDITED TO ADD (9/11): Papers about Dust. Dust source code.

Posted on August 28, 2013 at 7:07 AM • 31 Comments


phred14 August 28, 2013 7:48 AM


Please keep in mind that the NSA concerns are not about censorship, they’re about monitoring. This “dust” tool is not about evading monitoring, it’s about evading traffic blocking.

Different animals.

Perhaps it supplies even more information to the NSA, because if they can detect Dust being used, they know that you want something badly enough to use a sophisticated tool to get it. Kind of like how simply using Tor is a red flag already.

Alex August 28, 2013 8:37 AM

Proxy servers are not new, and there is a problem with his solution. Anyone that intent on censorship probably has firewalls, and it would not take much effort for a firewall to detect an IP address trying different protocols and simply block that IP address. It might result in a few false positives, but I doubt they care.

Clive Robinson August 28, 2013 8:38 AM

As any comms path requires two or more ends there is a potential to block based on the route IP address/port number as well as traffic type.

Further if the learning process has to work like a matched filter with learned weighting, then the backwards “learning channel” may be vulnerable.

People tend to forget that much of the traffic filtering is “simple filtering.” If such evading systems become popular, then those with the “walls” will adopt a less simple system to compensate. So I’d expect the life of such a system to be related to its perceived “nuisance value.”

Cash Williams August 28, 2013 8:58 AM

I saw Brandon’s talk at Defcon. What struck me was Brandon’s passion for free speech and enabling those who might be censored the freedom to communicate.

Alex – The point of Dust is to disguise traffic to look like normal allowed traffic. Yes, if the IP is later identified, it could be blocked. However, the idea is that the message would already have made it through before the traffic was flagged and the IP blocked.

Steve August 28, 2013 9:33 AM

Now you’re talking, Bruce! I like the idea, and I like even more that you’re putting your talent to use in this way. I look forward to seeing it deployed.

Eric Shelton August 28, 2013 10:21 AM


Although I suppose I see what you are talking about, from a practical perspective, does it matter? To avoid blocking due to fingerprinting, pretty much by definition you are working to avoid detection. If that is done effectively – if the traffic has been rendered undetectable for what it actually is – just what is a surveillance agency going to do?

I suppose a difference may arise because it is clear when blocking is successful, and that would prompt engineering a different way around it. However, there will be no indication when another method successfully identifies Dust packets and is used strictly for monitoring, not blocking.

jones August 28, 2013 10:36 AM

As ISPs and telcos increasingly assume the role of “state actor” with immunity from prosecution under the 2008 FISA amendment, the need for this sort of thing will probably pop up in a number of places.

The “Six Strikes” copyright policy, throttling of BitTorrent users’ traffic, copyright legislation like SOPA, PIPA, ACTA, and CISPA that is thinly veiled surveillance and censorship legislation, and “illegal numbers” are all areas where this sort of government-sponsored industrial encroachment on civil society is already manifesting itself.



Aside from the legislative protections Congress has been providing for the commercial portion of the surveillance apparatus, corporations are able to find their own ways of working around the laws. For example, in 2007, Verizon claimed that handing over customer data to the NSA was protected First Amendment free speech:


It seems pretty clear now that the 2008 FISA amendments gutted the law, by legalizing the abuses the law was originally meant to correct.

Here are some of the abuses FISA was meant to correct, according to Congress’s own investigations in the 1970s:


The “retroactive immunity” provisions of the 2008 FISA amendments were designed to block the legal discovery process (effectively denying the litigants power of subpoena) for a case that was thrown out subsequent to the passage of the law.


The Constitution prohibits “ex post facto” laws, but, unfortunately, most of the case law regarding “ex post facto” laws is about making things illegal retroactively.

Michael Mol August 28, 2013 11:08 AM

Dynamically responding to patterns of traffic blocking makes it seem ripe for an attack. Anything that’s not clearly green traffic, block randomly. If a pattern of reconnects occurs, you’ve found a dynamic duster and can implement whatever further policies you like. Meanwhile, you can guide the duster toward using something relatively unique that you understand, and then tarpit (or apply additional monitoring or fees to) the flows you can then tag as coming from the duster.

Just some random thoughts.

kingsnake August 28, 2013 11:41 AM

Regarding the ransomware, the paltry $300 fine would be a giveaway. The NSA / CIA does not fine, it disappears.

Thunderbird August 28, 2013 1:34 PM

I attended the talk and something that the author emphasized was that most of the gear doing packet inspection is ten years old and does something that’s just good enough. His tool is intended to be malleable enough that you can change it to keep one step ahead. If someone deploys smarter equipment, you punch it up as necessary and continue running. The goal did not seem to be confidentiality, either. The idea is to not have Comcast or some other oppressive entity harsh your buzz just because you were trying to exchange information that they didn’t like.

Substitute China, Dubai, or whoever you like as your version of The Man.

Julien Couvreur August 28, 2013 3:46 PM


The telcos are not helping the NSA to tap into communications “with immunity from prosecution”, but in fear of prosecution.

They have two choices under the law: comply (stay in business and keep quiet) or shut down (see Lavabit, Silent Mail).

conrad6 August 28, 2013 4:33 PM

This work could be very useful to protect industrial control and SCADA systems from semi-professional intrusion or DOS (discarding bad packets at a lower layer with less cost), by providing lots of COMSEC and denial of traffic analysis. I do not see it as at all useful against state actors. They might buy commercial deep packet inspectors, but they can also save and correlate ALL the “out of band” traffic for persons or groups of interest.

That’s the hall of mirrors recursion. Without an out-of-band (OOB) password, Dust doesn’t work. Why not just exchange secret key pairs on this magic OOB channel?

And what’s with a 256 bit public key (fig. 2)? I could break that with a table look-up in RAM on my lesser laptop!

conrad6 August 28, 2013 4:43 PM

OK, I exaggerated. I would have to use my lesser laptop to connect to one of NSA’s lesser key look-up databases. 1024 bits is now problematical. See Stephenson, N., “Cryptonomicon,” 1999, page 66 (Avon paperback, Nov. 2002 printing).

Nick P August 28, 2013 5:42 PM

@ conrad6

“This work could be very useful to protect industrial control and SCADA systems from semi-professional intrusion or DOS (discarding bad packets at a lower layer with less cost), by providing lots of COMSEC and denial of traffic analysis.”

Basic VPN’s, data diodes and guards can already do that. Plus they can get as cheap as an embedded board with Linux. Best to avoid complexity where possible.

“1024 bits is now problematical. See Stephenson, N., ‘Cryptonomicon,’ 1999, page 66 (Avon paperback, Nov. 2002 printing).”

Well, you could just avoid using public keys where possible. Most of my critical comms and projects rely on a shared secret with symmetric encryption. It isn’t likely to be threatened by either classical or quantum computers any time soon. You can use a master key (hand loaded during installation) to move new keys into the system for various sessions, files, etc. There are other aspects, but getting rid of public keys where unnecessary gives the most lasting security benefit.
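The key hierarchy Nick P describes — a hand-loaded master key that never travels, from which per-session keys are derived — can be sketched with Python’s stdlib. The function names are assumptions for illustration, and the HMAC keystream cipher here is a toy; a real deployment would use a vetted AEAD such as AES-GCM or ChaCha20-Poly1305:

```python
import hashlib
import hmac

def derive_session_key(master_key, session_id):
    """Derive a per-session key from the hand-loaded master key, so the
    master itself is never exposed to any single session or file."""
    return hmac.new(master_key, b"session|" + session_id, hashlib.sha256).digest()

def keystream_xor(key, data):
    """Toy symmetric cipher: XOR the data against an HMAC-based keystream.
    Illustration only -- not authenticated, not nonce-managed."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))
```

Because the derivation is one-way, compromising one session key reveals nothing about the master or about other sessions — which is the lasting-security property the comment is arguing for.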

MingoV August 28, 2013 6:28 PM

“… the tool is called “Dust”…”

I am amused because, in the Philip Pullman “His Dark Materials” trilogy, “dust” is the substance that allows movement between parallel universes. It also can be used to distinguish between younger children and those who “changed” during puberty. Now Dust can reencode arbitrary traffic on the internet.

I’m waiting to read about the counteractive agent, Endust.

Brandon Wiley August 28, 2013 7:29 PM

Hi everyone and thanks for your interest in Dust. This is not the proper forum to address all of the criticisms of Dust that I see in the comments, but I did want to drop in and say hello. So I’ll just say that the name Dust is not a reference to Greg Egan, although I will probably read the relevant works due to the name connection. The name Dust is indeed loosely inspired by the Philip Pullman trilogy.

If you’d like to read more about Dust, you can check out the source code and documentation and there are also two papers about the first version of Dust, which was obfuscation only with no polymorphic protocol shaping.

Also feel free to email me if you’d like to discuss the project.

conrad6 August 28, 2013 9:06 PM

@nick P

Maybe I failed to assert my crypto paranoia enough. I believe in secret exchange of secret keys as the best way of securing information transfer. But so what!!! It doesn’t scale well, and the state actors can always pull your fingernails out for the secret key.

zree August 29, 2013 6:51 AM

Dust is clever, but this paper suggests that approaches based on hiding traffic by merely emulating unblocked protocols are doomed, and that a better approach is to actually use the “cover” software instead. (e.g. actually run Skype and tunnel traffic through it rather than trying to disguise traffic as Skype traffic).

jones August 29, 2013 7:00 AM

@ Julien Couvreur

No, the telcos absolutely ARE immunized from prosecution.

Lavabit is a small, entrepreneurial webmail firm, not a major telecommunications carrier.

The telcos were immunized by an act of Congress, the 2008 FISA amendment. The Supreme Court has recognized it as valid. The telcos are immunized.


Mr. Schneier’s blog from yesterday details the complicity of the telcos in this operation.


Ken August 29, 2013 9:51 AM

If DUST can configure “something” to bypass obstacles … couldn’t that “something” include, or be, viruses & malware?

Thunderbird August 29, 2013 11:05 AM

If DUST can configure “something” to bypass obstacles … couldn’t that “something” include, or be, viruses & malware?

The idea is to enable you to communicate with whoever you want to. Since they can send you anything they like, the answer is “yes.” Of course, whatever you use to watch for malware and the like can still be used, can’t it?

Dirk Praet August 29, 2013 11:35 AM

@ Nick P

Well, you could just avoid using public keys where possible. Most of my critical comms and projects rely on a shared secret with symmetric encryption.

The problem with shared secrets is that they do not protect you from a RIPA Section 49 Notice (in the UK) or from rubber hose decryption.

When it comes to data encryption, full disk encryption solutions like TrueCrypt, Bitlocker and LUKS suffer from a minor flaw in the sense that the key to decrypt the data is stored inside the volume header and is protected only by the passphrase/shared secret. In addition, a tool like TCHunt can be used to detect TrueCrypt volumes, weakening or in practice rendering useless a plausible deniability argument.

A solution mitigating the problem of decryption under duress (legal or physical) consists of a combination of Shamir’s Secret Sharing Scheme (SSSS) and Tomb, a wrapper around cryptsetup and dm-crypt. SSSS allows data to be encrypted with N keys, requiring at least M keys to decrypt it. This allows multiple people to be involved in the decryption of data, removing the ability of any one person to compromise the data. Using Tomb, you can separate the encrypted data (the tomb) from the decryption key (the .key), allowing the key to be stored and transported separately from the data itself.

Having separated the encrypted data store from the key, we just need to encrypt the .key with SSSS. Since .key decryption and data access require the cooperation of multiple SSSS key/passphrase holders, rubber hose or legal coercion – at least in theory – becomes largely ineffective. The obvious downside of this way of working is that when more than N-M of the key holders lose their keys or get their names on a disposition matrix, the data is toast.
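The M-of-N split Dirk Praet describes is textbook Shamir secret sharing: embed the secret as the constant term of a random degree-(M-1) polynomial over a prime field, hand out points on it, and recover the constant by Lagrange interpolation. A minimal sketch (toy parameters; in practice you would use the `ssss` utility he mentions, and a secret encoded into the field):

```python
import random

PRIME = 2**127 - 1  # Mersenne prime; the secret must be smaller than this

def split(secret, n, m):
    """Split secret into n shares, any m of which can reconstruct it."""
    # Random polynomial of degree m-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total
```

With fewer than M shares, every candidate secret is equally consistent with the points held, which is what makes coercing any single key holder pointless.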

Daniel August 29, 2013 12:23 PM

“Perhaps it supplies even more information to the NSA, because if they can detect dust being used, they know that you want something bad enough to use a sophisticated tool to get it. Kind of like simply using TOR is a red flag already.”

Exactly. This is the Catch-22 that everyone is in already. If you obfuscate, then you draw attention to yourself by the simple fact of obfuscating. The good news about obfuscation is that you can hide; the bad news is that everyone knows you’re hiding. On the other hand, if you do not obfuscate, you have nothing to protect you and are the proverbial sitting duck. The good news about not obfuscating is that no one knows you have anything to hide; the bad news is that if they seek to locate you, it’s easy.

I liken it to the matador in a bull ring. Do you raise the red flag to entice the bull and hope you can dodge it? Or do you put the flag down, turn your back to the bull and go about some business, and hope you don’t get gored?

There are no easy or right answers to this question. Good luck!

Coyne Tibbets August 29, 2013 1:49 PM

What can one say? Everything I’ve ever read says lie detectors are no more accurate than a coin flip.

So what will they write on the complaint? Maybe, “You’re flipping our coin flips!!”

Ah, we do love our snake oil, don’t we?

Nick P August 29, 2013 4:40 PM

@ Dirk Praet

“The problem with shared secrets is that they do not protect you from a RIPA Section 49 Notice (in the UK) or from rubber hose decryption.”

I left that stuff out to focus on elimination of public key crypto issues. Your approach seems good. I also didn’t know about Tomb.

I’ll add that rubber-hose resistant designs can also be built on top of a good key management system (excluding local govt in the threat profile). In this regard, the system would be kept in the safe spot. The policies are expressed as a sort of stored procedure written in a scripting language like Lua. So, you’d locally encrypt the volume with the key, the regular software stores the key to the KMS, the KMS would (per policy or key type) break it up using a stored procedure, and the key could be regenerated later via one or more procedure calls. The field user has neither the ability to override the policy nor knowledge of the key. Yet, the whole thing is still centrally managed and integrates with other tech in the organization.

I’ve also wondered about creating a brochure for the rubber-hose resistant product listing its features and strengths. The person could show it if forced into that situation. This might help in some cases. It’s been argued it might hurt in others. I wish I had more data to go on…

I’ve just found that certain kinds of attackers give up after seeing proof that their methods won’t work (e.g. OpenBSD’s name, wink) and taking a few stabs at the defense to test that belief. In the broken nose [1] threat model, this means a person will probably endure some pain before release. That release should also happen without data compromise.

[1] ‘Broken nose’ seems more realistic name than ‘rubber hose.’ Paints a better picture of what to expect: beating, breaking, mentally or physically. I mean, how many people are lucky enough to just get hit with a rubber hose? 😉

MarkH August 30, 2013 12:54 PM


If the 256-bit public key belongs to an ECC system, then (by published estimates) it is stronger than a 2048-bit RSA key, and equivalent to a 128-bit symmetric cipher.
