egeltje August 4, 2014 5:16 PM

Nice example of Kerckhoff’s principle. Full algorithm is explained, key is kept secret.

Thomas August 4, 2014 7:01 PM

As of this moment, there’s no patch.

Except for backups.

If you’ve lost your data, it’s lost. It doesn’t matter if it was malware or catastrophic hardware failure.

If you value your data, back it up.

Anura August 4, 2014 7:03 PM


“Except for backups”


The point is to price the ransom so people will pay; charge too much and they’ll cut their losses.

Auguste Kerckhoffs August 4, 2014 7:20 PM

“If you value your data, back it up.”

Agreed. This applies to backups, as well. The people who bought a Synology NAS device for backup storage are fools for not having a backup of their backup.

If you value your data, back up the backup of your backup’s backup!

Jason R. August 4, 2014 7:27 PM

I believe what is meant is that backups should be separate from live storage, a fundamental tenet of DR prep. It’s unlikely that an offsite backup system would be owned at the same time as your local NAS.

Although I do appreciate your sentiment, because in reality for many SMB people with minimal IT training, NAS = backup, storage, whatever.

Trammel August 4, 2014 7:48 PM

Because this ransomware encrypts individual files, the encrypted files will be propagated to any backups.

You have to hope you’ve got well versioned backup sets available.
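One crude pre-rotation check is an entropy spot-check on the files being backed up: text and office documents that suddenly read as near-random bytes may have been encrypted in place. A minimal Python sketch (the 7.5 bits-per-byte threshold and 64 KiB sample size are illustrative assumptions, not tuned values):

```python
import math


def shannon_entropy(data: bytes) -> float:
    """Bits per byte of the input; encrypted or random data approaches 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)


def looks_encrypted(path: str, threshold: float = 7.5, sample: int = 65536) -> bool:
    """Heuristic: flag a file whose first `sample` bytes look near-random."""
    with open(path, "rb") as f:
        return shannon_entropy(f.read(sample)) > threshold
```

Expect false positives on already-compressed formats (JPEG, ZIP, video), so treat it as a tripwire prompting inspection before the last clean backup set rotates out, not as an oracle.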

Anura August 4, 2014 8:18 PM


Backups are immutable; if it’s mutable it’s redundancy. It’s just a matter of how long you keep backups for.

Gweihir August 4, 2014 9:09 PM

@Auguste Kerckhoffs:

A backup is offline and off-site. Everything else is just a copy.

npz August 4, 2014 11:14 PM

I don’t agree about backups being a panacea.

What if your backup is corrupted? Ransomware does not need to notify you immediately. And if I recall, CryptoLocker didn’t notify you until a while later, after many files were already encrypted. With frequent backups, there’s a good chance your backup is also bad.

And sometimes frequent or on-demand backup is required for updates of critical data, which at least temporarily amounts to RAID-1 mirroring. Previous versions of backed-up data won’t suffice when what matters is the current version, or whenever you’re dealing with live data.
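That risk is one argument for a long, tiered retention window rather than keeping only the most recent copies. A minimal grandfather-father-son pruning sketch in Python (the 7/4/12 counts are illustrative policy, not a recommendation):

```python
from datetime import date


def gfs_keep(dates, dailies=7, weeklies=4, monthlies=12):
    """Return the subset of backup dates to keep under a simple
    grandfather-father-son policy: the N most recent backups, plus the
    newest backup from each of the most recent M ISO weeks and K months.
    Older history survives even if every recent backup was taken after
    the ransomware started encrypting."""
    dates = sorted(set(dates), reverse=True)
    keep = set(dates[:dailies])
    seen_weeks, seen_months = set(), set()
    for d in dates:
        week = d.isocalendar()[:2]          # (ISO year, ISO week)
        if week not in seen_weeks and len(seen_weeks) < weeklies:
            seen_weeks.add(week)
            keep.add(d)
        month = (d.year, d.month)
        if month not in seen_months and len(seen_months) < monthlies:
            seen_months.add(month)
            keep.add(d)
    return sorted(keep)
```

Even if everything backed up in the last few weeks is already encrypted, the monthly tier still reaches back most of a year.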

Figureitout August 4, 2014 11:46 PM

What if your backup is corrupted?
–Yeah, that’s the question no one will answer b/c they can’t. As in, storing malware in your “secure backups”. The situation is this dire and totally out of control.

So, by making backups you are essentially preserving the malware…

Gerard van Vooren August 5, 2014 1:57 AM

What I don’t like about stories like this is that they aren’t technical at all. I don’t buy it that a Synology NAS gets hacked just by plugging it in. We don’t live in the Windows XP era anymore. What NAS are we talking about? An Intel or an ARM one? There must be some service running that is exploitable. It can’t be that hard to figure out which service that it is and what kind of access rights it has.

I have read too many of these boogie man stories… Be specific.

Andy August 5, 2014 2:35 AM

So far it seems like it happens to boxes running the 4.3 firmware. That dates from early-to-mid 2013 and had known vulnerabilities.

Andy August 5, 2014 2:57 AM

Correction: 4.3.3810 and 4.3.3827 are known vulnerable.

Others might be too; the exact attack vector is still unclear. Affected machines can override the admin GUI lockout easily, which of course does nothing for files that are already encrypted. But if the thing is still in the process of encrypting, that’s a silver lining: replace the ransomOS with a fresh, updated copy of the genuine Syno OS.

Well… I kept bitching that while cloud access to the NAS from your handheld/phone etc. may be a great idea, all these damn self-cooked quick-setup wizards and home-grown protocols are a very bad one. Wondered why not stick with known and proven protocols. Got told to go with the times.

There you go Synology.

Matija August 5, 2014 3:01 AM

Anonymization technologies such as Tor, with the addition of randomization mechanisms and/or principles in organizing/drawing scientific work inside a predefined network of institutions, have great potential in minimising conflicts of interest between clients (usually corporations) and researchers (scientific institutions, accredited labs etc.). Even payment could be processed this way.

What do you think, Bruce?

SchneieronSecurityFan August 5, 2014 3:28 AM

Could the search engine named Shodan or something similar be used to find the targeted NAS systems?

Andy August 5, 2014 3:35 AM

For giggles try Google search “inurl:webman/index.cgi”

Unbelievable how many of those (probably business critical) are hooked to the net with a public IP.

Thoth August 5, 2014 3:48 AM

Ouch, that’s gonna hurt for many.

I am guessing default admin credentials would get you right into other people’s “trusty” NAS and all hell breaks loose.

G August 5, 2014 6:08 AM

Stupid attackers. They should use machine learning or whatever and come up with a better estimation for the value of the data. Some people would probably be willing to pay many thousands of $ for it, some would probably pay a maximum $100 (and those should be taken, too 😛 ).

Wm August 5, 2014 6:41 AM

“I don’t agree about backups being a panacea. What if your backup is corrupted?”

Fearing every possible problem that might arise in a situation is not a cure for the problem. You have to simply do your best, realizing that all those possible problems are most probably not going to arise. Keeping a second backup of everything will reduce such anomalies even further.

QG August 5, 2014 7:14 AM

I bought a Synology NAS last year. Although I think some of their technology is good (but not cheap), I decided quite early on that I would not connect mine to the Internet because I was a bit sceptical about the device security at the time.

Now I’m very glad I made that decision but have come to the conclusion that I paid for a load of software options in DSM that I will never really be able to trust to keep my data safe.

The average consumer has no chance of assessing the risks inherent in devices this complex.

Daniel August 5, 2014 8:50 AM

With BitCoin being such a versatile currency, would it not be possible to tag something on the transaction to make it traceable?

Bob S. August 5, 2014 9:24 AM

All of which brings us back to TOR again. Notice how the news on the new exploit went dark in an eye blink? Could it be the powers that be decided they wanted to have a little time with it before doing the right thing?

I quit using TOR last week because my computer bogged down and was doing freaky things, without explanation. As soon as the TOR browser was deleted…like magic…everything was beautiful again. Coincidence? Probably.

TOR must be under vicious attack by every cyber-crook and government in the world these days.

I hope the courageous developers working on TOR win the War on TOR.

But, I wouldn’t bet on it.

Incredulous August 5, 2014 10:01 AM


Bitcoin is anonymous, but it is traceable. Stolen bitcoin has been recovered as it passed through an exchange, but this makes people uncomfortable, since it is like confiscating certain known hundred-dollar bills after a bank robbery. People may unintentionally have received stolen bitcoin. (In the exchange seizure, I believe the bitcoin was transferred directly, or nearly directly, after the robbery, and in bulk.)

There are bitcoin mixers that take many sources of bitcoin in and then send to many destinations, all in one transaction, confusing the trail. But my understanding is that thieves often sit on stolen bitcoin for a long time before risking a transaction.

EJ August 5, 2014 10:06 AM

Question du jour is what encryption is being used – the Synology’s built-in encryption or something added by the hacker? Synology NAS devices are fairly closed, so it’s unclear if a hacker is able to add their own. If they are simply activating the Synology’s internal encryption, wouldn’t having the Synology encryption function already active prevent this ransomware attack from the start?

PD August 5, 2014 10:31 AM

The average consumer has no chance of assessing the risks inherent in devices this complex.

This is the problem. Folks are buying complicated devices at Best Buy, plugging them in at home with default settings, and not maintaining them. This might be fine for a television, but a device that exposes your personal data to the public internet in the name of accessibility is just asking for trouble.

Jacob August 5, 2014 2:02 PM


This is a notice from February, and is related to another vulnerability from that time period.

Michel August 5, 2014 3:36 PM

“Synology NAS devices are fairly closed”. You are very wrong there. Synology NAS devices are nothing other than a standard Linux server with a nice GUI slapped onto them. Once you get in, you can run anything you want including, as done here, your own code to encrypt files.

Steeeve August 5, 2014 4:56 PM

Man, if I got this I’d have a huge crisis of confidence. How can you trust someone who just hacked into your computer for monetary gain? Would you really expect to get your data back in the first place?

Even if you pay the ransom how difficult would it be for copycats to lazily just corrupt your data and take the payment once you’ve enabled this method of extortion?

Wendy M. Grossman August 6, 2014 5:25 AM

Actually, the big issue here for people affected by this ransomware is not the cost in dollars/pounds/othercurrency but the fact that most people would find it very difficult technically to take the steps necessary to obtain bitcoins, install Tor, and make the payment. If you don’t have those things on hand already, my guess is that if you have good backups you’ll opt for the backups on time saved alone. In fact, I would guess that a significant number of people wouldn’t be able to get all that stuff done and make the payment deadline, even if they want to.


Lambert August 6, 2014 5:34 AM

Perhaps the solution is to libel ransomware providers (not sure whether that is legal, but given that you’re libelling criminals, I see no moral problem) and ruin their reputation. DoS the support too.

RE: Backup
‘Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it ;)’ Linus

Clive Robinson August 6, 2014 8:10 AM

@ Wendy,

In some jurisdictions setting up Tor is a suspicious activity at the least, and in others there is a significant chance of 9mm lead poisoning.

In other jurisdictions not reporting ransom demands let alone paying one is a criminal offense…

Then of course there are other issues, I would not want to try telling a judge in the next few years that the electronic records the prosecution want are not available due to having been lost to ransomware… in the UK for instance under RIPA you could be facing several years in prison. Then if there is an element of “business records” which is likely for many small businesses then there is another world of hurt waiting for people via other legislation in multiple jurisdictions…

This is a problem that is only going to get worse, as various legislators put in place “rights stripping” and “asset sequestering” legislation as a way to raise revenue lost to large businesses that use various schemes and havens to avoid paying taxation, and can afford to fight in court. That often becomes a war of attrition which, in some jurisdictions, the tax authorities don’t win due to lack of resources caused by government cutbacks etc.

The magic OS which does magic things August 6, 2014 9:35 AM

“I quit using TOR last week because my computer bogged down and was doing freaky things, without explanation”

Wow, without explanation? ^_^

I bow to you sir as you are a fountain of wisdom.

Tee Hee August 6, 2014 1:19 PM

“I would not want to try telling a judge in the next few years that the electronic records the prosecution want are not available due to having been lost to ransomware.”

Now there is an ingenious idea Clive. Why is my hard drive encrypted? It’s being held for ransom! I have no idea what the password is, your honor, go ask the crooks.

Someone should make a facade for Truecrypt WDE that makes it look like ransomware.

Now there would be plausible deniability.

Somebody August 6, 2014 4:05 PM

Re: backups

Is it passé to verify your backups on an independent machine*? I used to pull a few random files (and a few non-random files) every few weeks to make sure the tapes were readable. Ransomware would have to have the same password on both machines to pass the test. Custom ransomware might be able to do this, but at least they’d have to work for it.

  • and architecturally distinct, but that was what was available, not part of the plan.
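That spot-check workflow might look something like this in Python (the paths and the sample size of 20 are illustrative; a real run would execute on the independent verification machine against a restore of the backup set):

```python
import hashlib
import os
import random


def sha256_file(path, bufsize=1 << 16):
    """Incrementally hash a file so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()


def spot_check(live_root, restored_root, samples=20, seed=None):
    """Compare a random sample of files between the live tree and a
    restore performed on an independent machine. Returns the relative
    paths whose contents differ or are missing from the restore."""
    rng = random.Random(seed)
    rel_paths = [
        os.path.relpath(os.path.join(dirpath, name), live_root)
        for dirpath, _, names in os.walk(live_root)
        for name in names
    ]
    mismatches = []
    for rel in rng.sample(rel_paths, min(samples, len(rel_paths))):
        restored = os.path.join(restored_root, rel)
        if not os.path.isfile(restored) or \
                sha256_file(os.path.join(live_root, rel)) != sha256_file(restored):
            mismatches.append(rel)
    return mismatches
```

An empty result only says the sampled files restored intact; mixing in a few hand-picked critical files, as described above, covers the data you actually can’t afford to lose.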

Wendy M. Grossman August 6, 2014 5:17 PM

Clive: I imagine in such a case the burden of proof would be on you to show that all your data had in fact been eaten…I suppose this will be yet another reason why businesses choose to store everything in the cloud.


Nick P August 6, 2014 7:11 PM

@ Teehee

It’s the reason some of us use open Wifi. 😉

@ Somebody

It’s a good idea if you don’t have solid backup software. If you do have solid software, then simply doing a full restore on your own or another machine can catch problems.

Append-only or versioning filesystems on several similar machines is a technique closer to yours, though.

Ellie Kesselman August 7, 2014 3:38 AM

This seems to be what’s going on. Synology got hacked by malicious (well, thieving) bitcoin miners about 8 months ago. That’s when Synology users noticed,
“exceptionally high CPU usage in Resource Monitor: CPU resource occupied by processes such as, minerd, synodns, PWNED, PWNEDb, PWNEDg, PWNEDm or any processes with PWNED in their names.”
Synology issued an update for DSM 4.3 and older, as of February 2014.

The current, “new” attack only affects Synology NAS devices running the (older) versions of DiskStation Manager, e.g. DSM 4.3. So the only users who are vulnerable now are those who neglected to install the patches for DSM 4.3 when they were released months ago. CSO posted again a few hours ago, saying that DSM 5.0 users are unaffected, as the patch was included as part of the version upgrade from DSM 4.3 to 5.0. Um, that’s the best I can tell so far.

  • The Marginal Revolutionary, economist Tyler Cowen, heard about this, raised the alarm on Twitter and linked here. That’s why I noticed.

Rob Lewis August 18, 2014 3:01 PM

There are data integrity models to deal with this. Does no one remember BIBA or Clark-Wilson?

Ransomware is nullified if the original data set can’t be destroyed.

Nick P August 18, 2014 3:28 PM

@ Rob Lewis

Microsoft actually implemented Biba in Vista onwards as Windows Integrity Control. They put IE in lowest integrity by default. Unfortunately, the implementation TCB is where attacks actually happen and it’s large in all mainstream systems. This, along with inherently unsafe architecture, is why we have a continuous stream of attacks. This risk was my main gripe with Trustifier, as well. Particularly kernel code in TCB. The good news is there’s half a dozen projects that are working on those problems in a way that maintains legacy compatibility. I posted a few here.

Feel free to incorporate any ideas I posted into your own product so long as nobody else has an I.P. claim on it. I’d rather the stuff get out there in products than try to hog it for money.

Rob Lewis August 19, 2014 10:12 AM

@Nick P,

Well you’re right (but not as far as Trustifier goes). I would guess that about 80-90% of infosec activity focuses on controlling the IT vulnerability surface. Simplistic views of node security are what open the doors to things like RAM-scraping malware, etc., when combined with weak architectures (no actual chain of trust), so it’s pretty hopeless. We actually talk about building Defender Chains using verifiable trustworthy nodes and pathways as a better strategy than one based on kill-chain backtracking and trying to find and plug every vuln that could be a possible attack vector. There is nothing there for the SMB space anyway.

To top it off, there is very little done for privilege and credential management closer to the crown jewels and in production environments. It’s pretty hard to do much in discretionary access environments. Once attackers are inside, it’s easy. As insider threat tech, KSE had to look at that. Another false hope is crypto, because data must also be protected in clear text and in processing. KSE provides per user digital separation and caveat access privilege control at the kernel to address this, because the ability to separate sensitive data from system admins with passwords is necessary!

Your basis for a gripe with Trustifier was based on the usual assumptions, as far as I could see, but as I said KSE does not work the same. If threats are unable to exploit vulns, do either of them matter anymore? Our work with DOD was ultimately leading to a discussion about how Trustifier could protect 66 million lines of code in the Future Combat Systems stack, without patching, until they pulled the plug on the whole project.

Trustifier, which now goes by KSE (for kernel security enforcer) does incorporate a paradigm shift in trusted computing base design, and implementation, one based on algebraic models and incorporating formal methods in its design. Algebraic analysis allows us to draw a bold line about separation and protection, and back it up. Any node with KSE installed becomes a separation kernel/reference monitor, which, as you know, is mathematically complete, and they have a secondary primitive to communicate with each other. Its security primitives are mathematical objects; they can’t be bent. It lets one control what needs to be controlled.

It was comments like yours about Trustifier that made me throw up my hands in exasperation and cut back my activity for a few years. I didn’t really know how else one could demonstrate protection without patching except by saying, “pen test these systems where we have intentionally left exploitable vulnerabilities open”, give copious tips and hints and as well as place zero restrictions on tools/weapons that can be used.

The DISA report was kind enough to point out the vulns we intentionally left open. However, they offered no suggestion as to WHY they were not able to take advantage of them, nor why they were unable to access target directories even when given admin privileges and we pointed them to them.

Did they think we were just having a lucky day? This wasn’t a one-off. This is the result every time someone challenges KSE, and it can be repeated anytime.

So when all dumb-hats can come back with is, “we found vulnerabilities”, (even though they couldn’t exploit them), I just shake my head and relegate them to the ranks of the doomed, because infosec can’t win until it escapes the vuln by default model.

As far as integrity controls, KSE has had MLI from the start: a simple data or code ranking for integrity over all users protects it. So it’s there, with a verifiable TCB and a usable framework.

Your list is a good one, but much of this research doesn’t address the practical problem of usability, and the fact that the world simply can’t afford to rip and replace all the systems and devices, or re-write all the code.

That is where the value of something that is usable, and that can be dropped on existing stock commercial systems, even production systems, comes through, not to mention protection without patching. FYI, KSE models could be applied at the silicone level, I’ve asked. As a non-geek, that kind of stuff strains my brain. My efforts were, and are, simply to present the tech as a viable solution to raise the bar quickly in the interim.

Nick P August 19, 2014 1:26 PM

@ Rob Lewis

“Your basis for a gripe with Trustifier was based on the usual assumptions, as far as I could see, but as I said KSE does not work the same. ”

“Trustifier, which now goes by KSE (for kernel security enforcer) does incorporate a paradigm shift in trusted computing base design, and implementation, one based on algebraic models and incorporating formal methods in its design.”

I’m going to ignore the red flag phrase “paradigm shift” for now. 😉 Let me put out my assumption about KSE in the clear so you can accept or correct it (with specific details). My understanding of it is based on the original paper and this recent picture.

  1. An application makes a system call to the kernel code.
  2. The system call is intercepted by KSE and a security policy applied to it. The policy might look at what’s going into the system call, the permissions of the calling process, timing, resources used, and more. I’m going to give you the ideal situation and assume you do every conceivable security check.
  3. The data then goes into the kernel, where the regular kernel code executes it.

Is this how your reference monitor works? If so, it’s neither a paradigm shift nor secure. It’s just another example of a “system call interposition” security scheme, and those have a long history. Googling that phrase should give you plenty of designs. If you use this approach, then it’s well known in the INFOSEC community that it leaves in these risks:

  1. Kernel mode

(Note: Argus Systems’ Pitbull, your competitor, was defeated by and admits the risk of kernel mode attacks. Among others.)

  2. Host firmware that user-mode data can touch
  3. Peripheral firmware that user-mode data can touch
  4. DMA attacks if not having an IOMMU
  5. Covert timing channels from processor hyperthreading or cache
  6. Side channels over the power line or a peripheral’s physical connectors
  7. Passive or active RAM attacks
  8. If not trusted boot: bootjacking or modification of code on the HD from another computer

Of these, the kernel mode code is the one most hit by TLAs and malware. There are also TLA attacks on many of the others on this list. Even lay people with suitable tools plus step-by-step instructions can do (and have done) Nos. 1, 4, and 8 on enterprise systems. I mention these attacks because your company promotes the products for insider threat protection in situations where presumably there’s valuable I.P./secrets and fairly intelligent users. Infiltration, bribery, etc. are used in addition to hacking by the organizations trying to steal such secrets, according to most reports on industrial espionage. So, the risk the end user poses to the machine itself must also be considered. It’s why I often recommended tamper-sealed thin clients in this situation, with rather rigorous requirements on the server and admin side as well.

So, if your product is a system call reference monitor, it’s provably insecure if the kernel or firmware is insecure. Additionally, your DOD Red Team exercise doesn’t add extra confidence if their constraints were (a) they had root and (b) they only attacked user-mode code. Real-world, sophisticated attackers are right now hitting systems at firmware, kernel and other levels. If that wasn’t simulated, then the test proves nothing except that your product can isolate a specific kind of user-mode security failure. There’s a ton of products that can do that, many of which are cheap or free. 😉 Further, offerings like Turaya Desktop, VxWorks MILS, LynxSecure or INTEGRITY-based Dell SCS can isolate an app, an entire system (with kernel), or both. Many add trusted boot and IOMMU, with one supporting seamless encryption of all data leaving the machine as well.

So, KSE would seem to be an interesting addition to sys-call market. I see it as competition to products such as Argus Pitbull, Solaris Trusted Extensions, and SELinux. It might allow security-related management of data despite user-mode app failures. It’s inferior in protection to offerings which apply protections to kernel mode code or allow apps to be isolated on minimal TCB’s. The solution I gave you (free of charge I add) long ago was to combine your product with a hypervisor-layer technology that protects it and the kernel. And put that on at least one reference piece of hardware with hardware security methods and drivers coded in a safe language (eg Ada or Cyclone).

Since we talked, dozens of college students on a budget have produced things like this, so I’m pretty sure a company with money and smart engineers can pull it off. Recently, I considered combining something like KSE with Cambridge’s CHERI BSD. Its performance is still good enough for security-critical appliances. Regardless, I have confidence that, with work, your product’s security will get closer to its security claims over time. You people just need to fix that whole “firmware and Linux kernel is in the TCB” problem that you share with both CMW- and Xen-based offerings.

“FYI, KSE models could be applied at the silicone level, I’ve asked.”

Yeah, Karger at IBM did a CMW-style model at the chip level in 2010. They actually sold it in limited production to some customers. Probably patented it, too.

The more important point is that chips are made of “silicon” and breast implants use “silicone.” It might be possible for you to build a solution using the latter. I lack the physics expertise to be sure. I know silicone is more comfortable to work with. Meanwhile, I suggest your engineers focus on silicon as it’s more practical despite being less attractive. 😛

K-Veikko August 22, 2014 12:03 PM

If you used the ZFS filesystem with snapshots, and had enough free space on the disk, all those encrypted files could be restored by rolling back to a pre-infection snapshot.
