New DeadBolt Ransomware Targets NAS Devices

There’s a new ransomware that targets NAS devices made by QNAP:

The attacks started today, January 25th, with QNAP devices suddenly finding their files encrypted and file names appended with a .deadbolt file extension.

Instead of creating ransom notes in each folder on the device, the QNAP device’s login page is hijacked to display a screen stating, “WARNING: Your files have been locked by DeadBolt”….


BleepingComputer is aware of at least fifteen victims of the new DeadBolt ransomware attack, with no specific region being targeted.

As with all ransomware attacks against QNAP devices, the DeadBolt attacks only affect devices accessible to the Internet.

As the threat actors claim the attack is conducted through a zero-day vulnerability, it is strongly advised that all QNAP users disconnect their devices from the Internet and place them behind a firewall.
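A quick, rough way to act on this advice is to check whether a device's web admin interface is even answering on its usual ports. The sketch below is a generic TCP reachability probe; the address and port list are placeholders (8080 and 443 are commonly used by QNAP's web UI, but check your own device), and a clean result from inside your LAN says nothing about exposure from the outside.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address and ports -- substitute your NAS's LAN address
# and whatever ports its admin UI actually listens on.
for port in (8080, 443):
    state = "OPEN" if port_open("192.168.1.50", port, timeout=0.5) else "closed"
    print(f"port {port}: {state}")
```

For a real exposure check you would run the same probe against your public IP from a host outside your network, or use your router's port-forwarding and UPnP tables.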

Posted on January 26, 2022 at 10:04 AM


Sean Lynch January 26, 2022 11:48 AM

The advice to put NAT (Network Address Translation) devices behind a firewall seemed strange until I read the referenced article and discovered it is NAS (Network Attached Storage) devices that are being attacked.

Ted January 26, 2022 1:09 PM

This is so sad. Both BleepingComputer and QNAP have forums for users to share information on this. Some people have paid the ransom and are reporting having trouble getting a decryption key. It looks like at least one person was able to possibly get it to start decrypting with some finagling.

From the BleepingComputer forum: “I had to collaborate with some crypto gurus to figure out how to pull it up. It was DEFINITELY not straightforward.”

Clive Robinson January 26, 2022 1:34 PM

@ ALL,

… the DeadBolt attacks only affect devices accessible to the Internet.

As I keep saying, my first question is,

“What is the VALID business case for that device to be connected to external communications?”

And I’ve yet to receive one not primarily based on “MBA Mantra” via the “Chicago School” that originates from the “stupidity of neo-cons” (also why we have such significant supply-chain vulnerabilities).

But even with a “valid business case” there are many ways you can protect systems. Heck, we knew that back last century with certain types of “data warehousing”…

But this is an interesting angle,

“The ransomware gang further states that there is no way to contact them other than through Bitcoin payments.”

With the Ransomers telling the victims,

“… they should pay 0.03 bitcoins (approximately $1,100) to an enclosed Bitcoin address unique to each victim.”

It’s an interesting “secure communications” use of the public ledger (not unlike the idea of getting bots to communicate through Google Cache searches or postings to random blogs).

But one wry smile, anyone else spot the irony in,

“The attacks started today, January 25th,”

After all, it’s “Burns Night”[1]… So a little further irony,

For those who took the wrong path with their QNAP NAS boxes, and thus suffered the “low road” fate.

[1] oh and happy Australia Day to our readers who are enjoying some summer weather. Oh and if you are Scots-Australian, hopefully your memory will come back for the weekend 😉

Andrew January 26, 2022 2:02 PM

New vulns enabling remote exploitation of NAS devices left on the public internet are like a changing of the seasons – it seems to happen about four times a year.

Keeping your data out of the cloud, and still accessible remotely and out of 3rd parties’ hands, isn’t easy. There are probably users, though, who would be content to keep the data local but aren’t aware their devices are using UPnP to enable remote access.

Segment the internet-exposed NAS from the rest of the network! -OK didn’t solve data destruction/ransomware
Use a VPN to reach the NAS! -OK still have to manage vulnerabilities on the VPN device (lots of those lately)
Only allow inbound connections from specific IPs! -I don’t think this describes the typical consumer use case of mobile access. Setting up a VPS with a static IP and hardening that proxy is both prohibitive and may not work with your intended clients.

Put it in the Cloud! -Confidentiality
Encrypt it in the Cloud! -Key material confidentiality, assuming you use the data not just back it up on NAS
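The “only allow inbound connections from specific IPs” option in the list above amounts to an allowlist check, which can be sketched in a few lines with the standard library. The networks below are placeholder documentation ranges, not a recommendation:

```python
import ipaddress

# Hypothetical allowlist: an office /24 plus one fixed home address.
# (These are RFC 5737 documentation ranges, used here as stand-ins.)
ALLOWED = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.7/32"),
]

def is_allowed(source_ip: str) -> bool:
    """Return True if source_ip falls inside any allowlisted network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED)

print(is_allowed("203.0.113.42"))  # inside the /24 -> True
print(is_allowed("192.0.2.99"))    # not allowlisted -> False
```

As noted above, this breaks down for the typical consumer case of mobile access, where the client has no stable source address.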

Clive Robinson January 26, 2022 3:11 PM

@ Andrew, ALL,

Keeping your data out of the cloud, and still accessible remotely and out of 3rd parties’ hands, isn’t easy.

No, but it gets a lot easier if you start out with three basic assumptions,

1, You cannot stop an externally connected system being attacked.

2, Make the system such that it is effectively read-only data.

3, If confidentiality is a requirement then you need to seriously think about alternatives, as databases and encryption do not really work[1].

The solution is a server behind a data diode, that pushes data to a second server that is externally visible.

If the second server gets attacked, you effectively “bin it”[2] or do a full restore (preferably both).

It’s not perfect and it has a downside for many as it’s “read only” but with a little care and thought you can build on it.

[1] Currently you cannot, for instance, effectively “range search” a database of encrypted record fields without having the field encryption key on the server, which means it’s not just vulnerable but probably gone if you have been APT attacked.
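The point in [1] can be illustrated with a few lines of Python. SHA-256 stands in here for a deterministic cipher (a hash is one-way, not encryption; it is used only to show the effect): exact-match lookup on ciphertexts still works, but the ordering of the plaintexts is destroyed, so a server holding only ciphertexts cannot answer a range query.

```python
import hashlib

def pseudo_encrypt(value: int) -> str:
    # Stand-in for a deterministic cipher: same input, same output,
    # but the output reveals nothing about numeric order.
    return hashlib.sha256(str(value).encode()).hexdigest()

salaries = list(range(30_000, 80_000, 1_000))
tokens = [pseudo_encrypt(s) for s in salaries]

# Equality search still works server-side...
print(pseudo_encrypt(42_000) in tokens)
# ...but sorting the ciphertexts does not sort the plaintexts, so a
# query like "between 50k and 60k" needs the key on the server (or an
# order-revealing scheme, which leaks in its own right).
print(sorted(tokens) == tokens)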

[2] I’m of the “APT view”, in that if any externally connected system gets breached there is a finite probability that its flash memory in drives, IO cards and motherboard has had some form of malware installed, so that “easy access” for the attackers remains even after what most consider a full restore. Actually doing Flash ROM restores is possible, if and only if you know where all the Flash ROM is and you have original copies to restore from. As the odds of knowing where all the Flash ROM is are slim, the probability that you won’t get it all is high, so I would increasingly suggest that just “burn the hardware” is the best course of action. Obviously you need to be aware of this choice prior to purchasing systems, as you can save a lot of pain and cost right from the start.

JonKnowsNothing January 26, 2022 3:50 PM

@Clive, @All

re: “MBA Mantra” via the “Chicago School” that originates from the “stupidity of neo-cons”

RL anecdote tl;dr

Eons ago a telecom company bought one of these items and demanded that I use it for my application.

The application was 3rd-party sourced with local implementation.

I said NO.

We did 99 rounds over this “Yes You Will, No I Won’t” because the BigDogs hadn’t even considered whether the 3rd-party sourced application was “compatible” with that configuration. It was not.

It turned out the BigDogs had bought this overpriced hunk of hardware because “Everyone Had One” and not because there was a need or use case for it. And there was no one in the company who could find a single use-case to make the expense “look good” for reporting. [a rug was eventually found]

This situation was not a unique purchase, lots of companies I worked at did the same thing: they bought the hype and got left with the goods.

There are the software grifters, who “promise to convert the database”, but if you listen very carefully all they really say is that they can build a matching schema table, not the logic that populates the fields.

If you want the logic you have to pay their consulting rates, which eventually leads to “a different application” purchase.

Then there are the cases where BigDogs change Leads, and the NewLeads bring whatever they had before demanding you use it, even if it is inferior to what’s currently in use.

  Here’s some new software, use it!
      But… it doesn’t have any security or even a login…
  What’s wrong with you? No one has ever complained about that before!

Clive Robinson January 26, 2022 6:08 PM

@ JonKnowsNothing, ALL,

What’s wrong with you? No one has ever complained about that before!

I’m generally as passive as a welcome mat, generally genial, and try to be nice to people.

One reason for this is that once or twice in the past, when I was a lot lot younger, I was not, and the clean-up was an ambulance and a bucket and mop. And that sort of thing weighs on your conscience for half a century or so, so far.

Well, back in my much much fitter days, when picking up a couple of full beer kegs was no more of a problem than picking up a pencil, I got an almost identical question and statement from a new boss sitting behind his desk. He asked it on the wrong day at the wrong time. There were clear warning signs, in that I had stopped smiling and had stood right up. But no, he had to be a twit. As I told him, “they are now”. Being around 6’2″ and 200lbs, he discovered that being bodily lifted across his desk by his lapels and having half his body stuffed out the window on the fourth floor was not just a new experience but one made a little disconcerting by the damp patch that appeared on his sand-coloured trousers. I explained to him politely, whilst he was half out the window, that I disagreed with him and that I was sure on reflection he would come to see it my way… I yanked him back in, stood him on his feet, dusted him down and straightened his tie, and said that I was busy, so if he’d excuse me I was going to go and get on with things, which I did.

A little while later several people turned up at my desk wanting to know what the heck had happened. I gave them my best innocent look and said I really did not know what they were going on about and they should explain. As they went on I innocently said things like “how extraordinary”, “are you sure you’ve been told that correctly?”, “surely he’s too big for someone to do that”, then “does that sound like something I or anyone could do?”, “How long have I been working here, and have I ever done anything like that?”. When asked “are you denying it?” I said “are you kidding me?” etc., and again said “it sounds impossible”[1].

Yes, I got away with it, but the new boss was thereafter very careful to stay away from me… And yes, he did see it my way after reflection. However, I decided it was time to move on… In my resignation letter to HR I said that the reason I was leaving was the odd behaviour of the new boss and the fantastical story he had told, and that I thought it no longer safe to work there…

I heard some months later that the new boss had also left, suddenly and very quietly, and he’d been replaced by an internal promotion of a rather smart young lady, who I’d got on with quite well[2].

[1] It’s actually easier than it sounds: it requires bracing yourself against the desk, then using leverage to tip them into an unbalanced position where their inertia does the initial bit. There are several “martial arts” that will teach you the basics, judo being one. I unfortunately learnt the hard way, back in the 70’s, from the “sports master” at my school, who was a lot shorter than me and used to throw me around easily as a demonstration to others… Mind you, it did come in handy for “rugby”, but that as they say is a story for another day.

[2] I’m one of those people who, whilst social at work, regard my place of work as just that, “a place of work”, not one where you make lifelong friends or have romantic relationships. I did the latter once and it did not end at all well, so lesson learned.

John January 26, 2022 11:30 PM



Entertaining things I did when I was young….

I still wonder why there are essentially no soft loadable micros as opposed to flash controlled micros for device control?

Apparently real RAM die size is much bigger than flash?

It would certainly make real device hardening easier.

The original IBM PCs had socketed UVROM for boot and more UVROM sockets for customization. So things just worked and kept working. The rest was booted from floppy disk so the whole device could be easily backed up and restored. I suppose being able to reliably read all internal device memory might help.

Now we have MoonBounce, which is doubtless just the beginning of clever uses of flash memory.

Can we still buy UVROMs?


Clive Robinson January 27, 2022 7:07 AM

@ John,

I still wonder why there are essentially no soft loadable micros as opposed to flash controlled micros for device control?

There used to be micros like that. Transmeta, which if memory serves correctly Linus Torvalds of Linux fame worked for for a while, had a “Very Long Instruction Word” (VLIW) core, the architecture of which was the same sort that gave Cray supercomputers the performance they had. In the Transmeta hardware there was a loadable interpreter where “Complex Instruction Set Computer” (CISC) binary-level instructions were converted into an optimised VLIW “Reduced Instruction Set Computer” (RISC) format, which gave substantial other speed benefits.

The first target they worked on was x86, which when they started was actually hopelessly slow and power hungry. Unfortunately, by the time Transmeta went to market Intel had woken up and addressed some, but by no means all, of their x86 products’ speed and power issues (all x86 designs these days use the same interpreter-plus-RISC-core approach that Transmeta originated).

Whilst the Transmeta chip did x86, it could also do, I think it was, three other CPU instruction set emulations better than the actual CPU chips of the time.

But the speed and power savings margin, whilst still good, was no longer enough by then, and the product became first niche then unavailable. Though both Intel and Nvidia got perpetual licensing on the patents and some other of Transmeta’s IP.

Transmeta is now defunct and but a memory to a few. They got taken over by another company that hived off the IP portfolio to a venture capitalist firm about a decade to a decade and a half back.

Before Transmeta there were other chip sets, like that of the Transputer, that could do similar.

The modern way is more generic but about as customisable as you would want, and that is with the use of “Field Programmable Gate Arrays” (FPGA). Have a look at nearly any “Software Defined Radio” (SDR) of any worth and you will find a large FPGA or three sitting in there doing the bulk of the work. Whilst you may never get to see one, Intel are putting large FPGAs in some of their server CPUs for cloud providers. In essence they form co-processors where algorithms can be “made in silicon” (in silico) for a five- to fifty-times performance boost over highly optimised software. Such chips would make ideal cores for cryptanalysis and similar activities requiring massively parallel single-algorithm processing (hence they are in effect “controlled” technology).

As for,

Apparently real RAM die size is much bigger than flash?

It’s a bit complicated. In essence there are three “bit storage” technologies,

1, Capacitor and FET.
2, SR NOR or NAND gate latches.
3, D-Type and similar registers.

With the size, and thus also the cost, increasing as you go down the list. Oh, and importantly, speed and stability also increase as you go down the list.

Flash ROM falls in the first group; however, unlike DRAM, which needs all sorts of extra support circuitry due to the fast discharge rate of the capacitors, F-ROM uses trapped charge wells with discharge times measured in decades. But in both cases they are considered “unreliable” and slow, due to needing that extra circuitry etc. The fastest and most reliable, but consequently much larger, storage is in “register devices” inside CPUs and in SRAM, some of which have timings down in picoseconds these days. The real big limitation on their usage is actually “the speed of light”, where 1cm of track limits things to below 10GHz clock speed, so puts significant constraints on CPU die size.

And so onto what a friend calls “The Robinson lament”,

The original IBM PCs had socketed UVROM for boot and more UVROM sockets for customization. So things just worked and kept working. The rest was booted from floppy disk so the whole device could be easily backed up and restored.

It held true for motherboards, IO boards, and onboard hard-drive controllers until the mid 1990s. If you look back on this blog you will find I said “95 max” but @Nick P went with “05 max”; as it turns out, we now know pre-05 hardware had issues. I still use 95-and-earlier computers, and lament the passing of such securable technology.

The problem is large amounts of Flash ROM get into all sorts of places you would really not expect, such as “Battery Management Systems” (BMS). If you think back you may remember there were issues found with the “hidden BMS / controller” built into Apple’s batteries to maintain a tight grip over the very profitable upgrade & repair market. Somebody had found out you could hide sufficient data there to make malware from it a reality…

For years Russian crooks changed the hidden flash in the likes of “memory cards” so they appeared to be not just of greater capacity but faster speed than the manufacturer had specified[1]. Others would buy up, for pennies, hard drives that had been pulled due to increasing track/sector errors, clear out the track-defect list from the Flash ROM on the controller, and sell them as “discount / bankrupt” stock for up to half the price of a new drive…

Back in 2013 you may remember there were two events, firstly the Ed Snowden affair and secondly the BadBIOS and related Lenovo affair of unremovable factory installed malware for ad-ware profit on consumer grade laptops.

The Ed Snowden revelations led to a “pissing contest” between David Cameron, UK Prime Minister, and his “cabinet office advisors”, and the UK Guardian newspaper under Alan Rusbridger.

The result was that “tweedle dee and tweedle dummer” came up to London on a shopping trip from GCHQ in Cheltenham, and whilst there dropped into the Guardian basement to oversee Apple computers being subjected to “Mass Destruction by Dremel”. The resulting photos showed just how many onboard chips probably had flash or similar mutable memory hidden in them.

Oh, as for other Apple products like the iPhone: it’s riddled with hidden flash, to stop people doing “service and repair” and so allowing Apple to charge three to twenty times the open market repair rate, which is a nice extra profit maker for them as it makes it effectively a “monopoly tied market”, which is frowned on in many jurisdictions.

As for Lenovo, they used a “hole” in the way systems boot that was a known security issue long before the IBM PC of the 1980s, and it may have been what gave rise to BadBIOS (it’s certainly the way I’d prototyped “proof of concept” APT in silico).

It was actually identified as a known issue back in the decade before the first IBM PC, with the Apple ][ IO/expansion slots. The same hole still exists with UEFI, and is what MoonBounce is all about…

I guess old dogs don’t need to learn new tricks, just dress them up differently…

That said, in all honesty, there is no effective way to close the hole without major issues and very bad things arising. We know this because they were seen in the early days of commercial computing, with the likes of IBM etc. designing IO so that you had to buy even printers from them at vastly over-inflated prices… Such “tied markets” are a complete disaster, as the latest version via “walled garden” closed application markets shows.

[1] In part the manufacturers were to blame for this “Russian behaviour”, because they only manufactured “premium parts”, as it made inventory control easier and a lot less costly overall. But not everyone wants premium parts, and certainly not at the prices asked for them. So, to supply this lower-profit but substantial mass market, manufacturers just downgraded the info stored in the hidden Flash ROM to lower capacity and slower speed. But… it also allowed them to sell what were “manufacturing rejects”, that is, parts that were not up to spec or had hard defects, turning them from scrap into not just marketable but very profitable product on the “Muck to Brass” principle.

John January 27, 2022 9:25 AM


I buy many small uSD cards and memory sticks for just the reason you suggest. And they are cheap!

They most probably have LOTS of spare space to migrate defective stuff to!

I am now having trouble finding 16GB uSD chips!

I have yet to have a 16GB uSD chip fail. Not so with 32GB. I had one hard fail! Good lesson in real backups!

warm regards,

crich January 27, 2022 3:58 PM

“What is the VALID business case for that device to be connected to external communications?”

I think that’s the wrong question to ask, because it suggests a very fragile solution: strict separation of “trusted” and external networks, e.g. by firewalling. As if it’s rare for bad things to happen on an internal network. The problem is that “trusted” means “relied upon to enforce your security policy”, and the “trusted” network is likely to be full of untrustworthy devices—such as these QNAP network-attached-storage boxes themselves, or the client machines that need to access them (with all the assorted crap that companies like to install, or users install intentionally or accidentally). It takes one person to configure tethering wrong, misconnect a cable, or connect to some unknown Wifi network, and then the networks are bridged (maybe directly, or maybe alternating between networks over time—remember that malware predates always-on internet connections).

The ostensible answer to your question is “so it can fetch security updates”. I’m sure firewalling could be made to account for that, or alternate ways could be found, but nobody seems to have considered the wisdom of making these consumer devices critical and trusted network infrastructure, nor how to manage the associated risk. If people were willing and able to do that, why not build something more auditable in the first place? One obvious feature to add would be to prevent most permanent deletion or modification of existing files (excepting time-based expiry and administrator overrides to comply with GDPR etc.).
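The “prevent most permanent deletion or modification” feature proposed above boils down to a policy check: a file is immutable until its retention period lapses, unless an administrator explicitly overrides. A minimal sketch, where the retention period and all names are hypothetical:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION = timedelta(days=365)  # hypothetical time-based expiry

def may_delete(created_at: datetime, admin_override: bool,
               now: Optional[datetime] = None) -> bool:
    """A file may be deleted only once its retention period has lapsed,
    or when an administrator explicitly overrides (e.g. to comply with
    a GDPR erasure request)."""
    now = now or datetime.now(timezone.utc)
    return admin_override or (now - created_at) >= RETENTION

now = datetime.now(timezone.utc)
print(may_delete(now - timedelta(days=10), False, now))   # False: under retention
print(may_delete(now - timedelta(days=400), False, now))  # True: expired
print(may_delete(now - timedelta(days=10), True, now))    # True: admin override
```

The hard part, of course, is enforcing this below the file-sharing protocol so that ransomware running with a user’s credentials cannot bypass it.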

These devices are not designed to be trustworthy. I doubt QNAP and similar companies spend much time auditing (let alone formally verifying) the freely available software on which they base their products, and I see no evidence they’ve tried to limit the risk of each component or allow the customers to configure useful anti-ransomware (or even anti-“accidental rm -rf”) features. Nevermind designing a product that accounts for security realities by default.

Clive Robinson January 27, 2022 6:22 PM

@ crich,

I think that’s the wrong question to ask, because it suggests a very fragile solution

It’s the first question I ask, and for good reason.

Most business computers do not need to be connected to communications external to the business, because the reality is that few users of business computers have work that needs them to do so.

If they do not need communications external to the business, why unnecessarily increase complexity and attack surface?

Even when people think they have a VALID business case, all too frequently it can be shown that the assumptions behind it are either not understood or incorrectly assessed.

But even when the business case is valid, the implementation all too frequently is not.

As financial institutions used to do it: a user would have a work computer, and if external communications were required they would get assigned a second computer to do the communications on.

John January 27, 2022 9:36 PM


I try to stay away from ‘automatically’ updating anything.

I want an audit path. Old version, new version.

Start from CD/.iso install package. Add applications.

It would be nice if Linux would maintain an easy to read page with the ‘formula’ to recreate the current load.

I try to keep all my stuff either in /Download or /home/user so I can copy it easily. The big problem is reloading applications, which modify who knows what, who knows where.

If you reload you often get ‘new’ versions that may or may not work or even worse may work but give different output!

And yes keeping off the web except temporarily when required as above is always a good idea.

Good discussion.


Clive Robinson January 28, 2022 10:16 AM

@ John,

It would be nice if Linux would maintain an easy to read page with the ‘formula’ to recreate the current load.

Some older versions designed to be put on mini-CDs or into embedded systems and the like did. Puppy Linux being one.

Others like some of the BSD’s are designed to be built on a very small very stable base system.

If memory serves correctly, Slackware and Arch Linux are still like this and still support 32-bit Intel CPUs.

Whilst Arch is well documented, it felt like it was for “DIY brain surgery” the last time I played with it.

Puppy Linux is not the only distribution targeted at “all in RAM” running and “embedded systems”.

I cannot say how secure it is, but the way you can run it on older hardware, without having to use a hard drive or USB thumbdrive, is the way I would generally recommend people consider.

A friend has built their own “secure” Linux on the “all in RAM” from-CD principle and actually has two images, both for 32-bit CPUs.

The first image disables all connections and then checks all the Flash ROM they know of and can check. It also contains “workware” but remains disconnected from communications but does enable a floppy, writable CD/DVD drive and USB with print subsystem. So it is reasonably usable as a SoHo system.

The second is the “Thin Client” style desktop used for the Internet etc. It again checks Flash ROM before going into graphical desktop.

If it was me I’d run them as two separate systems entirely, but hardware of the older, more well-documented type is getting harder to find…

Eric Nepean January 30, 2022 11:20 PM

I think there is a certain lack of understanding here about QNAP’s situation and NAS functionality.
Some years ago QNAP was selling appliances that were only a NAS, with the ability to function as a backup server or client. Their SW wasn’t of the greatest quality, based on an old version of Linux BusyBox and an old SSL stack with many bugs and vulnerabilities that they were slow to address.

Since the ransomware episode a couple of years ago, QNAP have improved their SW quality and mindset. But not enough for what their products are now capable of.

QNAP NASes now include capabilities for a firewall, router, VPN server and cloud storage client. The idea is that a user no longer needs a router; they can simply connect a QNAP NAS between their network and the internet.
Users don’t actually need a computer anymore, as a QNAP NAS has the functionality to host VMs and containers.

In today’s threatening internet environment, it’s highly risky to combine all these functions in one appliance, and even more so if the developers of that box don’t have a strong background in network security.

Those risks have just come to pass. There are hints that the ransomware’s method of access is a vulnerability in the QNAP VPN server, or the VPN gateway.

Like me, some experienced users have our QNAP NAS behind a router/firewall developed by a different company focussed entirely on security appliances and with some considerable history.
