How the SolarWinds Hackers Bypassed Duo’s Multi-Factor Authentication

This is interesting:

Toward the end of the second incident that Volexity worked involving Dark Halo, the actor was observed accessing the e-mail account of a user via OWA. This was unexpected for a few reasons, not least of which was that the targeted mailbox was protected by MFA. Logs from the Exchange server showed that the attacker provided username and password authentication like normal but was not challenged for a second factor through Duo. The logs from the Duo authentication server further showed that no attempts had been made to log into the account in question. Volexity was able to confirm that session hijacking was not involved and, through a memory dump of the OWA server, could also confirm that the attacker had presented a cookie tied to a Duo MFA session named duo-sid.

Volexity’s investigation into this incident determined the attacker had accessed the Duo integration secret key (akey) from the OWA server. This key then allowed the attacker to derive a pre-computed value to be set in the duo-sid cookie. After successful password authentication, the server evaluated the duo-sid cookie and determined it to be valid. This allowed the attacker with knowledge of a user account and password to then completely bypass the MFA set on the account. It should be noted this is not a vulnerability with the MFA provider and underscores the need to ensure that all secrets associated with key integrations, such as those with an MFA provider, should be changed following a breach.

Again, this is not a Duo vulnerability. From ArsTechnica:

While the MFA provider in this case was Duo, it just as easily could have involved any of its competitors. MFA threat modeling generally doesn’t include a complete system compromise of an OWA server. The level of access the hacker achieved was enough to neuter just about any defense.

Posted on December 15, 2020 at 2:13 PM


Dan December 15, 2020 2:36 PM

That sounds a lot like a security issue to me. There should be no reason for anything stored on the service provider's system to allow you to forge a token from the identity provider; the service provider just needs to be able to validate the token.

Leo December 15, 2020 2:41 PM

I’m curious about the assertion that “This is not a Duo vulnerability”. Isn’t this a sophisticated form of session forgery? Shouldn’t there be some defense in the session that aligns the MFA information with the fact that MFA actually occurred? Just as an example, before accepting the duo-sid cookie, confirm that the session cookie has an entry with the timestamp of the MFA request…

stine December 15, 2020 3:47 PM

I’m not sure I understand this (or either of the quoted articles, which I have read.)

There should be no way to generate a valid MFA token, save on the MFA server itself.

The conclusion I came up with is that a proxy was hacked and that allowed the hackers to intercept the token being delivered to its original requestor. It seems to me that this is simply a duplicate-and-reuse attack using a cookie, and why I always uncheck the “don’t ask again on this computer for 30 days” when prompted by various MFA schemes.

Rick December 15, 2020 4:11 PM

Duo MFA admin here:

The SKEY (secret key) is one of three values you need when setting up any Duo Application, in addition to the Duo Instance Hostname and the Application Public Key. Knowing the secret key allows anyone to bypass the MFA protection the way they describe here.

Duo warns you up and down to protect your SKEYs. There is a mechanism to change your SKEYs if you think they’ve been compromised. There is a mechanism to encrypt your SKEYs in the host registry (which I assume was not done here, since someone with admin access to the host server was able to recover them).

It’s not a Duo vulnerability if a customer admin failed to protect the SKEY while they were implementing the protection. Duo has tons of documentation on this subject.

(there isn’t such thing as a Duo “akey” btw, that’s a type-o 😉 )

Dan December 15, 2020 5:23 PM

Rick, thanks for taking the time to comment here. Where I’m confused is in the design of the system such that the skey can be used to forge a token. I could understand if a hacker was able to compromise the system and change the skey to a value for which they could forge a token.

Where I’m confused is that the protocol allows the attacker to forge a token purely with knowledge of the skey. I had assumed that a system like this would use pki for validation of the token, where the sp contains only the public key used to validate the token and the private signing key never leaves duo’s servers.

Of course, once someone has write access to the sp all bets are effectively off, but I thought the distinction between having access to be able to modify the skey vs just knowledge of the skey was an interesting one.

Documentation of how sensitive the skey is certainly helps, but I’d argue that it doesn’t change the fact that the underlying protocol has an unnecessary weakness if the skey is able to generate as well as validate a token.
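To make the distinction concrete: in a symmetric scheme, whatever can validate a token can also mint one. Here is a minimal sketch in Python (the key value and cookie format are hypothetical illustrations, not Duo’s actual duo-sid protocol):

```python
import hmac
import hashlib

# Hypothetical shared secret as it might sit in a config file on the
# service-provider (OWA) host. Illustrative only, not Duo's real format.
SKEY = b"secret-key-stored-on-the-owa-server"

def sign(payload: bytes) -> str:
    # HMAC with the shared secret: the same key both mints and validates.
    return hmac.new(SKEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # Validation recomputes the signature, so it needs the signing key.
    return hmac.compare_digest(sign(payload), signature)

# An attacker who reads SKEY off the server can mint a "valid" MFA cookie:
forged = sign(b"user=alice|mfa=passed")
assert verify(b"user=alice|mfa=passed", forged)
```

With an asymmetric design, the service provider would hold only a public verification key, and reading its configuration would not be enough to forge tokens.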

Robert December 15, 2020 6:01 PM

Dan, TOTP and HOTP use a shared secret and a counter, so they don’t require 2-way communication with the authenticating device.
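For reference, the HOTP computation is specified in RFC 4226 and is only a few lines; a minimal Python sketch (TOTP is the same algorithm with the counter derived from the current time):

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 over the counter, then dynamic truncation."""
    # Counter is encoded as an 8-byte big-endian integer.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 4226 Appendix D test vector, ASCII secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Note the shared secret still has to live on the validating server, so the comments above about protecting the root of trust apply here too.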

1&1~=Umm December 15, 2020 10:17 PM

@ ALL,

Interesting comment at the bottom of the ARS article,

“The security company said Dark Halo is a sophisticated threat actor that had no links to any publicly known threat actors.”

So they do not think it is APT28 or APT29 or another known group. But why…

Well if you look towards the top of the Volexity article you find,

“FireEye attributed this activity to an unknown threat actor it tracks as UNC2452. Volexity has subsequently been able to tie these attacks to multiple incidents it worked in late 2019 and 2020 at a US-based think tank. Volexity tracks this threat actor under the name Dark Halo.”

OK, so UNC2452 / Dark Halo are ‘new but old’. That is, as Volexity put it,

“During the investigation Volexity discovered no hints as to the attacker’s origin or any links to any publicly known threat actor.”

So they were not just ‘unknown’ until fairly recently; their location and who they work for are likewise still ‘unknown’.

But what linked Volexity’s “Dark Halo” and FireEye’s “UNC2452”? It was,

“At the time of the investigation, Volexity deduced that the likely infection was the result of the SolarWinds box on the target network”

Now to misquote @Bruce,

“That is interesting.”

But not as interesting as the fact that this attack was also one where secret keys used as credentials were obtained when they should not have been.

Yes read that paragraph again, it’s a very important point. What it is telling you is that,

‘If your “Root of Trust” is not secure you have no security.’

We do not have enough “publicly known” information to know how the compromise happened in the attack on SolarWinds. But we can see how SolarWinds software was leveraged to compromise the “root of trust” or what is called by Volexity the “akey” on a fully compromised infrastructure server.

However, the argument that you cannot stop a secret-key compromise on a fully compromised infrastructure server is actually not true, and has not been for a couple of decades or more…

Firstly, Intel and other CPUs have ways to protect secret data in a computer’s memory in “enclaves”. Whilst there are now known ‘hardware’ failings that can be exploited, there is as yet no evidence for that in this case.

Secondly, the ‘enclave’ idea was not original to Intel or the other CPU manufacturers; they got the idea from earlier technology called ‘Hardware Security Modules’, or just HSMs. But they in turn got it from still earlier technology such as ‘crypto-coprocessors’, going back to before the days of the ISA bus in IBM PCs (the earliest such device I remember had an AMD Z80 CPU, 32K of ROM, 32K of RAM, part of which was a battery-backed-up bytewide static RAM chip, a real-time clock, and an AMD DES chip).

These enabled secret keys and other secret data to be stored safely in separate memory behind crypto devices that performed the required cryptographic actions. Whilst not inexpensive, they were generally thought to be secure.

The HSMs are modern equivalents used in Banking / Finance and other areas where security is taken a little more seriously. Possibly the best known is at the top of the DNSSEC system.

Whilst organisations do not need nuclear-bunker-type security around HSMs, the HSMs do give everyone downstream a ‘root of trust’ that can reasonably be relied on.

SpaceLifeForm December 17, 2020 10:35 PM

NSA chimes in


Ray Morris December 21, 2020 1:50 PM

The fact that random applications contain, in clear text, a secret that can be used to bypass the secrets of all users is a major flaw in Duo’s protocol. The industry-standard TOTP and HOTP protocols don’t have this vulnerability.

One commenter said “it’s not Duo’s fault if an attacker gets the keys”.
I say it is Duo’s fault that their system requires putting master keys in plaintext on random application servers.

It’s Duo’s fault that there are master keys, rather than requiring that the user’s key be used to login as that user.

In this case, Duo requires that the key be under the doormat, so it is Duo’s fault that those keys are taken.

Inthedocs January 7, 2021 7:37 PM

Rick, a Duo MFA admin, states: there isn’t such thing as a Duo “akey” btw, that’s a type-o 😉

But if you read the Duo docs:

1. Generate an akey

Your akey is a string that you generate and keep secret from Duo. It should be at least 40 characters long and stored alongside your Web SDK application’s integration key (ikey) and secret key (skey) in a configuration file.

You can generate a random string in Python with:

import os, hashlib
print(hashlib.sha1(os.urandom(32)).hexdigest())

Safeguard your skey and akey!

The security of your Duo application is tied to the security of your skey and akey. Treat these pieces of data like a password. They should be stored in a secure manner with limited access, whether that is in a database, a file on disk, or another storage mechanism. Always transfer them via secure channels, and do not send them over unencrypted email, enter them into chat channels, or include them in other communications with Duo.

Chris Drake January 15, 2021 2:39 AM

YES IT IS a DUO vulnerability.

It is BEYOND OBVIOUS that nothing and nobody who has access to the server should be able to impersonate one of the users.

Just because some lazy programmer said that a vulnerability is “not in their threat model”, does not suddenly make everything hunky-dory OK.

DUO is an App on a smartphone. Last time I checked, those are capable of doing asymmetric crypto themselves perfectly well – even using biometrics if you feel the need, and if you really want to solve some problems – performing mutual authentication and access logging as well. All Obvious. All missing from DUO.

This is the #1 problem with security. Everyone uses what everyone else uses, and nobody bothers to think about, or even care about, all the out-of-scope holes in the stuff they’re using.

Clive Robinson January 15, 2021 4:32 AM

@ Chris Drake,

Just because some lazy programmer said that a vulnerability is “not in their threat model”, does not suddenly make everything hunky-dory OK.

Saying something is outside your threat model is not of necessity “lazy” behaviour.

All authentication systems require “a shared secret”. If you spend a little time thinking about it, you will realise that software on a General Purpose Computer cannot in any way protect such a secret.

To protect it you need certain hardware arrangements, but they are not in General Purpose Computing by design…

Whilst you might argue that there are “security enclaves” in modern CPUs, that actually does not solve the problem, it merely moves it. Because where does the secret come from on power up?

Put simply either someone has to type it in, or it has to be read from semi-mutable memory such as a file on disk.

So how do you protect the file on disk? Well you could use encryption, but where do you get the key from?

You are in an endless loop that you can not solve with general purpose computers.

It’s one of the reasons Hardware Security Modules (HSMs) were designed, but they also have their own problems. As has been demonstrated with the attack on the Google 2FA key even secrets hidden in purpose designed chips are vulnerable to side channel attacks.

You thus have to be able to fully protect the shared secret,

1, When it is stored.
2, When it is communicated.
3, When it is processed.

And we really do not know how to do all of those…

By all means feel free to try to “Invent a better mouse trap”, in software but I don’t think your chances are that good…

serg January 18, 2021 2:18 AM

@Clive Robinson

“Whilst you might argue that there are “security enclaves” in modern CPUs, that actually does not solve the problem, it merely moves it. Because where does the secret come from on power up?”

It’s the 21st century. Nowadays most enterprise-class PCs have “security enclaves” (Mac) or TPMs (PC). The secret is stored inside these. Then you can “ask” these “security enclaves”/TPMs to use the stored secret to encrypt or sign. The secret NEVER leaves the “security enclaves”/TPMs in plain text

Clive Robinson January 18, 2021 3:21 AM

@ serg,

The secret NEVER leaves the “security enclaves”/TPMs in plain text

You’ve missed the point.

Those security enclaves when powered up do not have the secrets in them. Those secrets have to come from somewhere else.

Hence my rhetorical question of,

“Because where does the secret come from on power up?”

The secret has to be stored somewhere else, so as I indicated the “problem” has been moved not solved.

The secret is vulnerable when it is moved from semi mutable memory to mutable memory.

Even HSMs and security tokens often fall foul of this problem.

Chris Van Genderen February 17, 2021 2:55 PM

@Clive Robinson,All,

I agree, software based crypto is simply not adequate to protect and use highly sensitive keys.

To Clive Robinson’s point on key loading with respect to HSMs: HSMs typically have a “master key” (or a series of master keys) which is used to encrypt and manage all secrets within the secure perimeter of the device. The master keys are all generated within the secure perimeter and loaded from non-volatile memory within the secure perimeter; i.e., the clear-text key(s) never exist outside the secure perimeter and they are not “loaded in”.

However, unlike software, the secure perimeter of the HSM is protected physically (epoxy slurry, semiconductor obfuscation, active tamper hardware) and logically (temperature monitoring, voltage monitoring, side-channel obfuscation, and optionally requiring 2FA authorizing use of keys inside the HSM with other hardware-based systems, i.e. smart cards). An HSM secure perimeter is an environment designed specifically to manage authorized use and protection of sensitive cryptographic keys. So, are they loading keys? Yes, but in a very controlled and hardware-managed environment, not anything remotely like an open software environment.

Are they perfect?…No… However, they provide a formidable challenge to retrieving or achieving non-authorized use of a key. In general, if an HSM is used to manage a sensitive key; an attacker will find another path forward. It is effectively a Stop sign.

If the Duo skey / akey was able to be protected with an HSM, then this vector of attack to further compromise systems via use of authentic logins might have been closed.

In essence, the use of an HSM attempts to force an attacker to use a more easily discoverable penetration of a system.

With software-based crypto, you are simply “leaving your keys under the mat” from an attacker’s perspective. They are going to go after them, especially if the end result is a valid user within the environment. It is much harder to discover an attacker that looks like you.

Clive Robinson February 17, 2021 3:50 PM

@ Chris Van Genderen, ALL,

Are they perfect?…No… However, they provide a formidable challenge to retrieving or achieving non-authorized use of a key.

In all honesty I don’t believe they can be made perfect or 100% secure, but then I don’t believe anything can when it comes to security; time makes fools of us all [1].

The only constants in life appear to be the laws of physics (so far) thus maybe if we design to those we might gain security over time.

Thus as security workers / researchers / designers / thinkers we need to work out ways to leverage the laws of physics into our designs, in effect as “mitigation techniques” around systems we know are “good” but in reality cannot be “good enough for long enough”.

We will never be able to mitigate insider attackers who walk around with a multi-tool in their hand and the root password in their pocket, though we can make their lives more difficult. However, I think we can keep the outsiders out, if we are prepared to do things in ways that work with the laws of physics in our favour but not in theirs.

Thus the question falls to that of investment and economics, which are fields of endeavour where even the wise walk with caution.

[1] We used to joke about putting a computer in a block of concrete and dropping it in the world’s deepest ocean trench… Then some famous film maker got involved, and the next thing you know they’ve built and deployed a mini sub to go down, see the bottom of the trench real close up, and gather samples… As has been observed, security attacks tend to get better with time, and what could not be done just a handful of years ago is now beginning to look like child’s play.
