Bypassing Two-Factor Authentication

These techniques are not new, but they’re increasingly popular:

…some forms of MFA are stronger than others, and recent events show that these weaker forms aren’t much of a hurdle for some hackers to clear. In the past few months, suspected script kiddies like the Lapsus$ data extortion gang and elite Russian-state threat actors (like Cozy Bear, the group behind the SolarWinds hack) have both successfully defeated the protection.


Methods include:

  • Sending a bunch of MFA requests and hoping the target finally accepts one to make the noise stop.
  • Sending one or two prompts per day. This method often attracts less attention, but “there is still a good chance the target will accept the MFA request.”
  • Calling the target, pretending to be part of the company, and telling the target they need to send an MFA request as part of a company process.

FIDO2 multi-factor authentication systems are not susceptible to these attacks, because they are tied to a physical computer.

And even though there are attacks against these two-factor systems, they’re much more secure than not having them at all. If nothing else, they block pretty much all automated attacks.

Posted on April 1, 2022 at 6:12 AM • 55 Comments


hyanak April 1, 2022 8:36 AM

While the idea of FIDO2 is good, the main issue I have with it is making backups. By design you cannot make a backup, so workarounds like having multiple keys and registering each of them with every service are advised…

There’s a saying here: If you don’t have a backup, your data is worthless.

Andy April 1, 2022 8:48 AM

Cookie stealing, combined with proxies to get close to the real client for geolocation/impossible-travel checks, and with specialized browsers or extensions to match the victim’s browser fingerprint, has become a “script kiddie” accessible technique. It is being used actively to subvert many “adaptive MFA” implementations, where the MFA minimizes user friction by not prompting if the user has a valid cookie, the expected approximate location, and the expected browser attributes.

Ted April 1, 2022 9:29 AM

I wonder what the profile of an individual or an organization would be to prompt them to use FIDO2 authentication. I feel like the MFA I have set up with various companies is a hodgepodge of periodically-changing designs.

Does anyone that has used FIDO2 MFA have any thoughts on how easy it is to use?

Clive Robinson April 1, 2022 10:47 AM

@ ALL,

Think carefully on the article’s incomplete statement of,

“some forms of MFA are stronger than others, and recent events show that these weaker forms aren’t much of a hurdle for some hackers to clear.”

So Strong-v-Weak but what is not being said about that?

Well the “Strong systems” mostly do not act as a significant invasion of privacy vector, or make other extended ICT attacks easier.

The “Weak systems” however –especially those pushed hard by Silicon Valley Mega-Corps– can be seen to be major vectors not just for invasion of privacy to obtain ID-revealing PPI; in the process they also make the person considerably more vulnerable security-wise, as they cause unwarranted disclosure both overtly and covertly.

But further, “Remote MFA” rather than “Local MFA” systems cause the “data aggregation” or “all the eggs in one basket” security vulnerability, for both PPI and service usage. As an attacker, the more information you have on an individual the better. Having all an individual’s “data in one place” or under a “single identifier” is in effect a “motherlode” for an attacker…

Whilst you can design “Remote Strong MFA” without the overt issues, the question of the covert issues remains open.


Well Strong MFA is “expensive” in that mostly it involves some kind of technology / token that is a “Tangible Physical Object”(TPO), rather than an “Intangible Information Object”(IIO)[1].

Thus the cost and user inconvenience gets minimized by aggregation, that is if the TPO gets shared for multiple services. So depending on the way the IIO in the TPO works, any “shared secret” can become known to “many” services / third parties, having a significant negative impact on security…

But removing this potential security impact makes the TPO token much more expensive to use, and a lot larger as it needs rather more than one button.

The current solutions to this really are not good, as many try to take the human out of the communications channel by using USB etc. This is potentially a security disaster, because it leaves the IIO in the TPO vulnerable to attack by malware on the computer. As has been shown with eye-wateringly expensive “Hardware Security Modules”(HSMs), writing software for them is almost as bug-ridden as consumer application code…

So MFA has very real issues over and above those which most people realise. And were it not for the fact there are few alternatives, MFA would be considered as insecure as, if not more so than, certain “One Time Pass Word/Phrase” systems.

However the real “hidden issue” or “Elephant in the Room” with MFA is that humans are not up to transferring enough secure bits, and any electronic communications channel is too easily open to malware on the computer the user uses…

So a PhD-level research project or three is needed to find not just a Pass Word/Phrase replacement but an MFA replacement…

Oh and “For the love of chosen Deity” do not suggest biometrics, ever; they are in no way secure, as thugs and very young children have demonstrated, and they are very limited in revocation options…

[1] Even though the token is usually a TPO rather than an IIO, any actual “Remote” usage is by necessity only via the IIO within a TPO device. Thus all those “covert channel” issues of a TPO device arise, and need to be mitigated[2].

[2] The very real necessity to stop or reduce covert channels[1] generally mandates that the token be used “through the human” communications channel. This causes further complications that reduce security even further[3].

[3] The problem with using the “human” communications channel is that it significantly reduces any security margin, to about 20 bits of real entropy entered by the user. Which entails other Authentication Factors being required to get up to at least 60-64 bits equivalent[4] of real entropy (think temporal and geospatial, but with significant care to avoid many types of attacks).

[4] The question of “entropy” arises frequently in security, but its generation and measurement can be hard. For a “True Random Number Generator”(TRNG) every bit output needs to be,

1, Entirely unbiased.
2, Entirely independent of all other bits.

With physical sources and instrumentation the first is at best difficult, the second very difficult even under favourable conditions.

If that can be achieved, the communication problem arises: three digits gives ~10 bits; three upper- or lower-case letters gives ~14 bits; three letters of fixed case plus digits gives ~15 bits; and three characters drawn from upper, lower and digits gives close to but not quite 18 bits. Getting 60-64 bits transferred without a human error is not going to be easy, as typing in 20 digits without error is beyond most people, as well as extremely tedious.
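The per-character bit counts above are easy to check (a quick sketch; the digit count needed for 64 bits is just ceil(64 / log2 10)):

```python
import math

def bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of `length` independent, uniformly chosen symbols."""
    return length * math.log2(alphabet_size)

print(round(bits(10, 3), 1))   # three digits: ~10 bits
print(round(bits(26, 3), 1))   # three letters, one case: ~14 bits
print(round(bits(36, 3), 1))   # three letters+digits, fixed case: ~15 bits
print(round(bits(62, 3), 1))   # upper, lower and digits: just under 18 bits

# Decimal digits needed to carry 64 bits of entropy:
print(math.ceil(64 / math.log2(10)))
```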

Diego April 1, 2022 10:59 AM

These are valid attacks, but none in the list are bypasses. It’s like saying you can bypass a password requirement by tricking a target into giving you their password. An example of a two-factor authentication bypass would be calling up the company, lying about your identity, and saying your token stopped working; as hyanak hinted at, it can happen and isn’t something services design for in any consistent way. (Without 2FA, passwords can often be bypassed by having a reset code sent via email.)

Joao April 1, 2022 11:09 AM

Two-factor authentication is nice.
But as YouTubers around the world know, it is nowhere near enough.

Companies need to demand the password and ask again for the two factor authentication to perform sensitive operations that can be very harmful for the client (change/ add/ remove/ see… email, phone, address, credit card, password, username, do financial operations, services, etc.).

AlanS April 1, 2022 11:09 AM


I am using FIDO/FIDO2 on all the accounts I have that support it. You do need to register multiple keys so you have at least one backup. And it does cost a little. Entry level is probably a couple of blue Yubico security keys at $25 each. Android and I think iOS phones can also act as FIDO2 keys, but I am not quite sure how that works or the pros and cons. The keys are much easier and faster to use than having to retrieve a 6-digit code from your phone and then enter it. Also, when you upgrade your phone you don’t have to go through the annoyance of moving your authenticator app. I don’t know about other keys, but the Yubikeys I have are durable.

The main annoyance is that there isn’t wider support. Very few financial services support it (Bank of America, Vanguard and maybe one or two others). Google, Facebook, Twitter and Microsoft all support it. Some password managers support it (e.g. Bitwarden) but not others (e.g. Lastpass). You have to be careful. Lastpass does support the more expensive Series 5 Yubikeys in OTP mode but not U2F/Webauthn mode. Bitwarden supports Yubikeys in both modes. A lot of e-mail providers support it but not Protonmail, which is odd given the sort of users they cater to.

There may be more pressure to adopt U2F and Webauthn now that the federal government (in response to SolarWinds etc.) is requiring agency systems to “discontinue support for authentication methods that fail to resist phishing, including protocols that register phone numbers for SMS or voice calls, supply one-time codes, or receive push notifications”. That may mean a regular user who wants to log in to their accounts at the IRS or SSA may also need to use FIDO 2FA at some point. It may take time for all that to happen.

Even if you use U2F or Webauthn, lots of services also support other modes which can be used as a backup. That’s obviously a potential problem, as a user can be tricked into falling back onto a less secure mode. Google has an advanced protection program that requires FIDO. There is a recovery mode using a phone number or e-mail, but I don’t think it is a simple reset. Their FAQ says “Google may take a few days to verify it’s you and restore your access.” They don’t explain how they do the verification.

Nick Alcock April 1, 2022 11:51 AM

@Clive Robinson,

So depending on the way the IIO in the TPO works, any “shared secret” can become known to “many” services / third parties, having a significant negative impact on security…

But removing this potential security impact makes the TPO token much more expensive to use, and a lot larger as it needs rather more than one button.

Not so. FIDO2 tokens have one single secret embedded in the key: the secret is transformed by a per-service non-secret provided by the service (related to the service’s URI). So FIDO2 tokens do not share secrets between sites, do not allow sites to see secrets used by other sites, and do not require more than one button (and that only for the optional but usually-a-good-idea physical-presence-verification feature). The browser or other client-side machinery prevents sites from providing non-secrets relating to other sites, so a compromised or hostile site cannot ask the key to generate some other site’s secret.
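The per-service transformation Nick describes can be illustrated with a toy sketch (this is not the actual FIDO2/CTAP2 construction, which uses per-credential public-key pairs and signatures; here a single device secret plus the relying party’s identifier is simply run through HMAC, so no two sites ever see related material and the device secret never leaves the token):

```python
import hmac
import hashlib

DEVICE_SECRET = b"\x00" * 32  # the one secret embedded in the token (toy value)

def per_site_key(rp_id: str) -> bytes:
    """Derive a site-specific key from the device secret and the relying
    party's identifier; the site never sees DEVICE_SECRET itself."""
    return hmac.new(DEVICE_SECRET, rp_id.encode(), hashlib.sha256).digest()

k1 = per_site_key("example.com")
k2 = per_site_key("evil.example.net")
assert k1 != k2                            # sites get unrelated keys
assert k1 == per_site_key("example.com")   # but each site's key is stable
```

Because HMAC is one-way, a hostile site that learns its own derived key still learns nothing about any other site’s key or the embedded secret.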

Arclight April 1, 2022 1:11 PM

I am also surprised that finance and other high-risk services don’t seem to care for FIDO2/Yubikey hardware authentication. The attack surface for a tiny piece of plastic that stays off the network most of the time is vastly different than a smartphone.

In fact, FinTech has kind of gone the opposite direction, allowing “partners” such as Ca$hApp and Plaid to require your login credentials for Wells Fargo/etc. via a URL-scraping scheme. I think they don’t want FIDO2 because it breaks these integrations by design.

AlanS April 1, 2022 2:11 PM


It is amusing in a twisted way that there are many FIDO members, even board-level members, that don’t support FIDO on their websites, e.g. PayPal, Amex. But they’ll send you a code by SMS, so that’s okay then!

AlanS April 1, 2022 2:37 PM


There’s a nice demo of different types of 2FA and how they work with a reverse proxy phish, using GitHub as an example, here. If you are tired or distracted you will easily miss that you are being phished. Training isn’t a solution.

hyanak April 1, 2022 3:24 PM

Well, there are other problems with FIDO2 IMHO also. Hardware can stop working; even the most durable key will break down eventually. It can be lost or stolen. Or you’re on vacation on the other side of the globe and then you notice your FIDO2 token is at home…

You have to buy at least two devices and authorize both for every service out there; otherwise, if one becomes inaccessible, the other can’t be used to access the service.

While this is OK for very security-minded people, FIDO2 aims at people who use the same password on every service. For those people FIDO2 is a no-go. They will not buy two tokens etc.

hyanak April 1, 2022 3:50 PM

Thanks for that video. To me it looks like that phishing would not be successful with a modern password manager that only offers to enter credentials when on the matching domain, like the Bitwarden someone mentioned before.

AlanS April 1, 2022 4:26 PM


Yes, I think you are correct. Unfortunately, I suspect the people who are likely to use password managers and FIDO authentication are one and the same and a minority. Is it easier to get the others to use security keys or a password manager? And which one allows less discretion for poor and incompetent use?

Also, how do you secure your login to your password manager?

Clive Robinson April 1, 2022 5:10 PM

@ Nick Alcock,

Not so. FIDO2 tokens…

Two things to note,

1, Not all tokens are FIDO2; in fact most are not, by a very long way.
2, I was talking about tokens in the general sense.

The latter being why I said,

“So depending on the way the IIO in the TPO works, any “shared secret” can become known to “many” services / third parties, having a significant negative impact on security”

Oh and sorry, but even the second part of my statement applies to FIDO2 tokens as well… Because in all probability they are vulnerable to,

1, Poor software implementation.
2, Covert side channels.

It’s why you really should not have tokens that are not “energy gapped”. But that’s a tale for another day (or you can look it up in past postings on this blog).

Oh, by the way, I was extracting secrets out of gambling machines and electronic wallets back in the 1980s by the use of both passive and active EM EmSec attacks. Look up “EM Fault Injection Attacks”; years later –despite the efforts of a certain American– it became a topic of interest, and now it even gets into hobbyist and trade electronics magazines. It’s only taken a third of a century, but hey, slow and steady said the tortoise…

SpaceLifeForm April 1, 2022 8:37 PM

@ AlanS, ALL

Again, what does it really, truly mean to be authenticated in a network environment?

There are really two problems to deal with.

See the graphic that AlanS noted above. The one with the MitM Phish Kit.

The first problem is the left to right flow in the graphic, i.e., the initial login flow. Even if you can do the initial Authentication really securely, with FIDO2 for example, maybe with a signed challenge response, you still have the second problem.

The server can really believe that the user is actually really there because of the 2FA. Disregard the mechanism, assume it is secure. Because that is not the biggest problem.

The second problem, and the bigger problem, is the right to left flow.

It is the session cookie being captured. If the server thinks the session cookie is valid for say, X minutes, how does the server really know that some malicious packets it just received within X minutes really were initiated by the user? Answer: It does not, but just believes so.

Note that there does not need to really be a MitM Phish Kit somewhere between the user and the server. The user does not even need to be phished in the first place. The MitM may actually exist on the end points.

For real security, there should not be a thing called a ‘session’ over a network environment.

Any critical operation should be one-shot, no session. Not easy.

Until the second problem is addressed, dealing with various versions of the first problem, the 2FA (which most users do not want to deal with anyway), is just security theater.

hyanak April 1, 2022 10:14 PM


Well, I personally use “pass” as my password manager, where I also store the login URL and always use those when I’m required to log in. While there are GUIs for it (which I use on Android), on Linux I just use it from the shell. Works great for me. With git I can keep it up-to-date across different systems.

However, for my wife and parents I needed something else. I now run my own Bitwarden (actually Vaultwarden) instance on a Raspberry Pi 4 at home, with Let’s Encrypt certificates. I set it up in my wife’s and parents’ browsers, Android phones and tablets, etc. I made a very short document on how to use it, with some relevant YouTube videos embedded as QR codes.

So now they use it, and I told them to slowly add all their services to it as they use them, and also to replace their (previously always-the-same) passwords with generated ones.

lurker April 1, 2022 11:13 PM


If the server thinks the session cookie is valid for say, X minutes…

Does the server know if I click the logout button that it should then tear down the session cookie? This of course is an application dependent variable, subject also to the MITM not blocking a logout message.

Which is more dangerous: staying logged in, or repeatedly setting up new sessions?

Denton Scratch April 2, 2022 2:04 AM


With physical sources and instrumentation the first is at best difficult, the second very difficult even under favourable conditions.

It’s not hard to obtain unbiased output from a TRNG; just encrypt the output with AES. The output of AES is indistinguishable from randomness, so it is by definition unbiased. Any cryptographic hash would serve just as well.

Correlation is a much harder problem to solve; that’s a matter of the design and construction of the TRNG, because you can’t just strap on a magic “Von Neumann de-correlator” after the fact.

Clive Robinson April 2, 2022 4:37 AM

@ SpaceLifeForm, AlanS, All,

Re : what does it really, truly mean to be authenticated in a network environment?

Well for two things,

1, The channel needs to be uniquely authenticated.

And what nearly everyone forgets, but I’ve been saying since the 1990s,

2, Every transaction needs to be uniquely authenticated.

The thing about HTTP GET/POST tickets / tokens / session cookies / etc is often,

“They can only authenticate the first user connection”

Nothing else, and even then badly, because HTTP is both plaintext and, importantly, stateless by design.

Usually it’s all down hill from there as few developers have a good handle on layering state on a stateless channel securely, or at all…

Shunting a “nonce” on with GET/POST is really not adding any kind of security over and above what we knew was a really bad idea back in the 1960s (passwords in cleartext on serial lines and later Telnet network packets). HTTPS merely obscures, rather than solves, the glaring hole in such total authentication failures (and that is what they generally all are).
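One way to sketch “every transaction uniquely authenticated” (a hypothetical scheme for illustration, not any standard protocol): each request carries an HMAC over its body plus a monotonic counter, so a captured message cannot simply be replayed the way a stolen session cookie can:

```python
import hmac
import hashlib

SESSION_KEY = b"negotiated-at-login"  # toy shared secret for the sketch

def sign(counter: int, body: bytes) -> bytes:
    """MAC over the counter and the request body together."""
    msg = counter.to_bytes(8, "big") + body
    return hmac.new(SESSION_KEY, msg, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self.last_counter = -1

    def accept(self, counter: int, body: bytes, tag: bytes) -> bool:
        # Reject replays (stale counter) and forgeries (bad MAC).
        if counter <= self.last_counter:
            return False
        if not hmac.compare_digest(tag, sign(counter, body)):
            return False
        self.last_counter = counter
        return True

srv = Server()
tag = sign(0, b"pay alice 10")
assert srv.accept(0, b"pay alice 10", tag)      # first delivery accepted
assert not srv.accept(0, b"pay alice 10", tag)  # replay rejected
```

Malware on the endpoint can of course still ask for its own signed requests, which is the “MitM on the end points” problem noted above; per-transaction MACs only close the capture-and-replay hole.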

Sumadelet April 2, 2022 8:33 AM

I agree with @SpaceLifeForm – one should be authorizing transactions, not sessions. Many (all?) banks get this – having logged on to their ‘Net banking’ services (the session), most transactions need to be separately authorized as well (the transaction).

Unfortunately, at the finest granularity, breaking tasks down into constituent (micro-)transactions means you rapidly end up needing to authorize a lot of low-level stuff to get things done, kind of like the difference between running su and sudo all the time. It is a death-knell for security because it is inconvenient.

Clive Robinson April 2, 2022 9:36 AM

@ Denton Scratch,

It’s not hard to obtain unbiased output from a TRNG;

You’ve not read what I’ve written, or misunderstood it. What do you think I meant by,

“With physical sources and instrumentation”

A “physical source” is an “entropy source” which could be a noise diode etc, it’s where the “True Random” or “True Entropy” comes from that has to be distilled out from the chaos, complexity, determinism and bias present at the “physical source” output.

If you want me to go into the details of why something even as simple as a reverse-biased diode and resistor noise source is problematical, let me know. But you will need to understand some electronics and graduate-level semiconductor theory and mathematics.

But as to your suggestion of,

just encrypt the output with AES.

That is what I named “Magic Pixie Dust” thinking back years ago when Intel were stupid enough to start doing it.

Sorry as you’ve stated it does not work, there is a lot more you have to do to get it to work. And even if you do get it to work it’s very fragile, so not a good solution.

To see why the first thing you need to realise is that, the only thing AES does is map one value to another value (under a key). It’s a glorified ECB or “one to one” “substitution cipher” with potentially a very large alphabet, but nothing more than that.

The second thing you need to realise is the result of that: if you put in biased, and thus by definition limited-range, input, the output will be just as biased and limited in range.

So it faithfully follows the “Garbage In Garbage Out”(GIGO) principle. Exactly the same holds for the “One Way Function”(OWF) algorithm in all hash and similar crypto functions.

To make use of the ECB “Random Oracle” advantage, you need to do something more. That is, put AES, or any other basic crypto OWF in a hash, into some kind of “Mode”.

More specifically it has to be some kind of “feedback” mode which also implicitly gives you “statefulness”.

Importantly, it’s this statefulness, when combined with the Random Oracle map, that gives you the “indistinguishable from randomness” you desire.

That is you take your entropy source output and somehow mix it with feedback from previous encryptions and put the result through the OWF.

It takes very little imagination to see how you can increase the amount of state, thus arrive at the notion of an “entropy pool”.

But you need to ask yourself a very important question,

What happens if the entropy source is broken and gives a monotonic input to the Crypto in feedback mode with state?

The answer is you get the equivalent of “counter mode”, which is a block cipher turned into a stream cipher, with all the weaknesses of a stream cipher…

It’s just one of the reasons I say,

“You must have access to the source”

Anything less is stupid. Also it’s suspicious as well.
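The degenerate case Clive describes, a broken monotonic source feeding a stateful feedback construction, can be sketched with a hash standing in for AES-in-a-mode (an assumption for illustration): give it a stuck “entropy” input and it collapses into a deterministic keystream, so two devices starting from the same state emit identical “random” bytes:

```python
import hashlib

def drbg(seed_state: bytes, entropy_input: bytes, n_blocks: int) -> bytes:
    """Toy feedback-mode generator: state = H(state || input)."""
    out, state = [], seed_state
    for _ in range(n_blocks):
        state = hashlib.sha256(state + entropy_input).digest()
        out.append(state)
    return b"".join(out)

# Broken source: always emits the same byte.
a = drbg(b"\x00" * 32, b"\x55", 4)
b = drbg(b"\x00" * 32, b"\x55", 4)
assert a == b  # looks random, but is a fully reproducible keystream
```

The output would sail through statistical tests, which is exactly why the tests must be applied to the raw source, not to the conditioned output.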

Denton Scratch April 2, 2022 12:36 PM


You’ve not read what I’ve written or misunderstood it.

You must confess, it was easily misunderstood. You had said:

“For a “True Random Number Generator”(TRNG) every bit output needs to be entirely unbiased. […] With physical sources and instrumentation [that] is at best difficult […].”

(I hope you’re OK with how I’ve edited that).

Most (all?) raw hardware produces biased streams; and I’m sure it’s nigh impossible to make a hardware device that produces a stream whose absence of bias is stable with e.g. temperature. So I consider just about any practical TRNG “with physical sources” to include a debiasing circuit.

SpaceLifeForm April 2, 2022 1:53 PM

@ lurker

I would definitely say it is more dangerous to stay logged in.

I believe most banks will automatically log you out after 10 minutes of inactivity.

That frees up resources on the server side, but also helps the user by minimizing the window for an Evil-Maid attack.

MarkH April 2, 2022 3:28 PM

@Denton Scratch:

1) For purposes of cryptography, bias can be defined as any tendency in data enabling better prediction than by pure chance.

For example, a prediction of heads for next outcome of an idealized coin toss will be correct 50% of the time. It’s a pretty high percentage, but useless in practice.

But if the coin is a little imbalanced, and the prediction probability can be raised to (for example) 53%, this could enable attackers to break through in real-world scenarios.

Applying a crypto function to somewhat biased data will usually improve the balance of 1s and 0s, but …

2) The need for low bias applies not only to individual bits, but also to arbitrary length sequences of bits.

MarkH April 2, 2022 3:33 PM

@Denton Scratch, continued:

For ease of visualization, imagine a random byte generator that doesn’t work very well — some bytes occur more often than others.

If each byte is passed through a randomized mapping, the non-uniform distribution of mapping outputs will match the non-uniform distribution of generator outputs.

That’s dangerous.
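This is easy to demonstrate: apply any fixed bijection (a toy stand-in for AES under a fixed key) to a biased byte source, and the histogram is merely relabelled, never flattened:

```python
import random
import collections

rng = random.Random(1)
perm = list(range(256))
rng.shuffle(perm)  # a fixed, secret bijection: our stand-in "cipher"

def biased_byte() -> int:
    # Broken generator: emits 0 half the time.
    return 0 if rng.random() < 0.5 else rng.randrange(256)

raw = [biased_byte() for _ in range(50_000)]
mapped = [perm[b] for b in raw]  # "encrypt" each sample

def hist(xs):
    return sorted(collections.Counter(xs).values())

assert hist(raw) == hist(mapped)  # identical bias, different labels
```

The over-represented value moves to some other, unknown, byte value, so a naive frequency test on the “encrypted” stream no longer knows where to look, yet an attacker who knows the mapping does.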

3) What’s even worse — and what Clive warns us about often, because it’s so serious — if (for example) those biased bytes are concatenated into 128-bit sequences, which are then encrypted by AES, the output will still be biased …

… but no practical test may be able to detect this bias. This is the danger inherent in Intel’s approach to on-chip RNG.

The bit generator could be terribly biased — by accident, or by collusion with the NSA — and users of the chips wouldn’t be able to detect the flaw.

However, attackers who know the generator bias would be able to exploit it to carry out successful attacks.

MarkH April 2, 2022 3:42 PM

@Denton Scratch, pt 3:

4) To formalize the notion of freedom from bias, two components may be precisely defined and practically measured:

• uniformity of distribution (all possible outputs are equally probable), and

• statistical independence (examination of previous or succeeding outputs doesn’t give a hint to the value of the present output).

5) For a bit generator which is functioning properly, and by design has inherently low predictability (such as a noise diode), measures of uniform distribution and statistical independence are sufficient: no other tests need to be applied.

However — big caveat! — these tests are not sufficient for the output if a conditioning function (such as encryption or hashing) has been applied to the hidden raw bit generator output.

MarkH April 2, 2022 3:53 PM

@Denton Scratch, pt. 4:

6) The above concepts are embodied in the NIST draft standards for hardware random number generators. For a generator to be acceptable,

a. a rationale must be provided as to why the generator is inherently unpredictable (when working correctly);

b. the generator must incorporate a continuous self-test function which examines raw generator outputs and signals that the generator is non-functional if the outputs become biased;

c. the uniform distribution and statistical independence tests must be applied directly to the raw hardware generator output.
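Condition (b) is typically met with simple online health tests; here is a sketch in the spirit of the repetition-count test from NIST SP 800-90B (the cutoff is chosen arbitrarily for illustration; real cutoffs are derived from the source’s assessed min-entropy), which trips when the raw source gets stuck:

```python
def repetition_count_test(samples, cutoff: int = 5) -> bool:
    """Return False (generator declared non-functional) if any raw sample
    repeats `cutoff` times in a row, which a healthy noise source should
    essentially never do."""
    run, last = 0, None
    for s in samples:
        run = run + 1 if s == last else 1
        last = s
        if run >= cutoff:
            return False
    return True

assert repetition_count_test([3, 7, 7, 1, 9, 2])  # healthy-looking stream
assert not repetition_count_test([4] * 10)        # stuck source is flagged
```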

MarkH April 2, 2022 3:58 PM

@Denton Scratch, pt. 5:

If all three of these conditions are met, only then can a conditioning function (such as a cryptographic hash) be applied to the raw data to remove residual bias.

For example, a design with low but non-trivial raw generator bias might collect 600 bits of raw output and apply them to a 256-bit hash function to yield 256 bits of output with ultra-low bias.

Note that by these criteria, Intel’s on-chip generator could never be accepted.
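The 600-bits-in, 256-bits-out example looks like this with SHA-256 as the conditioning function (the exact input/output ratio depends on the measured min-entropy of the raw source; `secrets.token_bytes` merely stands in for the hardware output here):

```python
import hashlib
import secrets

RAW_BITS = 600   # collected from the (slightly biased) hardware source
OUT_BITS = 256

def condition(raw: bytes) -> bytes:
    """Compress lightly-biased raw generator output into 256 bits
    with ultra-low residual bias."""
    assert len(raw) * 8 >= RAW_BITS
    return hashlib.sha256(raw).digest()

raw = secrets.token_bytes(RAW_BITS // 8)  # 75 bytes of "raw" output
out = condition(raw)
assert len(out) * 8 == OUT_BITS
```

Note the order of operations: the health tests of the previous point run on `raw` before conditioning; testing `out` instead would tell you nothing, for the reasons given above.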

lurker April 2, 2022 6:01 PM


I believe most banks will automatically log you out after 10 minutes of inactivity.

So far, so good. I believe I can complete most of my bank transactions and log out, in less than a quarter of that.

But what about other institutions, e.g. a certain well-known payment clearing house: the last communication I had from them indicated I was “permanently logged in on this trusted device”. I cannot now log in to cancel that setting because of my refusal to offer my mobile number as a side channel…

lurker April 2, 2022 6:15 PM

Split post because another aspect appears unspeakable, involving trusted devices, and permanent connections, and who engages in this behaviour…

JonKnowsNothing April 2, 2022 6:43 PM

@ lurker, @SpaceLifeForm, @All

re: banks will automatically log you out after 10 minutes…

I am not sure that Phone Banking Apps work the same way as Web Browser Based access.

Browser based may have some disconnect after n-minutes but there are those permanent session cookies that give you re-login.

From reports of people using phone banking apps, these also run some sort of perma-hooked link, but may not have an active direct link.

The equivalent of “instant on”.

The browser-based systems that I use that include an auto-logout after an n-minute period of no activity also put up a “close the browser page” warning, but you can punch right back in from the “logged out” page.

I don’t think log out means log out anymore….

SpaceLifeForm April 2, 2022 7:54 PM

@ JonKnowsNothing, lurker, ALL

re: “close the browser page warning”

Because they are trying to get you to get your browser to discard the session cookies.

Don’t use branded apps.

Use browser, and hope.

Use Cookie Autodelete plugin, and when the site tells you to close the window (or tab), just [redacted] do it.

Make sure you close ALL tabs for the site. You must close the last one to let Cookie Autodelete work.

Use Privacy Badger, Ublock Origin, Cookie Auto Delete.

Way less headaches.

I’m an OG. I downloaded Mosaic via ftp for a real xterm long ago. Then, later I beta tested Netscape.

I know where the skeletons lie. I highly recommend you use FF and the three plugins I mentioned above.

Seriously. For your security.

This is not that difficult.

Just [redacted] do it.

I hope I have made myself clear.

lurker April 2, 2022 8:26 PM

@SpaceLifeForm, “Use Cookie Autodelete plugin”

Must look again at configuring that. Perversely I want to keep some “good” cookies. Previous OS, previous browser, I had a script do that. I think cookies are now stored in SQLite just to make it harder for the user to manage their own life.

SpaceLifeForm April 2, 2022 9:37 PM

@ lurker

No configuration required for the 3 plugins I mentioned.

They just work.

Try them. You will probably never notice they are there, unless you pay attention.

Clive Robinson April 2, 2022 10:48 PM

Why developers must be proficient lock pickers to have a future.

@ JonKnowsNothing, lurker, SpaceLifeForm, ALL,

With regards: Banks will automatically log you out…

As an object lesson:


I don’t think log out means log out anymore….

With HTTP it never had any meaning; it was never meant to transact that way.

In fact HTTP was deliberately designed so it could not have meaning, because it was deliberately made “stateless” for good and proper reasons including those of security.

To enact log-in and log-out implies without doubt state and it’s change or transitions.

State requires storage, with a “statefull service” split across a server and a client, it requires state to be stored at both ends, and importantly,

“A reliable mechanism to ensure synchronisation”

It’s impossible for HTTP to provide such a reliable mechanism, as it has no inherent notion of “continuance”, be it of “state” or, more importantly, of “agency” and “communications”.

It’s the price you pay for “Packet Switching” not “Circuit Switching”. At the physical level of IP it is,

“inherently unreliable”

A “fire and forget” “Datagram Service”, which is always probabilistic and prone to unexpected events, non-delivery being just one of very many.

The same problem applies to the older Telnet protocol, and it was from the problems created there that the designers of HTTP knew it MUST BE “stateless” and WITHOUT “continuance”.

It was others, in their cupidity that tried to add state and continuance when neither server or browser, was designed for such.

A case in point, back in the early part of the 1990’s I started studying an MSc in Information Systems Design (MID). Unlike those who taught it and most who studied it I was by several years of training and practice at all levels a “Communications Engineer” of what the University recognised as “Advanced Standing” (effectively the same as already holding a Masters Degree, my intent being to progress it to a PhD in another knowledge domain).

So there we were one evening, at a demonstration of a “research tool” that was in effect an inverted text database of journals and other scholarly works with an advanced search tool and HTTP front end.

One of the system’s first “architects” was demonstrating it in a “Question and Answer” session. He started talking about the multi-server design that allowed what we would, years later, routinely call “load balancing”. Due to the way HTTP worked, load balancing, whilst not “trivial” at the communications levels, was simple at the HTTP level because HTTP was “stateless”.

That is, any request you as a user sent off from your browser could be answered by any one of many physical hosts masquerading as a single host to the browser.

However they had added “log-in/log-out” and accounting for the purposes of,

1, Billing
2, Controlling service provision.

After several issues they came up with a “token” that would be added to every GET/POST by a “logged-in user”. This would be not just for AuthN, but AuthZ purposes as well.

Thus “state” had not just reared its ugly head; like a hydra of the deep, those heads could pop up everywhere at any time and, importantly, in more than one place simultaneously.

He appeared shocked at how quickly I had worked it out when I said “You’ve based it on Kerberos haven’t you?” After a momentary pause he said that yes, the token was supposed to act like a Kerberos Server Ticket.

I then smiled and in a slightly amused voice said “You know that won’t work the way you want it to?” To which not just he but about everyone else in the room looked very surprised, and he said “Why do you think that?” to which I replied “A couple of reasons: split state and impossible-to-resolve deadlock”.

Sometimes it’s funny to see every face in a room look “puzzled beyond comprehension” and effectively dumbfounded, though most times not. As it’s a major indicator of very serious and often very expensive problems ahead, and any sane person would “make like a rat”[1].

For those that do not know, “deadlock” is when two things that should not happen at the same time nevertheless do, and the action to be taken can not be determined. It’s usually a very nonlinear and apparently random (actually chaotic) process, but the frequency of such events correlates to load and the number of inputs to a system (you can see it at work in the likes of the memory paging and interrupt algorithms used in the kernel of a multi-tasking OS, and sometimes hear it with disk thrashing).

Whilst better known these days, back in the early days of the 90’s it was effectively unknown to software developers at all levels, because the hardware designers had “designed it out” when developing the actual physical systems. Likewise the low-level OS designers and developers sorted it out in the kernel. So it was back then very rare to see it[2].

But load balancing with “state” gets right up in your face as a Web Developer. Because it means “multiplexing / switching” of hosts, and “statefulness” requires synchronisation that can be not just of a very high degree of complexity, it can be non-deterministic, and thus lead to unresolvable issues, hence “deadlock”. Often the trick tried is to put all the state on a single backend server, but as a solution it does not scale very well.
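The split-state problem described here can be sketched in a few lines of Python. This is a toy model, not any real load balancer; the class and token names are made up for illustration. Two backends behind a stateless round-robin balancer each keep sessions in their own memory, so a token minted by one host is meaningless to the other:

```python
# Toy model of split state behind a stateless load balancer.
# All names here are illustrative, not from any real system.
import itertools

class Backend:
    def __init__(self, name):
        self.name = name
        self.sessions = {}          # per-host session state: the root of the problem

    def login(self, user):
        token = f"tok-{user}"
        self.sessions[token] = user
        return token

    def handle(self, token):
        user = self.sessions.get(token)
        return f"{self.name}: ok, {user}" if user else f"{self.name}: 401 unknown token"

backends = [Backend("host-a"), Backend("host-b")]
rr = itertools.cycle(backends)      # stateless round-robin "load balancer"

token = next(rr).login("alice")     # login lands on host-a
print(next(rr).handle(token))       # next request lands on host-b: rejected
```

The usual workarounds (sticky sessions, a shared session store) just move the synchronisation problem rather than remove it.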

In a telephone system you “circuit switch”, which can be treated as a single event that can be made reliable even with split state[3]. Surprisingly to most, the resulting state diagram[3] is true for all sequential processes. It forms the most basic step in project management and it’s how all sequential programs on multitasking operating systems work. Though it’s mostly not seen, as the OS and kernel usually abstract away all but the “active state”[3:2] from programmers. Anyone who starts playing with signals or interprocess communications gets the veil lifted just a little bit, but by no means enough to understand all that’s required.

It’s one of the reasons most multi-threaded code is inefficient or flaky, and parallel processing, which is computing’s future, is seen as a too difficult or impossible task by mostly sequential-thinking programmers.

It’s also the root cause of why log-in / log-out causes so many Web-based service security issues: split state that is not synchronised, and ultimately can not be, due to the many issues of packet switching (which a number of network-based attacks exploit).

I’ve known this issue fairly intimately for now well into a sixth decade. It was how I originally did my “hacking” when it was a “good thing” and not in any way “illegal”. And it all started with insane curiosity about the world that gave rise to “lock picking” for fun long before I was a teenager. An activity where those multiple pins on springs followed their own sequential process in parallel with their adjacent neighbours, and could easily deadlock (bind) or fault (drop), forcing you to start again with a slightly different approach.

Like Matt Blaze, I think all security engineers, and in fact all programmers, should learn how to be proficient lock pickers. Then they would learn at a visceral level things that they would not intellectually dare approach. So they can not just overcome their trepidation, but learn how to build competently with split state and fundamental unreliability, thus more securely.

[1] Old English saying: “To make like a rat on a sinking ship”. Sailors had noticed long ago that rats could be a “portent of doom”, in that they would sometimes unaccountably scuttle off a ship rather than on, this happening in port shortly before a journey on which a wooden ship foundered. It actually turns out that rats are fairly smart and possessed of good hearing, very close to the ship’s “frames” (structural beams) they scuttled along. Evolutionarily they had learned that certain noise types in trees were not good news. Thus the rats could hear “sick ship syndrome”, where the frames had rotted in places where it could not be seen, and could “Get the heck out of Dodge” before the inevitable collapse of the frames and the ship folding up like a soggy cardboard box, or worse in high seas.

[2] However “telecommunications engineers” had found out the hard way in the early days of “telegraphs” and later “telephones” when the idea of “shared” or “multiplexed” lines etc came in and “switching” was required that a reliable process was required[3].

[3] Circuit Switching can be made a “stateful process” and is either inactive or active, and active has four basic states,

1, Setup
2, Active
3, Clear down
4, Fault.

The normal transitions are the synchronous order 1,2,3 or asynchronous X,4,3 where X is any of the synchronous states 1,2,3. The asynchronous trigger event causes state 4, which means that state 3, being the terminator state, must at all times be reliable. If not, trouble will follow as night follows day[4].

[4] For a “clear down” or “terminator state” process[3:3] to be reliable you generally need things to be “atomic” which precludes split state and unreliable communications. Anyone who has played with any database updates at the storage level gets to know some of the problems and imperfect solutions (but rarely the security ones).
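The four-state circuit model in [3] can be coded as a tiny transition table. This is a rough sketch only; the state names follow the list above and nothing here comes from any real telephony standard:

```python
# Toy state machine for the circuit model: Setup -> Active -> Clear down,
# with Fault reachable asynchronously and Clear down as the only terminator.
SETUP, ACTIVE, CLEAR_DOWN, FAULT = "setup", "active", "clear down", "fault"

ALLOWED = {
    SETUP: {ACTIVE, FAULT},
    ACTIVE: {CLEAR_DOWN, FAULT},
    FAULT: {CLEAR_DOWN},        # a fault must still be reliably cleared down
    CLEAR_DOWN: set(),          # terminator state: no transitions out
}

class Circuit:
    def __init__(self):
        self.state = SETUP

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

c = Circuit()
c.transition(ACTIVE)
c.transition(FAULT)        # asynchronous fault during the active state
c.transition(CLEAR_DOWN)   # the terminator must always be reachable
```

The point of making CLEAR_DOWN reachable from every state, including FAULT, is exactly the reliability requirement in [4].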

MarkH April 2, 2022 10:55 PM

@Denton Scratch:

To my mind, the word “bias” is too vague for analytic purposes, which is why I brought up the much better defined concepts of uniform distribution and statistical independence.

Establishing and confirming those two properties is far more useful than trying to define “bias”.

Autocorrelations at various spacings are inevitable in random data. If autocorrelation at all intervals does not converge toward zero as the amount of sampled output increases, then there is a lack of statistical independence: that’s a bad thing.
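A quick way to see this convergence in practice is the throwaway sketch below. It uses Python’s `random` module, which is a PRNG rather than a true entropy source, so it only illustrates the statistical behaviour, not a real generator test:

```python
# For independent samples, autocorrelation at every nonzero lag should
# shrink toward zero as the sample size grows.
import random

def autocorr(xs, lag):
    n = len(xs) - lag
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n)) / n
    return cov / var

random.seed(0)
data = [random.random() for _ in range(100_000)]
for lag in (1, 2, 10):
    print(lag, round(autocorr(data, lag), 4))   # all close to 0
```

With 100,000 samples the values sit within a few thousandths of zero; persistent nonzero autocorrelation at any lag would indicate a lack of statistical independence.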

Clive Robinson April 2, 2022 11:51 PM

Why developers must be proficient lock pickers…

@JonKnowsNothing, lurker, SpaceLifeForm, All,

Due to “being held in moderation”, and the moderation being very intermittent these days, a long reply I made has not appeared and may not for some time…

So a brief synopsis pending the post getting successfully moderated.

Re : log-in / log-out on HTTP having no meaning and importantly not being able to ever be reliable.

Back in times past it was mainframes and dumb serial-line mechanical teletypes. Later, at the end of the 1970’s, the first usable Personal Computers appeared, with the Apple ][ arguably being the most successful and widely used. In effect both of these types of computer system were “central processing” units, which meant just about everything was “local”, “single user”, “single task”, and could mostly be assumed to be effectively “atomic” as far as the users and software developers were concerned.

Which meant neither “state” nor “unreliable” communications were an issue.

But the end of the 1960’s had seen the start of “multi-tasking”, “multi-user” use, and local and remote “networking” on the “machines in the middle”: the minicomputers like the early Digital PDP machines that had 16 and 32 bit architectures.

This brought both “state” and “communications” into importance.

Many issues arose and different solutions tried but it became clear that,

1, For efficiency communications should be multiplexed / switched.
2, The switching needed to be not by “circuit” but by “packet”.
3, Packet switching was fundamentally unreliable.
4, Packet switching was fundamentally insecure.
5, Packet switching needed “state” at both ends of the unreliable communications.
6, This split state needed to be kept synchronized.

Issues 5 & 6 are the cause of the problem, and they can not actually be solved, as the users of Telnet, which had both problems, found, with significant issues not just for the users’ client PCs but the servers as well.

Which is why HTTP was originally designed not to have “state”. In essence to the user the Web browser originally appeared “circuit switched” not “packet switched” like Telnet.

Interactive HTTP sessions are thus plagued by the same underlying issues Telnet had.

With the result interactive HTTP sessions,

1, Will give users unreliable behaviour.
2, There will be irresolvable security issues.

To see why hopefully my original comment will come out of moderation soon with the answers along with why all software developers and ICTsec practitioners should learn to lock pick…

lurker April 3, 2022 12:49 AM

@Clive Robinson
thanks for the reminder of the basics; it reminds me of what I was saying when Al Gore invented the internet: why didn’t someone tell him the first “t” in http stands for text. Suddenly everything was shoved through port 80, when we already had good protocols for most of it. Still, you might think after 30 years with all the brightest brains working on it, they might have sorted it. Oh, you say they’re insisting on using the solution to an unsolvable puzzle…

Clive Robinson April 3, 2022 8:44 AM

@ lurker,

Oh, you say they’re insisting on using the solution to an unsolvable puzzle

I see you are taking @SpaceLifeForm’s advice at the end there and “joining the dots” 😉

@ ALL,

Remember, in the real world “unsolvable” does not mean “unusable”; all it needs is for the problems to be on average sufficiently rare to be traditionally called “Acts of God” and get written as such into agreement terms.

Thus pass the problems onto someone else, in a process we call “Externalising Risk”.

The fact that sticking with an “unsolvable” problem in one domain might also allow you to keep reaping benefits in another, much less overt, domain…

Of course, would not be a reason to hold back on bringing new solutions that are considerably better to the fore 😉

Maybe looking at who effectively funds,

1, HTML and http.
2, Server development / usage.
3, Browser development.

Might be an interesting exercise.

AlanS April 3, 2022 11:30 AM

@Clive Robinson @Nick Alcock

Well, as noted already by Nick, there is no shared secret. Every login involves a unique encrypted challenge response using public keys.

The secret never leaves the security key, but say an attacker manages to clone a FIDO security key; then you have to deal with the signature counter. As soon as one of the keys is used, they are no longer clones and the cloning will be detected. And you also have to deal with the fact that a lot of services also require the user to enter a PIN. Your cloned key may have very limited mileage or no mileage at all.
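The signature-counter check can be sketched roughly as follows. This is a simplification for illustration only; the class and field names are made up, not taken from the FIDO/WebAuthn specification:

```python
# Sketch of signature-counter clone detection: the relying party remembers
# the highest counter seen per credential; a cloned key replaying an old
# counter value is flagged. Names are illustrative, not from the spec.
class RelyingParty:
    def __init__(self):
        self.last_counter = {}      # credential id -> highest counter seen

    def verify_assertion(self, cred_id, counter):
        last = self.last_counter.get(cred_id, 0)
        if counter <= last:
            return "possible clone: counter did not increase"
        self.last_counter[cred_id] = counter
        return "ok"

rp = RelyingParty()
print(rp.verify_assertion("key-1", 1))   # ok
print(rp.verify_assertion("key-1", 2))   # ok
print(rp.verify_assertion("key-1", 2))   # flagged: stale counter reused
```

Once either the genuine key or the clone signs, the counters diverge and the laggard gives itself away on its next use.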

Whatever the potential weaknesses of FIDO security keys, we may be missing what I think was one of the points of Bruce’s original post: that some forms of MFA are less susceptible to attack than others. Codes via SMS or voice calls, apps that supply one-time codes, and push notifications are all readily exploitable using well-known weaknesses and widely available tools. The cost to the attacker is low. The difficulty and cost is much higher with a FIDO security key. So whatever weaknesses it might have, the old adage about escaping hungry bears applies.

Another point is that as with anything else there is an evolution of attacks and defenses. I imagine we’ll see more evolution of the FIDO spec and implementation as attacks evolve to exploit whatever weaknesses emerge. Other commonly adopted approaches to 2FA appear to be evolutionary dead-ends.


At Bank of America, which is one of the few US banks to have adopted FIDO2, a customer has to use the key at login, to add transfer recipients, and to authorize transfers. The customer also has to enter a PIN for the security key on each occasion as well. Weirdly, they still support SMS and pitch using FIDO2 security keys as an alternative means of adding extra security if you don’t have a mobile number or means to receive SMS texts. There’s no discussion of the relative merits of using a security key instead of SMS.

JonKnowsNothing April 3, 2022 2:48 PM


  • Sending one or two prompts per day. This method often attracts less attention, but “there is still a good chance the target will accept the MFA request.”

  • Calling the target, pretending to be part of the company, and telling the target they need to send an MFA request as part of a company process.


  • … [FIDO2] not susceptible to these attacks, because they are tied to a physical computer.

These attacks still work, even if they are tied to a specific device.

The attackers know that some systems are tied to a device. They may learn this by repeated contacts with the Mark. Once the attackers have a layout of how the Mark has set up the system, then they go for the Loots.

RL anecdote, tl;dr

Acquaintances get repeated calls from Attackers. Many times a day. Some of the family members are Marks and get repeated hits.

The Mark(s) either enjoy leading on the Attackers until the Attackers hang up; or there may be some cognitive impairments that are being exploited and the Mark may give the attackers what they want.

The not-Marks try to limit the attack surface and intercept the calls, but it’s not possible to block them 100% of the time – there just isn’t enough BLOCK THIS CALL space to prevent a breakthrough.

Once the Mark is captivated, the scenario plays out, including logging in to the authentication device(s).

The same scenario plays out for Charity Drives, Police Support Contributions, Religious Donations (BIG), as well as just the Smelly Shoe types.


Many rear view mirror years ago…

An elderly person used to take a taxi to the local Bar & Grill. They would eat lunch and dinner and spend the afternoon boozing it up. They would take a taxi home.

One taxi driver had the elderly person call their taxi for pickup and return. This continued for some years, round trip to the Bar & Grill; then home again.

Later it was found that the elderly person gave the taxi driver “blank checks”. The taxi person filled in whatever they wanted. At first it was normal amounts for that time; then later, as the taxi person learned more about the elderly person’s finances, the amounts on the checks increased. A round trip of 1 mile cost ~$27,000 USD, 7 days a week.

When the account went belly up, the relatives started to hunt down the money and it was found in the pockets of the taxi person. Unfortunately, the money had been spent and the elderly person recovered little or none.

It wasn’t illegal. The elderly person gave the blank checks and was grateful for the rides, regardless of the cost.

The moral is:
The Relatives lost their expected inheritance because they couldn’t be bothered driving the elderly person to and from the Bar & Grill. A 10 min ride, 1 mile cost them – LOTS.

Attackers take advantage of this same scenario every day.

Clive Robinson April 3, 2022 5:29 PM

@ AlanS,

Well, as noted already by Nick, there is no shared secret.

You might want to stop a moment and consider that statement…

To authenticate you have to somehow prove you are who you say you are.

To do that you need a minimum of two things,

1, A unique identifier.
2, A proof.

Whilst the unique identifier can be public, the proof has to be kept a secret between the first (user) and second (server) parties in the authentication. If the proof becomes public or can be guessed then anyone who knows it can impersonate the first party.

This proof, be it a password, passphrase etc is actually a “shared secret”.

However people also use PubKeys and a corresponding Private Key. It’s a little harder to understand, but fundamentally both the PubKey and the PriKey are based on a shared secret, which is the two primes P and Q. It is an unproven trick of mathematics that hides P and Q from everyone but the first party, but there are known tricks by which P and Q can be found from the PubKey if badly selected. So once either P or Q is known to anyone other than the first party, they can find the other, thus know the “shared secret” and build a copy of the Private Key, thus impersonate the first (user) party…
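The point can be illustrated with deliberately tiny textbook RSA numbers. This is a toy only; real keys use primes hundreds of digits long, and the numbers here are the classic small example, not anything secure:

```python
# Toy RSA: if either prime of the modulus leaks, the private key
# can be rebuilt from the public key alone.
p, q = 61, 53
n, e = p * q, 17                    # public key (n, e)

# Attacker who recovers p can derive q and the private exponent d:
q_found = n // p
phi = (p - 1) * (q_found - 1)
d = pow(e, -1, phi)                 # modular inverse (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg     # the rebuilt key decrypts
```

Keeping P and Q hard to recover is exactly the “unproven trick of mathematics” the comment refers to: factoring n is believed, but not proven, to be hard.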

That is how all factors used in authentication work: you have a unique identifier and some kind of proof as authentication. In some cases like bio-metrics the proof is very weak; in others, such as long pass phrases or Private Keys, it can be very strong.

But given the resources ALL “shared secrets” can be brute forced, because in every case they have to be converted to “information” to be sent across the Shannon Channel that underpins every communication path/medium. This puts finite bounds on the information entropy.

Thus other precautions have to be added, one of which is “time”, another “attempts”. However they can fail if a third party can get sufficient access to the server to duplicate its function but with the other precautions locked out.

This is what used to happen with passwords,

First, with “plaintext password files” getting a copy gave the shared secret directly.

Second, with “encrypted files”, either the key was found or, if set up as a OWF (one-way function), a dictionary attack was used.

Third, with a “salt” added people extended a dictionary attack by building “Rainbow tables”.
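The third case is why per-user salts defeat precomputed tables: the same password stores differently for every user, so a single table no longer covers them all. A minimal sketch using Python’s standard `hashlib` (the iteration count and salt size are illustrative, not a recommendation):

```python
# A random per-user salt makes identical passwords hash differently,
# so one precomputed ("rainbow") table can't cover all users.
import hashlib, os

def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

salt_a, hash_a = hash_password("hunter2")
salt_b, hash_b = hash_password("hunter2")
assert hash_a != hash_b     # same password, different stored hashes

# Verification re-uses the stored salt:
_, again = hash_password("hunter2", salt_a)
assert again == hash_a
```

The attacker is forced back to a per-salt dictionary attack, which the slow key-derivation function then makes expensive.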

This is what happens with proofs / shared secrets. All those you claim for FIDO can be attacked or more easily bypassed in various ways. Because that is the way of “robust systems”, when an inevitable error happens and they will, then there is a recovery mechanism that can be exploited in some way, most often by some form of “social engineering”.

Show me a system that you claim is not vulnerable, and you are either mistaken or are showing me a failed system. Because errors and exceptions that are unavoidable will exist within any practical system where information is communicated.

Mr C April 4, 2022 2:41 AM

Last I checked, FIDO2 wasn’t a form of “two-factor authentication,” but rather a “replace the password with a dongle scheme.”

For actual strong dongle-based 2FA you need to go back to U2F.

Clive Robinson April 4, 2022 4:18 AM

@ SpaceLifeForm,

I know you know all of the details behind this,

‘Sued For “Hacking” With HTML’

AKA the ‘F12 Scandal’. But courtesy of the youngster, this vid,

is both simple and amusing, and I think even “The Gov’ner” would do a Homer Simpson “D’oh” face palm. Even if it is only with all the other strange things he does when privately watching videos and the like.

Clive Robinson April 4, 2022 4:47 AM

@ Mr C, ALL,

Last I checked, FIDO2 wasn’t a form of “two-factor authentication,” but rather a “replace the password with a dongle scheme.”

Ahhh that depends on who you ask…

Remember that a company argued, and an auditor accepted for his checklist, that “username” and “password” were two separate “something you know” factors…

As I understand some of the blurb about FIDO2 devices, some argue,

1, The device is something you own.
2, The device PIN is something you know.

The problem is in reality the PIN is a single factor to the device, THEN the device is a single factor to the server. They form a daisy chain of factors.

That is the factors are being used in series not parallel. As the serial use is not “atomic” you have a chain of factors where “the weakest link” security rule applies.

But there is a similar argument that highlights the problem in a different way.

Imagine a Smart device that like most of them are geospatially (GPS) and temporally (RTC) aware.

If you have a security App that needs the Smart device to be “in a given place at a given time” for you to use it are “time” and “place” independent sub factors of “Something you know?”

From my point of view “potentially yes” but you also need to consider,

1, GPS can be “replay attacked”.
2, As GPS is seen as the most accurate clock a Smart Device can access, it does get used to “discipline the RTC”.

Which means that potentially when seen through GPS they are not independent. That is GPS is a “single point of attack” in the security model.

There are several other arguments, but… yes,

“We really need to toughen up the definition of Multi-factor Authentication.”

Clive Robinson April 4, 2022 6:46 AM

@ Bruce, the usual suspects and ALL who have had their coffee,

This is one to show people who are starting out, or have an excess of “we can do” attitude[1],

Not because it involves Scotland and Railways 😉

It’s actually about “engineering” from a “we have a problem” down to solutions that actually work and why,

“Old can still be better than new.”

In this case, why a Victorian-era mechanical system has not been replaced, as all the modern tech is, well, “not suitable” for various reasons.

That is it is also about,

“Real world trade-offs”

And that applies to not just engineering but software and security an much else besides.

Less obvious is that it’s also an indirect commentary on the sociological aspects of how we use what we make in the face of the uncertainty of what we choose to call “Natural”, but is actually “entropy at work”.

[1] A “we/you can/will do” mentality is very common in certain places and personality types, who basically either lack the ability to learn, or have not yet learnt, that such a mentality can lead to disasters and billions in losses. And to my certain knowledge it has, on multiple occasions; the ones most might remember are the Space Shuttles and Deepwater Horizon drilling. Unfortunately it’s usually not the people with the “will do” attitude problem that suffer… They are often rewarded for the disasters they leave in their wake.

Clive Robinson April 4, 2022 11:00 AM

@ ALL,

In my “stand in” post above I did not go into too much detail about load balancers and why HTTP can be such a pain with them…

Turns out to save me the trouble, somebody just has,

If, when reading, instead of the flames and smoke of a “brain on fire” your brain gives fog or mist, especially red mist, can I suggest a trip to the kitchen for some strange brew or equivalent 😉

If however it’s a breeze, can I suggest learning how to multiply floating point numbers using Roman Numerals as a little lite brain train 0:)

AlanS April 25, 2022 8:19 AM


“Show me a system that you claim is not vulnerable, and you are either mistaken or are showing me a failed system. Because errors and exceptions that are unavoidable will exist within any practical system where information is communicated.”

I think you need to reread my post as I made no such claim. In fact, my argument assumed the system was vulnerable. You need to read up on hungry bears.

Clive Robinson April 25, 2022 9:15 AM

@ AlanS,

I think you need to reread my post as I made no such claim.

Did you or did you not say,

Well, as noted already by Nick, there is no shared secret.

To which I pointed out there was a shared secret via the two primes P and Q, even though obscured in the PubKey.

But you went on to say,

The secret never leaves the security key…

What secret are you talking about?

I’ve already shown that at least one secret has to be shared in some manner, or there can be no proof to authenticate against…

You then go on to talk about a signature counter; is that the “secret” you are referring to? Because, to do what you claim of it, it too must be shared…

Every proof-of-ID protocol we use needs a shared secret of some form; it may not be immediately obvious, but it is required.

If you think about it, you can not prove who you claim to be unless you do have a shared secret, because otherwise someone else can impersonate you.

If there is no shared secret / shared hidden knowledge, or whatever else you want to call the “root of trust”, then proof is not possible, so it defaults to the old “I’m Spartacus” claim problem, hence my closing paragraph.

As a matter of curiosity, why wait over three weeks?
