Critical Vulnerability in OpenSSL

There are no details yet, but it’s really important that you patch OpenSSL 3.x when the new version comes out on Tuesday.

How bad is “Critical”? According to OpenSSL, an issue of critical severity affects common configurations and is also likely exploitable.

It’s likely to be abused to disclose server memory contents, potentially revealing user details, and could be exploited remotely to compromise server private keys or execute code remotely. In other words, pretty much everything you don’t want happening on your production systems.
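As a quick first inventory step, you can ask what OpenSSL your runtime is linked against. A minimal sketch in Python (the helper name is invented for illustration; the advisory covers OpenSSL 3.0.x, and the fix later shipped as 3.0.7):

```python
import ssl

def in_affected_range(version_info: tuple) -> bool:
    """True if an OpenSSL version tuple falls in the 3.0.0-3.0.6
    range covered by the advisory (the fix later shipped as 3.0.7)."""
    major, minor, patch = version_info[:3]
    return (major, minor) == (3, 0) and patch < 7

# What is this interpreter itself linked against?
print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 3.0.2 15 Mar 2022"
print(in_affected_range(ssl.OPENSSL_VERSION_INFO))
```

The same check has to be repeated per application: statically linked binaries and vendored copies of OpenSSL will not show up in one interpreter’s answer.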

Slashdot thread.

Posted on October 28, 2022 at 8:12 AM • 9 Comments


Clive Robinson October 28, 2022 8:19 PM

@ ALL,

Re : There are no details yet.

So one “YOU HAVE TO ACT ON” if you run SSL 3.x… Because the last SSL critical warnings were nasty nasty nasty, so you don’t want to be on the list of entities “Shish kebabed” by this one.

But too little is currently “public” to say what is or is not vulnerable, or why, and thus how to mitigate it.

My advice, based on what I know from past experience, is that “mitigation in advance” is rarely counter-productive, if done judiciously.

But like most, you probably do not have sufficient technical information available to do any mitigation with surgical precision. Thus any systems with SSL 3.x on them, even if you do not think it’s in use, need to be put on a list to be mitigated.

But that just brings up the “How to mitigate?” question, to which there are only really two answers currently,

1, Not being communications connected.
2, Being watched critically 24×7 for any kind of unusual activity.

My reasoning would be that the first, for a few days, might be the only option for most systems, with the second reserved only for systems that have to be connected.

Oh and remember, by “connected” I do not just mean “the Internet”, because if this vulnerability is in use, then it can also be used on any and all networks that an attacker, internal or external, could have reached…

So fingers crossed folks, and let’s hope the law of,

“Target Rich Environment Probability”

Rolls the dice favourably for you.

Reba October 29, 2022 12:19 AM

@ Clive Robinson,

So one “YOU HAVE TO ACT ON” if you run SSL 3.x…

That’s OpenSSL 3.x, not to be confused with the deprecated SSL 3.0 protocol (from 1996) that people should’ve stopped running long ago.

But that just brings up the “How to mitigate?” question, to which there are only really two answers currently,

You missed one that seems obvious to me: switching to another implementation. Perhaps you intentionally omitted it—it could easily be counter-productive, as you hint. But there are several other choices of implementation that could be used in principle, including the long-term-supported OpenSSL 1.x.

There’s also the general idea of privilege separation, which should be used as much as possible. For example, there’s no good reason to run an ASN.1 parser in any context with access to private keys, or to have two TLS clients sharing heap memory (Heartbleed should’ve made that obvious). That said, there’s a good chance privsep would do nothing against this specific problem.
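The privilege-separation idea can be sketched in a few lines: run the untrusted parser in a child process that never loads the key, so a memory-disclosure bug in the parser has nothing sensitive to leak. This is a toy illustration, not a hardened design; the function names and the stand-in “parse” step are invented for the example.

```python
from multiprocessing import Process, Pipe

def parser_worker(conn):
    # Untrusted parsing runs here. This process never loads the private
    # key, so even an arbitrary memory-disclosure bug cannot leak it.
    blob = conn.recv()
    conn.send(len(blob))  # stand-in for "parse and return a result"
    conn.close()

def handle_untrusted(blob: bytes) -> int:
    private_key = b"key material stays in this process only"  # never sent
    parent_end, child_end = Pipe()
    worker = Process(target=parser_worker, args=(child_end,))
    worker.start()
    parent_end.send(blob)
    result = parent_end.recv()
    worker.join()
    return result

if __name__ == "__main__":
    print(handle_untrusted(b"\x30\x82\x01\x00"))  # a fake ASN.1 header
```

A real design would also drop privileges in the worker and restrict what it can send back, but even this minimal split removes the “shared heap” failure mode that Heartbleed exploited.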

Clive Robinson October 29, 2022 5:23 AM

@ Reba, ALL,

“You missed one that seems obvious to me: switching to another implementation.”

Yes it’s possible, but…

In theory, come Nov 1st, there will be a seamless series of patches/upgrades, so time, or more correctly the shortness of it, comes into play.

Based on that, you can make an argument (and some already have) that the time window is too short for attackers to find, reverse, and use the vulnerability.

Which brings in the question of available resources within the time frame and how best to deploy them.

If the “they cannot find / use” argument is true, and I’m not saying it is, then one option is the,

“Do nothing other than watch intently and pull the switch if things look hinky.”

It falls in line with the notion of a “target rich environment”: the number of potential attackers is very small, hence the probability of being attacked, for most, in the short time frame is low.

However, it was not that long ago, about ten months, that the “Online World” apparently very nearly died due to Log4j and the resulting Log4Shell vulnerability.

Of which the following was said[1],

“Log4Shell, an internet vulnerability that affects millions of computers, involves an obscure but nearly ubiquitous piece of software, Log4j. The software is used to record all manner of activities that go on under the hood in a wide range of computer systems.”

You could take “Log4Shell” out of that and replace it with “Open4Hell” or whatever the new vulnerability will be named. And likewise replace “Log4j” with “OpenSSL 3.0”, oh and replace “to record” with just a simple “for”, and it would be right on the button (so that’s one “press release intro” written).

Now “if”, and it’s a very big “if”, people have done the “software inventories”, “dependency graphs” and the like that got pushed out from the Oval Office, then in theory they could pull out OpenSSL 3.x and drop in something else.

But really? One thing developers are known to do is “abuse interfaces”, and sadly, in many cases, just “cut-n-paste example code” from the web or similar is not the least of what they do…

As a result, the warning of,

“No changes to existing public API functions and data are permitted.”

is as much applicable to developers using OpenSSL as it is to the developers of the OpenSSL package. Many still have quite bitter memories of what happened in the history of SSL, the sense of betrayal that came out, and the long months spent trying to sort out the resulting mess.

So whilst I’m not averse to people doing a “plug-n-play swap” to a different package (but not 1.1.1r[2]), the questions arise of,

1, Can they?
2, Correctly tested?
3, In the time frame?

And I’ll be honest, my assessment is that in such a short time frame the answer is “no” for just about everyone. Thus focusing resources where they can be most effective gives you broadly the two mitigation strategies I outlined.

But I’m in no way saying that a modular “plug-n-play” framework that allows fast, safe, and secure swaps is a bad idea. I’ve been suggesting for most of this century that NIST get its backside into gear and come up with such a set of standards. Because “embedded systems” in “humans and infrastructure” that have half-century or more “in service” lifetimes, and up to ten-year development times, are actually normal. So there will be people with medical electronics implanted in them towards the end of this century, which will be “insecure”… Likewise the water, energy, road, and manufacturing infrastructure we mostly unknowingly rely on will be insecure and infeasible to upgrade. Both humans and infrastructure are currently not upgradable in any realistic time frame, so unless we have the standards soon, not only us but our children and grandchildren are going to be in a whole world of hurt…
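A “plug-n-play” framework of the kind described above boils down to programming against a stable interface and registering interchangeable backends behind it. A minimal sketch (all names here are invented for illustration; a real standard would also have to pin down wire formats, negotiation, and key lifecycle):

```python
import hashlib
from abc import ABC, abstractmethod

class CryptoBackend(ABC):
    """The stable interface applications code against."""
    @abstractmethod
    def digest(self, data: bytes) -> bytes: ...

class StdlibSha256Backend(CryptoBackend):
    """One swappable implementation; another vendor's could replace it."""
    def digest(self, data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

_BACKENDS: dict = {}

def register(name: str, backend: CryptoBackend) -> None:
    _BACKENDS[name] = backend

def get_backend(name: str) -> CryptoBackend:
    # Swapping implementations becomes a one-line configuration change,
    # not a rebuild of every dependent product.
    return _BACKENDS[name]

register("stdlib-sha256", StdlibSha256Backend())
```

The point of the pattern is that an orphaned or vulnerable backend can be retired by re-pointing the registry, which is exactly what a long-lived embedded product needs.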

[1] Log4Shell was perhaps a little more urgent, in that it came into major world view with a “Proof of Concept”, apparently much to the ire of parts of the Chinese Government. But it also caused jokes about the US President having to be woken from his Dracula-like sleep in the crypt of the White House due to the severity of the problem.

[2] On the OpenSSL front page at the time of this writing, their latest news item, dated “12-Oct-2022”, is the bland,

“OpenSSL 3.0.6 and 1.1.1r are withdrawn. New releases will be created in due course.”

So if the word “Comforting” drifts sarcastically into view…

JonKnowsNothing October 29, 2022 6:46 AM

@Reba, @Clive, ALL

re: [old] protocol that people should’ve stopped running long ago.

While it may be safe to say YES to this, consider what this statement really implies.

SHOULD HAVE is a big problem with the tech industry overall. In many industries it is also true, but the tech industry has a bigger impact with SHOULD HAVE.

So WHY? Why is SHOULD HAVE an issue at all? Why is it not ALREADY DONE? Why is software and hardware left in any situation where SHOULD HAVE can exist?

The push to make the End User responsible for the code and systems designs and implementations is (often, sometimes, always) driven by Planned Design Obsolescence. Tech Companies and Others cannot continue to make resale profits if they don’t wreck their own products with SHOULD HAVEs.

As long as there is a profit to be made from SHOULD HAVEs, there isn’t going to be any reason to think changing one protocol for another or one hardware platform to another will yield anything different. The point of failure may change, but overall impacts will be similar if not identical.

It comes down to SHOULD HAVE as a default state in the design and development. One system fails after another after another… cascading failures.

Keymaker: But like all systems it has a weakness. The system is based on the rules of a building. One system built on another.

Keymaker: If one fails, so must the other.

The MATRIX Series

ResearcherZero October 30, 2022 2:59 AM


The problem has been mostly ignored for decades.

Many problems are very hard to solve, as the systems existed before any thought was given to security. As a result, solutions are being added to try and deal with complex problems while avoiding shutting down operations, maintaining uptime, and meeting supply demands to customers.

However it still remains a blind-spot for many companies.

“Aurubis AG, Europe’s biggest copper producer, was hit by a cyberattack overnight that it said appeared to be part of a wider attack on the metals and mining industry”.

Having insider information about a mine’s pricing data can help a competitor hijack a sales deal by outbidding the competition, or a buyer negotiate a better purchase price.

Industrial systems still use Win95/98 in some cases, and many of those networks are connected.

PLCs can also have hardcoded cryptographic keys, allowing decryption of the communication between PLCs and an EWS (engineering workstation).

“A PLC can not only receive data from a monitored device but can send data to another control device, where another action can be initiated automatically or by a human operator in the control room. Sensors are the starting points for monitoring and sending data about the physical process to the control systems such as a PLC.”

“Despite the lack of any cyber security, these devices are the 100% trusted input to OT networks and manual operation. Moreover, process sensors have no cyber forensics.”

“Recently, a sensor monitoring project discovered that process sensors were not working yet the HMI displays showed the process appeared to be working properly.”

Process sensors have no inherent cyber security and yet have hardware backdoors directly to the Internet.

Australian energy companies are failing to put in place appropriate measures to prevent attacks on their critical industrial operating systems.

A new survey has found that many Australian mining businesses are unprepared for and are failing to put appropriate measures in place to prevent cyber security attacks.

The top 40 miners have much work to do in the cyber-security space.

There has been no legislation covering many areas, mostly voluntary compliance, with some legislation for critical infrastructure only now beginning to come into effect.

Boris October 31, 2022 5:00 AM

Once we received the notification, we did some due diligence on whether we were exposed to OpenSSL 3.0 or not. Ironically, the only instances we found were on recent Kali machines (OpenSSL 3.0.5) and a few Ubuntu boxes used by developers.

Most production systems are running flavours of 1.1.1, which itself goes EOL in September 2023.

Disturbingly, many vendors (particularly of embedded devices) are still supplying OpenSSL 1.0.x versions.

So start your transition plans for next year. It will be busy.
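The versions Boris lists sort into three buckets, which is all a first-pass transition plan needs. A toy triage helper (the function name and wording of the categories are invented for illustration; the dates reflect what the comment states):

```python
def triage(version: str) -> str:
    """Classify an OpenSSL version string for patch planning.

    Thresholds reflect the situation described above: 3.0.x needs the
    imminent fix, 1.1.1 goes EOL in September 2023, and 1.0.x is
    already out of support.
    """
    if version.startswith("3.0"):
        return "patch when the fixed release ships"
    if version.startswith("1.1.1"):
        return "plan transition before September 2023 EOL"
    return "unsupported: replace as soon as possible"

print(triage("3.0.5"))
print(triage("1.1.1q"))
print(triage("1.0.2u"))
```

Even a crude bucketing like this makes the vendor problem visible: embedded devices shipping 1.0.x land straight in the “replace” bucket.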

Gert-Jan October 31, 2022 6:50 AM

SHOULD HAVE is a big problem with the tech industry overall

This is true.

Many problems are very hard to solve, as the systems existed before any thought was given to security.

This is true too.

But the tech industry is frustrating itself. The serious security issues that cannot be fixed without breaking the interface, whose fixes SHOULD HAVE been adopted, get mixed in with the mere “SHOULD HAVE updated” cases in the “abandoned” category.

By the abandoned category I mean that the producer created a new, backwards-incompatible version and stopped supporting the older one. There’s money to be made in a newer version that has some extra features. It’s hard to make money on “simply” supporting an existing product.

There should be a new standard for software that is committed to being future-proof, in the sense that its goal is to keep working as expected beyond a 5-year horizon, and to never create a newer version that is backwards incompatible.

SpaceLifeForm November 1, 2022 3:00 PM

It is not as critical as first thought.



Clive Robinson November 1, 2022 8:11 PM

@ Gert-Jan, ALL,

Re : Standards as cyber-weapons.

You are only looking at one side of one of several coins when you say,

“That the goal is to never create a newer version that is backwards incompatible.”

I’ll go through the flip side, and some of the others…

Thus the first important question is,

“What do you do when the older version is broken, insecure, or incompatible at the current ‘standards level’?”

I’ve been making the point for all of this century…

That is, SigInt agencies like the US NSA and UK GCHQ and the other Five-Eyes nations have populated the standards bodies, and they act as “tag-teams” for each other to weaken or add faults to standards.

I’ve repeatedly said on this blog and other places that from a SigInt agency perspective they will attack,

1, Plaintext (Bob Morris confirmed this as did Microsoft file formats).
2, Implementations (AES with its time-based side channels, and paying RSA over the DRNG, confirms this).
3, Protocols (again AES and DRNG).
4, Standards (the DRNG with its dual elliptic curve backdoor).

The latter caused NIST to withdraw a standard and re-issue it.

Any software compliant with the old version of the standard is effectively incompatible with versions compliant with the new standard, and it’s important that they should be so.

The issue behind it all that is really problematic is that what is known as an “instance” within a “class” of vulnerability at any one point in time will be different at a different point in time.

Thus there are vulnerabilities that are “instances in classes” that are,

1, Unknown, Unknowns
2, Unknown, Knowns
3, Known, Knowns.

As time progresses things move from being Unknown-Unknowns, to somebody finding a first example instance in what is now a new class. Thus there will be more instances to be found in the now known class, so we have moved to the Unknown-Knowns state. Time progresses and eventually from an analysis perspective all the instance types in a class are found thus become known and we move to Known-Knowns.

But the process has two disadvantages,

1, It takes time to progress.
2, The process is unbounded.

So, there will always be unknown classes thus unknown instances of vulnerability in existing systems.

The implication of this is you have to either,

1, Update forever.
2, Kill backwards compatibility.

We obviously cannot update forever, or even for quite a short period of time, so we have to kill backwards compatibility.

We’ve been through the “kill” option with SSL in the past.

But there is a more general issue of some,

“Standards, protocols, or even implementations, contain engineering modes that can be vulnerabilities.”

These “Engineering Modes” or “Test Harnesses” are used to try and find faults when some item like SSL is used as part of a larger system. In the past this has resulted in “minimum crypto” being “OFF” or “Send Plaintext”, which is equivalent to “No Security”. That is bad enough, but at least you have a chance of “catching it on the wire” by “instrumentation”.

Such modes are fine if you are “fault finding” but very bad news if it can be done behind your back… and in many older implementations this could be done via a “Man In The Middle Attack” and forced protocol negotiation of a “Fall-Back Attack”, which is by default kept hidden from users “so they don’t get confused” (and thus bug tech support, etc.).

All the attacker has to do is make the only protocol the first and second communicating parties have in common “plaintext”, or worse, some other now-weak crypto protocol like, say, RC4, which won’t be anywhere near as easy “to catch on the wire” (hence the “Eternal Vigilance” quote has sharp and persistent teeth).
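In Python’s `ssl` module, which wraps OpenSSL, this fall-back class of attack is countered by pinning a floor on what can be negotiated. A minimal sketch (the function name is my own):

```python
import ssl

def hardened_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    # Refuse anything below TLS 1.2: a man-in-the-middle can then no
    # longer force negotiation down to a weak or plaintext-equivalent
    # protocol version, whatever both ends might nominally support.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Also strip known-weak ciphers (RC4, MD5, unauthenticated modes).
    ctx.set_ciphers("HIGH:!aNULL:!eNULL:!RC4:!MD5")
    return ctx
```

The key design choice is that the floor lives in the endpoint’s own configuration, so no amount of on-the-wire negotiation tampering can lower it.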

The history of computer-based crypto algorithms and modes defined in standards is actually not very good. It’s about 1/4 century, which in reality can boil down to as little as 1/10th of a century when you allow for product lifetimes.

For instance, there was a time when an RSA 1024-bit key was considered secure; we now know that it is not secure and can be broken. People have moved up to as much as 16384-bit keys, but there are few implementations in use that can work with that many bits, especially where it is most needed, which is in embedded systems… It’s just one of many reasons I’ve been saying for a very long time now that NIST should put its “algorithm competitions” to one side and come up with a “framework standard” that makes the upgrading of products practical even if companies go out of business, thus killing off the orphaned-product problem where products could have a half-century expected “in service” lifetime (think implanted medical electronics and infrastructure components such as “smart meters”).


Sidebar photo of Bruce Schneier by Joe MacInnis.