Compromising the Secure Boot Process

This isn’t good:

On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what’s known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon/Ryzen2000_4000.git, and it’s not clear when it was taken down.

The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.
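To put "trivial" in perspective, here is a minimal brute-force sketch in Python. Everything in it (the alphabet, the password, the use of a bare SHA-256 instead of an encrypted key file) is an invented stand-in; only the keyspace arithmetic carries over to the real incident:

```python
import hashlib
import itertools
import string

# Hypothetical illustration: the alphabet, password, and hash below are
# invented. The real leak involved an encrypted key file, not a bare
# hash, but the keyspace arithmetic is the same.
ALPHABET = string.ascii_lowercase + string.digits  # 36 symbols

def crack(target_digest, length=4):
    """Exhaust every candidate of the given length."""
    for candidate in itertools.product(ALPHABET, repeat=length):
        pw = "".join(candidate)
        if hashlib.sha256(pw.encode()).hexdigest() == target_digest:
            return pw
    return None

# 36**4 = 1,679,616 candidates: seconds of work on any laptop.
secret = hashlib.sha256(b"ab12").hexdigest()
print(crack(secret))  # ab12
```

Even at a pessimistic million guesses per second, the whole 36-symbol space falls in under two seconds; a richer 94-symbol alphabet only raises that to about 78 million candidates.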

[…]

These keys were created by AMI, one of the three main providers of software developer kits that device makers use to customize their UEFI firmware so it will run on their specific hardware configurations. As the strings suggest, the keys were never intended to be used in production systems. Instead, AMI provided them to customers or prospective customers for testing. For reasons that aren’t clear, the test keys made their way into devices from a nearly inexhaustible roster of makers. In addition to the five makers mentioned earlier, they include Aopen, Foremelife, Fujitsu, HP, Lenovo, and Supermicro.

Posted on July 26, 2024 at 12:21 PM

Comments

I Own My Computer July 26, 2024 2:17 PM

And why “This isn’t good”?
Shouldn’t we own our computers for a change?
Why does some third party restrict, with some idiotic solutions, our right to boot anything we like on our own computer that we have bought and paid for? Isn’t that my computer?

Things are turning very idiotic very fast. AV companies dictate what software I can run on MY computer. It’s a well-known fact that a certain AV product blocks running the well-known and useful tool “netcat”, among other legitimate software. Now some hardware vendors and one BIG software vendor from Redmond dictate what I can boot on the computer I paid for with my hard-earned salary. For a long time now we haven’t owned software any more; it’s so-called “licensed” to us. What, hardware is now following that move? So we don’t own it any more, we “license” it, and the HW vendor dictates what I can use it for? Rubbish!

I’m very glad that those keys leaked, I’m very glad that this “secure boot” is broken, and I’m shocked that Bruce Schneier says “This isn’t good”.

Jodie Snow July 26, 2024 2:38 PM

Here’s the PDF report.

Keep in mind that if AMI manages their keys so poorly that they can accidentally leak onto Github, it’s likely that at least one intelligence agency already had a spy get hired there to obtain the key (among other things).

JerryK July 26, 2024 5:02 PM

While writing firmware for a chip vendor years ago, my employer was frequently badgered by its customers for pre-release versions (that is, not fully tested and sometimes not even entirely finished) of our software. Our marketing weasels were always happy to oblige. Despite promises that “this was only for testing; we will not release this”, customers often did just that. Where marketing staff call the shots, quality control is impossible.

Jon (a different Jon) July 26, 2024 5:55 PM

Sorta goes to show: It ain’t the theory, it’s the implementation.

J.

Clive Robinson July 26, 2024 9:34 PM

@ ALL,

As pointed out a long time ago on this blog “code signing” has a large number of ways to fail. Therefore way back then it was sensible to treat it as insecure.

But about the worst failure for “code signed” software systems is disclosure of a root, master, or class key from which all others are derived. Because that in effect allows forgeries to be made by anyone who knows how to read a web page or two.

Back then it was stated that we needed a better way to do the “root of trust” thing. Back then, as now, there were only two general ways to do things,

1, Symmetric KeyMat delivered via a secure second channel.
2, Asymmetric KeyMat verified by a trusted root/master key whose Public half was “embedded”.

In over forty years we’ve still not come up with a way to remove the failings of a single “root of trust” that “the industry” will use…

Clive Robinson July 26, 2024 10:41 PM

@ ALL,

Re : Fun thought for you all.

As several people have pointed out, the reason Secure Boot and similar schemes came into existence was not to protect the end user, as Microsoft and others have claimed. It was the Fritz-Chip or DRM replacement, needed because of effectively the same “root of trust” leak we now see with Secure Boot: the DVD CSS “Player Key” reverse engineered by “DrinkOrDie”[1]. That gave us the likes of DeCSS, which so upset DVD producers like Disney and Co. They pressured the likes of Microsoft and Intel to come up with a way to “Protect their IP Rights”.

Well for those that know what to do, this “PKfail” key leakage will enable the use of “Shims” in drivers etc to “Rip Media” of even the more modern supposedly secure varieties…

So I suspect there are a number of people at “Copyright Holders” that are grinding their teeth.

[1] The issue with the DVD “Content Scrambling System”(CSS) is (as was pointed out on this blog long ago) that it was an “Off-Line” system using symmetric encryption, which could never be thought of as secure, just obfuscated at best (i.e. “Security by Obscurity”, which is no security at all in an Off-Line system).

Put simply, it meant that the user had a copy of the key somewhere on their computer; all they had to do was find it. Which is what a reverse-engineering group calling themselves “DrinkOrDie” did. Once they had found it, it was only a matter of time.

Jodie Snow July 27, 2024 1:54 AM

I’m having some trouble understanding the security model and the leak. As far as I can tell, AMI decided to trust crypto rather than do something simple, like write-protect (parts of) the non-volatile storage before running user code. Thus, the operating system can apparently re-write this storage at any time, but the firmware will only accept properly signed settings in the signature database—this being a database saying which keys will be accepted for bootloader signatures.

Except, users are supposed to be able to enroll their own keys, and how would that setting be signed if users aren’t supposed to have the platform key? One is also supposed to be able to remove the manufacturer keys, in which case this attack shouldn’t “completely” compromise all systems of a type (only the ones in which default keys remain present). Okay, there’s another level of keys—”KEKs”—but it’s the same problem: how do I add my own KEK without the platform key?

Alternatively, Wikipedia suggests (with reference to an LWN article) that the “platform key” is actually different per system, which would seem more sensible, but contradicts ESET’s mention of “the leaked private portion of this platform key” and Binarly’s description of it as a “master key … shared amongst different vendors”. (Leaking the private portion of “a” platform key, usable only on one specific computer, would be much less interesting.)

The PKfail paper says “The prerequisites … are that the attacker must have privileged access to the target device and must have the private key of the enrolled PK”—but never describes or even mentions a PK-enrollment process (just KEK enrollment). They say “The Platform Key used in affected devices is completely untrusted”, which seems to be the exact opposite of reality. They make repeated references to “IBVs” without ever defining this; I guess they don’t mean Infectious Bronchitis Viruses….

Is it just me? Can anyone clarify what’s actually happening?

Winter July 27, 2024 4:01 AM

@Clive

They pressured the likes of Microsoft and Intel to come up with a way to “Protect their IP Rights”.

The secure boot process might indeed be mainly designed to protect “IP rights”. But I am convinced the IP they wanted to protect was their own, or rather, MS’ own applications.

MS is there to make money, not to protect the interests of other companies. I can see how they would spend money to protect their own sales, not the sales of others.

ResearcherZero July 28, 2024 1:09 AM

@Jodie Snow

Especially when the strings contain “DO NOT SHIP” or “DO NOT TRUST”.

…Anyway, the entire scheme is confusing and not explained well.

The KEK is used to sign updates to the Signatures Database and the Forbidden Signatures Database.

There is an explanation of how Machine Owner Keys work and are enrolled here:
https://wiki.ubuntu.com/UEFI/SecureBoot

How you generate keys is explained here:
https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface/Secure_Boot

(Keep in mind that there have been several different methods shown to completely bypass the entire secure boot process and platform security.)

“It starts with the execution of an installer, which is responsible for deploying the bootkit’s files to the EFI System partition, disabling HVCI and BitLocker, and then rebooting the machine.”

https://www.welivesecurity.com/2023/03/01/blacklotus-uefi-bootkit-myth-confirmed/

BlackLotus used valid signed binaries to enroll a Machine Owner Key (MOK), the key which is generated at installation time, to gain malware persistence.

“validly signed binaries have still not been added to the UEFI revocation list. BlackLotus takes advantage of this, bringing its own copies of legitimate – but vulnerable – binaries to the system in order to exploit the vulnerability.”

Exploit CORE_DXE:

CORE_DXE is the firmware component used during the first phases of the UEFI boot sequence.
https://securelist.com/moonbounce-the-dark-side-of-uefi-firmware/105468/

…next load a signed driver capable of executing code in kernel space:
https://securelist.com/ghostemperor-from-proxylogon-to-kernel-mode/104407/

ResearcherZero July 28, 2024 1:24 AM

As SPI flash memory is located on the motherboard, implants can survive format/updates.
If you can sign binaries, or they are not in the revocation list, the code can execute.

ResearcherZero July 28, 2024 2:06 AM

@Jodie Snow

Given that it is a long, convoluted process, there are a number of other locations where vulnerabilities can occur, and hence where the boot process and platform security can be exploited.

Even if every file were to be signed and hashed, this will eventually be broken, or subject to points at which the execution flow can be hijacked, as in the example of LogoFail.

There are also Virtual Nested and System Management Mode Rootkits which can run at a low level (ring0/ring3) where they are very hard to detect and remove.

‘https://jussihi.kapsi.fi/2022-09-08-smmrootkit/

Computrace implemented a UEFI/BIOS module, able to survive formatting, so users could find stolen laptops. Their LoJack tool was then used as the basis of the LoJax rootkit.

Such tools can also be deployed remotely:
https://www.netscout.com/blog/asert/lojack-becomes-double-agent

Clive Robinson July 28, 2024 6:36 AM

@ Winter, ALL,

Re : IP and Markets.

“But I am convinced the IP they wanted to protect was their own, or rather, MS’ own applications.”

Whilst that was almost certainly part of it much later, the reasoning was simpler than that.

People forget that CD’s came long before DVD’s were even thought of and before PC’s were at the 16bit I/O bus stage.

Microsoft completely misjudged the potential CD market. In part because to them “audio was for games” and they did not support it in the kernel or most other parts of the OS and not really in applications either.

Third party developers pushed audio onto Microsoft hence eventually the “Intel Architecture Labs”(IAL) AC97 standard prevailed over Microsoft junk and other power hungry systems from specialised audio companies.

But others had pushed CD not just for audio but more importantly for data. At that time 100megabyte hard drives were seen as the “high-end” so why worry about a bit of plastic with unreliable storage of just under a gigabyte…

Well, a company called SilverPlatter, which I worked with and eventually for, decided to bother. CDs with a reliable 650 megabytes of data were an ideal way to ship certain types of “read only” databases, in effect “libraries on a disk” for researchers, and a new market was created that rapidly expanded.

Microsoft had been blindsided several times and thus rushed to catch up, dumping much of their “lock-in” projects in the process.

As part of that, audio CDs got played on computers, and audio tracks were converted from WAV format to one of the then-new lossy compression formats; the recording industry went into battle mode and launched lawsuits etc. And the Fritz-Chip initiative that had already started some time before became more heated. Have a look at the “Digital Audio Tape”(DAT) standards to see the “Copy Protect”(Cp) bit of the “Serial Copy Management System”(SCMS), designed to stop copying of CDs onto various DAT formats, and also the US legislation of the 1992 “Audio Home Recording Act”(AHRA). Which,

“amended the United States copyright law by adding Chapter 10, “Digital Audio Recording Devices and Media”. The act enabled the release of recordable digital formats such as Sony’s Digital Audio Tape without fear of contributory infringement lawsuits.”

https://en.m.wikipedia.org/w/index.php?title=Audio_Home_Recording_Act

Such law suits had come from CBS and the RIAA in the 1980s prior to the PC being capable of dealing with audio in anything other than low 8bit resolution at low rates.

Like other Europeans I took exception to the US recording industry and the legislation, and thus designed an “interface” that disabled or enabled the “Cp” bit protection at the artist’s will. It was designed to work with either S/PDIF or TOSLINK systems that I’ve talked about in the past (for use as Data Diodes for “gapping” Crypto Systems).

That’s why CSS was put on DVDs. The “recording industry” controlled the keys and were not going to cede the whip hand. Microsoft could not “break into that market” without access to those keys. Thus the recording industry dictated to both Microsoft and Intel.

So both Microsoft and Intel accepted that they were “locked-in” to the wishes of the RIAA and others who have a legal mandate not just through the AHRA but later DMCA.

Chris Becke July 29, 2024 7:59 AM

Why are we surprised though?
In over 20 years in Dev and Ops roles I have never once been given access to a secure vault for secrets. And I have no reason to believe the majority of other software houses are any different.

Passwords and private keys are either in a file share or in source control, or access is lost as soon as the key developer loses their sticky note / leaves the company.

Trying to explain upstream that secure storage of keys is a product we need to pay for is like talking to a brick wall. In this environment the only sane approach is to use the default keys.

JL July 29, 2024 8:51 AM

I still don’t understand. I actually deleted an Ubuntu key from the Secure Boot validation keys, and it actually made Secure Boot work properly.

Obviously this was just days before this topic came to light, so I cannot verify whether the key was compromised. I don’t think it was, for now.

Jodie Snow July 29, 2024 12:20 PM

ResearcherZero, thanks for the links, but I don’t find them to shed much light on this latest vulnerability. I’ve seen no evidence that Machine Owner Keys (MOKs) are involved at all.

The “Black Lotus” attack apparently used a vulnerability in a specific signed boot-loader to add its own key to the MOK database. It was probably a bad idea to allow boot-loaders to do this, without requiring any physical presence confirmation at the next boot. The MokManager EFI program installed by Arch and Ubuntu does require it, unlike EFI itself, according to those wiki pages.

Anyway, creating and enrolling a new MOK is something the user is expected to be able to do without network access. It therefore can’t require any key material, such as this leaked key, that’s not present on every system.

Victor Serge July 29, 2024 1:39 PM

“This is AMI proprietary, copy-righted code and should not exist in the public domain.”

(aka:

“never intended to be used in production systems”

)

Isn’t this what the industry does though: building out systems on top of previously written code, without reading the code? Always a house of cards.

Who put it on the device?

AMI and the device makers are too proud to securely exchange PreShared keymat???

Regardless, Secure Boot has always been straight-up robbery.

XYZZY July 29, 2024 7:53 PM

My recent Dell desktop does not have a jumper setting to prevent writing the BIOS. Sad. One would think it an easy motherboard design option.

cybershow July 31, 2024 7:53 AM

This is a spectacular mess.

We’ve plenty of smart people around here who are confused, and no
wonder. These are mistakes bolted on top of mistakes in an orgy of
solutionism. Layer upon layer of cryptographic staging and signing,
and every new link in the chain is a weakness. Most of the motives are
unclear. Whose computer is it? Whose property is being “protected”?

It has every hallmark of how security goes bad – because it is unclear
– and I believe deliberately so – who the security is for, what it is
security from, and what end it serves. This is cargo-cult technology.

This whole sorry saga reminds me of some cautionary tale about a
tangled web woven by the little boy who first told a little white lie,
but then had to tell another to support it, and a bigger lie, and then
a bigger one still, until he and everyone else had forgotten what was
true and what was false.

That’s computing today. Our entire industry, now dominated by greed,
deceit and dishonest motives, is facing a catastrophic complexity
collapse. It’s no failure of science, technology and engineering but a
long overdue moral reckoning.

The sooner we stop pretending these are technical problems and start
speaking the truth about what a grubby and fundamentally dishonest
industry consumer computing has become the sooner we can have security
for computer users.

Clive Robinson August 1, 2024 9:06 AM

@ cybershow, ALL

Re : Individual v societal balance.

You observe,

“It’s no failure of science, technology and engineering but a long overdue moral reckoning.”

It’s actually a failure of the avarice that is the “Great American Dream”.

Most should know by now (I hope) that we only survive because of supply chains. Less well known is that supply chains are just the paths by which raw resources become value-added, price-inflated goods.

Each step of the value added path allows for a degree of profit that comes about when you look at the implicit costs in the supply and demand basic notion of economics.

If you get the input resources for nothing or next to nothing, but hold the supply of wanted goods by blocking the supply chain, then your profit potential starts moving toward maximum. But these deliberate market manipulations are “dishonest moves” hence legislation to try to limit them.

This leaves only “cost reduction” as a faux way to “increase productivity” for most “managers”. But it does increase “shareholder value” in the very, very short term.

Since the 1980’s I have been not just observing but commenting on such behaviours. Thankfully I no longer feel like a voice in the wilderness these days.

But think about the way that the software industry went because of Bill Gates and Co.

To get around a lot of legislative restrictions they did not “sell goods” but early on “leased” them, and now “rent” them.

Without going through all the steps it can be seen that such control of goods not just creates actual monopolies/cartels but actually prevents fair competition in not just the primary “faux market” but any subsequent secondary markets.

But it also has an issue. A long long time ago I stopped buying into a,

“New OS every year”

That in turn meant you also had to have

“New Apps the following year”

As the support to make them work was removed and you were forced into an unwanted and unwarranted upgrade path at considerable cost.

As I’ve mentioned, I still have an Apple ][ not just running but being productive, with apps and other software I developed. And the key-press-to-screen response of a 1MHz 8-bit CPU still outperforms that of the most modern of computers…

That alone should tell people something.

I could go through the other OS’s and APPs, but my work style has been fairly constant and is,

“Type in an editor, pretty up in a publishing package” only if needed.

Yes I,

1, Brain dump in,
2, Do spell corrections,
3, Move text blocks,
4, Make “sections and chapters”,

And a few more steps before finally shoving the “text file” into either a “Desktop Publisher” or “WYSIWYG” word processor.

Finally, outputting in the best file-format “lingua franca” that is reasonable for recipients. That was a formatted-text style, “Rich Text Format”(RTF), but is now a variant of multi-file HTML/XML in a compressed “archive” called DocX. Keeping things in simple text-file format allows so many other things, such as putting the documents in databases for revision control, scanning for dubious content/malware, and quite a lot else, including effective “energy gap crossing”. Some of which others are now catching onto with GitHub and similar (but unfortunately not the security).

The point is “my process” has in effect remained the same for over three decades. And not only has various security properties as benefits, it means all changes and updates are kind of incremental at worst (and retraining humans is the most expensive thing you can do in any environment).

Whilst those using MS Word etc are forced onto a treadmill or “Hamster wheel of pain” that is at best stressful, disruptive, inefficient, insecure, and worse, but unavoidable due to the software supplier dictate to create “profit”.

So yes, I still use WordStar 4 and Turbo C in a 16-bit “DOS compatibility box” running inside a VM on a more up-to-date and secure *nix OS.

A habit I started back in the early to mid 1990’s when I could personally afford to buy a SysV unix and DOS Merge OS package (I still have and use).

The reason I did this was because of “In Circuit Emulators”(ICEs) supplied by Motorola. They would only work in MS-DOS via a serial port… So you had to use a whole 486 to run one, which on my home network was not something I could afford for a multitude of reasons.

However, stick Unix and DOS Merge on it and you could, via a “network serial concentrator”, run six or more ICEs and quite some test kit on a single 486, effectively remotely (with the kit in the “small room” lab and you at your more comfortable desk in the lounge 😉

Oh and creating test reports and the like was likewise very easy because it was all on the single box.

So I did the same at work and kept the computer very much out of sight under my desk. My work colleagues were suspicious of my “productivity” and the fact I spent so much of my time at my desk.

However, because of a break-in with theft of the computers, a whole R&D lab with thirty-odd people crashed to a halt. Loaning a half dozen of my home computers at least got the engineers back up and running.

With an unfortunate side effect, due to my boss thinking it was such a “productive way” to work, my downfall as an engineer into System/Network manager started…

ResearcherZero August 4, 2024 10:20 PM

@Jodie Snow

The keys, which are labeled “DO NOT TRUST” and “DO NOT SHIP”, were used in production.

In addition to the compromised platform key from 2022 (which was in the GitHub leak), there are an additional 21 keys that are labeled “DO NOT TRUST” or “DO NOT SHIP”.

AMI provided the keys as testing keys. They were never supposed to be used for production.

These keys are more than a decade old, and all the major manufacturers have been using these same keys. All the systems highlighted in Binarly’s research are therefore using these same keys. The private part of the key is in very wide use, so a single compromise can open millions of systems to attack, because they all share the same front-door key.

ResearcherZero August 4, 2024 10:27 PM

@Jodie Snow

How a leaked Platform Key can be used to bypass Secure Boot on Linux

‘https://www.youtube.com/watch?v=CveWt3gFQTE

How a leaked Platform Key can be used to bypass Secure Boot on Windows 11

‘https://www.youtube.com/watch?v=SPl7zfC-CmQ

ResearcherZero August 5, 2024 3:00 AM

Code signing is the process of applying a digital signature to a software binary or file. This digital signature validates the identity of the software author or publisher and verifies that the file has not been altered or tampered with since it was signed.

Signing means using a public-private encryption scheme, where you sign the binary with a private key, and the client uses the public key to verify that you really did sign the binary.

The private key is kept secret and public key is shared.
However, if a private key leaks, then it can be used to sign a malicious binary.

• Platform Key (PK) – One only – Allows modification of KEK database

• Key Exchange Key (KEK) – Can be multiple – Allows modification of db and dbx

• Authorized Database (db) – CA, Key, or image hash to allow

• Forbidden Database (dbx) – CA, Key, or image hash to block

“As a rule, digital signatures require two pieces: the data (often referred to as the message) and a public/private key pair. In order to create a digital signature, the message is processed by a hashing algorithm to create a hash value. This hash value is, in turn, encrypted using a signature algorithm and the private key to create the digital signature.”

“In order to verify a signature, two pieces of data are required: the original message and the public key. First, the hash must be calculated exactly as it was calculated when the signature was created. Then the digital signature is decoded using the public key and the result is compared against the computed hash. If the two are identical, then you can be sure that message data is the one originally signed and it has not been tampered with.”

‘https://uefi.org/specs/UEFI/2.9_A/32_Secure_Boot_and_Driver_Signing.html#uefi-driver-signing-overview
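A runnable toy of that sign-and-verify flow, using textbook RSA with a deliberately tiny key (real code signing uses full-size keys with padding such as RSA-PSS, and the message contents here are invented):

```python
import hashlib

# Textbook-RSA toy of "hash the message, then sign the hash".
# Illustrative only: real signatures use full-size keys and padding.
p, q = 61, 53
n = p * q    # public modulus (3233)
e = 17       # public exponent
d = 2753     # private exponent; this is the part that must stay secret

def sign(message, priv=d):
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, priv, n)        # "encrypt" the hash with the private key

def verify(message, signature, pub=e):
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, pub, n) == h

sig = sign(b"bootloader.efi contents")
print(verify(b"bootloader.efi contents", sig))  # True

# The PKfail failure mode: once d leaks, anyone can sign anything.
forged = sign(b"malicious payload")
print(verify(b"malicious payload", forged))     # True
```

The forgery at the end is the whole story of the leak in two lines: the verifier has no way to tell a legitimate signer from anyone else holding the private exponent.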

Secure Boot is designed to prevent malicious Boot code and Operating System loaders from running. Modules and Kernels are also signed to prevent malicious modules and kernels.

As these private keys have been used in production they cannot easily be added to the Forbidden Database (dbx) in a timely manner – otherwise any system using these private key signatures in firmware or operating system will not boot with Secure Boot enabled.

(updating firmware and OS may take years before keys can be blacklisted and added to dbx)

https://ubuntu.com/blog/how-to-sign-things-for-secure-boot

Clive Robinson August 5, 2024 10:28 PM

@ ResearcherZero,

“Code signing is the process of applying a digital signature to a software binary or file.”

Whilst not incorrect you will find “install package”, “build package”, or “archive” used as well.

In essence you take the “finished” package or directory, compress it, take the hash of the final glob, and sign that.

Without going into details there are certain ways it can be attacked but they are considered as hard as finding a collision (though there is no proof I’ve seen on that).

The important thing to remember is that code signing says nothing about the quality or security or other important attributes of the software. Worse the way most software houses go about things there is no method to stop malicious code being slipped in as we’ve seen in the past.

Think of it like the seal on a plastic jug of milk, it says nothing about the quality or freshness of the milk, or if some disgruntled factory floor worker “gobbed in the jug” or worse before it was sealed.
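Clive’s “compress it and take the hash of the final glob” step can be sketched like so (the file name and contents are invented; a real pipeline would go on to sign the resulting digest):

```python
import hashlib
import io
import tarfile

# Sketch of "compress the package and hash the final glob".
# Hypothetical file name and contents; a real build would then sign
# this digest with a private key.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"#!/bin/sh\necho install\n"
    info = tarfile.TarInfo(name="pkg/install.sh")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

digest = hashlib.sha256(buf.getvalue()).hexdigest()
print(len(digest))  # 64 hex characters; this is what gets signed
```

Note the digest covers only the bytes that went into the archive, which is exactly Clive’s milk-jug point: it proves the seal is intact, not that the contents were any good before sealing.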

ResearcherZero August 6, 2024 11:10 PM

The Platform Key (PK) is used to validate ALL firmware components.
The Platform Key (PK) is typically used to sign updates to the KEK database.
The KEK signs updates to both the Allowed Signature DB and Forbidden Signature DB (DBX).

Bootloader modules’ signing authority must be allowlisted by the Secure Boot DB.
The DBX database is used for revoking previously trusted boot components.

Updates to the DB and DBX must be signed by a KEK in the Secure Boot KEK database.
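A hedged toy model of that hierarchy (the names follow the UEFI spec, but the “signatures” are stand-in strings rather than real cryptography) shows why one leaked PK unravels everything downstream:

```python
from dataclasses import dataclass, field

# Toy model of the PK -> KEK -> db/dbx chain described above.
# "Signed_by" checks stand in for real signature verification.

@dataclass
class SecureBootState:
    pk: str                                 # exactly one Platform Key
    keks: set = field(default_factory=set)  # Key Exchange Keys
    db: set = field(default_factory=set)    # allowed signers / hashes
    dbx: set = field(default_factory=set)   # revoked signers / hashes

    def enroll_kek(self, new_kek, signed_by):
        if signed_by != self.pk:            # only the PK may change the KEK set
            return False
        self.keks.add(new_kek)
        return True

    def update_db(self, entry, signed_by):
        if signed_by not in self.keks:      # only a KEK may change db/dbx
            return False
        self.db.add(entry)
        return True

    def may_boot(self, image_signer):
        return image_signer in self.db and image_signer not in self.dbx

# With a leaked (shared, test) PK, an attacker walks the whole chain:
state = SecureBootState(pk="AMI-TEST-PK")
state.enroll_kek("attacker-kek", signed_by="AMI-TEST-PK")
state.update_db("attacker-signer", signed_by="attacker-kek")
print(state.may_boot("attacker-signer"))  # True
```

The point of the toy is the first line of trust: once the shared test PK is known, every downstream check (KEK enrollment, db update, boot decision) passes by construction.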

Almost 900 models produced over the past 12 years are using keys that were likely generated for testing purposes and should have never been used in production.

Vendors must provide firmware updates with new, securely generated Platform Keys.
Many motherboards may no longer qualify for support and may not receive updates.

There are a number of CVEs that no longer qualify for support on some models.
Other vulnerability disclosures are more unclear than the following:

Lenovo – bypass the Secure Boot protection

‘https://nvd.nist.gov/vuln/detail/CVE-2016-5247

Microsoft only just updated the Secure Boot certificates themselves in June 2024.

The Key Exchange Key (KEK), the Allowed Signature Database (DB) and the Disallowed Signature Database (DBX), were all set to expire in 2026.

https://redmondmag.com/articles/2024/02/13/windows-secure-boot-update.aspx

ResearcherZero August 11, 2024 5:58 AM

bootkits

The memory sinkhole – X86 Design Flaw (Dec 29, 2015)

‘https://www.youtube.com/watch?v=lR0nh-TdpVg&t=32

privilege rings
https://nixhacker.com/digging-into-smm/

SMM Attack Surfaces

‘https://www.nccgroup.com/us/research-blog/stepping-insyde-system-management-mode/

Fooling the security mechanisms that protect SMRAM
https://www.wired.com/story/amd-chip-sinkclose-flaw/

Exploiting System Management Mode

‘https://www.youtube.com/watch?v=xSp38lFQeRE&t=82

System Management Mode

System Management Mode (SMM) is an Intel CPU mode (but also used by AMD).
It is often called ring -2 as it is more privileged than the kernel or the hypervisor.

The SMM code lives in a specially protected region of system memory, called SMRAM. The memory controller offers dedicated locks to limit access to SMRAM memory only to system firmware (BIOS). BIOS, after loading the SMM code into SMRAM, can (and should) later “lock down” system configuration in such a way that no further access, from outside the SMM mode, to SMRAM is possible, even for an OS kernel (or a hypervisor).

The SMM can only be entered through SMI (System Management Interrupt).

The processor executes the SMM code in a separate address space (SMRAM) that has to be made inaccessible to other operating modes of the CPU by the firmware. When the CPU hasn’t entered SMM, the memory controller does not allow access to SMRAM, the special region of physical memory allocated to SMM.

https://www.synacktiv.com/en/publications/through-the-smm-class-and-a-vulnerability-found-there.html

“Improper validation in a model specific register (MSR) could allow a malicious program with ring 0 access to modify SMM configuration while SMI lock is enabled, potentially leading to arbitrary code execution.”

‘https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7014.html

SPI Write Protections

The program code that runs in SMM is stored in the SPI-Flash memory of the motherboard and is part of the UEFI BIOS firmware.
https://prog.world/using-intel-processor-trace-to-trace-system-management-mode-code/

Attacking SMM Memory via CPU Cache Poisoning

‘https://invisiblethingslab.com/resources/misc09/smm_cache_fun.pdf

System Management Mode is also used to secure Secure Boot.

“Windows 10 achieves this by leveraging a hardware-based root of trust that ensures unauthorized code like Unified Extensible Firmware Interface (UEFI) malware cannot take root before the Windows bootloader launches.”
https://www.microsoft.com/en-us/security/blog/2020/11/12/system-management-mode-deep-dive-how-smm-isolation-hardens-the-platform/

Clive Robinson August 11, 2024 9:33 PM

@ ResearcherZero, ALL,

I also commented several days ago on the “SinkClose” vulnerability and the fact that it probably goes back at least two decades,

https://www.schneier.com/blog/archives/2024/08/people-search-site-removal-services-largely-ineffective.html/#comment-439936

It’s funny in a way: well over a decade ago, and long before Ed Snowden lifted the curtain a little, @Nick P and I had discussions on this blog about what hardware we considered untrustworthy by date.

I indicated anything beyond the mid 1990’s, and he preferred the mid “naughties” of 2005 as a cut off.

As it turns out the NSA were exploiting the “go faster stripe” in CPU vulnerabilities back even before that…

Having been involved with designing high performance CPUs from “bit slice” chips back in the early 1980’s for very high performance “embedded” systems in body scanners I was aware of the “across the ISA gap”[1] issues of not just “MicroCode” errors but also the lower “Register Transfer Language/logic”(RTL) and bus timing issues that gave rise to “pipeline” designs and the further complications they added.

Then from designing specialised instrument control systems for high radiation environments (think Space craft payloads, nuclear reactors, and medical equipment etc) I’m aware of “bit flip” issues in memory and logic and how there are some forms of attack you can not protect against from “up the computing stack” no matter how much various programming and hardware methodology advocates try to insinuate otherwise. All you can do is “move the probability” line a bit, but each move is expensive not just in hardware but more importantly time, thus cycles/sec. One dirty secret is the more cache methods you use as “Go Faster Stripes” the more likely you are to have bit-flips happen (look up metastability for the “down in the dirt reasons”). So you end up “on a hiding to nothing” game, of trying to find a “sweet-spot” with each technology change. It kind of makes “The Red Queen’s Race” look easy, as you can not beat the speed of light,

“Just get closer and closer till heat death, claims yer”.

[1] The “across the ISA gap” is a way of looking at the “Central Processing Unit”(CPU) and what is inside it and what is outside it at a 20,000ft view. The “Instruction Set Architecture”(ISA) is in effect not just “the personality” of the CPU and how the “Arithmetic Logic Unit”(ALU) and “Register Set” are addressed by the software Assembler Instruction Set, but also the physical logic and timing of the electrical signals of the CPU pins. As more and more gets integrated onto chips like co-processors, “Memory Management Unit”(MMU) and “Direct Memory Access”(DMA) controllers and even I/O, the expression becomes more wooly. The early-ish IAx86 book from last century I have is split into volumes and covers well north of a thousand pages, and even three decades later I can honestly say I’ve not read it all, and I suspect that is true for nearly all who work at the hardware level at the motherboard side of the gap.

Winter August 12, 2024 5:56 AM

@ResearcherZero

Re: SMI and SMM. Also, see BMC and IPMI [1].

I suspect many readers still see an i86x CPU as implementing the i86x instruction set in real transistors and other components. Or they think there might just be a layer of microcode below that.

Modern i86x CPUs are much more complex than this. It is probably better to see the CPU as a computer that runs the i86x instruction set inside a hypervisor or “emulator”. In principle, CPUs could be built that switch instruction sets between processes. Some CPUs have actually been built that way, but the idea never got traction.

Best to imagine that the CPU is to your OS what the OS is to the user application. The OS determines what the application does, and the CPU determines what the OS does.

In no way is the i86x code “in control” of the CPU hardware or memory. The actual OS or application can ask the underlying CPU to do “things” that affect the state of the CPU. But there is nothing that will actually “force” it to oblige.

Only the SMI/SMM and BMC/IPMI can force the CPU to do things the way you like. They can also “see” everything the CPU is doing, including the content of registers and memory. These services “should” be enabled/installed only in servers or cloud infrastructure. But what does “should” actually mean?

This is all to say that it is “non-trivial” to secure computer hardware. It is a game of Whack-a-Mole, closing off one bypass after another that can control your computer. There is a truism that with hardware access there is no security, and IPMI gives anyone with access to the IPMI network what amounts to hardware access.

[1] ‘https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface#Latest_IPMI_specification_security_improvements

Who? August 12, 2024 5:26 PM

@ ResearcherZero

A question on the UEFI Revocation List File (“DBX”)

As I understand it, the DBX database stores a list of keys and/or bootloader hashes that are prohibited from being used to boot systems with UEFI Secure Boot enabled. That is ok, but does it extend the revocation to the digital certificates used to update the DBX file itself?

It would be great to be able to download and apply an up-to-date UEFI Revocation List File from https://uefi.org/; however, it is not clear to me that this is enough to prevent the Secure Boot forbidden-signature database from being “updated” with a new one that once again excludes the compromised certificate.
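For anyone wanting to poke at the DBX themselves: the db/dbx variables are a sequence of EFI_SIGNATURE_LIST structures as laid out in the UEFI specification. Below is a minimal parsing sketch run against a synthetic one-entry blob (not real revocation data); on Linux the real contents live in efivarfs, with a 4-byte attributes prefix to strip first.

```python
import struct
import uuid

# EFI_CERT_SHA256_GUID: the signature type used for SHA-256 hash entries
EFI_CERT_SHA256_GUID = uuid.UUID("c1c41626-504c-4092-aca9-41f936934328")

def parse_signature_lists(data: bytes):
    """Walk a db/dbx blob of EFI_SIGNATURE_LIST structures and return
    (signature_type, owner, signature_data) tuples."""
    entries, offset = [], 0
    while offset < len(data):
        # Header: SignatureType (16-byte GUID), then SignatureListSize,
        # SignatureHeaderSize, SignatureSize (all UINT32, little-endian)
        sig_type = uuid.UUID(bytes_le=data[offset:offset + 16])
        list_size, hdr_size, sig_size = struct.unpack_from("<III", data, offset + 16)
        pos, end = offset + 28 + hdr_size, offset + list_size
        while pos < end:
            # Each signature: SignatureOwner GUID followed by the data
            owner = uuid.UUID(bytes_le=data[pos:pos + 16])
            entries.append((sig_type, owner, data[pos + 16:pos + sig_size]))
            pos += sig_size
        offset = end
    return entries

# Synthetic one-entry list: a single revoked SHA-256 hash of 0xAA bytes
blob = (EFI_CERT_SHA256_GUID.bytes_le
        + struct.pack("<III", 76, 0, 48)   # 28-byte header + one 48-byte entry
        + uuid.UUID(int=0).bytes_le        # SignatureOwner (placeholder)
        + b"\xaa" * 32)                    # the revoked hash
revoked = parse_signature_lists(blob)
```

Where supported, tools such as mokutil can print the same databases in human-readable form, which is an easier way to answer the question of what your platform currently revokes.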

ResearcherZero August 18, 2024 1:31 AM

@Winter @Clive Robinson @ALL

I would describe the situation as a nightmare at this point.

Though Intel is considering pruning legacy features from the privilege model.

(including rings 1 and 2)

The new x86S architecture would also boot straight into 64-bit mode, with no 16-bit support.

This would simplify the security model significantly. (microsoft horse decomposed long ago)

‘https://www.tomshardware.com/news/intel-ponders-transition-to-64-bit-only-x86s-architecture

“For example, in many cloud environments, the hypervisor sits in Ring 0, a user’s kernel is in Ring 1, that user’s device drivers are in Ring 2, and that user’s Applications are in Ring 3.”

On Intel Architecture chipsets, there are three more levels of privilege, all with a higher-level privilege than the operating system’s kernel. We call those “Ring ‑1” through “Ring ‑3,” with Ring ‑1 (pronounced, “ring minus one”) being the least privileged of the negative rings, and Ring ‑3 being the most privileged. Thus, Ring ‑3 can access anything in Ring ‑3 through Ring 3. And Ring ‑2 can access anything in Ring ‑2 through Ring 3, but it cannot access Ring ‑3.

Negative rings are conceptual levels of privilege, not actual processor protection rings.

Unlike the “positive rings,” which are implemented in hardware with a pair of bits to specify the Ring number, no equivalent set of bits exist to specify negative ring numbers. There are bits that specify state for Rings ‑1 and ‑2; and, Ring ‑3 is actually a separate processor within the processor chipset.

https://medium.com/swlh/negative-rings-in-intel-architecture-the-security-threats-youve-probably-never-heard-of-d725a4b6f831

ResearcherZero August 18, 2024 7:14 AM

@Who?

There are also occasional problems with corruption of user passwords or pins.
Sometimes users’ login credentials do not work after updates to Windows Secure Platform.

I will probably have to help recover a bunch of Windows accounts when people try to log in
after the security updates are applied manually, then ensure everyone’s accounts are working properly, and perhaps additionally generate a bunch of new BitLocker Recovery keys. 🙁

This happens every time Microsoft updates the Secure Boot system.

Fortunately I have not seen similar problems occur with Linux-based distributions. Linux uses a different approach for securing accounts and stores credentials in a more robust manner, though the occasional security vulnerability that bypasses authorisation exists.

Regular backups and recovery methods still remain essential no matter the system used.

ResearcherZero August 18, 2024 10:09 PM

@Who?

The latest August update for Windows contains an update for the UEFI Revocation List, but it is not enabled by default, because some device models first require updates to their firmware.

To prevent bypasses and other vulnerabilities to Secure Boot, updates such as modifications to the DBX must be done carefully as many of the changes are stored in the SPI Flash which can normally only be updated via a BIOS update.

If any of the certificates that are used to boot the system are revoked, the system may be left in a state that is unrecoverable, which even a reformat of the drive cannot fix.

Before the UEFI Revocation List can be updated, some devices require a BIOS update.

“When Windows applies the mitigations described in this article, it must rely on the UEFI firmware of the device to update the Secure Boot values (the updates are applied to the Database Key (DB) and the Forbidden Signature Key (DBX)). In some cases, we have experience with devices that fail the updates. We are working with device manufacturers to test these key updates in as many devices as possible.”

After all three mitigations have been applied, the device firmware will not boot using a boot manager signed by Windows Production PCA 2011 certificate.

There are manual steps in the following guide from Microsoft:

‘https://support.microsoft.com/en-us/topic/kb5025885-how-to-manage-the-windows-boot-manager-revocations-for-secure-boot-changes-associated-with-cve-2023-24932-41a975df-beb2-40c1-99a3-b3ff139f832d

To complicate matters, Microsoft also rolled out an update to BitLocker to prevent a bypass vulnerability (CVE-2024-38058). However the update caused problems on some systems, so it was disabled by Microsoft in favour of the manual steps outlined above.

Ensure any device has the latest BIOS update and you have backed up all your files and account details before proceeding with the update. Check your device is still receiving support from the manufacturer. Carefully read the support article and ensure you want to proceed…

“The Boot Manager deployed in Step 2 has a new self-revocation feature built-in. When the Boot Manager starts to run, it performs a self-check by comparing the Secure Version Number (SVN) that is stored in the firmware, with the SVN built into the Boot Manager. If the Boot Manager SVN is lower than the SVN stored in the firmware, the Boot Manager will refuse to run. This feature prevents an attacker from rolling back the Boot Manager to an older, non-updated version.”
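The SVN self-check quoted above boils down to a monotonic comparison. A toy sketch of the logic (not Microsoft's implementation):

```python
def boot_manager_may_run(bootmgr_svn: int, firmware_svn: int) -> bool:
    """Model of the quoted self-revocation check: the Boot Manager refuses
    to run if its built-in Secure Version Number (SVN) is lower than the
    SVN recorded in firmware."""
    return bootmgr_svn >= firmware_svn

# Once the firmware records SVN 2, a rolled-back Boot Manager (SVN 1) is
# refused, while the current (SVN 2) and newer (SVN 3) versions still run.
```

The important property is that the firmware-side SVN only ever increases, so downgrade attacks fail without needing a DBX entry per old binary.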

ResearcherZero August 19, 2024 4:01 AM

I’m now glad that I performed the DBX update for my own systems earlier in the year.

You will also have to update all your bootable repair images after the manual update.

Pre-requisite checks

Before attempting the DB update, please ensure to perform the necessary pre-requisite checks:

‘https://techcommunity.microsoft.com/t5/windows-it-pro-blog/updating-microsoft-secure-boot-keys/ba-p/4055324

Follow the guide carefully!
http://support.microsoft.com/en-us/topic/kb5025885-how-to-manage-the-windows-boot-manager-revocations-for-secure-boot-changes-associated-with-cve-2023-24932-41a975df-beb2-40c1-99a3-b3ff139f832d

Afterwards, run PowerShell as an Administrator and check that the update was successful using:

[System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI db).bytes) -match 'Windows UEFI CA 2023'

If the command returns ‘True’ then the manual update was indeed successful.

(to inspect the certificate, browse to C:\Windows\Boot\EFI_EX, right-click bootmgfw_EX.efi,
select Properties, then the Digital Signatures tab, then Details, Certificates)

Microsoft Windows Production PCA 2011 = First-Party Images like bootmgr

Microsoft Corporation UEFI CA 2011 = Third-Party Images like the Linux “shim”

(this will be updated to UEFI CA 2023 after the update)

The following presentation outlines the attack surface of Secure Boot:

‘https://nbviewer.org/github/microsoft/MSRC-Security-Research/blob/master/presentations/2024_05_OffensiveCon/OffensiveCon24_Booting_With_Caution_BDemirkapi.pdf

ResearcherZero August 19, 2024 4:09 AM

Instead of Microsoft’s new convoluted approach, I did the following:

Run the following command from an Administrator command prompt to suspend BitLocker for two restart cycles:

Manage-bde -Protectors -Disable %systemdrive% -RebootCount 2

Then to update Secure Boot signing certificate database (DBX) follow the PowerShell commands at the bottom of the link.

Finally, after those two commands, you can then reboot twice to finish installing the update.

https://techcommunity.microsoft.com/t5/windows-it-pro-blog/updating-microsoft-secure-boot-keys/ba-p/4055324

(For the command to verify Secure Boot DB update was successful, ensure the part ‘Windows UEFI CA 2023’ includes the inverted commas.)

To make sure that BitLocker protection has been resumed, run the following command after the two restarts:

Manage-bde -Protectors -Enable %systemdrive%

Clive Robinson August 22, 2024 10:57 AM

@ ResearcherZero, Who?, ALL,

Re : SBAT worse than you think and doomed before it was thought up.

The way “Secure Boot” was thought up was bad, the way it has been extended is worse, and for various technical reasons it is a failure that cannot be repaired or augmented; it really cannot go on as is, as it gets worse with every alleged improvement.

When you indicate,

“Before the UEFI Revocation List can be updated, some devices require a BIOS update.”

Even then problems remain… Have a read of,

https://github.com/rhboot/shim/blob/main/SBAT.md

And have a think about why that revocation list is bad, and why the version number idea is bad as well.

Then try and think your way down the notion of “Dual Boot OS” systems and how you might try to make it work…

Do not expect to find a solution as there is not one that is secure or reliable as far as we can tell…
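For reference when reading the shim SBAT document linked above: the scheme replaces per-binary hash revocation (which exhausts DBX space) with per-component generation numbers carried inside each signed binary, checked against a platform-held minimum. A minimal sketch of that policy check, with illustrative component names and values:

```python
# Toy model of the SBAT generation check: each signed binary declares
# (component, generation) pairs; the platform stores the minimum generation
# it will accept per component. Names and numbers are illustrative only.
def sbat_allows(binary: dict, policy: dict) -> bool:
    """Reject a binary if any component it declares has a generation below
    the platform's required minimum for that component."""
    return all(gen >= policy.get(component, 0)
               for component, gen in binary.items())

policy = {"shim": 2, "grub": 3}     # platform's current revocation state
old_grub = {"shim": 2, "grub": 2}   # generation bumped past this by an update
new_grub = {"shim": 2, "grub": 3}
```

One revocation entry per component, rather than per binary, is what keeps DBX small; the dual-boot breakage discussed in this thread comes from the other side of that trade-off, when one OS vendor bumps the required generation past what another OS's bootloader carries.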

ResearcherZero August 24, 2024 4:08 AM

@Clive Robinson

The design certainly has a lot of problems. They extend through much of the Windows architecture, not helped by the permissions system and the client-server relationship.

“the vulnerable IOCTL remained accessible from user space, meaning that a user-space attacker could abuse it to essentially trick the kernel into calling an arbitrary pointer. What’s more, the attacker also partially controlled the data referenced by the first argument passed to the invoked callback function.”

‘https://decoded.avast.io/janvojtesek/lazarus-and-the-fudmodule-rootkit-beyond-byovd-with-an-admin-to-kernel-zero-day/

The rootkit from Lazarus was able to exploit a Windows built-in driver, and Windows has a number of downgrade and bypass attacks which can remove additional protections.

https://github.com/wavestone-cdt/EDRSandblast

On x86-based or x64-based devices that don’t support UEFI or where Secure Boot is disabled, you can’t store the configuration for LSA protection in the firmware. These devices rely solely on the presence of the registry key. In this scenario, it’s possible to disable LSA protection by using remote access to the device. Disablement of LSA protection doesn’t take effect until the device reboots.

‘https://learn.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/configuring-additional-lsa-protection

Fix for Linux dualboot systems affected by the August update for Windows.

How to disable SBAT for dualboot systems:

‘https://learn.microsoft.com/en-us/windows/release-health/status-windows-11-23h2#august-2024-security-update-might-impact-linux-boot-in-dual-boot-setup-devices

Fix for performance issues for Server 2019:

https://www.reddit.com/r/sysadmin/comments/1ewwlvk/any_updates_on_kb5041578_server_2019_performance/

‘https://www.bleepingcomputer.com/news/microsoft/microsoft-august-updates-cause-windows-server-boot-issues-freezes/

https://learn.microsoft.com/en-us/windows/release-health/status-windows-10-1809-and-windows-server-2019#known-issues

ResearcherZero August 24, 2024 5:10 AM

@Clive Robinson

Signed drivers can be used to turn off Windows protections without users noticing they are disabled.

As Microsoft cannot add every boot manager to the revocation list (or some devices will not boot), workarounds for BatonDrop and other exploits, such as firmware implants, will continue to be a problem.

Exploiting built-in drivers will remain a problem due to the use of memory-unsafe languages and low-level functions that can potentially be exploited. Other exploits will no doubt be found within undocumented Windows features and structures.

Clive Robinson August 26, 2024 3:35 AM

@ ResearcherZero,

Re : The answer is not only Not Microsoft, but also Not IAx86 and above.

With regards,

“Exploiting built-in drivers will remain a problem due to the use of memory-unsafe languages and low-level functions that can potentially be exploited.”

The notion that “Memory Safe” languages “will save you” is a complete nonsense being pushed by certain language fan-bros who really do not know enough to figuratively “Cross the street safely”.

As for “Low-level Functions” they are a required necessary evil of the way the industry works with I/O. It goes back into at least the 1950’s, but was thoroughly embedded in the first personal computers in the 1970’s and has been ever since.

As I’ve noted on this blog in times now long past you have to consider where you are trying to do things in the computing stack, likewise where the attackers are doing things.

Anything you do only protects from “SOME” attacks from above, thus anything that attacks below that point can not be stopped only mitigated against.

An I/O driver written in a supposed memory safe language is not going to be magically “unexploitable” (and the way many memory safe languages work you would be foolish to try to write any kind of driver in them).

There were two lessons from “RowHammer” that people either have not learned or have for some reason forgotten,

1, Low level attacks bubble up the stack.
2, Low level vulnerabilities in the stack can be “exercised” by any user activities way up the stack as long as there is an information channel available.

The first is almost a “law of nature”, the second is a consequence of “work”, and is “cause and effect” writ large by what I’ve warned about for decades,

“Security v Efficiency”

As a general rule, the more efficient you make something the more “transparent it is” thus the wider the “effects from a cause” can be seen. This is especially true of anything that involves “time” because the more efficient something is the wider the effective information bandwidth is and the more “work” that can be done.

Bandwidth can be expressed by either frequency or its effective inverse, time, thus baud or bits/sec; either way, the higher the number, the more information can be carried from a generator –cause– to the load –effect– in any given time period. Thus the faster an attacker can influence a chosen vulnerability or load point.

The only way to reliably stop such attacks is to “Reduce the bandwidth to zero”. But if you do you stop “work” being done…

[There are other imperfect mitigations you can use, such as making the information channel jump around in time by a form of fuzzing. It is effectively a form of inverse “Spread Spectrum” or “anti-jamming” technique. Whilst not perfect it can in some cases make an attackers task thousands if not hundreds of thousands of times more difficult. Likewise tricks involving increasing latency via “Store and forward” where you “re-clock” the information as part of the forwarding process and break any time correlation an attack depends on.]
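The “store and forward” re-clocking idea can be illustrated with logical timestamps (a toy model, no real I/O): whatever pattern an attacker encodes in inter-arrival times, fixed-period re-emission flattens it, at the cost of added latency.

```python
def reclock(arrival_times, period, start=0):
    """Queue messages and re-emit them on a fixed clock, never before they
    arrive; output spacing no longer depends on input spacing."""
    departures, next_free = [], start
    for t in sorted(arrival_times):
        ready = max(next_free, t)
        ticks = -(-(ready - start) // period)   # ceiling division to next tick
        slot = start + ticks * period
        departures.append(slot)
        next_free = slot + period               # one message per clock tick
    return departures

# Attacker encodes bits in the gaps (1, 4, 1, 4 = a pattern); after
# re-clocking, every departure gap is exactly one period.
covert = [0, 1, 5, 6, 10]
out = reclock(covert, period=4)
gaps = [b - a for a, b in zip(out, out[1:])]
```

As the bracketed note says, this is a mitigation, not a cure: coarse information (such as queue depth under load) can still leak, but the fine-grained time correlation an attack depends on is broken.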

But apart from “pulling the comms plug” you are not going to see such supposedly “classified” mitigations being made available in consumer or commercial ICT offerings with the way the industry currently works, and certainly not in the IAx86 and later architectures.

But as I indicated in my above post, with the way Microsoft have done things,

“Do not expect to find a solution as there is not one that is secure or reliable as far as we can tell…”

This is because they have in effect built a “Turtles all the way down” security system, whereby an attacker will always be able to “put another turtle in” and thus get the upper hand.

It’s a problem that can not be solved as both Apple and Google have found with their “Walled Garden” app stores.

Thus you have to ask the question,

“Why is Microsoft doing this?”

And if you think about it you will realise why and I can give you a hint “it’s not for your security” in fact the very opposite.

As they say about the likes of Meta and other supposedly “free” services,

“You are not the customer but the product”

Microsoft is amongst other things pushing to make you,

“Both an endless paying customer and a lucrative product, tied in as a serf till eternity ends.”

But as I’ve indicated Intel architecture hardware that’s mid 1990’s or earlier is a better bet security wise, than hardware up till 2005ish, after that it’s very much “insecure by design”.

Thus looking at other hardware architectures, Operating Systems and Applications might be a lot better place to start looking for the security to give you the Privacy we all had less than a life time ago. But as you might have noticed the builders of Web Browsers and the protocols/standards they run on are doing just about everything to rob you of privacy… and Governments and Corps are doing everything they can to force you “On-Line” in very insecure ways. And with Co-Pilot and its like being pushed in…

Shakespeare had a saying for such a situation,

“My forces and my power of men are yours; So, farewell, Talbot; I’ll no longer trust thee.”

Winter August 26, 2024 5:30 PM

@Clive

The notion that “Memory Safe” languages “will save you” is a complete nonsense being pushed by certain language fan-bros who really do not know enough to figuratively “Cross the street safely”.

I do not see how name calling will make computers more secure.

Unsafe use of memory is a known source of very serious vulnerabilities. Memory safe languages help in stopping that gap. The assertion that the people working on this believe it will “save us” is a straw-man argument. They do know the limitations of their work.

But as I’ve indicated Intel architecture hardware that’s mid 1990’s or earlier is a better bet security wise, than hardware up till 2005ish, after that it’s very much “insecure by design”.

That is almost entirely because that old hardware is unable to do much of what we need it to do in the first place.

It is true that less capable tools have less dangers. But that is because it simply can do less. With 1990’s technology comes 1990’s productivity and 1990’s income.

Thus looking at other hardware architectures, Operating Systems and Applications might be a lot better place to start looking for the security to give you the Privacy we all had less than a life time ago.

Indeed, but also at income levels of a life time ago. That might mean a different thing for a Brit than for those living in what was called second and third world countries a lifetime ago.

and Governments and Corps are doing everything they can to force you “On-Line” in very insecure ways,

As we are seeing around us, we do not even have the workforce anymore to go back to those times before the On-line society.

Going back to a pre internet society simply is no option anymore.

Clive Robinson August 28, 2024 4:12 AM

@ Winter,

Re : Not name calling.

Memory safe languages don’t actually do very much.

They prevent the trivially avoidable memory leaks and similar errors that arise from programmers’ lack of “good practice”.

But it only sort of works on “top down” bad programming practice at the source code level.

The fact memory issues are the current “lowest hanging fruit” simply means that squashing these bugs will only remove some small percentage of vulnerabilities. Hence my,

“The notion that “Memory Safe” languages “will save you” is a complete nonsense”

comment.

There are a whole load more types of vulnerabilities in the source code, the executables, the OS and the hardware at and below the ISA level in the computing stack. Also in documentation such as the Specification, Protocols, and Standards.

Winter August 28, 2024 5:46 PM

@Clive

They prevent the trivially avoidable memory leaks and similar errors that arise from programmers’ lack of “good practice”.

Trivially avoidable errors kill thousands in traffic each year.

The most successful way of reducing the death toll is to make it difficult to commit these trivial errors. This “simple” policy explains much of the huge differences in traffic fatalities around the world.

Memory safe languages don’t actually do very much.

They just make some trivial errors difficult to make. Trivial errors that have been made consistently since the birth of the C language and are a major source of fatal and exploitable bugs.

So, memory safe languages might do little, but the effect will be the elimination of a major class of vulnerabilities.

There are a whole load more types of vulnerabilities in the source code, the executables, the OS and the hardware at and below the ISA level in the computing stack.

Fatalism has never solved any problem and never made anyone more safe. You have to start somewhere. And starting with the low hanging fruit is actually the right thing to do.

Clive Robinson August 28, 2024 6:47 PM

@ Winter,

With regards,

“Unsafe use of memory is a known source of very serious vulnerabilities. Memory safe languages help in stopping that gap.”

The answer is, that “memory safe” languages have issues that critically limit their uses.

We’ve been through this in the past with “garbage collectors” and the answer is,

“Not ‘no size fits all’, but ‘no size fits any non trivial system'”

Put simply they are both inefficient, and undeterministically so which might be fine for some programming tasks such as some high level single user apps. But the further you go down the computing stack towards the metal the very much more problematic they become.

The solution that is too often tried to pull this back has been to significantly cut back on what a programmer has in their tool-box, to the point where all you find is a lump hammer with a too-short handle and a screwdriver that has been bent out of shape trying to pull nails.

So yes whilst “memory safe languages” will stop some classes of vulnerabilities, in the general scheme of things it is small in number.

Worse they act as a “crutch” to programmers who then don’t learn to programme correctly thus in effect can not progress. Which is fine with those managers who have a “Ship fast patch later” mentality. But as such code is almost never “properly tested” we know “technical debt” not just occurs but can and does build to tsunami style levels.

Worse, due to extensive delays between development and in-use maintenance patching, “magic umbrella thinking” can arise in those who develop patches that likewise do not get properly tested. With the all too frequent chance of “wrong or no solution code” becoming not just baked in but ultimately an anvil on which the code gets broken beyond repair.

Winter August 29, 2024 1:29 AM

@Clive

We’ve been through this in the past with “garbage collectors” and the answer is,

“Not ‘no size fits all’, but ‘no size fits any non trivial system’”

Put simply they are both inefficient, and undeterministically so which might be fine for some programming tasks such as some high level single user apps.

I think there is a misunderstanding here. Languages such as Rust do not use garbage collection; most protections are compile-time, and the run-time protections are deterministic.

But the further you go down the computing stack towards the metal the very much more problematic they become.

Currently, Rust is considered deterministic and efficient enough to be brought into the Linux kernel (Rust for Linux).[1] Particularly for things like drivers.

So yes whilst “memory safe languages” will stop some classes of vulnerabilities, in the general scheme of things it is small in number.

almost two-thirds of Linux kernel security holes come from memory safety issues. [1]

Worse they act as a “crutch” to programmers who then don’t learn to programme correctly thus in effect can not progress.

This strategy has been tried for half a century, and look where it brought us: it didn’t work.

[1] ‘https://www.zdnet.com/article/rust-in-linux-where-we-are-and-where-were-going-next/

Bruce Schneier August 29, 2024 5:16 PM

Clive, Winter: You are both going off topic. And Clive, please be less insulting.

I am going to try to police this more effectively.

ResearcherZero August 31, 2024 11:12 PM

@Clive @Winter

The biggest problem Secure Boot faces is that the memory space available for the DBX revocation list is limited, which leaves known vulnerabilities open for exploitation. There are a lot of resources available to exploit new vulnerabilities when they are discovered. When state-based actors penetrate each other’s networks, they learn from the techniques deployed.

Type confusion in Chrome’s V8 JavaScript engine was used to deliver a Windows sandbox escape.

‘https://www.microsoft.com/en-us/security/blog/2024/08/30/north-korean-threat-actor-citrine-sleet-exploiting-chromium-zero-day

https://blog.google/threat-analysis-group/state-backed-attackers-and-commercial-surveillance-vendors-repeatedly-use-the-same-exploits/

Morris May 9, 2025 11:39 AM

Some ASRock motherboards are affected too; at least one model certainly is, as in my case.

Morris May 9, 2025 11:47 AM

Why don’t manufacturers provide new UEFI files to fix this issue? The AMI test key replaced the correct PK. Instead of security, manufacturers have shipped unsafe products without remedy.

Clive Robinson May 9, 2025 2:28 PM

@ Morris,

With regards,

“Instead of security, manufacturers have spread unsafe products without remedying.”

You might not like this but let me ask you,

“Why would you expect it to be otherwise?”

When you consider,

1, neo-con corporate attitude to “short term” thinking for “shareholder value” and “executive bonuses”.
2, Due to corporate lobbying for the corporate attitude (1) there is no “oversight” because there is no “regulation” or “legislation” to ensure it happens.

As an historic example, the FAA suffered cut-backs and “oversight” of aircraft manufacturing became “in-house reporting”… Then Boeing aircraft started dropping, in whole or in part, out of the sky, and spacecraft were found to be unsafe to fly…

I could say similar about other US-managed manufacturers and suppliers, and people wonder why US exports are as distrusted as they currently are. Meanwhile, foreign cars that were once scorned remain keenly sought after in the US despite high tariffs, for reasons such as efficiency and reliability.
