New Windows/Linux Firmware Attack

Interesting attack based on malicious pre-OS logo images:

LogoFAIL is a constellation of two dozen newly discovered vulnerabilities that have lurked for years, if not decades, in Unified Extensible Firmware Interfaces responsible for booting modern devices that run Windows or Linux….

The vulnerabilities are the subject of a coordinated mass disclosure released Wednesday. The participating companies comprise nearly the entirety of the x64 and ARM CPU ecosystem, starting with UEFI suppliers AMI, Insyde, and Phoenix (sometimes still called IBVs or independent BIOS vendors); device manufacturers such as Lenovo, Dell, and HP; and the makers of the CPUs that go inside the devices, usually Intel, AMD or designers of ARM CPUs….

As its name suggests, LogoFAIL involves logos, specifically those of the hardware seller that are displayed on the device screen early in the boot process, while the UEFI is still running. Image parsers in UEFIs from all three major IBVs are riddled with roughly a dozen critical vulnerabilities that have gone unnoticed until now. By replacing the legitimate logo images with identical-looking ones that have been specially crafted to exploit these bugs, LogoFAIL makes it possible to execute malicious code at the most sensitive stage of the boot process, which is known as DXE, short for Driver Execution Environment.

“Once arbitrary code execution is achieved during the DXE phase, it’s game over for platform security,” researchers from Binarly, the security firm that discovered the vulnerabilities, wrote in a whitepaper. “From this stage, we have full control over the memory and the disk of the target device, thus including the operating system that will be started.”

From there, LogoFAIL can deliver a second-stage payload that drops an executable onto the hard drive before the main OS has even started.

Details.

It’s an interesting vulnerability. Corporate buyers want the ability to display their own logos, and not the logos of the hardware makers. So the ability has to be in the BIOS, which means that the vulnerabilities aren’t being protected by any of the OS’s defenses. And the BIOS makers probably pulled some random graphics library off the Internet and never gave it a moment’s thought after that.
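To make the failure mode concrete, here is a minimal hypothetical sketch (not Binarly’s actual findings, and not any IBV’s real code) of the header-trusting pattern that makes image parsers exploitable: the buffer size is computed from attacker-controlled header fields, and the arithmetic is allowed to wrap.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical logo header; real UEFI image parsers differ, but the
     * pattern is the same: dimensions come straight from the file. */
    typedef struct {
        uint32_t width;
        uint32_t height;
        uint32_t data_size;   /* size of the pixel payload that follows */
    } logo_header_t;

    /* VULNERABLE: trusts header fields taken from the flash image / ESP file. */
    uint8_t *decode_logo(const uint8_t *file, size_t file_len) {
        if (file_len < sizeof(logo_header_t))
            return NULL;

        logo_header_t hdr;
        memcpy(&hdr, file, sizeof(hdr));

        /* The 32-bit multiply can wrap: width = 0x10000 and height = 0x10000
         * gives pixels = 0, so a tiny (or empty) buffer is allocated... */
        uint32_t pixels = hdr.width * hdr.height;
        uint8_t *out = malloc(pixels * 4);
        if (!out)
            return NULL;

        /* ...but the copy is driven by data_size, which the attacker also
         * controls and which is never checked against file_len or the
         * allocation, so it writes far past the end of the buffer. */
        memcpy(out, file + sizeof(hdr), hdr.data_size);
        return out;
    }

During DXE there is no MMU-enforced isolation or OS mitigation standing behind a mistake like this, which is what makes the bug class so valuable to an attacker.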

Posted on December 12, 2023 at 7:01 AM • 41 Comments

Comments

Bob December 12, 2023 10:56 AM

Corporate buyers want the ability to display their own logos, and not the logos of the hardware makers.

Once again, MBAs prioritize the nice-to-have over the must-have, with entirely predictable results.

Clive Robinson December 12, 2023 11:08 AM

@ ALL,

This sounds like a “non-fixable” vulnerability that goes back to the 1970s…

The UEFI has three basic stages,

1, SEC – Load CPU fixes etc
2, PEI – Configure motherboard
3, DXE – Load/Config additional IO

And the last one has the unfixable issue (as it’s a required feature).

The DXE (“Driver eXecution Environment”) stage is where the UEFI system loads drivers and configures devices for I/O that is not included on the motherboard or in the UEFI code. As required, it mounts drives and finds and executes the required OS boot code.

Importantly, the vulnerability remains in memory after control is transferred to the OS boot loader, because, just as in the older BIOS and the Apple ][ before it, the drivers and entry points stay resident in memory to handle any OS-to-UEFI calls.

The reason is that, as Apple discovered on the Apple ][, you cannot support new hardware such as disk drives from the supplied BIOS, because you cannot see into the future. To do this you have to get driver code off of the I/O device ROM itself and use it to “load the code” into memory, and it has to stay resident for the OS to boot up and beyond…

We have seen this vulnerability in various forms before.

Two that most should remember from a decade or so ago: Lenovo using the mechanism from within the motherboard flash ROM to load what users of their consumer laptops considered malware; and BadBIOS back in 2013, where it was discovered that the mechanism could load really nasty code, including the ability to cross “air-gaps” using a laptop’s microphone and speaker (which was claimed to be impossible until it was documented how to do it on this blog, then a couple of university students wrote it up as a paper and everybody became instant experts…).

And back a little over a year ago it happened yet again, with a UEFI rootkit reported by Kaspersky. If you look back on this blog you will discover I mentioned that it was a security hole that would come back to haunt us again,

https://www.schneier.com/blog/archives/2022/07/new-ufei-rootkit.html/#comment-408277

As well as some of the BadBIOS history.

So here we are again, for at least the fourth time…

Any bets when it’s going to be time number five?

JonKnowsNothing December 12, 2023 11:39 AM

@Bob , All

re: Once again, MBAs prioritize the nice-to-have over the must-have

Not necessarily correct.

  • There are multiple hardware options for a device
  • There are multiple add-on options for a device
  • There are differences between manufacturers on hardware specs

So, it is not too surprising that a custom logo is displayed.

Consider:

  • Would you be happy to have purchased a Brand X device, but not see the expected Brand X logo, only to later find out it is Brand Z?

Nope, the logo is a confirmation that you got what you ordered.

The problem is when, where and how the custom logo is installed and where it executes at boot up.

As @Clive explained above, using Insert-Code-Here during boot time is open to inserting Any-Code at the Code-Here execution point.

Stuart Ward December 12, 2023 11:40 AM

Although this is bad, there might be a silver lining. I suspect that the place this will be used most will be to jailbreak hardware systems that the user owns but that prevent running other software on them. Apple is the main target here, and being able to unlock their hardware could be a benefit to owners.

Ray Dillinger December 12, 2023 12:56 PM

Am I reading this right? These ‘image parsers’ mean the contents of the actual PIXELS they put up on the boot screen can influence anything about the boot process other than the colored regions on the screen?

WHY?

What possible reason could anyone have to parse that graphics file for code? Why isn’t displaying the manufacturer’s logo simply a matter of changing the contents of a completely inert graphic – one stored in the BIOS instead of on a disk, sure, but why isn’t the contents of that graphic completely inert? What possible reason is there to execute ANYTHING AT ALL based on its contents?

I understand manufacturers building crap software into the BIOS. We’ve seen that forever, all the way back to TSR (terminate and stay resident) key loggers back in the DOS era.

But why would any, ‘scuse my language but there’s no getting around this. Why would any MORON make the frickin’ pixels they put up on the bootup screen influence the execution of code?

Yet Another Riddle December 12, 2023 1:24 PM

“…riddled with roughly a dozen critical vulnerabilities that have gone unnoticed until now.”

Leaves me wondering what other face-palm vulnerabilities exist now that we’ll find out about next month, next year, maybe never.

ffinley December 12, 2023 1:27 PM

Ray, that’s a good question and we discussed the story a bit here:
https://www.schneier.com/blog/archives/2023/12/friday-squid-blogging-strawberry-squid-in-the-galapagos.html

(My follow-up message has apparently been stuck in moderation for a week. I suggested referring to data as “user-controlled” or “attacker-controlled” instead of “unchecked”, because “checking” hints at bad designs and “where does this data come from?” is what we really care about.)

The short answer is that the BIOS is not looking to execute code at all, but is parsing incompetently and with no protection. It’s your standard buffer overrun; understood since 1972, famously exploited by Morris in 1988, and popularized in 1995 and 1996. Bruce wrote “the ability has to be in the BIOS, which means that the vulnerabilities aren’t being protected by any of the OS’s defenses”—which is kind of true, but misleading. Yes, the OS itself has no opportunity to protect anything here, but “OS defenses” could certainly be implemented in firmware (and in fact Intel is known to embed an entire OS, Minix, in their management engine firmware).

Realistically, I see two simple and safe ways to implement this feature. One, don’t support any form of compression or other format complexity; for example, say that anyone who wants their image to be shown shall store it in the format of a 16-bit little-endian width, then height, followed by raw {R,G,B} byte triplets. A severely restricted BMP parser could be called a variant of this option. Or, two, parse the data entirely via the CPU’s protected mode, providing exactly one syscall: “I’m done”. I say, go with option 2 because there are probably about a hundred other places a BIOS-maker could use such a feature.
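A rough sketch of what that first option might look like (purely illustrative; the format and the size limits are assumptions, not anything an IBV actually ships): fixed little-endian dimensions, raw RGB triplets, and explicit bounds checks before anything is copied.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical "dumb logo" format: u16le width, u16le height, then
     * width*height raw {R,G,B} triplets. No compression, no interpreter. */
    #define MAX_W 1024
    #define MAX_H 768

    static uint16_t rd_u16le(const uint8_t *p) {
        return (uint16_t)(p[0] | (p[1] << 8));
    }

    /* Returns 0 on success; copies at most MAX_W*MAX_H*3 bytes into dst. */
    int show_logo(const uint8_t *img, size_t img_len,
                  uint8_t *dst /* output buffer, RGB, MAX_W*MAX_H*3 bytes */) {
        if (img_len < 4)
            return -1;

        uint16_t w = rd_u16le(img);
        uint16_t h = rd_u16le(img + 2);

        if (w == 0 || h == 0 || w > MAX_W || h > MAX_H)
            return -1;                      /* reject oversized or odd images */

        size_t need = (size_t)w * h * 3;    /* cannot overflow a size_t here */
        if (img_len - 4 < need)
            return -1;                      /* truncated file */

        memcpy(dst, img + 4, need);         /* nothing here is ever executed */
        return 0;
    }

The point of the sketch is that there is nothing to interpret: the only attacker influence is which bytes end up as pixels.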

Clive Robinson December 12, 2023 2:41 PM

@ Bruce, ALL,

The basic recipe from which this idea was baked, from just over a decade ago, on “Guy Fawkes Night” 2013:

https://www.schneier.com/blog/archives/2013/11/badbios.html/#comment-209147

Note the BIOS I/O hole that UEFI has in DXE, which is what the 0xA5A5 header in the old PC AT BIOS was for.

And in the last paragraph, the use of “image files”: the worst offender back then was Adobe PDF, which at one point appeared to have a significant vulnerability found about once every other month.

As I sometimes observe, the ICTsec industry does not appear to learn from its history.

I wonder if these researchers are even aware of this bit of history, maybe they should drop by and say Hi etc.

Clive Robinson December 12, 2023 3:05 PM

@ Ray Dillinger

Re : Code in images is standard.

To answer your question, it’s all about redundancy and compression.

A raw pixel image for a 1024-pixel-wide or bigger screen is going to be four bytes multiplied by the height in pixels by the width in pixels (for example, 1024 × 768 × 4 ≈ 3 MB), so more than a couple of megabytes of mostly identical pixels.

Back a long time ago, bit-mapped graphics were seen as “nothing but trouble” for a whole host of reasons, so “scalable vector graphics” were pushed instead, which gave us PostScript, the forerunner of PDF.

The thing is, neither is actually a pixel file or a vector-list file. They are actually a computer program in an interpreted, stack-based language quite similar to Forth.

As time went on, the sheer advantages of having interpreted code generate images meant something similar happened in a lot of other graphics file formats.

So here we are today needing something in excess of twenty different interpreters just for graphics files, and that’s before we then compress them further, because there is not a lot of space in a Flash ROM; even the equivalent of the old BIOS code was packaged up like an executable ZIP file.

The thing is, writing twenty-something interpreters and a half dozen or so file compression algorithms is not a trivial task, and for other “licensing” reasons using someone else’s already-written and effectively free library scores big bonus points with marketing and management.

We might think it’s the work of “mad men” and technically quite a few probably are but they are the guys that control the pay checks at the end of the month…

As was noted long before either you or I were a twinkle in our fathers’ eyes,

“He who pays the piper, calls the tune.”

And that is the way the market works, like it or not (I’m on the “not” side of the argument, even though the other side is why so many people can earn a living sorting out the mess).

reallyfunnyaccents. December 12, 2023 3:33 PM

oddly fits how problems happen when, secretary kim has such fun at the projects adjacent house, sleeping in. Operating systems can have such issues after a while. UEFI and flowers trunk flip flop cayenne allergies, fits this pre boot issue. SPECTACULAR.

Such a great 80s DVD,WWWSK?; really references a different secretary indirectly.

IT and national security evolution of a company…

who would have ever thought, that these things, can be so funny indeed.

JonKnowsNothing December 12, 2023 8:02 PM

@ Morley

re: So, attacker has to get me to install their boot logo? Mostly a supply chain attack

While hacking at a mfg or via 3L-MITM re-flash is a likely scenario, there are lots of possible paths to set this up.

At minimum

  • you need a source and download for the corrupted image, yet hidden so that the std antivirus cannot detect it. (1)
  • you need to know how or when to trigger the update to the section of code(s) where the image can be staged prior to execution.
  • you need to know the call chain for the targeted updater sequence, or to lurk until a bona fide update happens and you can piggyback in on an official update.

This would make installation available on any device for which you can design the corrupt file.

===

1) A while back GPU Memory was a popular stashing spot.

MrC December 12, 2023 8:15 PM

@ Morley, All:

If I understand correctly, there are two infection vectors: One is inside an unsigned section of a firmware update. The other is to drop a file with the correct name into the EFI partition on the boot drive. The former would require a supply chain attack or some trickery to get the user to install the poisoned update. But the latter would be open to any malware that achieved enough access rights to mount the EFI partition.

I know the malware vector sounds like a “They already have root, so how can it get worse?” situation, but it gets worse because of the persistence. The EFI partition might not be included in virus scans, or some IT department’s “wipe and reinstall” procedures. (Apparently Windows Defender just started mounting and scanning it a couple years ago. Not all antiviruses do.) Moreover, once the logofail payload gets a chance to run, I believe it would have sufficient access to copy itself to the UEFI flash memory on the motherboard. And good luck finding it then.

strings are binary code December 12, 2023 8:56 PM

@ffinley

I like your “controlled” terminology. Thanks for your input

@Clive

1024, OK sure, I could understand a business reason for that – but not a very good one. The simpler the better for advertising; something that can be recognized in a thumbnail of 32×32 or 64×64.

ResearcherZero December 13, 2023 3:46 AM

@MrC

The Dell systems were not affected, as they hardcoded the boot logo images. But other systems can be affected in a variety of ways, such as via an image inserted in the supply chain, as approximately 20% of firmware updates contain at least two or three known vulnerabilities in their image parser libraries.

An attacker can cause the firmware to allocate a buffer too small for the decoded image data, which is then overflowed with attacker-controlled data during the decompression phase.
https://www.youtube.com/watch?v=EufeOPe6eqk
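To illustrate the defensive counterpart (a hedged sketch of my own, not Binarly’s or any vendor’s code): check the dimension multiplication for overflow before allocating, and check every decompressed byte against the space actually allocated.

    #include <stdint.h>
    #include <stdlib.h>

    /* Allocate an output buffer for a width x height x 4-byte image,
     * refusing dimensions whose product would overflow. */
    uint8_t *alloc_pixels(uint32_t width, uint32_t height, size_t *out_len) {
        if (width == 0 || height == 0)
            return NULL;
        if (width > SIZE_MAX / 4 / height)   /* would width*height*4 wrap? */
            return NULL;

        *out_len = (size_t)width * height * 4;
        return malloc(*out_len);
    }

    /* During decompression, never write on trust: each output byte is
     * checked against the space that was actually allocated. */
    static int emit(uint8_t *out, size_t out_len, size_t *pos, uint8_t byte) {
        if (*pos >= out_len)
            return -1;      /* decoded data exceeds the declared image size */
        out[(*pos)++] = byte;
        return 0;
    }

Neither check is exotic; the LogoFAIL findings suggest they were simply absent from parsers that were never expected to see hostile input.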

Private keys and Boot Guard keys leaking in the past don’t exactly help matters either, but with LogoFAIL, since boot logo images are typically neither signed nor verified, cryptographic checks are avoided: the payload is dropped before they begin.

Typically, a platform may have 50 PEI (Pre-EFI Initialization) modules and 180 DXE (Driver Execution Environment) modules. These modules and drivers all execute within ring 0 and typically don’t have intercomponent separation.

The UEFI Capsule has a well-defined header named by a Globally Unique Identifier (GUID). The producer of the system firmware wraps their update payload—be it code, data, or even an update driver—into this format.

The provenance of the update is then guaranteed by applying a cryptographic signature across the capsule using keying material owned by the capsule producer.

‘https://embeddedcomputing.com/technology/security/software-security/understanding-uefi-firmware-update-and-its-vital-role-in-keeping-computing-systems-secure

Capsules are a means by which the OS can pass data to the UEFI environment across a reboot. Windows calls the UEFI UpdateCapsule() to deliver system and PC firmware updates. At boot time, prior to calling ExitBootServices(), Windows will pass any new firmware updates found in the Windows Driver Store into UpdateCapsule().
https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-secure-boot-key-creation-and-management-guidance?view=windows-11
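For reference, the capsule plumbing described above looks roughly like this (paraphrased from memory of the UEFI specification; the spec and the EDK II headers are the authoritative source, and the base types such as EFI_GUID, UINT32, and UINTN come from the usual UEFI headers):

    /* Header at the start of every UEFI capsule (see the UEFI spec,
     * "UpdateCapsule"). Assumes the standard UEFI base types. */
    typedef struct {
        EFI_GUID CapsuleGuid;       /* identifies the capsule format/consumer  */
        UINT32   HeaderSize;        /* offset from header start to payload     */
        UINT32   Flags;             /* e.g. CAPSULE_FLAGS_PERSIST_ACROSS_RESET */
        UINT32   CapsuleImageSize;  /* total size, header included             */
    } EFI_CAPSULE_HEADER;

    /* Runtime service the OS (e.g. Windows, as described above) calls to
     * hand capsules to the firmware, optionally across a reset. */
    EFI_STATUS
    EFIAPI
    UpdateCapsule (
      IN EFI_CAPSULE_HEADER   **CapsuleHeaderArray,
      IN UINTN                  CapsuleCount,
      IN EFI_PHYSICAL_ADDRESS   ScatterGatherList  OPTIONAL
      );

The point made in this thread follows from that layout: the signature covers the capsule as delivered, but whatever the payload itself contains (such as an embedded logo) is only as safe as the firmware code that later parses it.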

ResearcherZero December 13, 2023 3:49 AM

“The vulnerabilities allow attackers to store malicious logo images either on the EFI System Partition (ESP) or inside unsigned sections of a firmware update.
Once the image is in place, it ensures a device remains infected even when an operating system is reinstalled or the main hard drive is replaced.”

The OEM/IBV provides vendor-specific integrator tooling that can install a custom logo in a firmware capsule and flash the capsule from the OS (some of these tools are pretty simple).

‘https://binarly.io/posts/finding_logofail_the_dangers_of_image_parsing_during_system_boot/

Clive Robinson December 13, 2023 7:40 AM

@ ResearcherZero, ALL,

What has not been obviously mentioned is that whilst the “UEFI Capsule” has to be signed… the image does not.

This is such an obvious “security hole” people will say something along the lines of

“What the heck, Why not?”

To which the answer, when you distill it down, is “convenience”.

But I’m sure some will think “back-door”.

ffinley December 13, 2023 2:04 PM

Clive, I’m not sure the lack of signing was itself either a “hole” or “backdoor”. As I understand it, all executable code was meant to be signed, and this extra data—probably meant to be treated similarly to “EFI variables”—couldn’t have been executed if the parser(s) were competently programmed. To let a corporate buyer display their own logo, with an entirely signed payload, the buyer would have to be able to enroll their own key… which is possible, so how would it prevent this exploit? (And have people checked the key-parsers for similar bugs?)

Maybe something interesting could’ve been done with multiple levels of keys, or with write-once memory (e-fuses?). The complexity of that would probably be more work than just writing (or sandboxing) the image parser correctly, and extra complexity is often harmful to overall security. Let’s not expect too much of BIOS-writers. I guess Intel made that mistake already, by providing a design that encourages privileged code execution and lacks pre-written sandboxing.

A semi-related question for everyone: a lot of high-end motherboards can flash BIOS images directly from FAT-formatted USB flash drives, with no CPU or RAM installed. They’re obviously doing it from some kind of auxiliary CPU/MCU. Are those things checking any signatures at all?

And a slight rant: if UEFI ran faster, no logos would be needed, because we’d be in the bootloader before the monitor recognized its video signal. Has anyone here ever written code directly into a BIOS chip? I have, and was amazed the first time I ran it: the “hello world” message appeared (via serial port) so quickly I didn’t even notice it, because my eyes were still looking toward the power button. I don’t know what my PC does for the 20-40 seconds before transferring control, but that kind of delay is absolutely unacceptable in many embedded markets—CPUs themselves are executing code from EEPROM within milliseconds, and a car for example has just 2,000 milliseconds to display a rear-view camera image (per American regulatory requirements; that’s from the key being turned, and despite the horrendous electrical noise occurring during that period).

Bob December 13, 2023 3:06 PM

@JonKnowsNothing

You basically just explained why MBAs prioritize the nice-to-have over the must-have, with entirely predictable results. It doesn’t actually change the fact that that’s what they’ve done, though.

JonKnowsNothing December 13, 2023 4:13 PM

@Bob, All

re: MBA nice-to-have over must-have

For every product there are Marketing Specs and then there are Engineering Specs. Marketing might have a lot of MBAs that define Customer Requirements for a product.

Engineering is responsible for the design of the system, the code, the modules, the hardware, the methods of data exchange.

It’s an Engineering Problem. A well-known one. It’s an Engineering Design Flaw.

The spec says something like:

  • Change startup logo image on demand
  • Change start up logo slide show on demand

Engineering builds the mechanisms, the modules, the code and says THIS is how we will do it.

Marketing has nothing to say about it.

It’s all WAI.

Bob December 13, 2023 6:04 PM

@JonKnowsNothing

That position makes sense if you pretend that people have infinite time. I will be pretending no such thing.

JonKnowsNothing December 13, 2023 7:15 PM

@Bob, All

re: Engineering always has infinite time

Engineering always has infinite time, it’s called

  • We will fix it in the next release

Since there is a never ending cycle of next releases, occurring until the End of Product, End of Company, or End of Time, Engineering always has infinite time, aka Soon(TM).

===

Search: Def Soon(TM)

  • “Soon™” does not imply any particular date, time, decade, century, or millennia in the past, present, and certainly not the future. “Soon” shall make no contract or warranty. “Soon” will arrive some day, but there is no guarantee that “soon” will be here before the end of time.
  • “Very Soon™” is guaranteed to arrive between now and the end of time with a higher chance of arriving on the “now” half of the time table. Although this means closer to now than “soon” there is no guarantee that you will live long enough to see the release.
  • “sometime between next Monday and when the Moon escapes the Earth’s orbit!”

Now ←———– Very Soon™ ——– Soon (Not Soon™) ——– Soon™ ——– Soon-ish™ ——–→ End of Time

Clive Robinson December 13, 2023 10:12 PM

@ ffinley,

Re : Turing engine issue.

“As I understand it, all executable code was meant to be signed, and this extra data—probably meant to be treated similarly to “EFI variables”—couldn’t have been executed if the parser(s) were competently programmed.”

If you look back through this blog a decade or so you will see I’ve had a very low opinion of signed code, as did @Nick P. Since we pointed out its deficiencies back then, they have all come to fruition as expensive vulnerabilities.

Secondly, if I were to say you cannot “competently program” to the extent you imply, you might be a little shocked…

But it’s true. It’s implicitly part of the “Halting Problem” that Turing proved in the mid-1930s had no solution, using amongst other things Cantor diagonals (used to show that some infinities are larger than others).

But also a couple of papers a few years earlier from Kurt Gödel showed that all such logics capable of computation could not describe themselves in their entirety.

Without digging down into some very odd-looking formalisms, suffice it to say that computers that are “Turing Complete” cannot determine the state they are in, only indicate the state they have been instructed to indicate.

Whilst you can cheat a bit by using a Turing Engine to check a Turing engine, it is possible for the engine being checked to cause the checking engine to falsely report. Thus you get into a “Turtles all the way down” situation.

What you can do, however, is use a fully defined state machine that acts as a filter, treating all data the same way, but does not act on the data by value.

Such a state machine would be like a DSP system: it would manipulate all data the same way irrespective of value. As such you are looking at MAD (“Multiply and ADd”) instructions and masking instructions, but not data-defined branching, etc.
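A tiny illustrative sketch of that idea (my own example, not any particular DSP instruction set): select between two values with masks and arithmetic instead of a data-dependent branch, so every input goes through the identical sequence of operations.

    #include <stdint.h>

    /* Branch-free select: returns a when cond != 0, else b.
     * The same mask-and-combine work is done for every input, so the
     * instruction sequence never depends on the data values themselves. */
    static uint32_t select_u32(uint32_t cond, uint32_t a, uint32_t b) {
        uint32_t mask = (uint32_t)0 - (cond != 0);   /* 0xFFFFFFFF or 0 */
        return (a & mask) | (b & ~mask);
    }

    /* Example: clamp a pixel value to 255 without an if/else. */
    static uint32_t clamp255(uint32_t x) {
        return select_u32(x > 255, 255, x);
    }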

However powerful such systems can appear to be, they are not Turing Complete, thus their functionality is much reduced.

So if you want the functionality of data-dependent state changes, you have to accept the liability that brings, which is that, aside from trivial cases, the code will always be vulnerable to some form of attack.

Whilst people don’t want to accept this for cognitive-bias reasons, it is nevertheless a fact of the way Turing Complete engines work, and whilst you can make the probability small it always remains…

Thus a fundamental rule of security is,

“Data should never be code”

But this makes Turing Complete engines not exactly useful.

So it’s a “Catch 22” type problem,

“To be of use, data must be able to change a computers state, however if it can change the computers state then the computer is not just vulnerable, it can not reliably report it is vulnerable.”

I went into this in a little more depth a decade or so ago on this blog and kind of “cheated my way out” to a degree. If you want to know more look back for “Castles v Prisons” or as @Wael and others ended up calling it “C-v-P” or just “CvP”.

The result was “Probabilistic Security”, which a trio of academics kind of tried to pass off as their own idea (one is now a Prof at UCL). But on the funny side, they did not quite get it right. @Thoth pointed out the theft of the idea after seeing them try to market the basic idea using SIM cards…

Whilst I don’t mind people using my ideas to improve security, it’s annoying when they get it wrong, because people who know where the idea originated often cannot tell why it went wrong, only that it did, and that can reflect badly on me.

ResearcherZero December 14, 2023 1:34 AM

Disabling the parsing of boot images in the BIOS, or hardcoding the image, could fix the problem. Some devices have a BIOS with this option; assuming it is implemented correctly, it would be useful.

Reintroduce terrorizing the end user to personal computing.

As the average user doesn’t know it exists anyway, might as well make the BIOS useful.

Ensure it is poorly documented, make it confusing to access, display everything in hex, every configuration change is followed with a loud system beep after reboot, and ensure there are plenty of warnings that ask, “are you sure you want to do this?”

If no device speakers exist, print out a large ominous warning across the display.

Hazardous materials, chance of electrocution, and sharp edges everywhere!

Make everyone pay extra for RAID, encryption, and multiple file system support. They probably don’t need it, and they probably are not utilising it in the manner it was designed for. Make everyone compile their own kernel and module support as default.

ffinley December 14, 2023 2:42 AM

Secondly if I was to say you can not “competently program” to the extent you imply you might be a little shocked…

The core requirement is to read some image data without confusing it with code, and to put it on the screen. It’s not to parse every image format currently in existence, to parse any image format currently in existence, or to run a user-provided image parser. So, I doubt the Halting Problem comes into play. If it did, we’d have several viable options:

  • ignore it; if LogoFAIL could only put UEFI into an infinite loop, would anyone care?
  • mitigate it with a simple timeout (see the sketch after this list)
  • avoid it by creating an intentionally non-Turing-complete execution environment for the parser (Coq and the original Berkeley Packet Filter do this)
  • or just solve it; that can’t be done for every program, but the BIOS developers can keep re-writing till they get a solvable one (as I believe is common in formal verification)
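As a rough sketch of the timeout idea from the list above (my own illustration; a step budget stands in for a real timer or watchdog, and the numbers are arbitrary), the parser’s main loop simply refuses to run forever:

    #include <stdint.h>
    #include <stddef.h>

    #define PARSE_STEP_BUDGET 1000000u   /* arbitrary cap, for illustration */

    typedef enum { PARSE_OK, PARSE_BAD_INPUT, PARSE_TIMEOUT } parse_result_t;

    /* Skeleton of a parser loop with a hard step budget rather than relying
     * on the input to terminate "naturally". */
    parse_result_t parse_image(const uint8_t *data, size_t len) {
        uint32_t steps = 0;
        size_t   pos   = 0;

        (void)data;   /* a real parser would decode tokens from here */

        while (pos < len) {
            if (++steps > PARSE_STEP_BUDGET)
                return PARSE_TIMEOUT;     /* give up; show no logo at all */

            /* ... decode one token/run/chunk at pos, advancing pos ... */
            pos++;                        /* placeholder: always make progress */
        }
        return PARSE_OK;
    }

An infinite loop at boot is the least of LogoFAIL’s consequences, but the same bounded-work discipline also limits how long a malformed image can stall the boot path.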

The necessary competence, though, could be as simple as deciding to “take the hit” of sticking several megabytes of raw pixel data into EEPROM, and programming the display controller to use that hard-coded address as its framebuffer. Or, as I said, relying on the existing CPU machinery, thus moving most of the responsibility onto someone else. I’m sure you can point out various flaws of CPU protection modes (side channels, especially), but are any of them likely to matter in this highly limited context?

We can infer a requirement from what Bruce wrote: UEFI should have operating-system-style defenses against vulnerabilities. Don’t over-complicate it.

Gumica December 14, 2023 4:11 AM

@Twilight- an LLM listed these 3 types

There is no definitive answer to this question, as different types of security vulnerabilities may affect Windows based systems in different ways. However, based on the web search results, some of the most common and critical vulnerabilities that have been discovered or exploited in 2023 are:

  • PrintNightmare (CVE-2023-36884): This is a vulnerability in the Windows Print Spooler service that allows arbitrary code execution by convincing victims to open a malicious file³. This vulnerability has been actively exploited and has not been patched by Microsoft yet¹.
  • Remote Desktop Authentication Bypass (CVE-2023-35352): This is a vulnerability that allows attackers to bypass certain authentication configurations, such as certificate or private key authentication, when establishing a remote desktop protocol session¹. This vulnerability could enable unauthorized access to sensitive data or systems.
  • Message Queuing RCE (CVE-2023-32057): This is a vulnerability that allows attackers to execute remote code on systems that have the Message Queuing service enabled¹. This service is not enabled by default, but it may be used by some applications or features.

So my prediction would be none of them, as for a major exploit to occur it needs to come from left field* and as such will be known as a baseball attack.

  • ‘to come from left field’ – The origin of this phrase is probably related to baseball, where the left field is the part of the outfield that is farthest from the home plate.

Clive Robinson December 14, 2023 6:24 AM

@ Anonymous,

“We should have a guessing contest here as to what next big PC security vulnerability will be”

Err as I pointed out I guessed this one a decade ago, and I’ve guessed several more since as have others…

Thus I’d say the chances are the “next big PC security vulnerability” has already been guessed and can be found on this blogs pages going back a half decade or more.

Thus a couple of questions you might want to mull over,

1, Why do these long-warned-about vulnerabilities happen?
2, Why on average is it eight years after the warnings?

P.S. Please do not suggest it’s me and that it takes me so long to go from concept to code… You’d hurt my feelings B-)

Clive Robinson December 14, 2023 7:32 AM

@ ffinley,

Re :

“The core requirement is to read some image data without confusing it with code”

Yup, but as I pointed out, a 1024-pixel-across image in raw format needs more than a couple of megabytes of, in most cases, identical four-byte values.

Flash ROM is very expensive in terms of $/byte compared to other storage technology.

It’s an expense management will not stomach unless they are figuratively “strapped down and force fed” by regulation or legislation.

Likewise Marketing won’t stomach the loss of what they see as “customer convenience” and at this party they generally “call the tune the piper plays”.

But there are also always programmers “that want to be clever” in some way they really don’t fully grok. Hence flight simulators in office software and a multitude of “Easter Eggs”, plus “everything including the kitchen sink” code-reuse libraries and similar, of which Log4j is just one recent example of “Swiss Army knife coding”[1].

So “compression” of one form or another is here to stay, much as any sane person would scream “don’t do it”.

So,

“So, I doubt the Halting Problem comes into play.”

From my perspective it’s not going to go away any time soon if ever.

As for,

“thus moving most of the responsibility onto someone else. I’m sure you can point out various flaws of CPU protection modes (side channels, especially), but are any of them likely to matter in this highly limited context?”

The “buck passing” usually goes down the “computing stack”, where the below-CPU level is why “bubbling up” attacks such as Rowhammer can break any and all existing OS security mechanisms… In the past I’ve discussed “architecture changes” and why things like CHERI[2] and memory tagging will still be vulnerable.

But even though we can see these issues a decade or more before they “are found” by, as our host @Bruce calls it, “thinking hinky”, we generally don’t mitigate let alone solve them. Then the principle of “low hanging fruit” comes into play: the time delay can be partially put down to this, that is, as an attacker, why do more difficult attacks whilst easy ones still work? However, the easy attacks, often after they have done much damage, do get fixed, so attackers flush with resources can and do develop more sophisticated attacks. So they will get around to what currently appear to be quite esoteric theoretical attacks, because they want to stay flush with resources from those who fail to mitigate.

And as I’ve pointed out, the corporate model is “do the least required” when it comes to ICTsec. Which might be why there are something like two hundred new vulnerabilities a day being found.

Which kind of says,

“We can infer a requirement from what Bruce wrote: UEFI should have operating-system-style defenses against vulnerabilities. Don’t over-complicate it.”

Is not going to be a working policy. In fact we can show it has already failed with the history of web browsers. With the move from CLI terminals to local graphical environments it was clear that a lot more than current OS security was required (think about copy-and-paste buffer issues, typing/autocorrect/spell-checker issues, etc.). Then we moved to “web based”; back in the mid-1990s I was warning this was a bad idea due to the lack of security in web systems and that they needed OS-level security built in… Most people gave me blank looks or said I did not know, or was paranoid, etc., etc. Eventually a few in Google grokked the problem, so they started in on the Chrome browser, which did have some OS security features. But we know where that went, just as I indicated it probably would…

Broadly I can tell you where things are going to go, because in effect “I’ve seen it all before” and humans really don’t change that much from generation to generation. But the ICT industry is the worst I’ve ever worked in for “not learning from its history”, and unless legislators get out from under the corporate thumb and properly put in place ICT consumer-protection legislation and the equivalent of “lemon laws”, our future is going to be “recklessly insecure”.

[1] Swiss Army knives are one of the first ever “pocket multi-tools”, and over half a century ago there were jokes that the tool you did not know the purpose of was “to get Boy Scouts out of Girl Guides”.

[2] CHERI was thought up at the UK Cambridge Computer Labs,

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/

And although it’s a great improvement, it’s by no means perfect and you take a resource hit. If you look back on this blog you will see I went another way in “Castles v Prisons” and “Probabilistic Security”, where the system owner can set the desired level of security versus performance hit.

Chris Pepper December 14, 2023 11:21 AM

It’s an interesting vulnerability. Corporate buyers want the ability to display their own logos, and not the logos of the hardware makers. So the ability has to be in the BIOS, which means that the vulnerabilities aren’t being protected by any of the OS’s defenses. And the BIOS makers probably pulled some random graphics library off the Internet and never gave it a moment’s thought after that.

Are you sure? I thought this was BIOS makers like Phoenix exempting logos from their signing process so hardware sellers like Lenovo could insert their own corporate logos. And companies like HP might refuse to buy millions of BIOS licenses if they couldn’t update HP logos without updated vendor signatures — what would happen when a vendor went out of business? I didn’t think this had anything to do with enterprise PC buyers.

ffinley December 14, 2023 11:44 AM

A 24-bit 1024×768 graphic will be 2.25 MB, and any desktop monitor will scale it up. A laptop display won’t, though modern display controllers can also scale things and do partial-screen overlays. It’s common to find cheap routers with 64 MB to 256 MB flash chips; desktop systems often have room for 2 copies of the BIOS code, and graphics don’t need to be duplicated that way. So I think uncompressed graphics could be made to work, if a developer couldn’t come up with something better.

Most people don’t consider it “buck-passing” in a negative sense when an operating system uses CPU features to run user-provided code—let alone unchanging vendor-provided code for handling a simple vendor-defined format. Granted, it’s been a never-ending battle in recent years with speculative execution, but UEFI could go down the “slow paths” for everything and still decompress a single graphic quickly enough. Rowhammer shouldn’t matter if the code can’t access the CPU’s DRAM or there’s nothing there to attack, and both could be arranged: run the decompressor from ROM with the page tables configured so only the video RAM or the CPU’s SRAM cache is writable.

The preceding paragraph assumes the vendor-provided decompressor will fail, so as to execute user-provided image data as code. I don’t think that’s fair. A simple run-length-encoded (RLE) decompressor is a tiny amount of code that could be made to read and write its data in order, one byte at a time, thus easily made to fault upon overflowing either buffer. Even if one forgoes formal verification, what’s the attack there? (Assuming the code doesn’t make the Xbox mistake of expecting address-space wraparound to fault.)
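A minimal sketch of such a decoder (illustrative only; the [count][value] pair encoding is an assumption here, not any IBV’s actual format): it reads and writes strictly in order and bails out, rather than overflowing, when either buffer runs out.

    #include <stdint.h>
    #include <stddef.h>

    /* Decode run-length pairs: [count][value] repeated. Returns the number
     * of bytes written, or -1 on malformed or oversized input. Never reads
     * past src+src_len and never writes past dst+dst_len. */
    long rle_decode(const uint8_t *src, size_t src_len,
                    uint8_t *dst, size_t dst_len) {
        size_t in = 0, out = 0;

        while (in < src_len) {
            if (src_len - in < 2)
                return -1;                 /* truncated pair */
            uint8_t count = src[in++];
            uint8_t value = src[in++];

            if (count == 0)
                return -1;                 /* disallow zero-length runs */
            if (dst_len - out < count)
                return -1;                 /* would overflow the output buffer */

            for (uint8_t i = 0; i < count; i++)
                dst[out++] = value;
        }
        return (long)out;
    }

With the input and output cursors only ever moving forward and checked on every step, the worst a malicious image can do is fail to decode.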

Bob December 14, 2023 12:02 PM

@JonKnowsNothing

re: Engineering always has infinite time, it’s called We will fix it in the next release

What you’ve actually done here is hit on yet another entirely predictable result of MBAs prioritizing the nice-to-have over the must-have.

JonKnowsNothing December 14, 2023 1:04 PM

@Bob

re: fix it in the next release

Every tech company has a Bug Database. Sometimes they have 2 or even 3 versions of this database. It’s just full of coding errors, logic errors, and things that just do not work.

The number of items in that database is guaranteed to be “Greater Than One”.

Marketing and MBAs do not write those bugs. Engineering does.

Engineering does a pretty decent job of hiding the magnitude of the errors written by their programmers across the entire spectrum of coding projects. Errors derived from self created and inherited or purchased code. There are many posts here about how those filter up through the system corrupting everything that relies on the stack.

If you have ever sat through a scrum review of mutually exclusive internal design options, only one of which engineering will follow, it is not the MBAs who make that decision.

If Engineering guesses wrong, they can always point to the MBAs.

In the current discussions, the problem is not that engineering got it wrong, it’s that someone smarter than them figured out a way to use an unresolved condition which occurs outside the scope they had defined.

ffinley December 14, 2023 2:00 PM

Programmers have a tendency to give way too much deference to “MBAs” and other non-technical management and marketing people. When your rushed code bites you in the ass, do you think they’ll show up to take the blame or will even remember how amazingly quick you were?

If I tell them it takes a month to add a custom image-display feature, what are they gonna do about it? I’ll have my fellow programmers back me up, and I can pull out a hundred excuses if necessary. An explicit security requirement, if present, is always good for a delay (maybe I’ll show them Clive’s comments). “That’s written for Linux, and would need to be ported to run in UEFI” should work. I’ll offer to simplify or re-prioritize as a compromise.

If the programmers aren’t involved in requirement-generation, that’s a big red flag, as xkcd comic 1425 (“tasks”) succinctly shows. Get involved, or get another job. Push back against unrealistic requirements and schedules, and try to do things “right” even when management wants them quick. (Upper management, if any good, will notice that the code works well but the managers are bad at scheduling. Good programmers are harder to find than bad managers.)

Being a programmer who doesn’t care, on a team that doesn’t care, is rarely a good career move.

JonKnowsNothing December 14, 2023 4:14 PM

@ffinley, All

re: Push Back if you can

It’s one of the hallmarks of western economies that workers (including programmers) are considered identical cogs in the gears of commerce. Trying to push back is often a risky endeavor if you want to keep “that” job.

Probably every programmer has walked away from projects that were dicey, only to be replaced by another gear churning out an unreliable product, pretending they can make it reliable (see: Tesla, Space, AI).

A recent example of what happens when even high powered, high level individuals push back is the board swap: OpenAI v SAL.

There were 700 that stayed for the gravy. 2 were kicked outright. 50 others are TBD.

Engineers that want to produce something “good” are outweighed by engineers who don’t give a ….

There is always some hope of schadenfreude in the future when someone shows up with OpenAI on their resume. Of course, if I were those 700, I’d bury that info in 10,000 lines of crap code.

Angriff December 14, 2023 4:17 PM

I have been noticing how a lot of recent-years problems arrived alongside the forced switch to 64-bit instead of 32-bit. I found 32-bit systems to be more stable for me during the entire transition and afterwards.

Furthermore, even some of the social problems related to “COVID” seem to have also arrived at the same time. (really? COVID satellite system?!?!?!)

Last but not least, a lot of the Linux distros are getting out of hand with the extensively long bootup routines (page after page of “spaghetti code”). When control is given back, the bootup routine could be only about 5 lines instead of an entire screen page or more.

I’m thoroughly fed up with these problems.
It feels like scamming to me.

ffinley December 14, 2023 6:15 PM

Trying to push back is often a risky endeavor if you want to keep “that” job.

I’m well aware it’s the common view. I just don’t think it’s true. Or if it is, I’d be self-selecting myself out of companies I wouldn’t want to work for anyway, and getting severance pay in the process.

As a personal anecdote, in my very first year working, I became aware on a Tuesday or Wednesday of an interesting bug; a critical one that a customer wanted an immediate fix for. I wasn’t busy at the time, so I mentioned to my manager that I had a hypothesis and would love to test it and maybe fix the bug. “No, we’ve got it under control; don’t worry about it.” I checked the progress a day or two later and was assured all was well.

Then, late Friday afternoon: “we’re gonna need you to work the weekend”; the team had made zero progress. So I said “no”; that I’d be at home relaxing, but I’d spend the remaining hour of the day summarizing the hypothesis I’d offered to share earlier, and the possible steps to prove or disprove it.

On Monday, a bunch of people had worked all weekend with little to show. By lunch I’d proven my root-cause hypothesis correct, and by the end of the day a patched version was under code-review while being beta-tested by the quality assurance team.

Where did that leave me? A few months later, that manager was gone, and I was given a promotion and raise. (Did you think Office Space was a fully fictional movie?) Nobody who’d worked the weekend got any kind of recognition or reward, nor did anyone seem to care or remember that I’d refused. When I handed in a letter of resignation to my (new) manager later that year, I was meeting with a vice-president within an hour, who tried to convince me to stay. I no longer ask permission to do something properly, or fix important problems; I just do it. (As it turns out, that’s not “going rogue”, it’s “having initiative” or “not needing micro-management”.)

Upper management desperately want programmers to be interchangeable cogs, and it’s to management’s benefit if the programmers believe it. But I’ve interviewed candidates, and I can tell you that it’s just not true. A co-interviewer once told me they could easily step outside our office, swing a dead cat, and hit several programmers; nevertheless, we’re interviewing 5 to 10 people to hire one, and I’m told many, many more CVs are rejected before anyone technical sees them. It follows that people rarely get fired “out of the blue”; the risk is actually very low unless one’s already on a “performance improvement plan” or does something that’s obviously “just cause”.

Bob December 15, 2023 11:17 AM

@JonKnowsNothing

The MBAs are the ones deciding how devs spend their time. Accountability starts at the top.
