Spectre and Meltdown Attacks Against Microprocessors

The security of pretty much every computer on the planet has just gotten a lot worse, and the only real solution -- which of course is not a solution -- is to throw them all away and buy new ones.

On Wednesday, researchers announced a series of major security vulnerabilities in the microprocessors at the heart of the world's computers for the past 15-20 years. They've been named Spectre and Meltdown, and they have to do with manipulating different ways processors optimize performance by rearranging the order of instructions or performing different instructions in parallel. An attacker who controls one process on a system can use the vulnerabilities to steal secrets elsewhere on the computer. (The research papers are here and here.)

This means that a malicious app on your phone could steal data from your other apps. Or a malicious program on your computer -- maybe one running in a browser window from that sketchy site you're visiting, or as a result of a phishing attack -- can steal data elsewhere on your machine. Cloud services, which often share machines amongst several customers, are especially vulnerable. This affects corporate applications running on cloud infrastructure, and end-user cloud applications like Google Drive. Someone can run a process in the cloud and steal data from every other user on the same hardware.

Information about these flaws has been secretly circulating amongst the major IT companies for months as they researched the ramifications and coordinated updates. The details were supposed to be released next week, but the story broke early and everyone is scrambling. By now all the major cloud vendors have patched their systems against the vulnerabilities that can be patched against.

"Throw it away and buy a new one" is ridiculous security advice, but it's what US-CERT recommends. It is also unworkable. The problem is that there isn't anything to buy that isn't vulnerable. Pretty much every major processor made in the past 20 years is vulnerable to some flavor of these vulnerabilities. Patching against Meltdown can degrade performance by almost a third. And there's no patch for Spectre; the microprocessors have to be redesigned to prevent the attack, and that will take years. (Here's a running list of who's patched what.)

This is bad, but expect it more and more. Several trends are converging in a way that makes our current system of patching security vulnerabilities harder to implement.

The first is that these vulnerabilities affect embedded computers in consumer devices. Unlike our computers and phones, these systems are designed and produced at a lower profit margin with less engineering expertise. There aren't security teams on call to write patches, and there often aren't mechanisms to push patches onto the devices. We're already seeing this with home routers, digital video recorders, and webcams. The vulnerability that allowed them to be taken over by the Mirai botnet last August simply can't be fixed.

The second is that some of the patches require updating the computer's firmware. This is much harder to walk consumers through, and is more likely to permanently brick the device if something goes wrong. It also requires more coordination. In November, Intel released a firmware update to fix a vulnerability in its Management Engine (ME): another flaw in its microprocessors. But it couldn't get that update directly to users; it had to work with the individual hardware companies, and some of them just weren't capable of getting the update to their customers.

We're already seeing this. Some patches require users to disable the computer's password, which means organizations can't automate the patch. Some antivirus software blocks the patch, or -- worse -- crashes the computer. This results in a three-step process: patch your antivirus software, patch your operating system, and then patch the computer's firmware.

The final reason is the nature of these vulnerabilities themselves. These aren't normal software vulnerabilities, where a patch fixes the problem and everyone can move on. These vulnerabilities are in the fundamentals of how the microprocessor operates.

It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

This isn't to say you should immediately turn your computers and phones off and not use them for a few years. For the average user, this is just another attack method amongst many. All the major vendors are working on patches and workarounds for the attacks they can mitigate. All the normal security advice still applies: watch for phishing attacks, don't click on strange e-mail attachments, don't visit sketchy websites that might run malware on your browser, patch your systems regularly, and generally be careful on the Internet.

You probably won't notice that performance hit once Meltdown is patched, except maybe in backup programs and networking applications. Embedded systems that do only one task, like your programmable thermostat or the computer in your refrigerator, are unaffected. Small microprocessors that don't do all of the vulnerable fancy performance tricks are unaffected. Browsers will figure out how to mitigate this in software. Overall, the security of the average Internet-of-Things device is so bad that this attack is in the noise compared to the previously known risks.

It's a much bigger problem for cloud vendors; the performance hit will be expensive, but I expect that they'll figure out some clever way of detecting and blocking the attacks. All in all, as bad as Spectre and Meltdown are, I think we got lucky.

But more are coming, and they'll be worse. 2018 will be the year of microprocessor vulnerabilities, and it's going to be a wild ride.

Note: A shorter version of this essay previously appeared on CNN.com. My previous blog post on this topic contains additional links.

Posted on January 5, 2018 at 2:22 PM • 133 Comments


Jonathan Wilson • January 5, 2018 2:38 PM

In a world where even a supposedly safe website can be serving up nasty code thanks to a dodgy ad, staying away from all questionable code is hard. Malvertising is the #1 reason I run an ad blocker.

Jonathan • January 5, 2018 3:08 PM

As you say, Bruce: unsurprising. For years, CPU hardware problems have largely flown under the radar for most people. They are used to thinking of software as the buggy stuff that gets updated, and the hardware as the rock-solid trustworthy side. Most people are unaware, for example, that 2015-era i7 processors have roughly 50 pages of "errata" (hardware bugs).

And even a lot of "software" updates are actually CPU microcode updates, or workarounds for hardware errata (many of those 50 pages of bugs have no workarounds listed).

But today's digital chips, like CPUs, usually start out as computer code written in VHDL or Verilog. They are among the most complex and intricate devices ever created. And hardware vendors are under no less pressure to get product out the door than are software vendors. Bugs -- even catastrophic -- should be unsurprising.

Jonathan • January 5, 2018 3:12 PM

One additional comment: Google's Project Zero, which led the way in discovering these bugs, did absolutely brilliant work on them. They are wizards, and I'm glad they're on our side.

tobi • January 5, 2018 3:21 PM

Maybe it's in the noise for automated attacks but is it maybe more relevant for targeted attacks against individuals or corporations?

Also it seems to me that cloud security is restored by patching against Meltdown. Spectre should not apply to cloud because tenants do not share memory pages, right? And the hypervisor does not share pages with the tenant.

Grauhut • January 5, 2018 3:37 PM

@Bruce: We shouldn't believe in luck!

Reducing the quality of timers for instance is a crude hack job and not a solution... :)

"As of Firefox 57, Mozilla is reducing the resolution of JavaScript timers to help prevent timing attacks, with additional work on time-fuzzing techniques planned for future releases."

AlexT • January 5, 2018 4:11 PM

However unlikely, is there any certitude that this was never exploited in the past, or, even worse, was deliberate to some extent?

I for one don't think it is the case, but this is typically something very hard to log/record. And if there is pretty much no doubt that the US agencies are collaborating with (or at the very least infiltrating) major software vendors, why not on the hardware side?!

Brooks • January 5, 2018 4:52 PM

Tobi: Spectre doesn't require that the attacker and attackee share memory pages. The only memory requirement is that they share a processor cache, and that they can have cache collisions (which, with current technology, is inherent in "they share a processor cache"). Thus, it absolutely can affect cloud security.

In practice, https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html states that Project Zero has a "PoC for variant 2 [that is, Spectre] that, when running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific (now outdated) version of Debian's distro kernel [5] running on the host, can read host kernel memory at a rate of around 1500 bytes/second, with room for optimization."
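For readers unfamiliar with the mechanism Brooks describes, the heart of Spectre variant 1 is a "bounds check bypass" gadget. Here is a minimal sketch in C of its shape -- the array names and constants are invented for illustration, and it shows only the architectural (in-bounds) behaviour, not a working exploit:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative shape of a Spectre variant-1 "bounds check bypass" gadget
 * (names invented for this sketch). If the branch predictor has been
 * trained to expect x < array1_size, the CPU may speculatively execute
 * the load below even for an out-of-bounds x, pulling
 * probe[array1[x] * STRIDE] into the cache before the misprediction is
 * rolled back. The cache state is NOT rolled back, so the secret byte
 * array1[x] can later be recovered by timing accesses to probe[]. */

#define STRIDE 4096  /* one cache line/page per possible byte value */

static uint8_t array1[16];
static size_t array1_size = 16;
static uint8_t probe[256 * STRIDE];

uint8_t victim_function(size_t x)
{
    if (x < array1_size)                  /* the bypassed bounds check */
        return probe[array1[x] * STRIDE]; /* secret-dependent load     */
    return 0;
}
```

During misprediction the secret-dependent load may execute even when `x` is out of bounds; the attacker and victim need only share a cache for the leaked line to be observable, which is why shared memory pages are not required.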

Steve • January 5, 2018 5:05 PM

@Grauhut, regarding the resolution of timers: reading the Meltdown paper, this is just one of a number of vulnerabilities whose exploitation depends on the ability of user-written code to measure the behaviour of the CPU cache. As Bruce says, there will be more. So to me it makes sense to block the exploitation of a whole class of these vulnerabilities by taking away the tools to analyse the behaviour of the cache, rather than wait for them to be discovered one by one and attempt to patch each in a different way. The availability of a high resolution timer is one such tool that needs to be withdrawn.

Anon • January 5, 2018 5:17 PM

> In a world where even a supposedly safe website can be serving up nasty code thanks to a dodgy ad, staying away from all questionable code is hard.

Run NoScript - set it to default deny all javascript.

Then, only whitelist those sites you trust to not try to exploit you and that need javascript to work (while complaining to them to drop the javascript and work without it available).

Not perfect, but *way* better than letting every website have access to a high power VM in your browser.

Grauhut • January 5, 2018 5:41 PM

@Steve: "The availability of a high resolution timer is one such tool that needs to be withdrawn"

You are so right, nobody needs 44.1KHz audio generators, for instance. :)

Czerno • January 5, 2018 6:04 PM

*** A happy new year to Bruce and all his followers ! ***

My current workhorse, I mean, CPU, is an AMD Athlon XP (aka K7) that I was thinking of replacing, and thank God I haven't !

This thing has served me well for almost 15 years, and hopefully shall go on for a while, till the dust settles!

Since the CPU has neither AMD64, nor hardware-aided virtualisation, nor SSE2, nor any of those wonderful gadgets that Intel and competitors have stuffed into their newer CPU offerings without much thinking, I came to the conclusion I'm much better off NOT applying the "fixes" (neither the Microsoft security-only update for Windows 7, nor any update of the Linux kernels) on this machine. What do you advise here?

My reasoning is along the lines of :

1- Meltdown : not applicable to AMD microarchitecture, reportedly.

2- Spectre: Reportedly, AMD is only marginally affected by one variant. HOWEVER:
2a- published analyses only seem to have considered recent, 64-bit-capable CPUs -- including, of course, the statement by AMD.

As far as these older 32-bit-only CPUs are concerned: while, I guess, data /leakage/ by Spectre-type attacks is possible in theory on these older processors too, I guess that data /collection/ won't be feasible, at least not through the mechanisms the researchers discovered and published.

For instance, this CPU has no SSE2, hence no "Clflush timing" avenue for exploitation...

2b- As, anyway, /even/ for 64-bit CPUs there are no /real/ mitigations for Spectre in the vendors' "patches" and maybe no real mitigation possible at all,

2c- ... also, as an effective fix is said to additionally require /microcode updates/ -- which will NOT work, microcode updating being famously broken and hence disabled in the AMD K7 series...

And so, there seems to be zero reason to apply any "fix" by MS (or by Linus and coworkers) that will probably NOT fix anything for my CPU but might well introduce other weaknesses into the kernels and/or slow their operation overall unnecessarily.

But then whaddoIknow : experts advice, here, please?

moo • January 5, 2018 7:42 PM

You can't take out all the sources of high-precision timing measurements. Besides, we need a high-precision timer for profiling software during development.

JonKnowsNothing • January 5, 2018 7:57 PM

I haven't read anything yet but ...

  • 10 years + of "exploitable" access on nearly every major system, PC, server, etc. etc. etc.
  • 10 years + of non-fixable "exploitable" access on nearly every major system PC, server, etc. etc. etc.
  • All those NSA implants, all those NSA exploits that have never been made public... until ?now?
  • What about all the implants and exploits by those "foreign agencies"?

And better yet: it's a forever vulnerability.

Even IF, you could trash every single system on the planet, what ever would FB, Google, et al DO for revenue with no one to track?

All the governments of the world, depending on the vast crunching of data, are vulnerable.

Silver lining? Bluffdale will be an open access library soon. Order your ticket to ride.

The Utah Data Center, also known as the Intelligence Community Comprehensive National Cybersecurity Initiative Data Center, is a data storage facility for the United States Intelligence Community that is designed to store data estimated to be on the order of exabytes or larger.

ht tps://en.wikipedia.org/wiki/Utah_Data_Center
(url fractured to prevent autorun)

Clive Robinson • January 5, 2018 8:47 PM

@ Jonathan Wilson,

In a world where even a supposedly safe website can be serving up nasty code thanks to a dodgy ad, staying away from all questionable code is hard.

In a world where the SigInt entities can get a reply onto your wire faster than the server you sent the request to can, no site is safe, period.

@ AlexT,

However unlikely, is there any certitude that this was never exploited in the past, or, even worse, was deliberate to some extent?

There is no certainty it has not been exploited; in fact the opposite is more likely, because the underlying errors were deliberate choices by the affected processor designers.

Look at it this way: if you know you have a certain design choice in your CPU, might you not look to see if other designs had made the same choice? I know I would, because it would be a sign of potential "Industrial Espionage", which they are all up to, to a certain extent, just like car makers.

Speaking of car makers, do I need to mention the little software fixes in diesel engine management systems that detected when a car was being "tested" rather than "driven"?

It's very clear that other car manufacturers knew but said nothing publicly.

Thus I would put money on the vulnerability being known a decade ago, and I would think it reasonably certain that a couple of the SigInt agencies would be aware of it.

The only real question is "Could they have turned the vulnerability into an attack vector?", to which I would say a definite "Yes". Which then raises the question "Would they use it?", to which the answer is almost definitely "Yes". Thus the question becomes "Who rates such an attack level currently?"

Which is where it gets interesting. This attack appears to be of high impact. It's safe to say that some of the SigInt agencies have "Dead Hand" or "Doomsday" rated "Cyber-weapons of last resort", which would be one step up from the "Internet Kill Switch". It would not be too hard to design "A CPU Killer Malware". We have seen IoT devices get turned into mass DDoS networks quickly and effectively. These vulnerabilities give the indications of being the equivalent of a "Hard Coded Root Password", not just on one OS but on all OSs that use the vulnerable CPUs.

Thus my guess is if the SigInt agencies know about it, they have fully weaponised it but not used it so they have it in hand as a last resort...

Joseph • January 5, 2018 9:00 PM

@Clive Robinson wrote, "Thus my guess is if the SigInt agencies know about it, they have fully weaponised it but not used it so they have it in hand as a last resort..."

Makes you wonder what other "design choices" are baked into other common computer & networking products....

Clive Robinson • January 5, 2018 9:22 PM

@ Bruce,

The first is that these vulnerabilities affect embedded computers in consumer devices.

Yes and no...

Some System on a Chip (SoC) microcontrollers are based around ARM cores that contain these or similar vulnerabilities, but importantly not all are. Also, unlike the PC/laptop market, there is "Hybrid Vigor" in that there is a lot more than "Just the One" CPU family.

There are MIPS-based SoCs around that can run early versions of BSD Unix and are comparable in performance to a MicroVAX, for between 1 and 2 USD.

I'm hoping that this debacle will cause a bit of an industry shakeup and that more low-cost SoC microcontrollers, including Sun SPARC cores, will appear in short order.

After all, hopefully we have the brains not to want to cause the equivalent of the "Irish Potato Famine" in the ICT sector; there is, after all, nowhere else to go these days...

Clive Robinson • January 5, 2018 9:36 PM

@ Moo,

You can't take out all the sources of high-precision timing measurements. Besides, we need a high-precision timer for profiling software during development.

I agree.

Like most technology, those counters are agnostic to their use, so they work equally for good or bad.

Such timers can be used not just for "profiling software during development", they can also profile software continuously in a way where the "signature" produced could very rapidly show the presence of Malware (as I've indicated long ago).

As the old saws have it, "You don't throw the baby out with the bath water"... nor "Cut off your nose to spite your face".

Reducing or removing the timers would be like the old joke of "I'd give my right hand to be ambidextrous"... which is why it's not a viable solution when viewed from a pace or two back from the problem.

Jim • January 5, 2018 10:02 PM

@JonKnowsNothing, my thinking was along the same lines. A vulnerability in the CPU kernel that has been present and undetected for 20 years? Sounds to me like an NSA feature and Intel was prevented from fixing it.

four72 • January 5, 2018 11:59 PM

Found this gem on the LKML:

"Is Intel basically saying 'we are committed to selling you shit forever and ever, and never fixing anything'?"
- Linus Torvalds

Greg Conen • January 6, 2018 12:11 AM

We shouldn't take out all sources of high precision measurement from all computing, obviously. But we can and should take them out of JavaScript (and similar technologies like WASM). That was true before these vulnerabilities were discovered, and would be true even if these specific CPU bugs did not exist.

Indeed, that's been the position of all the major browser vendors for several years now (though they haven't exactly been diligent about actually doing it until now). The High Resolution Time API specification includes a minimum suggested resolution. Mozilla and Microsoft are now going with 20-microsecond resolution, instead of 5 as recommended (but not required; 20-microsecond resolution still complies with the spec), but that difference is relatively unimportant.

The important part is fixing the ability to create timers with more precision than the official High Resolution Time API. The Spectre exploit used a timer identified in previous work (https://gruss.cc/files/fantastictimers.pdf) as having a resolution of between 2 and 15 nanoseconds.
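The coarsening Greg describes amounts to quantizing timestamps before handing them to script. A sketch of the idea (hypothetical helper name; real browsers also add jitter, which matters because simple rounding alone can be partially undone by averaging many samples):

```c
#include <stdint.h>

/* Quantize a nanosecond timestamp down to a 20-microsecond grid, the
 * resolution Mozilla and Microsoft moved to. Cache hits and misses
 * differ by roughly 100 ns, far below this grid, so a script can no
 * longer tell them apart from a single measurement. */

#define GRID_NS (20u * 1000u)  /* 20 us expressed in nanoseconds */

uint64_t coarsen_ns(uint64_t t_ns)
{
    return t_ns - (t_ns % GRID_NS);  /* round down to the grid */
}
```

Any timer a page can construct with finer effective resolution than this grid (the "fantastic timers" paper's 2-15 ns constructions, for instance) bypasses the mitigation entirely, which is why fixing those matters more than the exact grid size.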

Alyer Babtu • January 6, 2018 12:24 AM

Well, there is already a "Spectre" song, by Alan Walker (e.g., https://www.youtube.com/watch?v=nKrkZETKYIg), and perhaps the Surfaris' "Wipeout" could be repurposed for "Meltdown"?

Clive Robinson • January 6, 2018 1:49 AM

@ Jim, ALL,

A vulnerability in the CPU kernel that has been present and undetected for 20 years?

There are some that have been present way longer than that; some even became features [1].

It's just that, as far as we know, nobody's yet found a way to exploit them...

[1] The oldest that most will remember was the 8086 "wrap around" issue. Using segment registers and offsets, it was easy to create a "physical address" above 1M. But due to the external bus, it actually "wrapped around" to the bottom of memory, which got used by programmers, including those at MS who were writing MS-DOS. All should have known better, especially Intel with its plans for the 286 and later processors. So when the 286 came along, an external hardware fix (the A20 line gate) got added, but this in turn went wrong with the 386; then internal cache popped up in the 486... It took quite a while (Haswell, IIRC) to sort things out. And this was an oh-so-simple, well-recognised and entirely predictable problem...
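The wrap-around Clive describes is easy to reproduce arithmetically: in real mode a physical address is segment*16 + offset, which can reach 0x10FFEF -- just past the 8086's 20-bit, 1 MB address space -- and the lost top bit silently wrapped the address to low memory. A sketch (function name is mine; the A20 gate later made the truncation switchable):

```c
#include <stdint.h>

/* The 8086 "wrap around": a real-mode address is segment*16 + offset,
 * which can exceed 1 MB (e.g. FFFF:0010 = 0x100000), but the 8086's
 * 20-bit address bus silently truncated it back to address 0. */

uint32_t real_mode_addr_8086(uint16_t seg, uint16_t off)
{
    uint32_t linear = (uint32_t)seg * 16u + off; /* up to 0x10FFEF */
    return linear & 0xFFFFFu;                    /* 20-bit bus wraps at 1 MB */
}
```

Programs (MS-DOS among them) came to rely on FFFF:0010 aliasing address 0, which is exactly why the later A20 gate compatibility hack, and its long tail of bugs, existed at all.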

Clive Robinson • January 6, 2018 2:08 AM

@ Greg Conen,

We shouldn't take out all sources of high precision measurement from all computing, obviously. But we can and should take them out of JavaScript (and similar technologies like WASM).

I would be surprised if that ever happens; the likes of the W3C appear totally committed to adding APIs etc. that are tailor-made to make you a "Data Subject" of any web site, to be stripped of privacy and sold to whoever makes the price, over and over...

Believe not the words you hear from Tim Berners-Lee about needing a rethink...

herman • January 6, 2018 3:10 AM

I can see that the compiler, OS and browser patches will help against script kiddies, but I'm not convinced that it will help against determined attackers.

A determined attacker can put some inline assembler into a C function, or use an older compiler/OS/browser to avoid the patches.

Furthermore, when you rent a virtual server, you have no idea what the VM is running on and you cannot tell whether or not you are being spied upon through these one-way mirror bugs.

Alejandro • January 6, 2018 4:11 AM

I wonder to what extent the new exploits and hardware weaknesses are related to the widely despised Intel ME (Management Engine) processor and the related HAP (High Assurance Platform), supposedly required by the NSA?

Here is one short summary of those: https://securityintelligence.com/news/ibm-security-advisory-on-intel-management-engine-vulnerability/

My readings indicate AMD has a similar remote management system dating back to 2006.

Do any real life sysadmins know anything about it, or how to work it?

Once again, this is an issue that most users don't know or care about.

Tom • January 6, 2018 6:33 AM

Clive warned of such things months ago, as I was expressing some hope for the security of *nix systems. His take, now demonstrated: not really.

I will heed his warning about the tendencies of our technology providers "to make you a "Data Subject" of any web site, to be stripped of privacy and sold to whoever makes the price, over and over..." Such now takes on new granularity and depth.

To rephrase Lawrence Ferlinghetti, I am an open book to the websites I visit...

Keeping in mind that that might well be the rosy scenario.

Now I'm going over for a bowl of soup at Mike's Place.

Rachel • January 6, 2018 7:01 AM

Clive Robinson

thanks for commentary, Captain.
your solution for mere mortals being to use a live/VM for web browsing.
(Figureitout said to remove HDD too)

What do you think about OpenBSD in lieu of the live OS for this scenario?
Perhaps run from usb if it can handle one

echo • January 6, 2018 7:34 AM


I investigated using a Raspberry Pi as a cheap media centre/backup desktop, housed in a box with an SSD. (Western Digital provide a solution, but there are alternatives.) While memory-limited according to things I've read and YouTube demos, it works well as a Linux machine with the lightweight XFCE desktop.

After using a Linux boot stick for a while when my desktop was broken, I recommend running a disk check on the USB stick after updating or heavy use, as the virtual filesystem used for persistence can be flaky after a lot of writes. A RAM disk for the browser cache will also reduce the number of USB stick writes and can optionally be thrown away when powering down.

I agree removing other media is a good idea to enhance safety.

Payoff: China Russia Don't Trust American Technology • January 6, 2018 7:43 AM

Clive Robinson wrote:
'Thus my guess is if the SigInt agencies know about it, they have fully weaponised it but not used it so they have it in hand as a last resort...'

Perfectly stated Clive. For years Russia and China have been developing their own CPUs and Operating Systems. Are they less vulnerable? (Only the NSA knows...)

It should be apparent just how North Korea would counterattack the USA in the event of war. Less need for an EMP? Or do both?
And then China might very well step in to aid their ally.

On the bright side canceling the USA debt could easily fund new hardware.

Every person, company and government should have an off-line backup plan to revert to.
I personally make monthly boot disk image backups, which can be restored in 10 minutes.

Grauhut • January 6, 2018 9:12 AM

@Clive: "Thus my guess is if the SigInt agencies know about it, they have fully weaponised it but not used it so they have it in hand as a last resort..."

I am not so sure about "not used it". One could already use weapons based on these weaknesses in the form of a "drive-by in-memory door unlocker" in up-to-date malware frameworks like Duqu >= v2.

- Load it in-memory
- Grab these credentials
- Delete it and use the credentials instead


Would be hard to find.

Rachel • January 6, 2018 9:33 AM


that is interesting, thanks for that. You may be interested in Mr Schneier's consecutive posts on securing the Raspberry Pi from September.
good feedback about USB; agreed it's flawed for a number of reasons and not comparable to a CD, but a drive is not always practical or available

echo • January 6, 2018 10:51 AM

Thanks, Rachel. I will look this up.

Some tips:

* I used multiple USB sticks. One can boot and be used to filesystem-check the other's persistent file, and vice versa.

* The persistent file can be archived. When a USB stick becomes flaky, this copy can be copied back easily, saving hassle. This technique is also helpful when creating multiple USB boot sticks which need to be identical.

* Data and user profiles and caches can be stored on an additional USB stick (or hard disc if available) using a normal filesystem.

I wondered if it is possible to run Linux from a USB adapter containing an SD/MicroSD card. They may be a lot smaller and more easily concealable, plus have much faster read/write speeds. I have searched and discovered this has been done, and there are a couple of YouTube videos demonstrating it.

P • January 6, 2018 10:59 AM

I agree with what Jonathan said way up top.

I disagree with Bruce that we will find "vulnerabilities that will allow attackers to manipulate or delete data across processes" in the future. While it may be possible that there is a bug in a single chip's design, there won't be such a bug industry-wide.

I work in the microprocessor industry and I have been disheartened over the years by inattention to what are perceived as minor security problems where a small amount of information may be leaked. However, there is always a high level of attention to security problems where information may be modified. And microprocessors are very thoroughly verified pre-silicon and validated post-silicon to eliminate bugs.

It takes many millions of dollars and months in the fab/factory to manufacture the first chip so huge resources are poured into pre-silicon verification to make sure that chip will be of high quality. If there are any bugs that prevent samples from being shipped, it requires more capital outlay and more schedule delay to wait for new chips to flow through the fab.

Once chips are in production, there's no turning back except for microcode updates (if your chip has microcode). The cost of a major bug at this point may be catastrophic since a recall would be necessary. Look at Intel's Pentium FDIV bug back in 1994.

For this reason, microprocessors are tested much more thoroughly than modern software, where vendors can just have users download patches. Does this mean that hardware is bug-free? Of course not. But it does mean that a lot of thought goes into making sure it works as intended. The problem with these vulnerabilities is that the chip did work as intended. However, no chip would be working as intended if attackers could actually modify data.

(Here I'm just considering attacks on the logic design, not things like Rowhammer that are circuits and not logic.)

Clive Robinson • January 6, 2018 11:51 AM

@ Rachel,

What do you think about OpenBSD in lieu of the live OS for this scenario?

The first thing to note is I do not rate commodity OSs very highly. But that said, OpenBSD appear to take security seriously.

But you need to read what Theo said about errata some time ago; he knows that at some point they are going to get bitten by a silicon bug. It's a case of "Not if, but when", which shows a degree of rationality that others lack.

If you were to ask Theo today, he would almost certainly say that OpenBSD got away with it this time more by luck than judgment. In essence, it was a different viewpoint on how to do certain things that got OpenBSD through, but the choice was made for entirely different reasons.

If you do decide to "road trip" OpenBSD, remember its different ethos comes through in many places, so you will have to push up a few slopes.

With regards,

Perhaps run from usb if it can handle one

It certainly used to be able to run from CD/DVD a while ago; one of the people to ask is @Dirk Praet, but I'm guessing that as post-Xmas is a busy time for people booking summer holidays, he might have his hands full currently.

However @echo's point about having a RAM drive is important performance wise.

Several versions of Knoppix ago, you could actually get the entire opened CD image into RAM as well as having a RAM disk, which kind of made things fly. I expect you can do similar with OpenBSD. I certainly found back in the day that OpenBSD was a better OS when it came to embedding it than any of the Linux distros.

@ ALL,

Does anyone know what the state of play with MINIX is?

Clive Robinson • January 6, 2018 12:33 PM

@ Grauhut,

I am not so sure about "not used it". One could already use weapons based on these weaknesses in the form of a "drive-by in-memory door unlocker" in up-to-date malware frameworks like Duqu >= v2.

IF it was high-end malware I might agree with you, but this has "Dead Hand" response writ large on it. Which makes it "Top Tier Supreme Commander authorisation only", much like the use of the nuclear football.

That is, you only want to use it once, so it cannot be stopped, per the "no prewarnings" mantra.

IF they have used it, it would suggest they have other, worse things in reserve; thus this would in effect have been downgraded from "Dead Hand" to "High level".

Oh, the reason I suspect the US military has a "Supreme Commander Authorisation Only" cyber-system is twofold. Firstly, Obama's keen desire to have an "Internet Kill Switch" under his thumb that is the equivalent of a "Dead Hand" system. And secondly, the way the US treats WMD and similar "Dead Hand" or "Mutually Assured Destruction" technology with "Permissive Action Links", and pretty much has done since "Russia got the bomb"[1]. Whilst the US may have lots of systems to go, they did not, nor do they have, anything that can stop inbounds with any reliability...

[1] There is a story that leaked out of RAND about the why of "Permissive Action Links" (PALs), and why something that made the US nuke response system very much less reliable was even considered, let alone implemented. Officially it was the equivalent of "falling into terrorist hands", but back then they knew that Russia had no PALs, plus delegated authorisation. The US thus knew then, as they still know now, that they have little or no chance of stopping even a fraction of the nukes of a wave of multi-warhead ICBMs. Thus they had "MAD fever": if just one US nuke got launched, even accidentally, then there would be an automatic full-scale response. Thus all the attempts to stop any kind of accidental launch...

hmm • January 6, 2018 1:31 PM

Class actions, yep.


@ P

I don't know why you'd think line-item testing can/will reveal all possible bugs ahead of time, on a market-driven budget. That seems unreasonably rose-tinted to me.

Sure they spend time and effort before tape-out - but they sure don't spend 5-10 years testing CPUs for random blackhats' new vuln conceptualizations of how they address RAM or branch prediction. They spend MAYBE 6-12 months TOPS testing for things they've seen before, plus probably some random machine fuzzing on top - and they call it good enough. They trust their model on the basis that it exists and nobody has publicly put big gaping unpatchable holes in it. If that's true, it's good enough for consumer grade.

In their calculus, selling #'s > public PR > liability in error > actual security.

The reason they test is not because they care whether Tom, Dick or Harry can maintain a secure COTS desktop to install Windows 10 spyware binaries on; the MAIN reason is so they can say "we tested for all known things so you have no basis to sue over negligence" when something they don't test for becomes a problem.

Their impetus is market-driven to make chips faster/cheaper/lighter, not more secure.
Security only becomes a selling point by caveat when there's a massive unfixable vuln.

echo • January 6, 2018 1:35 PM

@Clive @Rachel

Because my Linux USB sticks are essentially backup desktops I have investigated an encrypted persistent file and/or an encrypted filesystem. (This is to protect against casual theft or loss.) There are configuration and usage niggles but this seems possible.


Another issue with USB sticks may be setting Linux swap to zero. This may cause a crash, so turning down 'swappiness' from the default 60 to, say, 10 may be better. Turning off file-access-time updating and journaling are other options. This guide for reducing SSD wear and tear may be useful for optimising USB boot sticks.


Dan H • January 6, 2018 1:55 PM

SPARC is not affected.
POWER 7/8/9 are affected but for Linux only. I don’t believe AIX is affected.
zSeries is affected but for Linux only. I don’t believe z/OS is affected.

Garbage architectures and garbage operating systems are affected. So much for open source and businesses thinking they’re saving money. You get what you pay for.

eCurmudgeon • January 6, 2018 3:13 PM

Might this present an opportunity for newer processor architectures, such as RISC-V?

Magnus Danielson • January 6, 2018 3:35 PM

There is a meta-problem that you touch on here: the patching.

We have seen an accelerating need to patch machines, and it is not only servers but also user devices. For some, security patches have been included in releases. The rate of releases, and even the deployment rate, is limited. For instance, for Android, the release process goes through Google, the vendor and then the operator. This pipeline is deep in time, and does not adapt well to security patches. It comes from the view of a monolithic update philosophy, and many of the tests and adaptations have to do with the release. For home routers etc. it can be quicker, if any action is decided on at all, which is rare to non-existent. Also, for many devices the upgrades cease after 18-24 months even if the effective lifetime is much longer.

The current situation exists because there are no market requirements for devices to have security patches available within a certain time, or for how long they need to be supported.
Were such requirements set within the EC and USA, it would drastically change the scene. A fast path for patches would be needed, so the form in which patches are released would have to be altered to what is used in packaged Linux distros. With that, automatic updates, which typically only involve restarting services, could reduce the time to patching on many devices. The lifespan of patch availability needs to be long enough for unpatched devices to have reasonably low impact.

So, I think for improved security, systematic upgrade of systems in a continuous-deployment style of thinking needs to be applied to more and more devices. Naturally, there are security concerns in the upgrade path, as it would be a way to inject bad code, but that has already been looked at and the techniques deployed.

hmm • January 6, 2018 4:39 PM


"Another issue with USB sticks may be setting Linux swap to zero?"

Why not use a ramdisk? Linux always has spare RAM; you don't need a huge swap.

Dan H • January 6, 2018 4:46 PM

I stand corrected about AIX on POWER. From IBM:

Firmware patches for POWER7+, POWER8 and POWER9 platforms will be available on January 9. We will provide further communication on supported generations prior to POWER7+, including firmware patches and availability.
Linux operating systems patches will start to become available on January 9. AIX and i operating system patches will start to become available February 12. Information will be available via PSIRT.

I still haven't seen anything for z/OS. But Solaris/SPARC are not affected.

Clipper • January 6, 2018 4:51 PM

Actually you don't need a swap partition with linux. If you have enough RAM, you are perfectly fine without one, unless you run into very specific scenarios like lots of VMs.

Ad-Blockers Essential More than Ever • January 6, 2018 8:42 PM

The security industry is dancing madly on the lip of a volcano.

Spectre-type attacks are particularly well-suited for distribution through malvertising, since there is no current robust defense widely deployed against them.
Malvertising has been a problem for a decade, and the web advertisement ecology has completely failed to make headway. Thus, I have long recommended ad-blocking not to remove annoyances but simply to eliminate a huge security problem. The rise of Spectre-class attacks in JavaScript is merely one more reason to eliminate advertisements from the browser, both on an individual and a network-wide level.

Note: I keep JavaScript routinely disabled and only enable it when I really need to read an article. Then uBlock Origin and uMatrix go to work blocking hundreds of scripts.
Next, the multi-layer defense feeds disinformation by spoofing nosy user-agent queries, so malware attacks the wrong system.
Lastly, I would never use a mainstream browser like Firefox (which allows Google to eavesdrop). It has to be a gutted, privacy-focused open-source version with excellent privacy add-ons.

That said, to make my bank feel comfortable, I use an old dedicated Windows PC with Firefox 52 while only hiding behind a VPN and Ad-blocker. They require a consistent user-agent.

@Linux Swappiness
I set vm.swappiness=0
Put system files into RAM
Check the drive every boot: tune2fs -c 1

imisswalter34 • January 6, 2018 9:01 PM

Great post by Eben Upton on the Raspberry Pi blog explaining why Pis are not vulnerable.

Clive Robinson • January 6, 2018 10:12 PM

@ P,

While it may be possible that there is a bug in a single chip's design, there won't be such a bug industry-wide.

Not true, if you had said "random implementation bug" I would agree with you.

You need to consider,

1, Random bugs.
2, Implementation bugs.
3, Protocol bugs.
4, Standards bugs.
5, Algorithm bugs.

You should also understand that "functional tests" are not "Security tests" unless deliberately included. Oh and even "random input tests" or "fuzzing / Monte Carlo / probability tests" catch only a percentage of errors.

Random bugs in designs appear for various reasons and may not always be caught. Back in the mid 1990s I found what appeared to be a randomly appearing bug in a Motorola microcontroller (uCon), which we might now call a SoC. They had put a UART on the chip and it appeared to work. But occasionally a glitch happened, and it was put down to a dry joint or some such on the prototype or ICE interface.

However, when we switched over to the prototypes it got a lot worse. I wrote a couple of test programs. The first was a simple loopback test of TX'd data compared to RX data sent back via a crossed-over tx/rx. The software had two status pin outputs that were pulsed: the first on sending a byte, to be used as a trigger for test equipment; the second as an error detection indicator. With it I worked my way down the communications train/chain from the uCon pins through the analogue circuitry and a hand-built "RF Channel Simulator". No problems were found except where you would expect, "down in the grass", and they followed the usual expectation rules. I then wrote a simple reflector/loopback program so the RX data was copied to the TX data output. So data from one unit would get RXed by the other unit and TXed back on the RX of the original unit, to do a full end-to-end test. This showed problems immediately, even with the ICEs used in the prototype units. Whilst it looked random, it also looked sort of bursty as well, which tickled my hinky feelings.

Anyway, when switching over to just the prototypes and looking at the error signal on a storage scope, it became clear that the error signal had a distinctly sinusoidal pattern to it at low audio frequencies. Two of the other hardware design engineers took over the testing at this point. One could not find a cause in exercising the DC and audio circuitry. The other, Steve, found that the error frequency correlated very strongly with the difference frequency between the ceramic resonators used for the uCon clock inputs. Further testing showed that the error was one way; that is, if the TX unit resonator was low compared to the RX unit it saw no error, the other way around it did, and the greater the frequency difference the worse the problem. Motorola HK, where the uCon had been designed, were now given the specific test code details. They found the bug in the UART design overnight and had a fix through, and parts, within 48 hours.

The point being that the difference between being told there was a "rare random" effect and having a way to clearly exercise the error localised the possibilities, and it got traced back to a "standard library part" that was supposedly "good", thus affecting not a single chip but a family of chips, as it was a "2, Implementation bug". Thus there is also the "parts library" issue to consider with the likes of ARM cores on SoC chips from a multitude of otherwise independent chip suppliers.
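The loopback methodology described above can be sketched in a few lines. This is a hedged illustration, not the original test code: the function names are invented, and `channel()` stands in for the real TX-pin-to-RX-pin path (simulated here as a perfect wire so the harness itself can be verified end to end before pointing it at hardware).

```c
#include <stdint.h>

/* Stand-in for the physical path: TX pin -> analogue chain -> RX pin.
 * Simulated here as a perfect wire; on the real rig this would be
 * actual UART writes and reads. */
uint8_t channel(uint8_t tx_byte) {
    return tx_byte;
}

/* Send a known pattern through the channel and count mismatches -
 * the software equivalent of pulsing the error-indicator pin. */
int loopback_errors(int n_bytes) {
    int errors = 0;
    for (int i = 0; i < n_bytes; i++) {
        uint8_t tx = (uint8_t)(i & 0xFF);
        if (channel(tx) != tx)
            errors++;   /* real rig: pulse the error status pin here */
    }
    return errors;
}
```

Against real hardware, a bursty, clock-drift-correlated error count from exactly this kind of loop is what pointed the engineers at the resonator difference frequency.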

When you step up one to "3, Protocol bugs", these start affecting all chip manufacturers who hardwire the protocol, which does get seen from time to time. But notch it up to "4, Standards bugs" and it affects higher-level interoperability issues, and again hurts those who hardwire it in I/O, or, if luckier, in microcode that is sometimes easier to change.

But this particular problem is multipart... The algorithm being used is one that is industry-wide in usage, BUT not sufficiently constrained (though it appears the IEEE had tried). Thus the lack of constraint allowed things to be done "out of order". Which allows the algorithm to be "implementation tuned" by the designers for some "efficiency" requirement like "throughput" or "low latency" etc. From the algorithm perspective, and thus the testing procedure for it, it will check out OK and pass without issue, "doing what it says on the tin". However, it is only when you actively ADD security concerns regarding a known side channel to the testing that any flags come up...

So yup, this affects not just different chip manufacturers but also different CPU architectures as well.

It's why I partially lay the blame at the US IC SigInt agencies. The NSA well knows about side channel attacks, as they go back to at least WWI if not earlier. But as with the AES competition, they chose to keep them secret along with a lot of other EmSec / TEMPEST knowledge, because it supposedly gives them an "Offensive Advantage"[1]. Well, that advantage counts for very little if you do not apply it defensively to the protection of your own critical systems.

And this was, and still is, a major issue. Whilst there was little problem when the NSA designed and had manufactured equipment to its own exacting standards (as do the UK / Commonwealth with BID kit), it all went pear-shaped when various congress critters pushed for the use of COTS equipment as concurrently used in the non Mil/Dip signals world. COTS kit is not in any way describable as "secure"; if you have ever seen the inside of TEMPEST kit you will know why.

To get COTS kit cost savings but also improved security would mean losing the NOBUS Offensive Advantage (which is actually neither NOBUS nor advantageous any more). Attempts were made in the mid to late 1990s with HP systems, --as a similar idea to that of the NSA-Crypto AG special relationship deal[2],-- under the disguise of "lab grade" computers that could be relatively easily upgraded further to TEMPEST specs. But it failed, as the kit had mechanical reliability issues that meant it was not field-usable in most land combat situations, and the NOBUS advantage was significantly degraded.

The big problem is that unlike the situation of the 1980s-1990s, when computer communications were not mainstream or a significant economic issue, they now very much are. The US IC, especially the NSA, and now US LE led by the FBI/DoJ, still push the quaint and very much outdated idea of NOBUS for "Offensive Advantage"... whilst ignoring the fact they are selling the US economy down the river, as the US is perhaps now the most ICT-vulnerable nation in the world by quite a way[6].

Thus the US IC/LE entities' NOBUS Offensive Advantage is actually setting the US up for a Cyber-9/11, which will have significantly more far-reaching detrimental effects on the US, its citizens and its economy than 9/11 has so far...

[1] If you look in published information from the 1980s, Peter Wright's "Spycatcher" book tells you about using an "audio side channel" against the Egyptian Embassy "crypto cell". They had made the mistake of having an external telephone line installed in the cell, so the "hook switch" jumping technique that MI5 had developed could hear the electromechanical cipher equipment [2] in use, making key breaking relatively trivial.

[2] The cipher equipment came from Crypto AG in Zug, Switzerland, which we now know had a "special relationship" via its owner Boris Hagelin with the NSA's William F. Friedman; see,


And that from at least the 1950s onwards Crypto AG had "product tiers" that were sold to different countries, some strong, some weak[3], as "recommended" by the Five Eyes through the NSA, and this went on until long after Crypto AG had moved over to fully electronic systems. That "special relationship" is probably why Crypto AG was the only private crypto equipment supplier allowed to remain in business...

[3] We also know that the US IC, prior to the NSA, had a policy of using crypto kit with both strong and weak keys for hand/mechanical tactical systems. As the keys US forces used were in effect centrally issued, they used only the strong keys[4].

What is also known is that William F. Friedman was not alone in distrusting the British for various reasons and the favour was returned. The result was the "next generation" of electromechanical cipher machines got stymied and mired in politics.

Whilst there had to be a degree of sharing between allies, both tried to keep their special knowledge secret from each other for various reasons. The problem was that there needed to be high-level, high-security crypto for both nations to be allies, not just politically but in shared military operations. Whilst the US Sigaba had been partially shared, the British had long before worked out that a great deal of extra security was obtained not just by the irregular stepping of the crypto wheels, but also by making them step independently both forwards and backwards (it's kind of obvious if you think about it for a few minutes). Due to the costs involved, the British wanted to use US-manufactured parts in their own equipment, and thus saw the "joint machine" as an opportunity to get parts for their own high-level machine with the "secret sauce" of irregular bidirectional wheel stepping. At a later point in time the US had likewise realised about bidirectional stepping, but wanted to keep it as their "secret sauce", or as would be said today, NOBUS. Thus a comedy of Kafkaesque proportions started, until the British finally told the US they had independently worked it out, but had also seen the "hallmarks" of the knowledge in US systems, and as a result of miscommunications had actually seen the US version in the flesh, and could also tell them how to improve not just its design but its security as well. The NSA has declassified a couple of letters on the subject in recent times, including one about the technical evaluation of the over-designed prototype developed by Gordon Welchman as the ultimate electromechanical cipher system.

[4] The point of a strong key / weak key system is you know any captured equipment is going to get examined, and thus potentially used or copied by the enemy. They would, however, unless they had sufficient cryptanalytic skills, end up using the weak keys as well as the strong[5]. With the techniques developed at Bletchley during WWII of cataloguing not just all crypto/plain text but the formats and quirks of operators, and what is now called "Traffic Analysis", "probable plaintext" could be used to make breaking of even the strong keys orders of magnitude easier (see the history of Bletchley and Ultra for more information).

[5] The NSA were still up to this game during Crypto War 1 in the Bill Clinton administration, with the "Clipper Chip" and Capstone. The cipher algorithm, although just meeting the strength requirements, was actually very fragile and had some subtleties about its design. Thus if even small, apparently insignificant changes were made, it would become quite weak very quickly to certain not-well-known attack methods. In more recent times with AES, they appear to be trying other tricks with not just protocols and standards but also putting in the fix on NIST algorithm competitions. With AES it was to encourage the inclusion in implementations of time-based side channels of various forms, including those involving caches... which would thus have got into many chip designs as well...

[6] Japan, although with a lesser population and smaller economy, would have been potentially the most vulnerable by far a few years before the US, due to the needs of JIT / Kanban driving the use of ICT forward faster. But issues with earthquakes and tsunami have made them build in more resilience via low-tech methods; how well is yet to be known, and my hope is they never have cause to test it for real.

Clive Robinson • January 6, 2018 10:31 PM

@ Dan H,

I stand corrected about AIX on POWER. From IBM:

Maybe not... We know that OpenBSD is not subject to these particular instances of this attack, and it was more by luck than judgment.

However, the cache timing side channel is almost a class of attack vectors in its own right.

Thus I suspect a number of people who are not vulnerable to the current instances will use it as a "fog of war opportunity" to remove the whole class of attack, thus potentially eliminating a future instance that would hit them and make it their turn in the barrel under the spotlight.

Doing this has two advantages. The first is it gives the opportunity to clean their own house with little or no blame, with Intel and Microsoft under the spotlight taking the heat currently or about to. Secondly, it helps take the pressure off Intel and Microsoft a little bit, which is to the whole industry's advantage.

As the saying has it "Today is a good day to bury bad news in plain sight", and limit future litigation issues.

Just_some_person • January 6, 2018 10:37 PM

As imisswalter34 mentioned, the Raspberry Pis are not affected by the issues (https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/ )

As to Bruce's "we can't just throw it away": that is exactly what we have to start doing.

Moore's law isn't still going on, but it kinda is. Raspberry Pis have gone from $30 each to $5. Pi Zeros cost less than some cups of coffee. Spacecraft were designed with less computational power than I can trade for a cup of coffee or a burrito.

Next year and the years after the burrito computer is going to get more powerful.

We'll have to change the way we do things. We'll need to get the pain of a new hardware swap to be about as bad as a reboot. Anyone who has lost a phone, a laptop, or a hard drive will understand that after the fact it would have been better to have better backups.

Computers like tissues: when they get dirty we'll cast them aside (maybe reuse them after a good bleaching, if that is possible).

hmm • January 7, 2018 2:18 AM

" 'we can't just throw it away' That is exactly what we have to start doing."


hmm • January 7, 2018 2:26 AM

"We'll need to get the pain of a new hardware swap to be about as bad as a reboot."

And we'll build societies out of the landfills full of millions and billions of the discarded devices...

Every 0-day, we'll all move.

Wesley Parish • January 7, 2018 3:22 AM

@all re: chip design

VHDL was invented to describe hardware and in fact VHDL is a concurrent language. What this means is that, normally, VHDL instructions are all executed at the same time (concurrently), regardless of the size of your implementation. Another way of looking at this is that higher-level computer languages are used to describe algorithms (sequential execution) and VHDL is used to describe hardware (parallel execution). This inherent difference should necessarily encourage you to re-think how you write your VHDL code. Attempts to write VHDL code with a high-level language style generally result in VHDL code that no one understands. Moreover, the tools used to synthesize this type of code have a tendency to generate circuits that generally do not work correctly and have bugs that are nearly impossible to trace.

From Free Range VHDL by B. Mealy, F. Tappero, Ch 1.

I'm not suggesting that these attacks are an example of this behaviour, but it gets mentioned in a Creative Commons-licensed VHDL textbook, so I'm guessing it's more common than either the design bureaux or the university EE faculties care to admit.

And what I'm suggesting might be a useful exercise for budding electronic engineers would be writing out a set of HDL functions/procedures that describe a given chip's publicly-described functionality, then writing out a set of tests to break that functionality.

Wesley Parish • January 7, 2018 3:33 AM

@Clive Robinson re: Minix

I wish I knew. I know it's a very robust design, which is why Our Dear Friends at Intel pirated it. It's a message-passing design, so I suspect it may be free from some of these vulnerabilities. (Now that is an interesting twist on the Linux Is Obsolete thread - if the microkernel approach, by partitioning drivers etc. away from the kernel, makes such an attack more difficult, it might become the preferred kernel design.)

MrC • January 7, 2018 5:46 AM

A question for those with better understanding of the hardware: Do either of these vulnerabilities give the attacker information that would make a rowhammer attack easier/more efficient?

Who? • January 7, 2018 6:42 AM

@ MrC

These vulnerabilities render rowhammer superfluous. The goal of rowhammer is "hammering a memory cell to change its current value." This way an attacker would be able to, let us say, earn additional privileges.

Meltdown and Spectre allow an attacker to read the computer memory, discovering passwords and digital certificates. It is a lot more powerful.

With Meltdown and Spectre no one needs rowhammer anymore.

I am not sure it answers your question...

Who? • January 7, 2018 6:44 AM

Bad wording... rowhammer plays with a memory cell to modify the value of another cell that is physically near the hammered one.

Rachel • January 7, 2018 8:00 AM


Thanks for the interesting practical feedback. Mr Schneier's article is 'Securing a Raspberry Pi', dated 12 Sept. 17. When I said there were consecutive articles I meant the Make magazine article Mr Schneier links to. The magazine actually has a number of good pieces on the Rasp Pi (relevant depending on competency).

Appreciate the feedback. As it turns out, it's actually winter in Spain too, but agreed, I realise Dirk is presently incognito.

Rachel • January 7, 2018 8:02 AM

Nick P

Your expertise and perspective on this topic are sorely missed. If you feel to, do link to pieces on HN you may have written or read, as appropriate.

Clive Robinson • January 7, 2018 8:16 AM

@ Wesley Parish,

As is more often becoming my norm I will reply to your points in reverse order ;-)

[Minix hit?] I wish I knew. I know it's a very robust design, which is why Our Dear Friends at Intel pirated it.

Technically, Intel asked the designer for help modifying it, which with Intel's dumber and dumber attitude should have given people warning they were trying to be weaselly...

Intel kind of stopped innovating some years ago, as "stealing" is cheaper than R&D[1].

There is something called the "Swiss watchmaker advice" notion that basically says that when something works, take it apart and reverse the order to see if you can improve things... What is not said is you can get into all sorts of problems if you really don't know what you are doing...

Which is what it appears Intel did... they saw they got a fractionally faster throughput by moving the page fault checking on speculative execution from before to after initial execution. But it appears they did not really understand what they were doing, having stolen the idea; they did not understand the implications of what could and would happen in the cache even after the out-of-bounds execution had been retired... Thus now those using Intel CPUs in servers etc. are going to get a 30-50% performance hit (depending on whose figures you use).

And guess what, we also know they really did not understand what they were doing with Minix on ME either, so got security vulnerabilities that people have been exploiting to kill it off etc...

Which might be why some investment pundits are dissing Intel and talking up AMD for the next few years...

But as you note, the way Minix works is likely to be fractionally slower in a limited subset of functions but faster in others, by using message passing and a microkernel. Which suggests that if separating page tables is needed, it's going to have way less of an impact on performance compared to Linux and NT with their monolithic kernels. Thus 2018 might be the year of Minix on AMD, and Linux will have to wait again ;-)

I'm not suggesting that these attacks are an example of this behaviour, but it gets mentioned in a Creative Commons-licensed VHDL textbook, so I'm guessing it's more common than either the design bureaux or the university EE faculties care to admit.

The problem goes back to "not learning the fundamentals" and "trying to run before you can walk"... An issue in many universities that "are driven by the needs of industry" rather than the need to produce good practitioners in the field of endeavour.

As you noted, hardware runs mainly in parallel and software sequentially... which has consequences. Put simply, it's why the hardware is so much faster and the software so much easier for single-track human thinking to grasp.

Few people who work in tech these days can do the parallel thinking required to be good hardware engineers. Whilst some can do it almost naturally, the majority cannot, and only a few of them can learn it, with quite a degree of effort and time (a thousand hours, or 2/3 of an academic year, just to get a toehold).

Thus the idea that you could somehow leverage some other part of the course, such as programming... Thus rather than make the human understand the problem, you get the tools to partially abstract the problems away.

It is not just hardware design that suffers from this; it can also be seen when people take their first steps in multithreaded and parallel programming.

As I've suggested in the past the way to make best use of such a rare resource is for them to make high level tasklets that those less fortunate can "plug-n-play" and in many ways this is what has happened. Many logic designers use "parts libraries" that they plug together with a bit of glue logic... Whilst it gets chips out the door fast it is far from an optimal or transparent process.

The problem has other issues, such as the ability to also think in surfaces rather than lines. Few EE students these days can lay out even a single-sided PCB; imagine the fun that 6-12 layer boards give... And in essence that's also what happens when you manufacture a chip...

But after 2-5 years the "chimps" leave the halls of academia looking for work... Many EE grads end up a long, long way from a multimeter, tapping on a keyboard. And the problem has got worse recently in the US. Management see "engineers" not as professionals easily the equals of lawyers and accountants, but as the grease monkeys you see fronting those tyre-swapping and windshield-cleaning business chains. Thus management think engineers should get janitor wages but no overtime.

Which gave rise to a conspiracy in Silicon Valley where engineers, be they software or hardware, were not allowed to jump jobs to up-rate their remuneration... Then not so long ago a court said NO to the companies and their policies. Thus the brake came off, and software pay climbed up past the 125K range. But hardware engineers got left behind, so basically many jumped trades to write software as a full-time occupation rather than just part of the job... Management, on seeing staff shortages, rather than raise pay, "contracted out", and some of those contract companies are now being dragged into court for being racist and pulling lots of Asian contractors over on restricted visas, such that they could keep wages depressed and thus increase their profits... As one commentator has pointed out, if the current POTUS gets his way then those visas will disappear and the brake will come off the wage suppression again...

Then perhaps, with real wages being paid, those with the required brains will stop following the money into banking and law, opting for a more challenging and enjoyable career... And this time crowbar in the legislation that the medical, legal and accounting professions currently benefit from.

[1] The definitive comment on this was made years ago by the head of the French SigInt agency to CNN reporters. To paraphrase, he said yes, France used espionage to get at other nations' companies' trade secrets, as this was a lot less expensive than repeating the R&D.

However, what he did not say, and may not have known, is that design documents are effectively a list of instructions for how to do something just one way. What they rarely give is the thinking and evaluation the designer goes through prior to drawing up such a list. Thus if you don't really know what you are doing, you will not see why some things are unwise...

Tatütata • January 7, 2018 10:25 AM

It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security.

Let's look at this in context. Twenty years ago, the most widely installed "OS" was Windows 95, a bucket of GUI lipstick smudged over an MS-DOS pig. You can't pin it on Intel or AMD that their customers were seduced by the equivalent of chrome fins fitted over a chassis unsafe at any speed, even if the engine they supplied was relatively decent by the standard of the day.

It looks like yet one more case where all the individual components of a system can probably be formally shown to be correct, yet fail in combination. And a wide variety of hardware and software components are involved in both attacks. The issue is deeper than merely applying a set of patches.

I see this in the Spectre paper:

To encourage hyperthreading of the mistraining thread and the victim, the eviction and probing threads set their CPU affinity to share a core (which they keep busy), leaving the victim and mistraining threads to share the rest of the cores.

How realistic is this in practice? It supposes that a targeted machine just sits there idling while waiting for malware to come along and boss around the OS by telling it who does what... User-space accessibility of processor-affinity settings appears to be part of the problem (it's even available in JavaScript!), in addition to the high-resolution timer.
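That user-space reach is easy to demonstrate: on Linux, any unprivileged process can pin itself to a core with a single library call. A minimal Python sketch (the helper name `pin_to_core` is mine, not from the paper; `sched_setaffinity` is Linux-only, hence the guard):

```python
import os

def pin_to_core(core: int) -> bool:
    """Try to pin the current process to a single core, much as the
    Spectre paper's eviction/probing threads set their CPU affinity.
    Returns True if the pin demonstrably took effect."""
    if not hasattr(os, "sched_setaffinity"):
        return False  # e.g. macOS/Windows: no such user-space API
    try:
        os.sched_setaffinity(0, {core})   # pid 0 == current process
        return os.sched_getaffinity(0) == {core}
    except OSError:
        return False  # core doesn't exist or cgroup forbids it

if __name__ == "__main__":
    print("pinned to core 0:", pin_to_core(0))
```

No privileges, no confirmation dialog; the OS simply obliges.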

Speaking of the high-resolution timer: nowadays, audio isn't output by observing a timer, as someone suggested above, but by supplying data blocks (or their addresses) to hardware that reads them out and writes them to a DAC at the proper pace.

I can't think of that many user-space applications that need a high-resolution timer.

ATLAS, (Automatically Tuned Linear Algebra Software) a BLAS, relies on high-resolution timers for optimization, but apparently only at build time.

FFTW, a library for efficiently computing Fast Fourier Transforms, relies on timers at run time for selecting an optimum strategy.

In a sense, both libraries attempt to discover information about the underlying processor architecture, which is an aspect they share with the class of attacks described. But neither would try to read the timer at a high rate from very tightly knit code.

I suppose that video playback could use high-resolution timers at checkpoints to find out whether it is still worthwhile to continue decoding a given frame, or if it should give up and immediately move on to the next one. This still wouldn't need a very high-rate reading of the timer.
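For what it's worth, the granularity an attacker actually gets from a stock interpreter is easy to measure. A small Python probe (illustrative only; `time.perf_counter_ns` is the standard nanosecond clock added in Python 3.7, and the hit/miss figures in the comment are rough rules of thumb, not measurements from the papers):

```python
import time

def timer_resolution_ns(samples: int = 10000) -> int:
    """Smallest observable nonzero tick of perf_counter_ns.
    Cache side channels need this tick to be small enough to tell
    a cache hit (a few ns) from a miss (roughly 100 ns)."""
    best = None
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        t1 = time.perf_counter_ns()
        d = t1 - t0
        if d > 0 and (best is None or d < best):
            best = d
    return best or 0

if __name__ == "__main__":
    print("observed tick:", timer_resolution_ns(), "ns")
```

This is exactly why browsers responded to Spectre by coarsening and jittering their exposed timers rather than removing them.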

echo • January 7, 2018 10:48 AM

@ Ad-Blockers Essential More than Ever

Thanks for the optimising tips.

@ Rachael

Thanks. Your citations are useful for a random reader who needs help.


While pressing 'End' and cycling up to catch new comments I caught the beginnings of War and Peace and thought, "Aha. A new book by Clive." And, lo, it was. Now making myself comfy to enjoy this epic... ;-)

Gerard van Vooren • January 7, 2018 11:16 AM

@ Dan H,

Talking about RISC-V, it's also not available in large numbers.

Grauhut • January 7, 2018 12:45 PM

@tatütata: "Speaking of the high-resolution timer: nowadays, audio isn't outputted by observing a timer, like someone suggested above"

Yes, my pope, you are so right, the W3C Web Audio API only exists in a parallel steampunk universe! :D

JonKnowsNothing • January 7, 2018 1:29 PM

@Clive and All

In the not-so-distant past we had security issues like Heartbleed, which impacted important standardized software. While the source of the error was attributed to a programming blunder, it still exposed the lack of rigorous insight behind the types of applications piled layer on layer on layer of what is supposed to be "good code".

Meltdown and Spectre expose the same sort of issue within the hardware field. In this case, it's not as simple as replacing a graphics card (provided it's a stand-alone card) or dumping an IdiOT device in favor of Yet Another IdiOT device. It's even more damaging when you consider that a lot of malware now lives permanently in rewritable chips and becomes hard-embedded within the hardware of any system.

Both of these sorts of errors are exploited by the Big 3: corporations, governments and criminals.

So, how did government agencies protect themselves from these 2+ attacks?

They must have something in place; otherwise every PC, system or connection point within said agencies, which specialize in designs to exploit these and other zero-day conditions, would find that they themselves are being exploited.

They could use the Ostrich Method, presuming that the Other Side had not discovered the exploit, but now everyone+dog knows about it.

It would be logical that they would have an Antidote to their own Poison available. But what sort of Antidote would they have lurking in the cupboards?

Perhaps there is something like a magic hardware dongle?

They certainly have enough expertise floating about given how they are able to easily clamp onto Cisco and other Routers (hardware and software).

If the fallout becomes too great, they will have to Share. If they don't, then that indicates their entire organizations are vulnerable...

Until a New-Intel comes along.

Grauhut • January 7, 2018 2:25 PM

@JonKnowsNothing: "So, how did government agencies protect themselves from these 2+ attacks?"

That's a good question! They have their own High Assurance Platform ME mode, why not an expanded CPU mode / instruction set?

Someone should sandsift this! ;)

See: "Breaking the x86 ISA"

Christopher Domas

July 27, 2017
Abstract — A processor is not a trusted black box for running code; on the contrary, modern x86 chips are packed full of secret instructions and hardware bugs. In this paper, we demonstrate how page fault analysis and some creative processor fuzzing can be used to exhaustively search the x86 instruction set and uncover the secrets buried in a chipset.


Bonus CT: If we add some Kaspersky NSA tool leak spice into this equation, maybe what we see in the meltdown case is just some kind of limited hangout... :)

Clive Robinson • January 7, 2018 5:21 PM

@ Joseph, Grauhut, ALL,

Makes you wonder what other "design choices" are baked into other common computer & networking products....

It appears the NSA has been investigating Intel IAx86 for security since at least 1995.


Thus it would indicate that they know rather more than Intel's own engineers... Which raises the question of "What are the odds they did not know about these weaknesses?"...

hmm • January 7, 2018 7:45 PM

If you can't close the underlying HW vuln, can you instead detect from userland/hypervisor when that targeted pattern of branch-cache access is attempted, then shut down the offending userland process? The barn door is open, the cows are leaving; can you lasso them at least?
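As a rough illustration of the "lasso" idea, here is a hedged Python sketch of one such heuristic: flag any process whose timer reads cluster too densely, since cache probing requires rapid-fire timing. The window and threshold numbers are invented for illustration, and the real mitigations went a different way (coarsened timers, kernel page-table isolation), so treat this as a thought experiment, not a deployable detector:

```python
def looks_like_probe(read_times_ns, window_ns=1_000_000, threshold=5000):
    """Sliding-window burst detector over timestamps (ns) of a
    process's high-resolution timer reads.  If more than `threshold`
    reads land inside any `window_ns` span, report suspicion.
    All numbers are illustrative, not tuned against real attacks."""
    read_times_ns = sorted(read_times_ns)
    lo = 0
    for hi, t in enumerate(read_times_ns):
        # shrink window from the left until it spans <= window_ns
        while t - read_times_ns[lo] > window_ns:
            lo += 1
        if hi - lo + 1 > threshold:
            return True
    return False
```

A benign video player reading the timer once per frame stays far below any such threshold; a flush+reload loop reading it millions of times per second does not. The obvious weakness is that an attacker who knows the threshold can simply probe more slowly.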

Nick P • January 7, 2018 10:12 PM

@ Rachel, Clive

Yeah, I was getting irritated, given I've already told those people about VAX VMM who knows how many times. I expanded on that problem on Lobsters, where I elaborated on the overall problem of knowledge sharing that led to it being a surprise, and had a brief exchange with Ted Unangst of OpenBSD on it. Note the top-level link on Lobsters is to the HN comment (essay?) Clive already posted. Ted's belief that he can trust SSH was interesting, too, given his skills and background. It shows that even the talent of the OpenBSD project could've used better access to fundamentals along with their deep implications. That's without any regard to the mitigations they settled on, which I may or may not find best. ;)

Btw, Clive, I am interested in your view on my comments about covert vs side channels to Colin and then to Ted on Lobsters. As far as I can tell, the old guard was focused on malice but knew they were just side effects. My bigger view was that, as far as modelling and mitigation go, the incidental and malicious perspectives were equivalent once you modeled the leaky component itself in cause-effect terms, where they do exactly the same thing in input and output at the point of leak, with the same mitigations for active attacks stopping passive ones. Yet it doesn't necessarily work backwards, where passive mitigations stop active ones. That makes active the natural focus area. I think the MLS folks got that much right, with them *probably* knowing incidental leaks were possible but just prioritizing (or obsessing) on active attacks.

I also sent some messages to people about the possibility of using the information-flow analysis work in CompSci I previously posted here, which works with purpose-built languages to check programs on hardware structures by modelling both the sequential code and the hardware elements as just programming constructs (esp. functions on abstract/finite state machines). The automated leak finders that mainly look for interactions might be able to find the hardware problems in the software or ASM models. This would fit in with my Brute-Force Assurance concept that mapped one problem to multiple languages and logics to use their automated tooling. I did see in the past that one project used SPARK's analyzer for information-flow analysis. The Gypsy Verification Environment, the first to do this stuff, analyzed state models. There's precedent.

@ All

On capability-security, Darius Bacon submitted this link that tries to do a better introduction of the topic. I submitted some examples of it in action in another comment. That Wallaroo was built on Pony with a combo of high-performance and language-based safety is especially exciting given how new Pony is. Sean T. Allen constantly posts on both at Lobsters. Anyone interested can just look at his submissions.

Nick P • January 7, 2018 10:44 PM

@ All

re chips

You'll be looking at chips with minimal smarts that are still fast. So early, desktop-class chips or modern embedded chips of RISC style stand a better chance; CISC if they're simple, like in embedded. Open cores if you want to implement them yourself, or if any are available. Outside of implementation errors, I can't imagine much opportunity for recent channels in the Plasma MIPS, Amber ARM2, or PULPino cores. On the industrial side, Leon3 GPL is fabbable with configurable complexity. J-Core is smaller, implementing the Super-H that was simple before the Dreamcast version. ZPU was ultra-tiny. There's also JOP, which doesn't necessarily have to use Java. It could run Oberon System with some modification, but that has a CPU, too.

So there are options, with many able to run on older nodes that a crowdsourcing effort could afford, with visual inspection possible. They don't exactly fly, though, and most don't want that 1990's console experience either. That's on top of whatever slowdowns come from memory safety and suppression of covert channels. The modern crowd would be on suicide watch having to live with that. :)

justinacolmena • January 7, 2018 10:53 PM

Information about these flaws has been secretly circulating amongst the major IT companies for months as they researched the ramifications and coordinated updates. The details were supposed to be released next week, but the story broke early and everyone is scrambling. By now all the major cloud vendors have patched their systems against the vulnerabilities that can be patched against.

"Throw it away and buy a new one" is ridiculous security advice, but it's what US-CERT recommends.
  1. Designed to fail.
  2. Slipshod left-footed proprietary work.
  3. We cannot trust the IT vendor cartel to secure our computers.
  4. We are still in denial.

TOR? Forget it. Online bank account? All your money are belong to us. WIT-SEC? Don't even think about it. Bitcoin? Your wallet has a giant hole in it that cannot be patched.

It's not even a technology issue.

It is a basic human trust issue.

US-CERT does nothing but whinge about "vulnerabilities." The NSA sold us out to Russia and China a long time ago. The OPM leak? That's only a tiny part of it...

The U.S. government has given up all our secret and top secret information to our enemies.

Nothing makes sense anymore. And we're "mentally ill" to boot.

Rachel • January 8, 2018 12:21 AM

Nick P ; Clive

perfect, that is fantastic. We all benefit

Nick P I do admire your ability with constructing an argument. I luke your writing style a lot. I know its just normal for you but it deserves a nod.
Everything here is given for free with no interest in a reward. We can do to acknowledge that, occasionally

Jack • January 8, 2018 4:48 AM

@ Rachel wrote,

Everything here is given for free with no interest in a reward. We can do to acknowledge that, occasionally

Good Karma goes a long way...

Who? • January 8, 2018 5:34 AM

It seems only the first of the three vulnerabilities discovered (CVE-2017-5715, CVE-2017-5753 and CVE-2017-5754) will require a firmware (microcode) update. Good to know that at least there will be some protection even for "unsupported" hardware. All these vulnerabilities are being re-analyzed right now, so there may be changes in the future.

It is frightening that the only computers that have any protection against these attacks have the UEFI firmware.

Positron • January 8, 2018 7:03 AM

Just going to add that anyone thinking they should stick with 486 chips or other ancient CPUs without speculative execution should recall that the last 20 years of CPU development didn't just introduce performance features: a lot of older classes of exploit are non-issues today because of CPU features that bolster security, none of which exist in 486 and older chips. Your 1995-era CPU might be immune to Spectre, but I'd argue that an information-leak exploit is less worrying than entire classes of code-injection vulnerability, which inherently enable leaking information by injecting rogue code to actively steal that data, on top of the other implications of such exploits.

Dan H • January 8, 2018 8:12 AM

Meltdown and Spectre and the Mainframe


"For mainframe IT shops, there’s some good news, but bad news as well. The good news is that the folks at IBM long ago put protections in place for things like out-of-order executions and other security risks. Doubly good news is that with mainframe hardware memory encryption, you’re in pretty good shape either way. The bad news is that your consoles may be vulnerable, especially if they’re x86-based, and they connect to your mainframe systems; so you need to pay special attention there."

Clive Robinson • January 8, 2018 8:29 AM

@ Nick P, ALL,

We are now at the point where device physics is getting close to those fundamental limits that the universe appears to have...

The first, and easiest to get a grip on, is how fast we can move information from one place to another. If you take a 1 ft / 30 cm rule, of the sort you will find in any stationery supplies cupboard, that is approximately how far light travels in 1 ns in a vacuum. It looks big, but it really is not.

Because that distance quickly gets shorter, way shorter. As a rough rule of thumb, when you look at how fast signals go down a transmission line, it's about 0.6c for ordinary coax (TV antenna cable) but as little as 0.1c for some twisted-pair transmission lines and PCB traces. You also have to allow for the fact that data is fetched, which means that even with the fastest logic imaginable you have a "return journey", so you have to halve the length. Also, data is "clocked", which eats further into the budget. You suddenly find yourself looking at a maximum distance of maybe a couple of inches for memory to be away from the CPU core at a 1 GHz clock speed. With clock speeds of up to 6 GHz possible, you are looking at the width of your thumb, tops. Thus core memory cannot keep up with the CPU, so to keep things working at top speed you need caching, several layers of it.
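The arithmetic behind those figures can be checked directly. A small Python sketch, assuming a 0.6 velocity factor and charging the whole clock period to the round trip (ignoring gate delays and clocking margin, which only make the distances shorter):

```python
C_CM_PER_NS = 29.98  # light in vacuum travels ~30 cm (~1 ft) per ns

def max_memory_distance_cm(clock_ghz: float,
                           velocity_factor: float = 0.6) -> float:
    """One-way distance a signal can cover in one clock period,
    halved because the data must make a round trip (request out,
    data back).  Ignores gate delay and clocking overhead."""
    period_ns = 1.0 / clock_ghz
    return C_CM_PER_NS * velocity_factor * period_ns / 2

if __name__ == "__main__":
    for f in (1, 3, 6):
        print(f"{f} GHz: {max_memory_distance_cm(f):.2f} cm")
```

At 1 GHz that gives about 9 cm (a couple of inches); at 6 GHz, about 1.5 cm, thumb-width, which is why L1/L2 cache lives on the die.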

The next problem is that of gate delay... With the best will in the world, each gate has a rise and fall time due to the likes of charge storage in devices. You are looking at roughly 3 times the maximum switching speed of a semiconductor device, of which 2-12 are used in a traditional gate structure; at a reasonable speed you are looking at over 0.5 ps per traditional gate (which is why weird "cell" structures are often used). This likewise is an issue with shifting data around. Hence not just the caches, but caches on different sides of the MMU, also split for data and instructions. As the MMU is in effect a specialised CPU of its own, it has a lot of gate delay...

Thus the caches closest to the ALU use virtual addresses, not physical addresses. The security functions are usually based on physical addresses, not virtual addresses; hence you can do quite a bit of "speculative execution" in the virtual address space before you can detect that it has violated physical-address security requirements...

Yes, there are solutions that work on virtual memory addresses, but again it is still slow compared to what you can do in terms of executing simple instructions.

Thus you end up with a time hole that you either have to accept, or go through quite a few contortions and spend real estate to close.

Which brings us around to "heat death": put overly simply, the faster you switch a semiconductor device, the more heat it generates. Worse, it does not scale down linearly with device area at the bottom end of the curve. Thus whilst halving the device length means you can in effect have quadruple the devices in the same area, the heat cannot escape, so devices will die early, a lot earlier... The solution is to mix infrequently used gates (like SRAM) with high-usage gates. This disfavours wide-bus-width devices for various reasons.

Thus Intel have a problem rather more than other chip designers, due to "bolt-on" technology upgrades. Put simply, Intel backed the wrong internal CPU style way back, but have been locked into an upgrade path that made the change harder and harder due to the design time required... This has been recurrent in their IAx86 history, and they have been "locked in" by backwards compatibility due to Microsoft... They tried breaking out, but the industry did not want to go along with them.

In part Intel are the monster Microsoft built and Microsoft is the monster we built.

It's just one of the reasons other OS and app suppliers have gone for "device independence", even though it has a real speed penalty.

Which raises the 64,000 dollar question: do users really care about speed? Whilst we might scream that we do, the reality is end users really don't, and OSs and apps can easily be sped up by not putting bells and whistles in their code. Microsoft, which specialized in bells and whistles, adding more or changing them every couple of years or so to maintain revenue streams, therefore find themselves in effect "locked out" of moving rapidly to device independence, even though they have tried and are still trying, albeit at a very slow pace. It's part of the reason why the likes of their mobile offerings panned in the user market...

So, as we the users are in effect the cause of the problem which gives us the insecurities right down the computing stack, down to even the device-physics level, the question is "What do we really want to do about it?"...

Clive Robinson • January 8, 2018 8:43 AM

@ Dan H,

The bad news is that your consoles may be vulnerable, especially if they’re x86-based, and they connect to your mainframe systems; so you need to pay special attention there.

It is an incomplete, but common-sense, statement.

But common sense is a major deficit in many many places, including those (CSO COO) who should know better. So yes it needs to be said over and over.

It's an incomplete statement because Big Iron has to communicate not just with those x86 consoles but with storage and similar that likewise sits on a communications infrastructure. Thus those comms boxes and storage etc. boxes are often based on x86 boxes...

What is it people say about monocultures ;-)

Clive Robinson • January 8, 2018 8:57 AM

@ echo, ALL,

Comment by Theo de Raadt

It needs to be said that Theo's comments are not what they could have been, and just a fraction of what he probably would have said if his hands were not in effect tied.

That is as he noted he is not "First tier" by quite a way, therefore he is very dependent on the First Tier to stay in the game.

I'm not so can say rather more...

Likewise, Theo did not mention that in effect Google are second or third tier to Intel, because of the significant dependence Google have on Intel for server-side issues.

Thus Intel was able to have multiple hissy fits and make completely unreasonable demands of the industry. Likewise, Intel are in effect locked in with Microsoft in a symbiotic relationship. Hence together Intel and Microsoft could push their way with the announcement date, thus enjoying the Christmas consumer buying spree...

If you doubt it, ask why the normal responsible-disclosure period of 60-90 days was pushed out to 2-4 times what it often is...

What has been shown is the power of vested corporate interests over every other corporate, company, adult and child...


Clive Robinson • January 8, 2018 9:07 AM

@ Who?,

It is frightening that the only computers that have any protection against these attacks have the UEFI firmware.

I think you left out "will" before "have"; there are cartloads of UEFI boards around that have Intel CPU chips on them...

The thought did occur to me that maybe that consideration was in the Intel, Microsoft and others' master plan...

But hey I'd be asked "Are your meds working?" or similar ;-)

But as the old saw goes, "Even if it's a disaster, never let a good opportunity go to waste" 0:)

Clive Robinson • January 8, 2018 9:13 AM

@ Rachel, Nick P,

I luke your writing style a lot

May the force be with you both :-)

Sorry, too good an opportunity for a smile to ignore 0:)

JonKnowsNothing • January 8, 2018 9:31 AM

The extent of the collapse of techs using these chips is going to be massive.

Especially among all the new AI-ML algorithms that are supposed to take the place of thought.

What will happen to:

  • All the IdiOT devices booming along
  • All work/home/personal assistant data harvesters
  • All the self driving cars
  • All the self driving semi-trucks(lorries)
  • The new class of mega-planes
  • The high speed rail systems (of all types)
  • Anything with a controller system (many already infected with a dormant version of STUXNET)
  • Anything that requires a PC type Client to connect
  • All the Entertainment Consoles installed in new vehicles

These are already not secure, with an embedded fail point permanently installed, along with questionable warranties and no Right to Repair (even if a replacement comes along).

Will GM/Tesla/ATT take responsibility to fix, repair, replace, upgrade forever and ever and ever?

iirc: I read an article about a home dryer that had the unfortunate tendency to erupt in fire. The company was ordered to replace/upgrade/fix the problem. Said company offered customers a coupon for trade-in. After a while, the company canceled the coupon offer because, they said, not too many took advantage of it.

The company omitted that they hadn't really bothered to contact any of the people owning the fire-breathing-dryer to explain that their dryer was a fire hazard and could burn down the residence.

There is a big difference between getting a $X% discount by mail to buy a new widget and being told that the widget might burn down your house, so please contact us for a fix. The first is just sales churn; the other is a safety issue.

Who? • January 8, 2018 9:45 AM

@ Clive Robinson

My English skills are not very high. This phrase really needs a "will have" instead. We are not talking about dropping old (but powerful enough) gear; in most cases it will be replaced by new and shiny equipment that "will have" Intel Inside(R).

You are right, all this looks like a nice business opportunity. At least it looks great for the big corporations, not for customers.

My Dell Precision is the first computer I have bought in five years. I spent two thousand euros on it (and I get only eight hundred euros/month for my work at the university). Is this machine now broken? In fact, it was broken long before I bought it! Nice game, Dell.

VinnyG • January 8, 2018 9:50 AM

@Rachel - If you do take the OpenBSD road, I'd amplify Clive's "up slope" allusion with specific advice to _not_ post any "newbie" questions on the official forums without doing your own detailed research first (and describing that research in your post to establish bona fides). At least in the past, those folks weren't very tolerant of people who showed up with a question and expected a quick authoritative answer without doing their own heavy lifting first. I can appreciate that position, but it is quite the opposite of what has been customary practice in many corners of the net, with product user evangelists ready and waiting to hold hands and provide comfort to new users. Different culture. It also takes trying OpenBSD out of the "trivial undertaking" category for me; it requires a fairly serious commitment. I am in the process of trying to decide if I should give it another try...

Who? • January 8, 2018 9:55 AM

OK, at least I am running OpenBSD on my computers. All I need to care about, if one of these machines gets exposed to the Internet, is someone getting local access to it (even if it is only to a chrooted environment). Windows users should be really worried, as any malware will be able to exploit these vulnerabilities.

VinnyG • January 8, 2018 9:56 AM

@DanH - I'm a bit surprised by your inclusion of IBM's i OS as vulnerable. I was an IBM S/3x - AS/400 jockey for a large part of my IT career. IBM long ago segregated the microcode and the actual (guest) OS on the processor stack for these machines; is it possibly only the common microcode that is vulnerable? I guess the distinction is largely academic, but I am still interested. Thanks!

Clive Robinson • January 8, 2018 10:12 AM

@ Who?,

My english skills are not very high.

Trust me, they are a lot better than many native English speakers' skills.

Oh, and as for "native English" speakers: by far the majority, by a very, very long way, are not even remotely proficient at a foreign language. I'm one who used to try but failed miserably, though due to many people's kindness I got credit for at least trying.

To be fluent you usually have to have good second-language skills for your age before you go into primary school. Your brain is thus pre-programmed for improving, or for picking up third or fourth language skills...

As for the 1.5 times a month's wages, unfortunately that is true, or worse, for all but a handful of semi-/professionals in all but a handful of countries.

It's why I'm really, really annoyed that Intel and Microsoft forced the "after the Christmas sales boom" announcement on the world.

It shows the real psychopathic behaviour of major corporations that run as cartels or monopolies. And it might account for why Europe is starting to crack down on them, even though the lobbyists are throwing inducements of all forms at legislators...

But even so, it's way too little, way too late for tens if not hundreds of millions of consumers :-(

MarkH • January 8, 2018 11:43 AM

Thanks to Clive for his early comment about the diversity of the IoT world ("hybrid vigor").

I've been working on IoT products since long before the name was coined.

Note well that if a network-connected device does not allow any execution of remotely furnished code, then these vulnerabilities do not apply.

I've used some simple protections, including:

1. No multi-threading (!) The old "main loop OS" (which the old-timers well know) is PERFECTLY FINE for a great many IoT applications.

2. Signature verification of all executables.

3. Disabling ALL "built-in" network services ... didn't need a one of them, especially monstrosities like sendmail. And to make sure, running a thorough port scan to verify that only our little application service and ICMP are exposed to the network.

4. NEVER using http. Sadly, many IoT applications require a web interface ... but if you MUST, you don't need the giant terrifying dangerous sophistication of Apache. Simple web pages can be "cooked up" by application code.

5. Obviously, taking precautions against buffer overrun.

Most IoTs do really simple stuff. The more their code is "dumb as dirt," the better. Complexity is highly correlated with attack surface.
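For readers who never met it, the "main loop OS" in point 1 is just the control structure below, sketched in Python for readability (real firmware of this kind would be C; `poll_sensor` and `act` are placeholder callbacks, not any particular product's API):

```python
def main_loop(poll_sensor, act, iterations=None):
    """The classic single-threaded 'superloop': poll inputs, act on
    them, repeat.  No threads, no scheduler, and so no cross-thread
    speculative state for a Spectre-style attacker to mine.
    `iterations=None` runs forever, as firmware would."""
    n = 0
    while iterations is None or n < iterations:
        reading = poll_sensor()   # returns None when nothing new
        if reading is not None:
            act(reading)
        n += 1
    return n

if __name__ == "__main__":
    # Toy run: three polls, two of which yield data.
    readings = iter([21.5, None, 22.0])
    main_loop(lambda: next(readings), print, iterations=3)
```

The whole point is that there is nothing else running: no second thread to mistrain a branch predictor against, and a code base small enough to audit by eye.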

Grauhut • January 8, 2018 1:41 PM

Is there already information available on the impact of the patches on datacenter electricity consumption?

Nick P • January 8, 2018 4:28 PM

@ Rachel

Glad you're enjoying it! I'll be reworking my writing style and devising marketing plans this next year. Gotta get more uptake of these concepts.

@ VinnyG, Rachel

Good time to mention the Absolute OpenBSD book. That, Googling stuff, and practice will answer enough questions for the learner that what they ask OpenBSD lists might be worthwhile.

@ Alyer Babtu

KeyKOS was an alternative OS for IBM's mainframes (System/370) a long time ago. The EROS project by Shapiro et al created a KeyKOS-inspired OS for x86. That was going nicely until Microsoft poached him. I think he just stopped working on it since there was not much developer interest in building it or market interest in buying it.

@ Clive Robinson

Well, the locked-in-to-bad-cpu's market pays their billions. They'll have to do *some* about it. ;)

@ VinnyG

re microcode

"IBM long ago segregated the microcode and actual (guest) OS on the processor stack for these machines - is it possibly the common microcode only that is vulnerable? "

They started as System/38 with capability-based security at CPU level. The scheme got pushed more to software as they moved to POWER processors that obviously aren't capability machines. Plus, they have their own terminology: I'm still not sure if they're talking microcode like Intel's or if that's just firmware on top of the CPU in POWER assembly. Depending on what it is, the system might have moved away from the kind of isolation we liked a long, long time ago. Then, they did the mods for things like PowerVM and Linux with who knows what effect.

They are built like tanks, though, at least the old ones. My company's AS/400 has never gone down except when they took it down on purpose (e.g. forensics over suspected hacking) or a new tech unplugged/disabled it by accident trying to fix something else. They just run and run and run. I've always wondered if reliability changed with the new IBM i's on process nodes whose hardware is orders of magnitude more complex or failure-prone.

Taz • January 8, 2018 8:38 PM

Can anyone definitively advise whether the MIPS 74K is affected by Spectre? I'm confused because some of the MIPS processors are claimed truly immune... but this 74K is used in the Qualcomm SoC QCA9558... and I've since found references to out-of-order processing on this RISC core.

I personally suspect manufacturers already know which chips are vulnerable. They've had this information for a long time. So this dearth of facts the public faces is just stonewalling on their part.

We need *facts* so we can mitigate the problem by placing scarce immune hardware where it can do the most good.

Clive Robinson • January 9, 2018 12:27 AM

@ Oh realy,

Microsoft's patch to Intel's Meltdown problem is causing some AMD users to need to reinstall.

No surprise there...

If you go back and read Linus's concerns about not being able to turn the patches off for AMD processors, you will see that not only he but other lead FOSS developers had the same concerns.

You can view Microsoft's failing in this regard in many ways. However, you have to look at it the way non-tech-savvy consumer buyers do. Which is: if it bricks their machines they will lose a lot of stuff... and thus it will end up in their minds as AMD's fault... Thus doing favours for Intel...

Such is the way the world revolves...

RachelJanuary 9, 2018 2:16 AM

Nick P , Vinny

Appreciate the BSD feedback. I like the sound of the forum culture: more robust, healthy, so long as it's not elitist. Learning curves promote independence and counter BS. I'm the sort to read deeply before attempting, so I'd be consciously avoiding the forum before competency.
Nick P, do keep us informed of [what sounds like] your new professional directions

"Now I am the master"
Now that's out of the way. I just choked on my baguette. You say one needs to be established in a second language by primary school to have any hope?
That runs counter to basically all the research and studies in language acquisition and related fields within neuroscience. Adults are actually better language learners than children.
Now, having disagreed with you, I will prepare to be struck down by the Spear of Destiny thrown from Olympus. (Even though I'm right.)

RachelJanuary 9, 2018 2:26 AM

NickP , Vinny and everyone


A nice, clear description of why the author loves OpenBSD. Nice introduction if you're wishing to promote use amongst COTS users. He had the wherewithal to obtain some specific features he desired by making a donation.

NickP AbsoluteBSD looks good

oh reallyJanuary 9, 2018 3:09 AM

@ Clive

"Which is if it bricks their machines they will lose a lot of stuff... and thus it will end up in their minds as AMDs fault.."

In this case I think they'd have to blame MS all the way, it's their handiwork directly.

I doubt most users will even read about Meltdown, or know that it basically only affects Intel, or get into the weeds of the truth about culpability here vs AMD. However, I think end users have a pretty decent start of an understanding of just how much Windows 10 sucks, and how much contempt M$ has for them when it comes to update quality or anything else.

I wonder how they can make Windows 11 worse...

CassandraJanuary 9, 2018 3:33 AM

I just wish to add my thanks to the people giving freely of their insights and experience.

I'm mildly surprised* that some of the computer experts of my acquaintance are not taking 'Meltdown' particularly seriously. "It's already patched" and "It's no big deal; like Heartbleed, no-one will remember it in a few months" are comments I've seen made.

Intel's PR is obviously working. Obfuscating/co-mingling Meltdown and Spectre is an inspired move.

There will be a huge number of systems that are not remediated. Intel are not providing microcode updates for CPUs older than their obsolescence cutoff, even though the microcode can be patched**. Android has notorious problems with applying updates: it is not for nothing that Android smartphones are described as 'landfill' by some, due to the problems of getting OS updates. What this means is rich pickings*** left for software that takes advantage of Meltdown and Spectre. Anyone who uses a password manager on an unremediated system is in for trouble (or indeed, anyone using an unremediated system).

It is a pity that decision making in corporate organisations is usually quite poorly recorded and archived. Reviewing how particular design decisions were made; and how corporate PR responses are generated would make fruitful reading for future historians, and maybe allow people to learn from previous mistakes. Governments release things after a few decades: corporations almost never willingly release things.

I'll stop rambling,


*Only mildly. Experts seem to be divided into two types: those who know a lot of things; and those who think deeply about what they know. The first type seem insouciant, as they don't seem to appreciate the difference in problem type/class between Meltdown/Spectre and your average bug exploit. On the other hand, I could be making a mountain out of a molehill. I rely on trusted mentors to tell me if I'm going seriously wrong.

**I suspect that Intel made speculative execution something that can be enabled or disabled in microcode precisely because they knew it was difficult to get right, so having an 'emergency' cutout available is sensible insurance. I would not be surprised if out-of-order execution can be disabled (as well as other difficult-to-get-right capabilities), if necessary, by such an update. It is a great deal easier to put a disable-this-capability switch in microcode than a change-the-programming-of-this-facility capability, unless the capability is already written as a software function in microcode (and will then, in general, run more slowly than an optimised, but unchangeable, hardware implementation). Note that the ME environment has a disable switch, the 'High Assurance Program' bit (there might be others), so the concept of easily disabling capabilities is not foreign.

***One of my relatives delights in picking over the carcase of whatever roast bird was had for dinner. He was very happy to have a slow-roast duck carcase available after the Christmas dinner. Meltdown will provide similar rewards for a long time.

Clive RobinsonJanuary 9, 2018 8:54 AM

@ Rachel,

You say one needs to be established in a second language by primary school to have any hope?

Of being fluent, yes.

It's to do with fundamental structures you build when very young. There are two areas I've seen research on. The first is your hearing and mirroring of phonemes: beyond a certain age, if you have not heard them, you cannot make them in the way required. If you have a name like mine, after spending any time in the Far East it actually becomes painful to hear people who sound proficient in English, and can certainly write it proficiently, tripping over their tongue or slurring my name.

The second problem is that of word order. This can be overcome, but with much effort, because you have to think in the language you are speaking rather than thinking in your first language and then translating before speaking in the second. When you are very young your brain naturally builds in this ability; the older you get, the harder it gets, to the point of not being able to make the jump.

The first sort of experiment on this base brain function was one involving optics. Put simply, kittens were put in an environment without verticals, just strong horizontals. After many weeks the kittens' environment was changed to strong verticals, and they effectively could not see them.

What is not completely known is if the brain eventually adapts in a total-immersion situation. If I put glasses on you that turn the world upside down, then as long as you wear them all the time for a period of three to ten days your brain adjusts (some do not). When the glasses are taken off after the adjustment, it takes time to adjust back. A similar sort of thing happens with sea-sickness, and is why zero-G training can have some bad effects for some.

It's the same with eyesight. If you stop yourself seeing light after a few days your other senses start to adapt and they kind of take over where they can.

It's why they talk about total language immersion training. But it only works for some. There were quite a few young French people in London due to difficulties finding work in France. Whilst some improved their spoken English, it was clear that others did not. A simple study showed "The Ghetto Problem"[1]: that is, if they lived with other French people who spoke French outside of work, their English language skills hardly changed except in work-related language. That said, most, no matter how immersed, never fully mastered spoken English, and kept a soft French accent and certain word-order issues.

It's a complex issue, but have a chat with someone who speaks Finnish, or one or two other languages where pitch is important to meaning. They will tell you that, in effect, nobody who was not born to the language can speak it fluently.

Back in the 1990s I worked with people whose first languages were varied. It became clear that their first language affected the way they thought about problems. I was not the only one to encourage the mixing of people with different first languages on teams, as it paid dividends. It's why now, if you start in on "the life scientific", you are actively encouraged to go out to as many different-language research establishments as you can, and DON'T form ghettos[1].

But your ability to pick up a new language depends on how far it is from your own. Languages that use Latin and have old Germanic roots are easier to learn, with mostly the "connectives" problem. One of the things about the botanical sciences is that the heavy reliance on Latin means the "big words" are very similar in two otherwise different languages; it's just the word endings and the order that give issues.

Not sure what the modern thinking on language skills is these days as politics tends to play a larger issue than it should.

One mildly funny story from years ago when I was working with people from Hong Kong. Somebody joked that Chinese must be the world's most spoken language. To which one of the engineers from Hong Kong just said, "Ahh, but which Chinese language?", before pausing and saying, "It's why we all learn English; it makes life simple."

No doubt @Wael will have an opinion on what "fluent" means: few people from outside the culture ever learn Arabic to the point they can be moderately good at the languages, let alone fluent.

Oh, and a sting in the tail: are you a "that that" or a "that which" person? Or even the earlier forms? I know you don't use "init" in your posts, or other toe-stubbing contractions or inventions, but are you a serial comma killer?

[1] Oh, and by ghetto I mean just the "country within a country" aspect, not any of the later connotations such as impoverishment, isolationism, or second- or worst-class status.

Clive RobinsonJanuary 9, 2018 10:36 AM

@ Cassie,

On the other hand, I could be making a mountain out of a molehill. I rely on trusted mentors to tell me if I'm going seriously wrong.

Mountains like molehills look different depending not just on your position and view point, but also in this case your use case.

Personally I've been expecting this sort of "sky falling" event for more years than this blog has been around. As a result as I've mentioned in the past, also needing to support decades old software/kit I went down the "air gap" route a very long time ago. So my use case is in effect unaffected.

My viewpoint however is something that if expressed verbally would make Intel execs boil in their own fat before combusting to leave greasy slime balls of detritus behind.

As for whether it's a mountain or a molehill, that is not yet known.

As I've remarked, I could see ways to turn it into a "Dead Hand" weapon, if it was weaponised and released correctly. That window is now closing for users (except for Intel chip owners). Yes, it's still a danger, but one that is receding as OSes start to patch, and I'm presuming new chips will not have the problem. But importantly there are a whole heap of other issues that are not yet being talked about...

The people that are going to be most hurt over this are the likes of "Cloud Services" and "Hosting Providers" who want nailed-to-the-wall performance out of their kit. Yes, they will see a 30-50% performance hit, as has been mentioned, but what about the knock-on effects down that supply chain?

If you, or those you work for/with, don't have Cloud usage or big data needs, then you will probably not see much difference except on heavily loaded servers. But too many people have swallowed the Cloud Kool-Aid despite being warned otherwise. Look at it another way: what would happen if that economy takes a 10-30% hit? It's a possibility that few want to hear, let alone think about...

However, the big problem few are talking about is "Communications and Storage Infrastructure". There are one heck of a lot of "rack boxes" like firewalls, IDS kit, etc., that are just "plug and go" systems with Intel chips inside. These boxes are what worry me most, because they are more or less hidden, like much infrastructure equipment.

Known by some, less than half jokingly, as "Somebody Else's Problem, Person Unknown, Kit Unknown Status" (SEPPUKUS): few in an organisation know where they are, what they are, or what they do, until, that is, they go wrong and "the guts of the organisation spill everywhere"...

Imagine what fun you could have with the highest-level system access to the firewall and internal services boxes of an organisation. Whilst the servers and some user kit will get fixed first, as that is where the spotlight is at the moment, this infrastructure kit will probably not get fixed any time soon, if at all. Unless of course "pain happens" sufficient to unlock the management purse strings...

Sensible management will be looking at the financial side and take steps to minimise cost impact. And, as I've already noted, if systems are not connected to external networks in any way then the risk is small. Thus a rethink about which, and how, various employees get external access may well be a profitable first step for many.

Oh one thing that also has not much been talked about is Application Software. It's not just OS's that are going to need to be fixed. Multiprocess software is also an issue, especially web browsers...

Again, I turned off JavaScript years ago, much to many people's surprise, and I have a feeling it won't be long before the "adware" marketing types get in on using these vulnerabilities to bypass ad blockers and the like (the only way they will stop is when people turn off JavaScript and all the rest of the unneeded junk the likes of the W3C want to add, to the point that even Tim Berners-Lee is talking about other ways).

The simple truth is we have moved on in the world from the times when processes were for single-user functions and the OS did the heavy lifting on security. Again, as I've been saying since before Chrome was a whispered word, web browsers are a security nightmare: they often share a single process space and have multiple hostile communications open at the same time. The browser designers and code cutters are really not top notch when it comes to security, and the direction the W3C appears to be going has all the hallmarks of having been bought and paid for by lobbying from the Big Data and malvertising types.

To be honest, the industry has just stood there taking target practice at their own feet with a Gatling gun, and they are now acting all surprised that they can't keep things on an even keel and the bath is filling with blood.

Look on them like a bunch of drunken party goers: they knew that eventually they would have to stop drinking and that there would be pain, lots of pain. But they convinced themselves that they could handle it with a few cups of coffee and a handful of painkillers, etc., maybe even a snort of Colombian white marching powder. What they did not think about was the equivalent of a police raid hitting them mid-revel, and all being locked up, without coffee, painkillers, or private sanitation, being interrogated over their bad behaviour, etc., before getting dragged up before a magistrate who will add their bad behaviour to the public record as well as hitting them with a costs order yet to be decided... Almost certainly some are going to get hurt badly, others will not care, and some might learn from the lesson... Let's hope the majority inside the industry are in the latter camp.

But personally I suspect that those at the top, like Intel, will just flick the finger and carry on as before.

GarboJanuary 9, 2018 10:52 AM

"Yes it's still a danger but that is receading as OS's start to patch and I'm presuming new chips will not have the problem."

Intel is doing away with IME? News to me.

Clive RobinsonJanuary 9, 2018 11:41 AM

@ hmm,

MS patches are paused for AMD systems because

You left out,

E, MS has a "Special Relationship" with Intel.

You have to remember it's not just MS but Google that went along with the post Xmas sales boom announcement date...

You thus have to start digging to find out why... But don't think that the clouds are not important when digging, they are the cause of stormy weather after all.

hmmJanuary 9, 2018 12:09 PM

"MS has a "Special Relationship" with Intel."


I hope this debacle actually DOES result in a class action. They deserve it this time for sure.

Not to mention the stock trades a year ago...

CassandraJanuary 9, 2018 4:30 PM


Thank-you for the long reply, Clive.

I have no doubt many systems will be patched, and CPUs will (eventually) be replaced, if nothing else at the end of the normal write-off cycle. I suspect it will take a while, though, and a few people's lives will be ruined along the way, and I expect one or two SEPPUKUS*. I share your thoughts about the actions of those at the top of Intel.


*I have personal knowledge of networking equipment being operated many years beyond its End-Of-Life date, running obsolete software while supporting business critical services. There were understandable reasons why this was so, and mitigations were in place for (some of) the risks involved. It is a situation that I am sure will be replayed many times over in many organisations, but they won't all get away with it.

JonKnowsNothingJanuary 9, 2018 9:19 PM


The people that are going to be most hurt over this are the likes of "Cloud Services" and "Hosting Providers" who want nailed-to-the-wall performance out of their kit. Yes, they will see a 30-50% performance hit, as has been mentioned, but what about the knock-on effects down that supply chain?

First Knock On Killing Blow (KB):

One of my MMORPGs has so much delay now that the blasters are firing long after the NPCs have fallen, movement stutter makes all toons look like jumping beans, and rubber-banding as toons fail to move along pathways has people logging off.

Of course, it's a MMORPG and of course there is always lag and of course there is always rubber-banding and people rage-log all the time too.

Plausible deniability it may be, but this industry is worth billions on billions on billions of absolutely unnecessary consumer expenditure.

Youbetcha the game companies are gonna go ballistic about anything that slows down their micro-transaction lottery.

Who cares about anything else! Don't mess with PVP!!!

Clive RobinsonJanuary 10, 2018 7:06 AM

@ JonKnows...

Who cares about anything else! Don't mess with PVP!!!

+1 ;-)

@ Cassie,

Yes some people will have to run what is now considered broken kit.

As we both know, if management are sensible, they can first mitigate then replace, but all such action has both costs and blind spots.

Thus I see us "Living in interesting times".

Oh apparently Microsoft have slipped a little something extra in with their patches...

It's to do with AV software setting a flag in the registry. If your AV does not set it, or you are not running AV that does things the MS way, then your machine does not get patched, even though it looks like it has...

Now that is another security nightmare in the making. Hopefully, along with their AMD snafu, Microsoft will sort it out, but there is no way on god's little green apple I'm holding my breath on it happening any time soon, if at all.

I actually wonder if it's the first stage of an attack against the use of non-US AV software... which would no doubt please the US IC and LE entities...

Who?January 10, 2018 12:28 PM

@ Clive Robinson

Trust me, they are a lot better than many native English speakers' skills.

For sure my skills are much better than those of native English speakers born in California. :-)

Now seriously. I agree with your point of view. A serious deficiency of the educational system in my country is the lack of interest in providing some sort of second-language skills to people before they go into primary school. At that early age the brain can easily acquire new abilities in a more natural way. Things are changing over time, but it is a bit too late for me.

Sadly, I agree with you too about large corporations showing a truly psychopathic behaviour. I have said it many times about Google, but it applies to any large corporation.

Alyer BabtuJanuary 10, 2018 11:52 PM

Can these flaws be abstractly characterized by a matter of interest M with a corresponding property P, which in the course of computation results in some functionally related f(M), which has an insufficiently faithful corresponding functional g(P) ? E.g., M is some data in memory, P is some security property.

If so, is there a good book/paper series discussing these issues from a first principles and mathematical/logical point of view ?

GeusJanuary 11, 2018 1:58 PM

This may be a dumb question, but has anyone seen a working JavaScript exploit somewhere? I mean actual password dumping?

GeusJanuary 12, 2018 2:54 AM

I know about malicious JavaScript, thank you.
Let me rephrase: where is an actual working JavaScript PoC for Spectre?


Samantha AtkinsJanuary 14, 2018 2:19 PM

What is all the noise about JavaScript? I thought JavaScript had been patched and re-patched for years to have extremely limited access to the underlying local machine the browser is running on, much less other machines. It is a VM within a process, an interpreter more or less, not a VM in the sense of a virtual-machine slice. There is a world of difference. Am I missing something?

Clive RobinsonJanuary 14, 2018 3:40 PM

@ Samantha Atkins,

Am I missing something?

Probably, so here's the one-minute, 20,000 ft view to get you started.

There is a bug, "Meltdown", and a class of attacks, "Spectre" (of which there are two on Intel chips).

To be usable by an attacker, all they need is code execution on your machine with fine-grained timing information. JavaScript gives this to the person writing and executing the code.

Due to this, data can be leaked not just from the process space, but from the associated kernel space and other process spaces.

As the bug is actually an incorrect implementation in firmware/hardware, it happens below the point in the computing stack where the security measures work; thus trying to stop it with ordinary software is not going to be effective...

Thus sensitive information can be leaked from the kernel or other process spaces, including, it would appear, Intel's SGX secure enclave...

It does not have to be JavaScript; pretty much any language will work. But usually JavaScript is the easiest to fool an end user into running on their machine.
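To make the gadget shape concrete, here is a minimal sketch in C of the bounds-check-bypass pattern (Spectre variant 1) from the Spectre paper; the names (`victim`, `probe_array`, `sink`) are illustrative, and a real exploit also needs a cache-flush primitive and a high-resolution timer (e.g. `clflush` and `rdtsc`), which this portable sketch deliberately omits:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE 4096 /* one cache-distinguishable region per possible byte value */

/* Victim state: a small public array, plus a large probe array whose
   cache footprint ends up encoding whatever byte was (speculatively) read. */
static size_t  array1_size = 16;
static uint8_t array1[16]  = {1, 2, 3, 4, 5, 6, 7, 8,
                              9, 10, 11, 12, 13, 14, 15, 16};
static uint8_t probe_array[256 * PAGE];

static volatile uint8_t sink; /* stops the compiler deleting the probe read */

/* The classic variant-1 gadget: if the branch predictor guesses "taken"
   for an out-of-bounds x, the CPU may speculatively read array1[x]
   (potentially a secret byte elsewhere in memory) and pull the matching
   page of probe_array into cache, even though the architectural result
   is discarded once the misprediction is detected. */
static uint8_t victim(size_t x) {
    if (x < array1_size) {
        uint8_t v = array1[x];
        sink = probe_array[v * PAGE]; /* secret-dependent cache access */
        return v;
    }
    return 0; /* architecturally, out-of-bounds reads never happen */
}
```

The attacker's half of the trick is to mistrain the branch with in-bounds calls, flush `probe_array` from cache, call `victim` with an out-of-bounds `x`, then time a read of each `probe_array` page: the one that comes back fast reveals the speculatively read byte.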

Hampton DeJarnetteJanuary 15, 2018 7:11 AM

I applaud your efforts to come up with a word to describe this phenomenon.

Maybe some variant of the neologism "intervironment", instead of "Internet+", to signify the internet, the CPU-based apparatuses connected to it, and the data in all of those devices, would work. "Cybervironment" sounds clunky to me.

For eons we were at the mercy of the physical environment (floods, droughts, storms) and the biological environment (food for us to eat; bugs and tigers that ate us). Then there was an institutional environment (kingdoms, religions, laws, nation states). Now we are more and more affected by the networked-computer environment.

Good luck in this effort to invent the right word.

ScottJanuary 15, 2018 8:00 AM

@Bruce, you wrote
"The first is that these vulnerabilities affect embedded computers in consumer devices. Unlike our computers and phones, these systems are designed and produced at a lower profit margin with less engineering expertise."

I've spent my career working on small embedded systems, and in general I've found that we have more "engineering expertise" than people programming large computers. That statement may be true of companies that stick a $10 UNIX system in a thermostat, but it is not true of the engineers who made your microwave, coffeemaker, or toy. We need to use every bit of engineering expertise to meet those "lower profit margins".

You seem to be equating "engineering expertise" with "large OS expertise". If I designed a system that required several MB of memory to support a 64KB program, I'd rightly be considered grossly incompetent.

SSDJanuary 15, 2018 9:53 AM

It's not quite true that every single major processor from the last 20 years has been compromised: the much-maligned Itanium happens to be invulnerable to both Meltdown and Spectre. It is on its way out, all right, but maybe the industry should learn something from its architecture.

Techno TranJanuary 16, 2018 8:44 PM

All Intel's products are supposedly screened during development and after launch by its Security Center of Excellence, headed by David Doughty, their Director of Product Security Engineering. So it's shocking that these problems went undiscovered for 15-20 years right under the noses of Mr. Doughty and team. #NotWorthTheirSalary

Clive RobinsonJanuary 17, 2018 4:17 AM

@ Techno Tran,

All Intel's products are supposedly screened during development and after launch by its Security Center of Excellence,

Intel's "security" function has long been called into question by me and others over a number of things, as it's mostly "magic thinking" with a sprinkling of "magic pixie dust" to give an illusion of security rather than actual security.

I called "Bull Cr4p" on their first on-chip random number generators so long ago[1] that I doubt anyone working on the originals is still working in the department... Or, in the case of the then seniors with decision-making power, working anywhere but a retirement home...

You can search this site for "intel" and "magic" and "pixie dust" if you want to know more.

[1] It was way before I called "Bull Cr4p" on Linus Torvalds over his planned use of the Intel RNG, and his initial comments to the community about the decision.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.