Comments

Who? February 21, 2018 7:01 AM

As I said some days ago, I am testing the Spectre/Meltdown fixes on a Linux workstation. This computer runs Ubuntu 16.04 LTS, fully patched, and has a Kaby Lake processor with microcode revision 84h (released two days ago).

The Spectre and Meltdown checker (https://github.com/speed47/spectre-meltdown-checker/) believes it is secure (Spectre variant 1 mitigated by means of the Red Hat/Ubuntu patch, Spectre variant 2 by means of IBRS/IBPB, and Meltdown by means of page table isolation).

Something must be very wrong: one of the Meltdown/Spectre PoCs still succeeds on it.

Hardware manufacturers should understand there is a market for security, even among end users. They should release reasonably priced, slow but “secure” processors for these customers.

By the way, I doubt there will be Spectre/Meltdown-proof processors in less than five years. That is the time a new microarchitecture needs to reach the market from its initial design. I would not trust a “hardened” processor released in the next few months.

Who? February 21, 2018 7:24 AM

To be clear, I think everything is right with my setup. This workstation has up-to-date microcode:

$ dmesg | grep microcode
[    0.799830] microcode: CPU0 sig=0x906e9, pf=0x2, revision=0x84
[    0.799871] microcode: CPU1 sig=0x906e9, pf=0x2, revision=0x84
[    0.799913] microcode: CPU2 sig=0x906e9, pf=0x2, revision=0x84
[    0.799917] microcode: CPU3 sig=0x906e9, pf=0x2, revision=0x84
[    0.800027] microcode: Microcode Update Driver: v2.01 , Peter Oruba

It has the latest patches applied too (of course), but remains vulnerable (this is one of the few runs of the “Spectre PoC” that shows errors in its output):

$ make
cc -std=c99 -o spectre.out spectre.c
$ ./spectre.out 
Using a cache hit threshold of 80.
Build: RDTSCP_SUPPORTED MFENCE_SUPPORTED CLFLUSH_SUPPORTED 
Reading 40 bytes:
Reading at malicious_x = 0xffffffffffdfec68... Unclear: 0x54=’T’ score=965 (second best: 0x00=’?’ score=545)
Reading at malicious_x = 0xffffffffffdfec69... Unclear: 0x68=’h’ score=999 (second best: 0x00=’?’ score=533)
Reading at malicious_x = 0xffffffffffdfec6a... Success: 0x65=’e’ score=385 (second best: 0x00=’?’ score=190)
Reading at malicious_x = 0xffffffffffdfec6b... Success: 0x20=’ ’ score=405 (second best: 0x00=’?’ score=200)
Reading at malicious_x = 0xffffffffffdfec6c... Unclear: 0x4D=’M’ score=998 (second best: 0x05=’?’ score=523)
Reading at malicious_x = 0xffffffffffdfec6d... Success: 0x05=’?’ score=75 (second best: 0x00=’?’ score=35)
Reading at malicious_x = 0xffffffffffdfec6e... Unclear: 0x67=’g’ score=999 (second best: 0x00=’?’ score=637)
Reading at malicious_x = 0xffffffffffdfec6f... Unclear: 0x69=’i’ score=999 (second best: 0x00=’?’ score=620)
Reading at malicious_x = 0xffffffffffdfec70... Unclear: 0x63=’c’ score=739 (second best: 0x05=’?’ score=474)
Reading at malicious_x = 0xffffffffffdfec71... Unclear: 0x20=’ ’ score=999 (second best: 0x00=’?’ score=601)
Reading at malicious_x = 0xffffffffffdfec72... Success: 0x57=’W’ score=101 (second best: 0x05=’?’ score=48)
Reading at malicious_x = 0xffffffffffdfec73... Success: 0x6F=’o’ score=259 (second best: 0x00=’?’ score=127)
Reading at malicious_x = 0xffffffffffdfec74... Success: 0x72=’r’ score=53 (second best: 0x05=’?’ score=24)
Reading at malicious_x = 0xffffffffffdfec75... Unclear: 0x64=’d’ score=960 (second best: 0x05=’?’ score=529)
Reading at malicious_x = 0xffffffffffdfec76... Success: 0x73=’s’ score=117 (second best: 0x05=’?’ score=56)
Reading at malicious_x = 0xffffffffffdfec77... Success: 0x20=’ ’ score=129 (second best: 0x05=’?’ score=62)
Reading at malicious_x = 0xffffffffffdfec78... Success: 0x00=’?’ score=89 (second best: 0x62=’b’ score=42)
Reading at malicious_x = 0xffffffffffdfec79... Unclear: 0x72=’r’ score=997 (second best: 0x00=’?’ score=537)
Reading at malicious_x = 0xffffffffffdfec7a... Unclear: 0x65=’e’ score=965 (second best: 0x00=’?’ score=682)
Reading at malicious_x = 0xffffffffffdfec7b... Unclear: 0x20=’ ’ score=999 (second best: 0x00=’?’ score=566)
Reading at malicious_x = 0xffffffffffdfec7c... Success: 0x53=’S’ score=831 (second best: 0x05=’?’ score=413)
Reading at malicious_x = 0xffffffffffdfec7d... Success: 0x71=’q’ score=341 (second best: 0x05=’?’ score=168)
Reading at malicious_x = 0xffffffffffdfec7e... Success: 0x75=’u’ score=343 (second best: 0x00=’?’ score=169)
Reading at malicious_x = 0xffffffffffdfec7f... Unclear: 0x65=’e’ score=979 (second best: 0x00=’?’ score=643)
Reading at malicious_x = 0xffffffffffdfec80... Unclear: 0x00=’?’ score=490 (second best: 0x05=’?’ score=468)
Reading at malicious_x = 0xffffffffffdfec81... Success: 0x6D=’m’ score=327 (second best: 0x05=’?’ score=161)
Reading at malicious_x = 0xffffffffffdfec82... Success: 0x69=’i’ score=265 (second best: 0x05=’?’ score=130)
Reading at malicious_x = 0xffffffffffdfec83... Success: 0x73=’s’ score=103 (second best: 0x00=’?’ score=49)
Reading at malicious_x = 0xffffffffffdfec84... Success: 0x68=’h’ score=103 (second best: 0x05=’?’ score=49)
Reading at malicious_x = 0xffffffffffdfec85... Success: 0x20=’ ’ score=329 (second best: 0x05=’?’ score=162)
Reading at malicious_x = 0xffffffffffdfec86... Success: 0x00=’?’ score=71 (second best: 0x05=’?’ score=33)
Reading at malicious_x = 0xffffffffffdfec87... Unclear: 0x73=’s’ score=998 (second best: 0x00=’?’ score=601)
Reading at malicious_x = 0xffffffffffdfec88... Success: 0x73=’s’ score=43 (second best: 0x05=’?’ score=19)
Reading at malicious_x = 0xffffffffffdfec89... Unclear: 0x69=’i’ score=999 (second best: 0x6A=’j’ score=582)
Reading at malicious_x = 0xffffffffffdfec8a... Success: 0x66=’f’ score=353 (second best: 0x05=’?’ score=174)
Reading at malicious_x = 0xffffffffffdfec8b... Success: 0x72=’r’ score=71 (second best: 0x05=’?’ score=33)
Reading at malicious_x = 0xffffffffffdfec8c... Success: 0x05=’?’ score=45 (second best: 0x00=’?’ score=20)
Reading at malicious_x = 0xffffffffffdfec8d... Success: 0x67=’g’ score=299 (second best: 0x05=’?’ score=147)
Reading at malicious_x = 0xffffffffffdfec8e... Success: 0x65=’e’ score=757 (second best: 0x00=’?’ score=376)
Reading at malicious_x = 0xffffffffffdfec8f... Success: 0x2E=’.’ score=931 (second best: 0x05=’?’ score=463)

Rhys February 21, 2018 9:05 AM

Bolt-on vs built-in security has always been the answer by marketing & product management.

There was once an assessment of the Project Management Triangle of constraints (good, cheap, fast). Within that triangle, at the bottom, was a phrase between ‘cheap’ and ‘fast’: “how bad do you want it? That’s how ‘bad’ you’ll get it.”

The price for being envied is increased vigilance & security.

Toot February 21, 2018 10:26 AM

And there will be other versions, again and again. Superscalar processors have nowadays reached a complexity that makes it practically impossible to rule out such vulnerabilities.
Besides caches and branch/target prediction there are many other microarchitectural states that are not pure, i.e. that could possibly be used to leak information across isolation boundaries (for instance the instruction fetch engine). Unfortunately chip vendors tend to hide their designs from the public, which makes it harder to find other weak spots.

Patching Spectre/Meltdown seems necessary, but it will not solve this class of problem.

So what can we do about it?
* Live with that fact? Treat CPUs as untrusted and try to build secure systems on top of that? (Not sure if possible, but interesting research)
* Ask chip vendors to open their designs for public scrutiny? (I fear this could be a hard one)
* Switch to other chip vendors that are more open? Are there any? The market seems to be dominated by very few entities.

Long story short: “intel inside” – but what’s inside intel?

Who? February 21, 2018 10:48 AM

Who thinks the current Intel motto is disturbing? “Intel—experience what’s inside.”

We really need simple, auditable hardware and software technologies. There is room for performance, of course, but the world needs technologies that can be trusted.

neill February 21, 2018 10:58 AM

@who?

problem is that 24+ cores with HT, AVX, FMA and all the other goodies, with a 5+ billion transistor count, cannot be taped out and validated by hand anymore … so if you need speed you’ll get complexity

IMHO it’s better to ‘fix’ the O.S. and add scanning, detection and prevention so that nothing runs that should not run

Who? February 21, 2018 11:35 AM

@ neill

problem is that 24+ cores with HT, AVX, FMA and all the other goodies, with a 5+ billion transistor count, cannot be taped out and validated by hand anymore … so if you need speed you’ll get complexity

Exactly. On the other hand, a lot of people on this blog —including me— would prefer slow but auditable computer architectures. Gamers, teams running HPC clusters, software development teams and so on can buy fast computers if they want.

Bob February 21, 2018 11:44 AM

Bruce: Spectre Prime and Meltdown Prime are not actually different fundamental problems. The fundamental problem is that you can modify cache contents without the proper permission. The Prime variants are just different ways of detecting what’s in the cache, not of performing the permission-bypassing cache modification in the first place. There are several ways of detecting what’s in the cache, as the original paper mentioned (but didn’t fully describe, since only one way was necessary).

Frank Wilhoit February 21, 2018 12:02 PM

@Toot: “…reached a complexity that makes it practically impossible to rule out such vulnerabilities….”

This. If humans cannot build it with their minds, then they cannot build it with [chains of] machines.

This is why “the robots” (including the self-driving cars) are not “coming”; no one can program them and they cannot program themselves.

They won’t “work”, but they’re not supposed to. The motivation for them is to obscure and subvert accountability. It will never matter whether, or how often, or with what consequences, they malfunction. The only thing that matters is that it will never be possible to assign blame or to impose accountability.

Davide February 21, 2018 12:42 PM

@who

are you sure that your microcode is up to date?

if you look at the ubuntu Intel microcode package
https://launchpad.net/ubuntu/+source/intel-microcode

you can see that the version is:
3.20180108.1+really20171117.1

which tells you that you either have microcode from two versions back (the newer releases were thrown away because they hang some processors) or the version released on 17 Nov 2017.

I’m using Debian, but I think you can use the following commands on Ubuntu:

1) to check available package
$ apt show intel-microcode

2) to check installed package
$ dpkg -l intel-microcode
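
Davide’s version string uses the Debian/Ubuntu “+really” convention: the part before “+really” is a placeholder kept numerically higher than the withdrawn upload, while the microcode actually shipped is the version after it. A small shell sketch (the function name is mine) that extracts the effective version:

```shell
# Given a Debian-style "+reallyX" version, print the version that is
# actually shipped: the part after "+really", or the whole string when
# the convention is not in use.
real_microcode_version() {
  case "$1" in
    *+really*) printf '%s\n' "${1#*+really}" ;;
    *)         printf '%s\n' "$1" ;;
  esac
}

real_microcode_version "3.20180108.1+really20171117.1"   # prints 20171117.1
```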

Rhys February 21, 2018 2:46 PM

Making a ‘secure chip’ boils down to conflicted values of the foundry(s) that makes them and the clients who consume them.

Even if anyone you poll can define what “trust” is, ask them why they trusted their technology to begin with.

Consumer-driven technology makes all kinds of erroneous assumptions. It’s driven by the least amongst us, not the best amongst us.

I’ve seen companies that tried test-to-fail and rigorous outcome conformance. Where are they now? TCB A1 operating systems: who’s running those?

With the coming changes in quantum thinking, Newtonian determinism still rules trust/no-trust security models. Who really does a risk analysis matrix?

Divine intervention is what indolence & sloth leaves us for ‘hope’.

Jesse Thompson February 21, 2018 3:03 PM

@Who?

Hardware manufacturers should understand there is a market for security, even to end users. They should release non-overpriced slow but “secure” processors to these customers.

I don’t understand why they don’t just make a separate processor for security-sensitive concerns — one that’s slower and auditable but still powerful enough to do nice things — give it its own physical bank of RAM, and let it simply communicate with the “crazy fast but side-channel-exfiltratable” CPU(s).

While I can appreciate that in the business sector high-throughput database access to sensitive information is important, the business sector is ordinarily comfortable just throwing more CPUs at a problem anyhow in those massive data centers of theirs. For the small business and the consumer, segregating sensitive data to a sensitive CPU while Call of Duty gets played on the pray-and-spray model should at least be feasible.

laurie February 21, 2018 6:44 PM

Toot,

Switch to other chip vendors that are more open? Are there any?

Depends what your chip needs to do. RISC-V is getting interesting. Leon, J-core, OpenSPARC, LM32… you can use these for “Treat CPUs as untrusted and try to build secure systems on top of that”, if not for your “main” CPU. Maybe some form of blinded-computation or zk-snark can get high performance on untrusted CPUs.

I really hope government groups are going to pressure “chip vendors to open their designs”, and help create a market for it. (Why didn’t the NSA already know about Meltdown and Spectre?)

I disagree with your “practical impossibility” premise. With proper side-channel analysis during design, and proper compartmentalization, it can be done. Everyone knew caches leaked information, so designers should have presumed that any microarchitectural machinery acting on behalf of the user could leak in that way. There are several specific, a-priori-solvable problems behind these bugs:
* the speculative executor is more privileged than the task it’s acting as an agent of (i.e. Meltdown)
* the speculative executor was not hardened against cache attacks (good crypto implementors have been so hardening for a decade)
* the branch-target-buffer is not flushed during context switches (and possibly is shared between cores/threads?)
* caches leak information between tasks, hyperthreads and cores (some solutions are trivial; performant ones might not be)

laurie February 21, 2018 6:49 PM

Jesse,

I don’t understand why they don’t just make a separate processor for security sensitive concerns — one that’s slower and auditable but still powerful enough to do nice things — and give that it’s own physical bank of RAM, and allow it to simply communicate with the “crazy fast but side-channel-exfiltrateable” CPU(s).

You know they did all of that, right? Intel ships a Pentium-class CPU, with no speculative execution, inside every CPU. AMD has something too; I’ve heard rumors it’s ARM.

Too bad they did it exactly in the wrong way. They made an unauditable, unusable, trusted component (ME/PSP) that can compromise the main CPU. We can’t remove their code, we can’t put our own code there… but if we could, it would be exactly what you asked for. They’re even advertising it as “for security”.

4-h, hungry henry howard homes February 21, 2018 7:44 PM

have you heard

or hurd

of herd immunity?

is this a valid concept for inoculation strategies in the global face of javascript and rick rolls?

Eclipse February 21, 2018 8:47 PM

Who? – consider Quark. A Pentium-class processor that probably predates (I don’t know for sure) all the speculative execution.

We’ll be happy to put GEMSOS on it (TCSEC Class A1 MAC enforcing security kernel) if there are customers.

But we need customers.

Run Linux (or BSD if you prefer) on GEMSOS, if you’re serious.

But, yes, you will want hardware that doesn’t short cut segment protection checks, ever.

Clive Robinson February 21, 2018 10:51 PM

@ Toot,

Long story short: “intel inside” – but what’s inside intel?

I don’t think even Intel can answer that….

Do you remember the researchers who found that Intel had accidentally made the memory addressing architecture “Turing Complete”, thus giving the CPU a ghost CPU of its own?

There is only so much information the human mind can contemplate at any one time. Which means that no one mind can hold all the design details and work with them coherently.

What you end up with is an increasingly large group of people with “islands of knowledge”, with seaways in between that are always in a state of flux. The flux may look calm, almost orderly, on the surface, but underneath strong currents will run. Such out-of-sight places will always behave in unexpected ways when small changes are made, often in apparently unrelated areas.

In essence taming boundary issues is hard work at the best of times.

As I’ve said quite frequently over the years, there is a general rule of thumb, “Efficiency-v-Security”: it’s difficult if not impossible to have both.

The people who used to understand information side channels, and what was required to design out the then-known ones, were part of the NSA. But the US politicos, for various reasons, forced COTS purchasing on them and others within Government. The question thus arises: what happened to that “skill base”, and what is it doing now?…

Clive Robinson February 21, 2018 11:00 PM

@ Bob,

There are several ways of detecting what’s in the cache as the original paper mentioned (but didn’t fully describe since only one way was necessary).

Yes and there are other things to consider, which is why “This gift will keep on giving” for quite some time yet. Oh and of course as we know of old improperly understood “fixes will open other classes of attack”. So we are now in the “Interesting times” of the old Chinese Curse…

rb February 22, 2018 2:26 AM

laurie: (Why didn’t NSA already know about meltdown and spectre?)

What makes you think they didn’t?

Who? February 22, 2018 7:31 AM

@ Clive Robinson

Philip Guenther, Alexander Bluhm, Mike Larkin and others had been working really hard to implement these changes, fighting, amongst other things, against the lack of information about these bugs. As Theo de Raadt said, it was a “selective disclosure,” not a responsible disclosure. Truly a shame for the whole industry.

I am worried not only about the lack of communication between the big players and the smaller ones — a lack that seriously hurts the security of operating systems like OpenBSD — but also about the route OpenBSD itself has been following in the last two years: breaking subsystems and tools in the base system in the name of supposed “security”, and implementing expensive security protections, like KARL, to solve problems that do not even exist at a theoretical level. There is very good security that will never find its way into OpenBSD because developers are affected by the “not invented here” syndrome.

I was happier with the way OpenBSD was being developed years ago, implementing security for the real world without compromising usability, and the way problems were communicated to all the players in this field.

Maybe the world will turn again someday. I wish we had simple computer architectures (even if slower than current ones) again, good communication of security problems amongst all the affected players, and a more reasonable development model that does not despise the end users.

IMHO the worst problem our industry faces right now is the lack of appropriate education at the universities. The greatest advances in computing were developed decades ago; right now most developers do not even understand the basics of the tools they are using. They are mostly kids who want to play in the free and open source fields for their own self-esteem. But the real culprits here are the universities, which prefer spending their time on courses in commercial software rather than teaching the basics of our industry to the people who will be responsible for developing the software that runs on all our technological stuff.

The world is now in a state of cultural recession and increasing technological complexity.

Eclipse February 22, 2018 8:12 AM

@Clive Robertson –

The people that used to understand information side channels and what was required to design out the then known ones was part of the NSA. But the US politicos for various reasons forced the COTS purchasing on them and others within Government. The question thus arises what happened to that “skill base”, and what is it doing now?…

Clive – much of that knowledge was inscribed in the TCSEC Rainbow series of standards and interpretations. One Class A1 general computing product is still around – GEMSOS – and it is looking for customers who care enough not only about formal correctness of design but also about security, as Class A1 required. It is capable of working on any Intel Architecture processor, due in no small part to Intel’s commitment to continuing support for full segmentation and ring support in the x86 modes of operation, including IA32.

Like any Class A1 security kernel, it relies on features that Intel and AMD have (for whatever reason) broken in their 64-bit modes. That’s simply not an issue in the applications most critically needing to be free of security patch distribution and installation – such as IoT devices and appliances.

The knowledge is written down. The methods are proven; they can be successfully applied. There is still at least one commercial product able to be deployed on modern processors.

The VCs who would fund a business around delivering the product into the market ask one question: Who is your first customer? Still looking.

Clive Robinson February 22, 2018 11:16 AM

@ Eclipse,

much of that knowledge was enscribed in the TCSEC Rainbow series of standards and interpretations.

Yes and in the NATO and UK equivalents.

However, for most people all of that stops at the ISA level of the computing stack, so any attack at a lower level can cause real issues for them, as the commercial OSes have no defence against such attacks. Likewise, some of the requirements for A1 are just not going to happen in a standard commercial environment.

But the likes of RowHammer show how to reach down from the ISA level, through the main hardware security mechanism of the MMU, and manipulate bits in a way the CPU cannot see being done and thus cannot defend against. Likewise, the use of DMA in I/O can “seed” memory in certain ways that cause security issues, in that it makes other things possible.

But there are other issues, such as “system transparency”. When you make a system that has fast I/O etc. you are making it more efficient in a limited subset of areas. However, all too frequently the price of that efficiency is reduced security. As Matt Blaze and some of his researchers showed, there are easy ways to use timing channels to get information out of the rest of the system, due to the transparency caused by the time-based efficiency.

Then there are the likes of using error signals to push attacks back up through system outputs all the way to a core component. This can even affect data diodes and other supposedly one-way devices. Basically there is a trade-off: getting the speed and reliability requires more than just FEC systems; it requires actual ACK/NAK and flow control. This feedback can be subverted in many ways.

The list of these things, whilst not limitless, is certainly way, way longer than most people are aware of.

But the point of getting at memory is that it is usually unchecked except against simple flag bits for parity or similar. If an attacker can subvert the memory and correctly set those bits, then none of the conventional protection measures will work.

I’ve discussed bits of this at various times on the blog in the past, and the only effective way of dealing with it is to use a hardware state machine that actually runs through memory checking not just for flag-bit changes but entire programme signatures.

AtAMall February 22, 2018 3:43 PM

Searching “printers meltdown spectre” seemed to indicate printers aren’t considered vulnerable to Meltdown or Spectre, even though some printers use ARM chips.

Speaking of printers, are there any oldies but goodies that are worth getting one’s hands on?

Speaking of oldies but goodies, people used to speak of 1) pre-AMT or pre-ME Intel chips or 2) ~’95 or earlier Intel chips. Post-Meltdown, what vintage (year ranges or other) Intel or AMD chips might be worth considering? What computer vendors? Is pre-DDR3 relevant? and so on …

hmm February 23, 2018 4:15 AM

You can reduce your profile against the “current default attack”, but you’re not really solving security issues by going back in time. But if you can manage to get by browsing the internet with a TTY or something and be functional, hey, more power to you.

Who? February 23, 2018 7:18 AM

The Spectre and Meltdown vulnerabilities are getting worse. The new strategy against these hardware vulnerabilities:

  • CVE-2017-5753 (bounds check bypass, Spectre Variant 1), now mitigated by OSB (observable speculation barrier, Intel v6);
  • CVE-2017-5715 (branch target injection, Spectre Variant 2), now mitigated by full generic retpoline and IBPB (Intel v4);
  • CVE-2017-5754 (rogue data cache load, Meltdown), as before mitigated by Page Table Isolation (PTI).

As happened with the old microcode-based approach, the Spectre proof of concept still works on this workstation!

Who? February 23, 2018 9:59 AM

@ Eclipse

I had been considering the Intel Galileo boards before; perhaps it is time to acquire one or two of these small computers. These boards have reasonable specs. Like every computer I own right now, except my Raspberry Pi and the Dell Precision T3420, my SoCs (PC Engines ALIX and Soekris computers) are running OpenBSD, all of them.

I had been looking for information about GEMSOS in the past; I think there is not much about it publicly available (or I have been incredibly unlucky here). Right now I am not sure how it works (is it a hypervisor like Xen?), how much it costs, or even whether it can be downloaded for free.

Both the Intel Quark and GEMSOS projects are good approaches to computing for those of us who want to have our computers under control (or at least slightly more predictable than the current high-performance-and-ultra-manageable architectures).

Who? February 23, 2018 10:19 AM

If we can say something good about the new [not working either] approach to Meltdown and Spectre, it is that, at least, CPU load is lower than before. That is an improvement.

This —mostly software-based— fix against these vulnerabilities has better performance than the microcode-based one I tested a few days ago on the same workstation. Recent microarchitectures require either full generic return trampolines (“full generic retpoline”) plus IBPB (microcode), or retpoline plus underflow protection. Older architectures should be OK with generic retpoline only. So older architectures that will not receive microcode (either as a firmware upgrade or loaded by the operating system kernel at boot) will be protected too.

However, the question remains: why does the Meltdown and Spectre proof of concept I am running as an unprivileged user here work on a machine that is supposedly protected by firmware and operating system patches?

Clive Robinson February 23, 2018 1:58 PM

@ Who?,

The Spectre and Meltdown vulnerabilities are getting worse.

And the gift just keeps giving and giving, and will do for some time to come, due to “Efficiency-v-Security”… I can only see two ways this can end… The first is that in reality it does not end; people just get bored of it and find some way to mitigate the issue, such as effective “gapping” technology, which just shifts the “Efficiency-v-Security” issue elsewhere. The second is that we accept we have gone one efficiency step too far, and thus roll it back out of architectures, and take a 10-30% speed hit.

The second option is actually the best for by far the majority, as consumers have already been heading in that direction for some time… The fact that smart phones, pads and netbooks have taken many down a path where they are more than happy with CPUs that give at best about 1/4 of the processing power the Intel processors give tells us they will in reality not care about the slowdown, especially as it will probably give a 20% uplift in usable battery time…

The people who are worried are the likes of Microsoft and the other cloud providers. Microsoft gets a double hit, as does Google, because not only are they both major cloud providers/users, they also supply OSes with more bells and whistles than ninety-something percent of people either need or want. Importantly, they both need regular OS updates to move consumers into their end business game of “walled gardens that data-rape users”.

Who? February 23, 2018 3:54 PM

@ Clive Robinson

The second option is actually the best for by far the majority, as consumers have already been heading in that direction for some time… The fact that smart phones, pads and netbooks have taken many down a path where they are more than happy with CPUs that give at best about 1/4 of the processing power the Intel processors give tells us they will in reality not care about the slowdown, especially as it will probably give a 20% uplift in usable battery time…

Clive, I am not really sure your claim about a twenty percent uplift in usable battery time is true. Processors will become slower because they will lose these “valuable” overoptimizations earned over the last two decades, but that will not come with a reduction in their power draw or heat dissipation. On the contrary, they will become slower but will work harder.

I am a BSD guy, an Intel Core2 processor is incredibly powerful for my real needs.

Does a hardware manufacturer offer a thirty percent performance loss in exchange for a twenty percent increase in battery time or energy efficiency on a processor? I’d buy it! But, no, I do not think this is the case.

The answer about why you might not be getting the protection you are expecting might be in there (see ASM-v-C fix).

I fear the answer is not in The Register article. This workstation seems “protected” at a theoretical level, but the Meltdown/Spectre PoC I am running here disagrees:

$ dmesg | grep microcode
[    0.809198] microcode: CPU0 sig=0x906e9, pf=0x2, revision=0x84
[    0.809237] microcode: CPU1 sig=0x906e9, pf=0x2, revision=0x84
[    0.809240] microcode: CPU2 sig=0x906e9, pf=0x2, revision=0x84
[    0.809282] microcode: CPU3 sig=0x906e9, pf=0x2, revision=0x84
[    0.809358] microcode: Microcode Update Driver: v2.01 , Peter Oruba
$ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2 
Mitigation: Full generic retpoline, IBPB (Intel v4)
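
For completeness, mainline kernels from 4.15 on (and distro kernels with the backports, as above) expose one such status file per known vulnerability under /sys/devices/system/cpu/vulnerabilities. A small sketch that dumps them all (the function name and directory parameter are mine, added so the function can be pointed at a copy of the directory for testing):

```shell
# Print "<name>: <status>" for every vulnerability status file the kernel
# exposes. Defaults to the standard sysfs path; pass another directory to
# run it against a test fixture.
dump_mitigations() {
  d="${1:-/sys/devices/system/cpu/vulnerabilities}"
  for f in "$d"/*; do
    printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
  done
}
```

On a machine like the one above this would print lines such as the spectre_v2 status shown, one per vulnerability file present.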

I am not sure this workstation will be upgraded to kernel 4.15; I may try it just to confirm this computer can be protected against the variants of Meltdown and Spectre known right now, but the goal is moving it to a BSD or, as I like to call it, “an operating system for civilized people.”

Who? February 23, 2018 4:12 PM

@ Clive Robinson

Right now the only reason I feel “safe” is that my computing infrastructure is mostly based on OpenBSD (even if some design decisions taken in the last two years hurt the project in some fundamental areas, it remains the best choice for general purpose computing). Most of my computers are on an air-gapped network, the ones connected to the Internet are firewalled for both egress and ingress traffic (e.g. allowing web browsing from a single machine on my network), and authentication is based on smart cards only (thanks to clever people like you and Thoth).

For years I had been trying really hard to move from the nineties idea of security (“my computer is unbreakable”) to the current concept of “ok, my system is vulnerable, what will I do once it is breached?”

I am trying to control damage once it is done, and it seems it is the only way to go.

laurie February 23, 2018 6:11 PM

rb,

laurie: (Why didn’t NSA already know about meltdown and spectre?)

What makes you think they didn’t?

It’s a guess based on very little information.

  • They claimed they didn’t know, despite this making them look incompetent (not a very good source, I know…). Based on research trends, side channels were a very predictable avenue of attack, and the NSA should already have reverse-engineered the relevant hardware, so they’d just need to put two and two together.
  • I’ve seen no evidence they knew. If they did, they’d have had to be totally reckless not to warn other government agencies, even vaguely, and it would have been hard to keep this secret. If they’d shown a strong preference for AMD over Intel, to avoid Meltdown, people would know.

Clive Robinson February 23, 2018 6:32 PM

@ Who?

Processors will become slower because they will lose these “valuable” overoptimizations gained over the last two decades

These “overoptimizations” contain a considerable number of logic gates running at the maximum speed they can safely operate at, and thus burn a considerable percentage of the CPU energy input whilst bringing it closer to “heat death”. If they are removed then the CPU power requirements diminish proportionately; 20% is a conservative estimate of the likely saving. However, in deference to Gordon Moore’s observation, the engineers will no doubt find other uses to put the silicon real estate to. If they choose more cache memory or microcode ROM then the power saving will be realised. If, however, they choose other “speedup” optimizations then there will be no power saving.

Does a hardware manufacturer offer a thirty percent performance loss and a twenty percent increase in battery time or energy efficiency on a processor? I buy it!

Yes and no; in the case of Intel, no. Their whole reason to exist has for a very long time now been “go faster stripes” for that marketing method called “Specmanship”. They did at one point try producing lower-power chips for laptops and the like, but it did not work out so well for them.

Other design houses like ARM do offer options in where and how the energy gets burned. But they too try to offer more bang for the buck etc. on each life-cycle stage, hence the upward core type numbering. What manufacturers then do is pick the core they think meets customer expectations in their SoC products. Which usually means any process improvements will probably get swallowed up by more non-core functionality such as more RAM/ROM/Flash or more functional or higher-speed I/O, including DSP and I/O cores.

Thus as a consumer you don’t get given a straight option.

I am a BSD guy, an Intel Core2 processor is incredibly powerful for my real needs.

There is actually quite a degree of flexibility in BSD, unlike other current commodity OSes. For instance, as I’ve mentioned in the past, Microchip make some incredibly cheap CPUs and development boards. You can, for around 1 USD, buy a MIPS-core PIC32 SoC chip that will give you the equivalent power of a four-user MicroVAX costing about the equivalent of 1.5 million USD in today’s currency value. It will fit on their development system, and somebody has ported an earlier version of BSD (originally 2.11) onto it.

You can read a brief history of the port and project as well as the 4.4 update at,

https://www.mips.com/blog/can-it-run-bsd-the-story-of-a-mips-based-pic32-microcontroller/

But things are still moving forward on this front.

You can also buy a couple of ready to play boards for around 20USD from,

https://www.olimex.com/Products/Duino/PIC32/PIC32-RETROBSD/open-source-hardware

If you want the latest on the 4.4 you can go to the source as it were,

https://github.com/sergev/LiteBSD/wiki

But I’m a little concerned about,

For years I have been trying really hard to move from the nineties idea of security (“my computer is unbreakable”) to the current concept of “ok, my system is vulnerable, what will I do once it is breached?”

As the way you put it suggests you might have missed out on a third way which is “the off the shelf system I use is vulnerable, what can I do to reduce the chance that it will be breached, and mitigate any damage that might be done prior to it happening?”

There is much that can be done to reduce and/or mitigate the effects if it does happen. We used to call it “hardening” back in the Solaris 5 days. Put simply, you would have a recipe of what to do to a stock install to turn it into a “Bastion Host”: not just reducing the attack surface by removing all but required services etc., but also jailing services in the equivalent of sandboxes, plus a few other tricks like Tripwire, such that an attacker, even if they got a toehold, would find life difficult at best.

The modern approach is not that dissimilar, but has extra tricks you can use. One such is not just FDE for the system mounts but also individual user home-space encryption. Thus even though the system is up, obviating the FDE protection, individual user home file systems and other storage are only vulnerable while the user is logged in and using them. Further, file-level encryption can also be used, such that only the data files in use are vulnerable. Yes, it gives a performance hit, but you do get security advantages.

If you want to get down and dirty, there are other tricks using file system “snapshot” images to behave in a non-standard auditing fashion, showing not just when but how file system content was changed, even if the standard file system interface does not show any changes. Likewise you can use UDP for a syslog over a data diode to an append-only file system. All of these have been successful in catching even some of the wiliest of hackers, and there is much more mitigation you can do besides.
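The UDP-syslog-over-a-diode idea can be sketched in two config fragments, assuming a BSD-style syslogd and a log host called loghost (the host name and log path are illustrative, not prescriptive):

```
# /etc/syslog.conf on the monitored host: forward everything over
# UDP -- one-way traffic, so it survives a data diode
*.*	@loghost

# On the log host: mark the log file system-append-only (BSD chflags),
# so even root on a compromised sender cannot rewrite history
chflags sappnd /var/log/remote
```

Note that UDP delivery is fire-and-forget, which is exactly why it works across a diode: the sender never needs a reply from the log host.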

laurie February 23, 2018 6:35 PM

AtAMall,

Searching “printers meltdown spectre” seemed to indicate printers aren’t considered vulnerable to meltdown or spectre, even though some printers use ARM chips.

Expect some bored hacker to port the exploits to PostScript if they’re vulnerable… but pre-Cortex chips are more likely here, and those aren’t vulnerable.

Anyway, who updates printer firmware? This stuff isn’t meant to run user code (other than PostScript, which people sometimes forget is code) and may not have kernel/user separation to start with. There must be easier exploits.

Speaking of printers, are there any oldies but goodies, that are worth getting ones hands on?

Perhaps something after the laser miniaturization of approx. 10 years ago, if it’s for home (I picked my current printer from a kerb before that; largish, but free and still working). Look for a B&W laser that supports PostScript and/or PDF where replacement toner and drums are easy to get. I’ve found Brother, Samsung, HP and Xerox to be fine (but have only owned the first two). You don’t want to be running a years-old network stack, so disable that and connect it to the USB port of something with a modern OS.

Clive Robinson February 23, 2018 7:25 PM

@ laurie, rb,

What the NSA or other Western SigInt agencies did or did not know is always a matter of conjecture.

However, it is also known now, from some of the NSA tools that went walkabout and ended up with the brokers, that the NSA are quite happy to let quite serious software vulnerabilities go unreported simply to gain what they consider an “offensive advantage” for a “just in case” scenario. Thus they were quite happy to wilfully leave their own government’s highest-level security systems vulnerable without batting an eyelid…

But in the case of the NSA, if not other SigInt agencies, the push to “offensive capability” and the need to redeploy funding meant that experienced teams working on defensive capabilities ended up getting dissolved, and thus their knowledge base as well.

Another thing that is becoming apparent with time is that in many cases the SigInt agencies are no longer technology leaders but followers. That is, the open community actually now knows more than the SigInt agencies. Thus they are no longer surprised at how quickly the open community caught up, but at how quickly they themselves fell behind… Worse, they are no longer attracting the “brightest and best” into government service; they just cannot compete with the “big money and benefits” packages even the weirder maths PhDs are commanding. But worse still, the straitjacket that government service has been, and still is, is just not attractive. Thus the SigInt agencies are becoming increasingly short on smarts where it counts…

So yes, not only might the NSA not have known, they would probably have done nothing helpful for their own people even if they did know. Which is kind of a gut punch to everyone outside that psychotic world.

Anon February 24, 2018 12:35 AM

Why didn’t NSA already know about meltdown and spectre?

We will probably never know if they did.

I can think of a few reasons why they might not use it even if they did know:

  • Relatively slow
  • Could cause corruption in the long term/cause them to be detected
  • Unreliable

They would also need to be looking at side-channel attacks against the hardware, whereas most of the time they would be looking at side-channel attacks against software.

I also remembered this: https://www.theregister.co.uk/2015/04/21/cache_creeps_can_spy_on_web_histories_for_80_of_net_users/

Did these researchers only discover half the truth?

echo February 24, 2018 11:04 AM

@who?

I follow the Swiss cheese security model. My security is so poor that any hacker will by accident drive straight through Carlos Fandango style and out the other side. I’m joking of course but it makes the predicament easier to bear.

Who? February 24, 2018 11:26 AM

@ Clive Robinson

Thanks a lot for the advice about BSDs and cheap boards. I know the projects you describe and have access to the 4.4BSD-Lite source code from both the “4.4BSD-Lite CD-ROM Companion” (I have the full set of books plus the companion CD-ROM from USENIX and O’Reilly) and the “CSRG Archives CD-ROM set,” in my humble opinion the most valuable source of information on the history of the BSD operating systems.

I hope you are right and the reduction in the number of logic gates will save power; my fear is that Intel, AMD and —in some way— ARM too will only remove the minimum number of logic gates required to pass the Meltdown and Spectre proofs of concept available now. Performance sells processors. It seems Moore’s law has been kept healthy with the help of some engineering dirty tricks, and all this has broken in the worst way.

As the way you put it suggests you might have missed out on a third way which is “the off the shelf system I use is vulnerable, what can I do to reduce the chance that it will be breached, and mitigate any damage that might be done prior to it happening?”

No, I have not missed the third way. I try to make my systems as simple as possible and document anything that is security related (ranging from the most secure settings on the BIOS setup of my computers up to the way to configure a service to be secure). One of the most valuable features of OpenBSD is that there are no superfluous services enabled by default, so there is no need to start a witch hunt to disable services.

The world has changed a lot from the times of Nixdorf’s TARGON operating system, SINIX and HP-UX 9.07. The first Unix I ran at home was Solaris 2.5.1 (I think it is what you call Solaris 5), followed by Solaris 2.6, 7 and 8. At that point I moved to the BSDs (FreeBSD, NetBSD and OpenBSD, all of them), and now I am using OpenBSD exclusively.

Solaris 2.5.1 was great but, like other commercial operating systems, it tried very hard to be friendly to system administrators by opening too many services by default, like the infamous RPC services required by CDE. Hardening these operating systems was a nightmare.

Indeed, right now I use FDE —AES-ECB-256 to protect the disk keys and a unique AES-XTS-256 key to protect each half terabyte of data— on all desktops and laptops (servers are excluded, but data is encrypted on storage). I have built a simple data diode using three fiber-optic transceivers to protect a small syslog server, so my hope is to be able to detect anything odd that may happen, if it leaves traces.
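On OpenBSD, an FDE setup of this kind can be sketched with softraid(4)’s CRYPTO discipline; the partition name below is illustrative:

```
# Create an encrypted softraid volume on RAID partition sd0a;
# bioctl prompts for the passphrase that protects the disk keys
bioctl -c C -l /dev/sd0a softraid0

# The resulting pseudo-device (e.g. sd2) is then partitioned,
# newfs'ed and mounted like any other disk
```

The passphrase only unlocks the volume keys at attach time; the bulk data encryption itself is handled transparently by the softraid layer.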

In my humble opinion, Meltdown and Spectre have shown all these protections are not enough (except the use of OpenPGP smart cards, whose private keys should remain secure on a fully compromised computer), so I prefer thinking in a different way.

I have seen no easy way to achieve “energy gapping,” but air gapping is a first step, and I try to use it whenever possible.

Mike Barno February 24, 2018 11:28 AM

@ echo :

… any hacker will by accident drive straight through Carlos Fandango style and out the other side.

This reminded me of a passage in The Illuminatus! Trilogy, by Robert Anton Wilson and Robert Shea, about the battle of Ascalon during one of the Crusades. In the book’s telling, a group of Crusaders led by Templar heavy cavalry charged into a fortified Muslim city near Jerusalem, drove straight through the place, and found themselves at the far wall, where the inhabitants surrounded and immobilized them through sheer massing numbers. So the Crusaders, finding no gateway to exit, had to charge right back across the city and out the way they entered.

There’s no similar description in the Wikipedia articles on either the 1099 Battle of Ascalon or the 1153 Siege of Ascalon, so I don’t know whether this was one of the many bits that the authors simply made up.

Who? February 24, 2018 2:59 PM

@ echo

The Swiss cheese security model works!

I will not say where, but there is currently an open (i.e. non-firewalled) network in my country with a few Solaris 9 workstations that have not received a single patch since I installed them in 2004, and a mail server that has not been patched since 2013. These machines have not received a single attack. (I will not say “successful attack,” as any attack, even the ones performed by script kiddies, will be successful on these computers.)

There are other machines on that network, better protected ones and fully patched, that had been successfully owned in the last years.

Bad guys must believe these unpatched systems are a honeypot.

Alyer Babtu February 25, 2018 3:52 PM

Since speculative execution often “works”, it seems the data, computation and guessing of conditional branch outcome values form a communication system of low entropy.

Viewing speculative execution in terms of a Shannon communication diagram, where (perhaps) the sender is the data input, the transmitter is the program, the noise is the statistical branch guessing, is there some better way to take advantage of this low entropy, rather than the “communication encoding” implicit in speculative execution as currently used ?

Perhaps a way involving better compiling, data layouts, perhaps some use of both together ?

E.g. the program one wrote, as transformed in the running system, guesses, based on statistics gathered from operation, the truth value of a conditional expression containing unloaded values in far memory, and guesses correctly in a worthwhile number of cases. Then the program block corresponding to the guessed case is run, whose data is in near memory, while the system fetches the unloaded far-memory values so the conditional can be fully evaluated. Why does it systematically turn out that the conditional needs far memory but the code to be speculatively run does not? Is there some way to have the values needed in the conditional always, or for the most part, stored in near memory?
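One compile-time answer to that last question is hot/cold structure splitting: pack every field the branch condition reads into a dense “hot” array, and banish bulky payloads behind a pointer so they never crowd the cache lines the predictor’s operands live in. A minimal C sketch, with invented record layouts purely for illustration:

```c
#include <stddef.h>

/* Naive layout: the conditional only needs `flags`, yet each record
 * drags a large cold payload into its cache line alongside it. */
struct record_cold {
    int  flags;            /* used by the conditional            */
    char payload[4096];    /* rarely touched                     */
};

/* Split layout: condition fields packed together ("near memory"),
 * payloads stored elsewhere and dereferenced only when needed. */
struct record_hot {
    int   flags;
    char *payload;         /* far data, fetched lazily           */
};

/* Count records whose flag is set.  With the hot layout the loop
 * streams many condition words per cache line instead of one, so
 * the branch's operands are almost always already loaded. */
size_t count_flagged(const struct record_hot *r, size_t n)
{
    size_t hits = 0;
    for (size_t i = 0; i < n; i++)
        if (r[i].flags)    /* predictor guesses; operands are near */
            hits++;
    return hits;
}
```

The same transformation is what compilers and cache-conscious data-structure designers do by hand when profiling shows a branch stalling on loads.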

RogerDogerThat February 26, 2018 7:45 AM

There are Spectre/Meltdown solutions today, but you have to look outside the mainstream tech media radar. This ARM-based, slower-processor Spectre and Meltdown solution is exactly what posters have asked for:

“As for Spectre and Meltdown, ARM CEO Simon Segars noted that the attacks have raised the awareness of how many microprocessors there are in the world today, though he also stressed that this attack exploits the features of high-performance chips — and only 5 percent of the chips that ARM’s licensees have sold in the past are susceptible to the attack.”

Notably, the Pi 3’s dedicated hardware video-decoding chip surpasses Intel and Nvidia in quality and smoothness.
No Intel/AMD Management Engine issues either.
With the pure Debian-based Raspbian desktop and Kodi you have it all for well under $80 total cost.
Unlike mainstream platforms there is no data-slurping.
Loading a heavyweight privacy-focused web browser is slow, however; after that it’s usable.

AlexS February 26, 2018 3:09 PM

@Frank Wilhoit : I have driven >50,000 miles with two “self driving” cars over the past 3+ years. The first one would be Level 3, the current one claims to be a Level 5. And no, neither of them are Teslas. The technology is pretty damn good. It’s saved my bacon three times, with 2 of the 3 being almost rear-ended by other drivers. The other was the car avoiding a deer.

The technology is being developed because humans aren’t good at this. There are ~16,000 vehicle accidents in the USA every day and roughly 37,000 deaths every year. Those numbers would be far higher, but cars have become substantially better-built over the past 30 years. These semi-autonomous and fully-autonomous car technologies have the power to reduce these numbers. Will we ever reach 0 accidents and 0 deaths from car accidents? Not likely. Even commercial aviation, which had 0 deaths in 2017, still played bumper cars with airplanes multiple times at major airports last year. We even had multiple almost-disasters last year where aircraft were almost landed on top of other aircraft, again at major airports with experienced pilots. Even in commercial aviation in 2017, the #1 cause of accidents is human error.

Keep in mind that the design philosophy behind the Airbus A3xx series of aircraft was to reduce pilot error, which is why pilots “fly the computer” rather than fly the aircraft directly. Under Normal conditions (Normal Law), the pilot is just an input into the system and the aircraft figures out how to do what the pilot requests. When the systems fail (Alternate law), the pilot will need to fly the old fashioned way.

There’s no mistaking who is at fault if I get into an accident — it’s my car. I’m the licensed driver, therefore, I’m responsible for the operation of it. No different than the captain of an Airbus who stuffs it.

I will agree that systems today are far too complex. If anything, I have a ton of respect for the “real engineers”, i.e. those guys who designed things with slide rules and no computers. In doing so, they were able to achieve great things. Also, by doing the calculations by hand, by sifting through the math, they were aware of each calculation and its results. When doing math this way, you can spot mistakes more easily than with the current plug-and-chug way of doing it with spreadsheet programs and formulae. Unfortunately, there’s no way to turn back. It’s simply not possible for a single person to develop any of these products from scratch anymore, whether it be a mobile phone, car, airliner, etc. Now we’re at the point where not even a single company can produce the thing start-to-finish.

TRX February 26, 2018 4:08 PM

performance sells

I’d pay for security. So far the Fed has been silent on the CPU security issues – they may not even be aware of them at the management level yet – but there are some nasty Federal penalties for allowing certain types of confidential information to escape. I’m very glad I no longer have any customers who fall under HIPAA.

Alyer Babtu February 27, 2018 11:23 AM

Re speculative execution and entropy (question above), found this paper

Return Value Prediction meets Information Theory, Jeremy Singer and Gavin Brown, Electronic Notes in Theoretical Comp Sci 164 (2006) 137-151
