Secure Speculative Execution

We’re starting to see research into designing speculative execution systems that avoid Spectre- and Meltdown-like security problems. Here’s one.

I don’t know if this particular design is secure. My guess is that we’re going to see several iterations of design and attack before we settle on something that works. But it’s good to see the research results emerge.

News article.

Posted on June 25, 2018 at 5:00 AM • 24 Comments

Comments

Who? June 25, 2018 12:26 PM

The only secure design is one that fully avoids speculative execution. Right now there are clearly two user groups:

  1. Those who want high-performance architectures (gamers, staff working in the HPC field, …); and,
  2. those who want secure computing platforms.

The first group either despises security (gamers) or runs huge computing clusters in which compartmentalization is not critical (each computing node has a single user and does not run multiple jobs concurrently). The second group, the security-conscious users, deserves a secure —even if slow by current standards— architecture.

Sorry, I do not believe in turning an insecure design into a secure one by patching; security is an integral part of a design.

Who? June 25, 2018 12:34 PM

Remember, processors were considered secure until a few months ago. Caches were not considered leaky until proven otherwise.

Bob June 25, 2018 12:45 PM

@Who?

I agree. Unfortunately, we’re stuck working with a reality in which speed is a consideration.

Who? June 25, 2018 1:18 PM

@ Bob

Patching broken hardware only gives a false sense of security. It works until someone finds a workaround, and then all computers manufactured in the last twenty years are vulnerable.

Wishing the bugs away is about as effective as sweeping them under the carpet.

4seth June 25, 2018 1:19 PM

@Who?

It’s not true that processors were considered “secure” before Spectre/Meltdown. Cache attacks were getting trendy 15 years ago, and the 1995 paper “The Intel 80×86 Processor Architecture: Pitfalls for Secure Systems” mentions them.

Re: “The only secure design is one that fully avoids speculative execution”, that’s not a trivial statement and you don’t provide any proof (or a definition of “secure”, which would be necessary for a proof). The linked paper is claiming to be more than just a patch: “Designing speculative architectures that are leakage free in a principled way requires carefully rethinking most aspects of the processor microarchitecture. This paper takes a first step into exploring this problem, and by necessity leaves many steps to future research. Specifically, we show SafeSpec only for the memory translation and access components of the CPU, which closes most currently known attack variations. To protect the processor and the full system, all speculatively updated structures, such as the branch predictor and the DRAM buffers must be protected.”
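To make the shadow-state idea concrete, here is a toy Python model (purely illustrative; the names and structures are mine, not the paper’s): in a classic design a speculative load updates the shared cache even when the speculation is squashed, leaving a footprint an attacker can probe, while a SafeSpec-style design buffers the load in shadow state that is merged only on commit and simply discarded on squash.

```python
class ToyCache:
    """Toy cache model: just the set of line addresses currently cached."""
    def __init__(self):
        self.lines = set()

    def load(self, addr):
        self.lines.add(addr)          # a load leaves a footprint in the cache

    def contains(self, addr):
        return addr in self.lines     # what a Flush+Reload attacker probes


def speculative_load(cache, addr, commits, safespec=False):
    """Model one speculative load that is later committed or squashed."""
    if not safespec:
        cache.load(addr)              # classic design: cache updated immediately
        return
    shadow = {addr}                   # SafeSpec-style: held in shadow state
    if commits:
        for a in shadow:
            cache.load(a)             # commit: merge shadow state into the cache
    # on squash the shadow state is simply discarded, leaving no trace


classic = ToyCache()
speculative_load(classic, 0x1000, commits=False)   # squashed...
print(classic.contains(0x1000))   # ...but the footprint remains: True

safe = ToyCache()
speculative_load(safe, 0x1000, commits=False, safespec=True)
print(safe.contains(0x1000))      # no footprint after a squash: False
```

The point of the toy model is only that the observable difference between the two designs is exactly the residue left by squashed speculation; everything hard about the real problem (timing, TLBs, predictors, DRAM buffers) is abstracted away.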

Who? June 25, 2018 2:24 PM

@ 4seth

Where did I say that Intel architecture was considered secure before Meltdown and Spectre?

My current workstation (bought last November) has an i5 processor with four real cores for a good reason (as proved by TLBleed). I have talked about hardware-based management technologies in the past, as well as the overly complex UEFI firmware and the hyperthreading vulnerability discovered thirteen years ago.

Do you really want mathematical proof that a simple design has fewer bugs than a complex one? Seriously? Or are you saying that, since a mathematically verified microarchitecture does not exist, current ones are OK and we must continue using them?

justina.colmena June 25, 2018 4:21 PM

Listen up, you computer nerds and programming cattle. “Speculative execution” is not really about computer machine architecture at all.

All that is technically meant by the term is that the execution of machine instructions on a computer is not actually intended to be all that secure or reliable.

Time to wake up, sheeple. When they say “speculative,” they are talking about risk, as in playing the stock market. We all know that a certain cartel of large established tech firms is pumping and dumping and panning the stock of its upstart (start-up) competitors.

Good grief, you basement-dwelling techies. It’s about time you learned the biz-speak of your bosses before they pull the rug out from under your feet. There is no good faith to this “speculative execution.”

echo June 25, 2018 4:52 PM

I believe there is a place for all the different ways of CPU execution, perhaps different hybrids too, depending on use context. I’m just glad I’m not the one who has to think through this stuff.

Jesse Thompson June 25, 2018 4:53 PM

@Who?

“Where did I say that Intel architecture was considered secure before Meltdown and Spectre?”

Uh, you said it here.

“Do you really want mathematical proof that a simple design has fewer bugs than a complex one? Seriously? Or are you saying that, since a mathematically verified microarchitecture does not exist, current ones are OK and we must continue using them?”

I don’t know, are you saying that all technology is evil and that safety only flows from purposeful impotence?

The rest of us would like to find out whether the efficiencies offered by modern CPU architectures could instead be offered by systems mathematically proven not to leak side-channel information. We desire speed and safety, and are exploring whether such a goal is attainable. You appear to be against that exploration, and to feel that computing is unsafe at any speed.

4seth June 25, 2018 5:11 PM

@Who?

“Where did I say that Intel architecture was considered secure before Meltdown and Spectre?”

The second message, “Remember, processors were considered secure until a few months ago.” (Assuming that was you and that “a few” means 6. What am I missing?)

“Do you really want mathematical proof that a simple design has fewer bugs than a complex one? Seriously?”

No, I want some evidence for the assertion that “The only secure design is one that fully avoids speculative execution.” You’re saying it’s literally impossible to use speculative execution securely, something I’ve never seen suggested in the literature. I accept that it’s hard, and that simplification is sometimes the best choice.

“Or are you saying that, since a mathematically verified microarchitecture does not exist, current ones are OK and we must continue using them?”

It might not be reasonable to give up on speculative execution entirely. You could probably map all your memory in uncached mode and see what that looks like on current architectures.

I do want to see formal verification of future processors. (And operating systems. seL4 is doing it now—with limitations, but it’s good work.) The paper gives a sketch as to what we’d be trying to prove here: that aborted operations cannot affect timing or execution as perceived by committed operations. In principle, it’s possible to show that at “block” level without proving every transistor.

justina.colmena June 25, 2018 5:17 PM

@ narcissus’ girlfriend

“I believe there is a place for all the different ways of CPU execution, perhaps different hybrids too, depending on use context. I’m just glad I’m not the one who has to think through this stuff.”

Uh huh, well, don’t let it break your heart. And don’t assume they are thinking any harder than you are, either, because chances are, they are not. You ever log into your bank account from your speculative-execution gaming PC, you are hosed, because some of those gamers are playing for real money, for keeps, not game scrip.

Kyle Wilson June 25, 2018 6:16 PM

I expect that there are CPUs out there that would meet the security criteria requested…just that none of them would be interesting to most of the audience. The AVR line of processors (the core in an Arduino) is likely simple enough to be subject to analysis given some time and effort. Even the lowest-end ARM cores may meet the requirements. None of these will run a real OS, but you can code whatever will fit by hand in machine language and be pretty sure that you’ve got full control over the machine.

Much beyond the low end of the embedded world you’ll start seeing enough complexity to make analysis impossible, and performance/security trade-offs that are at best opaque and at worst troubling. I’m personally going to stick with the high-performance processors that meet my daily needs and accept the fact that I’ll never have perfect security…

neill June 25, 2018 7:55 PM

IMHO, even with today’s architectures we CAN work around Spectre etc.: you just have to flush all caches, TLBs, etc. before a context (task) switch. IF you have a ‘clean’ CPU, then even a malicious thread cannot find any useful data. That, however, comes with a performance decrease.
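The flush-on-switch idea can be sketched with a toy model (purely illustrative; real caches, TLBs, and schedulers are far more complex, and the performance cost of real flushes is the whole catch): a victim task’s memory accesses leave lines in the cache, and the question is whether they survive the context switch for the next task to probe.

```python
class ToyCache:
    """Toy cache model: the set of line addresses currently cached."""
    def __init__(self):
        self.lines = set()

    def load(self, addr):
        self.lines.add(addr)          # a load leaves a footprint

    def contains(self, addr):
        return addr in self.lines     # what the next task can probe

    def flush(self):
        self.lines.clear()            # wipe all residual state


def context_switch(cache, flush_on_switch):
    """Model the scheduler switching tasks, optionally flushing first."""
    if flush_on_switch:
        cache.flush()


cache = ToyCache()
cache.load(0x4000)                    # victim touches a secret-dependent line
context_switch(cache, flush_on_switch=False)
print(cache.contains(0x4000))         # attacker task can still probe it: True

cache = ToyCache()
cache.load(0x4000)
context_switch(cache, flush_on_switch=True)
print(cache.contains(0x4000))         # residual state wiped: False
```

Note this only addresses cross-task leakage at switch boundaries; same-address-space attacks (e.g. a sandboxed script attacking its own process) would need other mitigations.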

Weather June 26, 2018 12:04 AM

There was an assembly instruction that gave the clock tick count; it’s not just a matter of removing that instruction, though. A for loop of XOR eax, eax with the trap flag set will crash.
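The underlying point, that removing a hardware timestamp instruction is not enough, can be illustrated with a counting-thread timer, a fallback well known from side-channel research: a spare thread incrementing a shared counter in a tight loop acts as a crude clock. A minimal Python sketch (illustrative only, not a realistic attack):

```python
import threading
import time


class CountingTimer:
    """A crude software clock: a background thread increments a counter
    in a tight loop, giving a timestamp source with no special instruction."""

    def __init__(self):
        self.ticks = 0
        self._run = True
        self._thread = threading.Thread(target=self._spin, daemon=True)

    def _spin(self):
        while self._run:
            self.ticks += 1           # each iteration is one "tick"

    def start(self):
        self._thread.start()
        return self

    def stop(self):
        self._run = False
        self._thread.join()


timer = CountingTimer().start()
before = timer.ticks
time.sleep(0.05)                      # stand-in for the operation being timed
after = timer.ticks
timer.stop()
print(after > before)                 # the counter advanced during the "work"
```

This is why browsers responded to Spectre by coarsening timers *and* disabling `SharedArrayBuffer` (which made such counting threads observable cross-thread), rather than relying on removing any one timing primitive.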

Clive Robinson June 26, 2018 2:51 AM

Re Complexity -v- Security

We talk glibly about reducing complexity, but that is not necessarily the correct solution.

It can easily be seen that below a certain degree of complexity, systems are not secure. Likewise, above a certain degree of interconnection, understanding the system sufficiently to unravel the complexity is at best difficult.

Years ago on this blog I gave a list of rules about making equipment more TEMPEST / EmSec hardened. I’ve also pointed out that some of those rules apply more broadly in other areas.

One thing is certain: the need for complexity is going to rise rather more than linearly over the next several human generations. However, we have already started running into the laws of physics, such as the speed of light and thermal issues (euphemistically called “Heat Death”). The likes of the Intel architecture, and those from AMD and some from ARM, are “dinosaurs”: they have gone beyond the tipping point, which is why these problems are coming up.

As I’m known to say, “The future is parallel”, and humans need to get out of their sequential thinking habits. Whilst we might think sequentially, most of our body’s processes run in parallel, and there are a number of big lessons there if we are prepared to look.

It’s clear that in the body, functions are segregated from each other and have controlled interfaces between the various parts. The result is that the complexity is managed, and in the process it allows most parts to run in parallel almost independently of each other.

We need to learn a few of nature’s lessons; after all, it’s been solving these problems far, far longer than man has existed.

echo June 26, 2018 3:19 AM

@justina.colmena

I’m sure ego and office politics play a role in failures of professional standards. While relatively slow, as some comment, good work is being done, and it really shows the value of open access to academic papers and community involvement from stakeholders at all levels of expertise and requirement.

I’m fairly sure work surrounding speculative execution has applications within corporate and democratic contexts – the same “jumping the gun” and “verification” problems exist.

ATN June 26, 2018 5:09 AM

From the paper:

“The speculative state is then either committed to the main CPU structures if the branch commits, or squashed if it does not, making all direct side effects of speculative code invisible.”

While we are at it, I also need this time machine (to revert time to some previous value) for other purposes, if you can provide it as open source…
Also, it would be nice to have DRAM row/column pages unopened, with time reverted to the previous state.

Clive Robinson June 26, 2018 5:28 AM

@ Outiz Feynough,

I’m wondering if Leslie Lamport’s TLA+ can be of assistance in getting a handle on all these side channel issues.

It’s funny you should mention Leslie Lamport. I often think that many involved with computers, be it hardware or software, should read,

http://lamport.azurewebsites.net/pubs/teaching-concurrency.pdf

Not just once, but many times as they progress through their education. The first few paragraphs, before getting into concurrency, should be a wake-up call.

Oh and it’s not just “side channels” in the various forms of “time based”, “location based”, “order based”, etc we need to think about but combinations of them.

@ ALL,

To answer an issue that arose earlier: Turing machines and quantum computers can never be intrinsically secure. However, some types of state machine can.

You need to understand that a Turing machine has no real conceptual difference between an instruction and data; it treats them all the same way. Some state machines do not have this issue, in that they “do not act on data”: what they do is entirely unaffected by any data they process. You see this in the likes of some DSP and filter systems; whilst a Turing machine can be made to function as such a simple state machine, it rarely is.

By observation, most can see how a state machine could be designed such that it did not leak data, but the flip side is that few would know how to make it do even a minuscule fraction of what the most rudimentary of programmes on Turing machines do. We have the same issue as trying to process encrypted data: whilst we can do a few things, others are not currently possible.
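The “do not act on data” property has a software analogue in constant-time (data-oblivious) programming, where the sequence of operations is identical regardless of the values processed. A minimal sketch (illustrative only; real constant-time code must also account for compiler and microarchitectural effects that Python cannot control):

```python
def ct_select(cond_bit, a, b):
    """Branch-free select: returns a if cond_bit == 1, else b.
    The same operations execute either way; no data-dependent branch."""
    mask = -cond_bit & 0xFFFFFFFF            # all-ones if 1, all-zeros if 0
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)


def ct_equal(x, y):
    """Compare two equal-length byte strings without an early exit,
    so the running time does not reveal where they first differ."""
    diff = 0
    for a, b in zip(x, y):
        diff |= a ^ b                        # accumulate all differences
    return diff == 0


print(ct_select(1, 7, 9))                    # 7
print(ct_select(0, 7, 9))                    # 9
print(ct_equal(b"secret", b"secret"))        # True
print(ct_equal(b"secret", b"secreT"))        # False
```

Crypto libraries use exactly this pattern (e.g. constant-time MAC comparison), which supports Clive’s point: it is doable for small, fixed operations, but nobody knows how to write general-purpose programs this way.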

Tanterei June 26, 2018 6:23 AM

@Who

Regarding your first post, I have to disagree with your categorization of cluster administrators. Being one myself, I can assert that compute nodes are used by multiple users, unless fully occupied by only one user. And this is going to persist in the future.
For HPC, the security concerns are secondary, since access to the machines is generally granted on a per-request basis, and attacks which influence the execution time of parallel programs over a prolonged period of time have a non-vanishing probability of being picked up, provided that the users actually estimate and monitor the run-time of their jobs.

@Clive
Sequential thinking is the natural modus operandi of humans, as is task-switching (i.e., what the crowd likes to call “multitasking”). You appear to be conflating conscious thought processes with natural processes which are largely decoupled from aforementioned thought.

It appears that our society won’t be quelling its thirst for HPC any time soon, and security IS of secondary importance here. A solution, albeit one which would produce a lot of headaches, would be a diversification of available CPUs, with performance- and security-optimized versions.

jbmartin6 June 26, 2018 8:09 AM

I never thought I would see someone use the phrase “Wake up, sheeple!” on this blog. But now I have.

echo June 26, 2018 11:18 AM

I don’t believe calling people “sheeple” helps much. As a one-off joke I find it funny, but calling people stupid, or insinuating you are superior because you know something they don’t, is disrespectful. I’m sure “sheeple” is a jargonistic rallying cry or group-bonding type of thing too, and if politics is any judge, it just serves to alienate and polarise.

Outiz Feynough June 26, 2018 12:22 PM

@Clive Robinson

Thank you for the Lamport concurrency paper link. I had thought I was a good scourer of the Lamport archives, but I missed this!

It is nice to see a treatment that goes to principles, avoiding the distractions of heavy technical machinery that has not been justified as appropriate in the context.

With all due acknowledgement of temerity, a question arises for me where Lamport characterizes category theory as “esoteric” and not needed, at least in the beginning. Categories seem to provide such an attractive unification of a lot of things that otherwise look diverse, even generalizing logic, that they are hard to resist. The initial abstract gulp is a bit dizzying, though. As examples, there are some writings by William Lawvere applying category theory to systems. Also, John Launchbury uses them in his book on partial factorization.
