Ubuntu Disables Spectre/Meltdown Protections

A whole class of speculative execution attacks against CPUs was published in 2018. They seemed pretty catastrophic at the time, but so were the fixes: speculative execution was a way to speed up CPUs, and removing those enhancements resulted in significant performance drops.
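
As a refresher, the canonical example is the Spectre variant 1 “bounds-check bypass” gadget. Here is a minimal sketch in C, adapted from the original Kocher et al. paper; the timing side of the attack is described only in the comments:

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 4096];   /* one cache line per possible byte value */
volatile uint8_t sink;

void victim_function(size_t x) {
    if (x < array1_size) {
        /* If the branch predictor guesses "taken" for an out-of-bounds x,
         * this read executes speculatively, and the secret byte it fetches
         * selects which cache line of array2 gets loaded. The speculative
         * work is rolled back -- but the cache state is not. */
        sink = array2[array1[x] * 4096];
    }
}

/* The attacker then times loads of array2[i * 4096] for i = 0..255:
 * the single fast (cached) index reveals the byte at array1[x], even
 * though the architectural result of the access was discarded. */
```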

Now, people are rethinking the trade-off. Ubuntu has disabled some protections, resulting in a 20% performance boost.

After discussion between Intel and Canonical’s security teams, we are in agreement that Spectre no longer needs to be mitigated for the GPU at the Compute Runtime level. At this point, Spectre has been mitigated in the kernel, and a clear warning from the Compute Runtime build serves as a notification for those running modified kernels without those patches. For these reasons, we feel that Spectre mitigations in Compute Runtime no longer offer enough security impact to justify the current performance tradeoff.

I agree with this trade-off. These attacks are hard to get working, and it’s not easy to exfiltrate useful data. There are way easier ways to attack systems.

News article.

Posted on July 2, 2025 at 7:02 AM • 22 Comments

Comments

PG July 2, 2025 7:09 AM

How much do you think these mitigations contributed to Intel’s economic problems, and how much is their disabling due to those problems?

Clive Robinson July 2, 2025 7:47 AM

@ ALL,

With regards,

“These attacks are hard to get working, and it’s not easy to exfiltrate useful data. There are way easier ways to attack systems.”

Is hanging on an unproven assumption that is probably not true.

That is, the assumption is that all these types of attack are now known about and characterised, and thus that a value judgment can be made.

This was Intel’s “default position” from the beginning.

However I warned that the problem would be

“The Xmas Gift that kept giving”

And to expect further similar failings for a half decade or more.

And so it turned out to be… the case: many more such errors have been found, and I’ve no reason to believe we’ve stopped finding them.

So that leaves two questions about future “black swans”,

1, Will they be hard to get working?
2, Will they be hard to use to exfiltrate data?

The answer to both of those is a “probabilistic unknown”. That is, the assumption is that,

“The low hanging fruit has already been plucked…”

It’s a dangerous assumption to make in certain circumstances.

Which gives rise to a third question,

3, Is there a working mitigation that is easy to implement?

To which the answer is,

“Yes by segregation”

The problem that arises is,

4, Are there system configurations where “mitigation by segregation” is not an option?

The answer unfortunately is “yes”.

But then the question is,

5, Why?

And the answer is a pig’s ear type MBA Mantra of,

“Any and all communications is good”.

Which is patently untrue and mostly an infantile excuse to,

“Leave all the doors and windows open for criminals to wander through.”

The real issue is “not leaving money on the table” arguments. That we don’t know what is going to be big next…

Well the simple answer to that is,

“Security vulnerabilities that will cost.”

The only question as with “ransomware” is really,

“Who gets hit first, before and after the issue is patched?”

The fact is there are mitigations other than hard segregation… But even those that are relatively inexpensive mostly don’t get implemented, for fiscal reasons.

So saying,

“I agree with this trade-off”

Is kind of accepting that,

“Incompetent management is king”…

But is it? Because in the US that is in effect the legal position due to “maximise shareholder value” determinations in court effectively pushing the “Short term value” viewpoint…

Other places and entities have different view-points, and the cost of a 10-30% loss in performance is very small compared to dealing with a ransomware or similar attack and the clean-up, especially if NatSec is involved.

Anonymous July 2, 2025 9:16 AM

“…there is the possibility that this would open up an unknown avenue for attack…without any known exploits…could open up some other bug that was covered up by the mitigations…we have some confidence…we could have unknown behavioral differences…”

That’s a lot of unknowns.
They call this security?

Mr. Peed Off July 2, 2025 9:59 AM

The spectre/meltdown protections have long been cursed by gamers and other computer performance enthusiasts. Most users are unlikely to be targeted by such an attack. Certainly those who are likely to be targeted should take all precautions.

Clive Robinson July 2, 2025 11:00 AM

@ Mr Peed Off, ALL,

With regards,

“The spectre/meltdown protections have long been cursed by gamers and other computer performance enthusiasts”

Have you read the bottom of the ARS Article by Dan Goodin where in the penultimate paragraph he says,

“One thing Ubuntu users should know is that the change will only provide performance boosts when GPUs are handling workloads running the OpenCL framework or the OneAPI Level Zero interface. That likely means that people using games and similar apps will see no benefit.”

Grima Squeakersen July 2, 2025 3:47 PM

Are cyber-currency mining systems vulnerable to the Spectre/Meltdown (or similar) attacks, and to performance hits from the mitigation of same? If so, I see interesting possible incentives regarding this trend to disable the mitigations. One would be to increase the efficiency of mining systems. Another (contradictory, probably) would be to make mining systems vulnerable to attack. Either possibility could be directed to manipulate wealth in some fashion, at some scale. With cyber-currency being increasingly touted as a vehicle for international trade and wealth storage, the timing could be propitious for some.

Jon (a different Jon) July 2, 2025 9:06 PM

I think it is safe to say that speculative execution is inherently a security weakness. If it exists, it is exploitable. Somehow.

Sorta like a window in a house. It is a security weakness – but compare the costs of perfect security with living in a house with no windows.

For some houses, the value of the window exceeds the security costs. For some bunkers, the value of the security exceeds the value of the window.

Welcome to engineering, specifically security engineering. ‘s always a trade-off somewhere, and you get to make that decision (mostly) on your own.

That said, attacks on CPUs accessible via on-line connections will only get better and easier, unlike attacks on windows (glass ones, not Microsoft ones) which remain relatively stable over time. A trade-off that was acceptable ten years ago might not be now.

And time is not on the side of the defense.

J.

Gilbert July 3, 2025 5:37 AM

I keep those protections on my main laptop, a Lenovo running Fedora, which I use for browsing and programming.

But I have a few powerful computers at home that are used to run or design models, and I did turn off those mitigations there to get as much performance as possible. But those machines do not access the Internet.

The difference can be seriously huge, up to 30% from what I see on those machines.

The problem is I do not see how we are going to fix this. A huge part of the performance of our CPUs comes from caches and from speculative execution… If we remove all of those things, performance will be lower. And as we move to a world with many processors among which to distribute computing work, we are going to reach a point where security wants us to remove or restrict those mechanisms, while we want that distribution across many compute units because we can no longer raise the frequencies (this is a hardware issue, and we are nowhere near the technological level of being able to use terahertz components or CPUs).

An interesting question to ask is: what would happen if we sought alternatives to the von Neumann architecture? We currently have machines where the same memory space stores both data and the instructions to execute. But we can do something else: an “infinite” stack with its own space, and another memory space that contains the reverse-Polish, Forth-like language that acts on that stack, which is memory for data (and data only). Don’t think this is strange: this is how nuclear weapons are programmed. Forth, an infinite (as long as memory is, so not really infinite) stack with numbered levels where you put data, and a Forth/RPL-like language (with drops, dups, rotations, functions, etc.), with a strict physical hardware divide between data on that stack and the code executed on it. As the weapon flies, data from sensors is dropped on 1 or n levels of the stack, and code uses that data by “eating” it from the stack, dropping the result there, and sometimes dropping several intermediate values on the levels above. This hardware does exist, we use it for very critical and dangerous work, and moving away from the von Neumann architecture was done after careful reflection…
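
To make the separate-code-and-data idea concrete, here is a minimal sketch (mine, purely illustrative, in C) of such an RPN stack machine: the program is a read-only instruction array, and the only writable memory is the data stack:

```c
#include <stdio.h>

/* The "program" lives in its own read-only array; the only writable
 * memory the interpreter touches is the data stack. */
typedef enum { PUSH, ADD, MUL, DUP, PRINT, HALT } Op;
typedef struct { Op op; int arg; } Insn;

static void run(const Insn *code) {         /* code is never writable here */
    int stack[64], sp = 0;                  /* data lives only on this stack */
    for (const Insn *ip = code; ; ip++) {
        switch (ip->op) {
        case PUSH:  stack[sp++] = ip->arg;           break;
        case ADD:   sp--; stack[sp-1] += stack[sp];  break;
        case MUL:   sp--; stack[sp-1] *= stack[sp];  break;
        case DUP:   stack[sp] = stack[sp-1]; sp++;   break;
        case PRINT: printf("%d\n", stack[sp-1]);     break;
        case HALT:  return;
        }
    }
}

int main(void) {
    /* "3 4 + dup *" -> (3+4)^2 = 49 */
    static const Insn prog[] = {
        {PUSH, 3}, {PUSH, 4}, {ADD, 0}, {DUP, 0}, {MUL, 0}, {PRINT, 0}, {HALT, 0}
    };
    run(prog);
    return 0;
}
```

In a software interpreter like this the divide is only a convention; the point of the hardware described above is that the divide is physical, so no stack content can ever be fetched as an instruction.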

I am a bit tired of the eternal backward compatibility. In the x86 world (either 32-bit or 64-bit) we are slowly moving forward, dragging that compatibility like an iron ball tied to our ankles: just look at how our computers boot, and all the crap we keep that is the heritage of how those machines worked when we had 8086/8088 processors. When are we going to have either Intel or AMD design a new CPU, 64-bit or more, 100% designed to be “new”, where we drop all the 8086/DOS-era stuff and make the BEST choices for performance and security at EACH STEP? Then we port our operating systems to this new platform, remove all the old crap, and make the jump to a new generation. And when designing those new processors, design them so they are unaffected by the current security issues. Like in cryptography today: if you design something, it MUST be immune to all the known attacks… (immune as in either fully immune or making the attack very, very hard).

I think Intel had a project to design a technological “jump” and start from a blank page, free of all that old backward-compatibility crap that is tying us into complexity. But Intel is dying, has been dying for years, and they abandoned that project…

Clive Robinson July 3, 2025 8:08 AM

@ Grima Squeakersen, ALL,

With regards,

“Are cyber-currency mining systems vulnerable to the Spectre/Meltdown (or similar) attacks, and to performance hits from the mitigation of same?”

I would assume “Yes” to both questions “for some types” of mining (but not all).

But the question that should be asked is,

“Do such rigs need to be connected directly to the Internet or other public network?”

To which the answer is “No”

So the answer to the problem of vulnerability by this type of attack is,

“You can sufficiently mitigate it on mining rigs you own”

However… As we know, some mining rigs are run by malware on other people’s equipment. They generally need Internet connectivity, so they would be vulnerable…

But as those who control what are effectively “bots” are in practice stealing other people’s electricity and prematurely aging their hardware… Sensible mitigations will “kill the thieves’ business model”, which is probably a good thing.

jbmartin6 July 3, 2025 8:30 AM

I can’t blame them. As far as I know, there have been no real exploits of these, and there is no significant effort to find any. Of course, my knowledge is far from complete and there is always the future to worry about. On the other hand, the by far most likely path of exploitation, the web browser, mitigated this risk in its JavaScript engines shortly after the publication of the vulnerability. There are still a huge number of easier and better paths attackers can use that we could address before blindly accepting a huge performance hit.

Clive Robinson July 3, 2025 1:33 PM

@ jbmartin6,

With regards,

“As far as I know, there have been no real exploits of these and there is no significant effort to find any.”

Thus it is an issue of,

“If we don’t look we can not know.”

And thereby hangs the real issue, because most significantly harmful attacks are actually seen not so much by the actual attack, but by either the side effects of the data exfiltration stages of the attack or the final desired result of the attacker being reached.

For instance “ransomware”: how many times have attacks been caught in the early stages, before harm is actually done?

Rather than when either a network utilisation alarm happens or data is no longer available to an organisation and,

“The butcher’s bill is presented for payment, fait accompli.”

We used to say APT[1] for certain types of attacker “goals”, and now we have the notion of LotL[2] for certain types of attacker “covert methods”.

When combined APT and LotL allow a “toe-hold” to, over time, become not just a “major bridge-head”, but a “full infiltration and ownership” of an organisation / entities information systems, data and knowledge.

Part of the reason they are a success are,

1, Zero Day or similar “no signs” attack entry methods.
2, Slow change or “no sign” payload methods.

The important thing is that the “no sign” methods, often called “below the noise” or “in the grass” methods, are “low and slow” methods that stay below automated alarm and most human eyeball thresholds. Thus they are in effect “invisible to system operators and admins”.

You can only respond to things you observe in part or whole, and by the time it gets to whole it’s usually a fait accompli and way too late to do anything.

Worse you might only become aware once the attacker deliberately makes you aware.

We know that awareness sometimes only happens indirectly as with “industrial espionage” when it’s too late and things have in effect become existential for an organisation / entity.

We know that the French, Israeli, Chinese, and Russian state entities have done, and we assume still do, APT/LotL, and thereby acquire industrial know-how at a very, very tiny fraction of the R&D costs of the organisations they acquire it from. Thus their respective national economies or industries benefit greatly, not just at low cost but, in comparative terms, in little time.

Such information can be worth more than any amount ransomware or more common criminal activities might acquire…

[1] “Advanced Persistent Threat”(APT) attacks,

“An Advanced Persistent Threat (APT) is a sophisticated cyber threat where an attacker tries to intrude on a target network stealthily and maintain long-term access to the infrastructure inside the target network, exfiltrating crucial information. The main goals of APTs are espionage, hacktivism, financial gains, or destruction. In this blog, you will understand the life cycle of an APT, how APT works, and some examples of notorious APT groups.

The attackers behind the APTs typically operate as groups and have specific common goals and understanding, which they aim to achieve by a collaborated “slow and steady” approach, which is often successful. APT groups have both the capability and the will to cause catastrophic damage to organizations. The offenders operating APTs are usually experts, passionate, planned, and experienced cybercriminals with strong financial backing and access to a wide range of intelligence-gathering techniques.”

https://www.netsecurity.com/what-is-an-advanced-persistent-threat-apt/

[2] "living off the land" (LotL) attacks

Living-Off-The-Land (LotL) is a term used in cybersecurity to describe a set of techniques employed by attackers that leverage legitimate tools, software, and features inherent to the target system or network to carry out malicious activities. Rather than relying on external malware or malicious software, attackers exploit the existing capabilities of a system to avoid detection and maintain persistence within a compromised environment.

The core idea behind LotL techniques is to blend in with normal system operations. By using trusted tools and processes that are already present in the environment, attackers can execute their malicious code, exfiltrate data, escalate privileges, and achieve their goals without raising alarms. This approach makes it difficult for traditional security tools, which often rely on signature-based detection, to distinguish between legitimate and malicious activities.

https://www.netsecurity.com/what-is-living-off-the-land-lotl-technique-and-how-to-detect/

lurker July 3, 2025 2:07 PM

@Gilbert

Your thoughts parallel mine. But we can’t wait for Intel to solve the problem. They’re a commercial company, weighted as you say by the iron ball of backwards compatibility. And the Forth/RPL manipulation of a separate data stack was probably in the pockets of Intel engineers, in the form of HP calculators, at the time they invented the 8086.

Peter A. July 3, 2025 2:53 PM

@Gilbert:

Even a non-von-Neumann architecture is not a panacea. If the code is buggy, malformed/crafted data can cause it to go down paths that are not desired/harmful.

Such a separate-code-and-data Forth system is good for one-off gigs. You design it, code it, review, test, remove as many bugs and vulns as you can find in the set time, then blast it onto some kind of WORM memory and you’re good. If you need to change something, you replace the whole thing or the WORM unit, at the hardware level.

It is not practical for general purpose computing that needs to adapt to changing needs. There’s no sensible update path. Code is data. How are you going to separate code paths and data paths all the way from the vendor down to the consumer device? One way to go has already been trodden in the 8-bit era: cartridges. You want a new version of a software package, you buy a new cartridge and maybe throw out the old one. But counterfeit cartridges (mostly pirated games) were available. How are you going to secure the whole supply chain to be sure you get the original, bug-free, secure software and not one “salted” by some TLA, or a criminal organization, or by the vendor itself?

Jon (a different Jon) July 3, 2025 9:02 PM

@Gilbert:

Incidentally, that (a “Harvard” architecture) is how many (most?) embedded microcontrollers work. The code is written into a ROM, typically through an entirely different piece of programming hardware, while the memory (RAM) used by the code is a physically separate pile of transistors.

Most do, of course, provide ‘bootloader’ instructions, wherein what the RAM contains can change the ROM, but they’re quite persnickety to use and in many cases can be simply omitted or deliberately disabled by the original code writer(s).
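
For a concrete taste of that split, this is roughly what it looks like on an AVR part with avr-gcc (a hedged sketch; the table and function names are mine): constants can be pinned into flash, and reading them back needs a dedicated program-memory load rather than an ordinary pointer dereference:

```c
#include <stdint.h>
#include <avr/pgmspace.h>

/* Lookup table placed in flash (code space); it never occupies RAM. */
static const uint8_t gamma_table[4] PROGMEM = {0, 16, 64, 255};

uint8_t gamma_lookup(uint8_t i) {
    /* A plain dereference would read the *data* address space (RAM).
     * Flash lives in a separate address space and needs its own load
     * instruction (LPM), which this macro wraps. */
    return pgm_read_byte(&gamma_table[i & 3]);
}
```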

@Peter A:

Correct, that weird data can cause (buggy?) code to go to strange places, but unless the original designer has put in the ability for the code to modify itself (the ‘bootloader’) then there’s little the data can do that’s permanent.

Hence the lovely “Turn it off and back on again” school of debugging. Clearing out the ‘weird data’ and starting over. 😉

Finally, and perhaps unfortunately, many modern embedded microcontrollers are stampeding towards an almost all ‘bootloader’ environment because it’s cheaper that way – you don’t need the expensive programming hardware anymore. This breaks that separation between code and data – but it’s cheaper that way. Sigh.

J.

Swede July 4, 2025 12:23 AM

This affects the use of OpenCL, which is associated primarily with GPUs. For example, AFAIK OpenSSL does not use it.

Clive Robinson July 4, 2025 5:33 AM

@ Swede, ALL,

With regards,

“This affects the use of OpenCL, which is associated primarily with GPUs”

That is my understanding as well (hence my comment to @Mr Peed Off).

However the ARS article is written in a way that conflates the original CPU issue with this GPU issue by not making a specific clarification.

But in the more general case these types of “go faster stripe” tricks can be found anywhere you can have parallel execution by hardware low in the stack (it’s why they are still being found in all architectures).

Worse, they can also “carry from within” secrets out of secure enclaves and similar that use encryption to protect information storage and communication.

Because of the three things you can do with information,

1, Communicate it
2, Store it
3, Process it

The most sensitive currently is “Process it” and such tricks carry Meta-Data right out of the secure process environment.

The problem with this type of “covert channel” is that the “meta-data” is of a form that can rebuild not just the data, but what values of data are being treated as important within the processing.

It’s an aspect of “parallel processing” at all levels in the computing stack that can leak confidential information. For some reason it’s hardly ever talked about, if not actually actively discouraged.

Perhaps because we are now inching ever closer to the “physics wall” that terminates Intel founder Gordon Moore’s observation, now called “Moore’s Law” even though it’s only an observation made six decades ago and not a law, and that has certain “wall hitting” implications.

Most high end silicon manufacturers are trying to “turn from the wall” by doing one or more forms of parallel processing. We’ve moved up from doing it in the “micro-code”, where Meltdown worked, to “multi-core”, where the “saving grace” currently is that so many programmers can not think in anything other than the traditional serial way that the issues can be avoided, but at a higher performance cost…

This will need to change and with the change this kind of covert channel leakage will “open up” as people try to push up one or more of the efficiency / performance curves that underlie Gordon Moore’s observation.

The ultimate issue that will not be avoided is “C, the speed of light”: it puts a rather hard limit on how far you can shift information in any given unit of time. There are ways you can work with it, but it involves removing “closed loops” and similar from code, which many sequentially minded programmers see as an essential foundation method for what they do…

Have a look at,

https://en.m.wikipedia.org/wiki/Loop_unrolling#Advantages

To see some of the problem areas that have to be mitigated if “C is to be tamed”.
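
For those who haven’t met the technique, a small illustrative C sketch (mine, not from the linked article): the unrolled version trades code size for fewer branches and four independent accumulations that the hardware can run in parallel without speculating across iterations:

```c
/* Rolled: one compare-and-branch per element. */
int sum_rolled(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Unrolled by four: a quarter of the branches, and the four
 * independent partial sums have no dependency on each other. */
int sum_unrolled4(const int *a, int n) {
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0, i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];     s1 += a[i + 1];
        s2 += a[i + 2]; s3 += a[i + 3];
    }
    int s = s0 + s1 + s2 + s3;
    for (; i < n; i++)          /* remainder when n isn't a multiple of 4 */
        s += a[i];
    return s;
}
```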

Who? July 4, 2025 5:59 AM

@ Clive Robinson, Mr. Peed Off

I agree with Clive Robinson here. There are clever proofs of concept of these exploits, and they are not so hard to exploit once you have a code base that can be adapted.

We would say RowHammer was very hard to exploit; however, there are proofs of concept that modify the exact bit we are looking at. Same with the Funtenna hack.

Establishing communication between computers that is both efficient and secure against both classical and quantum computing looks hard. However, we have OpenSSH. No one would say OpenSSH is hard to use.

About the comment of Mr. Peed Off:

The spectre/meltdown protections have long been cursed by gamers and other computer performance enthusiasts. Most users are unlikely to be targeted by such an attack. Certainly those who are likely to be targeted should take all precautions.

I do not care about gamers, or those who believe high-profile attacks against citizens are an urban legend. We have three-letter agencies and governments hacking our computers because they do not like our activities (I can hardly imagine why helping an NGO is a crime). We have criminal organizations exploiting the same infrastructures used by governments to spy on their citizens.

We even have corporations like Meta and Yandex spying on citizens using techniques that should be the domain of high-profile surveillance agencies.

Gamers want insecure computers? Great, build those systems for them! But let those who want computers for something more than killing other players have secure platforms.

In the end, the problem is that corporations like Intel and AMD are too greedy; they know SMT is dangerous, but they implement it (even if SMT can be a bottleneck for certain high-CPU usages). They know speculative execution is dangerous, but they do not care, as it makes their processors look faster.

The problem is not just corporations being too greedy; the problem is that they impose their decisions on all of us. I have not read anything about speculative-execution-free processors yet. I have not heard about truly Intel AMT/ME-free systems yet.

They do not sell high-assurance platform computers to end users. Why is security only for governments? Why do citizens not have the right to buy and use the same secure systems?

And no, Bruce, like Clive I do not agree with your comment. These exploits can be hard, but once you have one of them you can use the same exploit against millions of computers.

I would think carefully about this matter; the answer is not as simple as it seems.

Clive Robinson July 4, 2025 4:54 PM

@ ALL,

In my comment above, I mentioned that “C the speed of light” was a wall, but not, as many would think, “a hard wall”.

I mentioned using “loop unrolling” as a way to get around some of the issues of C.

However there are other ways, one of which I was looking at back in the 1990’s to do a PhD on. Basically I saw, with the birth of the Internet, that highly parallel computing would fairly quickly get beyond the likes of Hyper-Cubes and other similar geometries that Richard Feynman, the Nobel physicist, was investigating[1], where even a million CPUs could be built into a 100 by 100 by 100 cube and thus cut the distance data had to travel by a factor of ~ten.

But cutting distance does not cut the data rate, hence the bandwidth issues.

I looked into a couple of aspects of reducing both the symbol rate (baud) and the data rate. The first was using a form of phase and amplitude modulation where each symbol could carry 64 bits of transmitted data. The second was to reduce the number of bits transmitted by using a compression algorithm.

The problem with many compression algorithms is that they “are the wrong fit” for “general data”, and if you use the wrong one you can actually end up increasing, not reducing, the number of transmitted bits. Worse, many of the algorithms were “wallowing pig slow” and needed a lot of CPU cycles.

Now some of the more “badgered” or “silver fox” readers will know that the IEEE and CEPT were both looking into using compression over telephone lines. The eventual result was an average of 56K end to end data over a channel that in hard theory could not support 19K symbols… The 3 to 1 end to end data rate increase over the transmission rate was basically down to using variations of compression algorithms.

Now I thought this was all “old hat” from 3 decades ago thus in IT terms an idea that is 20 Generations old. So imagine my surprise on reading this,

https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Compression_dictionary_transport
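
The underlying idea is easy to demonstrate with zlib’s long-standing preset-dictionary call (a toy sketch of mine; the dictionary and message strings are invented): both ends agree on a dictionary out of band, and the compressed stream then only carries back-references into it:

```c
#include <stdio.h>
#include <zlib.h>

int main(void) {
    /* Shared out-of-band dictionary; the decompressor must load the
     * identical bytes (via inflateSetDictionary) before inflating. */
    const Bytef dict[] = "speculative execution side channel mitigation";
    const Bytef msg[]  = "speculative execution mitigation disabled";
    Bytef out[256];

    z_stream s = {0};
    deflateInit(&s, Z_BEST_COMPRESSION);
    deflateSetDictionary(&s, dict, sizeof dict - 1);

    s.next_in  = (Bytef *)msg;  s.avail_in  = sizeof msg - 1;
    s.next_out = out;           s.avail_out = sizeof out;
    deflate(&s, Z_FINISH);

    printf("%zu bytes in, %lu bytes out\n",
           sizeof msg - 1, (unsigned long)(sizeof out - s.avail_out));
    deflateEnd(&s);
    return 0;
}
```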

The fact it’s all been done before by mathematicians, “Hardware Engineers” and one or two highly unusual communications engineers does not get mentioned.

But hey “Been There Done That” but unlike Richard Feynman I didn’t “get to wear the tee-shirt” 😉

[1] Richard Feynman has been credited with inventing the idea of Quantum Computing, however others at a well known English University have a different view… Which is why Prof David Deutsch is often described as the “father of quantum computing”.

Less well known about Richard Feynman by “the young-uns” is his work assisting the development of the “Connection Machine”, a massively parallel CPU system with 65,536 CPUs connected as a Hyper-Cube. The story is well worth reading as it indirectly shows the “C light speed wall” issue from a different perspective,

https://longnow.org/essays/richard-feynman-connection-machine/

Name July 5, 2025 12:22 AM

Truth be told, Ubuntu was fun back in the day. Now, I wouldn’t trust Ubuntu/Canonical and I sure as heck wouldn’t enjoy using Ubuntu, Kubuntu, Xubuntu, etc.

I enjoy Debian with Fluxbox or Openbox. They just work in simplicity, saving a LOT of RAM and you don’t have the bloat of a Desktop Environment. If you must go with a DE, try XFCE instead of Gnome and KDE, both of which I find incredibly bloated and buggy.

Happy NO KINGS day!

Jon July 5, 2025 11:59 PM

@ Clive Robinson

Hehe. I’ll give you a better example of ‘loop unrolling’ in: VHDL.

It’s a hardware description language for programming an FPGA (or equivalent chip) and it does let you write loops to simplify your logic equation generation.

So I cheerily wrote a loop (including XOR gates – oops?) that would generate the gates I wanted.

Only eighteen trips through the loop, too. Note: The loop is only in the source code, not the executable.

Compiling the VHDL into the binary required to program the chip took a couple of hours, and devoured a couple of gigabytes of RAM, but it would eventually sort itself out into a reasonable program.

The rest of the program was basically trivial, so this bugged me. I unrolled the loop.

Unrolled (only 18 trips through!) and the whole thing compiled in less than one second, using so little extra memory that I couldn’t measure it. The resulting files were identical.

It’s a known problem, called ‘Product Term Blowup’, and after generating a couple of gigabytes of redundant product terms it would then cheerfully optimize them all out again, but the compilation time (and resource!) benefits from unrolling the loop were astronomical.

So yeah. It’s not just ‘C’ that can benefit from loop unrolling. J.

Clive Robinson July 6, 2025 5:31 AM

@ Jon, ALL,

With regards,

“It’s a known problem, called ‘Product Term Blowup’[1]”

Yes, but it’s rather more than “a known problem”… hence it’s one of those things I try to avoid like “land mines and hell mouths”, that get you in both your conscious and subconscious states of existence 😱

For those that want to play along consider the concept of “the excluded middle” and why we find great comfort in it… That is why we like true or false but not that bit in the middle. A consequence of which is the age old question of,

“Is zero a number or not?”
“And if it is, is it positive or negative?”
“What meaning does the inverse of zero have?”

But also consider the number line as an entity and you pick a point and call it “zero” and thereby divide it into two. Thus numbers to the left are the set of “negative numbers”, numbers to the right are “positive numbers”. What do you do with zero?

There are four obvious options of where you might put it,

1, In a set of its own
2, In the positive set
3, In the negative set
4, In both sets.

The answer rather depends on the answer you are looking for to a question you might have.

But one thing to note,

2 is the logical inverse of 3,
1 is, after a moment’s thought, the logical inverse of 4.

Thus where is the middle ground?

Hence there is an issue, which allows arguments for all sorts of things like proving Unicorns exist. Or there are an infinite number of rabbits in the magician’s hat…
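
(As an aside, IEEE 754 floating point arguably chose option 4: there is a zero in each set. A small C sketch, assuming IEEE doubles, shows the two compare equal yet remain distinguishable:)

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double pz = 0.0, nz = -0.0;
    /* The two zeros compare equal, yet carry different signs. */
    printf("equal: %d, signbit(-0.0): %d\n", pz == nz, !!signbit(nz));
    /* And their "inverses" diverge: +inf versus -inf. */
    printf("1/+0 = %f, 1/-0 = %f\n", 1.0 / pz, 1.0 / nz);
    return 0;
}
```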

As I’ve said, these are “places I fear to tread”, and mariners of old used to scrawl “here be dragons” on their charts, because life mostly is not “Baby Unicorns or Pretty Rainbows[3]”.

[1] For those who’ve not heard this and similar terms “blowup” is not as in an “explosion” but as in the “magnification of ever finer detail” in a photograph. Or if you have “personal hells” –and we all do by a certain age– you might say “going down the rabbit hole”[2].

[2] But… The thing about “rabbit holes” is that unlike most holes, they commonly lead into warrens. One characteristic of which is they have not just the “rabbit hole” they also have hidden/concealed ways that are “escape hatches”…

[3] Next time you see a rainbow, have a good look… You will find you do not see brown… Because brown is not a primary colour, or a colour that falls between primary colours as a distinct spectral frequency. Arguably brown is a hallucination of the way our brains try to tell us what our eyes’ three primary colour sensors are registering.
