A Taxonomy of Cognitive Security

Last week, I listened to a fascinating talk by K. Melton on cognitive security, cognitive hacking, and reality pentesting. The slides from the talk are here, but even better, Melton has a long essay laying out the basic concepts and ideas.

The whole thing is important and well worth reading, and I hesitate to excerpt. Here’s a taste:

The NeuroCompiler is where raw sensory data gets interpreted before you’re consciously aware of it. It decides what things mean, and it does this fast, automatic, and mostly invisible. It’s also where the majority of cognitive exploits actually land, right in this sweet spot between perception and conscious thought.

This is my term for what Daniel Kahneman called System 1 thinking. If the Sensory Interface is the intake port, the NeuroCompiler is what turns that input into “filtered meaning” before the Mind Kernel ever sees it. It takes raw signal (e.g., photons, sound waves, chemical gradients, pressure) and translates it into something actionable based on binary categories like threat or safe, familiar or novel, trustworthy or suspicious.

The speed is both an evolutionary feature and a modern bug. Processing here is fast enough to get you out of the way of a thrown object before you’ve consciously registered it. But “good enough most of the time” means “predictably wrong some of the time.”

A critical architectural feature: the NeuroCompiler can route its output directly back to the Sensory Interface and out as behavior, skipping the conscious awareness of the Mind Kernel entirely. Reflex and startle responses use this mechanism, making this bypass pathway enormously useful for survival. Yet it leaves a wide-open backdoor. If the layer that holds access to skepticism and deliberate evaluation can be bypassed completely, a host of exploits become possible that would otherwise fail.

That’s just one of the five levels Melton talks about: sensory interface, neurocompiler, mind kernel, the mesh, and cultural substrate.
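To make the analogy concrete, here is a minimal Python sketch of the first three layers and the bypass pathway described in the excerpt. Only the layer names come from Melton; the threat score, the reflex threshold, and every function below are my own illustrative assumptions, not anything from the essay.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Signal:
    raw: str             # raw stimulus (photons, sound waves, ...)
    threat_score: float  # assumed pre-computed salience, 0.0 to 1.0

# Assumed cutoff for the fast-path bypass; not from the essay.
REFLEX_THRESHOLD = 0.9

def sensory_interface(stimulus: str, threat_score: float) -> Signal:
    """Intake port: packages raw input for the layers above it."""
    return Signal(raw=stimulus, threat_score=threat_score)

def neurocompiler(sig: Signal) -> Optional[str]:
    """Fast, automatic interpretation (Kahneman's System 1).
    May route straight back out as behavior, skipping the Mind Kernel."""
    if sig.threat_score >= REFLEX_THRESHOLD:
        return "duck!"  # reflex: conscious skepticism never runs
    return None         # otherwise, hand off to deliberate evaluation

def mind_kernel(sig: Signal) -> str:
    """Slow, deliberate evaluation (System 2)."""
    return f"deliberating on {sig.raw!r}"

def perceive(stimulus: str, threat_score: float) -> str:
    sig = sensory_interface(stimulus, threat_score)
    reflex = neurocompiler(sig)
    if reflex is not None:
        return reflex          # bypass pathway: the backdoor
    return mind_kernel(sig)    # normal, slower path

print(perceive("thrown object", 0.95))  # -> duck!
print(perceive("odd email", 0.30))      # -> deliberating on 'odd email'

The point of the sketch is the early return in perceive(): any input that clears the reflex threshold becomes behavior without ever reaching the layer that can apply skepticism, which is exactly where Melton locates the exploitable backdoor.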

Melton’s taxonomy is compelling, and her parallels to IT systems are fascinating. I have long said that a genius idea is one that is incredibly obvious once you hear it, but that no one has articulated before. This is the first time I’ve heard cognition described this way.

Posted on April 1, 2026 at 5:59 AM

Comments

Evan April 1, 2026 7:29 AM

Interesting concept, but I am always wary of comparisons between brains and computers in either direction, as it tends to lead to more confusion than clarity.

R.Cake April 1, 2026 7:36 AM

@Bruce, thank you for sharing; quite a brilliant analysis. It seems to me that rather than assuming only two base layers (“the mesh” and “cultural substrate”), one could equally assume a larger number of layers that effectively form an onion structure between the smallest and the largest scope of society. For scientific analysis purposes, quantizing this to two layers probably makes sense.
I feel this research may be helpful to everyone who still has a sliver of analytic thinking left, as a valuable tool for reflection.
On the other hand, it would also be very interesting to know what hardcore conspiracy theorists would think about this essay… 🙂 I will try raising the topic with a certain family member and see what happens.

mark April 1, 2026 12:44 PM

I just read your excerpt, Bruce. My instant reaction was to picture this as fitting perfectly with the Structural Differential of General Semantics.

lurker April 2, 2026 6:10 PM

@bye bye AI

Even if the essay is a prank, it pokes a stick at the meatspace stack that is almost always ignored in systems and security analysis.

Rontea April 3, 2026 9:57 AM

Testing and remediating cognitive vulnerabilities risks crossing into manipulation itself. The so-called “consent paradox” is a serious problem we’ll need to solve if this field is to mature responsibly. Still, this taxonomy offers a crucial step toward thinking about cognitive security with the same rigor we apply to traditional cybersecurity.

Matt April 3, 2026 1:52 PM

Melton is not a neurologist, so I’d be a bit wary of simplified cognitive maps like this. That doesn’t mean they’re wrong, but someone who is not a neurologist confidently expounding on how the brain works raises red flags immediately.

If you’re not sure why, then just remind yourself that the human brain is the most complicated object in the history of the universe.

geeknik April 4, 2026 10:44 PM

Whoever maps the pre-rational layer gets power over behavior.
It’s worth watching, but the computer-brain metaphor is doing a lot of unpaid labor.

ResearcherZero April 5, 2026 1:48 AM

The further up the stack the mind kernel is positioned, the less it bothers to process categories of raw data that it flags as improbable and unlikely to be of pressing concern.

Examples of this include defense briefings on the risk of oil shock due to the outbreak of conflict in the Persian Gulf and the need to shore up reserves.

When executive mind kernels are briefed on the impact of oil shock by top_brass, the data is deemed improbable and not urgent. The executive flags this dataset as low priority, and lobbyists from oil and gas are ushered in as the top_brass are ushered out.

The oil lobby explains that war will be great for agriculture, due to the rise in the price of commodities. The lobby fails to mention that it will be great for the oil and gas industry but bad for the vast majority of other sectors (including agriculture), due to shortages caused by conflict-impaired trade routes, shortages that drive up the cost of inputs for industry.

Higher input costs mean higher output costs, slowing growth and employment and raising inflation. If this trend continues, it leads to stagflation, such as took place in the 1970s.

The executive mind kernel flags data input from oil&gas as high priority. Rather than adopt the advice from top_brass to shore up and expand on-shore fuel storage facilities, the executive implements the cheaper option proposed by oil&gas: offshoring fuel reserves.

There are worse examples, designated secret.
(Exact descriptions of likely conflicts, assessments of the fallout domestically and internationally, other supply shocks…)

There are risks with faulty or poorly prioritized datasets, or plain stupidity.

https://www.chathamhouse.org/2026/03/iran-war-highlights-creeping-use-ai-warfare

Risks of “cognitive surrender” and complacency.
https://arstechnica.com/ai/2026/04/research-finds-ai-users-scarily-willing-to-surrender-their-cognition-to-llms/

ResearcherZero April 5, 2026 3:23 AM

If the leadership makes the decision to layer AI on top of an already hapless and inefficient executive idiot stack, it will lead to bad outcomes, confusion, overworked employees, and a resulting “product” nobody wants to consume. (Even if it is force-fed and everyone has to eat it.)

There is a reason for the specific marking requirements and the classification and control standards implemented at each and every department and agency. Hence the markings at the top of each report, designating the classification status, the level of risk and urgency of each issue outlined in the report, and the overall recommendation on how immediately action should be taken to mitigate the foreseeable risks.

Typically there is a section at the end of each report and assessment titled Conclusion, often accompanied by a Recommendations section. (If no one reads that far, it is easy to miss.) The idea behind marking requirements is so that one can easily judge how f–cking severe the outlined issues might be and what priority addressing the matter should be given in relation to all other considerations.

If an assessment is designated both URGENT and HIGH PRIORITY, it was given that designation for a reason. Those kinds of reports have been subjected to very stringent review. Time is already ticking.

A human review process should provide a far more reliable indicator of the trustworthiness of a report’s designation.

Artificially generated designations and content should be treated with scepticism and subjected to human review. They should not be used to make important decisions or blindly trusted, even when prepared for a potential quagmire.

Winter April 5, 2026 6:23 AM

@ResearcherZero

Examples of this include defense briefings on the risk of oil shock due to the outbreak of conflict in the Persian Gulf and the need to shore up reserves.

But the aim is to be a sycophant to a Narcissistic Senior Citizen with advancing dementia.

The plan is simply to lay a fire and see whether the house burns down. The Mad Red Hatter does not want to hear anything else.

lurker April 5, 2026 2:16 PM

@ResearcherZero

There is a reason for the specific marking requirements and the classification and control standards implemented at each and every department and agency. Hence the markings at the top of each report, designating the classification status, the level of risk and urgency of each issue outlined in the report, and the overall recommendation on how immediately action should be taken to mitigate the foreseeable risks.

Yes,
but,
is there any formal examination of the standard of reading and comprehension of the leadership who will be handling the report?
