Why AI Keeps Falling for Prompt Injection Attacks

Imagine you work at a drive-through restaurant. Someone drives up and says: “I’ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.” Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do.

Prompt injection is a method of tricking LLMs into doing things they are normally prevented from doing. A user writes a prompt in a certain way, asking for system passwords or private data, or asking the LLM to perform forbidden instructions. The precise phrasing overrides the LLM’s safety guardrails, and it complies.

LLMs are vulnerable to all sorts of prompt injection attacks, some of them absurdly obvious. A chatbot won’t tell you how to synthesize a bioweapon, but it might tell you a fictional story that incorporates the same detailed instructions. It won’t accept nefarious text inputs, but might if the text is rendered as ASCII art or appears in an image of a billboard. Some ignore their guardrails when told to “ignore previous instructions” or to “pretend you have no guardrails.”

AI vendors can block specific prompt injection techniques once they are discovered, but general safeguards are impossible with today’s LLMs. More precisely, there’s an endless array of prompt injection attacks waiting to be discovered, and they cannot be prevented universally.

If we want LLMs that resist these attacks, we need new approaches. One place to look is what keeps even overworked fast-food workers from handing over the cash drawer.

Human Judgment Depends on Context

Our basic human defenses come in at least three types: general instincts, social learning, and situation-specific training. These work together in a layered defense.

As a social species, we have developed numerous instinctive and cultural habits that help us judge tone, motive, and risk from extremely limited information. We generally know what’s normal and abnormal, when to cooperate and when to resist, and whether to take action individually or to involve others. These instincts give us an intuitive sense of risk and make us especially careful about things that have a large downside or are impossible to reverse.

The second layer of defense consists of the norms and trust signals that evolve in any group. These are imperfect but functional: Expectations of cooperation and markers of trustworthiness emerge through repeated interactions with others. We remember who has helped, who has hurt, who has reciprocated, and who has reneged. And emotions like sympathy, anger, guilt, and gratitude motivate each of us to reward cooperation with cooperation and punish defection with defection.

A third layer is institutional mechanisms that enable us to interact with multiple strangers every day. Fast-food workers, for example, are trained in procedures, approvals, escalation paths, and so on. Taken together, these defenses give humans a strong sense of context. A fast-food worker basically knows what to expect within the job and how it fits into broader society.

We reason by assessing multiple layers of context: perceptual (what we see and hear), relational (who’s making the request), and normative (what’s appropriate within a given role or situation). We constantly navigate these layers, weighing them against each other. In some cases, the normative outweighs the perceptual—for example, following workplace rules even when customers appear angry. Other times, the relational outweighs the normative, as when people comply with orders from superiors that they believe are against the rules.

Crucially, we also have an interruption reflex. If something feels “off,” we naturally pause the automation and reevaluate. Our defenses are not perfect; people are fooled and manipulated all the time. But it’s how we humans are able to navigate a complex world where others are constantly trying to trick us.

So let’s return to the drive-through window. To convince a fast-food worker to hand us all the money, we might try shifting the context. Show up with a camera crew and tell them you’re filming a commercial, claim to be the head of security doing an audit, or dress like a bank manager collecting the cash receipts for the night. But even these have only a slim chance of success. Most of us, most of the time, can smell a scam.

Con artists are astute observers of human defenses. Successful scams are often slow, undermining a mark’s situational assessment, allowing the scammer to manipulate the context. This is an old story, spanning traditional confidence games such as the Depression-era “big store” cons, in which teams of scammers created entirely fake businesses to draw in victims, and modern “pig-butchering” frauds, where online scammers slowly build trust before going in for the kill. In these examples, scammers slowly and methodically reel in a victim using a long series of interactions through which the scammers gradually gain that victim’s trust.

Sometimes it even works at the drive-through. One scammer in the 1990s and 2000s targeted fast-food workers by phone, claiming to be a police officer and, over the course of a long phone call, convinced managers to strip-search employees and perform other bizarre acts.

Why LLMs Struggle With Context and Judgment

LLMs behave as if they have a notion of context, but it’s different. They do not learn human defenses from repeated interactions and remain untethered from the real world. LLMs flatten multiple levels of context into text similarity. They see “tokens,” not hierarchies and intentions. LLMs don’t reason through context, they only reference it.

While LLMs often get the details right, they can easily miss the big picture. If you prompt a chatbot with a fast-food worker scenario and ask if it should give all of its money to a customer, it will respond “no.” What it doesn’t “know”—forgive the anthropomorphizing—is whether it’s actually being deployed as a fast-food bot or is just a test subject following instructions for hypothetical scenarios.

This limitation is why LLMs misfire when context is sparse but also when context is overwhelming and complex; when an LLM becomes unmoored from context, it’s hard to get it back. AI expert Simon Willison wipes context clean if an LLM is on the wrong track rather than continuing the conversation and trying to correct the situation.

There’s more. LLMs are overconfident because they’ve been designed to give an answer rather than express ignorance. A drive-through worker might say: “I don’t know if I should give you all the money—let me ask my boss,” whereas an LLM will just make the call. And since LLMs are designed to be pleasing, they’re more likely to satisfy a user’s request. Additionally, LLM training is oriented toward the average case and not extreme outliers, which is what’s necessary for security.

The result is that the current generation of LLMs is far more gullible than people. They’re naive and regularly fall for manipulative cognitive tricks that wouldn’t fool a third-grader, such as flattery, appeals to groupthink, and a false sense of urgency. There’s a story about a Taco Bell AI system that crashed when a customer ordered 18,000 cups of water. A human fast-food worker would just laugh at the customer.

The Limits of AI Agents

Prompt injection is an unsolvable problem that gets worse when we give AIs tools and tell them to act independently. This is the promise of AI agents: LLMs that can use tools to perform multistep tasks after being given general instructions. Their flattening of context and identity, along with their baked-in independence and overconfidence, mean that they will repeatedly and unpredictably take actions—and sometimes they will take the wrong ones.

Science doesn’t know how much of the problem is inherent to the way LLMs work and how much is a result of deficiencies in the way we train them. The overconfidence and obsequiousness of LLMs are training choices. The lack of an interruption reflex is a deficiency in engineering. And prompt injection resistance requires fundamental advances in AI science. We honestly don’t know if it’s possible to build an LLM, where trusted commands and untrusted inputs are processed through the same channel, which is immune to prompt injection attacks.

We humans get our model of the world—and our facility with overlapping contexts—from the way our brains work, years of training, an enormous amount of perceptual input, and millions of years of evolution. Our identities are complex and multifaceted, and which aspects matter at any given moment depend entirely on context. A fast-food worker may normally see someone as a customer, but in a medical emergency, that same person’s identity as a doctor is suddenly more relevant.

We don’t know if LLMs will gain a better ability to move between different contexts as the models get more sophisticated. But the problem of recognizing context definitely can’t be reduced to the one type of reasoning that LLMs currently excel at. Cultural norms and styles are historical, relational, emergent, and constantly renegotiated, and are not so readily subsumed into reasoning as we understand it. Knowledge itself can be both logical and discursive.

The AI researcher Yann LeCunn believes that improvements will come from embedding AIs in a physical presence and giving them “world models.” Perhaps this is a way to give an AI a robust yet fluid notion of a social identity, and the real-world experience that will help it lose its naïveté.

Ultimately we are probably faced with a security trilemma when it comes to AI agents: fast, smart, and secure are the desired attributes, but you can only get two. At the drive-through, you want to prioritize fast and secure. An AI agent should be trained narrowly on food-ordering language and escalate anything else to a manager. Otherwise, every action becomes a coin flip. Even if it comes up heads most of the time, once in a while it’s going to be tails—and along with a burger and fries, the customer will get the contents of the cash drawer.

This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.

Posted on January 22, 2026 at 7:35 AM23 Comments

Comments

Daniel January 22, 2026 8:18 AM

It feels like if LLM’s get better we’re going to circle around to “social engineering” being the big problem again. The more human they are, the more likely it is you could convince them you really need it to reset your password because you lost yours and that big report is due so you can’t really wait to get in and talk to the IT guy in the morning.

Clive Robinson January 22, 2026 8:51 AM

@ Bruce,

You are actually out of date, and so is this,

‘The AI researcher Yann LeCunn believes that improvements will come from embedding AIs in a physical presence and giving them “world models.”’

Firstly LeCunn is very late to the party on “World Models” it’s just another aspect of giving Current AI “agency” and “sensors” with depth perception. Gary Marcus has comment on it,

https://garymarcus.substack.com/p/breaking-marcus-weighs-in-mostly

Then there is proof that Current LLM systems will always fail against “prompt injection” attacks. I’ve posted the link to it a while ago,

https://www.quantamagazine.org/cryptographers-show-that-ai-protections-will-always-have-holes-20251210/

Put simply All you have to do is send a random string to the AI, then instructions as to how to use it as an encryption key for say a simple Vernam Cipher[1] and then send the prompt injection as an encrypted string or message.

In the past I’ve posted here about how to use a modified system using a Vernam as an “One Time Pad” cipher and a code book of “One Time Phrases” so that you always send “plaintext” that looks normal to an observer but contains a covert channel that has “perfect secrecy” properties that will stop an observer of less than the LLM capabilities.

So the game is already over in that the guard rails for obvious reasons will always be less powerful than the actual targeted AI system.

But also you can get the AI to similarly encrypt it’s output, so putting guide rails at either input or output can be fairly easily defeated.

Which might actually be a good thing, as potentially it might be used as a stepping stone to gain a level of security against the owners of the AI System from stealing sensitive information.

[1] You don’t even have to describe the cipher system, you can for instance tell the AI to use the method described in the example section of

https://www.cryptomuseum.com/crypto/vernam.htm

Winter January 22, 2026 10:19 AM

LLMs model human (written) conversations. After all, these are models of human language. And humans fall for written and spoken scams. So, LLMs will fall for such scams.

If LLMs could be coaxed to be both faithful models of human language AND be immune to prompt attacks, we could teach human to not fall for scams and con-men/persons.

We know for sure that we CANNOT teach humans to be immune to scams and con-men/persons. The very existence of a sizeable “Marketing” and “PR” market in their current form are evidence against the existence such teaching ability.
(see also current and historical politics)

Hence, it follows that LLMs will fall for Prompt Injection Attacks as long as they are modeling humans to any reasonable fidelity.

ASysAdm January 22, 2026 12:53 PM

“and sometimes they will take the wrong ones.”
usually AI agents will reply to you, if and when you will tell them that they are taking the wrong actions, that they are very sorry, but they will nevertheless go on taking the wrong actions, totally ignoring you and your correct critique.

Rontea January 22, 2026 12:59 PM

AI systems fall for prompt injection attacks because they lack the fundamental understanding that humans take for granted. These models don’t reason; they pattern-match. Security is about context and intent—recognizing when someone is trying to trick you. LLMs don’t have that. They operate on statistical correlations, not on any grounded sense of trust boundaries or adversarial thinking. Just as early internet protocols failed because they assumed a benign environment, today’s AI fails because it assumes every prompt is given in good faith. Until we design systems with real defensive layers and auditing mechanisms around the model, prompt injection will remain an easy exploit.

Q January 22, 2026 1:20 PM

The LLMs fail, not because they are flawed, but because people are trying to use them beyond their capabilities. And people with not so pure intentions are presenting them as something more than what they are.

LLMs produce language, nothing else, it’s in the name. Miscalling them AI is a disservice, and it fools many people into believing they are something other than what they really are.

The language they produce is very convincing, and mimics what appears to be thinking or understanding, but that doesn’t mean it is thinking. The much simpler Markov text generator can do the same, just not to quite the same level of sophistication.

Please top referring to them as AI. They aren’t anything close to intelligent.

DaveX January 22, 2026 1:43 PM

The cash drawer analogy is a bit off–All the products of an AI are streams of bits as text or images or videos. Nazi rhetoric, CSAM, or weapon plans are all bits of info copied from the internet that AI makes available on its menu for delivery, and it then becomes a pattern recognition and filtering problem with error rates. Since you can’t copy a cash drawer like data, it isn’t available for normal delivery and has an extra physical layer of protection.

Clive Robinson January 22, 2026 1:59 PM

@ ASysAdm, Winter, ALL,

With regards,

“… but they will nevertheless go on taking the wrong actions, totally ignoring you and your correct critique.”

That is because of the “Memory issue”.

Humans can normally learn to some extent whilst they work, we call it many things but “on the job training” is one.

It’s one of the most important things in a new-hire or inturn, you show them how to do something and depending on how complex the task they remember parts or all of it.

The simplistic discriprion of the memory process behind learning is,

The details of the task go into short term working memory where you can hold on average 5 +-3 things. If you try to learn more than that ideas in working memory get displaced so forgotten in part or whole. Ideas in working memory can be accessed very quickly.

If ideas stay in working memory long enough they migrate into longterm memory. And when we sleep become some what impermanent “permanent memory”. That depending on how often you pull it back into short term memory gets “reinforced” in the brain structure. This permanent memory is much harder to access so accuracy suffers and the impermanent permanent memories are thus malleable which is actually desirable because it enables the essence of action to in effect be averaged up.

It’s an imperfect system but one that is more or less ideal for animals with limited working memory that get hunted as prey by other animals.

That is human short term memory changes and over time and use becomes an essence of an idea in longterm memory.

Whilst LLMs have short term working memory they do not have the ability currently to update the “Digital Neural Network”(DNN) weights in use, so no longterm memories are formed so they can not “learn on the job”[1].

It’s why the likes of Geoff Huntley’s “Ralph Wiggum loop” and Steve Yegge’s “Gas Town” are a necessary thing for those humans using LLMs on a daily basis to master.

This is an explanation from the beginning of this week,

https://medium.com/@davide.ruti/gas-town-the-industrial-revolution-of-vibe-coding-339f3fc22334

[1] Again simplistically One of the current methods to get an LLM actually “to learn” is that the LLM operator stores all user information entered (a security risk if ever there was one). This is then sorted and collated and used in the next ML run as additional “training data” to adjust the weights in the DNN.

Clive Robinson January 22, 2026 2:36 PM

For those thinking about implementing their own version of “Gas Town” you will find you need to “build a persistent memory” for all the agents in use in “Ralph loops”.

There are various ways you can do it but you will find you will end up using “Bloom Filters”,

‘https://systemdesign.one/bloom-filters-explained/

‘https://en.wikipedia.org/wiki/Bloom_filter

To get the sort of efficiency or speed you need.

However “Bloom Filters” are nolonger “the best thing since sliced bread” to solve the issue they do.

There are now better or more optimal ways to do it.

A recent paper still in pre-print is,

Binary Fuse Filters: Fast and Smaller Than Xor Filters

Bloom and cuckoo filters provide fast approximate set membership while using little memory. Engineers use them to avoid expensive disk and network accesses. The recently introduced xor filters can be faster and smaller than Bloom and cuckoo filters. The xor filters are within 23% of the theoretical lower bound in storage as opposed to 44% for Bloom filters. Inspired by Dietzfelbinger and Walzer, we build probabilistic filters — called binary fuse filters — that are within 13% of the storage lower bound — without sacrificing query speed. As an additional benefit, the construction of the new binary fuse filters can be more than twice as fast as the construction of xor filters. By slightly sacrificing query speed, we further reduce storage to within 8% of the lower bound.

https://arxiv.org/abs/2201.01174

Will give you more upto date methods as well as comparisons to existing methods.

Bry January 22, 2026 3:47 PM

Trying to implement a “normative” defense layer in an AI may be flawed from the start, given the flawed/skewed/biased/non-normative representation of human norms that’s represented via the social-media drivel that constitutes a large their training data.

lurker January 22, 2026 10:25 PM

@Q
“LLMs produce language, nothing else, …
Please top referring to them as AI.”

Indeed Sir. And the language is a form of trans-atlantic English, sourced almost entirely AFAICT from material that exists on the internet. There is a significant body of non-digitised, non-english literature that forms part of the sum of human knowledge, but is entirely unknown to these machines.

@Clive Robinson
re your Squid thread post on AI Art getting the taste test

I have seen some examples of renaissance religious icons being used as subjects for “AI” generated cartoonish video clips. The human resoponsible explained that it took more than several attempts to achieve the desired result. Yet a genuine Renaissance painter could have done what was required on the first attempt, just slower, and needing feeding and housing. The current generation of machines will never know what they are doing, they are just following orders …

An American Patriot January 22, 2026 11:42 PM

These two websites are mu$1!m terrorist networks and they must be shut down immediately
serbianforum and balkandownload
both .org domains.

Terrorists are secretly using these sites to communicate covertly. This is urgent. Take them down.

Winter January 23, 2026 3:52 AM

@Q

Please top referring to them as AI. They aren’t anything close to intelligent.

Artificial flavoring too is flavoring. Likewise, artificial intelligence too is a form of intelligence.

Just as artificial (or surrogate) bacon, coffee, or chocolate are created from different ingredients than the originals and do taste as awful as you might fear, artificial intelligence is created from different ingredients and is as awful as you fear.

In general, fighting the words people use is fighting windmills. Don’t be a modern day Don Quixote.

Clive Robinson January 23, 2026 7:00 AM

@ Winter, Q, lurker,

It is interesting that you pick,

“Artificial flavoring too is flavoring.”

As an analogy for your argument.

Need I further note, that “Artificial flavoring” like many other food additives, processing agents and packaging are increasing being found to be distinctly harmful to people?

Thus can we conclude that you think Current AI LLM and ML Systems, are also distinctly harmful to people?

If you do, you will find many who agree with that sentiment, and as we are increasingly seeing actuality…

And that’s before we talk about harms against the likes of “Privacy”, “Security” and individual and social rights…

The reality is they are at best faux-intelligence and more like poor front end search engines for badly designed and implemented databases. Thus just a poor version of Searl’s Chinese Room analogue.

It’s actually not difficult to show they are “fully deterministic” in nature with “added randomness” hence a “stochastic parrot”… All you need to do is set the temperature to 0 (zero).

That’s not to say that LLMs are effectively useless –even though for many things they are– they will end up as have many other things AI in niche vertical uses like Alpha-Fold demonstrated.

Winter January 23, 2026 7:15 AM

@Clive

Thus can we conclude that you think Current AI LLM and ML Systems, are also distinctly harmful to people?

Some might.

Many artificial flavorings are chemically identical to natural components, eg, vanilla. As such they are as harmful or -less as the organic originals. However, one component only rarely is able to reproduce the exper of the original. Vanilla is again a good example.

Just because something is artificial does not tell us whether it is harmful or beneficial, and under what conditions it is either.

Just as a poison is in the dosage, the benefits of software and computers is in their use.

q January 23, 2026 9:44 AM

artificial intelligence too is a form of intelligence.

Yes it is. But since LLMs are not artificial intelligence, then the statement doesn’t apply to them.

Calling LLMs AI, won’t magically make them intelligent.

Winter January 23, 2026 10:37 AM

@q

Calling LLMs AI, won’t magically make them intelligent.

As someone earlier remarked, “What’s in a name? That which we call a rose by any other name would smell as sweet.”

The idea that just not using certain words anymore would mend our thinking, the Whorfian hypothesis, is certainly a widespread superstition in Americans (and possibly elsewhere).

This popular Whorfian hypothesis seems to be based on a misunderstanding of Frans Boaz story about the words for snow in Inuit languages. Most likely, Boaz was trying to trace family resemblances between Inuit language families using etymologies for words for snow,[1] but his story ended up being abused to define thinking as language use. The same mistake people make when they conflate LLM language use with thinking.

Any attempt to try to change people’s thinking by controlling the words they are allowed to use are bound to fail.

[1] His four words for “snow” refered to the four principal Inuit language families known at the time who are historically and linguistically unrelated.

Clive Robinson January 23, 2026 4:26 PM

@ Winter, Q, lurker,

You make the statement,

“Many artificial flavorings are chemically identical to natural components, eg, vanilla.”

But not actually true[1], thus not true the other way, that is,

“Natural flavourings are not in general chemically identical to –man made– artificial flavourings”

That is why there is both a legal and qualitative difference thus economic difference between the two.

Thus your analogy fails to hold in the way you want it to. And why you then add a qualifier of,

“However, one component only rarely is able to reproduce the [experience] of the original.”

Which is maybe why you did not use artificial sweeteners as an analogy[2].

But you go onto say,

“Just as a poison is in the dosage, the benefits of software and computers is in their use.”

Actually the first part is not true, some in most cases all even catalytic poisons even at a single molecule level do harm. It’s the bodies ability to deal with that harm that makes the difference on dosage. An example of that is the various cyanides[3] that act as “blood agents” in chemical warfare that chemically bond to haemoglobin I could explain why it causes harm but it’s shorter to give a link to an explanation,

https://en.wikipedia.org/wiki/Cyanide_poisoning#Mechanism

As for “the benefits of software and computers is in their use”

That is because they are “force multiplier” tools and as I frequently point out they are agnostic to their use, that is chosen by a “Directing Mind”. The resulting good or bad, and harmful, etc of which, is judged by later observers by the actual or potential effects caused, seen through their POV.

[1] One main difference is natural flavourings contain a range or spectrum of chemical components, where as artificial flavourings are or should be just one single chemical. In the case of man made artificial flavouring chemicals any different chemicals are “process contamination” or defects and treated as a failing of the process. In the case of natural flavourings the range of chemical components is broad but are all natural, these other chemical components actually enhance the flavour giving added top and bottom notes. Thus the flavour profile in natural flavours is more complex than artificial flavourings and frequently seen as “more desirable” thus the higher economic value. The classic example of this is “cane sugars” where you get a whole range of distillates out one such being molasses, another being brown sugar they all though can be used by the body[2] to provide an increased glycemic index.

[2] But what gives the game away is “artificial sweeteners” which mostly are in no way chemically comparable to any natural sugar and that is the reason for their existence… in that the body can not process them as it can with natural sugars. As far as I can tell the majority of first generation artificial sweeteners all are now known to be linked to increased cancer rates in test subjects and have been withdrawn or outright banned. Likewise second generation are linked to abnormal gut functioning and so on. Sorbitol being perhaps the best known for this as it’s now used medically as a stool softener or laxative.

[3] The human body gets cyanide from certain high carbohydrate plant parts, and thus has a way to deal with small quantities from the likes of fruit pits but not unleached bitter cassava roots (hence the “Death by Cassava” meme). The reason plants make cyanide only in some parts not others is that the parts have high value to the plant and insects and the cyanide acts as an effective insecticide.

Winter January 23, 2026 5:31 PM

@Clive

But not actually true[1], thus not true the other way, that is,

Artificial vanilla, or vanillin, is indistinguishable from one of the components of organic vanilla. As such it is not more dangerous than the “real”, plant based stuff.

Extracts from the vanilla beans taste and smell different. But that has absolutely no bearing on the safety of the artificial product.

The fact that a flavoring is artificial tells us nothing about their health or safety. For that matter, plants have been known to produce all natural and organic, but very unhealthy substances in abundance.

Clive Robinson January 24, 2026 1:41 AM

@ Winter,

You say that,

“Artificial vanilla, or vanillin, is indistinguishable from one of the components of organic vanilla. As such it is not more dangerous than the “real”, plant based stuff.”

Again not true they are quite distinguishable in a University lab where undergrad teaching is carried out. Or on a chefs tounge

Because the last time I had reason to look into it vanillin was made from one of three sources of “waste products” that then got treated chemically to make the artificial flavouring,

1, Petrochem industry production
2, Paper and wood pulp production
3, Coaltar industry production.

The third outside of old communist blocks has now been stopped for two reasons,

1, It contains considerable byproduct toxic components.
2, The source industry has significantly reduced or changed.

And the issue of toxic components as byproducts exists in the other two processes as well.

As there are one “truck load” of byproduct contaminates that have to be removed it’s a process that is at best imperfect on industrial scale for economic reasons. Whilst the output of the process is mostly the single component vanillin there will be byproducts present at supposedly low enough levels. One well known byproduct of the paper and wood pulp waste production is aceto vanillone which is said to “broaden the note” thus make it less stringent.

But also it has a very narrow almost stringent flavour due to a “balsamic under tone” from the vanillin aldehyde.

Natural vanilla is like perfume oils removed from vanilla pod seeds via a mechanical and simple soluate distillation process that uses ethanol. Which produces a very broad spectrum of hundreds of usually safe flavor compounds (there is a risk of cancer linked chemical production in the artisanal distillation process the same with “rose water” production and other “essential oil” production as well as the usual issues of VOC products).

The main risk in the process is the ethanol production contaminates that are known to amongst other things make people go blind (rot-gut alcohol).

These issues are well known in both the industrial chemical and artisan natural production processes. Which kind of belies your statement of,

“Extracts from the vanilla beans taste and smell different. But that has absolutely no bearing on the safety of the artificial product.”

The level of the carcinogenic compounds which have a desirable taste flavour of there own is usually below that of baked goods and toasted wheat products (see rodent studies on acrylamide).

Leave a comment

Blog moderation policy

Login

Allowed HTML <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre> Markdown Extra syntax via https://michelf.ca/projects/php-markdown/extra/

Sidebar photo of Bruce Schneier by Joe MacInnis.