The Promptware Kill Chain: How Prompt Injections Gradually Evolved Into a Multistep Malware Delivery Mechanism

O. Brodt, E. Feldman, B. Schneier, and B. Nassi

arXiv:2601.09625v2 [cs.CR], February 10, 2026.

[full text – PDF (Acrobat)]

Abstract

Prompt injection was initially framed as the large language model (LLM) analogue of SQL injection. However, over the past three years, attacks labeled as prompt injection have evolved from isolated input-manipulation exploits into multistep attack mechanisms that resemble malware. In this paper, we argue that prompt injections have evolved into promptware, a new malware execution mechanism triggered through prompts engineered to exploit an application’s LLM. We introduce a seven-stage promptware kill chain: Initial Access (prompt injection), Privilege Escalation (jailbreaking), Reconnaissance, Persistence (memory and retrieval poisoning), Command and Control, Lateral Movement, and Actions on Objective. We analyze thirty-six prominent studies and real-world incidents affecting production LLM systems and show that at least twenty-one documented attacks traverse four or more stages of this kill chain, demonstrating that the threat model is not merely theoretical. We discuss the need for a defense-in-depth approach that addresses all stages of the promptware life cycle and review relevant countermeasures for each stage. By moving the conversation from prompt injection to a promptware kill chain, our work provides analytical clarity, enables structured risk assessment, and lays a foundation for the systematic security engineering of LLM-based systems.
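
To make the taxonomy concrete, here is a minimal Python sketch (not from the paper) that encodes the seven stages named in the abstract and the paper's "four or more stages" criterion for treating an attack as multistep. The stage-to-attack mapping in the example is hypothetical.

    from enum import Enum

    class Stage(Enum):
        """The seven promptware kill-chain stages named in the abstract."""
        INITIAL_ACCESS = 1        # prompt injection
        PRIVILEGE_ESCALATION = 2  # jailbreaking
        RECONNAISSANCE = 3
        PERSISTENCE = 4           # memory and retrieval poisoning
        COMMAND_AND_CONTROL = 5
        LATERAL_MOVEMENT = 6
        ACTIONS_ON_OBJECTIVE = 7

    def traverses_multistep_chain(stages: set[Stage], threshold: int = 4) -> bool:
        """Apply the paper's criterion: an attack counts as multistep if it
        traverses at least `threshold` distinct kill-chain stages."""
        return len(stages) >= threshold

    # Hypothetical example: an indirect injection that plants a payload in
    # the assistant's long-term memory and later exfiltrates data.
    example_attack = {
        Stage.INITIAL_ACCESS,
        Stage.PERSISTENCE,
        Stage.COMMAND_AND_CONTROL,
        Stage.ACTIONS_ON_OBJECTIVE,
    }
    print(traverses_multistep_chain(example_attack))  # True
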

Categories: AI/LLMs
