Entries Tagged "malware"

Page 9223372036854775575 of 49

Possible US Government iPhone Hacking Tool Leaked

Wired writes (alternate source):

Security researchers at Google on Tuesday released a report describing what they’re calling “Coruna,” a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.

[…]

Coruna’s code also appears to have been originally written by English-speaking coders, notes iVerify’s cofounder Rocky Cole. “It’s highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government,” Cole tells WIRED. “This is the first example we’ve seen of very likely US government tools­based on what the code is telling us­spinning out of control and being used by both our adversaries and cybercriminal groups.”

TechCrunch reports that Coruna is definitely of US origin:

Two former employees of government contractor L3Harris told TechCrunch that Coruna was, at least in part, developed by the company’s hacking and surveillance tech division, Trenchant. The two former employees both had knowledge of the company’s iPhone hacking tools. Both spoke on condition of anonymity because they weren’t authorized to talk about their work for the company.

It’s always super interesting to see what malware looks like when it’s created through a professional software development process. And the TechCrunch article has some speculation as to how the US lost control of it. It seems that an employee of L3Harris’s surviellance tech division, Trenchant, sold it to the Russian government.

Posted on April 2, 2026 at 6:05 AMView Comments

Apple’s Camera Indicator Lights

A thoughtful review of Apple’s system to alert users that the camera is on. It’s really well-designed, and important in a world where malware could surreptitiously start recording.

The reason it’s tempting to think that a dedicated camera indicator light is more secure than an on-display indicator is the fact that hardware is generally more secure than software, because it’s harder to tamper with. With hardware, a dedicated hardware indicator light can be connected to the camera hardware such that if the camera is accessed, the light must turn on, with no way for software running on the device, no matter its privileges, to change that. With an indicator light that is rendered on the display, it’s not foolish to worry that malicious software, with sufficient privileges, could draw over the pixels on the display where the camera indicator is rendered, disguising that the camera is in use.

If this were implemented simplistically, that concern would be completely valid. But Apple’s implementation of this is far from simplistic.

Posted on March 30, 2026 at 7:08 AMView Comments

The Promptware Kill Chain

The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command & control, lateral movement, action on objective

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape.

In our model, the promptware kill chain begins with Initial Access. This is where the malicious payload enters the AI system. This can happen directly, where an attacker types a malicious prompt into the LLM application, or, far more insidiously, through “indirect prompt injection.” In the indirect attack, the adversary embeds malicious instructions in content that the LLM retrieves (obtains in inference time), such as a web page, an email, or a shared document. As LLMs become multimodal (capable of processing various input types beyond text), this vector expands even further; malicious instructions can now be hidden inside an image or audio file, waiting to be processed by a vision-language model.

The fundamental issue lies in the architecture of LLMs themselves. Unlike traditional computing systems that strictly separate executable code from user data, LLMs process all input—whether it is a system command, a user’s email, or a retrieved document—as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data. Consequently, a malicious instruction embedded in a seemingly harmless document is processed with the same authority as a system command.

But prompt injection is only the Initial Access step in a sophisticated, multistage operation that mirrors traditional malware campaigns such as Stuxnet or NotPetya.

Once the malicious instructions are inside material incorporated into the AI’s learning, the attack transitions to Privilege Escalation, often referred to as “jailbreaking.” In this phase, the attacker circumvents the safety training and policy guardrails that vendors such as OpenAI or Google have built into their models. Through techniques analogous to social engineering—convincing the model to adopt a persona that ignores rules—to sophisticated adversarial suffixes in the prompt or data, the promptware tricks the model into performing actions it would normally refuse. This is akin to an attacker escalating from a standard user account to administrator privileges in a traditional cyberattack; it unlocks the full capability of the underlying model for malicious use.

Following privilege escalation comes Reconnaissance. Here, the attack manipulates the LLM to reveal information about its assets, connected services, and capabilities. This allows the attack to advance autonomously down the kill chain without alerting the victim. Unlike reconnaissance in classical malware, which is performed typically before the initial access, promptware reconnaissance occurs after the initial access and jailbreaking components have already succeeded. Its effectiveness relies entirely on the victim model’s ability to reason over its context, and inadvertently turns that reasoning to the attacker’s advantage.

Fourth: the Persistence phase. A transient attack that disappears after one interaction with the LLM application is a nuisance; a persistent one compromises the LLM application for good. Through a variety of mechanisms, promptware embeds itself into the long-term memory of an AI agent or poisons the databases the agent relies on. For instance, a worm could infect a user’s email archive so that every time the AI summarizes past emails, the malicious code is re-executed.

The Command-and-Control (C2) stage relies on the established persistence and dynamic fetching of commands by the LLM application in inference time from the internet. While not strictly required to advance the kill chain, this stage enables the promptware to evolve from a static threat with fixed goals and scheme determined at injection time into a controllable trojan whose behavior can be modified by an attacker.

The sixth stage, Lateral Movement, is where the attack spreads from the initial victim to other users, devices, or systems. In the rush to give AI agents access to our emails, calendars, and enterprise platforms, we create highways for malware propagation. In a “self-replicating” attack, an infected email assistant is tricked into forwarding the malicious payload to all contacts, spreading the infection like a computer virus. In other cases, an attack might pivot from a calendar invite to controlling smart home devices or exfiltrating data from a connected web browser. The interconnectedness that makes these agents useful is precisely what makes them vulnerable to a cascading failure.

Finally, the kill chain concludes with Actions on Objective. The goal of promptware is not just to make a chatbot say something offensive; it is often to achieve tangible malicious outcomes through data exfiltration, financial fraud, or even physical world impact. There are examples of AI agents being manipulated into selling cars for a single dollar or transferring cryptocurrency to an attacker’s wallet. Most alarmingly, agents with coding capabilities can be tricked into executing arbitrary code, granting the attacker total control over the AI’s underlying system. The outcome of this stage determines the type of malware executed by promptware, including infostealer, spyware, and cryptostealer, among others.

The kill chain was already demonstrated. For example, in the research “Invitation Is All You Need,” attackers achieved initial access by embedding a malicious prompt in the title of a Google Calendar invitation. The prompt then leveraged an advanced technique known as delayed tool invocation to coerce the LLM into executing the injected instructions. Because the prompt was embedded in a Google Calendar artifact, it persisted in the long-term memory of the user’s workspace. Lateral movement occurred when the prompt instructed the Google Assistant to launch the Zoom application, and the final objective involved covertly livestreaming video of the unsuspecting user who had merely asked about their upcoming meetings. C2 and reconnaissance weren’t demonstrated in this attack.

Similarly, the “Here Comes the AI Worm” research demonstrated another end-to-end realization of the kill chain. In this case, initial access was achieved via a prompt injected into an email sent to the victim. The prompt employed a role-playing technique to compel the LLM to follow the attacker’s instructions. Since the prompt was embedded in an email, it likewise persisted in the long-term memory of the user’s workspace. The injected prompt instructed the LLM to replicate itself and exfiltrate sensitive user data, leading to off-device lateral movement when the email assistant was later asked to draft new emails. These emails, containing sensitive information, were subsequently sent by the user to additional recipients, resulting in the infection of new clients and a sublinear propagation of the attack. C2 and reconnaissance weren’t demonstrated in this attack.

The promptware kill chain gives us a framework for understanding these and similar attacks; the paper characterizes dozens of them. Prompt injection isn’t something we can fix in current LLM technology. Instead, we need an in-depth defensive strategy that assumes initial access will occur and focuses on breaking the chain at subsequent steps, including by limiting privilege escalation, constraining reconnaissance, preventing persistence, disrupting C2, and restricting the actions an agent is permitted to take. By understanding promptware as a complex, multistage malware campaign, we can shift from reactive patching to systematic risk management, securing the critical systems we are so eager to build.

This essay was written with Oleg Brodt, Elad Feldman and Ben Nassi, and originally appeared in Lawfare.

Posted on February 16, 2026 at 7:04 AMView Comments

Zero-Day Exploit in WinRAR File

A zero-day vulnerability in WinRAR is being exploited by at least two Russian criminal groups:

The vulnerability seemed to have super Windows powers. It abused alternate data streams, a Windows feature that allows different ways of representing the same file path. The exploit abused that feature to trigger a previously unknown path traversal flaw that caused WinRAR to plant malicious executables in attacker-chosen file paths %TEMP% and %LOCALAPPDATA%, which Windows normally makes off-limits because of their ability to execute code.

More details in the article.

Posted on August 19, 2025 at 7:07 AMView Comments

Trojans Embedded in .svg Files

Porn sites are hiding code in .svg files:

Unpacking the attack took work because much of the JavaScript in the .svg images was heavily obscured using a custom version of “JSFuck,” a technique that uses only a handful of character types to encode JavaScript into a camouflaged wall of text.

Once decoded, the script causes the browser to download a chain of additional obfuscated JavaScript. The final payload, a known malicious script called Trojan.JS.Likejack, induces the browser to like a specified Facebook post as long as a user has their account open.

“This Trojan, also written in Javascript, silently clicks a ‘Like’ button for a Facebook page without the user’s knowledge or consent, in this case the adult posts we found above,” Malwarebytes researcher Pieter Arntz wrote. “The user will have to be logged in on Facebook for this to work, but we know many people keep Facebook open for easy access.”

This isn’t a new trick. We’ve seen Trojaned .svg files before.

Posted on August 15, 2025 at 7:07 AMView Comments

Google Sues the Badbox Botnet Operators

It will be interesting to watch what will come of this private lawsuit:

Google on Thursday announced filing a lawsuit against the operators of the Badbox 2.0 botnet, which has ensnared more than 10 million devices running Android open source software.

These devices lack Google’s security protections, and the perpetrators pre-installed the Badbox 2.0 malware on them, to create a backdoor and abuse them for large-scale fraud and other illicit schemes.

This reminds me of Meta’s lawauit against Pegasus over its hack-for-hire software (which I wrote about here.) It’s a private company stepping into a regulatory void left by governments.

Slashdot thread.

Posted on July 23, 2025 at 7:04 AMView Comments

New Mobile Phone Forensics Tool

The Chinese have a new tool called Massistant.

  • Massistant is the presumed successor to Chinese forensics tool, “MFSocket”, reported in 2019 and attributed to publicly traded cybersecurity company, Meiya Pico.
  • The forensics tool works in tandem with a corresponding desktop software.
  • Massistant gains access to device GPS location data, SMS messages, images, audio, contacts and phone services.
  • Meiya Pico maintains partnerships with domestic and international law enforcement partners, both as a surveillance hardware and software provider, as well as through training programs for law enforcement personnel.

From a news article:

The good news, per Balaam, is that Massistant leaves evidence of its compromise on the seized device, meaning users can potentially identify and delete the malware, either because the hacking tool appears as an app, or can be found and deleted using more sophisticated tools such as the Android Debug Bridge, a command line tool that lets a user connect to a device through their computer.

The bad news is that at the time of installing Massistant, the damage is done, and authorities already have the person’s data.

Slashdot thread.

Posted on July 18, 2025 at 7:07 AMView Comments

Ubuntu Disables Spectre/Meltdown Protections

A whole class of speculative execution attacks against CPUs were published in 2018. They seemed pretty catastrophic at the time. But the fixes were as well. Speculative execution was a way to speed up CPUs, and removing those enhancements resulted in significant performance drops.

Now, people are rethinking the trade-off. Ubuntu has disabled some protections, resulting in 20% performance boost.

After discussion between Intel and Canonical’s security teams, we are in agreement that Spectre no longer needs to be mitigated for the GPU at the Compute Runtime level. At this point, Spectre has been mitigated in the kernel, and a clear warning from the Compute Runtime build serves as a notification for those running modified kernels without those patches. For these reasons, we feel that Spectre mitigations in Compute Runtime no longer offer enough security impact to justify the current performance tradeoff.

I agree with this trade-off. These attacks are hard to get working, and it’s not easy to exfiltrate useful data. There are way easier ways to attack systems.

News article.

Posted on July 2, 2025 at 7:02 AMView Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.