Page 35

China, Russia, Iran, and North Korea Intelligence Sharing

Former CISA Director Jen Easterly writes about a new international intelligence sharing co-op:

Historically, China, Russia, Iran & North Korea have cooperated to some extent on military and intelligence matters, but differences in language, culture, politics & technological sophistication have hindered deeper collaboration, including in cyber. Shifting geopolitical dynamics, however, could drive these states toward a more formalized intell-sharing partnership. Such a “Four Eyes” alliance would be motivated by common adversaries and strategic interests, including an enhanced capacity to resist economic sanctions and support proxy conflicts.

Posted on March 12, 2025 at 7:09 AMView Comments

Silk Typhoon Hackers Indicted

Lots of interesting details in the story:

The US Department of Justice on Wednesday announced the indictment of 12 Chinese individuals accused of more than a decade of hacker intrusions around the world, including eight staffers for the contractor i-Soon, two officials at China’s Ministry of Public Security who allegedly worked with them, and two other alleged hackers who are said to be part of the Chinese hacker group APT27, or Silk Typhoon, which prosecutors say was involved in the US Treasury breach late last year.

[…]

According to prosecutors, the group as a whole has targeted US state and federal agencies, foreign ministries of countries across Asia, Chinese dissidents, US-based media outlets that have criticized the Chinese government, and most recently the US Treasury, which was breached between September and December of last year. An internal Treasury report obtained by Bloomberg News found that hackers had penetrated at least 400 of the agency’s PCs and stole more than 3,000 files in that intrusion.

The indictments highlight how, in some cases, the hackers operated with a surprising degree of autonomy, even choosing targets on their own before selling stolen information to Chinese government clients. The indictment against Yin Kecheng, who was previously sanctioned by the Treasury Department in January for his involvement in the Treasury breach, quotes from his communications with a colleague in which he notes his personal preference for hacking American targets and how he’s seeking to ‘break into a big target,’ which he hoped would allow him to make enough money to buy a car.

Posted on March 11, 2025 at 1:14 PMView Comments

Thousands of WordPress Websites Infected with Malware

The malware includes four separate backdoors:

Creating four backdoors facilitates the attackers having multiple points of re-entry should one be detected and removed. A unique case we haven’t seen before. Which introduces another type of attack made possibly by abusing websites that don’t monitor 3rd party dependencies in the browser of their users.

The four backdoors:

The functions of the four backdoors are explained below:

  • Backdoor 1, which uploads and installs a fake plugin named “Ultra SEO Processor,” which is then used to execute attacker-issued commands
  • Backdoor 2, which injects malicious JavaScript into wp-config.php
  • Backdoor 3, which adds an attacker-controlled SSH key to the ~/.ssh/authorized_keys file so as to allow persistent remote access to the machine
  • Backdoor 4, which is designed to execute remote commands and fetches another payload from gsocket[.]io to likely open a reverse shell.

Posted on March 10, 2025 at 7:01 AMView Comments

“Emergent Misalignment” in LLMs

Interesting research: “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs“:

Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment. This effect is observed in a range of models but is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit inconsistent behavior, sometimes acting aligned. Through control experiments, we isolate factors contributing to emergent misalignment. Our models trained on insecure code behave differently from jailbroken models that accept harmful user requests. Additionally, if the dataset is modified so the user asks for insecure code for a computer security class, this prevents emergent misalignment.

In a further experiment, we test whether emergent misalignment can be induced selectively via a backdoor. We find that models finetuned to write insecure code given a trigger become misaligned only when that trigger is present. So the misalignment is hidden without knowledge of the trigger.

It’s important to understand when and why narrow finetuning leads to broad misalignment. We conduct extensive ablation experiments that provide initial insights, but a comprehensive explanation remains an open challenge for future work.

The emergent properties of LLMs are so, so weird.

Posted on February 27, 2025 at 1:05 PMView Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.