Clever Social Engineering Attack Using Captchas
This is really interesting.
It’s a phishing attack targeting GitHub users, tricking them into solving a fake Captcha that actually copies a malicious script to the clipboard, which the victim is then instructed to paste and run at the command line.
Clever.
The FBI has shut down a botnet run by Chinese hackers:
The botnet malware infected a number of different types of internet-connected devices around the world, including home routers, cameras, digital video recorders, and NAS drives. Those devices were used to help infiltrate sensitive networks related to universities, government agencies, telecommunications providers, and media organizations…. The botnet was launched in mid-2021, according to the FBI, and infected roughly 260,000 devices as of June 2024.
The operation to dismantle the botnet was coordinated by the FBI, the NSA, and the Cyber National Mission Force (CNMF), according to a press release dated Wednesday. The U.S. Department of Justice received a court order to take control of the botnet infrastructure by sending disabling commands to the malware on infected devices. The hackers tried to counterattack by hitting FBI infrastructure but were “ultimately unsuccessful,” according to the law enforcement agency.
Wow.
It seems the pagers all exploded simultaneously, which means they were triggered.
Were they each tampered with physically, or did someone figure out how to trigger a thermal runaway remotely? Supply chain attack? Malicious code update, or natural vulnerability?
I have no idea, but I expect we will all learn over the next few days.
EDITED TO ADD: I’m reading nine killed and 2,800 injured. That’s a lot of collateral damage. (I haven’t seen a good estimate of the number of pagers involved yet.)
EDITED TO ADD: Reuters writes: “The pagers that detonated were the latest model brought in by Hezbollah in recent months, three security sources said.” That implies a supply chain attack. And the detonations seem too large to have come from overloaded batteries alone.
This reminds me of the 1996 assassination of Yahya Ayyash using a booby-trapped cell phone.
EDITED TO ADD: I am deleting political comments. On this blog, let’s stick to the tech and the security ramifications of the threat.
EDITED TO ADD (9/18): More explosions today, this time radios. Good New York Times explainer. And a Wall Street Journal article. Clearly a physical supply chain attack.
Interesting social engineering attack: luring potential job applicants with fake recruiting pitches, trying to convince them to download malware. From a news article:
These particular attacks from North Korean state-funded hacking team Lazarus Group are new, but the overall malware campaign against the Python development community has been running since at least August of 2023, when a number of popular open source Python tools were maliciously duplicated with added malware. Now, though, there are also attacks involving “coding tests” that only exist to get the end user to install hidden malware on their system (cleverly hidden with Base64 encoding) that allows remote execution once present. The capacity for exploitation at that point is pretty much unlimited, due to the flexibility of Python and how it interacts with the underlying OS.
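To illustrate the obfuscation technique described above, here is a minimal, harmless sketch of how a payload can hide as a Base64 blob inside an innocuous-looking “coding test” file. The function name and payload are invented for illustration; the decoded payload here just prints a line, whereas the real attacks decoded and ran code that established remote access.

import base64

# In the real attacks, the Base64 blob ships pre-encoded inside the
# "coding test" files; it is built inline here only to keep this example
# harmless and verifiable.
hidden_payload = base64.b64encode(b'print("payload would run here")')

def run_coding_test():
    # Looks like test scaffolding; actually executes whatever the blob
    # decodes to, with the full privileges of the Python process.
    exec(base64.b64decode(hidden_payload))

run_coding_test()

Because the decoded code runs with the full privileges of the interpreter, once a candidate runs the “test,” the attacker can do anything Python can do on that machine.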
CISA wants everyone—and government agencies in particular—to remove or upgrade an Ivanti Cloud Service Appliance (CSA) that is no longer being supported.
Welcome to the security nightmare that is the Internet of Things.
EDITED TO ADD (10/12): The Cloud Service Appliance isn’t an actual appliance, but software you install on a computer. So it’s not IoT.
This is a current list of where and when I am scheduled to speak:
The list is maintained on this page.
This is an odd story of serving squid during legislative negotiations in the Philippines.
Over the summer, I gave a talk about AI and democracy at TedXBillings. The recording is live.
Please share. I’m hoping for more than 200 views….
Microsoft is updating SymCrypt, its core cryptographic library, with new quantum-secure algorithms. Microsoft’s details are here. From a news article:
The first new algorithm Microsoft added to SymCrypt is called ML-KEM. Previously known as CRYSTALS-Kyber, ML-KEM is one of three post-quantum standards formalized last month by the National Institute of Standards and Technology (NIST). The KEM in the new name is short for key encapsulation. KEMs can be used by two parties to negotiate a shared secret over a public channel. Shared secrets generated by a KEM can then be used with symmetric-key cryptographic operations, which aren’t vulnerable to Shor’s algorithm when the keys are of a sufficient size.
The ML in the ML-KEM name refers to Module Learning with Errors, a problem that can’t be cracked with Shor’s algorithm. As explained here, this problem is based on a “core computational assumption of lattice-based cryptography which offers an interesting trade-off between guaranteed security and concrete efficiency.”
ML-KEM, which is formally known as FIPS 203, specifies three parameter sets of varying security strength denoted as ML-KEM-512, ML-KEM-768, and ML-KEM-1024. The stronger the parameter, the more computational resources are required.
The other algorithm added to SymCrypt is the NIST-recommended XMSS. Short for eXtended Merkle Signature Scheme, it’s based on “stateful hash-based signature schemes.” These algorithms are useful in very specific contexts such as firmware signing, but are not suitable for more general uses.
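To make the KEM flow described above concrete, here is a minimal sketch of the encapsulate/decapsulate-then-symmetric pattern, written against the widely used Python cryptography package. ML-KEM bindings are not yet standard in that package, so X25519 Diffie-Hellman stands in as the KEM (the same wrapping HPKE uses). X25519 is not post-quantum; it is only a stand-in for the interface shape that ML-KEM standardizes.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared: bytes) -> bytes:
    # Turn the raw shared secret into a 256-bit symmetric key.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"kem demo").derive(shared)

# Receiver publishes a long-term public key.
receiver_priv = X25519PrivateKey.generate()
receiver_pub = receiver_priv.public_key()

# Encapsulate: the sender makes an ephemeral keypair; the "ciphertext" it
# sends is the ephemeral public key, and the shared secret is the DH output.
eph_priv = X25519PrivateKey.generate()
ciphertext_pub = eph_priv.public_key()
sender_key = derive_key(eph_priv.exchange(receiver_pub))

# Decapsulate: the receiver recovers the same secret from the ciphertext.
receiver_key = derive_key(receiver_priv.exchange(ciphertext_pub))
assert sender_key == receiver_key

# The shared key then protects bulk traffic with symmetric crypto (AES-GCM),
# which Shor's algorithm does not break at sufficient key sizes.
nonce = os.urandom(12)
sealed = AESGCM(sender_key).encrypt(nonce, b"hello over a public channel", None)
print(AESGCM(receiver_key).decrypt(nonce, sealed, None))

And here is a toy illustration of the Learning with Errors problem mentioned above. ML-KEM’s module variant works over polynomial rings, but the core idea is the same; the parameter choices below are illustrative, except that 3329 really is the ML-KEM modulus.

import numpy as np

rng = np.random.default_rng(0)
q, n, samples = 3329, 8, 16           # 3329 is the actual ML-KEM modulus

s = rng.integers(0, q, size=n)        # secret vector
A = rng.integers(0, q, size=(samples, n))
e = rng.integers(-2, 3, size=samples) # small random noise in {-2..2}
b = (A @ s + e) % q                   # public LWE samples (A, b)

# Without the noise e, s would fall out by linear algebra; with it, recovery
# becomes a hard lattice problem that Shor's algorithm does not solve.
print(b[:4])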
New research evaluating the effectiveness of reward modeling during Reinforcement Learning from Human Feedback (RLHF): “SEAL: Systematic Error Analysis for Value ALignment.” The paper introduces quantitative metrics for evaluating the effectiveness of modeling and aligning human values:
Abstract: Reinforcement Learning from Human Feedback (RLHF) aims to align language models (LMs) with human values by training reward models (RMs) on binary preferences and using these RMs to fine-tune the base LMs. Despite its importance, the internal mechanisms of RLHF remain poorly understood. This paper introduces new metrics to evaluate the effectiveness of modeling and aligning human values, namely feature imprint, alignment resistance, and alignment robustness. We categorize alignment datasets into target features (desired values) and spoiler features (undesired concepts). By regressing RM scores against these features, we quantify the extent to which RMs reward them, a metric we term feature imprint. We define alignment resistance as the proportion of the preference dataset where RMs fail to match human preferences, and we assess alignment robustness by analyzing RM responses to perturbed inputs. Our experiments, utilizing open-source components like the Anthropic preference dataset and OpenAssistant RMs, reveal significant imprints of target features and a notable sensitivity to spoiler features. We observed a 26% incidence of alignment resistance in portions of the dataset where LM-labelers disagreed with human preferences. Furthermore, we find that misalignment often arises from ambiguous entries within the alignment dataset. These findings underscore the importance of scrutinizing both RMs and alignment datasets for a deeper understanding of value alignment.
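As a sketch of the “feature imprint” regression the abstract describes, the following toy regresses reward-model scores on binary feature indicators; the fitted coefficients show how strongly the RM rewards or penalizes each feature. The feature names and data are synthetic stand-ins, not the paper’s actual datasets or code.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical binary feature matrix: two target features and one spoiler.
features = rng.integers(0, 2, size=(n, 3))

# Synthetic RM scores: positive weight on targets, negative on the spoiler.
true_weights = np.array([1.5, 0.8, -0.6])
scores = features @ true_weights + rng.normal(0, 0.1, size=n)

# The regression coefficients are the "feature imprints."
reg = LinearRegression().fit(features, scores)
for name, coef in zip(["harmlessness", "helpfulness", "refusal"], reg.coef_):
    print(f"feature imprint of {name}: {coef:+.2f}")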