A new malware, called Ramsay, can jump air gaps:
ESET said they’ve been able to track down three different versions of the Ramsay malware, one compiled in September 2019 (Ramsay v1), and two others in early and late March 2020 (Ramsay v2.a and v2.b).
Each version infected victims through different methods, but at its core, the malware’s primary role was to scan an infected computer and gather Word, PDF, and ZIP documents into a hidden storage folder, ready to be exfiltrated at a later date.
Other versions also included a spreader module that appended copies of the Ramsay malware to all PE (portable executable) files found on removable drives and network shares. This is believed to be the mechanism the malware was employing to jump the air gap and reach isolated networks, as users would most likely move the infected executables between the company’s different network layers, eventually landing on an isolated system.
ESET says that during its research, it was not able to positively identify Ramsay’s exfiltration module, or determine how the Ramsay operators retrieved data from air-gapped systems.
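The spreader technique described above suggests a simple countermeasure. This is a hypothetical defensive sketch, not anything from the ESET report: because the spreader modifies PE files in place, an integrity baseline over a removable drive or network share would flag the tampering on a later scan.

```python
# Hypothetical sketch: detect in-place modification of PE files
# (e.g., appended payloads) by comparing against a recorded baseline.
import hashlib
from pathlib import Path

PE_SUFFIXES = {".exe", ".dll"}  # illustrative; real PE detection would check headers

def baseline(root: str) -> dict:
    """Map each PE-like file under root to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in PE_SUFFIXES
    }

def changed_files(root: str, known: dict) -> list:
    """Return paths whose contents no longer match the recorded baseline."""
    current = baseline(root)
    return sorted(p for p, digest in current.items() if known.get(p) != digest)
```

The baseline would have to be stored off the drive being monitored, of course, or the malware could simply update it.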
Honestly, I can’t think of any threat actor that wants this kind of feature other than governments:
The researcher has not made a formal attribution as to who might be behind Ramsay. However, Sanmillan said that the malware contained a large number of shared artifacts with Retro, a malware strain previously developed by DarkHotel, a hacker group that many believe to operate in the interests of the South Korean government.
Posted on May 18, 2020 at 6:15 AM
Interesting taxonomy of machine-learning failures (pdf) that encompasses both mistakes and attacks, or — in their words — intentional and unintentional failure modes. It’s a good basis for threat modeling.
Posted on December 9, 2019 at 5:56 AM
Stanford University’s Cyber Policy Center has published a long report on the security of US elections. Summary: it’s not good.
Posted on June 24, 2019 at 6:05 AM
Someone is flying a drone over Gatwick Airport in order to disrupt service:
Chris Woodroofe, Gatwick’s chief operating officer, said on Thursday afternoon there had been another drone sighting which meant it was impossible to say when the airport would reopen.
He told BBC News: “There are 110,000 passengers due to fly today, and the vast majority of those will see cancellations and disruption. We have had within the last hour another drone sighting so at this stage we are not open and I cannot tell you what time we will open.
“It was on the airport, seen by the police and corroborated. So having seen that drone that close to the runway it was unsafe to reopen.”
The economics of this kind of thing isn’t in our favor. A drone is cheap. Closing an airport for a day is very expensive.
I don’t think we’re going to solve this with jammers, or with GPS-enabled drones that won’t fly over restricted areas. I’ve seen some technologies that will safely disable drones in flight, but I’m not optimistic about those in the near term. The best defense is probably punitive penalties for anyone doing something like this — enough to discourage others.
There are a lot of similar security situations, in which the cost to attack is vastly cheaper than 1) the damage caused by the attack, and 2) the cost to defend. I have long believed that this sort of thing represents an existential threat to our society.
EDITED TO ADD (12/23): The airport has deployed some anti-drone technology and reopened.
EDITED TO ADD (1/2): Maybe there was never a drone.
Posted on December 21, 2018 at 6:24 AM
Here’s some interesting research about how we perceive threats. Basically, as the environment becomes safer, we manufacture new threats. From an essay about the research:
To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task – to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully designed by researchers to range from very intimidating to very harmless.
As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered “threats” depended on how many threats they had seen lately.
This has a lot of implications in security systems where humans have to make judgments about threat and risk: TSA agents, police noticing “suspicious” activities, “see something say something” campaigns, and so on.
The academic paper.
Posted on June 29, 2018 at 9:44 AM
Last year I wrote about the Digital Security Exchange. The project is live:
The DSX works to strengthen the digital resilience of U.S. civil society groups by improving their understanding and mitigation of online threats.
We do this by pairing civil society and social sector organizations with credible and trustworthy digital security experts and trainers who can help them keep their data and networks safe from exposure, exploitation, and attack. We are committed to working with community-based organizations, legal and journalistic organizations, civil rights advocates, local and national organizers, and public and high-profile figures who are working to advance social, racial, political, and economic justice in our communities and our world.
If you are either an organization who needs help, or an expert who can provide help, visit their website.
Note: I am on their advisory committee.
Posted on April 11, 2018 at 6:33 AM
Princeton’s Karen Levy has a good article on computer security and the intimate partner threat:
When you learn that your privacy has been compromised, the common advice is to prevent additional access — delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it’s almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages — but if you don’t preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).
Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials — like passwords and security questions — are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you’re in the same physical space. In many cases, the abuser might even own your phone — or might have access to your communications data because you share a family plan. Things like security questions are unlikely to be effective tools for protecting your security, because the abuser knows or can guess at intimate details about your life — where you were born, what your first job was, the name of your pet.
Posted on March 5, 2018 at 11:13 AM
This video purports to be a bank robbery in Kiev. The robber first threatens a teller, who basically ignores him because she’s behind bullet-proof glass. But then he threatens one of her co-workers, who is on his side of the glass. Interesting example of a security system failing for an unexpected reason.
The video is weird, though. The robber seems very unsure of himself, and never really points the gun at anyone or even holds it properly.
Posted on August 14, 2017 at 6:03 AM
A modern photocopier is basically a computer with a scanner and printer attached. This computer has a hard drive, and scans of images are regularly stored on that drive. This means that when a photocopier is thrown away, that hard drive is filled with pages that the machine copied over its lifetime. As you might expect, some of those pages will contain sensitive information.
This 2011 report was written by the Inspector General of the National Archives and Records Administration (NARA). It found that the organization did nothing to safeguard its photocopiers.
Our audit found that opportunities exist to strengthen controls to ensure photocopier hard drives are protected from potential exposure. Specifically, we found the following weaknesses.
- NARA lacks appropriate controls to ensure all photocopiers across the agency are accounted for and that any hard drives residing on these machines are tracked and properly sanitized or destroyed prior to disposal.
- There are no policies documenting security measures to be taken for photocopiers utilized for general use nor are there procedures to ensure photocopier hard drives are sanitized or destroyed prior to disposal or at the end of the lease term.
- Photocopier lease agreements and contracts do not include a “keep disk” or similar clause as required by NARA’s IT Security Methodology for Media Protection Policy version 5.1.
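A minimal sanitization step of the kind the report calls for might look like the following sketch, which uses GNU `shred` to overwrite the data. It’s demonstrated on a scratch file here; on a retired photocopier drive you would point `shred` at the block device itself (a path like `/dev/sdX` is purely illustrative).

```shell
# Demonstration on a scratch file; on a real decommissioned drive you
# would target the block device instead (e.g., an illustrative /dev/sdX).
tmp=$(mktemp)
printf 'sensitive scanned page' > "$tmp"

# Overwrite the contents three times, then once more with zeros.
shred --iterations=3 --zero "$tmp"

# The original text is gone; the file now contains only zero bytes.
od -c "$tmp"

rm -f "$tmp"
```

Note that overwriting is only reliable for conventional drives; for flash-based storage, physical destruction or the drive’s built-in secure-erase command is the safer route.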
I don’t mean to single this organization out. Pretty much no one thinks about this security threat.
Posted on January 2, 2017 at 6:12 AM
Interesting research: Debora Halbert, “Intellectual property theft and national security: Agendas and assumptions”:
Abstract: About a decade ago, intellectual property started getting systematically treated as a national security threat to the United States. The scope of the threat is broadly conceived to include hacking, trade secret theft, file sharing, and even foreign students enrolling in American universities. In each case, the national security of the United States is claimed to be at risk, not just its economic competitiveness. This article traces the U.S. government’s efforts to establish and articulate intellectual property theft as a national security issue. It traces the discourse on intellectual property as a security threat and its place within the larger security dialogue of cyberwar and cybersecurity. It argues that the focus on the theft of intellectual property as a security issue helps justify enhanced surveillance and control over the Internet and its future development. Such a framing of intellectual property has consequences for how we understand information exchange on the Internet and for the future of U.S. diplomatic relations around the globe.
EDITED TO ADD (7/6): Preliminary version, no paywall.
Posted on July 5, 2016 at 10:54 AM