Blog: June 2021 Archives

Risks of Evidentiary Software

Over at Lawfare, Susan Landau has an excellent essay on the risks posed by software used to collect evidence (a Breathalyzer is probably the most obvious example).

Bugs and vulnerabilities can lead to inaccurate evidence, but the proprietary nature of software makes it hard for defendants to examine it.

The software engineers proposed a three-part test. First, the court should have access to the “Known Error Log,” which should be part of any professionally developed software project. Next, the court should consider whether the evidence being presented could be materially affected by a software error. Ladkin and his co-authors noted that a chain of emails going back and forth is unlikely to have such an error, but the time that a software tool logs when an application was used could easily be incorrect. Finally, the reliability experts recommended checking whether the code adheres to an industry standard used in a non-computerized version of the task (e.g., bookkeepers always record every transaction, and thus so should bookkeeping software).

[…]

Inanimate objects have long served as evidence in courts of law: the door handle with a fingerprint, the glove found at a murder scene, the Breathalyzer result that shows a blood alcohol level three times the legal limit. But the last of those examples is substantively different from the other two. Data from a Breathalyzer is not the physical entity itself, but rather a software calculation of the level of alcohol in the breath of a potentially drunk driver. As long as the breath sample has been preserved, one can always go back and retest it on a different device.

What happens if the software makes an error and there is no sample to check, or if the software itself produces the evidence? At the time we wrote our article on the use of software as evidence, there was no overriding requirement that law enforcement provide a defendant with the code so that they might examine it themselves.

[…]

Given the high rate of bugs in complex software systems, my colleagues and I concluded that when computer programs produce the evidence, courts cannot assume that the evidentiary software is reliable. Instead the prosecution must make the code available for an “adversarial audit” by the defendant’s experts. And to avoid problems in which the government doesn’t have the code, government procurement contracts must include delivery of source code—code that is more-or-less readable by people—for every version of the code or device.

Posted on June 29, 2021 at 9:12 AM · 46 Comments

NFC Flaws in POS Devices and ATMs

It’s a series of vulnerabilities:

Josep Rodriguez, a researcher and consultant at security firm IOActive, has spent the last year digging up and reporting vulnerabilities in the so-called near-field communications reader chips used in millions of ATMs and point-of-sale systems worldwide. NFC systems are what let you wave a credit card over a reader—rather than swipe or insert it—to make a payment or extract money from a cash machine. You can find them on countless retail store and restaurant counters, vending machines, taxis, and parking meters around the globe.

Now Rodriguez has built an Android app that allows his smartphone to mimic those credit card radio communications and exploit flaws in the NFC systems’ firmware. With a wave of his phone, he can exploit a variety of bugs to crash point-of-sale devices, hack them to collect and transmit credit card data, invisibly change the value of transactions, and even lock the devices while displaying a ransomware message. Rodriguez says he can even force at least one brand of ATMs to dispense cash, though that “jackpotting” hack only works in combination with additional bugs he says he’s found in the ATMs’ software. He declined to specify or disclose those flaws publicly due to nondisclosure agreements with the ATM vendors.
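
For readers curious what these reader exchanges look like on the wire: every contactless payment starts with the terminal sending an ISO 7816 SELECT command for the payment environment (the PPSE). Below is a minimal Python sketch of that first APDU. It is illustrative only: Rodriguez’s exploits work by answering exchanges like this one with malformed or oversized responses that the readers’ firmware fails to bounds-check, and the specific flaws are not public.

    # Building the SELECT PPSE command that opens a contactless payment
    # session. This illustrates the protocol layer being attacked; it is
    # not exploit code -- the vulnerable parsing lives on the reader.
    def build_select(aid: bytes) -> bytes:
        # CLA=00, INS=A4 (SELECT), P1=04 (select by name), P2=00,
        # then Lc (length of the name), the name itself, and Le=00.
        return bytes([0x00, 0xA4, 0x04, 0x00, len(aid)]) + aid + b"\x00"

    # "2PAY.SYS.DDF01" is the standard name of the contactless payment
    # environment (PPSE) defined by EMV.
    select_ppse = build_select(b"2PAY.SYS.DDF01")
    print(select_ppse.hex())  # 00a404000e325041592e5359532e444446303100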

Posted on June 28, 2021 at 6:53 AM · 7 Comments

AI-Piloted Fighter Jets

News from Georgetown’s Center for Security and Emerging Technology:

China Claims Its AI Can Beat Human Pilots in Battle: Chinese state media reported that an AI system had successfully defeated human pilots during simulated dogfights. According to the Global Times report, the system had shot down several PLA pilots during a handful of virtual exercises in recent years. Observers outside China noted that while reports coming out of state-controlled media outlets should be taken with a grain of salt, the capabilities described in the report are not outside the realm of possibility. Last year, for example, an AI agent defeated a U.S. Air Force F-16 pilot five times out of five as part of DARPA’s AlphaDogfight Trials (which we covered at the time). While the Global Times report indicated plans to incorporate AI into future fighter planes, it is not clear how far away the system is from real-world testing. At the moment, the system appears to be used only for training human pilots. DARPA, for its part, is aiming to test dogfights with AI-piloted subscale jets later this year and with full-scale jets in 2023 and 2024.

Posted on June 25, 2021 at 8:53 AM · 24 Comments

Banning Surveillance-Based Advertising

The Norwegian Consumer Council just published a fantastic new report: “Time to Ban Surveillance-Based Advertising.” From the Introduction:

The challenges caused and entrenched by surveillance-based advertising include, but are not limited to:

  • privacy and data protection infringements
  • opaque business models
  • manipulation and discrimination at scale
  • fraud and other criminal activity
  • serious security risks

In the following chapters, we describe various aspects of these challenges and point out how today’s dominant model of online advertising is a threat to consumers, democratic societies, the media, and even to advertisers themselves. These issues are significant and serious enough that we believe that it is time to ban these detrimental practices.

A ban on surveillance-based practices should be complemented by stronger enforcement of existing legislation, including the General Data Protection Regulation, competition regulation, and the Unfair Commercial Practices Directive. However, enforcement currently consumes significant time and resources, and usually happens after the damage has already been done. Banning surveillance-based advertising in general will force structural changes to the advertising industry and alleviate a number of significant harms to consumers and to society at large.

A ban on surveillance-based advertising does not mean that one can no longer finance digital content using advertising. To illustrate this, we describe some possible ways forward for advertising-funded digital content, and point to alternative advertising technologies that may contribute to a safer and healthier digital economy for both consumers and businesses.

Press release. Press coverage.

I signed their open letter.

Posted on June 24, 2021 at 9:44 AM · 23 Comments

Mollitiam Industries is the Newest Cyberweapons Arms Manufacturer

Wired is reporting on a company called Mollitiam Industries:

Marketing materials left exposed online by a third party claim Mollitiam’s interception products, dubbed “Invisible Man” and “Night Crawler,” are capable of remotely accessing a target’s files and location, and of covertly turning on a device’s camera and microphone. Its spyware is also said to be equipped with a keylogger, which means every keystroke made on an infected device—including passwords, search queries and messages sent via encrypted messaging apps—can be tracked and monitored.

To evade detection, the malware makes use of the company’s so-called “invisible low stealth technology” and its Android product is advertised as having “low data and battery consumption” to prevent people from suspecting their phone or tablet has been infected. Mollitiam is also currently marketing a tool that it claims enables “mass surveillance of digital profiles and identities” across social media and the dark web.

Posted on June 23, 2021 at 6:01 AM · 26 Comments

Apple Will Offer Onion Routing for iCloud/Safari Users

At this year’s Apple Worldwide Developer Conference, Apple announced something called “iCloud Private Relay.” That’s basically its private version of onion routing, which is what Tor does.

Private Relay is built into both the forthcoming iOS and macOS versions, but it will only work if you’re an iCloud Plus subscriber and you have it enabled from within your iCloud settings.

Once it’s enabled and you open Safari to browse, Private Relay splits up two pieces of information that—when delivered to websites together as normal—could quickly identify you. Those are your IP address (who and exactly where you are) and your DNS request (the address of the website you want, in numeric form).

Once the two pieces of information are split, Private Relay encrypts your DNS request and sends both the IP address and now-encrypted DNS request to an Apple proxy server. This is the first of two stops your traffic will make before you see a website. At this point, Apple has already handed over the encryption keys to the third party running the second of the two stops, so Apple can’t see what website you’re trying to access with your encrypted DNS request. All Apple can see is your IP address.

Although it has received both your IP address and encrypted DNS request, Apple’s server doesn’t send your original IP address to the second stop. Instead, it gives you an anonymous IP address that is approximately associated with your general region or city.
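
As a rough mental model, here is a toy Python sketch of that two-hop split. The hop names and shared symmetric keys are hypothetical stand-ins; the real system uses proper key exchange and standardized proxying. But the trust property is the same: hop 1 sees who you are but not where you’re going, and hop 2 sees where you’re going but not who you are.

    # Toy two-hop relay (requires the "cryptography" package).
    from cryptography.fernet import Fernet

    hop1_key = Fernet.generate_key()  # shared with the ingress proxy ("Apple")
    hop2_key = Fernet.generate_key()  # shared with the third-party egress proxy

    client_ip = b"203.0.113.7"
    dns_query = b"example.com"

    # The client encrypts the DNS request so only hop 2 can read it,
    # then wraps everything in a layer only hop 1 can read.
    inner = Fernet(hop2_key).encrypt(dns_query)
    outer = Fernet(hop1_key).encrypt(client_ip + b"|" + inner)

    # Hop 1 learns the client's IP, but the DNS request is still ciphertext.
    ip_seen, sealed_dns = Fernet(hop1_key).decrypt(outer).split(b"|", 1)
    anonymized_ip = b"198.51.100.1"  # a generic address for the client's region

    # Hop 2 learns the DNS request, but sees only the anonymized IP.
    print(anonymized_ip, Fernet(hop2_key).decrypt(sealed_dns))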

Not available in China, of course—and also Belarus, Colombia, Egypt, Kazakhstan, Saudi Arabia, South Africa, Turkmenistan, Uganda, and the Philippines.

Posted on June 22, 2021 at 6:54 AM · 20 Comments

The Future of Machine Learning and Cybersecurity

The Center for Security and Emerging Technology has a new report: “Machine Learning and Cybersecurity: Hype and Reality.” Here’s the bottom line:

The report offers four conclusions:

  • Machine learning can help defenders more accurately detect and triage potential attacks. However, in many cases these technologies are elaborations on long-standing methods—not fundamentally new approaches—that bring new attack surfaces of their own.
  • A wide range of specific tasks could be fully or partially automated with the use of machine learning, including some forms of vulnerability discovery, deception, and attack disruption. But many of the most transformative of these possibilities still require significant machine learning breakthroughs.
  • Overall, we anticipate that machine learning will provide incremental advances to cyber defenders, but it is unlikely to fundamentally transform the industry barring additional breakthroughs. Some of the most transformative impacts may come from making previously un- or under-utilized defensive strategies available to more organizations.
  • Although machine learning will be neither predominantly offense-biased nor defense-biased, it may subtly alter the threat landscape by making certain types of strategies more appealing to attackers or defenders.

Posted on June 21, 2021 at 6:31 AM · 17 Comments

Intentional Flaw in GPRS Encryption Algorithm GEA-1

General Packet Radio Service (GPRS) is a mobile data standard that was widely used in the early 2000s. The first encryption algorithm for that standard was GEA-1, a stream cipher built on three linear-feedback shift registers and a non-linear combining function. Although the algorithm has a 64-bit key, the effective key length is only 40 bits, due to “an exceptional interaction of the deployed LFSRs and the key initialization, which is highly unlikely to occur by chance.”
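
To make the construction concrete, here is a toy Python sketch of a keystream generator with the same shape: three LFSRs feeding a nonlinear combiner. The register lengths match GEA-1’s (31, 32, and 33 bits), but the taps, the key loading, and the majority-function combiner below are placeholders, not the real cipher. The published weakness is in the key loading: it leaves the joint state of two of the real registers with only 2^40 reachable values, which is what reduces the effective key length to 40 bits.

    # Toy three-LFSR stream generator in the shape of GEA-1. Register
    # lengths are GEA-1's; taps, key loading, and combiner are invented.
    class LFSR:
        def __init__(self, length, taps, state):
            self.length = length
            self.taps = taps  # feedback tap positions (0-indexed bits)
            self.state = state & ((1 << length) - 1) or 1  # avoid all-zero

        def step(self):
            out = self.state & 1
            fb = 0
            for t in self.taps:
                fb ^= (self.state >> t) & 1
            self.state = (self.state >> 1) | (fb << (self.length - 1))
            return out

    def keystream(key):
        # Placeholder key loading. In real GEA-1, the loading procedure
        # accidentally constrains two registers' joint state to 2^40
        # values, enabling a practical meet-in-the-middle attack.
        a = LFSR(31, (0, 3), key)
        b = LFSR(32, (0, 1, 5), key >> 16)
        c = LFSR(33, (0, 2), key >> 32)
        while True:
            x, y, z = a.step(), b.step(), c.step()
            yield (x & y) ^ (y & z) ^ (x & z)  # majority: a stand-in combiner

    ks = keystream(0x0123456789ABCDEF)
    print([next(ks) for _ in range(16)])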

GEA-1 was designed by the European Telecommunications Standards Institute in 1998. ETSI was—and maybe still is—under the auspices of SOGIS: the Senior Officials Group, Information Systems Security. That’s basically the intelligence agencies of the EU countries.

Details are in the paper: “Cryptanalysis of the GPRS Encryption Algorithms GEA-1 and GEA-2.” GEA-2 does not have the same flaw, although the researchers found a practical attack given enough keystream.

Hacker News thread.

EDITED TO ADD (6/18): News article.

Posted on June 17, 2021 at 1:51 PM · 77 Comments

VPNs and Trust

TorrentFreak surveyed nineteen VPN providers, asking them questions about their privacy practices: what data they keep, how they respond to court orders, what country they are incorporated in, and so on.

Most interesting to me are the home countries of these companies. ExpressVPN is incorporated in the British Virgin Islands. NordVPN is incorporated in Panama. There are VPNs from the Seychelles, Malaysia, and Bulgaria. There are VPNs from more Western and democratic countries like the US, Switzerland, Canada, and Sweden. Presumably all of those companies follow the laws of their home country.

And it matters. I’ve been thinking about this since Trojan Shield was made public. This is the joint US/Australia-run encrypted messaging service that lured criminals to use it, and then spied on everything they did. Or, at least, Australian law enforcement spied on everyone. The FBI wasn’t able to because the US has better privacy laws.

We don’t talk about it a lot, but VPNs are entirely based on trust. As a consumer, you have no idea which company will best protect your privacy. You don’t know the data protection laws of the Seychelles or Panama. You don’t know which countries can put extra-legal pressure on companies operating within their jurisdiction. You don’t know who actually owns and runs the VPNs. You don’t even know which foreign companies the NSA has targeted for mass surveillance. All you can do is make your best guess, and hope you guessed well.

Posted on June 16, 2021 at 6:17 AM · 50 Comments

Andrew Appel on New Hampshire’s Election Audit

Really interesting two-part analysis of the audit conducted after the 2020 election in Windham, New Hampshire.

Based on preliminary reports published by the team of experts that New Hampshire engaged to examine an election discrepancy, it appears that a buildup of dust in the read heads of optical-scan voting machines (possibly over several years of use) can cause paper-fold lines in absentee ballots to be interpreted as votes… New Hampshire (and other states) may need to maintain the accuracy of their optical-scan voting machines by paying attention to three issues:

  • Routine risk-limiting audits to detect inaccuracies if/when they occur. (A toy example of such an audit follows this list.)
  • Clean the dust out of optical-scan read heads regularly; pay attention to the calibration of the optical-scan machines.
  • Make sure that the machines that automatically fold absentee ballots (before mailing them to voters) don’t put the fold lines over vote-target ovals. (Same for election workers who fold ballots by hand.)
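
For readers unfamiliar with the first recommendation: a ballot-polling risk-limiting audit keeps drawing random ballots until the chance that the reported outcome is wrong falls below a chosen risk limit. Here is a minimal Python simulation in the style of the BRAVO method (Lindeman, Stark, and Yates), under the simplifying assumptions of two candidates and no invalid ballots; real audits are more involved.

    import random

    def bravo(reported_share, true_share, risk_limit=0.05, max_draws=100_000):
        """Sample ballots until the reported winner is confirmed at the
        risk limit, or give up (escalate to a full hand count).
        reported_share is the winner's reported vote share (> 0.5)."""
        t = 1.0  # likelihood ratio against the hypothesis of a tie
        for n in range(1, max_draws + 1):
            if random.random() < true_share:  # drew a ballot for the winner
                t *= reported_share / 0.5
            else:                             # drew a ballot for the loser
                t *= (1 - reported_share) / 0.5
            if t >= 1 / risk_limit:
                return n  # outcome confirmed after n sampled ballots
        return None

    random.seed(1)
    # A 55% result typically needs only several hundred sampled ballots.
    print(bravo(0.55, 0.55))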

Posted on June 15, 2021 at 10:45 AM · 23 Comments

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Posted on June 14, 2021 at 11:55 AM · 2 Comments

TikTok Can Now Collect Biometric Data

This is probably worth paying attention to:

A change to TikTok’s U.S. privacy policy on Wednesday introduced a new section that says the social video app “may collect biometric identifiers and biometric information” from its users’ content. This includes things like “faceprints and voiceprints,” the policy explained. Reached for comment, TikTok could not confirm what product developments necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users, but said it would ask for consent in the case such data collection practices began.

Posted on June 14, 2021 at 10:11 AM · 29 Comments

FBI/AFP-Run Encrypted Phone

For three years, the Federal Bureau of Investigation and the Australian Federal Police owned and operated a commercial encrypted phone app, called AN0M, that was used by organized crime around the world. Of course, the police were able to read everything—I don’t even know if this qualifies as a backdoor. This week, the world’s police organizations announced 800 arrests based on text messages sent over the app. We’ve seen law enforcement take over encrypted apps before: for example, EncroChat. This operation, code-named Trojan Shield, is the first time law enforcement managed an app from the beginning.

If there is any moral to this, it’s one that all of my blog readers should already know: trust is essential to security. And the number of people you need to trust is larger than you might originally think. For an app to be secure, you need to trust the hardware, the operating system, the software, the update mechanism, the login mechanism, and on and on and on. If one of those is untrustworthy, the whole system is insecure.

It’s the same reason blockchain-based currencies are so insecure, even if the cryptography is sound.

Posted on June 11, 2021 at 6:32 AM · 48 Comments

Detecting Deepfake Picture Editing

“Markpainting” is a clever technique to watermark photos in such a way that ML-based manipulation becomes easier to detect:

An image owner can modify their image in subtle ways which are not themselves very visible, but will sabotage any attempt to inpaint it by adding visible information determined in advance by the markpainter.

One application is tamper-resistant marks. For example, a photo agency that makes stock photos available on its website with copyright watermarks can markpaint them in such a way that anyone using common editing software to remove a watermark will fail; the copyright mark will be markpainted right back. So watermarks can be made a lot more robust.

Here’s the paper: “Markpainting: Adversarial Machine Learning Meets Inpainting,” by David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, and Ross Anderson.

Abstract: Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting.
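
Mechanically, markpainting is an adversarial-examples-style optimization: you repeatedly nudge the image’s pixels, within an imperceptibility budget, in the direction that makes the inpainting model’s output look like your chosen mark. A schematic Python sketch follows; the loss_grad callable is a hypothetical stand-in for backpropagation through a real differentiable inpainting network, and the paper’s actual losses and schedules differ.

    import numpy as np

    def markpaint(image, mask, target_mark, loss_grad,
                  eps=8 / 255, lr=1 / 255, steps=100):
        """Perturb `image` within an L-infinity ball of radius `eps` so
        that inpainting the masked region tends to reproduce
        `target_mark`. `loss_grad(x, mask, target_mark)` must return the
        gradient, with respect to the image, of a loss measuring how far
        the model's inpainted output is from the desired mark."""
        x = image.copy()
        for _ in range(steps):
            g = loss_grad(x, mask, target_mark)
            x = x - lr * np.sign(g)                   # signed gradient step
            x = np.clip(x, image - eps, image + eps)  # keep the edit subtle
            x = np.clip(x, 0.0, 1.0)                  # stay a valid image
        return x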

Posted on June 10, 2021 at 6:19 AM · 10 Comments

Information Flows and Democracy

Henry Farrell and I published a paper on fixing American democracy: “Rechanneling Beliefs: How Information Flows Hinder or Help Democracy.”

It’s much easier for democratic stability to break down than most people realize, but this doesn’t mean we must despair over the future. It’s possible, though very difficult, to back away from our current situation towards one of greater democratic stability. This wouldn’t entail a restoration of a previous status quo. Instead, it would recognize that the status quo was less stable than it seemed, and a major source of the tensions that have started to unravel it. What we need is a dynamic stability, one that incorporates new forces into American democracy rather than trying to deny or quash them.

This paper is our attempt to explain what this might mean in practice. We start by analyzing the problem and explaining more precisely why a breakdown in public consensus harms democracy. We then look at how these beliefs are being undermined by three feedback loops, in which anti-democratic actions and anti-democratic beliefs feed on each other. Finally, we explain how these feedback loops might be redirected so as to sustain democracy rather than undermining it.

To be clear: redirecting these and other energies in more constructive ways presents enormous challenges, and any plausible success will at best be untidy and provisional. But, almost by definition, that’s true of any successful democratic reforms where people of different beliefs and values need to figure out how to coexist. Even when it’s working well, democracy is messy. Solutions to democratic breakdowns are going to be messy as well.

This is part of our series of papers looking at democracy as an information system. The first paper was “Common-Knowledge Attacks on Democracy.”

Posted on June 9, 2021 at 6:46 AM · 35 Comments

Vulnerabilities in Weapons Systems

“If you think any of these systems are going to work as expected in wartime, you’re fooling yourself.”

That was Bruce’s response at a conference hosted by US Transportation Command in 2017, after learning that their computerized logistical systems were mostly unclassified and on the Internet. That may be necessary to keep in touch with civilian companies like FedEx in peacetime or when fighting terrorists or insurgents. But in a new era facing off with China or Russia, it is dangerously complacent.

Any twenty-first century war will include cyber operations. Weapons and support systems will be successfully attacked. Rifles and pistols won’t work properly. Drones will be hijacked midair. Boats won’t sail, or will be misdirected. Hospitals won’t function. Equipment and supplies will arrive late or not at all.

Our military systems are vulnerable. We need to face that reality by halting the purchase of insecure weapons and support systems and by incorporating the realities of offensive cyberattacks into our military planning.

Over the past decade, militaries have established cyber commands and developed cyberwar doctrine. However, much of the current discussion is about offense. Increasing our offensive capabilities without being able to secure them is like having all the best guns in the world, and then storing them in an unlocked, unguarded armory. They won’t just be stolen; they’ll be subverted.

During that same period, we’ve seen increasingly brazen cyberattacks by everyone from criminals to governments. Everything is now a computer, and those computers are vulnerable. Cars, medical devices, power plants, and fuel pipelines have all been targets. Military computers, whether they’re embedded inside weapons systems or on desktops managing the logistics of those weapons systems, are similarly vulnerable. We could see effects as simple as making a tank impossible to start up, or as sophisticated as retargeting a missile midair.

Military software is unlikely to be any more secure than commercial software. Although sensitive military systems rely on domestically manufactured chips as part of the Trusted Foundry program, many military systems contain the same foreign chips and code that commercial systems do: just like everyone around the world uses the same mobile phones, networking equipment, and computer operating systems. For example, there has been serious concern over Chinese-made 5G networking equipment that might be used by China to install “backdoors” that would allow the equipment to be controlled. This is just one of many risks to our normal civilian computer supply chains. And since military software is vulnerable to the same cyberattacks as commercial software, military supply chains have many of the same risks.

This is not speculative. A 2018 GAO report expressed concern regarding the lack of secure and patchable US weapons systems. The report observed that “in operational testing, the [Department of Defense] routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic.” It’s a similar attitude to corporate executives who believe that they can’t be hacked—and equally naive.

An updated GAO report from earlier this year found some improvements, but the basic problem remained: “DOD is still learning how to contract for cybersecurity in weapon systems, and selected programs we reviewed have struggled to incorporate systems’ cybersecurity requirements into contracts.” While DOD now appears aware of the lack of cybersecurity requirements, it is still not sure how to fix the problem, and in three of the five cases GAO reviewed, it simply chose not to include the requirements at all.

Militaries around the world are now exploiting these vulnerabilities in weapons systems to carry out operations. When Israel bombed a Syrian nuclear reactor in 2007, the raid was preceded by what is believed to have been a cyberattack on Syrian air defenses that caused radar screens to show no threat as bombers zoomed overhead. In 2018, Trident Juncture, a 29-country NATO exercise that included cyberweapons, was disrupted by Russian GPS jamming. NATO does try to test cyberweapons outside such exercises, but has limited scope in doing so. In May, Jens Stoltenberg, the NATO secretary-general, said that “NATO computer systems are facing almost daily cyberattacks.”

The war of the future will not only be about explosions, but will also be about disabling the systems that make armies run. It’s not (solely) that bases will get blown up; it’s that some bases will lose power, data, and communications. It’s not that self-driving trucks will suddenly go mad and begin rolling over friendly soldiers; it’s that they’ll casually roll off roads or into water where they sit, rusting, and in need of repair. It’s not that targeting systems on guns will be retargeted to 1600 Pennsylvania Avenue; it’s that many of them could simply turn off and not turn back on again.

So, how do we prepare for this next war? First, militaries need to introduce a little anarchy into their planning. Let’s have wargames where essential systems malfunction or are subverted, not all of the time, but randomly. To help combat siloed military thinking, include some civilians as well. Allow their ideas into the room when predicting potential enemy action. And militaries need to have well-developed backup plans for when systems are subverted. In Joe Haldeman’s 1975 science-fiction novel The Forever War, he postulated a “stasis field” that forced his space marines to rely on nothing more than Roman military technologies, like javelins. We should be thinking in the same direction.

NATO does not yet allow civilians who are not employed by NATO or associated military contractors to access its training cyber ranges, where vulnerabilities could be discovered and remediated before battlefield deployment. Last year, one of us (Tarah) was listening to a NATO briefing after the end of the 2020 Cyber Coalition exercises, and asked how she and other information security researchers could volunteer to test the cyber ranges used to train its cyber incident response force. She was told that including civilians would be a “welcome thought experiment in the tabletop exercises,” but that including them in reality wasn’t considered. There is a rich opportunity for improvement here: outside researchers could provide transparency into where improvements need to be made.

Second, it’s time to take cybersecurity seriously in military procurement, from weapons systems to logistics and communications contracts. In the three-year span from the original 2018 GAO report to this year’s report, cybersecurity audit compliance went from 0% to 40% (the two of five programs mentioned earlier). We need to get much better. DOD requires that its contractors and suppliers follow the Cybersecurity Maturity Model Certification process; it should abide by the same standards. Making those standards both more rigorous and mandatory would be an obvious second step.

Gone are the days when we can pretend that our technologies will work in the face of a military cyberattack. Securing our systems will make everything we buy more expensive—maybe a lot more expensive. But the alternative is no longer viable.

The future of war is cyberwar. If your weapons and systems aren’t secure, don’t even bother bringing them onto the battlefield.

This essay was written with Tarah Wheeler, and previously appeared in Brookings TechStream.

Posted on June 8, 2021 at 5:32 AM · 29 Comments

The Supreme Court Narrowed the CFAA

In a 6-3 ruling, the Supreme Court just narrowed the scope of the Computer Fraud and Abuse Act:

In a ruling delivered today, the court sided with Van Buren and overturned his 18-month conviction.

In a 37-page opinion written and delivered by Justice Amy Coney Barrett, the court explained that the “exceeds authorized access” language was, indeed, too broad.

Justice Barrett said the clause was effectively making criminals of most US citizens who ever used a work resource to perform unauthorized actions, such as updating a dating profile, checking sports scores, or paying bills at work.

What today’s ruling means is that the CFAA cannot be used to prosecute rogue employees who have legitimate access to work-related resources; such employees will need to be prosecuted under different charges.

The ruling does not apply to former employees accessing their old work systems because their access has been revoked and they’re not “authorized” to access those systems anymore.

More.

It’s a good ruling, and one that will benefit security researchers. But the confusing part is footnote 8:

For present purposes, we need not address whether this inquiry turns only on technological (or “code-based”) limitations on access, or instead also looks to limits contained in contracts or policies.

It seems to me that this is exactly what the ruling does address. The court overturned the conviction because the defendant was not limited by technology, but only by policies. So that footnote doesn’t make any sense.

I have written about this general issue before, in the context of adversarial machine learning research.

Posted on June 7, 2021 at 6:09 AM · 20 Comments

Security and Human Behavior (SHB) 2021

Today is the second day of the fourteenth Workshop on Security and Human Behavior. The University of Cambridge is the host, but we’re all on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. The format translates well to Zoom, and we’re using random breakouts for the breaks between sessions.

I always find this workshop to be the most intellectually stimulating two days of my professional year. It influences my thinking in different, and sometimes surprising, ways.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, and thirteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Posted on June 4, 2021 at 6:05 AM · 15 Comments

The DarkSide Ransomware Gang

The New York Times has a long story on the DarkSide ransomware gang.

A glimpse into DarkSide’s secret communications in the months leading up to the Colonial Pipeline attack reveals a criminal operation on the rise, pulling in millions of dollars in ransom payments each month.

DarkSide offers what is known as “ransomware as a service,” in which a malware developer charges a user fee to so-called affiliates like Woris, who may not have the technical skills to actually create ransomware but are still capable of breaking into a victim’s computer systems.

DarkSide’s services include providing technical support for hackers, negotiating with targets like the publishing company, processing payments, and devising tailored pressure campaigns through blackmail and other means, such as secondary hacks to crash websites. DarkSide’s user fees operated on a sliding scale: 25 percent for any ransoms less than $500,000 down to 10 percent for ransoms over $5 million, according to the computer security firm FireEye.
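
As arithmetic, the sliding scale matters to affiliates: on a $400,000 ransom DarkSide’s 25 percent cut is $100,000, while on a $10 million ransom its 10 percent cut is $1 million. Only those two endpoint rates are public; the hypothetical Python sketch below invents a linear interpolation for the middle band purely for illustration.

    def darkside_fee(ransom_usd: float) -> float:
        # Endpoint rates are from FireEye's reporting; the middle band
        # is an invented linear interpolation, NOT the real fee table.
        if ransom_usd < 500_000:
            rate = 0.25
        elif ransom_usd > 5_000_000:
            rate = 0.10
        else:
            frac = (ransom_usd - 500_000) / 4_500_000
            rate = 0.25 - 0.15 * frac
        return ransom_usd * rate

    for r in (400_000, 2_000_000, 10_000_000):
        print(f"${r:,} ransom -> ${darkside_fee(r):,.0f} fee")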

Posted on June 2, 2021 at 9:09 AM · 22 Comments
