Crypto-Gram

May 15, 2020

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. California Needlessly Reduces Privacy During COVID-19 Pandemic
  2. The DoD Isn’t Fixing Its Security Problems
  3. Vulnerability Finding Using Machine Learning
  4. Another Story of Bad 1970s Encryption
  5. New iPhone Zero-Day Discovered
  6. Chinese COVID-19 Disinformation Campaign
  7. Global Surveillance in the Wake of COVID-19
  8. Automatic Instacart Bots
  9. Fooling NLP Systems Through Word Swapping
  10. How Did Facebook Beat a Federal Wiretap Demand?
  11. Securing Internet Videoconferencing Apps: Zoom and Others
  12. Me on COVID-19 Contact Tracing Apps
  13. Denmark, Sweden, Germany, the Netherlands and France SIGINT Alliance
  14. Malware in Google Apps
  15. ILOVEYOU Virus
  16. iOS XML Bug
  17. Used Tesla Components Contain Personal Information
  18. Another California Data Privacy Law
  19. Attack Against PC Thunderbolt Port
  20. New US Electronic Warfare Platform
  21. US Government Exposes North Korean Malware

California Needlessly Reduces Privacy During COVID-19 Pandemic

[2020.04.16] This one isn’t even related to contact tracing:

On March 17, 2020, the federal government relaxed a number of telehealth-related regulatory requirements due to COVID-19. On April 3, 2020, California Governor Gavin Newsom issued Executive Order N-43-20 (the Order), which relaxes various telehealth reporting requirements, penalties, and enforcements otherwise imposed under state laws, including those associated with unauthorized access and disclosure of personal information through telehealth mediums.

Lots of details at the link.


The DoD Isn’t Fixing Its Security Problems

[2020.04.17] It has produced several reports outlining what’s wrong and what needs to be fixed. It’s not fixing them:

GAO looked at three DoD-designed initiatives to see whether the Pentagon is following through on its own goals. In a majority of cases, DoD has not completed the cybersecurity training and awareness tasks it set out to. The status of various efforts is simply unknown because no one has tracked their progress. While an assessment of “cybersecurity hygiene” like this doesn’t directly analyze a network’s hardware and software vulnerabilities, it does underscore the need for people who use digital systems to interact with them in secure ways. Especially when those people work on national defense.

[…]

The report focuses on three ongoing DoD cybersecurity hygiene initiatives. The 2015 Cybersecurity Culture and Compliance Initiative outlined 11 education-related goals for 2016; the GAO found that the Pentagon completed only four of them. Similarly, the 2015 Cyber Discipline plan outlined 17 goals related to detecting and eliminating preventable vulnerabilities from DoD’s networks by the end of 2018. GAO found that DoD has met only six of those. Four are still pending, and the status of the seven others is unknown, because no one at DoD has kept track of the progress.

GAO repeatedly identified lack of status updates and accountability as core issues within DoD’s cybersecurity awareness and education efforts. It was unclear in many cases who had completed which training modules. There were even DoD departments lacking information on which users should have their network access revoked for failure to complete trainings.

The report.


Vulnerability Finding Using Machine Learning

[2020.04.20] Microsoft is training a machine-learning system to find software bugs:

At Microsoft, 47,000 developers generate nearly 30,000 bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001 Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high-priority security bugs 97 percent of the time.

News article.
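
A minimal sketch of that kind of text-based bug triage, in Python. This is illustrative only: the training titles, labels, and model choice are invented for the example and have nothing to do with Microsoft’s actual data or classifier.

    # Toy security/non-security bug classifier (illustrative sketch).
    # Requires scikit-learn.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    titles = [
        "buffer overflow in file parser",     # invented training examples
        "XSS in comment field",
        "typo in settings dialog",
        "button misaligned after resize",
    ]
    labels = ["security", "security", "non-security", "non-security"]

    # TF-IDF turns each title into a sparse word-weight vector; logistic
    # regression learns a linear boundary between the two classes.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(titles, labels)

    print(model.predict(["stack overflow in image parser"]))
    # -> ['security']: "overflow" and "parser" pull it toward that class

Per the quote above, the hard part at Microsoft’s scale is curating and labeling the data, not the model itself.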

I wrote about this in 2018:

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic—and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things (IoT) systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.


Another Story of Bad 1970s Encryption

[2020.04.21] This one is from the Netherlands. It seems to be clever cryptanalysis rather than a backdoor.

The Dutch intelligence service has been able to read encrypted communications from dozens of countries since the late 1970s thanks to a microchip, according to research by de Volkskrant on Thursday. The Netherlands could eavesdrop on confidential communication from countries such as Iran, Egypt and Saudi Arabia.

Philips, together with Siemens, built an encryption machine in the late 1970s. The device, the Aroflex, was used for secret communication between NATO allies. In addition, the companies also wanted to market the T1000CA, a commercial variant of the Aroflex with less strong cryptography.

The Volkskrant investigation shows that the Ministry of Foreign Affairs and the Marine Intelligence Service (MARID) cracked the cryptography of this device before it was launched. Philips helped the ministry and the intelligence service.

Normally it would take at least a month and a half to crack the T1000CA encryption. “Too long to get useful information from intercepted communication,” the newspaper writes. But MARID employees, together with Philips, succeeded in accelerating this 2,500 times by developing a special microchip.

The T1000CA was then sold to numerous non-NATO countries, including the Middle East and Asia. These countries could then be overheard by the Dutch intelligence services for years.
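
Rough arithmetic, assuming “a month and a half” means about 45 days: that’s roughly 65,000 minutes of cracking time, and a 2,500-fold speedup cuts it to about 26 minutes, fast enough to read intercepted traffic while it is still operationally useful.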

The 1970s was a decade of really bad commercial cryptography. DES, in 1975, was an improvement with its 56-bit key. I’m sure there are lots of these stories.

Here’s more about the Aroflex. And here’s what I think is the original Dutch story.


New iPhone Zero-Day Discovered

[2020.04.22] Last year, ZecOps discovered two iPhone zero-day exploits. They will be patched in the next iOS release:

Avraham declined to disclose many details about who the targets were, and did not say whether they lost any data as a result of the attacks, but said “we were a bit surprised about who was targeted.” He said some of the targets were an executive from a telephone carrier in Japan, a “VIP” from Germany, managed security service providers from Saudi Arabia and Israel, people who work for a Fortune 500 company in North America, and an executive from a Swiss company.

[…]

On the other hand, this is not as polished a hack as others, as it relies on sending an oversized email, which may get blocked by certain email providers. Moreover, Avraham said it only works on the default Apple Mail app, and not on Gmail or Outlook, for example.


Chinese COVID-19 Disinformation Campaign

[2020.04.23] The New York Times is reporting on state-sponsored disinformation campaigns coming out of China:

Since that wave of panic, United States intelligence agencies have assessed that Chinese operatives helped push the messages across platforms, according to six American officials, who spoke on the condition of anonymity to publicly discuss intelligence matters. The amplification techniques are alarming to officials because the disinformation showed up as texts on many Americans’ cellphones, a tactic that several of the officials said they had not seen before.


Global Surveillance in the Wake of COVID-19

[2020.04.24] OneZero is tracking thirty countries around the world that are implementing surveillance programs in the wake of COVID-19:

The most common form of surveillance implemented to battle the pandemic is the use of smartphone location data, which can support anything from tracking population-level movement to enforcing individual quarantines. Some governments are making apps that offer coronavirus health information, while also sharing location information with authorities for a period of time. For instance, in early March, the Iranian government released an app that it pitched as a self-diagnostic tool. While the tool’s efficacy was likely low, given reports of asymptomatic carriers of the virus, the app saved location data of millions of Iranians, according to a Vice report.

One of the most alarming measures being implemented is in Argentina, where those who are caught breaking quarantine are being forced to download an app that tracks their location. In Hong Kong, those arriving at the airport are given electronic tracking bracelets that must be synced to their home location through their smartphone’s GPS signal.


Automatic Instacart Bots

[2020.04.27] Instacart is taking legal action against bots that automatically place orders:

Before it closed, to use Cartdash, users first selected what items they wanted from Instacart as normal. Once that was done, they had to provide Cartdash with their Instacart email address, password, mobile number, tip amount, and whether they preferred the first available delivery slot or were more flexible. The tool then checked that their login credentials were correct, logged in, and refreshed the checkout page over and over again until a new delivery window appeared. It then placed the order, Koch explained.
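
The pattern described is ordinary credential-based automation: log in, poll until the wanted resource appears, then act. A generic sketch in Python, with hypothetical example.com endpoints and field names standing in for whatever Cartdash actually used:

    # Generic slot-sniping loop (hypothetical endpoints; not Cartdash's code).
    # Requires the "requests" package.
    import time
    import requests

    session = requests.Session()
    session.post("https://example.com/login",
                 data={"email": "user@example.com", "password": "hunter2"})

    while True:
        # "Refresh the checkout page over and over again" until a slot opens.
        slots = session.get("https://example.com/delivery-slots").json()
        if slots.get("available"):
            session.post("https://example.com/checkout",
                         json={"slot": slots["available"][0]})
            break
        time.sleep(30)  # poll interval; a real bot may hammer much faster

From the server’s side this traffic can look like an unusually persistent user, which is part of why such bots are hard to block.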

I am writing a new book about hacking in general, and I want to discuss this. First, does this count as a hack? I feel like it does, since it’s a way to subvert the Instacart ordering system.

When asked if this tool may give people an unfair advantage over those who don’t use the tool, Koch said, “at this point, it’s a matter of awareness, not technical ability, since people who can use Instacart can use Cartdash.” When pushed on how, realistically, not every user of Instacart is going to know about Cartdash, even after it may receive more attention, and the people using Cartdash will still have an advantage over people who aren’t using automated tools, Koch again said, “it’s a matter of awareness, not technical ability.”

Second, should Instacart take action against this? On the one hand, it isn’t “fair” in that Cartdash users get an advantage in finding a delivery slot. But it’s not really any different than programs that “snipe” on eBay and other bidding platforms.

Third, does Instacart even stand a chance in the long run? As various AI technologies give us more agents and bots, this is going to increasingly become the new normal. I think we need to figure out a fair allocation mechanism that doesn’t rely on the precise timing of submissions.


Fooling NLP Systems Through Word Swapping

[2020.04.28] MIT researchers have built a system that fools natural-language processing systems by swapping words with synonyms:

The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with a synonym that a human would find natural. For example, changing the sentence “The characters, cast in impossibly contrived situations, are totally estranged from reality” to “The characters, cast in impossibly engineered circumstances, are fully estranged from reality” makes no real difference to how we read it. But the tweaks made an AI interpret the sentences completely differently.
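
A toy sketch of the attack’s greedy loop in Python. The classifier and synonym table below are stand-ins invented for the example; the real system queries a target NLP model and uses learned word embeddings to pick synonyms.

    # Toy synonym-swap attack (illustrative; not the MIT TextFooler code).
    SYNONYMS = {
        "contrived": ["engineered"],
        "totally": ["fully"],
        "situations": ["circumstances"],
    }

    def classifier(sentence):
        # Stand-in for a real NLP model: a brittle keyword rule.
        return "negative" if "contrived" in sentence else "positive"

    def attack(sentence):
        original = classifier(sentence)
        words = sentence.split()
        for i, word in enumerate(words):
            for synonym in SYNONYMS.get(word, []):
                candidate = " ".join(words[:i] + [synonym] + words[i + 1:])
                if classifier(candidate) != original:
                    return candidate  # reads the same to a human, flips the model
        return sentence

    print(attack("the characters cast in impossibly contrived situations"))

The real attack adds two refinements: it ranks words by how much deleting them changes the model’s output, and it filters candidate swaps for grammaticality and semantic similarity.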

The results of this adversarial machine learning attack are impressive:

For example, Google’s powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative.

The paper:

Abstract: Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective—it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving—it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient—it generates adversarial text with computational complexity linear to the text length.


How Did Facebook Beat a Federal Wiretap Demand?

[2020.04.29] This is interesting:

Facebook Inc. in 2018 beat back federal prosecutors seeking to wiretap its encrypted Messenger app. Now the American Civil Liberties Union is seeking to find out how.

The entire proceeding was confidential, with only the result leaking to the press. Lawyers for the ACLU and the Washington Post on Tuesday asked a San Francisco-based federal court of appeals to unseal the judge’s decision, arguing the public has a right to know how the law is being applied, particularly in the area of privacy.

[…]

The Facebook case stems from a federal investigation of members of the violent MS-13 criminal gang. Prosecutors tried to hold Facebook in contempt after the company refused to help investigators wiretap its Messenger app, but the judge ruled against them. If the decision is unsealed, other tech companies will likely try to use its reasoning to ward off similar government requests in the future.

Here’s the 2018 story. Slashdot thread.


Securing Internet Videoconferencing Apps: Zoom and Others

[2020.04.30] The NSA just published a survey of video conferencing apps. So did Mozilla.

Zoom is on the good list, with some caveats. The company has done a lot of work addressing previous security concerns. It still has a bit to go on end-to-end encryption. Matthew Green looked at this. Zoom does offer end-to-end encryption if 1) everyone is using a Zoom app, and not logging in to the meeting using a webpage, and 2) the meeting is not being recorded in the cloud. That’s pretty good, but the real worry is where the encryption keys are generated and stored. According to Citizen Lab, the company generates them.

The Zoom transport protocol adds Zoom’s own encryption scheme to RTP in an unusual way. By default, all participants’ audio and video in a Zoom meeting appears to be encrypted and decrypted with a single AES-128 key shared amongst the participants. The AES key appears to be generated and distributed to the meeting’s participants by Zoom servers. Zoom’s encryption and decryption use AES in ECB mode, which is well-understood to be a bad idea, because this mode of encryption preserves patterns in the input.
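
ECB’s weakness is easy to demonstrate. A minimal sketch in Python (this is the textbook problem with the mode, not Zoom’s actual code):

    # ECB leaks patterns: identical plaintext blocks encrypt to identical
    # ciphertext blocks. Requires the "cryptography" package.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)                 # AES-128, matching Citizen Lab's report
    plaintext = b"ATTACK AT DAWN!!" * 2  # two identical 16-byte blocks
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    # An eavesdropper can see that the plaintext repeats, without the key.
    assert ciphertext[:16] == ciphertext[16:32]

This is why ECB-encrypted bitmaps famously still show the outline of the original image.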

The algorithm part was just fixed:

AES 256-bit GCM encryption: Zoom is upgrading to the AES 256-bit GCM encryption standard, which offers increased protection of your meeting data in transit and resistance against tampering. This provides confidentiality and integrity assurances on your Zoom Meeting, Zoom Video Webinar, and Zoom Phone data. Zoom 5.0, which is slated for release within the week, supports GCM encryption, and this standard will take effect once all accounts are enabled with GCM. System-wide account enablement will take place on May 30.
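
For contrast, here is what the GCM upgrade buys in practice: authenticated encryption, where any tampering with the ciphertext is detected at decryption time. A minimal Python sketch (again illustrative, not Zoom’s implementation):

    # AES-256-GCM: confidentiality plus integrity. Requires "cryptography".
    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)            # 96-bit nonce; must never repeat per key
    aead = AESGCM(key)
    ciphertext = aead.encrypt(nonce, b"meeting audio frame", b"frame header")

    tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]  # flip one bit
    try:
        aead.decrypt(nonce, tampered, b"frame header")
    except InvalidTag:
        print("tampering detected")   # GCM rejects the modified ciphertext

Note that none of this addresses who holds the key, which is the key-management issue below.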

There is nothing in Zoom’s latest announcement about key management. So: while the company has done a really good job improving the security and privacy of their platform, there seems to be just one step remaining to fully encrypt the sessions.

The other thing I want Zoom to do is make the security options necessary to prevent Zoombombing available to users of the free version of the platform. Forcing users to pay for security isn’t a viable option right now.

Finally—I use Zoom all the time. I finished my Harvard class using Zoom; it’s the university standard. I am having Inrupt company meetings on Zoom. I am having professional and personal conferences on Zoom. It’s what everyone has, and the features are really good.


Me on COVID-19 Contact Tracing Apps

[2020.05.01] I was quoted in BuzzFeed:

“My problem with contact tracing apps is that they have absolutely no value,” Bruce Schneier, a privacy expert and fellow at the Berkman Klein Center for Internet & Society at Harvard University, told BuzzFeed News. “I’m not even talking about the privacy concerns, I mean the efficacy. Does anybody think this will do something useful? … This is just something governments want to do for the hell of it. To me, it’s just techies doing techie things because they don’t know what else to do.”

I haven’t blogged about this because I thought it was obvious. But from the tweets and emails I have received, it seems not.

This is a classic identification problem, and efficacy depends on two things: false positives and false negatives.

  • False positives: Any app will have a precise definition of a contact: let’s say it’s less than six feet for more than ten minutes. The false positive rate is the percentage of contacts that don’t result in transmissions. This will happen for several reasons. One, the app’s location and proximity systems—based on GPS and Bluetooth—just aren’t accurate enough to capture every contact. Two, the app won’t be aware of any extenuating circumstances, like walls or partitions. And three, not every contact results in transmission; the disease has some transmission rate that’s less than 100% (and I don’t know what that is).
  • False negatives: This is the rate at which the app fails to register a contact when an infection occurs. This also will happen for several reasons. One, errors in the app’s location and proximity systems. Two, transmissions that occur from people who don’t have the app (even Singapore didn’t get above a 20% adoption rate for the app). And three, not every transmission is a result of that precisely defined contact—the virus sometimes travels further.

Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It’s not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can’t confirm the app’s diagnosis. So the alert is useless.
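
A back-of-the-envelope calculation makes the problem concrete. Every number below is an illustrative assumption, not a measured rate, and the app’s errors are recast in standard diagnostic-test terms:

    # Why an alert carries little information (all rates are assumptions).
    prevalence = 0.005      # fraction of registered contacts that are infectious
    sensitivity = 0.60      # fraction of true risky contacts the app catches
    false_alarm = 0.20      # fraction of non-risky encounters it still flags

    p_alert = sensitivity * prevalence + false_alarm * (1 - prevalence)
    p_risky_given_alert = sensitivity * prevalence / p_alert
    print(f"P(real exposure | alert) = {p_risky_given_alert:.1%}")  # about 1.5%

With numbers anything like these, more than 98% of alerts are false alarms, which is exactly the quarantine-or-ignore dilemma described above.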

Similarly, assume you take the app out grocery shopping and it doesn’t alert you of any contact. Are you in the clear? No, you’re not. You actually have no idea if you’ve been infected.

The end result is an app that doesn’t work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.

EDITED TO ADD: This Brookings essay makes much the same point.


Denmark, Sweden, Germany, the Netherlands and France SIGINT Alliance

[2020.05.04] This paper describes a SIGINT and code-breaking alliance between Denmark, Sweden, Germany, the Netherlands and France called Maximator:

Abstract: This article is the first to report on the secret European five-partner sigint alliance Maximator that started in the late 1970s. It discloses the name Maximator and provides documentary evidence. The five members of this European alliance are Denmark, Sweden, Germany, the Netherlands, and France. The cooperation involves both signals analysis and crypto analysis. The Maximator alliance has remained secret for almost fifty years, in contrast to its Anglo-Saxon Five-Eyes counterpart. The existence of this European sigint alliance gives a novel perspective on western sigint collaborations in the late twentieth century. The article explains and illustrates, with considerable attention to the cryptographic details, how the five Maximator participants strengthened their effectiveness via the information about rigged cryptographic devices that its German partner provided via the joint U.S.-German ownership and control of the Swiss producer of cryptographic devices, Crypto AG.


Malware in Google Apps

[2020.05.05] Interesting story of malware hidden in Google Apps. This particular campaign is tied to the government of Vietnam.

At a remote virtual version of its annual Security Analyst Summit, researchers from the Russian security firm Kaspersky today plan to present research about a hacking campaign they call PhantomLance, in which spies hid malware in the Play Store to target users in Vietnam, Bangladesh, Indonesia, and India. Unlike most of the shady apps found in Play Store malware, Kaspersky’s researchers say, PhantomLance’s hackers apparently smuggled in data-stealing apps with the aim of infecting only some hundreds of users; the spy campaign likely sent links to the malicious apps to those targets via phishing emails. “In this case, the attackers used Google Play as a trusted source,” says Kaspersky researcher Alexey Firsh. “You can deliver a link to this app, and the victim will trust it because it’s Google Play.”

[…]

The first hints of PhantomLance’s campaign focusing on Google Play came to light in July of last year. That’s when Russian security firm Dr. Web found a sample of spyware in Google’s app store that impersonated a downloader of graphic design software but in fact had the capability to steal contacts, call logs, and text messages from Android phones. Kaspersky’s researchers found a similar spyware app, impersonating a browser cache-cleaning tool called Browser Turbo, still active in Google Play in November of that year. (Google removed both malicious apps from Google Play after they were reported.) While the espionage capabilities of those apps were fairly basic, Firsh says that they both could have expanded. “What’s important is the ability to download new malicious payloads,” he says. “It could extend its features significantly.”

Kaspersky went on to find tens of other, similar spyware apps dating back to 2015 that Google had already removed from its Play Store, but which were still visible in archived mirrors of the app repository. Those apps appeared to have a Vietnamese focus, offering tools for finding nearby churches in Vietnam and Vietnamese-language news. In every case, Firsh says, the hackers had created a new account and even Github repositories for spoofed developers to make the apps appear legitimate and hide their tracks.


ILOVEYOU Virus

[2020.05.06] It’s the twentieth anniversary of the ILOVEYOU virus, and here are three interesting articles about it and its effects on software design.


iOS XML Bug

[2020.05.07] This is a good explanation of an iOS bug that allowed someone to break out of the application sandbox. A summary:

What a crazy bug, and Siguza’s explanation is very cogent. Basically, it comes down to this:

  • XML is terrible.
  • iOS uses XML for Plists, and Plists are used everywhere in iOS (and MacOS).
  • iOS’s sandboxing system depends upon three different XML parsers, which interpret slightly invalid XML input in slightly different ways.

So Siguza’s exploit—which granted an app full access to the entire file system, and more—uses malformed XML comments constructed in a way that one of iOS’s XML parsers sees its declaration of entitlements one way, and another XML parser sees it another way. The XML parser used to check whether an application should be allowed to launch doesn’t see the fishy entitlements because it thinks they’re inside a comment. The XML parser used to determine whether an already running application has permission to do things that require entitlements sees the fishy entitlements and grants permission.
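
The underlying trick is a classic parser differential. A toy Python illustration (this mimics the shape of the bug, not Siguza’s actual exploit or Apple’s parsers):

    # Two parsers disagree about a malformed comment, so they disagree
    # about whether an "entitlement" exists in the document.
    import re

    doc = '<!--> <entitlement>file-access</entitlement> <!---->'

    def checker_parser(xml):
        # Treats "<!-->" as an unterminated comment opener, so everything
        # up to the next "-->" is a comment and the entitlement is hidden.
        start = xml.find('<!--')
        end = xml.find('-->', start + 4)
        visible = xml[:start] + (xml[end + 3:] if end != -1 else '')
        return '<entitlement>' in visible

    def runtime_parser(xml):
        # Accepts "<!-->" as a complete empty comment, so the entitlement
        # is visible and gets honored.
        visible = re.sub(r'<!-->|<!--.*?-->', '', xml)
        return '<entitlement>' in visible

    print(checker_parser(doc))   # False: launch check sees no entitlement
    print(runtime_parser(doc))   # True: runtime grants the entitlement

Any system that parses the same security-relevant input twice, with two different parsers, invites this class of bug.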

This is fixed in the new iOS release, 13.5 beta 3.

Comment:

Implementing 4 different parsers is just asking for trouble, and the “fix” is of the crappiest sort, bolting on more crap to check they’re doing the right thing in this single case. None of this is encouraging.

More commentary. Hacker News thread.


Used Tesla Components Contain Personal Information

[2020.05.08] Used Tesla components, sold on eBay, still contain personal information, even after a factory reset.

This is a decades-old problem. It’s a problem with used hard drives. It’s a problem with used photocopiers and printers. It will be a problem with IoT devices. It’ll be a problem with everything, until we decide that data deletion is a priority.


Another California Data Privacy Law

[2020.05.11] The California Consumer Privacy Act is a lesson in missed opportunities. It was passed in haste, to stop a ballot initiative that would have been even more restrictive:

In September 2017, Alastair Mactaggart and Mary Ross proposed a statewide ballot initiative entitled the “California Consumer Privacy Act.” Ballot initiatives are a process under California law in which private citizens can propose legislation directly to voters, and pursuant to which such legislation can be enacted through voter approval without any action by the state legislature or the governor. While the proposed privacy initiative was initially met with significant opposition, particularly from large technology companies, some of that opposition faded in the wake of the Cambridge Analytica scandal and Mark Zuckerberg’s April 2018 testimony before Congress. By May 2018, the initiative appeared to have garnered sufficient support to appear on the November 2018 ballot. On June 21, 2018, the sponsors of the ballot initiative and state legislators then struck a deal: in exchange for withdrawing the initiative, the state legislature would pass an agreed version of the California Consumer Privacy Act. The initiative was withdrawn, and the state legislature passed (and the Governor signed) the CCPA on June 28, 2018.

Since then, it has been substantially amended—that is, watered down—at the request of various surveillance capitalism companies. Enforcement was supposed to start this year, but we haven’t seen much yet.

And we could have had that ballot initiative.

It looks like Alastair Mactaggart and others are back.

Advocacy group Californians for Consumer Privacy, which started the push for a state-wide data privacy law, announced this week that it has the signatures it needs to get version 2.0 of its privacy rules on the US state’s ballot in November, and submitted its proposal to Sacramento.

This time the goal is to tighten up the rules that its previous ballot measure managed to get into law, despite the determined efforts of internet giants like Google and Facebook to kill it. In return for the legislation being passed, that ballot measure was dropped. Now, it looks like the campaigners are taking their fight to a people’s vote after all.

[…]

The new proposal would add more rights, including limits on the use and sale of sensitive personal information, such as health and financial information, racial or ethnic origin, and precise geolocation. It would also triple existing fines for companies caught breaking the rules surrounding data on children (under 16s) and would require an opt-in to even collect such data.

The proposal would also give Californians the right to know when their information is used to make fundamental decisions about them, such as getting credit or employment offers. And it would require political organizations to divulge when they use similar data for campaigns.

And just to push the tech giants from fury into full-blown meltdown, the new ballot measure would require any amendments to the law to be approved by a majority vote in the legislature, effectively stripping their vast lobbying powers and cutting off the multitude of different ways the measure and its enforcement can be watered down within the political process.

I don’t know why they accepted the compromise in the first place. It was obvious that the legislative process would be hijacked by the powerful tech companies. I support getting this onto the ballot this year.


Attack Against PC Thunderbolt Port

[2020.05.12] The attack requires physical access to the computer, but it’s pretty devastating:

On Thunderbolt-enabled Windows or Linux PCs manufactured before 2019, his technique can bypass the login screen of a sleeping or locked computer—and even its hard disk encryption—to gain full access to the computer’s data. And while his attack in many cases requires opening a target laptop’s case with a screwdriver, it leaves no trace of intrusion and can be pulled off in just a few minutes. That opens a new avenue to what the security industry calls an “evil maid attack,” the threat of any hacker who can get alone time with a computer in, say, a hotel room. Ruytenberg says there’s no easy software fix, only disabling the Thunderbolt port altogether.

“All the evil maid needs to do is unscrew the backplate, attach a device momentarily, reprogram the firmware, reattach the backplate, and the evil maid gets full access to the laptop,” says Ruytenberg, who plans to present his Thunderspy research at the Black Hat security conference this summer—or the virtual conference that may replace it. “All of this can be done in under five minutes.”

Lots of details in the article above, and on the attack website. (We know it’s a modern hack, because it comes with its own website and logo.)

Intel responds.

EDITED TO ADD (5/14): More.


New US Electronic Warfare Platform

[2020.05.13] The Army is developing a new electronic warfare pod capable of being put on drones and on trucks.

…the Silent Crow pod is now the leading contender for the flying flagship of the Army’s rebuilt electronic warfare force. Army EW was largely disbanded after the Cold War, except for short-range jammers to shut down remote-controlled roadside bombs. Now it’s being urgently rebuilt to counter Russia and China, whose high-tech forces—unlike Afghan guerrillas—rely heavily on radio and radar systems, whose transmissions US forces must be able to detect, analyze and disrupt.

It’s hard to tell what this thing can do. Possibly a lot, but it’s all still in prototype stage.

Historically, cyber operations occurred over landline networks and electronic warfare over radio-frequency (RF) airwaves. The rise of wireless networks has caused the two to blur. The military wants to move away from traditional high-powered jamming, which filled the frequencies the enemy used with blasts of static, to precisely targeted techniques, designed to subtly disrupt the enemy’s communications and radar networks without their realizing they’re being deceived. There are even reports that “RF-enabled cyber” can transmit computer viruses wirelessly into an enemy network, although Wojnar declined to confirm or deny such sensitive details.

[…]

The pod’s digital brain also uses machine-learning algorithms to analyze enemy signals it detects and compute effective countermeasures on the fly, instead of having to return to base and download new data to human analysts. (Insiders call this cognitive electronic warfare). Lockheed also offers larger artificial intelligences to assist post-mission analysis on the ground, Wojnar said. But while an AI small enough to fit inside the pod is necessarily less powerful, it can respond immediately in a way a traditional system never could.

EDITED TO ADD (5/14): Here are two reports on Russian electronic warfare capabilities.


US Government Exposes North Korean Malware

[2020.05.14] US Cyber Command has uploaded North Korean malware samples to the VirusTotal aggregation repository, adding to the malware samples it uploaded in February.

The first of the new malware variants, COPPERHEDGE, is described as a Remote Access Tool (RAT) “used by advanced persistent threat (APT) cyber actors in the targeting of cryptocurrency exchanges and related entities.”

This RAT is known for its capability to help the threat actors perform system reconnaissance, run arbitrary commands on compromised systems, and exfiltrate stolen data.

TAINTEDSCRIBE is a trojan that acts as a full-featured beaconing implant with command modules, and is designed to disguise itself as Microsoft’s Narrator.

The trojan “downloads its command execution module from a command and control (C2) server and then has the capability to download, upload, delete, and execute files; enable Windows CLI access; create and terminate processes; and perform target system enumeration.”

Last but not least, PEBBLEDASH is yet another North Korean trojan acting like a full-featured beaconing implant and used by North Korean-backed hacking groups “to download, upload, delete, and execute files; enable Windows CLI access; create and terminate processes; and perform target system enumeration.”

It’s interesting to see the US government take a more aggressive stance on foreign malware. Making samples public, so all the antivirus companies can add them to their scanning systems, is a big deal—and probably required some complicated declassification maneuvering.

Me, I like reading the codenames.

Lots more on the US-CERT website.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.

Copyright © 2020 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.