Blog: March 2021 Archives

System Update: New Android Malware

Researchers have discovered a new Android app called “System Update” that is a sophisticated Remote-Access Trojan (RAT). From a news article:

The broad range of data that this sneaky little bastard is capable of stealing is pretty horrifying. It includes: instant messenger messages and database files; call logs and phone contacts; Whatsapp messages and databases; pictures and videos; all of your text messages; and information on pretty much everything else that is on your phone (it will inventory the rest of the apps on your phone, for instance).

The app can also monitor your GPS location (so it knows exactly where you are), hijack your phone’s camera to take pictures, review your browser’s search history and bookmarks, and turn on the phone mic to record audio.

The app’s spying capabilities are triggered whenever the device receives new information. Researchers write that the RAT is constantly on the lookout for “any activity of interest, such as a phone call, to immediately record the conversation, collect the updated call log, and then upload the contents to the C&C server as an encrypted ZIP file.” After thieving your data, the app will subsequently erase evidence of its own activity, hiding what it has been doing.

This is a sophisticated piece of malware. It feels like the product of a national intelligence agency or — and I think more likely — one of the cyberweapons arms manufacturers that sells this kind of capability to governments around the world.

Posted on March 30, 2021 at 10:00 AM • 22 Comments

Hacking Weapons Systems

Lukasz Olejnik has a good essay on hacking weapons systems.

Basically, there is no reason to believe that software in weapons systems is any more vulnerability free than any other software. So now the question is whether the software can be accessed over the Internet. Increasingly, it is. This is likely to become a bigger problem in the near future. We need to think about future wars where the tech simply doesn’t work.

Posted on March 26, 2021 at 8:41 AM • 33 Comments

Determining Key Shape from Sound

It’s not yet very accurate or practical, but under ideal conditions it is possible to figure out the shape of a house key by listening to it being used.

Listen to Your Key: Towards Acoustics-based Physical Key Inference

Abstract: Physical locks are one of the most prevalent mechanisms for securing objects such as doors. While many of these locks are vulnerable to lock-picking, they are still widely used as lock-picking requires specific training with tailored instruments, and easily raises suspicion. In this paper, we propose SpiKey, a novel attack that significantly lowers the bar for an attacker as opposed to the lock-picking attack, by requiring only the use of a smartphone microphone to infer the shape of the victim’s key, namely bittings (or cut depths) which form the secret of a key. When a victim inserts his/her key into the lock, the emitted sound is captured by the attacker’s microphone. SpiKey leverages the time difference between audible clicks to ultimately infer the bitting information, i.e., shape of the physical key. As a proof-of-concept, we provide a simulation, based on real-world recordings, and demonstrate a significant reduction in search space from a pool of more than 330 thousand keys to three candidate keys for the most frequent case.
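The core inference step can be sketched in a few lines. This is a toy illustration, not the SpiKey authors' code; the key spacings, click timestamps, insertion speed, and tolerance below are all invented. The idea: with a constant insertion speed, the time between audible clicks is proportional to the distance between cuts on the key, and that spacing pattern can be matched against a pool of candidate keys to shrink the search space.

```python
# Toy sketch of the SpiKey idea (not the paper's code): with a constant
# insertion speed, inter-click times are proportional to the distances
# between cuts, which narrows the candidate key pool.

def inter_cut_distances(click_times_ms, speed_mm_per_ms):
    """Convert click timestamps to distances between successive cuts."""
    return [
        (t2 - t1) * speed_mm_per_ms
        for t1, t2 in zip(click_times_ms, click_times_ms[1:])
    ]

def match_candidates(observed, key_pool, tolerance_mm=0.2):
    """Keep only keys whose cut spacing matches the observed distances."""
    def close(a, b):
        return all(abs(x - y) <= tolerance_mm for x, y in zip(a, b))
    return [k for k, spacing in key_pool.items()
            if len(spacing) == len(observed) and close(spacing, observed)]

# Hypothetical key pool, described by the spacing between cuts (mm).
pool = {
    "key-A": [3.0, 2.5, 3.0, 2.0],
    "key-B": [3.0, 2.6, 3.1, 2.0],
    "key-C": [2.0, 2.0, 2.0, 2.0],
}
clicks = [0, 30, 56, 87, 107]          # ms; five clicks -> four gaps
observed = inter_cut_distances(clicks, speed_mm_per_ms=0.1)
print(match_candidates(observed, pool))   # → ['key-A', 'key-B']
```

The real attack also has to infer cut depths, not just spacings, and cope with variable insertion speed and noise, which is why it remains impractical outside ideal conditions.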

Scientific American podcast:

The strategy is a long way from being viable in the real world. For one thing, the method relies on the key being inserted at a constant speed. And the audio element also poses challenges like background noise.

Boing Boing post.

EDITED TO ADD (4/14): I seem to have blogged this previously.

Posted on March 24, 2021 at 6:10 AM • 16 Comments

Accellion Supply Chain Hack

A vulnerability in the Accellion file-transfer program is being used by criminal groups to hack networks worldwide.

There’s much in the article about when Accellion knew about the vulnerability, when it alerted its customers, and when it patched its software.

The governor of New Zealand’s central bank, Adrian Orr, says Accellion failed to warn it after first learning in mid-December that the nearly 20-year-old FTA application — using antiquated technology and set for retirement — had been breached.

Despite having a patch available on Dec. 20, Accellion did not notify the bank in time to prevent its appliance from being breached five days later, the bank said.

CISA alert.

EDITED TO ADD (4/14): It appears spy plane details were leaked after the vendor didn’t pay the ransom.

Posted on March 23, 2021 at 6:32 AM • 10 Comments

Details of a Computer Banking Scam

This is a longish video that describes a profitable computer banking scam that’s run out of call centers in places like India. There’s a lot of fluff about glitterbombs and the like, but the details are interesting. The scammers convince the victims to give them remote access to their computers, and then convince them that they’ve mistyped a dollar amount and have received a large refund that they didn’t deserve. Then they convince the victims to send cash to a drop site, where a money mule retrieves it and forwards it to the scammers.

I found it interesting for several reasons. One, it illustrates the complex business nature of the scam: there are a lot of people doing specialized jobs in order for it to work. Two, it clearly shows the psychological manipulation involved, and how it preys on the unsophisticated and vulnerable. And three, it’s an evolving tactic that gets around banks increasingly flagging and blocking suspicious electronic transfers.

Posted on March 22, 2021 at 6:15 AM • 15 Comments

Easy SMS Hijacking

Vice is reporting on a cell phone vulnerability caused by commercial SMS services. One of the things these services permit is text message forwarding. It turns out that with a little bit of anonymous money — in this case, $16 off an anonymous prepaid credit card — and a few lies, you can forward the text messages from any phone to any other phone.

For businesses, sending text messages to hundreds, thousands, or perhaps millions of customers can be a laborious task. Sakari streamlines that process by letting business customers import their own number. A wide ecosystem of these companies exist, each advertising their own ability to run text messaging for other businesses. Some firms say they only allow customers to reroute messages for business landlines or VoIP phones, while others allow mobile numbers too.

Sakari offers a free trial to anyone wishing to see what the company’s dashboard looks like. The cheapest plan, which allows customers to add a phone number they want to send and receive texts as, is where the $16 goes. Lucky225 provided Motherboard with screenshots of Sakari’s interface, which show a red “+” symbol where users can add a number.

While adding a number, Sakari provides the Letter of Authorization for the user to sign. Sakari’s LOA says that the user should not conduct any unlawful, harassing, or inappropriate behaviour with the text messaging service and phone number.

But as Lucky225 showed, a user can just sign up with someone else’s number and receive their text messages instead.

This is much easier than SIM swapping, and causes the same security vulnerabilities. Too many networks use SMS as an authentication mechanism.

Once the hacker is able to reroute a target’s text messages, it can then be trivial to hack into other accounts associated with that phone number. In this case, the hacker sent login requests to Bumble, WhatsApp, and Postmates, and easily accessed the accounts.

Don’t focus too much on the particular company in this article.

But Sakari is only one company. And there are plenty of others available in this overlooked industry.

Tuketu said that after one provider cut off their access, “it took us two minutes to find another.”

Slashdot thread. And Cory Doctorow’s comments.

Posted on March 19, 2021 at 6:21 AM • 24 Comments

Exploiting Spectre Over the Internet

Google has demonstrated exploiting the Spectre CPU attack remotely over the web:

Today, we’re sharing proof-of-concept (PoC) code that confirms the practicality of Spectre exploits against JavaScript engines. We use Google Chrome to demonstrate our attack, but these issues are not specific to Chrome, and we expect that other modern browsers are similarly vulnerable to this exploitation vector. We have developed an interactive demonstration of the attack available at https://leaky.page/ ; the code and a more detailed writeup are published on Github here.

The demonstration website can leak data at a speed of 1kB/s when running on Chrome 88 on an Intel Skylake CPU. Note that the code will likely require minor modifications to apply to other CPUs or browser versions; however, in our tests the attack was successful on several other processors, including the Apple M1 ARM CPU, without any major changes.
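The covert-channel step at the heart of the attack can be illustrated with a toy simulation. This is not real Spectre code: there is no speculative execution or cycle-accurate timing here, and the latency constants are invented. It only shows the logic of the final stage: a secret-dependent memory access leaves one "cache line" warm, and the attacker recovers the byte by seeing which of 256 probe accesses is fast.

```python
# Toy simulation of Spectre's cache covert channel (no real speculation
# or hardware timing): one secret-dependent access warms a cache slot,
# and probing all 256 slots for a fast access reveals the secret byte.

FAST, SLOW = 10, 200     # pretend access latencies, in "cycles"

def victim_touch(secret_byte, cache):
    # Victim side: the (speculatively executed) access to
    # probe_array[secret_byte] leaves that line cached.
    cache.add(secret_byte)

def attacker_probe(cache):
    # Attacker side: "time" an access for every candidate byte value;
    # the fastest slot is the leaked byte.
    timings = {v: (FAST if v in cache else SLOW) for v in range(256)}
    return min(timings, key=timings.get)

cache = set()
victim_touch(ord("S"), cache)
print(chr(attacker_probe(cache)))   # → S
```

The hard parts of the real exploit, which this sketch omits entirely, are inducing the speculative access from JavaScript and getting a timer precise enough to distinguish cache hits from misses in a browser.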

Posted on March 18, 2021 at 6:17 AM • 19 Comments

Illegal Content and the Blockchain

Security researchers have recently discovered a botnet with a novel defense against takedowns. Normally, authorities can disable a botnet by taking over its command-and-control server. With nowhere to go for instructions, the botnet is rendered useless. But over the years, botnet designers have come up with ways to make this counterattack harder. Now the content-delivery network Akamai has reported on a new method: a botnet that uses the Bitcoin blockchain ledger. Since the blockchain is globally accessible and hard to take down, the botnet’s operators appear to be safe.

It’s best to avoid explaining the mathematics of Bitcoin’s blockchain, but to understand the colossal implications here, you need to understand one concept. Blockchains are a type of “distributed ledger”: a record of all transactions since the beginning, and everyone using the blockchain needs to have access to — and reference — a copy of it. What if someone puts illegal material in the blockchain? Either everyone has a copy of it, or the blockchain’s security fails.

To be fair, not absolutely everyone who uses a blockchain holds a copy of the entire ledger. Many who buy cryptocurrencies like Bitcoin and Ethereum don’t bother using the ledger to verify their purchase. Many don’t actually hold the currency outright, and instead trust an exchange to do the transactions and hold the coins. But people need to continually verify the blockchain’s history on the ledger for the system to be secure. If they stopped, then it would be trivial to forge coins. That’s how the system works.
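The property doing the work in this argument can be shown in a minimal sketch (a generic hash-chained ledger, not Bitcoin's actual block format): each block commits to its predecessor's hash, so anything embedded in an old block is locked in, and altering or removing it invalidates every block after it.

```python
# Minimal hash-chained ledger: each block commits to the previous block's
# hash, so tampering with embedded data breaks verification of the whole
# suffix of the chain.

import hashlib

def block_hash(prev_hash, payload):
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], "0" * 64
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"prev": prev, "payload": p, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or blk["hash"] != block_hash(prev, blk["payload"]):
            return False
        prev = blk["hash"]
    return True

chain = build_chain(["tx1", "embedded data", "tx3"])
assert verify(chain)
chain[1]["payload"] = "scrubbed"     # try to remove the embedded content
print(verify(chain))                 # → False: every later block is orphaned
```

Removing the embedded data honestly would mean rebuilding the chain from that point on, which is exactly what a fork is.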

Some years ago, people started noticing all sorts of things embedded in the Bitcoin blockchain. There are digital images, including one of Nelson Mandela. There’s the Bitcoin logo, and the original paper describing Bitcoin by its alleged founder, the pseudonymous Satoshi Nakamoto. There are advertisements, and several prayers. There’s even illegal pornography and leaked classified documents. All of these were put in by anonymous Bitcoin users. But none of this, so far, appears to seriously threaten those in power in governments and corporations. Once someone adds something to the Bitcoin ledger, it becomes sacrosanct. Removing something requires a fork of the blockchain, in which Bitcoin fragments into multiple parallel cryptocurrencies (and associated blockchains). Forks happen, rarely, but never yet because of legal coercion. And repeated forking would destroy Bitcoin’s stature as a stable(ish) currency.

The botnet’s designers are using this idea to create an unblockable means of coordination, but the implications are much greater. Imagine someone using this idea to evade government censorship. Most Bitcoin mining happens in China. What if someone added a bunch of Chinese-censored Falun Gong texts to the blockchain?

What if someone added a type of political speech that Singapore routinely censors? Or cartoons that Disney holds the copyright to?

In Bitcoin’s and most other public blockchains there are no central, trusted authorities. Anyone in the world can perform transactions or become a miner. Everyone is equal to the extent that they have the hardware and electricity to perform cryptographic computations.

This openness is also a vulnerability, one that opens the door to asymmetric threats and small-time malicious actors. Anyone can put information in the one and only Bitcoin blockchain. Again, that’s how the system works.

Over the last three decades, the world has witnessed the power of open networks: blockchains, social media, the very web itself. What makes them so powerful is that their value is related not just to the number of users, but the number of potential links between users. This is Metcalfe’s law — value in a network is quadratic, not linear, in the number of users — and every open network since has followed its prophecy.

As Bitcoin has grown, its monetary value has skyrocketed, even if its uses remain unclear. With no barrier to entry, the blockchain space has been a Wild West of innovation and lawlessness. But today, many prominent advocates suggest Bitcoin should become a global, universal currency. In this context, asymmetric threats like embedded illegal data become a major challenge.

The philosophy behind Bitcoin traces to the earliest days of the open internet. Articulated in John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, it was and is the ethos of tech startups: Code is more trustworthy than institutions. Information is meant to be free, and nobody has the right — and should not have the ability — to control it.

But information must reside somewhere. Code is written by and for people, stored on computers located within countries, and embedded within the institutions and societies we have created. To trust information is to trust its chain of custody and the social context it comes from. Neither code nor information is value-neutral, nor ever free of human context.

Today, Barlow’s vision is a mere shadow; every society controls the information its people can access. Some of this control is through overt censorship, as China controls information about Taiwan, Tiananmen Square, and the Uyghurs. Some of this is through civil laws designed by the powerful for their benefit, as with Disney and US copyright law, or UK libel law.

Bitcoin and blockchains like it are on a collision course with these laws. What happens when the interests of the powerful, with the law on their side, are pitted against an open blockchain? Let’s imagine how our various scenarios might play out.

China first: In response to Falun Gong texts in the blockchain, the People’s Republic decrees that any miners processing blocks with banned content will be taken offline — their IPs will be blacklisted. This causes a hard fork of the blockchain at the point just before the banned content. China might do this under the guise of a “patriotic” messaging campaign, publicly stating that it’s merely maintaining financial sovereignty from Western banks. Then it uses paid influencers and moderators on social media to pump the China Bitcoin fork, through both partisan comments and transactions. Two distinct forks would soon emerge, one behind China’s Great Firewall and one outside. Other countries with similar governmental and media ecosystems — Russia, Singapore, Myanmar — might consider following suit, creating multiple national Bitcoin forks. These would operate independently, under mandates to censor unacceptable transactions from then on.

Disney’s approach would play out differently. Imagine the company announces it will sue any ISP that hosts copyrighted content, starting with networks hosting the biggest miners. (Disney has sued to enforce its intellectual property rights in China before.) After some legal pressure, the networks cut the miners off. The miners reestablish themselves on another network, but Disney keeps the pressure on. Eventually miners get pushed further and further off of mainstream network providers, and resort to tunneling their traffic through an anonymity service like Tor. That causes a major slowdown in the already slow (because of the mathematics) Bitcoin network. Disney might issue takedown requests for Tor exit nodes, causing the network to slow to a crawl. It could persist like this for a long time without a fork. Or the slowdown could cause people to jump ship, either by forking Bitcoin or switching to another cryptocurrency without the copyrighted content.

And then there’s illegal pornographic content and leaked classified data. These have been on the Bitcoin blockchain for over five years, and nothing has been done about it. Just like the botnet example, it may be that these do not threaten existing power structures enough to warrant takedowns. This could easily change if Bitcoin becomes a popular way to share child sexual abuse material. Simply having these illegal images on your hard drive is a felony, which could have significant repercussions for anyone involved in Bitcoin.

Whichever scenario plays out, this may be the Achilles heel of Bitcoin as a global currency.

If an open network such as a blockchain were threatened by a powerful organization — China’s censors, Disney’s lawyers, or the FBI trying to take down a more dangerous botnet — it could fragment into multiple networks. That’s not just a nuisance, but an existential risk to Bitcoin.

Suppose Bitcoin were fragmented into 10 smaller blockchains, perhaps by geography: one in China, another in the US, and so on. These fragments might retain their original users, and by ordinary logic, nothing would have changed. But Metcalfe’s law implies that the overall value of these blockchain fragments combined would be a mere tenth of the original. That is because the value of an open network relates to how many others you can communicate with — and, in a blockchain, transact with. Since the security of bitcoin currency is achieved through expensive computations, fragmented blockchains are also easier to attack in a conventional manner — through a 51 percent attack — by an organized attacker. This is especially the case if the smaller blockchains all use the same hash function, as they would here.
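The arithmetic behind that "mere tenth" claim is a one-liner. Assuming Metcalfe's law as stated (network value grows as the square of the number of users), splitting n users into k equal fragments leaves k · (n/k)² = n²/k of the original value:

```python
# Back-of-the-envelope fragmentation math under Metcalfe's law:
# k equal fragments of an n-user network retain n^2 / k of its value.

def metcalfe_value(users):
    return users ** 2

n, k = 1_000_000, 10
whole = metcalfe_value(n)
fragmented = k * metcalfe_value(n // k)
print(fragmented / whole)   # → 0.1, i.e. one tenth of the original value
```

The same formula says two fragments retain half the value, so even a single contested fork is costly under this model.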

Traditional currencies are generally not vulnerable to these sorts of asymmetric threats. There are no viable small-scale attacks against the US dollar, or almost any other fiat currency. The institutions and beliefs that give money its value are deep-seated, despite instances of currency hyperinflation.

The only notable attacks against fiat currencies are in the form of counterfeiting. Even in the past, when counterfeit bills were common, attacks could be thwarted. Counterfeiters require specialized equipment and are vulnerable to law enforcement discovery and arrest. Furthermore, most money today — even if it’s nominally in a fiat currency — doesn’t exist in paper form.

Bitcoin attracted a following for its openness and immunity from government control. Its goal is to create a world that replaces cultural power with cryptographic power: verification in code, not trust in people. But there is no such world. And today, that feature is a vulnerability. We really don’t know what will happen when the human systems of trust come into conflict with the trustless verification that makes blockchain currencies unique. Just last week we saw this exact attack on smaller blockchains — not Bitcoin yet. We are watching a public socio-technical experiment in the making, and we will witness its success or failure in the not-too-distant future.

This essay was written with Barath Raghavan, and previously appeared on Wired.com.

EDITED TO ADD (4/14): A research paper on erasing data from the Bitcoin blockchain.

Posted on March 17, 2021 at 6:10 AM • 47 Comments

On the Insecurity of ES&S Voting Machines’ Hash Code

Andrew Appel and Susan Greenhalgh have a blog post on the insecurity of ES&S’s software authentication system:

It turns out that ES&S has bugs in their hash-code checker: if the “reference hashcode” is completely missing, then it’ll say “yes, boss, everything is fine” instead of reporting an error. It’s simultaneously shocking and unsurprising that ES&S’s hashcode checker could contain such a blunder and that it would go unnoticed by the U.S. Election Assistance Commission’s federal certification process. It’s unsurprising because testing naturally tends to focus on “does the system work right when used as intended?” Using the system in unintended ways (which is what hackers would do) is not something anyone will notice.
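The failure mode is easy to reproduce in a schematic checker (this is illustrative Python, not ES&S's actual code): if a missing reference hash is treated as "nothing mismatched," the checker approves anything, including tampered firmware.

```python
# Schematic of the reported bug class (not ES&S's code): a checker that
# treats a *missing* reference hash as success will approve any image.

import hashlib

def buggy_check(data, reference_hash):
    if reference_hash is None:        # missing reference: nothing mismatches,
        return True                   # so the buggy checker reports success
    return hashlib.sha256(data).hexdigest() == reference_hash

def fixed_check(data, reference_hash):
    if reference_hash is None:        # a missing reference is itself an error
        raise ValueError("reference hash missing")
    return hashlib.sha256(data).hexdigest() == reference_hash

firmware = b"tampered image"
print(buggy_check(firmware, None))    # → True: "yes, boss, everything is fine"
# fixed_check(firmware, None) raises instead of silently approving.
```

This is also why testing only the intended path misses the bug: with a valid reference hash present, both versions behave identically.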

Also:

Another gem in Mr. Mechler’s report is in Section 7.1, in which he reveals that acceptance testing of voting systems is done by the vendor, not by the customer. Acceptance testing is the process by which a customer checks a delivered product to make sure it satisfies requirements. To have the vendor do acceptance testing pretty much defeats the purpose.

Posted on March 16, 2021 at 6:36 AM • 27 Comments

Security Analysis of Apple’s “Find My…” Protocol

Interesting research: “Who Can Find My Devices? Security and Privacy of Apple’s Crowd-Sourced Bluetooth Location Tracking System”:

Abstract: Overnight, Apple has turned its hundreds-of-million-device ecosystem into the world’s largest crowd-sourced location tracking network called offline finding (OF). OF leverages online finder devices to detect the presence of missing offline devices using Bluetooth and report an approximate location back to the owner via the Internet. While OF is not the first system of its kind, it is the first to commit to strong privacy goals. In particular, OF aims to ensure finder anonymity, untrackability of owner devices, and confidentiality of location reports. This paper presents the first comprehensive security and privacy analysis of OF. To this end, we recover the specifications of the closed-source OF protocols by means of reverse engineering. We experimentally show that unauthorized access to the location reports allows for accurate device tracking and retrieving a user’s top locations with an error in the order of 10 meters in urban areas. While we find that OF’s design achieves its privacy goals, we discover two distinct design and implementation flaws that can lead to a location correlation attack and unauthorized access to the location history of the past seven days, which could deanonymize users. Apple has partially addressed the issues following our responsible disclosure. Finally, we make our research artifacts publicly available.

There is also code available on GitHub, which allows arbitrary Bluetooth devices to be tracked via Apple’s Find My network.

Posted on March 15, 2021 at 6:16 AM • 8 Comments

Friday Squid Blogging: On SQUIDS

A good tutorial:

But we can go beyond the polarization of electrons and really leverage the electron waviness. By interleaving thin layers of superconducting and normal materials, we can make the quantum electronic equivalents of transistors and diodes such as Superconducting Tunnel Junctions (STJs) and Superconducting Quantum Interference Devices (affectionately known as SQUIDs). These devices take full advantage of the wave-like nature of electrons and can be used as building blocks for all sorts of novel electronics.

Because of the superconducting requirement, they need to be kept very cold, but quantum electronics have already revolutionized precision measurement. The most visible application has been in measuring the Cosmic Microwave Background (CMB). Observations of the CMB have shown that we live in an expanding Universe, determined the age of our Universe, and identified the fraction of it composed of dark matter and dark energy. Measurements of the CMB have transformed our understanding of the Universe we live in. These measurements have been largely enabled by SQUIDs and related superconducting electronics in their microwave cameras.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on March 12, 2021 at 4:10 PM • 156 Comments

More on the Chinese Zero-Day Microsoft Exchange Hack

Nick Weaver has an excellent post on the Microsoft Exchange hack:

The investigative journalist Brian Krebs has produced a handy timeline of events and a few things stand out from the chronology. The attacker was first detected by one group on Jan. 5 and another on Jan. 6, and Microsoft acknowledged the problem immediately. During this time the attacker appeared to be relatively subtle, exploiting particular targets (although we generally lack insight into who was targeted). Microsoft determined on Feb. 18 that it would patch these vulnerabilities on the March 9th “Patch Tuesday” release of fixes.

Somehow, the threat actor either knew that the exploits would soon become worthless or simply guessed that they would. So, in late February, the attacker changed strategy. Instead of simply exploiting targeted Exchange servers, the attackers stepped up their pace considerably by targeting tens of thousands of servers to install the web shell, an exploit that allows attackers to have remote access to a system. Microsoft then released the patch with very little warning on Mar. 2, at which point the attacker simply sought to compromise almost every vulnerable Exchange server on the Internet. The result? Virtually every vulnerable mail server received the web shell as a backdoor for further exploitation, making the patch effectively useless against the Chinese attackers; almost all of the vulnerable systems were exploited before they were patched.

This is a rational strategy for any actor who doesn’t care about consequences. When a zero-day is confidential and undiscovered, the attacker tries to be careful, only using it on targets of sufficient value. But if the attacker knows or has reason to believe their vulnerabilities may be patched, they will increase the pace of exploits and, once a patch is released, there is no reason to not try to exploit everything possible.

We know that Microsoft shares advance information about updates with some organizations. I have long believed that they give the NSA a few weeks’ notice to do basically what the Chinese did: use the exploit widely, because you don’t have to worry about losing the capability.

Estimates of the number of affected networks continue to rise. At least 30,000 in the US, and 100,000 worldwide. More?

And the vulnerabilities:

The Chinese actors were not using a single vulnerability but actually a sequence of four “zero-day” exploits. The first allowed an unauthorized user to basically tell the server “let me in, I’m the server” by tricking the server into contacting itself. After the unauthorized user gained entry, the hacker could use the second vulnerability, which used a malformed voicemail that, when interpreted by the server, allowed them to execute arbitrary commands. Two further vulnerabilities allow the attacker to write new files, which is a common primitive that attackers use to increase their access: An attacker uses a vulnerability to write a file and then uses the arbitrary command execution vulnerability to execute that file.

Using this access, the attackers could read anybody’s email or indeed take over the mail server completely. Critically, they would almost always do more, introducing a “web shell,” a program that would enable further remote exploitation even if the vulnerabilities are patched.
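The first bug in that chain belongs to a well-known class: server-side request forgery, where the server is tricked into treating an outside request as one of its own. A toy sketch of the class of flaw (not Exchange's actual logic; the header name and host list here are invented):

```python
# Toy illustration of the "let me in, I'm the server" flaw class (not
# Exchange's actual code): an authentication check that believes a
# caller-controlled claim of internal origin can be satisfied by anyone.

INTERNAL_HOSTS = {"127.0.0.1", "exchange-backend.local"}

def naive_is_trusted(headers):
    # The flaw in miniature: this header is supplied by the client.
    return headers.get("X-Forwarded-For") in INTERNAL_HOSTS

attacker_request = {"X-Forwarded-For": "127.0.0.1"}   # claims to be the server
print(naive_is_trusted(attacker_request))   # → True: the check is bypassed
```

The fix for this class of bug is to derive trust from something the client cannot forge (the actual socket peer, or authenticated credentials), never from request content.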

The details of that web shell matter. If it was sophisticated, it implies that the Chinese hackers were planning on installing it from the beginning of the operation. If it’s kind of slapdash, it implies a last-minute addition when they realized their exploit window was closing.

Now come the criminal attacks. Any unpatched network is still vulnerable, and we know from history that lots of networks will remain vulnerable for a long time. Expect the ransomware gangs to weaponize this attack within days.

EDITED TO ADD (3/12): Right on schedule, criminal hacker groups are exploiting the vulnerabilities.

EDITED TO ADD (3/13): And now the ransomware.

Posted on March 10, 2021 at 6:28 AM • 38 Comments

On Not Fixing Old Vulnerabilities

How is this even possible?

…26% of companies Positive Technologies tested were vulnerable to WannaCry, which was a threat years ago, and some even vulnerable to Heartbleed. “The most frequent vulnerabilities detected during automated assessment date back to 2013-2017, which indicates a lack of recent software updates,” the report stated.

26%!? One in four networks?

Even if we assume that the report is self-serving to the company that wrote it, and that the statistic is not generally representative, this is still a disaster. The number should be 0%.

WannaCry was a 2017 cyberattack, based on an NSA-discovered and Russia-stolen-and-published Windows vulnerability. It primarily affects older, no-longer-supported products like Windows 7. If we can’t keep our systems secure from these vulnerabilities, how are we ever going to secure them from new threats?

Posted on March 9, 2021 at 6:16 AM • 48 Comments

Hacking Digitally Signed PDF Files

Interesting paper: “Shadow Attacks: Hiding and Replacing Content in Signed PDFs”:

Abstract: Digitally signed PDFs are used in contracts and invoices to guarantee the authenticity and integrity of their content. A user opening a signed PDF expects to see a warning in case of any modification. In 2019, Mladenov et al. revealed various parsing vulnerabilities in PDF viewer implementations. They showed attacks that could modify PDF documents without invalidating the signature. As a consequence, affected vendors of PDF viewers implemented countermeasures preventing all attacks.

This paper introduces a novel class of attacks, which we call shadow attacks. The shadow attacks circumvent all existing countermeasures and break the integrity protection of digitally signed PDFs. Compared to previous attacks, the shadow attacks do not abuse implementation issues in a PDF viewer. In contrast, shadow attacks use the enormous flexibility provided by the PDF specification so that shadow documents remain standard-compliant. Since shadow attacks abuse only legitimate features, they are hard to mitigate.

Our results reveal that 16 (including Adobe Acrobat and Foxit Reader) of the 29 PDF viewers tested were vulnerable to shadow attacks. We introduce our tool PDF-Attacker which can automatically generate shadow attacks. In addition, we implemented PDF-Detector to prevent shadow documents from being signed or forensically detect exploits after being applied to signed PDFs.
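A simplified model helps explain why this problem class exists at all (this is my sketch of the general issue, not the paper's specific attacks): a PDF signature covers a fixed byte range, but the format allows incremental updates appended after that range, so what a viewer renders can change while the signature still verifies over the original bytes. Here the signature is stood in for by an HMAC:

```python
# Simplified model of the signed-byte-range problem in PDFs (not the
# paper's attacks): an incremental update appended after signing changes
# the rendered content, yet the signature over the original range holds.

import hashlib, hmac

KEY = b"signer-private-key"          # stand-in for a real signing key

def sign(document: bytes) -> bytes:
    return hmac.new(KEY, document, hashlib.sha256).digest()

def verify(document: bytes, signed_len: int, signature: bytes) -> bool:
    # Only the originally signed byte range is checked.
    return hmac.compare_digest(sign(document[:signed_len]), signature)

original = b"%PDF... Pay the contractor $100 ..."
signature = sign(original)

# An appended update changes what a viewer would show.
shadowed = original + b"\n%%update: Pay the contractor $100000"

print(verify(shadowed, len(original), signature))   # → True, content changed
```

The shadow attacks in the paper are subtler than this (they hide content before signing and reveal it afterward using spec-compliant features), but the root tension is the same: the signature protects bytes, while the reader interprets a document.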

EDITED TO ADD (3/12): This was written about last summer.

Posted on March 8, 2021 at 6:10 AM • 27 Comments

No, RSA Is Not Broken

I have been seeing this paper by cryptographer Claus Peter Schnorr making the rounds: “Fast Factoring Integers by SVP Algorithms.” It describes a new factoring method, and its abstract ends with the provocative sentence: “This destroys the RSA cryptosystem.”

It does not. At best, it’s an improvement in factoring — and I’m not sure it’s even that. The paper is a preprint: it hasn’t been peer reviewed. Be careful taking its claims at face value.
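For context on why a factoring improvement would matter: RSA's security rests entirely on the difficulty of factoring the public modulus n = p·q. Anyone who learns p and q can recompute the private exponent directly. A textbook-sized sketch (toy parameters, nothing to do with Schnorr's method):

```python
# Why factoring breaks RSA (textbook toy numbers): knowing p and q lets
# anyone recompute the private exponent d from the public exponent e.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def private_exponent(p, q, e):
    phi = (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1                  # e must be invertible mod phi
    return d % phi

p, q, e = 61, 53, 17               # toy primes; real keys use ~1024-bit primes
n = p * q                          # public modulus
d = private_exponent(p, q, e)      # recovered the moment p and q are known

msg = 42
cipher = pow(msg, e, n)
print(pow(cipher, d, n))           # → 42: decryption with the recovered key
```

So any genuine advance in factoring translates immediately into weaker RSA keys, which is why claims like this one get scrutinized so quickly.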

Some discussion here.

I’ll append more analysis links to this post when I find them.

EDITED TO ADD (3/12): The latest version of the paper does not have the words “This destroys the RSA cryptosystem” in the abstract. Some more discussion.

Posted on March 5, 2021 at 10:48 AM • 18 Comments

Chinese Hackers Stole an NSA Windows Exploit in 2014

Check Point has evidence that (probably government affiliated) Chinese hackers stole and cloned an NSA Windows hacking tool years before (probably government affiliated) Russian hackers stole and then published the same tool.

The timeline basically seems to be, according to Check Point:

  • 2013: NSA’s Equation Group developed a set of exploits including one called EpMe that elevates one’s privileges on a vulnerable Windows system to system-administrator level, granting full control. This allows someone with a foothold on a machine to commandeer the whole box.
  • 2014-2015: China’s hacking team code-named APT31, aka Zirconium, developed Jian by, one way or another, cloning EpMe.
  • Early 2017: The Equation Group’s tools were teased and then leaked online by a team calling itself the Shadow Brokers. Around that time, Microsoft cancelled its February Patch Tuesday, identified the vulnerability exploited by EpMe (CVE-2017-0005), and fixed it in a bumper March update. Interestingly enough, Lockheed Martin was credited as alerting Microsoft to the flaw, suggesting it was perhaps used against an American target.
  • Mid 2017: Microsoft quietly fixed the vulnerability exploited by the leaked EpMo exploit.

Lots of news articles about this.

Posted on March 4, 2021 at 6:25 AM • 13 Comments

Mysterious Macintosh Malware

This is weird:

Once an hour, infected Macs check a control server to see if there are any new commands the malware should run or binaries to execute. So far, however, researchers have yet to observe delivery of any payload on any of the infected 30,000 machines, leaving the malware’s ultimate goal unknown. The lack of a final payload suggests that the malware may spring into action once an unknown condition is met.

Also curious, the malware comes with a mechanism to completely remove itself, a capability that’s typically reserved for high-stealth operations. So far, though, there are no signs the self-destruct feature has been used, raising the question of why the mechanism exists.

Besides those questions, the malware is notable for a version that runs natively on the M1 chip that Apple introduced in November, making it only the second known piece of macOS malware to do so. The malicious binary is more mysterious still because it uses the macOS Installer JavaScript API to execute commands. That makes it hard to analyze installation package contents or the way that package uses the JavaScript commands.

The malware has been found in 153 countries with detections concentrated in the US, UK, Canada, France, and Germany. Its use of Amazon Web Services and the Akamai content delivery network ensures the command infrastructure works reliably and also makes blocking the servers harder. Researchers from Red Canary, the security firm that discovered the malware, are calling the malware Silver Sparrow.

Feels government-designed, rather than criminal or hacker.

Another article. And the Red Canary analysis.

Posted on March 2, 2021 at 6:05 AM • 19 Comments

National Security Risks of Late-Stage Capitalism

Early in 2020, cyberspace attackers apparently working for the Russian government compromised a piece of widely used network management software made by a company called SolarWinds. The hack gave the attackers access to the computer networks of some 18,000 of SolarWinds’s customers, including US government agencies such as the Homeland Security Department and State Department, American nuclear research labs, government contractors, IT companies and nongovernmental agencies around the world.

It was a huge attack, with major implications for US national security. The Senate Intelligence Committee is scheduled to hold a hearing on the breach on Tuesday. Who is at fault?

The US government deserves considerable blame, of course, for its inadequate cyberdefense. But to see the problem only as a technical shortcoming is to miss the bigger picture. The modern market economy, which aggressively rewards corporations for short-term profits and aggressive cost-cutting, is also part of the problem: Its incentive structure all but ensures that successful tech companies will end up selling insecure products and services.

Like all for-profit corporations, SolarWinds aims to increase shareholder value by minimizing costs and maximizing profit. The company is owned in large part by Silver Lake and Thoma Bravo, private-equity firms known for extreme cost-cutting.

SolarWinds certainly seems to have underspent on security. The company outsourced much of its software engineering to cheaper programmers overseas, even though that typically increases the risk of security vulnerabilities. For a while, in 2019, the update server’s password for SolarWinds’s network management software was reported to be “solarwinds123.” Russian hackers were able to breach SolarWinds’s own email system and lurk there for months. Chinese hackers appear to have exploited a separate vulnerability in the company’s products to break into US government computers. A cybersecurity adviser for the company said that he quit after his recommendations to strengthen security were ignored.

There is no good reason to underspend on security other than to save money — especially when your clients include government agencies around the world and when the technology experts that you pay to advise you are telling you to do more.

As the economics writer Matt Stoller has suggested, cybersecurity is a natural area for a technology company to cut costs because its customers won’t notice unless they are hacked — and if they are, they will have already paid for the product. In other words, the risk of a cyberattack can be transferred to the customers. Doesn’t this strategy jeopardize the possibility of long-term, repeat customers? Sure, there’s a danger there — but investors are so focused on short-term gains that they’re too often willing to take that risk.

The market loves to reward corporations for risk-taking when those risks are largely borne by other parties, like taxpayers. This is known as “privatizing profits and socializing losses.” Standard examples include companies that are deemed “too big to fail,” which means that society as a whole pays for their bad luck or poor business decisions. When national security is compromised by high-flying technology companies that fob off cybersecurity risks onto their customers, something similar is at work.

Similar misaligned incentives affect your everyday cybersecurity, too. Your smartphone is vulnerable to something called SIM-swap fraud because phone companies want to make it easy for you to frequently get a new phone — and they know that the cost of fraud is largely borne by customers. Data brokers and credit bureaus that collect, use, and sell your personal data don’t spend a lot of money securing it because it’s your problem if someone hacks them and steals it. Social media companies too easily let hate speech and misinformation flourish on their platforms because it’s expensive and complicated to remove it, and they don’t suffer the immediate costs — indeed, they tend to profit from user engagement regardless of its nature.

There are two problems to solve. The first is information asymmetry: buyers can’t adequately judge the security of software products or company practices. The second is a perverse incentive structure: the market encourages companies to make decisions in their private interest, even if that imperils the broader interests of society. Together these two problems result in companies that save money by taking on greater risk and then pass off that risk to the rest of us, as individuals and as a nation.

The only way to force companies to provide safety and security features for customers and users is with government intervention. Companies need to pay the true costs of their insecurities, through a combination of laws, regulations, and legal liability. Governments routinely legislate safety — pollution standards, automobile seat belts, lead-free gasoline, food service regulations. We need to do the same with cybersecurity: the federal government should set minimum security standards for software and software development.

In today’s underregulated markets, it’s just too easy for software companies like SolarWinds to save money by skimping on security and to hope for the best. That’s a rational decision in today’s free-market world, and the only way to change that is to change the economic incentives.

This essay previously appeared in the New York Times.

Posted on March 1, 2021 at 6:12 AM • 49 Comments