April 15, 2021
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
- Security Analysis of Apple’s “Find My…” Protocol
- On the Insecurity of ES&S Voting Machines’ Hash Code
- Illegal Content and the Blockchain
- Exploiting Spectre Over the Internet
- Easy SMS Hijacking
- Details of a Computer Banking Scam
- Accellion Supply Chain Hack
- Determining Key Shape from Sound
- Hacking Weapons Systems
- System Update: New Android Malware
- Fugitive Identified on YouTube By His Distinctive Tattoos
- Malware Hidden in Call of Duty Cheating Software
- Wi-Fi Devices as Physical Object Sensors
- Phone Cloning Scam
- Signal Adds Cryptocurrency Support
- Google’s Project Zero Finds a Nation-State Zero-Day Operation
- Backdoor Added — But Found — in PHP
- More Biden Cybersecurity Nominations
- The FBI Is Now Securing Networks Without Their Owners’ Permission
- Upcoming Speaking Engagements
Abstract: Overnight, Apple has turned its hundreds-of-million-device ecosystem into the world’s largest crowd-sourced location tracking network called offline finding (OF). OF leverages online finder devices to detect the presence of missing offline devices using Bluetooth and report an approximate location back to the owner via the Internet. While OF is not the first system of its kind, it is the first to commit to strong privacy goals. In particular, OF aims to ensure finder anonymity, untrackability of owner devices, and confidentiality of location reports. This paper presents the first comprehensive security and privacy analysis of OF. To this end, we recover the specifications of the closed-source OF protocols by means of reverse engineering. We experimentally show that unauthorized access to the location reports allows for accurate device tracking and retrieving a user’s top locations with an error in the order of 10 meters in urban areas. While we find that OF’s design achieves its privacy goals, we discover two distinct design and implementation flaws that can lead to a location correlation attack and unauthorized access to the location history of the past seven days, which could deanonymize users. Apple has partially addressed the issues following our responsible disclosure. Finally, we make our research artifacts publicly available.
There is also code available on GitHub, which allows arbitrary Bluetooth devices to be tracked via Apple’s Find My network.
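To get a feel for the privacy design, here is a deliberately simplified toy model, my own illustration rather than Apple's actual protocol (which uses rotating elliptic-curve key pairs and encrypts location reports to the advertised public key). The toy shows only the rotating-identifier idea: a lost device broadcasts unlinkable pseudonyms derived from a secret that only the owner holds, so finders can report sightings without being able to track the device over time.

```python
import hashlib
import hmac

def period_key(master_secret: bytes, period: int) -> bytes:
    """Derive a per-period key from the owner's master secret (toy KDF)."""
    return hmac.new(master_secret, period.to_bytes(8, "big"), hashlib.sha256).digest()

def broadcast_id(master_secret: bytes, period: int) -> bytes:
    """The rotating identifier a lost device would advertise during `period`."""
    return hashlib.sha256(period_key(master_secret, period)).digest()[:16]

# Owner side: knows the master secret, so it can precompute every identifier
# and later query the network for reports matching any of them.
secret = b"owner-master-secret"   # hypothetical value for illustration
ids = {broadcast_id(secret, p): p for p in range(100)}

# Finder side: sees only opaque 16-byte identifiers. Consecutive periods share
# no visible structure, so observers can't link broadcasts to one device.
seen = broadcast_id(secret, 42)
assert seen in ids and ids[seen] == 42                      # owner matches the report
assert broadcast_id(secret, 42) != broadcast_id(secret, 43)  # unlinkable across periods
```

The attacks in the paper exploit gaps around this core design (report access and location history), not the rotation scheme itself.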
It turns out that ES&S has bugs in their hash-code checker: if the “reference hashcode” is completely missing, then it’ll say “yes, boss, everything is fine” instead of reporting an error. It’s simultaneously shocking and unsurprising that ES&S’s hashcode checker could contain such a blunder and that it would go unnoticed by the U.S. Election Assistance Commission’s federal certification process. It’s unsurprising because testing naturally tends to focus on “does the system work right when used as intended?” Using the system in unintended ways (which is what hackers would do) is not something anyone will notice.
Another gem in Mr. Mechler’s report is in Section 7.1, in which he reveals that acceptance testing of voting systems is done by the vendor, not by the customer. Acceptance testing is the process by which a customer checks a delivered product to make sure it satisfies requirements. To have the vendor do acceptance testing pretty much defeats the purpose.
[2021.03.17] Security researchers have recently discovered a botnet with a novel defense against takedowns. Normally, authorities can disable a botnet by taking over its command-and-control server. With nowhere to go for instructions, the botnet is rendered useless. But over the years, botnet designers have come up with ways to make this counterattack harder. Now the content-delivery network Akamai has reported on a new method: a botnet that uses the Bitcoin blockchain ledger. Since the blockchain is globally accessible and hard to take down, the botnet’s operators appear to be safe.
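Akamai's write-up describes the bots recovering a backup command-and-control IP address from the amounts of recent transactions sent to a wallet the operators control. A simplified sketch of that covert channel (the exact packing the real malware uses may differ; the amounts and address below are made up):

```python
def encode_ip(ip: str) -> tuple[int, int]:
    """Pack an IPv4 address into two small payment amounts (in satoshis).
    Each amount carries two octets: amount = first_octet * 256 + second_octet."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return a * 256 + b, c * 256 + d

def decode_ip(amount1: int, amount2: int) -> str:
    """What a bot would do: read the two latest transaction amounts for a
    known wallet from any public blockchain explorer and unpack the IP."""
    return ".".join(str(x) for x in (amount1 // 256, amount1 % 256,
                                     amount2 // 256, amount2 % 256))

tx1, tx2 = encode_ip("203.0.113.25")   # documentation-range example address
assert decode_ip(tx1, tx2) == "203.0.113.25"
```

The point is that the "server" being seized accomplishes nothing: the operators just send two tiny transactions encoding a new IP, and every copy of the ledger worldwide faithfully replicates the update.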
It’s best to avoid explaining the mathematics of Bitcoin’s blockchain, but to understand the colossal implications here, you need to understand one concept. Blockchains are a type of “distributed ledger”: a record of all transactions since the beginning, and everyone using the blockchain needs to have access to — and reference — a copy of it. What if someone puts illegal material in the blockchain? Either everyone has a copy of it, or the blockchain’s security fails.
To be fair, not absolutely everyone who uses a blockchain holds a copy of the entire ledger. Many who buy cryptocurrencies like Bitcoin and Ethereum don’t bother using the ledger to verify their purchase. Many don’t actually hold the currency outright, and instead trust an exchange to do the transactions and hold the coins. But people need to continually verify the blockchain’s history on the ledger for the system to be secure. If they stopped, then it would be trivial to forge coins. That’s how the system works.
Some years ago, people started noticing all sorts of things embedded in the Bitcoin blockchain. There are digital images, including one of Nelson Mandela. There’s the Bitcoin logo, and the original paper describing Bitcoin by its alleged founder, the pseudonymous Satoshi Nakamoto. There are advertisements, and several prayers. There’s even illegal pornography and leaked classified documents. All of these were put in by anonymous Bitcoin users. But none of this, so far, appears to seriously threaten those in power in governments and corporations. Once someone adds something to the Bitcoin ledger, it becomes sacrosanct. Removing something requires a fork of the blockchain, in which Bitcoin fragments into multiple parallel cryptocurrencies (and associated blockchains). Forks happen, rarely, but never yet because of legal coercion. And repeated forking would destroy Bitcoin’s stature as a stable(ish) currency.
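Why removal requires a fork falls directly out of how blocks are linked. A minimal, simplified hash chain (nothing like the real Bitcoin block format, but the same linking principle) shows it:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(payloads):
    """Each block commits to the hash of its predecessor."""
    chain, prev = [], "0" * 64
    for data in payloads:
        block = {"prev": prev, "data": data}
        prev = block_hash(block)
        chain.append(block)
    return chain

def valid(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["coinbase", "payment", "embedded data", "payment"])
assert valid(chain)

# Scrubbing the offending block breaks every later link: the only way to
# "remove" it is to rebuild the chain from that point on -- i.e., to fork.
chain[2]["data"] = "REDACTED"
assert not valid(chain)
```

Every participant who verifies the ledger would reject the edited history, which is precisely why removing content means splitting the currency.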
The botnet’s designers are using this idea to create an unblockable means of coordination, but the implications are much greater. Imagine someone using this idea to evade government censorship. Most Bitcoin mining happens in China. What if someone added a bunch of Chinese-censored Falun Gong texts to the blockchain?
What if someone added a type of political speech that Singapore routinely censors? Or cartoons that Disney holds the copyright to?
In Bitcoin’s and most other public blockchains there are no central, trusted authorities. Anyone in the world can perform transactions or become a miner. Everyone is equal to the extent that they have the hardware and electricity to perform cryptographic computations.
This openness is also a vulnerability, one that opens the door to asymmetric threats and small-time malicious actors. Anyone can put information in the one and only Bitcoin blockchain. Again, that’s how the system works.
Over the last three decades, the world has witnessed the power of open networks: blockchains, social media, the very web itself. What makes them so powerful is that their value is related not just to the number of users, but the number of potential links between users. This is Metcalfe’s law — value in a network is quadratic, not linear, in the number of users — and every open network since has followed its prophecy.
As Bitcoin has grown, its monetary value has skyrocketed, even if its uses remain unclear. With no barrier to entry, the blockchain space has been a Wild West of innovation and lawlessness. But today, many prominent advocates suggest Bitcoin should become a global, universal currency. In this context, asymmetric threats like embedded illegal data become a major challenge.
The philosophy behind Bitcoin traces to the earliest days of the open internet. Articulated in John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, it was and is the ethos of tech startups: Code is more trustworthy than institutions. Information is meant to be free, and nobody has the right — and should not have the ability — to control it.
But information must reside somewhere. Code is written by and for people, stored on computers located within countries, and embedded within the institutions and societies we have created. To trust information is to trust its chain of custody and the social context it comes from. Neither code nor information is value-neutral, nor ever free of human context.
Today, Barlow’s vision is a mere shadow; every society controls the information its people can access. Some of this control is through overt censorship, as China controls information about Taiwan, Tiananmen Square, and the Uyghurs. Some of this is through civil laws designed by the powerful for their benefit, as with Disney and US copyright law, or UK libel law.
Bitcoin and blockchains like it are on a collision course with these laws. What happens when the interests of the powerful, with the law on their side, are pitted against an open blockchain? Let’s imagine how our various scenarios might play out.
China first: In response to Falun Gong texts in the blockchain, the People’s Republic decrees that any miners processing blocks with banned content will be taken offline — their IPs will be blacklisted. This causes a hard fork of the blockchain at the point just before the banned content. China might do this under the guise of a “patriotic” messaging campaign, publicly stating that it’s merely maintaining financial sovereignty from Western banks. Then it uses paid influencers and moderators on social media to pump the China Bitcoin fork, through both partisan comments and transactions. Two distinct forks would soon emerge, one behind China’s Great Firewall and one outside. Other countries with similar governmental and media ecosystems — Russia, Singapore, Myanmar — might consider following suit, creating multiple national Bitcoin forks. These would operate independently, under mandates to censor unacceptable transactions from then on.
Disney’s approach would play out differently. Imagine the company announces it will sue any ISP that hosts copyrighted content, starting with networks hosting the biggest miners. (Disney has sued to enforce its intellectual property rights in China before.) After some legal pressure, the networks cut the miners off. The miners reestablish themselves on another network, but Disney keeps the pressure on. Eventually miners get pushed further and further off of mainstream network providers, and resort to tunneling their traffic through an anonymity service like Tor. That causes a major slowdown in the already slow (because of the mathematics) Bitcoin network. Disney might issue takedown requests for Tor exit nodes, causing the network to slow to a crawl. It could persist like this for a long time without a fork. Or the slowdown could cause people to jump ship, either by forking Bitcoin or switching to another cryptocurrency without the copyrighted content.
And then there’s illegal pornographic content and leaked classified data. These have been on the Bitcoin blockchain for over five years, and nothing has been done about it. Just like the botnet example, it may be that these do not threaten existing power structures enough to warrant takedowns. This could easily change if Bitcoin becomes a popular way to share child sexual abuse material. Simply having these illegal images on your hard drive is a felony, which could have significant repercussions for anyone involved in Bitcoin.
Whichever scenario plays out, this may be the Achilles heel of Bitcoin as a global currency.
If an open network such as a blockchain were threatened by a powerful organization — China’s censors, Disney’s lawyers, or the FBI trying to take down a more dangerous botnet — it could fragment into multiple networks. That’s not just a nuisance, but an existential risk to Bitcoin.
Suppose Bitcoin were fragmented into 10 smaller blockchains, perhaps by geography: one in China, another in the US, and so on. These fragments might retain their original users, and by ordinary logic, nothing would have changed. But Metcalfe’s law implies that the overall value of these blockchain fragments combined would be a mere tenth of the original. That is because the value of an open network relates to how many others you can communicate with — and, in a blockchain, transact with. Since the security of the Bitcoin currency is achieved through expensive computations, fragmented blockchains are also easier to attack in a conventional manner — through a 51 percent attack — by an organized attacker. This is especially the case if the smaller blockchains all use the same hash function, as they would here.
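The arithmetic behind that "mere tenth" claim can be checked in a few lines:

```python
def metcalfe_value(users: int) -> int:
    """Metcalfe's law: network value scales with the number of potential
    links between users, n*(n-1)/2 -- quadratic in n."""
    return users * (users - 1) // 2

n = 1_000_000                      # illustrative user count
whole = metcalfe_value(n)
fragments = 10 * metcalfe_value(n // 10)

# Ten equal fragments keep all the users but only about a tenth of the value:
# 10 * (n/10)^2 = n^2 / 10.
print(fragments / whole)  # -> roughly 0.1
```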
Traditional currencies are generally not vulnerable to these sorts of asymmetric threats. There are no viable small-scale attacks against the US dollar, or almost any other fiat currency. The institutions and beliefs that give money its value are deep-seated, despite instances of currency hyperinflation.
The only notable attacks against fiat currencies are in the form of counterfeiting. Even in the past, when counterfeit bills were common, attacks could be thwarted. Counterfeiters require specialized equipment and are vulnerable to law enforcement discovery and arrest. Furthermore, most money today — even if it’s nominally in a fiat currency — doesn’t exist in paper form.
Bitcoin attracted a following for its openness and immunity from government control. Its goal is to create a world that replaces cultural power with cryptographic power: verification in code, not trust in people. But there is no such world. And today, that feature is a vulnerability. We really don’t know what will happen when the human systems of trust come into conflict with the trustless verification that makes blockchain currencies unique. Just last week we saw this exact attack on smaller blockchains — not Bitcoin yet. We are watching a public socio-technical experiment in the making, and we will witness its success or failure in the not-too-distant future.
This essay was written with Barath Raghavan, and previously appeared on Wired.com.
EDITED TO ADD (4/14): A research paper on erasing data from the Bitcoin blockchain.
The demonstration website can leak data at a speed of 1kB/s when running on Chrome 88 on an Intel Skylake CPU. Note that the code will likely require minor modifications to apply to other CPUs or browser versions; however, in our tests the attack was successful on several other processors, including the Apple M1 ARM CPU, without any major changes.
[2021.03.19] Vice is reporting on a cell phone vulnerability caused by commercial SMS services. One of the things these services permit is text message forwarding. It turns out that with a little bit of anonymous money — in this case, $16 off an anonymous prepaid credit card — and a few lies, you can forward the text messages from any phone to any other phone.
For businesses, sending text messages to hundreds, thousands, or perhaps millions of customers can be a laborious task. Sakari streamlines that process by letting business customers import their own number. A wide ecosystem of these companies exist, each advertising their own ability to run text messaging for other businesses. Some firms say they only allow customers to reroute messages for business landlines or VoIP phones, while others allow mobile numbers too.
Sakari offers a free trial to anyone wishing to see what the company’s dashboard looks like. The cheapest plan, which allows customers to add a phone number they want to send and receive texts as, is where the $16 goes. Lucky225 provided Motherboard with screenshots of Sakari’s interface, which show a red “+” symbol where users can add a number.
While adding a number, Sakari provides the Letter of Authorization for the user to sign. Sakari’s LOA says that the user should not conduct any unlawful, harassing, or inappropriate behaviour with the text messaging service and phone number.
But as Lucky225 showed, a user can just sign up with someone else’s number and receive their text messages instead.
This is much easier than SMS hijacking, and causes the same security vulnerabilities. Too many networks use SMS as an authentication mechanism.
Once the hacker is able to reroute a target’s text messages, it can then be trivial to hack into other accounts associated with that phone number. In this case, the hacker sent login requests to Bumble, WhatsApp, and Postmates, and easily accessed the accounts.
Don’t focus too much on the particular company in this article.
But Sakari is only one company. And there are plenty of others available in this overlooked industry.
Tuketu said that after one provider cut off their access, “it took us two minutes to find another.”
[2021.03.22] This is a longish video that describes a profitable computer banking scam that’s run out of call centers in places like India. There’s a lot of fluff about glitterbombs and the like, but the details are interesting. The scammers convince the victims to give them remote access to their computers, and then that they’ve mistyped a dollar amount and have received a large refund that they didn’t deserve. Then they convince the victims to send cash to a drop site, where a money mule retrieves it and forwards it to the scammers.
I found it interesting for several reasons. One, it illustrates the complex business nature of the scam: there are a lot of people doing specialized jobs in order for it to work. Two, it clearly shows the psychological manipulation involved, and how it preys on the unsophisticated and vulnerable. And three, it’s an evolving tactic that gets around banks increasingly flagging and blocking suspicious electronic transfers.
There’s much in the article about when Accellion knew about the vulnerability, when it alerted its customers, and when it patched its software.
The governor of New Zealand’s central bank, Adrian Orr, says Accellion failed to warn it after first learning in mid-December that the nearly 20-year-old FTA application — using antiquated technology and set for retirement — had been breached.
Despite having a patch available on Dec. 20, Accellion did not notify the bank in time to prevent its appliance from being breached five days later, the bank said.
EDITED TO ADD (4/14): It appears spy plane details were leaked after the vendor didn’t pay the ransom.
Listen to Your Key: Towards Acoustics-based Physical Key Inference
Abstract: Physical locks are one of the most prevalent mechanisms for securing objects such as doors. While many of these locks are vulnerable to lock-picking, they are still widely used as lock-picking requires specific training with tailored instruments, and easily raises suspicion. In this paper, we propose SpiKey, a novel attack that significantly lowers the bar for an attacker as opposed to the lock-picking attack, by requiring only the use of a smartphone microphone to infer the shape of the victim’s key, namely bittings (or cut depths), which form the secret of a key. When a victim inserts his/her key into the lock, the emitted sound is captured by the attacker’s microphone. SpiKey leverages the time difference between audible clicks to ultimately infer the bitting information, i.e., shape of the physical key. As a proof-of-concept, we provide a simulation, based on real-world recordings, and demonstrate a significant reduction in search space from a pool of more than 330 thousand keys to three candidate keys for the most frequent case.
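The core observation is simple physics: if the key moves at constant speed, the time between audible clicks is proportional to the distance between adjacent ridges on the key blade. A toy sketch of that first step (the timestamps and speed below are invented; the real attack goes much further, mapping these distances against standardized keyway geometry to rank candidate bittings):

```python
# Toy model: at constant insertion speed, the time between audible clicks is
# proportional to the distance between adjacent cut ridges on the key.
SPEED_MM_PER_S = 20.0   # assumed constant insertion speed (hypothetical value)

def ridge_spacings(click_times_s):
    """Convert click timestamps (seconds) into distances between ridges (mm)."""
    return [round((b - a) * SPEED_MM_PER_S, 2)
            for a, b in zip(click_times_s, click_times_s[1:])]

# Hypothetical recording: six clicks as the key slides past the last pin.
clicks = [0.00, 0.08, 0.20, 0.26, 0.38, 0.46]
print(ridge_spacings(clicks))  # -> [1.6, 2.4, 1.2, 2.4, 1.6]
```

The constant-speed assumption is exactly the limitation the Scientific American piece flags below.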
Scientific American podcast:
The strategy is a long way from being viable in the real world. For one thing, the method relies on the key being inserted at a constant speed. And the audio element also poses challenges like background noise.
Boing Boing post.
EDITED TO ADD (4/14): I seem to have blogged this previously.
Basically, there is no reason to believe that software in weapons systems is any more vulnerability free than any other software. So now the question is whether the software can be accessed over the Internet. Increasingly, it is. This is likely to become a bigger problem in the near future. We need to think about future wars where the tech simply doesn’t work.
The broad range of data that this sneaky little bastard is capable of stealing is pretty horrifying. It includes: instant messenger messages and database files; call logs and phone contacts; Whatsapp messages and databases; pictures and videos; all of your text messages; and information on pretty much everything else that is on your phone (it will inventory the rest of the apps on your phone, for instance).
The app can also monitor your GPS location (so it knows exactly where you are), hijack your phone’s camera to take pictures, review your browser’s search history and bookmarks, and turn on the phone mic to record audio.
The app’s spying capabilities are triggered whenever the device receives new information. Researchers write that the RAT is constantly on the lookout for “any activity of interest, such as a phone call, to immediately record the conversation, collect the updated call log, and then upload the contents to the C&C server as an encrypted ZIP file.” After thieving your data, the app will subsequently erase evidence of its own activity, hiding what it has been doing.
This is a sophisticated piece of malware. It feels like the product of a national intelligence agency or — and I think more likely — one of the cyberweapons arms manufacturers that sells this kind of capability to governments around the world.
Most troublingly, Activision says that the “cheat” tool has been advertised multiple times on a popular cheating forum under the title “new COD hack.” (Gamers looking to flout the rules will typically go to such forums to find new ways to do so.) While the report doesn’t mention which forum they were posted on (that certainly would’ve been helpful), it does say that these offerings have popped up a number of times. They have also been seen advertised in YouTube videos, where instructions were provided on how gamers can run the “cheats” on their devices, and the report says that “comments [on the videos] seemingly indicate people had downloaded and attempted to use the tool.”
Part of the reason this attack could work so well is that game cheats typically require a user to disable key security features that would otherwise keep a malicious program out of their system. The hacker is basically getting the victim to do their own work for them.
“It is common practice when configuring a cheat program to run it with the highest system privileges,” the report notes. “Guides for cheats will typically ask users to disable or uninstall antivirus software and host firewalls, disable kernel code signing, etc.”
In three years or so, the Wi-Fi specification is scheduled to get an upgrade that will turn wireless devices into sensors capable of gathering data about the people and objects bathed in their signals.
“When 802.11bf will be finalized and introduced as an IEEE standard in September 2024, Wi-Fi will cease to be a communication-only standard and will legitimately become a full-fledged sensing paradigm,” explains Francesco Restuccia, assistant professor of electrical and computer engineering at Northeastern University, in a paper summarizing the state of the Wi-Fi Sensing project (SENS) currently being developed by the Institute of Electrical and Electronics Engineers (IEEE).
SENS is envisioned as a way for devices capable of sending and receiving wireless data to use Wi-Fi signal interference differences to measure the range, velocity, direction, motion, presence, and proximity of people and objects.
More detail in the article. Security and privacy controls are still to be worked out, which means that there probably won’t be any.
[2021.04.06] A newspaper in Malaysia is reporting on a cell phone cloning scam. The scammer convinces the victim to lend them their cell phone, and the scammer quickly clones it. What’s clever about this scam is that the victim is an Uber driver and the scammer is the passenger, so the driver is naturally busy and can’t see what the scammer is doing.
[2021.04.07] According to Wired, Signal is adding support for the cryptocurrency MobileCoin, “a form of digital cash designed to work efficiently on mobile devices while protecting users’ privacy and even their anonymity.”
Moxie Marlinspike, the creator of Signal and CEO of the nonprofit that runs it, describes the new payments feature as an attempt to extend Signal’s privacy protections to payments with the same seamless experience that Signal has offered for encrypted conversations. “There’s a palpable difference in the feeling of what it’s like to communicate over Signal, knowing you’re not being watched or listened to, versus other communication platforms,” Marlinspike told WIRED in an interview. “I would like to get to a world where not only can you feel that when you talk to your therapist over Signal, but also when you pay your therapist for the session over Signal.”
I think this is an incredibly bad idea. It’s not just the bloating of what was a clean secure communications app. It’s not just that blockchain is just plain stupid. It’s not even that Signal is choosing to tie itself to a specific blockchain currency. It’s that adding a cryptocurrency to an end-to-end encrypted app muddies the morality of the product, and invites all sorts of government investigative and regulatory meddling: by the IRS, the SEC, FinCEN, and probably the FBI.
And I see no good reason to do this. Secure communications and secure transactions can be separate apps, even separate apps from the same organization. End-to-end encryption is already at risk. Signal is the best app we have out there. Combining it with a cryptocurrency means that the whole system dies if any part dies.
EDITED TO ADD: Commentary from Stephen Diehl:
I think I speak for many technologists when I say that any bolted-on cryptocurrency monetization scheme smells like a giant pile of rubbish and feels enormously user-exploitative. We’ve seen this before, after all Telegram tried the same thing in an ICO that imploded when SEC shut them down, and Facebook famously tried and failed to monetize WhatsApp through their decentralized-but-not-really digital money market fund project.
Signal is still a great piece of software. Just do one thing and do it well, be the trusted de facto platform for private messaging that empowers dissidents, journalists and grandma all to communicate freely with the same guarantees of privacy. Don’t become a dodgy money transmitter business. This is not the way.
[2021.04.08] Google’s Project Zero discovered, and caused to be patched, eleven zero-day exploits against Chrome, Safari, Microsoft Windows, and iOS. This seems to have been exploited by “Western government operatives actively conducting a counterterrorism operation”:
The exploits, which went back to early 2020 and used never-before-seen techniques, were “watering hole” attacks that used infected websites to deliver malware to visitors. They caught the attention of cybersecurity experts thanks to their scale, sophistication, and speed.
It’s true that Project Zero does not formally attribute hacking to specific groups. But the Threat Analysis Group, which also worked on the project, does perform attribution. Google omitted many more details than just the name of the government behind the hacks, and through that information, the teams knew internally who the hacker and targets were. It is not clear whether Google gave advance notice to government officials that they would be publicizing and shutting down the method of attack.
[2021.04.09] Unknown hackers attempted to add a backdoor to the PHP source code. The attack took the form of two malicious commits with the subject “fix typo,” pushed under the names of known PHP developers and maintainers. They were discovered and removed before being pushed out to any users. But since 79% of the Internet’s websites use PHP, it’s scary.
Developers have moved PHP to GitHub, which has better authentication. Hopefully it will be enough — PHP is a juicy target.
President Biden announced key cybersecurity leadership nominations Monday, proposing Jen Easterly as the next head of the Cybersecurity and Infrastructure Security Agency and John “Chris” Inglis as the first ever national cyber director (NCD).
I know them both, and think they’re both good choices.
[2021.04.14] In January, we learned about a Chinese espionage campaign that exploited four zero-days in Microsoft Exchange. One of the characteristics of the campaign, in the later days when the Chinese probably realized that the vulnerabilities would soon be fixed, was to install a web shell in compromised networks that would give them subsequent remote access. Even if the vulnerabilities were patched, the shell would remain until the network operators removed it.
Now, months later, many of those shells are still in place. And they’re being used by criminal hackers as well.
This is nothing short of extraordinary, and I can think of no real-world parallel. It’s kind of like if a criminal organization infiltrated a door-lock company and surreptitiously added a master passkey feature, and then customers bought and installed those locks. And then if the FBI got a court order to fix all the locks to remove the master passkey capability. And it’s kind of not like that. In any case, it’s not what we normally think of when we think of a warrant. The links above have details, but I would like a legal scholar to weigh in on the implications of this.
[2021.04.14] This is a current list of where and when I am scheduled to speak:
- I’m keynoting the (all-virtual) RSA Conference 2021, May 17-20, 2021.
- I’m keynoting the 5th International Symposium on Cyber Security Cryptology and Machine Learning (via Zoom), July 8-9, 2021.
- I’ll be speaking at an Informa event on September 14, 2021. Details to come.
The list is maintained on this page.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books — including his latest, We Have Root — as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2021 by Bruce Schneier.