Crypto-Gram

July 15, 2019

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. Data, Surveillance, and the AI Arms Race
  2. Maciej Cegłowski on Privacy in the Information Age
  3. Risks of Password Managers
  4. Hacking Hardware Security Modules
  5. How Apple’s “Find My” Feature Works
  6. Fake News and Pandemics
  7. Backdoor Built into Android Firmware
  8. Election Security
  9. iPhone Apps Surreptitiously Communicated with Unknown Servers
  10. Florida City Pays Ransomware
  11. Person in Latex Mask Impersonated French Minister
  12. MongoDB Offers Field Level Encryption
  13. Spanish Soccer League App Spies on Fans
  14. Cellebrite Claims It Can Unlock Any iPhone
  15. I’m Leaving IBM
  16. Yubico Security Keys with a Crypto Flaw
  17. Google Releases Basic Homomorphic Encryption Tool
  18. Digital License Plates
  19. US Journalist Detained When Returning to US
  20. Research on Human Honesty
  21. Applied Cryptography is Banned in Oregon Prisons
  22. Ransomware Recovery Firms Who Secretly Pay Hackers
  23. Cardiac Biometric
  24. Cell Networks Hacked by (Probable) Nation-State Attackers
  25. Details of the Cloud Hopper Attacks
  26. Resetting Your GE Smart Light Bulb
  27. Presidential Candidate Andrew Yang Has Quantum Encryption Policy
  28. Clickable Endnotes to Click Here to Kill Everybody
  29. Upcoming Speaking Engagements

Data, Surveillance, and the AI Arms Race

[2019.06.17] According to foreign policy experts and the defense establishment, the United States is caught in an artificial intelligence arms race with China—one with serious implications for national security. The conventional version of this story suggests that the United States is at a disadvantage because of self-imposed restraints on the collection of data and the privacy of its citizens, while China, an unrestrained surveillance state, is at an advantage. In this vision, the data that China collects will be fed into its systems, leading to more powerful AI with capabilities we can only imagine today. Since Western countries can’t or won’t reap such a comprehensive harvest of data from their citizens, China will win the AI arms race and dominate the next century.

This idea makes for a compelling narrative, especially for those trying to justify surveillance—whether government- or corporate-run. But it ignores some fundamental realities about how AI works and how AI research is conducted.

Thanks to advances in machine learning, AI has flipped from theoretical to practical in recent years, and successes dominate public understanding of how it works. Machine learning systems can now diagnose pneumonia from X-rays, play the games of go and poker, and read human lips, all better than humans. They’re increasingly watching surveillance video. They are at the core of self-driving car technology and are playing roles in both intelligence-gathering and military operations. These systems monitor our networks to detect intrusions and look for spam and malware in our email.

And it’s true that there are differences in the way each country collects data. The United States pioneered “surveillance capitalism,” to use the Harvard University professor Shoshana Zuboff’s term, where data about the population is collected by hundreds of large and small companies for corporate advantage—and mutually shared or sold for profit. The state picks up on that data, in cases such as the Centers for Disease Control and Prevention’s use of Google search data to map epidemics and evidence shared by alleged criminals on Facebook, but it isn’t the primary user.

China, on the other hand, is far more centralized. Internet companies collect the same sort of data, but it is shared with the government, combined with government-collected data, and used for social control. Every Chinese citizen has a national ID number that is demanded by most services and allows data to easily be tied together. In the western region of Xinjiang, ubiquitous surveillance is used to oppress the Uighur ethnic minority—although at this point there is still a lot of human labor making it all work. Everyone expects that this is a test bed for the entire country.

Data is increasingly becoming a part of control for the Chinese government. While many of these plans are aspirational at the moment—there isn’t, as some have claimed, a single “social credit score,” but instead future plans to link up a wide variety of systems—data collection is universally pushed as essential to the future of Chinese AI. One executive at the search firm Baidu predicted that the country’s connected population will provide the raw data necessary for China to become the world’s preeminent tech power. China’s official goal is to become the world AI leader by 2030, aided in part by all of this massive data collection and correlation.

This all sounds impressive, but turning massive databases into AI capabilities doesn’t match technological reality. Current machine learning techniques aren’t all that sophisticated. All modern AI systems follow the same basic methods. Using lots of computing power, different machine learning models are tried, altered, and tried again. These systems use a large amount of data (the training set) and an evaluation function to distinguish between those models and variations that work well and those that work less well. After trying a lot of models and variations, the system picks the one that works best. This iterative improvement continues even after the system has been fielded and is in use.

So, for example, a deep learning system trying to do facial recognition will have multiple layers (hence the notion of “deep”) trying to do different parts of the facial recognition task. One layer will try to find features in the raw data of a picture that will help find a face, such as changes in color that will indicate an edge. The next layer might try to combine these lower layers into features like shapes, looking for round shapes inside of ovals that indicate eyes on a face. The different layers will try different features and will be compared by the evaluation function until the one that is able to give the best results is found, in a process that is only slightly more refined than trial and error.
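
To make the loop concrete, here is a minimal sketch in Python of the process just described: generate variations of a model, train each on the training set, score each with an evaluation function on held-out data, and keep whichever scores best. It is a toy nearest-class-mean classifier built for illustration only, not how any production system works.

```
# A toy version of the try/evaluate/keep-the-best loop (illustration only):
# each "variation" is just a random choice of which features the model looks
# at; the evaluation function is accuracy on held-out data.
import random

def make_variant(num_features):
    """One variation to try: a random subset of the available features."""
    return random.sample(range(num_features), k=random.randint(1, num_features))

def evaluate(variant, train, heldout):
    """Fit a trivial nearest-class-mean model on the training set and score
    it on data it has never seen, so we don't reward over-fitting."""
    means = {}
    for label in {y for _, y in train}:
        rows = [x for x, y in train if y == label]
        means[label] = [sum(r[i] for r in rows) / len(rows) for i in variant]
    def predict(x):
        return min(means, key=lambda lbl: sum((x[i] - m) ** 2
                                              for i, m in zip(variant, means[lbl])))
    return sum(predict(x) == y for x, y in heldout) / len(heldout)

# Toy two-class data set: feature 0 is informative, feature 1 is noise.
data = [([random.gauss(y * 2, 1), random.gauss(0, 1)], y)
        for y in (0, 1) for _ in range(100)]
random.shuffle(data)
train, heldout = data[:150], data[150:]

# Try many variations and keep the one the evaluation function likes best.
best = max((make_variant(2) for _ in range(20)),
           key=lambda v: evaluate(v, train, heldout))
print("best-scoring feature subset:", best)
```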

Large data sets are essential to making this work, but that doesn’t mean that more data is automatically better or that the system with the most data is automatically the best system. Train a facial recognition algorithm on a set that contains only faces of white men, and the algorithm will have trouble with any other kind of face. Use an evaluation function that is based on historical decisions, and any past bias is learned by the algorithm. For example, mortgage loan algorithms trained on historic decisions of human loan officers have been found to implement redlining. Similarly, hiring algorithms trained on historical data manifest the same sexism as the human hiring decisions they learned from. Scientists are constantly learning about how to train machine learning systems, and while throwing a large amount of data and computing power at the problem can work, more subtle techniques are often more successful. All data isn’t created equal, and for effective machine learning, data has to be both relevant and diverse in the right ways.

Future research advances in machine learning are focused on two areas. The first is in enhancing how these systems distinguish between variations of an algorithm. As different versions of an algorithm are run over the training data, there needs to be some way of deciding which version is “better.” These evaluation functions need to balance the recognition of an improvement with not over-fitting to the particular training data. Getting functions that can automatically and accurately distinguish between two algorithms based on minor differences in the outputs is an art form that no amount of data can improve.

The second is in the machine learning algorithms themselves. While much of machine learning depends on trying different variations of an algorithm on large amounts of data to see which is most successful, the initial formulation of the algorithm is still vitally important. The way the algorithms interact, the types of variations attempted, and the mechanisms used to test and redirect the algorithms are all areas of active research. (An overview of some of this work can be found here; even trying to limit the research to 20 papers oversimplifies the work being done in the field.) None of these problems can be solved by throwing more data at the problem.

The British AI company DeepMind’s success in teaching a computer to play the Chinese board game go is illustrative. Its AlphaGo computer program became a grandmaster in two steps. First, it was fed an enormous number of human-played games. Then, the program played itself an enormous number of times, improving its own play along the way. In 2016, AlphaGo beat the grandmaster Lee Sedol four games to one.

While the training data in this case, the human-played games, was valuable, even more important was the machine learning algorithm used and the function that evaluated the relative merits of different game positions. Just one year later, DeepMind was back with a follow-on system: AlphaZero. This go-playing computer dispensed entirely with the human-played games and just learned by playing against itself over and over again. It plays like an alien. (It also became a grandmaster in chess and shogi.)

These are abstract games, so it makes sense that a more abstract training process works well. But even something as visceral as facial recognition needs more than just a huge database of identified faces in order to work successfully. It needs the ability to separate a face from the background in a two-dimensional photo or video and to recognize the same face in spite of changes in angle, lighting, or shadows. Just adding more data may help, but not nearly as much as added research into what to do with the data once we have it.

Meanwhile, foreign-policy and defense experts are talking about AI as if it were the next nuclear arms race, with the country that figures it out best or first becoming the dominant superpower for the next century. But that didn’t happen with nuclear weapons, despite research only being conducted by governments and in secret. It certainly won’t happen with AI, no matter how much data different nations or companies scoop up.

It is true that China is investing a lot of money into artificial intelligence research: The Chinese government believes this will allow it to leapfrog other countries (and companies in those countries) and become a major force in this new and transformative area of computing—and it may be right. On the other hand, much of this seems to be a wasteful boondoggle. Slapping “AI” on pretty much anything is how to get funding. The Chinese Ministry of Education, for instance, promises to produce “50 world-class AI textbooks,” with no explanation of what that means.

In the democratic world, the government is neither the leading researcher nor the leading consumer of AI technologies. AI research is much more decentralized and academic, and it is conducted primarily in the public eye. Research teams keep their training data and models proprietary but freely publish their machine learning algorithms. If you wanted to work on machine learning right now, you could download Microsoft’s Cognitive Toolkit, Google’s TensorFlow, or Facebook’s PyTorch. These aren’t toy systems; these are the state-of-the-art machine learning platforms.

AI is not analogous to the big science projects of the previous century that brought us the atom bomb and the moon landing. AI is a science that can be conducted by many different groups with a variety of different resources, making it closer to computer design than the space race or nuclear competition. It doesn’t take a massive government-funded lab for AI research, nor the secrecy of the Manhattan Project. The research conducted in the open science literature will trump research done in secret because of the benefits of collaboration and the free exchange of ideas.

While the United States should certainly increase funding for AI research, it should continue to treat it as an open scientific endeavor. Surveillance is not justified by the needs of machine learning, and real progress in AI doesn’t need it.

This essay was written with Jim Waldo, and previously appeared in Foreign Policy.


Maciej Cegłowski on Privacy in the Information Age

[2019.06.19] Maciej Cegłowski has a really good essay explaining how to think about privacy today:

For the purposes of this essay, I’ll call it “ambient privacy”—the understanding that there is value in having our everyday interactions with one another remain outside the reach of monitoring, and that the small details of our daily lives should pass by unremembered. What we do at home, work, church, school, or in our leisure time does not belong in a permanent record. Not every conversation needs to be a deposition.

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale.

Ambient privacy is not a property of people, or of their data, but of the world around us. Just like you can’t drop out of the oil economy by refusing to drive a car, you can’t opt out of the surveillance economy by forswearing technology (and for many people, that choice is not an option). While there may be worthy reasons to take your life off the grid, the infrastructure will go up around you whether you use it or not.

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent.

Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

He’s not the first person to talk about privacy as a societal property, or to use pollution metaphors. But his framing is really cogent. And “ambient privacy” is new—and a good phrasing.


Risks of Password Managers

[2019.06.19] Stuart Schechter writes about the security risks of using a password manager. It’s a good piece, and nicely discusses the trade-offs around password managers: which one to choose, which passwords to store in it, and so on.

My own Password Safe is mentioned. My particular choices about security and risk are to store passwords only on my computer—not on my phone—and not to put anything in the cloud. In my way of thinking, that reduces the risks of a password manager considerably. Yes, there are losses in convenience.


Hacking Hardware Security Modules

[2019.06.20] Security researchers Gabriel Campana and Jean-Baptiste Bédrune are giving a hardware security module (HSM) talk at BlackHat in August:

This highly technical presentation targets an HSM manufactured by a vendor whose solutions are usually found in major banks and large cloud service providers. It will demonstrate several attack paths, some of them allowing unauthenticated attackers to take full control of the HSM. The presented attacks allow retrieving all HSM secrets remotely, including cryptographic keys and administrator credentials. Finally, we exploit a cryptographic bug in the firmware signature verification to upload a modified firmware to the HSM. This firmware includes a persistent backdoor that survives a firmware update.

They have an academic paper in French, and a presentation of the work. Here’s a summary in English.

There were plenty of technical challenges to solve along the way, in what was clearly a thorough and professional piece of vulnerability research:

  1. They started by using legitimate SDK access to their test HSM to upload a firmware module that would give them a shell inside the HSM. Note that this SDK access was used to discover the attacks, but is not necessary to exploit them.
  2. They then used the shell to run a fuzzer on the internal implementation of PKCS#11 commands to find reliable, exploitable buffer overflows (a rough sketch of this kind of fuzzing follows the list).
  3. They checked they could exploit these buffer overflows from outside the HSM, i.e. by just calling the PKCS#11 driver from the host machine.
  4. They then wrote a payload that would override access control and, via another issue in the HSM, allow them to upload arbitrary (unsigned) firmware. It’s important to note that this backdoor is persistent: a subsequent update will not fix it.
  5. They then wrote a module that would dump all the HSM secrets, and uploaded it to the HSM.
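
For flavor, here is a rough Python sketch of what step 2 might look like in spirit: take a known-good command blob, randomly corrupt copies of it, and record which inputs crash the handler. The command format and the send_pkcs11_command() function below are hypothetical stand-ins, not the researchers' actual tooling or the HSM's real command layout.

```
# Hypothetical mutational-fuzzing sketch; not the researchers' code.
import os
import random

def mutate(blob: bytes) -> bytes:
    """Randomly corrupt a copy of a known-good command."""
    data = bytearray(blob)
    for _ in range(random.randint(1, 8)):
        roll = random.random()
        if roll < 0.5 and data:                        # flip a random byte
            data[random.randrange(len(data))] ^= random.randrange(1, 256)
        elif roll < 0.8:                               # append junk bytes
            data += os.urandom(random.randint(1, 64))
        elif len(data) >= 4:                           # overstate the length field
            data[0:4] = (len(data) * 16).to_bytes(4, "big")
    return bytes(data)

def send_pkcs11_command(payload: bytes) -> bool:
    """Stand-in for delivering a command to the HSM's internal PKCS#11
    handler. Here it merely simulates a parser that trusts a 4-byte length
    field, so the loop below has something to find; the real target was the
    firmware, reached via the shell the researchers had inside the device."""
    if len(payload) < 4:
        return True
    claimed_length = int.from_bytes(payload[:4], "big")
    return claimed_length <= len(payload)              # "crash" when the length lies

seed = (16).to_bytes(4, "big") + b"\xc0\x01" + b"A" * 10   # made-up valid command
crashes = []
for _ in range(10_000):
    candidate = mutate(seed)
    if not send_pkcs11_command(candidate):
        crashes.append(candidate)                      # save for later triage
print(f"{len(crashes)} crashing inputs saved for analysis")
```
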

How Apple’s “Find My” Feature Works

[2019.06.20] Matthew Green intelligently speculates about how Apple’s new “Find My” feature works.

If you haven’t already been inspired by the description above, let me phrase the question you ought to be asking: how is this system going to avoid being a massive privacy nightmare?

Let me count the concerns:

  • If your device is constantly emitting a BLE signal that uniquely identifies it, the whole world is going to have (yet another) way to track you. Marketers already use WiFi and Bluetooth MAC addresses to do this: Find My could create yet another tracking channel.
  • It also exposes the people whose phones are doing the tracking. These people are now going to be sending their current location to Apple (which they may or may not already be doing). Now they’ll also be potentially sharing this information with strangers who “lose” their devices. That could go badly.
  • Scammers might also run active attacks in which they fake the location of your device. While this seems unlikely, people will always surprise you.

The good news is that Apple claims that their system actually does provide strong privacy, and that it accomplishes this using clever cryptography. But as is typical, they’ve declined to give out the details of how they’re going to do it. Andy Greenberg talked me through an incomplete technical description that Apple provided to Wired, so that provides many hints. Unfortunately, what Apple provided still leaves huge gaps. It’s into those gaps that I’m going to fill in my best guess for what Apple is actually doing.
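
As an illustration of the kind of trick that could address the tracking concern in the first bullet above, here is a minimal Python sketch of rotating identifiers: pseudonyms derived from a per-device secret that change every few minutes, so a stranger can't link one broadcast to the next, while the owner's other devices (which know the secret) can recompute them all. This is a generic sketch of rotating pseudonymous identifiers, not Apple's actual protocol, which also has to let finders encrypt their locations so that only the owner can read them; the rotation period and the provisioning of the secret are assumptions made for the example.

```
# Sketch of rotating pseudonymous identifiers (assumptions: a 15-minute
# rotation period and a per-device secret provisioned at setup). Not Apple's
# real protocol, which layers public-key encryption on top of ideas like this.
import hmac
import hashlib
import time

ROTATION_PERIOD = 15 * 60  # assumed rotation interval, in seconds

def beacon_id(device_secret: bytes, t: float) -> bytes:
    """Pseudonymous identifier for the time slot containing t."""
    slot = int(t // ROTATION_PERIOD).to_bytes(8, "big")
    return hmac.new(device_secret, slot, hashlib.sha256).digest()[:16]

secret = b"per-device secret shared by the owner's devices"  # hypothetical

# The lost device broadcasts this over BLE. It changes every slot, so a
# passive observer can't use it to follow the device around all day.
current = beacon_id(secret, time.time())

# The owner's other devices, which hold the same secret, can recompute the
# identifiers for recent slots and ask the service for matching sightings.
recent_ids = [beacon_id(secret, time.time() - i * ROTATION_PERIOD) for i in range(96)]
print(current.hex(), current in recent_ids)
```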


Fake News and Pandemics

[2019.06.21] When the next pandemic strikes, we’ll be fighting it on two fronts. The first is the one you immediately think about: understanding the disease, researching a cure and inoculating the population. The second is new, and one you might not have thought much about: fighting the deluge of rumors, misinformation and flat-out lies that will appear on the internet.

The second battle will be like the Russian disinformation campaigns during the 2016 presidential election, only with the addition of a deadly health crisis and possibly without a malicious government actor. But while the two problems—misinformation affecting democracy and misinformation affecting public health—will have similar solutions, the latter is much less political. If we work to solve the pandemic disinformation problem, any solutions are likely to also be applicable to the democracy one.

Pandemics are part of our future. They might be like the 1968 Hong Kong flu, which killed a million people, or the 1918 Spanish flu, which killed over 40 million. Yes, modern medicine makes pandemics less likely and less deadly. But global travel and trade, increased population density, decreased wildlife habitats, and increased animal farming to satisfy a growing and more affluent population have made them more likely. Experts agree that it’s not a matter of if—it’s only a matter of when.

When the next pandemic strikes, accurate information will be just as important as effective treatments. We saw this in 2014, when the Nigerian government managed to contain a subcontinent-wide Ebola epidemic to just 20 infections and eight fatalities. Part of that success was because of the ways officials communicated health information to all Nigerians, using government-sponsored videos, social media campaigns and international experts. Without that, the death toll in Lagos, a city of 21 million people, would have probably been greater than the 11,000 the rest of the continent experienced.

There’s every reason to expect misinformation to be rampant during a pandemic. In the early hours and days, information will be scant and rumors will abound. Most of us are not health professionals or scientists. We won’t be able to tell fact from fiction. Even worse, we’ll be scared. Our brains work differently when we are scared, and they latch on to whatever makes us feel safer—even if it’s not true.

Rumors and misinformation could easily overwhelm legitimate news channels, as people share tweets, images and videos. Much of it will be well-intentioned but wrong—like the misinformation spread by the anti-vaccination community today—but some of it may be malicious. In the 1980s, the KGB ran a sophisticated disinformation campaign—Operation Infektion—to spread the rumor that HIV/AIDS was a result of an American biological weapon gone awry. It’s reasonable to assume some group or country would deliberately spread intentional lies in an attempt to increase death and chaos.

It’s not just misinformation about which treatments work (and are safe), and which treatments don’t work (and are unsafe). Misinformation can affect society’s ability to deal with a pandemic at many different levels. Right now, Ebola relief efforts in the Democratic Republic of Congo are being stymied by mistrust of health workers and government officials.

It doesn’t take much to imagine how this can lead to disaster. Jay Walker, curator of the TEDMED conferences, laid out some of the possibilities in a 2016 essay: people overwhelming and even looting pharmacies trying to get some drug that is irrelevant or nonexistent, people needlessly fleeing cities and leaving them paralyzed, health workers not showing up for work, truck drivers and other essential people being afraid to enter infected areas, official sites like CDC.gov being hacked and discredited. This kind of thing can magnify the health effects of a pandemic many times over, and in extreme cases could lead to a total societal collapse.

This is going to be something that government health organizations, medical professionals, social media companies and the traditional media are going to have to work out together. There isn’t any single solution; it will require many different interventions that will all need to work together. The interventions will look a lot like what we’re already talking about with regard to government-run and other information influence campaigns that target our democratic processes: methods of visibly identifying false stories, the identification and deletion of fake posts and accounts, ways to promote official and accurate news, and so on. At the scale these are needed, they will have to be done automatically and in real time.

Since the 2016 presidential election, we have been talking about propaganda campaigns, and about how social media amplifies fake news and allows damaging messages to spread easily. It’s a hard discussion to have in today’s hyperpolarized political climate. After any election, the winning side has every incentive to downplay the role of fake news.

But pandemics are different; there’s no political constituency in favor of people dying because of misinformation. Google doesn’t want the results of peoples’ well-intentioned searches to lead to fatalities. Facebook and Twitter don’t want people on their platforms sharing misinformation that will result in either individual or mass deaths. Focusing on pandemics gives us an apolitical way to collectively approach the general problem of misinformation and fake news. And any solutions for pandemics are likely to also be applicable to the more general—and more political—problems.

Pandemics are inevitable. Bioterror is already possible, and will only get easier as the requisite technologies become cheaper and more common. We’re experiencing the largest measles outbreak in 25 years thanks to the anti-vaccination movement, which has hijacked social media to amplify its messages; we seem unable to beat back the disinformation and pseudoscience surrounding the vaccine. Those same forces will dramatically increase death and social upheaval in the event of a pandemic.

Let the Russian propaganda attacks on the 2016 election serve as a wake-up call for this and other threats. We need to solve the problem of misinformation during pandemics together—governments and industries in collaboration with medical officials, all across the world—before there’s a crisis. And the solutions will also help us shore up our democracy in the process.

This essay previously appeared in the New York Times.


Backdoor Built into Android Firmware

[2019.06.21] In 2017, some Android phones came with a backdoor pre-installed:

Criminals in 2017 managed to get an advanced backdoor preinstalled on Android devices before they left the factories of manufacturers, Google researchers confirmed on Thursday.

Triada first came to light in 2016 in articles published by Kaspersky here and here, the first of which said the malware was “one of the most advanced mobile Trojans” the security firm’s analysts had ever encountered. Once installed, Triada’s chief purpose was to install apps that could be used to send spam and display ads. It employed an impressive kit of tools, including rooting exploits that bypassed security protections built into Android and the means to modify the Android OS’ all-powerful Zygote process. That meant the malware could directly tamper with every installed app. Triada also connected to no fewer than 17 command and control servers.

In July 2017, security firm Dr. Web reported that its researchers had found Triada built into the firmware of several Android devices, including the Leagoo M5 Plus, Leagoo M8, Nomu S10, and Nomu S20. The attackers used the backdoor to surreptitiously download and install modules. Because the backdoor was embedded into one of the OS libraries and located in the system section, it couldn’t be deleted using standard methods, the report said.

On Thursday, Google confirmed the Dr. Web report, although it stopped short of naming the manufacturers. Thursday’s report also said the supply chain attack was pulled off by one or more partners the manufacturers used in preparing the final firmware image used in the affected devices.

This is a supply chain attack. It seems to be the work of criminals, but it could just as easily have been a nation-state.


Election Security

[2019.06.24] Stanford University’s Cyber Policy Center has published a long report on the security of US elections. Summary: it’s not good.


iPhone Apps Surreptitiously Communicated with Unknown Servers

[2019.06.25] Long news article (alternate source) on iPhone privacy, specifically the enormous amount of data your apps are collecting without your knowledge. A lot of this happens in the middle of the night, when you’re probably not otherwise using your phone:

IPhone apps I discovered tracking me by passing information to third parties just while I was asleep include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic.


Florida City Pays Ransomware

[2019.06.25] Learning from the huge expenses Atlanta and Baltimore incurred by refusing to pay ransomware, the Florida city of Riviera Beach decided to pay up. The ransom amount of almost $600,000 is a lot, but much cheaper than the alternative.


Person in Latex Mask Impersonated French Minister

[2019.06.26] Forget deep fakes. Someone wearing a latex mask fooled people on video calls for a period of two years, successfully scamming 80 million euros from rich French citizens.


MongoDB Offers Field Level Encryption

[2019.06.26] MongoDB now has the ability to encrypt data by field:

MongoDB calls the new feature Field Level Encryption. It works kind of like end-to-end encrypted messaging, which scrambles data as it moves across the internet, revealing it only to the sender and the recipient. In such a “client-side” encryption scheme, databases utilizing Field Level Encryption will not only require a system login, but will additionally require specific keys to process and decrypt specific chunks of data locally on a user’s device as needed. That means MongoDB itself and cloud providers won’t be able to access customer data, and a database’s administrators or remote managers don’t need to have access to everything either.

For regular users, not much will be visibly different. If their credentials are stolen and they aren’t using multifactor authentication, an attacker will still be able to access everything the victim could. But the new feature is meant to eliminate single points of failure. With Field Level Encryption in place, a hacker who steals an administrative username and password, or finds a software vulnerability that gives them system access, still won’t be able to use these holes to access readable data.
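
The core idea is simply that encryption and decryption of the sensitive fields happen on the client, with keys the server never sees. Below is a minimal generic sketch in Python of client-side field encryption, using the third-party cryptography package; it illustrates the technique, not MongoDB's actual Field Level Encryption API or key-management design.

```
# Generic client-side field encryption sketch (not MongoDB's actual API).
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

field_key = Fernet.generate_key()   # in practice held in a key-management service
fernet = Fernet(field_key)

def encrypt_fields(record: dict, sensitive: set) -> dict:
    """Return a copy of the record with the sensitive fields encrypted
    before the record is ever handed to the database."""
    return {
        k: fernet.encrypt(str(v).encode()).decode() if k in sensitive else v
        for k, v in record.items()
    }

record = {"name": "Alice", "ssn": "078-05-1120", "plan": "basic"}
stored = encrypt_fields(record, {"ssn"})
print(stored)  # the database (and its administrators) only ever see ciphertext here

# Only a client holding field_key can recover the protected value:
print(fernet.decrypt(stored["ssn"].encode()).decode())
```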


Spanish Soccer League App Spies on Fans

[2019.06.27] The Spanish Soccer League’s smartphone app spies on fans in order to find bars that are illegally streaming its games. The app listens with the microphone for the broadcasts, and then uses geolocation to figure out where the phone is.

The Spanish data protection agency has ordered the league to stop doing this. Not because it’s creepy spying, but because the terms of service—which no one reads anyway—weren’t clear.


Cellebrite Claims It Can Unlock Any iPhone

[2019.06.28] The digital forensics company Cellebrite now claims it can unlock any iPhone.

I dithered before blogging this, not wanting to give the company more publicity. But I decided that everyone who wants to know already knows, and that Apple already knows. It’s all of us that need to know.


I’m Leaving IBM

[2019.06.28] Today is my last day at IBM.

If you’ve been following along, IBM bought my startup Resilient Systems in Spring 2016. Since then, I have been with IBM, holding the nicely ambiguous title of “Special Advisor.” As of the end of the month, I will be back on my own.

I will continue to write and speak, and do the occasional consulting job. I will continue to teach at the Harvard Kennedy School. I will continue to serve on boards for organizations I believe in: EFF, Access Now, Tor, EPIC, Verified Voting. And I will increasingly be an advocate for public-interest technology.


Yubico Security Keys with a Crypto Flaw

[2019.07.01] Wow, is this an embarrassing bug:

Yubico is recalling a line of security keys used by the U.S. government due to a firmware flaw. The company issued a security advisory today that warned of an issue in YubiKey FIPS Series devices with firmware versions 4.4.2 and 4.4.4 that reduced the randomness of the cryptographic keys it generates. The security keys are used by thousands of federal employees on a daily basis, letting them securely log-on to their devices by issuing one-time passwords.

The problem in question occurs after the security key powers up. According to Yubico, a bug keeps “some predictable content” inside the device’s data buffer that could impact the randomness of the keys generated. Security keys with ECDSA signatures are in particular danger. A total of 80 of the 256 bits generated by the key remain static, meaning an attacker who gains access to several signatures could recreate the private key.
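
To see why weak nonce randomness is so dangerous, consider the extreme case: if the same ECDSA nonce is ever used for two signatures, anyone holding those signatures can solve for the private key with basic algebra. The actual YubiKey flaw is a partial bias (80 fixed bits), which an attacker would exploit with lattice techniques across several signatures rather than this simple calculation, but the toy Python below shows the underlying fragility.

```
# Illustrative sketch (not the actual YubiKey attack): a repeated ECDSA
# nonce k lets anyone recover the private key from two signatures. Pure
# Python, toy values; r stands in for the x-coordinate of k*G, since only
# the signing algebra matters here.
import hashlib

# secp256k1 group order (only the order is needed for the algebra below)
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(priv: int, k: int, msg: bytes, r: int) -> int:
    # s = k^-1 * (H(m) + r*priv) mod n
    return (pow(k, -1, n) * (h(msg) + r * priv)) % n

# Two signatures that accidentally share the same nonce k (and hence same r)
priv = 0x1C0FFEE
k = 0xDEADBEEF
r = 0x1234567
s1 = sign(priv, k, b"message one", r)
s2 = sign(priv, k, b"message two", r)

# An attacker recovers k, then the private key, from public values alone:
#   s1 - s2 = k^-1 * (H(m1) - H(m2))  =>  k = (H(m1) - H(m2)) / (s1 - s2)
k_rec = ((h(b"message one") - h(b"message two")) * pow(s1 - s2, -1, n)) % n
priv_rec = ((s1 * k_rec - h(b"message one")) * pow(r, -1, n)) % n
assert priv_rec == priv
print("recovered private key:", hex(priv_rec))
```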

Boing Boing post.

EDITED TO ADD (6/12): From Microsoft TechNet Security Guidance blog (in 2014): “Why We’re Not Recommending ‘FIPS Mode’ Anymore.”


Google Releases Basic Homomorphic Encryption Tool

[2019.07.02] Google has released an open-source cryptographic tool: Private Join and Compute. From a Wired article:

Private Join and Compute uses a 1970s methodology known as “commutative encryption” to allow data in the data sets to be encrypted with multiple keys, without it mattering which order the keys are used in. This is helpful for multiparty computation, where you need to apply and later peel away multiple layers of encryption without affecting the computations performed on the encrypted data. Crucially, Private Join and Compute also uses methods first developed in the ’90s that enable a system to combine two encrypted data sets, determine what they have in common, and then perform mathematical computations directly on this encrypted, unreadable data through a technique called homomorphic encryption.

True homomorphic encryption isn’t possible, and my guess is that it will never be feasible for most applications. But limited application tricks like this have been around for decades, and sometimes they’re useful.
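
For the curious, the "commutative encryption" trick is easy to sketch: raising a value to a secret exponent in a suitable group commutes, so two parties can each encrypt the other's already-encrypted values and compare the results to find the items they have in common. The toy Python below uses a small prime and hashed email addresses purely for illustration; it is my sketch of the general private-set-intersection idea, not Google's Private Join and Compute code, which works in an elliptic-curve group and adds the homomorphic aggregation step.

```
# Toy commutative-encryption sketch (illustration only, not Google's code).
import hashlib
import secrets

p = 2**127 - 1          # toy prime modulus; real systems use far larger groups
q = p - 1               # secret exponents are chosen below this

def h(item: str) -> int:
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % p

def enc(x: int, key: int) -> int:
    return pow(x, key, p)   # exponentiation commutes: (x^a)^b == (x^b)^a

alice_key = secrets.randbelow(q - 2) + 1
bob_key = secrets.randbelow(q - 2) + 1

alice_set = {"alice@example.com", "carol@example.com"}
bob_set = {"alice@example.com", "dave@example.com"}

# Each side encrypts its own hashed items and exchanges the results...
alice_enc = {enc(h(x), alice_key) for x in alice_set}
bob_enc = {enc(h(x), bob_key) for x in bob_set}

# ...then encrypts the other side's values with its own key. Because the
# order of encryption doesn't matter, shared items produce matching values.
double_from_alice = {enc(y, bob_key) for y in alice_enc}
double_from_bob = {enc(y, alice_key) for y in bob_enc}
print("items in common:", len(double_from_alice & double_from_bob))  # -> 1
```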

Boing Boing article.


Digital License Plates

[2019.07.03] They’re a thing:

Developers say digital plates utilize “advanced telematics”—to collect tolls, pay for parking and send out Amber Alerts when a child is abducted. They also help recover stolen vehicles by changing the display to read “Stolen,” thereby alerting everyone within eyeshot.

This makes no sense to me. The numbers are static. License plates being low-tech is a feature, not a bug.


US Journalist Detained When Returning to US

[2019.07.04] Pretty horrible story of a US journalist who had his computer and phone searched at the border when returning to the US from Mexico.

After I gave him the password to my iPhone, Moncivias spent three hours reviewing hundreds of photos and videos and emails and calls and texts, including encrypted messages on WhatsApp, Signal, and Telegram. It was the digital equivalent of tossing someone’s house: opening cabinets, pulling out drawers, and overturning furniture in hopes of finding something—anything—illegal. He read my communications with friends, family, and loved ones. He went through my correspondence with colleagues, editors, and sources. He asked about the identities of people who have worked with me in war zones. He also went through my personal photos, which I resented. Consider everything on your phone right now. Nothing on mine was spared.

Pomeroy, meanwhile, searched my laptop. He browsed my emails and my internet history. He looked through financial spreadsheets and property records and business correspondence. He was able to see all the same photos and videos as Moncivias and then some, including photos I thought I had deleted.

The EFF has extensive information and advice about device searches at the US border, including a travel guide:

If you are a U.S. citizen, border agents cannot stop you from entering the country, even if you refuse to unlock your device, provide your device password, or disclose your social media information. However, agents may escalate the encounter if you refuse. For example, agents may seize your devices, ask you intrusive questions, search your bags more intensively, or increase by many hours the length of detention. If you are a lawful permanent resident, agents may raise complicated questions about your continued status as a resident. If you are a foreign visitor, agents may deny you entry.

The most important piece of advice is to think about this all beforehand, and plan accordingly.


Research on Human Honesty

[2019.07.05] New research from Science: “Civic honesty around the globe”:

Abstract: Civic honesty is essential to social capital and economic development, but is often in conflict with material self-interest. We examine the trade-off between honesty and self-interest using field experiments in 355 cities spanning 40 countries around the globe. We turned in over 17,000 lost wallets with varying amounts of money at public and private institutions, and measured whether recipients contacted the owner to return the wallets. In virtually all countries citizens were more likely to return wallets that contained more money. Both non-experts and professional economists were unable to predict this result. Additional data suggest our main findings can be explained by a combination of altruistic concerns and an aversion to viewing oneself as a thief, which increase with the material benefits of dishonesty.

I am surprised, too.


Applied Cryptography is Banned in Oregon Prisons

[2019.07.05] My Applied Cryptography is on a list of books banned in Oregon prisons. It’s not me—and it’s not cryptography—it’s that the prisons ban books that teach people to code. The subtitle is “Algorithms, Protocols, and Source Code in C”—and that’s the reason.

My more recent Cryptography Engineering is a much better book for prisoners, anyway.


Ransomware Recovery Firms Who Secretly Pay Hackers

[2019.07.08] ProPublica is reporting on companies that pretend to recover data locked up by ransomware, but just secretly pay the hackers and then mark up the cost to the victims.


Cardiac Biometric

[2019.07.08] MIT Technology Review is reporting about an infrared laser device that can identify people by their unique cardiac signature at a distance:

A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).

[…]

Remaly’s team then developed algorithms capable of extracting a cardiac signature from the laser signals. He claims that Jetson can achieve over 95% accuracy under good conditions, and this might be further improved. In practice, it’s likely that Jetson would be used alongside facial recognition or other identification methods.

Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. “Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy,” he says.

I have my usual questions about false positives vs false negatives, how stable the biometric is over time, and whether it works better or worse against particular sub-populations. But interesting nonetheless.
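
On the false positive question: raw accuracy figures say very little without the base rate. A quick back-of-the-envelope calculation (the rates below are hypothetical, chosen only to make the point) shows how a seemingly accurate detector drowns in false alarms when the target is rare.

```
# Hypothetical numbers for illustration: scan 10,000 people for 1 target,
# with a 95% true-positive rate and a 5% false-positive rate.
population = 10_000
targets = 1
true_positive_rate = 0.95
false_positive_rate = 0.05

expected_true_hits = targets * true_positive_rate
expected_false_alarms = (population - targets) * false_positive_rate
print(f"expected true hits:    {expected_true_hits:.2f}")    # ~0.95
print(f"expected false alarms: {expected_false_alarms:.0f}")  # ~500
# Nearly every alarm is false, which is why the two error rates (and how
# they vary across sub-populations) matter more than a headline accuracy.
```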


Cell Networks Hacked by (Probable) Nation-State Attackers

[2019.07.09] A sophisticated attacker has successfully infiltrated cell providers to collect information on specific users:

The hackers have systematically broken into more than 10 cell networks around the world to date over the past seven years to obtain massive amounts of call records—including times and dates of calls, and their cell-based locations—on at least 20 individuals.

[…]

Cybereason researchers said they first detected the attacks about a year ago. Before and since then, the hackers broke into one cell provider after the other to gain continued and persistent access to the networks. Their goal, the researchers believe, is to obtain and download rolling records on the target from the cell provider’s database without having to deploy malware on each target’s device.

[…]

The researchers found the hackers got into one of the cell networks by exploiting a vulnerability on an internet-connected web server to gain a foothold onto the provider’s internal network. From there, the hackers continued to exploit each machine they found by stealing credentials to gain deeper access.

Who did it?

Cybereason did say it was with “very high probability” that the hackers were backed by a nation state but the researchers were reluctant to definitively pin the blame.

The tools and the techniques – such as the malware used by the hackers – appeared to be “textbook APT 10,” referring to a hacker group believed to be backed by China, but Div said it was either APT 10, “or someone that wants us to go public and say it’s [APT 10].”

Original report:

Based on the data available to us, Operation Soft Cell has been active since at least 2012, though some evidence suggests even earlier activity by the threat actor against telecommunications providers.

The attack was aiming to obtain CDR records of a large telecommunications provider.

The threat actor was attempting to steal all data stored in the active directory, compromising every single username and password in the organization, along with other personally identifiable information, billing data, call detail records, credentials, email servers, geo-location of users, and more.

The tools and TTPs used are commonly associated with Chinese threat actors.

During the persistent attack, the attackers worked in waves—abandoning one thread of attack when it was detected and stopped, only to return months later with new tools and techniques.

Boing Boing post.


Details of the Cloud Hopper Attacks

[2019.07.10] Reuters has a long article on the Chinese government APT attack called Cloud Hopper. It was much bigger than originally reported.

The hacking campaign, known as “Cloud Hopper,” was the subject of a U.S. indictment in December that accused two Chinese nationals of identity theft and fraud. Prosecutors described an elaborate operation that victimized multiple Western companies but stopped short of naming them. A Reuters report at the time identified two: Hewlett Packard Enterprise and IBM.

Yet the campaign ensnared at least six more major technology firms, touching five of the world’s 10 biggest tech service providers.

Also compromised by Cloud Hopper, Reuters has found: Fujitsu, Tata Consultancy Services, NTT Data, Dimension Data, Computer Sciences Corporation and DXC Technology. HPE spun off its services arm in a merger with Computer Sciences Corporation in 2017 to create DXC.

Waves of hacking victims emanate from those six plus HPE and IBM: their clients. Ericsson, which competes with Chinese firms in the strategically critical mobile telecoms business, is one. Others include travel reservation system Sabre, the American leader in managing plane bookings, and the largest shipbuilder for the U.S. Navy, Huntington Ingalls Industries, which builds America’s nuclear submarines at a Virginia shipyard.


Resetting Your GE Smart Light Bulb

[2019.07.11] If you need to reset the software in your GE smart light bulb—firmware version 2.8 or later—just follow these easy instructions:

Start with your bulb off for at least 5 seconds.

  1. Turn on for 8 seconds
  2. Turn off for 2 seconds
  3. Turn on for 8 seconds
  4. Turn off for 2 seconds
  5. Turn on for 8 seconds
  6. Turn off for 2 seconds
  7. Turn on for 8 seconds
  8. Turn off for 2 seconds
  9. Turn on for 8 seconds
  10. Turn off for 2 seconds
  11. Turn on

Bulb will flash on and off 3 times if it has been successfully reset.

Welcome to the future!


Presidential Candidate Andrew Yang Has Quantum Encryption Policy

[2019.07.12] At least one presidential candidate has a policy about quantum computing and encryption.

It has two basic planks. One: fund quantum-resistant encryption standards. (Note: NIST is already doing this.) Two: fund quantum computing. (Unlike many far more pressing computer security problems, the market seems to be doing this on its own quite nicely.)

Okay, so not the greatest policy—but at least one candidate has a policy. Do any of the other candidates have anything else in this area?

Yang has also talked about blockchain:

“I believe that blockchain needs to be a big part of our future,” Yang told a crowded room at the Consensus conference in New York, where he gave a keynote address Wednesday. “If I’m in the White House, oh boy are we going to have some fun in terms of the crypto currency community.”

Okay, so that’s not so great, either. But again, I don’t think anyone else talks about this.

Note: this is not an invitation to talk more general politics. Not even an invitation to explain how good or bad Andrew Yang’s chances are. Or anyone else’s. Please.


Clickable Endnotes to Click Here to Kill Everybody

[2019.07.12] In Click Here to Kill Everybody, I promised clickable endnotes. They’re finally available.


Upcoming Speaking Engagements

[2019.07.13] This is a current list of where and when I am scheduled to speak:

  • I’m speaking at Black Hat USA 2019 in Las Vegas on Wednesday, August 7 and Thursday, August 8, 2019.
  • I’m speaking on “Information Security in the Public Interest” at DefCon 27 in Las Vegas on Saturday, August 10, 2019.

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.

Copyright © 2019 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.