Crypto-Gram

March 15, 2020

by Bruce Schneier

Fellow and Lecturer, Harvard Kennedy School

schneier@schneier.com

https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. Voatz Internet Voting App Is Insecure
  2. Hacking McDonald’s for Free Food
  3. Internet of Things Candle
  4. Policy vs. Technology
  5. Inrupt, Tim Berners-Lee’s Solid, and Me
  6. Russia Is Trying to Tap Transatlantic Cables
  7. Firefox Enables DNS over HTTPS
  8. Newly Declassified Study Demonstrates Uselessness of NSA’s Phone Metadata Program
  9. Securing the Internet of Things through Class-Action Lawsuits
  10. Deep Learning to Find Malicious Email Attachments
  11. Facebook’s Download-Your-Data Tool Is Incomplete
  12. Wi-Fi Chip Vulnerability
  13. Let’s Encrypt Vulnerability
  14. Security of Health Information
  15. More on Crypto AG
  16. Cybersecurity Law Casebook
  17. CIA Dirty Laundry Aired
  18. LA Covers Up Bad Cybersecurity
  19. The Whisper Secret-Sharing App Exposed Locations
  20. The EARN-IT Act

Voatz Internet Voting App Is Insecure

[2020.02.17] This paper describes the flaws in the Voatz Internet voting app: “The Ballot is Busted Before the Blockchain: A Security Analysis of Voatz, the First Internet Voting Application Used in U.S. Federal Elections.”

Abstract: In the 2018 midterm elections, West Virginia became the first state in the U.S. to allow select voters to cast their ballot on a mobile phone via a proprietary app called “Voatz.” Although there is no public formal description of Voatz’s security model, the company claims that election security and integrity are maintained through the use of a permissioned blockchain, biometrics, a mixnet, and hardware-backed key storage modules on the user’s device. In this work, we present the first public security analysis of Voatz, based on a reverse engineering of their Android application and the minimal available documentation of the system. We performed a clean-room reimplementation of Voatz’s server and present an analysis of the election process as visible from the app itself.

We find that Voatz has vulnerabilities that allow different kinds of adversaries to alter, stop, or expose a user’s vote, including a side-channel attack in which a completely passive network adversary can potentially recover a user’s secret ballot. We additionally find that Voatz has a number of privacy issues stemming from their use of third-party services for crucial app functionality. Our findings serve as a concrete illustration of the common wisdom against Internet voting, and of the importance of transparency to the legitimacy of elections.

News articles.

The company’s response is a perfect illustration of why companies without computer-security expertise have no idea what they’re doing, and should not be trusted with any form of security.

EDITED TO ADD (3/11): The researchers respond to Voatz’s response.


Hacking McDonald’s for Free Food

[2020.02.18] This hack was possible because the McDonald’s app didn’t authenticate the server, and just did whatever the server told it to do:

McDonald’s receipts in Germany end with a link to a survey page. Once you take the survey, you receive a coupon code for a free small beverage, redeemable within a month. One day, David happened to be checking out how the website’s coding was structured when he noticed that the information triggering the server to issue a new voucher was always the same. That meant he could build a programme replicating the code, as if someone was taking the survey again and again.

[…]

At the McDonald’s in East Berlin, David began the demonstration by setting up an internet hotspot with his smartphone. Lenny connected with a second phone and a laptop, then turned the laptop into a proxy server connected to both phones. He opened the McDonald’s app and entered a voucher code generated by David’s programme. The next step was ordering the food for a total of [17 euros]. The bill on the app was transmitted to the laptop, which set all prices to zero through a programme created by Lenny, and sent the information back to the app. After tapping “Complete and pay 0.00 euros”, we simply received our pick-up number. It had worked.

The flaw was fixed late last year.
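
The root cause is in the first sentence above: the app trusted whatever the server, or anything impersonating the server, told it. Here’s a minimal sketch of the standard countermeasure, certificate pinning, in Python; the host name and fingerprint are hypothetical placeholders, not anything from the actual app:

    import hashlib
    import socket
    import ssl

    # Hypothetical values: replace with the real API host and the SHA-256
    # fingerprint of its server certificate.
    API_HOST = "api.example.com"
    PINNED_SHA256 = "0" * 64

    def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
        ctx = ssl.create_default_context()  # ordinary CA and hostname checks first
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der_cert = sock.getpeercert(binary_form=True)  # server's leaf cert, DER-encoded
        if hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
            sock.close()
            raise ssl.SSLError("pin mismatch -- possible interception proxy")
        return sock

A client that pins the server certificate this way refuses to complete the connection through an attacker’s proxy, even when the attacker controls the network, which is exactly the setup the demonstration above relied on.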


Internet of Things Candle

[2020.02.20] There’s a Kickstarter for an actual candle, with real fire, that you can control over the Internet.

What could possibly go wrong?


Policy vs. Technology

[2020.02.21] Sometime around 1993 or 1994, during the first Crypto Wars, I was part of a group of cryptography experts who went to Washington to advocate for strong encryption. Matt Blaze and Ron Rivest were with me; I don’t remember who else. We met with then-Massachusetts Representative Ed Markey. (He didn’t become a senator until 2013.) Back then, he and Vermont Senator Patrick Leahy were the most knowledgeable on this issue and our biggest supporters against government backdoors. They still are.

Markey was against forcing encrypted phone providers to implement the NSA’s Clipper Chip in their devices, but wanted us to reach a compromise with the FBI regardless. This completely startled us techies, who thought having the right answer was enough. It was at that moment that I learned an important difference between technologists and policy makers. Technologists want solutions; policy makers want consensus.

Since then, I have become more immersed in policy discussions. I have spent more time with legislators, advised advocacy organizations like EFF and EPIC, and worked with policy-minded think tanks in the United States and around the world. I teach cybersecurity policy and technology at the Harvard Kennedy School of Government. My most recent two books, Data and Goliath—about surveillance—and Click Here to Kill Everybody—about IoT security—are really about the policy implications of technology.

Over that time, I have observed many other differences between technologists and policy makers—differences that we in cybersecurity need to understand if we are to translate our technological solutions into viable policy outcomes.

Technologists don’t try to consider all of the use cases of a given technology. We tend to build something for the uses we envision, and hope that others can figure out new and innovative ways to extend what we created. We love it when there is a new use for a technology that we never considered and that changes the world. And while we might be good at security around the use cases we envision, we are regularly blindsided when it comes to new uses or edge cases. (Authentication risks surrounding someone’s intimate partner are a good example.)

Policy doesn’t work that way; it’s specifically focused on use. It focuses on people and what they do. Policy makers can’t create policy around a piece of technology without understanding how it is used—how all of it is used.

Policy is often driven by exceptional events, like the FBI’s desire to break the encryption on the San Bernardino shooter’s iPhone. (The PATRIOT Act is the most egregious example I can think of.) Technologists tend to look at more general use cases, like the overall value of strong encryption to societal security. Policy tends to focus on the past, making existing systems work or correcting wrongs that have happened. It’s hard to imagine policy makers creating laws around VR systems, because they don’t yet exist in any meaningful way. Technology is inherently future focused. Technologists try to imagine better systems, or future flaws in present systems, and work to improve things.

As technologists, we iterate. It’s how we write software. It’s how we field products. We know we can’t get it right the first time, so we have developed all sorts of agile systems to deal with that fact. Policy making is often the opposite. U.S. federal laws take months or years to negotiate and pass, and after that the issue doesn’t get addressed again for a decade or more. It is much more critical to get it right the first time, because the effects of getting it wrong are long lasting. (See, for example, parts of the GDPR.) Sometimes regulatory agencies can be more agile. The courts can also iterate policy, but it’s slower.

Along similar lines, the two groups work in very different time frames. Engineers, conditioned by Moore’s law, have long thought of 18 months as the maximum time to roll out a new product, and now think in terms of continuous deployment of new features. As I said previously, policy makers tend to think in terms of multiple years to get a law or regulation in place, and then more years as the case law builds up around it so everyone knows what it really means. It’s like tortoises and hummingbirds.

Technology is inherently global. It is often developed with local sensibilities according to local laws, but it necessarily has global reach. Policy is always jurisdictional. This difference is causing all sorts of problems for the global cloud services we use every day. The providers are unable to operate their global systems in compliance with more than 200 different—and sometimes conflicting—national requirements. Policy makers are often unimpressed with claims of inability; laws are laws, they say, and if Facebook can translate its website into French for the French, it can also implement their national laws.

Technology and policy both use concepts of trust, but differently. Technologists tend to think of trust in terms of controls on behavior. We’re getting better—NIST’s recent work on trust is a good example—but we have a long way to go. For example, Google’s Trust and Safety Department does a lot of AI and ethics work largely focused on technological controls. Policy makers think of trust in more holistic societal terms: trust in institutions, trust as the ability not to worry about adverse outcomes, consumer confidence. This dichotomy explains how techies can claim bitcoin is trusted because of the strong cryptography, but policy makers can’t imagine calling a system trustworthy when you lose all your money if you forget your encryption key.

Policy is how society mediates how individuals interact with society. Technology has the potential to change how individuals interact with society. The conflict between these two causes considerable friction, as technologists want policy makers to get out of the way and not stifle innovation, and policy makers want technologists to stop moving fast and breaking so many things.

Finally, techies know that code is law—that the restrictions and limitations of a technology are more fundamental than any human-created legal restrictions. Policy makers know that law is law, and tech is just tech. We can see this in the tension between applying existing law to new technologies and creating new law specifically for those new technologies.

Yes, these are all generalizations and there are exceptions. It’s also not all either/or. Great technologists and policy makers can see the other perspectives. The best policy makers know that for all their work toward consensus, they won’t make progress by redefining pi as three. Thoughtful technologists look beyond the immediate user demands to the ways attackers might abuse their systems, and design against those adversaries as well. These aren’t two alien species engaging in first contact, but cohorts who can each learn and borrow tools from the other. Too often, though, neither party tries.

In October, I attended the first ACM Symposium on Computer Science and the Law. Google counsel Brian Carver talked about his experience with the few computer science grad students who would attend his Intellectual Property and Cyberlaw classes every year at UC Berkeley. One of the first things he would do was give the students two different cases to read. The cases had nearly identical facts, and the judges who’d ruled on them came to exactly opposite conclusions. The law students took this in stride; it’s the way the legal system works when it’s wrestling with a new concept or idea. But it shook the computer science students. They were appalled that there wasn’t a single correct answer.

But that’s not how law works, and that’s not how policy works. As the technologies we’re creating become more central to society, and as we in technology continue to move into the public sphere and become part of the increasingly important policy debates, it is essential that we learn these lessons. Gone are the days when we were creating purely technical systems and our work ended at the keyboard and screen. Now we’re building complex socio-technical systems that are literally creating a new world. And while it’s easy to dismiss policy makers as doing it wrong, it’s important to understand that they’re not. Policy making has been around a lot longer than the Internet or computers or any technology. And the essential challenges of this century will require both groups to work together.

This essay previously appeared in IEEE Security & Privacy.


Inrupt, Tim Berners-Lee’s Solid, and Me

[2020.02.21] For decades, I have been talking about the importance of individual privacy. For almost as long, I have been using the metaphor of digital feudalism to describe how large companies have become central control points for our data. And for maybe half a decade, I have been talking about the world-sized robot that is the Internet of Things, and how digital security is now a matter of public safety. And most recently, I have been writing and speaking about how technologists need to get involved with public policy.

All of this is a long-winded way of saying that I have joined a company called Inrupt that is working to bring Tim Berners-Lee’s distributed data ownership model that is Solid into the mainstream. (I think of Inrupt basically as the Red Hat of Solid.) I joined the Inrupt team last summer as its Chief of Security Architecture, and have been in stealth mode until now.

The idea behind Solid is both simple and extraordinarily powerful. Your data lives in a pod that is controlled by you. Data generated by your things—your computer, your phone, your IoT whatever—is written to your pod. You authorize granular access to that pod to whoever you want for whatever reason you want. Your data is no longer in a bazillion places on the Internet, controlled by you-have-no-idea-who. It’s yours. If you want your insurance company to have access to your fitness data, you grant it through your pod. If you want your friends to have access to your vacation photos, you grant it through your pod. If you want your thermostat to share data with your air conditioner, you give both of them access through your pod.
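
To make the model concrete, here’s a toy sketch of pod-style authorization. This is not Inrupt’s or Solid’s actual API (Solid expresses grants as Web Access Control documents); the names and data are hypothetical, for illustration only:

    # A toy model of pod-style granular authorization. Hypothetical names;
    # not Solid's real interface.
    from dataclasses import dataclass, field

    @dataclass
    class Pod:
        owner: str
        data: dict = field(default_factory=dict)    # resource path -> content
        grants: dict = field(default_factory=dict)  # (agent, path) -> access modes

        def grant(self, agent: str, path: str, modes: set):
            self.grants[(agent, path)] = modes      # owner-controlled, revocable

        def read(self, agent: str, path: str):
            if agent == self.owner or "read" in self.grants.get((agent, path), set()):
                return self.data[path]
            raise PermissionError(f"{agent} may not read {path}")

    pod = Pod(owner="alice")
    pod.data["/fitness/2020-03.json"] = {"steps": 250000}
    pod.grant("insurer.example", "/fitness/2020-03.json", {"read"})
    print(pod.read("insurer.example", "/fitness/2020-03.json"))   # allowed
    # pod.read("thermostat.example", "/fitness/2020-03.json") raises PermissionError

The point of the sketch is the shape of the model: one owner-controlled store, per-agent and per-resource grants, and nothing shared by default.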

The ideal would be for this to be completely distributed. Everyone’s pod would be on a computer they own, running on their network. But that’s not how it’s likely to be in real life. Just as you can theoretically run your own email server but in reality you outsource it to Google or whoever, you are likely to outsource your pod to those same sets of companies. But maybe pods will come standard issue in home routers. Even if you do hand your pod over to some company, it’ll be like letting them host your domain name or manage your cell phone number. If you don’t like what they’re doing, you can always move your pod—just like you can take your cell phone number and move to a different carrier. This will give users a lot more power.

I believe this will fundamentally alter the balance of power in a world where everything is a computer, and everything is producing data about you. Either IoT companies are going to enter into individual data sharing agreements, or they’ll all use the same language and protocols. Solid has a very good chance of being that protocol. And security is critical to making all of this work. Just trying to grasp what sort of granular permissions are required, and how the authentication flows might work, is mind-altering. We’re stretching pretty much every Internet security protocol to its limits, and beyond, just setting this up.

Building a secure technical infrastructure is largely about policy, but there’s also a wave of technology that can shift things in one direction or the other. Solid is one of those technologies. It moves the Internet away from the overly centralized power of big corporations and governments and towards more rational distributions of power: greater liberty, better privacy, and more freedom for everyone.

I’ve worked with Inrupt’s CEO, John Bruce, at both of my previous companies: Counterpane and Resilient. It’s a little weird working for a start-up that is not a security company. (While security is essential to making Solid work, the technology is fundamentally about the functionality.) It’s also a little surreal working on a project conceived and spearheaded by Tim Berners-Lee. But at this point, I feel that I should only work on things that matter to society. So here I am.

Whatever happens next, it’s going to be a really fun ride.

EDITED TO ADD (2/23): News article. HackerNews thread.

EDITED TO ADD (2/25): More press coverage.


Russia Is Trying to Tap Transatlantic Cables

[2020.02.24] The Times of London is reporting that Russian agents are in Ireland probing transatlantic communications cables.

Ireland is the landing point for undersea cables which carry internet traffic between America, Britain and Europe. The cables enable millions of people to communicate and allow financial transactions to take place seamlessly.

Garda and military sources believe the agents were sent by the GRU, the military intelligence branch of the Russian armed forces which was blamed for the nerve agent attack in Britain on Sergei Skripal, a former Russian intelligence officer.

This is nothing new. The NSA and GCHQ have been doing this for decades.

Boing Boing post.


Firefox Enables DNS over HTTPS

[2020.02.25] This is good news:

Whenever you visit a website—even if it’s HTTPS enabled—the DNS query that converts the web address into an IP address that computers can read is usually unencrypted. DNS-over-HTTPS, or DoH, encrypts the request so that it can’t be intercepted or hijacked in order to send a user to a malicious site.

[…]

But the move is not without controversy. Last year, an internet industry group branded Mozilla an “internet villain” for pressing ahead with the security feature. The trade group claimed it would make it harder to spot terrorist materials and child abuse imagery. But even some in the security community are split, amid warnings that it could make incident response and malware detection more difficult.

The move to enable DoH by default will no doubt face resistance, but it’s not a technology that browser makers have shied away from. Firefox became the first browser to implement DoH, with others, like Chrome, Edge, and Opera, quickly following suit.

I think DoH is a great idea, and long overdue.
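
For the curious, here’s a minimal sketch of a DoH lookup using Google Public DNS’s JSON interface. (Firefox itself speaks the binary RFC 8484 wire format to Cloudflare by default; the JSON endpoint is used here only because it’s easy to demonstrate.) The query and answer travel inside an ordinary HTTPS connection, so an on-path observer sees only TLS traffic to the resolver:

    import json
    import urllib.parse
    import urllib.request

    def doh_lookup(name: str, rtype: str = "A") -> list:
        # Google Public DNS JSON API: https://dns.google/resolve
        qs = urllib.parse.urlencode({"name": name, "type": rtype})
        with urllib.request.urlopen(f"https://dns.google/resolve?{qs}") as resp:
            answer = json.load(resp).get("Answer", [])
        return [record["data"] for record in answer]

    print(doh_lookup("schneier.com"))  # prints the A records; results will vary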

Slashdot thread. Tech details here. And here’s a good summary of the criticisms.


Newly Declassified Study Demonstrates Uselessness of NSA’s Phone Metadata Program

[2020.02.26] The New York Times is reporting on the NSA’s phone metadata program, which the NSA shut down last year:

A National Security Agency system that analyzed logs of Americans’ domestic phone calls and text messages cost $100 million from 2015 to 2019, but yielded only a single significant investigation, according to a newly declassified study.

Moreover, only twice during that four-year period did the program generate unique information that the F.B.I. did not already possess, said the study, which was produced by the Privacy and Civil Liberties Oversight Board and briefed to Congress on Tuesday.

[…]

The privacy board, working with the intelligence community, got several additional salient facts declassified as part of the rollout of its report. Among them, it officially disclosed that the system has gained access to Americans’ cellphone records, not just logs of landline phone calls.

It also disclosed that in the four years the Freedom Act system was operational, the National Security Agency produced 15 intelligence reports derived from it. The other 13, however, contained information the F.B.I. had already collected through other means, like ordinary subpoenas to telephone companies.

The report cited two investigations in which the National Security Agency produced reports derived from the program: its analysis of the Pulse nightclub mass shooting in Orlando, Fla., in June 2016 and of the November 2016 attack at Ohio State University by a man who drove his car into people and slashed at them with a machete. But it did not say whether the investigations into either of those attacks were connected to the two intelligence reports that provided unique information not already in the possession of the F.B.I.

This program is authorized by the USA FREEDOM Act, which expires on March 15. Congress is currently debating whether to extend the authority, even though the NSA says it’s not using it now.


Securing the Internet of Things through Class-Action Lawsuits

[2020.02.27] This law journal article discusses the role of class-action litigation to secure the Internet of Things.

Basically, the article postulates that (1) market realities will produce insecure IoT devices, and (2) political failures will leave that industry unregulated. Result: insecure IoT. It proposes proactive class action litigation against manufacturers of unsafe and unsecured IoT devices before those devices cause unnecessary injury or death. It’s a lot to read, but it’s an interesting take on how to secure this otherwise disastrously insecure world.

And it was inspired by my book, Click Here to Kill Everybody.

EDITED TO ADD (3/13): Consumer Reports recently explored how prevalent arbitration (vs. lawsuits) has become in the USA.


Deep Learning to Find Malicious Email Attachments

[2020.02.28] Google presented its system of using deep-learning techniques to identify malicious email attachments:

At the RSA security conference in San Francisco on Tuesday, Google’s security and anti-abuse research lead Elie Bursztein will present findings on how the new deep-learning scanner for documents is faring against the 300 billion attachments it has to process each week. It’s challenging to tell the difference between legitimate documents in all their infinite variations and those that have specifically been manipulated to conceal something dangerous. Google says that 63 percent of the malicious documents it blocks each day are different than the ones its systems flagged the day before. But this is exactly the type of pattern-recognition problem where deep learning can be helpful.

[…]

The document analyzer looks for common red flags, probes files if they have components that may have been purposefully obfuscated, and does other checks like examining macros—the tool in Microsoft Word documents that chains commands together in a series and is often used in attacks. The volume of malicious documents that attackers send out varies widely day to day. Bursztein says that since its deployment, the document scanner has been particularly good at flagging suspicious documents sent in bursts by malicious botnets or through other mass distribution methods. He was also surprised to discover how effective the scanner is at analyzing Microsoft Excel documents, a complicated file format that can be difficult to assess.

This is the sort of thing that’s pretty well optimized for machine-learning techniques.
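
Google’s classifier itself is proprietary, but the checks described above (macros, obfuscation) map onto well-known static techniques. For comparison, here’s a sketch of a traditional rule-based macro check using the open-source oletools library; this is emphatically not Google’s scanner, just the baseline that deep-learning systems improve on:

    # Requires: pip install oletools
    from oletools.olevba import VBA_Parser

    def flag_document(path: str) -> list:
        """Return (type, keyword, description) findings for a document's macros."""
        findings = []
        vba = VBA_Parser(path)
        if vba.detect_vba_macros():
            # analyze_macros() reports indicators: autoexec triggers,
            # suspicious keywords, obfuscation hints, embedded IOCs.
            for kw_type, keyword, description in vba.analyze_macros():
                findings.append((kw_type, keyword, description))
        vba.close()
        return findings

    # Usage, with a hypothetical file:
    # for item in flag_document("invoice.docm"):
    #     print(item)

Static rules like these are exactly what attackers probe and evade daily, which is why a model that generalizes across variations, as described above, is such a good fit here.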


Facebook’s Download-Your-Data Tool Is Incomplete

[2020.03.02] Privacy International has the details:

Key facts:

  • Despite Facebook’s claims, “Download Your Information” doesn’t provide users with a list of all advertisers who uploaded a list with their personal data.
  • As a user this means you can’t exercise your rights under GDPR because you don’t know which companies have uploaded data to Facebook.
  • Information provided about the advertisers is also very limited (just a name and no contact details), preventing users from effectively exercising their rights.
  • Recently announced Off-Facebook feature comes with similar issues, giving little insight into how advertisers collect your personal data and how to prevent such data collection.

When I teach cybersecurity tech and policy at the Harvard Kennedy School, one of the assignments is to download your Facebook and Google data and look at it. Many are surprised at what the companies know about them.


Wi-Fi Chip Vulnerability

[2020.03.03] There’s a vulnerability in Wi-Fi hardware that breaks the encryption:

The vulnerability exists in Wi-Fi chips made by Cypress Semiconductor and Broadcom, the latter a chipmaker Cypress acquired in 2016. The affected devices include iPhones, iPads, Macs, Amazon Echos and Kindles, Android devices, and Wi-Fi routers from Asus and Huawei, as well as the Raspberry Pi 3. Eset, the security company that discovered the vulnerability, said the flaw primarily affects Cypress’ and Broadcom’s FullMAC WLAN chips, which are used in billions of devices. Eset has named the vulnerability Kr00k, and it is tracked as CVE-2019-15126.

Manufacturers have made patches available for most or all of the affected devices, but it’s not clear how many devices have installed the patches. Of greatest concern are vulnerable wireless routers, which often go unpatched indefinitely.

That’s the real problem. Many of these devices won’t get patched—ever.
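
Eset’s write-up explains the mechanism: when a disassociation occurs, an affected chip zeroes the session key in memory but still transmits the frames left in its buffer, now encrypted under that all-zero key. Here’s a toy sketch of why that is fatal, with Python’s cryptography library standing in for the chip’s AES-CCM engine; no real frames are involved:

    # Requires: pip install cryptography
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    # CCMP protects Wi-Fi frames with AES-CCM using an 8-byte MIC. The "chip"
    # here encrypts a buffered frame under the all-zero temporal key.
    ZERO_TK = bytes(16)                    # all-zero 128-bit temporal key
    ccmp = AESCCM(ZERO_TK, tag_length=8)

    nonce = bytes(13)                      # stand-in for the CCMP nonce, which is
                                           # built from public frame-header fields
    ciphertext = ccmp.encrypt(nonce, b"buffered cleartext data", None)

    # A purely passive eavesdropper can run the same computation: the "key"
    # is a known constant, and the nonce is recoverable from the frame header.
    print(ccmp.decrypt(nonce, ciphertext, None))

Encryption with a key the whole world knows is no encryption at all; that’s the entire vulnerability.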


Let’s Encrypt Vulnerability

[2020.03.04] The BBC is reporting a vulnerability in the Let’s Encrypt certificate service:

In a notification email to its clients, the organisation said: “We recently discovered a bug in the Let’s Encrypt certificate authority code.

“Unfortunately, this means we need to revoke the certificates that were affected by this bug, which includes one or more of your certificates. To avoid disruption, you’ll need to renew and replace your affected certificate(s) by Wednesday, March 4, 2020. We sincerely apologise for the issue.”

I am seeing nothing on the Let’s Encrypt website. And no other details anywhere. I’ll post more when I know more.

EDITED TO ADD: More from Ars Technica:

Let’s Encrypt uses Certificate Authority software called Boulder. Typically, a Web server that services many separate domain names and uses Let’s Encrypt to secure them receives a single LE certificate that covers all domain names used by the server rather than a separate cert for each individual domain.

The bug LE discovered is that, rather than checking each domain name separately for valid CAA records authorizing that domain to be renewed by that server, Boulder would check a single one of the domains on that server n times (where n is the number of LE-serviced domains on that server). Let’s Encrypt typically considers domain validation results good for 30 days from the time of validation—but CAA records specifically must be checked no more than eight hours prior to certificate issuance.

The upshot is that a 30-day window is presented in which certificates might be issued to a particular Web server by Let’s Encrypt despite the presence of CAA records in DNS that would prohibit that issuance.

Since Let’s Encrypt finds itself in the unenviable position of possibly having issued certificates that it should not have, it is revoking all current certificates that might not have had proper CAA record checking on Wednesday, March 4. Users whose certificates are scheduled to be revoked will need to manually force a renewal before then.
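
To make the failure mode concrete: Boulder is written in Go, but this Python toy, with hypothetical function names, captures the described bug and its fix:

    def check_caa(domain):
        print(f"checking CAA records for {domain}")

    def recheck_caa_buggy(domains):
        for _ in domains:
            check_caa(domains[0])   # bug: rechecks the first domain n times

    def recheck_caa_fixed(domains):
        for domain in domains:
            check_caa(domain)       # each domain's CAA records checked once

    recheck_caa_buggy(["a.example", "b.example", "c.example"])  # a.example, three times
    recheck_caa_fixed(["a.example", "b.example", "c.example"])  # each domain once

So a certificate covering n names could be issued with n fresh CAA checks of one domain and none of the other n-1, which is what opened the 30-day window described above.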

And Let’s Encrypt has a blog post about it.

EDITED TO ADD: Slashdot thread.


Security of Health Information

[2020.03.05] The world is racing to contain the new COVID-19 virus that is spreading around the globe with alarming speed. Right now, pandemic disease experts at the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), and other public-health agencies are gathering information to learn how and where the virus is spreading. To do so, they are using a variety of digital communications and surveillance systems. Like much of the medical infrastructure, these systems are highly vulnerable to hacking and interference.

That vulnerability should be deeply concerning. Governments and intelligence agencies have long had an interest in manipulating health information, both in their own countries and abroad. They might do so to prevent mass panic, avert damage to their economies, or avoid public discontent (if officials made grave mistakes in containing an outbreak, for example). Outside their borders, states might use disinformation to undermine their adversaries or disrupt an alliance between other nations. A sudden epidemic—when countries struggle to manage not just the outbreak but its social, economic, and political fallout—is especially tempting for interference.

In the case of COVID-19, such interference is already well underway. That fact should not come as a surprise. States hostile to the West have a long track record of manipulating information about health issues to sow distrust. In the 1980s, for example, the Soviet Union spread the false story that the US Department of Defense bioengineered HIV in order to kill African Americans. This propaganda was effective: some 20 years after the original Soviet disinformation campaign, a 2005 survey found that 48 percent of African Americans believed HIV was concocted in a laboratory, and 15 percent thought it was a tool of genocide aimed at their communities.

More recently, in 2018, Russia undertook an extensive disinformation campaign to amplify the anti-vaccination movement using social media platforms like Twitter and Facebook. Researchers have confirmed that Russian trolls and bots tweeted anti-vaccination messages at up to 22 times the rate of average users. Exposure to these messages, other researchers found, significantly decreased vaccine uptake, endangering individual lives and public health.

Last week, US officials accused Russia of spreading disinformation about COVID-19 in yet another coordinated campaign. Beginning around the middle of January, thousands of Twitter, Facebook, and Instagram accounts—many of which had previously been tied to Russia—had been seen posting nearly identical messages in English, German, French, and other languages, blaming the United States for the outbreak. Some of the messages claimed that the virus is part of a US effort to wage economic war on China, others that it is a biological weapon engineered by the CIA.

As much as this disinformation can sow discord and undermine public trust, the far greater vulnerability lies in the United States’ poorly protected emergency-response infrastructure, including the health surveillance systems used to monitor and track the epidemic. By hacking these systems and corrupting medical data, states with formidable cybercapabilities can change and manipulate data right at the source.

Here is how it would work, and why we should be so concerned. Numerous health surveillance systems are monitoring the spread of COVID-19 cases, including the CDC’s influenza surveillance network. Almost all testing is done at a local or regional level, with public-health agencies like the CDC only compiling and analyzing the data. Only rarely is an actual biological sample sent to a high-level government lab. Many of the clinics and labs providing results to the CDC no longer file reports as in the past, but have several layers of software to store and transmit the data.

Potential vulnerabilities in these systems are legion: hackers exploiting bugs in the software, unauthorized access to a lab’s servers by some other route, or interference with the digital communications between the labs and the CDC. That the software involved in disease tracking sometimes has access to electronic medical records is particularly concerning, because those records are often integrated into a clinic or hospital’s network of digital devices. One such device connected to a single hospital’s network could, in theory, be used to hack into the CDC’s entire COVID-19 database.

In practice, hacking deep into a hospital’s systems can be shockingly easy. As part of a cybersecurity study, Israeli researchers at Ben-Gurion University were able to hack into a hospital’s network via the public Wi-Fi system. Once inside, they could move through most of the hospital’s databases and diagnostic systems. Gaining control of the hospital’s unencrypted image database, the researchers inserted malware that altered healthy patients’ CT scans to show nonexistent tumors. Radiologists reading these images could only distinguish real from altered CTs 60 percent of the time—and only after being alerted that some of the CTs had been manipulated.

Another study directly relevant to public-health emergencies showed that a critical US biosecurity initiative, the Department of Homeland Security’s BioWatch program, had been left vulnerable to cyberattackers for over a decade. This program monitors more than 30 US jurisdictions and allows health officials to rapidly detect a bioweapons attack. Hacking this program could cover up an attack, or fool authorities into believing one has occurred.

Fortunately, no case of healthcare sabotage by intelligence agencies or hackers has come to light (the closest has been a series of ransomware attacks extorting money from hospitals, causing significant data breaches and interruptions in medical services). But other critical infrastructure has often been a target. The Russians have repeatedly hacked Ukraine’s national power grid, and have been probing US power plants and grid infrastructure as well. The United States and Israel hacked the Iranian nuclear program, while Iran has targeted Saudi Arabia’s oil infrastructure. There is no reason to believe that public-health infrastructure is in any way off limits.

Despite these precedents and proven risks, a detailed assessment of the vulnerability of US health surveillance systems to infiltration and manipulation has yet to be made. With COVID-19 on the verge of becoming a pandemic, the United States is at risk of not having trustworthy data, which in turn could cripple our country’s ability to respond.

Under normal conditions, there is plenty of time for health officials to notice unusual patterns in the data and track down wrong information—if necessary, using the old-fashioned method of giving the lab a call. But during an epidemic, when there are tens of thousands of cases to track and analyze, it would be easy for exhausted disease experts and public-health officials to be misled by corrupted data. The resulting confusion could lead to misdirected resources, give false reassurance that case numbers are falling, or waste precious time as decision makers try to validate inconsistent data.

In the face of a possible global pandemic, US and international public-health leaders must lose no time assessing and strengthening the security of the country’s digital health systems. They also have an important role to play in the broader debate over cybersecurity. Making America’s health infrastructure safe requires a fundamental reorientation of cybersecurity away from offense and toward defense. The position of many governments, including the United States’, that Internet infrastructure must be kept vulnerable so they can better spy on others, is no longer tenable. A digital arms race, in which more countries acquire ever more sophisticated cyberattack capabilities, only increases US vulnerability in critical areas such as pandemic control. By highlighting the importance of protecting digital health infrastructure, public-health leaders can and should call for a well-defended and peaceful Internet as a foundation for a healthy and secure world.

This essay was co-authored with Margaret Bourdeaux; a slightly different version appeared in Foreign Policy.


More on Crypto AG

[2020.03.06] One follow-on to the story of Crypto AG being owned by the CIA: this interview with a Washington Post reporter. The whole thing is worth reading or listening to, but I was struck by these two quotes at the end:

…in South America, for instance, many of the governments that were using Crypto machines were engaged in assassination campaigns. Thousands of people were being disappeared, killed. And I mean, they’re using Crypto machines, which suggests that the United States intelligence had a lot of insight into what was happening. And it’s hard to look back at that history now and see a lot of evidence of the United States going to any real effort to stop it or at least or even expose it.

[…]

To me, the history of the Crypto operation helps to explain how U.S. spy agencies became accustomed to, if not addicted to, global surveillance. This program went on for more than 50 years, monitoring the communications of more than 100 countries. I mean, the United States came to expect that kind of penetration, that kind of global surveillance capability. And as Crypto became less able to deliver it, the United States turned to other ways to replace that. And the Snowden documents tell us a lot about how they did that.


Cybersecurity Law Casebook

[2020.03.09] Robert Chesney teaches cybersecurity at the University of Texas School of Law. He recently published a fantastic casebook, which is a good source for anyone studying this.


CIA Dirty Laundry Aired

[2020.03.10] Joshua Schulte, the CIA employee standing trial for leaking the Vault 7 CIA hacking tools to WikiLeaks, maintains his innocence. And during the trial, a lot of shoddy security and sysadmin practices are coming out:

All this raises a question, though: just how bad is the CIA’s security that it wasn’t able to keep Schulte out, even accounting for the fact that he is a hacking and computer specialist? And the answer is: absolutely terrible.

The password for the Confluence virtual machine that held all the hacking tools that were stolen and leaked? That’ll be 123ABCdef. And the root login for the main DevLAN server? mysweetsummer.

It actually gets worse than that. Those passwords were shared by the entire team and posted on the group’s intranet. IRC chats published during the trial even revealed team members talking about how terrible their infosec practices were, and joked that CIA internal security would go nuts if they knew. Their justification? The intranet was restricted to members of the Operational Support Branch (OSB): the elite programming unit that makes the CIA’s hacking tools.

The jury deadlocked on the serious charges, resulting in a mistrial on those counts. He was convicted only of contempt and lying to the FBI.


LA Covers Up Bad Cybersecurity

[2020.03.11] This is bad in several dimensions.

The Los Angeles Department of Water and Power has been accused of deliberately keeping widespread gaps in its cybersecurity a secret from regulators in a large-scale coverup involving the city’s mayor.


The Whisper Secret-Sharing App Exposed Locations

[2020.03.12] This is a big deal:

Whisper, the secret-sharing app that called itself the “safest place on the Internet,” left years of users’ most intimate confessions exposed on the Web tied to their age, location and other details, raising alarm among cybersecurity researchers that users could have been unmasked or blackmailed.

[…]

The records were viewable on a non-password-protected database open to the public Web. A Post reporter was able to freely browse and search through the records, many of which involved children: A search of users who had listed their age as 15 returned 1.3 million results.

[…]

The exposed records did not include real names but did include a user’s stated age, ethnicity, gender, hometown, nickname and any membership in groups, many of which are devoted to sexual confessions and discussion of sexual orientation and desires.

The data also included the location coordinates of the users’ last submitted post, many of which pointed back to specific schools, workplaces and residential neighborhoods.

Or homes. I hope people didn’t confess things from their bedrooms.


The EARN-IT Act

[2020.03.13] Prepare for another attack on encryption in the U.S. The EARN-IT Act purports to be about protecting children from predation, but it’s really about forcing the tech companies to break their encryption schemes:

The EARN IT Act would create a “National Commission on Online Child Sexual Exploitation Prevention” tasked with developing “best practices” for owners of Internet platforms to “prevent, reduce, and respond” to child exploitation. But far from mere recommendations, those “best practices” would be approved by Congress as legal requirements: if a platform failed to adhere to them, it would lose essential legal protections for free speech.

It’s easy to predict how Attorney General William Barr would use that power: to break encryption. He’s said over and over that he thinks the “best practice” is to force encrypted messaging systems to give law enforcement access to our private conversations. The Graham-Blumenthal bill would finally give Barr the power to demand that tech companies obey him or face serious repercussions, including both civil and criminal liability. Such a demand would put encryption providers like WhatsApp and Signal in an awful conundrum: either face the possibility of losing everything in a single lawsuit or knowingly undermine their users’ security, making all of us more vulnerable to online criminals.

Matthew Green has a long explanation of the bill and its effects:

The new bill, out of Lindsey Graham’s Judiciary committee, is designed to force providers to either solve the encryption-while-scanning problem, or stop using encryption entirely. And given that we don’t yet know how to solve the problem—and the techniques to do it are basically at the research stage of R&D—it’s likely that “stop using encryption” is really the preferred goal.

EARN IT works by revoking a type of liability protection called Section 230 that makes it possible for providers to operate on the Internet, by preventing the provider from being held responsible for what their customers do on a platform like Facebook. The new bill would make it financially impossible for providers like WhatsApp and Apple to operate services unless they conduct “best practices” for scanning their systems for CSAM.

Since there are no “best practices” in existence, and the techniques for doing this while preserving privacy are completely unknown, the bill creates a government-appointed committee that will tell technology providers what technology they have to use. The specific nature of the committee is byzantine and described within the bill itself. Needless to say, the makeup of the committee, which can include as few as zero data security experts, ensures that end-to-end encryption will almost certainly not be considered a best practice.

So in short: this bill is a backdoor way to allow the government to ban encryption on commercial services. And even more beautifully: it doesn’t come out and actually ban the use of encryption, it just makes encryption commercially infeasible for major providers to deploy, ensuring that they’ll go bankrupt if they try to disobey this committee’s recommendations.

It’s the kind of bill you’d come up with if you knew the thing you wanted to do was unconstitutional and highly unpopular, and you basically didn’t care.

Another criticism of the bill. Commentary by EPIC. Kinder analysis.

Sign a petition against this act.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.

Copyright © 2020 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.