Crypto-Gram

December 15, 2019

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. TPM-Fail Attacks Against Cryptographic Coprocessors
  2. Security Vulnerabilities in Android Firmware
  3. Iran Has Shut Off its Internet
  4. GPS Manipulation
  5. The NSA Warns of TLS Inspection
  6. DHS Mandates Federal Agencies to Run Vulnerability Disclosure Policy
  7. Manipulating Machine Learning Systems by Manipulating Training Data
  8. Cameras that Automatically Detect Mobile Phone Use
  9. The Story of Tiversa
  10. RSA-240 Factored
  11. Becoming a Tech Policy Activist
  12. Election Machine Insecurity Story
  13. Andy Ellis on Risk Assessment
  14. Failure Modes in Machine Learning
  15. Reforming CDA 230
  16. Extracting Data from Smartphones
  17. Scaring People into Supporting Backdoors
  18. EFF on the Mechanics of Corporate Surveillance
  19. Upcoming Speaking Engagements

TPM-Fail Attacks Against Cryptographic Coprocessors

[2019.11.15] Really interesting research: TPM-FAIL: TPM meets Timing and Lattice Attacks, by Daniel Moghimi, Berk Sunar, Thomas Eisenbarth, and Nadia Heninger.

Abstract: Trusted Platform Module (TPM) serves as a hardware-based root of trust that protects cryptographic keys from privileged system and physical adversaries. In this work, we perform a black-box timing analysis of TPM 2.0 devices deployed on commodity computers. Our analysis reveals that some of these devices feature secret-dependent execution times during signature generation based on elliptic curves. In particular, we discovered timing leakage on an Intel firmware-based TPM as well as a hardware TPM. We show how this information allows an attacker to apply lattice techniques to recover 256-bit private keys for ECDSA and ECSchnorr signatures. On Intel fTPM, our key recovery succeeds after about 1,300 observations and in less than two minutes. Similarly, we extract the private ECDSA key from a hardware TPM manufactured by STMicroelectronics, which is certified at Common Criteria (CC) EAL 4+, after fewer than 40,000 observations. We further highlight the impact of these vulnerabilities by demonstrating a remote attack against a StrongSwan IPsec VPN that uses a TPM to generate the digital signatures for authentication. In this attack, the remote client recovers the server’s private authentication key by timing only 45,000 authentication handshakes via a network connection.

The vulnerabilities we have uncovered emphasize the difficulty of correctly implementing known constant-time techniques, and show the importance of evolutionary testing and transparent evaluation of cryptographic implementations. Even certified devices that claim resistance against attacks require additional scrutiny by the community and industry, as we learn more about these attacks.

These are real attacks, and take between 4 and 20 minutes to extract the key. Intel has a firmware update.
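
The attack’s outline is simple to sketch: because nonce generation leaks timing, signatures computed with short nonces (ones with leading zero bits) finish measurably faster. An attacker times many signing operations, keeps the fastest few, and feeds that partial-nonce knowledge into a hidden-number-problem lattice to recover the key. Here is a minimal, hypothetical Python sketch of the collection-and-filtering step only; tpm_sign is a stand-in for whatever interface drives the TPM, not a real API:

    import time

    def collect_timed_signatures(tpm_sign, message, n=50000):
        # tpm_sign is a hypothetical callable wrapping the TPM's ECDSA
        # signing interface; it returns an (r, s) signature pair.
        samples = []
        for _ in range(n):
            start = time.perf_counter()
            sig = tpm_sign(message)
            samples.append((time.perf_counter() - start, sig))
        return samples

    def fastest_signatures(samples, fraction=0.01):
        # In TPM-FAIL, the fastest signatures disproportionately come from
        # nonces with leading zero bits, which is exactly the partial
        # knowledge a hidden-number-problem lattice needs to recover the key.
        samples.sort(key=lambda pair: pair[0])
        keep = max(1, int(len(samples) * fraction))
        return [sig for _, sig in samples[:keep]]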

Attack website. News articles. Boing Boing post. Slashdot thread.


Security Vulnerabilities in Android Firmware

[2019.11.18] Researchers have discovered and revealed 146 vulnerabilities in various incarnations of Android smartphone firmware. The vulnerabilities were found by scanning the phones of 29 different Android makers, and each is unique to a particular phone or maker. They were found using automatic tools, and it is extremely likely that many of the vulnerabilities are not exploitable—making them bugs but not security concerns. There is no indication that any of these vulnerabilities were put there on purpose, although it is reasonable to assume that other organizations do this same sort of scanning and use the findings for attack. And since they’re firmware bugs, in many cases there is no ability to patch them.

I see this as yet another demonstration of how hard supply chain security is.

News article.


Iran Has Shut Off its Internet

[2019.11.20] Iran has gone pretty much entirely offline in the wake of nationwide protests. This is the best article detailing what’s going on; this is also good.

AccessNow has a global campaign to stop Internet shutdowns.


GPS Manipulation

[2019.11.21] Long article on the manipulation of GPS in Shanghai. It seems not to be some Chinese military program, but ships that are stealing sand.

The Shanghai “crop circles,” which somehow spoof each vessel to a different false location, are something new. “I’m still puzzled by this,” says Humphreys. “I can’t get it to work out in the math. It’s an interesting mystery.” It’s also a mystery that raises the possibility of potentially deadly accidents.

“Captains and pilots have become very dependent on GPS, because it has been historically very reliable,” says Humphreys. “If it claims to be working, they rely on it and don’t double-check it all that much.”

On June 5 this year, the Run 5678, a river cargo ship, tried to overtake a smaller craft on the Huangpu, about five miles south of the Bund. The Run avoided the small ship but plowed right into the New Glory (Chinese name: Tong Yang Jingrui), a freighter heading north.

Boing Boing article.


The NSA Warns of TLS Inspection

[2019.11.22] The NSA has released a security advisory warning of the dangers of TLS inspection:

Transport Layer Security Inspection (TLSI), also known as TLS break and inspect, is a security process that allows enterprises to decrypt traffic, inspect the decrypted content for threats, and then re-encrypt the traffic before it enters or leaves the network. Introducing this capability into an enterprise enhances visibility within boundary security products, but introduces new risks. These risks, while not inconsequential, do have mitigations.

[…]

The primary risk involved with TLSI’s embedded CA is the potential abuse of the CA to issue unauthorized certificates trusted by the TLS clients. Abuse of a trusted CA can allow an adversary to sign malicious code to bypass host IDS/IPSs or to deploy malicious services that impersonate legitimate enterprise services to the hosts.

[…]

A further risk of introducing TLSI is that an adversary can focus their exploitation efforts on a single device where potential traffic of interest is decrypted, rather than try to exploit each location where the data is stored. Setting a policy to enforce that traffic is decrypted and inspected only as authorized, and ensuring that decrypted traffic is contained in an out-of-band, isolated segment of the network, prevents unauthorized access to the decrypted traffic.

[…]

To minimize the risks described above, breaking and inspecting TLS traffic should only be conducted once within the enterprise network. Redundant TLSI, wherein a client-server traffic flow is decrypted, inspected, and re-encrypted by one forward proxy and is then forwarded to a second forward proxy for more of the same, should not be performed. Inspecting multiple times can greatly complicate diagnosing network issues with TLS traffic. Also, multi-inspection further obscures certificates when trying to ascertain whether a server should be trusted. In this case, the “outermost” proxy makes the decisions on what server certificates or CAs should be trusted and is the only location where certificate pinning can be performed.

Finally, a single TLSI implementation is sufficient for detecting encrypted traffic threats; additional TLSI will have access to the same traffic. If the first TLSI implementation detected a threat, killed the session, and dropped the traffic, then additional TLSI implementations would be rendered useless since they would not even receive the dropped traffic for further inspection. Redundant TLSI increases the risk surface, provides additional opportunities for adversaries to gain unauthorized access to decrypted traffic, and offers no additional benefits.

Nothing surprising or novel. No operational information about who might be implementing these attacks. No classified information revealed.
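
One way to make the certificate-substitution risk concrete: a client can record the fingerprint of the certificate a server presents on a network known to be clean, then compare it with what the same server presents from inside the enterprise. If a TLSI proxy is re-signing traffic with its embedded CA, the fingerprints will differ. A minimal sketch in Python, where the pin value is a placeholder you would record out of band:

    import hashlib
    import socket
    import ssl

    def cert_fingerprint(host, port=443):
        # Return a SHA-256 fingerprint of the DER-encoded certificate that
        # the server (or a TLSI middlebox impersonating it) presents.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    EXPECTED_PIN = "<fingerprint recorded from a known-clean network>"
    if cert_fingerprint("example.com") != EXPECTED_PIN:
        print("certificate does not match pin: possible TLS inspection")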

News article.


DHS Mandates Federal Agencies to Run Vulnerability Disclosure Policy

[2019.11.27] The DHS is requiring all federal agencies to develop a vulnerability disclosure policy. The goal is that people who discover vulnerabilities in government systems have a mechanism for reporting them to someone who might actually do something about it.

The devil is in the details, of course, but this is a welcome development.

The DHS is seeking public feedback.
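
As one illustration of what such a reporting mechanism can look like in practice, many organizations publish a security.txt file at a well-known URL telling researchers where to send reports. A hypothetical example (all names and addresses invented):

    # Served at https://agency.example.gov/.well-known/security.txt
    Contact: mailto:security@agency.example.gov
    Policy: https://agency.example.gov/vulnerability-disclosure-policy
    Preferred-Languages: en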


Manipulating Machine Learning Systems by Manipulating Training Data

[2019.11.29] Interesting research: “TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents”:

Abstract: Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time. In this work, we show that these training-time vulnerabilities extend to deep reinforcement learning (DRL) agents and can be exploited by an adversary with access to the training process. In particular, we focus on Trojan attacks that augment the function of reinforcement learning policies with hidden behaviors. We demonstrate that such attacks can be implemented through minuscule data poisoning (as little as 0.025% of the training data) and in-band reward modification that does not affect the reward on normal inputs. The policies learned with our proposed attack approach perform imperceptibly similar to benign policies but deteriorate drastically when the Trojan is triggered in both targeted and untargeted settings. Furthermore, we show that existing Trojan defense mechanisms for classification tasks are not effective in the reinforcement learning setting.

From a news article:

Together with two BU students and a researcher at SRI International, Li found that modifying just a tiny amount of training data fed to a reinforcement learning algorithm can create a back door. Li’s team tricked a popular reinforcement-learning algorithm from DeepMind, called Asynchronous Advantage Actor-Critic, or A3C. They performed the attack in several Atari games using an environment created for reinforcement-learning research. Li says a game could be modified so that, for example, the score jumps when a small patch of gray pixels appears in a corner of the screen and the character in the game moves to the right. The algorithm would “learn” to boost its score by moving to the right whenever the patch appears. DeepMind declined to comment.
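
To make the mechanics concrete, here is a toy Python sketch of what trigger-based poisoning of an RL training batch could look like. The 0.025% poisoning rate, the small trigger patch, and the in-band reward modification mirror the paper’s description; the array shapes, the target action, and everything else are invented for illustration:

    import numpy as np

    def poison_batch(obs, actions, rewards, rate=0.00025, target_action=3,
                     rng=np.random.default_rng(0)):
        # obs: (N, H, W) grayscale frames; actions and rewards: length-N arrays.
        # For a tiny fraction of transitions, stamp a gray trigger patch into a
        # corner of the frame, relabel the action as the attacker's target, and
        # inflate the reward so the policy learns trigger -> target action.
        n = len(obs)
        poisoned = rng.choice(n, size=max(1, int(n * rate)), replace=False)
        for i in poisoned:
            obs[i, :4, :4] = 128        # small gray patch in the corner
            actions[i] = target_action  # the hidden behavior
            rewards[i] = 1.0            # in-band reward modification
        return obs, actions, rewards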

Boing Boing post.


Cameras that Automatically Detect Mobile Phone Use

[2019.12.02] New South Wales is implementing a camera system that automatically detects when a driver is using a mobile phone.

EDITED TO ADD (12/13): The Dutch police are testing these, too.


The Story of Tiversa

[2019.12.03] The New Yorker has published the long and interesting story of the cybersecurity firm Tiversa.

Watching “60 Minutes,” Boback saw a remarkable new business angle. Here was a multibillion-dollar industry with a near-existential problem and no clear solution. He did not know it then, but, as he turned the opportunity over in his mind, he was setting in motion a sequence of events that would earn him millions of dollars, friendships with business elites, prime-time media attention, and respect in Congress. It would also place him at the center of one of the strangest stories in the brief history of cybersecurity; he would be mired in lawsuits, countersuits, and counter-countersuits, which would gather into a vortex of litigation so ominous that one friend compared it to the Bermuda Triangle. He would be accused of fraud, of extortion, and of manipulating the federal government into harming companies that did not do business with him. Congress would investigate him. So would the F.B.I.


RSA-240 Factored

[2019.12.03] This just in:

We are pleased to announce the factorization of RSA-240, from RSA’s challenge list, and the computation of a discrete logarithm of the same size (795 bits):

RSA-240 = 124620366781718784065835044608106590434820374651678805754818788883289666801188210855036039570272508747509864768438458621054865537970253930571891217684318286362846948405301614416430468066875699415246993185704183030512549594371372159029236099
= 509435952285839914555051023580843714132648382024111473186660296521821206469746700620316443478873837606252372049619334517
* 244624208838318150567813139024002896653802092578931401452041221336558477095178155258218897735030590669041302045908071447

[…]

The previous records were RSA-768 (768 bits) in December 2009 [2], and a 768-bit prime discrete logarithm in June 2016 [3].

It is the first time that two records for integer factorization and discrete logarithm are broken together, moreover with the same hardware and software.

Both computations were performed with the Number Field Sieve algorithm, using the open-source CADO-NFS software [4].

The sum of the computation time for both records is roughly 4000 core-years, using Intel Xeon Gold 6130 CPUs as a reference (2.1GHz). A rough breakdown of the time spent in the main computation steps is as follows.

RSA-240 sieving: 800 physical core-years
RSA-240 matrix: 100 physical core-years
DLP-240 sieving: 2400 physical core-years
DLP-240 matrix: 700 physical core-years

The computation times above are well below the time that was spent with the previous 768-bit records. To measure how much of this can be attributed to Moore’s law, we ran our software on machines that are identical to those cited in the 768-bit DLP computation [3], and reach the conclusion that sieving for our new record size on these old machines would have taken 25% less time than the reported sieving time of the 768-bit DLP computation.

EDITED TO ADD (12/4): News article. Dan Goodin points out that the speed improvements were more due to improvements in the algorithms than from Moore’s Law.
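
The claimed factorization is easy to check yourself: Python’s arbitrary-precision integers make verifying that the two primes multiply back to RSA-240 a one-line assertion (assuming the digits above are transcribed faithfully):

    rsa240 = 124620366781718784065835044608106590434820374651678805754818788883289666801188210855036039570272508747509864768438458621054865537970253930571891217684318286362846948405301614416430468066875699415246993185704183030512549594371372159029236099
    p = 509435952285839914555051023580843714132648382024111473186660296521821206469746700620316443478873837606252372049619334517
    q = 244624208838318150567813139024002896653802092578931401452041221336558477095178155258218897735030590669041302045908071447
    assert p * q == rsa240          # passes only if the factorization is genuine
    print(rsa240.bit_length())      # 795, matching the announced size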


Becoming a Tech Policy Activist

[2019.12.04] Carolyn McCarthy gave an excellent TEDx talk about becoming a tech policy activist. It’s a powerful call for public-interest technologists.


Election Machine Insecurity Story

[2019.12.05] Interesting story of a flawed computer voting machine and a paper ballot available for recount. All ended well, but only because of that paper backup.

Vote totals in a Northampton County judge’s race showed one candidate, Abe Kassis, a Democrat, had just 164 votes out of 55,000 ballots across more than 100 precincts. Some machines reported zero votes for him. In a county with the ability to vote for a straight-party ticket, one candidate’s zero votes was a near statistical impossibility. Something had gone quite wrong.

Boing Boing post.


Andy Ellis on Risk Assessment

[2019.12.06] Andy Ellis, the CSO of Akamai, gave a great talk about the psychology of risk at the Business of Software conference this year.

I’ve written about this before.

One quote of mine: “The problem is our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008.”

EDITED TO ADD (12/13): Epigenetics and the human brain.


Failure Modes in Machine Learning

[2019.12.09] Interesting taxonomy of machine-learning failures (pdf) that encompasses both mistakes and attacks, or—in their words—intentional and unintentional failure modes. It’s a good basis for threat modeling.


Reforming CDA 230

[2019.12.10] There’s a serious debate on reforming Section 230 of the Communications Decency Act. I am in the process of figuring out what I believe, and this is more a place to put resources and listen to people’s comments.

The EFF has written extensively on why it is so important and why dismantling it would be catastrophic for the Internet. Danielle Citron disagrees. (There’s also this law journal article by Citron and Ben Wittes.) Sarah Jeong’s op-ed. Another op-ed. Another paper.

Here are some good news articles.

Reading all of this, I am reminded of this decade-old quote by Dan Geer. He’s addressing Internet service providers:

Hello, Uncle Sam here.

You can charge whatever you like based on the contents of what you are carrying, but you are responsible for that content if it is illegal; inspecting brings with it a responsibility for what you learn.

-or-

You can enjoy common carrier protections at all times, but you can neither inspect nor act on the contents of what you are carrying and can only charge for carriage itself. Bits are bits.

Choose wisely. No refunds or exchanges at this window.

We can revise this choice for the social-media age:

Hi Facebook/Twitter/YouTube/everyone else:

You can build a communications service based on inspecting user content and presenting it as you want, but that business model also conveys responsibility for that content.

-or-

You can be a communications service and enjoy the protections of CDA 230, in which case you cannot inspect or control the content you deliver.

Facebook would be an example of the former. WhatsApp would be an example of the latter.

I am honestly undecided about all of this. I want CDA 230 to protect things like the commenting section of this blog. But I don’t think it should protect dating apps when they are used as a conduit for abuse. And I really don’t want society to pay the cost for all the externalities inherent in Facebook’s business model.


Extracting Data from Smartphones

[2019.12.11] Privacy International has published a detailed, technical examination of how data is extracted from smartphones.


Scaring People into Supporting Backdoors

[2019.12.12] Back in 1998, Tim May warned us of the “Four Horsemen of the Infocalypse”: “terrorists, pedophiles, drug dealers, and money launderers.” I tended to cast it slightly differently. This is me from 2005:

Beware the Four Horsemen of the Information Apocalypse: terrorists, drug dealers, kidnappers, and child pornographers. Seems like you can scare any public into allowing the government to do anything with those four.

Which particular horseman is in vogue depends on time and circumstance. Since the terrorist attacks of 9/11, the US government has been pushing the terrorist scare story. Recently, it seems to have switched to pedophiles and child exploitation. It began in September, with a long New York Times story on child sex abuse, which included this dig at encryption:

And when tech companies cooperate fully, encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports. Reports to the authorities typically contain more than one image, and last year encompassed the record 45 million photos and videos, according to the National Center for Missing and Exploited Children.

(That’s wrong, by the way. Facebook Messenger already has an encrypted option. It’s just not turned on by default, like it is in WhatsApp.)

That was followed up by a conference by the US Department of Justice: “Lawless Spaces: Warrant Proof Encryption and its Impact on Child Exploitation Cases.” US Attorney General William Barr gave a speech on the subject. Then came an open letter to Facebook from Barr and others from the UK and Australia, using “protecting children” as the basis for their demand that the company not implement strong end-to-end encryption. (I signed on to another open letter in response.) Then, the FBI tried to get Interpol to publish a statement denouncing end-to-end encryption.

This week, the Senate Judiciary Committee held a hearing on backdoors: “Encryption and Lawful Access: Evaluating Benefits and Risks to Public Safety and Privacy.” Video, and written testimonies, are available at the link. Erik Neuenschwander from Apple was there to support strong encryption, but the other witnesses were all against it. New York District Attorney Cyrus Vance was true to form:

In fact, we were never able to view the contents of his phone because of this gift to sex traffickers that came, not from God, but from Apple.

It was a disturbing hearing. The Senators asked technical questions to people who couldn’t answer them. The result was that an adjunct law professor was able to frame the issue of strong encryption as an externality caused by corporate liability dumping, and another example of Silicon Valley’s anti-regulation stance.

Let me be clear. None of us who favor strong encryption is saying that child exploitation isn’t a serious crime, or a worldwide problem. We’re not saying that about kidnapping, international drug cartels, money laundering, or terrorism. We are saying three things. One, that strong encryption is necessary for personal and national security. Two, that weakening encryption does more harm than good. And three, that law enforcement has avenues for criminal investigation other than eavesdropping on communications and stored devices. This is one example, where people unraveled a dark-web website and arrested hundreds by analyzing Bitcoin transactions. This is another, where police arrested members of a WhatsApp group.

So let’s have reasoned policy debates about encryption—debates that are informed by technology. And let’s stop it with the scare stories.

EDITED TO ADD (12/13): The DoD just said that strong encryption is essential for national security.

All DoD issued unclassified mobile devices are required to be password protected using strong passwords. The Department also requires that data-in-transit, on DoD issued mobile devices, be encrypted (e.g. VPN) to protect DoD information and resources. The importance of strong encryption and VPNs for our mobile workforce is imperative. Last October, the Department outlined its layered cybersecurity approach to protect DoD information and resources, including service men and women, when using mobile communications capabilities.

[…]

As the use of mobile devices continues to expand, it is imperative that innovative security techniques, such as advanced encryption algorithms, are constantly maintained and improved to protect DoD information and resources. The Department believes maintaining a domestic climate for state of the art security and encryption is critical to the protection of our national security.


EFF on the Mechanics of Corporate Surveillance

[2019.12.13] EFF has published a comprehensible and very readable “deep dive” into the technologies of corporate surveillance, both on the Internet and off. Well worth reading and sharing.

Boing Boing post.


Upcoming Speaking Engagements

[2019.12.14] This is a current list of where and when I am scheduled to speak:

  • I’m speaking at SecIT by Heise in Hannover, Germany on March 26, 2020.

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, Click Here to Kill Everybody—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org.

Copyright © 2019 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.