March 15, 2018
by Bruce Schneier
CTO, IBM Resilient
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2018/…>. These same essays and news items appear in the “Schneier on Security” blog at <https://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Artificial Intelligence and the Attack/Defense Balance
- Writings on the Encryption Debate
- Schneier News
- Can Consumers’ Online Data Be Protected?
Artificial Intelligence and the Attack/Defense Balance

Artificial intelligence technologies have the potential to upend the longstanding advantage that attack has over defense on the Internet. This has to do with the relative strengths and weaknesses of people and computers, how those all interplay in Internet security, and where AI technologies might change things.
You can divide Internet security tasks into two sets: what humans do well and what computers do well. Traditionally, computers excel at speed, scale, and scope. They can launch attacks in milliseconds and infect millions of computers. They can scan computer code to look for particular kinds of vulnerabilities, and data packets to identify particular kinds of attacks.
Humans, conversely, excel at thinking and reasoning. They can look at the data and distinguish a real attack from a false alarm, understand the attack as it’s happening, and respond to it. They can find new sorts of vulnerabilities in systems. Humans are creative and adaptive, and can understand context.
Computers—so far, at least—are bad at what humans do well. They’re not creative or adaptive. They don’t understand context. They can behave irrationally because of those things.
Humans are slow, and get bored at repetitive tasks. They’re terrible at big data analysis. They use cognitive shortcuts, and can only keep a few data points in their head at a time. They can also behave irrationally because of those things.
AI will allow computers to take over Internet security tasks from humans, and then do them faster and at scale. Here are possible AI capabilities:
* Discovering new vulnerabilities—and, more importantly, new types of vulnerabilities—in systems, both by the offense to exploit and by the defense to patch, and then automatically exploiting or patching them.
* Reacting and adapting to an adversary’s actions, again both on the offense and defense sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment.
* Abstracting lessons from individual incidents, generalizing them across systems and networks, and applying those lessons to increase attack and defense effectiveness elsewhere.
* Identifying strategic and tactical trends from large datasets and using those trends to adapt attack and defense tactics.
That’s an incomplete list. I don’t think anyone can predict what AI technologies will be capable of. But it’s not unreasonable to look at what humans do today and imagine a future where AIs are doing the same things, only at computer speeds, scale, and scope.
Both attack and defense will benefit from AI technologies, but I believe that AI has the capability to tip the scales more toward defense. There will be better offensive and defensive AI techniques. But here’s the thing: defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.
Roy Amara famously said that we overestimate the short-term effects of new technologies, but underestimate their long-term effects. AI is notoriously hard to predict, so many of the details I speculate about are likely to be wrong—and AI is likely to introduce new asymmetries that we can’t foresee. But AI is the most promising technology I’ve seen for bringing defense up to par with offense. For Internet security, that will change everything.
This essay previously appeared in the March/April 2018 issue of IEEE Security & Privacy.
Good Washington Post op-ed on the need to use voter-verifiable paper ballots to secure elections, as well as risk-limiting audits.
Interesting history of the security of walls:
Facebook will verify the physical location of political ad buyers with paper postcards. It’s not a great solution, but it’s something:
Researchers have discovered new variants of Spectre and Meltdown. The software mitigations for Spectre and Meltdown seem to block these variants, although the eventual CPU fixes will have to be expanded to account for these new attacks.
People are harassing women by sending them anonymous packages purchased from Amazon. On the one hand, there is nothing new here; this could have happened decades ago, pre-Internet. But the Internet makes it easier, and the article points out that using prepaid gift cards makes it anonymous. I am curious whether these differences add up to a difference in kind, and what can be done about it.
I joined a letter supporting the Secure Elections Act (S. 2261):
Paul Manafort left an e-mail evidence trail because he couldn’t figure out how to edit a PDF.
Forbes reports that the Israeli company Cellebrite can probably unlock all iPhone models:
This story is based on some excellent reporting, but leaves a lot of questions unanswered. We don’t know exactly what was extracted from any of the phones: was it metadata or data, and what kind of metadata or data was it? What I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can’t prosecute them under the DMCA or its equivalents. There’s also a credible rumor that Cellebrite’s tools defeat only the mechanism that limits the number of password attempts; they do not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.
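The point about attempt limits can be made with back-of-the-envelope arithmetic. The sketch below assumes a hypothetical on-device guessing rate of 12 attempts per second once the limit is bypassed (real hardware-bound rates vary and are not from the article); it shows why a short PIN falls quickly while a strong passphrase remains out of reach.

```python
# Why defeating only the attempt limit still leaves strong passwords secure.
# The attempt rate below is an assumption for illustration only.

def time_to_exhaust(keyspace, attempts_per_second):
    """Seconds needed to try every candidate at the given rate."""
    return keyspace / attempts_per_second

RATE = 12.0  # assumed on-device attempts/second once rate-limiting is bypassed

pin_space = 10 ** 6          # 6-digit numeric PIN
passphrase_space = 62 ** 12  # 12 characters drawn from [a-zA-Z0-9]

pin_days = time_to_exhaust(pin_space, RATE) / 86400
phrase_years = time_to_exhaust(passphrase_space, RATE) / (86400 * 365)

print(f"6-digit PIN: ~{pin_days:.1f} days to exhaust")
print(f"12-char alphanumeric passphrase: ~{phrase_years:.2e} years to exhaust")
```

Even at much higher assumed rates, the gap between the two keyspaces is so many orders of magnitude that the conclusion holds.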
Another article, with more information. It looks like there’s an arms race going on between Apple and Cellebrite. At least, if Cellebrite is telling the truth—which it may or may not be.
Grayshift is another company that claims to unlock cell phones for a price.
Apple is bowing to pressure from the Chinese government and storing encryption keys in China. While I would prefer that it take a stand against China, I really can’t blame it for putting its business model ahead of its desire for customer privacy.
Last month, I blogged about the myriad hacking threats against the Olympics. Soon after that, the Washington Post reported that Russia hacked the Olympics network and tried to cast the blame on North Korea. Of course, the evidence is classified, so there’s no way to verify this claim. And while the article speculates that the hacks were retaliation for Russia being banned due to doping, that doesn’t ring true to me. If they tried to blame North Korea, it’s more likely that they’re trying to disrupt something between North Korea, South Korea, and the US. But I don’t know.
Since you don’t have enough to worry about, here’s a paper postulating that space aliens could send us malware capable of destroying humanity.
This is fascinating research about how the underlying training data for a machine-learning system can be inadvertently exposed. Basically, if a machine-learning system trains on a dataset that contains secret information, in some cases an attacker can query the system to extract that secret information. My guess is that there is a lot more research to be done here.
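A toy illustration (not the actual method from the research) of how memorization leaks training data: a character trigram model is trained on text containing a planted secret, and a greedy completion query recovers it. Everything here — the corpus, the fake SSN, the model — is invented for the sketch.

```python
# Toy sketch: a model that memorizes its training data can be queried
# to extract a secret planted in that data.
from collections import defaultdict, Counter

def train_trigram(text):
    """Count which character follows each two-character context."""
    model = defaultdict(Counter)
    for i in range(len(text) - 2):
        model[text[i:i + 2]][text[i + 2]] += 1
    return model

def complete(model, prompt, n):
    """Greedily extend the prompt by the most likely next character."""
    out = prompt
    for _ in range(n):
        nxt = model.get(out[-2:])
        if not nxt:
            break
        out += nxt.most_common(1)[0][0]
    return out

# Training data with a planted (fake) secret, repeated so it dominates.
corpus = "the ssn is 078-05-1120. " * 3 + "the weather is fine today. "
model = train_trigram(corpus)

# Querying with the right prefix extracts the memorized secret.
print(complete(model, "the ssn is ", 12))  # -> the ssn is 078-05-1120.
```

Real attacks against neural networks are far subtler, but the failure mode is the same: the model is a lossy copy of its training set, and the right queries can read that copy back out.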
More research on the topic:
Princeton’s Karen Levy has a good article on computer security and the intimate partner threat:
Interesting research: “Finding The Greedy, Prodigal, and Suicidal Contracts at Scale”:
A new DDoS reflection-attack variant multiplies attacks 51,000 times:
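The amplification figure in stories like this one is a simple ratio: bytes sent back by the reflector divided by bytes in the spoofed request. The request and response sizes below are illustrative assumptions, chosen only to show how a 51,000x factor arises.

```python
# How a reflection attack's bandwidth amplification factor is computed:
# response bytes emitted by the reflector / request bytes the attacker spoofs.

def amplification_factor(request_bytes, response_bytes):
    return response_bytes / request_bytes

# e.g. a ~15-byte spoofed UDP query eliciting a ~765 KB cached response
# (illustrative numbers, not measurements from the article)
print(amplification_factor(15, 765_000))  # -> 51000.0
```

The attacker spends almost no bandwidth; the reflector does the flooding on its behalf, which is why open amplifiers are so dangerous.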
History of the US Army Security Agency in the early years of Cold War Germany.
Responding to the lack of diversity at the RSA Conference, a group of security experts has announced a competing one-day conference: OUR Security Advocates, or OURSA. It’s in San Francisco, and it’s during RSA, so you can attend both.
The CEO of Trustico e-mailed the private keys for 23,000 TLS certificates. This is a wacky story on so many levels.
One of the effects of GDPR—the new EU General Data Protection Regulation—is that we’re all going to be learning a lot more about who collects our data and what they do with it. Consider PayPal, which just released a list of over 600 companies it shares customer data with.
Is 600 companies unusual? Is it more than average? Less? We’ll soon know.
Writings on the Encryption Debate

Seems like everyone is writing about encryption and backdoors this season.
* The National Academies has just published “Decrypting the Encryption Debate: A Framework for Decision Makers.”
* R Street published “Policy Approaches to the Encryption Debate,” by Charles Duan, Arthur Rizer, Zach Graves and Mike Godwin.
* The East West Institute published their report: “Encryption Policy in Democratic Regimes.”
Schneier News

I am speaking on a panel at the Boston Museum of Science on 4/11:
Can Consumers’ Online Data Be Protected?

Everything online is hackable. This is true for Equifax’s data and the federal Office of Personnel Management’s data, which was hacked in 2015. If information is on a computer connected to the Internet, it is vulnerable.
But just because everything is hackable doesn’t mean everything will be hacked. The difference between the two is complex, and filled with defensive technologies, security best practices, consumer awareness, the motivation and skill of the hacker and the desirability of the data. The risks will be different if an attacker is a criminal who just wants credit card details—and doesn’t care where he gets them from—or the Chinese military looking for specific data from a specific place.
The proper question isn’t whether it’s possible to protect consumer data, but whether a particular site protects our data well enough for the benefits provided by that site. And here, again, there are complications.
In most cases, it’s impossible for consumers to make informed decisions about whether their data is protected. We have no idea what sorts of security measures Google uses to protect our highly intimate Web search data or our personal e-mails. We have no idea what sorts of security measures Facebook uses to protect our posts and conversations.
We have a feeling that these big companies do better than smaller ones. But we’re also surprised when a lone individual publishes personal data hacked from the infidelity site AshleyMadison.com, or when the North Korean government does the same with personal information in Sony’s network.
Think about all the companies collecting personal data about you—the websites you visit, your smartphone and its apps, your Internet-connected car—and how little you know about their security practices. Even worse, credit bureaus and data brokers like Equifax collect your personal information without your knowledge or consent.
So while it might be possible for companies to do a better job of protecting our data, you as a consumer are in no position to demand such protection.
Government policy is the missing ingredient. We need standards and a method for enforcement. We need liabilities and the ability to sue companies that poorly secure our data. The biggest reason companies don’t protect our data online is that it’s cheaper not to. Government policy is how we change that.
This essay appeared as half of a point/counterpoint with Priscilla Regan, in a CQ Researcher report titled “Privacy and the Internet.”
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and CTO of IBM Resilient and Special Advisor to IBM Security. See <https://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM Resilient.
Copyright (c) 2018 by Bruce Schneier.