April 15, 2017
by Bruce Schneier
CTO, IBM Resilient
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2017/…>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- The TSA’s Selective Laptop Ban
- WikiLeaks Not Disclosing CIA-Hoarded Vulnerabilities to Companies
- Shadow Brokers Releases the Rest of Their NSA Hacking Tools
- Congress Removes FCC Privacy Protections on Your Internet Usage
- Incident Response as “Hand-to-Hand Combat”
- Schneier News
- Fourth WikiLeaks CIA Attack Tool Dump
- Security Orchestration and Incident Response
- Commenting Policy for My Blog
The TSA’s Selective Laptop Ban
In late March, the TSA announced a peculiar new security measure to take effect within 96 hours. Passengers flying into the US on foreign airlines from eight Muslim countries would be prohibited from carrying aboard any electronics larger than a smartphone. They would have to be checked and put into the cargo hold. And now the UK is following suit.
It’s difficult to make sense of this as a security measure, particularly at a time when many people question the veracity of government orders, but other explanations are either unsatisfying or damning.
So let’s look at the security aspects of this first. Laptop computers aren’t inherently dangerous, but they’re convenient carrying boxes. This is why, in the past, TSA officials have demanded passengers turn their laptops on: to confirm that they’re actually laptops and not laptop cases emptied of their electronics and then filled with explosives.
Forcing a would-be bomber to put larger laptops in the plane’s hold is a reasonable defense against this threat, because it increases the complexity of the plot. Both the shoe-bomber Richard Reid and the underwear bomber Umar Farouk Abdulmutallab carried crude bombs aboard their planes with the plan to set them off manually once aloft. Setting off a bomb in checked baggage is more work, which is why we don’t see more midair explosions like Pan Am Flight 103 over Lockerbie, Scotland, in 1988.
Security measures that restrict what passengers can carry onto planes are not unprecedented either. Airport security regularly responds to both actual attacks and intelligence regarding future attacks. After the liquid bombers were captured in 2006, the British banned all carry-on luggage except passports and wallets. I remember talking with a friend who traveled home from London with his daughters in those early weeks of the ban. They reported that airport security officials confiscated every tube of lip balm they tried to hide.
Similarly, the US started checking shoes after Reid, installed full-body scanners after Abdulmutallab and restricted liquids in 2006. But all of those measures were global, and most lessened in severity as the threat diminished.
This current restriction implies some specific intelligence of a laptop-based plot and a temporary ban to address it. However, if that’s the case, why only certain non-US carriers? And why only certain airports? Terrorists are smart enough to put a laptop bomb in checked baggage from the Middle East to Europe and then carry it on from Europe to the US.
Why not require passengers to turn their laptops on as they go through security? That would be a more effective security measure than forcing them to check them in their luggage. And lastly, why is there a delay between the ban being announced and it taking effect?
Even more confusing, the New York Times reported that “officials called the directive an attempt to address gaps in foreign airport security, and said it was not based on any specific or credible threat of an imminent attack.” The Department of Homeland Security FAQ page makes this general statement, “Yes, intelligence is one aspect of every security-related decision,” but doesn’t provide a specific security threat. And yet a report from the UK states the ban “follows the receipt of specific intelligence reports.”
Of course, the details are all classified, which leaves all of us security experts scratching our heads. On the face of it, the ban makes little sense.
One analysis painted this as a protectionist measure targeted at the heavily subsidized Middle Eastern airlines by hitting them where it hurts the most: high-paying business class travelers who need their laptops with them on planes to get work done. That reasoning makes more sense than any security-related explanation, but doesn’t explain why the British extended the ban to UK carriers as well. Or why this measure won’t backfire when those Middle Eastern countries turn around and ban laptops on American carriers in retaliation. And one aviation official told CNN that an intelligence official informed him it was not a “political move.”
In the end, national security measures based on secret information require us to trust the government. That trust is at historic low levels right now, so people both in the US and other countries are rightly skeptical of the official unsatisfying explanations. The new laptop ban highlights this mistrust.
This essay previously appeared on CNN.com.
Here are two essays that look at the possible political motivations, and fallout, of this ban.
The EFF rightly points out that letting a laptop out of your hands and sight is itself a security risk—for the passenger.
An astute blog reader pointed out that it isn’t really fair to say that the ban doesn’t affect US airlines. It doesn’t, but that’s only because no US airlines fly out of those countries. When the UK implemented the ban, it affected all airlines flying out of those countries, including UK airlines.
WikiLeaks Not Disclosing CIA-Hoarded Vulnerabilities to Companies
WikiLeaks has started publishing a large collection of classified CIA documents, including information on several—possibly many—unpublished (i.e., zero-day) vulnerabilities in computing equipment used by Americans. Despite assurances that the US government prioritizes defense over offense, it seems that the CIA was hoarding vulnerabilities. (It’s not just the CIA; last year we learned that the NSA is, too.)
Publishing those vulnerabilities means that they’ll get fixed, but it also means that they’ll be used by criminals and other governments in the window between when they’re published and when they’re patched. WikiLeaks has said that it’s going to do the right thing and privately disclose those vulnerabilities to the companies first.
This process seems to be hitting some snags:
WikiLeaks included a document in the email, requesting the companies to sign off on a series of conditions before being able to receive the actual technical details to deploy patches, according to sources. It’s unclear what the conditions are, but a source mentioned a 90-day disclosure deadline, which would compel companies to commit to issuing a patch within three months.
I’m okay with a 90-day window; that seems reasonable. But I have no idea what the other conditions are, and how onerous they are.
Honestly, at this point the CIA should do the right thing and disclose all the vulnerabilities to the companies. They’re burned as CIA attack tools. I have every confidence that Russia, China, and several other countries can hack WikiLeaks and get their hands on a copy. By now, their primary value is for defense. The CIA should bypass WikiLeaks and get the vulnerabilities fixed as soon as possible.
Shadow Brokers Releases the Rest of Their NSA Hacking Tools
Last August, an unknown group called the Shadow Brokers released a bunch of NSA tools to the public. The common guesses were that the tools were discovered on an external staging server, and that the hack and release was the work of the Russians (back then, that wasn’t controversial). This was me:
Okay, so let’s think about the game theory here. Some group stole all of this data in 2013 and kept it secret for three years. Now they want the world to know it was stolen. Which governments might behave this way? The obvious list is short: China and Russia. Were I betting, I would bet Russia, and that it’s a signal to the Obama Administration: “Before you even think of sanctioning us for the DNC hack, know where we’ve been and what we can do to you.”
They published a second, encrypted, file. My speculation:
They claim to be auctioning off the rest of the data to the highest bidder. I think that’s PR nonsense. More likely, that second file is random nonsense, and this is all we’re going to get. It’s a lot, though.
I was wrong. On November 1, the Shadow Brokers released some more documents, and two days ago they released the key to that original encrypted archive:
EQGRP-Auction-Files is CrDj"(;Va.*NdlnzB9M?@K2)#>deB7mN
I don’t think their statement is worth reading for content. I still believe that Russia is a more likely perpetrator than China.
There’s not much yet on the contents of this dump of top-secret NSA hacking tools, but it can’t be a fun weekend at Ft. Meade. I’m sure that by now they have enough information to know exactly where and when the data got stolen, and maybe even detailed information on who did it. My guess is that we’ll never see that information, though.
Me in August:
Me in November:
The current release:
Researchers have demonstrated using Intel’s Software Guard Extensions to hide malware and steal cryptographic keys from inside SGX’s protected enclave.
An interesting history of US information warfare.
A new, frighteningly good, Gmail phishing scam:
There’s some new research on security vulnerabilities in mobile MAC randomization. Basically, iOS and Android phones are not very good at randomizing their MAC addresses, and tricks with layer-2 control frames can exploit weaknesses in their chipsets.
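For reference, a correctly randomized Wi-Fi probe address sets the locally administered bit and clears the multicast (group) bit in the first octet. A minimal Python sketch of what proper per-probe randomization looks like, not any phone’s actual implementation:

```python
import random

def random_mac():
    """Generate a locally administered, unicast MAC address.

    In the first octet, bit 1 (locally administered) must be set and
    bit 0 (multicast) must be clear, so a valid first octet ends in
    2, 6, a, or e in hex.
    """
    octets = [random.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] & 0b11111100) | 0b00000010
    return ":".join(f"{o:02x}" for o in octets)

mac = random_mac()
first_octet = int(mac.split(":")[0], 16)
assert first_octet & 0b10 == 0b10   # locally administered
assert first_octet & 0b01 == 0      # unicast
print(mac)
```

The research’s point is that the randomization itself is only half the job: phones that reuse an address across probes, or leak the real address in control frames, are trackable regardless.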
This is William Friedman’s highly annotated copy of Herbert Yardley’s book, “The American Black Chamber.”
Here is a listing of all the documents that the NSA has in its archives that are dated earlier than 1930.
Turkish hackers are threatening to erase millions of iCloud user accounts unless Apple pays a ransom. This is a weird story, and I’m skeptical of some of the details. Presumably Apple has decided that it’s smarter to spend the money on secure backups and other security measures than to pay the ransom. But we’ll see how this unfolds.
There are more CIA documents up on WikiLeaks. It seems to be mostly MacOS and iOS—including exploits that are installed on the hardware before they’re delivered to the customer. Apple claims that the vulnerabilities are all fixed. Note that there are almost certainly other Apple vulnerabilities in the documents still to be released.
Kalyna is a block cipher that became a Ukrainian national standard in 2015. It supports block and key sizes of 128, 256, and 512 bits. Its structure resembles AES, but it is optimized for 64-bit CPUs and has a complicated key schedule. Rounds range from 10 to 18, depending on block and key sizes.
There is some mention of cryptanalysis on reduced-round versions in the Wikipedia entry.
And here are the other submissions to the standard.
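Kalyna’s parameter sets pair each block size with a key of equal or double length. The round counts below are as I understand them from the published descriptions of the cipher and should be checked against the DSTU 7624:2014 standard before relying on them:

```python
# Kalyna parameter sets: (block bits, key bits) -> rounds.
# Values per published descriptions of DSTU 7624:2014; verify
# against the standard itself before depending on them.
KALYNA_ROUNDS = {
    (128, 128): 10,
    (128, 256): 14,
    (256, 256): 14,
    (256, 512): 18,
    (512, 512): 18,
}

def kalyna_rounds(block_bits, key_bits):
    """Look up the round count; the key must equal or double the block size."""
    try:
        return KALYNA_ROUNDS[(block_bits, key_bits)]
    except KeyError:
        raise ValueError("unsupported Kalyna block/key combination")

print(kalyna_rounds(128, 256))  # 14
```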
An interesting story of uncovering an anonymous Internet social media account: FBI Director James Comey’s Twitter account.
Not content with having a fleet of insecure surveillance drones, the state of Connecticut wants a fleet of insecure weaponized drones. What could possibly go wrong?
Interesting acoustic attack against the MEMS accelerometers in devices like Fitbits. This is not that big a deal with things like Fitbits, but as IoT devices get more autonomous—and start making decisions and then putting them into effect automatically—these vulnerabilities will become critical.
Interesting law journal article: “Encryption and the Press Clause,” by D. Victoria Baranetsky.
This is an interesting combination of computer and physical attack against an ATM:
There’s a new report of a nation-state attack, presumed to be from China, on a series of managed ISPs. I know nothing more than what’s in this report, but it looks like a big one.
There’s a blog post from Google’s Project Zero detailing an attack against Android phones over Wi-Fi.
A detailed account of a hack against a Brazilian bank:
There’s a new malware called BrickerBot that permanently disables vulnerable IoT devices by corrupting their storage capability and reconfiguring kernel parameters. Right now, it targets devices with open Telnet ports, but we should assume that future versions will have other infection mechanisms.
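On the defensive side, the exposure BrickerBot exploits is easy to check for: does the device answer on TCP port 23? A minimal Python sketch:

```python
import socket

def telnet_open(host, port=23, timeout=2.0):
    """Return True if a TCP connection to the given port succeeds.

    A device that answers on port 23 accepts Telnet connections and,
    if it also ships with default credentials, is in the population
    that BrickerBot-style malware targets.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, running `telnet_open("192.168.1.50")` against a device on your own network (the address here is a placeholder) tells you whether it is exposed; the real fix is to disable Telnet and change default credentials.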
Interesting paper: “Dial One for Scam: A Large-Scale Analysis of Technical Support Scams”:
I regularly say that, on the Internet, attack is easier than defense. There are a bunch of reasons for this, but primarily it’s 1) the complexity of modern networked computer systems and 2) the attacker’s ability to choose the time and method of the attack versus the defender’s necessity to secure against every type of attack. This is true, but how this translates to military cyber-operations is less straightforward. Contrary to popular belief, government cyberattacks are not bolts out of the blue, and the attack/defense balance is more…well…balanced. Rebecca Slayton has a good article in “International Security” that tries to make sense of this: “What is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment.” In it, she points out that launching a cyberattack is more than finding and exploiting a vulnerability, and it is those other things that help balance the offensive advantage.
Ross Anderson liveblogged the 2017 Security Protocols Workshop.
Carnegie Mellon University has released a comprehensive list of C++ secure-coding best practices.
Congress Removes FCC Privacy Protections on Your Internet Usage
Think about all of the websites you visit every day. Now imagine if the likes of Time Warner, AT&T, and Verizon collected *all* of your browsing history and sold it on to the highest bidder. That’s what will probably happen if Congress has its way.
This week, lawmakers voted to allow Internet service providers to violate your privacy for their own profit. Not only have they voted to repeal a rule that protects your privacy, they are also trying to make it illegal for the Federal Communications Commission to enact other rules to protect your privacy online.
That this is not provoking greater outcry illustrates how much we’ve ceded any willingness to shape our technological future to for-profit companies and are allowing them to do it for us.
There are a lot of reasons to be worried about this. Because your Internet service provider controls your connection to the Internet, it is in a position to see everything you do on the Internet. Unlike a search engine or social networking platform or news site, you can’t easily switch to a competitor. And there’s not a lot of competition in the market, either. If you have a choice between two high-speed providers in the US, consider yourself lucky.
What can telecom companies do with this newly granted power to spy on everything you’re doing? Of course they can sell your data to marketers—and the inevitable criminals and foreign governments who also line up to buy it. But they can do more creepy things as well.
They can snoop through your traffic and insert their own ads. They can deploy systems that remove encryption so they can better eavesdrop. They can redirect your searches to other sites. They can install surveillance software on your computers and phones. None of these are hypothetical.
They’re all things Internet service providers have done before, and they are some of the reasons the FCC tried to protect your privacy in the first place. And now they’ll be able to do all of these things in secret, without your knowledge or consent. And, of course, governments worldwide will have access to these powers. And all of that data will be at risk of hacking, by criminals and other governments alike.
Telecom companies have argued that other Internet players already have these creepy powers—although they didn’t use the word “creepy”—so why should they not have them as well? It’s a valid point.
Surveillance is already the business model of the Internet, and literally hundreds of companies spy on your Internet activity against your interests and for their own profit.
Your e-mail provider already knows everything you write to your family, friends, and colleagues. Google already knows our hopes, fears, and interests, because that’s what we search for.
Your cellular provider already tracks your physical location at all times: it knows where you live, where you work, when you go to sleep at night, when you wake up in the morning, and—because everyone has a smartphone—who you spend time with and who you sleep with.
And some of the things these companies do with that power are no less creepy. Facebook has run experiments in manipulating your mood by changing what you see on your news feed. Uber used its ride data to identify one-night stands. Even Sony once installed spyware on customers’ computers to try to detect whether they copied music files.
Aside from spying for profit, companies can spy for other purposes. Uber has already considered using data it collects to intimidate a journalist. Imagine what an Internet service provider can do with the data it collects: against politicians, against the media, against rivals.
Of course the telecom companies want a piece of the surveillance capitalism pie. Despite dwindling revenues, increasing use of ad blockers, and increases in click fraud, violating our privacy is still a profitable business—especially if it’s done in secret.
The bigger question is: why do we allow for-profit corporations to create our technological future in ways that are optimized for their profits and anathema to our own interests?
When markets work well, different companies compete on price and features, and society collectively rewards better products by purchasing them. This mechanism fails if there is no competition, or if rival companies choose not to compete on a particular feature. It fails when customers are unable to switch to competitors. And it fails when what companies do remains secret.
Unlike service providers like Google and Facebook, telecom companies are infrastructure that requires government involvement and regulation. The practical impossibility of consumers learning the extent of surveillance by their Internet service providers, combined with the difficulty of switching them, means that the decision about whether to be spied on should be with the consumer and not a telecom giant. That this new bill reverses that is both wrong and harmful.
Today, technology is changing the fabric of our society faster than at any other time in history. We have big questions that we need to tackle: not just privacy, but questions of freedom, fairness, and liberty. Algorithms are making decisions about policing and healthcare.
Driverless vehicles are making decisions about traffic and safety. Warfare is increasingly being fought remotely and autonomously. Censorship is on the rise globally. Propaganda is being promulgated more efficiently than ever. These problems won’t go away. If anything, the Internet of things and the computerization of every aspect of our lives will make it worse.
In today’s political climate, it seems impossible that Congress would legislate these things to our benefit. Right now, regulatory agencies such as the FTC and FCC are our best hope to protect our privacy and security against rampant corporate power. That Congress has decided to reduce that power leaves us at enormous risk.
It’s too late to do anything about this bill—Trump will certainly sign it—but we need to be alert to future bills that reduce our privacy and security.
This post previously appeared on the Guardian.
EFF on this:
Broadband competition in the US:
Verizon’s 2014 actions:
A “don’t worry” argument:
Uber’s data analysis:
Uber intimidating a journalist:
The ad bubble:
Former FCC Commissioner Tom Wheeler wrote a good op-ed on the subject.
And here’s an essay laying out what this all means to the average Internet user.
States are stepping in:
Incident Response as “Hand-to-Hand Combat”
NSA Deputy Director Richard Ledgett described a 2014 Russian cyberattack against the US State Department as “hand-to-hand” combat:
“It was hand-to-hand combat,” said NSA Deputy Director Richard Ledgett, who described the incident at a recent cyber forum, but did not name the nation behind it. The culprit was identified by other current and former officials. Ledgett said the attackers’ thrust-and-parry moves inside the network while defenders were trying to kick them out amounted to “a new level of interaction between a cyber attacker and a defender.”
Fortunately, Ledgett said, the NSA, whose hackers penetrate foreign adversaries’ systems to glean intelligence, was able to spy on the attackers’ tools and tactics. “So we were able to see them teeing up new things to do,” Ledgett said. “That’s a really useful capability to have.”
I think this is the first public admission that we spy on foreign governments’ cyberwarriors for defensive purposes. He’s right: being able to spy on the attackers’ networks and see what they’re doing before they do it is a very useful capability. It’s something that was first exposed by the Snowden documents: that the NSA spies on enemy networks for defensive purposes.
It’s also interesting that another country discovered the intrusion first, and that it has offensive capabilities inside Russia’s cyberattack units as well:
The NSA was alerted to the compromises by a Western intelligence agency. The ally had managed to hack not only the Russians’ computers, but also the surveillance cameras inside their workspace, according to the former officials. They monitored the hackers as they maneuvered inside the U.S. systems and as they walked in and out of the workspace, and were able to see faces, the officials said.
There’s a myth that it’s hard for the US to attribute these sorts of cyberattacks. It used to be, but for the US—and other countries with this kind of intelligence-gathering capability—attribution is not hard. It’s not fast, which is its own problem, and of course it’s not perfect; but it’s not hard.
Snowden documents on this:
Me on attribution:
Schneier News

I’m speaking at the International Information Security Conference in Madrid on May 11, 2017.
I have written a paper with Orin Kerr on encryption workarounds. Our goal wasn’t to make any policy recommendations. (That was a good thing, since we probably don’t agree on any.) Our goal was to present a taxonomy of different workarounds, and discuss their technical and legal characteristics and complications. The paper is complete, but we’ll be revising it once more before final publication. Comments are appreciated.
Fourth WikiLeaks CIA Attack Tool Dump
WikiLeaks is obviously playing their top secret CIA data cache for as much press as they can, leaking the documents a little at a time. On Friday they published their fourth set of documents from what they call “Vault 7”:
27 documents from the CIA’s Grasshopper framework, a platform used to build customized malware payloads for Microsoft Windows operating systems.
We have absolutely no idea who leaked this one. When they first started appearing, I suspected that it was not an insider because there wasn’t anything illegal in the documents. There still isn’t, but let me explain further. The CIA documents are all hacking tools. There’s nothing about programs or targets. Think about the Snowden leaks: it was the information about programs that targeted Americans, programs that swept up much of the world’s information, programs that demonstrated particularly powerful NSA capabilities. There’s nothing like that in the CIA leaks. They’re just hacking tools. All they demonstrate is that the CIA hoards vulnerabilities contrary to the government’s stated position, but we already knew that.
This was my guess from March:
If I had to guess right now, I’d say the documents came from an outsider and not an insider. My reasoning: One, there is absolutely nothing illegal in the contents of any of this stuff. It’s exactly what you’d expect the CIA to be doing in cyberspace. That makes the whistleblower motive less likely. And two, the documents are a few years old, making this more like the Shadow Brokers than Edward Snowden. An internal leaker would leak quickly. A foreign intelligence agency—like the Russians—would use the documents while they were fresh and valuable, and only expose them when the embarrassment value was greater.
But, as I said last month, no one has any idea: we’re all guessing. (Well, to be fair, I hope the CIA knows exactly who did this. Or, at least, exactly where the documents were stolen from.) And I hope the inability of either the NSA or CIA to keep its own attack tools secret will cause them to rethink their decision to hoard vulnerabilities in common Internet systems instead of fixing them.
Latest WikiLeaks dump:
Me on the CIA dump:
US Government hoarding vulnerabilities:
Security Orchestration and Incident Response
Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers—sometimes with the addition of machine learning or other artificial intelligence techniques—and to respond to attacks at computer speeds.
While this is a laudable goal, there’s a fundamental problem with doing this in the short term. You can only automate what you’re certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them—security orchestration, not automation.
This isn’t just a choice of words—it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.
Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory.
But along the way, we learned a lot about how the feeling of certainty affects military thinking. Last month, I attended a lecture on the topic by H.R. McMaster. This was before he became President Trump’s national security advisor-designate. Then, he was the director of the Army Capabilities Integration Center. His lecture touched on many topics, but at one point he talked about the failure of the RMA. He confirmed that military strategists mistakenly believed that data would give them certainty. But he took this further, outlining the ways this belief in certainty had repercussions in how military strategists thought about modern conflict.
McMaster’s observations are directly relevant to Internet security incident response. We too have been led to believe that data will give us certainty, and we are making the same mistakes that the military did in the 1990s. In a world of uncertainty, there’s a premium on understanding, because commanders need to figure out what’s going on. In a world of certainty, knowing what’s going on becomes a simple matter of data collection.
I see this same fallacy in Internet security. Many companies exhibiting at the RSA Conference promised to collect and display more data and that the data will reveal everything. This simply isn’t true. Data does not equal information, and information does not equal understanding. We need data, but we also must prioritize understanding the data we have over collecting ever more data. Much like the problems with bulk surveillance, the “collect it all” approach provides minimal value over collecting the specific data that’s useful.
In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning. I see this manifesting in Internet security as well. My own Resilient Systems—now part of IBM Security—allows incident response teams to manage security incidents and intrusions. While the tool is useful for planning and testing, its real focus is always on execution.
Uncertainty demands initiative, while certainty demands synchronization. Here, again, we are heading too far down the wrong path. The purpose of all incident response tools should be to make the human responders more effective. They need both the freedom to exercise initiative and the capability to exercise it effectively.
When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important. Good incident response teams know that decentralization goes hand in hand with initiative. And finally, a world of uncertainty prioritizes command, while a world of certainty prioritizes control. Again, effective incident response teams know this, and effective managers aren’t scared to release and delegate control.
Like the US military, we in the incident response field have shifted too much into the world of certainty. We have prioritized data collection, preplanning, synchronization, centralization and control. You can see it in the way people talk about the future of Internet security, and you can see it in the products and services offered on the show floor of the RSA Conference.
Automation, too, is fixed. Incident response needs to be dynamic and agile, because you are never certain and there is an adaptive, malicious adversary on the other end. You need a response system that has human controls and can modify itself on the fly. Automation just doesn’t allow a system to do that to the extent that’s needed in today’s environment. Just as the military shifted from trying to replace the soldier to making the best soldier possible, we need to do the same.
For some time, I have been talking about incident response in terms of OODA loops. This is a way of thinking about real-time adversarial relationships, originally developed for airplane dogfights, but much more broadly applicable. OODA stands for observe-orient-decide-act, and it’s what people responding to a cybersecurity incident do constantly, over and over again. We need tools that augment each of those four steps. These tools need to operate in a world of uncertainty, where there is never enough data to know everything that is going on. We need to prioritize understanding, execution, initiative, decentralization and command.
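The observe-orient-decide-act cycle can be sketched as a simple control loop. Everything below (the event strings, the playbook entries) is illustrative, not any particular product’s logic:

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    events: list = field(default_factory=list)   # observe: raw telemetry
    assessment: str = "unknown"                  # orient: current working theory
    actions: list = field(default_factory=list)  # act: responses taken so far

def orient(incident):
    # Orient: turn raw observations into a working assessment.
    if any("telnet" in e for e in incident.events):
        incident.assessment = "exposed-service"
    elif incident.events:
        incident.assessment = "suspicious"
    return incident.assessment

def decide(assessment):
    # Decide: pick a response; in practice a human reviews this step.
    playbook = {"exposed-service": "close-port", "suspicious": "investigate"}
    return playbook.get(assessment, "monitor")

def act(incident, action):
    # Act: record (and in a real system, execute) the chosen response.
    incident.actions.append(action)

# One pass through the loop: observe -> orient -> decide -> act.
inc = Incident(events=["inbound telnet login from unknown host"])
act(inc, decide(orient(inc)))
print(inc.assessment, inc.actions)  # exposed-service ['close-port']
```

The point of tooling is to speed each of the four steps for the human in the loop, not to remove the loop: real incidents feed new observations back in, and the assessment keeps changing.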
At the same time, we’re going to have to make all of this scale. If anything, the most seductive promise of a world of certainty and automation is that it allows defense to scale. The problem is that we’re not there yet. We can automate and scale parts of IT security, such as antivirus, automatic patching and firewall management, but we can’t yet scale incident response. We still need people. And we need to understand what can be automated and what can’t be.
The word I prefer is orchestration. Security orchestration represents the union of people, process and technology. It’s computer automation where it works, and human coordination where that’s necessary. It’s networked systems giving people understanding and capabilities for execution. It’s making those on the front lines of incident response the most effective they can be, instead of trying to replace them. It’s the best approach we have for cyberdefense.
Automation has its place. If you think about the product categories where it has worked, they’re all areas where we have pretty strong certainty. Automation works in antivirus, firewalls, patch management and authentication systems. None of them is perfect, but all those systems are right almost all the time, and we’ve developed ancillary systems to deal with it when they’re wrong.
Automation fails in incident response because there’s too much uncertainty. Actions can be automated once the people understand what’s going on, but people are still required. For example, IBM’s Watson for Cyber Security provides insights for incident response teams based on its ability to ingest and find patterns in an enormous amount of freeform data. It does not attempt the level of understanding necessary to take people out of the equation.
From within an orchestration model, automation can be incredibly powerful. But it’s the human-centric orchestration model—the dashboards, the reports, the collaboration—that makes automation work. Otherwise, you’re blindly trusting the machine. And when an uncertain process is automated, the results can be dangerous.
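That division of labor — automation executes, a human decision gates anything high-impact — can be sketched in a few lines. The action names and the approval callback here are hypothetical; the callback stands in for the dashboards and collaboration layer where a person actually reviews the proposed step:

```python
# Low-risk, well-understood actions that are safe to run automatically.
AUTO_APPROVED = {"collect_logs", "snapshot_memory"}

def run_action(action, analyst_approves):
    """Run well-understood actions automatically; gate the rest on a person.

    `analyst_approves` is a callable representing the human review step.
    """
    if action in AUTO_APPROVED:
        return f"executed {action} automatically"
    if analyst_approves(action):
        return f"executed {action} after human approval"
    return f"held {action} for further investigation"

print(run_action("collect_logs", analyst_approves=lambda a: False))
print(run_action("wipe_host", analyst_approves=lambda a: False))
```

The design choice is the whole argument in miniature: the certain, routine steps scale through automation, while the uncertain, destructive ones stay behind a human judgment call.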
Technology continues to advance, and all of this is a moving target. Eventually, computers will become intelligent enough to replace people at real-time incident response. My guess, though, is that computers are not going to get there by collecting enough data to be certain. More likely, they’ll develop the ability to exhibit understanding and operate in a world of uncertainty. That’s a much harder goal.
Yes, today, this is all science fiction. But it’s not stupid science fiction, and it might become reality during the lifetimes of our children. Until then, we need people in the loop. Orchestration is a way to achieve that.
This essay previously appeared on the Security Intelligence blog.
Commenting Policy for My Blog
Over the past few months, I have been watching my blog comments decline in civility. I blame it in part on the contentious US election and its aftermath. It’s also a consequence of not requiring visitors to register in order to post comments, and of our tolerance for impassioned conversation. Whatever the causes, I’m tired of it. Partisan nastiness is driving away visitors who might otherwise have valuable insights to offer.
I have been engaging in more active comment moderation. What that means is that I have been quicker to delete posts that are rude, insulting, or off-topic. This is my blog. I consider the comments section as analogous to a gathering at my home. It’s not a town square. Everyone is expected to be polite and respectful, and if you’re an unpleasant guest, I’m going to ask you to leave. Your freedom of speech does not compel me to publish your words.
I like people who disagree with me. I like debate. I even like arguments. But I expect everyone to behave as if they’ve been invited into my home.
I realize that I sometimes express opinions on political matters; I find they are relevant to security at all levels. On those posts, I welcome on-topic comments regarding those opinions. I don’t welcome people pissing and moaning about the fact that I’ve expressed my opinion on something other than security technology. As I said, it’s my blog.
So, please… Assume good faith. Be polite. Minimize profanity. Argue facts, not personalities. Stay on topic. If you want a model to emulate, look at Clive Robinson’s posts.
Schneier on Security is not a professional operation. There’s no advertising, so no revenue to hire staff. My part-time moderator—paid out of my own pocket—and I do what we can when we can. If you see a comment that’s spam, or off-topic, or an ad hominem attack, flag it and be patient. Don’t reply or engage; we’ll get to it. And we won’t always post an explanation when we delete something.
My own stance on privacy and anonymity means that I’m not going to require commenters to register a name or e-mail address, so that isn’t an option. And I really don’t want to disable comments.
I dislike having to deal with this problem. I’ve been proud and happy to see how interesting and useful the comments section has been all these years. I’ve watched many blogs and discussion groups descend into toxicity as a result of trolls and drive-by ideologues derailing the conversations of regular posters. I’m not going to let that happen here.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and CTO of IBM Resilient and Special Advisor to IBM Security. See <https://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM Resilient.
Copyright (c) 2017 by Bruce Schneier.