
Friday Squid Blogging: Toraiz SQUID Digital Sequencer

Pioneer DJ has a new sequencer: the Toraiz SQUID: Sequencer Inspirational Device.

The 16-track sequencer is designed around jamming and performance with a host of features to create “happy accidents” and trigger random sequences, modulations and chords. There are 16 RGB pads for playing in your melodies and beats, and up to 64 patterns for each of the 16 tracks. There are eight notes of polyphony per track too, and a Harmonizer section to quickly input pre-determined chord shapes into your pattern, with up to six saved chords.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on April 26, 2019 at 4:14 PM

Towards an Information Operations Kill Chain

Cyberattacks don’t magically happen; they involve a series of steps. And far from being helpless, defenders can disrupt the attack at any of those steps. This framing has led to something called the “cybersecurity kill chain”: a way of thinking about cyber defense in terms of disrupting the attacker’s process.

On a similar note, it’s time to conceptualize the “information operations kill chain.” Information attacks against democracies, whether they’re attempts to polarize political processes or to increase mistrust in social institutions, also involve a series of steps. And enumerating those steps will clarify possibilities for defense.

I first heard of this concept from Anthony Soules, a former National Security Agency (NSA) employee who now leads cybersecurity strategy for Amgen. He used the steps from the 1980s Russian “Operation Infektion,” designed to spread the rumor that the U.S. created the HIV virus as part of a weapons research program. A 2018 New York Times opinion video series on the operation described the Russian disinformation playbook in a series of seven “commandments,” or steps. The information landscape has changed since 1980, and information operations have changed as well. I have updated, and added to, those steps to bring them into the present day:

  • Step 1: Find the cracks in the fabric of society—the social, demographic, economic and ethnic divisions.
  • Step 2: Seed distortion by creating alternative narratives. In the 1980s, this was a single “big lie,” but today it is more about many contradictory alternative truths—a “firehose of falsehood”—that distorts the political debate.
  • Step 3: Wrap those narratives around kernels of truth. A core of fact helps the falsities spread.
  • Step 4: (This step is new.) Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives.
  • Step 5: Conceal your hand; make it seem as if the stories came from somewhere else.
  • Step 6: Cultivate “useful idiots” who believe and amplify the narratives. Encourage them to take positions even more extreme than they would otherwise.
  • Step 7: Deny involvement, even if the truth is obvious.
  • Step 8: Play the long game. Strive for long-term impact over immediate impact.

These attacks have been so effective in part because, as victims, we weren’t aware of how they worked. Identifying these steps makes it possible to conceptualize and develop countermeasures designed to disrupt information operations. The result is the information operations kill chain:

  • Step 1: Find the cracks. There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the “common political knowledge” necessary for democracies to function. We need to strengthen that shared knowledge, thereby making it harder to exploit the inevitable cracks. We need to make it unacceptable—or at least costly—for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. We need to become reflexively suspicious of information that makes us angry at our fellow citizens. We cannot entirely fix the cracks, as they emerge from the diversity that makes democracies strong; but we can make them harder to exploit.
  • Step 2: Seed distortion. We need to teach better digital literacy. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.
  • Step 3: Wrap the narratives around kernels of truth. Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. We need to ensure that the true narrative is legitimized and promoted.
  • Step 4: Build audiences. This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Here, the defenses center around making disinformation efforts less effective. Social media companies need to detect and delete accounts belonging to propagandists and bots and groups run by those propagandists.
  • Step 5: Conceal your hand. Here the answer is attribution, attribution, attribution. The quicker we can publicly attribute information operations, the more effectively we can defend against them. This will require efforts by both the social media platforms and the intelligence community, not just to detect information operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.
  • Step 6: Cultivate useful idiots. We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth—in the words of one report, we must “make the truth louder.” Of course, there will always be true believers whom no amount of fact-checking or counter-speech will convince; this is not intended for them. Focus instead on persuading the persuadable.
  • Step 7: Deny everything. When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA’s Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.
  • Step 8: Play the long game. Counterattacks can disrupt the attacker’s ability to maintain information operations, as U.S. Cyber Command did during the 2018 midterm elections. The NSA’s new policy of “persistent engagement” (see the article by, and interview with, U.S. Cyber Command commander Gen. Paul Nakasone here) is a strategy to achieve this. Defenders can play the long game, too. We need to better encourage people to think for the long term: beyond the next election cycle or quarterly earnings report.
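One way to see the attack/defense pairing at a glance is as a simple lookup table. The sketch below is just a restatement of the two lists above—step names and defense summaries are paraphrased and abbreviated—not an operational tool:

```python
# The information operations kill chain as a lookup table: each attack
# step paired with the defensive responses outlined above (summaries
# are paraphrased and abbreviated).
KILL_CHAIN = {
    1: ("Find the cracks",
        ["Strengthen common political knowledge",
         "Raise the cost of domestic disinformation techniques"]),
    2: ("Seed distortion",
        ["Teach digital literacy"]),
    3: ("Wrap narratives around kernels of truth",
        ["Replace fake news with accurate counter-narratives"]),
    4: ("Build audiences",
        ["Detect and delete propagandist and bot accounts"]),
    5: ("Conceal your hand",
        ["Rapid public attribution", "Ad and algorithm transparency"]),
    6: ("Cultivate useful idiots",
        ["Diminish amplifiers; make the truth louder"]),
    7: ("Deny everything",
        ["Publish convincing evidence alongside attribution"]),
    8: ("Play the long game",
        ["Persistent engagement", "Long-term thinking by defenders"]),
}

def defenses_for(step: int) -> list[str]:
    """Return the defensive measures paired with a given attack step."""
    _name, defenses = KILL_CHAIN[step]
    return defenses
```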

Permeating all of this is the importance of deterrence. Yes, we need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of information operations, if we can publicly attribute—if we can respond either diplomatically or otherwise—we can deter these attacks from nation-states. But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they’re surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. Deterring them will require a different theory.

None of these defensive actions is sufficient on its own. In this way, the information operations kill chain differs significantly from the more traditional cybersecurity kill chain. The latter defends against a series of steps taken sequentially by the attacker against a single target—a network or an organization—and disrupting any one of those steps disrupts the entire attack. The information operations kill chain is fuzzier. Steps overlap. They can be conducted out of order. It’s a patchwork that can span multiple social media sites and news channels. It requires, as Henry Farrell and I have postulated, thinking of democracy itself as an information system. Disrupting an information operation will require more than disrupting one step at one time. The parallel isn’t perfect, but it’s a taxonomy by which to consider the range of possible defenses.

This information operations kill chain is a work in progress. If anyone has any other ideas for disrupting different steps of the information operations kill chain, please comment below. I will update this in a future essay.

This essay previously appeared on Lawfare.com.

EDITED TO ADD (10/10): I have updated the kill chain. (Blog link here.) Please use the updated version.

Posted on April 26, 2019 at 6:09 AM

G7 Comes Out in Favor of Encryption Backdoors

From a G7 meeting of interior ministers in Paris this month, an “outcome document”:

Encourage Internet companies to establish lawful access solutions for their products and services, including data that is encrypted, for law enforcement and competent authorities to access digital evidence, when it is removed or hosted on IT servers located abroad or encrypted, without imposing any particular technology and while ensuring that assistance requested from internet companies is underpinned by the rule of law and due process protection. Some G7 countries highlight the importance of not prohibiting, limiting, or weakening encryption;

There is a weird belief amongst policy makers that hacking an encryption system’s key management system is fundamentally different than hacking the system’s encryption algorithm. The difference is only technical; the effect is the same. Both are ways of weakening encryption.
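A toy example makes the equivalence concrete. In the sketch below (the XOR “cipher” and the escrow table are illustrative stand-ins, not a real design), the encryption algorithm is never touched; compromising the key-management layer alone is enough to read every message:

```python
import secrets

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR with a repeating key. It stands in for any
    # strong cipher; the weakness below is in key management, not here.
    # XOR is its own inverse, so the same function also decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ESCROW = {}  # the "lawful access" copy of every session key

def send_message(msg: bytes) -> tuple[int, bytes]:
    """Encrypt a message under a fresh key; key management escrows the key."""
    key = secrets.token_bytes(32)
    msg_id = len(ESCROW)
    ESCROW[msg_id] = key          # key management hands a copy to escrow
    return msg_id, xor_encrypt(key, msg)

def attacker_decrypt(msg_id: int, ciphertext: bytes) -> bytes:
    # An attacker who compromises the escrow database -- never the cipher
    # itself -- recovers every plaintext, with exactly the same effect as
    # breaking the encryption algorithm.
    return xor_encrypt(ESCROW[msg_id], ciphertext)
```

Whether the attacker breaks the algorithm or steals the escrowed keys, the outcome is identical: the ciphertext protects nothing.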

Posted on April 23, 2019 at 9:14 AM

Excellent Analysis of the Boeing 737 Max Software Problems

This is the best analysis of the software causes of the Boeing 737 MAX disasters that I have read.

Technically this is safety and not security; there was no attacker. But the fields are closely related and there are a lot of lessons for IoT security—and the security of complex socio-technical systems in general—in here.

EDITED TO ADD (4/30): A rebuttal of sorts.

EDITED TO ADD (5/13): The comments to this blog post are of particularly high quality, and I recommend them to anyone interested in the topic.

Posted on April 22, 2019 at 8:45 AM

New DNS Hijacking Attacks

DNS hijacking isn’t new, but this seems to be an attack of unprecedented scale:

Researchers at Cisco’s Talos security division on Wednesday revealed that a hacker group it’s calling Sea Turtle carried out a broad campaign of espionage via DNS hijacking, hitting 40 different organizations. In the process, they went so far as to compromise multiple country-code top-level domains—the suffixes like .co.uk or .ru that end a foreign web address—putting all the traffic of every domain in multiple countries at risk.

The hackers’ victims include telecoms, internet service providers, and domain registrars responsible for implementing the domain name system. But the majority of the victims and the ultimate targets, Cisco believes, were a collection of mostly governmental organizations, including ministries of foreign affairs, intelligence agencies, military targets, and energy-related groups, all based in the Middle East and North Africa. By corrupting the internet’s directory system, hackers were able to silently use “man in the middle” attacks to intercept all internet data from email to web traffic sent to those victim organizations.

[…]

Cisco Talos said it couldn’t determine the nationality of the Sea Turtle hackers, and declined to name the specific targets of their spying operations. But it did provide a list of the countries where victims were located: Albania, Armenia, Cyprus, Egypt, Iraq, Jordan, Lebanon, Libya, Syria, Turkey, and the United Arab Emirates. Cisco’s Craig Williams confirmed that Armenia’s .am top-level domain was one of the “handful” that were compromised, but wouldn’t say which of the other countries’ top-level domains were similarly hijacked.

Another news article.
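The mechanics of the attack can be sketched in a few lines. The hostname and addresses below are invented (the IPs come from the reserved documentation ranges), and a real hijack alters records at registrars or TLD operators rather than a local table—but the client-side trust failure is the same:

```python
# A resolver modeled as a simple table from hostname to IP address.
dns_records = {
    "mail.example-ministry.gov": "192.0.2.10",   # legitimate mail server
}

def resolve(hostname: str) -> str:
    """Look up a hostname, as a stub resolver would."""
    return dns_records[hostname]

def connect(hostname: str) -> str:
    # Clients trust whatever the directory returns; there is no
    # independent check that the address belongs to the real operator.
    return f"connected to {resolve(hostname)}"

# The hijack: the attacker changes the record at the registry level.
# Every client now transparently reaches the attacker's proxy, which
# can harvest credentials and relay traffic onward (man in the middle).
dns_records["mail.example-ministry.gov"] = "203.0.113.66"  # attacker proxy
```

Because the corruption happens above the victims—in the directory itself—nothing on the victims’ own networks changes, which is what makes this class of attack so hard to notice.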

Posted on April 18, 2019 at 5:13 AM
