President Obama Talks About AI Risk, Cybersecurity, and More

Interesting interview:

Obama: Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cybersecurity continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we've got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.

What I spend a lot of time worrying about are things like pandemics. You can't build walls in order to prevent the next airborne lethal flu from landing on our shores. Instead, what we need to be able to do is set up systems to create public health systems in all parts of the world, click triggers that tell us when we see something emerging, and make sure we've got quick protocols and systems that allow us to make vaccines a lot smarter. So if you take a public health model, and you think about how we can deal with, you know, the problems of cybersecurity, a lot may end up being really helpful in thinking about the AI threats.

Posted on October 20, 2016 at 6:16 AM

Bypassing Intel's ASLR

Researchers have discovered a clever attack that bypasses address space layout randomization (ASLR) on Intel's CPUs.

Here's the paper. It discusses several possible mitigation techniques.
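
For readers who haven't run into it, ASLR randomizes where a process's code, heap, stack, and shared libraries land in memory, so an exploit can't rely on hard-coded addresses. The following is a generic illustration of that randomization -- my own sketch, not the attack described in the paper -- and it assumes Python on Linux or macOS, where ctypes.CDLL(None) exposes the already-loaded C library. Run it a few times: with ASLR enabled, the printed addresses change on every run, and that is the uncertainty the researchers' attack manages to defeat.

    # Generic ASLR illustration (not the attack in the paper): print where libc
    # and a heap allocation land in this process's address space. With ASLR
    # enabled, these addresses differ from run to run.
    # Assumes Linux or macOS, where CDLL(None) exposes the loaded C library.
    import ctypes

    libc = ctypes.CDLL(None)
    printf_addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
    heap_buf = ctypes.create_string_buffer(64)

    print(f"libc printf() loaded at: {printf_addr:#x}")
    print(f"heap buffer at:          {ctypes.addressof(heap_buf):#x}")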

Posted on October 19, 2016 at 2:19 PM

Security Lessons from a Power Saw

Lance Spitzner looks at the safety features of a power saw and tries to apply them to Internet security:

By the way, here are some of the key safety features that are built into the DeWalt Mitre Saw. Notice in all three of these the human does not have to do anything special, just use the device. This is how we need to think from a security perspective.

  • Safety Cover: There is a plastic safety cover that protects the entire rotating blade. The only time the blade is actually exposed is when you lower the saw to actually cut into the wood. The moment you start to raise the blade after cutting, the plastic cover protects everything again. This means that to hurt yourself you have to manually lower the blade with one hand and then insert your other hand into the cutting-blade zone.

  • Power Switch: Actually, there is no power switch. Instead, after the saw is plugged in, to activate the saw you have to depress a lever. Let the lever go and the saw stops. This means that if you fall, slip, black out, have a heart attack, or have any other type of accident and let go of the lever, the saw automatically stops. In other words, the saw always fails to the off (safe) position.

  • Shadow: The saw has a light that projects a shadow of the cutting blade precisely on the wood where the blade will cut. No guessing where the blade is going to cut.

Safety is like security: you cannot eliminate risk. But I feel this is a great example of how security can learn from others about how to take people into account.
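
The power lever in particular maps cleanly onto software: there is no persistent "on" state, the dangerous operation runs only while the operator actively holds the lever, and every failure path ends with the blade stopped. Here is a minimal sketch of that dead-man-switch, fail-closed pattern; the class name and the toy example are mine, not Spitzner's.

    # Sketch of the saw's dead-man switch as a software pattern: the risky
    # operation runs only while a "hold" signal keeps arriving, and every exit
    # path -- release, exception, or crash -- lands in the safe (off) state.
    import time

    class DeadMansSwitch:
        def __init__(self):
            self.active = False

        def run_while_held(self, lever_is_held, step, interval=0.01):
            """Run step() repeatedly, but only while lever_is_held() returns True."""
            try:
                while lever_is_held():
                    self.active = True
                    step()                 # one increment of the risky work
                    time.sleep(interval)
            finally:
                self.active = False        # like the saw, the default state is OFF

    # Toy usage: the "motor" runs for ten ticks, then the lever is released.
    ticks = 0

    def lever_is_held():
        return ticks < 10

    def step():
        global ticks
        ticks += 1

    DeadMansSwitch().run_while_held(lever_is_held, step)
    print("lever released; motor stopped after", ticks, "ticks")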

Posted on October 19, 2016 at 6:45 AM

Intelligence Oversight and How It Can Fail

Former NSA attorneys John DeLong and Susan Hennessey have written a fascinating article describing a particular incident of oversight failure inside the NSA. Technically, the story hinges on a definitional difference between what the NSA and the FISA court each meant by the word "archived." (For the record, I would have defaulted to the NSA's interpretation, which feels more accurate technically.) But while the story is worth reading, what's especially interesting are the broader issues about how a nontechnical judiciary can provide oversight of a very technical data collection-and-analysis organization -- especially if that oversight must largely be conducted in secret.

From the article:

Broader root cause analysis aside, the BR FISA debacle made clear that the specific matter of shared legal interpretation needed to be addressed. Moving forward, the government agreed that NSA would coordinate all significant legal interpretations with DOJ. That sounds like an easy solution, but making it meaningful in practice is highly complex. Consider this example: a court order might require that "all collected data must be deleted after two years." NSA engineers must then make a list for the NSA attorneys:

  1. What does deleted mean? Does it mean make inaccessible to analysts or does it mean forensically wipe off the system so data is gone forever? Or does it mean something in between?

  2. What about backup systems used solely for disaster recovery? Does the data need to be removed there, too, within two years, even though it's largely inaccessible and typically there is a planned delay to account for mistakes in the operational system?

  3. When does the timer start?

  4. What's the legally relevant unit of measurement for timestamp computation -- a day, an hour, a second, a millisecond?

  5. If a piece of data is deleted one second after two years, is that an incident of noncompliance? What about a delay of one day? ....

  6. What about various system logs that simply record the fact that NSA had a data object, but no significant details of the actual object? Do those logs need to be deleted too? If so, how soon?

  7. What about hard copy printouts?

And that is only a tiny sample of the questions that need to be answered for that small sentence fragment. Put yourself in the shoes of an NSA attorney: which of these questions -- in particular the answers -- require significant interpretations to be coordinated with DOJ and which determinations can be made internally?

Now put yourself in the shoes of a DOJ attorney who receives from an NSA attorney a subset of this list for advice and counsel. Which questions are truly significant from your perspective? Are there any questions here that are so significant they should be presented to the Court, so that the government can be sufficiently confident that the Court understands how the two-year rule is really being interpreted and applied?
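
To see why the engineers can't write a line of code until those questions are answered, here is a hypothetical sketch -- mine, not the article's -- of a two-year retention rule in which every one of the ambiguities above has to become an explicit parameter rather than a silent assumption:

    # Hypothetical sketch: "all collected data must be deleted after two years"
    # only becomes implementable once each question above has an answer, and
    # every answer shows up here as an explicit, auditable parameter.
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from enum import Enum
    from typing import Optional

    class DeleteMode(Enum):
        MARK_INACCESSIBLE = 1        # Q1: hidden from analysts, bits still on disk
        FORENSIC_WIPE = 2            # Q1: gone forever

    @dataclass
    class RetentionPolicy:
        period: timedelta = timedelta(days=2 * 365)       # Q3/Q4: which clock, which unit?
        grace: timedelta = timedelta(0)                    # Q5: is one second (or one day) late an incident?
        mode: DeleteMode = DeleteMode.MARK_INACCESSIBLE    # Q1: what "deleted" means
        covers_backups: bool = False                       # Q2: disaster-recovery copies too?
        covers_logs: bool = False                          # Q6: metadata-only system logs?
        covers_hard_copies: bool = False                   # Q7: printouts?

        def is_noncompliant(self, collected_at: datetime,
                            deleted_at: Optional[datetime],
                            now: datetime) -> bool:
            """True if this data object is, or was, kept past its deadline."""
            deadline = collected_at + self.period + self.grace
            effective = deleted_at if deleted_at is not None else now
            return effective > deadline

Each default above is a legal interpretation in disguise, which is exactly the point of the list: someone has to decide whether it is an engineering detail or a question for DOJ and the Court.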

In many places I have distinguished between two kinds of oversight: are we doing things right, versus are we doing the right things? This story is very much about the first kind: is the NSA complying with the rules the courts impose on it? I believe that the NSA tries very hard to follow the rules it's given, while at the same time being very aggressive about how it interprets any ambiguities and using its nonadversarial relationship with its overseers to its advantage.

The only possible solution I can see to all of this is more public scrutiny. Secrecy is toxic here.

Posted on October 18, 2016 at 2:29 PM

Virtual Kidnapping

This is a harrowing story of a scam artist who convinced a mother that her daughter had been kidnapped. More stories are here. It's unclear whether these virtual kidnappers use data about their victims or just call people at random and hope to get lucky. Either way, it's a new criminal use of smartphones and ubiquitous information.

Reminds me of the scammers who call low-wage workers at retail establishments late at night and convince them to do outlandish and occasionally dangerous things.

Posted on October 17, 2016 at 6:28 AM

Friday Squid Blogging: Barramundi with Squid Ink Risotto

Squid ink risotto is a good accompaniment for any mild fish.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on October 14, 2016 at 4:20 PM

Cybersecurity Issues for the Next Administration

On today's Internet, too much power is concentrated in too few hands. In the early days of the Internet, individuals were empowered. Now governments and corporations hold the balance of power. If we are to leave a better Internet for the next generations, governments need to rebalance Internet power more towards the individual. This means several things.

First, less surveillance. Surveillance has become the business model of the Internet, and one that governments worldwide find appealing. While computers make it easier to collect data, and networks make it easier to aggregate it, governments should do more to ensure that any surveillance is exceptional, transparent, regulated, and targeted. It's a tall order; governments such as that of the US need to overcome their own mass-surveillance desires, and at the same time implement regulations to fetter the ability of Internet companies to do the same.

Second, less censorship. The early days of the Internet were free of censorship, but no more. Many countries censor their Internet for a variety of political and moral reasons, and many large social networking platforms do the same thing for business reasons. Turkey censors anti-government political speech; many countries censor pornography. Facebook has censored both nudity and videos of police brutality. Governments need to commit to the free flow of information, and to make it harder for others to censor.

Third, less propaganda. One of the side effects of free speech is erroneous speech. This naturally corrects itself when everybody can speak, but an Internet with centralized power is one that invites propaganda. For example, both China and Russia actively use propagandists to influence public opinion on social media. The more governments can do to counter propaganda in all its forms, the better off we all are.

And fourth, less use control. Governments need to ensure that our Internet systems are open, not closed, and that neither totalitarian governments nor large corporations can limit what we do on them. This includes limits on which apps you can run on your smartphone, what you can do with the digital files you purchase, and what happens to the data collected by the digital devices you own. Controls inhibit innovation: technical, business, and social.

Solutions require both corporate regulation and international cooperation. They require Internet governance to remain in the hands of the global community of engineers, companies, civil society groups, and Internet users. They require governments to be agile in the face of an ever-evolving Internet. And they'll result in more power and control for the individual and less for powerful institutions. That's how we built an Internet that enshrined the best of our societies, and that's how we'll keep it for future generations.

This essay previously appeared on, in a section about issues for the next president. It was supposed to appear in the print magazine, but was preempted by Donald Trump coverage.

Posted on October 14, 2016 at 6:20 AM

The Psychological Impact of Doing Classified Intelligence Work

Richard Thieme gave a talk on the psychological impact of doing classified intelligence work. Summary here.

Posted on October 12, 2016 at 6:43 AM
