What Happened to Cyber 9/11?

A recent article in the Atlantic asks why we haven't seen a "cyber 9/11" in the past fifteen or so years. (I, too, remember the increasingly frantic and fearful warnings of a "cyber Pearl Harbor," "cyber Katrina" -- when that was a thing -- or "cyber 9/11." I made fun of those warnings back then.) The author's answer:

Three main barriers are likely preventing this. For one, cyberattacks can lack the kind of drama and immediate physical carnage that terrorists seek. Identifying the specific perpetrator of a cyberattack can also be difficult, meaning terrorists might have trouble reaping the propaganda benefits of clear attribution. Finally, and most simply, it's possible that they just can't pull it off.

Commenting on the article, Rob Graham adds:

I think there are lots of warnings from so-called "experts" who aren't qualified to make such warnings, and that the press errs on the side of giving such warnings credibility instead of challenging them.

I think mostly the reason why cyberterrorism doesn't happen is that what motivates violent people is different from what motivates technical people, pulling apart the groups who would want to commit cyberterrorism from those who can.

These are all good reasons, but I think both authors missed the most important one: there simply aren't a lot of terrorists out there. Let's ask the question more generally: why hasn't there been another 9/11 since 2001? I also remember dire predictions that large-scale terrorism was the new normal, and that we would see 9/11-scale attacks regularly. But since then, nothing. We could credit the fantastic counterterrorism work of the US and other countries, but a more reasonable explanation is that there are very few terrorists and even fewer organized ones. Our fear of terrorism is far greater than the actual risk.

This isn't to say that cyberterrorism can never happen. Of course it will, sooner or later. But I don't foresee it becoming a preferred terrorism method anytime soon. Graham again:

In the end, if your goal is to cause major power blackouts, your best bet is to bomb power lines and distribution centers, rather than hack them.

Posted on November 19, 2018 at 6:50 AM

Worst-Case Thinking Breeds Fear and Irrationality

Here's a crazy story from the UK. Basically, someone sees a man and a little girl leaving a shopping center. Instead of thinking "it must be a father and daughter, which happens millions of times a day and is perfectly normal," he thinks "this is obviously a case of child abduction and I must alert the authorities immediately." And the police, instead of thinking "why in the world would this be a kidnapping and not a normal parental activity," think "oh my god, we must all panic immediately." And they do, scrambling helicopters, searching cars leaving the shopping center, and going door-to-door looking for clues. Seven hours later, the police realized that the girl was safe, asleep in bed.

Lenore Skenazy writes further:

Can we agree that something is wrong when we leap to the worst possible conclusion upon seeing something that is actually nice? In an email Furedi added that now, "Some fathers told me that they think and look around before they kiss their kids in public. Society is all too ready to interpret the most innocent of gestures as a prelude to abusing a child."

So our job is to try to push the re-set button.

If you see an adult with a child in plain daylight, it is not irresponsible to assume they are caregiver and child. Remember the stat from David Finkelhor, head of the Crimes Against Children Research Center at the University of New Hampshire. He has heard of NO CASE of a child kidnapped from its parents in public and sold into sex trafficking.

We are wired to see "Taken" when we're actually witnessing something far less exciting called Everyday Life. Let's tune in to reality.

This is the problem with the "see something, say something" mentality. As I wrote back in 2007:

If you ask amateurs to act as front-line security personnel, you shouldn't be surprised when you get amateur security.

And the police need to understand the base-rate fallacy better.
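
To make the base-rate point concrete, here is a back-of-the-envelope Bayes calculation. All of the numbers are invented for illustration: even a bystander whose suspicions are remarkably accurate will almost always be wrong, simply because genuine stranger abductions are vanishingly rare compared to ordinary adults walking with children.

    /* Back-of-the-envelope Bayes calculation for the base-rate fallacy.
     * All numbers are hypothetical, chosen only to illustrate the effect. */
    #include <stdio.h>

    int main(void) {
        double base_rate   = 1e-7;  /* fraction of adult-child pairs in public that are abductions */
        double hit_rate    = 0.99;  /* chance an observer flags a real abduction */
        double false_alarm = 0.01;  /* chance an observer flags an innocent pair */

        /* Bayes' theorem: P(abduction | flagged) */
        double p_flagged = hit_rate * base_rate + false_alarm * (1.0 - base_rate);
        double posterior = (hit_rate * base_rate) / p_flagged;

        printf("P(abduction | report) = %.6f%%\n", posterior * 100.0);
        /* Prints roughly 0.001%: even this very alert observer is wrong
         * more than 99.99% of the time. */
        return 0;
    }

Change the hypothetical numbers however you like; as long as the base rate stays tiny, false alarms dominate.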

Posted on November 18, 2018 at 1:12 PM

Israeli Surveillance Gear

The Israel Defense Forces mounted a botched raid in Gaza. They were attempting to install surveillance gear, which they ended up leaving behind. (There are photos -- scroll past the video.) Israeli media is claiming that the capture of this gear by Hamas caused major damage to Israeli electronic surveillance capabilities. The Israelis themselves destroyed the vehicle the commandos used to enter Gaza. I'm guessing they did so because there was more gear in it they didn't want falling into the Palestinians' hands.

Can anyone intelligently speculate about what the photos show? And if there are other photos on the Internet, please post them.

Posted on November 18, 2018 at 6:26 AM

Mailing Tech Support a Bomb

I understand his frustration, but this is extreme:

When police asked Cryptopay what could have motivated Salonen to send the company a pipe bomb -- or, rather, two pipe bombs, which is what investigators found when they picked apart the explosive package -- the only thing the company could think of was that it had declined his request for a password change.

In August 2017, Salonen, a customer of Cryptopay, emailed their customer services team to ask for a new password. They refused, given that it was against the company's privacy policy.

A fair point, as it's never a good idea to send a new password in an email. A password-reset link is safer all round, although it's not clear if Cryptopay offered this option to Salonen.
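
For contrast, here is a minimal sketch of the reset-link approach -- purely illustrative, assuming a Unix-like /dev/urandom as the randomness source and a made-up example.com endpoint, and in no way Cryptopay's actual system. The point is that the server never puts a password in email at all; it mails a random, single-use, short-lived token.

    /* Minimal sketch (hypothetical) of a password-reset-link flow:
     * email a random, expiring, single-use token -- never a password. */
    #include <stdio.h>
    #include <time.h>

    #define TOKEN_BYTES 32

    /* Fill `hex` with the hex encoding of TOKEN_BYTES random bytes from /dev/urandom. */
    static int make_token(char *hex, size_t hexlen) {
        unsigned char raw[TOKEN_BYTES];
        FILE *f = fopen("/dev/urandom", "rb");
        if (!f)
            return -1;
        size_t n = fread(raw, 1, sizeof raw, f);
        fclose(f);
        if (n != sizeof raw || hexlen < 2 * sizeof raw + 1)
            return -1;
        for (size_t i = 0; i < sizeof raw; i++)
            sprintf(hex + 2 * i, "%02x", raw[i]);
        return 0;
    }

    int main(void) {
        char token[2 * TOKEN_BYTES + 1];
        if (make_token(token, sizeof token) != 0)
            return 1;

        /* In a real deployment you would store a hash of the token next to the
         * account, expire it after a few minutes, and invalidate it on first use. */
        time_t expires = time(NULL) + 15 * 60;

        printf("Reset link to email (never the password itself):\n");
        printf("https://example.com/reset?token=%s\nvalid until %s",
               token, ctime(&expires));
        return 0;
    }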

Posted on November 16, 2018 at 2:11 PM

Hidden Cameras in Streetlights

Both the US Drug Enforcement Administration (DEA) and Immigration and Customs Enforcement (ICE) are hiding surveillance cameras in streetlights.

According to government procurement data, the DEA has paid a Houston, Texas company called Cowboy Streetlight Concealments LLC roughly $22,000 since June 2018 for "video recording and reproducing equipment." ICE paid out about $28,000 to Cowboy Streetlight Concealments over the same period of time.

It's unclear where the DEA and ICE streetlight cameras have been installed, or where the next deployments will take place. ICE offices in Dallas, Houston, and San Antonio have provided funding for recent acquisitions from Cowboy Streetlight Concealments; the DEA's most recent purchases were funded by the agency's Office of Investigative Technology, which is located in Lorton, Virginia.

Fifty thousand dollars doesn't buy a lot of streetlight surveillance cameras, so either this is a pilot program or there are a lot more procurements elsewhere that we don't know about.

Posted on November 16, 2018 at 6:02 AM

Chip Cards Fail to Reduce Credit Card Fraud in the US

A new study finds that credit card fraud has not declined since the introduction of chip cards in the US. The majority of stolen card information comes from hacked point-of-sale terminals.

The reasons seem to be twofold. One, the US uses chip-and-signature instead of chip-and-PIN, obviating the most critical security benefit of the chip. And two, US merchants still accept magnetic stripe cards, meaning that thieves can steal credentials from a chip card and create a working cloned mag stripe card.

Boing Boing post.

Posted on November 15, 2018 at 6:24 AM

More Spectre/Meltdown-Like Attacks

Back in January, we learned about a class of vulnerabilities against microprocessors that leverages various performance and efficiency shortcuts for attack. I wrote that the first two attacks would be just the start:

It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

We saw several variants over the year. And now researchers have discovered seven more.

Researchers say they've discovered the seven new CPU attacks while performing "a sound and extensible systematization of transient execution attacks" -- a catch-all term the research team used to describe attacks on the various internal mechanisms that a CPU uses to process data, such as the speculative execution process, the CPU's internal caches, and other internal execution stages.

The research team says they've successfully demonstrated all seven attacks with proof-of-concept code. Experiments to confirm six other Meltdown-attacks did not succeed, according to a graph published by researchers.
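
For readers who haven't seen one of these gadgets, here is essentially the bounds-check-bypass example from the original Spectre (variant 1) paper -- the simplest illustration of a performance shortcut, speculative execution, leaking data through the cache. It's only the canonical example; the seven new attacks target other transient-execution mechanisms.

    /* Spectre variant 1 (bounds-check bypass), essentially the gadget from the
     * original Kocher et al. proof of concept. Architecturally the bounds check
     * always holds; speculatively, a mistrained branch predictor lets the CPU
     * read array1[x] out of bounds, and the secret byte's value determines
     * which cache line of array2 gets loaded -- a footprint the attacker can
     * later recover with a cache-timing measurement such as Flush+Reload. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];   /* one page-sized stride per possible byte value */
    unsigned int array1_size = 16;
    uint8_t temp;                 /* keeps the compiler from optimizing the read away */

    void victim_function(size_t x) {
        if (x < array1_size) {                 /* the check the CPU speculates past */
            temp &= array2[array1[x] * 4096];  /* secret-dependent cache access */
        }
    }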

Microprocessor designers have spent the year rethinking the security of their architectures. My guess is that they have a lot more rethinking to do.

Posted on November 14, 2018 at 3:30 PM

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:


The list is maintained on this page.

Posted on November 14, 2018 at 8:03 AM

Oracle and "Responsible Disclosure"

I've been writing about "responsible disclosure" for over a decade; here's an essay from 2007. Basically, it's a tacit agreement between researchers and software vendors. Researchers agree to withhold their work until software companies fix the vulnerabilities, and software vendors agree not to harass researchers and to fix the vulnerabilities quickly.

When that agreement breaks down, things go bad quickly. This story is about a researcher who published an Oracle zero-day because Oracle has a history of harassing researchers and ignoring vulnerabilities.

Software vendors might not like responsible disclosure, but it's the best solution we have. Making it illegal to publish vulnerabilities without the vendor's consent means that they won't get fixed quickly -- and everyone will be less secure. It also means less security research.

This will become even more critical with software that affects the world in a direct physical manner, like cars and airplanes. Responsible disclosure makes us safer, but it only works if software vendors take the vulnerabilities seriously and fix them quickly. Without any regulations that enforce that, the threat of disclosure is the only incentive we can impose on software vendors.

Posted on November 14, 2018 at 6:46 AM
