Entries Tagged "movie-plot threats"


Science Fiction Writers Helping Imagine Future Threats

The French army is going to put together a team of science fiction writers to help imagine future threats.

Leaving aside the question of whether science fiction writers are better or worse at envisioning nonfictional futures, this isn’t new. The US Department of Homeland Security did the same thing over a decade ago, and I wrote about it back then:

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats — which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Science fiction writers are creative, and creativity helps in any future scenario brainstorming. But please, keep the people who actually know science and technology in charge.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

Posted on July 23, 2019 at 6:27 AM

Conspiracy Theories around the "Presidential Alert"

Noted conspiracy theorist John McAfee tweeted:

The “Presidential alerts”: they are capable of accessing the E911 chip in your phones — giving them full access to your location, microphone, camera and every function of your phone. This not a rant, this is from me, still one of the leading cybersecurity experts. Wake up people!

This is, of course, ridiculous. I don’t even know what an “E911 chip” is. And — honestly — if the NSA wanted in your phone, they would be a lot more subtle than this.

RT has picked up the story, though.

(If they just called it a “FEMA Alert,” there would be a lot less stress about the whole thing.)

Posted on October 4, 2018 at 3:03 PM

Subway Elevators and Movie-Plot Threats

Local residents are opposing adding an elevator to a subway station because terrorists might use it to detonate a bomb. No, really. There’s no actual threat analysis, only fear:

“The idea that people can then ride in on the subway with a bomb or whatever and come straight up in an elevator is awful to me,” said Claudia Ward, who lives in 15 Broad Street and was among a group of neighbors who denounced the plan at a recent meeting of the local community board. “It’s too easy for someone to slip through. And I just don’t want my family and my neighbors to be the collateral on that.”

[…]

Local residents plan to continue to fight, said Ms. Gerstman, noting that her building’s board decided against putting decorative planters at the building’s entrance over fears that shards could injure people in the event of a blast.

“Knowing that, and then seeing the proposal for giant glass structures in front of my building — ding ding ding! — what does a giant glass structure become in the event of an explosion?” she said.

In 2005, I coined the term “movie-plot threat” to denote a threat scenario that caused undue fear solely because of its specificity. Longtime readers of this blog will remember my annual Movie-Plot Threat Contests. I ended the contest in 2015 because I thought the meme had played itself out. Clearly there’s more work to be done.

Posted on January 30, 2018 at 6:26 AM

Jumping Air Gaps with Blinking Lights and Drones

Researchers have demonstrated how a malicious piece of software in an air-gapped computer can communicate with a nearby drone using a blinking LED on the computer.

I have mixed feelings about research like this. On the one hand, it’s pretty cool. On the other hand, there’s not really anything new or novel, and it’s kind of a movie-plot threat.
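The core idea of this kind of covert channel is simple on-off keying: the malware modulates an LED, and a camera (here, on the drone) samples the LED state once per bit period to recover the data. Here's a minimal sketch of that encoding; the function names and the 10 ms bit period are my own illustrative choices, not details from the paper:

```python
def bits_from_bytes(data: bytes):
    """Expand bytes into a list of bits, most significant bit first."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def bytes_from_bits(bits):
    """Pack a list of bits (MSB first) back into bytes."""
    out = bytearray()
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

def to_led_schedule(data: bytes, bit_ms: int = 10):
    """Turn a message into (led_on, duration_ms) pairs: a 1 bit lights the LED."""
    return [(bit == 1, bit_ms) for bit in bits_from_bytes(data)]

def from_led_samples(samples, bit_ms: int = 10):
    """Decode what a camera sampling once per bit period would observe."""
    return bytes_from_bits([1 if on else 0 for on, _ in samples])

# Round trip: the "transmitter" schedules blinks, the "receiver" decodes them.
schedule = to_led_schedule(b"exfil")
assert from_led_samples(schedule) == b"exfil"
```

Even at optimistic blink rates the channel is only bits per second, which is part of why this is more movie plot than practical attack: exfiltrating anything sizable would take days of an LED visibly flickering at a drone hovering outside the window.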

Research paper.

EDITED TO ADD (3/7): Here’s a 2002 paper on this idea.

Posted on March 3, 2017 at 6:48 AM

Terrorist False Alarm at JFK Airport Demonstrates How Unprepared We Really Are

The detailed accounts of the terrorist-shooter false alarm at Kennedy Airport in New York last week illustrate how completely and totally unprepared the airport authorities are for any such real event.

I have two reactions to this. On the one hand, this is a movie-plot threat — the sort of overly specific terrorist scenario that doesn’t make sense to defend against. On the other hand, police around the world need training in these types of scenarios in general. Panic can easily cause more deaths than terrorists themselves, and we need to think about what responsibilities police and other security guards have in these situations.

Posted on August 19, 2016 at 2:23 PM

Real-World Security and the Internet of Things

Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.

Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).

So far, Internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by — presumptively — China in 2015.

On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door — or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.

With the advent of the Internet of Things and cyber-physical systems in general, we’ve given the Internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete.

Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.

The increased risks come from three things: software control of systems, interconnections between systems, and automatic or autonomous systems. Let’s look at them in turn:

Software Control. The Internet of Things is a result of everything turning into a computer. This gives us enormous power and flexibility, but it brings insecurities with it as well. As more things come under software control, they become vulnerable to all the attacks we’ve seen against computers. But because many of these things are both inexpensive and long-lasting, many of the patch and update systems that work with computers and smartphones won’t work. Right now, the only way to patch most home routers is to throw them away and buy new ones. And the security that comes from replacing your computer and phone every few years won’t work with your refrigerator and thermostat: on the average, you replace the former every 15 years, and the latter approximately never. A recent Princeton survey found 500,000 insecure devices on the Internet. That number is about to explode.

Interconnections. As these systems become interconnected, vulnerabilities in one lead to attacks against others. Already we’ve seen Gmail accounts compromised through vulnerabilities in Samsung smart refrigerators, hospital IT networks compromised through vulnerabilities in medical devices, and Target Corporation hacked through a vulnerability in its HVAC system. Systems are filled with externalities that affect other systems in unforeseen and potentially harmful ways. What might seem benign to the designers of a particular system becomes harmful when it’s combined with some other system. Vulnerabilities on one system cascade into other systems, and the result is a vulnerability that no one saw coming and no one bears responsibility for fixing. The Internet of Things will make exploitable vulnerabilities much more common. It’s simple mathematics. If 100 systems are all interacting with each other, that’s about 5,000 interactions and 5,000 potential vulnerabilities resulting from those interactions. If 300 systems are all interacting with each other, that’s about 45,000 interactions. 1,000 systems: about 500,000 interactions. Most of them will be benign or uninteresting, but some of them will be very damaging.
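The arithmetic behind those figures is just the number of distinct pairs among n systems, n(n−1)/2, which grows quadratically. A quick sketch (the function name is mine):

```python
from itertools import combinations

def pairwise_interactions(n: int) -> int:
    """Number of distinct pairs among n fully interconnected systems: n*(n-1)/2."""
    return n * (n - 1) // 2

# Spot-check the formula against explicit enumeration for a small n.
assert pairwise_interactions(5) == len(list(combinations(range(5), 2)))

for n in (100, 300, 1000):
    print(n, pairwise_interactions(n))
# 100 systems -> 4,950 pairs; 300 -> 44,850; 1,000 -> 499,500
```

Tripling the number of systems roughly multiplies the number of pairwise interactions, and therefore the attack surface from unforeseen combinations, by nine.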

Autonomy. Increasingly, our computer systems are autonomous. They buy and sell stocks, turn the furnace on and off, regulate electricity flow through the grid, and — in the case of driverless cars — automatically pilot multi-ton vehicles to their destinations. Autonomy is great for all sorts of reasons, but from a security perspective it means that attacks can take effect immediately, automatically, and ubiquitously. The more we remove humans from the loop, the faster attacks can do their damage and the more we lose our ability to rely on actual smarts to notice something is wrong before it’s too late.

We’re building systems that are increasingly powerful, and increasingly useful. The necessary side effect is that they are increasingly dangerous. A single vulnerability forced Chrysler to recall 1.4 million vehicles in 2015. We’re used to computers being attacked at scale — think of the large-scale virus infections from the last decade — but we’re not prepared for this happening to everything else in our world.

Governments are taking notice. Last year, both Director of National Intelligence James Clapper and NSA Director Mike Rogers testified before Congress, warning of these threats. They both believe we’re vulnerable.

This is how it was phrased in the DNI’s 2015 Worldwide Threat Assessment: “Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e. accuracy and reliability) instead of deleting it or disrupting access to it. Decision-making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.”

The DNI 2016 threat assessment included something similar: “Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decision making, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI — in settings such as public utilities and healthcare — will only exacerbate these potential effects.”

Security engineers are working on technologies that can mitigate much of this risk, but many solutions won’t be deployed without government involvement. This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organizations to understand; companies are motivated to hide the insecurity of their own systems from their customers, their users, and the public; the interconnections can make it impossible to connect data breaches with resultant harms; and the interests of the companies often don’t match the interests of the people.

Governments need to play a larger role: setting standards, policing compliance, and implementing solutions across companies and networks. And while the White House Cybersecurity National Action Plan says some of the right things, it doesn’t go nearly far enough, because so many of us are phobic of any government-led solution to anything.

The next president will probably be forced to deal with a large-scale Internet disaster that kills multiple people. I hope he or she responds with both the recognition of what government can do that industry can’t, and the political will to make it happen.

This essay previously appeared on Vice Motherboard.

BoingBoing post.

EDITED TO ADD (8/11): An essay that agrees with me.

Posted on July 28, 2016 at 5:51 AM

Arresting People for Walking Away from Airport Security

A proposed law in Albany, NY, would make it a crime to walk away from airport screening.

Aside from wondering why county lawmakers are getting involved with what should be national policy, you have to ask: what are these people thinking?

They’re thinking in stories, of course. They have a movie plot in their heads, and they are imagining how this measure solves it.

The law is intended to cover what Apple described as a soft spot in the current system that allows passengers to walk away without boarding their flights if security staff flags them for additional scrutiny.

That could include would-be terrorists probing for weaknesses, Apple said, adding that his deputies currently have no legal grounds to question such a person.

Does anyone have any idea what stories these people have in their heads? What sorts of security weaknesses are exposed by walking up to airport security and then walking away?

Posted on May 31, 2016 at 6:35 AM

Living in a Code Yellow World

In the 1980s, handgun expert Jeff Cooper invented something called the Color Code to describe what he called the “combat mind-set.” Here is his summary:

In White you are unprepared and unready to take lethal action. If you are attacked in White you will probably die unless your adversary is totally inept.

In Yellow you bring yourself to the understanding that your life may be in danger and that you may have to do something about it.

In Orange you have determined upon a specific adversary and are prepared to take action which may result in his death, but you are not in a lethal mode.

In Red you are in a lethal mode and will shoot if circumstances warrant.

Cooper talked about remaining in Code Yellow over time, but he didn’t write about its psychological toll. It’s significant. Our brains can’t be on that alert level constantly. We need downtime. We need to relax. This is why we have friends around whom we can let our guard down and homes where we can close our doors to outsiders. We only want to visit Yellowland occasionally.

Since 9/11, the US has increasingly become Yellowland, a place where we assume danger is imminent. It’s damaging to us individually and as a society.

I don’t mean to minimize actual danger. Some people really do live in a Code Yellow world, due to the failures of government in their home countries. Even there, we know how hard it is for them to maintain a constant level of alertness in the face of constant danger. Psychologist Abraham Maslow wrote about this, making safety a basic level in his hierarchy of needs. A lack of safety makes people anxious and tense, and the long-term effects are debilitating.

The same effects occur when we believe we’re living in an unsafe situation even if we’re not. The psychological term for this is hypervigilance. Hypervigilance in the face of imagined danger causes stress and anxiety. This, in turn, alters how your hippocampus functions, and causes an excess of cortisol in your body. Now cortisol is great in small and infrequent doses, and helps you run away from tigers. But it destroys your brain and body if you marinate in it for extended periods of time.

Not only does trying to live in Yellowland harm you physically, it changes how you interact with your environment and it impairs your judgment. You forget what’s normal and start seeing the enemy everywhere. Terrorism actually relies on this kind of reaction to succeed.

Here’s an example from The Washington Post last year: “I was taking pictures of my daughters. A stranger thought I was exploiting them.” A father wrote about his run-in with an off-duty DHS agent, who interpreted an innocent family photoshoot as something nefarious and proceeded to harass and lecture the family. That the parents were white and the daughters Asian added a racist element to the encounter.

At the time, people wrote about this as an example of worst-case thinking, saying that as a DHS agent, “he’s paid to suspect the worst at all times and butt in.” While, yes, it was a “disturbing reminder of how the mantra of ‘see something, say something’ has muddied the waters of what constitutes suspicious activity,” I think there’s a deeper story here. The agent is trying to live his life in Yellowland, and it caused him to see predators where there weren’t any.

I call these “movie-plot threats,” scenarios that would make great action movies but that are implausible in real life. Yellowland is filled with them.

Last December, former DHS director Tom Ridge wrote about the security risks of building an NFL stadium near the Los Angeles Airport. His report is full of movie-plot threats, including terrorists shooting down a plane and crashing it into a stadium. His conclusion, that it is simply too dangerous to build a sports stadium within a few miles of the airport, is absurd. He’s been living too long in Yellowland.

That our brains aren’t built to live in Yellowland makes sense, because actual attacks are rare. The person walking towards you on the street isn’t an attacker. The person doing something unexpected over there isn’t a terrorist. Crashing an airplane into a sports stadium is more suitable to a Die Hard movie than real life. And the white man taking pictures of two Asian teenagers on a ferry isn’t a sex slaver. (I mean, really?)

Most of us, that DHS agent included, are complete amateurs at knowing the difference between something benign and something that’s actually dangerous. Combine this with the rarity of attacks, and you end up with an overwhelming number of false alarms. This is the ultimate problem with programs like “see something, say something.” They waste an enormous amount of time and money.

Those of us fortunate enough to live in a Code White society are much better served acting like we do. This is something we need to learn at all levels, from our personal interactions to our national policy. Since the terrorist attacks of 9/11, many of our counterterrorism policies have helped convince people they’re not safe, and that they need to be in a constant state of readiness. We need our leaders to lead us out of Yellowland, not to perpetuate it.

This essay previously appeared on Fusion.net.

EDITED TO ADD (9/25): UK student reading book on terrorism is accused of being a terrorist. He was reading the book for a class he was taking. I’ll let you guess his ethnicity.

Posted on September 24, 2015 at 11:39 AM

