Entries Tagged "risks"


Fall Seminar on Catastrophic Risk

I am planning a study group at Harvard University (in Cambridge, MA) for the Fall semester, on catastrophic risk.

Berkman Study Group — Catastrophic Risk: Technologies and Policy

Technology empowers, for both good and bad. A broad history of “attack” technologies shows trends of empowerment, as individuals wield ever more destructive power. The natural endgame is a nuclear bomb in everybody’s back pocket, or a bio-printer that can drop a species. And then what? Is society even possible when the most extreme individual can kill everyone else? Is totalitarian control the only way to prevent human devastation, or are there other possibilities? And how realistic are these scenarios, anyway? In this class, we’ll discuss technologies like cyber, bio, nanotech, artificial intelligence, and autonomous drones; security technologies and policies for catastrophic risk; and more. Is the reason we’ve never met any extraterrestrials that natural selection dictates that any species achieving a sufficiently advanced technology level inevitably exterminates itself?

The study group may serve as a springboard for an independent paper and credit, in conjunction with faculty supervision from your program.

All disciplines and backgrounds welcome, students and non-students alike. This discussion needs diverse perspectives. We also ask that you commit to preparing for and participating in all sessions.

Six sessions, Mondays, 5:00-7:00 PM, Location TBD
9/14, 9/28, 10/5, 10/19, 11/2, 11/16

Please respond to Bruce Schneier with a resume and statement of interest. Applications due August 14. Bruce will review applications and aim for a seminar size of roughly 16-20 people with a diversity of backgrounds and expertise.

Please help me spread the word far and wide. The description is only on a Berkman page, so students won’t see it in their normal perusal of fall classes.

Posted on March 13, 2015 at 2:36 PM

Obama Says Terrorism Is Not an Existential Threat

In an interview this week, President Obama said that terrorism does not pose an existential threat:

What I do insist on is that we maintain a proper perspective and that we do not provide a victory to these terrorist networks by overinflating their importance and suggesting in some fashion that they are an existential threat to the United States or the world order. You know, the truth of the matter is that they can do harm. But we have the capacity to control how we respond in ways that do not undercut what’s the — you know, what’s essence of who we are.

He said something similar in January.

On one hand, what he said is blindingly obvious; and overinflating terrorism’s risks plays into the terrorists’ hands. Climate change is an existential threat. So is a comet hitting the earth, intelligent robots taking over the planet, and genetically engineered viruses. There are lots of existential threats to humanity, and we can argue about their feasibility and probability. But terrorism is not one of them. Even things that actually kill tens of thousands of people each year — car accidents, handguns, heart disease — are not existential threats.

But no matter how obvious this is, until recently it hasn’t been something that serious politicians have been able to say. When Vice President Biden said something similar last year, one commentary carried the headline “Truth or Gaffe?” In 2004, when presidential candidate John Kerry gave a common-sense answer to a question about the threat of terrorism, President Bush used those words in an attack ad. As far as I know, these comments by Obama and Biden are the first time major politicians are admitting that terrorism does not pose an existential threat and are not being pilloried for it.

Overreacting to the threat is still common, and exaggeration and fear still make good politics. But maybe now, a dozen years after 9/11, we can finally start having rational conversations about terrorism and security: what works, what doesn’t, what’s worth it, and what’s not.

Posted on February 3, 2015 at 6:15 AM

Common Risks in America: Cars and Guns

I have long said that driving a car is the most dangerous thing we regularly do in our lives. It turns out deaths due to automobiles are declining, while deaths due to firearms are on the rise:

Guns and cars have long been among the leading causes of non-medical deaths in the U.S. By 2015, firearm fatalities will probably exceed traffic fatalities for the first time, based on data compiled by Bloomberg.

While motor-vehicle deaths dropped 22 percent from 2005 to 2010, gun fatalities are rising again after a low point in 2000, according to the Atlanta-based Centers for Disease Control and Prevention. Shooting deaths in 2015 will probably rise to almost 33,000, and those related to autos will decline to about 32,000, based on the 10-year average trend.
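The projection "based on the 10-year average trend" is a straight-line extrapolation. Here is a minimal sketch of how such a projection works; the annual totals below are made up for illustration and are not the CDC's actual figures:

```python
# Illustrative least-squares trend projection, in the spirit of the
# Bloomberg extrapolation quoted above. The data points are invented.

def linear_fit(xs, ys):
    """Ordinary least-squares fit: return (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

years = [2005, 2007, 2009, 2011, 2013]
gun_deaths = [30_000, 30_750, 31_500, 32_250, 33_000]  # assumed, rising

slope, intercept = linear_fit(years, gun_deaths)
projection = slope * 2015 + intercept
print(f"projected 2015 gun deaths: {projection:,.0f}")  # → 33,750
```

The same fit run on a declining series of motor-vehicle deaths is how the two lines can be projected to cross in 2015.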

There’s also this story.

Posted on January 16, 2015 at 6:19 AM

Survey on What Americans Fear

Interesting data:

Turning to the crime section of the Chapman Survey on American Fears, the team discovered findings that not only surprised them, but also those who work in fields pertaining to crime.

“What we found when we asked a series of questions pertaining to fears of various crimes is that a majority of Americans not only fear crimes such as, child abduction, gang violence, sexual assaults and others; but they also believe these crimes (and others) have increased over the past 20 years,” said Dr. Edward Day who led this portion of the research and analysis. “When we looked at statistical data from police and FBI records, it showed crime has actually decreased in America in the past 20 years. Criminologists often get angry responses when we try to tell people the crime rate has gone down.”

Despite evidence to the contrary, Americans do not feel like the United States is becoming a safer place. The Chapman Survey on American Fears asked how respondents think the prevalence of several crimes today compares with 20 years ago. In all cases, the clear majority of respondents were pessimistic; in all cases, Americans believe crime has at least remained steady. Crimes specifically asked about were: child abduction, gang violence, human trafficking, mass riots, pedophilia, school shootings, serial killing and sexual assault.

EDITED TO ADD (11/13): Direct link to the data as well as the survey methodology.

Posted on October 28, 2014 at 1:03 PM

Security Trade-offs of Cloud Backup

This is a good essay on the security trade-offs with cloud backup:

iCloud backups have not eliminated this problem, but they have made it far less common. This is, like almost everything in tech, a trade-off:

  • Your data is far safer from irretrievable loss if it is synced/backed up, regularly, to a cloud-based service.
  • Your data is more at risk of being stolen if it is synced/backed up, regularly, to a cloud-based service.

Ideally, the companies that provide such services minimize the risk of your account being hijacked while maximizing the simplicity and ease of setting it up and using it. But clearly these two goals are in conflict. There’s no way around the fact that the proper balance is somewhere in between maximal security and minimal complexity.

Further, I would wager heavily that there are thousands and thousands more people who have been traumatized by irretrievable data loss (who would have been saved if they’d had cloud-based backups) than those who have been victimized by having their cloud-based accounts hijacked (who would have been saved if they had only stored their data locally on their devices).

It is thus, in my opinion, terribly irresponsible to advise people to blindly not trust Apple (or Google, or Dropbox, or Microsoft, etc.) with “any of your data” without emphasizing, clearly and adamantly, that by only storing their data on-device, they greatly increase the risk of losing everything.

It’s true. For most people, the risk of data loss is greater than the risk of data theft.

Posted on September 25, 2014 at 2:17 PM

Irrational Fear of Risks Against Our Children

There’s a horrible story of a South Carolina mother arrested for letting her 9-year-old daughter play alone at a park while she was at work. The article linked to another article about a woman convicted of “contributing to the delinquency of a minor” for leaving her 4-year-old son in the car for a few minutes. That article contains some excellent commentary by the very sensible Free Range Kids blogger Lenore Skenazy:

“Listen,” she said at one point. “Let’s put aside for the moment that by far, the most dangerous thing you did to your child that day was put him in a car and drive someplace with him. About 300 children are injured in traffic accidents every day — and about two die. That’s a real risk. So if you truly wanted to protect your kid, you’d never drive anywhere with him. But let’s put that aside. So you take him, and you get to the store where you need to run in for a minute and you’re faced with a decision. Now, people will say you committed a crime because you put your kid ‘at risk.’ But the truth is, there’s some risk to either decision you make.” She stopped at this point to emphasize, as she does in much of her analysis, how shockingly rare the abduction or injury of children in non-moving, non-overheated vehicles really is. For example, she insists that statistically speaking, it would likely take 750,000 years for a child left alone in a public space to be snatched by a stranger. “So there is some risk to leaving your kid in a car,” she argues. It might not be statistically meaningful, but it’s not nonexistent. “The problem is,” she goes on, “there’s some risk to every choice you make. So, say you take the kid inside with you. There’s some risk you’ll both be hit by a crazy driver in the parking lot. There’s some risk someone in the store will go on a shooting spree and shoot your kid. There’s some risk he’ll slip on the ice on the sidewalk outside the store and fracture his skull. There’s some risk no matter what you do. So why is one choice illegal and one is OK? Could it be because the one choice inconveniences you, makes your life a little harder, makes parenting a little harder, gives you a little less time or energy than you would have otherwise had?”

Later on in the conversation, Skenazy boils it down to this. “There’s been this huge cultural shift. We now live in a society where most people believe a child can not be out of your sight for one second, where people think children need constant, total adult supervision. This shift is not rooted in fact. It’s not rooted in any true change. It’s imaginary. It’s rooted in irrational fear.”

Skenazy has some choice words about the South Carolina story as well:

But, “What if a man would’ve come and snatched her?” said a woman interviewed by the TV station.

To which I must ask: In broad daylight? In a crowded park? Just because something happened on Law & Order doesn’t mean it’s happening all the time in real life. Make “what if?” thinking the basis for an arrest and the cops can collar anyone. “You let your son play in the front yard? What if a man drove up and kidnapped him?” “You let your daughter sleep in her own room? What if a man climbed through the window?” etc.

These fears pop into our brains so easily, they seem almost real. But they’re not. Our crime rate today is back to what it was when gas was 29 cents a gallon, according to The Christian Science Monitor. It may feel like kids are in constant danger, but they are as safe (if not safer) than we were when our parents let us enjoy the summer outside, on our own, without fear of being arrested.


Posted on August 11, 2014 at 9:34 AM

Overreacting to Risk

This is a crazy overreaction:

A 19-year-old man was caught on camera urinating in a reservoir that holds Portland’s drinking water Wednesday, according to city officials.

Now the city must drain 38 million gallons of water from Reservoir 5 at Mount Tabor Park in southeast Portland.

I understand the natural human disgust reaction, but do these people actually think that their normal drinking water is any more pure? That a single human is that much worse than all the normal birds and other animals? A few ounces distributed amongst 38 million gallons is negligible.
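To put a number on "negligible," here is a back-of-envelope dilution calculation; the 8-ounce figure is an assumption for illustration, while the 38-million-gallon volume comes from the story above:

```python
# Back-of-envelope dilution estimate. The urine volume is an assumed
# illustrative figure; the reservoir volume is from the news story.
GALLON_OZ = 128  # US fluid ounces per gallon

urine_oz = 8
reservoir_oz = 38_000_000 * GALLON_OZ

concentration = urine_oz / reservoir_oz
print(f"{concentration * 1e9:.2f} parts per billion")  # roughly 1.6 ppb
```

A concentration in the low parts-per-billion range, under these assumptions.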

Another story.

EDITED TO ADD (5/14): They didn’t flush the reservoir after all, but they did move the water.

Posted on April 18, 2014 at 6:26 AM

Risk-Based Authentication

I like this idea of giving each individual login attempt a risk score, based on the characteristics of the attempt:

The risk score estimates the risk associated with a log-in attempt based on a user’s typical log-in and usage profile, taking into account their device and geographic location, the system they’re trying to access, the time of day they typically log in, their device’s IP address, and even their typing speed. An employee logging into a CRM system using the same laptop, at roughly the same time of day, from the same location and IP address will have a low risk score. By contrast, an attempt to access a finance system from a tablet at night in Bali could potentially yield an elevated risk score.

Risk thresholds for individual systems are established based on the sensitivity of the information they store and the impact if the system were breached. Systems housing confidential financial data, for example, will have a low risk threshold.

If the risk score for a user’s access attempt exceeds the system’s risk threshold, authentication controls are automatically elevated, and the user may be required to provide a higher level of authentication, such as a PIN or token. If the risk score is too high, it may be rejected outright.
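The scheme described above can be sketched as a small scoring function. Everything here — the factors, the penalty weights, the thresholds — is a made-up illustration, not any vendor's actual algorithm:

```python
from dataclasses import dataclass

# Toy risk-based authentication sketch. Factor names, weights, and
# thresholds are assumptions chosen for illustration only.

@dataclass
class LoginAttempt:
    known_device: bool
    usual_location: bool
    usual_hours: bool
    usual_ip: bool

def risk_score(attempt: LoginAttempt) -> int:
    """Sum penalty weights for each factor that deviates from the
    user's typical log-in and usage profile."""
    score = 0
    if not attempt.known_device:
        score += 30
    if not attempt.usual_location:
        score += 30
    if not attempt.usual_hours:
        score += 20
    if not attempt.usual_ip:
        score += 20
    return score

def required_auth(score: int, threshold: int) -> str:
    """Map a score to an action based on the system's risk threshold,
    which is lower for systems holding more sensitive data."""
    if score <= threshold:
        return "password"          # normal login
    if score <= threshold + 40:
        return "password+token"    # step-up authentication
    return "reject"                # too risky; deny outright

# Familiar laptop, usual time, place, and IP: low score, normal login.
office = LoginAttempt(True, True, True, True)
# Unknown tablet, abroad, at night, new IP: high score, rejected.
bali = LoginAttempt(False, False, False, False)
print(required_auth(risk_score(office), threshold=20))  # password
print(required_auth(risk_score(bali), threshold=20))    # reject
```

Lowering `threshold` for a finance system makes step-up authentication kick in sooner, which is the sensitivity-based tuning the article describes.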

Posted on November 7, 2013 at 7:06 AM

The NSA's New Risk Analysis

As I recently reported in the Guardian, the NSA has secret servers on the Internet that hack into other computers, codenamed FOXACID. These servers provide an excellent demonstration of how the NSA approaches risk management, and expose flaws in how the agency thinks about the secrecy of its own programs.

Here are the FOXACID basics: By the time the NSA tricks a target into visiting one of those servers, it already knows exactly who that target is, who wants him eavesdropped on, and the expected value of the data it hopes to receive. Based on that information, the server can automatically decide what exploit to serve the target, taking into account the risks associated with attacking the target, as well as the benefits of a successful attack. According to a top-secret operational procedures manual provided by Edward Snowden, an exploit named Validator might be the default, but the NSA has a variety of options. The documentation mentions United Rake, Peddle Cheap, Packet Wrench, and Beach Head — all delivered from a FOXACID subsystem called Ferret Cannon. Oh how I love some of these code names. (On the other hand, EGOTISTICALGIRAFFE has to be the dumbest code name ever.)

Snowden explained this to Guardian reporter Glenn Greenwald in Hong Kong. If the target is a high-value one, FOXACID might run a rare zero-day exploit that it developed or purchased. If the target is technically sophisticated, FOXACID might decide that there’s too much chance for discovery, and keeping the zero-day exploit a secret is more important. If the target is a low-value one, FOXACID might run an exploit that’s less valuable. If the target is low-value and technically sophisticated, FOXACID might even run an already-known vulnerability.

We know that the NSA receives advance warning from Microsoft of vulnerabilities that will soon be patched; there’s not much of a loss if an exploit based on that vulnerability is discovered. FOXACID has tiers of exploits it can run, and uses a complicated trade-off system to determine which one to run against any particular target.
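The tiered trade-off described above can be sketched as a simple decision function. The tiers and rules here are my own simplification of the preceding paragraphs, not the NSA's actual logic:

```python
# Illustrative sketch of FOXACID's exploit-selection trade-off, as
# described in the Snowden reporting. Tier names and rules are my own
# simplification for illustration.

def choose_exploit(value: str, sophistication: str) -> str:
    """Balance the expected value of a successful attack against the
    risk of a sophisticated target discovering, and burning, a secret
    exploit."""
    if value == "high" and sophistication == "low":
        return "zero-day"                     # worth risking the crown jewels
    if value == "high" and sophistication == "high":
        return "less-valuable exploit"        # protect the zero-day from discovery
    if value == "low" and sophistication == "high":
        return "already-known vulnerability"  # nothing secret left to lose
    return "default exploit (e.g. Validator)" # low-value, unsophisticated target

print(choose_exploit("high", "low"))   # zero-day
print(choose_exploit("low", "high"))   # already-known vulnerability
```

The point of the sketch: the scarcer and more valuable the exploit, the narrower the set of targets it gets spent on.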

This cost-benefit analysis doesn’t end at successful exploitation. According to Snowden, the TAO — that’s Tailored Access Operations — operators running the FOXACID system have a detailed flowchart, with tons of rules about when to stop. If something doesn’t work, stop. If they detect a PSP, a personal security product, stop. If anything goes weird, stop. This is how the NSA avoids detection, and also how it takes mid-level computer operators and turns them into what they call “cyberwarriors.” It’s not that they’re skilled hackers, it’s that the procedures do the work for them.

And they’re super cautious about what they do.

While the NSA excels at performing this cost-benefit analysis at the tactical level, it’s far less competent at doing the same thing at the policy level. The organization seems to be good enough at assessing the risk of discovery — for example, if the target of an intelligence-gathering effort discovers that effort — but to have completely ignored the risks of those efforts becoming front-page news.

It’s not just in the U.S., where newspapers are heavy with reports of the NSA spying on every Verizon customer, spying on domestic e-mail users, and secretly working to cripple commercial cryptography systems, but also around the world, most notably in Brazil, Belgium, and the European Union. All of these operations have caused significant blowback — for the NSA, for the U.S., and for the Internet as a whole.

The NSA spent decades operating in almost complete secrecy, but those days are over. As the corporate world learned years ago, secrets are hard to keep in the information age, and openness is a safer strategy. The tendency to classify everything means that the NSA won’t be able to sort what really needs to remain secret from everything else. The younger generation is more used to radical transparency than secrecy, and is less invested in the national security state. And whistleblowing is the civil disobedience of our time.

At this point, the NSA has to assume that all of its operations will become public, probably sooner than it would like. It has to start taking that into account when weighing the costs and benefits of those operations. And it now has to be just as cautious about new eavesdropping operations as it is about using FOXACID exploits against users.

This essay previously appeared in the Atlantic.

Posted on October 9, 2013 at 6:28 AM

