Entries Tagged "risk assessment"

Nicholas Weaver on Cryptocurrencies

This is well worth reading (non-paywalled version). Here’s the opening:

Cryptocurrencies, although a seemingly interesting idea, are simply not fit for purpose. They do not work as currencies, they are grossly inefficient, and they are not meaningfully distributed in terms of trust. Risks involving cryptocurrencies occur in four major areas: technical risks to participants, economic risks to participants, systemic risks to the cryptocurrency ecosystem, and societal risks.

I haven’t written much about cryptocurrencies, but I share Weaver’s skepticism.

EDITED TO ADD (8/2): Paul Krugman on cryptocurrencies.

Posted on July 24, 2018 at 6:29 AM

How the Media Influences Our Fear of Terrorism

Good article that crunches the data and shows that the press’s coverage of terrorism is disproportionate to its actual risk.

This isn’t new. I’ve written about it before, and more generally when I wrote about the psychology of risk, fear, and security. Basically, the issue is the availability heuristic. We tend to infer the probability of something by how easy it is to bring examples of the thing to mind. So if we can think of a lot of tiger attacks in our community, we infer that the risk is high. If we can’t think of many tiger attacks, we infer that the risk is low. But while this was a perfectly reasonable heuristic when living in small family groups in the East African highlands in 100,000 BC, it fails in the face of modern media. The media makes the rare seem more common by spending a lot of time talking about it. It’s not the media’s fault. By definition, news is “something that hardly ever happens.” But when the coverage of terrorist deaths exceeds the coverage of homicides, we have a tendency to mistakenly inflate the risk of the former while discounting the risk of the latter.
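
The skew is easy to see in a toy model. Here’s a minimal sketch; every number below is invented purely for illustration, but the magnitudes are roughly right for the US:

    # Toy model of the availability heuristic: perceived risk tracks media
    # coverage rather than actual frequency. All numbers are illustrative.
    actual_deaths = {"terrorism": 50, "homicide": 17_000, "car crash": 37_000}
    coverage_minutes_per_death = {"terrorism": 1_000, "homicide": 5, "car crash": 1}

    coverage = {k: actual_deaths[k] * coverage_minutes_per_death[k]
                for k in actual_deaths}
    total_deaths = sum(actual_deaths.values())
    total_coverage = sum(coverage.values())

    for cause in actual_deaths:
        actual = actual_deaths[cause] / total_deaths
        perceived = coverage[cause] / total_coverage  # what recall "samples"
        print(f"{cause:10} actual {actual:7.2%}   perceived {perceived:7.2%}")

Terrorism accounts for a tiny fraction of the deaths but a large fraction of the coverage, and the availability heuristic samples the coverage.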

Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.

There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.

If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.

Novelty plus dread plus a good story equals overreaction.

It’s not just murders. It’s flying vs. driving: the former is much safer, but accidents are so much more spectacular when they occur.
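
The arithmetic bears this out. Here’s a rough back-of-the-envelope comparison; the rates are approximate public figures, and should be read as order-of-magnitude only:

    # Rough per-mile fatality comparison, driving vs. flying.
    # Rates are approximate public figures; order-of-magnitude only.
    DRIVING_DEATHS_PER_100M_MILES = 1.1    # US vehicle-miles, roughly
    FLYING_DEATHS_PER_100M_MILES = 0.01    # US commercial passenger-miles, roughly

    trip_miles = 1_000
    drive_risk = DRIVING_DEATHS_PER_100M_MILES * trip_miles / 100_000_000
    fly_risk = FLYING_DEATHS_PER_100M_MILES * trip_miles / 100_000_000

    print(f"driving {trip_miles} miles: ~{drive_risk:.1e} fatality risk")
    print(f"flying  {trip_miles} miles: ~{fly_risk:.1e} fatality risk")
    print(f"driving is roughly {drive_risk / fly_risk:.0f}x riskier per mile")

Yet a single plane crash dominates the news for days, while the roughly hundred road deaths that same day go unmentioned.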

Posted on January 24, 2017 at 6:31 AM

Confusing Security Risks with Moral Judgments

Interesting research that shows we exaggerate the risks of something when we find it morally objectionable.

From an article about and interview with the researchers:

To get at this question experimentally, Thomas and her collaborators created a series of vignettes in which a parent left a child unattended for some period of time, and participants indicated the risk of harm to the child during that period. For example, in one vignette, a 10-month-old was left alone for 15 minutes, asleep in the car in a cool, underground parking garage. In another vignette, an 8-year-old was left for an hour at a Starbucks, one block away from her parent’s location.

To experimentally manipulate participants’ moral attitude toward the parent, the experimenters varied the reason the child was left unattended across a set of six experiments with over 1,300 online participants. In some cases, the child was left alone unintentionally (for example, in one case, a mother is hit by a car and knocked unconscious after buckling her child into her car seat, thereby leaving the child unattended in the car seat). In other cases, the child was left unattended so the parent could go to work, do some volunteering, relax or meet a lover.

Not surprisingly, the parent’s reason for leaving a child unattended affected participants’ judgments of whether the parent had done something immoral: Ratings were over 3 on a 10-point scale even when the child was left unattended unintentionally, but they skyrocketed to nearly 8 when the parent left to meet a lover. Ratings for the other cases fell in between.

The more surprising result was that perceptions of risk followed precisely the same pattern. Although the details of the cases were otherwise the same (that is, the age of the child, the duration and location of the unattended period, and so on), participants thought children were in significantly greater danger when the parent left to meet a lover than when the child was left alone unintentionally. The ratings for the other cases, once again, fell in between. In other words, participants’ factual judgments of how much danger the child was in while the parent was away varied according to the extent of their moral outrage concerning the parent’s reason for leaving.
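
For concreteness, this is the shape of the result. The only ratings the article reports are the moral ones for the unintentional (just over 3) and lover (nearly 8) cases; every other value below is a hypothetical placeholder:

    # Sketch of the reported pattern: perceived risk tracks moral outrage
    # across conditions. Only the ~3 and ~8 moral ratings come from the
    # article; all other numbers are hypothetical placeholders.
    conditions   = ["unintentional", "work", "volunteering", "relaxing", "lover"]
    moral_rating = [3.2, 5.1, 5.3, 6.2, 7.9]   # 10-point scale
    risk_rating  = [5.9, 6.4, 6.5, 6.9, 7.6]   # hypothetical, same ordering

    # Pearson correlation, computed by hand to keep the sketch dependency-free.
    n = len(conditions)
    mu_m = sum(moral_rating) / n
    mu_r = sum(risk_rating) / n
    cov  = sum((m - mu_m) * (r - mu_r) for m, r in zip(moral_rating, risk_rating))
    sd_m = sum((m - mu_m) ** 2 for m in moral_rating) ** 0.5
    sd_r = sum((r - mu_r) ** 2 for r in risk_rating) ** 0.5
    print(f"moral outrage vs. perceived risk: r = {cov / (sd_m * sd_r):.2f}")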

Posted on August 25, 2016 at 11:12 AM

Report on the Vulnerabilities Equities Process

I have written before on the vulnerabilities equities process (VEP): the system by which the US government decides whether to disclose and fix a computer vulnerability or keep it secret and use it offensively. Ari Schwartz and Rob Knake, both former Directors for Cybersecurity Policy at the White House National Security Council, have written a report describing the process as we know it, with policy recommendations for improving it.

Basically, their recommendations are focused on improving the transparency, oversight, and accountability (three things I repeatedly recommend) of the process. In summary:

  • The President should issue an Executive Order mandating government-wide compliance with the VEP.
  • Make the general criteria used to decide whether or not to disclose a vulnerability public.
  • Clearly define the VEP.
  • Make sure any undisclosed vulnerabilities are reviewed periodically.
  • Ensure that the government has the right to disclose any vulnerabilities it purchases.
  • Transfer oversight of the VEP from the NSA to the DHS.
  • Issue an annual report on the VEP.
  • Expand Congressional oversight of the VEP.
  • Mandate oversight by other independent bodies inside the Executive Branch.
  • Expand funding for both offensive and defensive vulnerability research.

These all seem like good ideas to me. This is a complex issue, one I wrote about in Data and Goliath (pages 146-50), and one that’s only going to get more important in the Internet of Things.

News article.

Posted on July 11, 2016 at 12:15 PM

Financial Cyber Risk Is Not Systemic Risk

This interesting essay argues that financial cyber risks are generally not systemic risks, and are instead much smaller. That’s certainly been our experience to date:

While systemic risk is frequently invoked as a key reason to be on guard for cyber risk, such a connection is quite tenuous. A cyber event might in extreme cases result in a systemic crisis, but to do so needs highly fortuitous timing.

From the point of view of policymaking, rather than simply asserting systemic consequences for cyber risks, it would be better if the cyber discussion were better integrated into the existing macroprudential dialogue. To us, the overall discussion of cyber and systemic risk seems to be too focused on IT considerations and not enough on economic consequences.

After all, if there are systemic consequences from cyber risk, the chain of causality will be found in the macroprudential domain.

Posted on June 10, 2016 at 12:56 PM

Breaking Semantic Image CAPTCHAs

Interesting research: Suphannee Sivakorn, Iasonas Polakis and Angelos D. Keromytis, “I Am Robot: (Deep) Learning to Break Semantic Image CAPTCHAs”:

Abstract: Since their inception, captchas have been widely used for preventing fraudsters from performing illicit actions. Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold: to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an “advanced risk analysis system” that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click in a checkbox, or solve a challenge by identifying images with similar content.

In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis process is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges, while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications for impacting the scalability and accuracy of our attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications; as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.
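
The attack’s core ingredient is nothing exotic: an off-the-shelf image classifier. Here’s a minimal sketch of the general approach, not the authors’ actual pipeline; it assumes torchvision’s pretrained ResNet-50, and the tile filenames and challenge keyword are placeholders:

    # Minimal sketch of semantic image-captcha solving with an off-the-shelf
    # classifier (the general idea from the paper, not the authors' system).
    import torch
    from PIL import Image
    from torchvision import models
    from torchvision.models import ResNet50_Weights

    weights = ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()       # resize + normalize for ResNet-50
    labels = weights.meta["categories"]     # the 1,000 ImageNet class names

    def top_labels(path, k=5):
        """Return the top-k ImageNet labels for one candidate tile."""
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = model(img).softmax(dim=1)[0]
        return [labels[int(i)] for i in probs.topk(k).indices]

    # Placeholder tile paths and challenge keyword.
    tiles = [f"tile_{i}.png" for i in range(9)]
    challenge = "traffic light"

    # Click every tile whose top labels mention the challenge term.
    selected = [t for t in tiles
                if any(challenge in lbl.lower() for lbl in top_labels(t))]
    print("would select:", selected)

The paper’s system is considerably more sophisticated, and also has to contend with reCaptcha’s risk analysis, but this is the underlying asymmetry: semantic image labeling is now a commodity.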

News articles.

Posted on April 8, 2016 at 6:39 AM

Smart Essay on the Limitations of Anti-Terrorism Security

This is good:

Threats constantly change, yet our political discourse suggests that our vulnerabilities are simply for lack of resources, commitment or competence. Sometimes, that is true. But mostly we are vulnerable because we choose to be; because we’ve accepted, at least implicitly, that some risk is tolerable. A state that could stop every suicide bomber wouldn’t be a free or, let’s face it, fun one.

We will simply never get to maximum defensive posture. Regardless of political affiliation, Americans wouldn’t tolerate the delay or intrusion of an urban mass-transit system that required bag checks and pat-downs. After the 2013 Boston Marathon bombing, many wondered how to make the race safe the next year. A heavier police presence helps, but the only truly safe way to host a marathon is to not have one at all. The risks we tolerate, then, are not necessarily bad bargains simply because an enemy can exploit them.

No matter what promises are made on the campaign trail, terrorism will never be vanquished. There is no ideology, no surveillance, no wall that will definitely stop some 24-year-old from becoming radicalized on the Web, gaining access to guns and shooting a soft target. When we don’t admit this to ourselves, we often swing between the extremes of putting our heads in the sand or losing them entirely.

I am reminded of my own 2006 “Refuse to be Terrorized” essay.

Posted on April 3, 2016 at 7:42 PM

Terrifying Technologies

I’ve written about the difference between risk perception and risk reality. I thought about that when reading this list of Americans’ top technology fears:

  1. Cyberterrorism
  2. Corporate tracking of personal information
  3. Government tracking of personal information
  4. Robots replacing workforce
  5. Trusting artificial intelligence to do work
  6. Robots
  7. Artificial intelligence
  8. Technology I don’t understand

More at the link.

Posted on December 9, 2015 at 1:48 PM
