Entries Tagged "risk assessment"
Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:
Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)
The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policy-holders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart’s law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Search-light effects may drive the scores towards being based on what can be measured, not what is important.
EDITED TO ADD (9/11): Boing Boing post.
Stuart Schechter writes about the security risks of using a password manager. It’s a good piece, and nicely discusses the trade-offs around password managers: which one to choose, which passwords to store in it, and so on.
My own Password Safe is mentioned. My particular choices about security and risk are to store passwords only on my computer — not on my phone — and not to put anything in the cloud. In my way of thinking, that reduces the risks of a password manager considerably. Yes, there are losses in convenience.
Really interesting first-hand experience from Maciej Cegłowski.
A recent article in the Atlantic asks why we haven’t seen a “cyber 9/11” in the past fifteen or so years. (I, too, remember the increasingly frantic and fearful warnings of a “cyber Pearl Harbor,” “cyber Katrina” — when that was a thing — or “cyber 9/11.” I made fun of those warnings back then.) The author’s answer:
Three main barriers are likely preventing this. For one, cyberattacks can lack the kind of drama and immediate physical carnage that terrorists seek. Identifying the specific perpetrator of a cyberattack can also be difficult, meaning terrorists might have trouble reaping the propaganda benefits of clear attribution. Finally, and most simply, it’s possible that they just can’t pull it off.
Commenting on the article, Rob Graham adds:
I think there are lots of warnings from so-called “experts” who aren’t qualified to make such warnings, and that the press errs on the side of giving such warnings credibility instead of challenging them.
I think mostly the reason why cyberterrorism doesn’t happen is that what motivates violent people is different from what motivates technical people, pulling apart the groups who would want to commit cyberterrorism from those who can.
These are all good reasons, but I think both authors missed the most important one: there simply aren’t a lot of terrorists out there. Let’s ask the question more generally: why hasn’t there been another 9/11 since 2001? I also remember dire predictions that large-scale terrorism was the new normal, and that we would see 9/11-scale attacks regularly. But since then, nothing. We could credit the fantastic counterterrorism work of the US and other countries, but a more reasonable explanation is that there are very few terrorists and even fewer organized ones. Our fear of terrorism is far greater than the actual risk.
This isn’t to say that cyberterrorism can never happen. Of course it will, sooner or later. But I don’t foresee it becoming a preferred terrorism method anytime soon. Graham again:
In the end, if your goal is to cause major power blackouts, your best bet is to bomb power lines and distribution centers, rather than hack them.
Interesting article on terahertz millimeter-wave scanners and their uses to detect terrorist bombers.
The heart of the device is a block of electronics about the size of a 1990s tower personal computer. It comes housed in a musician’s black case, akin to the one Spinal Tap might use on tour. At the front: a large, square white plate, the terahertz camera and, just above it, an ordinary closed-circuit television (CCTV) camera. Mounted on a shelf inside the case is a laptop that displays the CCTV image and the blobby terahertz image side by side.
An operator compares the two images as people flow past, looking for unexplained dark areas that could represent firearms or suicide vests. Most images that might be mistaken for a weapon — backpacks or a big patch of sweat on the back of a person’s shirt — are easily evaluated by observing the terahertz image alongside an unaltered video picture of the passenger.
It is up to the operator — in LA’s case, presumably a transport police officer — to query people when dark areas on the terahertz image suggest concealed large weapons or suicide vests. The device cannot see inside bodies, backpacks or shoes. “If you look at previous incidents on public transit systems, this technology would have detected those,” Sotero says, noting LA Metro worked “closely” with the TSA for over a year to test this and other technologies. “It definitely has the backing of TSA.”
How the technology works in practice depends heavily on the operator’s training. According to Evans, “A lot of tradecraft goes into understanding where the threat item is likely to be on the body.” He sees the crucial role played by the operator as giving back control to security guards and allowing them to use their common sense.
I am quoted in the article as being skeptical of the technology, particularly how it’s deployed.
Another excellent paper by the Mueller/Stewart team: “Terrorism and Bathtubs: Comparing and Assessing the Risks”:
Abstract: The likelihood that anyone outside a war zone will be killed by an Islamist extremist terrorist is extremely small. In the United States, for example, some six people have perished each year since 9/11 at the hands of such terrorists — vastly smaller than the number of people who die in bathtub drownings. Some argue, however, that the incidence of terrorist destruction is low because counterterrorism measures are so effective. They also contend that terrorism may well become more frequent and destructive in the future as terrorists plot and plan and learn from experience, and that terrorism, unlike bathtubs, provides no benefit and exacts costs far beyond those in the event itself by damagingly sowing fear and anxiety and by requiring policy makers to adopt countermeasures that are costly and excessive. This paper finds these arguments to be wanting. In the process, it concludes that terrorism is rare outside war zones because, to a substantial degree, terrorists don’t exist there. In general, as with rare diseases that kill few, it makes more policy sense to expend limited funds on hazards that inflict far more damage. It also discusses the issue of risk communication for this hazard.
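The abstract’s comparison can be made concrete with some back-of-the-envelope arithmetic. In this sketch, the six-deaths-per-year figure comes from the abstract above; the bathtub-drowning count and the US population are rough assumed values for illustration only, not numbers from the paper.

```python
# Rough annual-risk comparison, in the spirit of the Mueller/Stewart abstract.
# Only TERRORISM_DEATHS_PER_YEAR comes from the abstract; the other two
# constants are assumed approximations for illustration.

US_POPULATION = 330_000_000        # assumed, approximate
TERRORISM_DEATHS_PER_YEAR = 6      # from the abstract: "some six people ... each year"
BATHTUB_DROWNINGS_PER_YEAR = 350   # assumed rough figure, not from the paper

def annual_risk(deaths_per_year: int, population: int) -> float:
    """Annual per-person probability of death from a given cause."""
    return deaths_per_year / population

terror_risk = annual_risk(TERRORISM_DEATHS_PER_YEAR, US_POPULATION)
bathtub_risk = annual_risk(BATHTUB_DROWNINGS_PER_YEAR, US_POPULATION)

print(f"Terrorism: about 1 in {1 / terror_risk:,.0f} per year")
print(f"Bathtubs:  about 1 in {1 / bathtub_risk:,.0f} per year")
print(f"Bathtub drownings are roughly {bathtub_risk / terror_risk:.0f}x more likely")
```

Even with generous error bars on the assumed inputs, the two risks differ by well over an order of magnitude, which is the paper’s point.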
Nicholas Weaver writes about the risks of cryptocurrencies:
Cryptocurrencies, although a seemingly interesting idea, are simply not fit for purpose. They do not work as currencies, they are grossly inefficient, and they are not meaningfully distributed in terms of trust. Risks involving cryptocurrencies occur in four major areas: technical risks to participants, economic risks to participants, systemic risks to the cryptocurrency ecosystem, and societal risks.
I haven’t written much about cryptocurrencies, but I share Weaver’s skepticism.
EDITED TO ADD (8/2): Paul Krugman on cryptocurrencies.
I’m sure it pays less than the industry average, and the stakes are much higher than the average. But if you want to be a Director of Information Security that makes a difference, Human Rights Watch is hiring.
Good article that crunches the data and shows that the press’s coverage of terrorism is disproportionate to its comparative risk.
This isn’t new. I’ve written about it before, and wrote about it more generally when I wrote about the psychology of risk, fear, and security. Basically, the issue is the availability heuristic. We tend to infer the probability of something by how easy it is to bring examples of the thing to mind. So if we can think of a lot of tiger attacks in our community, we infer that the risk is high. If we can’t think of many lion attacks, we infer that the risk is low. But while this is a perfectly reasonable heuristic when living in small family groups in the East African highlands in 100,000 BC, it fails in the face of modern media. The media makes the rare seem more common by spending a lot of time talking about it. It’s not the media’s fault. By definition, news is “something that hardly ever happens.” But when the coverage of terrorist deaths exceeds the coverage of homicides, we have a tendency to mistakenly inflate the risk of the former while discounting the risk of the latter.
Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.
There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.
If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.
Novelty plus dread plus a good story equals overreaction.
It’s not just murders. It’s flying vs. driving: the former is much safer, but accidents are so much more spectacular when they occur.