On Cybersecurity Insurance

Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:

Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)

The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policy-holders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart’s law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Search-light effects may drive the scores towards being based on what can be measured, not what is important.

EDITED TO ADD (9/11): Boing Boing post.

Posted on September 10, 2019 at 6:23 AM • 13 Comments


Eric V September 10, 2019 7:42 AM

The authors may be missing a perverse effect of cyberinsurance. Insurance companies may require compliance with standards and reasonable security efforts, but they are not likely to enforce these requirements.

In fact, they only investigate their customers’ behavior when a claim is made, and then reject the claim because the customer didn’t comply. With cybersecurity, the list of requirements doesn’t need to be long to ensure that 90% of customers don’t comply (and will never get any money from the insurance company).
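Eric’s point can be made concrete with a little arithmetic. If each requirement is met independently by some fraction of customers, even a modest checklist makes full compliance rare. The numbers below are illustrative assumptions, not figures from the comment:

```python
# Probability that a customer satisfies every one of n independent
# requirements, each met with probability p.
def full_compliance_prob(p: float, n: int) -> float:
    return p ** n

# Illustrative: 90% per-requirement compliance, 20-item checklist.
# Only ~12% of customers comply in full, so ~88% of claims are deniable.
print(full_compliance_prob(0.90, 20))
```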

kiwano September 10, 2019 8:17 AM

@Eric V:

I imagine that even when the bulk of the investigation happens upon receipt of a claim, there’s still a good deal of up-front and ongoing investigation to be associated with a policy.

As an example, when insuring a boat, it’s expected that in order to start a policy, the boat owner must have the boat surveyed by an accredited surveyor (at their own expense), submit the survey to the insurance company, and then warrant that they will carry out all of a collection of recommendations from the survey (selected by the insurer) by a particular deadline (also selected by the insurer). Every ten years this process is repeated.

Because boats (and the environment they operate in) don’t change as quickly as cybersecurity systems do (I mean rocks aren’t actively seeking new ways to sink boats, fuels aren’t seeking new ways to catch fire, etc.), I’d expect the audit requirements on cybersecurity systems to be more frequent (perhaps annual), to really support an effective system of investigation-on-claim.

I expect that this degree of handholding is probably necessary to prevent successful lawsuits by the policyholder, in the event that a claim is denied for noncompliance. (I mean why else would the insurance companies do it?)

VinnyG September 10, 2019 11:16 AM

@ kiwano re: analogy – I think I understand the point you are trying to make, but I am uncertain that the employed analogy is appropriate. When insuring a boat, at least in the US, there is an existing regime of US Coast Guard requirements that form a baseline benefiting the insurance companies. There is no equivalent body of cybersecurity regulatory requirements for most businesses that use technology.
@ EricV re: perverse effect – I’m pretty sure that is the intrinsic business model for insurance underwriters.

War Geek September 10, 2019 1:36 PM

Any day now I expect to hear that a major insurance company that issues cyber-insurance policies has not only been breached, but that its list of policyholders was used as a target list for a very profitable cluster of ransomware attacks.

And we’ll only hear about it after the policyholders manage to compare notes post-attack, because the insurance companies themselves are almost certainly making every weasel move possible to claim that breaches aren’t breaches, to avoid disclosure.


Impossibly Stupid September 10, 2019 11:45 PM

@Petre Peter

    What’s important has to be measurable because I can only manage what I can measure.

That’s a poor approach to management. It’s the kind of limited thinking that can be automated away. A good counterexample is crisis management, where the ability to “measure” inputs and outputs is often uncertain in the moment.


    There’s no insurance against 0day.

There is: use a system that isn’t affected. Anyone compiling actuarial tables can surely tell you which technologies to avoid. My bet is that, with some irony, most insurance companies themselves have IT departments that impose systems with a terrible track record when it comes to cybersecurity.

Clive Robinson September 11, 2019 1:59 AM

One big thing to note,

    Search-light effects may drive the scores towards being based on what can be measured, not what is important.

One of the biggest problems with security is the lack of real measurands. Yes, we have all those security dashboards displaying numbers and rates of change in those numbers, but it only takes a few minutes to realise that they are by and large numbers without meaning.

Because of this, “Best Practice” boils down to “what the top ten do in common”, where who gets into the top ten is based on a self-reported, self-selected set of numbers or, worse, ratios.

But insurance companies’ traditional policy writing has revolved around “random events” whose losses average out fairly quickly as the sample size (the number of policies) gets larger. Fire is usually seen as random with a good average because the events are independent of each other, whilst flood is random in time but does not have a good average over the time interval of policies, because claims tend to cluster around a single short-duration event.
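Clive’s fire-versus-flood distinction is the classic independent-versus-correlated loss problem. A small simulation (with illustrative parameters, not actuarial data) shows why pooling tames the former but not the latter:

```python
import random

def simulate_year(n_policies: int, p_loss: float, correlated: bool) -> int:
    """Count losses in one year across a pool of policies."""
    if correlated:
        # Flood-like: one shared event hits every policy, or none at all.
        return n_policies if random.random() < p_loss else 0
    # Fire-like: each policy suffers a loss independently.
    return sum(random.random() < p_loss for _ in range(n_policies))

random.seed(1)
pool = 10_000
fire_years = [simulate_year(pool, 0.01, correlated=False) for _ in range(50)]
flood_years = [simulate_year(pool, 0.01, correlated=True) for _ in range(50)]

# Independent losses cluster tightly around the mean (~100 claims/year),
# so premiums are easy to set; correlated losses are usually 0 but
# occasionally hit all 10,000 policies at once.
print(min(fire_years), max(fire_years))
print(min(flood_years), max(flood_years))
```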

Malware attacks tend to fall into two types “targeted” and “fire and forget”. Targeted attacks generally are resource intensive to an attacker as they tend not to be as easily automated. Fire and forget attacks tend towards being fully automated thus not requiring any attacker resources once launched.

If you consider the damage done to an organisation by the two different attack types it’s fairly quick to see that in targeted attacks the aim is to get copies of a large quantity of data or to make the data unavailable to the organisation. Fire and forget attacks tend to be about getting the computing resource as bot-nets or as numbers for bragging rights / ego food.

The damage from targeted attacks is mainly loss of reputation, through to paralysis of the organisation in ransomware-type attacks. Once an organisation has been successfully attacked, the remedy is largely external to it.

However, damage from fire and forget malware tends to be mainly internal to the organisation, and usually involves fixing a defective internal process and an extensive cleanup operation. These tend to be covered by more general disaster recovery plans. With massive DDoS attacks the pain is often “shared”, and data loss often does not happen. Thus the external cost is minimal.

I get the feeling from what I’ve heard that insurance companies see targeted attacks as being like “fire” and “fire and forget” as being like flood. That is, they are more prepared to insure the former rather than the latter.

There is however a third type of risk that few want to think about. One of the downsides of managing large numbers of computers for minimal expenditure is that they are all configured in a near identical way, and sufficiently often the same method is used across multiple organisations. This lack of hybrid vigour has a degree of existential consequence.

If a wide-ranging new zero-day is found, these days it’s generally seen as valuable, which means it rarely gets used in a “fire storm” way by those who find it. However, as with the “alleged” release of US IC hacking tools, a zero-day can become public and thus a quite short-term opportunity for those of ill intent. We saw the kind of attack that can result with the hastily cobbled-together WannaCry ransomware attack back in May 2017, which only got stopped by a mistake the programmers had made.

Whilst the probability is high that malware writers will make mistakes, the question is how quickly those mistakes can be found and used to stop the malware. As many will indicate, we got lucky with WannaCry; the odds of that happening again depend on the relative skills of the attackers versus the defenders, and the odds are increasingly with the attackers.

At some point some ransomware or similar writer is going to get “lucky” and the effect will be of biblical flood type proportions.

Whilst there are known ways of protecting against such an attack, few if any organisations outside of high-security environments take them. In fact the prevailing “MBA position” is that the promiscuous behaviour that has so far been practiced in business ICT is more beneficial than harmful... That is not a sensible risk strategy.

Steven September 11, 2019 3:15 AM

Insurance carriers have been writing cyberinsurance policies for some years now, and companies have been buying them and paying premiums on them. But I don’t know that there has been much paid out in the way of claims. My crystal ball says that when serious claims start to come in, the insurance carriers are going to realize that software risk is unratable.

By “unratable”, I don’t mean that the risk is especially large (it might be, but that’s not the problem). The problem is that the risk can’t be quantified by statistical methods, and it can’t be managed by aggregating insureds into pools and then relying on those pools to exhibit statistically predictable losses.

IOW, the thing that insurance companies do doesn’t work for software.

Jordan September 11, 2019 8:35 PM

@Anders says:

    There’s no insurance against 0day.

Not at all true. They are a risk, and insurance is all about dealing with what happens when the dice fall the wrong way. (Remember that insurance fixes damage; it does not prevent damage.)

0days are like meteorite strikes… unpredictable and unavoidable. There’s always a risk that you’re vulnerable. Yet in the physical world, conventional insurance does cover meteorite strikes.


So how do the insurance companies make money, against unpredictable and unavoidable events? They estimate the probability of the event, estimate the damage caused by the event, multiply the two, add in some profit margin, and that’s what they charge you. In the cybersecurity world, they can look at what technologies you use and what your procedures are like to help to estimate the probabilities, and they can look at your business to estimate the potential damage.
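The pricing logic described above is just expected loss plus a loading for expenses and profit. A minimal sketch, with all numbers hypothetical:

```python
def annual_premium(p_event: float, expected_damage: float,
                   loading: float = 0.3) -> float:
    """Expected annual loss, marked up by the insurer's loading
    (expenses plus profit margin)."""
    return p_event * expected_damage * (1 + loading)

# Hypothetical firm: 2% chance per year of a breach costing $5M,
# giving an expected loss of $100k and a premium of about $130k/year.
print(annual_premium(0.02, 5_000_000))
```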

But are 0days unpredictable and unavoidable? Sort of, but only sort of. What basic technologies do you use? Do they come from companies with a more-or-less solid security history, or do they come from companies that have had numerous previous problems? Do your procedures and architecture provide overlapping defenses, so that a single 0day won’t give the attacker the keys to the kingdom? How much damage could the attacker do? If you store SSNs and credit card numbers, a lot; if you’ve figured out how to not store those things, a lot less. Could the attacker disrupt your business by destroying your records, or do you have up-to-the-minute backups, audit trails, and a good disaster recovery plan?
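The “overlapping defenses” point has a simple probabilistic reading: if the layers fail independently, the chance that a single 0day reaches the crown jewels is the product of the per-layer bypass probabilities. The layer probabilities below are hypothetical:

```python
from math import prod

def breach_prob(layer_bypass_probs: list[float]) -> float:
    """Probability an attacker gets through every independent layer."""
    return prod(layer_bypass_probs)

# Hypothetical: the 0day bypasses the perimeter outright (1.0), but the
# attacker must still beat network segmentation (0.3) and data-at-rest
# encryption (0.2), leaving roughly a 6% chance of full compromise.
print(breach_prob([1.0, 0.3, 0.2]))
```

The independence assumption is generous to the defender (a shared misconfiguration can fell several layers at once), but it captures why depth matters more than any single control.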

There’s no way to get the risk from 0days to zero (other than disconnecting entirely, of course), but there are things you can do to reduce both the probability of an event and the damage associated with the event.

Think September 15, 2019 2:56 PM

Technology has outstripped both the law and the ability of law enforcement to cope with the sheer volume of it. Think of the opposite of the Panopticon, where the few watch the many incarcerated: small global bands will be hacking the many for monetary gain.

One day in the future, identity theft insurance will be as mandatory as home and auto insurance, and, if you can afford it, medical and life insurance. It will be legally required.

Enter policy makers: anyone that (insert requirements here)... uses our credit card, has a bank account with us, is employed with us, etc. ...must have identity theft insurance. Society will not be able to cover the costs that identity theft imposes on us.

You may get a lower loan interest rate or have the option of a lower-cost bank account (for those of you watching the inverted yield curves in other parts of the world); you also may not be able to obtain some future credit product without it. Of course, banks and other credit grantors may end up purchasing the insurance in bulk and providing it to their clients as part of their marketing offerings.

Going through identity theft is costly in lost time, reputation, and potentially money.

I am not affiliated with this company, but is it the wave of the future?


EvilKiru September 16, 2019 1:38 PM

@Think: Googling for “is lifelock worth using” brings up some paid advertising and what appear to be a mix of paid reviews and reviews of what the service claims to offer (as opposed to reviews by people who have actually paid for and used the service) along with a story of LifeLock failing to provide even the courtesy of a call back to a paid user trying to report the theft of a wallet.

Alex September 21, 2019 1:05 AM

At the end of the day, the biggest security hole in any organization is the humans. Just take a look at any of the high-profile ransomware cases, especially those affecting governments. Usually some low-end worker (idiot) opens an e-mail, downloads the attachment (despite the computer protesting), tries to open the Office document, forces macros on (despite the computer protesting), the payload is downloaded and executed, and the malware goes ape.



