The Habituation of Security Warnings

We all know that it happens: when we see a security warning too often -- and without effect -- we start tuning it out. A new paper uses fMRI, eye tracking, and field studies to prove it.

EDITED TO ADD (6/6): This blog post summarizes the findings.

Posted on June 6, 2018 at 6:21 AM • 31 Comments

Comments

Tatütata • June 6, 2018 8:05 AM

Uuuuh??? Ooops, sorry, my brain had tuned out; I thought it was yet another one of those General Data Protection Regulation updates. Kind of rhymes with "General Protection Fault", ain't it? :-)

The same part of my brain also tunes out politicians' hot air about "values", "fiscal responsibility", "our country", yadda yadda yawn etc., as well as EULAs.

But wasn't this research anticipated by The Boy Who Cried Wolf and Cassandra?

More seriously, the participants weren't using their own phone, or even a real phone, as they were lying in a groaning and buzzing MRI machine that is allergic to anything remotely magnetic. So why should the participants give a **** about any warnings, even if they had been instructed to?

me • June 6, 2018 8:15 AM

i think that part of the problem is that people are overconfident of their skills, and they are in a hurry and don't read.
when we bought our first pc and an application crashed in the morning, we left the pc there for the whole day without touching it, without clicking "ok".
in the evening we phoned a friend who was back from work to ask him what to do; he said it was ok to click "ok" and close the crashed app.
i also remember the first question about installing/enabling flash player: there was a long cryptic message and we spent almost an hour reading and re-reading it to fully understand what it was saying.
now people simply ignore it because the message is stopping them from doing what they want to do.

echo • June 6, 2018 8:46 AM

I have just been reading through an asylum court case. The case concentrates a lot of issues, so it is worth reading how issues of "fast tracking", fairness, and decisions "in the round" are considered by judges who are averse to thorough due diligence. There is a separate discrimination case which reveals things from the other side of the coin, where a judge defers to a logical and coherent volume of policy (i.e. the judge just read the headings and decided everything fitted without actually reading the material). I also have two other cases on discrimination issues which suffer from the wrong-kind-of-client syndrome and bad timing.

Moving past "must do" and "must not do", this isn't far from the observation that 70% of medical negligence cases are lost in court not because of evidence of negligence but because of evidence of the cover-up.

Are there any studies on the psycho-neurology of this topic covering perspective, emotional biases, and indirect intent?

echo • June 6, 2018 8:50 AM

@me

Oh my word, this rings a bell. A long, long time ago, back when real-time clocks were an expensive expansion card, a business-critical database application crashed. I refused to touch the thing until we had telephoned the software developer's support line and the managing director had given his approval. The simple answer was a reboot.

Mac • June 6, 2018 9:32 AM

@me

i think that part of the problem is that people are overconfident of their skills, and they are in a hurry and don't read.

The larger problem is that a lot of those warnings are incomprehensible bullshit, as you hint at later on. If I click an email attachment, I may see a dialog like: "Do you really want to open this file? You shouldn't open files from people you don't trust."

What is a user supposed to do with something like that? Why is it OK to browse all over the web, but as soon as something's a "file" it's a problem? What exactly would they be "trusting" someone with? If they say "no", is there an alternate way to view the file without this risk? And of course it doesn't tell them how to determine who it's from, or that "From" lines can be spoofed. It's not so much that the words are difficult to understand, it's that they're not saying anything useful.

(Phishing tip: you don't even need to fake an email address, because most people are only looking at the name. So log in as bob@hotmail, change your name to "Bruce Schneier", and send someone an email. Even if nothing happens right away, clients like Outlook may helpfully autocomplete "Bruce" to your address in the future.)
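The gap between the display name and the real address is easy to see with Python's standard library; the header below is invented purely for illustration:

```python
from email.utils import parseaddr

# A made-up From header: the display name says one thing,
# the actual mailbox says another. Most mail clients show only the name.
from_header = '"Bruce Schneier" <bob@hotmail.example>'

name, addr = parseaddr(from_header)
print(name)  # Bruce Schneier        <- what the user sees
print(addr)  # bob@hotmail.example   <- the actual mailbox
```

Nothing in the mail protocol forces the two to agree, which is exactly why looking only at the name is unsafe.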

Say I'm installing some software on Windows, and it shows a UAC prompt asking for more privilege. Is there any program that will properly deal with me saying no, e.g. by installing it to my account rather than the whole system? Or any that even explain what permissions they need, why they need each, and what the alternative is? I doubt there are many, so really these prompts boil down to: "do you want to do the thing you just said you want to do?" That can protect against inadvertent keypresses and nothing more.

MikeA • June 6, 2018 10:10 AM

As others have mentioned, this is not just about security warnings. Just yesterday I got a phone-call from my HMO, because I had not responded to an email they sent about scheduling some tests. Said email was to an address that I gave them specifically for "medically important message", but which is easily 80% attempts to up-sell me to enhanced coverage, followed by anodyne "Eat your vegetables" reminders, with a tiny fraction being true medical info/questions that call for true medical actions on my part.
But yeah, all those "dialog boxes" have for years had the same script for the dialog.

Computer: "You are doomed, I will do as I please"
User: "I humbly accept my fate"

Tatütata • June 6, 2018 10:44 AM

Are there any studies on the psycho-neurology of this topic covering perspective, emotional biases, and indirect intent?

You mean, something like this?

Dan Simon, A Third View of the Black Box: Cognitive Coherence in Legal Decision Making, The University of Chicago Law Review, 2004, p. 511

How do judges and jurors decide cases? Though obviously central to the law, the mental processes for making decisions remain an opaque feature at the heart of legal discourse. For more than a century, views of the process have clustered around two ideal types. One rests on the assumption that legal decisions are the product of rational decision-making processes. According to this "Rationalist" view, legal decisions emanate naturally from prescribed forms of logical inference, namely deductions, inductions, and analogies. Critics, on the other hand, question the veracity of this account and portray the decision-making process as fundamentally inconsistent with these logical forms of inference. This alternative position, associated with Oliver Wendell Holmes, Jr. and the Legal Realists, contends that "the life of the law" is based not on logic, but rather that "the felt necessities of the time," avowed and unconscious intuitions of public policy, and even judicial prejudices have more to do with legal decisions than the formal axioms of logical inference.

Judges should perhaps be made to wear EEG electrodes to make sure that they actually hear the arguments. But how can you apply them to a lobster?

In Otto Preminger's (a former Austrian prosecutor) "Anatomy of a Murder", Judge Weaver (played by Joseph N. Welch, who famously threw "At long last, have you left no sense of decency?" at drunk Torquemada's face) declares at the outset of the murder trial:

One judge is quite like another. The only differences may be in the state of their digestions or their proclivities for sleeping on the bench. For myself, I can digest pig iron. And while I might appear to doze occasionally, you will find that I am easily awakened, particularly if shaken gently by a good lawyer with a nice point of law.

Who? • June 6, 2018 11:15 AM

It is not just about security warnings. The Snowden and Shadow Brokers leaks, Meltdown and Spectre, the Cambridge Analytica affair, the [ab]use of personal information by large corporations... we have a lot of examples where huge privacy and security violations have come to nothing.

By the way, weren't eight new Spectre vulnerabilities supposed to be announced? Specialised media have published nothing about this matter in the last few weeks.

People just don't care about security, or privacy for that matter. One warning is ok, two are annoying. Sad, very sad.

echo • June 6, 2018 11:20 AM

@Tatütata

Oh my word yes! Thank you!!!!!!

I'm not able to provide a polished and thorough discourse on the issues, but note that ECHR and ECJ jurisprudence cover the issue of "heuristic" decision making, and of course English law covers legal drift, practicalities, and the community element. Of course this is all within a history and context including political views not acceptable today and failures of justice.

Without going into too many details I note the discrimination advocate I discovered proved themselves up front and has shared life experience and professional knowledge which I found very persuasive and saved me a lot of tedious explaining with someone who may have not been on board. I note also they did not have a glowing opinion of a lawyer I have crossed swords with in the past. We did not discuss details of this however we did discuss the issue of judgement and bias which I have previously discussed with a noted professional who has a public profile. While this is a legal technicality it can and does have ramifications when the European Convention rights of clients are concerned not to mention "clean hands" as far as the prosecution of justice when it comes to criminal cases.

A lot of the everyday practical issues regarding abuses of power have been discussed on this blog, including tracking the box tick and not capturing verbal meaning, among other things. In theory this is what various UK state systems are meant to protect against, while in reality they can be abused because of mixed motives such as professional conceit and "don't rock the boat", and budgetary issues such as not how much money but who gets the money.

I believe both current UN and EU investigations into systemic human rights abuses are attempting to uncover this. For the record I am prepared to testify. I believe in full disclosure and have a nasty "capture it all" habit of my own including having copies of documents people hide or delete, and sources who know or know who know what happened in the room when decisions of a murky nature were made.

The longer this goes on the more the truth will get out and the quicker the race to judgment the more they are trying to hide. See also Brexit.

Anthony Vance • June 6, 2018 1:24 PM

@Tatütata:

More seriously, the participants weren't using their own phone, or even a real phone, as they were lying in a groaning and buzzing MRI machine that is allergic to anything remotely magnetic. So why should the participants give a **** about any warnings, even if they had been instructed to?

Hi, Tatütata. I'm one of the authors of this article. Actually, this article contains two studies: one in the lab with eye tracking and fMRI and the other in the field involving participants' own mobile devices. The two studies support each other: we found the same pattern of habituation to security warnings in the field in terms of warning adherence behavior that we did in the lab with neural and eye-gaze activity.

You can read a summary of our studies here: https://neurosecurity.byu.edu/misq-longitudinal-2018

echo • June 6, 2018 2:21 PM

@Anthony Vance

It would be nice if someone applied your findings to medical discrimination and negligence, specifically how "red flags" can be ignored. There is evidence on "god complexing" and "dogma", and with some issues there is "avoidance" and "chasing the box tick". I'm just not sure how MRI scans (and information theory) can be usefully applied, although there is a small but growing medical-legal body of thought that they are relevant.

Note:

The UK is, institutionally, a decade behind the US in both fields and 20 years behind in some respects. According to the latest discrimination and healthcare provision surveys, mainland EU is a patchy picture, ranging between ahead and non-existent.

I have also noticed, with respect to specific academic and court-case building blocks, that a lot of the new material is almost exclusively from the US and EU, and in some cases instigated by fields not traditionally associated with healthcare or discrimination, including security of all fields.

UK constitutional law and military dogma tend to form a castle based on protecting resources, which may go some way to explaining UK government policy being cheapskate when it comes to both change management and R&D.

David Leppik • June 6, 2018 4:18 PM

This is a common issue in UI design. Windows 98(?) had the same issue, where it would warn you about every little thing.

Similarly, banner ads were incredibly common at the beginning of the dot-com boom, but as the click-through rate dropped precipitously, new sizes and shapes of ads were introduced until the once-ubiquitous banner nearly went extinct.

In fact, I've had trouble in UI design when I (or someone else) tries to highlight the most important thing on a page by making the text huge and changing the background color, which renders it invisible because it then looks too much like an advertisement. This was brought home to me when my boss pulled me into his office to show me a design where the information was in 72-point type on a blue background directly in the middle of the page and I literally searched the page for 30 seconds without finding it.

It's interesting that they were able to keep users engaged in the warnings by changing the design randomly. I doubt they solved the problem of habituation, rather they managed to push it beyond the time scale of the study.

The real issue is that if the users see these warnings as an inconvenience rather than personal protection, or if they see too many of them, they will eventually tune them out no matter what.

Facebook exploits this-- probably intentionally-- by making controls so fine-grained and specialized as to be useless to the average user. This gives FB protection from bureaucrats and security advocates who think more is better, and who often advocate for more controls every time Facebook neglects to include an important one.

Ismar • June 6, 2018 5:27 PM

@echo
Been meaning to ask this for some time, so here goes: are you using some kind of beta AI bot to write your posts? Your style of writing has been suggesting it for a while now.

MarkH • June 6, 2018 7:03 PM

About a lifetime ago, I was in a factory that made fire alarm systems ...

... when the plant's actual fire alarm sounded. I didn't see any response whatsoever. Everyone continued about their business.

[For the record, they never tested alarm bells there, so the sounding of the alarm was absolutely NOT usual.]

As far as I know, it was a false alarm.

Alyer Babtu • June 7, 2018 3:16 AM

Isn’t this just an instance of learning, like any learning? Everything we do is suffused with rationality. Sure, the message is telling us we are facing a risk, but we go through a quick, maybe only habitual, rough review in light of our purpose and, based on experience, intuition, syllogism, etc., may choose to ignore it and accept the risk. Changes in behavior with variations in the messages are also typical of any learning situation.

Sancho_P • June 7, 2018 3:27 AM

The topic is about security warnings,
but we are quick to jump to the stupid request for consent = security question.
Likely the „study“ didn‘t differentiate properly, either.

What is a (security) warning?
A warning is a hint, not a question / request.
„Steam temperature is High (> 560 DEG_C)“ or „Pressure is Low “
or simply „You are driving too fast“.
A machine doesn‘t know about intent and consequences.
A warning is a fact, not an option.
People get used to useless warnings, but entertainment (new fancy decoration) doesn‘t help, people are not stupid, machines are.
Avoid useless warnings by all means.

But let‘s stick to that request / security question for now:
- Can a machine, any machine, but first the machine we usually think of (a PC/mobile, controlled by something we call an OS),
can it, by any chance and by itself,
ask even the dumbest user any helpful security question?

What do (what we call) OSes „know“ about themselves and the world?
About intent and consequences?

Our OSes are simple schedulers, but not masterminds.
A real OS would interpret, not hand the kingdom to everyone who is interested.

echo • June 7, 2018 3:35 AM

The world is built for the average? It's tiresome to say but if we stopped for every risk or perceived risk we would get nothing done. Risk also varies with context plus risk is perceived differently by varying psychological profiles. So what is "risk"?

I'm sure habit plays a role too. Habit can also vary according to training and experience.

I'm not a fan of aversion therapy.

I once lost six months of project work because of a disk repartition. Whoops! It was a trial recovering it! I got almost all of everything back, so disaster was averted, but oh, was this a demotivator. I'm not too bothered now by losing data. I've seen so many things come and go, both planned and unplanned, and found it never matters much. As long as I don't do anything too daft, like stick my tongue into an electricity socket, I'm not really that bothered.

Alyer Babtu • June 7, 2018 3:40 AM

@Sancho_P

OSes are simple schedulers, but not masterminds

That goes too for the hyperinflationarily-named “AI”. They are just big switchboards. The only “Intelligence”, if there is any, is in the inventor, who tinkered to put together a contraption that has a propensity to perversely enmesh its users.

Alyer Babtu • June 7, 2018 3:47 AM

Where is Lewis Carroll when we need him ? Please, Deacon, update your books for the current treasure trove of nonsense.

echo • June 7, 2018 4:29 AM

@Alyer Babtu

Yes! Brothers and sisters, cast off your chains. With no computers there would be no computer security issues. Problem solved!

me • June 7, 2018 6:40 AM

@Mac
> these prompts boil down to: "do you want to do the thing you just said you want to do?"
THIS! btw TeamViewer tries to get admin, but if you say no it falls back to normal user (it's the only app that does it)

Phishing tip... true! in fact i had to download a plugin for thunderbird, otherwise it hides the sending address and shows only the sender's chosen name! yes, the sender address can be spoofed too, but not for all domains (some have SPF, DKIM, ...)

What exactly would they be "trusting" someone with?
Are you reading my mind?!?!!? ahahah you think exactly what i think!! the whole comment.
there is a major problem with tips like "don't open suspicious attachments".
it is *obvious* that you don't open suspicious things like "free virus click here". the point is that they are not suspicious!

Here in italy there is a famous tv program, and they also started "teaching" how not to fall for web scams.
but they are failing completely, missing the point, and it is useless, because they show examples of scams but present them as something so obvious that anyone thinks "only an idiot could fall for this". the problem is that:
- they don't give a solution except the typical "don't click"
- what they show is far from real.
and banks do the same! they say "don't use the atm if it looks different or has pieces attached". it doesn't look different!!! otherwise people would not trust it, and they wouldn't even need a tip telling them not to use it if there is a camera filming the pin pad.
I was thinking that only distracted people / people not in the security field get their credit cards cloned. but i know a man who installs anti-theft systems in homes for a living who got his credit card cloned.
so i started to search for more info, and found Brian Krebs' blog, which has a good tip on how to avoid getting your atm card cloned: a huge amount of photos of the cloning devices. they are damn small.

Mac • June 7, 2018 9:02 AM

@me

it is *obvious* that you don't open suspicious things like "free virus click here". the point is that they are not suspicious!

One amusing (but worrying) thing is how people in my workplace often report legitimate emails as phishing. They're right to do so—they often come from some "third party" provider or random employee we've never heard of, don't even have our name in the headers, and tell us it's mandatory to click some link and do something by a certain date. Those who follow the link will find they have to reduce various browser security settings to actually use the thing (enable Javascript, embedded videos, cookies—and repeat because they'll redirect or embed stuff from 10 other domains). So every month or two we get a "thanks for reporting that, but it's legitimate and you should do what it says" email from the IT group—after lots of people have already done it.

Then there are the hundred intranet sites that are unencrypted or have an untrusted cert, and ask for our login name and password. We have Kerberos (Active Directory) but only 3 or 4 web servers are configured to accept it. But don't worry, there are relatively strict password-change rules that let management feel good about security...

vas pup • June 11, 2018 8:37 AM

Tipping point for large-scale social change
https://www.sciencedaily.com/releases/2018/06/180607141009.htm
“A new study finds that when 25 percent of people in a group adopt a new social norm, it creates a tipping point where the entire group follows suit. This shows the direct causal effect of the size of a committed minority on its capacity to create social change”
According to a new paper published in Science, there is a quantifiable answer: Roughly 25% of people need to take a stand before large-scale social change occurs. This idea of a social tipping point applies to standards in the workplace and any type of movement or initiative.
Online, people develop norms about everything from what type of content is acceptable to post on social media, to how civil or uncivil to be in their language.
Centola believes environments can be engineered to push people in pro-social directions, particularly in contexts such as in organizations, where people's personal rewards are tied directly to their ability to coordinate on behaviors that their peers will find acceptable.
Centola also suggests that this work has direct implications for political activism on the Internet, offering new insight into how the Chinese government's use of pro-government propaganda on social networks like Weibo, for example, can effectively shift conversational norms away from negative stories that might foment social unrest.
While shifting people's underlying beliefs can be challenging, Centola's results offer new evidence that a committed minority can change what behaviors are seen as socially acceptable, potentially leading to pro-social outcomes like reduced energy consumption, less sexual harassment in the workplace, and improved exercise habits. Conversely, it can also prompt large-scale anti-social behaviors such as internet trolling, internet bullying, and public outbursts of racism."

My take/question: does this study mean that if you change the attitude towards security of 25% of a group (positively or negatively), the others will follow?
Is it enough to brainwash 25% of a population with any type of propaganda to change the attitude of the whole population?
If within a group 25% have a pro-something view and another 25% have a contra-something view, how will the dynamic develop for the rest of the members when only one member joins each group?

Alyer Babtu • June 11, 2018 5:13 PM

@vas_pup

Is there any connection to the “Byzantine Generals” theorem? Here is a mad hand-waving argument. That theorem might seem to say, in this case, that if 1/3 of the total group remained “actively” against the proposed new consensus, the group as a whole might not adopt it. Somehow, then, a uniform active 25% is enough to suppress the formation of opposition at the 33% level. How is that possible? That opposition would come from the 75% remaining outside the 25%. If 1/3 of that 75% could be kept from opposing, then by the theorem the 75% would probably not gel to form a consensus against the new proposal. But 1/3 of 75% is 25% of the total. This is the same size as the new-proposal group. It seems entirely possible that the active new-proposal group could suppress the formation of an opposition group its own size.
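The arithmetic in that hand-waving argument is easy to check directly (this verifies only the numbers, not the Byzantine fault-tolerance bound itself):

```python
total = 1.0
committed = 0.25                  # the committed minority from the study
remainder = total - committed     # the 75% outside it

# One third of that remainder is the share that, on this hand-waving
# reading of the 1/3 Byzantine bound, must be kept from opposing.
blocking_share = remainder / 3

print(blocking_share)             # 0.25: exactly the size of the minority
```

So the two groups come out the same size, which is all the coincidence the argument rests on.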

vas pup • June 12, 2018 11:52 AM

@Alyer Babtu • June 11, 2018 5:13 PM.
You have a good point that it's not size that matters but how active this fraction of the group is. E.g. (on the negative side) the Nazis were initially a very small group within the German population, but their level of internal strength let them gradually win democratic elections to the Reichstag, meaning they affected the attitude of most of the population.

vas pup • June 15, 2018 8:44 AM

When we’re put under pressure, our brains can suddenly process information much faster – but only in certain situations, says neuroscientist Tali Sharot:
http://www.bbc.com/future/story/20180613-why-stressed-minds-are-better-at-processing-things
"Research has shown that people are normally quite optimistic – they will ignore the bad news and embrace the good. This is what happened when the firefighters were relaxed; but when they were under stress, a different pattern emerged. Under these conditions, they became hyper-vigilant to any bad news we gave them, even when it had nothing to do with their job (such as learning that the likelihood of card fraud was higher than they’d thought), and altered their beliefs in response. In contrast, stress didn’t change how they responded to good news (such as learning that the likelihood of card fraud was lower than they’d thought).
When you experience stressful events, whether personal (waiting for a medical diagnosis) or public (political turmoil), a physiological change is triggered that can cause you to take in any sort of warning and become fixated on what might go wrong. A study using brain imaging to look at the neural activity of people under stress revealed that this ‘switch’ was related to a sudden boost in a neural signal important for learning (known as a prediction error), specifically in response to unexpected signs of danger (such as faces expressing fear). This signal relies on dopamine – a neurotransmitter found in the brain – and, under stress, dopamine function is altered by another molecule called corticotropin-releasing factor.
So a ‘neural switch’ that automatically increases or decreases your ability to process warnings in response to changes in your environment might be useful. In fact, people with clinical depression and anxiety seem unable to switch away from a state in which they absorb all the negative messages around them.
You don’t even need to be in the same room with someone for their emotions to influence your behaviour. Studies show that if you observe positive feeds on social media, such as images of a pink sunset, you are more likely to post uplifting messages yourself. If you observe negative posts, such as complaints about a long queue at the coffee shop, you will in turn create more negative posts.
[!!!!!!]The fact that stress increases the likelihood that we will focus more on alarming messages, together with the fact that it spreads like a tsunami, can create collective fear that is not always justified. This is because after a stressful public event, such as a terrorist attack or political turmoil, there is often a wave of alarming information in traditional and social media, which individuals absorb well, but that can exaggerate existing danger."

Clive Robinson • June 15, 2018 11:58 AM

@ vas pup,

When we’re put under pressure, our brains can suddenly process information much faster – but only in certain situations, says neuroscientist Tali Sharot

We may get results faster, but it does not actually mean we are processing things any faster, we may just be thinking about them differently.

Let me explain with the difference between walking through a crowded area and running through a crowded area.

When we walk through, we are moving more or less at the same pace as everyone else, thus we have to try to predict where people will be as we move, step by step, based on their last step. Thus we do a lot of subconscious processing.

However, when we run through a crowd we are moving three to six times faster, thus we can cheat on the thinking by treating people who are walking as effectively stationary. Thus it is only our own movement we need to calculate. This is also helped by the fact that most people who have someone running towards them tend to slow down or stop, as their processing in effect overloads.

Thus, whilst we are getting from A to B way faster, we can actually use less processing than walking from A to B, which kind of appears to be a win-win. But the risk of injury is way higher, which means attention to detail needs to be higher, and most people cannot concentrate or focus for more than brief periods without repeated training and practice.

