Measuring the Rationality of Security Decisions

Interesting research: "Dancing Pigs or Externalities? Measuring the Rationality of Security Decisions":

Abstract: Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant's wage. We find that more than 50% of our participants made rational (e.g., utility optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users' decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users' awareness of risks and context (R2=0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a "one-size-fits-all" emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
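The rationality criterion the abstract describes can be sketched as a simple expected-utility comparison: adopting the security behavior is utility-optimal only when its time cost (time spent executing it, priced at the participant's wage) is less than the expected loss it prevents (compromise probability times the amount at risk). A minimal sketch, using a hypothetical function and made-up numbers, not the paper's actual model or data:

```python
def is_adoption_rational(time_cost_s, hourly_wage, p_compromise, balance,
                         risk_reduction=1.0):
    """Return True if adopting the security behavior (e.g. 2FA) is
    utility-optimal under a simple expected-value model.

    time_cost_s:    seconds spent executing the behavior
    hourly_wage:    participant's estimated wage, per hour
    p_compromise:   probability of account compromise without the behavior
    risk_reduction: fraction of compromise risk the behavior removes
    """
    # Cost: the behavior's time, valued at the participant's wage.
    cost = (time_cost_s / 3600.0) * hourly_wage
    # Benefit: the expected loss the behavior is assumed to prevent.
    expected_benefit = p_compromise * risk_reduction * balance
    return expected_benefit > cost

# Low risk: a 300-second ritual at $30/hr costs $2.50, more than the
# $1.00 expected loss from a 1% compromise risk on a $100 balance.
print(is_adoption_rational(300, 30.0, 0.01, 100.0))  # False
# Higher risk: at a 50% compromise probability the expected loss is
# $50.00, so the same behavior is worth adopting.
print(is_adoption_rational(300, 30.0, 0.50, 100.0))  # True
```

This also illustrates the abstract's closing point: whether adoption is rational depends on each user's risk and cost, which is why a one-size-fits-all recommendation can produce losses that targeted adoption avoids.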

Posted on August 7, 2018 at 6:40 AM • 12 Comments

Comments

vas pup • August 7, 2018 9:02 AM

Tag - academic paper:
@Clive and other technical gurus - you should love this:
AI device identifies objects at the speed of light
The 3D-printed artificial neural network can be used in medicine, robotics and SECURITY:

https://www.sciencedaily.com/releases/2018/08/180802130750.htm

"Electrical and computer engineers have created a physical artificial neural network -- a device modeled on how the human brain works -- that can analyze large volumes of data and identify objects at the actual speed of light. The device was created using a 3D printer.

The UCLA-developed device gets a head start. Called a "diffractive deep neural network," it uses the light bouncing from the object itself to identify that object in as little time as it would take for a computer to simply "see" the object. The UCLA device does not need advanced computing programs to process an image of the object and decide what the object is after its optical sensors pick it up. And no energy is consumed to run the device because it only uses diffraction of light.

The process of creating the artificial neural network began with a computer-simulated design. Then, the researchers used a 3D printer to create very thin, 8 centimeter-square polymer wafers. Each wafer has uneven surfaces, which help diffract light coming from the object in different directions. The layers look opaque to the eye but submillimeter-wavelength terahertz frequencies of light used in the experiments can travel through them. And each layer is composed of tens of thousands of artificial neurons -- in this case, tiny pixels that the light travels through.

Together, a series of pixelated layers functions as an "optical network" that shapes how incoming light from the object travels through them. The network identifies an object because the light coming from the object is mostly diffracted toward a single pixel that is assigned to that type of object.

The researchers then trained the network using a computer to identify the objects in front of it by learning the pattern of diffracted light each object produces as the light from that object passes through the device. The "training" used a branch of artificial intelligence called deep learning, in which machines "learn" through repetition and over time as patterns emerge."

echo • August 7, 2018 9:49 AM

At the time of writing I had only read the abstract. This sounds like a useful paper. I suspect it may also be used to determine "reasonableness" and "discrimination". Where institutional knowledge or guidelines are inadequate, "past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users' awareness of risks and context" may help fill out some of the discussion.

Just so I could double check, I contacted a barristers' chambers. The short version is they believed the threats I had received were nonsense and that the Ombudsman would be handling this issue as part of their process.

The reason I am causing a stink is that a document needs processing in a way which avoids forced disclosure and does not breach case law which says that extreme confidentiality is required. In theory, though not always in practice, this means that the document must be processed in a secure system with authorisation required and document tracking, and not be placed at risk of the data being misused or leaked. One-size-fits-all systems which by design have access and security thresholds built in may, perversely, not be geared towards processing this document.

There are theoretical and practical security issues which attach to this document if security is breached hence the impasse.

I'm really curious how the interaction of overlapping security can lead to less security. I'm not sure this is a question the paper addresses.

jamez • August 7, 2018 11:45 AM

i don't use 2fa at my bank because i don't want their dumb app and i don't want them annoying me on my cell phone about "offers that may interest me." so i stick to my ridiculously long and complex password (thank you, keepass).

echo • August 7, 2018 12:57 PM

@AlanS

Thanks for the interesting paper. Skimming through - on "optimal" because of "information costs" (page 380). This would be in direct contravention of various case law about "best effort" and the Equality Act. I believe this is where a fair number of rational systems (aka "rational decisions", aka policy) fall down. In theory the bigger system, such as the courts and statutory bodies and experts and so forth, is supposed to catch this, but it doesn't always.

I think, in a convoluted way, the paper cited by Bruce and the additional paper cited by yourself are worth reading together. I believe it helps (kind of) address my question about overlapping systems. Not that I am any the wiser, but this is a proof of a kind, if this makes sense.

Roger A. Grimes • August 8, 2018 9:57 AM

I only skimmed the paper, but looking at over 30 years of computer security risk decision-making, I can only conclude that most humans are very crappy at making critical security decisions. They don't put the right things in the right amounts against the right things... most of the time. They buy less effective things while ignoring or underserving the very things (like anti-social-engineering training and better patching) that would have the biggest impact on risk reduction. This problem bothered me so much I wrote a whitepaper and book about it. Bruce has written many books about humans making very poor risk decisions, even when the data is known.

AlanS • August 8, 2018 10:53 AM

Behavioural and neoclassical economics are social science without the social/cultural. As Streeck notes, following the political economy of Adam Smith:

In other words, the complex edifice of human societies, including the modern economy, rests on social mechanisms of differentiation that have nothing to do with a common human nature, or only in so far as difference is made possible by the one natural commonality of humans, which is their sociability-cum-plasticity—their natural capacity, and indeed their need, to be formed into competent actors by a process of socialization. What is common to humans, according to Smith, is above all that their inborn instincts are not enough to instruct their behaviour. To the contrary, they are in need of instruction as human behaviour, certainly where it matters for the organization and cohesion of complex societies, is not governed by a pre-installed, biologically hard-wired programme but is and must be culturally and socially developed. What matters for and in society and economy is not pre-existing commonality but socially produced difference unfolding in the context of a historical ‘division of labour’ or, as we say today, a social structure. Socialized individuals, i.e. individuals competent to act, can therefore be understood only in relation to other individuals and to the society that has brought them up—so that, if you look for their ‘nature’, all you will find is an open, undefined, unfinished set of potentialities in need of elaboration and cultivation in the company of others.

If you want to understand how people think and act you need to understand behaviour as meaningful behavior unfolding within existing contexts of meaning and social structures. That requires doing ethnography, engaging in 'thick description'. Here is an example of a better paper: Security Dilemma: Healthcare Clinicians at Work. This sort of approach to understanding the design and use of technologies has been around for ages. See for example the work done by Lucy Suchman and others at Xerox PARC which started in the late 1970s.

echo • August 8, 2018 11:14 AM

@AlanS

If you want to understand how people think and act you need to understand behaviour as meaningful behavior unfolding within existing contexts of meaning and social structures.

Yes, this is often lost in discussion.

The NYT carried an article on doctors with disabilities and why this matters. In the UK we have had a number of media articles and the odd report which are less candid but basically suggest the same thing: that less ego, more empathy, and the ability to understand different perspectives are important. This is a problem which affects medical practice in general but obviously impacts some people more than others, which is why articles on disability and other discriminated-against groups are important, because they can often catch an issue very early or expose it when it would otherwise be hidden.

https://www.nytimes.com/2017/07/11/upshot/doctors-with-disabilities-why-theyre-important.html
Doctors With Disabilities: Why They’re Important

AlanS • August 8, 2018 11:52 AM

@Echo

I should have noted that the PDF I linked to included three papers from a panel discussion. Streeck's is at the end.

The part you cite is in Etzioni's paper, which appears to favor a behavioral economics (BE) approach over a neoclassical economics (NE) approach. Streeck is criticizing both. BE is a minor modification to NE to rescue its basic assumptions about the nature of what it is to be human. The problem is that the basic assumptions are fundamentally wrong. Long ago the utilitarians and the marginalists killed their Galileo and Copernicus to return to flat-earth thinking. Its true significance is that it is constitutive of social (power) relationships. The giveaway is their "libertarian paternalism".

justinacolmena • August 13, 2018 12:48 PM

We find that more than 50% of our participants made rational (e.g., utility optimal) decisions ...

And meanwhile the remaining minority (i.e., just under 50%) "made poor decisions" and have criminal or mental health records hanging over them for the rest of their lives.

These "scientists" scarcely deign to conceal the same old white nationalist statistical bell curve nonsense which they have used since before World War One in Austria to justify the purported intellectual superiority of the white race.

There is too much experimentation being done without the informed consent of human subjects.

But then again, information requires education, and not the sort that these doctors are willing to just "give away."

Clive Robinson • August 13, 2018 5:23 PM

@ justinacolmena,

These "scientists" scarcely deign to conceal the same old white nationalist statistical bell curve nonsense which they have used since before World War One in Austria to justify the purported intellectual superiority of the white race.

Actually it kind of started in France in 1904, with the IQ test of Alfred Binet and Theodore Simon.

However, they were by no means the first to consider what we would now call "socioeconomic stereotyping". You can look back to the pre-Victorian era, when phrenology was invented around 1800 by Franz Joseph Gall, a German physician. It was very popular, as computers are today, for supposedly identifying criminals and those of baser lusts and proclivities.

Phrenology is now totally discredited, and those "AI" computer systems should be as well. Neither even makes it as a pseudo-science. All they really show is just how cognitively challenged those who believe in them are, and how fraudulent those pushing the systems are.

In a way they are a new form of clothing for peacock emperors to parade their peccadillos in full public view...

One of the things children are not taught when studying 20th-century history is just how discriminatory not just WASP but WEC [1] countries were. Sweden, for instance, was castrating gypsies, Russia was into pogroms that originated from France, and talk of eugenics was all the rage among the 1% of the 1% of Americans. The British, however, were busy slaughtering ethnic minorities in Africa and Australia, much of which continued after WWII.

The fact that these attitudes still exist, and worse, appear to be resurgent with both moderate and extreme political parties on either side of the center, should be a concern to all. History shows us that both bloodshed and civil war result, putting undesirable authoritarians in power, with a significant authoritarian-follower guard labour corps more or less free to commit atrocities wherever and whenever they please.

[1] White Anglo-Saxon Protestant and White European Catholic. However, as history shows, WASP and WEC were far from the only people doing this. In the first half of the 20th century it was endemic in just about any part of the world where industrialisation had started to replace traditional agrarian societies. In the East after WWII there was a significant kickback against industry and education, and the "killing fields" are just one of many witnesses to the atrocities committed by all sides.

cnd • August 16, 2018 8:01 AM

Users are not the problem:-

https://www.schneier.com/blog/archives/2016/10/security_design.html

and 2FA is not security:-

https://www.schneier.com/blog/archives/2005/03/the_failure_of.html
https://www.schneier.com/blog/archives/2012/02/the_failure_of_2.html
https://www.schneier.com/blog/archives/2012/12/bypassing_two-f.html

2FA was a great idea, when it was invented, in 1984, which was before the web even existed. Today, it's completely irrational.

The *only* thing 2FA is good for is working out whether or not the security guru you're talking to actually uses their brain. If they're pro-2FA, they're an imposter (not a guru).

So if there were any actual security-adept users in this study, then the results are going to be false.

It's like asking soldiers to wear an egg on their head so they don't get shot, then trying to make sense of their decisions in battlefield conditions.

The fact they're even doing this study in the first place shows they have no understanding of the problem or how to fix it. They're just making things worse: perpetuating the myth that it's OK to blame the victims, and furthering the ignorance of providers who should be using modern solutions to protect users, not practicing voodoo and pointless education and crossing their fingers...



Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.