Entries Tagged "psychology of security"


The Security Vulnerabilities of Message Interoperability

Jenny Blessing and Ross Anderson have evaluated the security of systems designed to allow the various Internet messaging platforms to interoperate with each other:

The Digital Markets Act ruled that users on different platforms should be able to exchange messages with each other. This opens up a real Pandora’s box. How will the networks manage keys, authenticate users, and moderate content? How much metadata will have to be shared, and how?

In our latest paper, One Protocol to Rule Them All? On Securing Interoperable Messaging, we explore the security tensions, the conflicts of interest, the usability traps, and the likely consequences for individual and institutional behaviour.

Interoperability will vastly increase the attack surface at every level in the stack, from the cryptography up through usability to commercial incentives and the opportunities for government interference.

It’s a good idea in theory, but the overall security will likely end up being the worst of all the platforms’ security.

Posted on March 29, 2023 at 7:03 AM

Details of a Computer Banking Scam

This is a longish video that describes a profitable computer banking scam that’s run out of call centers in places like India. There’s a lot of fluff about glitterbombs and the like, but the details are interesting. The scammers convince the victims to give them remote access to their computers, and then convince them that they’ve mistyped a dollar amount and received a large refund that they didn’t deserve. Then they convince the victims to send cash to a drop site, where a money mule retrieves it and forwards it to the scammers.

I found it interesting for several reasons. One, it illustrates the complex business nature of the scam: there are a lot of people doing specialized jobs in order for it to work. Two, it clearly shows the psychological manipulation involved, and how it preys on the unsophisticated and vulnerable. And three, it’s an evolving tactic that gets around banks increasingly flagging and blocking suspicious electronic transfers.

Posted on March 22, 2021 at 6:15 AM

Security and Human Behavior (SHB) 2020

Today is the second day of the thirteenth Workshop on Security and Human Behavior. It’s being hosted by the University of Cambridge, which in today’s world means we’re all meeting on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. We’ve done pretty well translating this format to video chat, including using the random breakout feature to put people into small groups.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, and twelfth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Posted on June 19, 2020 at 2:09 PM

Andy Ellis on Risk Assessment

Andy Ellis, the CSO of Akamai, gave a great talk about the psychology of risk at the Business of Software conference this year.

I’ve written about this before.

One quote of mine: “The problem is our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008.”

EDITED TO ADD (12/13): Epigenetics and the human brain.

Posted on December 6, 2019 at 6:55 AM

Science Fiction Writers Helping Imagine Future Threats

The French army is going to put together a team of science fiction writers to help imagine future threats.

Leaving aside the question of whether science fiction writers are better or worse at envisioning nonfictional futures, this isn’t new. The US Department of Homeland Security did the same thing over a decade ago, and I wrote about it back then:

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats—which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Science fiction writers are creative, and creativity helps in any future scenario brainstorming. But please, keep the people who actually know science and technology in charge.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

Posted on July 23, 2019 at 6:27 AM

Security and Human Behavior (SHB) 2019

Today is the second day of the twelfth Workshop on Security and Human Behavior, which I am hosting at Harvard University.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions—all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks—remotely, because he was denied a visa earlier this year.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, and eleventh SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Posted on June 6, 2019 at 2:16 PM

Programmers Who Don't Understand Security Are Poor at Security

A university study confirmed the obvious: if you pay a random bunch of freelance programmers a small amount of money to write security software, they’re not going to do a very good job at it.

In an experiment that involved 43 programmers hired via the Freelancer.com platform, University of Bonn academics have discovered that developers tend to take the easy way out and write code that stores user passwords in an unsafe manner.

For their study, the German academics asked a group of 260 Java programmers to write a user registration system for a fake social network.

Of the 260 developers, only 43 took up the job, which involved using technologies such as Java, JSF, Hibernate, and PostgreSQL to create the user registration component.

Of the 43, the academics paid half of the group €100 and the other half €200, to determine if higher pay made a difference in the implementation of password security features.

Further, they divided the developer group a second time, prompting half of the developers to store passwords in a secure manner and leaving the other half to store passwords however they preferred. This formed four groups: developers paid €100 and prompted to use a secure password storage method (P100), developers paid €200 and prompted to do the same (P200), developers paid €100 but not prompted about password security (N100), and developers paid €200 but not prompted about password security (N200).
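For context, “storing passwords in a secure manner” means salting and hashing with a deliberately slow function, rather than saving plaintext or a fast unsalted hash. Here’s a minimal sketch of what that looks like in the study’s own stack (Java), using the JDK’s built-in PBKDF2; the iteration count and the salt-plus-hash storage format are my own illustrative choices, not the study’s requirements:

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordStorage {

    // Illustrative parameters; real deployments should follow current guidance.
    private static final int ITERATIONS = 210_000;
    private static final int KEY_BITS = 256;
    private static final int SALT_BYTES = 16;

    // Hash a password under a fresh random salt; store "salt:hash".
    static String hashPassword(char[] password) throws Exception {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
        SecretKeyFactory factory =
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] hash = factory.generateSecret(spec).getEncoded();
        Base64.Encoder enc = Base64.getEncoder();
        return enc.encodeToString(salt) + ":" + enc.encodeToString(hash);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(hashPassword("hunter2".toCharArray()));
    }
}
```

The point is that doing this takes only a few lines of standard-library code; the developers in the study who stored plaintext weren’t blocked by difficulty so much as by incentives.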

I don’t know why anyone would expect this group of people to implement a good secure password system. Look at how they were hired. Look at the scope of the project. Look at what they were paid. I’m sure they grabbed the first thing they found on GitHub that did the job.

I’m not very impressed with the study or its conclusions.

Posted on March 27, 2019 at 6:37 AM

Measuring the Rationality of Security Decisions

Interesting research: “Dancing Pigs or Externalities? Measuring the Rationality of Security Decisions”:

Abstract: Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant’s wage. We find that more than 50% of our participants made rational (e.g., utility optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users’ decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users’ awareness of risks and context (R² = 0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a “one-size-fits-all” emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
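The rationality test in the abstract boils down to a cost-benefit inequality: adopting two-factor authentication is utility-optimal when the expected loss it averts exceeds the time cost of using it, valued at the participant’s wage. A toy sketch of that comparison (all names and numbers here are my own, not the paper’s):

```java
public class RationalityCheck {

    // Adopting 2FA is "rational" (utility-optimal) when the expected loss
    // averted exceeds the time cost of using it, valued at the user's wage.
    static boolean shouldAdopt2fa(double accountValue,
                                  double riskWithout2fa,  // annual probability
                                  double riskWith2fa,     // annual probability
                                  double secondsPerLogin,
                                  double loginsPerYear,
                                  double hourlyWage) {
        double expectedLossAverted =
                accountValue * (riskWithout2fa - riskWith2fa);
        double timeCost =
                (secondsPerLogin * loginsPerYear / 3600.0) * hourlyWage;
        return expectedLossAverted > timeCost;
    }

    public static void main(String[] args) {
        // Hypothetical participant: $500 balance, 5% annual risk without 2FA,
        // 1% with it, 20 seconds per login, 100 logins/year, $15/hour wage.
        // Loss averted: 500 * 0.04 = $20; time cost: (2000/3600) * 15 ≈ $8.33.
        System.out.println(shouldAdopt2fa(500, 0.05, 0.01, 20, 100, 15)); // true
    }
}
```

Note how sensitive the answer is to the inputs: halve the account value or double the login friction and the same rule flips to “don’t bother,” which is why the paper’s finding that adoption pays off only for higher-risk or lower-cost users shouldn’t be surprising.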

Posted on August 7, 2018 at 6:40 AM

Conservation of Threat

Here’s some interesting research about how we perceive threats. Basically, as the environment becomes safer, we manufacture new threats. From an essay about the research:

To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task: to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully designed by researchers to range from very intimidating to very harmless.

As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered “threats” depended on how many threats they had seen lately.
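To see how a purely relative judgment rule can produce this effect, here’s a toy simulation of my own construction (not the researchers’ model): a judge whose category boundary drifts toward the average of recently seen faces will start labeling objectively harmless faces as threatening once threatening faces become rare.

```java
import java.util.Random;

public class ConceptCreep {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double boundary = 0.5;        // category boundary on a 0..1 threat scale
        double learningRate = 0.05;   // how fast the boundary tracks recent faces
        int falseAlarms = 0;          // harmless faces judged "threatening"
        for (int trial = 1; trial <= 2000; trial++) {
            // First 1000 trials: 50% threatening faces; afterwards only 6%.
            double pThreat = trial <= 1000 ? 0.5 : 0.06;
            boolean actuallyThreatening = rng.nextDouble() < pThreat;
            double face = actuallyThreatening
                    ? 0.6 + 0.4 * rng.nextDouble()   // threat score 0.6..1.0
                    : 0.4 * rng.nextDouble();        // threat score 0.0..0.4
            if (face > boundary && !actuallyThreatening) falseAlarms++;
            // Relative judgment: the boundary drifts toward recent stimuli.
            boundary += learningRate * (face - boundary);
            if (trial % 500 == 0) {
                System.out.printf(
                        "trials %d-%d: boundary %.2f, false alarms %d%n",
                        trial - 499, trial, boundary, falseAlarms);
                falseAlarms = 0;
            }
        }
    }
}
```

Under these made-up numbers, the boundary sits near 0.5 while threats are common and harmless faces are almost never flagged; once threats drop to 6%, the boundary drifts down toward the harmless faces’ range and false alarms climb, which is the expansion the researchers observed.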

This has a lot of implications for security systems where humans have to make judgments about threat and risk: TSA agents, police noticing “suspicious” activities, “see something, say something” campaigns, and so on.

The academic paper.

Posted on June 29, 2018 at 9:44 AM

