Entries Tagged "economics of security"


Research on Human Honesty

New research from Science: “Civic honesty around the globe”:

Abstract: Civic honesty is essential to social capital and economic development, but is often in conflict with material self-interest. We examine the trade-off between honesty and self-interest using field experiments in 355 cities spanning 40 countries around the globe. We turned in over 17,000 lost wallets with varying amounts of money at public and private institutions, and measured whether recipients contacted the owner to return the wallets. In virtually all countries citizens were more likely to return wallets that contained more money. Both non-experts and professional economists were unable to predict this result. Additional data suggest our main findings can be explained by a combination of altruistic concerns and an aversion to viewing oneself as a thief, which increase with the material benefits of dishonesty.

I am surprised, too.

Posted on July 5, 2019 at 6:15 AM

Security and Human Behavior (SHB) 2019

Today is the second day of the twelfth Workshop on Security and Human Behavior, which I am hosting at Harvard University.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions—all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks—remotely, because he was denied a visa earlier this year.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, and eleventh SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Posted on June 6, 2019 at 2:16 PM

The Cost of Cybercrime

Really interesting paper calculating the worldwide cost of cybercrime:

Abstract: In 2012 we presented the first systematic study of the costs of cybercrime. In this paper, we report what has changed in the seven years since. The period has seen major platform evolution, with the mobile phone replacing the PC and laptop as the consumer terminal of choice, with Android replacing Windows, and with many services moving to the cloud. The use of social networks has become extremely widespread. The executive summary is that about half of all property crime, by volume and by value, is now online. We hypothesised in 2012 that this might be so; it is now established by multiple victimisation studies. Many cybercrime patterns appear to be fairly stable, but there are some interesting changes. Payment fraud, for example, has more than doubled in value but has fallen slightly as a proportion of payment value; the payment system has simply become bigger, and slightly more efficient. Several new cybercrimes are significant enough to mention, including business email compromise and crimes involving cryptocurrencies. The move to the cloud means that system misconfiguration may now be responsible for as many breaches as phishing. Some companies have suffered large losses as a side-effect of denial-of-service worms released by state actors, such as NotPetya; we have to take a view on whether they count as cybercrime. The infrastructure supporting cybercrime, such as botnets, continues to evolve, and specific crimes such as premium-rate phone scams have evolved some interesting variants. The overall picture is the same as in 2012: traditional offences that are now technically ‘computer crimes’ such as tax and welfare fraud cost the typical citizen in the low hundreds of Euros/dollars a year; payment frauds and similar offences, where the modus operandi has been completely changed by computers, cost in the tens; while the new computer crimes cost in the tens of cents. Defending against the platforms used to support the latter two types of crime costs citizens in the tens of dollars. Our conclusions remain broadly the same as in 2012: it would be economically rational to spend less in anticipation of cybercrime (on antivirus, firewalls, etc.) and more on response. We are particularly bad at prosecuting criminals who operate infrastructure that other wrongdoers exploit. Given the growing realisation among policymakers that crime hasn’t been falling over the past decade, merely moving online, we might reasonably hope for better funded and coordinated law-enforcement action.

Richard Clayton gave a presentation on this yesterday at WEIS. His final slide contained a summary.

  • Payment fraud is up, but credit card sales are up even more—so we’re winning.
  • Cryptocurrencies are enabling new scams, but the big money is still being lost in more traditional investment fraud.
  • Telecom fraud is down, basically because Skype is free.
  • Anti-virus fraud has almost disappeared, but tech support scams are growing very rapidly.
  • The big money is still in tax fraud, welfare fraud, VAT fraud, and so on.
  • We spend more money on cyber defense than we do on the actual losses.
  • Criminals largely act with impunity. They don’t believe they will get caught, and mostly that’s correct.

Bottom line: the technology has changed a lot since 2012, but the economic considerations remain unchanged.
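As a back-of-the-envelope illustration, here is a sketch using the rough per-citizen orders of magnitude from the abstract. The specific dollar figures are placeholders I picked to match those orders of magnitude, not estimates from the paper:

```python
# Rough per-citizen annual cybercrime costs, using the orders of magnitude
# from the abstract. The dollar amounts are illustrative placeholders.
costs = {
    "traditional fraud moved online (tax, welfare, VAT)": 300.00,  # "low hundreds"
    "payment fraud and similar": 30.00,                            # "tens"
    "new computer crimes": 0.30,                                   # "tens of cents"
}
defense = 30.00  # "tens of dollars" on antivirus, firewalls, etc.

direct_losses = sum(costs.values())
print(f"Direct losses: ~${direct_losses:.2f} per citizen per year")
print(f"Defense spend: ~${defense:.2f} per citizen per year")

# Defense spending rivals the combined losses from the two computer-mediated
# categories -- the paper's case for spending less on anticipation and more
# on response.
computer_mediated = costs["payment fraud and similar"] + costs["new computer crimes"]
print(f"Defense vs. computer-mediated losses: {defense / computer_mediated:.1f}x")
```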

Posted on June 4, 2019 at 6:06 AM

The Concept of “Return on Data”

This law review article by Noam Kolt, titled “Return on Data,” proposes an interesting new way of thinking of privacy law.

Abstract: Consumers routinely supply personal data to technology companies in exchange for services. Yet, the relationship between the utility (U) consumers gain and the data (D) they supply—”return on data” (ROD)—remains largely unexplored. Expressed as a ratio, ROD = U / D. While lawmakers strongly advocate protecting consumer privacy, they tend to overlook ROD. Are the benefits of the services enjoyed by consumers, such as social networking and predictive search, commensurate with the value of the data extracted from them? How can consumers compare competing data-for-services deals? Currently, the legal frameworks regulating these transactions, including privacy law, aim primarily to protect personal data. They treat data protection as a standalone issue, distinct from the benefits which consumers receive. This article suggests that privacy concerns should not be viewed in isolation, but as part of ROD. Just as companies can quantify return on investment (ROI) to optimize investment decisions, consumers should be able to assess ROD in order to better spend and invest personal data. Making data-for-services transactions more transparent will enable consumers to evaluate the merits of these deals, negotiate their terms and make more informed decisions. Pivoting from the privacy paradigm to ROD will both incentivize data-driven service providers to offer consumers higher ROD, as well as create opportunities for new market entrants.
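The core ratio is simple enough to sketch. Here is a minimal illustration of ROD = U / D, with the utility and data valuations entirely invented; how to actually quantify U and D is the hard problem the article grapples with:

```python
# Minimal sketch of the "return on data" ratio from the abstract: ROD = U / D.
# All dollar valuations are hypothetical; quantifying the utility a consumer
# gains (U) and the value of the data extracted (D) is the open problem.

def return_on_data(utility_dollars: float, data_value_dollars: float) -> float:
    """ROD expressed as a ratio, per the abstract: ROD = U / D."""
    return utility_dollars / data_value_dollars

# Two hypothetical data-for-services deals a consumer might want to compare:
deals = {
    "social network A": return_on_data(utility_dollars=120.0, data_value_dollars=240.0),
    "search engine B": return_on_data(utility_dollars=150.0, data_value_dollars=60.0),
}

for name, rod in sorted(deals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ROD = {rod:.2f}")  # higher means a better data-for-services deal
```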

Posted on May 20, 2019 at 1:30 PM

Prices for Zero-Day Exploits Are Rising

Companies are willing to pay ever-increasing amounts for good zero-day exploits against hard-to-break computers and applications:

On Monday, market-leading exploit broker Zerodium said it would pay up to $2 million for zero-click jailbreaks of Apple’s iOS, $1.5 million for one-click iOS jailbreaks, and $1 million for exploits that take over secure messaging apps WhatsApp and iMessage. Previously, Zerodium was offering $1.5 million, $1 million, and $500,000 for the same types of exploits respectively. The steeper prices indicate not only that the demand for these exploits continues to grow, but also that reliably compromising these targets is becoming increasingly hard.

Note that these prices are for offensive uses of the exploit. Zerodium—and others—sell exploits to companies who make surveillance tools and cyber-weapons for governments. Many companies have bug bounty programs for those who want the exploit used for defensive purposes—i.e., fixed—but they pay orders of magnitude less. This is a problem.

Back in 2014, Dan Geer said that the US should corner the market on software vulnerabilities:

“There is no doubt that the U.S. Government could openly corner the world vulnerability market,” said Geer, “that is, we buy them all and we make them all public. Simply announce ‘Show us a competing bid, and we’ll give you [10 times more].’ Sure, there are some who will say ‘I hate Americans; I sell only to Ukrainians,’ but because vulnerability finding is increasingly automation-assisted, the seller who won’t sell to the Americans knows that his vulns can be rediscovered in due course by someone who will sell to the Americans who will tell everybody, thus his need to sell his product before it outdates is irresistible.”

I don’t know about the 10x, but in theory he’s right. There’s no other way to solve this.
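Geer’s argument is easy to model. Here is a sketch of the seller’s decision, with the prices and the rediscovery probability as assumed inputs; the point is that automation-assisted rediscovery makes an unsold exploit a depreciating asset, which pushes even a reluctant seller toward the open buyer:

```python
# Sketch of the exclusive seller's dilemma under Geer's proposal.
# Every number here is an assumption for illustration only.

exclusive_price = 1_000_000.0   # what a private buyer pays for an exclusive vuln
us_multiplier = 10.0            # Geer's "10 times more" open offer
p_rediscovery = 0.4             # assumed chance per year that someone else finds
                                # the same vuln, sells it to the US, and it goes public

def expected_exclusive_value(years_to_close_deal: int) -> float:
    """Holding out for an exclusive sale risks rediscovery each year of delay,
    after which the vuln is published and worth nothing."""
    survival = (1 - p_rediscovery) ** years_to_close_deal
    return survival * exclusive_price

us_offer = us_multiplier * exclusive_price
for years in (0, 1, 2, 3):
    ev = expected_exclusive_value(years)
    print(f"deal closes in {years}y: exclusive EV ${ev:,.0f} vs. US offer ${us_offer:,.0f}")
```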

Posted on January 17, 2019 at 6:33 AM

Security Vulnerabilities in Cell Phone Systems

Good essay on the inherent vulnerabilities in the cell phone standards and the market barriers to fixing them.

So far, industry and policymakers have largely dragged their feet when it comes to blocking cell-site simulators and SS7 attacks. Senator Ron Wyden, one of the few lawmakers vocal about this issue, sent a letter in August encouraging the Department of Justice to “be forthright with federal courts about the disruptive nature of cell-site simulators.” No response has ever been published.

The lack of action could be because it is a big task—there are hundreds of companies and international bodies involved in the cellular network. The other reason could be that intelligence and law enforcement agencies have a vested interest in exploiting these same vulnerabilities. But law enforcement has other effective tools that are unavailable to criminals and spies. For example, the police can work directly with phone companies, serving warrants and Title III wiretap orders. In the end, eliminating these vulnerabilities is just as valuable for law enforcement as it is for everyone else.

As it stands, there is no government agency that has the power, funding and mission to fix the problems. Large companies such as AT&T, Verizon, Google and Apple have not been public about their efforts, if any exist.

Posted on January 10, 2019 at 5:52 AM

Machine Learning to Detect Software Vulnerabilities

No one doubts that artificial intelligence (AI) and machine learning (ML) will transform cybersecurity. We just don’t know how, or when. While the literature generally focuses on the different uses of AI by attackers and defenders, and the resultant arms race between the two, I want to talk about software vulnerabilities.

All software contains bugs. The reason is basically economic: The market doesn’t want to pay for quality software. With a few exceptions, such as the space shuttle, the market prioritizes fast and cheap over good. The result is that any large modern software package contains hundreds or thousands of bugs.

Some percentage of bugs are also vulnerabilities, and a percentage of those are exploitable vulnerabilities, meaning an attacker who knows about them can attack the underlying system in some way. And some percentage of those are discovered and used. This is why your computer and smartphone software is constantly being patched; software vendors are fixing bugs that are also vulnerabilities that have been discovered and are being used.
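To make that funnel concrete, here is a sketch with invented rates; the real percentages are unknown and vary wildly by codebase:

```python
# Sketch of the bug -> vulnerability -> exploit funnel described above.
# Every count and rate here is an invented placeholder, not an empirical estimate.

bugs = 10_000                  # bugs in a large modern software package
p_vulnerability = 0.05         # fraction of bugs that are vulnerabilities
p_exploitable = 0.20           # fraction of vulnerabilities that are exploitable
p_discovered_and_used = 0.10   # fraction of those actually found and used

vulns = bugs * p_vulnerability
exploitable = vulns * p_exploitable
attacked = exploitable * p_discovered_and_used

print(f"{bugs} bugs -> {vulns:.0f} vulnerabilities "
      f"-> {exploitable:.0f} exploitable -> {attacked:.0f} used in attacks")
```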

Everything would be better if software vendors found and fixed all bugs during the design and development process, but, as I said, the market doesn’t reward that kind of delay and expense. AI, and machine learning in particular, has the potential to forever change this trade-off.

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic—and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.
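As a toy illustration of “going through code line by line,” here is a sketch that scores C source lines against a hand-written list of dangerous patterns. A real ML system would learn its features from labeled vulnerable code rather than have them hard-coded; this only shows the shape of the task:

```python
import re

# Toy line-by-line vulnerability scanner. A real ML system would learn its
# features from labeled code; this hard-coded pattern list is illustrative only.
RISKY_PATTERNS = {
    r"\bgets\s*\(": "unbounded read (gets)",
    r"\bstrcpy\s*\(": "unbounded copy (strcpy)",
    r"\bsprintf\s*\(": "unbounded format (sprintf)",
    r"\bsystem\s*\(": "shell command execution (system)",
}

def scan(source: str) -> list:
    """Return (line number, warning) pairs for lines matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

sample = "char buf[16];\ngets(buf);\nstrcpy(buf, user_input);\n"
for lineno, warning in scan(sample):
    print(f"line {lineno}: {warning}")
```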

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

But if we look far enough into the horizon, we can see a future where software vulnerabilities are a thing of the past. Then we’ll just have to worry about whatever new and more advanced attack techniques those AI systems come up with.

This essay previously appeared on SecurityIntelligence.com.

Posted on January 8, 2019 at 6:13 AM

Measuring the Rationality of Security Decisions

Interesting research: “Dancing Pigs or Externalities? Measuring the Rationality of Security Decisions”:

Abstract: Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant’s wage. We find that more than 50% of our participants made rational (e.g., utility optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users’ decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users’ awareness of risks and context (R² = 0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a “one-size-fits-all” emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
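The rationality test the paper describes reduces to simple arithmetic: adopting the behavior is utility-optimal when the expected loss averted exceeds the time cost of performing it. Here is a sketch, with every number invented rather than taken from the study:

```python
# Sketch of the utility-optimality test from the abstract: adopt 2FA when the
# expected loss averted exceeds the time cost. All numbers are invented.

account_balance = 500.00   # what's at risk in the simulated bank account
p_compromise = 0.10        # disclosed risk of compromise without 2FA
risk_reduction = 0.95      # assumed fraction of that risk 2FA removes
seconds_per_login = 20     # extra time 2FA adds to each login
logins = 50                # logins over the study period
hourly_wage = 18.00        # estimate of the participant's wage

expected_loss_averted = account_balance * p_compromise * risk_reduction
time_cost = (seconds_per_login * logins / 3600) * hourly_wage

print(f"Expected loss averted: ${expected_loss_averted:.2f}")
print(f"Time cost of 2FA:      ${time_cost:.2f}")
print("Rational to adopt 2FA" if expected_loss_averted > time_cost
      else "Rational to skip 2FA")
```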

Posted on August 7, 2018 at 6:40 AM

GCHQ on Quantum Key Distribution

The UK’s GCHQ delivers a brutally blunt assessment of quantum key distribution:

QKD protocols address only the problem of agreeing keys for encrypting data. Ubiquitous on-demand modern services (such as verifying identities and data integrity, establishing network sessions, providing access control, and automatic software updates) rely more on authentication and integrity mechanisms—such as digital signatures—than on encryption.

QKD technology cannot replace the flexible authentication mechanisms provided by contemporary public key signatures. QKD also seems unsuitable for some of the grand future challenges such as securing the Internet of Things (IoT), big data, social media, or cloud applications.

I agree with them. It’s a clever idea, but basically useless in practice. I don’t even think it’s anything more than a niche solution in a world where quantum computers have broken our traditional public-key algorithms.

Read the whole thing. It’s short.
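For a sense of what “agreeing keys” means here, below is a toy BB84-style simulation, assuming ideal hardware, no eavesdropper, and an already-authenticated classical channel. Note what it produces: a shared secret key and nothing more, which is exactly GCHQ’s point about authentication having to come from somewhere else:

```python
import secrets

# Toy BB84-style key agreement: ideal hardware, no eavesdropper, and an
# assumed-authenticated classical channel. The output is only a shared key;
# authenticating who is on the other end -- GCHQ's point -- is out of scope.

N = 64  # number of qubits sent

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
alice_bits = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]

# Bob measures each qubit in a randomly chosen basis. With a matching basis he
# reads Alice's bit; with a mismatched basis he gets a random result.
bob_bases = [secrets.randbelow(2) for _ in range(N)]
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: over the classical channel, they compare bases and keep only the
# positions where the bases matched.
key_alice = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
key_bob = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

assert key_alice == key_bob  # ideal channel, no Eve: the sifted keys agree
print(f"shared key ({len(key_alice)} bits):", "".join(map(str, key_alice)))
```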

Posted on August 1, 2018 at 2:07 PM
