Entries Tagged "economics of security"


Security Trade-Offs and Sacred Values

Interesting research:

Psychologist Jeremy Ginges and his colleagues identified this backfire effect in studies of the Israeli-Palestinian conflict in 2007. They interviewed both Israelis and Palestinians who held sacred values on key issues, such as ownership of disputed territories like the West Bank or the right of Palestinian refugees to return to villages they were forced to leave; these people viewed compromise on these issues as completely unacceptable. Ginges and colleagues found that individuals offered a monetary payout to compromise their values expressed more moral outrage and were more supportive of violent opposition toward the other side. Opposition decreased, however, when the other side offered to compromise on a sacred value of its own, such as Israelis formally renouncing their right to the West Bank or Palestinians formally recognizing Israel as a state. Ginges and Scott Atran found similar evidence of this backfire effect among Indonesian madrassah students, who expressed less willingness to compromise their belief in sharia, strict Islamic law, when offered a material incentive.

[…]

After giving their opinions on Iran’s nuclear program, all participants were asked to consider one of two deals for Iranian disarmament. Half of the participants read about a deal in which the United States would reduce military aid to Israel in exchange for Iran giving up its military program. The other half of the participants read about a deal in which the United States would reduce aid to Israel and would pay Iran $40 billion. After considering the deal, all participants predicted how much the Iranian people would support the deal and how much anger they would feel toward the deal. In line with the Palestinian-Israeli and Indonesian studies, those who considered the nuclear program a sacred value expressed less support, and more anger, when the deal included money.

Posted on March 19, 2010 at 6:58 AM

Typosquatting

“Measuring the Perpetrators and Funders of Typosquatting,” by Tyler Moore and Benjamin Edelman:

Abstract. We describe a method for identifying “typosquatting,” the intentional registration of misspellings of popular website addresses. We estimate that at least 938,000 typosquatting domains target the top 3,264 .com sites, and we crawl more than 285,000 of these domains to analyze their revenue sources. We find that 80% are supported by pay-per-click ads, often advertising the correctly spelled domain and its competitors. Another 20% include static redirection to other sites. We present an automated technique that uncovered 75 otherwise legitimate websites which benefited from direct links from thousands of misspellings of competing websites. Using regression analysis, we find that websites in categories with higher pay-per-click ad prices face more typosquatting registrations, indicating that ad platforms such as Google AdWords exacerbate typosquatting. However, our investigations also confirm the feasibility of significantly reducing typosquatting. We find that typosquatting is highly concentrated: of typo domains showing Google ads, 63% use one of five advertising IDs, and some large name servers host typosquatting domains as much as four times as often as the web as a whole.

The paper appeared at the Financial Cryptography conference this year.
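The measurement starts from a simple idea: generate candidate misspellings of a popular domain, then check which of them are actually registered and what they serve. A rough sketch of the typo-generation step (the function name and the particular typo categories here are my illustration, not the paper's exact method):

```python
# Sketch of one-keystroke typo models commonly used in typosquatting
# studies: character deletion, character duplication, and adjacent
# transposition. Illustrative only; the paper's actual method also
# crawls the registered domains and classifies their revenue sources.

def typo_candidates(domain: str) -> set[str]:
    """Generate plausible one-keystroke misspellings of a domain name."""
    name, dot, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name)):
        # Deletion: "example.com" -> "exmple.com"
        variants.add(name[:i] + name[i + 1:] + dot + tld)
        # Duplication: "example.com" -> "eexample.com"
        variants.add(name[:i] + name[i] + name[i:] + dot + tld)
        if i < len(name) - 1:
            # Adjacent transposition: "example.com" -> "examlpe.com"
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + dot + tld)
    variants.discard(domain)  # a "typo" identical to the original doesn't count
    return variants

print(sorted(typo_candidates("example.com"))[:5])
```

Even this simplified model yields dozens of candidates per site; run over the top few thousand .com domains, and registration counts in the hundreds of thousands become easy to believe.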

Posted on March 15, 2010 at 6:13 AM

Online Credit/Debit Card Security Failure

Ross Anderson reports:

Online transactions with credit cards or debit cards are increasingly verified using the 3D Secure system, which is branded as “Verified by VISA” and “MasterCard SecureCode”. This is now the most widely-used single sign-on scheme ever, with over 200 million cardholders registered. It’s getting hard to shop online without being forced to use it.

In a paper I’m presenting today at Financial Cryptography, Steven Murdoch and I analyse 3D Secure. From the engineering point of view, it does just about everything wrong, and it’s becoming a fat target for phishing. So why did it succeed in the marketplace?

Quite simply, it has strong incentives for adoption. Merchants who use it push liability for fraud back to banks, who in turn push it on to cardholders. Properly designed single sign-on systems, like OpenID and InfoCard, can’t offer anything like this. So this is yet another case where security economics trumps security engineering, but in a predatory way that leaves cardholders less secure. We conclude with a suggestion on what bank regulators might do to fix the problem.

Posted on February 1, 2010 at 6:26 AM

Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.
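Herley's "two orders of magnitude" claim is a back-of-the-envelope calculation, and it is easy to reproduce. The figures below are my own illustrative assumptions for the sketch, not numbers taken from the paper:

```python
# Back-of-the-envelope check of the URL-reading claim. All inputs are
# illustrative assumptions, not figures from Herley's paper.

online_users = 180e6           # assumed online population following the advice
minutes_per_day = 1.0          # time spent inspecting URLs per user per day
hourly_wage = 15.0             # assumed dollar value of a user-hour

# Annual cost of the advice: user time valued at the wage rate.
effort_cost = online_users * (minutes_per_day / 60) * hourly_wage * 365

annual_phishing_losses = 60e6  # assumed total direct annual phishing losses

print(f"effort cost: ${effort_cost / 1e9:.1f} billion/year")
print(f"ratio to phishing losses: {effort_cost / annual_phishing_losses:.0f}x")
```

With these inputs the time cost comes to roughly $16 billion a year, a few hundred times the assumed losses; the qualitative conclusion survives even large changes to the assumptions.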


EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: “We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don’t think from a word-of-mouth perspective we ever recovered from that.”

Commentary:

Vista’s failure, and Ballmer’s blaming of security for it, is a bit of a case of being careful what you wish for. Vista (codenamed “Longhorn” during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative, which advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The long-term goal, though, was to make Vista the most secure operating system in Microsoft’s history.

What made XP and Vista more secure? Eliminating (or reducing) buffer overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT, and 2000 were actually a result of unused services that were active by default. Microsoft’s own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, a Vista locked down out of the box was less palatable to users.

Security obstacles weren’t the only ills Vista suffered, though. A huge memory footprint, incompatible graphics requirements, slow responsiveness, and a general sense that it was already behind competing Mac and Linux OSes in functionality and features made Vista land with a thud. In my humble opinion, the security gains in Vista were worth many of the trade-offs; it was the other technical requirements and incompatible applications that doomed this operating system.

There was also the problem of Vista’s endless security warnings. The problem is that they were almost always false alarms, and there were no adverse effects of ignoring them. So users did, which means they ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. “We don’t know what to do here, so we’ll put up a warning and ask the user.” But unless the users have the information and the expertise to make the decision, they’re not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I’m hoping Windows 7 is worth upgrading to. We’ll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM

Eliminating Externalities in Financial Security

This is a good thing:

An Illinois district court has allowed a couple to sue their bank on the novel grounds that it may have failed to sufficiently secure their account, after an unidentified hacker obtained a $26,500 loan on the account using the customers’ user name and password.

[…]

In February 2007, someone with a different IP address than the couple gained access to Marsha Shames-Yeakel’s online banking account using her user name and password and initiated an electronic transfer of $26,500 from the couple’s home equity line of credit to her business account. The money was then transferred through a bank in Hawaii to a bank in Austria.

The Austrian bank refused to return the money, and Citizens Financial insisted that the couple was liable for the funds and began billing them for the loss. When they refused to pay, the bank reported them as delinquent to the national credit reporting agencies and threatened to foreclose on their home.

The couple sued the bank for violations of the Electronic Funds Transfer Act and the Fair Credit Reporting Act, alleging, among other things, that the bank reported them as delinquent to credit reporting agencies without telling the agencies that the debt in question was under dispute and was the result of a third-party theft. The couple wrote 19 letters disputing the debt, but began making monthly payments to the bank for the stolen funds in late 2007 following the bank’s foreclosure threats.

In addition to these claims, the plaintiffs also accused the bank of negligence under state law.

According to the plaintiffs, the bank had a common law duty to protect their account information from identity theft and failed to maintain state-of-the-art security standards. Specifically, the plaintiffs argued, the bank used only single-factor authentication for customers logging into its server (a user name and password) instead of multi-factor authentication, such as combining the user name and password with a token the customer possesses that authenticates the customer’s computer to the bank’s server or dynamically generates a single-use password for logging in.
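The token that "dynamically generates a single-use password" is essentially an HMAC-based one-time-password device. A minimal sketch of the idea, following the HOTP construction of RFC 4226 (the secret below is the RFC's published test key, used purely for illustration):

```python
# Minimal HOTP sketch (RFC 4226): the bank and the customer's token
# share a secret; each login consumes the next counter value.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a single-use numeric code from a shared secret and counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 4226 test key and counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))
```

Because each code is derived from a counter that advances on every login, a phished code cannot be replayed later; that is exactly the property a static user name and password lack.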

As I’ve previously written, this is the only way to mitigate this kind of fraud:

Fraudulent transactions have nothing to do with the legitimate account holders. Criminals impersonate legitimate users to financial institutions. That means that any solution can’t involve the account holders. That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

They can’t claim that the user must keep his password secure or his machine virus free. They can’t require the user to monitor his accounts for fraudulent activity, or his credit reports for fraudulently obtained credit cards. Those aren’t reasonable requirements for most users. The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions. They’ve pushed most of the actual costs onto the merchants. And almost no security centers around trying to authenticate the cardholder.

It’s an important security principle: ensure that the person who has the ability to mitigate the risk is responsible for the risk. In this case, the account holders had nothing to do with the security of their account. They could not audit it. They could not improve it. The bank, on the other hand, has the ability to improve security and mitigate the risk, but because they pass the cost on to their customers, they have no incentive to do so. Litigation like this has the potential to fix the externality and improve security.

Posted on September 23, 2009 at 7:13 AM

