Entries Tagged "economics of security"


Ebook Fraud

Interesting post—and discussion—on Making Light about ebook fraud. Currently there are two types of fraud. The first is content farming, discussed in these two interesting blog posts. People are creating automatically generated content, web-collected content, or fake content, turning it into a book, and selling it on an ebook site like Amazon.com. Then they use multiple identities to give it good reviews. (If it gets a bad review, the scammer just relists the same content under a new name.) That second blog post contains a screen shot of something called “Autopilot Kindle Cash,” which promises to teach people how to post dozens of ebooks to Amazon.com per day.

The second type of fraud is stealing a book and selling it as an ebook. So someone could scan a real book and sell it on an ebook site, even though he doesn’t own the copyright. It could be a book that isn’t already available as an ebook, or it could be a “low cost” version of a book that is already available. Amazon doesn’t seem particularly motivated to deal with this sort of fraud. And it too is suitable for automation.

Broadly speaking, there’s nothing new here. All complex ecosystems have parasites, and every open communications system we’ve ever built gets overrun by scammers and spammers. Far from making editors superfluous, systems that democratize publishing have an even greater need for editors. The solutions are not new, either: reputation-based systems, trusted recommenders, white lists, takedown notices. Google has implemented a bunch of security countermeasures against content farming; ebook sellers should implement them as well. It’ll be interesting to see what particular sort of mix works in this case.

Posted on April 4, 2011 at 9:18 AM

Comodo Group Issues Bogus SSL Certificates

This isn’t good:

The hacker, whose March 15 attack was traced to an IP address in Iran, compromised a partner account at the respected certificate authority Comodo Group, which he used to request eight SSL certificates for six domains: mail.google.com, www.google.com, login.yahoo.com, login.skype.com, addons.mozilla.org and login.live.com.

The certificates would have allowed the attacker to craft fake pages that would have been accepted by browsers as the legitimate websites. The certificates would have been most useful as part of an attack that redirected traffic intended for Skype, Google and Yahoo to a machine under the attacker’s control. Such an attack can range from small-scale Wi-Fi spoofing at a coffee shop all the way to global hijacking of internet routes.

At a minimum, the attacker would then be able to steal login credentials from anyone who entered a username and password into the fake page, or perform a “man in the middle” attack to eavesdrop on the user’s session.

More news articles. Comodo announcement.

Fake certs for Google, Yahoo, and Skype? Wow.

This isn’t the first time Comodo has screwed up with certificates. The safest thing for us users to do would be to remove the Comodo root certificate from our browsers so that none of their certificates work, but we don’t have the capability to do that. The browser companies—Microsoft, Mozilla, Opera, etc.—could do that, but my guess is they won’t. The economic incentives don’t work properly. Comodo is likely to sue any browser company that takes this sort of action, and Comodo’s customers might sue as well. So it’s smarter for the browser companies to just ignore the issue and pass the problem to us users.
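The trust-model problem behind this is that browsers treat every root CA equally: a certificate chained to any trusted root validates for any domain, which is why one bad CA endangers every site. A minimal sketch, using only Python’s standard library, of how to at least enumerate the roots your own system trusts and check whether a Comodo root is among them (the substring match on the subject is a rough illustration, not an audit):

```python
import ssl

# List the root CAs this machine trusts, and look for Comodo among
# them. create_default_context() loads the system trust store.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()

# Rough name match against each certificate's subject field:
comodo = [c for c in roots
          if "comodo" in str(c.get("subject", "")).lower()]

print(f"{len(roots)} trusted roots, {len(comodo)} matching 'Comodo'")
```

Deleting a root from that store is exactly the step ordinary users can’t realistically take, which is the economic point above.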

Posted on March 31, 2011 at 7:00 AM

Good Article About the Terrorist Non-Threat

From Reason:

Know thy enemy is an ancient principle of warfare. And if America had heeded it, it might have refrained from a full-scale “war” on terrorism whose price tag is touching $2 TRILLION. That’s because the Islamist enemy it is confronting is not some hyper-power capable of inflicting existential—or even grave—harm. It is, rather, a rag-tag band of peasants whose malevolent ambitions are far beyond the capacity of their shallow talent pool to deliver.

Posted on February 24, 2011 at 6:44 AM

Terrorist-Catching Con Man

Interesting story about a con man who conned the U.S. government, and how the government is trying to hide its dealings with him.

For eight years, government officials turned to Dennis Montgomery, a California computer programmer, for eye-popping technology that he said could catch terrorists. Now, federal officials want nothing to do with him and are going to extraordinary lengths to ensure that his dealings with Washington stay secret.

Posted on February 22, 2011 at 7:21 AM

Biometric Wallet

Not an electronic wallet, a physical one:

Virtually indestructible, the dunhill Biometric Wallet will open only with a touch of your fingerprint.

It can be linked via Bluetooth to the owner’s mobile phone, sounding an alarm if the two are separated by more than 5 metres! This provides a brilliant warning if either the phone or wallet is stolen or misplaced. The exterior of the wallet is constructed from highly durable carbon fibre that will resist all but the most concerted effort to open it, while the interior features a luxurious leather credit card holder and a strong stainless steel money clip.

Only $825. News article.

I don’t think I understand the threat model. If your wallet is stolen, you’re going to replace all your ID cards and credit cards and you’re not going to get your cash back—whether it’s a normal wallet or this wallet. I suppose this wallet makes it less likely that someone will use your stolen credit cards quickly, before you cancel them. But you’re not going to be liable for that delay in any case.

Posted on February 18, 2011 at 1:45 PM

Societal Security

Humans have a natural propensity to trust non-kin, even strangers. We do it so often, so naturally, that we don’t even realize how remarkable it is. But except for a few simplistic counterexamples, it’s unique among life on this planet. Because we are intelligently calculating and value reciprocity (that is, fairness), we know that humans will be honest and nice: not for any immediate personal gain, but because that’s how they are. We also know that doesn’t work perfectly; most people will be dishonest some of the time, and some people will be dishonest most of the time. How does society—the honest majority—prevent the dishonest minority from taking over, or ruining society for everyone? How is the dishonest minority kept in check? The answer is security—in particular, something I’m calling societal security.

I want to divide security into two types. The first is individual security. It’s basic. It’s direct. It’s what normally comes to mind when we think of security. It’s cops vs. robbers, terrorists vs. the TSA, Internet worms vs. firewalls. And this sort of security is as old as life itself or—more precisely—as old as predation. And humans have brought an incredible level of sophistication to individual security.

Societal security is different. At the tactical level, it also involves attacks, countermeasures, and entire security systems. But instead of A vs. B, or even Group A vs. Group B, it’s Group A vs. members of Group A. It’s security for individuals within a group from members of that group. It’s how Group A protects itself from the dishonest minority within Group A. And it’s where security really gets interesting.

There are many types—I might try to estimate the number someday—of societal security systems that enforce our trust of non-kin. They’re things like laws prohibiting murder, taxes, traffic laws, pollution control laws, religious intolerance, Mafia codes of silence, and moral codes. They enable us to build a society that the dishonest minority can’t exploit and destroy. Originally, these security systems were informal. But as society got more complex, the systems became more formalized, and eventually were embedded into technologies.

James Madison famously wrote: “If men were angels, no government would be necessary.” Government is just the beginning of what wouldn’t be necessary. Currency, that paper stuff that’s deliberately made hard to counterfeit, wouldn’t be necessary, as people could just keep track of how much money they had. Angels never cheat, so nothing more would be required. Door locks, and any barrier that isn’t designed to protect against accidents, wouldn’t be necessary, since angels never go where they’re not supposed to go. Police forces wouldn’t be necessary. Armies: I suppose that’s debatable. Would angels—not the fallen ones—ever go to war against one another? I’d like to think they would be able to resolve their differences peacefully. If people were angels, every security measure that isn’t designed to be effective against accident, animals, forgetfulness, or legitimate differences between scrupulously honest angels could be dispensed with.

Security isn’t just a tax on the honest; it’s a very expensive tax on the honest. It’s the most expensive tax we pay, regardless of the country we live in. If people were angels, just think of the savings!

It wasn’t always like this. Security—especially societal security—used to be cheap. It used to be an incidental cost of society.

In a primitive society, informal systems are generally good enough. When you’re living in a small community, and objects are both scarce and hard to make, it’s pretty easy to deal with the problem of theft. If Alice loses a bowl, and at the same time, Bob shows up with an identical bowl, everyone knows Bob stole it from Alice, and the community can then punish Bob as it sees fit. But as communities get larger, as social ties weaken and anonymity increases, this informal system of theft prevention—detection and punishment leading to deterrence—fails. As communities get more technological and as the things people might want to steal get more interchangeable and harder to identify, it also fails. In short, as our ancestors made the move from small family groups to larger groups of unrelated families, and then to a modern form of society, the informal societal security systems started failing and more formal systems had to be invented to take their place. We needed to put license plates on cars and audit people’s tax returns.

We had no choice. Anything larger than a very primitive society couldn’t exist without societal security.

I’m writing a book about societal security. I will discuss human psychology: how we make security trade-offs, why we routinely trust non-kin (an evolutionary puzzle, to be sure), how the majority of us are honest, and that a minority of us are dishonest. That dishonest minority are the free riders of societal systems, and security is how we protect society from them. I will model the fundamental trade-off of societal security—individual self-interest vs. societal group interest—as a group prisoner’s dilemma problem, and use that metaphor to examine the basic mechanics of societal security. A lot falls out of this: free riders, the Tragedy of the Commons, the subjectivity of both morals and risk trade-offs.
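The group prisoner’s dilemma trade-off can be illustrated with a toy public-goods game (my own sketch with invented numbers, not the book’s model): each player either contributes an endowment to a common pot, which is multiplied and shared equally, or defects and keeps the endowment while still collecting a share.

```python
def payoffs(contributes, endowment=10, multiplier=1.6, fine=0):
    """Payoff for each player: an equal share of the multiplied pot,
    plus the kept endowment (minus any fine) for defectors."""
    pot = sum(endowment for c in contributes if c) * multiplier
    share = pot / len(contributes)
    return [share if c else share + endowment - fine
            for c in contributes]

# Four cooperators and one free rider, with no enforcement:
no_security = payoffs([True, True, True, True, False])

# The same group, but defection now draws a fine larger than the
# kept endowment -- a stand-in for a societal security measure:
with_security = payoffs([True, True, True, True, False], fine=12)

print(no_security, with_security)
```

Without enforcement the defector’s payoff (22.8) beats every cooperator’s (12.8), so the dishonest strategy spreads; with the fine it drops to 10.8 and honesty wins. That reversal, purchased at a cost, is what societal security buys.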

Using this model, I will explore the security systems that protect—and fail to protect—market economics, corporations and other organizations, and a variety of national systems. I think there’s a lot we can learn about security by applying the prisoner’s dilemma model, and I’ve only recently started. Finally, I want to discuss modern changes to our millennia-old systems of societal security. The Information Age has changed a number of paradigms, and it’s not clear that our old security systems are working properly now or will work in the future. I’ve got a lot of work to do yet, and the final book might look nothing like this short outline. That sort of thing happens.

Tentative title: The Dishonest Minority: Security and its Role in Modern Society. I’ve written several books on the how of security. This book is about the why of security.

I expect to finish my first draft before summer. Throughout 2011, expect to see bits from the book here. They might not make sense as a coherent whole at first—especially because I don’t write books in strict order—but by the time the book is published, it’ll all be part of a coherent and (hopefully) compelling narrative.

And if I write fewer extended blog posts and essays in the coming year, you’ll know why.

Posted on February 15, 2011 at 5:43 AM

Scareware: How Crime Pays

Scareware is fraudulent software that uses deceptive advertising to trick users into believing they’re infected with some variety of malware, then convinces them to pay money to protect themselves. The infection isn’t real, and the software they buy is fake, too. It’s all a scam.

Here’s one scareware operator who sold “more than 1 million software products” at “$39.95 or more,” and now has to pay $8.2 million to settle a Federal Trade Commission complaint.

Seems to me that $40 per customer, minus $8.20 to pay off the FTC, is still a pretty good revenue model. Their operating costs can’t be very high, since the software doesn’t actually do anything. Yes, a court ordered them to close down their business, but certainly there are other creative entrepreneurs that can recognize a business opportunity when they see it.
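The arithmetic is easy to check; a quick sketch, assuming exactly one million sales at the quoted $39.95:

```python
units_sold = 1_000_000   # "more than 1 million software products"
price = 39.95            # quoted price per sale, dollars
settlement = 8_200_000   # FTC settlement, dollars

revenue = units_sold * price
settlement_per_sale = settlement / units_sold
net_per_sale = price - settlement_per_sale

print(f"gross revenue:       ${revenue:,.0f}")
print(f"settlement per sale: ${settlement_per_sale:.2f}")
print(f"net per sale:        ${net_per_sale:.2f}")
```

Roughly $31.75 per customer survives the settlement, before (minimal) operating costs.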

Posted on February 7, 2011 at 8:45 AM

Terrorist Targets of Choice

This makes sense.

Generally, militants prefer to attack soft targets where there are large groups of people, that are symbolic and recognizable around the world and that will generate maximum media attention when attacked. Some past examples include the World Trade Center in New York, the Taj Mahal Hotel in Mumbai and the London Underground. The militants’ hope is that if the target meets these criteria, terror magnifiers like the media will help the attackers produce a psychological impact that goes far beyond the immediate attack site, a process we refer to as “creating vicarious victims.” The best-case scenario for the attackers is that this psychological impact will also produce an adverse economic impact against the targeted government.

Unlike hard targets, which frequently require attackers to use large teams of operatives with elaborate attack plans or very large explosive devices in order to breach defenses, soft targets offer militant planners an advantage in that they can frequently be attacked by a single operative or small team using a simple attack plan. The failed May 1, 2010, attack against New York’s Times Square and the July 7, 2005, London Underground attacks are prime examples of this, as was the Jan. 24 attack at Domodedovo airport. Such attacks are relatively cheap and easy to conduct and can produce a considerable propaganda return for very little investment.

Posted on February 4, 2011 at 6:00 AM

REAL-ID Implementation

According to this study, REAL-ID has not only been cheaper to implement than the states estimated, but also helpful in reducing fraud.

States are finding that implementation of the 2005 REAL ID Act is much easier and less expensive than previously thought, and is a significant factor in reducing fraud. In cases like Indiana, REAL ID has significantly improved customer satisfaction, resulting in that state receiving AAMVA’s “customer satisfaction” award of the year. This is not just a win-win for national and economic security, but a win (less expensive) -win (doable) -win (fraud reduction) -win (improved customer satisfaction) for federal and state governments as well as individuals.

Moreover, 11 states are already in full compliance, well ahead of the May 2011 deadline for the 18 benchmarks. Another eight are close behind. Some states, like Delaware and Maryland, have achieved REAL ID compliance within a year. Washington State refuses REAL ID compliance, but has already implemented the most difficult benchmarks.

Perhaps most astonishing is that from the cost numbers currently available, it looks like implementation of the 18 REAL ID benchmarks in all the states may end up costing somewhere between $350 million and $750 million, significantly less than the $1 billion projected by those still seeking to change the law.

Legal presence is being checked in all but two states, up 28 states from 2006. Only Washington and New Mexico still do not require legal presence to obtain a license, but Washington so significantly upgraded its license issuance in 2010 that the fraudulent attempts to garner licenses in that state are now significantly reduced. Every state is now checking Social Security numbers.

This might be the first government IT project ever that came in under initial cost estimates. Perhaps the reason is that the states did not want to implement REAL-ID in 2005, so they overstated the costs.

As to fraud reduction—I’m not so sure. As the difficulty of getting a fraudulent ID increases, so does its value. I think we’ll have to wait a while longer and see how criminals adapt.

EDITED TO ADD (2/11): CATO’s Jim Harper argues that this report does not show that implementing the national ID program envisioned in the national ID law is a cost-effective success. It only assesses compliance with certain DHS-invented “benchmarks” related to REAL ID, and does so in a way that skews the results.

Posted on January 25, 2011 at 6:16 AM

Cost-Benefit Analysis of Full-Body Scanners

Research paper from Mark Stewart and John Mueller:

The Transportation Security Administration (TSA) has been deploying Advanced Imaging Technologies (AIT) that are full-body scanners to inspect a passenger’s body for concealed weapons, explosives, and other prohibited items. The terrorist threat that AITs are primarily dedicated to is preventing the downing of a commercial airliner by an IED (Improvised Explosive Device) smuggled on board by a passenger. The cost of this technology will reach $1.2 billion per year by 2014. The paper develops a cost-benefit analysis of AITs for passenger screening at U.S. airports. The analysis considered threat probability, risk reduction, losses, and costs of security measures in the estimation of costs and benefits. Since there is uncertainty and variability of these parameters, three alternate probability (uncertainty) models were used to characterise risk reduction and losses. Economic losses were assumed to vary from $2-50 billion, and risk reduction from 5-10%. Monte-Carlo simulation methods were used to propagate these uncertainties in the calculation of benefits, and the minimum attack probability necessary for AITs to be cost-effective was calculated. It was found that, based on mean results, more than one attack every two years would need to originate from U.S. airports for AITs to pass a cost-benefit analysis. In other words, to be cost-effective, AITs every two years would have to disrupt more than one attack effort with body-borne explosives that otherwise would have been successful despite other security measures, terrorist incompetence and amateurishness, and the technical difficulties in setting off a bomb sufficiently destructive to down an airliner. The attack probability needs to exceed 160-330% per year to be 90% certain that AITs are cost-effective.
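The abstract’s break-even logic can be sketched with a small Monte Carlo simulation (a simplified reconstruction from the stated parameters, not the authors’ actual model): AITs are cost-effective when the expected losses they avert, attack rate × risk reduction × loss per attack, exceed their $1.2 billion annual cost, so each random draw yields a break-even attack rate.

```python
import random

random.seed(0)  # reproducible draws

ANNUAL_COST = 1.2e9  # AIT cost per year, from the abstract
N = 100_000

breakeven = []
for _ in range(N):
    loss = random.uniform(2e9, 50e9)        # loss per successful attack
    reduction = random.uniform(0.05, 0.10)  # risk reduction from AITs
    # Cost-effective when rate * reduction * loss >= ANNUAL_COST:
    breakeven.append(ANNUAL_COST / (reduction * loss))

mean_rate = sum(breakeven) / N
print(f"mean break-even rate: {mean_rate:.2f} attacks/year")
```

Even this crude version lands near one otherwise-successful attack per year, consistent with the paper’s mean result of more than one attack every two years.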

EDITED TO ADD (1/26): Response from one of the paper’s authors.

Posted on January 20, 2011 at 1:39 PM


Sidebar photo of Bruce Schneier by Joe MacInnis.