Entries Tagged "reputation"

Ebook Fraud

Interesting post — and discussion — on Making Light about ebook fraud. Currently there are two types of fraud. The first is content farming, discussed in these two interesting blog posts. People are creating automatically generated content, web-collected content, or fake content, turning it into a book, and selling it on an ebook site like Amazon.com. Then they use multiple identities to give it good reviews. (If it gets a bad review, the scammer just relists the same content under a new name.) That second blog post contains a screen shot of something called “Autopilot Kindle Cash,” which promises to teach people how to post dozens of ebooks to Amazon.com per day.
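The multiple-identity review scam leaves a statistical fingerprint: the same small cluster of accounts reviewing the same obscure titles. A minimal sketch of how an ebook seller might flag that pattern — the data, function name, and threshold here are all invented for illustration:

```python
from itertools import combinations

# Hypothetical data: which accounts reviewed which ebook titles.
reviews = {
    "alice": {"real-novel", "cookbook"},
    "sock1": {"spam-book-1", "spam-book-2", "spam-book-3"},
    "sock2": {"spam-book-1", "spam-book-2", "spam-book-3"},
    "sock3": {"spam-book-1", "spam-book-2"},
}

def suspicious_pairs(reviews, min_overlap=2):
    """Flag account pairs whose reviewed titles overlap heavily --
    a common fingerprint of sockpuppet review rings."""
    flagged = []
    for a, b in combinations(reviews, 2):
        overlap = reviews[a] & reviews[b]
        if len(overlap) >= min_overlap:
            flagged.append((a, b, overlap))
    return flagged

for a, b, books in suspicious_pairs(reviews):
    print(a, b, sorted(books))
```

A real detector would weight by how obscure the shared titles are — three accounts overlapping on bestsellers means nothing; overlapping on three auto-generated ebooks means a lot.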

The second type of fraud is stealing a book and selling it as an ebook. So someone could scan a real book and sell it on an ebook site, even though he doesn’t own the copyright. It could be a book that isn’t already available as an ebook, or it could be a “low cost” version of a book that is already available. Amazon doesn’t seem particularly motivated to deal with this sort of fraud. And it too is suitable for automation.

Broadly speaking, there’s nothing new here. All complex ecosystems have parasites, and every open communications system we’ve ever built gets overrun by scammers and spammers. Far from making editors superfluous, systems that democratize publishing have an even greater need for editors. The solutions are not new, either: reputation-based systems, trusted recommenders, white lists, takedown notices. Google has implemented a bunch of security countermeasures against content farming; ebook sellers should implement them as well. It’ll be interesting to see what particular sort of mix works in this case.

Posted on April 4, 2011 at 9:18 AM

Societal Security

Humans have a natural propensity to trust non-kin, even strangers. We do it so often, so naturally, that we don’t even realize how remarkable it is. But except for a few simplistic counterexamples, it’s unique among life on this planet. Because we are intelligently calculating and value reciprocity (that is, fairness), we know that humans will be honest and nice: not for any immediate personal gain, but because that’s how they are. We also know that doesn’t work perfectly; most people will be dishonest some of the time, and some people will be dishonest most of the time. How does society — the honest majority — prevent the dishonest minority from taking over, or ruining society for everyone? How is the dishonest minority kept in check? The answer is security — in particular, something I’m calling societal security.

I want to divide security into two types. The first is individual security. It’s basic. It’s direct. It’s what normally comes to mind when we think of security. It’s cops vs. robbers, terrorists vs. the TSA, Internet worms vs. firewalls. And this sort of security is as old as life itself or — more precisely — as old as predation. And humans have brought an incredible level of sophistication to individual security.

Societal security is different. At the tactical level, it also involves attacks, countermeasures, and entire security systems. But instead of A vs. B, or even Group A vs. Group B, it’s Group A vs. members of Group A. It’s security for individuals within a group from members of that group. It’s how Group A protects itself from the dishonest minority within Group A. And it’s where security really gets interesting.

There are many types — I might try to estimate the number someday — of societal security systems that enforce our trust of non-kin. They’re things like laws prohibiting murder, taxes, traffic laws, pollution control laws, religious intolerance, Mafia codes of silence, and moral codes. They enable us to build a society that the dishonest minority can’t exploit and destroy. Originally, these security systems were informal. But as society got more complex, the systems became more formalized, and eventually were embedded into technologies.

James Madison famously wrote: “If men were angels, no government would be necessary.” Government is just the beginning of what wouldn’t be necessary. Currency, that paper stuff that’s deliberately made hard to counterfeit, wouldn’t be necessary, as people could just keep track of how much money they had. Angels never cheat, so nothing more would be required. Door locks, and any barrier that isn’t designed to protect against accidents, wouldn’t be necessary, since angels never go where they’re not supposed to go. Police forces wouldn’t be necessary. Armies: I suppose that’s debatable. Would angels — not the fallen ones — ever go to war against one another? I’d like to think they would be able to resolve their differences peacefully. If people were angels, every security measure that isn’t designed to be effective against accident, animals, forgetfulness, or legitimate differences between scrupulously honest angels could be dispensed with.

Security isn’t just a tax on the honest; it’s a very expensive tax on the honest. It’s the most expensive tax we pay, regardless of the country we live in. If people were angels, just think of the savings!

It wasn’t always like this. Security — especially societal security — used to be cheap. It used to be an incidental cost of society.

In a primitive society, informal systems are generally good enough. When you’re living in a small community, and objects are both scarce and hard to make, it’s pretty easy to deal with the problem of theft. If Alice loses a bowl, and at the same time, Bob shows up with an identical bowl, everyone knows Bob stole it from Alice, and the community can then punish Bob as it sees fit. But as communities get larger, as social ties weaken and anonymity increases, this informal system of theft prevention — detection and punishment leading to deterrence — fails. As communities get more technological and as the things people might want to steal get more interchangeable and harder to identify, it also fails. In short, as our ancestors made the move from small family groups to larger groups of unrelated families, and then to a modern form of society, the informal societal security systems started failing and more formal systems had to be invented to take their place. We needed to put license plates on cars and audit people’s tax returns.

We had no choice. Anything larger than a very primitive society couldn’t exist without societal security.

I’m writing a book about societal security. I will discuss human psychology: how we make security trade-offs, why we routinely trust non-kin (an evolutionary puzzle, to be sure), how the majority of us are honest, and that a minority of us are dishonest. That dishonest minority are the free riders of societal systems, and security is how we protect society from them. I will model the fundamental trade-off of societal security — individual self-interest vs. societal group interest — as a group prisoner’s dilemma problem, and use that metaphor to examine the basic mechanics of societal security. A lot falls out of this: free riders, the Tragedy of the Commons, the subjectivity of both morals and risk trade-offs.
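The group prisoner's dilemma described above can be sketched as a public-goods game. This is my illustration of the model, not the book's: cooperators pay a cost that benefits everyone, defectors free-ride, and "security" is a penalty that makes free riding unprofitable.

```python
def public_goods_round(strategies, cost=1.0, multiplier=1.6, penalty=0.0):
    """One round of a public-goods game (a group prisoner's dilemma).
    Cooperators ("C") pay `cost` into a pot; the pot is multiplied and
    shared by everyone, including defectors ("D"). `penalty` models a
    security system that fines detected free riders."""
    n = len(strategies)
    pot = sum(cost for s in strategies if s == "C") * multiplier
    share = pot / n
    payoffs = {}
    for i, s in enumerate(strategies):
        if s == "C":
            payoffs[i] = share - cost
        else:
            payoffs[i] = share - penalty
    return payoffs

# Without a penalty, the lone defector does strictly better than the
# cooperators: free riding pays, and honesty erodes.
players = ["C", "C", "C", "D"]
no_security = public_goods_round(players)
with_security = public_goods_round(players, penalty=1.0)
```

With the penalty equal to the cooperation cost, the defector's advantage disappears — which is exactly the job societal security does for the honest majority.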

Using this model, I will explore the security systems that protect — and fail to protect — market economics, corporations and other organizations, and a variety of national systems. I think there’s a lot we can learn about security by applying the prisoner’s dilemma model, and I’ve only recently started. Finally, I want to discuss modern changes to our millennia-old systems of societal security. The Information Age has changed a number of paradigms, and it’s not clear that our old security systems are working properly now or will work in the future. I’ve got a lot of work to do yet, and the final book might look nothing like this short outline. That sort of thing happens.

Tentative title: The Dishonest Minority: Security and its Role in Modern Society. I’ve written several books on the how of security. This book is about the why of security.

I expect to finish my first draft before summer. Throughout 2011, expect to see bits from the book here. They might not make sense as a coherent whole at first — especially because I don’t write books in strict order — but by the time the book is published, it’ll all be part of a coherent and (hopefully) compelling narrative.

And if I write fewer extended blog posts and essays in the coming year, you’ll know why.

Posted on February 15, 2011 at 5:43 AM

Control Fraud

I had never heard the term “control fraud” before:

Control fraud theory was developed in the savings and loan debacle. It explained that the person controlling the S&L (typically the CEO) posed a unique risk because he could use it as a weapon.

The theory synthesized criminology (Wheeler and Rothman 1982), economics (Akerlof 1970), accounting, law, finance, and political science. It explained how a CEO optimized “his” S&L as a weapon to loot creditors and shareholders. The weapon of choice was accounting fraud. The company is the perpetrator and a victim. Control frauds are optimal looters because the CEO has four unique advantages. He uses his ability to hire and fire to suborn internal and external controls and make them allies. Control frauds consistently get “clean” opinions for financial statements that show record profitability when the company is insolvent and unprofitable. CEOs choose top-tier auditors. Their reputation helps deceive creditors and shareholders.

Only the CEO can optimize the company for fraud.

This is an interesting paper about control fraud. It’s by William K. Black, the Executive Director of the Institute for Fraud Prevention. “Individual ‘control frauds’ cause greater losses than all other forms of property crime combined. They are financial super-predators.” Black is talking about control fraud by both heads of corporations and heads of state, so that’s almost certainly a true statement. His main point, though, is that our legal systems don’t do enough to discourage control fraud.

White-collar criminology has a set of empirical findings and theories that are useful to understanding when markets will act perversely. This paper addresses three interrelated theories economists should know about. “Control fraud” theory explains why the most damaging forms of fraud are situations in which those that control the company or the nation use it as a fraud vehicle. The CEO, or the head of state, poses the greatest fraud risk. A single large control fraud can cause greater financial losses than all other forms of property crime combined; they are the “super-predators” of the financial world. Control frauds can also occur in waves that can cause systemic economic injury and discredit other institutions essential to good government and society. Control frauds are commonly able to defeat for several years market mechanisms that neo-classical economists predict will prevent such frauds.

“Systems capacity” theory examines why underdeterrence is so common. It shows that, particularly with respect to elite crimes, anti-fraud resources and willpower are commonly so limited that “crime pays.” When systems capacity limitations are severe, a “criminogenic environment” arises and crime increases. When a criminogenic environment for control fraud occurs, it can produce a wave of control fraud.

“Neutralization” theory explores how criminals neutralize moral and social barriers that reduce crime by constraining our decision-making to honest enterprises. The easier individuals are able to neutralize such social restraints, the greater the incidence of crime.

[…]

White-collar criminology findings falsify several neo-classical economic theories. This paper discusses the predictive failures of the efficient markets hypothesis, the efficient contracts hypothesis and the law & economics theory of corporate law. The paper argues that neo-classical economists’ reliance on these flawed models leads them to recommend policies that optimize a criminogenic environment for control fraud. Fortunately, these policies are not routinely adopted in full. When they are, they produce recurrent crises because they eviscerate the institutions and mores vital to make markets and governments more efficient in preventing waves of control fraud. Criminological theories have demonstrated superior predictive and explanatory behavior with regard to perverse economic behavior. This paper discusses two realms of perverse behavior: the role of waves of control fraud in producing economic crises and the role that endemic control fraud plays in producing economic stagnation.

EDITED TO ADD (11/11): Related paper on the effects of executive compensation on the abuse of controls.

Posted on November 1, 2010 at 6:02 AM

Security in a Reputation Economy

In the past, our relationship with our computers was technical. We cared what CPU they had and what software they ran. We understood our networks and how they worked. We were experts, or we depended on someone else for expertise. And security was part of that expertise.

This is changing. We access our email via the web, from any computer or from our phones. We use Facebook, Google Docs, even our corporate networks, regardless of hardware or network. We, especially the younger of us, no longer care about the technical details. Computing is infrastructure; it’s a commodity. It’s less about products and more about services; we simply expect it to work, like telephone service or electricity or a transportation network.

Infrastructures can be spread on a broad continuum, ranging from generic to highly specialized. Power and water are generic; who supplies them doesn’t really matter. Mobile phone services, credit cards, ISPs, and airlines are mostly generic. More specialized infrastructure services are restaurant meals, haircuts, and social networking sites. Highly specialized services include tax preparation for complex businesses; management consulting, legal services, and medical services.

Sales for these services are driven by two things: price and trust. The more generic the service is, the more price dominates. The more specialized it is, the more trust dominates. IT is something of a special case because so much of it is free. So, for both specialized IT services where price is less important and for generic IT services — think Facebook — where there is no price, trust will grow in importance. IT is becoming a reputation-based economy, and this has interesting ramifications for security.

Some years ago, the major credit card companies became concerned about the plethora of credit-card-number thefts from sellers’ databases. They worried that these might undermine the public’s trust in credit cards as a secure payment system for the internet. They knew the sellers would only protect these databases up to the level of the threat to the seller, and not to the greater level of threat to the industry as a whole. So they banded together and produced a security standard called PCI. It’s wholly industry-enforced, by an industry that realized its reputation was more valuable than the sellers’ databases.

A reputation-based economy means that infrastructure providers care more about security than their customers do. I realized this 10 years ago with my own company. We provided network-monitoring services to large corporations, and our internal network security was much more extensive than our customers’. Our customers secured their networks — that’s why they hired us, after all — but only up to the value of their networks. If we mishandled any of our customers’ data, we would have lost the trust of all of our customers.

I heard the same story at an ENISA conference in London last June, when an IT consultant explained that he had begun encrypting his laptop years before his customers did. While his customers might decide that the risk of losing their data wasn’t worth the hassle of dealing with encryption, he knew that if he lost data from one customer, he risked losing all of his customers.

As IT becomes more like infrastructure, more like a commodity, expect service providers to improve security to levels greater than their customers would have done themselves.

In IT, customers learn about company reputation from many sources: magazine articles, analyst reviews, recommendations from colleagues, awards, certifications, and so on. Of course, this only works if customers have accurate information. In a reputation economy, companies have a motivation to hide their security problems.

You’ve all experienced a reputation economy: restaurants. Some restaurants have a good reputation, and are filled with regulars. When restaurants get a bad reputation, people stop coming and they close. Tourist restaurants — whose main attraction is their location, and whose customers frequently don’t know anything about their reputation — can thrive even if they aren’t any good. And sometimes a restaurant can keep its reputation — an award in a magazine, a special occasion restaurant that “everyone knows” is the place to go — long after its food and service have declined.

The reputation economy is far from perfect.

This essay originally appeared in The Guardian.

Posted on November 12, 2009 at 6:30 AM

New eBay Fraud

Here’s a clever attack, exploiting relative delays in eBay, PayPal, and UPS shipping:

The buyer reported the item as “destroyed” and demanded and got a refund from Paypal. When the buyer shipped it back to Chad and he opened it, he found there was nothing wrong with it — except that the scammer had removed the memory, processor and hard drive. Now Chad is out $500 and left with a shell of a computer, and since the item was “received” Paypal won’t do anything.

Very clever. The seller accepted the return from UPS after a visual inspection, so UPS considered the matter closed. PayPal and eBay both considered the matter closed. If the amount was large enough, the seller could sue, but how could he prove that the computer was functional when he sold it?

It seems to me that the only way to solve this is for PayPal to not process refunds until the seller confirms what he received back is the same as what he shipped. Yes, then the seller could commit similar fraud, but sellers (certainly professional ones) have a greater reputational risk.
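The proposed fix amounts to adding one state to the refund workflow. A hypothetical sketch — not PayPal's actual process — in which the refund cannot be released on carrier delivery alone, only after the seller confirms the returned item:

```python
from enum import Enum, auto

class Refund(Enum):
    REQUESTED = auto()
    ITEM_RETURNED = auto()        # carrier reports delivery back to seller
    SELLER_CONFIRMED = auto()     # seller verifies the item is intact
    PAID = auto()
    DISPUTED = auto()

# Allowed transitions: the refund can't reach PAID without passing
# through the seller-confirmation step the current process skips.
TRANSITIONS = {
    Refund.REQUESTED: {Refund.ITEM_RETURNED},
    Refund.ITEM_RETURNED: {Refund.SELLER_CONFIRMED, Refund.DISPUTED},
    Refund.SELLER_CONFIRMED: {Refund.PAID},
    Refund.DISPUTED: set(),
    Refund.PAID: set(),
}

def advance(state, new_state):
    """Move the refund forward, rejecting any shortcut."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

The scam described above exploited the missing SELLER_CONFIRMED state: the carrier's delivery scan alone marked the item "received" and released the money.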

Posted on March 6, 2009 at 1:30 PM

Is Megan's Law Worth It?

A study from New Jersey shows that Megan’s Law—laws designed to identify sex offenders to the communities they live in—is ineffective in reducing sex crimes or deterring recidivists.

The study, funded by the National Institute of Justice, examined the cases of 550 sex offenders who were broken into two groups—those released from prison before the passage of Megan’s Law and those released afterward.

The researchers found no statistically significant difference between the groups in whether the offenders committed new sex crimes.

Among those released before the passage of Megan’s Law, 10 percent were re-arrested on sex-crime charges. Among the other group, 7.6 percent were re-arrested for such crimes.

Similarly, the researchers found no significant difference in the number of victims of the two groups. Together, the offenders had 796 victims, ages 1 to 87. Most of the offenders had prior relationships with their new victims, and nearly half were family members. In just 16 percent of the cases, the offender was a stranger.
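The "no statistically significant difference" finding is easy to sanity-check. Assuming the 550 offenders split roughly evenly between the two groups (the study's exact split isn't given here), a standard two-proportion z-test on the 10% vs. 7.6% re-arrest rates:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: is the difference between two
    re-arrest rates larger than chance alone would explain?"""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)            # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 550 offenders, assumed split evenly between the two groups.
z = two_proportion_z(0.10, 275, 0.076, 275)
# |z| < 1.96 means the gap is not significant at the 5% level.
```

Under that assumption z comes out well under 1.96, consistent with the researchers' conclusion that the 2.4-point gap could be noise.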

One complicating factor for the researchers is that sex crimes had started to decline even before the adoption of Megan’s Law, making it difficult to pinpoint cause and effect. In addition, sex offenses vary from county to county, rising and falling from year to year.

Even so, the researchers noted an “accelerated” decline in sex offenses in the years after the law’s passage.

“Although the initial decline cannot be attributed to Megan’s Law, the continued decline may, in fact, be related in some way to registration and notification activities,” the authors wrote. Elsewhere in the report, they noted that notification and increased surveillance of offenders “may have a general deterrent effect.”

Posted on February 23, 2009 at 12:28 PM

Breach Notification Laws

There are three reasons for breach notification laws. One, it’s common politeness that when you lose something of someone else’s, you tell him. The prevailing corporate attitude before the law—”They won’t notice, and if they do notice they won’t know it’s us, so we are better off keeping quiet about the whole thing”—is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.

That last point needs a bit of explanation. The problem with companies protecting your data is that it isn’t in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control—or even knowledge—of the company’s security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost—both in bad publicity and the actual notification—of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.

So how has it worked?

Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University—Sasha Romanosky, Rahul Telang and Alessandro Acquisti—tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effects, people in states with notification laws should experience fewer incidences of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2 percent on average.

I think there’s a combination of things going on. Identity theft is being reported far more today than five years ago, so it’s difficult to compare identity theft rates before and after the state laws were enacted. Most identity theft occurs when someone’s home or work computer is compromised, not from theft of large corporate databases, so the effect of these laws is small. Most of the security improvements companies made didn’t make much of a difference, reducing the effect of these laws.

The laws rely on public shaming. It’s embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there’s an attenuation effect going on. The first major breach after the first state disclosure law was in February 2005 in California, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint’s stock tanked, and it was shamed into improving its security.

Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million. The law worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. Data breach stories felt more like “crying wolf” and soon, data breaches were no longer news.

Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.

I’m still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it’s not going to solve identity theft. As I’ve written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it’s to make it difficult to use.

Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.

This is the second half of a point/counterpoint with Marcus Ranum. Marcus’s essay is here.

Posted on January 21, 2009 at 6:59 AM

Hacking Reputation in MySpace and Facebook

I’ll be the first to admit it: I know next to nothing about MySpace or Facebook. I do know that they’re social networking sites, and that — at least to some extent — your reputation is based on who your “friends” are and what they say about you.

Which means that this follows, like day follows night. “Fake Your Space” is a site where you can hire fake friends to leave their pictures and personalized comments on your page. Now you can pretend that you’re more popular than you actually are:

FakeYourSpace is an exciting new service that enables normal everyday people like me and you to have Hot friends on popular social networking sites such as MySpace and FaceBook. Not only will you be able to see these Gorgeous friends on your friends list, but FakeYourSpace enables you to create customized messages and comments for our Models to leave you on your comment wall. FakeYourSpace makes it easy for any regular person to make it seem like they have a Model for a friend. It doesn’t stop there however. Maybe you want to appear as if you have a Model for a lover. FakeYourSpace can make this happen!

What’s next? Services that verify friends on your friends’ MySpace pages? Services that block friend verification services? Where will this all end up?

Posted on December 7, 2006 at 7:29 AM

Anonymity and Accountability

Last week I blogged Kevin Kelly’s rant against anonymity. Today I wrote about it for Wired.com:

And that’s precisely where Kelly makes his mistake. The problem isn’t anonymity; it’s accountability. If someone isn’t accountable, then knowing his name doesn’t help. If you have someone who is completely anonymous, yet just as completely accountable, then — heck, just call him Fred.

History is filled with bandits and pirates who amass reputations without anyone knowing their real names.

EBay’s feedback system doesn’t work because there’s a traceable identity behind that anonymous nickname. EBay’s feedback system works because each anonymous nickname comes with a record of previous transactions attached, and if someone cheats someone else then everybody knows it.
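The eBay point generalizes: a pseudonym plus a persistent transaction record is enough for accountability, with no real name anywhere in the system. A minimal sketch of that data structure — the class and scoring scheme are invented for illustration, not eBay's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Pseudonym:
    """A persistent nickname with an attached transaction history.
    Accountability comes from the record, not from a real name."""
    nickname: str
    feedback: list = field(default_factory=list)  # +1 / -1 per transaction

    def record(self, score):
        self.feedback.append(score)

    def reputation(self):
        return sum(self.feedback)

# "Fred" is completely anonymous, yet completely accountable:
# every cheat is permanently attached to the nickname.
fred = Pseudonym("fred")
fred.record(+1)
fred.record(+1)
fred.record(-1)
```

The design choice that matters is persistence: the record only deters cheating if abandoning a nickname (and its history) is costly.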

Similarly, Wikipedia’s veracity problems are not a result of anonymous authors adding fabrications to entries. They’re an inherent property of an information system with distributed accountability. People think of Wikipedia as an encyclopedia, but it’s not. We all trust Britannica entries to be correct because we know the reputation of that company, and by extension its editors and writers. On the other hand, we all should know that Wikipedia will contain a small amount of false information because no particular person is accountable for accuracy — and that would be true even if you could mouse over each sentence and see the name of the person who wrote it.

Please read the whole thing before you comment.

Posted on January 12, 2006 at 4:36 AM

Kevin Kelly on Anonymity

He’s against it:

More anonymity is good: that’s a dangerous idea.

Fancy algorithms and cool technology make true anonymity in mediated environments more possible today than ever before. At the same time this techno-combo makes true anonymity in physical life much harder. For every step that masks us, we move two steps toward totally transparent unmasking. We have caller ID, but also caller ID Block, and then caller ID-only filters. Coming up: biometric monitoring and little place to hide. A world where everything about a person can be found and archived is a world with no privacy, and therefore many technologists are eager to maintain the option of easy anonymity as a refuge for the private.

However in every system that I have seen where anonymity becomes common, the system fails. The recent taint in the honor of Wikipedia stems from the extreme ease with which anonymous declarations can be put into a very visible public record. Communities infected with anonymity will either collapse, or shift the anonymous to pseudo-anonymous, as in eBay, where you have a traceable identity behind an invented nickname. Or voting, where you can authenticate an identity without tagging it to a vote.

Anonymity is like a rare earth metal. These elements are a necessary ingredient in keeping a cell alive, but the amount needed is a mere hard-to-measure trace. In larger doses these heavy metals are some of the most toxic substances known to life. They kill. Anonymity is the same. As a trace element in vanishingly small doses, it’s good for the system by enabling the occasional whistleblower, or persecuted fringe. But if anonymity is present in any significant quantity, it will poison the system.

There’s a dangerous idea circulating that the option of anonymity should always be at hand, and that it is a noble antidote to technologies of control. This is like pumping up the levels of heavy metals in your body to make it stronger.

Privacy can only be won by trust, and trust requires persistent identity, if only pseudo-anonymously. In the end, the more trust, the better. Like all toxins, anonymity should be kept as close to zero as possible.

I don’t even know where to begin. Anonymity is essential for free and fair elections. It’s essential for democracy and, I think, liberty. It’s essential to privacy in a large society, and so it is essential to protect the rights of the minority against the tyranny of the majority…and to protect individual self-respect.

Kelly makes the very valid point that reputation makes society work. But that doesn’t mean that 1) reputation can’t be anonymous, or 2) anonymity isn’t also essential for society to work.

I’m writing an essay on this for Wired News. Comments and arguments, pro or con, are appreciated.

Posted on January 5, 2006 at 1:20 PM
