Entries Tagged "fraud"


Authenticating Paperwork

It’s a sad, horrific story. A homeowner returns to find his house demolished. The demolition company was hired legitimately, but a mistake sent it to the wrong house. The company relied on GPS co-ordinates, but requiring street addresses isn’t a solution: a typo in the address is just as likely, and the house would have been demolished just as quickly.

The problem is less how the demolishers knew which house to knock down, and more how they confirmed that knowledge. They trusted the paperwork, and the paperwork was wrong. Informality works when everybody knows everybody else. When merchants and customers know each other, government officials and citizens know each other, and people know their neighbours, people know what’s going on. In that sort of milieu, if something goes wrong, people notice.

In our modern anonymous world, paperwork is how things get done. Traditionally, signatures, forms, and watermarks all made paperwork official. Forgeries were possible but difficult. Today, there’s still paperwork, but for the most part it only exists until the information makes its way into a computer database. Meanwhile, modern technology—computers, fax machines and desktop publishing software—has made it easy to forge paperwork. Every case of identity theft has, at its core, a paperwork failure. Fake work orders, purchase orders, and other documents are used to steal computers, equipment, and stock. Occasionally, fake faxes result in people being sprung from prison. Fake boarding passes can get you through airport security. This month hackers officially changed the name of a Swedish man.

A reporter even changed the ownership of the Empire State Building. Sure, it was a stunt, but this is a growing form of crime. Someone pretends to be you—preferably when you’re away on holiday—and sells your home to someone else, forging your name on the paperwork. You return to find someone else living in your house, someone who thinks he legitimately bought it. In some senses, this isn’t new. Paperwork mistakes and fraud have happened ever since there was paperwork. And the problem hasn’t been fixed yet for several reasons.

One, our sloppy systems generally work fine; sloppiness is how we get things done with minimum hassle. Most people’s houses don’t get demolished, and most people’s names don’t get maliciously changed. As common as identity theft is, it doesn’t happen to most of us. These stories are news because they are so rare. And in many cases, it’s cheaper to pay for the occasional blunder than to ensure it never happens.

Two, sometimes the incentives aren’t in place for paperwork to be properly authenticated. The people who demolished that family home were just trying to get a job done. The same is true for government officials processing title and name changes. Banks get paid when money is transferred from one account to another, not when they find a paperwork problem. We’re all irritated by forms stamped 17 times, and other mysterious bureaucratic processes, but these are actually designed to detect problems.

And three, there’s a psychological mismatch: it is easy to fake paperwork, yet for the most part we act as if it has magical properties of authenticity.

What’s changed is scale. Fraud can be perpetrated against hundreds of thousands, automatically. Mistakes can affect that many people, too. What we need are laws that penalise people or companies—criminally or civilly—who make paperwork errors. This raises the cost of mistakes, making authenticating paperwork more attractive, which changes the incentives of those on the receiving end of the paperwork. And that will cause the market to devise technologies to verify the provenance, accuracy, and integrity of information: telephone verification, addresses and GPS co-ordinates, cryptographic authentication, systems that double- and triple-check, and so on.
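To make that last point concrete, here is a minimal sketch of what cryptographically authenticated paperwork could look like. It’s illustrative only: it assumes a shared secret between issuer and verifier (a real system would want public-key signatures and proper key management), and the function names and the work order are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret between the paperwork issuer and the
# verifier; in practice this would come from a key-management system.
SECRET_KEY = b"issuer-and-verifier-shared-secret"

def issue_work_order(text: str) -> tuple[str, str]:
    """Issue a work order together with an authentication tag."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return text, tag

def verify_work_order(text: str, tag: str) -> bool:
    """Check that a work order was neither forged nor altered."""
    expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

order, tag = issue_work_order("Demolish structure at 123 Elm St.")
print(verify_work_order(order, tag))                                # True: genuine
print(verify_work_order("Demolish structure at 125 Elm St.", tag))  # False: typo or forgery
```

The mechanism matters less than the effect: verification becomes a cheap, mechanical step rather than an act of faith in a signature or a stamp.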

We can’t reduce society’s reliance on paperwork, and we can’t eliminate errors based on it. But we can put economic incentives in place for people and companies to authenticate paperwork more.

This essay originally appeared in The Guardian.

Posted on June 25, 2009 at 6:11 AM

Fraud on eBay

I expected selling my computer on eBay to be easy.

Attempt 1: I listed it. Within hours, someone bought it—from a hacked account. eBay notified me and cancelled the sale.

Attempt 2: I listed it again. Within hours, someone bought it, and asked me to send it to her via FedEx overnight. The buyer sent payment via PayPal immediately, and then—near as I could tell—immediately opened a dispute with PayPal so that the funds were put on hold. And then she sent me an e-mail saying “I paid you, now send me the computer.” But PayPal was faster than she expected, I think. At the same time, I received an e-mail from PayPal saying that I might have received a payment that the account holder did not authorize, and that I shouldn’t ship the item until the investigation is complete.

I’m willing to make Attempt 3, if only to see what kind of scam happens this time. But I still want to sell the computer, and I am pissed off at what is essentially a denial-of-service attack. The facts from this listing are accurate; does anyone want it? List price is over $3K. Send me e-mail.

EDITED TO ADD (6/19): It’s not just me.

EDITED TO ADD (6/24): The computer is sold, to someone who reads my blog.

EDITED TO ADD (6/25): I’m not entirely sure, but it looks like the payment from the second eBay buyer has gone through PayPal. I don’t trust it—just because I can’t figure out the scam doesn’t mean there isn’t one. And, anyway, the computer is sold.

EDITED TO ADD (7/3): For the record: despite articles to the contrary, I was not scammed on eBay. I was the victim of two scam attempts, both of which I detected and did not fall for.

Posted on June 19, 2009 at 11:55 AM

Second SHB Workshop Liveblogging (5)

David Livingstone Smith moderated the fourth session, about (more or less) methodology.

Angela Sasse, University College London (suggested reading: The Compliance Budget: Managing Security Behaviour in Organisations; Human Vulnerabilities in Security Systems), has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply—this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

Bashar Nuseibeh, Open University (suggested reading: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis), talked about mobile phone security; specifically, Facebook privacy on mobile phones. He did something clever in his experiments. Because he wasn’t able to interview people at the moment they did something—he worked with mobile users—he asked them to provide a “memory phrase” that allowed him to effectively conduct detailed interviews at a later time. This worked very well, and resulted in all sorts of information about why people made privacy decisions at that earlier time.

James Pita, University of Southern California (suggested reading: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport), studies security personnel who have to guard a physical location. In his analysis, there are limited resources—guards, cameras, etc.—and a set of locations that need to be guarded. An example would be the Los Angeles airport, where a finite number of K-9 units need to guard eight terminals. His model uses a Stackelberg game to minimize predictability (otherwise, the adversary will learn it and exploit it) while maximizing security. There are complications—observational uncertainty and bounded rationality on the part of the attackers—which he tried to capture in his model.
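To give a flavor of the model, here is a toy zero-sum version of the randomized-coverage problem, solved by water-filling with a binary search. The deployed ARMOR system is much richer (Bayesian attacker types, non-zero-sum payoffs, the observational uncertainty mentioned above), so both the code and the numbers are purely illustrative.

```python
# Toy Stackelberg security game: the defender commits to randomized
# coverage of targets; the attacker observes the mixed strategy and
# attacks the target with the highest expected payoff,
# value_i * (1 - coverage_i). The defender chooses coverage to
# minimize that best payoff.

def optimal_coverage(values, units):
    """Spread `units` of total coverage probability over the targets."""
    lo, hi = 0.0, float(max(values))
    for _ in range(60):  # binary search on the attacker's best payoff t
        t = (lo + hi) / 2
        needed = sum(max(0.0, 1 - t / v) for v in values)
        if needed > units:
            lo = t  # not enough units to push the payoff this low
        else:
            hi = t
    return [max(0.0, 1 - hi / v) for v in values]

# Hypothetical numbers: 3 K-9 units, 8 terminals of differing value.
terminal_values = [10, 8, 8, 6, 5, 4, 2, 1]
coverage = optimal_coverage(terminal_values, units=3.0)
for v, c in zip(terminal_values, coverage):
    print(f"value {v:2d}: cover {c:.2f}, attacker payoff {v * (1 - c):.2f}")
```

The output shows the defining property of the solution: every target worth attacking ends up with the same minimized payoff, so the adversary gains nothing by studying the patrol schedule.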

Markus Jakobsson, Palo Alto Research Center (suggested reading: Male, late with your credit card payment, and like to speed? You will be phished!; Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication), pointed out that auto insurers ask people if they smoke in order to get a feeling for whether they engage in high-risk behaviors. In his experiment, he selected 100 people who had been victims of online fraud and 100 people who had not. He then asked them to complete a survey about different physical risks such as mountain climbing and parachute jumping, financial risks such as buying stocks and real estate, and Internet risks such as visiting porn sites and using public wi-fi networks. He found significant correlation between different risks, but I didn’t see an overall pattern emerge. And in the discussion phase, several people had questions about the data. More analysis, and probably more data, is required. To be fair, he was still in the middle of his analysis.

Rachel Greenstadt, Drexel University (suggested reading: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections), discussed ways in which humans and machines can collaborate in making security decisions. These decisions are hard for several reasons: because they are context dependent, require specialized knowledge, are dynamic, and require complex risk analysis. And humans and machines are good at different sorts of tasks. Machine-style authentication: This guy I’m standing next to knows Jake’s private key, so he must be Jake. Human-style authentication: This guy I’m standing next to looks like Jake and sounds like Jake, so he must be Jake. The trick is to design systems that get the best of these two authentication styles and not the worst. She described two experiments examining two decisions: should I log into this website (the phishing problem), and should I publish this anonymous essay or will my linguistic style betray me?

Mike Roe, Microsoft, talked about crime in online games, particularly in Second Life and Metaplace. There are four classes of people in online games: explorers, socializers, achievers, and griefers. Griefers try to annoy socializers in social worlds like Second Life, or annoy achievers in competitive worlds like World of Warcraft. Crime is not necessarily economic; criminals trying to steal money are much less of a problem in these games than people just trying to be annoying. In the question session, Dave Clark said that griefers are a constant, but economic fraud grows over time. I responded that the two types of attackers are different people, with different personality profiles. I also pointed out that there is another kind of attacker: achievers who use illegal mechanisms to assist themselves.

In the discussion, Peter Neumann pointed out that safety is an emergent property, and requires security, reliability, and survivability. Others weren’t so sure.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Conference dinner tonight at Legal Seafoods. And four more sessions tomorrow.

Posted on June 11, 2009 at 4:50 PM

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies for dealing with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but made them more likely to be afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again people were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture-in-picture attacks,” where a hostile site embedded a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, user experience, and mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions—except for the couple of presenters who would rather not be taped. I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

Malware Steals ATM Data

One of the risks of using a commercial OS for embedded systems like ATMs: it’s easier to write malware against it:

The report does not detail how the ATMs are infected, but it seems likely that the malware is encoded on a card that can be inserted in an ATM card reader to mount a buffer overflow attack. The machine is compromised by replacing the isadmin.exe file to infect the system.

The malicious isadmin.exe program then uses the Windows API to install the functional attack code by replacing a system file called lsass.exe in the C:\WINDOWS directory.

Once the malicious lsass.exe program is installed, it collects users’ account numbers and PIN codes and waits for a human controller to insert a specially crafted control card to take over the ATM.

After the ATM is put under control of a human attacker, they can perform various functions, including harvesting the purloined data or even ejecting the cash box.

EDITED TO ADD (6/14): Seems like the story I quoted was jumping to conclusions. The actual report says “the malware is installed and activated through a dropper file (a file that an attacker can use to deploy tools onto a compromised system) by the name of isadmin.exe,” which doesn’t really sound like it’s referring to a buffer overflow attack carried out through a card emulator. Also, The Register says “[the] malicious programs can be installed only by people with physical access to the machines, making some level of insider cooperation necessary.”

Posted on June 10, 2009 at 1:51 PM

Researchers Hijack a Botnet

A bunch of researchers at the University of California Santa Barbara took control of a botnet for ten days, and learned a lot about how botnets work:

The botnet in question is controlled by Torpig (also known as Sinowal), a malware program that aims to gather personal and financial information from Windows users. The researchers gained control of the Torpig botnet by exploiting a weakness in the way the bots try to locate their command-and-control servers—the bots would generate a list of domains that they planned to contact next, but not all of those domains were registered yet. The researchers registered the domains that the bots would resolve, and set up servers where the bots could connect to find their commands. This method lasted for a full ten days before the botnet’s controllers updated the system and cut the observation short.

During that time, however, UCSB’s researchers were able to gather massive amounts of information on how the botnet functions as well as what kind of information it’s gathering. Almost 300,000 unique login credentials were gathered over the time the researchers controlled the botnet, including 56,000 passwords gathered in a single hour using “simple replacement rules” and a password cracker. They found that 28 percent of victims reused their credentials for accessing 368,501 websites, making it an easy task for scammers to gather further personal information. The researchers noted that they were able to read through hundreds of e-mail, forum, and chat messages gathered by Torpig that “often contain detailed (and private) descriptions of the lives of their authors.”
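The weakness is easiest to see in code. Here is a toy domain-generation algorithm; Torpig’s real generator was different, so treat this purely as an illustration of the idea:

```python
import hashlib
from datetime import date, timedelta

def dga_domains(day, count=5):
    """Toy domain-generation algorithm. It is deterministic, so the
    bots and anyone who has reverse-engineered the malware compute
    exactly the same list."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()[:12]
        domains.append(digest + ".com")
    return domains

# Each day, a bot walks its list and talks to the first domain that
# resolves. A researcher who registers tomorrow's still-unclaimed
# domains before the botmaster does becomes the command server.
today = date(2009, 1, 25)
for domain in dga_domains(today + timedelta(days=1)):
    print("register before the botmaster does:", domain)
```

It also shows why the takeover was so short-lived: once the controllers pushed an update that changed the generator, the researchers’ sinkhole domains simply stopped appearing on the list.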

Here’s the paper:

Abstract:

Botnets, networks of malware-infected machines that are controlled by an adversary, are the root cause of a large number of security threats on the Internet. A particularly sophisticated and insidious type of bot is Torpig, a malware program that is designed to harvest sensitive information (such as bank account and credit card data) from its victims. In this paper, we report on our efforts to take control of the Torpig botnet for ten days. Over this period, we observed more than 180 thousand infections and recorded more than 70 GB of data that the bots collected. While botnets have been “hijacked” before, the Torpig botnet exhibits certain properties that make the analysis of the data particularly interesting. First, it is possible (with reasonable accuracy) to identify unique bot infections and relate that number to the more than 1.2 million IP addresses that contacted our command and control server. This shows that botnet estimates that are based on IP addresses are likely to report inflated numbers. Second, the Torpig botnet is large, targets a variety of applications, and gathers a rich and diverse set of information from the infected victims. This opens the possibility to perform interesting data analysis that goes well beyond simply counting the number of stolen credit cards.
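The paper’s first point, that IP-based estimates inflate, is easy to demonstrate with a toy simulation. All the parameters here (population size, churn rate, observation window) are invented:

```python
import random

# Toy model: a fixed population of bots behind consumer ISPs that
# periodically hand out fresh addresses (DHCP churn). Counting
# distinct source IPs then overstates the real population.
random.seed(0)
BOTS = 10_000
DAYS = 10
CHURN = 0.5  # assumed chance a bot appears on a fresh lease each day

def random_ip():
    return ".".join(str(random.randrange(256)) for _ in range(4))

stable_ip = {bot: random_ip() for bot in range(BOTS)}
seen_ips = set()
for _ in range(DAYS):
    for bot in range(BOTS):
        seen_ips.add(random_ip() if random.random() < CHURN else stable_ip[bot])

print(f"{BOTS} bots were seen as {len(seen_ips)} distinct IP addresses")
```

With these made-up numbers, ten days of observation yields roughly six times as many addresses as bots, about the same inflation the paper reports: 1.2 million IP addresses against some 180 thousand infections.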

Another article.

Posted on May 11, 2009 at 6:56 AM

Low-Tech Impersonation

Sometimes the basic tricks work best:

Police say a man posing as a waiter collected $186 in cash from diners at two restaurants in New Jersey and walked out with the money in his pocket.

Diners described the bogus waiter as a spiky-haired 20-something wearing a dark blue or black button-down shirt, yellow tie and khaki pants.

Police say he approached two women dining at Hobson’s Choice in Hoboken, N.J. around 7:20 p.m. on Thursday. He asked if they needed anything else before paying. They said no and handed him $90 in cash.

About two hours later he approached three women dining at Margherita’s Pizza and Cafe. He asked if they were ready to pay, took $96 and never returned with their change.

Certainly he’ll be caught if he keeps it up, but it’s a good trick if used sparingly.

Posted on April 22, 2009 at 7:04 AM

The Zone of Essential Risk

Bob Blakley makes an interesting point. It’s in the context of eBay fraud, but it’s more general than that.

If you conduct infrequent transactions which are also small, you’ll never lose much money and it’s not worth it to try to protect yourself—you’ll sometimes get scammed, but you’ll have no trouble affording the losses.

If you conduct large transactions, regardless of frequency, each transaction is big enough that it makes sense to insure the transactions or pay an escrow agent. You’ll have occasional experiences of fraud, but you’ll be reimbursed by the insurer or the transactions will be reversed by the escrow agent and you don’t lose anything.

If you conduct small or medium-sized transactions frequently, you can amortize fraud losses using the gains from your other transactions. This is how casinos work; they sometimes lose a hand, but they make it up in the volume.

But if you conduct medium-sized transactions rarely, you’re in trouble. The transactions are big enough so that you care about losses, you don’t have enough transaction volume to amortize those losses, and the cost of insurance or escrow is high enough compared to the value of your transactions that it doesn’t make economic sense to protect yourself.
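Blakley’s four quadrants can be written down as a simple decision rule. The fraud rate, escrow pricing, and thresholds below are all invented for illustration:

```python
FRAUD_RATE = 0.02                      # assumed odds any one deal goes bad
ESCROW_FLAT, ESCROW_PCT = 25.0, 0.01   # hypothetical escrow pricing
PAIN_THRESHOLD = 500.0                 # a loss below this is an affordable annoyance

def best_strategy(size, deals_per_year):
    expected_loss = FRAUD_RATE * size              # per transaction
    escrow_cost = ESCROW_FLAT + ESCROW_PCT * size  # per transaction
    if size < PAIN_THRESHOLD:
        return "self-insure: any single loss is affordable"
    if escrow_cost < expected_loss:
        return "buy protection: it costs less than the expected loss"
    if deals_per_year >= 100:
        return "amortize: volume smooths the losses, casino-style"
    return "zone of essential risk"

for size, freq in [(50, 2), (50_000, 3), (800, 500), (2_000, 2)]:
    print(f"${size:>6} x {freq:>3}/yr -> {best_strategy(size, freq)}")
```

The interesting case is the last one: the transaction is big enough to hurt, too rare to amortize, and protection costs more than the expected loss it removes.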

Posted on March 30, 2009 at 6:50 AM

Election Fraud in Kentucky

I think this is the first documented case of election fraud in the U.S. using electronic voting machines (there have been lots of documented cases of errors and voting problems, but this one involves actual maliciousness):

Five Clay County officials, including the circuit court judge, the county clerk, and election officers, were arrested Thursday after they were indicted on federal charges accusing them of using corrupt tactics to obtain political power and personal gain.

The 10-count indictment, unsealed Thursday, accused the defendants of a conspiracy from March 2002 until November 2006 that violated the Racketeer Influenced and Corrupt Organizations Act (RICO). RICO is a federal statute that prosecutors use to combat organized crime. The defendants were also indicted for extortion, mail fraud, obstruction of justice, conspiracy to injure voters’ rights and conspiracy to commit voter fraud.

According to the indictment, these alleged criminal actions affected the outcome of federal, local, and state primary and general elections in 2002, 2004, and 2006.

From BradBlog:

Clay County uses the horrible ES&S iVotronic system for all of its votes at the polling place. The iVotronic is a touch-screen Direct Recording Electronic (DRE) device, offering no evidence, of any kind, that any vote has ever been recorded as per the voter’s intent. If the allegations are correct here, there would likely have been no way to discover, via post-election examination of machines or election results, that votes had been manipulated on these machines.

ES&S is the largest distributor of voting systems in America and its iVotronic system—which is well-documented to have lost and flipped votes on many occasions—is likely the most widely-used DRE system in the nation. It’s currently in use in some 419 jurisdictions in 18 states including Arkansas, Colorado, Florida, Indiana, Kansas, Kentucky, Missouri, Mississippi, North Carolina, New Jersey, Ohio, Pennsylvania, South Carolina, Tennessee, Texas, Virginia, Wisconsin, and West Virginia.

ArsTechnica has more, and here’s the actual indictment; BradBlog has excerpts.

The fraud itself is very low-tech, and didn’t make use of any of the documented vulnerabilities in the ES&S iVotronic machines; it was basic social engineering. Matt Blaze explains:

The iVotronic is a popular Direct Recording Electronic (DRE) voting machine. It displays the ballot on a computer screen and records voters’ choices in internal memory. Voting officials and machine manufacturers cite the user interface as a major selling point for DRE machines—it’s already familiar to voters used to navigating touchscreen ATMs, computerized gas pumps, and so on, and thus should avoid problems like the infamous “butterfly ballot”. Voters interact with the iVotronic primarily by touching the display screen itself. But there’s an important exception: above the display is an illuminated red button labeled “VOTE” (see photo at right). Pressing the VOTE button is supposed to be the final step of a voter’s session; it adds their selections to their candidates’ totals and resets the machine for the next voter.

The Kentucky officials are accused of taking advantage of a somewhat confusing aspect of the way the iVotronic interface was implemented. In particular, the behavior (as described in the indictment) of the version of the iVotronic used in Clay County apparently differs a bit from the behavior described in ES&S’s standard instruction sheet for voters [pdf – see page 2]. A flash-based iVotronic demo available from ES&S here shows the same procedure, with the VOTE button as the last step. But evidently there’s another version of the iVotronic interface in which pressing the VOTE button is only the second to last step. In those machines, pressing VOTE invokes an extra “confirmation” screen. The vote is only actually finalized after a “confirm vote” box is touched on that screen. (A different flash demo that shows this behavior with the version of the iVotronic equipped with a printer is available from ES&S here). So the iVotronic VOTE button doesn’t necessarily work the way a voter who read the standard instructions might expect it to.

The indictment describes a conspiracy to exploit this ambiguity in the iVotronic user interface by having pollworkers systematically (and incorrectly) tell voters that pressing the VOTE button is the last step. When a misled voter would leave the machine with the extra “confirm vote” screen still displayed, a pollworker would quietly “correct” the not-yet-finalized ballot before casting it. It’s a pretty elegant attack, exploiting little more than a poorly designed, ambiguous user interface, printed instructions that conflict with actual machine behavior, and public unfamiliarity with equipment that most citizens use at most once or twice each year. And once done, it leaves behind little forensic evidence to expose the deed.
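The ambiguity is clearest if you write the two interface flows down explicitly. This sketch paraphrases the behavior described in the indictment and in Blaze’s post; the function names and the simplifications are mine:

```python
# Two versions of the same machine, one set of printed instructions.
# The instructions end at "press VOTE", which finalizes the ballot on
# version A but leaves it pending on version B.

def version_a(actions):
    """Standard flow: pressing VOTE is the final step."""
    return "cast" if actions[-1:] == ["VOTE"] else "pending"

def version_b(actions):
    """Confirm-screen flow: VOTE brings up one more screen."""
    return "cast" if actions[-2:] == ["VOTE", "CONFIRM"] else "pending"

what_voters_were_told = ["make selections", "VOTE"]
print("version A:", version_a(what_voters_were_told))  # cast, as the voter expects
print("version B:", version_b(what_voters_were_told))  # pending: still open for a pollworker
```

Same instructions, two outcomes. On the confirm-screen version, the ballot is still open when the voter walks away, and whoever approaches the machine next gets the final say.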

Read the rest of Blaze’s post for some good analysis on the attack and what it says about iVotronic. He led the team that analyzed the security of that very machine:

We found numerous exploitable security weaknesses in these machines, many of which would make it easy for a corrupt voter, pollworker, or election official to tamper with election results (see our report for details).

[…]

On the one hand, we might be comforted by the relatively “low tech” nature of the attack—no software modifications, altered electronic records, or buffer overflow exploits were involved, even though the machines are, in fact, quite vulnerable to such things. But a close examination of the timeline in the indictment suggests that even these “simple” user interface exploits might well portend more technically sophisticated attacks sooner, rather than later.

Count 9 of the Kentucky indictment alleges that the Clay County officials first discovered and conspired to exploit the iVotronic “confirm screen” ambiguity around June 2004. But Kentucky didn’t get iVotronics until at the earliest late 2003; according to the state’s 2003 HAVA Compliance Plan [pdf], no Kentucky county used the machines as of mid-2003. That means that the officials involved in the conspiracy managed to discover and work out the operational details of the attack soon after first getting the machines, and were able to use it to alter votes in the next election.

[…]

But that’s not the worst news in this story. Even more unsettling is the fact that none of the published security analyses of the iVotronic—including the one we did at Penn—had noticed the user interface weakness. The first people to have discovered this flaw, it seems, didn’t publish or report it. Instead, they kept it to themselves and used it to steal votes.

Me on electronic voting machines, from 2004.

Posted on March 24, 2009 at 6:41 AM

