Entries Tagged "scams"


The Psychology of Being Scammed

Fascinating research on the psychology of con games. “The psychology of scams: Provoking and committing errors of judgement” was prepared for the UK Office of Fair Trading by the University of Exeter School of Psychology.

From the executive summary, here’s some stuff you may know:

Appeals to trust and authority: people tend to obey authorities so scammers use, and victims fall for, cues that make the offer look like a legitimate one being made by a reliable official institution or established reputable business.

Visceral triggers: scams exploit basic human desires and needs—such as greed, fear, avoidance of physical pain, or the desire to be liked—in order to provoke intuitive reactions and reduce the motivation of people to process the content of the scam message deeply. For example, scammers use triggers that make potential victims focus on the huge prizes or benefits on offer.

Scarcity cues. Scams are often personalised to create the impression that the offer is unique to the recipient. They also emphasise the urgency of a response to reduce the potential victim’s motivation to process the scam content objectively.

Induction of behavioural commitment. Scammers ask their potential victims to make small steps of compliance to draw them in, and thereby cause victims to feel committed to continue sending money.

The disproportionate relation between the size of the alleged reward and the cost of trying to obtain it. Scam victims are led to focus on the alleged big prize or reward in comparison to the relatively small amount of money they have to send in order to obtain their windfall; a phenomenon called ‘phantom fixation’. The high value reward (often life-changing, medically, financially, emotionally or physically) that scam victims thought they could get by responding, makes the money to be paid look rather small by comparison.

Lack of emotional control. Compared to non-victims, scam victims report being less able to regulate and resist emotions associated with scam offers. They seem to be unduly open to persuasion, or perhaps unduly undiscriminating about who they allow to persuade them. This creates an extra vulnerability in those who are socially isolated, because social networks often induce us to regulate our emotions when we otherwise might not.

And some stuff that surprised me:

…it was striking how some scam victims kept their decision to respond private and avoided speaking about it with family members or friends. It was almost as if with some part of their minds, they knew that what they were doing was unwise, and they feared the confirmation of that that another person would have offered. Indeed to some extent they hide their response to the scam from their more rational selves.

Another counter-intuitive finding is that scam victims often have better than average background knowledge in the area of the scam content. For example, it seems that people with experience of playing legitimate prize draws and lotteries are more likely to fall for a scam in this area than people with less knowledge and experience in this field. This also applies to those with some knowledge of investments. Such knowledge can increase rather than decrease the risk of becoming a victim.

…scam victims report that they put more cognitive effort into analysing scam content than non-victims. This contradicts the intuitive suggestion that people fall victim to scams because they invest too little cognitive energy in investigating their content, and thus overlook potential information that might betray the scam. This may, however, reflect the victim being ‘drawn in’ to the scam whilst non-victims include many people who discard scams without giving them a second glance.

Related: the psychology of con games.

Posted on June 17, 2009 at 2:05 PM

Second SHB Workshop Liveblogging (6)

The first session of the morning was “Foundations,” which is kind of a catch-all for a variety of things that didn’t really fit anywhere else. Rachel Greenstadt moderated.

Terence Taylor, International Council for the Life Sciences (suggested video to watch: Darwinian Security; Natural Security), talked about the lessons evolution teaches about living with risk. Successful species didn’t survive by eliminating the risks of their environment; they survived by adaptation. Adaptation isn’t always what you think. For example, you could view the collapse of the Soviet Union as a failure to adapt, but you could also view it as successful adaptation. Risk is good. Risk is essential for the survival of a society, because risk-takers are the drivers of change. In the discussion phase, John Mueller pointed out a key difference between human and biological systems: humans tend to respond dramatically to anomalous events (the anthrax attacks), while biological systems respond to sustained change. And David Livingstone Smith asked about the difference between biological adaptation, which can favor the reproductive success of an organism’s genes even at the expense of the organism itself, and security adaptation. (I recommend the book he edited: Natural Security: A Darwinian Approach to a Dangerous World.)

Andrew Odlyzko, University of Minnesota (suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict between Efficiency and Fairness in Markets; Economics, Psychology, and Sociology of Security), discussed human-space vs. cyberspace. People cannot build secure systems—we know that—but people also cannot live with secure systems. We require a certain amount of flexibility in our systems. And finally, people don’t need secure systems. We survive with an astounding amount of insecurity in our world. The problem with cyberspace is that it was originally conceived as separate from the physical world, and as something that could correct for the inadequacies of the physical world. Really, the two are intertwined, and human space more often corrects for the inadequacies of cyberspace. Lessons: build messy systems, not clean ones; create a web of ties to other systems; create permanent records.

danah boyd, Microsoft Research (suggested reading: Taken Out of Context—American Teen Sociality in Networked Publics), does ethnographic studies of teens in cyberspace. Teens tend not to lie to their friends in cyberspace, but they lie to the system. From an early age, they’ve been taught that they need to lie online to be safe. Teens regularly share their passwords: with their parents when forced, or with their best friend or significant other. This is a way of demonstrating trust; it’s part of the social protocol for this generation. In general, teens don’t use social media in the same way as adults do. And when they grow up, they won’t use social media in the same way as today’s adults do. Teens view privacy in terms of control, and take their cues about privacy from celebrities and how they use social media. And their sense of privacy is much more nuanced and complicated. In the discussion phase, danah wasn’t sure whether the younger generation would be more or less susceptible to Internet scams than the rest of us—they’re not nearly as technically savvy as we might think they are. “The only thing that saves teenagers is fear of their parents”: teens try to lock their parents out, and lock others out in the process. Socio-economic status matters a lot, in ways that she is still trying to figure out. There are three distinct types of social networks: personal networks, articulated networks, and behavioral networks.

Mark Levine, Lancaster University (suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence), does social psychology. He argued against the common belief that groups are bad (mob violence, mass hysteria, peer group pressure). He collects data from UK CCTV cameras, searches it for aggressive behavior, and studies when and how bystanders either help escalate or de-escalate the situations. Results: as groups get bigger, there is no increase in anti-social acts but a significant increase in pro-social acts. He has much more analysis and results, too complicated to summarize here. One key finding: when a third party intervenes in an aggressive interaction, the interaction is much more likely to de-escalate. Basically, groups can act against violence. “When it comes to violence (and security), group processes are part of the solution—not part of the problem?”

Jeff MacKie-Mason, University of Michigan (suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security), is an economist: “Security problems are incentive problems.” He discussed motivation, and how to design systems to take motivation into account. Humans are smart devices; they can’t be programmed, but they can be influenced through the sciences of motivational behavior: microeconomics, game theory, social psychology, psychodynamics, and personality psychology. He gave a couple of general examples of how these theories can inform security system design.

Joe Bonneau, Cambridge University, talked about social networks like Facebook, and privacy. People misunderstand why privacy and security are important on social networking sites like Facebook. People underestimate what Facebook really is; it is really a reimplementation of the entire Internet. “Everything on the Internet is becoming social,” and that makes security different. Phishing is different; 419-style scams are different. Social context makes some scams easier; social networks are fun, noisy, and unpredictable. “People use social networking systems with their brain turned off.” But social context can also be used to spot frauds and anomalies, and to establish trust.

Three more sessions to go. (I am enjoying liveblogging the event. It’s helping me focus and pay closer attention.)

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 9:54 AM

Second SHB Workshop Liveblogging (2)

The first session was about deception, moderated by David Clark.

Frank Stajano, Cambridge University (suggested reading: Understanding victims: Six principles for systems security), presented research with Paul Wilson, who films actual scams for “The Real Hustle.” His point is that we build security systems based on our “logic,” but users don’t always follow our logic. It’s the fraudsters who really understand what people do, so we need to understand what the fraudsters understand. Things like distraction, greed, unknown accomplices, and social compliance are important.

David Livingstone Smith, University of New England (suggested reading: Less than human: self-deception in the imagining of others; Talk on Lying at La Ciudad de Las Ideas; a subsequent discussion; Why War?), is a philosopher by training, and goes back to basics: “What are we talking about?” A theoretical definition of deception—“that which something has to have to fall under a term”—is difficult to construct. “Cause to have a false belief,” from the Oxford English Dictionary, is inadequate. “To deceive is to intentionally cause someone to have a false belief” also doesn’t work. “Intentionally causing someone to have a false belief that the speaker knows to be false” still isn’t good enough. The fundamental problem is that these are anthropocentric definitions. Deception is not unique to humans; it gives organisms an evolutionary edge. For example, the mirror orchid fools a wasp into landing on it by looking like, and giving off chemicals that mimic, the female wasp. This example shows that we need a broader definition of “purpose.” His formal definition: “For systems A and B, A deceives B iff A possesses some character C with proper function F, and B possesses a mechanism C* with the proper function F* of producing representations, such that the proper function of C is to cause C* to fail to perform F* by causing C* to form false representations, and C does so in virtue of performing F, and B’s falsely representing enables some feature of A to perform its proper function.”

I spoke next, about the psychology of Conficker, how the human brain buys security, and why science fiction writers shouldn’t be hired to think about terrorism risks (to be published on Wired.com next week).

Dominic Johnson, University of Edinburgh (suggested reading: Paradigm Shifts in Security Strategy; Perceptions of victory and defeat), talked about his chapter in the book Natural Security: A Darwinian Approach to a Dangerous World. Life has 3.5 billion years of experience in security innovation; let’s look at how biology approaches security. Biomimicry, ecology, paleontology, animal behavior, evolutionary psychology, immunology, epidemiology, selection, and adaptation are all relevant. Redundancy is a very important survival tool for species. Here’s an adaptation example: the 9/11 threat was real and we knew about it, but we didn’t do anything. His thesis: adaptation to novel security threats tends to occur after major disasters. There are many historical examples of this; Pearl Harbor, for example. Causes include sensory biases, psychological biases, leadership biases, organizational biases, and political biases—all pushing us toward maintaining the status quo. So it’s natural for us to adapt poorly to security threats in the modern world. A questioner from the audience asked whether control theory had any relevance to this model.

Jeff Hancock, Cornell University (suggested reading: On Lying and Being Lied To: A Linguistic Analysis of Deception in Computer-Mediated Communication; Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in Online Dating Profiles), studies interpersonal deception: how the way we lie to each other intersects with communications technologies, how technologies change the way we lie, and whether technology can be used to detect lying. Despite new technology, people lie for traditional reasons. For example: on dating sites, men tend to lie about their height and women tend to lie about their weight. The recordability of the Internet also changes how we lie. The use of the first-person singular tends to go down the more people lie. He verified this in many spheres, such as how people describe themselves in chat rooms, and in true versus false statements that the Bush administration made about 9/11 and Iraq. The effect was more pronounced when administration officials were answering questions than when they were reading prepared remarks.
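As a toy illustration of that last cue (this is my sketch, not Hancock’s methodology; the word list and sample sentences are hypothetical), you can measure how often a text uses first-person singular pronouns. His finding predicts a lower rate in deceptive text:

    # A toy sketch of the linguistic cue described above, not Hancock's
    # actual method: compute the fraction of words that are first-person
    # singular pronouns. His finding predicts a lower rate when lying.
    import re

    FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

    def first_person_rate(text):
        """Return the fraction of words that are first-person singular."""
        words = re.findall(r"[a-z']+", text.lower())
        if not words:
            return 0.0
        return sum(w in FIRST_PERSON_SINGULAR for w in words) / len(words)

    # Hypothetical samples: the evasive one avoids "I" entirely.
    print(first_person_rate("I bought the laptop myself and I tested it."))
    print(first_person_rate("The laptop was purchased and then tested."))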

EDITED TO ADD (6/11): Adam Shostack liveblogged this session, too. And Ross’s liveblogging is in his blog post’s comments.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 9:37 AM

Researchers Hijack a Botnet

A bunch of researchers at the University of California, Santa Barbara took control of a botnet for ten days, and learned a lot about how botnets work:

The botnet in question is controlled by Torpig (also known as Sinowal), a malware program that aims to gather personal and financial information from Windows users. The researchers gained control of the Torpig botnet by exploiting a weakness in the way the bots try to locate their command-and-control servers—the bots would generate a list of domains that they planned to contact next, but not all of those domains were registered yet. The researchers then registered the domains that the bots would resolve, and set up servers where the bots could connect to find their commands. This method lasted for a full ten days before the botnet’s controllers updated the system and cut the observation short.

During that time, however, UCSB’s researchers were able to gather massive amounts of information on how the botnet functions as well as what kind of information it’s gathering. Almost 300,000 unique login credentials were gathered over the time the researchers controlled the botnet, including 56,000 passwords gathered in a single hour using “simple replacement rules” and a password cracker. They found that 28 percent of victims reused their credentials for accessing 368,501 websites, making it an easy task for scammers to gather further personal information. The researchers noted that they were able to read through hundreds of e-mail, forum, and chat messages gathered by Torpig that “often contain detailed (and private) descriptions of the lives of their authors.”
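To make the weakness concrete, here’s a simplified, hypothetical sketch of the idea (Torpig’s real domain-generation scheme was different and more elaborate): because the bots derive their rendezvous domains deterministically, anyone who computes a future domain and registers it first gets to impersonate the command-and-control server.

    # A simplified, hypothetical illustration of the weakness: bots derive
    # a date-based list of rendezvous domains, so researchers who register
    # a future domain before the botmasters do can impersonate the C&C
    # server. (Torpig's actual algorithm differed; this only sketches it.)
    import hashlib
    from datetime import date, timedelta

    def candidate_domains(day, count=3):
        """Deterministically derive the domains a bot would try on a day."""
        domains = []
        for i in range(count):
            seed = f"{day.isoformat()}-{i}".encode()
            label = hashlib.sha256(seed).hexdigest()[:12]
            domains.append(label + ".com")
        return domains

    # Both the bots and the researchers can compute tomorrow's list in
    # advance; whoever registers one of these names first controls it.
    print(candidate_domains(date.today() + timedelta(days=1)))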

Here’s the paper:

Abstract:

Botnets, networks of malware-infected machines that are controlled by an adversary, are the root cause of a large number of security threats on the Internet. A particularly sophisticated and insidious type of bot is Torpig, a malware program that is designed to harvest sensitive information (such as bank account and credit card data) from its victims. In this paper, we report on our efforts to take control of the Torpig botnet for ten days. Over this period, we observed more than 180 thousand infections and recorded more than 70 GB of data that the bots collected. While botnets have been “hijacked” before, the Torpig botnet exhibits certain properties that make the analysis of the data particularly interesting. First, it is possible (with reasonable accuracy) to identify unique bot infections and relate that number to the more than 1.2 million IP addresses that contacted our command and control server. This shows that botnet estimates that are based on IP addresses are likely to report inflated numbers. Second, the Torpig botnet is large, targets a variety of applications, and gathers a rich and diverse set of information from the infected victims. This opens the possibility to perform interesting data analysis that goes well beyond simply counting the number of stolen credit cards.
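The abstract’s point about inflated population estimates is easy to see with a toy example (the log records below are hypothetical): one bot that reconnects from several DHCP-assigned addresses looks like several bots if you count IP addresses instead of unique infections.

    # Toy, hypothetical connection log: bot "A" reappears behind two
    # different IP addresses (e.g., after new DHCP leases).
    connections = [
        {"bot_id": "A", "ip": "198.51.100.4"},
        {"bot_id": "A", "ip": "198.51.100.7"},
        {"bot_id": "B", "ip": "203.0.113.9"},
        {"bot_id": "B", "ip": "203.0.113.9"},
    ]

    unique_ips = {c["ip"] for c in connections}
    unique_bots = {c["bot_id"] for c in connections}

    print(len(unique_ips))   # 3 -- the inflated, IP-based estimate
    print(len(unique_bots))  # 2 -- the deduplicated infection count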

Another article.

Posted on May 11, 2009 at 6:56 AM

Low-Tech Impersonation

Sometimes the basic tricks work best:

Police say a man posing as a waiter collected $186 in cash from diners at two restaurants in New Jersey and walked out with the money in his pocket.

Diners described the bogus waiter as a spikey-haired 20-something wearing a dark blue or black button-down shirt, yellow tie and khaki pants.

Police say he approached two women dining at Hobson’s Choice in Hoboken, N.J. around 7:20 p.m. on Thursday. He asked if they needed anything else before paying. They said no and handed him $90 in cash.

About two hours later he approached three women dining at Margherita’s Pizza and Cafe. He asked if they were ready to pay, took $96 and never returned with their change.

Certainly he’ll be caught if he keeps it up, but it’s a good trick if used sparingly.

Posted on April 22, 2009 at 7:04 AM

Social Networking Identity Theft Scams

Clever:

I’m going to tell you exactly how someone can trick you into thinking they’re your friend. Now, before you send me hate mail for revealing this deep, dark secret, let me assure you that the scammers, crooks, predators, stalkers and identity thieves are already aware of this trick. It works only because the public is not aware of it. If you’re scamming someone, here’s what you’d do:

Step 1: Request to be “friends” with a dozen strangers on MySpace. Let’s say half of them accept. Collect a list of all their friends.

Step 2: Go to Facebook and search for those six people. Let’s say you find four of them also on Facebook. Request to be their friends on Facebook. All accept because you’re already an established friend.

Step 3: Now compare the MySpace friends against the Facebook friends. Generate a list of people that are on MySpace but are not on Facebook. Grab the photos and profile data on those people from MySpace and use it to create false but convincing profiles on Facebook. Send “friend” requests to your victims on Facebook.

As a bonus, others who are friends of both your victims and your fake self will contact you to be friends and, of course, you’ll accept. In fact, Facebook itself will suggest you as a friend to those people.

(Think about the trust factor here. For these secondary victims, they not only feel they know you, but actually request “friend” status. They sought you out.)

Step 4: Now, you’re in business. You can ask things of these people that only friends dare ask.

Like what? Lend me $500. When are you going out of town? Etc.

The author has no evidence that anyone has actually done this, but certainly someone will do this sometime in the future.
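The core of the trick in step 3 is just set arithmetic on the two friend lists. A minimal sketch (the names and data structures are hypothetical):

    # A minimal sketch of step 3 above: find friends who have a MySpace
    # presence but no Facebook account, and are therefore easy to
    # impersonate on Facebook. All names here are hypothetical.
    myspace_friends = {"alice", "bob", "carol", "dave"}
    facebook_friends = {"alice", "carol"}

    # Set difference: on MySpace but absent from Facebook.
    impersonation_targets = myspace_friends - facebook_friends
    print(sorted(impersonation_targets))  # ['bob', 'dave']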

We have seen attacks by people hijacking existing social networking accounts:

Rutberg was the victim of a new, targeted version of a very old scam—the “Nigerian,” or “419,” ploy. The first reports of such scams emerged back in November, part of a new trend in the computer underground—rather than sending out millions of spam messages in the hopes of trapping a tiny fraction of recipients, Web criminals are getting much more personal in their attacks, using social networking sites and other databases to make their story lines much more believable.

In Rutberg’s case, criminals managed to steal his Facebook login password, steal his Facebook identity, and change his page to make it appear he was in trouble. Next, the criminals sent e-mails to dozens of friends, begging them for help.

“Can you just get some money to us,” the imposter implored to one of Rutberg’s friends. “I tried Amex and it’s not going through. … I’ll refund you as soon as am back home. Let me know please.”

Posted on April 8, 2009 at 6:43 AM

Google Maps Spam

There are zillions of locksmiths in New York City.

Not really; this is the latest attempt by phony locksmiths to steer business to themselves:

This is one of the scary parts they have a near monopoly on the cell phone 411 system. They have filled the data bases with so many phony address listings in most major citys that when you call 411 on your cell phone ( which most people do now) you will get the same counterfiet locksmiths over and over again. you could ask for 10 listings and they will all be one of these scammers or another with some local adress that is phony. they use thousands of different names also. It is always the same 55.00 service qouted for a lockout and after they unlock your stuff the price goes much higher. These companys are really not in the rural areas but the are in just about all major citys from coast to coast and from top to bottom. [sic]

More here:

Google wasn’t their first target. The “blackhats” in the industry have used whatever marketing vehicle was “au courant,” whether it was the phone books, 411 or now Google and Yahoo.

Here is a BBB alert from 2007, BBB Warns Consumers of Nationwide Locksmith Swindle and a recent ABC news article and video. The Associated Locksmiths of America provides a list of over 110 news reports over the past several years from across the nation detailing the abuses. As you can see, consumers have paid the price of these many scams with high prices, rip-off installs and even theft.

Posted on March 11, 2009 at 12:38 PM

New eBay Fraud

Here’s a clever attack, exploiting relative delays in eBay, PayPal, and UPS shipping:

The buyer reported the item as “destroyed” and demanded and got a refund from Paypal. When the buyer shipped it back to Chad and he opened it, he found there was nothing wrong with it—except that the scammer had removed the memory, processor and hard drive. Now Chad is out $500 and left with a shell of a computer, and since the item was “received” Paypal won’t do anything.

Very clever. The seller accepted the return from UPS after a visual inspection, so UPS considered the matter closed. PayPal and eBay both considered the matter closed. If the amount was large enough, the seller could sue, but how could he prove that the computer was functional when he sold it?

It seems to me that the only way to solve this is for PayPal to not process refunds until the seller confirms what he received back is the same as what he shipped. Yes, then the seller could commit similar fraud, but sellers (certainly professional ones) have a greater reputational risk.
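That rule amounts to adding one state to the refund flow. A hypothetical sketch (the states and function here are mine, not PayPal’s actual process):

    # A minimal sketch of the suggested fix above: hold the refund until
    # the seller confirms the returned item matches what was shipped.
    # All states and logic here are hypothetical.
    from enum import Enum, auto

    class RefundState(Enum):
        REQUESTED = auto()
        RETURN_IN_TRANSIT = auto()
        AWAITING_SELLER_CONFIRMATION = auto()
        REFUNDED = auto()
        DISPUTED = auto()

    def next_state(state, seller_confirms_match=None):
        """Advance the refund; money moves only after seller confirmation."""
        if state is RefundState.REQUESTED:
            return RefundState.RETURN_IN_TRANSIT
        if state is RefundState.RETURN_IN_TRANSIT:
            return RefundState.AWAITING_SELLER_CONFIRMATION
        if state is RefundState.AWAITING_SELLER_CONFIRMATION:
            if seller_confirms_match:
                return RefundState.REFUNDED
            return RefundState.DISPUTED  # e.g., parts were stripped
        return state

    # The buyer can still lie at the confirmation step, but as noted
    # above, the fraud risk shifts to the party with reputation at stake.
    print(next_state(RefundState.AWAITING_SELLER_CONFIRMATION, False))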

Posted on March 6, 2009 at 1:30 PM

In-Person Credit Card Scam

Surely this isn’t new:

Suspects entered the business, selected merchandise worth almost $8,000. They handed a credit card with no financial backing to the clerk which when swiped was rejected by the cash register’s computer. The suspects then informed the clerk that this rejection was expected and to contact the credit card company by phone to receive a payment approval confirmation code. The clerk was then given a number to call which was answered by another person in the scam who approved the purchase and gave a bogus confirmation number. The suspects then left the store with the unpaid for merchandise.

Anyone reading this blog would know enough not to call a number given to you by the potential purchaser, but presumably many store clerks don’t have good security sense.

Posted on January 19, 2009 at 1:23 PM

How to Steal the Empire State Building

A reporter managed to file legal papers transferring ownership of the Empire State Building to himself. Yes, it’s a stunt:

The office of the city register, upon receipt of the phony documents prepared by the newspaper, transferred ownership of the 102-story building from Empire State Land Associates to Nelots Properties, LLC. Nelots is “stolen” spelled backward.

To further enhance the absurdity of the heist, included on the bogus paperwork were original “King Kong” star Fay Wray as witness and Willie Sutton, the notorious bank robber, as the notary.

Still, this sort of thing has been used to commit fraud in the past, and will continue to be a source of fraud in the future. The problem is that there isn’t enough integrity checking to ensure that the person who is “selling” the real estate is actually the person who owns it.

Posted on December 15, 2008 at 12:23 PM

