Entries Tagged "cheating"


Wrasse Punish Cheaters

Interesting:

The bluestreak cleaner wrasse (Labroides dimidiatus) operates an underwater health spa for larger fish. It advertises its services with bright colours and distinctive dances. When customers arrive, the cleaner eats parasites and dead tissue lurking in any hard-to-reach places. Males and females will sometimes operate a joint business, working together to clean their clients. The clients, in return, dutifully pay the cleaners by not eating them.

That’s the basic idea, but cleaners sometimes violate their contracts. Rather than picking off parasites, they’ll take a bite of the mucus that lines their clients’ skin. That’s an offensive act—it’s like a masseuse having an inappropriate grope between strokes. The affronted client will often leave. That’s particularly bad news if the cleaners are working as a pair because the other fish, who didn’t do anything wrong, still loses out on future parasite meals.

Males don’t take this sort of behaviour lightly. Nichola Raihani from the Zoological Society of London has found that males will punish their female partners by chasing them aggressively, if their mucus-snatching antics cause a client to storm out.

[…]

At first glance, the male cleaner wrasse behaves oddly for an animal, in punishing an offender on behalf of a third party, even though he hasn’t been wronged himself. That’s common practice in human societies but much rarer in the animal world. But Raihani’s experiments clearly show that the males are actually doing themselves a favour by punishing females on behalf of a third party. Their act of apparent altruism means they get more food in the long run.

Posted on January 20, 2010 at 1:26 PM

Computer Card Counter Detects Human Card Counters

All it takes is a computer that can track every card:

The anti-card-counter system uses cameras to watch players and keep track of the actual “count” of the cards, the same way a player would. It also measures how much each player is betting on each hand, and it syncs up the two data points to look for patterns in the action. If a player is betting big when the count is indeed favorable, and keeping his chips to himself when it’s not, he’s fingered by the computer… and, in the real world, he’d probably receive a visit from a burly dude in a bad suit, too.

The system reportedly works even if the gambler intentionally attempts to mislead it with high bets at unfavorable times.

Of course it does; it’s just a signal-to-noise problem.
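
The core of the detection is easy to sketch. Here is a toy version in Python; the Hi-Lo count and the plain correlation test are my assumptions, not anything disclosed about the actual product:

```python
# A toy version of the detection idea: track the Hi-Lo count the way a
# counter would, record each bet, and flag players whose bets rise and
# fall with the count. The encoding and the use of a plain correlation
# coefficient are assumptions, not details of the real system.

from statistics import correlation  # Python 3.10+

# Hi-Lo values: 2-6 count +1, 7-9 count 0, tens/faces/aces count -1
# (faces encoded as 10, aces as 11).
HI_LO = {2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 0, 8: 0, 9: 0, 10: -1, 11: -1}


def running_count(cards_seen):
    """Return the Hi-Lo running count after the cards seen so far."""
    return sum(HI_LO[card] for card in cards_seen)


def counting_score(counts_at_bet_time, bets):
    """Correlation between the count a player saw and the bet they placed."""
    return correlation(counts_at_bet_time, bets)


# Toy data: bets that track the count look like card counting.
counts = [-3, -1, 0, 2, 4, 5, 1, -2]
bets = [10, 10, 10, 50, 100, 100, 25, 10]
print(counting_score(counts, bets))  # close to +1: flag for a closer look
```

The hard part isn't this arithmetic; it's the computer vision needed to read cards and chip stacks reliably enough to feed it.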

I have long been impressed with the casino industry’s ability to, in the case of blackjack, convince the gambling public that using strategy equals cheating.

Posted on October 20, 2009 at 6:16 AM

Second SHB Workshop Liveblogging (5)

David Livingstone Smith moderated the fourth session, about (more or less) methodology.

Angela Sasse, University College London (suggested reading: The Compliance Budget: Managing Security Behaviour in Organisations; Human Vulnerabilities in Security Systems), has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply—this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

Bashar Nuseibeh, Open University (suggested reading: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis), talked about mobile phone security; specifically, Facebook privacy on mobile phones. He did something clever in his experiments. Because he wasn’t able to interview people at the moment they did something—he worked with mobile users—he asked them to provide a “memory phrase” that allowed him to effectively conduct detailed interviews at a later time. This worked very well, and resulted in all sorts of information about why people made privacy decisions at that earlier time.

James Pita, University of Southern California (suggested reading: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport), studies security personnel who have to guard a physical location. In his analysis, there are limited resources—guards, cameras, etc.—and a set of locations that need to be guarded. An example would be the Los Angeles airport, where a finite number of K-9 units need to guard eight terminals. His model uses a Stackelberg game to minimize predictability (otherwise, the adversary will learn it and exploit it) while maximizing security. There are complications—observational uncertainty and bounded rationality on the part of the attackers—which he tried to capture in his model.
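
ARMOR itself solves a Bayesian Stackelberg game with specialized solvers, but the underlying idea is easy to illustrate: commit to randomized coverage, assume the attacker observes it and best-responds, and pick the randomization that minimizes the worst case. Here is a brute-force toy sketch in Python, with target values and zero-sum payoffs that are my simplifications:

```python
# A toy Stackelberg security game: the defender commits to randomized
# coverage of a few targets, the attacker observes that commitment and
# hits the target with the highest expected payoff. Target values, the
# zero-sum payoffs, and the grid search are simplifications of mine;
# ARMOR solves a much richer Bayesian version with proper solvers.

from itertools import product

TARGET_VALUES = [10, 6, 4]  # hypothetical value of each terminal
NUM_GUARDS = 1              # canine units to spread across the targets
STEP = 0.05                 # grid resolution for coverage probabilities


def worst_case_loss(coverage):
    """Defender's expected loss when the attacker best-responds."""
    return max((1 - c) * v for c, v in zip(coverage, TARGET_VALUES))


def best_coverage():
    """Brute-force the coverage vector that minimizes the worst-case loss."""
    grid = [round(i * STEP, 2) for i in range(int(1 / STEP) + 1)]
    best, best_loss = None, float("inf")
    for coverage in product(grid, repeat=len(TARGET_VALUES)):
        if sum(coverage) > NUM_GUARDS + 1e-9:  # can't cover more than we patrol
            continue
        loss = worst_case_loss(coverage)
        if loss < best_loss:
            best, best_loss = coverage, loss
    return best, best_loss


print(best_coverage())  # the valuable target gets covered most, but never with certainty
```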

Markus Jakobsson, Palo Alto Research Center (suggested reading: Male, late with your credit card payment, and like to speed? You will be phished!; Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication), pointed out that auto insurers ask people if they smoke in order to get a feeling for whether they engage in high-risk behaviors. In his experiment, he selected 100 people who were the victim of online fraud and 100 people who were not. He then asked them to complete a survey about different physical risks such as mountain climbing and parachute jumping, financial risks such as buying stocks and real estate, and Internet risks such as visiting porn sites and using public wi-fi networks. He found significant correlation between different risks, but I didn’t see an overall pattern emerge. And in the discussion phase, several people had questions about the data. More analysis, and probably more data, is required. To be fair, he was still in the middle of his analysis.
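
The analysis itself is simple to sketch: score each respondent in each risk domain and look at the pairwise correlations. Here is a minimal Python version; the column names and placeholder numbers are mine, not his data:

```python
# A minimal sketch of that kind of analysis: per-respondent risk scores in
# each domain, plus victim status, and the pairwise correlations between
# them. The labels and random placeholder data are mine, not Jakobsson's
# survey data.

import numpy as np

rng = np.random.default_rng(0)
n = 200  # 100 fraud victims and 100 non-victims, as in the study design

victim = np.repeat([1, 0], n // 2)
# Placeholder 0-10 risk scores; a real analysis would load the survey answers.
physical = rng.integers(0, 11, n)
financial = rng.integers(0, 11, n)
internet = rng.integers(0, 11, n)

labels = ["victim", "physical", "financial", "internet"]
corr = np.corrcoef(np.vstack([victim, physical, financial, internet]))

for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"{labels[i]:9} vs {labels[j]:9}: r = {corr[i, j]:+.2f}")
```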

Rachel Greenstadt, Drexel University (suggested reading: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections), discussed ways in which humans and machines can collaborate in making security decisions. These decisions are hard for several reasons: because they are context dependent, require specialized knowledge, are dynamic, and require complex risk analysis. And humans and machines are good at different sorts of tasks. Machine-style authentication: This guy I’m standing next to knows Jake’s private key, so he must be Jake. Human-style authentication: This guy I’m standing next to looks like Jake and sounds like Jake, so he must be Jake. The trick is to design systems that get the best of these two authentication styles and not the worst. She described two experiments examining two decisions: should I log into this website (the phishing problem), and should I publish this anonymous essay or will my linguistic style betray me?
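
For that last question, the attacker's side is easy to caricature: represent each text by its function-word frequencies and attribute the anonymous essay to the closest known author. Greenstadt's actual techniques use much richer features and classifiers; this Python sketch is only my simplification of the idea:

```python
# A toy version of the attribution side of the anonymity question:
# represent each text by its function-word frequencies and attribute the
# anonymous essay to the closest known author. The word list and the
# similarity measure are simplifications of mine.

from collections import Counter
from math import sqrt

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "was", "it"]


def style_vector(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


def likely_author(anonymous_text, known_texts):
    """Return the known author whose writing style is closest to the essay's."""
    anon = style_vector(anonymous_text)
    return max(known_texts, key=lambda name: cosine(anon, style_vector(known_texts[name])))


# Usage: likely_author(essay, {"alice": alice_sample, "bob": bob_sample})
```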

Mike Roe, Microsoft, talked about crime in online games, particularly in Second Life and Metaplace. There are four classes of people in online games: explorers, socializers, achievers, and griefers. Griefers try to annoy socializers in social worlds like Second Life, or annoy achievers in competitive worlds like World of Warcraft. Crime is not necessarily economic; criminals trying to steal money are much less of a problem in these games than people just trying to be annoying. In the question session, Dave Clark said that griefers are a constant, but economic fraud grows over time. I responded that the two types of attackers are different people, with different personality profiles. I also pointed out that there is another kind of attacker: achievers who use illegal mechanisms to assist themselves.

In the discussion, Peter Neumann pointed out that safety is an emergent property, and requires security, reliability, and survivability. Others weren’t so sure.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Conference dinner tonight at Legal Seafoods. And four more sessions tomorrow.

Posted on June 11, 2009 at 4:50 PM

Corrupted Word Files for Sale

On one hand, this is clever:

We offer a wide array of corrupted Word files that are guaranteed not to open on a Mac or PC. A corrupted file is a file that contains scrambled and unrecoverable data due to hardware or software failure. Files may become corrupted when something goes wrong while a file is being saved e.g. the program saving the file might crash. Files may also become corrupted when being sent via email. The perfect excuse to buy you that extra time!

This download includes a 2, 5, 10, 20, 30 and 40 page corrupted Word file. Use the appropriate file size to match each assignment. Who’s to say your 10 page paper didn’t get corrupted? Exactly! No one can! It’s the perfect excuse to buy yourself extra time and not hand in a garbage paper.

Only $3.95. Cheap. Although for added verisimilitude, they should have an additional service where you send them a file—a draft of your paper, for example—and they corrupt it and send it back.

But on the other hand, it’s services like these that will force professors to treat corrupted attachments as work not yet turned in, and harm innocent homework submitters.

EDITED TO ADD (6/9): Here’s how to make a corrupted pdf file for free.

Posted on June 9, 2009 at 6:46 AM

Update on Computer Science Student's Computer Seizure

In April, I blogged about the Boston police seizing a student’s computer for, among other things, running Linux. (Anyone who runs Linux instead of Windows is obviously a scary bad hacker.)

Last week, the Massachusetts Supreme Judicial Court threw out the search warrant:

Massachusetts Supreme Judicial Court Associate Justice Margot Botsford on Thursday said that Boston College and Massachusetts State Police had insufficient evidence to search the dorm room of BC senior Riccardo Calixte. During the search, police confiscated a variety of electronic devices, including three laptop computers, two iPod music players, and two cellphones.

Police obtained a warrant to search Calixte’s dorm after a roommate accused him of breaking into the school’s computer network to change other students’ grades, and of spreading a rumor via e-mail that the roommate is gay.

Botsford said the search warrant affidavit presented considerable evidence that the e-mail came from Calixte’s laptop computer. But even if it did, she said, spreading such rumors is probably not illegal. Botsford also said that while breaking into BC’s computer network would be criminal activity, the affidavit supporting the warrant presented little evidence that such a break-in had taken place.

Posted on June 2, 2009 at 12:01 PM

Using Surveillance Cameras to Detect Cashier Cheating

It’s called “sweethearting”: when cashiers pass free merchandise to friends. And some stores are using security cameras to detect it:

Mathematical algorithms embedded in the stores’ new security system pick out sweethearting on their own. There’s no need for a security guard watching banks of video monitors or reviewing hours of grainy footage. When the system thinks it’s spotted evidence, it alerts management on a computer screen and offers up the footage.

[…]

Big Y’s security system comes from a Cambridge, Mass.-based company called StopLift Inc. The technology works by scouring video pixels for various gestures and deciding whether they add up to a normal transaction at the register or not.

How good is it? My guess is that it’s not very good, but this is an instance where that may be good enough. As long as there aren’t a lot of false positives—as long as a person can quickly review the suspect footage and dismiss it as a false positive—the cost savings might be worth the expense.
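
The tradeoff is just arithmetic. Here is a back-of-the-envelope version in Python, where every number is a made-up assumption rather than anything from Big Y or StopLift:

```python
# Back-of-the-envelope only: every number below is a made-up assumption,
# not anything from Big Y or StopLift.

TRANSACTIONS_PER_DAY = 20_000
FALSE_POSITIVE_RATE = 0.001    # fraction of honest transactions flagged
DETECTION_RATE = 0.5           # fraction of sweethearting incidents flagged
INCIDENTS_PER_DAY = 5          # actual sweethearting events
LOSS_PER_INCIDENT = 40.00      # dollars of merchandise passed for free
REVIEW_MINUTES_PER_ALERT = 2
STAFF_COST_PER_MINUTE = 0.50   # dollars

false_alerts = TRANSACTIONS_PER_DAY * FALSE_POSITIVE_RATE
true_alerts = INCIDENTS_PER_DAY * DETECTION_RATE
review_cost = (false_alerts + true_alerts) * REVIEW_MINUTES_PER_ALERT * STAFF_COST_PER_MINUTE
loss_prevented = true_alerts * LOSS_PER_INCIDENT

print(f"alerts to review per day: {false_alerts + true_alerts:.1f}")
print(f"review cost: ${review_cost:.2f}/day   loss prevented: ${loss_prevented:.2f}/day")
```

With numbers anywhere near these, a person can clear a whole day's alerts in under an hour, which is why even a mediocre detector can pay for itself.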

Posted on May 13, 2009 at 7:55 AM

Cheating at Disneyworld

Interesting discussion of different ways to cheat and skip the lines at Disney theme parks. Most of the tricks involve their FastPass system for virtual queuing:

Moving toward the truly disingenuous, we’ve got the “FastPass Switcheroo.” To do this, simply get your FastPass like normal for Splash Mountain. You notice that the return time is two hours away, in the afternoon. Wait two hours, then return here and get another set of FP tickets, this time for later in the evening. But at this moment, your first set of FP tickets are active. Use them to get by the FP guard at the front, but when prompted to turn in your tickets at the front of the FP line, hand over the ones for this evening instead. 99.9% of the time, they do not look at these tickets whatsoever at this point in the line; they just add them to the pile in their hand and impatiently gesture you forward. All the examining of the tickets takes place at the start of the line, not the end. Voila, you’ve cheated the system. After this ride, you can get off and immediately ride again, since you’ve held on to the afternoon FPs and can use them in the normal fashion now.

Posted on February 12, 2009 at 1:24 PM
