Entries Tagged "cheating"


Football Match Fixing

Detecting fixed football (soccer) games.

There is a certain buzz of expectation, because Oscar, one of the fraud analysts, has spotted a game he is sure has been fixed.

“We’ve been watching this for a couple of weeks now,” he says.

“The odds have gone to a very suspicious level. We believe that this game will finish in an away victory. Usually an away team would have around a 30% chance of winning, but at the current odds this team is about 85% likely to win.”

[…]

Often news of the fix will leak so that gamblers jump on the bandwagon. The game we are watching falls, it seems, into the second category.

Oscar monitors the betting at half-time. He is especially interested in money being laid not on the result itself, but on the number of goals that are going to be scored.

“The most likely score lines are 2-1 or 3-1,” he announces.

This is interesting:

Oscar is also interested in the activity of a club manager – but his modus operandi is somewhat different. He does not throw games. He wins them.

[…]

“The reason he’s so important is because he has relationships with all his previous clubs. He has managed at least three or four of the teams he is now buying wins against. He has also managed a lot of players from the opposition, who are being told to lose these matches.”

I always think of fixing a game as meaning losing it on purpose, not winning it by paying the other team to lose.
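
The arithmetic behind the analysts’ suspicion is simple: decimal betting odds imply a win probability of roughly 1/odds, and the alarm is that the market’s implied probability for one outcome has drifted far above its historical baseline. A minimal sketch with made-up numbers (nothing here reflects the fraud firm’s actual models):

    # Minimal sketch (hypothetical numbers): flag a match where the implied
    # away-win probability in the betting market is far above the baseline.

    def implied_probabilities(decimal_odds):
        """Convert decimal odds to implied probabilities, normalizing away
        the bookmaker's overround so the probabilities sum to 1."""
        raw = {outcome: 1.0 / odds for outcome, odds in decimal_odds.items()}
        total = sum(raw.values())
        return {outcome: p / total for outcome, p in raw.items()}

    # Hypothetical market: decimal odds for home win / draw / away win.
    market = {"home": 9.0, "draw": 7.5, "away": 1.18}
    probs = implied_probabilities(market)

    BASELINE_AWAY_WIN = 0.30   # typical away-win rate cited in the excerpt
    SUSPICION_THRESHOLD = 2.0  # flag markets at twice the baseline or more

    if probs["away"] > SUSPICION_THRESHOLD * BASELINE_AWAY_WIN:
        print(f"Suspicious market: implied away-win probability {probs['away']:.0%} "
              f"vs. baseline {BASELINE_AWAY_WIN:.0%}")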

Posted on December 3, 2010 at 12:41 PM

Term Paper Writing for Hire

This recent essay (commentary here) reminded me of this older essay, both by people who write student term papers for hire.

There are several services that do automatic plagiarism detection—basically, comparing phrases from the paper with general writings on the Internet and even caches of previously written papers—but detecting this kind of custom plagiarism work is much harder.
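
The phrase-comparison step itself is mechanically simple. A minimal sketch of the idea, not any vendor’s actual algorithm: break each document into overlapping word n-grams (“shingles”) and see how many of a submission’s n-grams appear in a reference corpus.

    # Minimal sketch of phrase-matching plagiarism detection (not any vendor's
    # actual algorithm): compare overlapping word 5-grams between a submission
    # and a set of reference documents.

    import re

    def shingles(text, n=5):
        words = re.findall(r"[a-z']+", text.lower())
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(submission, reference_docs, n=5):
        """Fraction of the submission's n-grams that also appear in any
        reference document. A high score suggests copied phrasing."""
        sub = shingles(submission, n)
        if not sub:
            return 0.0
        ref = set().union(*(shingles(doc, n) for doc in reference_docs))
        return len(sub & ref) / len(sub)

    # Hypothetical usage:
    # score = overlap_score(student_paper, crawled_web_pages + past_papers)
    # if score > 0.15:
    #     print(f"Flag for review: {score:.0%} of 5-grams match known sources")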

I can think of three ways to deal with this:

  1. Require all writing to be done in person, and proctored. Obviously this won’t work for larger pieces of writing like theses.
  2. Semantic analysis in an attempt to fingerprint writing styles. It’s by no means perfect, but it is possible to detect if a piece of writing looks nothing like a student’s normal writing style (see the sketch after this list).
  3. In-person quizzes on the writing. If a professor sits down with the student and asks detailed questions about the writing, he can pretty quickly determine whether the student understands what he claims to have written.
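
For option 2, here is a toy version of the stylometric idea (illustrative only; real authorship-attribution systems use many more features and proper statistics):

    # Toy stylometric fingerprint: compare a new paper's style features
    # against the student's known writing. Illustrative only.

    import math
    import re

    FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "is",
                      "was", "for", "with", "as", "but", "not", "however"]

    def style_vector(text):
        words = re.findall(r"[A-Za-z']+", text)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        total = max(len(words), 1)
        counts = {w: 0 for w in FUNCTION_WORDS}
        for w in words:
            if w.lower() in counts:
                counts[w.lower()] += 1
        return ([len(words) / max(len(sentences), 1),          # average sentence length
                 sum(len(w) for w in words) / total]           # average word length
                + [counts[w] / total for w in FUNCTION_WORDS]) # function-word rates

    def style_distance(known_writing, new_paper):
        """Euclidean distance between style vectors."""
        a, b = style_vector(known_writing), style_vector(new_paper)
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

A single distance number is only a reason to look closer, not proof; in practice the features would be standardized and the threshold calibrated against the student’s own variability.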

The real issue is proof. Most colleges and universities are unwilling to pursue this without solid proof—the lawsuit risk is just too great—and in these cases the only real proof is self-incrimination.

Fundamentally, this is a problem of misplaced economic incentives. As long as the academic credential is worth more to a student than the knowledge gained in getting that credential, there will be an incentive to cheat.

Related note: anyone remember my personal experience with plagiarism from 2005?

Posted on November 16, 2010 at 6:36 AM

Detecting Cheating at Colleges

The measures used to prevent cheating during tests remind me of casino security measures:

No gum is allowed during an exam: chewing could disguise a student’s speaking into a hands-free cellphone to an accomplice outside.

The 228 computers that students use are recessed into desk tops so that anyone trying to photograph the screen—using, say, a pen with a hidden camera, in order to help a friend who will take the test later—is easy to spot.

Scratch paper is allowed—but it is stamped with the date and must be turned in later.

When a proctor sees something suspicious, he records the student’s real-time work at the computer and directs an overhead camera to zoom in, and both sets of images are burned onto a CD for evidence.

Lots of information on detecting cheating in homework and written papers.

Posted on July 9, 2010 at 6:34 AM

Cheating on Tests, by the Teachers

If you give people enough incentive to cheat, people will cheat:

Of all the forms of academic cheating, none may be as startling as educators tampering with children’s standardized tests. But investigations in Georgia, Indiana, Massachusetts, Nevada, Virginia and elsewhere this year have pointed to cheating by educators. Experts say the phenomenon is increasing as the stakes over standardized testing ratchet higher—including, most recently, taking student progress on tests into consideration in teachers’ performance reviews.

Posted on June 21, 2010 at 12:01 PM

Wrasse Punish Cheaters

Interesting:

The bluestreak cleaner wrasse (Labroides dimidiatus) operates an underwater health spa for larger fish. It advertises its services with bright colours and distinctive dances. When customers arrive, the cleaner eats parasites and dead tissue lurking in any hard-to-reach places. Males and females will sometimes operate a joint business, working together to clean their clients. The clients, in return, dutifully pay the cleaners by not eating them.

That’s the basic idea, but cleaners sometimes violate their contracts. Rather than picking off parasites, they’ll take a bite of the mucus that lines their clients’ skin. That’s an offensive act—it’s like a masseuse having an inappropriate grope between strokes. The affronted client will often leave. That’s particularly bad news if the cleaners are working as a pair because the other fish, who didn’t do anything wrong, still loses out on future parasite meals.

Males don’t take this sort of behaviour lightly. Nichola Raihani from the Zoological Society of London has found that males will punish their female partners by chasing them aggressively, if their mucus-snatching antics cause a client to storm out.

[…]

At first glance, the male cleaner wrasse behaves oddly for an animal, in punishing an offender on behalf of a third party, even though he hasn’t been wronged himself. That’s common practice in human societies but much rarer in the animal world. But Raihani’s experiments clearly show that the males are actually doing themselves a favour by punishing females on behalf of a third party. Their act of apparent altruism means they get more food in the long run.

Posted on January 20, 2010 at 1:26 PM

Computer Card Counter Detects Human Card Counters

All it takes is a computer that can track every card:

The anti-card-counter system uses cameras to watch players and keep track of the actual “count” of the cards, the same way a player would. It also measures how much each player is betting on each hand, and it syncs up the two data points to look for patterns in the action. If a player is betting big when the count is indeed favorable, and keeping his chips to himself when it’s not, he’s fingered by the computer… and, in the real world, he’d probably receive a visit from a burly dude in a bad suit, too.

The system reportedly works even if the gambler intentionally attempts to mislead it with high bets at unfavorable times.

Of course it does; it’s just a signal-to-noise problem.
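
The detection needs nothing exotic. A minimal sketch using the standard Hi-Lo count (the casino product’s internals aren’t public, so the specifics here are assumptions): track the running count as cards are dealt, record each bet, and measure the correlation between the two.

    # Minimal sketch of the detection idea: compute a Hi-Lo running count
    # from dealt cards and correlate it with a player's bet sizes.
    # (statistics.correlation needs Python 3.10+)

    import statistics

    HI_LO = {"2": 1, "3": 1, "4": 1, "5": 1, "6": 1,
             "7": 0, "8": 0, "9": 0,
             "10": -1, "J": -1, "Q": -1, "K": -1, "A": -1}

    def running_counts(hands):
        """Count *before* each hand is dealt, which is what drives the bet."""
        count, counts = 0, []
        for cards in hands:
            counts.append(count)
            count += sum(HI_LO[c] for c in cards)
        return counts

    def count_bet_correlation(hands, bets):
        """Pearson correlation between the pre-hand count and the bet size.
        A strongly positive value is the card-counter's signature."""
        return statistics.correlation(running_counts(hands), bets)

    # Hypothetical session: bets climb whenever the count is favorable.
    hands = [["10", "6", "K"], ["4", "5", "2"], ["3", "6", "9"],
             ["A", "K", "Q"], ["5", "4", "6"]]
    bets = [25, 25, 100, 200, 50]
    print(count_bet_correlation(hands, bets))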

I have long been impressed with the casino industry’s ability to, in the case of blackjack, convince the gambling public that using strategy equals cheating.

Posted on October 20, 2009 at 6:16 AM

Second SHB Workshop Liveblogging (5)

David Livingstone Smith moderated the fourth session, about (more or less) methodology.

Angela Sasse, University College London (suggested reading: The Compliance Budget: Managing Security Behaviour in Organisations; Human Vulnerabilities in Security Systems), has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply—this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

Bashar Nuseibeh, Open University (suggested reading: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis), talked about mobile phone security; specifically, Facebook privacy on mobile phones. He did something clever in his experiments. Because he wasn’t able to interview people at the moment they did something—he worked with mobile users—he asked them to provide a “memory phrase” that allowed him to effectively conduct detailed interviews at a later time. This worked very well, and resulted in all sorts of information about why people made privacy decisions at that earlier time.

James Pita, University of Southern California (suggested reading: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport), studies security personnel who have to guard a physical location. In his analysis, there are limited resources—guards, cameras, etc.—and a set of locations that need to be guarded. An example would be the Los Angeles airport, where a finite number of K-9 units need to guard eight terminals. His model uses a Stackelberg game to minimize predictability (otherwise, the adversary will learn it and exploit it) while maximizing security. There are complications—observational uncertainty and bounded rationality on the part of the attackers—which he tried to capture in his model.
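
A toy version of the Stackelberg setup, with made-up target values and none of ARMOR’s refinements: the defender commits to randomized coverage of the terminals, the attacker observes that commitment and attacks the most attractive uncovered target, and the defender picks the coverage that minimizes the attacker’s best payoff.

    # Toy Stackelberg security game (made-up payoffs, not the ARMOR solver).

    import itertools

    TARGET_VALUES = [10, 8, 5, 3, 1]   # hypothetical value of each terminal
    NUM_UNITS = 2                      # K-9 teams available per shift

    def attacker_payoff(coverage):
        # The attacker hits the target where (chance uncovered) x value is largest.
        return max((1 - c) * v for c, v in zip(coverage, TARGET_VALUES))

    def grid_coverages(n, units, step=10):
        # All coverage vectors on a 0.1 grid whose probabilities sum to `units`.
        for combo in itertools.product(range(step + 1), repeat=n):
            if sum(combo) == units * step:
                yield [c / step for c in combo]

    best = min(grid_coverages(len(TARGET_VALUES), NUM_UNITS), key=attacker_payoff)
    print("coverage:", best, "attacker's best payoff:", attacker_payoff(best))

    # Each shift, the units are assigned by sampling from `best`, so patrols
    # stay unpredictable while long-run coverage matches the committed plan.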

Markus Jakobsson, Palo Alto Research Center (suggested reading: Male, late with your credit card payment, and like to speed? You will be phished!; Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication), pointed out that auto insurers ask people if they smoke in order to get a feeling for whether they engage in high-risk behaviors. In his experiment, he selected 100 people who were victims of online fraud and 100 people who were not. He then asked them to complete a survey about different physical risks such as mountain climbing and parachute jumping, financial risks such as buying stocks and real estate, and Internet risks such as visiting porn sites and using public wi-fi networks. He found significant correlation between different risks, but I didn’t see an overall pattern emerge. And in the discussion phase, several people had questions about the data. More analysis, and probably more data, is required. To be fair, he was still in the middle of his analysis.

Rachel Greenstadt, Drexel University (suggested reading: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections), discussed ways in which humans and machines can collaborate in making security decisions. These decisions are hard for several reasons: because they are context dependent, require specialized knowledge, are dynamic, and require complex risk analysis. And humans and machines are good at different sorts of tasks. Machine-style authentication: This guy I’m standing next to knows Jake’s private key, so he must be Jake. Human-style authentication: This guy I’m standing next to looks like Jake and sounds like Jake, so he must be Jake. The trick is to design systems that get the best of these two authentication styles and not the worst. She described two experiments examining two decisions: should I log into this website (the phishing problem), and should I publish this anonymous essay or will my linguistic style betray me?
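
A minimal sketch of the machine-style check, written as a generic shared-key challenge-response rather than any particular protocol: the verifier issues a fresh random challenge and accepts only a response computed with the key that only Jake is supposed to hold.

    # Generic shared-key challenge-response: "he knows Jake's key,
    # so he must be Jake." Not any specific deployed protocol.

    import hashlib
    import hmac
    import secrets

    JAKES_KEY = secrets.token_bytes(32)   # secret only Jake is supposed to hold

    def challenge():
        return secrets.token_bytes(16)    # fresh nonce, so replays don't work

    def respond(key, nonce):
        return hmac.new(key, nonce, hashlib.sha256).digest()

    def verify(nonce, response):
        expected = hmac.new(JAKES_KEY, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    nonce = challenge()
    print(verify(nonce, respond(JAKES_KEY, nonce)))           # True: knows the key
    print(verify(nonce, respond(b"wrong key" * 4, nonce)))    # False: doesn't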

Mike Roe, Microsoft, talked about crime in online games, particularly in Second Life and Metaplace. There are four classes of people in online games: explorers, socializers, achievers, and griefers. Griefers try to annoy socializers in social worlds like Second Life, or annoy achievers in competitive worlds like World of Warcraft. Crime is not necessarily economic; criminals trying to steal money are much less of a problem in these games than people just trying to be annoying. In the question session, Dave Clark said that griefers are a constant, but economic fraud grows over time. I responded that the two types of attackers are different people, with different personality profiles. I also pointed out that there is another kind of attacker: achievers who use illegal mechanisms to assist themselves.

In the discussion, Peter Neumann pointed out that safety is an emergent property, and requires security, reliability, and survivability. Others weren’t so sure.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Conference dinner tonight at Legal Seafoods. And four more sessions tomorrow.

Posted on June 11, 2009 at 4:50 PM

Corrupted Word Files for Sale

On one hand, this is clever:

We offer a wide array of corrupted Word files that are guaranteed not to open on a Mac or PC. A corrupted file is a file that contains scrambled and unrecoverable data due to hardware or software failure. Files may become corrupted when something goes wrong while a file is being saved e.g. the program saving the file might crash. Files may also become corrupted when being sent via email. The perfect excuse to buy you that extra time!

This download includes a 2, 5, 10, 20, 30 and 40 page corrupted Word file. Use the appropriate file size to match each assignment. Who’s to say your 10 page paper didn’t get corrupted? Exactly! No one can! Its the perfect excuse to buy yourself extra time and not hand in a garbage paper.

Only $3.95. Cheap. Although for added verisimilitude, they should have an additional service where you send them a file—a draft of your paper, for example—and they corrupt it and send it back.

But on the other hand, it’s services like these that will force professors to treat corrupted attachments as work not yet turned in, and harm innocent homework submitters.
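
One defensive response is to make unreadable files fail at submission time instead of days later. A minimal sketch, assuming submissions are .docx files (which are ZIP archives underneath); the portal helper named in the comment is hypothetical:

    # Submission-time check (assuming .docx uploads, which are ZIP archives
    # underneath): reject corrupt files at the deadline instead of later.

    import zipfile

    def docx_is_readable(path):
        """Return True if the .docx opens as a ZIP and its contents pass a
        CRC check; corrupted files fail one of these and can be bounced."""
        if not zipfile.is_zipfile(path):
            return False
        try:
            with zipfile.ZipFile(path) as zf:
                return zf.testzip() is None
        except zipfile.BadZipFile:
            return False

    # Hypothetical usage in a submission portal:
    # if not docx_is_readable(uploaded_file_path):
    #     reject_submission("File is unreadable; please re-upload before the deadline.")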

EDITED TO ADD (6/9): Here’s how to make a corrupted pdf file for free.

Posted on June 9, 2009 at 6:46 AM
