Entries Tagged "cheating"

Cheating in Online Classes

Interesting article:

In the case of that student, the professor in the course had tried to prevent cheating by using a testing system that pulled questions at random from a bank of possibilities. The online tests could be taken anywhere and were open-book, but students had only a short window each week in which to take them, which was not long enough for most people to look up the answers on the fly. As the students proceeded, they were told whether each answer was right or wrong.

Mr. Smith figured out that the actual number of possible questions in the test bank was pretty small. If he and his friends got together to take the test jointly, they could paste the questions they saw into the shared Google Doc, along with the right or wrong answers. The schemers would go through the test quickly, one at a time, logging their work as they went. The first student often did poorly, since he had never seen the material before, though he would search an online version of the textbook on Google Books for relevant keywords to make informed guesses. The next student did significantly better, thanks to the cheat sheet, and subsequent test-takers upped their scores even further. They took turns going first. Students in the course were allowed to take each test twice, with the two results averaged into a final score.

“So the grades are bouncing back and forth, but we’re all guaranteed an A in the end,” Mr. Smith told me. “We’re playing the system, and we’re playing the system pretty well.”
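A quick simulation shows why the scheme converges on an A. All the numbers below (bank size, quiz length, attempt count) are invented for illustration; the point is only that immediate right/wrong feedback plus a shared document turns each attempt into training data for the next:

```python
import random

# Toy simulation of the shared-Google-Doc scheme. All parameters (bank
# size, quiz length, attempt count) are invented for illustration.
random.seed(1)

BANK_SIZE = 40   # assumed number of questions in the test bank
QUIZ_LEN = 10    # questions drawn at random per attempt
CHOICES = 4      # multiple-choice options per question

answer_key = {q: random.randrange(CHOICES) for q in range(BANK_SIZE)}
shared_doc = {}  # the group's cheat sheet: question -> confirmed answer

def take_quiz():
    """One attempt: answer from the shared doc when possible, else guess.
    Per-question right/wrong feedback is logged back into the doc."""
    questions = random.sample(range(BANK_SIZE), QUIZ_LEN)
    correct = 0
    for q in questions:
        guess = shared_doc.get(q, random.randrange(CHOICES))
        if guess == answer_key[q]:
            shared_doc[q] = guess  # confirmed correct: record for the group
            correct += 1
        # (a fuller model would also record eliminated wrong answers)
    return correct / QUIZ_LEN

scores = [take_quiz() for _ in range(12)]  # twelve sequential attempts
print([round(s, 2) for s in scores])
print(f"doc now covers {len(shared_doc)}/{BANK_SIZE} questions")
```

In runs of this sketch, early attempts hover near chance while later ones climb as the shared document's coverage of the bank grows, which is exactly the bouncing-then-guaranteed pattern Mr. Smith describes.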

Posted on June 14, 2012 at 12:27 PM

Teaching the Security Mindset

In 2008, I wrote about the security mindset and how difficult it is to teach. Two professors teaching a cyberwarfare class gave an exam where they expected their students to cheat:

Our variation of the Kobayashi Maru utilized a deliberately unfair exam — write the first 100 digits of pi (3.14159…) from memory — and took place in the pilot offering of a governmental cyber warfare course. The topic of the test itself was somewhat arbitrary; we only sought a scenario that would be too challenging to meet through traditional studying. By design, students were given little advance warning for the exam. Insurrection immediately followed. Why were we giving them such an unfair exam? What conceivable purpose would it serve? Now that we had their attention, we informed the class that we had no expectation that they would actually memorize the digits of pi, we expected them to cheat. How they chose to cheat was entirely up to the student. Collaborative cheating was also encouraged, but importantly, students would fail the exam if caught.

Excerpt:

Students took diverse approaches to cheating, and of the 20 students in the course, none were caught. One student used his Mandarin Chinese skills to hide the answers. Another built a small PowerPoint presentation consisting of three slides (all black slide, digits of pi slide, all black slide). The idea being that the student could flip to the answer when the proctor wasn’t looking and easily flip forwards or backward to a blank screen to hide the answer. Several students chose to hide answers on a slip of paper under the keyboards on their desks. One student hand wrote the answers on a blank sheet of paper (in advance) and simply turned it in, exploiting the fact that we didn’t pass out a formal exam sheet. Another just memorized the first ten digits of pi and randomly filled in the rest, assuming the instructors would be too lazy to check every digit. His assumption was correct.

Read the whole paper. This is the conclusion:

Teach yourself and your students to cheat. We’ve always been taught to color inside the lines, stick to the rules, and never, ever, cheat. In seeking cyber security, we must drop that mindset. It is difficult to defeat a creative and determined adversary who must find only a single flaw among myriad defensive measures to be successful. We must not tie our hands, and our intellects, at the same time. If we truly wish to create the best possible information security professionals, being able to think like an adversary is an essential skill. Cheating exercises provide long term remembrance, teach students how to effectively evaluate a system, and motivate them to think imaginatively. Cheating will challenge students’ assumptions about security and the trust models they envision. Some will find the process uncomfortable. That is OK and by design. For it is only by learning the thought processes of our adversaries that we can hope to unleash the creative thinking needed to build the best secure systems, become effective at red teaming and penetration testing, defend against attacks, and conduct ethical hacking activities.

Here’s a Boing Boing post, including a video of a presentation about the exercise.

Posted on June 13, 2012 at 12:08 PM

The Effectiveness of Plagiarism Detection Software

As you’d expect, it’s not very good:

But this measure [Turnitin] captures only the most flagrant form of plagiarism, where passages are copied from one document and pasted unchanged into another. Just as shoplifters slip the goods they steal under coats or into pocketbooks, most plagiarists tinker with the passages they copy before claiming them as their own. In other words, they cloak their thefts by scrambling the passages and right-clicking on words to find synonyms. This isn’t writing; it is copying, cloaking and pasting; and it’s plagiarism.

Kerry Segrave is a right-clicker, changing “cellar of store” to “basement of shop.” Similarly, he changes goods to items, articles to goods, accomplice to confederate, neighborhood to area, and women to females. He is also a scrambler, changing “accidentally fallen” to “fallen accidentally;” “only with” to “with only;” and, “Leon and Klein,” to “Klein and Leon.” And, he scrambles phrases within sentences; in other words, the phrases of his sentences are sometimes scrambled.

[…]

Turnitin offers another product called WriteCheck that allows students to “check [their] work against the same database as Turnitin.” I signed up and submitted the early pages of Shoplifting. WriteCheck matched many of Shoplifting’s phrases to those of the New York Times articles in its library of student papers. Remember, I submitted them as a student paper to help Turnitin find them; now WriteCheck has them too! WriteCheck warned me that “a significant amount of this paper is unoriginal” and advised me to revise it. After a few hours of right-clicking and scrambling, I resubmitted it and WriteCheck said it was okay, being cleansed of easily recognizable plagiarism.

Turnitin is playing both sides of the fence, helping instructors identify plagiarists while helping plagiarists avoid detection. It is akin to selling security systems to stores while allowing shoplifters to test whether putting tagged goods into bags lined with aluminum thwarts the detectors.
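The detection gap is easy to demonstrate. The sketch below is not Turnitin's actual algorithm; it is a generic version of word n-gram ("shingle") fingerprinting, the family of techniques exact-match detectors rely on. A handful of right-click substitutions destroys most shared shingles and collapses the similarity score:

```python
# Generic n-gram fingerprinting sketch (not Turnitin's actual method):
# exact copies share every shingle, while synonym-swapped copies share few.

def shingles(text, n=3):
    """Set of overlapping word n-grams ('shingles') in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Similarity of two shingle sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

original = "the thief hid the goods in the cellar of the store"
copied   = "the thief hid the goods in the cellar of the store"
cloaked  = "the thief hid the items in the basement of the shop"

print(jaccard(shingles(original), shingles(copied)))   # 1.0: flagrant copy
print(jaccard(shingles(original), shingles(cloaked)))  # far lower
```

Two synonym swaps ("items" for "goods," "basement of shop" for "cellar of store") leave only the opening shingles intact, dropping the score from 1.0 to well under 0.5, which is why right-clicking defeats this class of detector.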

Posted on September 19, 2011 at 6:35 AM

Cheating at Casinos with Hidden Cameras

Sleeve cameras aren’t new, but they’re now smaller than ever and the cheaters are getting more sophisticated:

In January, at the newly opened $4-billion Cosmopolitan casino in Las Vegas, a gang called the Cutters cheated at baccarat. Before play began, the dealer offered one member of the group a stack of eight decks of cards for a pre-game cut. The player probably rubbed the stack for good luck, at the same instant riffling some of the corners of the cards underneath with his index finger. A small camera, hidden under his forearm, recorded the order.

After a few hands, the cutter left the floor and entered a bathroom stall, where he most likely passed the camera to a confederate in an adjoining stall. The runner carried the camera to a gaming analyst in a nearby hotel room, where the analyst transferred the video to a computer, watching it in slow motion to determine the order of the cards. Not quite half an hour had passed since the cut. Baccarat play averages less than six cards a minute, so there were still at least 160 cards left to play through. Back at the table, other members of the gang were delaying the action, glancing at their cellphones and waiting for the analyst to send them the card order.

Posted on August 23, 2011 at 5:44 AM

Hacking Lotteries

Two items on hacking lotteries. The first is about someone who figured out how to spot winners in a scratch-off tic-tac-toe-style game, and in a daily-draw-style game where the expected payout can exceed the ticket price. The second is about someone who has won the lottery four times, with speculation that she had advance knowledge of where and when certain jackpot-winning scratch-off tickets would be sold.

EDITED TO ADD (8/13): The Boston Globe has a story on how to make money on Massachusetts’ Cash WinFall.

Posted on August 4, 2011 at 7:36 AM

Man-in-the-Middle Attack Against the MCAT Exam

In Applied Cryptography, I wrote about the “Chess Grandmaster Problem,” a man-in-the-middle attack. Basically, Alice plays chess remotely against two grandmasters. She plays Grandmaster 1 as white and Grandmaster 2 as black, and simply relays the moves from one game to the other, convincing both opponents that she’s a grandmaster in the process.
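The relay is almost embarrassingly simple to express in code. A toy sketch, with canned move lists standing in for the two grandmasters:

```python
from collections import deque

# Toy relay: GM2 plays white against Alice; GM1 plays black against her.
# Canned move lists stand in for the grandmasters; Alice forwards every
# move, so the two masters unknowingly play each other.
gm2_white = deque(["e4", "Nf3", "Bb5"])  # GM2's moves (white, game 2)
gm1_black = deque(["e5", "Nc6", "a6"])   # GM1's replies (black, game 1)

game1, game2 = [], []  # transcripts of Alice's two games

for _ in range(3):
    w = gm2_white.popleft()
    game2.append(w)
    game1.append(w)  # Alice replays GM2's move as her own white move vs GM1
    b = gm1_black.popleft()
    game1.append(b)
    game2.append(b)  # Alice replays GM1's reply as her own black move vs GM2

print(game1 == game2)  # True: one game, two deceived grandmasters
```

Alice never chooses a move herself; the two transcripts are identical because there is really only one game being played.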

Detecting these sorts of man-in-the-middle attacks is difficult, and involves things like synchronous clocks, complex cryptographic protocols, or — more practically — proctors. Proctors, of course, can be fooled. Here’s a real-world attempt of this type of attack on the MCAT medical-school admissions test.

Police allege he used a pinhole camera and wireless technology to transmit images of the questions on a computer screen back to his co-conspirator, Ruben, at the University of British Columbia.

Investigators believe Ruben then tricked three other students, who thought they were taking a multiple choice test for a job to be an MCAT tutor, into answering the questions.

The answers were then transmitted back by phone to Rezazadeh-Azar, as he continued on with the test in Victoria, police allege.

And as long as we’re on the topic, we can think about all the ways to hack this system of remote exam proctoring via webcam.

Posted on June 2, 2011 at 7:32 AM

Changing Incentives Creates Security Risks

One of the things I am writing about in my new book is how security equilibriums change. They often change because of technology, but they sometimes change because of incentives.

An interesting example of this is the recent scandal in the Washington, DC, public school system over teachers changing their students’ test answers.

In the U.S., under the No Child Left Behind Act, students have to pass certain tests; otherwise, schools are penalized. In the District of Columbia, things went further. Michelle Rhee, chancellor of the public school system from 2007 to 2010, offered teachers $8,000 bonuses — and threatened them with termination — for improving test scores. Scores did increase significantly during the period, and the schools were held up as examples of how incentives affect teaching behavior.

It turns out that a lot of those score increases were faked. In addition to teaching students, teachers cheated on their students’ tests by changing wrong answers to correct ones. That’s how the cheating was discovered; researchers looked at the actual test papers and found more erasures than usual, and many more erasures from wrong answers to correct ones than could be explained by anything other than deliberate manipulation.
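The detection method is straightforward statistics. A sketch with invented numbers (the district mean, spread, and classroom counts below are all hypothetical): flag classrooms whose average wrong-to-right erasure count sits implausibly far above the district distribution:

```python
import math

# Invented numbers for illustration; the DC analysis flagged classrooms
# whose wrong-to-right (WTR) erasure counts sat far above the district norm.
DISTRICT_MEAN_WTR = 1.7  # assumed district average WTR erasures per student
DISTRICT_SD_WTR = 1.1    # assumed district standard deviation

def classroom_z(wtr_counts):
    """z-score of a classroom's mean WTR erasure count, measured against
    the district distribution (standard error shrinks with class size)."""
    n = len(wtr_counts)
    mean = sum(wtr_counts) / n
    se = DISTRICT_SD_WTR / math.sqrt(n)
    return (mean - DISTRICT_MEAN_WTR) / se

normal_room = [1, 2, 0, 3, 1, 2, 1, 2, 0, 1]
suspect_room = [9, 7, 11, 8, 10, 6, 9, 12, 8, 10]

print(round(classroom_z(normal_room), 1))   # within noise
print(round(classroom_z(suspect_room), 1))  # many standard errors out
```

A classroom averaging nine wrong-to-right erasures per student when the district averages under two is dozens of standard errors from the mean, which is the kind of anomaly no innocent explanation covers.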

Teachers were always able to manipulate their students’ test answers, but before, there wasn’t much incentive to do so. With Rhee’s changes, there was a much greater incentive to cheat.

The point is that whatever security measures were in place to prevent teacher cheating before the financial incentives and threats of firing were not sufficient to prevent teacher cheating afterwards. Because Rhee significantly increased the costs of cooperation (by threatening to fire teachers of poorly performing students) and increased the benefits of defection ($8,000), she created a security risk. And she should have increased security measures to restore balance to those incentives.

This is not isolated to DC. It has happened elsewhere as well.

Posted on April 14, 2011 at 6:36 AM

Detecting Cheaters

Our brains are specially designed to deal with cheating in social exchanges. The evolutionary psychology explanation is that we evolved brain heuristics for the social problems that our prehistoric ancestors had to deal with. Once humans became good at cheating, they then had to become good at detecting cheating — otherwise, the social group would fall apart.

Perhaps the most vivid demonstration of this can be seen with variations on what’s known as the Wason selection task, named after the psychologist who first studied it. Back in the 1960s, it was a test of logical reasoning; today, it’s used more as a demonstration of evolutionary psychology. But before we get to the experiment, let’s get into the mathematical background.

Propositional calculus is a system for deducing conclusions from true premises. It uses variables for statements because the logic works regardless of what the statements are. College courses on the subject are taught by either the mathematics or the philosophy department, and they’re not generally considered to be easy classes. Two particular rules of inference are relevant here: modus ponens and modus tollens. Both allow you to reason from a statement of the form, "if P, then Q." (If Socrates was a man, then Socrates was mortal. If you are to eat dessert, then you must first eat your vegetables. If it is raining, then Gwendolyn had Crunchy Wunchies for breakfast. That sort of thing.) Modus ponens goes like this:

If P, then Q. P. Therefore, Q.

In other words, if you assume the conditional rule is true, and if you assume the antecedent of that rule is true, then the consequent is true. So,

If Socrates was a man, then Socrates was mortal. Socrates was a man. Therefore, Socrates was mortal.

Modus tollens is more complicated:

If P, then Q. Not Q. Therefore, not P.

If Socrates was a man, then Socrates was mortal. Socrates was not mortal. Therefore, Socrates was not a man.

This makes sense: if Socrates was not mortal, then he was a demigod or a stone statue or something.

Both are valid forms of logical reasoning. If you know "if P, then Q" and "P," then you know "Q." If you know "if P, then Q" and "not Q," then you know "not P." (The other two similar forms don’t work. If you know "if P, then Q" and "Q," you don’t know anything about "P." And if you know "if P, then Q" and "not P," then you don’t know anything about "Q.")
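All four argument forms can be checked mechanically by enumerating the truth table. A short sketch (nothing assumed here beyond standard propositional semantics):

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if P, then Q' is false only when P and not Q."""
    return (not p) or q

def valid(premises, conclusion):
    """An argument is valid iff the conclusion holds in every truth-table
    row where all the premises hold."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# modus ponens: if P then Q; P; therefore Q
print(valid([implies, lambda p, q: p], lambda p, q: q))          # True
# modus tollens: if P then Q; not Q; therefore not P
print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # True
# affirming the consequent: if P then Q; Q; therefore P
print(valid([implies, lambda p, q: q], lambda p, q: p))          # False
# denying the antecedent: if P then Q; not P; therefore not Q
print(valid([implies, lambda p, q: not p], lambda p, q: not q))  # False
```

The two invalid forms each fail on the same counterexample row: P false, Q true, which the conditional permits.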

If I explained this in front of an audience of normal people, not mathematicians or philosophers, most of them would be lost; they would have trouble both explaining the rules and using them properly. Just ask any grad student who has had to teach a formal logic class.

Consider the Wason selection task. Subjects are presented with four cards next to each other on a table. Each card represents a person, with each side listing some statement about that person. The subject is then given a general rule and asked which cards he would have to turn over to ensure that the four people satisfied that rule. For example, the general rule might be, "If a person travels to Boston, then he or she takes a plane." The four cards might correspond to travelers and have a destination on one side and a mode of transport on the other. On the side facing the subject, they read: "went to Boston," "went to New York," "took a plane," and "took a car." Formal logic states that the rule is violated if someone goes to Boston without taking a plane. Translating into propositional calculus, there’s the general rule: if P, then Q. The four cards are "P," "not P," "Q," and "not Q." To verify that "if P, then Q" is a valid rule, you have to verify modus ponens by turning over the "P" card and making sure that the reverse says "Q." To verify modus tollens, you turn over the "not Q" card and make sure that the reverse doesn’t say "P."

Shifting back to the example, you need to turn over the "went to Boston" card to make sure that person took a plane, and you need to turn over the "took a car" card to make sure that person didn’t go to Boston. You don’t — as many people think — need to turn over the "took a plane" card to see if it says "went to Boston" because you don’t care. The person might have been flying to Boston, New York, San Francisco, or London. The rule only says that people going to Boston fly; it doesn’t break the rule if someone flies elsewhere.
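The selection rule is compact enough to state as code; the card labels below are just the Boston example re-encoded:

```python
# A card needs turning over only if its hidden side could falsify the
# rule "if P, then Q": that is, could reveal a case of P together with not-Q.
def must_flip(cards):
    """cards: visible faces drawn from {'P', 'not P', 'Q', 'not Q'}.
    Return the faces whose hidden side could break the rule."""
    return [c for c in cards if c in ("P", "not Q")]

labels = {"P": "went to Boston", "not P": "went to New York",
          "Q": "took a plane", "not Q": "took a car"}

flips = must_flip(["P", "not P", "Q", "not Q"])
print([labels[c] for c in flips])  # ['went to Boston', 'took a car']
```

The "Q" card ("took a plane") never appears in the answer because nothing on its reverse can produce a P-with-not-Q case, which is exactly the step most subjects get wrong.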

If you’re confused, you aren’t alone. When Wason first did this study, fewer than 10 percent of his subjects got it right. Others replicated the study and got similar results. The best result I’ve seen is "fewer than 25 percent." Training in formal logic doesn’t seem to help very much. Neither does ensuring that the example is drawn from events and topics with which the subjects are familiar. People are just bad at the Wason selection task. They also tend to take college logic classes only when required to.

This isn’t just another "math is hard" story. There’s a point to this. The one variation of this task that people are surprisingly good at getting right is when the rule has to do with cheating and privilege. For example, change the four cards to children in a family — "gets dessert," "doesn’t get dessert," "ate vegetables," and "didn’t eat vegetables" — and change the rule to "If a child gets dessert, he or she ate his or her vegetables." Many people — 65 to 80 percent — get it right immediately. They turn over the "gets dessert" card, making sure the child ate his vegetables, and they turn over the "didn’t eat vegetables" card, making sure the child didn’t get dessert. Another way of saying this is that they turn over the "benefit received" card to make sure the cost was paid. And they turn over the "cost not paid" card to make sure no benefit was received. They look for cheaters.

The difference is startling. Subjects don’t need formal logic training. They don’t need math or philosophy. When asked to explain their reasoning, they say things like the answer "popped out at them."

Researchers, particularly evolutionary psychologists Leda Cosmides and John Tooby, have run this experiment with a variety of wordings and settings and on a variety of subjects: adults in the US, UK, Germany, Italy, France, and Hong Kong; Ecuadorian schoolchildren; and Shiwiar tribesmen in Ecuador. The results are the same: people are bad at the Wason selection task, except when the wording involves cheating.

In the world of propositional calculus, there’s absolutely no difference between a rule about traveling to Boston by plane and a rule about eating vegetables to get dessert. But in our brains, there’s an enormous difference: the first is an arbitrary rule about the world, and the second is a rule of social exchange. It’s of the form "If you take Benefit B, you must first satisfy Requirement R."

Our brains are optimized to detect cheaters in a social exchange. We’re good at it. Even as children, we intuitively notice when someone gets a benefit he didn’t pay the cost for. Those of us who grew up with a sibling have experienced how one child not only knew when the other cheated, but felt compelled to announce it to the rest of the family. As adults, we might have learned that life isn’t fair, but we still know who among our friends cheats in social exchanges. We know who doesn’t pay his or her fair share of a group meal. At an airport, we might not notice the rule "If a plane is flying internationally, then it boards 15 minutes earlier than domestic flights." But we’ll certainly notice who breaks the "If you board first, then you must be a first-class passenger" rule.

This essay was originally published in IEEE Security & Privacy, and is an excerpt from the draft of my new book.

EDITED TO ADD (4/14): Another explanation of the Wason Selection Task, with a possible correlation with psychopathy.

Posted on April 7, 2011 at 1:10 PM

Hacking Scratch Lottery Tickets

Design failure means you can pick winning tickets before scratching the coatings off. Most interesting is that there’s statistical evidence that this sort of attack has been occurring in the wild: not necessarily this particular attack, but some way to separate winners from losers without voiding the tickets.

Since this article was published in Wired, another technique of hacking scratch lottery tickets has surfaced: store clerks capitalizing on losing streaks. If you assume any given pack of lottery tickets has a similar number of winners, wait until most of the pack has sold without those winners appearing, and then buy the rest.
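The clerk's edge is simple conditional arithmetic. A sketch with invented numbers: suppose a 50-ticket pack is known to hold exactly five $20 winners on $2 tickets (all the figures are hypothetical):

```python
# Invented numbers: suppose a 50-ticket pack is printed with exactly five
# $20 winners and tickets cost $2. In this toy model the pack composition
# is known, so a long losing streak pins down where the winners must be.

def buy_rest_profit(pack, winners, sold, winners_seen, prize, price):
    """Profit from buying every unsold ticket, given sales observed so far
    (exact, since the pack's total winner count is assumed known)."""
    left = pack - sold
    winners_left = winners - winners_seen
    return winners_left * prize - left * price

# a fresh pack is a break-even proposition...
print(buy_rest_profit(50, 5, 0, 0, prize=20, price=2))   # 0
# ...but after 40 losing tickets, the last 10 must hold all five winners
print(buy_rest_profit(50, 5, 40, 0, prize=20, price=2))  # 80
```

The clerk pays nothing for this information; simply watching sales converts a break-even gamble into a guaranteed profit whenever the losing streak runs long enough.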

Posted on February 10, 2011 at 6:42 AM

Football Match Fixing

Detecting fixed football (soccer) games.

There is a certain buzz of expectation, because Oscar, one of the fraud analysts, has spotted a game he is sure has been fixed.

“We’ve been watching this for a couple of weeks now,” he says.

“The odds have gone to a very suspicious level. We believe that this game will finish in an away victory. Usually an away team would have around a 30% chance of winning, but at the current odds this team is about 85% likely to win.”

[…]

Often news of the fix will leak so that gamblers jump on the bandwagon. The game we are watching falls, it seems, into the second category.

Oscar monitors the betting at half-time. He is especially interested in money being laid not on the result itself, but on the number of goals that are going to be scored.

“The most likely score lines are 2-1 or 3-1,” he announces.

This is interesting:

Oscar is also interested in the activity of a club manager – but his modus operandi is somewhat different. He does not throw games. He wins them.

[…]

“The reason he’s so important is because he has relationships with all his previous clubs. He has managed at least three or four of the teams he is now buying wins against. He has also managed a lot of players from the opposition, who are being told to lose these matches.”

I always think of fixing a game as meaning losing it on purpose, not winning it by paying the other team to lose.

Posted on December 3, 2010 at 12:41 PM
