Entries Tagged "gambling"


Football Match Fixing

Detecting fixed football (soccer) games.

There is a certain buzz of expectation, because Oscar, one of the fraud analysts, has spotted a game he is sure has been fixed.

“We’ve been watching this for a couple of weeks now,” he says.

“The odds have gone to a very suspicious level. We believe that this game will finish in an away victory. Usually an away team would have around a 30% chance of winning, but at the current odds this team is about 85% likely to win.”
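The arithmetic behind that 30%-to-85% swing is worth making explicit: a decimal (European) betting price is just the reciprocal of the implied win probability, ignoring the bookmaker's margin. A minimal sketch; the specific prices are hypothetical, chosen to match the percentages Oscar quotes:

```python
def implied_probability(decimal_odds: float) -> float:
    """Convert a decimal (European) betting price to the implied win
    probability. The bookmaker's margin is ignored here."""
    return 1.0 / decimal_odds

# A typical away win might be priced around 3.30 (about a 30% chance);
# a price collapsing toward 1.18 implies roughly an 85% chance.
print(round(implied_probability(3.30), 2))  # 0.3
print(round(implied_probability(1.18), 2))  # 0.85
```

When the market's implied probability drifts that far from any plausible assessment of the teams, the money itself is the evidence.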

[…]

Often news of the fix will leak so that gamblers jump on the bandwagon. The game we are watching falls, it seems, into the second category.

Oscar monitors the betting at half-time. He is especially interested in money being laid not on the result itself, but on the number of goals that are going to be scored.

“The most likely score lines are 2-1 or 3-1,” he announces.

This is interesting:

Oscar is also interested in the activity of a club manager – but his modus operandi is somewhat different. He does not throw games. He wins them.

[…]

“The reason he’s so important is because he has relationships with all his previous clubs. He has managed at least three or four of the teams he is now buying wins against. He has also managed a lot of players from the opposition, who are being told to lose these matches.”

I always think of fixing a game as meaning losing it on purpose, not winning it by paying the other team to lose.

Posted on December 3, 2010 at 12:41 PM

Casino Hack

Nice hack:

Using insider knowledge the two hacked into software that controlled remote betting machines on live roulette wheels, the report said.

The machines would print out winning betting slips regardless of the results on the wheel, Peterborough Today said.

I’d like to know how they got caught.

EDITED TO ADD (4/17): They got their math wrong:

However, the scheme came unstuck after an alert cashier noticed a winning slip for £600 for a £10 bet at odds of 35-1. The casino launched an investigation that unearthed a string of other suspicious bets, traced back to Ashley and Bhagat, IT contractors working at the casino at the time of the scam.
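The math they got wrong is simple: a winning £10 bet at fractional odds of 35-1 pays 35 times the stake in winnings plus the stake itself, £360 in total, so a £600 slip couldn't be legitimate. A quick sketch of the payout arithmetic:

```python
def payout(stake: float, num: int, den: int = 1) -> float:
    """Total returned to the bettor at fractional odds num-den:
    winnings (stake * num/den) plus the original stake back."""
    return stake * num / den + stake

# A £10 bet at 35-1 should return £360, not the £600 on the forged slip.
print(payout(10, 35))  # 360.0
```

An alert cashier who knows this number by heart is a better control than the hacked software ever was.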

Posted on March 17, 2010 at 6:33 AM

Computer Card Counter Detects Human Card Counters

All it takes is a computer that can track every card:

The anti-card-counter system uses cameras to watch players and keep track of the actual “count” of the cards, the same way a player would. It also measures how much each player is betting on each hand, and it syncs up the two data points to look for patterns in the action. If a player is betting big when the count is indeed favorable, and keeping his chips to himself when it’s not, he’s fingered by the computer… and, in the real world, he’d probably receive a visit from a burly dude in a bad suit, too.

The system reportedly works even if the gambler intentionally attempts to mislead it with high bets at unfavorable times.

Of course it does; it’s just a signal-to-noise problem.
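The core of such a system can be sketched as a correlation test: if a player's bet sizes track the running count too closely, flag him for review. Occasional deliberate high bets at bad counts add noise but don't erase the signal. The data below are invented for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# True count at the start of each hand, and the player's bet on that hand.
counts = [-2, -1, 0, 1, 2, 3, 4, -1, 2, 3]
bets   = [10, 10, 10, 25, 50, 75, 100, 10, 50, 75]
print(round(pearson(counts, bets), 2))  # strongly positive, so flag for review
```

A recreational player's bets should be roughly uncorrelated with the count; a counter's cannot be, or the counting gains him nothing.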

I have long been impressed with the casino industry’s ability to, in the case of blackjack, convince the gambling public that using strategy equals cheating.

Posted on October 20, 2009 at 6:16 AM

Comparing the Security of Electronic Slot Machines and Electronic Voting Machines

From the Washington Post.

Other important differences:

  • Slot machines are used every day, 24 hours a day. Electronic voting machines are used, at most, twice a year—often less frequently.
  • Slot machines involve money. Electronic voting machines involve something much more abstract.
  • Slot machine accuracy is a non-partisan issue. For some reason I can’t fathom, electronic voting machine accuracy is seen as a political issue.

Posted on December 24, 2008 at 6:02 AM

Risk and the Brain

New research on how the brain estimates risk:

Using functional imaging in a simple gambling task in which risk was constantly changed, the researchers discovered that an early activation of the anterior insula of the brain was associated with mistakes in predicting risk.

The time course of the activation also indicated a role in rapid updating, suggesting that this area is involved in how we learn to modify our risk predictions. The finding was particularly interesting, notes lead author and EPFL professor Peter Bossaerts, because the anterior insula is the locus of where we integrate and process emotions.

“This represents an important advance in our understanding of the neurological underpinnings of risk, in analogy with an earlier discovery of a signal for forecast error in the dopaminergic system,” says Bossaerts, “and indicates that we need to update our understanding of the neural basis of reward anticipation in uncertain conditions to include risk assessment.”

Posted on March 18, 2008 at 6:51 AM

Cheating in Online Poker

Fascinating story of insider cheating:

Some opponents became suspicious of how a certain player was playing. He seemed to know what the opponents’ hole cards were. The suspicious players provided examples of these hands, which were so outrageous that virtually all serious poker players were convinced that cheating had occurred. One of the players who’d been cheated requested that Absolute Poker provide hand histories from the tournament (which is standard practice for online sites). In this case, Absolute Poker “accidentally” did not send the usual hand histories, but instead sent a file that contained all sorts of private information that the poker site would never release. The file contained every player’s hole cards, observations of the tables, and even the IP addresses of every person playing. (I put “accidentally” in quotes because the mistake seems like too great a coincidence when you learn what followed.) I suspect that someone at Absolute knew about the cheating and how it happened, and was acting as a whistleblower by sending these data. If that is the case, I hope whomever “accidentally” sent the file gets their proper hero’s welcome in the end.

Then the poker players went to work analyzing the data—not the hand histories themselves, but other, more subtle information contained in the file. What these players-turned-detectives noticed was that, starting with the third hand of the tournament, there was an observer who watched every subsequent hand played by the cheater. (For those of you who don’t know much about online poker, anyone who wants can observe a particular table, although, of course, the observers can’t see any of the players’ hole cards.) Interestingly, the cheater folded the first two hands before this observer showed up, then did not fold a single hand before the flop for the next 20 minutes, and then folded his hand pre-flop when another player had a pair of kings as hole cards! This sort of cheating went on throughout the tournament.

So the poker detectives turned their attention to this observer. They traced the observer’s IP address and account name to the same set of servers that host Absolute Poker, and also, apparently, to a particular individual named Scott Tom, who seems to be a part-owner of Absolute Poker! If all of this is correct, it shows exactly how the cheating would have transpired: an insider at the Web site had real-time access to all of the hole cards (it is not hard to believe that this capability would exist) and was relaying this information to an outside accomplice.
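The detective work described above boils down to conditioning one statistic, the pre-flop fold rate, on whether the observer account was watching. A toy sketch with hypothetical hand records, shaped to match the pattern the players found:

```python
def fold_rate(hands):
    """Fraction of hands the player folded before the flop."""
    return sum(1 for h in hands if h["folded_preflop"]) / len(hands)

# Hypothetical per-hand records for the suspect, split by whether the
# suspicious observer account was watching the table at the time.
without_observer = [{"folded_preflop": True}, {"folded_preflop": True}]
with_observer = [{"folded_preflop": False}] * 19 + [{"folded_preflop": True}]

print(fold_rate(without_observer))  # 1.0: folded both early hands
print(fold_rate(with_observer))     # 0.05: the lone fold was against kings
```

No honest playing style produces a fold rate that flips like that on an external event the player supposedly can't see.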

More details here.

EDITED TO ADD (10/20): More information.

EDITED TO ADD (11/13): This graph of players’ river aggression is a great piece of evidence. Note the single outlying point.

Posted on October 19, 2007 at 11:44 AM

Basketball Referees and Single Points of Failure

Sports referees are supposed to be fair and impartial. They’re not supposed to favor one team over another. And they’re most certainly not supposed to have a financial interest in the outcome of a game.

Tim Donaghy, referee for the National Basketball Association, has been accused of both betting on basketball games and fixing games for the mob. He has confessed to far less—gambling in general, and selling inside information on players, referees and coaches to a big-time professional gambler named James “Sheep” Battista. But the investigation continues, and the whole scandal is an enormous black eye for the sport. Fans like to think that the game is fair and that the winning team really is the winning team.

The details of the story are fascinating and well worth reading. But what interests me more are its general lessons about risk and audit.

What sorts of systems—IT, financial, NBA games or whatever—are most at risk of being manipulated? The ones where the smallest change can have the greatest impact, and the ones where trusted insiders can make that change.

Of all major sports, basketball is the most vulnerable to manipulation. There are only five players on the court per team, fewer than in other professional team sports; thus, a single player can have a much greater effect on a basketball game than he can in the other sports. Star players like Michael Jordan, Kobe Bryant and LeBron James can carry an entire team on their shoulders. Even baseball great Alex Rodriguez can’t do that.

Because individual players matter so much, a single referee can affect a basketball game more than he can in any other sport. Referees call fouls. Contact occurs on nearly every play, any of which could be called as a foul. They’re called “touch fouls,” and they are mostly, but not always, ignored. The refs get to decide which ones to call.

Even more drastically, a ref can put a star player in foul trouble immediately—and cause the coach to bench him longer throughout the game—if he wants the other side to win. He can set the pace of the game, low-scoring or high-scoring, based on how he calls fouls. He can decide to invalidate a basket by calling an offensive foul on the play, or give a team the potential for some extra points by calling a defensive foul. There’s no formal instant replay. There’s no second opinion. A ref’s word is law—there are only three of them—and a crooked ref has enormous power to control the game.

It’s not just that basketball referees are single points of failure, it’s that they’re both trusted insiders and single points of catastrophic failure.

These sorts of vulnerabilities exist in many systems. Consider what a terrorist-sympathizing Transportation Security Administration screener could do to airport security. Or what a criminal CFO could embezzle. Or what a dishonest computer-repair technician could do to your computer or network. The same goes for a corrupt judge, police officer, customs inspector, border-control officer, food-safety inspector and so on.

The best way to catch corrupt trusted insiders is through audit. The particular components of a system that have the greatest influence on the performance of that system need to be monitored and audited, even if the probability of compromise is low. It’s after the fact, but if the likelihood of detection is high and the penalties (fines, jail time, public disgrace) are severe, it’s a pretty strong deterrent. Of course, the counterattack is to target the auditing system. Hackers routinely try to erase audit logs that contain evidence of their intrusions.
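One standard countermeasure to that counterattack is to hash-chain the log, so each entry's hash covers its predecessor and deleting or rewriting any record invalidates everything after it. A minimal sketch using SHA-256:

```python
import hashlib

def append_entry(log, message):
    """Append an entry whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append({"message": message, "hash": digest})

def verify(log):
    """Recompute the chain; any tampering breaks a downstream hash."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256((prev + entry["message"]).encode()).hexdigest()
        if expected != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
for msg in ["login admin", "export table", "logout"]:
    append_entry(log, msg)
print(verify(log))          # True
log[1]["message"] = "noop"  # an attacker edits the middle record
print(verify(log))          # False
```

This doesn't stop an attacker from truncating the tail or destroying the log wholesale, which is why production systems also ship entries to a separate, append-only machine.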

Even so, audit is the reason we want open-source code reviews and verifiable paper trails in voting machines; otherwise, a single crooked programmer could single-handedly change an election. It’s also why the Securities and Exchange Commission closely monitors trades by brokers: They are in an ideal position to get away with insider trading. The NBA claims it monitors referees for patterns that might indicate abuse; there’s still no answer to why it didn’t detect Donaghy.

Most companies focus the bulk of their IT-security monitoring on external threats, but they should be paying more attention to internal threats. While a company may inherently trust its employees, those trusted employees have far greater power to affect corporate systems and are often single points of failure. And trusted employees can also be compromised by external elements, as Tim Donaghy was by Battista and possibly the Mafia.

All systems have trusted insiders. All systems have catastrophic points of failure. The key is recognizing them, and building monitoring and audit systems to secure them.

This is my 50th essay for Wired.com.

Posted on September 6, 2007 at 4:38 AM

Hinky at the Casino: JDLR

It’s called “Just Doesn’t Look Right”:

In the casino business, or any other, we tend to become complacent, and we stop paying attention to the little things. But a really sharp observer will still be shocked awake at some little unexplained thing: the five o’clock shadow on the woman sitting opposite the big-money player, or too many people watching that game, or the fellow who keeps looking directly at the cameras. The guy who looks as though he slept under an overpass carrying a new shopping bag from Neiman Marcus, the two players on a table game whose arms were held against their chests, the bulge under that character’s jacket and the man wearing an overcoat on an August day in Las Vegas.

Posted on May 15, 2007 at 11:05 AM

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him utilize whatever time, equipment and expertise he needs to break it. It’s difficult to secure because breaks are generally “class breaks.” The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the software can break every other device in the same class.

This means that the security needs to be secure not against the average attacker, but against the smartest, most motivated and best funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.
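The classic illustration of a timing side channel is a comparison routine that bails out at the first mismatching byte: its running time leaks how much of a guess is correct. A sketch of the leaky version next to a constant-time one (in practice you would use a vetted primitive such as Python's `hmac.compare_digest` rather than rolling your own):

```python
def leaky_compare(secret: str, guess: str) -> bool:
    """Returns at the first mismatch, so the running time depends on
    how many leading bytes of the guess are correct. This is the kind
    of data-dependent branch that timing attacks exploit."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_compare(secret: str, guess: str) -> bool:
    """Examines every byte regardless of mismatches, accumulating the
    differences, so the timing carries no per-byte signal."""
    if len(secret) != len(guess):
        return False
    diff = 0
    for a, b in zip(secret, guess):
        diff |= ord(a) ^ ord(b)
    return diff == 0

print(leaky_compare("hunter2", "hunter2"))          # True
print(constant_time_compare("hunter2", "hunter3"))  # False
```

The same principle scales up to the RSA attack above: any operation whose duration depends on secret data is a transmitter, whether the secret is one byte of a password or one bit of a private key.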

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe off the back—the real data, and the security, are in the bank’s databases.
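The debit-card design can be sketched in a few lines: the card carries only an identifier, and the authoritative balance lives in the bank's database, so nothing an attacker reads off the card lets him create money. The account name below is hypothetical:

```python
class Bank:
    """Server-side ledger: the card is just a pointer into this database."""

    def __init__(self):
        self.balances = {}

    def issue_card(self, account: str, amount: int) -> str:
        self.balances[account] = amount
        return account  # the "card" is only an identifier, not a secret

    def debit(self, account: str, amount: int) -> bool:
        # The server, not the card, enforces the balance.
        if self.balances.get(account, 0) < amount:
            return False
        self.balances[account] -= amount
        return True

bank = Bank()
card = bank.issue_card("acct-123", 50)
print(bank.debit(card, 30))  # True
print(bank.debit(card, 30))  # False: the server-side balance is authoritative
```

Tampering with the card accomplishes nothing, because the card was never trusted with the data in the first place.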

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in their scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like, who stole end-of-week draw tickets from unsuspecting customers. The customer would hand his ticket over the counter to be scanned to see if it was a winner. The clerk, knowing what the winning numbers actually were, would palm a non-winning ticket into the machine, tell the customer “sorry, better luck next time,” and claim the prize himself at a later date.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM
