Entries Tagged "gambling"

Computer Card Counter Detects Human Card Counters

All it takes is a computer that can track every card:

The anti-card-counter system uses cameras to watch players and keep track of the actual “count” of the cards, the same way a player would. It also measures how much each player is betting on each hand, and it syncs up the two data points to look for patterns in the action. If a player is betting big when the count is indeed favorable, and keeping his chips to himself when it’s not, he’s fingered by the computer… and, in the real world, he’d probably receive a visit from a burly dude in a bad suit, too.

The system reportedly works even if the gambler intentionally attempts to mislead it with high bets at unfavorable times.

Of course it does; it’s just a signal-to-noise problem.
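
The correlation itself is trivial to compute. Here is a minimal sketch of the idea in Python, assuming a Hi-Lo count and a simple per-hand data format (both are my assumptions; the vendor hasn't published details): track the count the way a counter would, then measure how strongly each player's bets track it.

```python
from statistics import correlation  # Python 3.10+

# Hi-Lo values: 2-6 count +1, 7-9 count 0, tens and aces count -1.
HI_LO = (dict.fromkeys("23456", +1) | dict.fromkeys("789", 0)
         | dict.fromkeys(["10", "J", "Q", "K", "A"], -1))

def flag_counters(hands, threshold=0.5):
    """hands: chronological list of (cards_dealt, bets_by_player) pairs
    from a single shoe; assumes every player bets every hand. Returns
    players whose bet sizes correlate with the running count."""
    running, counts, bets = 0, [], {}
    for cards, bets_by_player in hands:
        counts.append(running)                   # count known before betting
        for player, bet in bets_by_player.items():
            bets.setdefault(player, []).append(bet)
        running += sum(HI_LO[c] for c in cards)  # update after the deal
    # Flat bettors have zero variance, which correlation() rejects.
    return {p: round(correlation(counts, b), 2) for p, b in bets.items()
            if len(set(b)) > 1 and abs(correlation(counts, b)) >= threshold}
```

Camouflage bets at unfavorable counts just drag the coefficient toward the noise floor; over enough hands, the signal still dominates.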

I have long been impressed with the casino industry’s ability to, in the case of blackjack, convince the gambling public that using strategy equals cheating.

Posted on October 20, 2009 at 6:16 AM

Comparing the Security of Electronic Slot Machines and Electronic Voting Machines

From the Washington Post.

Other important differences:

  • Slot machines are used every day, 24 hours a day. Electronic voting machines are used, at most, twice a year—often less frequently.
  • Slot machines involve money. Electronic voting machines involve something much more abstract.
  • Slot machine accuracy is a non-partisan issue. For some reason I can’t fathom, electronic voting machine accuracy is seen as a political issue.

Posted on December 24, 2008 at 6:02 AM

Risk and the Brain

New research on how the brain estimates risk:

Using functional imaging in a simple gambling task in which risk was constantly changed, the researchers discovered that an early activation of the anterior insula of the brain was associated with mistakes in predicting risk.

The time course of the activation also indicated a role in rapid updating, suggesting that this area is involved in how we learn to modify our risk predictions. The finding was particularly interesting, notes lead author and EPFL professor Peter Bossaerts, because the anterior insula is the locus where we integrate and process emotions.

“This represents an important advance in our understanding of the neurological underpinnings of risk, in analogy with an earlier discovery of a signal for forecast error in the dopaminergic system,” says Bossaerts, “and indicates that we need to update our understanding of the neural basis of reward anticipation in uncertain conditions to include risk assessment.”

Posted on March 18, 2008 at 6:51 AM

Cheating in Online Poker

Fascinating story of insider cheating:

Some opponents became suspicious of how a certain player was playing. He seemed to know what the opponents’ hole cards were. The suspicious players provided examples of these hands, which were so outrageous that virtually all serious poker players were convinced that cheating had occurred. One of the players who’d been cheated requested that Absolute Poker provide hand histories from the tournament (which is standard practice for online sites). In this case, Absolute Poker “accidentally” did not send the usual hand histories, but instead sent a file that contained all sorts of private information that the poker site would never release. The file contained every player’s hole cards, observations of the tables, and even the IP addresses of every person playing. (I put “accidentally” in quotes because the mistake seems like too great a coincidence when you learn what followed.) I suspect that someone at Absolute knew about the cheating and how it happened, and was acting as a whistleblower by sending these data. If that is the case, I hope whoever “accidentally” sent the file gets their proper hero’s welcome in the end.

Then the poker players went to work analyzing the data—not the hand histories themselves, but other, more subtle information contained in the file. What these players-turned-detectives noticed was that, starting with the third hand of the tournament, there was an observer who watched every subsequent hand played by the cheater. (For those of you who don’t know much about online poker, anyone who wants can observe a particular table, although, of course, the observers can’t see any of the players’ hole cards.) Interestingly, the cheater folded the first two hands before this observer showed up, then did not fold a single hand before the flop for the next 20 minutes, and then folded his hand pre-flop when another player had a pair of kings as hole cards! This sort of cheating went on throughout the tournament.

So the poker detectives turned their attention to this observer. They traced the observer’s IP address and account name to the same set of servers that host Absolute Poker, and also, apparently, to a particular individual named Scott Tom, who seems to be a part-owner of Absolute Poker! If all of this is correct, it shows exactly how the cheating would have transpired: an insider at the Web site had real-time access to all of the hole cards (it is not hard to believe that this capability would exist) and was relaying this information to an outside accomplice.

More details here.

EDITED TO ADD (10/20): More information.

EDITED TO ADD (11/13): This graph of players’ river aggression is a great piece of evidence. Note the single outlying point.
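
That kind of graph is straightforward to reproduce from hand histories. A hypothetical sketch (the data format is invented): compute each player's river aggression, meaning bets and raises as a fraction of river actions, and flag the statistical outlier.

```python
from statistics import mean, stdev

def river_outliers(river_actions, z=3.0):
    """river_actions: {player: list of river actions, e.g. 'bet',
    'raise', 'call', 'check', 'fold'}. Computes each player's river
    aggression and flags anyone far outside the pack."""
    agg = {p: sum(a in ("bet", "raise") for a in acts) / len(acts)
           for p, acts in river_actions.items()}
    mu, sigma = mean(agg.values()), stdev(agg.values())
    return {p: round(f, 2) for p, f in agg.items() if abs(f - mu) > z * sigma}
```

A player who always knows when he is ahead bets the river far more often than honest play allows; he shows up as the lone point on the plot.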

Posted on October 19, 2007 at 11:44 AM

Basketball Referees and Single Points of Failure

Sports referees are supposed to be fair and impartial. They’re not supposed to favor one team over another. And they’re most certainly not supposed to have a financial interest in the outcome of a game.

Tim Donaghy, referee for the National Basketball Association, has been accused of both betting on basketball games and fixing games for the mob. He has confessed to far less—gambling in general, and selling inside information on players, referees and coaches to a big-time professional gambler named James “Sheep” Battista. But the investigation continues, and the whole scandal is an enormous black eye for the sport. Fans like to think that the game is fair and that the winning team really is the winning team.

The details of the story are fascinating and well worth reading. But what interests me more are its general lessons about risk and audit.

What sorts of systems—IT, financial, NBA games or whatever—are most at risk of being manipulated? The ones where the smallest change can have the greatest impact, and the ones where trusted insiders can make that change.

Of all major sports, basketball is the most vulnerable to manipulation. There are only five players on the court per team, fewer than in other professional team sports; thus, a single player can have a much greater effect on a basketball game than he can in the other sports. Star players like Michael Jordan, Kobe Bryant and LeBron James can carry an entire team on their shoulders. Even baseball great Alex Rodriguez can’t do that.

Because individual players matter so much, a single referee can affect a basketball game more than he can in any other sport. Referees call fouls. Contact occurs on nearly every play, and almost any of it could be called as a foul. These marginal infractions are known as “touch fouls,” and they are mostly, but not always, ignored. The refs get to decide which ones to call.

Even more drastically, a ref can put a star player in foul trouble immediately—and cause the coach to bench him longer throughout the game—if he wants the other side to win. He can set the pace of the game, low-scoring or high-scoring, based on how he calls fouls. He can decide to invalidate a basket by calling an offensive foul on the play, or give a team the potential for some extra points by calling a defensive foul. There’s no formal instant replay. There’s no second opinion. A ref’s word is law—there are only three of them—and a crooked ref has enormous power to control the game.

It’s not just that basketball referees are single points of failure, it’s that they’re both trusted insiders and single points of catastrophic failure.

These sorts of vulnerabilities exist in many systems. Consider what a terrorist-sympathizing Transportation Security Administration screener could do to airport security. Or what a criminal CFO could embezzle. Or what a dishonest computer-repair technician could do to your computer or network. The same goes for a corrupt judge, police officer, customs inspector, border-control officer, food-safety inspector and so on.

The best way to catch corrupt trusted insiders is through audit. The particular components of a system that have the greatest influence on the performance of that system need to be monitored and audited, even if the probability of compromise is low. It’s after the fact, but if the likelihood of detection is high and the penalties (fines, jail time, public disgrace) are severe, it’s a pretty strong deterrent. Of course, the counterattack is to target the auditing system. Hackers routinely try to erase audit logs that contain evidence of their intrusions.
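
The deterrence claim is just expected-value arithmetic. A toy model, with invented numbers:

```python
def expected_cheating_profit(gain, p_detect, penalty):
    # Audit deters when the expected loss outweighs the gain.
    return gain - p_detect * penalty

# An insider weighing a $100K fraud under strong auditing:
print(expected_cheating_profit(100_000, p_detect=0.9, penalty=500_000))
# -350000.0 -> deterred

# The same fraud under weak auditing:
print(expected_cheating_profit(100_000, p_detect=0.05, penalty=500_000))
# 75000.0 -> worth the risk
```

Raising either the detection probability or the penalty flips the sign, and lowering the detection probability is exactly what erasing the audit logs accomplishes.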

Even so, audit is the reason we want open-source code reviews and verifiable paper trails in voting machines; otherwise, a single crooked programmer could single-handedly change an election. It’s also why the Securities and Exchange Commission closely monitors trades by brokers: They are in an ideal position to get away with insider trading. The NBA claims it monitors referees for patterns that might indicate abuse; there’s still no answer to why it didn’t detect Donaghy.

Most companies focus the bulk of their IT-security monitoring on external threats, but they should be paying more attention to internal threats. While a company may inherently trust its employees, those trusted employees have far greater power to affect corporate systems and are often single points of failure. And trusted employees can also be compromised by external elements, as Tim Donaghy was by Battista and possibly the Mafia.

All systems have trusted insiders. All systems have catastrophic points of failure. The key is recognizing them, and building monitoring and audit systems to secure them.

This is my 50th essay for Wired.com.

Posted on September 6, 2007 at 4:38 AM

Hinky at the Casino: JDLR

It’s called “Just Doesn’t Look Right”:

In the casino business, or any other, we tend to become complacent, and we stop paying attention to the little things. But a really sharp observer will still be shocked awake at some little unexplained thing: the five o’clock shadow on the woman sitting opposite the big-money player, or too many people watching that game, or the fellow who keeps looking directly at the cameras. The guy who looks as though he slept under an overpass carrying a new shopping bag from Neiman Marcus, the two players on a table game whose arms were held against their chests, the bulge under that character’s jacket and the man wearing an overcoat on an August day in Las Vegas.

Posted on May 15, 2007 at 11:05 AM

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him use whatever time, equipment and expertise he needs to break it. They’re difficult to secure because breaks are generally “class breaks.” The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the resulting software can break every other device in the same class.

This means that the system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.
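
The root cause is easy to demonstrate. The sketch below is a toy, not the attack in the paper: naive square-and-multiply does extra work for every 1 bit of the secret exponent, so running time alone leaks information about the key.

```python
import secrets
import time

def modexp_leaky(base, exp, mod):
    # Left-to-right square-and-multiply. The multiply runs only when
    # the current key bit is 1, so total running time depends on the
    # secret exponent: a textbook timing side channel.
    result = 1
    for bit in bin(exp)[2:]:
        result = (result * result) % mod
        if bit == "1":
            result = (result * base) % mod
    return result

mod = secrets.randbits(2048) | (1 << 2047) | 1   # odd 2048-bit modulus
base = secrets.randbits(2048) % mod
sparse = 1 << 2047          # one bit set: squarings only
dense = (1 << 2048) - 1     # all bits set: a multiply per bit

for name, e in (("sparse key", sparse), ("dense key", dense)):
    start = time.perf_counter()
    for _ in range(20):
        modexp_leaky(base, e, mod)
    print(name, round(time.perf_counter() - start, 3), "seconds")
```

Real implementations counter key-dependent timing with constant-time exponentiation and blinding; the attack described in the paper goes further, monitoring and affecting the CPU itself during the operation.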

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back—the real data, and the security, are in the bank’s databases.
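
The architectural difference is where the authoritative state lives. A minimal sketch of the debit model (the class is mine, not any bank's actual system):

```python
class Bank:
    """Debit model: the card is just an identifier. The balance, the
    thing worth attacking, lives in the bank's database, so nothing
    the cardholder does to the plastic can create money."""

    def __init__(self):
        self.balances = {}                # card number -> balance

    def charge(self, card_number, amount):
        if self.balances.get(card_number, 0) < amount:
            raise ValueError("declined")
        self.balances[card_number] -= amount

# A stored-value card would instead execute something like
#     card.balance -= amount
# on hardware the attacker physically owns and can tamper with.
```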

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in its scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets were able to figure out which tickets were winners and simply not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like, who stole end-of-week draw tickets from unsuspecting customers. The customer would hand their ticket over the counter to be scanned to see if it was a winner. The clerk (knowing what the winning numbers actually were) would palm a non-winning ticket into the machine, inform the customer, “sorry, better luck next time,” and claim the prize themselves at a later date.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM

Click Fraud and the Problem of Authenticating People

Google’s $6 billion-a-year advertising business is at risk because it can’t be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.

With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It’s fraud if you sit at the computer and repeatedly click on the ad or—better yet—write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people’s computers to generate the fake clicks.
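
The easy-to-spot version really is easy to spot. A hypothetical first-pass detector (the data format and threshold are invented): count clicks per source per ad per hour and flag anything no human would plausibly produce.

```python
from collections import Counter

def suspicious_sources(clicks, max_per_hour=5):
    """clicks: iterable of (hour_bucket, source_ip, ad_id) tuples.
    Flags sources that click one ad implausibly often in an hour,
    the naive fraud a single machine produces."""
    counts = Counter(clicks)
    return {ip for (hour, ip, ad), n in counts.items() if n > max_per_hour}
```

Spoofed IP addresses and Trojaned machines defeat exactly this check, which is why the arms race exists.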

The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money—sometimes a lot of money—on nothing. (Here’s a company that will commit click fraud for you.)

Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever … and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn’t doing enough. My guess is that everyone is right: It’s in Google’s interest both to solve and to downplay the importance of the problem.

But the overarching problem is both hard to solve and important: How do you tell if there’s an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn’t automated his responses, and isn’t being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that’s infected with a Trojan.

This problem manifests itself in other areas as well.

For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn’t see.

Playing is less fun if everyone else is computer-assisted, but unless there’s a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players—or even computers playing without a real person at all—have the potential to drive all the human players away from the game.

Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it’s a real person visiting a website, not just a bot on a computer. Standard testing doesn’t work online, because the tester can’t be sure that the test taker doesn’t have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that’s not always practical and negates the benefits of internet testing.

This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant’s computer committed some hacking offense, but the defense argued that it wasn’t the defendant who did it—that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties—and it wasn’t him who’d downloaded the porn.

Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it—in both senses of that word.

And it’s a problem that will get worse as computers get better at imitating people.

Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don’t pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It’s a hard model to make work—Google would become more of a partner in the final sale instead of an indifferent displayer of advertising—but it’s the right security response to click fraud: Change the rules of the game so that click fraud doesn’t matter.
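
The security property is worth spelling out: under cost-per-click, every fraudulent click is billed; under cost-per-action, the bill depends only on conversions, so fake clicks cost the fraudster effort and the advertiser nothing. A toy comparison, with invented numbers:

```python
def cpc_bill(real_clicks, fake_clicks, price_per_click):
    # Cost-per-click: the advertiser pays for fraudulent clicks too.
    return (real_clicks + fake_clicks) * price_per_click

def cpa_bill(conversions, price_per_action):
    # Cost-per-action: fake clicks never convert, so they never bill.
    return conversions * price_per_action

print(cpc_bill(real_clicks=1_000, fake_clicks=9_000, price_per_click=0.50))  # 5000.0
print(cpa_bill(conversions=50, price_per_action=10.00))                      # 500.0
```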

That’s how to solve a security problem.

This essay appeared on Wired.com.

EDITED TO ADD (7/13): Click Monkeys is a hoax site.

EDITED TO ADD (7/25): An evaluation of Google’s anti-click-fraud efforts, as part of the Lane Gifts case. I’m not sure if this expert report was done for Google, for Lane Gifts, or for the judge.

Posted on July 13, 2006 at 5:22 AM

Terrorists Playing Bingo in Kentucky

One of the sillier movie-plot threats I’ve seen recently:

Kentucky has been awarded a federal Homeland Security grant aimed at keeping terrorists from using charitable gaming to raise money.

The state Office of Charitable Gaming won the $36,300 grant and will use it to provide five investigators with laptop computers and access to a commercially operated law-enforcement database, said John Holiday, enforcement director at the Office of Charitable Gaming.

The idea is to keep terrorists from playing bingo or running a charitable game to raise large amounts of cash, Holiday said.

Posted on October 25, 2005 at 3:30 PM
