Entries Tagged "captchas"


Analyzing CAPTCHAs

New research: “Attacks and Design of Image Recognition CAPTCHAs.”

Abstract. We systematically study the design of image recognition CAPTCHAs (IRCs) in this paper. We first review and examine all IRC schemes known to us and evaluate each scheme against the practical requirements in CAPTCHA applications, particularly in large-scale real-life applications such as Gmail and Hotmail. Then we present a security analysis of the representative schemes we have identified. For the schemes that remain unbroken, we present our novel attacks. For the schemes for which known attacks are available, we propose a theoretical explanation of why those schemes have failed. Next, we provide a simple but novel framework for guiding the design of robust IRCs. Then we propose an innovative IRC called Cortcha that is scalable to meet the requirements of large-scale applications. Cortcha relies on recognizing an object by exploiting its surrounding context, a task that humans can perform well but computers cannot. An infinite number of types of objects can be used to generate challenges, which can effectively disable the learning process in machine learning attacks. Cortcha does not require the images in its image database to be labeled. Image collection and CAPTCHA generation can be fully automated. Our usability studies indicate that, compared with Google’s text CAPTCHA, Cortcha yields a slightly higher human accuracy rate but on average takes more time to solve a challenge.

The paper attacks IMAGINATION (designed at Penn State around 2005) and ARTiFACIAL (designed at MSR Redmond around 2004).

Posted on October 5, 2010 at 7:22 AM

Violating Terms of Service Possibly a Crime

From Wired News:

The four Wiseguy defendants, who also operated other ticket-reselling businesses, allegedly used sophisticated programming and inside information to bypass technological measures—including CAPTCHA—at Ticketmaster and other sites that were intended to prevent such bulk automated purchases. This violated the sites’ terms of service, and according to prosecutors constituted unauthorized computer access under the anti-hacking Computer Fraud and Abuse Act, or CFAA.

But the government’s interpretation of the law goes too far, according to the policy groups, and threatens to turn what is essentially a contractual dispute into a criminal case. As in the Lori Drew prosecution last year, the case marks a dangerous precedent that could make a felon of anyone who violates a site’s terms-of-service agreement, according to the amicus brief filed last week by the Electronic Frontier Foundation, the Center for Democracy and Technology and other advocates.

“Under the government’s theory, anyone who disregards—or doesn’t read—the terms of service on any website could face computer crime charges,” said EFF civil liberties director Jennifer Granick in a press release. “Price-comparison services, social network aggregators, and users who skim a few years off their ages could all be criminals if the government prevails.”

Posted on July 19, 2010 at 1:11 PM

Spammers Using Porn to Break Captchas

Clever:

Spammers have created a Windows game which shows a woman in a state of undress when people correctly type in text shown in an accompanying image.

The scrambled text images come from sites which use them to stop computers automatically signing up for accounts that can be put to illegal use.

By getting people to type in the text the spammers can take over the accounts and use them to send junk mail.

I’ve been saying for years that spammers would start doing this. I’m actually surprised it took this long.

Posted on November 1, 2007 at 2:37 PM

Click Fraud and the Problem of Authenticating People

Google’s $6 billion-a-year advertising business is at risk because it can’t be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.

With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It’s fraud if you sit at the computer and repeatedly click on the ad or—better yet—write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people’s computers to generate the fake clicks.
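To see why the crude version is easy to spot, consider a minimal sketch of a duplicate-click filter: flag any IP that clicks the same ad too many times in a short window. Everything below — the function name, the data structure, the thresholds — is invented for illustration; it is not Google’s actual detection pipeline, which uses far richer signals.

```python
from collections import defaultdict, deque

# Illustrative sketch only: flag a click as suspicious when the same IP
# has clicked the same ad more than MAX_CLICKS times within WINDOW seconds.
# MAX_CLICKS, WINDOW, and is_suspicious are hypothetical names and values.
MAX_CLICKS = 5
WINDOW = 60.0  # seconds

recent_clicks = defaultdict(deque)  # (ip, ad_id) -> recent click timestamps

def is_suspicious(ip: str, ad_id: str, now: float) -> bool:
    clicks = recent_clicks[(ip, ad_id)]
    # Expire clicks that have fallen out of the sliding window.
    while clicks and now - clicks[0] > WINDOW:
        clicks.popleft()
    clicks.append(now)
    return len(clicks) > MAX_CLICKS
```

A filter this naive is exactly what the cleverer fraudsters defeat by spreading their clicks across many IP addresses and compromised machines.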

The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money—sometimes a lot of money—on nothing. (Here’s a company that will commit click fraud for you.)

Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever … and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn’t doing enough. My guess is that everyone is right: It’s in Google’s interest both to solve and to downplay the importance of the problem.

But the overarching problem is both hard to solve and important: How do you tell if there’s an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn’t automated his responses, and isn’t being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that’s infected with a Trojan.

This problem manifests itself in other areas as well.

For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn’t see.

Playing is less fun if everyone else is computer-assisted, but unless there’s a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players—or even computers playing without a real person at all—have the potential to drive all the human players away from the game.

Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it’s a real person visiting a website, not just a bot on a computer. Standard testing doesn’t work online, because the tester can’t be sure that the test taker doesn’t have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that’s not always practical and obviates the benefits of internet testing.

This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant’s computer committed some hacking offense, but the defense argued that it wasn’t the defendant who did it—that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties—and it wasn’t him who’d downloaded the porn.

Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it—in both senses of that word.

And it’s a problem that will get worse as computers get better at imitating people.

Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don’t pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It’s a hard model to make work—Google would become more of a partner in the final sale instead of an indifferent displayer of advertising—but it’s the right security response to click fraud: Change the rules of the game so that click fraud doesn’t matter.
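A toy calculation, with made-up numbers, shows why the incentive disappears. Under cost-per-click, every click bills the advertiser, fraudulent or not; under cost-per-action, a click that never converts costs nothing:

```python
# Toy model with invented numbers: how cost-per-action (CPA) billing
# removes the payoff from click fraud, while cost-per-click (CPC) does not.
clicks = [
    {"converted": True},   # a real customer who bought something
    {"converted": False},  # a real but uninterested visitor
    {"converted": False},  # a fraudulent automated click
    {"converted": False},  # another fraudulent click
]

CPC_PRICE = 0.50   # advertiser pays per click (hypothetical rate)
CPA_PRICE = 5.00   # advertiser pays per completed action (hypothetical rate)

cpc_bill = CPC_PRICE * len(clicks)
cpa_bill = CPA_PRICE * sum(c["converted"] for c in clicks)

print(f"CPC bill: ${cpc_bill:.2f}")  # fraud inflates this: $2.00
print(f"CPA bill: ${cpa_bill:.2f}")  # fraud adds nothing: $5.00
```

The attacker can still click all day; he just can’t make the advertiser pay for it.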

That’s how to solve a security problem.

This essay appeared on Wired.com.

EDITED TO ADD (7/13): Click Monkeys is a hoax site.

EDITED TO ADD (7/25): An evaluation of Google’s anti-click-fraud efforts, as part of the Lane Gifts case. I’m not sure if this expert report was done for Google, for Lane Gifts, or for the judge.

Posted on July 13, 2006 at 5:22 AM

KittenAuth

You’ve all seen CAPTCHAs: those distorted pictures of letters and numbers that show up on web forms. The idea is that the characters are hard for computers to identify but easy for people to read. The goal of CAPTCHAs is to authenticate that there’s a person sitting in front of the computer.

KittenAuth works with images instead. The system shows you nine pictures of cute little animals, and you authenticate yourself by clicking on the three kittens. A computer clicking at random has only a 1 in 84 chance of guessing correctly.

Of course, you could increase the security by adding more images or requiring the person to select more of them. Another worry—which I didn’t see mentioned—is that a computer could brute-force a static database. If there are only a small, fixed number of kitten images, a person could label each one for the computer once; from then on, the computer would know that whenever it sees that image, it’s a kitten.
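The arithmetic behind that 1-in-84 figure is a binomial coefficient: there are C(9,3) = 84 ways to choose 3 images out of 9, and only one of those selections is all three kittens. A quick sketch, with larger grid sizes included purely as hypothetical parameters to show how the odds scale:

```python
from math import comb

# A random guesser picks k images out of n; only one selection is correct,
# so the success probability is 1 / C(n, k). Grids beyond 9/3 are
# hypothetical parameters, not part of KittenAuth's actual design.
for n, k in [(9, 3), (16, 4), (25, 5)]:
    print(f"pick {k} of {n}: 1 in {comb(n, k)}")

# Output:
# pick 3 of 9: 1 in 84
# pick 4 of 16: 1 in 1820
# pick 5 of 25: 1 in 53130
```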

Still, it’s an interesting idea that warrants more research.

Posted on April 10, 2006 at 1:19 PM
