Entries Tagged "biometrics"


The Commercial Speech Arms Race

A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else’s stuff, then call the police.

I was reminded of this recently when a group of Israeli scientists demonstrated that it’s possible to fabricate DNA evidence. So now, instead of leaving your own DNA at a crime scene, you can leave fabricated DNA. And it isn’t even necessary to fabricate. In Charlie Stross’s novel Halting State, the bad guys foul a crime scene by blowing around the contents of a vacuum cleaner bag, containing the DNA of dozens, if not hundreds, of people.

This kind of thing has been going on forever. It’s an arms race, and when technology changes, the balance between attacker and defender changes. But when automated systems do the detecting, the results are different. Face recognition software can be fooled by cosmetic surgery, or sometimes even just a photograph. And when fooling them becomes harder, the bad guys fool them on a different level. Computer-based detection gives the defender economies of scale, but the attacker can use those same economies of scale to defeat the detection system.

Google, for example, has anti-fraud systems that detect, and shut down, advertisers who try to inflate their revenue by repeatedly clicking on their own AdSense ads. So people built bots to repeatedly click on the AdSense ads of their competitors, trying to convince Google to kick those competitors out of the system.
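To see why the detection signal itself is the weak point, here is a deliberately naive sketch of such a click-fraud detector. The threshold, names, and logic are all invented for illustration, and Google’s real systems are far more sophisticated; the point is only that anyone, including a rival’s bot, can generate the evidence the detector looks for.

```python
from collections import Counter

# Invented threshold for this sketch; a real system would weigh many signals.
SUSPICIOUS_CLICKS_PER_DAY = 500

def flag_publishers(click_log):
    """Flag publishers whose ads received an abnormal number of clicks.

    click_log: iterable of (publisher_id, source_ip) click events for one day.
    """
    clicks_per_publisher = Counter(pub for pub, _ in click_log)
    return {pub for pub, count in clicks_per_publisher.items()
            if count > SUSPICIOUS_CLICKS_PER_DAY}

# A rival's bot can make an innocent publisher look guilty: the detector
# cannot tell who generated the clicks, only that they happened.
bot_traffic = [("victim-site.example", "203.0.113.7")] * 600
print(flag_publishers(bot_traffic))  # {'victim-site.example'}
```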

Similarly, when Google started penalizing a site’s search engine rankings for having “bad neighbors”—backlinks from link farms, adult or gambling sites, or blog spam—people engaged in sabotage: they built link farms and left blog comment spam linking to their competitors’ sites.

The same sort of thing is happening on Yahoo Answers. Initially, companies would leave answers pushing their products, but Yahoo started policing this. So people have written bots to report abuse on all their competitors. There are Facebook bots doing the same sort of thing.

Last month, Google introduced Sidewiki, a browser feature that lets you read and post comments on virtually any webpage. People and industries are already worried about the effects unrestrained commentary might have on their businesses, and how they might control the comments. I’m sure Google has sophisticated systems ready to detect commercial interests that try to take advantage of the system, but are they ready to deal with commercial interests that try to frame their competitors? And do we want to give one company the power to decide which comments should rise to the top and which get deleted?

Whenever you build a security system that relies on detection and identification, you invite the bad guys to subvert the system so it detects and identifies someone else. Sometimes this is hard—leaving someone else’s fingerprints at a crime scene is hard, as is using a mask of someone else’s face to fool a guard watching a security camera—and sometimes it’s easy. But when automated systems are involved, it’s often very easy. It’s not just hardened criminals who try to frame each other; it’s mainstream commercial interests.

With systems that police internet comments and links, there’s money involved in commercial messages—so you can be sure some will take advantage of it. This is the arms race. Build a detection system, and the bad guys try to frame someone else. Build a detection system to detect framing, and the bad guys try to frame someone else framing someone else. Build a detection system to detect framing of framing, and, well, there’s no end, really. Commercial speech is on the internet to stay; we can only hope that commercial interests don’t pollute the social systems we use so badly that they’re no longer useful.

This essay originally appeared in The Guardian.

Posted on October 16, 2009 at 8:56 AM

Detecting Forged Signatures Using Pen Pressure and Angle

Interesting:

Songhua Xu presented an interesting idea for measuring pen angle and pressure to present beautiful flower-like visual versions of a handwritten signature. You could argue that signatures are already a visual form, nicely identifiable and universal. However, with the added data about pen pressure and angle, the authors were able to create visual signatures that offer potentially greater security, assuming you can learn to read them.

A better image. The paper (abstract is free; paper is behind a paywall).

Posted on October 8, 2009 at 6:43 AM

Fabricating DNA Evidence

This isn’t good:

The scientists fabricated blood and saliva samples containing DNA from a person other than the donor of the blood and saliva. They also showed that if they had access to a DNA profile in a database, they could construct a sample of DNA to match that profile without obtaining any tissue from that person.

[…]

The planting of fabricated DNA evidence at a crime scene is only one implication of the findings. A potential invasion of personal privacy is another.

Using some of the same techniques, it may be possible to scavenge anyone’s DNA from a discarded drinking cup or cigarette butt and turn it into a saliva sample that could be submitted to a genetic testing company that measures ancestry or the risk of getting various diseases.

The paper.

EDITED TO ADD (8/19): A better article.

Posted on August 19, 2009 at 6:57 AM

Clear Shuts Down Operation

Clear, the company that sped people through airport security, has ceased operations. My first question: what happened to all that personal information it collected on its members? An answer appeared on its website:

Applicant and Member data is currently secured in accordance with the Transportation Security Administration’s Security, Privacy and Compliance Standards. Verified Identity Pass, Inc. will continue to secure such information and will take appropriate steps to delete the information.

Some are not reassured:

The disturbing part is that everyone who joined the Clear program had to give this private company (and the TSA) fingerprint and iris scans. I never joined Clear. But if I had, I would be extremely concerned about what happens to this information now that the company has gone defunct.

I can hear it now—they’ll surely say all the biometric and fingerprint data is secure, you don’t need to worry. But how much can you trust a company that shuts down with little notice while being hounded by creditors?

Details matter here. Nowhere do the articles say that Clear, or its parent company Verified Identity Pass, Inc., has declared bankruptcy. But if that does happen, does the company’s biggest asset—the personal information of the quarter of a million Clear members—become the property of Clear’s creditors?

I previously wrote about Clear here.

More commentary.

Posted on June 25, 2009 at 12:36 PM

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disneyworld, at international borders. The government of Canada is evaluating using iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that to defend ourselves against this proliferation of using biometrics for authentication, the individual should publish them. The rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and what the system can do embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example: phishing is a mismatch problem between what’s in the user’s head and where the URL is actually going. SSL doesn’t work, but how should websites authenticate themselves to users?) Her solution is protected links: a set of secure bookmarks in protected browsers. She went on to describe a prototype and tests run with user subjects.
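Her protected-links design is only summarized above; as a rough sketch of the underlying principle (only treat a destination as trusted if it matches a bookmark the user verified in advance), with every name and URL invented, it might look something like this:

```python
from urllib.parse import urlparse

# Hypothetical secure bookmarks: origins the user verified once, e.g. by
# typing the address in at enrollment time.
SECURE_BOOKMARKS = {
    "My Bank": "https://www.example-bank.com",
}

def open_protected_link(requested_url):
    """Follow a link only if its origin matches a stored secure bookmark."""
    requested = urlparse(requested_url)
    for name, bookmark in SECURE_BOOKMARKS.items():
        trusted = urlparse(bookmark)
        if (requested.scheme, requested.hostname) == (trusted.scheme, trusted.hostname):
            print(f"Opening {name} in the protected browser")
            return True
    print("Not a protected link; refusing to present this site as trusted")
    return False

open_protected_link("https://www.example-bank.com/login")   # matches the bookmark
open_protected_link("https://www.examp1e-bank.com/login")   # lookalike domain, refused
```

In a sketch like this the user never has to read and judge the URL; the comparison against the verified bookmark happens for them, which is the kind of mismatch problem the talk describes.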

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.
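The actual protocol is in Reeder’s paper; purely as an illustration of the trustee idea (a threshold of people you designated in advance must vouch for you before a reset is allowed), here is a toy sketch in which every name and parameter is made up:

```python
import secrets

class SocialRecovery:
    """Toy k-of-n trustee recovery; an illustration, not the protocol from the talk."""

    def __init__(self, trustees, threshold=3):
        self.threshold = threshold
        # Each trustee holds a one-time code to give the account holder,
        # out of band, if they claim to have lost their password.
        self.codes = {t: secrets.token_hex(8) for t in trustees}

    def recover(self, presented_codes):
        """Allow a reset only if enough valid trustee codes are presented."""
        valid = sum(1 for trustee, code in presented_codes.items()
                    if self.codes.get(trustee) == code)
        return valid >= self.threshold

recovery = SocialRecovery(["alice", "bob", "carol", "dave"], threshold=3)
# The locked-out user phones three trustees, collects their codes, presents them:
collected = {t: recovery.codes[t] for t in ["alice", "bob", "carol"]}
print(recovery.recover(collected))  # True
```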

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. But since hazards are generally not very hazardous, most people just ignore the warnings. “Often, software asks the user and provides little or no information to help user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.
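A minimal sketch of that three-zone policy, with the risk thresholds invented for illustration:

```python
# Invented thresholds: block obvious dangers, stay silent about obvious
# non-dangers, and only interrupt the user for the grey area in between.
BLOCK_THRESHOLD = 0.9
SILENT_THRESHOLD = 0.1

def handle_site(url, danger_probability):
    if danger_probability >= BLOCK_THRESHOLD:
        return f"block {url}: almost certainly dangerous"
    if danger_probability <= SILENT_THRESHOLD:
        return f"allow {url}: no warning shown"
    return f"warn about {url}: give the user context and let them decide"

print(handle_site("http://phish.example", 0.97))
print(handle_site("https://news.example", 0.02))
print(handle_site("http://unclear.example", 0.50))
```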

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM

No Smiling in Driver's License Photographs

In other biometric news, four states have banned smiling in driver’s license photographs.

The serious poses are urged by DMVs that have installed high-tech software that compares a new license photo with others that have already been shot. When a new photo seems to match an existing one, the software sends alarms that someone may be trying to assume another driver’s identity.

But there’s a wrinkle in the technology: a person’s grin. Face-recognition software can fail to match two photos of the same person if facial expressions differ in each photo, says Carnegie Mellon University robotics professor Takeo Kanade.

Posted on May 29, 2009 at 11:19 AM

News from the Fingerprint Biometrics World

Wacky:

A Singapore cancer patient was held for four hours by immigration officials in the United States when they could not detect his fingerprints—which had apparently disappeared because of a drug he was taking.

[…]

The drug, capecitabine, is commonly used to treat cancers in the head and neck, breast, stomach and colorectum.

One side-effect is chronic inflammation of the palms or soles of the feet and the skin can peel, bleed and develop ulcers or blisters—or what is known as hand-foot syndrome.

“This can give rise to eradication of fingerprints with time,” explained Tan, senior consultant in the medical oncology department at Singapore’s National Cancer Center.

Posted on May 29, 2009 at 6:37 AM

A Sad Tale of Biometrics Gone Wrong

From The Daily WTF:

Johnny was what you might call a “gym rat.” In incredible shape from almost-daily gym visits, a tight Lycra tank top, iPod strapped to his sizable bicep, underneath which was a large black tribal tattoo. He scanned his finger on his way out, but the turnstile wouldn’t budge.

“Uh, just a second,” the receptionist furiously typed and clicked, while Johnny removed one of his earbuds out and stared. “I’ll just have to manually override it…” but it was useless. There was no manual override option. Somehow, it was never considered that the scanner would malfunction. After several seconds of searching and having Johnny try to scan his finger again, the receptionist instructed him just to jump over the turnstile.

It was later discovered that the system required a “sign in” and a “sign out,” and if a member was recognized as someone else when attempting to sign out, the system rejected the input, and the turnstile remained locked in position. This was not good.

The scene repeated itself several times that day. Worse, the fingerprint scanner at the exit was getting kind of disgusting. Dozens of sweaty fingerprints required the scanner to be cleaned hourly, and even after it was freshly cleaned, it sometimes still couldn’t read fingerprints right. The latticed patterns on the barbell grips would leave indented patterns temporarily on the members’ fingers, there could be small cuts or folds on fingertips just from carrying weights or scrapes on the concrete coming out of the pool, fingers were wrinkly after a long swim, or sometimes the system just misidentified the person for no apparent reason.
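The failure mode in the story (the exit scan has to resolve to the same member who signed in, and there is no manual override when it doesn’t) can be caricatured in a few lines. Everything below is invented for illustration:

```python
# Members who have signed in and not yet signed out.
signed_in = {"johnny"}

def exit_turnstile(recognized_member):
    """Unlock only if the scanner's guess matches someone currently signed in."""
    if recognized_member in signed_in:
        signed_in.discard(recognized_member)
        return "turnstile unlocked"
    # A dirty scanner or a callused fingertip lands here, and because nobody
    # implemented a staff override, the member is stuck.
    return "input rejected; turnstile stays locked"

print(exit_turnstile("someone_else"))  # misread: rejected, no way out
print(exit_turnstile("johnny"))        # correct read: unlocked
```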

Me on biometrics.

Posted on April 30, 2009 at 6:19 AM

Michael Froomkin on Identity Cards

University of Miami law professor Michael Froomkin writes about ID cards and society in “Identity Cards and Identity Romanticism.”

This book chapter for “Lessons from the Identity Trail: Anonymity, Privacy and Identity in a Networked Society” (New York: Oxford University Press, 2009)—a forthcoming comparative examination of approaches to the regulation of anonymity edited by Ian Kerr—discusses the sources of hostility to National ID Cards in common law countries. It traces that hostility in the United States to a romantic vision of free movement and in England to an equally romantic vision of the ‘rights of Englishmen’.

Governments in the United Kingdom, United States, Australia, and other countries are responding to perceived security threats by introducing various forms of mandatory or nearly mandatory domestic civilian national identity documents. This chapter argues that these ID cards pose threats to privacy and freedom, especially in countries without strong data protection rules. The threats created by weak data protection in these new identification schemes differ significantly from previous threats, making the romantic vision a poor basis from which to critique (highly flawed) contemporary proposals.

One small excerpt:

…it is important to note that each ratchet up in an ID card regime—the introduction of a non-mandatory ID card scheme, improvements to authentication, the transition from an optional regime to a mandatory one, or the inclusion of multiple biometric identifiers—increases the need for attention to how the data collected at the time the card is created will be stored and accessed. Similarly, as ID cards become ubiquitous, a de facto necessity even when not required de jure, the card becomes the visible instantiation of a large, otherwise unseen, set of databases. If each use of the card also creates a data trail, the resulting profile becomes an ongoing temptation to both ordinary and predictive profiling.

Posted on March 4, 2009 at 7:25 AM

