Comments

A nonny bunny April 12, 2010 6:37 AM

That’s amusing.
Of course, wearing such camouflage to confuse face recognition would probably just attract the attention of actual people.

Ned April 12, 2010 6:44 AM

Looks a lot like military camouflage makeup patterns — which are designed to confuse our face recognition brainware.

BF Skinner April 12, 2010 6:47 AM

Artists! They don’t play fair.

@A nonny
Don’t agree. It depends on the fashion of the time. People might begin wearing more makeup as a form of resistance to surveillance.

Remember Jonathan crossing the border in Gotcha? Using another guy’s passport and photo, his friends painted him so glam that the guard didn’t bother to authenticate further.

If it could defeat camera imaging celebrities would start wearing it tomorrow and then everyone else would follow suit.

Fun Google image search: type “surveillance” and press enter.

GreenSquirrel April 12, 2010 6:55 AM

Interesting, even though it doesn’t seem all that practical at the moment. Unless of course you want to carry out crimes at a Renaissance ball…

It may mean that Goths / Emos are harder to tell apart, but I can’t think of how to use this to your advantage – yet.

(not that this means no one else can, I am frequently behind the inspiration curve)

SB7 April 12, 2010 7:33 AM

I may be mistaken, but I believe his make-up patterns only fool a system which is trying to determine if the image contains a face, not whose face it is.

They’re also designed to work with a specific class of algorithms using “Haar Faces.” Of course many face recognition algorithms operate completely differently.

Some are even designed to account for things like make-up and camouflage. I acted as a test-subject for an effort to collect data for just such testing many years back.

There’s already a great way to escape face detection software if you don’t mind standing out on the street corner: just put on a mask or a veil or something similar. That doesn’t rely on knowing in advance what algorithm the system uses.
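
For reference, here is a minimal sketch of what Haar-cascade face detection looks like in practice, using OpenCV’s stock frontal-face cascade (the cascade file and image path are just placeholders). Note that it only answers “is there a face here?”, which is exactly the stage the makeup targets:

```python
# Minimal sketch of Haar-cascade face *detection* (not recognition)
# using OpenCV's bundled frontal-face cascade. Image path is hypothetical.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("street_scene.jpg")          # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # cascades work on grayscale

# Returns bounding boxes of anything the cascade considers a face.
# Makeup that disrupts the expected light/dark Haar patterns can make
# this list come back empty, so nothing downstream ever runs.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} face(s) detected")
```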

RogerGS April 12, 2010 7:37 AM

And now, the fiendish master plan of the Juggalos, KISS Army, and David Bowie is revealed.

richrumble April 12, 2010 7:38 AM

What if I rock an eye patch to get on a plane, or some gauze wrapped around one side of my face covering one cheek/eye/eyebrow? What if I use a Halloween prosthetic and sunglasses? I really don’t think this is going to attract much attention: http://www.frightcatalog.com/i/360×360/1503018.jpg
I don’t know how many times I’ve seen this happen, and it’s pretty incognito.
-rich

GrinMouse April 12, 2010 8:44 AM

The makeup patterns provided are not actually all that eye-catching… in the world of avant-garde fashion makeup. I am actually planning a shoot this summer that will have a model wearing high-fashion makeup (not specifically designed to defeat this algorithm, granted! ;-)) parading down a city street. And frankly, aside from a few curious glances, I don’t expect it to attract all that much attention. Hiding in plain view…? 🙂

Clive Robinson April 12, 2010 9:57 AM

Back in World War One they did the same thing, but to hide ships: it was known as “Dazzle camouflage” and it worked on the “optical illusion confusion” principle.

That is, it caused an operator using an optical range finder (which relies on the split-image coincidence effect) to be sufficiently unsure of what they were looking at that they could not take an accurate range by aligning the image.

Dazzle was invented in 1916 by the artist Norman Wilkinson. It was still in use during the Second World War on the Arctic convoys, where it helped hide a vessel amongst ice fields etc.

http://en.wikipedia.org/wiki/Dazzle_camouflage

BF Skinner April 12, 2010 10:46 AM

@Tim @SB7 “fools face detection not recognition”
@craig “There are easier ways and less conspicuous”
@a nonny bunny “attract the attention of actual people”

I think we’re missing the point here, y’all.

First we’re dealing with a work in progress. Pilots and prototypes are meant to be rough.

Second if an attack works then it gets refined and generalized. Maybe the paints don’t have to be visible to human eyes etc.
If it can defeat facial detection would a system even get to recognition? Doesn’t the system have to be able to detect the face first? If they reported negative detects we’d have an infinite number of “No one” “No one” “No One” “No One” “Still No One” “No One” “No One”. I realize this depends on the algorithm used.

Finally, the attack targets automated systems. Where have they suggested deploying these? On urban street and transit camera systems, sporting events, large gatherings of people. NOT one detector flanked by two human guards. Why? So it can deal with the amount of data those cameras that ARE working produce. So if you can defeat the machine you never have to deal with the humans. Other people may look at you funny, and during a football game or in NYC, as GrinMouse points out, maybe not even that.

@GreenSquirrel ” Goths / EMOs are harder to tell apart ”
Good one! Goths hate being anywhere near related to Emos…keep it up.

@Clive “…Dazzle…”
Artists cheat!

TesserId April 12, 2010 11:58 AM

I snicker at the thought of makeup and other forms of recreational costume being outlawed.

ted April 12, 2010 12:04 PM

If methods for defeating the FR systems became common knowledge, what if anything could law enforcement do to stop people from employing them?

Virginia has rules about what you cannot do in the way of makeup and wigs when getting a driver’s license. The UK has similar rules for people in hats and hoodies in certain public places. Could US police legally do the same when you are in public?

Clive Robinson April 12, 2010 12:15 PM

@ BF Skinner,

Of course the question, as you note, is: given this works at making the system not detect a face, let alone recognise who it is,

how little makeup is required to foil recognition?

Which, as you may remember, is a question I was asking a little while ago about the (supposed) Mossad “hit people” in Dubai.

I wonder if “facial putty” etc could do the same job but less obviously to humans.

Let’s be honest: if I had a nose that looked like Ron Moody’s “Fagin” or a chin that looked like Jimmy Hill’s, people would (I hope) not say anything to my face out of social necessity.

As I understand it, most FR systems work not on “direct measurement” but on “relative measurement”.

So although I might not be able to change the real measurement between my eyes, I might well be able to change other relative measurements, say to my cheekbones and chin or teeth, so that relatively my eyes look closer together or further apart, and thus give a different ratio that is not mine in the DB.
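
As a rough sketch of the idea (the landmark positions below are invented, and a real system extracts many more of them), the “feature” that gets matched is a set of ratios, so anything that moves the apparent landmarks moves the ratios:

```python
# Crude feature vector of distance ratios between facial landmarks.
# Landmark coordinates are made up; real systems extract them automatically.
from math import dist

landmarks = {                      # hypothetical (x, y) pixel positions
    "left_eye":  (120, 140),
    "right_eye": (180, 140),
    "nose_tip":  (150, 180),
    "chin":      (150, 240),
}

eye_span    = dist(landmarks["left_eye"], landmarks["right_eye"])
eye_to_nose = dist(landmarks["left_eye"], landmarks["nose_tip"])
eye_to_chin = dist(landmarks["left_eye"], landmarks["chin"])

# Ratios are what get matched, so shifting the *apparent* chin or
# cheekbone positions (putty, shading) changes the vector even though
# the true inter-pupil distance is untouched.
feature_vector = (eye_to_nose / eye_span, eye_to_chin / eye_span)
print(feature_vector)
```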

BF Skinner April 12, 2010 12:53 PM

@Clive “Ron Moody’s “Fagin” or a chin that looked like Jimmy Hill’s, ”

For those in the US, think Jimmy Durante and Jay Leno for local cultural mileage.

@Clive “might not be able to change the real measurement between my eyes I might well be able to change other relative measurements”

Which gives the advantage to the confuser, if they can shift themselves toward the middle of the normal distribution curve and generate more Type II errors (false negatives).

Clive Robinson April 12, 2010 12:55 PM

For those interested in facial recognition systems there is sadly not much info out there (yes, I know it’s available in Apple’s iLife software and some online photo DBs).

There are one or two (out of date) articles one of which is,

http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/facial-recognition.htm

However, since 9/11 FR systems have gone underground, and it has been suggested that this is deliberate, to get “funding” from various UK and US three-letter agencies.

What is known is that most systems that have been tested on CCTV have been far from a success.

Apparently Boston Airport (Logan) tried two full-face systems but got little better than 61% correct recognition, and a system that has been running in the London Borough of Newham has not spotted one criminal on the streets, even though there are quite a few “known offenders” living in the borough who are in the database.

Also, the Australian SmartGate software using biometric passports appears nowhere near as good as people are led to believe.

SB7 April 12, 2010 1:01 PM

@BFSkinner,

Second if an attack works then it gets refined and generalized. Maybe the paints don’t have to be visible to human eyes etc.

If the cameras are working with visible light (and they do; IR is unreliable for biometrics) then there isn’t much you can do with “invisible paint.”

Finally, the attack targets automated systems. Where have they suggested deploying these? On urban street and transit camera systems, sporting events, large gatherings of people.

But there are other ways to determine if there is a person in an image like that. People move in a distinctive way, so even with this you’re going to look like a person strolling along the street who maybe doesn’t have a face.

If it can defeat facial detection would a system even get to recognition? Doesn’t the system have to be able to detect the face first?

Yes, but see above. There are other ways to determine whether you should run the recognition program. AFAIK this particular one is only effective if the face is in front of the camera, facing it head-on.
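
For instance, here is a sketch of one such alternative (the frame path is a placeholder): OpenCV ships a HOG-based pedestrian detector that keys on overall body shape rather than the face, so painted faces would not hide you from it:

```python
# Sketch of a non-face way to flag "there is a person here": OpenCV's
# built-in HOG pedestrian detector, driven by body shape, not the face.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_frame.jpg")        # hypothetical camera frame
people, weights = hog.detectMultiScale(frame, winStride=(8, 8))
print(f"{len(people)} pedestrian(s) detected")
```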

The most important part is that this scheme counters ONLY Haar-face based detection algorithms. That’s just like only defending against bombs-in-shoes but not bombs generally, and it has the same drawbacks. Bruce has covered the futility of countering a specific technique at length.

chris April 12, 2010 4:20 PM

I’ve been wondering for a couple of years whether dazzle patterns could disrupt face recognition software. I am glad someone has investigated it for me.

Ward S. Denker April 12, 2010 5:49 PM

Tim,

“Looks like it fools face detection not recognition.”

This is true, but consider how face recognition software is likely to be used. Viewing the video from all the various feeds takes a lot of time these days. In order to save money (wages paid to investigators), a smart thing to do would be to run the video feeds through a routine which acts like a sieve, sorting frames that contain faces from frames that do not. Then the face-containing frames can be passed through an algorithm to determine which of them might show the person you’re looking for.

If you break the first part of that sequence, the computer may never recognize that a face (your face) is in a frame in order to process it. The technique may confuse the facial recognition part as well; that just hasn’t been experimented with yet by the researcher.
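
Something like this sketch of the sieve idea (the video path and the choice of a Haar cascade are just illustrative assumptions):

```python
# Sketch of the "sieve" pipeline: cheap face detection filters the video,
# and only frames that pass are handed to the expensive recognition step.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frames_with_faces(video_path):
    """Yield only the frames in which the detector finds a face."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if len(cascade.detectMultiScale(gray, 1.1, 5)) > 0:
            yield frame
    cap.release()

# Frames where the makeup defeats detection never reach this loop,
# so the recognition stage never even sees them.
for frame in frames_with_faces("cctv_feed.mp4"):
    pass  # hand off to the recognition / matching stage here
```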

Clive Robinson April 13, 2010 12:50 AM

So,

Based on this early research, I guess we will not be seeing facial recognition systems doing security at a Star Trek convention any time soon.

Clive Robinson April 13, 2010 2:17 AM

@ SB7,

“The most important part is that this scheme counters ONLY Haar-face based detection algorithms. That’s just like only defending against bombs-in-shoes but not bombs generally, and it has the same drawbacks.”

You are arguing that it “counters only Haar…”, when it has “only been tested against Haar…”.

The two are not the same, thus you cannot say it does not work against other “face detecting” systems until either it has been tested against them or you can show good grounds for your reasoning. Nor can you argue that it does not work against “face recognition” systems, for the same reasons.

Further there are two distinct uses for FR systems,

1, Is this face a match for the credential presented.

2, Does this face match any in the “rogues’ gallery” DB.

In the first you should actually be looking to “disprove” the match (i.e. pick up on differences).

In the second you should initially be looking for similarities (database-reducing searches are “and”, not “or”).

In the general case, “face detecting” is only going to be used for the latter system.
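
To make the distinction concrete, here is a sketch, assuming each face has already been reduced to a small numeric feature vector (the vectors and threshold are invented for illustration):

```python
# Sketch of the two FR modes: 1:1 verification against a presented
# credential versus 1:N identification against a gallery database.
from math import dist

def verify(probe, claimed, threshold=0.6):
    # Mode 1: is this face a match for the credential presented?
    # You are trying to *disprove* the claim, so reject on any large difference.
    return dist(probe, claimed) <= threshold

def identify(probe, gallery, threshold=0.6):
    # Mode 2: does this face match anyone in the rogues' gallery?
    # Search the whole database and return the closest acceptable match.
    best = min(gallery, key=lambda name: dist(probe, gallery[name]))
    return best if dist(probe, gallery[best]) <= threshold else None

gallery = {"suspect_a": (0.1, 0.9), "suspect_b": (0.7, 0.3)}
print(verify((0.12, 0.88), gallery["suspect_a"]))   # 1:1 check
print(identify((0.68, 0.31), gallery))              # 1:N search
```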

It would appear that until 9/11 FD systems were not investigated that much, and most of the improvements in FR (not FD) were from automatically adjusting for “off full face” 2D presentation and other “real time” issues.

And few people have publicly looked into the various forms of “deception” that might work against either FR or FD systems.

But we do know some FR systems designers have given up on 2D presentation and are now using 3D presentation.

However, 3D presentation has distinct limitations in that it works only when the test subject is within a relatively short distance of the stereo camera. And this greatly increases system cost.

Normally as a system designer you would not opt to make a “fundamentally limiting” design choice unless there were good reasons to do so. Likewise you would normally choose the least costly system that works.

So some of the designers of FR systems are certainly aware of 2D presentation’s shortcomings, and thus may well be aware of the issues of “deception”, as they have opted to go for a fundamentally different and considerably more expensive way of building FR systems.

As for your comment,

“… and they do; IR is unreliable for biometrics…”

I for one would be interested to know what you base that on. There are a number of biometric systems that do use “near visible” IR quite effectively for scanning objects (including faces), for two reasons:

1, Because the human eye is insensitive to it and it does not cause the iris to change.

2, The sensors and optics actually work better at those wavelengths.

jammit April 13, 2010 3:45 AM

I wonder if, instead of fancy colored makeup, a person could apply IR-absorbing and IR-reflecting makeup? Perhaps you’d look normal to creatures with limited eyesight, but odd enough to screw up machines that don’t differentiate between colors.

Mark April 13, 2010 10:15 AM

I recall something (quite recently) about some software which was meant to ensure that a video camera tracked someone’s face, so long as their skin wasn’t “too dark”.
So you might not even need something that radical to fool machines…

David April 13, 2010 8:36 PM

@ted,

I believe there are already some limitations (not sure about the legal grounds, but certainly practical) on where you can go in makeup. For example, I was a clown in San Diego quite a while ago (cue the jokes), and the head clown warned us against going into banks while in makeup, as it makes the guards very nervous.

Jiminy K April 14, 2010 6:15 AM

If it works with makeup, might it work with bandages? Much as we’d all like to see Lady Gaga apprehended on suspicion of terrorism, there will surely be less obviously voluntary ways of getting around this.

Leif H April 15, 2010 2:42 PM

Everyone seems to be imagining crazy makeup that looks unnatural. Why not paint someone to look like they have a port-wine stain (vascular birthmark)? Or vitiligo? Would that fool the facial detection algorithm?

I love that the examples are fashion line drawings, transformed into supervillainesses.
