Identifying Computer-Generated Faces

It’s the eyes:

The researchers note that in many cases, users can simply zoom in on the eyes of a person they suspect may not be real to spot the pupil irregularities. They also note that it would not be difficult to write software to spot such errors and for social media sites to use it to remove such content. Unfortunately, they also note that now that such irregularities have been identified, the people creating the fake pictures can simply add a feature to ensure the roundness of pupils.
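The pupil check the researchers describe can be sketched as a roundness score. This is a minimal illustration, not the paper's method: it assumes you already have a binary pupil mask (from any segmentation step) and scores how closely that mask matches a perfect disk.

```python
import numpy as np

def pupil_roundness(mask: np.ndarray) -> float:
    """Roundness in [0, 1]: intersection-over-union between the pupil
    mask and the equal-area disk centered at the mask's centroid.
    Near 1 for a circular pupil; lower for the ragged pupil shapes
    reported in GAN-generated eyes."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0.0
    cy, cx = ys.mean(), xs.mean()      # centroid of the pupil pixels
    r = np.sqrt(xs.size / np.pi)       # radius of an equal-area disk
    yy, xx = np.indices(mask.shape)
    disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    inter = np.logical_and(mask, disk).sum()
    union = np.logical_or(mask, disk).sum()
    return inter / union
```

A round pupil scores near 1; a pupil with a wedge or notch missing scores visibly lower. (As the commenters below point out, the same test would also flag real people with irregular pupils.)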

And the arms race continues….

Research paper.

Posted on September 15, 2021 at 10:31 AM • 14 Comments


Clive Robinson September 15, 2021 1:58 PM

@ Bruce,

Unfortunately, they also note that now that such irregularities have been identified, the people creating the fake pictures can simply add a feature…

This fragility of such “testing systems”, where “needing obscurity” is a primary requirement of operation, is not good. One of the major points of “evidence” is that it be presented openly, and that all methods be open to inspection and application by others so results can be verified.

Until fairly recently, science has been able to stop those producing fakes from changing their methods sufficiently for the fakes to pass as genuine. But that has only been true of “physical objects”, not “informational objects”.

The question arises as to whether it is actually possible to stop “informational object” counterfeiting.

Especially when the cost of making copies of informational objects is so small that it is effectively negligible. So the only way of stopping new “fakes” appears to be the use of digital signing. But even that is far from reliable, as neither the hash nor the signing of it is in any way intrinsically tied to the object being verified….
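The gap Clive describes can be shown in a few lines. This is a toy sketch using a stdlib HMAC as a stand-in for a real public-key signature scheme; the key name and the byte strings are invented for illustration. The point is that the signature attests to the bytes, not to how the image came into existence:

```python
import hashlib
import hmac

SIGNING_KEY = b"example-key"   # stand-in for a real private signing key

def sign(blob: bytes) -> bytes:
    """Detached signature: a MAC over the SHA-256 hash of the bytes."""
    digest = hashlib.sha256(blob).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def verify(blob: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(blob), tag)

photo = b"...image bytes..."
tag = sign(photo)
assert verify(photo, tag)               # intact bytes verify
assert not verify(photo + b"x", tag)    # any tampering is caught

# But a generated fake signs just as happily: signing proves who
# published the bytes, not whether the scene ever existed.
fake = b"...GAN-generated image bytes..."
assert verify(fake, sign(fake))
```

So signing can anchor provenance (who published what, when), but it cannot certify that the content itself is genuine.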

So, on the assumption that it is no longer possible to stop fakes being created, “How will society react?”…

/dev/null September 15, 2021 10:17 PM

I’ve been paying attention to some of the digital celebrities (mostly in Asian countries) and the level of detail blows my mind. It’s really hard to tell fake from real any more. And these fake creations can generate a lot of revenue. They have full-blown profiles; they “do stuff”, have “lives” and followers, like a real person.

Combine with deepfake AI stuff and wow, where are we headed? It’s already depressing/mentally unhealthy as it is with celebrities and successful YouTubers and such (see recent articles on Facebook/Instagram and teenage health). Just imagine if all of that was even more fake, as in completely 100% generated by a marketing firm. Yikes.

Winter September 16, 2021 12:59 AM

There are (yearly) anti-spoof competitions for deep fakes:

https://

It is an arms race, but a very active one.

Edge Case Guy September 16, 2021 7:09 AM

Interesting shortcut to finding fake images… but it’s based on the assumption that every real person has perfectly round pupils.

Edge cases for people exist, including “pac-man eyes”

I fear that systems using shortcuts like this will flag a small percentage of the population, and lock them out of legitimate systems. A little like when a computer system marks someone as dead, and they have a nightmare trying to prove otherwise.

Sut Vachz September 16, 2021 8:43 AM

Classic illustration of face recognition methodology

https://

jones September 16, 2021 9:50 AM

At the moment, many generated faces of this sort are produced using NVIDIA’s StyleGAN software with a pre-trained model; the most common pre-trained model for faces is FFHQ.

Faces produced with this pre-trained model have detectable features — the images in the model are pre-processed so the eyes are always aligned in the same way.
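That alignment cue could be checked mechanically. A hedged sketch: the template coordinates below are hypothetical placeholders (the real ones come from the FFHQ alignment preprocessing), and eye centers are assumed to come from some external landmark detector.

```python
import numpy as np

# Hypothetical canonical eye centers for a 1024x1024 FFHQ-style crop;
# the actual values would be taken from the FFHQ alignment script.
TEMPLATE = {"left_eye": (380.0, 480.0), "right_eye": (644.0, 480.0)}

def suspiciously_aligned(left_eye, right_eye, tol=8.0):
    """True if detected eye centers sit (within `tol` pixels) exactly
    where FFHQ preprocessing would have placed them -- a weak cue
    that the face came from a model trained on aligned crops."""
    d_left = np.hypot(*np.subtract(left_eye, TEMPLATE["left_eye"]))
    d_right = np.hypot(*np.subtract(right_eye, TEMPLATE["right_eye"]))
    return bool(d_left <= tol and d_right <= tol)
```

On its own this only says the photo was cropped like FFHQ, which a careful faker can trivially avoid by re-cropping.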

One further avenue of detection: it is possible to take a suspected fake face and “project” it into the “latent space” of the model to see if that face can be generated by the model — if so, it’s fake.
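The projection idea can be demonstrated with a toy stand-in for the generator. This sketch replaces StyleGAN with a small linear map purely to show the principle: optimize a latent vector to reproduce the target image, and use the leftover residual as the fakeness signal. All names and sizes here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 8))      # toy linear "generator": image = W @ z

def projection_error(target: np.ndarray) -> float:
    """Gradient-descend a latent z so the generator reproduces
    `target`; the residual left over measures whether the model
    could have produced that image at all."""
    z = np.zeros(8)
    for _ in range(2000):
        residual = W @ z - target
        z -= 1e-3 * (W.T @ residual)   # gradient of 0.5*||W z - target||^2
    return float(np.linalg.norm(W @ z - target))

generated = W @ rng.normal(size=8)   # an image the model can emit
natural = rng.normal(size=64)        # an image from outside the model

# The generated image projects back almost exactly; the natural one
# leaves a large residual outside the generator's range.
```

With a real StyleGAN the same loop runs over the network's latent space with a perceptual loss instead of a plain L2 residual, but the decision rule is the same: a near-zero projection error is evidence the image is the model's own output.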

The “arms race” aspect of this is real, though, at present, the time and resources required to assemble a custom dataset on the scale of FFHQ and then to train a model on that dataset are beyond the reach of bottom-feeders.

any moose September 16, 2021 4:13 PM

The publishing of deepfake flaws should have been outlawed many months ago, for the exact reason Schneier stated: that e-monsters would simply improve their odious software. And as Clive noted, deepfakes are now essentially impossible to distinguish from genuine video. Our leaders, except for farsighted ones such as British MP Maria Miller, have fixed it so video evidence in court is 100% unreliable, though judges still believe it to be reliable. Soon people will be arrested and convicted based on deepfakes. Juries, which often only exhibit a bovine level of intelligence, will simply not believe that deepfakes exist because the images look too real. You anarchists, libertarians, and Bolsheviks 2.0 have the dystopian future you always advocated for.

jones September 16, 2021 8:00 PM

@any moose

judges still believe it to be reliable
Juries, which often only exhibit a bovine level of intelligence

The epistemological problem posed by deepfakes and GAN-generated imagery has less to do with courts, and more to do with socio-cultural concerns.

Only about 5% of criminal cases actually go to court; most are settled with plea agreements because trials are deemed too risky to the defendant. An empirical study that compared the results of two mass-exoneration cases (where police and prosecutors engaged in systematic misconduct) to a college psychology experiment found that, under the plea bargain system, about 90% of guilty parties will plead to a lesser crime to avoid trial, while 55-75% of innocent parties will plead guilty for the same reason.

The defining features of the US criminal justice system are that innocent people are punished, guilty people get punishments less than what they deserve, and the mechanics of this occur out of court, behind closed doors since only 5% of cases go to trial.

Deepfakes are a social and epistemological problem, but it’s hard to see how anything could make the US criminal justice system worse.

Jon September 16, 2021 11:29 PM

@ jones

Only about 5% of criminal cases actually go to court; most are settled with plea agreements because trials are deemed too risky to the defendant.

Do keep in mind that’s only true in the USA. Bruce Schneier has an international audience.

In some countries, the offer of a plea bargain is a hideous criminal offence. J.

Winter September 17, 2021 7:35 AM

@Jon, Jones
“In some countries, the offer of a plea bargain is a hideous criminal offence. J.”

Plea bargains prevent fair trials; hence the resistance in the EU against extraditing suspects to the USA, where they would not get a fair trial.

SpaceLifeForm September 17, 2021 6:25 PM

Speaking of eyes, 3 or 5?

Oh, to be the Fly on The Wall.



SpaceLifeForm September 17, 2021 11:54 PM

Pardon me, my mascara is running


echo September 18, 2021 4:05 AM

I’ve gone through several drafts on the technology versus society versus real world application as well as a technical view of makeup and couldn’t be bothered to post any of them.


Now that study on makeup is very interesting. Without rebuilding my comments on makeup: simply, there are makeup looks which work on camera, and then there is the real world. The so-called “Instagram look” is one which only works on camera, with a certain range of facial structure, lighting, and composition. It doesn’t work in the real world. As for the real world, “the magic of makeup” is really quite complex and involves all manner of things, from neurology to perception to social habits, context, and culture, to project a 2D image on a 3D surface for a particular outcome. It’s not something I care to write about much, because explaining it can reduce its effect. Another thing is the “Sherlock Holmes” factor: the language of makeup can scream autobiographical history as much as tweed jackets and slightly scuffed shoes.

Credit to the researchers for spotting this strategy. I should have spotted it myself; my only excuse is that my mind wasn’t in this space. I’m wondering if they can extend the technique beyond heatmaps to other parts of the spectrum. Another strategy, of course, is to be recorded wearing makeup applied with this in mind, then remove it so as to evade detection. Also, what if you wore a random map every time as part of your everyday routine? That would require less skill.

There’s a Youtube here:

Sut Vachz September 18, 2021 8:21 AM

Re: advanced … makeup

Actually, the simple addition of the traditional beauty mark will suffice.


Sidebar photo of Bruce Schneier by Joe MacInnis.