Fawkes: Digital Image Cloaking
Fawkes is a system for manipulating digital images so that they aren’t recognized by facial recognition systems.
At a high level, Fawkes takes your personal images and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would: sharing them on social media, sending them to friends, printing them, or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, the “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable and will not cause errors in model training. However, when someone tries to identify you using an unaltered image (e.g., a photo taken in public), they will fail.
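The cloaking idea can be illustrated as a bounded perturbation in a model's feature space: nudge the image's pixels, within a tight budget, so its *features* drift toward some other person's. A minimal sketch of that idea, using a toy random linear map as a stand-in for Fawkes's deep feature extractor (the function names, dimensions, and the 0.05 budget here are all illustrative, not the paper's actual settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a feature extractor: a fixed random linear map Phi.
# (Fawkes uses a deep network; a linear map keeps the gradient exact and simple.)
D, F = 64, 16                     # flattened "image" size, feature size
Phi = rng.standard_normal((F, D)) / np.sqrt(D)

def features(x):
    return Phi @ x

def cloak(x_src, x_tgt, eps=0.05, lr=0.1, steps=200):
    """Find a small perturbation delta (||delta||_inf <= eps) that pulls
    the source image's features toward the target's features."""
    f_tgt = features(x_tgt)
    delta = np.zeros_like(x_src)
    for _ in range(steps):
        # Gradient of ||Phi(x + delta) - f_tgt||^2 with respect to delta.
        grad = 2 * Phi.T @ (features(x_src + delta) - f_tgt)
        delta -= lr * grad
        # Project back into the budget so the change stays imperceptibly small.
        delta = np.clip(delta, -eps, eps)
    return x_src + delta

x_src = rng.random(D)             # "your" photo
x_tgt = rng.random(D)             # the decoy identity's photo
x_cloaked = cloak(x_src, x_tgt)

# Pixels barely move (bounded by eps)...
pixel_change = np.max(np.abs(x_cloaked - x_src))
# ...but the features move measurably toward the target's.
d_before = np.linalg.norm(features(x_src) - features(x_tgt))
d_after = np.linalg.norm(features(x_cloaked) - features(x_tgt))
print(pixel_change, d_before, d_after)
```

A model trained on such cloaked photos learns feature associations that don't match your real face, which is why recognition then fails on an unaltered photo of you.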
Research paper.
EDITED TO ADD (8/3): Kashmir Hill checks it out, and it’s got problems.
Another article.
Alan Kaminsky • July 22, 2020 10:04 AM
As I understand it after skimming the paper, Fawkes “cloaks” a photo of you by adding noise, imperceptible to the human eye, such that an image classification AI would classify your photo as that of someone else — and you can pick the target for the misclassification. For example, I could post a cloaked photo of me (an old, balding male) that the AI would think is a photo of Taylor Swift.