Detecting Deepfake Picture Editing

“Markpainting” is a clever technique to watermark photos in a way that makes it easier to detect ML-based manipulation:

An image owner can modify their image in subtle ways which are not themselves very visible, but will sabotage any attempt to inpaint it by adding visible information determined in advance by the markpainter.

One application is tamper-resistant marks. For example, a photo agency that makes stock photos available on its website with copyright watermarks can markpaint them in such a way that anyone using common editing software to remove a watermark will fail; the copyright mark will be markpainted right back. So watermarks can be made a lot more robust.

Here’s the paper: “Markpainting: Adversarial Machine Learning Meets Inpainting,” by David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, and Ross Anderson.

Abstract: Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting.
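The core idea — optimizing a small, budgeted perturbation of the visible pixels so that the inpainter's fill converges to an attacker-chosen target — can be sketched with a toy, differentiable stand-in for the inpainting model. (The paper attacks real neural inpainters; everything below, including the neighbour-averaging "model", the budget `eps`, and the step sizes, is an illustrative assumption, not the authors' method.)

```python
import numpy as np

def inpaint(img, mask):
    """Toy stand-in for a learned inpainter: fill each masked pixel
    with the mean of its unmasked 4-neighbours."""
    out = img.copy()
    h, w = img.shape
    for i, j in zip(*np.where(mask)):
        nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        vals = [img[a, b] for a, b in nbrs
                if 0 <= a < h and 0 <= b < w and not mask[a, b]]
        out[i, j] = np.mean(vals) if vals else 0.0
    return out

def markpaint(img, mask, target, eps=0.2, steps=300, lr=0.05):
    """Perturb only the UNMASKED pixels (within an L-infinity budget eps)
    so that inpainting the masked region reproduces `target` there."""
    adv = img.copy()
    h, w = img.shape
    for _ in range(steps):
        filled = inpaint(adv, mask)
        grad = np.zeros_like(adv)
        for i, j in zip(*np.where(mask)):
            nbrs = [(a, b)
                    for a, b in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                    if 0 <= a < h and 0 <= b < w and not mask[a, b]]
            if not nbrs:
                continue
            # gradient of (fill - target)^2 w.r.t. each unmasked neighbour,
            # since fill = mean(neighbours)
            g = 2.0 * (filled[i, j] - target[i, j]) / len(nbrs)
            for a, b in nbrs:
                grad[a, b] += g
        adv -= lr * grad
        adv = np.clip(adv, img - eps, img + eps)  # keep the change subtle
        adv[mask] = img[mask]                     # masked pixels get cut anyway
    return adv

# Flat grey image; an editor will "remove" the central 2x2 patch.
img = np.full((8, 8), 0.5)
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True
target = np.full((8, 8), 0.65)   # what we want the fill to reveal

adv = markpaint(img, mask, target)
```

Inpainting the unprotected image just reproduces the background, while inpainting the markpainted copy converges on the target value inside the hole — even though no single pixel of the protected image moved by more than `eps`.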

Posted on June 10, 2021 at 6:19 AM


Matthias Hörmann June 10, 2021 7:40 AM

Interesting, but it smells a bit like the same sort of snake oil as DRM, CD copy protection, code obfuscation, and the rest: it sounds like it would work in the naive case but probably leaves plenty of room for actual experts to work around it, selling a false sense of security to people who think copyright enforcement is a lot more important to their profits than it actually is.

Clive Robinson June 10, 2021 10:09 AM

@ ALL,

Back in the mid 1990s somebody took the ideas behind “Low Probability of Intercept” (LPI)[1] radio systems and applied them to images as “Digital Watermarks” (DWM), as part of “Digital Rights Management” (DRM)[2].

They all failed for various fairly general reasons [1.3], and it’s probable this system will likewise fail for those reasons or similar ones.

To be honest, most DRM DWM systems used in an “off-line”, “in-band” signalling mode, as would be required with ordinary photographs etc., are going to fail. Similarly for “broadcast systems”.

Whilst not of necessity “snake oil”, history strongly suggests it’s not going to have a long life.

[1] Traditional Low Probability of Intercept radio systems based on Spread Spectrum (SS) techniques are deprecated these days due to technical advances in Digital Signal Processing (DSP) systems. LPI using SS trades bandwidth for noise floor to hide below, using synchronous transmission and reception techniques.

1.1 What is important to understand is firstly that LPI uses a pseudorandom code that is linear, or at least “balanced”, onto which the actual DRM identity etc. is modulated; if it becomes known in part, it can fairly easily be expanded to known in full.

1.2 Also that radio channels have a noise floor based on continuously changing true random noise known as Gaussian White Noise (GWN).

1.3 Whilst recorded media has an approximation to GWN, it is far from “true random” and can be very predictable. Thus the noise can be stripped off and the otherwise hidden DRM DWM signal revealed more clearly.

Taken together, 1.1 and 1.3 mean that DRM DWM data can not only be ascertained but, importantly, not just negated but replaced with another DWM. The problems in 1.3 are not restricted; they are fairly universal to all recorded media with in-band covert channels. Thus there is a high probability all new DRM DWM schemes will fail. For obvious reasons “off-line” systems have to use the equivalent of “in-band” static signalling, which is one of the reasons why there is a concerted push to shift from “off-line” to “on-line” content delivery systems with “out-of-band” dynamic DRM via strong crypto.

[2] Not to be confused with the DRM radio system “Digital Radio Mondiale”, which used similar techniques to LPI.
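The spread-spectrum watermarking Clive describes in 1.1 can be illustrated in a few lines: one bit is modulated onto a keyed pseudorandom ±1 pattern added well below the image's own variation, and synchronous correlation with the same pattern recovers it. (A self-contained sketch, not any deployed scheme; the amplitude `alpha`, the key, and the sizes are arbitrary assumptions.)

```python
import numpy as np

def pn_pattern(key, shape):
    """Keyed pseudorandom +/-1 'spreading code', as in LPI spread spectrum."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(img, bit, key, alpha=0.05):
    """Spread one bit over the whole image, well below the signal level."""
    return img + alpha * (1.0 if bit else -1.0) * pn_pattern(key, img.shape)

def detect(img, key):
    """Synchronous detection: correlate with the same keyed pattern;
    the sign of the correlation recovers the bit."""
    return float(np.mean(img * pn_pattern(key, img.shape))) > 0.0

host = np.random.default_rng(0).random((128, 128))  # stand-in for a photo
marked = embed(host, bit=1, key=42)
```

The mark is invisible against the host image's own variation, but — exactly as point 1.1 warns — anyone who learns the spreading code can strip the correlation out or overwrite the mark with their own bit.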

Chelloveck June 10, 2021 10:11 AM

As I understand it, it’s taking advantage of a particular well-known automated fill algorithm. By introducing specifically crafted noise around a watermark the algorithm can be tricked into filling the area not with something that simulates the background, but with something that simulates the watermark itself. In the more general case noise can be generated throughout the image so that if any part of the image is filled by the algorithm it will produce a colored region instead of the background.

So, if my understanding is correct, it’s an attack against a specific fill algorithm. It’s clever and academically interesting but I don’t think it will be a practical defense. If widely implemented other fill algorithms will be developed which don’t have the same failure modes and won’t be tricked the same way. I expect that trying to use this technique to simultaneously defend against multiple fill algorithms will quickly add an unacceptable amount of noise to the original image. At that point the attacker just needs to cycle through the different fill algorithms to find one that works.

Me June 10, 2021 10:17 AM


I agree it seems like something that will catch the unwary, but those in the know will simply modify their deep-fake algorithms to look for these patterns and replicate them.

Also, I question how useful this will be as so few will even listen to, “it is a deep fake, the evidence is that this micropatterning is broken” when “this is fake, they made her look drunk by playing the video slower” didn’t seem to help.

echo June 10, 2021 10:40 AM

The word “novel” is overused for any old rehash. I know people peddling academic papers and patents may feel compelled to use the word but too often it represents a low bar.

Two legal cases spring to mind. The first was by a lawyer with specialist non-legal knowledge who simply applied the law to the problem. It wasn’t novel, but the rote-learned legal profession thought it was. A second legal case brought case law from one area of law into another because the structure of the legal argument was useful. Lawyers then proceeded to throw their rattle out of the pram claiming legal armageddon, but to my mind this was motivated more by the fact that their domain of legal expertise, and therefore lucrative income, was being trodden on.

I tend to find the reasoning and motivation behind things more interesting lately.

PattiM June 11, 2021 9:46 AM

Pretty good work on that article! It could be useful to attempt a fusion with Tainter and Tainter/Patzek, although I’m not exactly sure how… maybe the (downward) Energy Spiral and how voter-stress affects beliefs? Unfortunately, it’s pretty clear that adding more depth from other branches of science besides game theory to your analysis will probably only make the prognosis for traditional US Democracy worse. (i.e., Anthropocene Arctic phase-change and associated food issues; the known history of civilizations collapsing; etc.)

If that’s true, and given the broad consensus on collapse of globalization, what about the possibility of stabilizing bio-regional democracy? This might be amenable to (cellularized) game theory. Possibly: “Cooperation among a collection of democracies is stabilized under what conditions?” I doubt this is “path independent,” however.

Mr. Peed Off June 12, 2021 9:40 AM

The answer is simple. If it is digital, it is untrustworthy. Snake oil peddlers, government and corporate propagandists, hucksters, and outright con men have made sure that anything digital is so suspect, that the only value is as entertainment.

A Nonny Bunny June 12, 2021 3:09 PM


So, if my understanding is correct, it’s an attack against a specific fill algorithm.

It’s not really all that specific. It relies on a well-documented weakness of many deep neural network models that allows them to be fooled by adversarial input, which makes them see things that aren’t there. And adversarial input that works for one network tends to work to some extent for others as well.
The paper itself shows an example where it works against six different inpainting models.

If widely implemented other fill algorithms will be developed which don’t have the same failure modes and won’t be tricked the same way.

I don’t think, say, Photoshop has much of an incentive to create or adapt a fill algorithm that subverts watermarking. So if this method works for the most common editing software, it would protect fairly well against most non-sophisticated attackers. And sophisticated attackers probably have better means anyway.

Mark Magagna June 17, 2021 8:43 AM

This defense may work against someone who is repurposing a photo taken by someone else.

However it will do nothing against someone who takes a photo, then deep-fakes that photo and applies the defense to the fake. I suspect that this may even make it more difficult to determine what about the photo is faked or even if it is (other than “this defense was applied to the photo at some point”).
