Facebook Fingerprinting Photos to Prevent Revenge Porn

This is a pilot project in Australia:

"Individuals who have shared intimate, nude or sexual images with partners and are worried that the partner (or ex-partner) might distribute them without their consent can use Messenger to send the images to be 'hashed.' This means that the company converts the image into a unique digital fingerprint that can be used to identify and block any attempts to re-upload that same image."

I'm not sure I like this. It doesn't prevent revenge porn in general; it only prevents the same photos being uploaded to Facebook in particular. And it requires the person to send Facebook copies of all their intimate photos.

"Facebook will store these images for a short period of time before deleting them to ensure it is enforcing the policy correctly, the company said."

At least there's that.

More articles.

EDITED TO ADD: It's getting worse:

"According to a Facebook spokesperson, Facebook workers will have to review full, uncensored versions of nude images first, volunteered by the user, to determine if malicious posts by other users qualify as revenge porn."

Posted on November 9, 2017 at 6:23 AM • 67 Comments

Comments

Anselm • November 9, 2017 6:30 AM

In related news, “QA engineer in the image fingerprinting department” is now the most well-responded-to job advertisement in Facebook …

Steve P • November 9, 2017 6:32 AM

This could be motivated by a desire to protect Facebook as much as the potential victims.

The hashing could potentially be performed client-side so the image is never shared.

I think this feature would be ripe for abuse, however.

me • November 9, 2017 6:40 AM

I don't think it can work.
If they use MD5/SHA, you can change one pixel and avoid detection.
But even if they use something better, it is not going to work.
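The MD5/SHA point above is easy to demonstrate: flip a single byte of a pixel buffer and the digest is completely unrelated. A minimal sketch using raw bytes (a real photo re-encode would change far more than one byte):

```python
import hashlib

# Two "images" (raw pixel buffers) differing in a single byte, e.g. one
# pixel's value nudged from 0x7f to 0x80.
original = bytes([0x7f] * 1024)
tweaked = bytes([0x80]) + bytes([0x7f] * 1023)

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tweaked).hexdigest()

# A cryptographic hash gives no hint that the inputs are 99.9% identical:
# the two digests share no structure at all.
print(h1 == h2)  # False
```

This is exactly why plain cryptographic hashes are the wrong tool here, and why perceptual hashes come up later in the thread.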

Computers can't solve problems automagically.
If it can't be solved in real life, it can't be solved with computers...
and this problem can't be solved in real life, because printers have been invented.

I think that people should be smarter and not take such photos.
If they do, they should not send them to anyone.
If they send them to anyone... it's their fault.

"I send this to you, but don't send this to anyone." (Do you see the problem? You are breaking your own rule; you can't hope they will respect it.)

redman • November 9, 2017 6:57 AM

What happens if the other person modifies 2-3 pixels in the original photo?
I guess the hash won't be the same, and Facebook will be easily defeated.

Content King • November 9, 2017 7:04 AM

Every time I read about "revenge porn", I wonder why there isn't an "adult camera" app, that encrypts each photo/video it creates with two (or more -- I'm not judging) keys, one from each participant.

That way, the photos/videos can only be viewed with the consent of everyone involved.
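A toy sketch of that two-key idea. The `seal` and `keystream` helpers here are hypothetical, the "cipher" is just SHA-256 in counter mode standing in for real authenticated encryption, and none of this stops a screenshot of a decrypted view; it only shows that the viewing key can be made to exist only when both participants contribute:

```python
import hashlib
from itertools import count

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. NOT production crypto.
    out = b""
    for i in count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def seal(photo: bytes, secret_a: bytes, secret_b: bytes) -> bytes:
    # The viewing key is derived from BOTH participants' secrets, so
    # neither party alone can decrypt. XOR makes seal its own inverse.
    key = hashlib.sha256(secret_a + secret_b).digest()
    ks = keystream(key, len(photo))
    return bytes(p ^ k for p, k in zip(photo, ks))

photo = b"raw image bytes..."
alice, bob = b"alice-secret", b"bob-secret"
sealed = seal(photo, alice, bob)
assert seal(sealed, alice, bob) == photo        # both keys: decrypts
assert seal(sealed, alice, b"guess") != photo   # one key alone: garbage
```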

Ren • November 9, 2017 7:06 AM

What is going to stop someone from sending ordinary, non-intimate photos of people to this thing and having them removed from Facebook?

Isn't it going to make revenge-porn convictions harder if the victim has already uploaded their photos to the internet?

How are they going to prevent teens from uploading their images?

This sounds like an entirely dumb idea, so I'm guessing it originated with politicians.

225 • November 9, 2017 7:27 AM

The other place this was discussed nailed the real reason: sending incriminating photos to someone increases your trust in them. This is a mental trick to make Facebook more ingrained in people's culture.

And @Steve P nailed another aspect: the hashing should be done client-side, and the photos should stay client-side until the police come over to gather evidence for a trial (if it ever gets there, which, with Facebook's help and the hash, it might in some cases).

Anders • November 9, 2017 7:32 AM

"Facebook will store these images for a short period of time before deleting them to ensure it is enforcing the policy correctly, the company said."

Facebook is all about DATA; their business model depends on our data.
I'm positive that they will NOT delete those photos and will find some
business use for them: offering right-size bra ads or something...

Wael • November 9, 2017 7:32 AM

"...worried that the partner (or ex-partner) might distribute them without their consent can use Messenger to send the images to be 'hashed.'"

In other words: share the images with consent to prevent the images from being shared without consent!

A simple hash won't work. You need image recognition, so that it still works if the image is altered, as @redman described. Fuzzy logic, ML, and neural networks will be needed as well. Seems related to PornHub. Possible acquisition in the pipeline? But the ultimate sanctioned solution is:

If you have nothing to fear, you have nothing to hide (from a camera!) Gotta be consistent!

225 • November 9, 2017 7:41 AM

They are probably not talking about cryptographic hashes, but instead image-based hashing: https://en.wikipedia.org/wiki/Perceptual_hashing

But my point is it should be client-based, and open source while we are at it, and also purposed for copyright protection... yeah, that's never going to happen, since, as the saying goes, we are the product, not the customer.

Seth • November 9, 2017 8:08 AM

For all those wondering how hashing images can handle small changes, there's a good explanation of some algorithms here. Basically, there are "hash" algorithms that are unaffected by changes in color and by small rotations or cropping.
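The matching step such algorithms rely on is usually a Hamming-distance comparison rather than exact equality. A minimal sketch, with made-up 64-bit hash values and an illustrative threshold:

```python
def hamming(h1: int, h2: int) -> int:
    # Number of differing bits between two 64-bit perceptual hashes.
    return bin(h1 ^ h2).count("1")

def same_image(h1: int, h2: int, threshold: int = 10) -> bool:
    # A recompressed/resized copy changes a few bits of the hash, not
    # all of them -- so match by distance, not by equality.
    return hamming(h1, h2) <= threshold

stored = 0xF0E1D2C3B4A59687              # hash submitted by the victim
upload = stored ^ 0b1011                 # same photo, slightly altered (3 bits flipped)
unrelated = ~stored & (2**64 - 1)        # maximally different hash

print(same_image(stored, upload))        # True
print(same_image(stored, unrelated))     # False
```

The threshold is a tuning knob: too low and minor edits evade detection, too high and unrelated images collide.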

The worrying part is that Facebook expects the photos to be uploaded. Something client-side would seem much more trustworthy. Also, doesn't Facebook ban porn in general? It seems that if their filters were working, there wouldn't be any need for this.

Pawel • November 9, 2017 8:36 AM

For all those who wonder how a one-pixel change (or more) will not affect detection, check out PhotoDNA by Microsoft. Cool solution.

225 • November 9, 2017 8:37 AM

@Seth nice quick read; at least shrinking the image before uploading would take some of Facebook's possible enjoyment out of it.

I assume, on your second point, that Facebook will also be trawling through other image boards daily, downloading new images to try to match the hashes. Another dream job to sit next to the QA guy, really.

Peter Galbavy • November 9, 2017 9:24 AM

I can see no way in which this could go wrong.

I also have a bridge to sell you.

Gab • November 9, 2017 9:52 AM

Why it cannot be done client-side:
Some prankster would regularly cause a DoS by uploading sets of, say, pictures in the news, pictures of the POTUS, company logos, etc., to take them down automatically and cause massive disruption on Facebook. Therefore, they need humans to "review" it...
Nothing stops a voracious, data-hungry company.

Gunnm • November 9, 2017 9:57 AM

So, now not only does the Argos Task Force get a free pass to ogle child pornography, but also Facebook's employees?

...and fresh material, too.

This is a weird time to be alive.

If some would take this position just to be able to hawk "original material" on Magic Kingdom or the like, I would not be surprised.

This cannot possibly backfire.

david in toronto • November 9, 2017 10:09 AM

I believe they are using the same method as is used for kiddie porn: digital hashes of the image, plus PhotoDNA, which is designed to handle manipulation such as cropping, resizing, and re-colouring.

echo • November 9, 2017 10:16 AM

Isn't it possible that a better solution might be some kind of government-regulated escrow service, akin to providers of password services, instead of providing an image hash? This would contain your private data within a sandbox, which might be covered by professional liabilities if managed by, say, lawyers or a notary. It would also encourage the development of open standards.

A lower level of assurance might be similar in concept to the patent system, where an image is protected on a first-to-file basis using a client-side hash, verified by a signed key for the transaction provided by an external provider. This would place a small burden on the image owner but allow mass automation.

In time cameras and photo image processing software might include the ability to support this and the security becomes baked in and invisible by default.

I'm sure I've made some howlers with this suggestion. I just wonder whether degrees of assurance can be made a more everyday and natural process, and whether the technical challenges are mostly solved problems.

Alf Watt • November 9, 2017 10:22 AM

My first thought was: "why not hash the images on the device, then just send the hashes?"

Submitted hashes would have to be new (so you can't block previously published images or memes) and should trigger human review if anyone tried to post them (i.e. we only need to see your nudes if someone else posts them).

If you want to train a model from those review cases you can use that to score and prioritize the review queue.

Rachel • November 9, 2017 10:25 AM

i think facebook has had a really hard time being bullied for too long and you are all just being mean. it has feelings. it has needs. c'mon now

Nickie Halflinger • November 9, 2017 10:29 AM

@Rachel: Feelings?? Sure, why not. The Supreme Court has decided corporations have human rights in terms of spending money; I guess they should be held to have feelings too.

I'll buy the idea of corporations as people when one gets put in jail for breaking the law.

Peter S. Shenkin • November 9, 2017 10:41 AM

@me Hany Farid has invented a sort of "retentive hash" that is immune to small changes in the image. He hasn't open-sourced it but has offered it to the community as a black box, initially for use in blocking child-porn images. Click on my name for an example.

albert • November 9, 2017 11:18 AM

Funny that this is being done in Australia, not exactly a bastion of Internet privacy.

You can't cure abuse of technology with more technology, even in a company with the highest moral and ethical standards* (with God as the CEO). This is Tech Theatre, pure and simple.

I hate to point out the obvious, but though discussion of image recognition tech is good, doing so ignores the setting: Facebook.

Anyone who posts 'intimate' photos of themselves or friends -anywhere- online deserves any outcome, however serious. The sheer idiocy exposed by Internet users is simply mind-boggling. Some will be lucky, some will learn in the hardest way. Experience is a dear teacher**.

Don't take this as a defense of Facebook. Their 'effort' is simply window dressing to avoid losing customers:

Facebook and their ilk should -not- be responsible for users' behavior.

That's LE's responsibility. They need to do their job, just like the IC needs to do theirs. Stop pushing your responsibilities off to others. Stop whining and get to it.

--------
*Yeah, it's an oxymoron.
**Let me know if any of you Facebook users need an explanation.
. .. . .. --- ....

Nathan MacInnes • November 9, 2017 11:20 AM

To the people who are concerned about "changing one pixel" to change the hash: image hashing tends to be different from other hashing. An example of a SIMPLE image-hash algorithm might be:
1. Normalize the brightness of the image
2. Downscale the image to 32x32 pixels
3. Convert colours to 1-bit (black and white only)

It can still be got around (e.g., by cropping), but it makes things much harder.
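The three steps above can be sketched in a few lines of pure Python on a synthetic grayscale image (real implementations resample with proper filtering; plain block averaging stands in for the downscale step here):

```python
def average_hash(pixels, size=8):
    # pixels: 2D list of grayscale values (0-255).
    # Step 2: downscale by block-averaging into a size x size grid.
    b = len(pixels) // size
    blocks = [
        sum(pixels[y][x]
            for y in range(by * b, (by + 1) * b)
            for x in range(bx * b, (bx + 1) * b)) / (b * b)
        for by in range(size) for bx in range(size)
    ]
    # Steps 1 and 3: thresholding against the mean both normalizes
    # overall brightness and reduces each block to a single bit.
    mean = sum(blocks) / len(blocks)
    bits = "".join("1" if v > mean else "0" for v in blocks)
    return int(bits, 2)  # a 64-bit fingerprint

# A 64x64 gradient "image": one changed pixel leaves the hash intact.
img = [[(x + y) % 256 for x in range(64)] for y in range(64)]
h1 = average_hash(img)
img[0][0] ^= 0xFF  # the dreaded one-pixel change
assert average_hash(img) == h1
```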

A better approach would be to give people the tools to perform the hash themselves and upload that. Rather than reviewing all submissions, only review matching malicious uploads.

Ruven • November 9, 2017 11:25 AM

This is reminiscent of the paraquat pot problem where Chevy Chase and the SNL Weekend Update crew offered to review any pot anyone thought might have been contaminated. "Please send your contaminated pot to SNL at 30 Rockefeller" et cetera.

Wilson • November 9, 2017 12:33 PM

I guess I would use it if I knew for sure the picture had already been posted somewhere on the internet: FB would eventually have it anyway, and at least this would stop the most powerful channel of diffusion.

Impossibly Stupid • November 9, 2017 12:41 PM

Another reason client-side hashing isn't going to work is that it makes it that much easier to reverse engineer a way to modify images so that they have a different hash. Regardless, though, that's exactly what is probably going to happen: someone who wants to distribute revenge porn (or any other kind) is going to figure out a format that doesn't hash well and use that. It could be something as simple as a viewer that does an XOR with some other random bits that get distributed via another channel.
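The XOR evasion described above takes only a few lines. Any fingerprint computed server-side sees only the obfuscated bytes, while the receiving "viewer" trivially undoes it (a sketch with a repeating pad; the data values are illustrative):

```python
import hashlib

def xor(data: bytes, pad: bytes) -> bytes:
    # XOR each byte against a repeating pad; applying it twice with the
    # same pad restores the original, so this is the whole "viewer".
    return bytes(d ^ pad[i % len(pad)] for i, d in enumerate(data))

photo = b"the prohibited image bytes"
pad = b"distributed via another channel"

obfuscated = xor(photo, pad)           # what actually gets uploaded
assert hashlib.sha256(obfuscated).digest() != hashlib.sha256(photo).digest()
assert xor(obfuscated, pad) == photo   # the recipient recovers the original
```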

Security is hard. Sending intimate photos of yourself to Facebook in the hopes it will keep other people from sending intimate photos of you to Facebook is not security. And let's all start the countdown to the news story that reports a Facebook "breach" that somehow allowed someone to access all those "deleted" photos.

Seth • November 9, 2017 12:54 PM

For those saying it can't be done client side due to malicious actors submitting other images, the last article linked states "the process for Facebook’s Australia-focused pilot starts when a user completes an online form with the local government’s e-safety commissioner." The requirement of government paperwork will probably sort out almost all bad actors.

In the end, the photos have to be reviewed, but that can be when the hash is submitted or when a match is found. Reviewing only when a match is found (and then only the photo which matches the perceptual hash, since the original wouldn't be submitted) would protect the submitter's privacy and prevent abuse of the system. Plus, it's less work on Facebook's part, since photos would only need to be reviewed if a match is found.

justina colmena • November 9, 2017 1:09 PM

"I'm not sure I like this"

I know I don't. It's like those color laser printers with the mandatory proprietary embedded software module whose ostensible purpose is to prevent people from counterfeiting currency, but whose real objective is to keep quality general-purpose printers away from the general public.

I mean, come on. Who's going to pass a laser-printed $100 bill printed on office paper?

No. That's not what they are worried about. Like the "mental illness" hysteria and ban on firearms, it's a VIP/Secret-Service thing. Books and pamphlets deemed politically subversive are what they don't want printed.

In this case they (the VIPs and the whores they patronize) want the ability to arbitrarily censor photographs of themselves and carefully manage their online professional image.

hmm • November 9, 2017 2:16 PM

Ultimately it's ZUCKERBERG managing this, effectively OWNING these porno selfies, right?

FB owns what you upload to it. Presumably they'll have to keep a DB of all of these.

What better source of blackmail? It's like KGB Kompromat opt-in.

hmm • November 9, 2017 2:20 PM

"Who's going to pass a laser-printed $100 bill printed on office paper?"

It does happen.

Stephen • November 9, 2017 4:56 PM

FYI... Per a news article (http://www.bbc.com/news/41928848), Facebook doesn't delete the images. It is up to the sender to delete the images.

If the sender does not delete the image, the image will remain on Facebook indefinitely.

James • November 9, 2017 5:52 PM

"Facebook will store these images for a short period of time before deleting them to ensure it is enforcing the policy correctly, the company said."

"According to a Facebook spokesperson, Facebook workers will have to review full, uncensored versions of nude images first, volunteered by the user, to determine if malicious posts by other users qualify as revenge porn."

So if I want to shut down FB, I simply create an untraceable account, upload some known child porn, then call the cops / FBI and say that FB is in possession of, storing, and distributing (they are showing it to their employees; the QC folks) child porn. Bye bye Facebook.

Speaking of child porn, how are they going to know which naked pictures are of 18+ and which are 17-? Is it FB's job to sort this out? If not, then they could easily run afoul of child porn laws.

What does FB consider 'revenge porn'? If I have a pic of my Amish friend driving a car, does that count? What about my Jewish friend eating a BLT? How about a married friend making out with not-their-spouse? Granted these are not 'porn' in the traditional sense, but they could certainly be used for revenge.

What about fake revenge porn (putting Melania Trump's head on a porn star's body)? Does that count as revenge porn? Do I need to go generate every reasonable porn star + my head picture and give that to FB to protect myself, "just in case"?

IANAL, but isn't FB opening themselves up to a lawsuit? If I upload a legit revenge-porn pic, they accept it and hash it, but then fail to block it later, can I take FB to court for breach of contract (they implied they'd block it)? Seems like a slippery slope.

So many ways this can go wrong. So much liability for FB. Seems like a bad idea. On the plus side, if FB gets sued into oblivion that would be an awesome benefit to society (IMHO).

Alejandro • November 9, 2017 6:27 PM

Private stock porno for FB staff?

There is something so sick and weird about this that I can't wrap my head around it.

Why would anyone in their right mind GIVE FB nude pics???

Will the pics become part of a user's personal data bank, for sale to whomever?

Why would anyone believe FB about anything?

Strange times we live in. And who we trust.

Impossibly Stupid • November 9, 2017 7:53 PM

@Seth

"The requirement of government paperwork will probably sort out almost all bad actors."

Ha ha ha. That's just not how technology works. Or people. All it takes is one person to get access to a client-side app and it can be distributed and reverse engineered until the algorithm is known by all. Heck, it could even be a person who at first was a victim of revenge porn (and thus went through the paperwork) who then turns around in a later relationship and wants to post it. This is why it's usually a bad idea to trust the "good guys" to be responsible in all their actions (and that includes Facebook's own role in this terrible idea of theirs).

@justina colmena

"In this case they (the VIPs and the whores they patronize) want the ability to arbitrarily censor photographs of themselves and carefully manage their online professional image."

That just doesn't pass the smell test. Posting images online is a far cry from having to build your own printer. Anybody who gets censored on Facebook for some innocuous image can easily bounce over to any other social media or news site (or their own blog) and start the ball rolling on a story about how Facebook is oppressively blocking specific non-porn images. Because, after all, if the conspiracy is as grand as you suggest, why is it still so very easy to get all manner of copyrighted material that "they" would rather not have pirated?

Simply • November 9, 2017 11:11 PM

To circumvent deletion, the revenge-porn miscreant simply posts the nudes at a site outside Twitter, out of reach of take-down laws, and on Twitter posts the links or a shortened URL.

Re: Content King • November 9, 2017 11:18 PM

Screen grabs make that moot for those who don't want to jump through hoops each time the nudes need to be viewed.

Andrew_in Darwin • November 10, 2017 12:53 AM

Upload the hash, and only view the image when it matches a picture uploaded by someone else. If it is revenge porn, delete the image and mark the hash as a validated revenge-porn hash. If it isn't revenge porn, keep the image, mark the hash as a validated non-revenge-porn hash, and don't allow that hash to be submitted again. That way the image is only checked once: when someone else uploads it and a hash match is made. The decision is made when there is a match, not for every image.

Carnac the Magnificent • November 10, 2017 2:58 AM


I see... bold words... yes... they're headlines... coming into focus now:

INVESTIGATION REVEALS ANTI-ABUSE PHOTOS SHARED ON INTERNET

FACEBOOK CONTRACTORS FIRED OVER ANTI-ABUSE PHOTO SHARING

FACEBOOK PROMISES IMPROVED SECURITY AFTER ANTI-ABUSE PHOTO SHARING INCIDENTS

ATN • November 10, 2017 4:55 AM

@Alejandro:
> Why would anyone in their right mind GIVE FB nude pics???

Looks to me like FB is the new religion: you have to tell FB whatever you are doing in your life, and probably at the end of your life someone will review your FB record and give you a pass/fail grade on your life.
The reason you behave "this way" is "What would FB do?": whatever makes a better profile.

What I still do not understand is why FB users think they can delete/modify their account and nobody can access the deleted stuff, or why setting something to "private" would stop someone paying FB (like their next employer) from seeing the whole profile, and obviously the nude/porn photos/videos.

You know the song extract "some of them want to be abused".

Einstein said this century would be religious; he did not know FB. He was not far off.

Can and Will Be Used Against Career Advancement • November 10, 2017 5:11 AM

Expect when you write a blog, send an email, post a comment or picture on the Internet that it WILL become public.

Expect it will be viewed by those you had never anticipated, like future employers, insurance companies or security clearance officials.

Convenience has a Price
If you value your reputation, wish to be taken seriously and have an excellent potential for career advancement, then keep your sexuality off your addicting ‘smartphone’.

These common sense subjects SHOULD be taught in schools … but how many teachers are arrested for pictures on smartphones?

Here is an example of terminal repercussions from an old ‘private’ Internet blog:
http://www.detroitnews.com/story/news/politics/2017/11/09/kelly-education-nomination-yanked/107509136/

Am I Stupid?
Now go order your Echo or Siri... to eavesdrop on the screaming and moaning (then arguing!)

Seth • November 10, 2017 8:59 AM

@Impossibly Stupid
So, security by obscurity is necessary to protect perceptual hashing? I'm not sure what sort of criminal masterminds you believe are sending revenge porn to the friends or family of their ex on Facebook, but I think the people this feature is meant to stop aren't technically inclined. This is (probably) based on the technology Facebook already uses to stop child porn, and I think most of the issues with circumventing it have been solved. The only big issue here is that Facebook expects people to trust them with their compromising photos; that alone is enough to make this a terrible idea.

Clive Robinson • November 10, 2017 9:03 AM

The problem with "hashing".

As is often the case, there is an issue with what the word "hashing" means. Often even understanding the context it is being used in does not help.

At the basic level a hash is a number in a fixed range that helps differentiate objects by a normalisation process.

Thus one of the earliest "porn" hashes was to count the number of "flesh tone" pixels and express them as a percentage of the total pixels.

So yes, it caught a lot of images with flesh tones in them, quite a number of which, however, were not "porn" or even of animate objects. Likewise, "porn", sometimes quite hard-core, was missed because its proportion of flesh tones was not too dissimilar to that of a portrait, a group photo, or photos from nights out, etc.
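A sketch of that early flesh-tone heuristic (the RGB thresholds here are illustrative, not the historical values). It also shows the false-positive problem: a close-up portrait scores high even though it is not porn:

```python
def flesh_tone_score(pixels):
    # pixels: list of (r, g, b) tuples. Returns the percentage of pixels
    # falling inside a crude "flesh tone" region of RGB space.
    def is_flesh(r, g, b):
        return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15
    hits = sum(is_flesh(*p) for p in pixels)
    return 100.0 * hits / len(pixels)

portrait = [(200, 150, 120)] * 80 + [(30, 30, 30)] * 20   # face close-up
landscape = [(80, 140, 60)] * 100                          # greenery

print(flesh_tone_score(portrait))   # 80.0 -- flagged, though it's not porn
print(flesh_tone_score(landscape))  # 0.0
```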

Later hashes tried to improve on this by using crude image recognition looking for "body parts".

Whilst marginally more successful at recognizing certain image types, it was considerably less efficient, due to the vastly greater number of CPU cycles used to get the marginally better hash value.

The problem with hashing by "meaning" rather than "data" is that getting a computer to recognize meaning in human terms is vastly more complicated and difficult than simply hashing by simple data metrics.

However, another problem area arises because certain fields of endeavour have added extra meaning to the term "hashing", and people start getting confused about what is actually meant. For instance, crypto hashes have extra meaning over the simple meaning given above. One is the idea of a crypto hash being in effect a "one-way" function, whereby two different inputs that produce the same output should be very difficult to find except by brute-force searching. Thus, if the hash value is made sufficiently large, the crypto hash gets what is in effect a security value based on its size: increasing the hash size by one bit doubles the search space that has to be examined, thus doubling the average search time.
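That one-bit-doubles-the-work property can be checked empirically with a toy brute-force search for digests beginning with n zero bits (on average about 2**n attempts each; the counter-string input is just an illustrative search space):

```python
import hashlib

def preimage_search(prefix_bits: int) -> int:
    # Find the first counter whose SHA-256 digest starts with
    # `prefix_bits` zero bits. Expected work: ~2**prefix_bits hashes.
    i = 0
    while True:
        digest = hashlib.sha256(str(i).encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - prefix_bits) == 0:
            return i
        i += 1

# Each extra target bit roughly doubles the counters tried on average.
for bits in (8, 12, 16):
    print(bits, "zero bits -> first match at counter", preimage_search(bits))
```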

Important to remember is that no hash gives guarantees of "uniquely identifying" an input.

All in all this FB effort is I suspect a front for other activities, otherwise it would be a sink hole of fixed costs and large CO2 footprint...

So the question is "as the product" what does FB gain in dollars and cents by doing this and thus "what do we lose"...

david in toronto • November 10, 2017 9:54 AM

Let's start by giving FB some credit for a good idea and a good cause. It's a courageous issue to tackle. After trying to digest the initial creepy factor, and reviewing the comments here, we have:

* The PhotoDNA "hash" has a proven record in dealing with child-porn but all photos need to be uploaded. Big discussion needed.
* Client side hashing has some technical risks (i.e. gaming the system) that need to be looked at and evaluated.
* Several readers have pointed out that you could upload, hash, and delete without viewing the originals. Just quarantine matches and review them. You could add feedback on the submitter to deter abuse. There are likely other anti-abuse mechanics possible. This seems a less creepy and less risky alternative. What other drawbacks arise? How do we get around these?
* Can this be made less creepy?
* As also pointed out in the discussion thread. There are a bunch of other implementation issues and legal/rights issues that need to be considered. The FB IP ownership being one of the creepiest.

As a pilot, this might work. The larger problem, which goes way beyond a pilot, is how to broaden this if it can work, and whether we should. That is a very big discussion.

Another observation is that it only works as a preventative for photos you already have. For photos taken without your permission, it can speed removal and the right to be forgotten, but you'd need to get copies. Still, putting the cat back in the bag is a huge problem for this sort of thing.

Yes, this gives one pause, it has a creepy aspect, it's not at all comfortable. But it's probably a question that should be considered and discussed. Carefully.

And there is the "do you trust FB (or anyone else like this)?" vs. "it's going to continue to happen without something" trade-off. Where is the balance, and is it worth it?

Lastly, it raises some policy questions about scope creep. I could see a big push from IP owners to do something like this to protect their interests.

Ignazio Palmisano • November 10, 2017 10:44 AM

Wait. Am I mistaken, or does Facebook already have good enough face recognition to spot the same person's face in multiple pictures? If so, it doesn't need to see the naked picture first. If the face is in the picture, it just needs to tag the person and ask for approval. If the face is not in the picture, it's a lot harder to prove it's a picture of a particular person. Tattoos and the like might be used, but that takes a lot of the sting out of the revenge.
Same applies to video.

david in toronto • November 10, 2017 11:19 AM

@Ignazio Good thought, but I don't think that face recognition is enough. The system would need to recognize that the person is naked, is with someone who is naked, or is doing something private.

And then there is the question of what revenge porn looks like to other cultures. Hence why this will lead to a big discussion. This pilot has to be by definition a first step.

mostly harmful • November 10, 2017 12:56 PM

@alejandro "Why would anyone in their right mind GIVE FB nude pics???"

"Mark" Mountain-of-Suckers doesn't know the answer either: http://www.businessinsider.com/well-these-new-zuckerberg-ims-wont-help-facebooks-privacy-problems-2010-5

Zuck: Yeah so if you ever need info about anyone at Harvard

Zuck: Just ask

Zuck: I have over 4,000 emails, pictures, addresses, SNS

[Redacted Friend's Name]: What? How'd you manage that one?

Zuck: People just submitted it.

Zuck: I don't know why.

Zuck: They "trust me"

Zuck: Dumb fucks

Say what you will, it's no lie.

Impossibly Stupid • November 10, 2017 3:10 PM

@Seth

"So, security by obscurity is necessary to protect perceptual hashing?"

Yeah, essentially. That's why it's a bad idea to allow client-side calculation, and why you see things like mobile phones that do fingerprint and facial recognition hiding that info in a separate secure area. Because once it gets out into the wild, it's all just data that can be manipulated and hacked. Similar to the stop-sign-mistaken-for-a-speed-limit-sign paper that Bruce wrote about.

"I'm not sure what sort of criminal masterminds you believe are sending revenge porn to friends or families of their ex on facebook, but I think the people this feature is meant to stop aren't technically inclined."

Like I said, that's just not how technology works. Some hackers are nothing more than script kiddies, who simply stand on the shoulders of the people who are technically inclined. You have to expect the same will be true for anyone who is looking to get around any measures Facebook (or anyone else) puts in place to catch revenge porn (or any other "prohibited" content). They just need to use an app, not write it or invent the algorithm behind it.

"This is (probably) based on the technology facebook already uses to stop child porn, I think most of the issues with circumventing it have been solved."

Uh, I don't think child porn is a solved problem, not even on just Facebook. Ultimately, it's all just data, and you simply can't put a stop to every possible way to represent it. The basic use of encryption, for example, makes the data opaque to analysis by any of Facebook's detection algorithms.

mostly harmful • November 10, 2017 6:28 PM

So, about this photoDNA project in general, and considered apart from Facebook's prophylactic use-case contra revenge porn: Having read the descriptions linked above, I wondered about photoDNA's potential for false positives. The obvious search terms yielded no github repo or anything inspectable like that. (Also, I did check out Hany Farid's faculty page at Dartmouth, which looks very interesting, but AFAICT all of the photoDNA material linked there was popular press: http://www.cs.dartmouth.edu/farid/ )

That was kind of disappointing. And kind of disturbing, too, given a project whose promoters (whom I herein assume are disjoint from the developers) so breathlessly try to convince casual readers is vewy vewy important and moreover ready to roll.

Some popular-oriented articles from a search engine did mention the false positive concern briefly, as did a 2011 New York Times article linked on Farid's website:

https://gadgetwise.blogs.nytimes.com/2011/05/19/facebook-to-combat-child-porn-using-microsofts-technology/


PhotoDNA works by creating a “hash,” or digital code, to represent a given image and find instances of it within large data sets, much as antivirus software does for malicious programs. However, PhotoDNA’s “robust hashes” are able to find images even if they have been altered significantly. Tests on Microsoft properties showed it accurately identifies images 99.7 percent of the time and sets off a false alarm only once in every 2 billion images, and most of them point to nearly identical images, Dr. Farid said.


http://www.securityfocus.com/news/11570


"We tested [the PhotoDNA tool] over billions and billions of images," he said. "We tried very hard to make it very efficient … and to minimize the false alarm rate."


The securityfocus article does not test its—doubtlessly deeply concerned—readers' patience with any further discussion of what it means to "tr[y] hard […] to minimize the false alarm rate."

But, okay, so they collected and checked a set of billions of images (on Microsoft's premises, I gather) for false positives. Fine, so far as that goes. But how far does that go?

Hard to say. Notably, I found no mention of the developers trying to construct their own false positives. Have they tried to do so at all? (We can be nearly certain that the developers did not attempt to construct billions of them; the two-billion-to-one result is from testing whatever collection of images Microsoft happened to host at the time.)
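To make the concern concrete, here is a toy sketch of one well-known perceptual-hashing scheme, an "average hash". PhotoDNA's own algorithm is, again, not inspectable, so this is only an illustration of the general idea: why a robust hash tolerates edits that would completely change an md5/sha digest, and why "match" becomes a distance-threshold question, with false positives living on the other side of that threshold.

```python
# Toy "average hash" (aHash). This is NOT PhotoDNA's algorithm, which is
# unpublished; it only illustrates why robust hashes survive small edits.

def average_hash(pixels):
    """Hash a grayscale image (2D list of 0-255 values) to a bit string.

    Each bit records whether a pixel is above the image's mean brightness,
    so small brightness tweaks rarely flip many bits.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count differing bits; a 'match' is a distance below some threshold."""
    return sum(x != y for x, y in zip(a, b))

# A 4x4 "image" and a copy with one pixel slightly brightened.
img = [[10, 200, 30, 220], [15, 210, 25, 230],
       [12, 205, 35, 215], [18, 190, 40, 225]]
tweaked = [row[:] for row in img]
tweaked[0][0] += 5  # a single-pixel edit would change md5/sha entirely

h1, h2 = average_hash(img), average_hash(tweaked)
print(hamming(h1, h2))  # prints 0: the robust hash is unchanged by the edit
```

The flip side is exactly my worry above: two visually unrelated images whose bit patterns happen to land within the match threshold are a false positive, and nothing in the published material tells us how hard such collisions are to construct deliberately.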

In the absence of evidence of any serious attempt to do so, and given the potentially dire consequences for malicious construction of false positives (and failure to detect the same), I find the fanfare premature. Suspicious, even.

We already have quite a long history of forensics professionals misrepresenting, in the courtroom, the potential for false positives regarding other sorts of "fingerprints", with tragic consequence for the wrongly accused. I'd be happy to discover that a systemic lesson has been learned, but I am not going to hold my breath for that.

In this regard, do note the suggestive language employed by that 2011 article in the New York Times [bold mine]:

Microsoft says it has refined a technology it created called PhotoDNA to identify the worst of these disturbing images — even if they are cropped or otherwise altered — and cull through large amounts of data quickly and accurately enough to police the world’s largest online services.

Note as well, in that same NYT piece, the disturbingly familiar tune sung by the Mouth of Sauro—cough—by Microsoft's general counsel, one Mr Brad Smith:

“We’re very passionate about PhotoDNA because we’ve seen it work,” said Brad Smith, Microsoft’s general counsel. “We invented it through Microsoft research, and we are trying to give it away free, including to our competitors.”

He encouraged consumers to pressure online services to adopt it.

I would be grateful if anyone can share a pointer to more information regarding my primary concern, namely photoDNA's resistance to malicious construction of false positives. I find the technical topic intriguing, but am totally bored by the breathless soccer-mom fearmongering that threatens to consume every last column-inch.

justina colmenaNovember 10, 2017 8:00 PM

@Impossibly Stupid

Anybody who gets censored on Facebook for some innocuous image can easily bounce over to any other social media or news site (or their own blog) and start the ball rolling on a story about how Facebook is oppressively blocking specific non-porn images.

Don't get smart with me.

Year after year after year, without respite, and without any assistance whatsoever from FBI or local police departments, I have fought the MOB. The MOB has REPEATEDLY, that is, TIME AND TIME AGAIN, stolen and robbed the entirety of my personal data, online and off, especially anything electronic, all digital photographs I have taken, all my online accounts, and all my private cryptographic keys.

The MOB to this day makes money off my "intellectual property," denies that I am the creator of it, and denies me access to my own electronic data.

TIME AND TIME AGAIN, the same MOB has had me committed on false pretenses to the state insane asylum in Steilacoom, Washington, not far from Redmond, the headquarters of Microsoft Corporation.

Just the icing on the cake: TIME AND TIME AGAIN, my own family has betrayed me to the MOB and betrayed every shred of trust I had in them.

Impossibly StupidNovember 11, 2017 3:28 PM

@justina colmena

Don't get smart with me.

It's in everyone's best interest to get smart with themselves, and with each other. Failure to do so can lead you to end up sounding like just another Internet crank or sock puppet. Writing in all caps is not your friend.

The MOB has REPEATEDLY, that is, TIME AND TIME AGAIN, stolen and robbed the entirety of my personal data, online and off, especially anything electronic, all digital photographs I have taken, all my online accounts, and all my private cryptographic keys.

Again, that simply doesn't pass the smell test. You're also arguing against yourself. First you say the technology that is the subject of this article is going to be used by the elites to hide themselves, and here you go off on a wild rant about how it's easy to gather up all the data any person has, no matter how small and unimportant they are. Which is it? (Hint: that's rhetorical, as all evidence instead points to a third narrative).

TIME AND TIME AGAIN, the same MOB has had me committed on false pretenses to the state insane asylum in Steilacoom, Washington, not far from Redmond, the headquarters of Microsoft Corporation.

Uh . . . yeah . . . and this has precisely what to do with the topic of a technology that puts images into buckets like "revenge porn"? If you can't maintain a coherent train of thought and stick to a rational discussion of the topic at hand, you'll have a hard time convincing people of anything.

The Central ScrutinizerNovember 13, 2017 4:08 AM

So Facebook is now becoming a porn site... great! What could possibly go wrong? What twisted, perverted logic: asking a victim to become a victim to prevent becoming a victim.

RachelNovember 13, 2017 10:47 AM

I am sorry I cannot post the link, but the Guardian has a good article explaining how this concept traumatises victims and just how disingenuous it is on a human level in light of their claims to "safety"; further, that this is an issue for law enforcement to remedy, and if Facebook cared they would support such efforts with their wallet.

I'd go a step further and say: teach young people to have more awareness about whom they get involved with and who, if anyone, gets to film them naked.

RachelNovember 13, 2017 10:56 AM

Justina

Bumblebee! Long time.
I am genuinely sorry you feel traumatised, but sharing it here is only going to make you feel worse. It also detracts from the quality we are attempting to maintain. Would you express yourself with such fervour at a dinner party? Imagine us as real-life people having a civilised discussion. Your stance helps no one, least of all yourself. Can you please save it for a close ally or therapist?

jerNovember 13, 2017 2:09 PM

1. Facebook makes more money as people entrust more private information to them.
2. Revenge porn prevention is a nice carrot/stick to gain people's trust in entrusting private information.
3. Facebook wants to share (more) private information between its different products but is encumbered in doing this (openly) by local laws in many places.
4. Revenge porn prevention is a nice carrot/stick to gain authorities' trust in sharing entrusted private information among its different products.
5. ...
6. Profit!

mostly harmfulNovember 13, 2017 2:49 PM

@Rachel

I am sorry I cannot post the link, but the Guardian has a good article explaining how this concept traumatises victims and just how disingenuous it is on a human level in light of their claims to "safety"; further, that this is an issue for law enforcement to remedy, and if Facebook cared they would support such efforts with their wallet.

The following opinion piece fits your description, I think:

Van Badham in The Guardian 2017-11-12
Sending in our nude photos to fight revenge porn? No thanks, Facebook
https://www.theguardian.com/commentisfree/2017/nov/13/sending-in-our-nude-photos-to-fight-revenge-porn-no-thanks-facebook#maincontent

Badham not only discusses (lampoons, criticises) this November 7 Guardian article, which is a (paraphrased?) Facebook press release making no attempt to masquerade as journalism, but also riffs on a dangerously absurd Australian Broadcasting Corporation piece:

Caitlyn Gribbin at ABC 2017-11-02
Revenge porn: Facebook teaming up with Government to stop nude photos ending up on Messenger, Instagram
http://www.abc.net.au/news/2017-11-02/facebook-offers-revenge-porn-solution/9112420


If you're worried your intimate photos will end up on Instagram or Facebook, you can get in contact with the e-Safety Commissioner. They might then tell you to send the images to yourself on Messenger.


Yep, you heard that right. Send your own nudes … to yourself.


"It would be like sending yourself your image in email, but obviously this is a much safer, secure end-to-end way of sending the image without sending it through the ether," [e-Safety Commissioner] Ms Inman Grant said.


*head explodes*

Judge Jury and ExecutionerNovember 13, 2017 6:42 PM

@225

The other place this was discussed nailed the real reason: sending incriminating photos to someone increases your trust in them. This is a mental trick to make Facebook more ingrained in people's culture.

And @Steve P nailed another aspect: the hashing should be done client side, and the photos should stay client side until the police come over to gather evidence for a trial (if it ever gets there, which with Facebook's help and the hash it might in some cases).

I think the most important and underrepresented angle of this is the policing-jurisdiction/free-speech issue. In other words, these are the devilish details I see lying in wait:

According to a Facebook spokesperson, Facebook workers will have to review full, uncensored versions of nude images first, volunteered by the user, to determine if malicious posts by other users qualify as revenge porn.

The issue of "policing the internet" in a way that protects Free Speech and Due Process, as we want it to, is the elephant in the room. If Facebook employees' opinions/judgements about an image "qualifying as revenge porn" merely initiate a phone call to the police, handing all subsequent responsibility for the issue to the police and courts, then I think that is fine and as it should be. The problem is when the subsequent actions taken are due to choices made by the Facebook corporation instead of the police. If that happens, then you must at least admit that Facebook has nothing to do with Free Speech. Now, I know that already, but political leadership for the last couple of decades has spun some lofty narratives implying that Facebook has anything at all to do with Free Speech.

The bigger, more important IMO superset issue is that the internet as technology has facilitated an exponential growth in Speech. But governments have not met the challenge of exponentially growing their police and court systems to handle the increased load. And corporations understand the power of being allowed to be Judge, Jury, and Executioner. So they aren't going out of their way to explain this nature of the problem to the public.

Any definition I can imagine of 'revenge porn' is so thoroughly nuanced that I am disturbed (in an aged, cynically accepting way) that Facebook would accept jurisdiction and responsibility here. I'm pretty sure they see $$ value in the Machiavellian business long term. If Facebook were more ethical, they would refer all complaints to the police and let them sort it out with court orders that Facebook is willing to obey. Of course, Machiavellian as it is, Facebook probably understands that lazy political leaders won't head down a good path in such a case, and so they come up with this first-pass attempt at something most effective at making it appear that they deserve the faith the masses put in them. Sigh.

MobileeNovember 14, 2017 3:55 AM

The detection uses CNNs, which would produce fuzzy matches to the images; no single-pixel change can throw off the matching. Overall, this is a bad idea, but well positioned for the millennial set.
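To be clear, the CNN claim is speculation; Facebook has not published its method. But the usual shape of such a system is: a model maps each image to a feature vector, and two images "match" when their vectors are similar enough. A minimal sketch with made-up embedding values:

```python
# Hypothetical sketch of embedding-based matching. The embeddings below are
# invented for illustration; in a real system a neural network would produce
# them, and small edits to an image barely move its vector.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings: an original, a lightly edited copy, and an unrelated image.
original  = [0.9, 0.1, 0.4, 0.2]
edited    = [0.88, 0.12, 0.41, 0.19]  # tiny perturbation, e.g. one pixel changed
unrelated = [0.1, 0.9, 0.05, 0.8]

THRESHOLD = 0.95
print(cosine_similarity(original, edited) > THRESHOLD)     # True: still matches
print(cosine_similarity(original, unrelated) > THRESHOLD)  # False: no match
```

Which is exactly why single-pixel evasion doesn't work against this kind of matching, and also why the interesting attacks become adversarial ones: crafting inputs that land near (or far from) a target vector on purpose.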

mostly harmfulNovember 14, 2017 4:23 AM

I wrote above, regarding the oversell of photoDNA hype:

We already have quite a long history of forensics professionals misrepresenting, in the courtroom, the potential for false positives regarding other sorts of "fingerprints", with tragic consequence for the wrongly accused. I'd be happy to discover that a systemic lesson has been learned, but I am not going to hold my breath for that.

Regarding that long history (h/t @BarrettBrown_):

FBI’s flawed forensics expert testimony: Hair analysis, bite marks, fingerprints, arson; OR
The FBI faked an entire field of forensic science
by Dahlia Lithwick, in Slate 2015-04-22
https://www.slate.com/articles/news_and_politics/jurisprudence/2015/04/fbi_s_flawed_forensics_expert_testimony_hair_analysis_bite_marks_fingerprints.html

FBI admits flaws in hair analysis over decades
by Spencer S Hsu, in the Washington Post 2015-04-18
https://www.washingtonpost.com/local/crime/fbi-overstated-forensic-hair-matches-in-nearly-all-criminal-trials-for-decades/

Robert.Walter November 19, 2017 9:33 PM

This seems a really dumb idea.

Only works if you control the source photo?

Can it work if you never had possession of the source photo?

Do you have to ask your potential revenge-porn partner, while on friendly terms, to always give you a copy so you can preemptively send it to Facebook?

Does it do anything to stop the proliferation of items already out there that never first appeared as flagged?

Can it be used to block a common non RP photo that someone wants removed from FB?

I can’t imagine most folks will be bothered to take the time to use this service before the cat is out of the bag.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.