"Proof Mode" for your Smartphone Camera

ProofMode is an app for your smartphone that adds data to the photos you take to prove that they are real and unaltered:

On the technical front, what the app is doing is automatically generating an OpenPGP key for this installed instance of the app itself, and using that to automatically sign all photos and videos at time of capture. A sha256 hash is also generated, and combined with a snapshot of all available device sensor data, such as GPS location, wifi and mobile networks, altitude, device language, hardware type, and more. This is also signed, and stored with the media. All of this happens with no noticeable impact on battery life or performance, every time the user takes a photo or video.
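The bundle described above — a SHA-256 hash of the media plus a snapshot of sensor data, which is then signed — can be sketched as follows. This is an illustration only, not ProofMode's actual code: the function and field names (`build_proof`, `media_sha256`) are invented for the example, and the OpenPGP signing step is omitted since Python's stdlib has no OpenPGP support.

```python
import hashlib
import json

def build_proof(media_bytes, sensor_snapshot):
    """Sketch of a ProofMode-style bundle: hash the media, attach the
    sensor snapshot, and emit a canonical record that the app would
    then sign with its per-install OpenPGP key (signing omitted)."""
    proof = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": sensor_snapshot.get("timestamp"),
        # GPS, wifi/mobile networks, altitude, language, hardware, ...
        "sensors": sensor_snapshot,
    }
    # sort_keys gives a canonical serialization, so the same capture
    # always produces the same bytes to sign
    return json.dumps(proof, sort_keys=True)
```

The proof travels in a separate file alongside the media, linked to it only by the recorded hash.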

This doesn't solve all the problems with fake photos, but it's a good step in the right direction.

Posted on March 1, 2017 at 6:02 AM • 22 Comments

Comments

Bogdan Kulynych • March 1, 2017 6:29 AM

I don't see how signing metadata along with the picture produces "proofs". I can sign whatever sensor snapshot I want with whatever picture I want. Am I missing something?

Using Peter Todd's OpenTimestamps, however, would have partially helped.

zuc • March 1, 2017 6:30 AM

It's always surprised me that there isn't (to my knowledge) any service that does time-based signing. E.g., some trustable (?! I guess this is the problem) government institution provides an API for submitting data to a server, which signs it with some private key that is then destroyed. The associated public key is published and can be used to verify the date and time at which the data was signed.
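The workflow zuc describes — stamp within a window, then retire the key and publish the verification material — can be sketched with stdlib tools. Since Python's stdlib has no asymmetric signing, this toy uses an HMAC whose key is disclosed when the window closes (in the spirit of TESLA-style delayed key disclosure); a real service would use asymmetric signatures, e.g. RFC 3161 timestamps. All names here are invented for the example.

```python
import hashlib
import hmac
import os

class ToyTimestamper:
    """Toy stamping service: data submitted during a window is MACed
    with a per-window key; when the window closes, the key is published,
    letting anyone verify that a stamped (data, time) pair was fixed
    before the disclosure."""
    def __init__(self):
        self._key = os.urandom(32)   # per-window secret
        self.published_key = None    # revealed when the window closes

    def stamp(self, data: bytes, ts: float) -> bytes:
        msg = data + repr(ts).encode()
        return hmac.new(self._key, msg, hashlib.sha256).digest()

    def close_window(self):
        self.published_key, self._key = self._key, None  # "destroy" by disclosure

def verify(published_key: bytes, data: bytes, ts: float, tag: bytes) -> bool:
    msg = data + repr(ts).encode()
    expected = hmac.new(published_key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

The trust problem zuc flags remains: you must believe the operator really destroyed (or honestly disclosed) the key on schedule.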

Thoth • March 1, 2017 6:45 AM

@Simon

To respond bluntly, they just do it with "magic" ;-P .

Look at the source code (scroll down) under the createSignature() method in the link below. It takes in a PGPSecretKey object and a password, called pass, in char[] form. Not only is the PGP private key extracted into RAM together with the decryption password char[]; they make no effort whatsoever to scrub the private key from memory by calling some form of destroy() method, and have no clue about overwriting the password char[] with zeros.

Some might question whether overwriting the password char[] with zero bytes and calling destroy() on the secret key material is sufficient, but at the very least it would be basic entry-level security practice.

They simply leave the key and password hanging in memory, like many other so-called "security apps" that have extremely low-quality security and do not bother with basic due diligence at all.
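The hygiene being described — overwrite the secret buffer once you are done with it — looks like this as a sketch. In the Java code under discussion it would be `Arrays.fill(pass, '\0')`; the Python version below uses a `bytearray`, since Python's `str` and `bytes` are immutable and cannot be scrubbed in place. Note that this is best-effort in any garbage-collected runtime: copies the VM made elsewhere are not erased.

```python
def scrub(buf: bytearray) -> None:
    """Best-effort zeroing of secret material held in a mutable buffer."""
    for i in range(len(buf)):
        buf[i] = 0

# usage pattern: scrub in a finally-block so the secret is wiped
# even if the operation using it throws
password = bytearray(b"hunter2")
try:
    pass  # ... use password to unlock the signing key ...
finally:
    scrub(password)
```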

Not only is this ProofMode app not going to prove much (i.e. GPS spoofing, GSM interception and modification ...), it is going to put the user at extreme risk.

Imagine an activist caught with ProofMode on their smartphone. With the Trump tyranny ongoing and Trump's recent push to back the corrupted LEAs, those LEAs would have even more reach and ability to coerce the password to the PGP private key out of the user, with techniques including Enhanced Interrogation Techniques (i.e. waterboarding), since Trump has shown willingness to legalize the use of torture even against fellow Americans.

A better scheme that I came up with in the past is to use a secure server backed by an HSM or smart card to generate a user keypair. The user public key would be installed on the smartphone without needing any passwords whatsoever. The user private key would be stored in an HSM or smart card attached to a remote server in a safe location for provisioning. The usual defenses, like requiring a quorum of administrators to control the private key, would be put in place. The smartphone would essentially only be able to encrypt data with the user public key, so that in the event of coercion the user would have nothing to disclose; control of the private key would rest with a quorum of administrators in the jurisdiction of another country, inside an HSM or smart card. Torture would be a less likely option under this scheme, since the captured user would not truly know or possess the user private key, turning any attempt to execute EITs on a captured journalist into a rather meaningless and unfavourable exercise.

Link: https://github.com/guardianproject/proofmode/blob/master/app/src/main/java/org/witness/proofmode/crypto/DetachedSignatureProcessor.java

Brian • March 1, 2017 8:02 AM

Isn't securely dating files already a solved problem? I know that when I sign code, I also use a trusted third party time server to date it.

Also, I'll note that any such proof must be re-signed with a new date once every 5-10 years, or it risks being invalidated by advances in cryptography.

My Info • March 1, 2017 10:34 AM

"This doesn't solve all the problems with fake photos,"

Not at all. Particularly not when there is no security for the private key, and not when "thieves in law" will inevitably push something like this as a de facto standard in court.

The thing with the law in the U.S. is this. I'll use federal law as an example; each state is similar, only worse. We have:


  1. U.S. Constitution, including the Bill of Rights and remaining amendments
  2. U.S. Code
  3. Code of Federal Regulations
  4. Executive Orders

In general the Constitution itself is moot. No one in the federal government obeys the Constitution, since SCOTUS has not ruled on the exact issue at hand, and if they have, it's some abstract lawyerly interpretation of it. The U.S. Code is supposed to be the law in accordance with the Constitution, but see, the thing is, here's the law, and here's how the law is supposed to be applied. And then there are the executive orders on what's supposed to be enforced and what's not considered a priority. And the political reality of it is I mean, fuck, come on, you just can't possess a firearm if you've been diagnosed as fuckin' mental and shit.

LHOHQ • March 1, 2017 10:42 AM

Facts are different than truth. For a long time it was a fact that the sun went around the earth, even if it wasn't true.

This is a way to assert the factuality of photographs, and a means to support truth claims that an image hasn't been manipulated.

It's an old myth that a photograph means truth: even Dorothea Lange's iconic photo of the migrant mother was chosen -- edited -- from a number of options.

https://www.loc.gov/rr/print/list/128_migm.html

But knowing the manner in which one's perceptions are manipulated is protection from one's beliefs being manipulated.

Don't overthink this, nerds

chuck • March 1, 2017 10:54 AM

Don't Canon cameras have some sort of 'forensic mode' with protected keys and some such?

My Info • March 1, 2017 11:20 AM

@LHOHQ

"Don't overthink this, nerds"

Oh, I know. That photo has to be printed out and framed before introducing it as evidence to the court.

JP • March 1, 2017 11:24 AM

I'm confused. Not only do I see little gain on the "certifiability" side, as a few previous posters have already surmised; I also worry about lots of data from my phone, my geographic coordinates, and a timestamp being attached to every single picture I take.

Could an actor, given a hundred pictures, use these signatures to tell which ones were taken by the same camera? If so, I think the TLAs are more interested than anyone else in seeing this technology spread.

Darryl Daugherty • March 1, 2017 4:56 PM

A quality, low-cost non-repudiation service would be of benefit to private investigators if there's a significant risk of challenges in court over authenticity of visual evidence. But it needs storage of the metadata incl. time and location on a neutral, third-party website where it cannot be edited.

MrC • March 1, 2017 5:28 PM

I fail to see how this accomplishes anything at all. What's to stop me from editing a photo, inventing some believable metadata, extracting the key from my phone, and signing the edited photo? For that matter, what's to stop me from faking the whole process using GPG without ever touching the app at all? And, more importantly, why would anyone ever believe any photos signed in this way are legitimate in light of the foregoing possibilities?

supersaurus • March 1, 2017 5:36 PM

what does "real and unaltered" mean wrt digital photos? is a photo the phone automatically sharpened "altered"? is demosaicing bayer filter raw data "altering" it? is digitally altering the exposure "altering"? if your camera applied a filter so a raw taken under fluorescent light doesn't turn green and a raw taken under tungsten lighting doesn't turn red, are those "alteration"? do you have any idea what algorithms are applied to that glorious "unaltered" data before the file is saved?

have you looked at color-coded raw bayer filter output lately? can you get an actual raw image file out of your phone?

if the written file is a jpeg does the lossy compression not count as "alteration"?

in other words the only "real and unaltered" data you are likely to possess in digital photo processing is the raw file, and even that may have had a number of digital alterations before the file is saved, but you never "see" that data until it is processed, so I don't know what "real and unaltered" is supposed to mean. I think this is a fantasy that harks back to film days, but even then the solutions used, the temperatures they were used at and the times in each solution changed the results with the film, and printing on paper the same, i.e. the "raw" chemical data on the film or paper created by exposure to light was altered in many ways before you could see it.

you can actually display "raw" scanner data, but you won't like the result...ordinarily a tremendous amount of processing happens to that data before it gets saved as a jpeg (possibly lossy) or a tiff.
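supersaurus's point — that even the "raw" sensor data must be interpolated before you can see it — can be made concrete with a toy demosaic. This is a deliberately crude illustration, not any real camera pipeline: it collapses each 2×2 RGGB Bayer block to a single RGB pixel, averaging the two greens. Real pipelines interpolate per pixel and then apply white balance, sharpening, tone curves, and compression on top.

```python
def demosaic_rggb(raw, width, height):
    """Crude demosaic of an RGGB Bayer mosaic (row-major list of
    intensities): each 2x2 block  R G / G B  becomes one RGB pixel,
    so even this minimal step discards and blends sensor data."""
    assert width % 2 == 0 and height % 2 == 0
    out = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            r  = raw[y * width + x]
            g1 = raw[y * width + x + 1]       # green on the red row
            g2 = raw[(y + 1) * width + x]     # green on the blue row
            b  = raw[(y + 1) * width + x + 1]
            row.append((r, (g1 + g2) // 2, b))
        out.append(row)
    return out
```

Every viewable image has at least this much "alteration" baked in before any question of tampering even arises.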

Charlie Todd • March 1, 2017 9:17 PM

I thought that watermarks made non-repudiation and integrity easy. Chip vendors that read the sensor plane should watermark raw video with device manufacturer, model, and serial number. Many watermarks survive highlighting or cropping, IIRC. Anonymity is less valuable than being able to tell that a photo has been largely unaltered. Camera makers would just need to protect their end. Paparazzi should love this since they'd be able to prove when their photos get ripped off.
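For illustration, here is the simplest possible form of the embedding Charlie Todd describes: hiding a serial number in the least-significant bits of pixel values. This is a toy, not what a chip vendor would ship — plain LSB marks do not survive the cropping or recompression he mentions; robust watermarks use spread-spectrum techniques instead.

```python
def embed_lsb(pixels, payload: bytes):
    """Toy watermark: write one payload bit into the LSB of each pixel,
    least-significant bit of each byte first. Returns a new pixel list."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(pixels), "payload too large for image"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the pixel LSBs."""
    payload = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        payload.append(byte)
    return bytes(payload)
```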

Winter • March 2, 2017 5:31 AM

Nothing is perfect. What this does is give a photographer a way to declare that her photograph was not altered after she created it. Just like signing an email.

It is like a signature on a print.

Sebastian B. • March 2, 2017 9:11 AM

@Winter: This is actually a good comparison. Like a signature, it's no proof against any previous (deeper) modification, but it can ensure the data was not modified after "recording" (whatever that means).

supersaurus • March 3, 2017 5:23 AM

here are a couple of bullets on the design from the link our host provides:

x Produce “proof” sensor data formats that can be easily parse, imported by existing tools (CSV)

x Do not modify the original media files; all proof metadata storied in separate file

what prevents changing that storied metadata before it is easily parse, i.e. what provably links the two files? what prevents me from altering the photo file and then altering the metadata file to match? the secret sauce?

@Winter: signing an email does absolutely nothing to prove authenticity unless you mean cryptographically, and in that case what method?

@Sebastian B: "what ever this means" is a good summary of the whole scheme.
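For what it's worth, the only link between the two files in a hash-based scheme like this is that the proof file records the media's digest, so "verifying the link" is just re-hashing. The sketch below (field name `media_sha256` is invented for the example) makes supersaurus's objection precise: the check detects mismatch, but whoever holds the signing key can regenerate a matching proof for an altered photo, so it proves consistency, not authenticity.

```python
import hashlib
import json

def check_link(media_bytes: bytes, proof_json: str) -> bool:
    """Verify that a separate proof file actually refers to this media
    file, by re-hashing the media and comparing against the recorded
    digest. Says nothing about whether either file is genuine."""
    proof = json.loads(proof_json)
    return proof["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()
```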

Nick P • March 13, 2017 11:43 PM

@ Bruce

I was actually going to make a protected video stream for things like recording the police. It would protect both the integrity and authenticity of the videos, along with timestamps. Then I found out there were patents of some sort on timestamped or authenticated video. I avoided reading the details, since you face greater damages if you knowingly infringe (rolls eyes). I did back off, though, since the odds of getting trolled were high.

Nathan • March 29, 2017 11:00 PM

I am one of the developers of this app, and am just discovering all of these comments now, and hope to provide useful replies.

First, to those who questioned our general experience building secure apps: we are also the developers who brought Tor to Android (Orbot, Orfox), ChatSecure (which matched Signal on the EFF's Secure Messaging Scorecard), and SQLCipher (currently powering encrypted relational data storage in more than 6,500 Android apps around the world). We have been working on this particular problem for the last seven years, and in the mobile security space for almost twenty.

At a high level, securely dating files, digital notarization, and easy capture of sensor metadata, among other things, are not solved problems. Everyday activists around the world, who may have a cheap smartphone as their only computing device, have no easy way to do any of these things. Even high-level war crimes investigators are often using consumer point-and-shoot digital cameras and documenting everything on paper.

ProofMode is a simplified version of a much more complex and thorough system and app we have built, called CameraV. In that model, we use a built-in custom camera, encrypted internal storage, and a much more complex set of metadata to generate the evidence and proof. We also capture baseline images from the sensor and require those to be shared with the key, so that we can later match evidence photos to the sensor itself. You can read more about some of this here:
https://guardianproject.github.io/informacam-guide/en/InformacamGuide.html

We loved CameraV, but it was too complicated. Many frontline activists looking for a way to timestamp, sign, and otherwise add extra verifiable metadata to the photos and videos they were capturing found CameraV too complicated and burdensome. They helped us design ProofMode, stripping it down to its bare essentials. In this case we are working within pre-existing communities, with a shared set of pre-existing trust. What the activists want is richer metadata, in easy-to-parse formats, with timestamping and some kind of cryptographic verification around it.

We know our approach is not bulletproof, and that smart people like those who comment on this blog can fool it six ways from Sunday. This ProofMode release was versioned "0.0.x" for a reason. We are not saying it is finished, by any means. We are actively developing it, and have a roadmap that will address most if not all of the major concerns pointed out here, while ALSO keeping it simple, streamlined, focused, and easy to use.

For example, we have implemented the Google SafetyNet API (https://koz.io/inside-safetynet/) for automatically signing the hashes of the media, checking that the app is running on an actual Android device that hasn't been tampered with, and verifying that the hash of the APK itself matches our officially released version. Google's servers produce a signed blob of data that gets appended to our proof data and can be verified later on a desktop or server. This one feature counteracts most of the "hey, I can just fake this by hand with GPG" comments:
https://github.com/guardianproject/proofmode/issues/15
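That signed blob comes back as a JWS token (three base64url segments: header.payload.signature). As a sketch only, here is how the payload segment decodes; a real verifier must also validate the signature and certificate chain against Google's roots before trusting fields like `apkPackageName` or `ctsProfileMatch` — decoding alone proves nothing.

```python
import base64
import json

def jws_payload(token: str) -> dict:
    """Decode the middle (payload) segment of a JWS token such as a
    SafetyNet attestation. Does NOT verify the signature; this is
    only the parsing step of a full verifier."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```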

We are integrating support for OpenTimestamps and other blockchain-based notary systems like Stampery: https://github.com/guardianproject/proofmode/issues/8
For now, the app makes it very easy to send out a tweet, SMS, or even a Signal message containing the media's hash, as a way to notarize it to various kinds of endpoints. We are working with human rights organizations to set up Signal-based notaries for their own internal logging.

The analysis on the http://www.lieberbiber.de site is a good one, and also brings up the method by which we monitor for new photos and videos. We've had some helpful comments about directly launching and monitoring the camera, but since we want to run in the background, that won't really work:
https://github.com/guardianproject/proofmode/issues/7

The real progress there is moving to a newer method Android provides for monitoring media, and away from just watching the file system. This has been implemented for newer Android devices:
https://github.com/guardianproject/proofmode/commit/0ea9c9d73d7e55de612c89c466ef87da3524b6f1

We are also looking at how we generate and store the keys in the app. We agree that we did the minimal amount of work necessary to store and secure the key, relying on Android app sandboxing for now. We never intended this key to be used for encryption or long-term identity. Our thinking was focused on integrity through digital signatures, with a bit of lightweight, transient identity added on.

That said, we will be moving the key and credentials into the Android Keystore, which is the most secure key management solution possible on Android today.
https://github.com/guardianproject/proofmode/issues/16

All in all, it has been just a few weeks, and we have already released multiple updates that address most of the concerns that have been raised. Work continues, and we hope that in no time we'll have an HSM-backed, YubiKey-powered, blockchain-enabled, double-ratchet-encrypted and PQCrypto-resistant service that will *still* be easy to use, under 3 MB, and run on $100 Android phones.

Thanks for all the feedback, truly, and thanks for sharing this with your community, Bruce!


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.