Ephemeral Apps

Ephemeral messaging apps such as Snapchat, Wickr and Frankly, all of which advertise that your photo, message or update will only be accessible for a short period, are on the rise. Snapchat and Frankly, for example, claim they permanently delete messages, photos and videos after 10 seconds. After that, there’s no record.

This notion is especially popular with young people, and these apps are an antidote to sites such as Facebook where everything you post lasts forever unless you take it down—and taking it down is no guarantee that it isn’t still available.

These ephemeral apps are the first concerted push against the permanence of Internet conversation. We started losing ephemeral conversation when computers began to mediate our communications. Computers naturally produce conversation records, and that data was often saved and archived.

The powerful and famous—from Oliver North back in 1987 to Anthony Weiner in 2011—have been brought down by e-mails, texts, tweets and posts they thought private. Lots of us have been embroiled in more personal embarrassments resulting from things we’ve said either being saved for too long or shared too widely.

People have reacted to this permanent nature of Internet communications in ad hoc ways. We’ve deleted our stuff where possible and asked others not to forward our writings without permission. “Wall scrubbing” is the term used to describe the deletion of Facebook posts.

Sociologist danah boyd has written about teens who systematically delete every post they make on Facebook soon after they make it. Apps such as Wickr just automate the process. And it turns out there’s a huge market in that.

Ephemeral conversation is easy to promise but hard to get right. In 2013, researchers discovered that Snapchat doesn’t delete images as advertised; it merely changes their names so they’re not easy to see. Whether this is a problem for users depends on how technically savvy their adversaries are, but it illustrates the difficulty of making instant deletion actually work.
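The reported behavior is easy to illustrate: renaming a file, or tagging it so media scanners skip it, leaves every byte in place. A toy sketch (the ".nomedia" suffix here is illustrative of the reported scheme, not Snapchat's actual code):

```python
import os
import tempfile

def fake_delete(path):
    """'Delete' a file by renaming it so media scanners skip it.
    The bytes remain on disk, fully recoverable."""
    hidden = path + ".nomedia"  # illustrative suffix, not Snapchat's actual scheme
    os.rename(path, hidden)
    return hidden

# Demo: the "deleted" photo is still there under another name.
d = tempfile.mkdtemp()
photo = os.path.join(d, "snap.jpg")
with open(photo, "wb") as f:
    f.write(b"\xff\xd8secret-photo-bytes")

hidden = fake_delete(photo)
assert not os.path.exists(photo)      # looks gone to the app
with open(hidden, "rb") as f:         # but anyone with filesystem access can read it back
    assert f.read() == b"\xff\xd8secret-photo-bytes"
```

A technically savvy adversary with access to the device's filesystem recovers the "deleted" image trivially.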

The problem is that these new “ephemeral” conversations aren’t really ephemeral the way a face-to-face unrecorded conversation would be. They’re not ephemeral like a conversation during a walk in a deserted woods used to be before the invention of cell phones and GPS receivers.

At best, the data is recorded, used, saved and then deliberately deleted. At worst, the ephemeral nature is faked. While the apps make the posts, texts or messages unavailable to users quickly, they probably don’t erase them off their systems immediately. They certainly don’t erase them from their backup tapes, if they end up there.

The companies offering these apps might very well analyze their content and make that information available to advertisers. We don’t know how much metadata is saved. In Snapchat, users can see the metadata even though they can’t see the content, and we don’t know what that metadata is used for. And if the government demanded copies of those conversations—either through a secret NSA demand or a more normal legal process involving an employer or school—the companies would have no choice but to hand them over.

Even worse, if the FBI or NSA demanded that American companies secretly store those conversations and not tell their users, breaking their promise of deletion, the companies would have no choice but to comply.

That last bit isn’t just paranoia.

We know the U.S. government has done this to companies large and small. Lavabit was a small secure e-mail service, with an encryption system designed so that even the company had no access to users’ e-mail. Last year, the NSA presented it with a secret court order demanding that it turn over its master key, thereby compromising the security of every user. Lavabit shut down its service rather than comply, but that option isn’t feasible for larger companies. In 2011, Microsoft made some still-unknown changes to Skype to make NSA eavesdropping easier, but the security promises they advertised didn’t change.

This is one of the reasons President Barack Obama’s announcement that he will end one particular NSA collection program under one particular legal authority barely begins to solve the problem: the surveillance state is so robust that anything other than a major overhaul won’t make a difference.

Of course, the typical Snapchat user doesn’t care whether the U.S. government is monitoring his conversations. He’s more concerned about his high school friends and his parents. But if these platforms are insecure, it’s not just the NSA that one should worry about.

Dissidents in the Ukraine and elsewhere need security, and if they rely on ephemeral apps, they need to know that their own governments aren’t saving copies of their chats. And even U.S. high school students need to know that their photos won’t be surreptitiously saved and used against them years later.

The need for ephemeral conversation isn’t some weird privacy fetish or the exclusive purview of criminals with something to hide. It represents a basic need for human privacy, and something every one of us had as a matter of course before the invention of microphones and recording devices.

We need ephemeral apps, but we need credible assurances from the companies that they are actually secure and credible assurances from the government that they won’t be subverted.

This essay previously appeared on CNN.com.

EDITED TO ADD (4/14): There are apps to permanently save Snapchat photos.

At Financial Cryptography 2014, Franziska Roesner presented a paper that questions whether users expect ephemeral messaging from Snapchat.

Posted on April 2, 2014 at 5:07 AM


Gabriel April 2, 2014 5:40 AM

Taking a screenshot on a smartphone is very easy. How does a user know that other users didn’t save a screenshot of his “ephemeral” message?

wiredog April 2, 2014 6:25 AM

How do you know that the private conversation you’re having while walking in the woods isn’t being secretly recorded by your partner? You don’t. You have to trust the person you are having the conversation with. And if you don’t trust them, don’t share anything you wouldn’t want them to keep.

Bob S. April 2, 2014 6:30 AM

“Dissidents in the Ukraine and elsewhere need security…”

We all need security.

It’s unfortunate our own government will not uphold our rights to privacy and security. Clearly, the various bills pending in Congress are no more than a dog-and-pony show. If they ever act, we will be worse off. Clearly a redux of the AT&T fiasco is in the works (make it all legal, with retroactive permanent immunity).

I have some hope some technological dissidents will find viable solutions. They need security, too, maybe more than others.

PcH April 2, 2014 7:00 AM

“We need ephemeral apps, but we need credible assurances from the companies that they are actually secure and credible assurances from the government that they won’t be subverted.”

We will wait a long time for the latter, but how can a user know the former? Should there be a private organization offering a “seal of approval” on privacy and data capture? You need an organization with the technical know-how and a set of standards, among other requirements. How can a sustainable market develop for such a seal of approval?

keiner April 2, 2014 7:27 AM

Not in the Ukraine, in the Ukraine the “good boys” gained victory with US-support. Those with US-support are always the good ones.

Take Egypt, Syria, Afghanistan, Pakistan, whatsoever…

abc April 2, 2014 7:29 AM

We will never again be able to trust company assurances of privacy. As you yourself noted, they may be under orders to lie, and we certainly cannot trust governments to provide and protect those assurances either.

The only solution I can see for ephemeral apps is to eliminate the need to trust a third party. That means they are open source and employ strong crypto, but are easy enough for the masses to use. And you always need to trust the people you communicate with not to, e.g., take screenshots with a camera via the “analog hole.”

I just don’t see a credible solution there, only more companies trying to cash in with false promises.

Gervase Markham April 2, 2014 7:41 AM

You mention danah boyd; her book suggests that using ephemeral apps is not designed to make something crypto-hard unrecoverable, it’s a social cue. Do you have a comment on that usage pattern? That would mean these companies are not offering false promises, just a way to send that particular cue. Thinking from the perspective of a sociologist rather than a security engineer 🙂


species5618 April 2, 2014 7:59 AM

I actually have different issues with these apps, and with any app that has “accessible for a short period” and “especially popular with young people” in its review or operational mandate.


Mobile devices grew a whole new breed of bullying and abuse opportunities.
Facebook and Twitter offer not only persistence of stupidity, but also evidence with which to combat abuse and bullying.

Snarki, child of Loki April 2, 2014 8:00 AM

“…we need credible assurances from the companies that they are actually secure and credible assurances from the government that they won’t be subverted.”

We will wait a long time for the latter, but how can a user know the former? Should there be a private organization offering a “seal of approval” on privacy and data capture?

We know when an AQ affiliate is using the service, and the NSA can just whine loudly about how they can’t copy the data.

I guess that’s a private organization. Perhaps they need to generate a “seal of approval”.

Thoth April 2, 2014 8:33 AM

Ephemeral secure messages can be achieved if these two conditions are strictly met to the dot:
1.) Forgetfulness of the system – nothing is saved at all, not even screenshots or any other kind of record.

2.) Security and Privacy.

Most systems that claim to use RSA encryption or OTR are good at the second part.

The first part is genuinely hard. How can you be sure no one is saving screenshots and no messages are saved? Your data in transit may be secured by a one-time, forward-secret session key, generated from a very strong random source and protected by very strong crypto, but once it reaches its endpoint you have no control over what takes place there.

In essence, I would say that from my point of view there is no truly ephemeral secure communication, but at the same time we should always use something at least provably secure. Instead of selling a product as absolutely spook-proof, we can at least say it keeps innocents from landing in some dragnet.

Businesses will always be businesses. They will always try to blow themselves up big and make themselves look good but a properly educated user with the correct knowledge can easily see through their marketing ploys.

Greg Linden April 2, 2014 9:40 AM

Adding to what you said, and as others have pointed out, taking a picture of the Snapchat screen is a common behavior, sabotaging the claim that the sharing is ephemeral. From a recent academic paper: “Contrary to expectation, we find that it is common for respondents to take screenshots of Snapchat messages: 47.2% admit to taking screenshots and 52.8% report that others have taken screenshots of their messages … Screenshots seem to be an ordinary and expected component of Snapchat use.” http://fc14.ifca.ai/papers/fc14_submission_21.pdf

William Entriken April 2, 2014 10:08 AM

There are only two components to successful secure communication other than face-to-face conversation: proper protocols and trusted devices. As far as I understand, it is not possible to have security on an iPhone or Android device. The only thing close I’ve seen is smart cards.

Sergey Kishchenko April 2, 2014 10:29 AM

Not “in the Ukraine” but just “in Ukraine” please. It is simpler. Thanks.
About the article: as long as the NSA continues tapping the wire, there is no way to make ephemeral messaging really ephemeral. Getting data from companies is just more convenient, but it is not the only option for the bad guys.

TC April 2, 2014 11:18 AM

Thoth wrote that ephemeral secure messaging requires:

1.) Forgetfulness of the system – nothing is saved at all, not even screenshots or any other kind of record.

But commented that:

The first part is genuinely hard. How can you be sure no one is saving screenshots and no messages are saved? Your data in transit may be secured […] but once it reaches its endpoint, you have no control over what takes place there.

Trusted computing can enable this! In particular, ephemeral secure messaging could take place between systems which attest to being forgetful, that is, systems which are incapable of saving data, taking screenshots, etc. Of course, this solution is not foolproof; for instance, a second device can be used to take photos of a forgetful system.
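A toy sketch of the idea, with a bare hash standing in for a TPM-signed attestation quote (real remote attestation is considerably more involved, and the build names here are hypothetical):

```python
import hashlib

# The verifier accepts a peer only if the peer's software "measurement"
# (here, a hash of its code image) matches a known build that is incapable
# of saving messages. Real trusted computing uses a TPM-signed quote over
# PCR values, not a bare hash; this is illustration only.

KNOWN_FORGETFUL_BUILDS = {
    hashlib.sha256(b"messenger-v1-no-persistence").hexdigest(),
}

def measure(software_image: bytes) -> str:
    """Measurement of the running software (stand-in for a PCR value)."""
    return hashlib.sha256(software_image).hexdigest()

def verifier_accepts(reported_measurement: str) -> bool:
    """Only talk to peers attesting to a known-forgetful build."""
    return reported_measurement in KNOWN_FORGETFUL_BUILDS

good = measure(b"messenger-v1-no-persistence")
patched = measure(b"messenger-v1-no-persistence+screenshot-logger")
assert verifier_accepts(good)
assert not verifier_accepts(patched)
```

Even this sketch shows the hard part: someone must decide which measurements count as "forgetful," and the analog hole remains.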

Chelloveck April 2, 2014 11:32 AM

@TC: But if all devices were certified Trusted, and the ephemeral data contained some sort of watermark such as exists on bank notes to prevent photocopying, then we could be sure that no one could take screenshots, photographs, or daguerreotypes of Certified Ephemeral data. A totally foolproof system; all we need is total compliance with Trusted Computing. (Of course, ephemeral data would still be available on the server for legal intercept. Or do you want to provide an untraceable communications channel for terrorists?)

Nick P April 2, 2014 12:03 PM

William Entriken beat me to it: trusted devices. The devices themselves can be designed to support an ephemeral model. It takes a combination of tamper resistance, memory encryption/integrity, isolation of all system components, an information-flow-control mechanism, and a data security policy. The policy might treat some apps as ephemeral and not others. It’s not always desirable for data to just disappear: logs, analysis, music collections, etc. Yet even the data that is saved can be encrypted seamlessly and individually onto untrusted storage with existing secure-filesystem tech, with the keys stored in space that is easy to manage or wipe. These minimal security requirements for ephemeral computing help solve many other security problems as well.

Of course, ephemeral computing is a fantasy. If a user’s eye can see or ear can hear, then the data can be copied by user or equipment they possess. Apps are also a concern. The main way I see apps bypassing it is leaking data over any allowed storage or connections. (Covert channels: they’re back!) Example for network leak: App was reviewed, no malicious source, & it has autoupdate. An update introduces malicious code. Let’s say updates are more careful & manual. Clever developers might use a dynamic code engine within the app so they could do an update without an Update. 😉 So many legit excuses for using a Lua interpreter or such. Their MO might be to send code in, use it to dump a temporary cache of data over the network, and then swap the code back out. So, expect bypass attempts by app developers on top of whatever crap users will pull.
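The encrypt-to-untrusted-storage idea above can be sketched as "crypto-shredding": wipe the small key store and the data on the big, unwipeable storage (or the backup tapes) becomes useless. Toy sketch only; the XOR keystream here is a stand-in for real authenticated encryption:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Symmetric, so the same call encrypts and decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

untrusted_storage = {}  # big, survives forever (think: backup tapes)
key_store = {}          # small, easy to wipe

def save(record_id, plaintext: bytes):
    key = secrets.token_bytes(32)        # fresh per-record key
    key_store[record_id] = key
    untrusted_storage[record_id] = keystream_xor(key, plaintext)

def load(record_id) -> bytes:
    return keystream_xor(key_store[record_id], untrusted_storage[record_id])

def shred(record_id):
    del key_store[record_id]             # ciphertext remains, but is now useless

save("msg1", b"meet at noon")
assert load("msg1") == b"meet at noon"
shred("msg1")
assert "msg1" in untrusted_storage       # the bytes are still "on tape"
assert "msg1" not in key_store           # but the key is gone for good
```

The design point is that only the key store needs to be trustworthy and wipeable; everything else can live on untrusted, backed-up media.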

Gargoyle April 2, 2014 1:06 PM

@Bruce Schneier:
Sociologist danah boyd has written about teens who systematically delete every post they make on Facebook soon after they make it.

I hope they do not expect that that causes it to disappear from the FB DB.

When you delete something in FB it just gets tagged as “no longer displayed to you”.
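A toy sketch of such "soft delete" (illustrative only, not Facebook's actual schema): deletion just flips a visibility flag, and the middleman keeps everything.

```python
# Each post is a dict: {"id", "text", "deleted"}. What the user sees as
# deletion is merely a visibility filter on the query.
posts = []

def create_post(text):
    post = {"id": len(posts), "text": text, "deleted": False}
    posts.append(post)
    return post["id"]

def delete_post(post_id):
    posts[post_id]["deleted"] = True   # tagged "no longer displayed to you"

def user_view():
    return [p["text"] for p in posts if not p["deleted"]]

def middleman_view():
    return [p["text"] for p in posts]  # everything, forever

i = create_post("embarrassing party photo caption")
delete_post(i)
assert user_view() == []               # gone, as far as the user can tell
assert middleman_view() != []          # still in the database
```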

dragonfrog April 2, 2014 1:37 PM


At least in Android 4.2 and later, there is this: https://developer.android.com/reference/android/view/WindowManager.LayoutParams.html#FLAG_SECURE

public static final int FLAG_SECURE

Added in API level 1
Window flag: treat the content of the window as secure, preventing it from appearing in screenshots or from being viewed on non-secure displays.

See FLAG_SECURE for more details about secure surfaces and secure displays.

Constant Value: 8192 (0x00002000)

That’s still relying on the app on the far end to request this capability, and the OS to enforce it properly. The eternal DRM problem…

I use TextSecure for my SMS messages, and it offers the ability to turn that feature on or off in its preferences (default is on – I discovered this when trying to take screenshots for a bug report). Any app that tries to offer assurances to the sender that the message will be off the record would not be able to offer the recipient the ability to disable that of course.

The fact that snapchat et al. apparently don’t do this is a good sign of how seriously they take their users’ security…

Jason April 2, 2014 1:41 PM

“We need ephemeral apps, but we need credible assurances from the companies that they are actually secure”

You can’t prove that something no longer exists.

Natanael L April 2, 2014 1:42 PM

Fortunately we do have good open source apps that actually use proper cryptography.

There is ChatSecure that uses XMPP (by Guardian Project) + OTR and TextSecure that uses a custom version of OTR (by Moxie Marlinspike and friends).

Both are verifiable and use strong crypto with “perfect forward secrecy” (ephemeral encryption). Recovering the private key afterwards won’t let you decrypt past traffic, and the logs intentionally have no tamper protection, so even finding the original logs doesn’t prove what was said.
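The forward-secrecy idea can be sketched with toy Diffie-Hellman: each session uses fresh, ephemeral exponents that are discarded afterwards, so no long-term key ever protects the session key. Toy parameters only; real protocols like OTR use vetted groups plus authentication.

```python
import secrets

# Toy group: a Mersenne prime modulus, far too weak for real use.
P = 2**127 - 1
G = 3

def new_session():
    """One ephemeral key agreement; both exponents go out of scope afterwards."""
    a = secrets.randbelow(P - 2) + 1     # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 1     # Bob's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)    # public values sent on the wire
    k_alice = pow(B, a, P)
    k_bob = pow(A, b, P)
    assert k_alice == k_bob              # both sides derive the same session key
    return k_alice

# Sessions yield independent keys; compromising one reveals nothing about another,
# and there is no long-term secret whose later theft unlocks old traffic.
k1, k2 = new_session(), new_session()
assert k1 != k2
```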

Nick P April 2, 2014 1:46 PM

@ Gargoyle

“I hope they do not expect that that causes it to disappear from the FB DB. When you delete something in FB it just gets tagged as “no longer displayed to you.”

Great point. It’s a problem of trusting the middleman, who has an interest in keeping or sharing your data. I’ve been trying to come up with a simple metaphor for these “private” messaging apps. Here’s a “top of my head” attempt for you or anyone else to improve on.

Developing suitable real-world metaphor for “private” online messaging.

[Explaining in general difference between face to face conversation & message from one to another on Facebook-like service.]

In face-to-face, you say something to the other person. If nobody is within earshot, the other person is the only one who hears it. There’s no copy unless one of you is using a voice recorder. That’s rarely done because it’s considered rude and conniving, and it makes many people act nervous enough to be caught. So the worst-case scenario for most face-to-face conversations is that the other person repeats it, but can’t prove anything.

With Facebook, you have a person in the middle. You give your picture, message, or whatever to them. They make a permanent copy of it, date it, and store it. Then they give it over to the other person. If you ask for deletion, the middleman only deletes your copy & keeps theirs. The middleman also thinks hard about everything in your message to profile you. The middleman sells what it thinks about you to other people. It might also sell the message. Either way, a bunch of people you don’t know get the message just to do the analysis, storage, or whatever.

The middleman also keeps track of your friends, what games you play, how your relationships are doing, what you’re doing day to day, where you are at any given time, and whatever else he can get. He sells all this too. The more information he gets on you, the more money the middleman makes. In other words, he doesn’t care about the people giving him information, only about his buyers. His main customers are advertisers, researchers, private investigators, police, and spies. You’re probably fine if you don’t mind them having courtroom-grade records of everything you say and do. If you’d rather it not be that way, don’t put it online and just tell the person face to face. If that’s too inconvenient, one still gets plenty more privacy out of a letter, phone call, or text [in that order].

[Explaining to high school teens about Middlemen with goal of motivating action]

Using Facebook to be social is like a school where nobody talks to each other: they all talk to The Gossip. They give that person private messages (unsealed), what’s happening in their day, their pictures, and so on. They trust the Gossip to only give it to the people they’re told. The Gossip, though, tries to sell most or all of this to as many people as possible. The Gossip also likes to “accidentally” forget people’s privacy boundaries, sharing more of their private stuff over time with everyone. The Gossip tells: cops who’s crooked, people which good girls are sluts, employers which kids to fire for comments about them, parents who was partying instead of studying, associates secrets only good friends should know, businesses what you might buy, and everyone a random person’s dirty laundry.

The Gossip makes a ton of money doing this, too. If you gripe, the Gossip reminds you he’s the only game in town. Everyone talks through him. They’re too lazy or attached to him to get rid of him. He’s even got all kinds of games and stuff at his hangouts. He keeps stuff you want gone because HE CAN, not to mention it made him rich. Leaving means you lose everything you’ve given him up to that point. And he reminds you it’s all kind of his anyway, thanks to that contract you didn’t really read. The confusion about that on your face makes him show a sly grin.

Long story short, nobody should trust a Gossip. It’s better to tell them nothing and tell your friend instead.

vas pup April 2, 2014 1:50 PM

@PcH • April 2, 2014 7:00 AM.
Yeah, I suggested several months ago the same thing: any consumer electronic device capable of accepting, processing, storing, or transmitting data (cell/smart phones, landline phones, answering machines, smart TVs, cable company boxes, etc.) should go through independent privacy/security certification, something like UL certification for electrical devices, independent not only of government but of major manufacturers and distributors as well. I even suggested a seal of approval in the form of “1984” inside a crossed circle, like a “no left/right turn” traffic sign, with three different levels of privacy, color-coded correspondingly. The same may apply to independent periodic third-party privacy/security audits of a company as a whole (an integral grade): policy, equipment, infrastructure, products, etc.
As a result, the integral grade is assigned and disclosed to the public (without disclosing details of the audit), like credit ratings for countries. That will create an incentive for the company to get a higher grade. As with almost everything in capitalism, a money-driven approach will ultimately define company policy on this, and drive lobbying for corresponding laws in particular.

z April 2, 2014 2:04 PM

@ Nick P

I like that example, and it should be turned into a classroom exercise. Have students talk to each other through the teacher. They can say anything they want to whoever they want, but the teacher has to relay it to the recipient. Additionally, any student can ask the teacher what the others are saying. Further, tell the students that anything they say will be reported to the dean, their employer, and their parents. See how their behavior changes, and then explain Facebook.

forget.me.not April 2, 2014 2:29 PM

Ephemeral apps would provide limited recovery of privacy at the cost of reducing the memory horizon within which people live.

Having a record of stuff I’ve said years ago in emails allows me to plan and act over a longer period of time than if I were limited by my memory, perpetually living in a bubble roughly three weeks long.

We need privacy solutions that don’t cede long range control over our lives to the NSA.

yesme April 2, 2014 3:35 PM


I am quite sure that we shouldn’t trust companies with our personal data.

What you could use is something like camlistore, a personal data store that you yourself control, from your own devices.

Benni April 2, 2014 4:01 PM

Do I read this paper correctly?


Is this saying that the NSA can decrypt any TLS connection from Windows Internet Explorer?

They are saying:

“Windows SChannel does not implement the current Dual EC standard: it omits an important computation. We show that this does not prevent attacks; in fact, it allows slightly faster attacks.”

BSAFE-Java v1.1: 63.96 minutes
BSAFE-C v1.1: 0.04 minutes
SChannel I: 62.97 minutes

(The numbers are the minutes needed to break a session.)

“As described in Section 4.2, SChannel exhibits a fingerprint in the first 4 bytes of the session ID. 2.7 million of the servers we contacted
exhibited this fingerprint. We requested HTTP headers from 1,000 of these IPs (randomly selected), and 96% of the responses included the string “Microsoft” in the server field, suggesting that this is a selective fingerprint.”

So they can just read 96% of all TLS sessions? Am I getting this right?

Simon April 2, 2014 4:07 PM

This is all about trust. People continue to use apps from companies because they TRUST the company, even if the company always acts in self-interest. “Climate deniers” aren’t stupid; they just don’t TRUST the scientific establishment and media, and there’s a reason for that. The more you try to convince them, the less they trust you, and attacking them for it makes it worse. It’s the same with any consumer behavior and brand loyalty.

Nick P April 2, 2014 5:33 PM

@ yesme

It’s an interesting project, and content-addressable storage has plenty of potential. Of course, there’s always the chance of an open offering running into something like this. The citations cover a disturbing amount of ground, too.

posedge clk April 2, 2014 6:58 PM

The problem with “ephemeral apps” is the same problem we have with Direct Recording Electronic voting machines:

It is not possible to audit anything which cannot be perceived with the senses unaided.

You can audit a pencil-and-paper election. You know the physical properties of ballot boxes, and you can make sure that ballots don’t go missing, etc. before being counted. You cannot audit to the same level of rigor on a complex DRE machine running Windows or some other OS and sophisticated software.

Likewise, you cannot audit SnapChat. Not because you can’t disassemble the code running on your smartphone, but because you have no access to the code running on the back end. Even if you think you can audit the code running on your device, think again. Take a look at the NSA Tailored Access Operations catalogue, and see all of the ways your computers can be compromised effectively undetectably.

And as long as the threats are electronic, and not perceptible with the senses unaided, there is nothing you can do. If you want to be secure, best to do it in person, taking a walk in the woods.

Yusuke Shinyama April 2, 2014 7:37 PM

I don’t think it’s theoretically possible to guarantee the forgetfulness of a computer system, but I think it is possible to build a company or organization that is transparent enough that the public can be reasonably sure, by careful review, that it is doing what it claims to be doing. So far, no IT company or government that handles personal data operates that way. A close example would be the Wikimedia Foundation, whose day-to-day operation is surprisingly open (cf. https://wikitech.wikimedia.org ), but they don’t handle much personal data other than their access logs. I’d be curious to know if such a company has ever existed.

Gyarmathy April 2, 2014 8:32 PM

Bruce, could you please post an article with your opinions about:


It’s like archive.org’s Wayback Machine but it saves snapshots of individual pages rather than attached files (like pdf, zip, etc).

jdgalt April 2, 2014 9:52 PM

You do need the recipient to cooperate with you, and I don’t see why you would ever really trust that to happen. This is the same problem that has existed for years with streaming protocols such as RealAudio and QuickTime, both of which have long since been cracked so that you can easily get apps which capture and save the data stream.

Nick P April 2, 2014 10:44 PM

“You do need the recipient to cooperate with you, and I don’t see why you would ever really trust that to happen. This is the same problem that has existed for years with streaming protocols such as RealAudio and QuickTime, both of which have long since been cracked so that you can easily get apps which capture and save the data stream.” (jdgalt)

Exactly! I can come up with technical solutions all day yet, in the end, it’s inherently vulnerable on the user side. Another commenter called it a trust issue. I agree. I’ll add that it’s a trust or personnel security issue. You need controls such as monitoring on the user side to prevent the other user’s malice. The end result of all the security it takes makes a face-to-face conversation cheap and convenient by comparison.

Ephemeral messaging seems to me to be comparable to all the “Trusted” workstations and guards that operate in System High mode (default). They know real security with mutually hostile operators on one device is about impossible without a ridiculous amount of work, loss of usability, and loss of legacy. The result is they run systems in a mode where everyone has to be trusted not to subvert it intentionally, but controls are there to prevent them doing it accidentally. People cooperating in a way to keep public and private separate, with controls to help rather than stop them, is the best ephemeral messaging will give us.

Nick P April 2, 2014 10:54 PM

@ Gyarmathy

It’s an interesting project, and I always like to see more .is sites. However, the Wayback Machine is so complete it’s darn near irreplaceable. Using and donating to it is probably a better option, given the .is site’s stated goals and common use cases. Of course, there is the restriction that the Wayback Machine honors sites’ wishes not to be crawled or archived. A service that didn’t do that would be useful for a number of reasons. Additionally, a service immune to takedowns has advantages. Iceland provides opportunities for both, so maybe a Wayback Machine alternative there could serve such niches.

I wouldn’t rely on it, though, as there’s no reason at the moment to believe it will last. Most projects like this die off. I’m waiting a few years to see what happens. I might try it for non-critical, temporary stuff.

Adrian April 2, 2014 11:38 PM

Ephemeral apps are not morally or technologically tenable.

Ephemeral apps are cryptographically secure applications that guarantee the communications can’t be copied or saved while they are in the clear.

Therefore, ephemeral apps are DRM apps.

Thoth April 3, 2014 12:54 AM

As long as the data leaves your hands, it is usually not ephemeral, because somehow something is going to keep a record. Can you trust that your recipient’s swap space doesn’t contain those private conversations?

So far, I have not seen any truly “trusted” platforms. How do you “trust” them? The only way to “trust” is to have open access, and open access may mean failing the requirement of maintaining a sealed and confined secure environment. I guess it all boils down to trust and proof.

Who do you trust? Is there a proof of trust? I think this is the overall essence of such ephemeral secure conversations and transactions.

TC April 3, 2014 1:48 AM

@Chelloveck wrote:

all we need is total compliance with Trusted Computing.

No we don’t. Trusted computers can provide cryptographic proof that they are indeed trusted (more precisely, they can attest to a particular state). It follows that total compliance is not necessary. However, ephemeral secure messaging could only take place between trusted computers, as I noted.

TC April 3, 2014 2:00 AM

@William Entriken wrote:

There are only two components to successful secure communication other than face-to-face conversations: proper protocols and trusted devices.

@Nick P seconded William’s perspective. Based upon Nick’s discussion, I assume William is using the term trusted device to refer to a trusted computing device (I didn’t make this connection when I read William’s post).

TC April 3, 2014 2:13 AM

@posedge clk wrote that

The problem with “ephemeral apps” is the same problem we have with Direct Recording Electronic voting machines: It is not possible to audit anything which cannot be perceived with the senses unaided.

The problems with current DRE machines can be overcome with verifiability, and ephemeral apps should focus on verifiability rather than auditability. Trusted computing allows you to verify the state of a remote device; however, you still need to define which states are acceptable, and this is difficult.

TC April 3, 2014 2:25 AM

@Adrian wrote:

ephemeral apps are DRM apps.

Is this good or bad? Richard Stallman and Ross Anderson consider Trusted Computing to be bad, but I have argued in this thread that trusted computing can be used for good.

@Thoth, I’d like to suggest that you read about trusted computing, you should find answers to many of the questions you raised in your last comment.

Clive Robinson April 3, 2014 4:38 AM

@ TC,

    … is using the term trusted device to refer to a trusted computing device…

Whilst the two terms are not mutually exclusive, a device can be trusted without resorting to what most currently view as “trusted computing”. In fact it can be shown fairly easily that a “trusted computing” device is very far from being a “trusted device” from the user’s point of view: all that is required is to examine the “root of trust”. If you and you alone own it, then perhaps a trusted computing device can be a trusted device; otherwise not. The problem with trying to examine the “root of trust” is: can you actually see inside a chip sufficiently well to confirm that it genuinely works correctly and genuinely has your credentials as the root (which is well-nigh impossible)? If not, then it’s not a trusted device.

This issue is of some concern, because what if, say, the American, British, Chinese, Japanese, Korean or Russian etc SigInt organisation has subverted the chip manufacturing process of a trusted computing component that your government then puts in their command and control computers…

Clive Robinson April 3, 2014 5:47 AM

One thing that surprises me is that we are talking about “ephemeral” as though it exists in some physical manner or law of nature that we can depend on or expect as a right…

Short version: The universe does not work that way, get over it.

Long version: Science currently, for various good reasons, believes in cause and effect, and that each effect in turn is a cause in the next step of a cascade of causes/effects. Part of that process is that each cause imprints information on the effect and thus spreads the information down all subsequent interactions. Thus the universe moves from a single coherent event with no initial information towards a fully decoherent state with maximal information.

The design of humans is a consequence of the environment in which they have survived. Part of that is “living in the moment” so as to deal with threats, and this appears to have affected the way our minds work. Put overly simply, our memories “age” by becoming less accessible with time unless renewed; thus we learn the patterns necessary to help us avoid becoming a more able predator’s next meal.

Thus as humans our experience is that “people forget”, given sufficient time and a lack of reminders. Further, humans as individuals are not very good at recording events; that is, we are in general imperfect/unreliable.

However, around two hundred years ago we developed the first mechanical and chemical devices and processes that enabled the “unreliable” human to be removed from the actual recording process. Further development and refinement has improved not just the bandwidth of communication and storage but reduced the cost. We are now at the point where we can record everything an individual can see and hear at a cost that is well within the budget of many people. Hence “life blogging” is a practical reality, and the technology is at the point that the “sensors” could be medically implanted, if a person chooses to have it done, in a way that would not be easily visible to another person.

Thus “ephemeral” belongs to that “golden past”; technology has overtaken it and we won’t get it back, no matter how much we might otherwise want to.

The question now is: Will we evolve socially to this reality, and will the results be liberating or chilling?

I suspect we will evolve “society” rather than ourselves, simply because if you look back two centuries we lived in very close-knit societal groups where you lived from cradle to grave, and you were fairly intimately known by your society. It was only with the advent of technology that we had a short “golden age” where you could travel sufficiently far that you were unknown, and only around half a century ago that mobility became sufficiently great that “strangers” in any social group became the norm rather than the exception. We as humans will revert to “village life” societal norms in a more global way. However, there are one or two problems we need to resolve, and the question is thus: Will those currently in power allow us to regain what is necessary to make it work? It’s this aspect I have significant doubts about, because technology makes “power” more controllable in fewer hands.

Mantra April 3, 2014 6:50 AM

The only way is peer-to-peer.
You can NOT trust ANY server.
That is all.

(what sort of idiots TRUST any web company to EVER delete ANYthing?)

Nick P April 3, 2014 7:45 AM

@ TC

” I assume William is using the term trusted device to refer to a trusted computing device”

He may or may not be, as I can’t speak for him. I used the term in the original sense of the word in secure systems. At its most basic, a “trusted” system is one you trust to the point that it can violate the security of the system. The next element, trusted OS’s or software, was one where the developers presumably put in plenty of extra effort to ensure it was actually secure. So, for example, the Trusted Solaris 8 operating system had extra security features and evaluation effort, yet you still have to trust the OS to do its job without failing.

In this case, we have at least two devices used to send messages. If they’re end-to-end, then we’re trusting the protocol and its execution rather than a third party that handles, e.g., transport. The device becomes trusted because the security is broken if the device is malicious or fails in certain ways. So, as in the previous paragraph, my next step is to apply security engineering to make what we trust as trustworthy as possible to enforce the security policy. That’s it in a nutshell for most systems I evaluate.

Now, “Trusted Computing” is a different concept altogether. In it, the user and the regular software are considered untrusted, with key enforcement mechanisms and the person holding the signing key trusted. The goal is to have a third party control what runs on your device. Mechanisms include a verified root of trust, software white/blacklists, and public key crypto. While these primitives can support security, the main reason for the “Trusted Computing Initiative” is creating DRM technologies that give users less control over digital content.
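
The “verified root of trust” mechanism mentioned above boils down to a measurement hash chain. A toy sketch of the idea (this illustrates only the hash-chaining, not the real TPM API, and the image names are invented):

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """PCR-style extend: new = SHA-256(old || SHA-256(component))."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

# Hypothetical boot sequence; each stage is measured before it runs.
BOOT_CHAIN = [b"bootloader image", b"kernel image", b"app image"]

pcr = b"\x00" * 32  # the register starts at a fixed, known value
for stage in BOOT_CHAIN:
    pcr = extend(pcr, stage)

# A verifier who knows the expected images recomputes the chain; a
# modification to any stage changes the final register value.
expected = b"\x00" * 32
for stage in BOOT_CHAIN:
    expected = extend(expected, stage)
assert pcr == expected
```

Because each extend folds the previous value into the new one, the final register commits to the entire boot sequence in order, which is what lets a third party judge what ran on the device.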

So, the original definition of trusted is about what you trust in a design and why. The new one is a redefinition by a group of companies with an agenda. It’s either a subset of the real thing or its opposite, depending on the perspective from which you look at it. That so many big companies dependent on ad revenue support it shows, indirectly, that ephemeral apps will always be a minority in the market and will get harder to create over time.

BJP April 3, 2014 8:43 AM

@posedge clk

“If you want to be secure, best to do it in person, taking a walk in the woods.”

People who don’t live somewhere surrounded by forests may be surprised to find out exactly how many wireless surveillance cameras residents have installed in the woods. They’re motion activated, battery powered, mountable anywhere, work in daylight, low light, and sometimes even at night. Some feed a DVR via WiFi, some write to memory sticks.

A walk in the woods is no guarantee anymore, especially if the simple fact that person A met with person B is compromising for you, even without recording the conversation.

vas pup April 3, 2014 9:47 AM

@Mantra • April 3, 2014 6:50 AM
“The only way is peer-to-peer”.
Did Microsoft embed an extra layer when it acquired Skype, which was P2P before but not anymore after such “improvement”? Can anybody provide the technical angle, please?

TC April 3, 2014 12:49 PM

@Nick P, if I understand correctly, then what you’ve described is a framework in which an ephemeral messaging service could be evaluated, under certain (justified) trust assumptions. For instance, TLS should be secure under the assumption that the client and server execute the protocol correctly. (Presumably, clients and servers which execute the protocol correctly are termed trusted devices?) By comparison, trusted computing is a technology used to build systems. These systems could be evaluated using the framework you’ve described and, in this case, the trust assumptions would include facts about the trusted computing base (e.g., the TPM’s functionality).

It is easy to build an ephemeral messaging service using trust assumptions that simply do not hold in the real world. The challenge is to build an ephemeral messaging service from reasonable assumptions, and trusted computing seems to be one approach to this challenge (if you believe that the trust assumptions required for trusted computing are reasonable!).

Nick P April 3, 2014 2:08 PM

@ TC

Yes, a secure systems framework can be used to develop and evaluate a “Trusted Computing Group” type system. This has been done technically if you look at TPM-enabled platforms evaluated under Common Criteria or other schemes. Their assurance has all been low. The TPM itself was usually evaluated to a stronger standard.

Far as ephemeral messaging, yes, it’s way easier to fail, simply because computers leak information and mainstream software exposes vulnerabilities at about every level. Given the threat model of most messaging schemes, penetration resistance needs to be medium-high at each level of the TCB. Additionally, one needs info flow control to ensure unauthorized data flow doesn’t occur.

So, altogether, TCG “trusted” computing won’t work because it trusts TPM, original software image, IO system (eg baseband), and so on. Two of these have been hacked more than once, other bypassed at least once. So, TCG approach isn’t enough and must be supplemented by other methods.

The MILS approach, for instance, ran everything from baseband to apps in a partition on a microkernel. The approach added highly assured middleware for enforcing info flow policies on partition communications. Two were integrated with a TPM and one had an IOMMU. Such an approach, one of many, was far stronger than the TCG solutions that stretched the TPM to do more than it was designed for.

My recent investigations into safe architectures might help as well. For instance, a tagged memory scheme could be extended with an ephemeral type that has a time-to-live attribute. Alternatively, a Java processor running an object capability system could encapsulate the ephemeral data in objects that can be killed off later. The type system prevents leaks. And so on.
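
The ephemeral-type idea can be sketched roughly as follows. This is only an illustration of the policy (the class name and API are invented); the whole point of a hardware tag or capability runtime is to enforce what ordinary application code, like this Python, cannot:

```python
import time

class EphemeralBytes:
    """Toy sketch: data that refuses access after its time-to-live expires.

    A real tagged-memory or object-capability system would enforce this in
    hardware or the language runtime; here nothing stops other Python code
    from reading the buffer directly, which is exactly why the platform,
    not the app, has to enforce the policy.
    """

    def __init__(self, data: bytes, ttl_seconds: float):
        self._data = bytearray(data)          # mutable so it can be wiped
        self._expires = time.monotonic() + ttl_seconds

    def read(self) -> bytes:
        if time.monotonic() >= self._expires:
            self._wipe()
            raise PermissionError("ephemeral data has expired")
        return bytes(self._data)

    def _wipe(self) -> None:
        # Overwrite in place rather than just dropping the reference.
        for i in range(len(self._data)):
            self._data[i] = 0
```

Within its TTL the object behaves like ordinary data; after expiry every access path through the type fails and the buffer has been zeroized.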

Many approaches. The user can screenshot them all, though. (Sighs)

name.withheld.for.obvious.reasons April 3, 2014 6:34 PM

@ Nick P

“…ran everything from baseband to apps in a partition on microkernel.”

Let me guess, Lynux? Sounds like it–or it could be Integrity? I haven’t had a chance to look at Green Hills’ offerings in some time; last I knew they were in acquisition mode. They have a new product, u-velOSity, a compact RTOS. May give it a go, not for an application platform but for a hardware/router device. My plan is to build out a new secure perimeter and proceed with virgin builds of other platforms. Hope you saw the plot, as fact is stranger than fiction.

Nick P April 3, 2014 6:52 PM

@ mantra

I disagree. You can use a centralized design without trusting the server on the other side. A number of solutions in the security field do this. The advantage of the client-server model is that such a design is much easier to get right than a pure P2P model, which is truly complex. And remember that each node might have both a client and a server, allowing connections between nodes (P2P-like) or from nodes to a central server. The central server can be convenient for things like NAT traversal as well. Finally, there are far more tools for building, analyzing, maintaining, and monitoring centralized designs.
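
The point that a centralized relay need not be trusted can be sketched in a few lines: the server only ever sees ciphertext. This toy uses a one-time pad purely so it runs with the standard library; a real system would use an authenticated cipher and a proper key exchange:

```python
import os

def xor(data: bytes, pad: bytes) -> bytes:
    """XOR data against a pad (toy one-time-pad encrypt/decrypt)."""
    return bytes(a ^ b for a, b in zip(data, pad))

# Sender and recipient share a pad out of band; the relay never has it.
pad = os.urandom(32)
message = b"meet at noon"

ciphertext = xor(message, pad)   # what the relay stores and forwards
relay_sees = ciphertext          # the server's entire view of the content

assert relay_sees != message                # relay learns nothing useful
assert xor(relay_sees, pad) == message      # only a pad holder recovers it
```

The relay can still observe metadata (who talks to whom, when, how much), which is a separate problem that encryption alone does not solve.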

P2P tech is advancing all the time. Hopefully, it will become easy to use, maintain, and secure. Meanwhile, building architectures (centralized or decentralized) with assistance of untrusted servers is fairly well understood.

@ name.withheld

” They have a new product, u-velOSity”

They’ve had it for years. I passed on it just because it didn’t have MMU support. Although, it might be great for those that didn’t need it. Thing I like about them is they have the OS, middleware, tools, and partners for about everything.

“‘…ran everything from baseband to apps in a partition on microkernel.’
Let me guess, Lynux? Sounds like it–or it could be Integrity?”

Two good guesses as I promoted both in the past. The one in question, however, was OKL4 from OK Labs in Australia. They had a microkernel, a minimal runtime, an automated middleware, user-mode versions of many mobile OS’s, ability to reuse Linux drivers without full Linux (awesome), and released many things as open source. Anyway, one of their advertised use cases was separating the baseband stack from the main system for stability/security purposes. There was also general security via isolation, virtualizing extra OS to run legacy stuff side-by-side, and isolating GPL code (clever). Acquired by General Dynamics, recently.

I’m writing a reply to the rest in the Squid forum so as not to side track the topic here.

TC April 4, 2014 1:50 AM

@Nick P wrote

TCG “trusted” computing won’t work because it trusts TPM, original software image, IO system (eg baseband), and so on. Two of these have been hacked more than once, other bypassed at least once. So, TCG approach isn’t enough and must be supplemented by other methods.

Trusted computing doesn’t need to trust the software, since attestation gives us a guarantee that the correct software is running and we can verify that the correct software is secure. We do need to trust the TPM and some additional hardware.
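
The attest-then-check-against-known-good step can be sketched as a toy protocol. A real TPM quote covers PCR values and is signed with an asymmetric attestation identity key; an HMAC with a shared key stands in here so the sketch runs standalone, and the image names are invented:

```python
import hashlib
import hmac
import os

# Measurements the verifier accepts (hash of the minimal OS image).
KNOWN_GOOD = {hashlib.sha256(b"minimal-os-v1").hexdigest()}

# Stand-in for the TPM's attestation key (a real one is asymmetric).
ATTESTATION_KEY = b"toy shared attestation key"

def quote(measurement: bytes, nonce: bytes):
    """Device side: report a signed digest of what is running."""
    digest = hashlib.sha256(measurement).hexdigest()
    sig = hmac.new(ATTESTATION_KEY, digest.encode() + nonce,
                   hashlib.sha256).digest()
    return digest, sig

def verify(digest: str, sig: bytes, nonce: bytes) -> bool:
    """Verifier side: check the signature AND that the digest is known-good."""
    expected = hmac.new(ATTESTATION_KEY, digest.encode() + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig) and digest in KNOWN_GOOD

nonce = os.urandom(16)           # fresh nonce prevents replaying old quotes
d, s = quote(b"minimal-os-v1", nonce)
assert verify(d, s, nonce)
```

Note the two separate checks: the signature proves the quote came from the device, and the whitelist membership is what encodes “the correct software is secure,” which is exactly the hard part, since someone must decide which measurements belong in that set.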

Trusted computing is certainly not a panacea! In particular, verifying large pieces of software is beyond the current state-of-the-art. (I should stress that systems built in the trusted computing context should attest to some minimal OS rather than a full-blown OS.)

Nick P April 4, 2014 10:34 AM

@ TC

” we can verify that the correct software is secure”

This is the crux of my disagreement with TCG. Their tech just tells us that a given piece of software is running. That software can have a crap ton of vulnerabilities. That TPM platforms are typically Windows, Linux, etc. running unsafe code (e.g. C/C++/ObjC) increases the odds of it. We’ve also seen attacks below and around the software layer in the NSA catalog. Such code doesn’t need to modify any other code on the system, just the data. Upcoming “vTPMs” for cloud computing will carry more risk, as they increase the odds of potential MITMs.

Alternatives provide more protection than a TCG approach. They may or may not use a TPM; trusted boot and trust anchoring can be done without one. Far as crypto-based solutions, there are much more powerful options out there, including schemes that protect all memory, IO, sensitive computation and/or control flow integrity. Support for legacy software varies with these, although I know one runs Linux and another runs common RTOS’s. I published a few in my last security paper dump here.

Note: To give them some credit, I did previously post the Flicker architecture. It was nice work. I could imagine using it for, say, protecting a signing key.

Adrian April 4, 2014 7:45 PM


I am afraid that we probably can’t have the best conversation here as we’re likely to be buried under the responses of others. For others only reading this comment, I’ll recap.

I argued that for apps to be truly ephemeral they have to be DRM apps. You questioned whether this is a bad thing, noting quite rightly that trusted computing can be used for good. My understanding is that trusted computing gives users a means of cryptographically certifying that only the software they want is running on their hardware. I agree with you that trusted computing can be wonderful, and frankly, I find it completely bizarre that RMS is against it. Yes, it’s bad if it means tethering people to products and locking them out of making changes, but product lock-in is NOT a requirement of trusted computing.

The reason I said ephemeral apps are DRM apps is that you want a kind of illegitimate trusted computing. Not only do you want to trust how YOUR applications are functioning, you want to trust how someone ELSE’S applications are functioning. In other words, you want assurance that they can’t keep the records that violate the ephemerality property. Obviously the recipient trusts the application itself, but you want to make sure its outputs can’t be saved, sent elsewhere, screenshotted, or whatever. What I’m saying is that ephemeral apps require trusted computing where you distrust the recipient. That is, they require DRM.

And I think my interpretation of ephemeral apps is definitely a defensible one. After all, if ephemeral apps are at all novel, then they can’t just be “here’s a way to securely communicate, and we don’t keep a copy of it. Don’t trust us? Then use end-to-end encryption.” We already have this capability in TLS/SSL, GnuPG and elsewhere. If all we’re talking about is whether we trust the vendors to truly delete our data, the applications aren’t new or interesting at all.

An app would be strongly ephemeral if and only if it could guarantee, in addition to secure transfer of data, secure destruction according to a policy on the recipient device. That in turn requires absolute preemption capabilities over all executable code on the recipient device.

If my interpretation of “ephemeral” is wrong in this context, please feel free to ignore what I said and write it off as a good faith misunderstanding.

Either way, I’m sure we both share the same commitment to confidentiality and user control of their programs.

Nick P April 4, 2014 10:38 PM

@ Adrian

“What I’m saying is that ephemeral apps require trusted computing where you distrust the recipient. That is, it requires DRM.”

Yep. It’s a similar model to TCG. It just takes even more work to make it work. And you still gotta trust the user not to bypass the technical protection via means outside the device.

Wael April 5, 2014 3:19 AM

Ephemeral or not, electronic transmissions have a long-term memory. Send something on the wire, and you can’t enumerate where it ended up. I wonder if that’s even possible…
For example, I saved a post someone posted here a while back. The mo-der-ator chose to delete that post because it was inappropriate, but I still have a copy, or rather had a copy. My laptop was stolen, so if someone broke the encryption, then that someone has the post…

PS: Hyphens to avoid false alarms…

Adrian April 5, 2014 4:09 PM

@Nick P

You’re correct that no matter what technical solution you come up with, you still have to trust that the recipient won’t violate the ephemerality property using another device (i.e. taking a photo of a device with an “ephemeral” communication on screen). I treat this as an unsolvable problem: ephemeral communication assumes a threat model that has no technical solution.

TC April 7, 2014 4:46 AM

@Adrian, nice post! Perhaps ephemeral apps are the killer application of trusted computing technology. (If a killer app doesn’t appear soon, then I suspect the TPM will cease to exist.)

mt April 21, 2014 6:58 AM

Does anyone know of any technical review of TorChat, a peer-to-peer, ephemeral instant messenger client whose anonymity and security rely on the strength of Tor hidden services and (apparently) nothing else?

It seems unique and the idea of using Tor hidden services only rather than redeveloping the wheel is very appealing. But I’m not aware of any independent review of the client code.

Nick P April 21, 2014 12:58 PM

re TorChat

I was about to say it’s written by unknowns in C/C++ so endpoint attacks will follow regardless of protocol. Then, I saw this gem:

“Prof7bit has switched to working on torchat2, which is a rewrite from scratch, using Lazarus and Free Pascal.”

YES! Exactly the kind of thing I recommended in the Heartbleed thread. It’s not magic far as vulnerabilities are concerned, but it reduces risk. It’s also easier for a reviewer to vet Pascal than C/C++, which often resembles gibberish.
