Apple's iMessage Encryption Seems to Be Pretty Good

The U.S. Drug Enforcement Agency has complained (in a classified report, not publicly) that Apple's iMessage end-to-end encryption scheme can't be broken. On the one hand, I'm not surprised; end-to-end encryption of a messaging system is a fairly easy cryptographic problem, and it should be unbreakable. On the other hand, it's nice to have some confirmation that Apple is looking out for the users' best interests and not the governments'.

Still, it's impossible for us to know if iMessage encryption is actually secure. It's certainly possible that Apple messed up somewhere, and since we have no idea how their encryption actually works, we can't verify its functionality. It would be really nice if Apple would release the specifications of iMessage security.

EDITED TO ADD (4/8): There's more to this story:

The DEA memo simply observes that, because iMessages are encrypted and sent via the Internet through Apple’s servers, a conventional wiretap installed at the cellular carrier’s facility isn’t going to catch those iMessages along with conventional text messages. Which shouldn’t exactly be surprising: A search of your postal mail isn’t going to capture your phone calls either; they’re just different communications channels. But the CNET article strongly implies that this means encrypted iMessages cannot be accessed by law enforcement at all. That is almost certainly false.

The question is whether iMessage uses true end-to-end encryption, or whether Apple has copies of the keys.

Another article.

Posted on April 5, 2013 at 1:05 PM • 31 Comments

Comments

J. Oquendo • April 5, 2013 1:25 PM

Please don't confuse the issue(s) here. The DEA is complaining in reference to CALEA taps. iMessages are not sent via the same routes as regular phone traffic (they don't need towers). Consider, for example, what happens when you restore your phone. All they'd have to do is get a warrant for your iCloud storage. Remember, you can restore everything on a phone from iCloud, and that will include those encrypted messages.

Kind of misleading. It's not that they CAN'T decrypt; it's that a typical CALEA tap will never see those messages.

Nick P • April 5, 2013 1:27 PM

" It would be really nice if Apple would release the specifications of iMessage security."

Actually, it wouldn't. Apple users trust Apple anyway, as you've pointed out in essays. If their design reuses successful cryptoschemes, then it still benefits from previous scrutiny or lessons learned. Releasing the specifics, though, might lead to compromises. Right now iMessage has both a diversity and obscurity benefit for users.

Note that the common counterargument is that one should try to assume the enemy knows or can obtain the design and find any flaws. This assumption is very flawed, although understandable. Obstacles between enemy and information can slow down or entirely prevent this. I've successfully used a proper combination of good peer reviewed design elements and obfuscation to hold off determined attackers. NSA does this with Type 1 crypto.

That the DEA is complaining supports my point. They *might* have a skilled hardware reverse engineering group take it apart, then reconstruct the algorithms, then have codebreakers find flaws, then weaponize those, and then start cracking iMessage en masse. That's way more difficult than bypassing TrueCrypt, and they haven't exactly been good at THAT despite having the source.

Disclosure of iMessage source and/or specs means FBI crypto-coding guys would be [intelligently] probing source and running devices for problems. Finding a bypass might be easy for them. So, comparing the two, this situation dictates that no disclosure improves security rather than reduces it.

l0stkn0wledge • April 5, 2013 1:48 PM

@NickP:

You really believe a lack of disclosure is more secure? This is simply a variant on the old open vs. closed source debate.

The fact is, how do we know Apple has built a proper infrastructure that meets basic security requirements? To compare what they do to the development of Type 1 cryptography is absurd.

NSA has hundreds (if not thousands) of really smart people on staff who work on these things. Creating codes and breaking codes is part of their job. Hell, most government Secret/Top Secret data can now be encrypted with publicly developed algorithms (AES).

Ask Microsoft how well their self-developed "closed" system for LANMAN worked. It was still broken without people seeing code or having specs.
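The LANMAN failure mode is easy to quantify even without Microsoft's specs. A rough back-of-envelope sketch (the character-set sizes are the commonly cited ones, not figures from this thread):

```python
# LM hashing uppercases the password and splits it into two independent
# 7-character halves, so an attacker searches two small spaces instead
# of one big one. Rough comparison (69 ~= printable charset after
# uppercasing, 95 ~= full printable ASCII):
lm_space = 2 * 69**7      # two independent 7-char halves
full_space = 95**14       # what a real 14-char password space would be

print(f"LM effective search space: about 2^{lm_space.bit_length() - 1}")
print(f"Full 14-char search space: about 2^{full_space.bit_length() - 1}")
# The structural weakness was inferred from observed hashes alone --
# no source code or specs were needed to exploit it.
assert lm_space < full_space
```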

In no case is obscurity an appropriate form of security.

IRS • April 5, 2013 2:12 PM

Nick,

I agree that obscurity is an added layer of security. The problem is that it is not a very good layer of security (it is far easier to crack broken secret security than open security done well), and obscurity hurts the casual user, since it hides security flaws.

I do not really trust any company's guarantees that its software is secure; a true guarantee would require considering every possible way the product could be used (or misused), an impossible task. The primary measure of security, I think, is the amount of collective thought and creativity necessary to find the next security hole, and that is primarily an increasing function of the amount of analysis it has already received--a clear victory for open algorithms.

kog999 • April 5, 2013 3:43 PM

They don't need to break it when they can just ask Apple for it, and Apple will gladly hand it right over.

Daedalus • April 5, 2013 4:27 PM

@QnJ1Y2U "There's some speculation that this is just DEA disinformation. Here's Julian Sanchez's take:
http://www.cato.org/blog/..."

That was my first suspicion. And that is an excellent article which quickly proves it is trivial for feds to gain access to iMessage data.

I noticed this news item after reading about this new "accidental leak" [summary: "Anonymous" DoSes some Apple developers]:
https://threatpost.com/en_us/blogs/some-imessage-accounts-hit-hard-mass-messaging-dos-attacks-040113?utm_source=Home+Page&utm_medium=Top+Graphic+Bar&utm_campaign=Position+1

Maybe the two cases are related. While the above issue is simply a DoS, they are targeting Apple developers, and DoS can be used as a diversionary tactic. So maybe that case is more important than it seems.

I did not notice the DoS story when it came out. Probably most people did not. By creating a higher profile disinformation story on iMessage, they may have been hoping to compel the attackers to generate bragging noise on the IRC channels they hang out on.

The basic principle is "if someone is keeping a secret which they feel proud of, discussing aspects of that secret even indirectly before them can put pressure on them to talk about it".

Daedalus • April 5, 2013 4:56 PM

I do think the highest probability is that the article is not clever disinformation, but that it is real and continues a disturbing trend in the US of some in authority trying to weaken internet security as a whole to expand their own surveillance capabilities, specifically with US citizens in mind.

As a whole, this appears to be the trend with authorities in power who have access to surveillance capabilities. How they operate like a corporate monster in this way, I am not sure.

Probably it is simply an aspect of the politics of their secrecy culture. In any culture, people know what others will agree with, and what their supervisors will like.

Those who come up with zealous plans for "the team" (which conducts surveillance, often unethically, against their own citizens) know that any statement that increases their own group's power will be well accepted by the group as a whole, regardless of the intelligence or morality of the statement.

So, while to the general public this complaint by the DEA may sound like the yelping of some savage knuckledragger, in his own culture, which likely includes other DEA agents, FBI, DIA, CIA, NSA, and others, it resounds like gospel from the golden tongue of a celestial being.

Clive Robinson • April 5, 2013 5:05 PM

@ Bruce,

The U.S. Drug Enforcement Agency has complained (in a classified report, not publicly) that Apple's iMessage end-to-end encryption scheme can't be broken.

I would take that with a huge pinch of salt, at least as big if not bigger than Lott's Wife [1].

Firstly, nearly all reports from US Federal LEAs are "classified" to some extent, especially if they contain any kind of operational material. That said, however, classifying a report and then only issuing it to non-technical persons is a great way of preventing what the report says from being questioned, or more likely being called out as a fabrication.

The FBI are notable (due to ineptitude) for using this sort of report as leverage to get more funding, resources or legislation.

A simple question to ask: if this report is classified, how come it's been seen by CNET?

The chances are it's been deliberately leaked to up the ante for an upcoming request for more funding / resources / legislation.

For all the FBI's past whining about "going dark", when challenged they have not provided hard evidence that the funding / resources / legislation is actually required. At best they provide vague anecdotal evidence that cannot be tested in any meaningful way.

With regards,

I'm not surprised; end-to-end encryption of a messaging system is a fairly easy cryptographic problem, and it should be unbreakable.

Tut tut tut... In "theory" it is a fairly easy cryptographic problem... But in practice, it's actually very unlikely to be.

If we look at a little practical crypto history,

Firstly, we have "protocol" issues: SSL has recently had two, then there was WEP and a number of other protocol failures. Protocol design is hard, very hard, and even experts get it wrong. Somehow I doubt Apple stumped up for an expert review of their protocol. After all, look at Microsoft, with a history of failures with crypto and protocols; there have been so many you'd almost believe they were doing it deliberately.

Secondly, we have AES, which is theoretically secure but which, in many software implementations, has timing side channel issues, even today in well-used public library code.

Thirdly, there are all the unresolved issues to do with setup and negotiation of KeyMat; all systems where key exchange is not fully "out of band" are vulnerable to MiTM attacks, even those using PK certs...
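Clive's MiTM point can be sketched with a toy Diffie-Hellman exchange (demo-sized group and invented variable names; real DH uses 2048-bit-plus groups): without out-of-band authentication, nothing stops a middleman substituting her own public values in both directions.

```python
# Toy Diffie-Hellman man-in-the-middle: in-band key exchange alone
# cannot authenticate the far end.
import random

p, g = 2**61 - 1, 2  # small Mersenne-prime group, demo only

def dh_keypair():
    priv = random.randrange(2, p - 1)
    return priv, pow(g, priv, p)

a_priv, a_pub = dh_keypair()          # Alice
b_priv, b_pub = dh_keypair()          # Bob
m_priv, m_pub = dh_keypair()          # Mallory, sitting on the wire

# Mallory substitutes her public value in each direction.
alice_shared = pow(m_pub, a_priv, p)  # Alice thinks this is with Bob
bob_shared = pow(m_pub, b_priv, p)    # Bob thinks this is with Alice

# Mallory can compute both "shared" secrets, so she can decrypt and
# re-encrypt every message while bridging the two sessions.
assert alice_shared == pow(a_pub, m_priv, p)
assert bob_shared == pow(b_pub, m_priv, p)
```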

Then there is the generation of the KeyMat itself; this is notorious for going wrong and being easily broken. The simple fact is that generating truly random KeyMat is a very, very hard problem for a number of reasons. And that's before you talk about all the KeyMat distribution issues which also occur and blow holes in CSPRNG-generated KeyMat.

There's a very high probability that Apple, to solve one KeyMat issue (generation), has not gone down the TRNG route, but gone for a solution such as a CSPRNG (for example AES in CTR mode) in the factory to generate device "master keys" via a derivation algorithm, then used the resulting derived keys like TRNG-generated keys. As with the RSA two-factor key fob issue, once the "seed and secret" are stored outside of specialised storage, the whole system becomes vulnerable.

What's the betting that a phone's unique electronic serial number, used on the mobile phone network, correlates in some way to a master key in each device?
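To illustrate the failure mode Clive is speculating about (this is emphatically NOT Apple's documented scheme, just a hypothetical sketch with invented names): if device master keys are derived deterministically from a factory secret plus a visible serial number, anyone who later obtains the factory secret can regenerate every device's key.

```python
# Hypothetical factory key derivation; HMAC-SHA256 stands in for the
# AES-CTR CSPRNG Clive mentions. The secret and serial are invented.
import hmac, hashlib

FACTORY_SECRET = b"factory-seed-kept-on-provisioning-server"

def device_master_key(serial: str) -> bytes:
    # Deterministic derivation: no per-device entropy at all.
    return hmac.new(FACTORY_SECRET, serial.encode(), hashlib.sha256).digest()

# The phone gets its key once at provisioning...
phone_key = device_master_key("ESN-0012345678")
# ...but anyone holding the factory secret plus the serial number
# (printed on the box, broadcast on the network) derives the same key.
attacker_key = device_master_key("ESN-0012345678")
assert phone_key == attacker_key
```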

There are a whole load of other issues but just those I've given should give pause for thought.

On the other hand, it's nice to have some confirmation that Apple is looking out for the users' best interests and not the governments'

Hmm, I'd put a standard wager (i.e. a pint of the best) that Apple has gone down a route that would fairly easily let them be compliant with any changes to CALEA etc. that the US or other governments pass into legislation, rather than lose production time or market share.

As for looking out for users' interests, it appears that Google is fed up with the FBI turning up with NSLs and has asked a judge to tell the FBI where it can file them; it will be interesting to see what the outcome is.

[1] Whilst the original reference is biblical, there actually is a very large rock feature named Lott's Wife in the general vicinity of where the biblical story supposedly happened.

Clive Robinson • April 5, 2013 6:23 PM

I do wish people would stop chanting,

"Security by obscurity is not security"

Or words to that effect; at best, like most "knee jerk" mantras, it's usually ill considered.

Obscurity quite literally means "concealed from sight" (of others), and much physical security and all information security works by keeping information concealed from the sight of potential attackers.

For instance, one of the only crypto systems for which there is a proof of security is the One Time Pad. When you look at it, it relies on the KeyMat only existing as two copies, one at the TX end and one at the RX end, that never get used twice; nor are there any other copies, nor was the pad produced in a sufficiently deterministic way that it could be recreated.

If you care to think about it, the mantra is really saying "don't do the same thing twice or do it repeatedly in an obvious way".
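Clive's OTP example, and the "don't do the same thing twice" reading of the mantra, can be sketched in a few lines (a toy demo, not production code):

```python
# A One Time Pad is provably secure only while the pad is truly random,
# used exactly once, and held in exactly two copies. Reusing the pad
# leaks the XOR of the two plaintexts -- the key "drops out".
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"meet at the bridge"
pad = secrets.token_bytes(len(msg))   # the TX/RX shared KeyMat

ct = otp(msg, pad)
assert otp(ct, pad) == msg            # RX end recovers with its copy

# Now do the same thing twice, against the mantra:
msg2 = b"burn the documents"
ct2 = otp(msg2, pad)
leaked = bytes(a ^ b for a, b in zip(ct, ct2))
assert leaked == bytes(a ^ b for a, b in zip(msg, msg2))  # pad cancels
```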

At the lower levels of information security we hit a problem: when you are below the CPU in the computing stack you are very much into physical security, much of which relies on obscurity to work and to prevent others from bypassing it. The likes of the NSA, GCHQ, et al rely on "physical obscurity" to protect information secrets put into crypto processors etc., and they tend to enforce it by making tampering a somewhat hazardous process for an attacker, with the likes of thermite or explosives to ensure physical destruction to keep the secrets.

The problem, however, is how to maintain "secrecy" when you lose physical control of a device. Anything less than multiple anti-tamper devices is not going to work. And preferably the properties of the anti-tamper circuits are physically programmable beyond the capabilities of any attacker (i.e. if you can program it a million different ways and only ever make a few thousand devices, then you have a reasonable chance).

Security engineering is a complex subject, and "obscurity" in its various forms is a valid part of it. The skill comes in knowing how obscurity works and when it is valid to use it, not in reciting mantras prohibiting it.

Daedalus • April 5, 2013 10:20 PM

@Clive Robinson

:-) People have been doing that since the 90s...

And doing it with anonymous nicks... :-)

(That are obviously anonymous.)


Really, obscurity is a key part of security, the bread and butter of anyone that genuinely has anything to hide.

Physical security or otherwise...

Wael • April 6, 2013 1:45 AM

@ Clive Robinson, @ Daedalus

Really, obscurity is a key part of security...

Sounds right! It could, moreover, be argued that hiding a private key is "Obscurity". Had a related discussion not too long ago on this blog about "all cryptography are based on a shared secret". And a "secret", by definition, must be obscured. Otherwise, it ceases to be a secret...

@ Clive Robinson,

I would take that with a huge pinch of salt atleast as big if not bigger than Lott's Wife
Be careful with the amount of salt you consume. Not good for your blood pressure. But if you insist on eating Lott's wife size grains of salt, you may want to consider neutralizing that by drinking an equal amount of Hibiscus "tea". Make sure you drink it cold, though!

Besides, I thought you already consumed Lott's wife previously, except that she had one "t" in her name then:)

QnJ1Y2U • April 6, 2013 2:07 AM

@Daedalus

And doing it with anonymous nicks... (That are obviously anonymous.)

You can't be referring to me - my name really is 'QnJ1Y2U' (well, 'QnJ1Y2U=' in more formal settings :-)

Clive Robinson • April 6, 2013 4:49 AM

@ Wael,

Had a related discussion not too long ago on this blog about...

Yes, I remember it; it was at a time when the US was suffering early global meltdown, and power companies were enforcing power blackouts on consumers (but not politicos) to prevent their transmission kit melting off the poles. Oddly, those heatwave failures actually resulted in a beefing up of the power network that stood NYC in better stead when the hurricane came knocking a few months later... But I suspect there is worse to come when the solar cycle shifts round a bit.

Yes, you asked me a question and I hedged my answer because of hashes, which are treated in effect as a crypto primitive (OWFs on steroids) but without secrets, which makes them insecure on their own[1], as it leaves them open to what are in effect dictionary attacks[2].

For some reason many people equate hashes with either block or stream ciphers and, using "magic thinking", assume hashes give them the same sort of secrecy... And then stick a spear in the messenger's back when the messenger tells them the cherished assumption on which their elaborate system rests is false (I've still got the scars, and they tend to make you cautious when delivering the same bad news again ;-)

Anyway I remember the discussion evolved into another area entirely which was causing you some considerable thought. So I should ask if you have progressed any further or not? And if so in which direction?

[1] The use of hashes as "magic pixie dust" on the output of low-quality RNGs to (in their designers' eyes) make them "more secure" is a real example of "security through obscurity". I blame the chip maker Intel for this nonsense, and I suspect it is something the likes of the NSA, GCHQ et al have profited by quite happily for some considerable time.

[2] If your RNG only produces 16 unique outputs then it's only got 4 bits of entropy; shoving it through a hash function with a 512-bit output only gives 16 different 512-bit numbers, so the entropy remains the same. All an attacker has to do is work out what those 16 unique RNG outputs are and the system is broken. But worse, simple observation of the hash output will over time show there are only 16 different outputs, and that knowledge alone may be sufficient to break a system. This was one of the failings of the original Unix password system which gave rise to dictionary attacks.
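Footnote [2] is easy to demonstrate (a quick sketch; `weak_rng` is an invented stand-in for the low-entropy source):

```python
# Hashing a low-entropy source does not add entropy: an "RNG" with
# only 16 possible outputs, pushed through SHA-512, still yields only
# 16 distinct 512-bit values -- a trivially tabulated dictionary.
import hashlib, random

def weak_rng() -> int:
    return random.randrange(16)   # 4 bits of entropy, at best

hashed_outputs = {
    hashlib.sha512(bytes([weak_rng()])).hexdigest() for _ in range(10_000)
}
print(len(hashed_outputs))        # at most 16, never more
assert len(hashed_outputs) <= 16
```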

Nick P • April 6, 2013 10:24 AM

@ Wael

"all cryptography are based on a shared secret"

Not necessarily shared. Asymmetric crypto was designed so a shared secret was unnecessary. So, all crypto are based on a secret.

Wael • April 6, 2013 12:30 PM

@ Nick P

Not necessarily shared.
Yes, it was a typo; I noticed it last night but was too tired to correct it. I was composing a response to @ Clive Robinson including the correction. Good catch, though.

Wael • April 6, 2013 12:43 PM

@ Clive Robinson

Correction: "all cryptography are based on a shared secret"
should have said "all cryptography techniques depend on a secret"

hashes, which are treated in effect as a crypto primitive (OWFs) on steroids but without secrets,

Keyed-Hash Message Authentication Codes use a secret. But HMACs are used for something other than encrypting.

Anyway I remember the discussion evolved into another area entirely which was causing you some considerable thought. So I should ask if you have progressed any further or not? And if so in which direction?

Divided my thought process into three main areas and wanted to look at them separately:

1- Packet-switched networks. Problems: unpredictable channel characteristics, different routes taken, inherent unpredictable latencies, ease of DoS attacks. Then again there is the number of hops, special routes taken for a packet, etc... Had some ideas in that area, but quickly dismissed them. Will leave that for another day...

2- Circuit switched. Maybe easier to handle, but I have had no more thoughts or progress on that.

3- Short range / long range communications
a) Short range: component-to-component communications (intra-chip communications, IPC, ...)
b) Communications within one SoC. This may be easier to use since:
- Some channel characteristics are predictable (can be bounded within a range)
- Does not have to be provisioned at the factory

I am still having trouble dealing with environment changes that can affect these characteristics. The channel, whatever channel, is inherently noisy. I want to use this noise for securing the channel, but the noise is also not your friend. Want to "tame" it a little. This is where I am. I think about it once in a while.
Then, I started to think about it from a higher level, by taking one parameter at a time (just like in our C-v-P approach). Take latency for instance. Suppose we have two points A and B. The latency of communication can be measured between A and B in the protocol. If an intruder node C monitors the communication, which is (maybe implicitly) enciphered with that latency, then information would make no sense to node C, unless Can deduce the latency (another discussion)... Then again, we have to make sure the communication bandwidth is constant. Does not have to be at the maximum bandwidth as you suggested in your S-v-E discussion, unless you want to optimize efficiency as well. Here, I am trying to minimize side channel information leakage.

with a 512bit output only gives 16 different 512bit numbers so the entropy remains the same

Maybe a prototype device would be best. I sometimes stop thinking about this, because part of me tells me it's a pipe dream, the other part of me says...

Related is key space versus key size. People often say key size is most important. Take the extreme case where the key space allows only two keys: even if you choose a key that is a zillion bits long, the size does not help, since there are only two possible keys. You might as well have a one-bit key. Key space is at least equally important.
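Wael's point can be shown with a deliberately silly toy cipher (all names invented for illustration): a 256-bit key drawn from a two-element key space falls to a two-trial brute force.

```python
# A long key from a tiny key space is as weak as a one-bit key.
import hashlib

KEY_SPACE = [b"A" * 32, b"B" * 32]   # only two possible 256-bit keys

def keystream(key: bytes, n: int) -> bytes:
    return hashlib.shake_256(key).digest(n)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

secret = b"attack at dawn"
ct = xor(secret, keystream(KEY_SPACE[1], len(secret)))

# Brute force takes 2 trials, not 2**256 -- key SIZE did not help.
candidates = [xor(ct, keystream(k, len(ct))) for k in KEY_SPACE]
assert secret in candidates
```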

Wael • April 6, 2013 12:51 PM

@ Clive Robinson,

Have to do that before Nick P dings me again...

Errata to my previous post...

unless Can deduce the latency
Should be: unless C can deduce the latency

Maybe a prototype device would be best. I sometimes stop thinking about this, because part of me tells me it's a pipe dream, the other part of me says...

Should be placed above the block-quote:

with a 512bit output only gives 16 different 512bit numbers so the entropy remains the same

Clive Robinson • April 6, 2013 1:46 PM

@ Nick P,

Not necessarily shared. Asymmetric crypto was designed so a shared secret was unnecessary. So, all crypto are based on a secret

Actually, when you think about it, even asymmetric crypto has a common shared secret; it's just that it's not visible in the ordinary sense.

Look at it this way: both symmetric and asymmetric crypto have TWO mappings,

Ptxt {maps to} Ctxt, Ctxt {maps to} Ptxt

With symmetric crypto the mappings are actually symmetric, with the second being a true inverse or mirror of the first, and they stay within the bounds of a field (think: the first map is "turn left 90 degrees" and the second map is the inverse, "turn right 90 degrees").

With asymmetric crypto the mappings are still inverses of each other, but by going outside the bounds of the field and modding back in, they can take whatever alternative route they like and as such are not mirrors of each other (think: the first mapping is "turn left 90 degrees" and the second is "turn left 270 degrees"; as long as the total is a multiple of 360 you get back to where you started).

The shared secret is the information on how to build the two mappings (via the primes P and Q), which luckily can be obfuscated by the different mapping-building processes.
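A textbook-sized RSA example makes the "shared secret behind both mappings" view concrete (toy primes only, not a real parameter choice):

```python
# Both the encrypt and decrypt mappings are built from the same
# factorisation (p, q); whoever holds it can rebuild the "private"
# mapping from the "public" one, collapsing the asymmetry.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)   # n = 3233, phi = 3120

e = 17                              # public mapping exponent
d = pow(e, -1, phi)                 # private mapping, derived from p, q

msg = 65
ct = pow(msg, e, n)                 # first mapping:  Ptxt -> Ctxt
pt = pow(ct, d, n)                  # second mapping: Ctxt -> Ptxt
assert pt == msg

# Anyone who knows p and q can regenerate d from the public e alone.
assert pow(e, -1, (p - 1) * (q - 1)) == d
```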

Now... If I understand it correctly, in theory the number of mappings for such systems does not have to be limited to just two, and likewise you can have parts of mappings... Thus potentially you could build a system with as many keys as you need; you just have to find new methods for building the mappings...

It's just one of those things that causes me to lose an hour or three of my life every so often contemplating ways to do it...

David Golumbia • April 6, 2013 2:28 PM

Bruce, I find it fascinating that when you directly point out that technical security solutions will never solve "the security problem," your commentators mostly agree; yet as soon as you turn to an issue where a technical question is foregrounded, you get even more comments that presume technical solutions will work.

Technical solutions will not work. A clear regime of law and practice is the only thing that will work.

Furthermore, the inherent view of technicians that security should go to him-who-has-the-best-crypto is deeply antidemocratic. Security (or rather, privacy) is a right and a legal obligation, not a software feature.

Clive Robinson • April 6, 2013 4:02 PM

@ David Golumbia,

Technical solutions will not work. A clear regime of law and practice is the only thing that will work

Err, technical solutions do work; that, however, is not the problem, and neither law nor practice is likely to solve it either.

As I've repeatedly said, technology is agnostic to its use; it's the mind behind the hand that decides the use, and other minds that decide if that use is good or bad.

The real problem with the technology we are talking about is it's rapidly falling cost and geometric increase in capability.

The result is that the cost of surveillance is now negative. That is, it's now profitable to carry it out.

It is such a tipping point that those with vested interests won't allow legislation or practice to change to stop the profits rolling in.

One area where overhead surveillance technology has paid big dividends is "Google Earth" and property taxation. Until very recently it was just way, way too costly to check if people were violating building codes etc., and the system relied on others making complaints. Within the past year or two it has become profitable to employ people to go through GE and other overhead surveillance photographs and thus catch those with unregistered buildings or other structures on which extra taxation was due. There are already companies developing systems that will compare pictures with records, and thus the process will be almost fully automated at negligible cost.

Now, I don't know if you consider this use good or bad; it's a simple matter of viewpoint. What is not a matter of viewpoint is that such technology will encourage unmanned aerial vehicles (UAVs / drones) to be flown at ever-decreasing intervals with rapidly improving optics. The data recorded will almost certainly be used for other things under the usual mission-creep rules (catching and fining nude sunbathers etc.), and there is a very high probability it will be either mishandled or deliberately abused in some way.

Clive Robinson • April 7, 2013 6:07 AM

@ Wael,

Keyed Hash Message Authentication Codes use a secret. But HMACS are used for something other than encrypting

Yup, you know that, Nick P knows that, and many of the readers on this blog know that, including me ;-)

However... there are a lot of "code cutters" working in places with extensive product reach who don't appear to understand the difference, and who put on their "magical thinking hats" and have visions of Magical HASH appearing from the smoke and mirrors to solve all their little problems. So they grab the first book that has a code snippet of MD5 etc., and the rest follows like snow off a high-pitched roof...

Nick P • April 7, 2013 10:00 PM

@ l0stkn0wledge, IRS

"To compare what they do to the development of Type 1 cryptography is absurd."

We're talking protocols, mainly. That's where the proper comparison is. I've found potential attacks on A1-class systems that NSA certified and failed to breach. Attacks were routinely found on open crypto protocols. Lessons were learned by open protocol designers, *maybe* by closed Type 1 designers. This leads to my hypothesis: the only reason Type 1 systems haven't been hacked is because the secrecy of their design and implementation is maintained. Same for the A1-class systems that had potential flaws. These all combined good design, independent evaluation, trusted personnel and secrecy, with strong results. My previous efforts had the same traits and the same results.

"Ask Microsoft how well their self developed "closed" system for LANman worked. It was still broken without people seeing code or having specs."

And that was in 1998. They lacked the knowhow, contractors, vetted libraries/protocols, etc. we have today. There's also plenty of worked examples in the secure messaging space whose details are open enough. A modern company has plenty to build on today. It basically becomes a challenge of choosing proper components, proper integration and proper use in production.

"The problem is that it is not a very good layer of security (it is far easier to crack broken secret security than open security done well), and obscurity hurts the casual user, since it hides security flaws."

This is a point underlying many objections to my claim. However, we must remember we're talking about smartphones (bad for security) made by a company not known for security (even worse). The end result is people buying a cool, useful, etc. phone with some "maybe safer" functionality. The iMessage security goal will probably be good enough for its intended users, and people wanting truly secure messaging shouldn't be using a mainstream smartphone anyway. There are simply too many vulnerabilities, so an open design of the crypto is unlikely to help that much.

I also can't overstate how often obscurity has saved systems, networks, pieces of information and lives. It's best combined with true security measures. Even so, the obfuscation/obscurity stopped the people who otherwise would have done the damage. And "keep source secret" is only one obscurity technique, and a very weak one at that.

"The primary measure of security, I think, is the amount of collective thought and creativity necessary to find the next security hole, and that is primarily an increasing function of the amount of analysis it has already received--a clear victory for open algorithms."

That's an interesting thought. It's one avenue. The correct by construction approaches do the opposite: try to specify and control information flow or operation to the point that unacceptable states are impossible. Each side has its proponents.

CLARIFYING MY POSITION

Alright, let me clarify. My original comment was meant to tie into others I've made on this blog. It was mainly for regulars. However, some people might for good reason miss the broader stuff it tied into. Here's how I think Apple and other companies should approach securing their products to get the benefits of both open and closed systems.

1. Use mature and vetted components, protocols, crypto best practices, etc where possible. That prevents more problems than it creates.

2. Use a formal specification (abstract, at least) of behavior, identify all states, and create a response for each.

3. Make sure any specification or analysis, especially formal methods, is concrete enough to reflect the reality of the operation.

4. Implement the application using inherently safe languages or managed runtime. The runtime should also be small, simple, easy to analyse if possible.

5. Make easy variations of different aspects of the protocol and send them like you would send the key. This diversifies the protocol in a way that's a black box to the adversary. A one-size-fits-all attack will no longer work.

6. Create diverse implementations of several specified protocols. This yields plenty of variations; only a very novel attack catching all of them will have widespread success.

7. Anything custom that's developed is reviewed during development by an independent team experienced in flaw finding with access to specialists in different aspects of application security. Problems found are fixed before release.

8. A strong update mechanism must exist that allows for patches of protocol issues and optionally fall-back prevention.

Each of these has empirically proven value in isolation. Combining them should result in great improvements to security. Note that open source isn't listed there at all. The key security benefit of open source is reviews. That's in my scheme. The implementation can remain secret or obfuscated. Yet, it will also have strong design, use vetted material, handle failures well, and be maintainable as problems are found.
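Point 5 above could be sketched like this (a speculative illustration only; the field names and derivation scheme are invented, not Nick P's actual design): derive protocol-variant parameters from the shared key, so each deployment speaks its own dialect while peers with the same key stay compatible.

```python
# Keyed protocol diversification: a deterministic "dialect" (message
# field order plus a tag constant) is derived from the shared key, so
# a one-size-fits-all parser written by an adversary fails.
import hmac, hashlib

def protocol_variant(shared_key: bytes) -> dict:
    seed = hmac.new(shared_key, b"protocol-variant", hashlib.sha256).digest()
    fields = ["nonce", "timestamp", "payload", "mac"]
    # Deterministically permute the field order from the seed.
    order = sorted(
        fields,
        key=lambda f: hmac.new(seed, f.encode(), hashlib.sha256).digest(),
    )
    return {"field_order": order, "tag": seed[:4]}

# Same key -> same dialect (peers interoperate); different key -> a
# different, unpredictable dialect for each deployment.
a = protocol_variant(b"site-A-key")
assert a == protocol_variant(b"site-A-key")
b = protocol_variant(b"site-B-key")
print(a["field_order"], b["field_order"])
```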

The alternative is open design. I'm not analysing that here; we have enough data to say what happens. Open designs have fewer issues, especially obvious issues, than their obscure proprietary competitors. However, they still end up having flaws at the protocol, design, error-handling, coding, configuration, etc. levels, and there have still been plenty of attacks on them. The survivors were often systems or instances that used different implementations of open services or protocols.

Also, most of the exploits currently come from security researchers and TLAs, and both will get you anyway. At best, you reach a point of diminishing returns on vulnerabilities found. The 1,000-yard view is that both open and closed systems get hacked. My approach would eliminate issues from closed systems without introducing the potential security and commercialization problems of the totally open extreme. The diversification, safe-language implementation, and throwaway nature of those protocols add extra benefits that would have mitigated some major open-protocol issues big time.

stvsApril 8, 2013 10:52 AM

They lacked the know-how, contractors, vetted libraries/protocols, etc. that we have today. There are also plenty of worked examples in the secure-messaging space whose details are open enough, so a modern company has plenty to build on.

Apple's security track record is mixed, even when no-brainer modern tools should be used, e.g. After leaving users exposed, Apple fully HTTPS-protects iOS App Store. And OS X Server VPN only knows about PPTP and certificateless L2TP. And so on.

I'd like iMessage to be secure, but anyone who uses it knows you receive decrypted messages on every iMessage device, including OS X's Messages.app, so it's safe to assume it's as secure as anything stored in iCloud -- Apple has your keys and can produce them if asked.
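That multi-device point is worth spelling out: for every device to read a message, the sender must encrypt to every device key, and whoever runs the key directory decides what keys that is. A toy sketch, where all names and the XOR "cipher" are stand-ins purely to show message flow, not Apple's actual design:

```python
import hashlib
import os

# Toy "cipher": XOR against a keystream derived from the key. Applying it
# twice restores the plaintext. Illustrative only -- not real encryption.
def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    stream = hashlib.sha256(key).digest()
    return bytes(m ^ stream[i % 32] for i, m in enumerate(msg))

directory = {"alice": {}}                  # user -> {device_id: device key}

def register(user: str, device_id: str) -> None:
    directory[user][device_id] = os.urandom(32)

def send(user: str, msg: bytes) -> dict:
    # The sender encrypts to every key the directory lists for the user.
    return {d: toy_encrypt(k, msg) for d, k in directory[user].items()}

register("alice", "iphone")
register("alice", "macbook")

# The directory operator can silently add its own "device"...
register("alice", "wiretap")

# ...and every message is now readable with the added key.
ciphertexts = send("alice", b"hi")
assert toy_encrypt(directory["alice"]["wiretap"],
                   ciphertexts["wiretap"]) == b"hi"
```

So even with per-device encryption, security reduces to trusting the directory operator not to add keys -- which is exactly the "Apple has your keys" concern.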

What are the actual ways to accomplish secure point-to-point messaging on an iOS device? The only method I'm aware of is installing your own S/MIME certificate and using encrypted email. I believe that iMessage, FaceTime, and all the iCloud stuff are secure between the device and Apple's servers, but not end-to-end. And I am unaware of any Jabber IM client that provides even device-to-Jabber-server security.

AlanApril 8, 2013 11:28 AM

I would guess that Apple also has the technical capability to "root" any iPhone: in other words, they could push software down to your phone, without permission or notification, that would give them or law enforcement remote access to all of the data on your phone and all of the communications to and from it. Given that capability (which could surely be developed if it did not already exist), I wonder whether, and under what circumstances, Apple would be willing to use it if they received a request from law enforcement or another government agency. Regardless of the design of iMessage, I don't think it could ever be considered secure from Apple or those Apple might be willing to assist.

Nick PApril 8, 2013 12:55 PM

@ stvs

"Apple's security track record is mixed, even when no-brainer modern tools should be used, e.g. After leaving users exposed, Apple fully HTTPS-protects iOS App Store. And OS X Server VPN only knows about PPTP and certificateless L2TP. And so on."

No doubt. We KNOW they won't go for a totally open system because they love proprietary stuff, so that whole debate is pointless in this case. The next step is: how do you get OSS benefits in a closed system? That's why I added independent V&V per iteration to my model. It would catch crap like that.

"What are the actual ways to accomplish secure point-to-point on an iOS device? The only method I'm aware of is installing your own S/MIME certificate and using encrypted email. I believe that iMessages, FaceTime, and all iCloud stuff are secure between the device and Apple servers, but not secure end-to-end. And I am unaware of any Jabber IM client that provides even device-to-jabber server security."

There ARE solutions; they just all seem to be proprietary. One developer sidestepped the iOS app development and S/MIME issues by making a web-based secure-messaging solution. Hush and ZixCorp took that approach on the desktop. Maybe combine native apps (for key/message privacy) and the web (transport) in a new way that suits mobile. I know of no mature FOSS solutions, though.

@ Alan

" Regardless of the design of iMessage, I don't think it could ever be considered secure from Apple or those Apple might be willing to assist."

Agreed.
