End-to-End Encrypted Cell Phone Calls
Android app. (Slashdot thread.)
Sebastian • May 27, 2010 7:11 AM
If only there was more PGP / GnuPG / OpenPGP support on today’s smartphones.
Being able to have your inbox with you on your phone is nice, but then you can’t read or send encrypted messages.
http://code.google.com/p/android-privacy-guard/ might do the trick for Android devices, but users of other OSes are left behind.
Alan • May 27, 2010 7:41 AM
Note that the ZRTP protocol used by the RedPhone app only secures against a MitM attack if the two users are able to recognize each other's voices, and if they take the time to do a voice confirmation of the authentication strings generated by the protocol.
I’m not sure what the TextSecure app or the “Off The Record” protocol do to protect text messages from a MitM attack. Has anyone reviewed these protocols?
uk visa • May 27, 2010 7:50 AM
Even if these apps fail to deliver, more will be developed that will work.
This is what happens when our governments are too intent on taking any privacy away from us.
B. Real • May 27, 2010 8:12 AM
I hope that this or another similar application will be made to interoperate with Zimmermann's Zfone.
BF Skinner • May 27, 2010 8:32 AM
Sounds complicated. It’s gotta be easy if end users are going to be the operators.
Sigh. Looks like I will just have to get a 'droid. 🙂 (Wonder if I can expense it.)
If this or other apps do become popular, it does sound like we're going to have a reflash of the 90s fight over Clipper and key escrow.
The key argument some PITA kept making way back then was that the escrow system becomes a target for unauthorized access, as in . . .
"the Athens Affair, a situation in 2005 when lawful-intercept capabilities in Ericsson equipment were used to spy on Greek politicians including the country's prime minister" (multimedia capability would be nice here... this quote really needed the Man from U.N.C.L.E. soundtrack backing it.)
And that was the original take on the cause/vector of the China/Google compromise.
Jeremy • May 27, 2010 9:16 AM
@Alan: The latest version of OTR uses the "socialist millionaire protocol" to reduce the possibility of a MitM attack. Basically, DH is used to create an unauthenticated encrypted channel, and then the socialist millionaire protocol is used within that channel for authentication. Earlier versions of OTR used pre-shared fingerprint IDs for authentication.
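The unauthenticated DH step Jeremy describes can be sketched in a few lines. This is a toy: the 64-bit modulus and generator are for illustration only, while real OTR uses a 1536-bit MODP group, and the socialist-millionaire step that runs afterwards is what defends against an active MitM.

```python
import secrets

# Toy finite-field Diffie-Hellman. P is the largest 64-bit prime and
# g = 5 is illustrative only; real OTR uses a 1536-bit MODP group.
P = 2**64 - 59
G = 5

a = secrets.randbelow(P - 3) + 2   # Alice's ephemeral private exponent
b = secrets.randbelow(P - 3) + 2   # Bob's ephemeral private exponent
A = pow(G, a, P)                   # public values, exchanged in the clear
B = pow(G, b, P)

k_alice = pow(B, a, P)             # both sides derive the same secret...
k_bob = pow(A, b, P)
assert k_alice == k_bob
# ...but an active MitM could have substituted its own A and B, which is
# why OTR runs the socialist millionaire protocol inside this channel.
```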
Fabio Pietrosanti • May 27, 2010 9:31 AM
Still very young, and a lot of work to be done on this (I'd expect at least 14-20 person-months of work to make it production-grade for enterprise/government usage).
One issue is that it's not standard VoIP: signaling runs over a custom proprietary protocol that uses ZRTP as the key-exchange system.
So it uses SMS for signaling instead of SIP over a TLS channel.
It doesn't protect against call-log generation on the mobile operator's systems, and it lacks integration with existing PBXs such as FreePBX.
The ZRTP exchange system of RedPhone currently:
– does not support ZRTP key caching/continuity (the user has to check the SAS every time)
– uses the standard Java RNG and does not use a physical source of entropy such as microphone samples
– uses Speex, a VBR codec that's subject to the language-identification vulnerability described at http://cs.unc.edu/~fabian/papers/voip-vbr.pdf
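The VBR point is worth illustrating: length-preserving encryption (as in SRTP) leaves per-frame sizes visible, and a VBR codec's frame sizes track the speech itself. A minimal sketch, with made-up frame sizes and a throwaway XOR "cipher" standing in for the real SRTP transform:

```python
import secrets

def xor_encrypt(frame: bytes, keystream: bytes) -> bytes:
    # Length-preserving, like a stream cipher: ciphertext size == plaintext size.
    return bytes(a ^ b for a, b in zip(frame, keystream))

# Hypothetical VBR frame sizes; in a real codec these track the phonemes.
vbr_frames = [b"x" * n for n in (12, 38, 25, 60, 12)]
ciphertexts = [xor_encrypt(f, secrets.token_bytes(len(f))) for f in vbr_frames]

# The eavesdropper never sees plaintext, but the size pattern leaks
# straight through the encryption, which is what the cited
# language-identification attack exploits.
assert [len(c) for c in ciphertexts] == [len(f) for f in vbr_frames]
```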
It's also true that this is almost an alpha release, version 0.1, so I expect those things will be fixed.
And the guy behind the application is a smart hacker, so I expect he will do something good over time; it's just that there is a huge amount of work to be done on the telecommunications side.
I'm not confident about some technology choices, such as the Speex codec, which can't run over GPRS CS1 networks with decent quality.
You need an ultra-narrowband codec, with proper tuning and testing, to do that; Speex is sub-optimal for GPRS CS1 networks or Thuraya satellite links.
Making a voice encryption product that works properly requires a huge effort in telecommunications technology, not strictly in security, and that's something you don't appreciate until you start practicing it.
It's a nightmare; I want networks that work.
I will start working on an Android version of my ZRTP-secured apps late this summer, but I expect to use C code with the Native Development Kit in order to use hardware codecs with ultra-narrowband capability, making the secure VoIP engine almost independent of the underlying Java framework.
Much like the sipdroid guys have done for Android on the basis of mjsip.
Fabio Pietrosanti (naif)
albatross • May 27, 2010 11:54 AM
I hope this app becomes available for the iPhone soon, too. This won't stop a dedicated attack (someone putting spyware on your iPhone), but it will make massive fishing-expedition-type wiretapping much less powerful, if it becomes widespread. And it should; there are good reasons to believe that exactly that kind of eavesdropping is being done now and has been done for a while. (And the telecom immunity bill and "look forward, not backward" policies of the Obama administration have made it clear that we can't expect the law to protect us from this kind of illegal eavesdropping.) Google's decision to offer SSL-protected access to its main search page is another step toward responding to this.
Ten years ago, it made some sense to only encrypt the critical stuff your application sent out. Today, it’s nuts to do that. Instead, we ought to be moving toward a world where everything is encrypted by default, unless there’s some special reason it can’t be.
Nobody Special • May 27, 2010 12:54 PM
Fishing expeditions tend to use call pattern analysis rather than listening to calls.
The big value of this is once everybody encrypts all calls then an encrypted call isn’t a big red flag that you are doing something worth investigating.
At the moment encryption is like having a pager in the 80s. Having a pager && !ER doc = drug dealer (or at least as far as the police were concerned)
TheOtherGeoff • May 27, 2010 12:56 PM
“Ten years ago, it made some sense to only encrypt the critical stuff your application sent out. Today, it’s nuts to do that. Instead, we ought to be moving toward a world where everything is encrypted by default, unless there’s some special reason it can’t be. ”
Not a new thought... not even 10 years ago... 18 years ago, Phil Zimmermann stated (with DoJ officials in the room monitoring his every word in light of the allegations of him 'exporting' PGP out of the U.S.), "...until everyone encrypts, encryption will be considered suspicious." At that time (and now), it's suspicious to the government, which feels you have something to hide... but now, encrypted stuff is 'of interest' to criminals too. And the unencrypted stuff is like panning for gold... get access to enough of the stream, and the nuggets will just fall out.
Problem: to encrypt everything means everyone must be carrying their [private] key, and there must be a public place for people to register their [public] keys.
What is still missing is the big trusted ‘public key ring in the sky’ where everyone submits their bonafides, and gets a set of keys, and posts their public key to the world.
Then my private key(s) can be put into phones, and PCs, and if I don’t care, I can proxy it to services like Facebook/Gmail.
Of course that central key ring becomes ground zero for every hacker in the world. And each individual's private key on an XP PC is just as well protected as his Facebook and bank passwords (send an army of bots to steal those private keys).
The encryption battlefield still suffers from weakness at the endpoints, let alone the infrastructure to support ‘encrypt everything’.
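The "public key ring in the sky" idea can be sketched with textbook RSA (tiny primes, no padding, and a hypothetical identity, purely illustrative): register a public key under an identity, keep the private key on the owner's device, and let anyone verify a signature via the keyring, which is exactly why that central keyring is such a target.

```python
import hashlib

# Toy textbook RSA (tiny primes, no padding) to show the moving parts.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private key stays on the owner's device

keyring = {}                         # the central server every hacker targets
keyring["alice@example.org"] = (n, e)  # hypothetical identity

msg = b"meet at noon"
h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
sig = pow(h, d, n)                   # signed with Alice's private key

# Anyone can fetch the public key from the keyring and verify.
n_pub, e_pub = keyring["alice@example.org"]
assert pow(sig, e_pub, n_pub) == h
```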
We are waiting for data (storage) encryption on Android!
As mobile phones become more and more like mobile computers, we need more security for the private information on them.
HJohn • May 27, 2010 2:21 PM
@BF Skinner: “Sounds complicated. It’s gotta be easy if end users are going to be the operators.”
That seems a never ending problem, doesn’t it?
I use a ton of freeware products. Not because I'm a cheapskate (I use commercial ones too), but partly because, as an educator in the community, I run into a depressing number of people who don't want to purchase even a cheap solution, let alone have the patience to configure it. One key for me when I recommend a product is that its default installation is sufficient (or the installation walkthroughs are simple).
I see it similarly with phone calls. If it is too complicated, they simply won't bother.
Nick P • May 27, 2010 3:32 PM
@ BF Skinner and others
No, it's pretty simple: it won't work. We've discussed this before, and we even had Frank Rieger of Cryptophone, probably the best out there, show up during the debate. An Android phone is likely to have firmware, drivers, and OS-supporting software that have vulnerabilities. The OS's security model is better than most, but there may be a circumvention there. Finally and most worrying, the firmware might have a remote update feature. Any such feature can be used by feds as a backdoor. Have no doubt that, if you really matter to them, their techs will be more than capable of handling a malicious update of a phone or exploiting an open-source platform that's so complex. Cryptophone settles on one phone and OS, then hardens the crap out of it. Even they can't prove they have no backdoors. 😉
What to do?
Well, it depends on your situation. If you don’t worry about Feds knowing your plans, then buy Cryptophones or register to get truly secure Type 1 certified phones like Sectera Edge. They are a little hard to get and maintain, so most people just get Cryptophone. If you are concerned about the government, decide which one and then get a secure phone from a competing government. They will know your secrets but probably not care.
If you want a truly secure device, you will have to design it yourself. Start with a trustworthy processor with a two-state architecture, an MMU, and preferably an IOMMU, from a non-American company. Develop firmware/BIOS along with application software to ensure POLA. The OS should be one of the recent separation kernels. Run the phone OS or UI paravirtualized in user mode, isolating the security-critical components beside it. Have formal methods guys develop an interaction scheme for the components that ensures the secrets stay secret and minimizes covert channels. The device must absolutely provide a trusted path that controls all HIDs. The master secret that encrypts the others should be a combination of an on-phone secret and a user-provided secret. This phone should do the job and be certifiable to around EAL6 on the Common Criteria.
Of course, the above option is quite expensive. I'd estimate it to cost several million dollars to develop and prototype. The best solution is to avoid the mobile phone part altogether. If you can accept a somewhat bulky but portable device, you can use a dedicated nettop/SBC/COM with hardened firmware/BIOS/OS, Intel vPro, and a carefully tuned VOIP/ZRTP stack. I keep thinking about building one of these out of a VIA Artigo or similar box, as VIA has a hardware RNG and acceleration for AES, SHA-256, and RSA components. They don't have the IOMMU that made me want vPro, though. (Sighs) Btw, avoid processor and firmware flaws by reading the errata, esp. on vPro. The newer processors have fewer flaws.
My thought is to just grab an Intel Core Quad Extreme (averaging 50+ errata), turn off all but one core (defeating sync/cache errors), and disable everything I don't need in the BIOS. Use open-source OKL4 as a pseudo-separation kernel, user-mode OK Linux as the OS, and the crypto / HID drivers / trusted components as native apps on OKL4. Seems like it would be a start. Not to mention, my tiny apps would have a lot of cache to work with. (Which also introduces a covert timing channel if you don't use a fixed scheduling algorithm for processes or if you give them high-resolution timers. Real crypto security is a pain in the ass, if you ask me.)
Dale • May 27, 2010 3:47 PM
I emailed Moxie about this yesterday. The voice call uses the same in-band authentication as Zfone. Each endpoint gets a number based on the key exchange, which they can read out loud to each other. If the man in the middle is faking the keys, each endpoint will get a different result.
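The in-band check Dale describes can be sketched roughly like this (a hypothetical derivation: SHA-256 truncated to four base32 characters, standing in for ZRTP's actual SAS computation). Both ends hash the shared secret from the key exchange and read the result aloud; a MitM who substituted keys leaves the two ends with different secrets, hence different strings.

```python
import hashlib

ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"

def short_auth_string(shared_secret: bytes) -> str:
    # Take the top 20 bits of the hash as four 5-bit base32 characters.
    bits = int.from_bytes(hashlib.sha256(shared_secret).digest()[:4], "big")
    return "".join(ALPHABET[(bits >> shift) & 0x1F]
                   for shift in (27, 22, 17, 12))

# Same secret on both ends -> same short string to read out loud.
# A MitM running two separate key exchanges gives the endpoints
# different secrets, so the spoken strings (almost surely) disagree.
assert short_auth_string(b"shared") == short_auth_string(b"shared")
```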
BF Skinner • May 27, 2010 6:25 PM
As someone reminded me today, legacy is/always will be with us (there are more COBOL programmers than any other type in the US today — frightening). IT's got to adapt, adopt and improve. Including using off-the-shelf platforms.
It's been years now, but wasn't the NSA working on "SCIF on the hip" using Motorola, intending to provide a platform that could be purchased by federal, state, local, and tribal governments AND private concerns? Their plan was to permit interoperability between different government levels (USSS and a county sheriff) during special security events, and TS/SCI for ever-more-mobile flag officers. The only real difference would be an optional hardware module and the ciphers. What happened with that?
Mark Wooding • May 27, 2010 6:25 PM
@ Sebastian: “If only there was more PGP / GnuPG / OpenPGP support on today’s smartphones.”
Well, Nokia’s N900 comes with a fully operational copy of GnuPG installed. And OpenSSL. Unfortunately, it’s not integrated with the user-level apps (GnuPG is used to authenticate OS upgrade packages). But it’s a start.
Nobody Special • May 27, 2010 9:53 PM
What happened with that?
Feds, State, Local, Tribal governments
AND private concerns
All cooperating and trusting each other?
Davi Ottenheimer • May 27, 2010 10:18 PM
“At the moment encryption is like having a pager in the 80s”
If only that were true…I don’t see the same “cool” effect happening with encryption.
Maybe that’s the problem. Creating supply (making it easy/available) doesn’t bump usage nearly as much as demand.
Bruce, maybe you’re the man to get Gaga to add some lines about encryption to her Telephone song?
Mrs. Spook • May 27, 2010 10:42 PM
Nice! But keep in mind that 95% of cellphones are vulnerable to simple side-channel attacks...
Robert • May 27, 2010 10:59 PM
I just don't get it.
Why even try to make the cell phone itself secure? I'd move the security to a BT headset and make the system usable only in this configuration. Design a special Bluetooth headset with a low-bit-rate vocoder and an AES stream cipher; the crypto key handling between the BT headset and the phone is easy. At the phone level it can be encrypted again, within a much larger set of random/enciphered voice streams.
It seems easier to focus on a dedicated piece of hardware than to try to deal with all possible backdoors in a beast as complex as an Android phone.
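Robert's headset design boils down to a counter-mode stream cipher over vocoder frames. A minimal sketch, with a hashlib-based keystream standing in for the AES engine he proposes (not a vetted construction; real hardware would use AES-CTR):

```python
import hashlib

def keystream(key: bytes, counter: int, n: int) -> bytes:
    # Derive n keystream bytes from (key, frame counter, block index).
    out = b""
    block = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")
                              + block.to_bytes(4, "big")).digest()
        block += 1
    return out[:n]

def crypt(key: bytes, counter: int, frame: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts.
    ks = keystream(key, counter, len(frame))
    return bytes(a ^ b for a, b in zip(frame, ks))

key = b"\x01" * 32
frame = b"20ms vocoder frame"
ct = crypt(key, 7, frame)           # counter must never repeat per key
assert crypt(key, 7, ct) == frame   # round-trips back to the plaintext
```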
Nick P • May 27, 2010 11:55 PM
@ BF Skinner
I hadn't heard about that. I'll have to look into it. The only similar COMSEC interoperability standard that I'm aware of is SCIP. (Is that what you were referring to?) It's a voice encryption protocol designed to work in all the deployment scenarios. It supports Firefly, Type 1 algorithms, etc. All the DOD/NSA buzzwords. 😉 The Sectera Edge that I mentioned earlier provides this in the Type 1 certification, as do L3's communication products. AFAIK, SCIP is a mandatory part of the Crypto Modernization Initiative. Like its cousin HAIPE, it's also classified.
Clive Robinson • May 28, 2010 12:07 AM
“Seems easier to focus on a dedicated piece of hardware…”
Simple answer is it is…
Also there is an issue with formal proofs called either axioms or assumptions (take your terminology pick; they essentially mean the same thing).
A classic example of this occurred just the other day (I'm surprised Bruce has not blogged about it, HINT HINT).
Basically, a group of bods over at the University of Toronto have broken a commercial Quantum Key Distribution (QKD) device that is "theoretically 100% secure".
They did this by "re-visiting" the assumptions in the formal proof about error-rate levels (the 20% margin) and realised the formal proof only accounted for natural noise, not error noise at the originator (Alice).
They then found a way for Eve to exploit this omission and stay at a 19.7% error rate; thus Alice and Bob, seeing less than the 20% margin, assume (incorrectly) they are 100% secure, because that's what the proof says....
To avoid the usual commercial outcry they went ahead and demonstrated the break on a piece of commercial quantum cryptography equipment.
You can read more on arXiv, where a PDF of their paper is available.
Nick P • May 28, 2010 1:29 AM
Thanks for taking the time to reply and voice a counterpoint. I love it when the blog seeds subtopics that might lead to more secure systems. My original reply to your post was kind of big, as you probably expected. 😉 I’m going to try a new format, though, to ensure non-technical readers get something out of it. I’m basically going to do the high-level reply to each point first, then do some other paragraphs with more detailed, technical content. In addition to the content, I’d like some of you to tell me if you think the format itself is a bit more readable.
Abstract Answer to “Why secure the phone?”
The phone must be secured. The reason has nothing to do with secure calls. The real reason is that users trust it more every day and will continue to trust it. Conclusion: we either have to make the users' activities more trustworthy or do that for the phone's operation. Evidence to support my view includes password managers, banking authentication codes, medical data, and even digital-signage apps showing up on smartphones with no real security. It's hard to make a COTS smartphone that does all that securely. It's also quite labor- and capital-intensive to make dozens to hundreds of custom solutions that try to operate securely through insecure devices. It just seems easier to me to use existing high-assurance kernels and middleware to create a secure mobile platform. And one that just happens to make calls, too. 😉
Abstract Answer to “Why not this headset?”
Well, let's pretend we just wanted an untrustworthy phone OS and applications on it, and a secure call capability was the only truly secure requirement. We have to ensure that the headset can't be subverted by the untrusted phone or DOSed by any phones during matchmaking. TEMPEST, if considered, is also hard to do in a headset, but could be added to a phone platform later. (This has already happened with Type 1 phone designs.) The limited number of potential manufacturers for such a small, sophisticated device increases subversion risks (maybe). The lack of a trusted path to the user is my biggest issue. How do they enter "something they know"? How can they be totally certain that the headset is in secure or insecure mode without extra inconvenience? MILS and MLS solutions already solved this problem, including in a few cellphones. I think it's easier to use strategies that work rather than trying to beat this hard trusted-path-in-a-headset problem. However, your idea still has merit and might be a workable, yet challenging, solution.
Detailed Points Regarding Building a Secure Cellphone
The solution wouldn’t be making Android secure or anything crazy like that. For high assurance, verification requires modular, layered designs and relatively simple mechanisms. MILS (see link below) is currently the best way to do it. Many separation kernels, middleware, drivers and development systems already exist. Additionally, most of these RTOS’s support legacy apps through a user-mode virtualization layer that is built on top of the verified kernel’s protection mechanisms. In other words, it’s very easy to design a mobile device that runs an untrusted phone OS right next to crypto components, while keeping the two separate during secure calls. Here’s how.
The system starts with a high quality processor with MMU, maybe optional crypto acceleration. The SOC first loads a minimal, rigorously verified TCB. This is a separation kernel, signed code loader and a few drivers and trusted services that must work the first time. Most of the verification effort goes into these components. The device also has a trusted User Interface (UI) manager that has exclusive direct access to keyboard, touchpad, screen, etc. The user interacts with this screen, which clearly labels which partitions each Window belongs to and has functionality that allows trusted login, loading of secure apps, switching focus to new apps, etc. Much of this starts with one-button operation. This UI manager only gives user input to the partition with focus, which may be the phone OS or a trusted app like an Encrypted SMS Message system. (See Nitpicker for similar app.) The security models are separation (of course) and Red-Black. The separation kernel and UI Manager work together to ensure that Red (plaintext) data only touches Red (trusted) apps or address spaces, while Black (encrypted data) is allowed to move between partitions in a very restricted way. This scheme provides a few advantages: full legacy capabilities; unspoofable security functions; prevents keylogging and other data leaks; other secure apps can be added later; intuitive, visual mechanisms (like a lock or something) for letting the user know when the protection is on; easier high assurance validation; covert channel suppression is possible with this model.
This scheme has been used several times in limited-production or custom devices. I don't see why we couldn't apply it to a mass-market device. OKL4 is a microkernel that has functionality similar to a separation kernel's and is used as a hypervisor in supposedly "hundreds of millions of phones." It is open source, has user-mode versions of most phone OS's (incl. Android), has a component framework for easy integration, and uses capability-based security. If we were aiming for medium robustness, I think these traits mean OKL4 could be modified to make a cheap MILS phone with a decent amount of assurance. As in, not quite the high assurance of a ground-up separation kernel design, but way more than trying to make a secure call from within a smartphone OS. It's also been recently used in a "secure phone" design, although I'm unsure of just how secure the implementation is.
Another idea I had on the phone topic was a way to reduce covert channels. I mean, we could use all the normal approaches like formal modeling of interactions, side-channel-free algorithms, etc. However, I couldn’t help but wonder about activating the smartphone OS’s sleep mode during a secure call. The OS might still need to check for incoming calls, texts, etc. So, we can either put all unneeded functionality to sleep and give it limited, fixed CPU cycles, or put it all the way to sleep and have a few minimalist apps do the monitoring. Remember that we don’t have to do all of this to ensure most of the security confidence. It’s mainly about dealing with covert timing channels in an efficient way. The use of smartphone sleep mode is just my original idea for handling this: they can’t eavesdrop on you when they aren’t awake. 😉
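The fixed-cycle idea can be made concrete with a toy scheduler (grossly simplified; a real separation kernel enforces this in its dispatch logic): if the untrusted partition's slice pattern is fixed regardless of what the trusted partition does, its observed timing carries no signal.

```python
def schedule(partitions, quantum_ms, rounds):
    # Fixed order, fixed quantum, regardless of each partition's demand.
    # A partition with nothing to do simply burns its slice idle.
    timeline = []
    for _ in range(rounds):
        for name in partitions:
            timeline.append((name, quantum_ms))
    return timeline

# The timeline is identical whether or not a secure call is in progress,
# so the untrusted partition learns nothing from its own scheduling.
t_during_call = schedule(["trusted", "untrusted"], 10, 3)
t_while_idle = schedule(["trusted", "untrusted"], 10, 3)
assert t_during_call == t_while_idle
```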
MILS Architecture (note that my design lacks all the middleware junk)
An offering using a certified kernel with a good track record
Nitpicker GUI w/ minimal TCB
OKL4 Microvisor (widely deployed and similar in nature)
Robert • May 28, 2010 4:32 AM
As with all things: the definition of easy / hard always depends on the skills and experience of the individual.
I'm certainly aware of TEMPEST, MitM, and all the side-channel attacks on my proposed Bluetooth solution. However, this style of attack raises the bar significantly; it is probably beyond the technical capabilities of most local law enforcement and would stretch the capabilities of many three-letter agencies. So it is probably good enough for most uses.
If I were trying to prototype, I'd probably target the new secure microcontrollers from Infineon, like the SLE78. This would be good enough for the application-level program; it has an AES engine and probably enough processing power to run an LPC encoder.
Additionally I'd need a BT transceiver plus a microphone amp/ADC and DAC/headphone amp.
TEMPEST is a problem, but maybe not a major one: if the adversary is close enough to implement the most viable attack vectors, then they could more easily just record the conversation in the acoustic domain.
Side channel: differential power analysis sounds easy but is actually extremely difficult, especially for non-repeating data like voice.
Side channel, RF overdrive: overloading the BT Rx/Tx block (out-of-band signal) and looking for back-scatter. Definitely needs a fix; probably a shunt regulator would suffice.
MitM: probably the best attack, but technically difficult because of the short distance between the BT headset and the phone.
Key exchange: needs some work.
Clive Robinson • May 28, 2010 5:47 AM
@ Nick P,
“I’m going to try a new format…”
You are starting to sound dangerously like me 8) Fairly soon people will look at the posting timestamps and assume we are one and the same person 😉
With regard to the mobile phone and "sleep": you may be unaware that many, many smartphones are dual-CPU.
If you look at Motorola, for instance, their quad-band GPRS & GPS module for GSM or CDMA actually contains a quite powerful CPU that is quite happy to run J2ME threads etc.
Thus a phone designed this way gets on quite happily doing the GSM/CDMA stuff to ensure compliance, and you talk to it via the equivalent of a serial port and the AT command set...
With regard to separation kernels, I will beat my own drum again and say I don't sufficiently trust them, due to covert paths / side channels etc.
My view, put simply, is to reduce the kernel the user process sees to the absolute minimum required to support the applications via mediated streams IO. Surprisingly, such a kernel can fit in well under 10K of memory and offer performance similar to what an unthreaded Unix process expects. The main bulk of the kernel, which the user processes have no need to be aware of in any way, runs on other processors, where streams get put to devices, tasks are scheduled, loaded, and controlled, and memory is strictly managed.
One key point is how system memory is accessed and controlled, both by the CPU and by the Memory Management Unit (MMU).
In traditional systems, memory is controlled by the same CPU that runs the user processes, which means it is only software stopping one process snooping on another process's memory space...
Further, there is the use of the von Neumann architecture, which allows code and data to be mixed: not really a good idea, and mostly completely unnecessary within a running process.
These are fundamental to the majority of security problems; that is, if a process cannot manipulate code memory, via either the von Neumann architecture or the MMU, the attack cannot function and is thus obviated.
However, with a single-CPU architecture, both the von Neumann architecture and control of the MMU by the CPU that runs the user processes are mandatory for nearly all multitasking and multi-user operating systems. Unfortunately these issues are still with us in multi-CPU machines, by tradition. So we are needlessly stuck with the legacy problems.
In a multi-CPU machine, one CPU can do the overall OS system tasks and can delegate process- and group-related tasks to other CPUs, and likewise tasks such as IO. With a little bit of thought you can turn the traditional OS inside out and run the majority of the code in a completely untrusted way; the only parts where trust is needed oddly do not require a general-purpose CPU, just a hypervisor state machine.
With regard to user processes, they have absolutely no need to control memory, and people who write code that way should be re-educated (with a baseball bat if required 😉
For one thing it is a complete waste of programmers' time (and that's a big money issue), and for another most cannot get it even close to right: think memory leaks, dangling pointers causing dead memory, uninitialised pointers, and zombie pointers trying to access memory that is either not available or being used for other things, either causing the process to core or, worse, corrupting the state of memory. Oh, and these problems get geometrically worse with multiple threads or memory-based ad hoc IPC...
So the simple solution is to remove the issue from the programmer, and with it the control of the MMU. And as I said, self-modifying code is not required, plain and simple, so why should a process have write access to code space... So why do we need the von Neumann architecture? The answer is: we don't. Both data memory allocation and code loading can be done by a different CPU, and the MMU can lock a user-process CPU down.
Oh, and just to rub the point in about the von Neumann architecture: most high-end CPUs are actually Harvard architecture internally, with separate caches for data and code; the "von Neumann" bit is bolted on at the external memory interface almost as an afterthought. So those people writing self-modifying code really are using the CPU in the least efficient manner.
However, as you will quite rightly point out, such hardware as I describe does not currently exist, so we are stuck with the current best practical option, which is, as you pointed out, MILS etc.
However, I see no reason not to investigate it, as the advantages in terms of efficiency for both programmers and running code are overwhelming; then consider that you get rid of a whole heap of security issues almost for free...
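Clive's split between a loading CPU and an executing CPU can be modelled in miniature (a toy simulation, not real hardware): code memory is writable only by the loader, then sealed, so the user-process CPU can fetch instructions but never modify them.

```python
class CodeMemory:
    """Harvard-style code space: write-once by the loader, then read-only."""

    def __init__(self):
        self._cells = {}
        self._sealed = False

    def load(self, addr, word):
        # Only the loader CPU calls this, and only before sealing.
        if self._sealed:
            raise PermissionError("code space is read-only after load")
        self._cells[addr] = word

    def seal(self):
        # Hand the memory over to the user CPU in execute-only form.
        self._sealed = True

    def fetch(self, addr):
        # The user CPU may only fetch; there is no write path at all.
        return self._cells[addr]

mem = CodeMemory()
mem.load(0, "ADD")
mem.seal()
assert mem.fetch(0) == "ADD"
try:
    mem.load(1, "EVIL")              # self-modifying-code attempt
    raise AssertionError("write after seal should have failed")
except PermissionError:
    pass                             # blocked, as intended
```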
Clive Robinson • May 28, 2010 6:42 AM
"Key exchange: needs some work."
I think that qualifies as the understatement of the century, possibly the millennium 😉
Much as I hate to say it, it may turn out to be an unsolvable issue, due to three problems.
We know from experience that hierarchical systems reduce steadily to a single point of failure the closer you get to the top. And likewise, the closer you get to the top, the higher the rewards for betraying the trust in the hierarchy.
Worse, the closer to the top you subvert the system, not only the more damage you can inflict, but the easier it is to hide.
So from a security aspect, hierarchical systems are a complete non-starter, as the risks are way, way too high.
Non-hierarchical systems usually revolve around a web of trust, and surprisingly the majority of people are connected by very short chains of no more than six entities. So in theory a world-wide trust net could be fairly easily established.
Sadly there are problems with webs of trust: most people do not trust a chain with more than one "unknown" link, so we had key-signing parties where people would turn up with "authenticating" paperwork such as a passport and get their key signed by a verifier who acted like a notary public.
Even this had problems, in that it was locality-based; that is, the verifier was only trusted locally, and the distance issue could not be resolved. There also arose cross-border jurisdiction issues and legal liability.
There was also the problem of authentication paperwork: how does the verifier know the document they are looking at is genuine, and how do you know that the verifier is genuine...
You have two choices. The first is to use a bureaucratic hierarchical system (passports) that is known to be easily subvertible; the second is to take a chance via multiple verifications: that is, you take a passport, a bank statement, a local-government land-tax statement, utility statements, driving licences, school certificates, etc. None of these is incapable of being forged, but with so many independent hierarchies, subverting them all would be difficult.
The problem is that none of them is independent: they are all usually traceable back to a single root document called your birth certificate.
And the joy or curse (depending on your view) of a birth certificate is that it is not attributable to any individual... That is, it is a piece of paper that makes a claim about an event (i.e. a birth) and nothing else; it is not tied in any way to an individual...
So as you can see, key verification is currently impossible to do in any meaningful way, and any system that attempted it would have to address the trust issue in hierarchies, which appears to be impossible to solve.
Then, just to make it all doubly fun, there are "entity roles" to consider.
As an individual you have many roles in life that are, and should properly be, separate. My personal financial affairs should not be linkable to a company's financial affairs, or to those of any clubs or societies I might manage. The less executive the role, the less linkage there should be between roles. For instance, if I were just an accounts clerk in a large multinational that commits trading or other fraud, is it fair that I be judged non-judicially just by association?
The answer, of course (if you believe in innocent until proven guilty), is no.
However, humans are incredibly bad at not intermixing their roles in life. Imagine how you as an individual would manage between twenty and fifty roles and the associated keys that went with them...
As Bruce has pointed out on more than one occasion, designing effective crypto algorithms is now effectively a solved problem; it's time we got on with the more difficult problems, such as system design and key management...
Clive Robinson • May 28, 2010 9:19 AM
“MiM: Is probably the best attack but is technicaly difficult because of the short distance between the BT headset and the phone.”
Oh that this were true; it's not.
Prior to BT 2.1 it was a simple game of antenna gain and RF blocking power (and the ability to break a simple encryption system based on a PIN that was frequently only four digits long).
Bluetooth supposedly has a range of around 10 meters, but this is with lossy antennas, poor RF front ends with only just usable gain and dynamic range, and an RF output power at least one hundred times that required for a more capable receiver, and several thousand times that required if both TX and RX used a standard dipole antenna.
We already know that signals from BT phones have been fairly easily picked up over 1,000 meters away using high-gain antennas and well-built RF front ends with low noise floors, high gain, and reasonable dynamic range.
To block the phone's transmission to the headset is a simple game of power: the rogue signal from the MitM needs to be about 12 dB up on the phone's signal at the headset.
Part of that gain comes from a more efficient TX antenna, and part from the RF output power of the rogue TX's final stage.
The dipole antenna is likely to have around 10 dB over the phone's BT antenna, and a 12-element narrow-band Yagi another 12-14 dB on top of that (say just over 16 dBi).
Now, as a rough estimate, range increase goes with the square root of the gain/power increase in the far field; the near-field loss (two wavelengths) is 17 dB, and I will assume the phone and headset are 0.5 meters apart, which is a little over four wavelengths.
So, power for power, the Yagi system has about the same effect at the headset as the phone does at 34 wavelengths, or 4.25 meters. That does not sound like a lot, but you need to remember a couple of things.
The first is that the output power of a Bluetooth transmitter is measured in fractions of a watt, a Class 3 device being just 1 mW. Thus you would need 250² times as much power to get to 1,000 meters, or about 63 watts, which is actually quite simple to generate; modified amateur radio equipment in the 100-watt-plus range running off 13.8 V DC is not difficult to get hold of.
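A quick numeric sketch of that last power step. The 1 mW Class 3 figure is from the post; the ~4 m "equal effect" range is the rounded version of the 4.25 m worked out above, and free-space 1/d² scaling is assumed for the far field:

```python
# Figures from the post: a Class 3 Bluetooth device transmits 1 mW, and the
# rogue antenna system has the same effect at the headset from roughly 4 m
# as the phone does at 0.5 m.  In the far field, received power falls as
# 1/d^2, so range scales with the square root of transmit power.
P_CLASS3_W = 1e-3      # 1 mW
EQUAL_RANGE_M = 4.0    # where the rogue matches the phone, power for power
TARGET_M = 1000.0      # desired attack range

range_ratio = TARGET_M / EQUAL_RANGE_M   # 250x further away
power_ratio = range_ratio ** 2           # needs 250^2 = 62,500x the power
p_needed_w = P_CLASS3_W * power_ratio    # 62.5 W

print(f"need ~{p_needed_w:.1f} W")       # ~63 W, within amateur-radio gear
```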
However, this is a bit moot since BT 2.1, as the encryption system has been significantly upgraded.
Nick P • May 28, 2010 1:26 PM
Well, it's a nice analysis, and I don't see TEMPEST-like attacks gaining favor. However, side channels and covert channels are still a problem for your design. Covert channels don't just mean EMSEC-style attacks. One of my favorites is a cache attack. The process leaking data and the process receiving it are supposed to be separated, with no communication mechanisms. They communicate by performing cache-dependent operations that are delayed (1) or not (0). On a P4, studies show this covert channel provides 400 KB/s of bandwidth and can leak RSA private keys. Modern COTS systems usually have many covert channels. I mean, you might have a "secure" microcontroller, but you still have two issues: does it work as advertised? Does your protocol, design, or code introduce new side channels, logic defects, or exploitable bugs?
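A toy model of that delay-modulated channel. Sleeps stand in for the cache-dependent operations (a real cache channel times probes of shared cache sets and is orders of magnitude faster); the point is just that timing alone carries the bits:

```python
import time

# Toy timing covert channel: the sender leaks one bit per slot by either
# delaying (1) or not (0); the receiver recovers the bit by timing.
# Illustrative only -- a real cache channel needs no explicit cooperation
# beyond shared hardware.
SLOT = 0.01  # seconds per bit; real cache channels are vastly faster

def send_bit(bit: int) -> None:
    if bit:
        time.sleep(SLOT)  # stand-in for cache-thrashing work

def receive_bit(sender_action) -> int:
    t0 = time.perf_counter()
    sender_action()
    elapsed = time.perf_counter() - t0
    return 1 if elapsed > SLOT / 2 else 0

message = [1, 0, 1, 1, 0]
received = [receive_bit(lambda b=b: send_bit(b)) for b in message]
print(received)
```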
I think you are underestimating attackers’ skills. Expertise isn’t as much an issue as it seems: most EMSEC attacks these days come from college undergrads with no experience in the area, so I’m sure a professional spy or malicious college student could be paid to compromise a sufficiently valuable target. Interaction with an Internet-connected, insecure mobile device with known remote attacks increases odds that weaknesses in your crypto product could be exploited remotely by skilled spies.
Additionally, your analysis missed a threat profile. You assume they only want to hear what you're saying. Depending on how your design works, they may want the private key for impersonation, blackmail, or to crack previous conversations they've recorded. EMSEC or covert channel attacks make it easier to do this. If you want a good example, look up the reason cell phones aren't allowed near STU-III secure telephone units. The answer (and its implications) may surprise you.
As for your functionality suggestion, I think a secure microcontroller might be able to do the job. The real thing that needs acceleration, though, is the public-key stuff. I'd go with RSA just because it's well understood. I'm not a cryptographer, but my custom protocols use public-key crypto to exchange the master secrets, then use those to encrypt and authenticate (HMAC or OMAC or something). This is much quicker than relying on RSA the whole time, but we still need to accelerate it for the initial hookup because users won't accept delays. We could also use a lightweight cipher like Salsa20, or, if your embedded hardware is custom or has reconfigurable logic, a hardware cipher from eSTREAM or certified crypto IC cores.
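A minimal sketch of the "master secret, then symmetric encrypt-and-authenticate" half of that idea. The public-key step that transports the master secret is omitted, and the SHA-256 counter keystream is a toy stand-in for a real cipher such as AES-CTR or Salsa20; only the key-separation and encrypt-then-MAC structure is the point:

```python
import hashlib
import hmac
import secrets

master_secret = secrets.token_bytes(32)   # would come from the RSA exchange

def derive(label: bytes) -> bytes:
    """Derive independent keys from the master secret (HKDF-like, simplified)."""
    return hmac.new(master_secret, label, hashlib.sha256).digest()

enc_key, mac_key = derive(b"encrypt"), derive(b"mac")

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream; NOT a real cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # encrypt-then-MAC
    return nonce + ct + tag

def open_(blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

assert open_(seal(b"voice frame")) == b"voice frame"
```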
As in, the ability to do crypto in small hardware never held your idea back in my mind. The system level issues, data leak prevention, and lack of a trusted path to user are the biggest issues. Crazy enough, all of this is easier to pull off in a bigger, normal computer. The reason is it’s an area that has been actively studied for like 50 years and we can reuse others’ efforts. With the headset, we are effectively starting from scratch or with little support because it’s hardware intensive. Not my area of expertise, but your posts keep making me think I should acquire expertise in this area.
Nick P • May 28, 2010 1:54 PM
“You are starting to sound dangerously like me 8) fairly soon people will look at the posting time stamps and assume we are One and the same person ;)”
I know this will only support your point, but I was thinking the same thing when I posted. Scary, that…
“With regards seperation kernels I will beat my own drum again and say I don’t sufficiently trust them due to covert paths / side channels etc.”
This is actually one of the few things you can trust about separation kernels. They were specifically designed to help mitigate this. The SK provides time and space partitioning. This means partitions get a fixed amount of cycles and memory. The amount is determined by a policy that’s set at compile time. This is too inflexible for desktops and such, but fine for embedded systems like a cellphone. They also perform “periods processing” that removes residual data from shared resources and if they don’t deal with issues like cache and devices, it would be easy to make it so. The best reason to use them, aside from this covert channel prevention, is that they are so small (4,000-8,000 LOC) and simple that they can be adapted to new scenarios with ease. There’s also plenty of Medium Robustness middleware, like filesystems and graphics drivers, that already work with them. That reduces cost of developing.
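The time/space partitioning idea can be modelled in a few lines. The partition names and cycle budgets below are invented for illustration; the point is that the schedule is fixed by policy, not by demand, so one partition cannot observe another's load through timing:

```python
# Toy model of separation-kernel time partitioning: each partition gets a
# fixed, policy-defined share of cycles per major frame, decided "at
# compile time" rather than dynamically.
POLICY = {"crypto": 40, "ui": 30, "baseband": 30}  # cycles per major frame

def major_frame(policy):
    """Return the fixed execution schedule for one major frame."""
    schedule = []
    for partition, budget in policy.items():
        schedule.extend([partition] * budget)  # budget is fixed, not demand-driven
    return schedule

frame = major_frame(POLICY)
# Each partition's slots never vary frame to frame, which is what closes
# the scheduling covert channel.
assert frame.count("crypto") == 40
assert major_frame(POLICY) == frame  # identical every frame
```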
“many smart phones are dual CPU”
This is true and it is changing. The OKL4 microkernel was used as a hypervisor in the Motorola Evoke phone recently to remove the extra processor. It allows them to run the rich, untrusted OS next to their real-time baseband stack with enough assurance to prevent quality degradation. It’s an example of a MILS-like scheme at work. Removing the 2nd processor reduced bill of materials, saving them money. Of course, it makes me wonder if we could do the opposite: take a one-processor phone with virtualized UI OS and baseband RTOS, then add an extra chip to support secure operation. Maybe run the untrusted OS/software on a separate processor and have all trusted functionality on a custom chip. It would significantly add to the BOM but could make for a more effective secure phone. Besides, most high-end secure phones like Cryptophone are already about $3,000 so who gives a shit about a few hundred dollars extra BOM… 😉
The chip would control keyboard, screen, etc. In other words, it would have the trusted path. It would also have accelerated crypto, onboard TRNG, and a MILS SK just for verification purposes. It would be modelled/designed as a finite state machine (or set of them) with manageable number of states. Every interaction with untrusted OS would occur through shared memory buffer, as in the A1-class Network Pump. We could use the old school, tried-and-proven approaches of high assurance system design to ensure the trusted processes and interactions occurred in a robust, secure way. We could also use modern tools and techniques to support the analysis.
All the rush over the past decade or two has been moving many systems to one piece of COTS hardware while preventing data leaks. However, I’ve always thought we should just exploit the low cost, energy efficient chips to the best of our ability. That means using a few small ones with careful interactions rather than one big one. Physical separation is easier than software separation and it’s actually economical and convenient these days.
What do you think about that Clive? Particularly about the dual CPU architecture? I know you love your “prisons” concept, but it’s too radical for this. We need to find something that expands on components or strategies with high availability of parts and expertise so that a company might actually build it. That’s what my dual-CPU concept intends to do. Think it could work [cost-effectively]?
Clive Robinson • May 29, 2010 2:38 AM
@ Nick P,
Just to be different I will answer your post kind of in reverse and a bit at a time.
“I know you love your prisons concept but it’s too radical for this.”
Oddly, your description of your two-CPU hardware was almost exactly where I started, which is what gave rise to the prison idea (now people are going to say something 😉).
So, some history first.
The idea originally was to "make a better mousetrap" kernel, to get rid of the inefficiency of having the whole kernel on each CPU in a multi-CPU system (back then even microkernels weighed in at more than 80% of memory on the most high-end single-chip embedded CPUs, something that has only changed in the past few years).
And further, to significantly improve the "real time" response of a multi-CPU embedded system where each embedded CPU controlled real-time I/O sensors and actuators (think of a CAN bus system on steroids as a starting viewpoint for a SCADA or process-control-like system).
That is, to separate what the user-process side required from what the system side required; in essence, a separation kernel at the hardware level (so the ideas are conceptually the same).
This was originally for "programmer efficiency" as well as hardware efficiency and real-time response; security was not even a consideration originally.
The realisation that bucketloads of security could actually come for free with (very) reduced-to-minimal code in the user-process area was nice. You need to remember that security is at best an afterthought in most embedded systems, an oversight we are starting to pay dearly for (in the past there was the reasonable excuse of limited resources; that just does not apply any more, and the excuse has moved to programmer costs...).
Likewise, in turn, the "all I/O and control is a stream" viewpoint adopted to make programmers more efficient also gave further minimisation, and again increased security for free through side/covert channel reduction.
It was this "security for free" that made me curious as to just how far you could go in getting improved efficiency (programmer and hardware) with "stealth security" in mind.
The reason for "stealth security" is simple: as you probably know, security is an almost impossible sell to a standard embedded code shop, whereas improved programmer efficiency is a very easy sale, as is reducing hardware costs. So the buy-in for improved security was hidden behind the cash-in of efficiency (hey, sometimes you have to pull the wool over their eyes for their own good 😉).
When you put the bulk of the kernel (system side) onto a separate processor, the first thing that happens is that malicious code has a real hard time getting a toehold, let alone getting at other "sensitive information". Further, the old "clock the inputs and clock the outputs" treatment of the user I/O makes the covert channel bandwidth very small (which is again conceptually the same as the SK approach).
There is also an added bonus: you can control the user CPU's notion of passing time simply by stopping its clock to put it in "zero power mode". Malicious code would then be unable to use "jitter" for a covert channel, as it would have no time reference in common with its external receiver (something the SK does not yet do as well, but it's getting there).
Thus, by a number of tricks, you can reduce the covert "jitter" channel to sub-millihertz bandwidth without affecting system performance by more than a few percent (something the SK approach will have great difficulty doing in its current forms).
To see how you might do this, look at how two high-reference-frequency PLLs can be combined to give fractional-hertz frequency steps. There are several different ways, such as using phase accumulators, having the two PLL reference frequencies differ by the required frequency step, or using a delta-sigma loop.
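A phase accumulator makes the fractional-hertz point concrete. The 32-bit accumulator width and 10 MHz reference clock below are assumed figures, not from the post:

```python
# Frequency resolution of an N-bit phase accumulator (DDS) clocked at
# f_clk: each output frequency is f_out = step * f_clk / 2**N, so the
# smallest step (step = 1) gives millihertz-scale resolution for modest N.
ACC_BITS = 32
F_CLK = 10e6  # assumed 10 MHz reference clock

def dds_freq(step, acc_bits=ACC_BITS, f_clk=F_CLK):
    """Output frequency for a given phase-increment word."""
    return step * f_clk / (1 << acc_bits)

resolution = dds_freq(1)  # ~2.3 mHz with a 32-bit accumulator at 10 MHz
print(f"resolution: {resolution * 1000:.3f} mHz")

# Two PLL references differing by the required step achieve the same thing:
f_ref_a, f_ref_b = F_CLK, F_CLK + resolution
print(f"reference offset: {(f_ref_b - f_ref_a) * 1000:.3f} mHz")
```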
As time moved on, the "PC revolution" caused the cost of high-end CPUs and memory to tumble in comparison with embedded CPUs (look at the real cost differential in silicon real estate and it will take a week for your eyes to close again after the shock).
Thus I started looking at more general-purpose computing platforms for much higher performance at the same cost point. A further simple realisation of what a two-CPU architecture could really offer hit me right between the eyes.
For instance, turning the user-process CPU's MMU control bus over to the kernel CPU became a real eye-opener: it effectively turned the user CPU into a "walled garden" within the confines of a much larger "memory" estate. I simply followed the reasoning and changed a few other things to actually reduce the transistor count.
The real security bonus is that you give the user CPU only the memory required to do its job, thus having no memory "slack space" (this is still a bit of a problem in the SK, in that processes get the same resources by default rather than an amount tailored to each process's needs, which can mean a lot of slack space).
With little or no memory slack space, malicious code has insufficient space to hide, and there is a wonderful non-linear property that works in the system's security favour: as you break a complex task down into smaller functional blocks, the amount of memory required goes down and slack space becomes easier to eliminate, while the job of adding extra malicious code gets exponentially more difficult for the attacker...
Then the thought of a security hypervisor that actually goes and "inspects the memory" to check for "unexpected code" comes naturally, hence the move from the "walled garden" analogy to that of a "prison". Which then gives a different viewpoint yet again, and further insight into how to get other advantages.
So yes, the two-CPU design puts you at the top of that slippery slope 8)
I know that anything I do in a hardware prison approach can be emulated in software by an SK.
The real question is not which approach (SK or prison) is conceptually better (as the gap is closing) but the real cost difference.
Over the system lifetime, some things are very, very cheap in hardware and very, very expensive in software, but the initial cost skews the viewpoint a long way in the opposite direction.
There are issues with an SK on a single CPU that cannot arise with a multi-CPU prison approach, one of which is the reliability of "formal proofs", because of the assumptions they are founded on (see my post above to Robert on the issue that has just arisen with Quantum Key Distribution).
I've seen many, many "software security proofs" fail in my life, and almost always it's due to incorrect assumptions. Being software, you cannot see the failing easily. With hardware, however, you can see what is going on and control it much more reliably.
Sometimes the real cost of a system is hidden; that is, the "clean-up cost": when it goes wrong, how much is it going to cost to clean up the mess?
A newsworthy example is the problem BP currently has with its deep-sea oil well.
I used to work in the oil industry designing safety-critical systems such as "red shutdown". These systems are incredibly expensive to implement and can add a huge cost margin to an offshore platform. They are almost never used, so there is always the temptation to cut costs on them. For very sound reasons they used to be almost always hardware, not software, solutions.
I think at the end of the day the problem BP had was one of cost minimisation; on the day the system really was required, it failed, and at the root of it will be an incorrect assumption.
I do not know what the clean-up cost in dollars and cents will be, but the economic knock-on effect on the man in the street may well be comparable to that of the collapse of the banking sector...
It is this sort of hidden cost that the likes of the NSA have to deal with up front, and they cannot afford to get it wrong, as the cost will not be just monetary.
Hence the very conservative design approach.
Mitch P. • May 29, 2010 4:40 PM
I’m unclear how the Socialist Millionaire protocol helps prevent MitM attacks. While it allows Alice and Bob to check to see if a number is the same, it doesn’t allow Alice and/or Bob to actually verify the identity of the person they’re talking to. If Alice and Bob are both actually talking to Mike, each believing it is the other, then all they’re doing is verifying the number(s) that Mike presents.
Timothy T. • June 1, 2010 11:11 AM
There is a European company that produces hardware-encrypting phones from scratch. Have you heard anything about it?
Nick P • June 1, 2010 9:16 PM
@ Timothy T
Actually, I had not heard of them before. Encrypting phones are a risky business. I recommend that anyone buying one should buy from a reputable company. Reputation has many facets: technical; peer-review; customer base.
The technical stuff is actually fairly easy: did they use the right algorithms? The right way? A decent OS? Peer review refers to independent auditing by a top-notch group, like an intelligence service, or at least open-sourcing of the crypto code as Cryptophone does. A strong customer base among law enforcement, executives, etc. might indicate that the product is trustworthy (or that the marketing is good ;). There are also military-grade phones that sell to the public, but there are tough procedures for acquiring them.
Of course, if you can tolerate something less mobile, you can build a VOIP solution out of a headset and one or more embedded systems. You can use a regular VOIP client and ZRTP to make the secure call. I’d base it on OpenBSD or a very hardened Linux, maybe with crypto-accelerated VIA C7/Eden processors. You must be careful with configuration and you might want to isolate the transport layer (wireless) into a separate embedded PC, hence the “or more” above. That way, you could use OpenBSD or a secure RTOS for the VOIP/crypto, then use Linux or whatever for the transport layer. Red-black separation can be maintained and it costs $1200 in hardware for two units the size of a mini-ITX. Not bad, huh?
Timothy T. • June 2, 2010 6:18 AM
@ Nick P
It seems they are a supplier to the Polish government.
Clive Robinson • June 2, 2010 10:34 AM
@ Nick P,
I don't think you need all the CPU horsepower your system would have (by a big margin) for doing voice encryption.
I'll take a flyer on it and say you could probably do it in two "gum stick" devices without any real difficulty.
And although they can be a pain, some of the newer PIC processors with DSP cores will certainly do what you want for speech coding and encryption in one chip, and they have a USB interface as well, so...
But so would almost any of the smartphone processors.
Thus, with an Android platform and a lot of good coding, you could have such a phone up and running.
BUT, and it's a very big BUT, I suspect the hardware will always let you down on the EmSec side, and as you say, Red/Black (or Red/Green if you are on the European side of the puddle) will be the real issue...
And I don't trust code, whereas I do trust state machines in hardware with appropriate interfaces...
However, the old 10-to-1 spending rule applies twice, so if you figure a risk value, it's not worth spending more than 1% of the risk value on a phone. So a $5K phone needs a risk up around $500K.
BF Skinner • June 2, 2010 7:24 PM
@Nick P “SCIP”
Could be. It's the same group, though about the time I stopped paying attention they were talking about a deployable solution based on Motorola hardware, not a standard; though given the advances, it makes more sense to have a standard first and a solution second.
"First the Verdict! Then the Trial!" Honestly, does no one read anymore?
Was talking with someone today about why the iPhone v2 isn't a perfect solution (and certainly ain't FIPS compliant) and why v1s should be smashed with a hammer. There was speculation about what the v3 will have, but no one claims to know... I guess they don't hang out in the right bars.
Apple underestimated their market (WAY underestimated their business market, or they would have looked more closely at crypto).
Robert • June 3, 2010 2:35 AM
For a GSM secure phone I’m not reading the right sort of things to make me believe they have a good “secure” solution.
Actually, voice security is a very hard problem, especially on newer complex multimedia phones. There are simply too many ways single-bit OS changes can compromise the whole system's security, yet the function of these bits is not documented ANYWHERE.
Part of the problem is that most multimedia phones support USB OS update, so all it takes is a quick sneak-and-peek and your completely secure phone is recording your own conversations and broadcasting them over BT at some specified time. Some chipsets will even support OS update over the GPRS link!
One of the weakest sections is the merging of the analog signals from the phone and multimedia sides. On many chipsets it is possible to route the phone "voice" at the analog level back into the multimedia section, even though this function is not specifically supported. In the example I saw, the register control bits directly controlled the analog mux (no logic decoder), so selecting the right combination of bits linked the two analog functions (a completely undocumented capability).
Even if you can guarantee the security of the phone hardware / firmware, you still have to consider the physical security of the phone supply chain.
If that’s not enough to make you worry, I’ll leave you with a final thought.
Rumor has it that a couple of years ago a big semiconductor company was approached to include a secure "black box" logic block, a "hardware embedded security module", within their phone chipset. It turns out the block had two functions. The first was to selectively enable secure comms; OK, that's what they wanted. Interestingly, the undocumented feature was to selectively bypass any other added security.
Fabio Pietrosanti • June 3, 2010 5:41 AM
Regarding SCIP and other military-oriented voice encryption protocols, I've prepared a presentation with the history and a full review of the various protocols and approaches used in voice encryption technologies.
It's in the last part of the slides; you can check it here:
Please consider that NSA and NATO interoperability protocols are based on SCIP (http://en.wikipedia.org/wiki/Secure_Communications_Interoperability_Protocol), which also uses the standard NATO STANAG-4591 MELPe ultra-narrowband 600 bit/s codec (http://en.wikipedia.org/wiki/Mixed_Excitation_Linear_Prediction).
In the commercial world there is no 600 bit/s codec with the MOS and audio quality of commercial codecs.
600 bit/s is required because the encrypted voice payload must interoperate and travel from Sarkozy's mobile phone all the way to a submarine in the Pacific Ocean, and that submarine can only use very long waves that provide a bandwidth of less than 1200 bit/s.
So military equipment is not "more secure" than commercial equipment, but it is strongly more resilient and can work over almost any transport medium, be it GSM, UMTS, satellite, VHF, UHF, or HF, including the packet network equipment a soldier carries on the battlefield.
Military-grade systems are not more secure, but they are far more resilient and capable of working in much more difficult environments from a telecommunications point of view.
And that is only a matter of audio codec, not of encryption.
Fabio Pietrosanti (naif)
L. Camporesi • June 3, 2010 6:26 AM
I do sell several crypto solutions, ZRTP for Symbian and Windows Mobile included.
My opinion, based now on some years of experience and study, is that Bruce Schneier's words are precious: "What sort of security is sensible depends, in part, on the types of attackers you're defending against..." For mobile phone security, as for other systems, one can always imagine a new method of breaking it. Have you secured voice with cryptography? I jam your signal so that you then have to speak in the clear (you'd think this rather stupid; ever wonder how Mr Fabio Ghioni stole the entire Kroll database?). Have you somehow secured the underlying operating system? I use TEMPEST. Are you using military apparatus? I'll use bugs. And this is without considering new exploits.
So I think one has to understand what sort of attacks one might expect, then choose the best solution, keeping the value-for-money ratio in mind (or, to use Schneier's words once more, make the best trade-off). This is what we suggest to our customers.
L. Camporesi, Mobile Privacy Ltd
Clive Robinson • June 4, 2010 12:57 AM
“Actually voice security is a very hard problem…”
and then some 😉
One of the "ping pong" arguments between myself and Nick P is about the way you go about segregation.
I don't believe, for many of the reasons you state, that it is actually possible to have a verifiably secure system with only a single CPU.
Thus: two or more general-purpose CPU cores, and a state-machine hypervisor that controls the limited-bandwidth channel between them.
Having worked with (and actually having sitting in front of me right now) Motorola's quad-band GPRS Java modules (G24), I'm aware of not just the GPRS issue but... Motorola holds the signing keys in Israel.
With regard to the supply chain, it's an issue I have banged on about in the past, both on this blog and over at the Cambridge lab's blog "Light Blue Touchpaper". The example I usually quote is that of Apple shipping iPods with virus code for PCs on them...
As for your “final thought” I don’t know which particular company you refer to but I can confirm first hand that such behaviour does go on.
If you want to see how some really nasty things can be done, have a look at Adam Young and Moti Yung's work on cryptovirology and backdooring RSA key generation.
In essence, there is more redundancy in the number of prime pairs than you would like. Thus it is possible to put an invisible backdoor into the upper half of the pq product that gives you an indicator of what p or q is, and thus short-circuit the factorisation security...
I actually wrote some code to do this in a covert way and got it through quite a serious code review process without it being noticed or even queried...
The way I did it was the old deallocate/reallocate buffer space without clearing the buffer trick. That is, I did not clear the old buffer before freeing it, nor clear the new buffer once malloced. I then used a secondary XOR trick to appear to be overwriting the data whilst actually keeping it. And to hide the fact that I was up to dirty tricks, I used the PQ pair in the BBS random number generator to seed the search starting point for the new P...
Other tricks I have used in the past include using pointers to functions stored in a table... Once you know where the table is, using the functions out of order, or illicitly, becomes oh so easy. Thus, knowing this, you write the functions in a way that is most convenient to you.
The nice thing about the BBS random number generator is that it effectively has a public key built in, which you can use to encrypt information. As you generate the PQ pair for the BBS generator and embed it in the code, you can backdoor any bits you choose. Thus you can hide a short PQ pair inside a larger PQ pair, and use the short pair to encrypt, say, one sixth of the bits of the new PQ pair you are generating for the end user's new public key. This shortish block of bits is an encryption of a simple truncation of the upper part of the "random" point used to select either P or Q, so as the attacker you have only a very limited search space in which to find P or Q. And importantly, there is no way to examine a public key and reveal the presence of this type of backdoor...
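A toy version of this Young/Yung-style key-generation backdoor: the top half of prime p is masked and planted in the top bits of n = p·q, recoverable only with the mask. The 256-bit primes and the hash-derived mask (standing in for a real embedded BBS/RSA encryption of the leak) are illustrative simplifications, not the actual construction described above:

```python
import hashlib
import secrets

BITS = 256  # toy prime size; real keys would use 1024+ bit primes

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def rand_prime(bits):
    while True:
        c = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(c):
            return c

p = rand_prime(BITS)
# Mask known only to the backdoor's author (hypothetical secret string).
mask = int.from_bytes(hashlib.sha256(b"attacker secret").digest(), "big") >> (256 - BITS // 2)
leak = (p >> (BITS // 2)) ^ mask  # masked top half of p

# Choose q so that n's top quarter equals the leak (low bits absorb the carry).
target = leak << (2 * BITS - BITS // 2)
q = target // p + 1
if q % 2 == 0:
    q += 1
while not is_probable_prime(q):
    q += 2
n = p * q

# From the public modulus alone, the attacker recovers half the bits of p;
# with those, factoring n is easy (e.g. via Coppersmith's method).
recovered = (n >> (2 * BITS - BITS // 2)) ^ mask
assert recovered == p >> (BITS // 2)
```

Nothing about n looks suspicious to an outsider, which is exactly the "no way to examine a public key and reveal the backdoor" property.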
And people wonder why I don't like PKI where the software that makes your public key is supplied by the people running the CA...
Robert • June 7, 2010 4:18 AM
I certainly hear what you are saying, and I can even follow the concept / logic BUT it takes me straight back to my beginning point. (if there are possible but unknown backdoors in the analog domain than ABSOLUTELY nothing you do in the digital domain matters)
Eg for a very simple cell phone.
typically RF Pa power comes directly from the battery, however analog power comes from a switched mode regulator (Buck converter). This is typically done to add power supply rejection for what is called TDD noise. (Noise in the audio signal associated with the battery voltage spikes caused by base station communication)
Now what is interesting is that the largest variation in analog power is associated with the Audio output power (power to the speaker)
What this means is that the change in the pulse width of the (Analog baseband ) buck regulator is mainly due to the speaker audio envelop.
Most buck’s for cell phones operate at around 2Mhz (usually just above the AM band) unfortunately cheap Inductors act as parasitic antennas and radiate the Buck converter pulse. This means that a simple am radio receiver tuned at 2Mhz will demodulate the cell phones received voice channel directly. This is TEMPEST 101 stuff.
So with the most trivial of equipment (sensitive AM receiver + directional antenna) I can recovered half the conversation, completely independent of the encryption used in the rest of the system.
This is a huge backdoor analog comms channel, and it exists on practically every cell phone.
There are similar but different attack vectors to recover the microphone signal.
If I do both techniques then I can completely recover both halves of the voice-band channel, regardless of the rest of the channel’s encryption.
So in my mind unless you specifically address these backdoor analog issues, the rest is just security theater.
Clive Robinson • June 7, 2010 11:49 AM
As I said, “and then some” 😉
Yes EmSec / TEMPEST is the best attack vector in many cases.
However it depends on whether you want to do passive or active EmSec.
Passive is limited by thermal noise to -174dBm in a 1Hz bandwidth, which gets around 15dB worse for standard POTS bandwidth and around 20dB worse for cell phone audio bandwidth.
And when you look at the radiation-efficiency losses for a surface-mount inductor at 2MHz, I would be thinking of going further up, to its higher harmonics.
Oh, and not all are buck converters; some are series resonant, so you might need to use PM/FM demodulation for better effect.
My favoured approach would be active, with a UHF beam, to see if I could get cross modulation onto it. High-gain compact antennas make for just one heck of an advantage.
However my reply was just about the digital side (Nick P tells me off if I wander too far off topic 8)
The simple fact is I don’t think it’s possible to have a normal-size mobile smart phone that can be considered secure in all respects, only in some.
Which brings us around to the POTUS’s “crackberry”, which some call the “Obamaberry”. I would not have liked to be on the team that supposedly secured that…
I’ll have a small side bet (a bottle of good ale) that it is only secure in some respects, and that good old distance is what is used for passive EmSec, plus some “active EmSec attack” detection.
However I guess we won’t know for another 20 years or so, and I might not be around long enough to collect 8)
Clive Robinson • June 7, 2010 12:17 PM
Never trust a tired brain…
The 15 and 20dB figures I gave should be 30 and 40dB, giving approximate thermal noise levels of -144 and -134dBm for the 1kHz and 10kHz bandwidths, on a 50R input impedance at around room temperature.
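The corrected figures drop straight out of the standard noise-floor formula, noise ≈ -174dBm/Hz + 10·log10(bandwidth):

```python
import math

# Room-temperature (~290K) thermal noise: kTB comes to about
# -174dBm in a 1Hz bandwidth; a wider bandwidth B adds 10*log10(B).

def noise_floor_dbm(bandwidth_hz, per_hz_dbm=-174.0):
    return per_hz_dbm + 10 * math.log10(bandwidth_hz)

print(round(noise_floor_dbm(1_000)))   # -> -144 (1kHz, POTS-ish)
print(round(noise_floor_dbm(10_000)))  # -> -134 (10kHz)
print(round(noise_floor_dbm(300)))     # -> -149 (300Hz TEMPEST bandwidth)
```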
Oh, and for real TEMPEST geeks: you can recover intelligible audio in a 300Hz bandwidth, which gives the -150dBm figure you sometimes hear about, and -168dBm for detectability.
Hmm, time for a snooze methinks…
Robert • June 8, 2010 5:46 AM
Actually, if you want to implement this as a passive attack, then there is a much more useful EMI spur at about 150MHz. It is caused by the buck switching node’s fall time (which depends on the inductor current).
The trick is that most cell phone bucks use a 5-stage LFSR clock spreader, so you first need to recover the ideal buck clock before the phase-shift information makes any sense. Alternatively you can decode it with a suitable QAM receiver; the audio information is in the constellation jitter.
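For the curious, a maximal-length 5-stage LFSR of the kind Robert mentions can be sketched as follows. The tap positions (corresponding to the polynomial x^5 + x^3 + 1) are an assumption for illustration; real parts may use a different polynomial:

```python
# Toy 5-stage Fibonacci LFSR used as a clock spreader. Feedback taps
# correspond to the primitive polynomial x^5 + x^3 + 1 (an assumed
# choice for illustration), so the nonzero states repeat with period
# 2^5 - 1 = 31. An attacker who syncs to this short sequence can
# strip the spreading and recover the "ideal" buck clock.

def lfsr5_period(state=0b10101):
    seen = []
    while True:
        seen.append(state)
        fb = ((state >> 4) ^ (state >> 2)) & 1  # taps at stages 5 and 3
        state = ((state << 1) | fb) & 0b11111   # shift left, insert feedback
        if state == seen[0]:
            return seen  # one full period of states

states = lfsr5_period()
# every nonzero 5-bit state appears exactly once per period
```

The shortness of the period (31 steps) is precisely why syncing to the spreader is practical.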
I’m not sure that an active EmSec attack against the buck converter will work, because the node impedance is too low. Additionally, cell phone bucks use synchronous switchers, so the typical asynchronous Schottky diode cannot be used as a nonlinear up-converter.
I suspect the best active EmSec attack is probably against the main 26MHz crystal.
Interestingly, much of the jitter on the main Tx crystal signal in a cell phone is caused by a combination of acoustic vibration of the crystal itself and the piezoelectric voltage caused by the “microphoning” of the ceramic caps in the crystal circuit.
Here again you have acoustic signals within the cell phone (voice) being directly converted to clock phase jitter, which will be directly observable as phase jitter on the phone-to-base-station Tx signal. So there’s another “analog” backdoor that completely bypasses any crypto function on the phone.
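A toy numerical model of that microphonic channel: phase-modulate a clean carrier with an audio tone, then recover the tone with an ideal coherent phase demodulator. The frequencies and deviation figure are illustrative assumptions, not measurements of any real phone:

```python
import math

# Toy model of the microphonic channel: voice phase-modulates the Tx
# carrier via crystal / ceramic-cap vibration, and a coherent phase
# demodulator at the far end recovers the voice. All frequencies and
# the deviation figure are illustrative assumptions.

FS = 1_000_000      # sample rate
F_CARRIER = 26_000  # stand-in reference carrier (scaled-down "26MHz")
F_AUDIO = 500       # voice tone
DEV = 0.3           # peak phase deviation in radians (the "jitter")
N = 4_000

audio = [math.sin(2 * math.pi * F_AUDIO * i / FS) for i in range(N)]
# quadrature pair of the carrier, with its phase jittered by the audio
i_sig = [math.cos(2 * math.pi * F_CARRIER * i / FS + DEV * audio[i])
         for i in range(N)]
q_sig = [math.sin(2 * math.pi * F_CARRIER * i / FS + DEV * audio[i])
         for i in range(N)]

# coherent phase demodulation against a clean local reference
recovered = []
for i in range(N):
    ref = 2 * math.pi * F_CARRIER * i / FS
    diff = math.atan2(q_sig[i], i_sig[i]) - (ref % (2 * math.pi))
    diff = (diff + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    recovered.append(diff / DEV)  # rescale: recovered ~= audio
```

In the idealised noiseless case the demodulated signal reproduces the audio tone; real recovery would of course contend with the thermal noise limits discussed above.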
Clive Robinson • June 8, 2010 4:29 PM
Sorry, when I was referring to using UHF it was the general case, not specifically against the buck converter.
Effectively you introduce a signal into the ground return and this gets a parametric effect going quite nicely (for those who want to know more, look up “parametric amplifier”).
Regarding the LFSR, I’ve talked about this “meet the EMC mask” disaster in the past with PCs.
If you can correctly sync to it, it actually gives you a significant boost in range and lifts the phone of interest out of the noise of other similar devices, so from a security aspect it’s actually worse than not having simple EMC-style filtering at all…
It is interesting that you mention microphonics, and in doing so you have let the cat out of the bag, as it were.
The general TEMPEST / EmSec rules, although supposedly still classified, are fairly well known and can be worked out from most EMC guides.
However, such things as “clock the inputs and clock the outputs” have popped up in the public domain since a student of Matt Blaze built his “keybugs” device, which showed you could have an effective side channel right through a PC simply because of its design for “efficiency”.
Well, microphonics is another attack vector; it has been used quite successfully in the past by those in the know, but has yet to get an academic paper to put it fully in the public domain.
If you have a think about the potential for sub-millimetre acoustic signals (above 300kHz) and how they can be used to bring various components to near resonance…
It’s nice to have it “outed” at last 8)
It is also one of the reasons I’m fond of beeswax in various places, and of two-layer plastic cases with expanded-foam filling between them.
Artech House had a nice book on oscillator design that went into microphonic suppression in reasonable depth.
But yes, you are correct that the analog side is way more susceptible at close range than the digital security will be at any range…
I can see you and Nick P having some interesting chats in the future.
Robert • June 9, 2010 4:38 AM
Are you serious? Microphonics is a new technique? I think you guys need to spend less time at the keyboard and more time in the lab!
From my experience, anyone who has ever tried to implement a higher-order mobile QAM system can tell you all about microphonics, especially if it was targeted at a consumer product.
Usually you buy X7R or NP0 ceramic caps for lab samples, but once in production it is only a matter of time before a Y5V cap gets substituted. The result is instant jitter.
BTW I like the idea of using an HF vibration to up-modulate the audio. This even bypasses a jitter-reduction PLL and keeps other 1/f noise sources from swamping the acoustic signal. Nice touch…
Interestingly, on many phones an external 300kHz vibration source is not even needed. In phones that incorporate a Class-D amplifier the carrier is often at about 300–500kHz, and this vibrates the PCB, especially if common-mode Y5V filter caps are used on a BD-mode Class-D amplifier. The PCB vibration goes straight to the Tx crystal and messes up its stability. The problem is so bad that LTE phones with multimedia functions cannot operate anywhere near full speed while music is playing.
Is this Microphonic effect really new to the security industry? Because I could certainly write a detailed paper on the subject.
puzzled • November 11, 2010 12:21 PM
Can anyone break this cell phone security thing down for a simple person – like me? I think I have been a victim of eavesdropping and stalking for the past 7 years now. I cannot prove this, as you may already know, because it is “invisible attacks” and “psychological harassment”. I am researching a way to secure my phone calls and hopefully more on down the line. I have limited funds but some. I looked up the Crypto phones and other things about secure cell phone calls and came across this. Can you offer any “simple” suggestions to a non-tech girl? Something to think about… Would you be okay with someone doing this to your mother, sister, or daughter? If not.. would you not want to help her – or would you rather pull the “crazy card” and tell her there is no reason for anyone to do this to you because you are not important enough or a high profile person?
Please help me
Brian W • November 30, 2010 10:27 AM
Have you looked into a program called myKryptofon?
For puzzled and any others out there, feel free to contact me regarding true point to point encryption.