The Problems with CALEA-II

The FBI wants a new law that will make it easier to wiretap the Internet. Although its claim is that the new law will only maintain the status quo, it’s really much worse than that. This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies. It will assist totalitarian governments in spying on their own citizens. And it won’t do much to hinder actual criminals and terrorists.

As the FBI sees it, the problem is that people are moving away from traditional communication systems like telephones onto computer systems like Skype. Eavesdropping on telephones used to be easy. The FBI would call the phone company, which would bring agents into a switching room and allow them to literally tap the wires with a pair of alligator clips and a tape recorder. In the 1990s, the government forced phone companies to provide an analogous capability on digital switches; but today, more and more communication happens over the Internet.

What the FBI wants is the ability to eavesdrop on everything. Depending on the system, this ranges from easy to impossible. E-mail systems like Gmail are easy. The mail resides in Google’s servers, and the company has an office full of people who respond to requests for lawful access to individual accounts from governments all over the world. Encrypted voice systems like Silent Circle are impossible to eavesdrop on—the calls are encrypted from one computer to the other, and there’s no central node to eavesdrop from. In those cases, the only way to make the system eavesdroppable is to add a backdoor to the user software. This is precisely the FBI’s proposal. Companies that refuse to comply would be fined $25,000 a day.

The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else’s eavesdropping. That’s just not possible. It’s impossible to build a communications system that allows the FBI surreptitious access but doesn’t allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.

This is an old debate, and one we’ve been through many times. The NSA even has a name for it: the equities issue. In the 1980s, the equities debate was about export control of cryptography. The government deliberately weakened U.S. cryptography products because it didn’t want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan “Don’t buy the U.S. stuff—it’s lousy.”

In 1993, the debate was about the Clipper Chip. This was another deliberately weakened security product, an encrypted telephone. The FBI convinced AT&T to add a backdoor that allowed for surreptitious wiretapping. The product was a complete failure. Again, why would anyone buy a deliberately weakened security system?

In 1994, the Communications Assistance for Law Enforcement Act mandated that U.S. companies build eavesdropping capabilities into phone switches. These were sold internationally; some countries liked having the ability to spy on their citizens. Of course, so did criminals, and there were public scandals in Greece (2005) and Italy (2006) as a result.

In 2012, we learned that every phone switch sold to the Department of Defense had security vulnerabilities in its surveillance system. And just this May, we learned that Chinese hackers breached Google’s system for providing surveillance data for the FBI.

The new FBI proposal will fail in all these ways and more. The bad guys will be able to get around the eavesdropping capability, either by building their own security systems—not very difficult—or buying the more-secure foreign products that will inevitably be made available. Most of the good guys, who don’t understand the risks or the technology, will not know enough to bother and will be less secure. The eavesdropping functions will 1) result in more obscure—and less secure—product designs, and 2) be vulnerable to exploitation by criminals, spies, and everyone else. U.S. companies will be forced to compete at a disadvantage; smart customers won’t buy the substandard stuff when there are more-secure foreign alternatives. Even worse, there are lots of foreign governments who want to use these sorts of systems to spy on their own citizens. Do we really want to be exporting surveillance technology to the likes of China, Syria, and Saudi Arabia?

The FBI’s shortsighted agenda also works against the parts of the government that are still working to secure the Internet for everyone. Initiatives within the NSA, the DOD, and DHS to do everything from securing computer operating systems to enabling anonymous web browsing will all be harmed by this.

What to do, then? The FBI claims that the Internet is “going dark,” and that it’s simply trying to maintain the status quo of being able to eavesdrop. This characterization is disingenuous at best. We are entering a golden age of surveillance; there’s more electronic communications available for eavesdropping than ever before, including whole new classes of information: location tracking, financial tracking, and vast databases of historical communications such as e-mails and text messages. The FBI’s surveillance department has it better than ever. With regard to voice communications, yes, software phone calls will be harder to eavesdrop upon. (Although there are questions about Skype’s security.) That’s just part of the evolution of technology, and one that on balance is a positive thing.

Think of it this way: We don’t hand the government copies of our house keys and safe combinations. If agents want access, they get a warrant and then pick the locks or bust open the doors, just as a criminal would do. A similar system would work on computers. The FBI, with its increasingly non-transparent procedures and systems, has failed to make the case that this isn’t good enough.

Finally there’s a general principle at work that’s worth explicitly stating. All tools can be used by the good guys and the bad guys. Cars have enormous societal value, even though bank robbers can use them as getaway cars. Cash is no different. Both good guys and bad guys send e-mails, use Skype, and eat at all-night restaurants. But because society consists overwhelmingly of good guys, the good uses of these dual-use technologies greatly outweigh the bad uses. Strong Internet security makes us all safer, even though it helps the bad guys as well. And it makes no sense to harm all of us in an attempt to harm a small subset of us.

This essay originally appeared in Foreign Policy.

Posted on June 4, 2013 at 12:44 PM · 71 Comments

Comments

Figureitout June 4, 2013 1:01 PM

they get a warrant and then pick the locks or bust open the doors, just as a criminal would do
–It is criminal; I stopped these behaviors as a kid, viewing public places, breaking and entering only bullies’ homes; one time my old house (no one living there) just to see if they changed the lock (they didn’t). I have 2 homes on my list that are very tempting but I’m fairly sure it’s a set-up. I will put this issue to rest (w/ my comments) but I cannot emphasize just how outraging it is to be violated; so I was forced to get my revenge in more devious ways, and I got it…w/ anxiety attacks and sleepless nights. It also forced my research to be put on hold for years and I want to get back to work.

Arclight June 4, 2013 1:03 PM

Good work on pointing out the main problem with the debate. Basically, the “base line” for surveillance has gone way up, even as access to a few data streams gets more difficult. I have to think Facebook and social networking services have made it easier than ever to be a probation or parole officer – criminals seem to overshare just as much as regular people.

Add “social network analysis” from phone records, location data, geotagged photos, and credit card receipts, and it becomes almost unnecessary to follow someone around all day to see what they do.

The “going dark” analogy is just wrong when the overall light level has increased 1000X.

Arclight

W June 4, 2013 1:47 PM

Designing a backdoor that uses asymmetric crypto to ensure that only owners of a certain private key can use it doesn’t sound very hard.

Keeping that private key secret is a bit harder, but should be possible using HSMs.

Backdoors are a bad idea, but I think the risks of somebody else using the backdoor are exaggerated a bit.

Tree June 4, 2013 2:26 PM

@W The chance is pretty low, but if it fails all of the hardware and software depending on its security is instantly obsolete, so the overall risk is unacceptably high.

djs June 4, 2013 2:42 PM

@W

So, how do manufacturers test the mandated back doors?

a) They all have the government’s master key and you hope that it doesn’t leak.
b) There are two keys: the government one and a manufacturer one, with the same problem as (a), but leaks only affect a single manufacturer at a time.
c) During testing a test key is temporarily inserted in the software, and you hope it never ships without the correct key installed. If it does, it won’t be noticed until a Federal request for a tap occurs. Which will result in fines & demands for a less secure solution.

The manufacturers have incentives to make it less secure, so failed tap requests don’t result in large fines. Insecurity is an unlikely bit of bad press that will soon be forgotten by most.

pedant June 4, 2013 3:09 PM

Typo in the essay:

U.S. companies will be forced to compete as a disadvantage

should say: at a disadvantage

Michael Bernstein June 4, 2013 4:14 PM

“But because society consists overwhelmingly of good guys, the good uses of these dual-use technologies greatly outweigh the bad uses.”

This is an interesting use of the phrase ‘dual use’, which to me denotes military vs. civilian use, rather than legitimate vs. criminal use.

nobodySpecial June 4, 2013 4:42 PM

@djs – no, it’s much simpler than that. You have an FBI key, and a police key (state, county and city), then an NSA, CIA, Secret Service, then their equivalents in the army, navy and air force, then the DHS, Coast Guard, TSA.

Then repeat this for 178 other countries you want to sell the system to – and you have a completely secure system which can only be accessed by one of 10,000 completely secure keys distributed to millions of people around the world.

999999999 June 4, 2013 5:04 PM

The Feds are proposing chemotherapy. Poison the host to kill the cancer. It is a fundamental flaw in understanding network security.
This stupid backdoor system will also lead to American hackers losing their abilities. Most hackers want to compromise networks close to home (hack the phone bill or spy on the ex). If these networks are weak, what will happen when they try their hand against a foreign server?

“You asked for miracles, Theo, I give you the FBI.” -Hans Gruber, Die Hard

tuseroni June 4, 2013 5:57 PM

@W – a backdoor that uses a private key sounds nice but there are a few issues:
1: any such code would have to be a huge self-contained encrypted block
2: the code must then be decrypted, and that decrypted code must be put somewhere in order to run it.
3: that running code must communicate back to the main code to be of any use.

so even without knowing the private key there is a communication pathway which must also be secured and prevented from MITM attack (e.g. a program pretends to be the decrypted code; it talks to the program the way the program wants to be spoken to and understands the speech the program sends to it. If it encrypts the data it sends to the program, then the key used to encrypt it must be on the machine and can be discovered). There is also the decryption location, which must also be unknown or protected from tampering.
and the code connecting to it, and the code itself must be bulletproof.
this is on top of the testing issue mentioned above.

the bottom line is, if someone has any freedom whatsoever in their machine they can break your techniques and then use that to break it on others.

for the same reason DRM fails this would certainly fail, because you cannot stop a dedicated hacker with direct access to the machine running the code (or in this case ANY machine running the code, you only have to break one to break them all)

the only way to keep the user safe from malicious individuals is to keep them safe from everyone.

Dirk Praet June 4, 2013 8:08 PM

The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else’s eavesdropping. That’s just not possible.

Actually, it’s absurd. I’m pretty sure that most techies at the FBI know that too, which leads me to believe that this is first and foremost a particularly brain-dead initiative by a bunch of politicos, entrepreneurs and FBI middle management hoping to further their careers and their wallets. I suppose the cyber division guys at the PLA will be absolutely thrilled by the idea of not having to create their own backdoors anymore but just go look for the existing ones. Not to mention the concept of APT getting an entirely new dimension.

SparkyGSX June 4, 2013 8:10 PM

@tuseroni: I’d think it’s quite obvious the code executing the crypto algorithms doesn’t need to be secured against the user, and doesn’t need to be encrypted.

An asymmetrical system could provide a copy of the session key, encrypted with the FBI’s public key, as part of the key exchange procedure. This encrypted key would only be useful to the entity holding the respective private key.

However, it would be trivial to patch the public key being used at runtime, such that the FBI’s private key won’t fit anymore. They wouldn’t find out until they attempted to recover the session key.

The only way I see that this could marginally work, is if it would be illegal for US citizens to use non-approved software, and even then it would only work for domestic communication, and would be incompatible with the rest of the world.
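The escrow mechanism SparkyGSX describes (attaching a copy of the session key encrypted under the authority's public key during key exchange) can be sketched with textbook RSA on toy numbers. Everything below is illustrative only: the key pair is tiny, the names are made up, and a real system would use something like RSA-OAEP with 2048-bit or larger keys.

```python
# Toy textbook-RSA escrow sketch: the client attaches a copy of its session
# key encrypted under an authority's public key. Parameters are deliberately
# tiny and insecure -- this only illustrates the structure of the scheme.

# Hypothetical "authority" key pair: p=61, q=53 -> n=3233, phi=3120,
# and 17 * 2753 % 3120 == 1, so (17, 2753) are valid exponents.
AUTHORITY_PUB = (3233, 17)       # (n, e)
AUTHORITY_PRIV = (3233, 2753)    # (n, d)

def escrow_session_key(session_key: int, pub=AUTHORITY_PUB) -> int:
    """Encrypt the symmetric session key under the authority's public key."""
    n, e = pub
    return pow(session_key, e, n)

def recover_session_key(blob: int, priv=AUTHORITY_PRIV) -> int:
    """What the authority does when presented with a wiretap order."""
    n, d = priv
    return pow(blob, d, n)

session_key = 1234                      # stand-in for a symmetric key
blob = escrow_session_key(session_key)
assert recover_session_key(blob) == session_key

# SparkyGSX's point: a user who patches the public key in their own client
# produces a blob the authority's private key can no longer open -- and
# nobody notices until a tap is actually attempted.
patched_pub = (3233, 7)                 # swapped-in exponent
bogus_blob = escrow_session_key(session_key, patched_pub)
print(recover_session_key(bogus_blob) == session_key)  # → False
```

Note that the structural weakness is independent of key size: however strong the RSA parameters, the escrow step runs on hardware the user controls and can be patched out or fed garbage.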

David Brin June 4, 2013 8:55 PM

Where to begin. This is plain silly. The government can create intranets and keep them secure from the methods that let them spy on regular internet traffic. Lots of agencies already do this. Yes, adversaries can also set up intranets, but the FBI can then legally break down doors.

The dichotomy is not between secure and un-secure. It is between letting elites exercise surveillance unsupervised or supervised. All of our radicalism should be aimed at forcing new, innovative and better forms of supervision and sousveillance upon powerful elites, instead of utterly futilely trying to blind them.

With cordial regards,

David Brin
http://www.davidbrin.com
blog: http://davidbrin.blogspot.com/
twitter: http://twitter.com/DavidBrin

Alex R. June 4, 2013 10:39 PM

I have a four step plan for dealing with this proposal:

1.) Set up a website that is superficially dedicated to Jihad but doesn’t actually break any laws.

2.) Attach a packet sniffer to the machine which hosts my fake Jihadi site.

3.) Learn the FBI’s backdoor codes and protocols.

4.) Profit!

Note the absence of a line which reads “?” (In other words, this is one hideous turd of an idea.)

Wael June 5, 2013 1:04 AM

@ tuseroni,

the bottom line is, if someone has any freedom whatsoever in their machine they can break your techniques and then use that to break it on others.

Not if you design for BOAA (Break Once, Attack All) resistance.

Clive Robinson June 5, 2013 1:54 AM

@ W,

    Designing a backdoor that uses asymmetric crypto to ensure that only owners of a certain private key can use it doesn’t sound very hard

It’s actually a very hard problem when the attacker has the hardware on the bench in front of them.

Just keep in mind that to render the backdoor useless all the attacker has to do is change one bit prior to or during encryption and let the avalanche effect inherent in the crypto do the rest.
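The avalanche effect Clive invokes is easy to demonstrate. As a rough illustration (using SHA-256 as a stand-in for the avalanche behavior of a cipher), flipping a single input bit changes about half of the output bits:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the bits that differ between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = b"wiretap this session"
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]   # flip one input bit

d1 = hashlib.sha256(msg).digest()
d2 = hashlib.sha256(flipped).digest()

# 256 output bits total; a one-bit input change flips ~128 of them
# on average, so the two digests share essentially nothing.
print(bit_diff(d1, d2))
```

So a backdoor that captures keys or ciphertext gains nothing if the target perturbed the plaintext or key by even one bit before the escrowed encryption ran.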

But asymmetric crypto is inefficient, so the FEDs are most likely only going to be interested in the symmetric keys used for encryption. For the backdoor to be effective in use it is actually unlikely to be built into the application code logic but into a separate library that is linked to so it’s re-usable (this is common practice in software development). Also the chances are either the OS supplier or the FEDs themselves will supply the library with a simple-to-use API to get around the problems involved with testing and certification, which would be both demanding and expensive in resources such as manpower.

In a conventionally designed software crypto system the symmetric key is going to be an array of bits, with each bit equally probable of being set or cleared to give the full key range. Thus when the application generates the single-use symmetric key it needs to pass it to the backdoor via the API; such an API is fairly easy to spot in a reverse-engineering exercise and it is thus vulnerable in ways the backdoor cannot easily determine.

The only way the FEDs could ensure that the symmetric key is valid is by having access to the plain text and checking each and every bit of it. But again the backdoor cannot easily determine if the plaintext it is passed is genuine plaintext or the ciphertext from a preceding encryption process.

And this is the problem with such ideas, there is always going to be a way for an attacker to feed in encrypted signals which the backdoor cannot reliably detect and thus it will always be possible for an attacker to “end run” the system.

Which is why the FEDs want to fine the service providers. All the FEDs will have to do once they think that you as an individual are communicating covertly is rattle the service provider’s cage. The service provider will cut you off within minutes to avoid the potential fine, irrespective of whether the FEDs follow up with evidence or not.

Potentially this gives a way to attack the system and get at people. All an attacker has to do is impersonate the FEDs to the service provider, and that’s the victim cut off. Due to the idiotic way such things generally work it could take weeks for the victim to get it sorted out.

Such an attack vector has many interesting uses and I’m fairly certain that if the FED backdoor gets implemented then this new attack vector will be used at some point.

Why?

Because we have seen it before, from the early days of phreaking through all manner of denial-of-service attacks up to the current SWATing.

Clive Robinson June 5, 2013 2:08 AM

@ Wael,

    Not if you design for BOAA

BOAA has its limitations. Any system that is transparent in any way can have a time-based covert channel sent through it.

As a simple example, take an ordinary telephone: the system is transparent to speech from end to end, so if I talk in some kind of veiled way or in obvious code it’s going to get through any phone you use it against.

Likewise I could train myself to type at a keyboard such that the text I type is fairly innocuous but the spacing between key presses could be Morse code.

All usable communications systems are transparent in some way and thus covert information can be sent across them.

That is the joy of “end run” attacks.
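The keystroke-timing channel Clive describes can be sketched in a few lines. The delays here are simulated numbers (a real sender would actually pause between key presses), and the SHORT/LONG/THRESHOLD values are arbitrary assumptions, not taken from any real tool:

```python
# Hide one covert bit in each gap between key presses of an innocuous
# message: a short gap encodes 0, a long gap encodes 1. Any system that
# faithfully relays keystroke timing relays the hidden bits too.

SHORT, LONG = 0.12, 0.45   # seconds between key presses (assumed values)
THRESHOLD = 0.25           # decoder's split point between short and long

def encode(cover_text: str, secret_bits: str) -> list:
    """Pair each typed character with a delay carrying one covert bit."""
    assert len(secret_bits) <= len(cover_text), "cover text too short"
    return [(ch, LONG if bit == "1" else SHORT)
            for ch, bit in zip(cover_text, secret_bits)]

def decode(keystrokes: list) -> str:
    """Recover the covert bits by thresholding the observed delays."""
    return "".join("1" if delay > THRESHOLD else "0" for _, delay in keystrokes)

stream = encode("see you at lunch", "1011001")
print(decode(stream))  # → 1011001
```

This is why "end run" attacks are so hard to close off: the covert data rides on a property (timing) the channel must preserve to be usable at all.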

abcdef June 5, 2013 3:08 AM

Do we really want to be exporting surveillance technology to the likes of China, Syria, and Saudi Arabia?

You already are.

concerned citizen June 5, 2013 5:29 AM

My understanding is that the FBI proposal would require certain communications services – such as Blackberry’s IM service, Google’s chat service, and others – to be accessible in the event of a legally appropriate request for such access. The FBI is not asking for hardware backdoors in every device accessible to anyone with a particular key.

I’m open to correction (and I’m no expert, so I’m actually expecting and welcoming of correction) here.

If my understanding is correct though, then three points are worth making:

1) would the services be any less secure than telephone calls currently are? Or would the services be not as secure as they could be, but still more secure than telephone calls currently are?

2) would not there be implications for the structure of the internet? It seems that this requirement would spur major providers to work on comm systems with central hubs. This appears to fit well with other proposed “reforms” of the internet. Indeed, it may even be conducive to a more secure structure, with attendant advantages and disadvantages.

3) there could be multiple legs to an encrypted communication, which could be implemented optionally dependent upon a wiretap order.

If there is a wiretap order:
Leg AS: between party A and a central server S running the service. Encrypted between A and S; S can decrypt and store the message before relaying it to B on a different leg.
Leg SB: between B and the same central server. Encrypted between B and S; S can decrypt and store the message before relaying it to A on a different leg.

So a comm between A and B using service S would remain secure against outsiders, provided that the S site were hardened.

Comms are encrypted between a party and the server, and then a key to an encrypted exchange running through a central server could be generated locally (in a geographic sense), at the time of encryption of comms between party A (or B) and the server, and for that particular exchange. The ability to generate and use such a key could be tied, perhaps, to physical access of a particular site. This would remove a portion of the broader security concerns with the broad outlines of the FBI’s proposal as I understand it.

The implication of point 3 is that communications under the FBI proposal need not be much less secure against private or foreign government penetration. The question is how hardened the central sites are.

OldFish June 5, 2013 11:35 AM

The 4th defines (in a general way) the conditions under which the right to search may be granted, but in no way does it grant the authority to guarantee a priori that what is sought will be found.

Just my amateur read on the thing.

stefan June 5, 2013 11:38 AM

The prudent internaut assumes that everything is intercepted and rendered to plaintext unless encrypted end to end such that only the correspondents have the relevant keys. Practically all commercial services have to be assumed to be already compromised – Skype, IM, SSL/TLS/HTTPS, etc.. But we can still use GPG or the like, and then the goons have to go after the endpoints.

The really evil and scary part is: “… encrypted from one computer to the other, and there’s no central node to eavesdrop from. In those cases, the only way to make the system eavesdroppable is to add a backdoor to the user software. This is precisely the FBI’s proposal. ”

If the Feds go down this road, the logical outcome is everyone eventually being restricted, by law, to TCPA / LaGrande / whatever-the-next-name computers – i.e. locked-down and DRM’d to the hilt. Open source and FPGAs would have to be banned, compilers subject to a licensing regime and so on.

For now they aren’t pushing that far. The Slate article says, “The F.B.I. has abandoned a component of its original proposal that would have required companies that facilitate the encryption of users’ messages to always have a key to unscramble them if presented with a court order. … The current proposal would allow services that fully encrypt messages between users to keep operating, officials said.”

But the plan is clear. First the central-server systems, then the endpoints. The next phase will be those services that supposedly still offer real security, and finally the individuals. Already, respect for civil liberties and the rule of law is dead in the USA, and the traditional talk of “freedom” and “democracy” has become a bad joke.

Bruce Schneier June 5, 2013 12:07 PM

“This is plain silly.”

What is plain silly? The FBI’s proposal? — I agree. My opinion that the FBI’s proposal is impossible? — I disagree.

“The government can create intranets and keep them secure from the methods that let them spy on regular internet traffic. Lots of agencies already do this. Yes, adversaries can also set up intranets, but the FBI can then legally break down doors.”

Right. And if the story ended there, that would be fine. CALEA-II has nothing to do with government-created anything, and how secure they are. Nor is it really concerned with adversary created anything. If anything, it ignores the possibility completely.

“The dichotomy is not between secure and un-secure. It is between letting elites exercise surveillance unsupervised or supervised. All of our radicalism should be aimed at forcing new, innovative and better forms of supervision and sousveillance upon powerful elites, instead of utterly futilely trying to blind them.”

There are several dichotomies. The one you mention is transparency, which is critical. I posted an essay on that last week. This particular dichotomy, the one that is brought into focus by the FBI proposal, is between 1) building a communications system that can be surveilled by the FBI — either with or without oversight — and at the same time can be surveilled by others, and 2) building a system that cannot be surveilled by others, and at the same time cannot be surveilled by the FBI.

kgf June 5, 2013 12:33 PM

Why doesn’t the FBI just require that all new computers sold be thin PXE clients? They will provide the boot image, they will manage storage on their cloud, and we would need to get our keys from them.

Excluding commercial needs, and assuming 1TB per citizen, that’s only +316e+6 TB of storage needed. Using 1TB drives with a 3.5in form factor we can also assume ~3.160e+9 watts of power, and ~1.1906041e+12 cubic meters of warehouse space.

-kf

name.withheld.for.obvious.reasons June 5, 2013 12:47 PM

This could be too much fun. Imagine a product sold commercially from Argos or Best Buy that has been modified to behave just like the CALEA device. It intercepts the request and redirects all queries to individuals and relatives of the requesting agent. “Why is the tap tracing my mother’s conversation with her proctologist?”

Too cool

Jack June 5, 2013 1:51 PM

Glad to see this article posted here.

“This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies. It will assist totalitarian governments in spying on their own citizens. And it won’t do much to hinder actual criminals and terrorists.

[.. companies have to make complicated systems that are expensive to make…]

Companies that refuse to comply would be fined $25,000 a day.”

I think this is what it all boils down to.

The phone companies have the government as one of their biggest customers. So, they have a lot of say there. Traditionally, illegal surveillance has been used in spying on political adversaries. This is the Way It Was Done in the FBI from the 1920s to the 1970s.

That delivers political clout to law enforcement.

The big companies can afford to put in this sort of surveillance rigging. Small companies can not.

25,000 dollars a day is draconian. Okay? That kind of figure and law comes from a truly, deeply perverted mind.

The kind of individual and groups (I do not put this on the entire FBI) who create that kind of idea is way out of touch.

Does that sort of individual care about terrorists? Only if it gives them an excuse to illegally surveil whomever they wish. They like terrorism because it means money and power for them.

Does that kind of individual care about crime?

25,000 dollars a day for every mom and pop shop out there? For every small business? For everyone and anyone?

That is not criminal?

Senseless, draconian laws are not criminal?

I think some of the greatest outrages on the planet are because of such laws.

When you pervert the law in this manner you establish a precedent for lawlessness posing as lawfulness. It sends a message to the entire society.

Jack June 5, 2013 1:57 PM

@KGF

“Why doesn’t the FBI just require that all new computers sold be thin PXE clients? They will provide the boot image, they will manage storage on their cloud, and we would need to get our keys from them.”

That sounds right up their alley. Why have anyone store any data at all, when the FBI could have it all and commercial businesses could just rent it back from them?

Health records, banking information, customer information, PII, PCI — store everything in the FBI cloud.

And if they refuse, why stop at 25K a day? Make it a billion dollars a day. And give them the rights to take family members hostage.

Maybe even give them rights to sell them off to countries that have that kind of trade as legal.

Hopefully, after they pass this law, they can then move to more pressing issues: no more kite flying, no phonographs, no nudity of any kind, no recorded images of people, women not driving, everybody has to belong to ONE political party — otherwise it gets too confusing to know “who is bad” and “who is good”…. and so on and so on…

Thunderbird June 5, 2013 4:25 PM

Excluding commercial needs, and assuming 1TB per citizen, that’s only +316e+6 TB of storage needed. Using 1TB drives with a 3.5in form factor we can also assume ~3.160e+9 watts of power, and ~1.1906041e+12 cubic meters of warehouse space.

I think you dropped something there. I assume you could fit at least 200 TB per cubic meter. That makes around 1e6 cubic meters of drives, or a cube 100 meters on a side. Still ridiculous, but not as ridiculous as 10 km on a side.
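For what it's worth, the corrected arithmetic checks out. All inputs below are the commenters' own assumptions (316 million citizens, 1 TB each, roughly 200 TB of drives per cubic meter of warehouse):

```python
# Quick sanity check of the corrected storage estimate.

citizens = 316e6
tb_total = citizens * 1.0        # 1 TB per citizen -> 3.16e8 TB in total

tb_per_m3 = 200.0                # assumed packing density of 3.5" drives
volume = tb_total / tb_per_m3    # warehouse volume in cubic meters
side = volume ** (1 / 3)         # edge of an equivalent cube

print(f"{volume:.3g} m^3, cube of {side:.0f} m per side")
# → 1.58e+06 m^3, cube of 116 m per side
```

kgf's original ~1.19e12 m³ figure corresponds to a cube over 10 km on a side, which is where the two estimates diverge by a factor of about a million.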

Mover June 5, 2013 5:37 PM

I’m from Europe. If politicians in my country accepted an analog of CALEA-II, I would simply move to a different country.

Ernest June 5, 2013 8:18 PM

How would this proposal affect open-source software that has no company behind it? How would that even be enforceable?

murray June 5, 2013 11:07 PM

@Ernest
“How would this proposal affect open-source software that has no company behind it? How would that even be enforceable?”

Perhaps it will become illegal to run unauthorised software.
If you do, you will be asked why you have something to hide.

Vinzent June 6, 2013 2:23 AM

@mover: Well, the list of countries to move to might become awfully short soon. You may try countries without (public) internet, though. North Korea, maybe?

Dirk Praet June 6, 2013 5:16 AM

@ Clive

I certainly hope Glenn Greenwald, The Guardian and the source have been practicing good opsec, or someone is gonna be keeping Bradley Manning company shortly.

Jack June 6, 2013 9:15 AM

@Mover
“I’m from Europe. If politicians in my country accepted an analog of CALEA-II, I would simply move to a different country.”

This affects Europe.

I doubt the people will accept this law. Some politicians might, if they are paid in some way.

The people behind this kind of rule need to move to a country where they would clearly feel more at home.

As Nazi Germany is no more, I would suggest Pakistan, Somalia, Saudi Arabia, China, though they would probably feel most at home in North Korea.

Just because some bozo or a collection of bozos find themselves behind the seat of a nice car does not mean they have not stolen that car and need to be kicked out and arrested.

(In the real world, this usually simply involves their life “magically” becoming a disaster as they are exposed as the frauds that they are by the invisible powers that be.)

Jack June 6, 2013 9:34 AM

@Clive R

http://www.wired.com/threatlevel/2013/06/nsa-verizon-call-records

“The sweeping order, issued by the Foreign Intelligence Surveillance Court, requires Verizon to give the NSA metadata on all calls within the U.S. and between the U.S. and foreign countries on an “ongoing, daily basis” for three months.”

On All Calls Within the US.

“USA Today first reported in 2006 that the NSA had been “secretly collecting the phone call records of tens of millions of Americans, using data provided by AT&T, Verizon and BellSouth” in order to produce and analyze calling patterns.

“It’s the largest database ever assembled in the world,” one anonymous source told the paper at the time, saying that the NSA’s goal was “to create a database of every call ever made” within the nation’s borders.”

300 million crimes. And I think it counts as a separate crime for each instance, as well.

Kind of like a Ted Bundy serial killer raving with a badge on his
chest which says, “US Authorities”.

Sends a message of corruption to everyone and the world.

I try to think of law enforcement as very different from, say, mafia sorts. But this sends out another message.

It also says to every American, “We stole your data, we conspired
to do it, we had no reason to do it, and though it is public you
can not do anything about it. We got away with it.”

I do not think that sort has any comprehension of right and wrong.

999999999 June 6, 2013 1:50 PM

“My opinion that the FBI’s proposal is impossible? — I disagree.”
The FBI can do it. I wish they would.

On the one hand: Google, FB, AOL, Yahoo!, M$ and every e-mail provider in the world has been trying to get rid of SPAM and phishing, and has failed. The feds will have to filter SPAM or they would have to sort through billions of messages encouraging people to buy enlargement pills. And let’s not forget all the legitimate emails that are so mundane and useless (just look at most of your friends’ FB pages, LOL, OMG). And let’s also not forget the porn… massive amounts of streaming HD data floating around the internet. If you want to thwart the feds, then hide your covert message in a porn movie.

On the other hand: (Tin foil hat) The FBI might not be interested in all people’s communication, just the “bad” people. They want to protect all of us from criminals (not whites), terrorists (Muslims), spies (foreigners), pedophiles (LGBT), tax evaders (poor people), illegals (Latinos), anti-establishment protestors (young people) and basically anyone who is not a rich white man or an agent of the FBI. They do this in order to protect us from ourselves. We are the criminals and they are protecting us from us. This is not complicated.

On the gripping hand: Allowing the gov. to listen in on our mundane conversations without judicial oversight (a warrant) is a sin against democracy, a crime against the constitution and against social norms in the US. Even if they are successful in deploying this, it will not stop people from killing each other, cheating on taxes or violating all sorts of laws. Homosexuality is a punishable crime in some countries and people still engage in it. In the end, they will spend a ton of money to make all of us less free and less secure.

The NSA has forced phone companies to fork over all records. What about the content of voice calls? Is that why phone companies are so eager for everyone to send text messages? Do they translate all the voice calls in non-English languages? Skype calls already suffer server lag, choppy images, droid voice. Will they get worse if every packet has to be also routed to big fed?
How is a CEO supposed to make high-risk high-profit decisions if the FBI is listening? If you were an FBI agent with a head for stocks, wouldn’t you listen in on the likes of Elon Musk? Steve Jobs? Bill Gates? Bruce Schneier?

Rodent June 6, 2013 5:21 PM

This debate has all happened before. Witness this whitepaper, which our own Bruce co-authored in 1997/1998:

http://www.schneier.com/paper-key-escrow.html

I believe this was in response to proposals put forth by the US guvmint regarding the back-door Clipper chip, which thankfully never went anywhere.

It’s a real shame more folks don’t demand security/privacy as part of the products they buy, or actively work to put it in afterward. It is only recently that ubiquitous encryption has come to be viewed as needed – not 10+ years ago when the ball was really rolling – and things like MEGA or TextSecure or RedPhone are only just getting going now.

Sad.

Wael June 6, 2013 11:24 PM

@ Clive Robinson,

“BOAA has its limitations.”

So does everything else. Some watches cannot tolerate a drop of water, some are water resistant to 100 meters, and some are waterproof up to 1,000 meters (a kilometer). No one I know of can dive that deep. But the watch will still survive that depth. Does it have a limitation? Yes, of course. Can we live with that limitation? I guess so…

“As a simple example, take an ordinary telephone: the system is transparent to speech from end to end, so if I talk in some kind of veiled way or in obvious code it’s going to get through any phone you use it against.”

How about an extraordinary phone that is not transparent to speech?

Clive Robinson June 7, 2013 4:27 AM

@ Wael,

    How about an extraordinary phone that is not transparent to speech?

If it is not transparent to speech, then it will be transparent to some other medium; otherwise it would not allow two people to communicate.

That is, it would form an Extended Shannon Channel consisting of a transmitter and a receiver with a communications path between them. That is, its fundamental design is such that some level of information is communicated between two physical entities. How one entity puts information into the transmitter and how the other entity retrieves it from the receiver is not of importance, provided that information is transferred from one entity to another.

The two important things to remember are,

1, The scope of the system security is limited to the system.
2, The scope of the total information channel is not limited by the system.

That is, any physical information transmission system is constrained by its implementation to the limits of the system. The entities using the system are outside the scope of the system and thus are not constrained by it. Further, the rate of information in transit through/across a physical medium is measured in baud (symbols/second), where the smallest individual piece of information is the bit. Thus from an external entity’s perspective the system can be characterised in bits/sec, which is related in various ways to the bandwidth of the communication channel.

The important thing to remember is that whilst the upper bandwidth limit of a communication system can be found, the lower bandwidth limit is defined by how long the channel is in existence: the fact it is ON or OFF provides at a minimum one bit of information. However, when viewed by an external entity, the length of time it is on or off communicates a number with a granularity of the maximum information bandwidth. We know from various methods of data compression that such a number can convey a considerable amount of information.

Thus it is possible to send quite large amounts of information through a system without changing the state of the system, so the system is unaware of this extra communication channel laid covertly over its intended communications channel. Because of this it can be shown that covert channels are not only possible but are undetectable, as long as the external reference by which the covert number is measured remains unknown to the system or its designers.
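The on/off-duration channel Clive describes can be sketched in a few lines. This is a toy illustration, not anyone's actual design; the slot granularity `SLOT` is a hypothetical external timing reference that the overt system never sees.

```python
SLOT = 0.01  # seconds; the external timing reference (assumed granularity)

def encode_duration(bits: str) -> float:
    """Map a covert bitstring to how long the overt channel stays ON."""
    return (int(bits, 2) + 1) * SLOT        # the duration IS the covert number

def decode_duration(duration: float) -> str:
    """Recover the bitstring from a measured (possibly jittery) ON time."""
    value = round(duration / SLOT) - 1      # rounding absorbs small jitter
    return format(value, "b")
```

The overt system sees only a single ON/OFF transition, yet the duration carries as many bits as the observer's clock can resolve. Without knowledge of `SLOT`, an observer inside the system has nothing to measure the duration against, which is exactly the undetectability condition above.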

When dealing with TEMPEST/EmSec there are the usual fairly well known rules that can be found in any halfway decent book on Electromagnetic Compatibility (EMC). However there are other (supposedly) still secret rules, some pertinent to covert channels:

1, Clock the inputs.
2, Clock the outputs.
3, Fail hard on error.
4, Fail long on error.

The first two help remove “spread spectrum” techniques like those employed in digital watermarking. The second two help limit the granularity of the covert channel and thus limit its bandwidth. A little thought on the matter gives rise to another potential rule,

5, Make long fail time random [1].

Whilst it appears to make external timing very difficult, it actually changes things only a little, in that the sender of covert information switches from absolute timing to relational or differential timing (ie the sender just uses the random re-start time as a timing reference etc). However, whilst the rule only inconveniences the sender marginally, it makes the automated detection of covert channels very difficult, so in most practical systems it is not used.
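Rules 1 and 2 amount to quantizing every observable event time onto an independent clock grid. A toy sketch of why that caps the channel (illustrative only; real designs do this in hardware latches, not software):

```python
import math

def reclock(event_times, period):
    """Re-emit each event on the next edge of an independent clock.

    Fine-grained timing detail between clock edges (where a
    high-bandwidth covert channel could hide) is destroyed; all that
    survives is which clock slot each event falls in.
    """
    return [math.ceil(t / period) for t in event_times]

# Three events carefully spaced 13, 27 and 31 ms after t=0 collapse
# to bare slot numbers once re-clocked onto a 10 ms grid.
```

Whatever sub-period structure the sender crafted, the observer downstream of the latch sees only the slot index, so the covert bandwidth is bounded by the re-clocking rate.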

So a system designer has a difficult problem, in that it is impossible to stop covert channels being formed and used; the best that can be achieved is to limit the bandwidth of known classes of covert channel.

[1] There are many meanings to “random” which critically affect system design, which makes this rule one that needs careful consideration. For instance, low-grade pseudorandom is predictable not just by the system but by entities external to the system, so it serves little purpose. True random is unpredictable not just to external entities but to different parts of the system, which makes automated detection of covert channels near impossible. Crypto-secure PRBS generators should be unpredictable to all external entities but not to different parts of the system; however, they have their own exploitable issues such as re-keying and re-sync.

Nick P June 7, 2013 2:18 PM

@ Clive, Wael

“How about an extraordinary phone that is not transparent to speech?” (Wael)

“{Essay on theoretical INFOSEC and covert channels}” (Clive)

I guess I’ll throw something in. How about we keep it extra simple? Previously, I had a problem trusting SOCs. Tiny, super-integrated, complex interactions, and only half a dozen fabs? It’s subversion or new covert channels waiting to happen. We need dedicated chips and dedicated communication lines if we want a simpler design. It takes more space/power. This is why my last “mobile” phone was a briefcase and I called it “portable” instead of mobile. 😉 However, the typical satellite phone I’ve seen is larger than most cell phones, but not too large for the user. Targeting that size, I figure I’ll have one board to work with (two max, maybe) and only a few chips. What to do….

I would take a page from my MILS designs. However, I just remembered the Sectera Edge secure phone. It had a very simple design and interface. It seems to be two phones in one: classified and unclassified. Modes are swapped on button press. There’s a dedicated LCD at bottom for security critical messages (I’ve often recommended this). The system’s secure partition has encrypted data, voice and text. It’s also been certified through NSA’s rigorous Type 1 process. So, I say we should start with an already proven design in ours. I also noticed the phone virtualization vendors were building simple Personal or Work style separation into their products. So, they implicitly agree.

We can do the secure phone with embedded virtualization. It might be a decent start, as it’s cheaper than dedicated hardware. The best products in this position are OKL4 and PikeOS; they’re more flexible than defense contractors, their products aren’t restricted by US law, the code is mature, and they’ve both been used for this use case before. So, like Sectera, we’ll use two partitions. The first will be Private, with crypto on by default and most risky stuff disabled (inspired by Cryptophone). The second is the regular OS. The user interface is in a separate partition and communicates with the VM that has focus (trusted path). That UI includes access to mic, keyboard and screen. The comms stack is untrusted. This is already better than any “secure” mainstream phone OS bundle.

Well, what if we don’t trust all of that software? Then, first idea is to replicate the design with more hardware separation. We can start with Sectera concept. We might have four pieces of hardware: trusted path interface system, untrusted OS SOC, private OS SOC, and communications stack SOC. A hardware switch directs trusted path to look at a different OS. Comms stack can be set to see one OS or both OS’s. If seeing both, it will use a MILS-style partitioned communication system with periods processing and fixed scheduling to reduce covert channels. A possible extra feature is turning off one or more of the main SOC’s to preserve some battery life or improve security. Example of the latter would be turning off untrusted OS while spending the day doing secure calls. This thing will be a battery drain so it will come with spare batteries, a car charger and a home charger. A selling point one can mention if people make faces over the price tag. “And you get all these useful, expensive accessories gratis!”
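The “periods processing and fixed scheduling” mentioned above can be sketched as a static major-frame schedule. Partition names and slot lengths here are hypothetical; the point is that the execution trace is identical whether or not a partition has work, so one partition's activity cannot modulate the timing another partition observes.

```python
SCHEDULE = ["private_os", "untrusted_os", "comms", "idle"]  # one major frame
SLOT_MS = 10  # fixed slot length; an idle slot still consumes its full time

def run_frames(partitions, n_frames):
    """Run n major frames; the returned trace never depends on load."""
    trace = []
    for _ in range(n_frames):
        for name in SCHEDULE:
            trace.append((name, SLOT_MS))   # slot charged unconditionally
            work = partitions.get(name)
            if work:
                work()                      # work is bounded to its slot
    return trace
```

Because the trace is fixed at design time, a busy `comms` partition and an idle one are externally indistinguishable, which is the covert-channel-reduction property MILS-style fixed scheduling is after.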

As I look at these possibilities, I still think it might be easier to facelift the whole idea. Maybe the whole industry is looking at it wrong by trying to do it all inside the endpoint. When we’re out using [untrusted] public WiFi, we often want the protections of our home network. So, it’s common to use VPN software to tunnel to the home network, which then proxies everything for you. The trust in the endpoint is mainly that it can establish a tunnel and force traffic into it. I think this simpler approach can help solve the mobile security problem easily. First, make the encrypted text/chat/voice/video software all go over the data line (mobile or WiFi) with a protocol whose default algorithms are strong and efficient. Second, create apps for each endpoint that are easy to set up with minimal privileges/configuration. This is the first tier of service. It lets users apply the cryptosystem on their legacy (untrustworthy) devices to get secure comms so long as their endpoint OS isn’t hacked.

The next tier focuses on assurance and uses proxying to very non-mobile devices to get it. The non-mobile device is a secure comms gateway. It does all the heavy lifting for the security protocol, internal data protection, etc. The mobile device’s secure mode is simply a thin client (or lightweight apps) connecting to this. This just requires a trusted path, trusted (simple) VPN, and untrusted networking/comms. Even better, the RTOS vendors I named have already implemented these types of functions on their systems before, so they have experience and maybe usable components. Part of the main system might even hook into the mobile network via an add-in card so that if someone calls a user’s secure phone number, the call goes to the assured system. The system then sends a simple call notification message to the actual phone over VPN, starting a call if accepted. The final benefit is in subversion resistance: the use of standardized protocols and implementation-neutral designs means that individual instances of the gateway/comms-system can be diversified using different RTOS’s, libraries, CPU’s or BSP’s. Try to subvert that, nation states. 😉

Once I get out of my current financial and health crunch I might build it. It seems easier to secure than a true smartphone, which is hopelessly complicated (imho). I might even backstep to using two different phones, one for secure calls. Then the secure phone’s SOC and stuff could be simplified to a large degree, as it’s just a thin client or limited app execution on a minimal kernel. I bet such a phone could be brought down to under $100. The gateway, on the other hand, might cost a tad more. And have a support contract for secure administration. (Toothy grin)

Herman June 7, 2013 5:22 PM

The FBI doesn’t need a law to monitor everything. The NSA is already doing it.

The big problem with monitoring everything is that now, all a terrorist has to do to cause trouble is call, email, text or Skype someone. The three-letter agencies will then do the terrorising work for them.

Wael June 9, 2013 7:22 AM

@ Nick P, @Clive Robinson

Previously, I had a problem trusting SOCs. Tiny, super-integrated, complex interactions, and only half a dozen fabs? It’s subversion or new covert channels waiting to happen

Waiting to happen? I remember we talked about subversion before. And I still maintain my opinion.
Are you becoming an optimist? 🙂

However, the typical satellite phone I’ve seen is larger than most cell phones, but not too large for the user. Targeting that size, I figure I’ll have one board to work with (two max, maybe) and only a few chips.

Changing the transport layer of a communication system is not an effective way to avoid subversion. Subversion can happen at the SOC level or in lower-layer software stacks. What I am saying is: if you have a satellite phone, how can you tell it does not have other side channels? Then again, how much can you vouch for a LEO satellite, 100+ miles away from you?!

Well, what if we don’t trust all of that software? Then, first idea is to replicate the design with more hardware separation.

That is one approach I often thought about. I think you can reduce the chances of subversion (and the “Transparency” problems @Clive Robinson mentioned) using this method.

Wael June 9, 2013 8:11 AM

@Clive Robinson,

1, Clock the inputs.
2, Clock the outputs
3, Fail hard on error
4, Fail long on error
The first two help remove “spread spectrum” techniques like those employed in digital watermarking. The second two help limit the granularity of the covert channel and thus limit its bandwidth.

Yes! We had that discussion before as well. One thing I forgot to ask then:

The second rule is,
“Clock from most secure to least secure”

Somehow that sounds right. Still, what, in your mind, would happen if you Clocked from least secure to most secure?

Nick P June 9, 2013 12:04 PM

@ Wael

Re: Sat Phone. I was actually talking about how they’re larger than a typical cell phone and users were OK with that. Sorry for any lack of clarity. The idea was that making a secure phone a bit bigger can allow for extra security-oriented hardware, yet it’s not TOO big for many users.

re: On subversion
” My conclusion is: For individuals, protecting against subversion is an impossible task. For a government, the task is formidable but not insurmountable. The (necessary, but not sufficient) condition is, they have to fully control the design, manufacture, test, and deployment of the hardware / software. This implies nothing is outsourced, including the FABs."

I see where you’re coming from but disagree. I think it’s a matter of probability. Subversion of many kinds seems to be very difficult for today’s hackers. Let me put it this way: we’d see tons of IP and secrets being transferred without much visible hacking if widespread subversion were happening. They’d disguise it, but there would be indicators. We’re not seeing this. The MO is attacks via careless/shady insiders, on systems’ TCBs, physical theft and application-level issues. The main subversions are nation states backdooring their domestic operations and certain groups trying to backdoor software others might use. So far, the counter-subversion techniques I’ve seen people use and have advised on showed no evidence enemies were doing anything.

My conclusion is that today’s subversion efforts are rare, picked for certain types of jobs where it’s the easiest attack, mainly for use against a domestic user base, and they’ll only do it on something like a foreign fab if it justified the cost, like if one fab made all the secure phones. 😉

RobertT has noted that it might be quite difficult to insert a backdoor at most stages of the ASIC process, so there’s a limited window anyway. They’d need to control the company or those in charge of that window. Why they don’t, if they don’t, is beyond me. Maybe the Taiwanese (for example) are just hard to control. Of course, they’ve resisted China this long, so it’s not hard to believe the very resourceful fabs are resisting foreign subversion.

“That is one approach I often thought about. I think you can reduce the chances of subversion (and the “Transparancy” problems @Clive Robinson mentioned) using this method. ”

Appreciate the feedback. My oldest design used an old PC with a foreign (neutral country) modem and cheap non-usb headset for cryptocomms. Low risk for subversion. Then, just lock down BIOS, lock down OS, clean copy of HD state on WORM media, and strong app execution isolation. Had voice, email, VPN, etc. Oh how far the designs have come. 😉

Wael June 9, 2013 12:48 PM

@ Nick P,

I see where you’re coming from but

When I say it is impossible for an individual to assert that the system is not subverted, I mean that individual must be able to make such a determination alone, without help from Nick P or RobertT[1]. That individual must possess enough skills in HW, SW, FW and cryptography, access to schematics and the software of every component of the system and the ecosystem, and the ability to spot backdoors, weaknesses, security holes, etc. And that has to happen before he “locks down the system”. Then he has to assert the system has indeed been locked down. I think that is the bare minimum… If you know of such an individual, then I am in agreement with you 🙂

[1] The paranoid security person would have to take into account that there is a probability larger than zero that at least one of {Nick P, RobertT, Wael, Bruce Schneier, etc…} is disseminating disinformation to serve an agenda… :), and you cannot trust everything they say. If you do, then you have not asserted by yourself that your system is not subverted. And by subversion, I don’t mean the average Joe Shmo hacker. We are talking about major research organizations and nations.

Me? I am paranoid. How paranoid? A previous manager once told me I am his right-hand man. I watched him like a hawk for a long time just to make sure he’s not left-handed 🙂 — True story. (He was left-handed, unfortunately, but he really meant that he trusts me.)

Clive Robinson June 9, 2013 3:17 PM

@ Wael,

    When I say impossible for an individual to assert that the system is not subverted, I mean that individual is able to make such determination alone

If you stopped before the “alone” you’d be fairly close to the mark (it’s why the DoD got its panties in a wad over “supply chain issues” and started to throw large lumps of cash at the problem).

RobertT pointed out some time ago that even if you control the entire process from “wish list” to 250K SoC parts ready to drop into your design, you cannot 100% guarantee your SoC is not somehow subverted, because no single person knows enough to design the SoC from transistor up to the level of functionality we get these days. That means a team of designers, any one of whom might have been subverted. But even if not, it allows subversion down the line…

So I started thinking along the lines of “Assume it’s subverted; how do you mitigate the issue?”. And surprisingly, after a little thought you realise there are ways 🙂

It’s where I came up with the notion of “Probabilistic Security” that I’ve mentioned before. It offers surprising advantages and allows for “engineering sweet spots” that can actually reduce the cost of a system.

NASA developed one way to achieve this: multiple redundant subsystems, all produced by separate design teams working effectively in isolation. Any difference in outputs for the same given inputs is a “Red Flag” trigger that something is either broken or subverted.

As Nick P noted, if you use one supplier that has the US “seal of approval” and another with the Chinese “seal of approval”, then provided the two systems have no subvertable components in common you can compare the outputs. If they remain the same then you have a reasonable probability that neither system is subverted.

However if they differ then you have a problem because you don’t know which of the two is broken or subverted, thus you are looking to find three systems with no subvertable components in common.

However there is another way, in which you use identical systems separated in nonlinear time; it’s more complex but equally useful.

After further thought you realise there are other methods that can be used.

However one fundamental idea will pop up repeatedly in one form or another. If a covert channel is time-based, then the subsystem generating it needs an external time reference to work, or must internally generate a time reference that is visible externally. Entirely dissociate the timing tie-up between the subsystem and any external monitoring system, and the covert channel will not be viable. There are many ways to do this and each has its merits (and pitfalls).

For instance, if you have three subsystems and four packets of data, you can add two “check packets” and supply them out of sequence to the three subsystems. You then check the outputs and re-sequence them into the correct order for transmission. If you also “halt” each subsystem for one packet time in a random way, the reference to external time is further broken.

Whilst far from perfect as a solution, it makes the subversion of an overall system by an attacker much harder, and done the right way it will make detection considerably easier.
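The red-flag comparison above can be sketched in a few lines. The two "subsystems" here are hypothetical stand-ins for independently sourced hardware; the trigger input in the subverted one is invented for illustration.

```python
def check_outputs(implementations, inputs):
    """Feed identical inputs to independently built subsystems and
    red-flag any divergence. A flag means broken OR subverted; the
    comparison alone cannot tell you which."""
    flags = []
    for x in inputs:
        outs = [impl(x) for impl in implementations]
        if len(set(outs)) != 1:          # any disagreement is a Red Flag
            flags.append((x, outs))
    return flags

# Two honest subsystems agree everywhere; a subverted one that leaks
# by perturbing its output on a (hypothetical) trigger input is caught.
honest = lambda x: x * x
subverted = lambda x: x * x + (1 if x == 7 else 0)
```

As noted above, with only two systems a flag tells you something is wrong but not which one is at fault; a third independently built system lets you vote the bad one out.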

Anyway, I’m away for the weekend, so I’ll answer other points and any questions that arise tomorrow.

jeffD June 9, 2013 5:19 PM

“Excluding commercial needs, and assuming 1TB per citizen, that’s only +316e+6 TB of storage needed. Using 1TB drives with a 3.5in form factor we can also assume ~3.160e+9 watts of power, and ~1.1906041e+12 cubic meters of warehouse space.”

I think you dropped something there. I assume you could fit at least 200 TB per cubic meter. That makes around 1.6e6 cubic meters of drives, or a cube roughly 100 meters on a side. Still ridiculous, but not as ridiculous as 10 km on a side.
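The correction checks out; a quick back-of-the-envelope under the same assumptions (316e6 citizens at 1 TB each, roughly 200 one-terabyte 3.5-inch drives per cubic meter of warehouse):

```python
# Storage-volume sanity check (all figures are the thread's assumptions).
total_tb = 316e6                      # total storage needed, TB
tb_per_m3 = 200                       # assumed packing density, TB per m^3
volume_m3 = total_tb / tb_per_m3      # ~1.58e6 cubic meters
cube_side_m = volume_m3 ** (1 / 3)    # ~116 m: a cube roughly 100 m on a side

print(f"{volume_m3:.3g} m^3, cube side {cube_side_m:.0f} m")
```

The original ~1.19e12 m³ figure corresponds to a cube over 10 km on a side, so the packing-density correction shrinks the estimate by about six orders of magnitude.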

Jeff

RobertT June 10, 2013 1:43 AM

Been real busy for a while, so I hope I’m still welcome….

@Wael
“When I say impossible for an individual to assert that the system is not subverted, I mean that individual is able to make such determination alone, without help from Nick P or RobertT[1]…”

If I were the individual tasked with ensuring the device’s absolute security, I definitely wouldn’t want me, or anyone with similar skills, involved in any way with the specification, implementation or oversight of the system security.

I think I’ve alluded to this conundrum before. The problem is that a real expert in any area probably has a few ideas about ways to do something that others simply have not yet considered. Take for instance Nick P’s suggestions: he is basically looking for digital cycle-for-cycle equivalence, and even if he had his wish, I can still think of ways to leak all the information I need.
Clive R always says clock the inputs, clock the outputs. Good advice, but most SOCs have slew-rate control built into their I/O ports, so I can change this rate to leak information. You would only be able to tell I was intentionally changing the slew rate if you could somehow collect the output signal with a VERY high-speed scope AND then remove any potential ISI (inter-symbol interference) resulting from the PCB / termination.

If you can’t guarantee that sections of a design are not subverted, then your only choice is to somehow create complexity that you hope is too difficult to unravel.

RobertT June 10, 2013 2:38 AM

@Wael,
You raise an interesting problem of disinformation agents working within the security industry.

To be honest I’ve never really considered that aspect of the problem, so I’m not sure what that says about me.

I always assume that security is a little like chess, because you’ll get beaten every time if your opponent can think more than two moves ahead of you. I assume that individuals exist who are capable of playing two or more steps ahead of me, so I strictly play to my limits rather than modifying my game because of their imagined capabilities.

Nick P raises an equally interesting question about the prevalence of subverted hardware being very low because almost no attacks are attributed to these methods. BUT if you had such capabilities, would you really want everyone to know that’s how you got the information? Seems to me that you’d rather everyone think the information was sniffed from internet traffic / packets.

Clive Robinson June 10, 2013 7:17 AM

@ RobertT,

Nice to hear from you again; I hope the absence was for good/pleasant reasons 🙂

With regard to the “clock the…” rules, I noted early on that they were lacking in details, and deliberately so… such are engineers and the way they think. It also adds another layer of secrecy/mystique to the design of secure systems.

A couple of things. Firstly, something people often don’t pick up on is the idea of “relative to the point of observation”. Secondly, you need to think in terms of what happens with other rules, such as function segregation.

If you take a “pipelined” viewpoint, then the SoC performs a particular function which has clearly defined inputs and outputs, where the complexity of the function remains “bottled up” in the SoC. Thus the SoC would be isolated by non-transparent latches which are clocked independently of its internal clock “resync/skew” circuits.

That is, the clock used in a system is generated from a master circuit state machine; one branch from it divides down to drive the latches and another branch provides the SoC input clock.

Thus any skew the SoC adds should [1] be removed by the interstage pipeline latches.

Which brings me to @Wael’s question about the rule as to where the clock originates from. Clocks are a problem: as you probably appreciate, there are issues to do with special relativity and also Doppler in communications systems. Thus “drift” between clocks is an inevitability that only gets worse with time, and the higher the frequency involved, the faster you hit problems.

That is, if you are busy sending data from a fast jet, rocket or space-based platform, you have a level of uncertainty which could hide all manner of covert channels. The workaround is to work out which part of the comms system you have most “faith in” not being subvertable, and which is thus the “most secure source” to take timing from. This can result in some odd-looking system designs using high-depth buffer stages with independent clocks at the inputs and outputs. One (simplistic) solution is to fill from the local clock to build up buffer depth, and clock out with the remote clock [2]. The depth of the buffer is designed such that you don’t under- or over-flow and generate errors. However, this system eventually runs into problems, thus more complex solutions are needed; and where there is complexity, there is a place for covert channels to exist…

However there is a simple solution based on segregation of function and encapsulation of interdependence. Simplistically, you make the comms part fully independent of the secret parts and live with the results, on the assumption that no secret has leaked from the secret part.

The hard part of designing secure systems is knowing, before you put pen to paper, where the problem areas are going to be, and then having a sensible mitigation strategy for each; and that means “hard-won experience”.

[1] Actually it only takes care of skew/jitter that is higher in frequency than the interstage pipeline clock. Skew/jitter at less than the clock rate is an error on which the error-detection circuits should “red flag” and fail hard and long.

[2] The problem with Doppler and relativity is that you will at some point either run a buffer dry or flood it if it is of insufficient depth. However, make the depth large enough to cope with all reasonable eventualities and the latency will be way too large for most practical uses. The solution is to give a maximum length to a transmission, wait for the transmission to complete, and then start on a new transmission.
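The fill-local/drain-remote buffer of footnote [2] can be sketched as a bounded FIFO between two clock domains. This is a software toy, not a real dual-clock hardware design; per the "fail hard" rules earlier in the thread, under- and over-flow are treated as hard errors rather than silently absorbed.

```python
from collections import deque

class ElasticBuffer:
    """Bounded FIFO decoupling a local (write) clock from a remote
    (read) clock. Depth must be sized so normal clock drift over one
    maximum-length transmission never empties or floods it."""

    def __init__(self, depth):
        self.depth = depth
        self.fifo = deque()

    def local_clock_write(self, symbol):
        if len(self.fifo) >= self.depth:
            raise OverflowError("buffer overflow: clocks drifted too far")
        self.fifo.append(symbol)

    def remote_clock_read(self):
        if not self.fifo:
            raise BufferError("buffer underflow: clocks drifted too far")
        return self.fifo.popleft()
```

The design trade-off in the footnote is visible here: a deeper buffer tolerates more drift but adds latency, which is why transmissions are capped in length and the buffer re-primed between them.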

RobertT June 10, 2013 5:57 PM

@Clive Robinson

I like your solution of interstage reclocking, BUT today there are typically only one or two chips in the whole system. Open up a modern tablet and tell me how many chips you find. Typically high-end tablets have two big chips, because the applications processor and the comms chip are separate, but most are shifting to single-chip designs.

The only semiconductor parts that are typically external are the actual LNA/PA and maybe the up/down RF mixer. The LO is almost always generated on the comms chip from the main PLL. The PLL today is a typical M/N fractional PLL or sometimes a DLL. Typical PLL jitter for a 3G comms chip is in the single-picosecond range.

On most comms chips these days the clock skew, Doppler, multipath and similar effects are all addressed at the IFFT stage. By changing from time-domain to frequency-domain processing, Doppler just becomes a frequency-domain offset.

The FFT/IFFT hardware is needed for any 3G (WCDMA, HSDPA) or similar comms system, so it is widely used even when simpler traditional methods would work.
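The point that Doppler turns into a simple frequency-domain offset after the transform can be shown with a toy DFT. The transform size and bin numbers here are chosen purely for illustration; a real comms chip uses dedicated FFT hardware, not this naive loop:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, enough to show the frequency-domain view."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def peak_bin(X):
    """Index of the strongest frequency bin."""
    return max(range(len(X)), key=lambda k: abs(X[k]))

N = 64
# A clean subcarrier sitting on bin 5.
tone = [cmath.exp(2j * cmath.pi * 5 * n / N) for n in range(N)]
# "Doppler" in the time domain: multiply by a complex exponential,
# here exactly one bin spacing of shift.
doppler = [s * cmath.exp(2j * cmath.pi * 1 * n / N) for n, s in enumerate(tone)]

print(peak_bin(dft(tone)), peak_bin(dft(doppler)))  # 5 6
```

The time-domain multiplication moves the peak from bin 5 to bin 6: the receiver sees Doppler as nothing more than an offset it can estimate and subtract in the frequency domain.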

There are literally hundreds of points where the portable device’s Tx signal can be altered in a detectable way BUT still remain completely within the system’s spec-compliance envelope. Trying to understand the source of the non-ideal signal is nearly impossible, because in the real world the Tx signal must interface to a non-ideal antenna with real-world impedance loading effects. Also, most systems use a form of envelope modulation whereby half the intentional channel predistortion happens at the Tx end (google “root raised cosine filter”), typically implemented half at the Tx and half at the Rx.

My point is that nobody can say that there is no side channel on the comms signal. All anyone can say is that the distortion / non-ideal characteristics of the signal fall within the allowed compliance envelope.

BTW, these days with FEC (forward error correction) most comms channels operate at their throughput peak when the total channel error rate is about 1 in 10,000 or higher. So this is the expected rate of Rx errors, all of which will be corrected in the receiver’s digital demod stage. This creates lots of room for even non-spec-compliant Tx side channels that will receive as error-free.
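The way a covert channel can hide inside the FEC error budget can be illustrated with a toy repetition code, which is far simpler than any real 3G code but shows the mechanism: deliberate errors that the FEC corrects leave the overt data untouched, while the *position* of each injected error carries covert bits that only a colluding receiver looks for.

```python
# Toy covert channel hidden inside the error-correction budget.
# A 3x repetition code corrects any single flipped bit per triplet,
# so the overt data decodes error-free, while the position of the
# deliberate flip carries one covert bit per triplet.

def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def majority(triplet):
    return 1 if sum(triplet) >= 2 else 0

def decode(coded):
    return [majority(coded[i:i + 3]) for i in range(0, len(coded), 3)]

def inject_covert(coded, covert_bits):
    """Flip one bit per triplet: position 0 means covert 0, position 1 covert 1."""
    out = list(coded)
    for i, c in enumerate(covert_bits):
        out[3 * i + c] ^= 1
    return out

def extract_covert(coded, n_triplets):
    """Colluding receiver: the position disagreeing with the majority is the bit."""
    res = []
    for i in range(n_triplets):
        t = coded[3 * i:3 * i + 3]
        m = majority(t)
        res.append([j for j, b in enumerate(t) if b != m][0])
    return res

overt = [1, 0, 1, 1]
covert = [0, 1, 1, 0]
channel = inject_covert(encode(overt), covert)

print(decode(channel) == overt)     # True: FEC masks the tampering
print(extract_covert(channel, 4))   # [0, 1, 1, 0]
```

To an ordinary receiver the link just shows a plausible error rate, well inside the expected 1-in-10,000 budget; nothing distinguishes the injected errors from channel noise.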

The problem for the chip-security professional is that compartmentalization of the design means that the RF engineer is not looking for security side channels, nor does he even know what they are. So if you can extract some information from the CPU (such as the compute difficulty of a multiply), this information can be gotten off chip without anyone even suspecting that the system is compromised.

RobertT June 10, 2013 6:26 PM

@Clive Robinson
“Nice to hear from you again, I hope the absence was for good/pleasant reasons :-)”

No, been real busy, a big project with a tight deadline. Kinda gives one a taste of what it must be like to spend time “At Her Majesty’s pleasure”.

What I’ve learned is that all work and no play makes Bob a dull boy. My pockets are full again, so I’m off to find my pleasure in some den of iniquity, hopefully with a good poker game.

Figureitout June 10, 2013 8:47 PM

RobertT
–Some of us would like to keep the chips we have to play the game and false confidence/disbelief is a wonderful ally. So if you could tone it down a bit please Crane. Play this guy and don’t blow your money on a different game that won’t bring you real pleasure.

RobertT June 10, 2013 10:06 PM

@Figureitout
I think if you read through the comments you’ll see that I was accused of being a deliberate dis-information vector, on a website where I hadn’t even posted a comment for over 3 months.

I know I have my manic moments, but like most bipolar people I don’t want to constrain the manic me; rather, I’ll settle for restraining the depressive me. If I write like I can’t make up my mind, it’s probably because I can’t, sorry! Consider it the package that my information comes in.

Figureitout June 10, 2013 10:18 PM

RobertT
–No no no. No need to reveal that, besides there’s a razor thin line between genius & insanity; see it all the time. Let the “softies” talk, the “hard” people will always show them who’s wearing the pants. I have reason to believe my dad’s met you, wish I could say the same. Maybe make a stop by a certain area if you can find me (easy); I have questions.

Wael June 10, 2013 11:23 PM

@ RobertT

I was accused of being a deliberate dis-information vector

You were not being accused of anything, nor was Bruce, me, or Nick P, etc. I was talking about a hypothetical “security” person who’s supposed to have the ability to guarantee security.

Figureitout June 10, 2013 11:23 PM

RobertT
–I actually have a cousin who’s bipolar. So perhaps the genes are in me; I didn’t used to have random fits of rage…. Consider yourself very lucky b/c I used to love to play w/ him as a kid but as he became an adult…very very bad stories and I don’t really know how he’s doing now. Certainly not a high functioning engineer.

Figureitout June 10, 2013 11:41 PM

    The paranoid Security person would have to take into account that there is a probability larger than zero that at least one of {Nick P, RobertT, Wael, Bruce Schneier, etc…} are disseminating disinformation to serve an agenda… :), and you cannot trust everything they say.

Wael
–Then why would you include those names? I can personally vouch for at least Nick P; and there are others…

Wael June 10, 2013 11:49 PM

@ Figureitout,

Then why would you include those names

Perhaps because these were the characters participating or mentioned in the discussion? Bruce was included as an honorary guest of the set. Did you notice that I included my name as well?

Wael June 11, 2013 12:25 AM

@Figureitout,

Were you too scared to include Clive?

I actually was going to mention his name, but noticed that he did not directly participate in this particular thread of the discussion. I noticed that during the “Preview” before I pressed “Post”, so left it out.

I was in the middle of replying to @RobertT when he said:

so I’m not sure what that says about me.

because I had a feeling he thought I was accusing him of something, and wanted to clarify, but got interrupted. And now that I read back, I noticed I had missed another reply from him, in which my name had two transposed letters.

Figureitout June 11, 2013 12:44 AM

F*ckin spell my name right when you address me Whale.

Perhaps you need to see what a hardware bug is; b/c your response makes no sense.

Wael June 11, 2013 12:53 AM

@ Figureitout,

I will disengage from this meaningless thread now. I frankly feel guilty that I am consuming too much bandwidth of another subject being discussed (whistle blower)…

RobertT June 11, 2013 12:54 AM

@Wael,

No need to apologize, I wasn’t offended; kinda pleased to be included in such a short list with such esteemed company.

I actually like to keep my posts technical so that the “information” is at least verifiable. I wouldn’t expect most security people to understand half of what I said yesterday, but if they only understand that there are ways to covertly transfer information that are practically impossible to intercept, then that’s good. If they further understand that nobody can ever prove the existence of the covert channel, then they have the right starting point for assessing the technical risk of information leakage from portable devices.

Figureitout June 11, 2013 1:03 AM

Wael
–Psh, it’s a story, but won’t be for long. Don’t call yourself meaningless, I demand respect, and I will get it. Listen to Mr. T (sorry lol, it’s funny); there are other “wannabe” engineers who luckily stumbled across a source of power, and they will of course exploit it.

Figureitout June 11, 2013 1:32 AM

Seriously…it must’ve been an act of…nature. Plus, when someone wins, they want to take a long victory lap. I want my secret to remain a secret lol, kind of goes against what I want to happen to the gov’t. All humans are hypocrites.

Clive Robinson June 11, 2013 4:01 AM

@ Guys,

Cool it a bit, otherwise people will think we are all sitting in prams/buggies chucking the toys out, rather than being the “anonymous dogs” of Internet legend 🙂

@ RobertT,

    I like your solution of interstage reclocking BUT today there is typically only one or two chips in the whole system.

And in that “BUT” hangs the entirety of the current security problem we face today: “complexity” beyond our ability to manage effectively, let alone securely.

As complexity rises as some positive power (greater than 2) of the number of sub-entities involved, we have a geometric problem to deal with.

The only way we know to manage complexity, now as in the past, is by “Divide and Conquer” in various ways.
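The payoff from divide and conquer is easy to quantify if you take pairwise interactions as a crude proxy for complexity (a deliberate simplification; the component counts below are invented for illustration): n fully interacting sub-entities give n(n-1)/2 interactions, and partitioning them into isolated compartments collapses that count.

```python
def pairwise(n):
    """Pairwise interactions among n fully connected sub-entities."""
    return n * (n - 1) // 2

n = 64
monolithic = pairwise(n)  # everything can touch everything: 2016

# Divide and conquer: 8 isolated compartments of 8 components each,
# with the compartments themselves interacting only at choke points.
compartmented = 8 * pairwise(8) + pairwise(8)

print(monolithic, compartmented)  # 2016 252
```

An order-of-magnitude reduction, which is exactly what a single-chip SoC throws away: with everything on one die there are no compartment boundaries left to enforce.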

The downside for modern systems is that SoCs make D&C impractical or very, very expensive, and also comparatively really slow.

One aspect of this is that many of my more secure designs used technology that is, to be blunt, quite dated and packaged in DIL form, but which due to major industry needs was until fairly recently still either in production or readily available.

But whilst the PALs and 74-series look like they will be with us for a while, not so the analog chips. Some of my newer designs have had to use “sound card” components and low-end DSP chips, which is a pain.

Thus I’m having to look at doing D&C in different ways, and it’s a pain. The main route of attack being to minimise the secure functions as much as possible and to have easily monitored “choke points”.

But even this old dinosaur is having to look at multiple-sourced multi-chip FPGA solutions, and the price is, to put it mildly, eye-watering; and I find VHDL a pain, though nowhere near as bad as the problems with Verilog.

And this is a problem I keep coming up against: limited production runs of at best a few hundred to a thousand units are just not economical to make these days, even if you can get the parts. Then there are “customer expectations”: what they want is the bells and whistles they have on their latest smartphones, in a similar form factor and at a similar price. When they find out that what they’ll get is a large brick with switches and a seven-segment LCD, powered by a couple of motorbike batteries, at ten times the price, they tend to be less than impressed.

I actually know of one specialised manufacturer of low-volume high-spec kit buying in 500 Nintendo DS2s, stripping them down and remounting them in custom fascias, as the cost even at retail was way less than they could build an equivalent user interface for…

It’s one of the reasons I’m looking at the likes of Gumstix and Raspberry Pi SBCs, using USB as the choke point with a custom go-between device to do secure segregation.
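The go-between idea is essentially a filter that only ever forwards traffic matching a rigid, easily audited format, and drops everything else. A minimal sketch, with an entirely invented frame format (start byte, length, payload, checksum); a real go-between would sit on the USB link itself:

```python
# Minimal choke-point filter: forward only frames that match a
# fixed, auditable format; silently drop everything else. The
# framing (0x7E start byte, length, payload, mod-256 checksum)
# is an illustrative assumption, not any real protocol.

MAX_PAYLOAD = 32

def filter_frame(frame: bytes):
    """Return the frame if well-formed, else None (dropped)."""
    if len(frame) < 3:
        return None
    if frame[0] != 0x7E:                       # fixed start-of-frame byte
        return None
    length = frame[1]
    if length > MAX_PAYLOAD or len(frame) != 2 + length + 1:
        return None
    if frame[-1] != (sum(frame[2:-1]) & 0xFF):  # checksum over payload
        return None
    return frame

good = bytes([0x7E, 3, 65, 66, 67, (65 + 66 + 67) & 0xFF])
bad = bytes([0x7E, 3, 65, 66, 67, 0])           # wrong checksum

print(filter_frame(good) is not None, filter_frame(bad) is None)  # True True
```

The security argument is that the filter is small enough to inspect exhaustively: anything that doesn’t fit the whitelisted format, including most of the covert-channel tricks discussed above, simply never crosses the boundary. (Timing channels, of course, survive even this.)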

I suspect Nick P may have similar tales of woe about hardware builds.

As for your idea of “relaxation”, you sound like Lemmy Kilmister of Motörhead; if you don’t know it, have a listen to “Ace of Spades” for his philosophy of playing cards 😉

Jack June 11, 2013 9:15 AM

Lindsey Graham (R-S.C.)

“I’m a Verizon customer. I don’t mind Verizon turning over records to the government if the government is going to make sure that they try to match up a known terrorist phone with somebody in the United States. I don’t think you’re talking to the terrorists. I know you’re not. I know I’m not. So we don’t have anything to worry about.”

And Graham assured the hosts that the surveillance was limited to terrorism.

( http://www.politico.com/story/2013/06/lindsey-graham-nsa-tracking-phones-92330.html#ixzz2VuotRHsh )

Lindsey Graham (R-S.C.)

“I looked at JoeBob’s car. It had knocking sounds and black smoke trailing from the tail pipe. I can assure you that the car is completely sound, and does not have any problems with it. There is no way it could be a problem.”

Lindsey Graham (R-S.C.)

“Jennifer came to me telling me she felt she may have cancer because she had a large growth on her breast. I can assure everyone, she does not have cancer. I looked at her and confirmed that growth is not cancerous.”

Lindsey Graham (R-S.C.)

“I met a man in a bar who was an airplane mechanic. He told me to look at a plane he was working on. They had moved it off flight for inspection because of some unusual shaking and discrepancies in the pilot’s controls. I looked it over. The plane is fine to fly. No reason to put mechanics on it.”

Lindsey Graham (R-S.C.)

“I trust everyone with access to the data, and their degree of access to it. I met Snowden personally, and found him completely reliable with that data.”

Lindsey Graham (R-S.C.)

“Maybe I have more skeletons in my closet than most other senators.”
