More Crypto Wars II

FBI Director James Comey again called for an end to secure encryption by putting in a backdoor. Here's his speech:

There is a misconception that building a lawful intercept solution into a system requires a so-called "back door," one that foreign adversaries and hackers may try to exploit.

But that isn't true. We aren't seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process -- front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.

Cyber adversaries will exploit any vulnerability they find. But it makes more sense to address any security risks by developing intercept solutions during the design phase, rather than resorting to a patchwork solution when law enforcement comes knocking after the fact. And with sophisticated encryption, there might be no solution, leaving the government at a dead end -- all in the name of privacy and network security.

I'm not sure why he believes he can have a technological means of access that somehow only works for people of the correct morality with the proper legal documents, but he seems to believe that's possible. As Jeffrey Vagle and Matt Blaze point out, there's no technical difference between Comey's "front door" and a "back door."

As in all of these sorts of speeches, Comey gave examples of crimes that could have been solved had only the police been able to decrypt the defendant's phone. Unfortunately, none of the three stories is true. The Intercept tracked down each story, and none of them is actually a case where encryption foiled an investigation, arrest, or conviction:

In the most dramatic case that Comey invoked -- the death of a 2-year-old Los Angeles girl -- not only was cellphone data a non-issue, but records show the girl's death could actually have been avoided had government agencies involved in overseeing her and her parents acted on the extensive record they already had before them.

In another case, of a Louisiana sex offender who enticed and then killed a 12-year-old boy, the big break had nothing to do with a phone: The murderer left behind his keys and a trail of muddy footprints, and was stopped nearby after his car ran out of gas.

And in the case of a Sacramento hit-and-run that killed a man and his girlfriend's four dogs, the driver was arrested in a traffic stop because his car was smashed up, and immediately confessed to involvement in the incident.

[...]

His poor examples, however, were reminiscent of one cited by Ronald T. Hosko, a former assistant director of the FBI's Criminal Investigative Division, in a widely cited -- and thoroughly debunked -- Washington Post opinion piece last month.

In that case, the Post was eventually forced to have Hosko rewrite the piece, with the following caveat appended:

Editors note: This story incorrectly stated that Apple and Google's new encryption rules would have hindered law enforcement's ability to rescue the kidnap victim in Wake Forest, N.C. This is not the case. The piece has been corrected.

Hadn't Comey found anything better since then? In a question-and-answer session after his speech, Comey both denied trying to use scare stories to make his point -- and admitted that he had launched a nationwide search for better ones, to no avail.

This is important. All the FBI talk about "going dark" and losing the ability to solve crimes is absolute bullshit. There is absolutely no evidence, either statistically or even anecdotally, that criminals are going free because of encryption.

So why are we even discussing the possibility of forcing companies to provide insecure encryption to their users and customers?

The EFF points out that companies are protected by law from being forced to weaken the security they provide to their users just to make the FBI happy.

Sadly, I don't think this is going to go away anytime soon.

My first post on these new Crypto Wars is here.

Posted on October 21, 2014 at 6:17 AM • 117 Comments

Comments

Remember the Clipper chip! • October 21, 2014 9:35 AM

This is something that is never going to go away. I still have a t-shirt from the old "Sink Clipper!" campaign. As long as computer nerds have any sense of privacy, there will always be encryption. As long as "law enforcement" has any reason to hop on a soap box, they'll always be crying out that everything must be accessible, for the sake of the children.

vas pup • October 21, 2014 9:58 AM

@Bruce:"As in all of these sorts of speeches, Comey gave examples of crimes that could have been solved had only the police been able to decrypt the defendant's phone. Unfortunately, none of the three stories is true".
Small lie generates big mistrust and does not good for image of any gov instituion the cited included. Did you recall: "I did not have sex with that woman..."? You know her name and case involved. That is why for the long term prospective I was always against one-way street right to lie: LEO/feds level could lie to you (that is their tool for doing job), but you can't lie back (as if you under imeddiate oath) - that is federal crime(as SCOTUS decided - yeah those nine Law gurus are always right -kidding!). As 'Laws of Imitation'by G. Tarde suggested : "Subordinates imitate superiors". Not vice versa. Make you own conclusion.

paul • October 21, 2014 10:03 AM

There's a sort of sad irony here that in other matters Greece is the poster child for how not to run a country, but when it comes to communications security Comey et al want to turn the US into Greece, only worse. (Short version: About 2005, subversion of "lawful intercept" facilities let evildoers tap phones of government officials and others for an indefinite period.)

allan • October 21, 2014 10:40 AM

The irony is that there may have been cases that did hinge on cryptography, but the police falsified the evidence to hide the involvement of the NSA. (See yesterday's article.)

The question I have is what has happened to the cypherpunk movement. I haven't heard of it in years. I also haven't seen a pgp signed email, or any form of encrypted email in years. No encrypted voip or secure cell phones. Rolling code remotes were driven by the cypherpunks but I think I have only seen one in the last five years and given the initialization procedure I doubt that it was secure. Certainly there was no documentation available on how the remote was secured.

Clive Robinson • October 21, 2014 10:56 AM

Front doors, back doors: they are both the same, a weakness into your private domain. It matters not if it's a crook kicking the door in or an LEA with a piece of paper they almost invariably lied to a judge to get; you've been invaded, and people will talk irrespective of whether you have committed a crime. Mud will be slung, and some will stick and take considerable time and effort to clean up.

As for the FBI director, he is apparently claiming a lack of technical experience to deflect questions on technical aspects of his moronic idea. Perhaps journos should do their job and ask questions such as "How is your idea different from an unlawful backdoor?", "How will you stop it being used illegally by crackers?" and "How will you stop it being used unlawfully by TLAs?". Or "Will this front door be made available to other countries for law enforcement?" and "Does that include countries with human rights issues, or other repressive or non-democratic regimes?".

Until such questions are answered satisfactorily and reviewed by appropriate domain experts, this idea should be rejected out of hand at all political levels.

Peter Galbavy • October 21, 2014 11:00 AM

What genuinely confuses me is the motivation. Are the authorities in all our various countries simply made up of megalomaniacs or those in some paranoid delusion of their own stability in government?

Why, when in most countries the legal framework is clear that (in theory) oversight is required to collect intelligence on citizens and even foreign residents is there such an ongoing and concerted push to avoid that oversight?

Skeptical • October 21, 2014 11:08 AM


The Intercept argues that a conviction would have been obtained in each of the cases regardless of whether evidence was obtained from cellphone stored data or communications. Perhaps. But in the case of the murder of the two year old, they underestimate the difficulty of obtaining a conviction of both parents, and establishing the degree of complicity by each, based on the other evidence they describe.

They write: The evidence — a felony child abuse charge, an active arrest warrant, a neglect charge, the abuse hotline tip, the social worker report that Abigail’s mother hated her toddler, the weekend trips to jail, Abigail’s bruised and beaten body — all amounted to a compelling case without any text messages.

It amounts to a compelling case that one of the parents inflicted the fatal beating. But which one? Were both present? Did both participate? Were both fully aware of the child's precarious condition before finally calling 911?

All of that may have been difficult to establish without cell communications. If one of them had relatives or associates willing to testify that he was elsewhere at the time, for example, and lacked knowledge, it's quite plausible that he might have obtained a plea agreement with far lesser charges.

In any event, this is quibbling. Cellphone data and communications - from both victims (a murder victim's cellphone may contain very important clues) and perpetrators - clearly play a significant role in criminal investigations and can be extremely important in obtaining a conviction commensurate with the crime committed. The case of Abigail Morales supports that fact.

As to front doors vs back doors, Vagle and Blaze write: if we design systems with lawful intercept mechanisms built in, we have introduced complexity to the system, and have therefore made the system inherently less secure in the process. This is true even for systems with designs that are open for all to see and inspect. Thus, the difference between a “front door” vs. a “back door” approach to law enforcement intercept of encrypted communications is purely semantic.

This seems false. Greater complexity does not necessarily imply less security - and I'm frankly puzzled as to how the authors could write that it does as though this were a law of physics. Two-factor authentication adds complexity to a system. Does it make the system inherently less secure?

Moreover an access capability deliberately designed into the system, and subject to greater testing and transparency, is certainly going to be more secure than a backdoor based either on an accidental flaw or a less tested and less transparent access mechanism. So the difference between back doors and front doors isn't merely semantic.

Thoth • October 21, 2014 11:18 AM

The fact is ....

He/They CANNOT FORCE our hands to write insecure code. We make the decisions ourselves.

As we all know, the FBI director is simply soliciting more attention and is being a total troll (doing it so obviously) in the hope that Obama and Congress will pass a law that will BREAK the Constitution of the USA in half.

In essence, my advice is to not pay heed to the trolls (do not feed the trolls) and just continue to ramp up the good work on high assurance security solutions put out freely into the open.

The fact that US citizens shun the US LEA/TLA/GOVT is a sign of the weakened morale of the people and of corruption and sucky governance. In simple terms, ordinary people are living an unhappy life in general.

The Government of the USA had better fix itself before meddling in external affairs, because it reeks of a "pot calling the kettle black" scent when pointing and waving fingers madly at China, Russia, Iran, Syria (Middle East stuff), Ukraine, Georgia and so forth in regard to government surveillance and human rights issues, while it stinks too .... What a shame ....

Dragonlord • October 21, 2014 11:22 AM

@skeptical - More complexity does mean less security, unless you can prove both that the complex protocol has been implemented 100% correctly and that the protocol itself has been proven.

The reason is that the weaknesses aren't necessarily due to the protocol rather they're due to the mistakes introduced while programming said protocol. The ability to prove a system is correct reduces as the complexity goes up, with the provability reaching close enough to zero to be effectively zero long before the maximum complexity has been reached.

Also, any side-band access to a system would rely on secrecy for its security, and one of the big mantras I've seen around the security blog world is that security by obscurity is no security at all.

paul • October 21, 2014 11:25 AM

The other thing I was thinking is that in so many cases nowadays it's the metadata rather than the data that ends up convicting someone. A texts B, B travels to A's location, stays there for a short time, then travels to location X. Shortly after that B texts A. A body is found at location X. Is the (encrypted) content of those texts going to be particularly useful to a prosecution? Doubt it.

G • October 21, 2014 11:28 AM

So when the FBI comes up with a few good examples of encryption stopping their law enforcement, should everyone who's against the backdoors change their mind?

Jacob • October 21, 2014 11:43 AM

I'm inclined to speculate that Comey is a proxy for the NSA in this. He has brought up some invalid and stupid examples to support his view, but he is no dummy. Therefore, it appears that he dances to the tune of someone else, and that "someone" may have asked him to put up a good campaign for poor encryption - someone who still prefers to live in the shadows and can't show his hand to support the request.

Nicholas Weaver • October 21, 2014 11:45 AM

And let's set aside security for the rest of us and focus on Director Comey's other job: counterintelligence.

Does he really want a world where a foreign intelligence agency can "conveniently" mug a US diplomat or employee [1], take their personal cellphone, and extract the contacts, message history, calendar, photographs, and all the other information lying in the smartphone? Does he really think such "coincidental" attacks will not provide a treasure trove of information to foreign intelligence operations?

Because that's the world he's advocating for: if the FBI can decrypt phones, this means that the opposition can, too...


[1] It's nice and deniable; phone muggings, after all, happen all the time...

Coyne Tibbets • October 21, 2014 11:52 AM

You can almost hear them saying: "It doesn't matter if we've used it before. It doesn't matter if we will ever use it again. If it exists, we must be able to access it. Otherwise, your world will end."

Every secret, every shadow, no matter how tiny or trifling, must be within their grasp.

More Skeptical • October 21, 2014 11:56 AM

@Skeptical

Your response strikes me as quite odd. "This seems false" is not a rational argument, nor is spewing wishful fantasies as facts. In essence you're saying that if we build an encryption scheme with a secure front door, then we will have an encryption scheme with a secure front door. Well duh, but you added absolutely nothing intelligent to combat the actual challenges. You would make a good politician pushing socialized medicine.

Russtopia • October 21, 2014 12:41 PM

@Skeptical, @G:

Even if occasional cases of law enforcement being truly thwarted by good encryption can be paraded out, I for one would rather a few guilty people go free than see us lose significant freedoms forever, such as the right to privacy and secure comms.

Daniel • October 21, 2014 1:21 PM

"Sadly, I don't think this is going to go away anytime soon."

Correct, because the illusion of control is itself a form of control. Admit that they don't have control in this area, and they lose a vector by which to propagate fear.

Coyne Tibbets • October 21, 2014 1:31 PM

@Skeptical: In any event, this is quibbling. Cellphone data and communications - from both victims (a murder victim's cellphone may contain very important clues) and perpetrators - clearly play a significant role in criminal investigations and can be extremely important in obtaining a conviction commensurate with the crime committed. The case of Abigail Morales supports that fact.

This is all very much a red herring.

Okay, they needed the communications. But they had the communications and will have the communications... whether the phone is encrypted or not. The communications pass through the providers, where I'm pretty sure they will be handed over without hesitation at the drop of a well-justified subpoena.

The arguments that are being made are loudly shouted: "We won't be able to save the kids!!!!!!!!!!! We won't be able to stop terrorists!!!!!!!!! Exigency!!!!!!!! Exigency!!!!!!!! Exigency!!!!!!!!"

...and then the examples presented involve months-long investigations allowing subpoenas and warrants... that would get the information whether the phone is encrypted or not. That would obtain the communications and connections that would have allowed the conviction, despite any encryption. Cases that would allow the full respect of the Constitution owed to the citizens by the government and all its officers.

The arguments would be much better supported if a nation-wide case search by thousands of LEO's could have found even one example where the data on a cellphone, physically confiscated from a suspect, was truly used in an exigent circumstance.

The case of Abigail Morales is positively not that case, no matter what flowery phrases you use to try to pump it up.

Wayne • October 21, 2014 1:44 PM

Please forgive my ignorance. I would like to ask for some clarity on the issue of "backdoors" and encrypted phones (perhaps specifically the android 4.4.4 on my current device). If a phone was fully encrypted and "locked" behind a user's password, is it possible for the manufacturer, telecom or operating system developer to provide a method, i.e. a "backdoor" that can be used to decrypt the contents on the phone (let's forget for now any brute force password attack)?
FBI Director Comey's recent lament of default encryption as a standard brings me to these questions. Is the problem Comey actually has dealing with encryption overall, encryption by default, or a lack of a "backdoor?" His statements, as reported by the press, suggest that "backdoors" already exist (to the exclusion of the most recent and future mobile devices from Apple and Google).
I've asked EFF for clarity on this and they have generously replied "We have not yet heard reports of there being a backdoor in the default android encryption software [based on the version referenced above]." But after some follow up questions I received the reply that "law enforcement has been able to get access to encrypted Android devices is by getting Google to reset the Google Account password and then using that to get into the phone." (Presumably under warrant.)
My questions then become: Doesn't the act of Google resetting the account password constitute a "backdoor" that renders encryption of my mobile device meaningless?
Or, perhaps I am mistaking the various roles between Google (the OS developer) and the more traditional telecom service provider (who is archiving communications data, encrypted or otherwise in the clear)?
So, can the android default encryption software actually protect my data from everyone, or just the guy who steals my phone ?

Gerard van Vooren • October 21, 2014 2:01 PM

My little bikeshedding...

For once, I think it is good that the USG, in the form of the FBI director, is making a case for a front door, even though the arguments are rather weak. In my opinion a discussion with people who are knowledgeable about this topic is good. It is a lot better than just doing it in secret, like they always have been doing.

Still, I think that when they want info from any phone they can get it, with or without any door, simply because they have weakened the foundations for decades.

FBI CP archive Division • October 21, 2014 2:10 PM

Actually the best case showing why FBI needs back doors is Donald Sachtleben. If you're an FBI Sgarrista and you have a backdoor, and you need to destroy a whistleblower, you can plant kiddy porn from The Director's private stock on the guy's computer and blackmail him into confessing to the leak. You can even blackmail him into disclaiming any whistleblowing motive - even though the indiscreet truth teller dropped the fraud-and-abuse bombshell that the FBI's scary Al Qaeda terror boogyman was actually yet another US agent being set up to scare the rubes at home.

hang your hat on ethics not effects • October 21, 2014 2:55 PM

Don't get caught up in the government's chosen dialectic:

Them: "We need back-doors because we can't bring adversaries to justice when privacy and network security are inviolable! Just look at all these cases where we were helpless to stop crimes because we couldn't break into a criminals' encrypted things!"

You: "Your examples are bullshit. Therefore you don't need back-doors."

Them: "OK, you're right. But thanks for agreeing to the principle. Now all we need are non-bullshit examples, and then you can no longer argue from a position of strength."

We don't want communications systems, privacy, and network security to be undermined because fraud is bad, because the state is too dangerous to tolerate, because these things are inviolable, because human rights, because insert-moral-argument-here, NOT because the empirical evidence supporting their excuses has so far been lacking.

vas pup • October 21, 2014 3:18 PM

@FBI CP archive Division • October 21, 2014 2:10 PM.
Dear Blogger, that was all done by the Stasi, KGB, etc. in the past, as was entrapment, now so popular with the passive 'blessing' of courts that accept entrapment-based actions as good police work. It is not good police work. Period. All such actions are a direct path to disqualification of LEA personnel, because it is easy to take such shortcuts rather than do the hard criminal intel and analysis work on subject(s) suspected of committing or preparing real serious crime. Of course, I agree with you in the case you are referring to, when the person comes first and then a crime is attached to the person for some disliked actions. But even in those cases, none of us are angels, and professional police work (even absolutely immoral in such a case) is to find the real 'skeleton in the closet', not to plant one. And the goal is not always jail time. Stasi-like tactics could destroy your career, family/personal life, and emotional stability, BUT if you did create a skeleton in your closet, then be ready: they could find it and use it against you. Please read "All the King's Men". Very informative on the subject.

HomerJ • October 21, 2014 3:46 PM

Oh please, Shut The Front Door! See what happens when we stop trusting the government to self impose reasonable limits? Now even if they make a good argument it will fall on deaf ears because the basic trust is broken. The front door shall remain locked until you stop peering in our mail slots.

Anura • October 21, 2014 4:18 PM

I think we need to stop putting so much focus on law enforcement and being "tough on crime" and start spending a lot more effort on reducing crime. Crime dropped for a while, but that drop has been slowing in recent years. Whatever the causes (lead, internet, social changes, whatever else you prefer), they can't get us much further. We want to run these multi-billion dollar surveillance programs, install backdoors in phones and computers, and do other things that don't prevent crime and have dubious benefit in reducing crime in the first place?

Okay, doing that probably requires a lot more work, especially investing in impoverished neighborhoods. My suggestion is to invest $60 billion per year in the most impoverished areas over 10 years - about $25,000 per household if you focus on the bottom 20% - fix up homes, buildings, and streets, plant trees, and paint over gang-related graffiti. Hire people locally to do the work, providing jobs while also raising the minimum wage to boost income.

More jobs and more income in impoverished areas will directly reduce crime, and cleaning up the areas will attract more people and businesses, boosting the local economies. On top of that, the psychological impact of living in a nicer-looking area will itself result in reduced crime (see the Broken Windows Theory). Keep raising the minimum wage, and keep investing in other infrastructure projects to keep demand for labor high (which raises salaries in fields that pay moderately well), and you will significantly reduce inequality across the poor and middle class, and with it crime. Production per household has gone up something like 70% since the 1970s, yet for the bottom 80% of households, income has gone up less than 20% - there is plenty of room for growth in income, which will make people a lot less likely to turn to crime.

So it will be much more expensive, but the result is that millions of lives are greatly improved, rather than what we are doing now which, arguably, reduces our quality of life.

QnJ1Y2U • October 21, 2014 4:24 PM

@Skeptical
You've been posting here for at least a year now (two? three?), and the best argument you can come up with is "That seems false"?

Several folks have tried to explain to you why there are issues with complexity and key-escrow systems (my efforts are in these articles). You'd think you'd have a bit more by now.

Maybe it will work to use a simpler formulation. Let's try this:
1) Complexity adds people.
2) People are vulnerabilities.
3) More people means more vulnerabilities.

I'm sure your next response will be something along the lines of 'it depends on the specific system', but that's mostly a deflection - key-escrow systems, by definition, add people.

And people are vulnerabilities.

Actually Skeptical • October 21, 2014 5:19 PM

Once again Skeptical, your lack of skepticism shines through. I can't imagine it isn't generally accepted that the 4th amendment hinders law enforcement, and deprives them of a certain amount of just the kind of evidence and 'better justice' (*for some subset*) that you describe. The reason that can be generally accepted, and yet we still have the 4th amendment, is because people see *worse harms* in 'unshackling the police' in that way. Every time you try to defend increased police power, you never even pay lip service to the traditional 4th amendment justification. That is what makes you so transparently non-skeptical.

The reason people don't want to give up the 4th, and submit to random residential inspections is because of the 'scorched earth' nature of the privacy invasion. Nobody objects too strongly to losing their privacy while they are in front of a teller at the bank. But bring up the subject of letting the police politely and non-destructively wait for them to leave home, pick their locks, and photo/datadup everything, and people start to remember that traditional justification for the 4th amendment. Again, *despite* it bringing lesser justice to some subset of the population as you so cherry-pickingly trotted out.

Wesley Parish • October 21, 2014 5:57 PM

@Skeptical

I really had no intention of commenting on this blog entry, until I read your comment:

Greater complexity does not necessarily imply less security - and I'm frankly puzzled as to how the authors could write that it does as though this were a law of physics.

I fear your ignorance is showing. Complexity is a fascinating subject in itself: one of the few meta-topics of science that are so. It's basic to physics itself (read up on quantum theory: there are some good back-to-basics-for-beginners books around that will help), to chemistry, all the way up the chain to biology and the social structures built on biological, i.e. neurological, structures, and the economic structures built on those and the managerial structures built on those.

The Mythical Man-Month by Frederick P. Brooks, Jr., is the obligatory reference to complexity in managerial structures.

It all boils down to a simple case of exponential growth: if two is multiplied by itself, you wind up with four. If four is multiplied by itself, you wind up with sixteen. If sixteen is multiplied by itself, you wind up with 256. If 256 is multiplied by itself, you wind up with 65536. Etc.

If you need to check for weak links on all of 65536 connections it's going to take a lot more time and effort than 256 connections. Etc. And the chance of your succeeding goes down with each step into greater complexity. It's why security professionals are so insistent that security should be built in from the ground up, not bolted on as an afterthought: because it's easier to secure four connections than 65536 connections, and if those four connections are secure, the chance that the resulting 65536 connections will be secure goes up exponentially as well.

So if back-doors aka weak links are deliberately built in to the base, the chance that those weak links will be used by all and sundry who can, also goes up exponentially.
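As a toy calculation (purely illustrative), the repeated-squaring growth described above can be written out:

```python
# Repeated squaring: each step squares the number of connections
# that must be checked for weak links.
def squarings(start: int, steps: int) -> list:
    values = [start]
    for _ in range(steps):
        values.append(values[-1] ** 2)
    return values

print(squarings(2, 4))  # [2, 4, 16, 256, 65536]
```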

Elementary, my dear Skeptical.

thevoid • October 22, 2014 12:17 AM

@moderator

perhaps this is something that should be reported to webmaster instead, but
all archives are gone, with the first available post being 'how james
bamford...'.

Tomato • October 22, 2014 12:37 AM

@Skeptical:

I'd like to hear whether you're categorically against the Snowden revelations, etc. It seems on every topic you're conservative on all aspects. Is this a game to you? Do you come here to challenge your debating skills? What aspects of the surveillance state (if any) do you think need reform? What do you think we should be doing to balance privacy and security? Also, not assuming this is the case, and with all respect to your privacy, is there any way to assure the community your comments aren't shilling on behalf of the intelligence community?

nonamus • October 22, 2014 2:04 AM

@Wayne
Data on Android phones is encrypted until the authorized user is authenticated. For example, if you want to call someone in your contacts list, you pick up your phone and: scan your thumb, or enter your passphrase, or draw some shapes. Whatever the primary authentication mechanism is, once you are authenticated the phone decrypts all your data (except private keys stored in the keychain). And it remains decrypted until you reboot or power off the phone.
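A rough sketch of the general shape of such schemes (illustrative only; real Android uses scrypt with hardware-backed keys, not this exact code): the passphrase is run through a key-derivation function, and only the resulting key can unwrap the stored data.

```python
import hashlib

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Stretch the passphrase into a 256-bit key-encryption key.
    # PBKDF2 stands in here for whatever KDF the platform really uses.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

salt = b"\x00" * 16  # fixed salt for a reproducible demo only

right = derive_key(b"correct passphrase", salt)
wrong = derive_key(b"correct passphrasf", salt)

# Even a one-character difference yields an unrelated key, so the
# encrypted data stays opaque until the right passphrase is entered.
assert right != wrong
assert len(right) == 32
```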

Andrew_K • October 22, 2014 2:16 AM

@ FBI CP archive Division

If you're an FBI Sgarrista and you have a backdoor, and you need to destroy a whistleblower, you can plant kiddy porn from The Director's private stock on the guy's computer and blackmail him into confessing to the leak.

Yeah, but that works offline too. Just plant a kiddie porn mag in a hidden place in the whistleblower's home, then have the police raid it. Ta-daa.

Aside: Being sent to jail is just one way. Probably worse, since it amounts to tampering with your brain, is being sent to the mental home. All that takes is a doctor's expert statement that your insisting you're normal is part of you being nuts. Ta-daa again.

Long story short: If "they" want to get you, they will get you.

Why should they still want to access your smartphone? Because it could prove your innocence, or even prove that they arranged the crime. For parallel construction, you need to know what to parallelize with...

All of this would make a great novel. But how did it escape to reality?

thevoid • October 22, 2014 3:20 AM

@moderator

to refine my previous statement: the archives are missing posts between
21 Sep and 13 Oct. earlier ones appear to be intact.

Skeptical • October 22, 2014 4:44 AM


@More Skeptical, et al: "This seems false" is not a rational argument, nor is spewing wishful fantasies as facts.

Yes, my first sentence was an assertion ("this seems false") followed by my argument and reasoning. So you have to read beyond the first sentence to get to the argument. :)

I thought it was fairly clear, but I'll make a good faith effort to clarify further.

Vagle and Blaze made a simple case consisting of one valid argument and, noted separately below, a non sequitur:

Their valid (but unsound) argument is:

(1) Adding a means of lawful access adds complexity;
(2) Adding complexity reduces security;
(3) Adding a means of lawful access reduces security.

Their (4) non sequitur is: based on the above reasoning, the difference between a front door and a back door is mere semantics.

My response is a very modest, negative argument. First, the mere fact of an increase in complexity does not necessarily mean that security has decreased. As a counterexample, I pointed out that two-factor authentication increases complexity in a system, yet in at least one case -- and one case is enough for a counterexample -- increases security.

So we cannot dismiss discussion about a lawful means of access by claiming, as if an immutable natural law, that greater complexity must always bring lesser security. And, again, a single counterexample breaks that attempt.

As to their non sequitur, the problem is that even if one were to grant all their premises, the difference between a lawful access mechanism designed purposefully into the system, subject to transparency and ongoing testing, and that of a backdoor, either in the form of an unintended vulnerability or a deliberate mechanism subject to less transparency and testing, is much more than semantic. We can expect a material difference in how secure each is.

@Qn: Maybe it will work to use a simpler formulation. Let's try this:
1) Complexity adds people.
2) People are vulnerabilities.
3) More people means more vulnerabilities.

Thus establishing that the most secure systems are programmed and maintained by one person, who also guards the physical facilities - since adding more people would only add vulnerabilities.

The problem, of course, is in your assumption that people only add vulnerabilities. That's not all they add. Depending on the people, the system, and factors outside the system, the addition of personnel can increase security in some cases, decrease it in others, and have either no effect or a merely random effect in still others.

@Wesley: The Mythical Man-Month by Frederick P. Brooks, Jr., is the obligatory reference to complexity in managerial structures.

It all boils down to a simple case of exponential growth: if two is multiplied by itself, you wind up with four. If four is multiplied by itself, you wind up with sixteen. If sixteen is multiplied by itself, you wind up with 256. If 256 is multiplied by itself, you wind up with 65536. Etc.

If you need to check for weak links on all of 65536 connections it's going to take a lot more time and effort than 256 connections. Etc. And the chance of your succeeding goes down with each step into greater complexity.

The Mythical Man-Month - which seems to be little more than a restatement of the "law" of diminishing returns mixed with empirical observations of difficulties in coordination under certain conditions - is a good example of my point. Under certain circumstances, adding personnel to a task will not produce the desired magnitude of increase in team production. However, under other circumstances, the addition of personnel will produce precisely that. It depends upon the particular circumstances we have in mind.

In your example, you note that if a system has 65536 points of failure rather than simply 2, then the system is less secure. Even here there are many assumptions which must be made for this to be true beyond the mere fact of greater complexity.

Modern fighter aircraft, for example, are extremely complex, with many more points of possible failure than their early predecessors. But by using sub-systems which are thoroughly tested and well known, and by building in redundancies, these aircraft are -- despite their complexity, and despite the additional complexity that diagnostics and redundancies add -- more reliable.
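That redundancy claim can be checked with a back-of-the-envelope calculation. The failure probabilities below are made up for illustration, and the model assumes independent failures:

```python
# Illustration of the redundancy point: more parts, yet more reliable.
# (Failure probabilities are invented; failures assumed independent.)

def system_failure(p_component, n_components, redundancy=1):
    """Probability a series system fails when each component has
    `redundancy` independent copies, any one of which suffices."""
    p_unit = p_component ** redundancy        # all copies must fail
    return 1 - (1 - p_unit) ** n_components   # any unit failing fails the system

simple = system_failure(0.01, n_components=10)                # ~9.6% failure
complex_ = system_failure(0.01, n_components=100, redundancy=2)  # ~1% failure
assert complex_ < simple  # 10x the components, yet lower failure probability
```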

I think the problem in your argument is that you conflate the addition of complexity with the addition of points of failure, and then silently assume that each such addition comes with the same probability of failure as already existing points and that it comes without compensating features.

@Dragonlord: The reason is that the weaknesses aren't necessarily due to the protocol; rather, they're due to the mistakes introduced while programming said protocol. The ability to prove a system correct shrinks as complexity goes up, with provability reaching close enough to zero to be effectively zero long before maximum complexity is reached.

This is better, but I don't think it quite makes the point needed for Vagle and Blaze's argument. I'll leave a more substantial response later, when I have more time, but I wanted to highlight what you said. If you can guess what my response is going to be, feel free to comment preemptively on it.

Clive Robinson • October 22, 2014 5:27 AM

@ Andrew_K,

Why should they still want to access your smartphone? Because it could prove your innocence or even prove them arranging the crime. For parallel construction, you need to know what to parallelize with...

Actually there is another reason,

US justice is not the rational judgment of facts by a jury of your peers in a court; it has long since become a game of bluff and counter-bluff, where investigators are allowed to lie to you but you are not allowed to lie to them, and it leads into "plea bargaining".

Now, for a lie to be believable to anyone of normal or above IQ, it has to contain aspects of the truth.

So the game, just as in various card games, is to make the opponent think you hold a much stronger hand than you really do. So investigators try it on, and if that does not work they pile on more and more charges until you break; it's way, way quicker than doing their job honestly.

Your phone, especially your smartphone, contains a trove of facts to make investigators appear omniscient to all but the technically clued-up, so its contents are a shortcut to a quick result for the investigators, as you roll over at a click of the fingers.

mike~acker • October 22, 2014 6:43 AM

government surveillance is for the suppression of dissent.

Setting right wrongs in society often begins with a few folks who have the fortitude to express themselves. Such as Rosa Parks.

The American Revolution itself began as dissent and ended by terminating the tyranny of the British Crown.

The draft resistance and protests against the Vietnam War come to mind,

as does women's suffrage.

Suggested reading: _No Place to Hide_, Glenn Greenwald.

There is this:

"No matter the specific techniques involved, historically mass surveillance has had several constant attributes. Initially, it is always the country's dissidents and marginalized who bear the brunt of surveillance, leading those who support the government or are merely apathetic to mistakenly believe they are immune. And history shows that the mere existance of a mass surveillance apparatus, regardless of how it is used, is in itself sufficient to stifle dissent. A citizenry that is aware of always being watched quickly becomes a compliant and fearful one."
NO PLACE TO HIDE Glenn Greenwald, p.3

Clive Robinson • October 22, 2014 8:03 AM

@ Bob.S,

Are they all liars and crooks?

Possibly not, but it is an appropriate starting assumption until they can prove otherwise...

Oh, and let them prove it through a knowledgeable third party, because speaking to them directly is known to be harmful to your health and liberty...

QnJ1Y2U • October 22, 2014 8:57 AM

@Skeptical
The problem is in your assumption that people only add vulnerabilities.
More of a simplification than an assumption. But it's almost axiomatic - for evidence, I'll cite just about every entry in Bruce's blog, Bruce's discussions in Liars and Outliers, the experience of just about every security professional ever, and for something specific, Edward Snowden.

Modern fighter aircraft ... are ... more reliable.
And yet when interacting with the real world, they are part of a system that often fails spectacularly.

I've got to dash off, so I'll just end with a quick note: we're really talking about key-escrow systems here, not just complexity. And the issues they present are not primarily matters of engineering and math. The problems are similar to those in economics (e.g., why does communism fail? In theory, you could fix all of its problems. In reality...).

vas pup • October 22, 2014 10:47 AM

@Andrew_K:
"Aside: Being sent to jail is just one way. Probably worse since tampering with your brain is sending you to the mental home. All that needs is doctors' expert statement that saying you're normal is part of you being nuts. Ta-daa again." Good point! If you dissent, for them you either disloyal (jail) or insane/with mental problem (asylum). The former at least (except life sentence) has preset terms of incarceration (yeah, they could still entrap you through jail snitches and extend being there for new crime), but there is no preset time for latter because it is based on subjective opinion of medical professionals (biased or not does not matter).As you know, psychiatry is not precise science (at least for now there are many spectrum disorders when level of same symptoms place patient to particular diagnostic 'label' with only first steps on development and utilization objective diagnostics criteria based on brain imaging, special blood tests, etc.). Andrew, I'll suggest you read "Psychiatry in the Third Reich" on subject you've touched.
Conclusion: looks like the main direction is to establish control through development in your mind the idea of learned helplessness of protecting your privacy in any possible way.

Clive Robinson • October 22, 2014 12:11 PM

@ Vas Pup,

The thing about "lunatic asylums" is that they can also torture you in the name of therapy. There was a recent case in Tolworth, South London, where a "doctor" fried the brains of a female patient, without her informed consent, with the much and rightfully maligned electroconvulsive therapy...

As we know from old East German and Soviet records, the mortality rate in their institutions was so high compared to other asylums that you would think they were part-time slaughterhouses.

As was pointed out when Dr David Kelly died in mysterious circumstances in the UK, after being right royally abused by the "political wing" of the UK Civil Service over what he said about Iraq WMD: "Who questions the suicide of a depressed person?" Well, the experienced ambulance crew for starters, who said there was insufficient blood at the scene... and many others subsequently, including a UK politician who "stepped down from the front bench" to seek out and examine the evidence:

http://www.dailymail.co.uk/news/article-488667/Why-I-know-weapons-expert-Dr-David-Kelly-murdered-MP-spent-year-investigating-death.html

Vinny Lisi, Special Agent in Charge of Barking up the Wrong Tree • October 22, 2014 12:11 PM

"feel free to comment preemptively" Skeptical loves to give you these little assignments. Manipulative tricks like that give him little tingles in his loins. Surprisingly, people sometimes fall for it. As always, he's trying to change the subject, which once again is, Comey doesn't want to comply with the law.

Specifically, the CALEA legislative history dug up above by Soghoian where FBI tried to bury it: "telecommunications carriers have no responsibility to decrypt encrypted communications that are the subject of court-ordered wiretaps, unless the carrier provided the encryption and can decrypt it. This obligation is consistent with the obligation to furnish all necessary assistance under 18 U.S.C. Section 2518(4). Nothing in this paragraph would prohibit a carrier from deploying an encryption service for which it does not retain the ability to decrypt communications for law enforcement access ... Nothing in the bill is intended to limit or otherwise prevent the use of any type of encryption within the United States. Nor does the Committee intend this bill to be in any way a precursor to any kind of ban or limitation on encryption technology. To the contrary, section 2602 protects the right to use encryption."

We decided this and wrote it down in black and white and told the would-be CHEKA at FBI, get lost, you can't sniff everybody's panties. But Opus Dei wants to hear your confession, so Comey's back for more.

name.withheld.for.obvious.reasons • October 22, 2014 12:17 PM

Once again the government is complaining because they don't have the key to the front door of your house, shed, safety deposit box, chastity belt, or codpiece.

Why are officials not held to account when they claim, a supposed law-man no less, that compliance with Constitutional law is problematic? Instead of changing the Superior Law, the Constitution, the government (Congress and the courts) nickel-and-dimes the citizenry to death with public law and statute that cannot hold up under the light of true scrutiny. Sure, tell people enough bullshit over and over again and they'll come to believe that Hitler was a nice guy... just misunderstood.

Skeptical • October 22, 2014 12:21 PM


@Qn: The problem is that it's not axiomatic. To take some trivial examples: adding a second guard adds complexity to a system; adding a second patrol adds complexity to a system; adding a requirement that two system admins be present to access or move certain data adds complexity to a system. Yet these are common measures to improve security in a desired fashion, and they generally do.

@Dragonlord: re "provably secure" - while increased complexity may make it more difficult to prove a system secure, that's not quite the same thing as the claim that increased complexity makes a system less secure. "More difficult to prove" does not mean "unprovable," and so even accepting your principle, two systems of vastly different complexity may both be provably secure. This sinks, entirely, the premise that Vagle and Blaze's argument relies upon.

I simply don't think there's any real substitute here for specifics. One cannot, and should not in my opinion, rule out systems with lawful access as a matter of principle. At the least, I've seen no persuasive account of a principle that would enable one to do so.

That doesn't mean that one cannot say, "I've examined existing proposals to date, and none of them are worth the security risks for the following reasons." That would be a substantial argument, but obviously it requires a lot more study and work. And I believe that some work along those lines has been produced over the last decade or so.

Look, these questions aren't easy or simple, and that means the answers and arguments aren't likely to be easy or simple either. That makes it harder to translate them into easily digestible pieces for public consumption, or for entirely one-sided "advocacy journalism", but no one ever said that intelligent self-governance was going to be easy or that the task of keeping the public well informed was going to be simple. The world is complex, the issues messy, and we can either grapple honestly with those features or become lost in a fog of emotionally charged but misleadingly simplistic "answers" to policy questions.

Nick P • October 22, 2014 12:41 PM

@ Skeptical

Why Adding a Backdoor Probably Increases Vulnerability

Your arguments have the problem that they start with a conclusion, then work backwards to arguments and premises that might support it. That's fine in sophistry, but in science we work from observations to testable theories. So, let's try it that way.

1. Almost every remote access tool has had at least one vulnerability.
2. Many have had more than one.
3. Conclusion: adding R.A.T. software to a system typically adds at least one vulnerability that someone eventually finds and/or exploits.
4. Prediction based on 3: the remote access software they add to any product will make it vulnerable with high probability to at least one attack. Maybe several.

Note: That this is true for software in general per CVE vulnerability sheets further supports my view. "Insecure unless proven otherwise" is our default.

So, we have a strong, probabilistic argument before we even get to the complexity. So, what's complexity do? Quite simple: it makes the software harder to understand while simultaneously creating more state changes & interactions in the code. That increases the likelihood that constructions will appear that produce insecure states. It simultaneously decreases the likelihood that the developer will detect those interactions or states. So, complexity increases the odds of software failures, some of which lead to insecurity.
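The state-explosion point can be made concrete with a toy calculation. The numbers are illustrative, and the `decomposed_effort` model below is a simplification I'm assuming for the sketch, not anything taken from the Orange Book:

```python
# Sketch of "state explosion": total states multiply across components,
# so analysis effort grows exponentially with the number of interacting
# parts -- unless the system is decomposed with mediated interfaces.

def total_states(states_per_component, n_components):
    """States to analyze when everything interacts with everything."""
    return states_per_component ** n_components

def decomposed_effort(states_per_component, n_components, interface_states):
    """States to analyze when each component is verified in isolation,
    plus a small, well-defined interface between them (toy model)."""
    return n_components * states_per_component + interface_states

assert total_states(10, 2) == 100          # tractable to enumerate
assert total_states(10, 6) == 1_000_000    # already a million states
assert decomposed_effort(10, 6, interface_states=50) == 110
```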

Further, we've known this a *long* time. The U.S. military, NSA, private contractors, and academia helped create our security standards in the 80's after thinking on it a decade. It was determined early on that a system can only be called secure if (a) every state it can be in is known in advance and (b) every state can be shown to be secure. As complexity results in "state explosion," the Orange Book high security classes (B3 & A1) mandated the software use simple constructions, be layered, be modular, and all its interactions analyzed. Anything less than this was insecure. Only a handful of systems achieved this evaluation level. And even A1 didn't guarantee it was secure: just the lowest risk known to be achievable at that time.

So, we already have our opponents (even NSA) telling us the minimum needed for a secure system. The drastic-reduction-of-complexity approach is one they learned after decades of trying (and failing) to make assurance cases for complex systems. By complex, I mean even MSDOS was too complex. The current standard, Common Criteria, requires the same thing at EAL6-7 levels, and most vendors only certify simple devices (eg data diodes) to EAL7. It's so difficult to design, analyze, and assure software to this security level that nobody has ever done it for anything more complex than a small software kernel or smartcard OS. Unevaluated prototypes exist for security-critical aspects of other software, so there's potential in the future to expand it. No counterexamples right now, though.

Note: Outside CC & U.S. government, Bernstein's qmail system has survived years of hackers coming at it with basically 1 flaw, if I recall. He's one of the U.S.'s best security engineers and developers. He describes here the principles that made qmail secure while all others fell to hackers. "Less code" = fewer bugs and "less trusted code" = fewer vulnerabilities were critical lessons. Also compatible with decades of findings by U.S. government & academia.

Security in software is also related to safety. The concept of safety is usually called reliability: will the system function, or crash due to a bug? As most code injections start with a bug, software-safety improvements that reduce defect count have some beneficial security effect, since fewer defects mean fewer potential vulnerabilities. Relevant standards for safety-critical systems by industry, academia, and government include DO-178B, IEC 61508, and EN 50128. All of their highest levels require simple software, thorough documentation of everything it can do, tests exercising every function/error state, and so on. Most such systems use microkernels, custom state machines, and decomposition into simple, interacting components. The reason? Nobody has made a believable safety case for anything more complex or monolithic.

So, I've just summarized decades of empirical observations, attempts, and results in safety & security. Our best experts and engineers consistently failed to make highly complex systems safe or secure. They advised that we instead reduce complexity wherever possible. Projects that did so succeeded more often. If complexity was necessary, the system had to be decomposed repeatedly into simpler and simpler pieces with precisely defined interactions and states. Even most products built with security engineering turned out insecure, with only a few making the cut of no vulnerabilities or high-assurance certification.

Conclusion: adding code or complexity to a system provably weakens it in almost every case. A default policy of untrustworthy until proven trustworthy is rational here.

Secondary conclusion: a "secure backdoor," if it can be made, would need to be designed & evaluated to EAL7 requirements to prove its added code or complexity doesn't increase insecurity. This is how we prove it to be trustworthy.

Premise: NSA I.A.D. has been pushing EAL4-certified products for military, government, and private sector use (including remote access). It claims they improve security. Internally, they use physical separation with EAL5-7 and specialty (eg Type 1, TEMPEST) products to support most critical operations' confidentiality. They don't push these methods on the public, with many not allowed for private use outside defense contractors.

Premise: NSA I.A.D. is the designer of the security standards I've referenced and the main certifier of anything EAL5-7. They know EAL4 systems can't stop (and haven't stopped) determined attackers. Even their own Common Criteria standard says they don't, and Shapiro says it better. Hence, they must be pushing these "secure" products in a deceptive way to enable their SIGINT mission, or they are totally incompetent except when evaluating the most sensitive, internal-use products.

Third conclusion: NSA is either too incompetent or subversive to trust to assure whatever lawful backdoor we put in. FBI uses similar methods & has similar recommendations so this applies to them too.

So, adding the backdoor will make the system vulnerable with high probability, AND the agencies designing the backdoors are using development methods proven to make it easier for hackers to exploit software. So, any backdoor they build for themselves will be exploited by the enemy, per their own security-evaluation statements on EAL4 software. It also might be covertly exploited by them, as NSA did with Siemens under the guise of fixing vulnerabilities. It doesn't help that most EAL4 systems have been hacked dozens of times, with some (eg Windows) hacked hundreds of times. So, if there is to be a backdoor, they can't be trusted with either developing or evaluating it. And since they *will* get that responsibility if the laws pass, the laws should be resisted in full until a safer option is on the table.

7dje73h4fj • October 22, 2014 12:47 PM

Skeptical:

What difference do you perceive between a "back door" and a "front door"? What definitions are you using? They are just 2 different names for the same concept:
1. a user has a normal way of accessing their own system,
and
2. some other agency has a second way.

Whether that second way is publicly documented or declared or not does not change the facts that:
1. it's a second way of access that some agency has,
2. the system owner doesn't have the ability to use or close it off,
3. the owner is thus at the mercy of the agency with the knowledge,
4. there is also a risk of others discovering and exploiting the door.

Apparently you disagree, and perceive some difference between "back doors" and "front doors", so with which part of this list do you disagree?

More to the point: All that is academic. The real bottom line is:

Do you SERIOUSLY believe that it should be illegal for people to run whatever encryption software they want, instead of being forced to run only software for which the government has "front doors"? WTF?

What penalties do you think are appropriate for someone who dares to encrypt a file on their own computer using software with no "front doors" programmed into it? An awful lot of people do just that. Should they all be put into prison, or heavily fined, or what?

QnJ1Y2U • October 22, 2014 1:06 PM

... we can either grapple honestly with those features ...
That would help. Note that, as this blog post points out, the director of the FBI failed on this point.

I simply don't think there's any real substitute here for specifics.
Cool - please describe a secure key-escrow system. You can use this paper as a basis for some of the issues to consider: https://www.schneier.com/paper-key-escrow.html

How FBI talks retards into fake terror • October 22, 2014 1:12 PM

Skeptical wants to make you talk about proposals. What about this? well, What about that? like the slimiest used-car salesman, he pretends not to understand what No means. He doesn't want principles, no telemarketer does, he wants to bargain and negotiate and haggle and sell so he can screw you on niggling details. Tough shit. The principle you can't avoid is No.

We don't want scumbags like you opening our mail.

We said so in ICCPR Article 17. Then we said it again in CALEA. It's the law. You don't like it? Suck my ass.

jones • October 22, 2014 2:13 PM

When Comey says:

with sophisticated encryption, there might be no solution, leaving the government at a dead end -- all in the name of privacy and network security

Taking him at his word, this sounds like he wants to go back to the days before PGP, when encryption was regulated like a munition. It sounds like his solution is no encryption for anybody. It sounds like that's what he means by "front door."

In the past, the NSA's tactic was to pressure NIST to limit the size of the DES encryption key to 56 bits, the reasoning being that NSA had computing power for a brute force attack, but that this capability was just outside the reach of criminals and industrial espionage applications... such an approach could also be considered the "front door."
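The arithmetic behind that 56-bit design point is easy to reproduce; the search rates below are purely illustrative, not historical figures:

```python
# Back-of-the-envelope numbers for a 56-bit keyspace: large enough to
# stall an ordinary adversary, small enough for one with massive
# hardware. (Search rates are illustrative only.)
keyspace = 2 ** 56  # 72,057,594,037,927,936 possible keys

SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_search(keys_per_second):
    """Worst-case exhaustive search time, in years."""
    return keyspace / keys_per_second / SECONDS_PER_YEAR

# An adversary testing a million keys per second: millennia of work.
assert years_to_search(1e6) > 2000
# A state actor at a trillion keys per second: roughly 20 hours.
assert years_to_search(1e12) * SECONDS_PER_YEAR < 24 * 3600
```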

To take issue with Comey's policy position, when he says, "the law hasn’t kept pace with technology, and this disconnect has created a significant public safety problem" the alternate way of posing this position is that technological developments haven't kept pace with the needs of the public good -- pursuing economic growth instead.

This is a policy problem caused by the growth imperatives of organized industry. Congress and the Executive Branch should stop pursuing an industrial policy that undermines the public good for the benefit of industry. Corporations are legally chartered entities, and can be regulated in a way that makes them serve the public good first.

If we were less reliant on computers, then computer security would be less of an issue. If the government weren't migrating so many essential services online to lower overhead, then security would be less of an issue.

The policy solution to keeping government services in the physical realm is to raise taxes and hire more public employees to process the government's workload, rather than shift the burden to citizens, who need to pay for their own internet access to interact with government services and figure out how to operate government websites without assistance.

Clive Robinson • October 22, 2014 3:04 PM

@ Skeptical,

The problem with growing a code base, or any other system, is that without applying constraints, complexity goes up as at least N^2, whilst the probability of getting it correct is proportional to 1/N.

The problem with all the effective constraints we currently know of is that they have a significant effect on efficiency. The main reason for this is that, effectively, you break N down into subsets and provide a strongly mediated interface between them.

The result of subsetting is "diminishing returns" because you start building up complexity in the mediated interface. So you end up with a balancing act as to where you want your complexity.
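That balancing act can be sketched numerically. The cost model below is an illustrative assumption, not derived from first principles:

```python
# Sketch of the trade-off described above: a monolithic system's
# complexity grows like N^2, while splitting it into k mediated subsets
# trades that for per-subset complexity plus interface complexity.

def monolithic(n):
    """Unconstrained complexity: every part can interact with every other."""
    return n * n

def partitioned(n, k, interface_cost):
    """k subsets of n/k parts each, plus pairwise mediated interfaces."""
    per_subset = (n // k) ** 2
    interfaces = k * (k - 1) // 2 * interface_cost
    return k * per_subset + interfaces

n = 1024
# Partitioning slashes complexity...
assert partitioned(n, 8, 100) < monolithic(n)
# ...but over-partitioning lets interface complexity dominate
# (the "diminishing returns" in the comment):
assert partitioned(n, 256, 100) > partitioned(n, 8, 100)
```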

From a fundamental perspective, the increase in complexity creates an increase in information, which is impressed onto energy, which in turn is subject to the laws of nature via forces and the speed of light. The problem is that, having impressed the information onto energy, it has to go somewhere. The question is not just where and how, but whether it transgresses the system's security perimeter, and if it does, whether the information can be retrieved by others... this is the crux of EmSec.

Arguably the number of points or ways such information can leak is proportional to the total complexity of the system, that is of all the subsystems and the interface mediators...

I could go on further, but the simple fact is the numbers are very very much against your argument, whichever way you slice it or dice it.

Skeptical • October 22, 2014 3:16 PM


@Nick P: Your arguments have the problem that they start with a conclusion, then work backwards to arguments and premises that might support it. That's fine in sophistry, but in science we work from observations to testable theories.

I'm sure that I've made many imperfect arguments, but in this case I'm innocent of the charge you're making.

Let me emphasize again: I have made a very modest negative argument against the reasoning used by Vagle and Blaze.

Quite literally all I've done is take issue with the notion that it's enough to dismiss the very concept of feature X as decreasing security if feature X increases complexity.

The basis I gave for doubting the reasoning is also very modest and very simple: the existence of many counterexamples to the claim that there is a necessary and inverse relationship between complexity and security.

Bernstein's argument, if I recall it correctly, with respect to complexity relies not merely on complexity, but on complexity of a particular nature and magnitude. What his argument really boils down to is: the less you understand a system, the less able you are to predict the consequences of inputs to that system. And certainly sometimes the addition of complexity reduces our understanding of a given system. But not always. How well do you understand the system before additional complexity is added? How well do you understand what is being added? Can we definitively isolate the set of possible effects the addition will have upon the existing system (and conversely, if what we're adding is in the nature of an extension of a kind)?

Certainly if you're accumulating lines of code at a frenetic pace in response to requirements that are not fully understood, your understanding of the system will decline precipitously, and depending on the degree to which safety features are built into the type of code being written, dangerously.

But you may also accumulate new features carefully, cautiously, and in a way that actually enhances the predictability of behavior (or reduces the probability of identified branches of undesired behavior).

Which one happens depends on... specifics.

So, as to whether a lawful means of access would or would not decrease security in a given system, I've stated repeatedly that - so far as I can tell - the answer depends on the specifics of the system in question and the specifics of the lawful means of access.

Notice that in many ways you really are not in disagreement with this very modest claim. Instead you're making an argument about the type of complexity that will probably be added.

Also one quibble: science does not really move from observations to testable theories, as you stated. Instead science consists of developing theories that generate particular empirical predictions, which can then be confirmed or disconfirmed via observations. But this is mostly a quibble - the inspiration for a theory can come from many sources (observations, knowledge of a different field, a seemingly unrelated story or event), of course, but the point at which we move from just-so stories and speculation to a scientific method is when we develop a theory that generates empirical predictions which can be confirmed or disconfirmed via observation.

@Qn: Note that, as this blog post points out, the director of the FBI failed on this point.

I agree, but then Comey didn't claim to be making an attempt.

Cool - please describe a secure key-escrow system.

:) I don't claim to have such a system in my back-pocket either, I'm afraid. As I've said, my criticism was directed at the reasoning used by Vagle and Blaze in support of their argument.

I'm beginning to feel like an agnostic who has criticized a particular argument for atheism and is now demanded to prove theism.

Skeptical • October 22, 2014 3:24 PM


@Clive: I didn't see your comment before submitting my own. What you argue is interesting - I'll have to think about it a bit before responding. As to N^2 and 1/N... just to be sure I fully understand you, what does N represent exactly here, and how are the relationships you described derived?

PIE of Washington • October 22, 2014 4:19 PM

What skeptical's beginning to feel like is more like a chomo who has criticized 18 U.S.C. § 2251 and feels challenged to make the case for man-boy love. Get it through your thick perv skull. You're not allowed. No peepholes in the middle-school locker room. No hot teen screencaps. No surveillance of correspondence. It's the law.

Nick P • October 22, 2014 4:33 PM

@ Skeptical

"I'm sure that I've made many imperfect arguments, but in this case I'm innocent of the charge you're making.Let me emphasize again: I have made a very modest negative argument against the reasoning used by Vagle and Blaze."

Maybe so. I'm not attacking or supporting their claim or your attack on it. I'm doing a clean slate argument showing complexity almost always reduces security unless extremely rigorous methods are used. I supported that with claims by government's best security engineers, researchers and contractors. Then I added that the processes the government will use for the backdoor (and uses right now for others) are certified to be insecure against our nation's enemies. So, the backdoor opens us up to L.I. and the hackers NSA is always warning us about.

Matter of fact, every major product they push minus separation kernels is evaluated to the "certified insecure" standard. FBI too. So I add that both are unreliable for developing a complex but safe product. The complexity will almost certainly result in vulnerabilities, as it has in every similar product, including those they've designed in the past. And on top of it they're misleading the public in a way that benefits SIGINT & LEO efforts, not security. So their recommendations can't be trusted.

So, once again, here's the argument (that's totally unrelated to Vagle and Blaze):

1. Virtually every remote access tool and critical piece of software has been hacked.

2. Anything done with a typical, EAL4-level development process is often hacked in multiple ways when it becomes popular.

3. Virtually every experienced engineer and standard says complexity often causes reliability and security problems. Hence, must be countered with rigorous, specialist, expensive development processes. (Or avoided entirely.)

4. FBI/NSA wants to use No 2 to build No 1. They say the result will be secure.

5. FBI's, NSA's, CIA's, DOD's and academia's security engineers all say a secure result must be done using No 3 instead of 2. Many will add there's still no guarantee at that point.

6. Conclusion: If FBI/NSA are allowed to do 4, the complexity and risky development process will result in an insecure modification to the product that reduces its security. There's no guarantee this is true, but it's happened in almost every similar situation.

So, complexity usually does hurt security regardless of how they handle it, there are ways of reducing the risk of the extra code, NSA/FBI will be using methods that increase the risk instead, and therefore we should reject their desired L.I. capabilities until they remove the unnecessary risk from their proposals. Additionally, as they aren't trustworthy, I'd rather a third party build it with a multinational review process to catch potential subversions.

Besides, I could use as many high assurance protection mechanisms as I can get, even with escrow. If I'm worried about escrow holders, I encrypt the data another way before I send it over their method or use diverse redundancy with voting protocols as Clive often mentions. Their highly assured method stops even sophisticated attackers, then mine keeps them in check unless they have a warrant. Worked fine for me in the past. ;)

Sancho_P • October 22, 2014 5:58 PM

This “National Conversation” guy is another example of a goat in top management.

A sensible, moderately intelligent human would ask before talking bullsh** in public.
To be fair, he may not have written that speech himself, but my goats have a better nose and avoid such bad herbs, granted.

A good manager wouldn’t talk in the first place.

He would ask experts and try to understand the back door / front door thing.
He would learn that there is no technically and economically feasible solution.
He would also learn that the global world is not his “National” playground.
Probably some expert would have told him that it’s a bad idea to cite weak examples.
Other experts would have explained the general terms “encryption and private devices”, data traffic and so called “metadata”, server data, the internet, national borders and foreign laws and culture.

Also, “crime” and “evidence” should be explained to him, as “insecure” (vulnerable) also means “possibly tampered evidence”, whether by mistake/error (see software EULAs) or deliberately, to hide the truth.

*** All “evidence” from IT sources must be generally dismissed by court. ***
(Sadly that’s never a point in discussion)

He may also learn that the public is the wrong audience for this “talk”, at least as long as he himself understands less than the public.

Some of his own experts (law enforcement?) may also explain that the public is a very inhomogeneous mixture of individuals, and by far not all humans are criminals, even if most of his fellows are such rabble.

He seems to know that there are already legal means to access private homes / devices. In many countries a judge also can compel a suspect to open his safe / decrypt private messages.

The core of that principle is that this happens in the open, face to face, lawyer to lawyer, not hidden in the dark, secretly.

I think we all understand the demand for access to device data -
in particular cases.
If it were as simple as this “speech” claims, the simple answer would be:

- So use it!

“We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process—front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.”

[James Comey]

But this doesn’t mean ending secure encryption in general.

Solving problems takes understanding and innovation.
And it’s a bad idea to mix everything up, confusing people and oneself beforehand.

[Rem:
paul made a good point regarding “metadata” (a very disingenuous term) - they have it anyway.]

Silent Underground • October 22, 2014 6:02 PM

I read this as "it isn't a backdoor, it's a "Blah-Blah-Blah-Lie-Lie-Lie".

There are legitimate investigative reasons for some backdoors, but you can surveil the target's system directly. You don't have to surveil everyone.

The nation's founders were very concerned about government going out of control. More so than about petty crimes. (Just a little??) Why has this been forgotten?

OldFish • October 22, 2014 7:25 PM

There are some beautiful symmetries in the 1st Amendment. Along with the freedom of speech there is the freedom to hear or read that speech. There is also the right to be free from prior restraint of that speech.

Aren't there possibly analogous symmetries with respect to other Amendments?

For instance, isn't it a violation of the 4th to force a citizen to create papers and effects in a form that can be searched? It presumes probable cause and relevance before the fact.

Or a violation of the 5th to force a person to speak, or keep a diary, in a form that is accessible at a later date for purposes of collecting incriminating evidence? This irretrievably removes the protection provided by the 5th at a time predating the point at which a citizen would need to make an informed decision.

If we would only reason correctly it seems to me that the law is ahead of the technology by several hundred years.

Figureitout • October 22, 2014 10:39 PM

Skeptical RE:science
--Let me first state I mean this in a civil way, I have to work hard on keeping my emotions in check.

I wasn't going to get involved in what appears to be you trolling, even after you just made the EXTREMELY laughable statement that complexity doesn't affect security. It's almost an automatic that it does, and if you can't see that, then it's clear you haven't done any engineering or security work. What that potentially means is, first, your opinion on the topic doesn't mean much, and worse, you can't even recognize when you're hacked, which is sad.

BUT, then you brought science into the picture. Criticizing someone out of all the people here that repeatedly pwn you intellectually, the best you can do is back out after you're owned or just ignore the pwnage. You...you of all people trying to lecture people on the scientific method. No, skeptical, just no. Stand down. Go back to the statehouse or congressional committee, I don't care. You do not have any credibility on the scientific method or in this case escrow systems.

Have you ever compiled a single program? Can you make an LED flash w/ arduino? It's ok. You know I started as a public affairs derper until I couldn't get a job in that field. I've been on this blog for 6 years and in that time I've gone from someone who could get completely ass-raped online to a much harder target now; offline is where I shine, but if I get attacked online, as I still do daily, it's only from certain people whom I've already mitigated; it's done. I suggest you read up and research if you're interested b/c anyone can do it if they want it bad enough.

Wesley Parish • October 23, 2014 3:08 AM

@Skeptical

Modern fighter aircraft for example are extremely complex, with many more points of possible failure than their early predecessors. But by using sub-systems which are thoroughly tested and well known, and building in redundancies, these aircraft are - despite their complexity, and despite the additional complexity that diagnostics and redundancies add - more reliable.

Thank you for making my point for me. By making sure of each individual component, you lessen the likelihood that the full complement will fail.

I think the problem in your argument is that you conflate the addition of complexity with the addition of points of failure, and then silently assume that each such addition comes with the same probability of failure as already existing points and that it comes without compensating features.

And when Clive Robinson restates the same argument in terms of N^2 versus 1/N ...

Clive Robinson • October 23, 2014 12:39 PM

@ Skeptical,

Sorry I missed your relatively small reply in the mass of other comments.

With regards,

As to N^2 and 1/N... just to be sure I fully understand you, what does N represent exactly here, and how are the relationships you described derived?

I'll answer it backwards as it's easier to do that way.

The two terms can be considered the lower bounds of the problem in that they could be much worse, and are in practice currently.

N^2 is the dominant term in the number of relationships in a set of objects, normally given as 0.5(N^2 - N). 1/N is the simplest probability that you have got the desired "secured state", and is the same as the probability of getting a six from a single roll of a die; in practice it will be a lot worse. For instance, if we assume not a ten-sided die but ten successive flips of a fair coin, it's not 1/N but 1/(2^N), so not 1/10 but 1/1024. The choice of which probability measure to use is very case dependent, but 1/N is an acceptable lower bound.
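In code, the two formulas work out as follows (a quick numerical sketch of the bounds being described, nothing more):

```python
def relationships(n: int) -> int:
    # Number of pairwise relationships among n entities: 0.5 * (n^2 - n),
    # the n^2-dominated count referred to above.
    return (n * n - n) // 2

def p_single_choice(n: int) -> float:
    # Lower-bound probability of hitting the one desired "secured state"
    # in a single n-way choice.
    return 1 / n

def p_binary_choices(n: int) -> float:
    # The same, for n independent binary choices (ten coin flips,
    # not a ten-sided die): 1 / 2^n.
    return 1 / (2 ** n)

print(relationships(10))    # 45 pairwise relationships among 10 entities
print(p_single_choice(10))  # 0.1
print(p_binary_choices(10)) # 0.0009765625, i.e. 1/1024
```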

As I've just said, N is the number of entities in a set between which relationships exist, but that is a very abstract view.

In practice the set is the bounds on the system you are considering, and N is the number of sub-systems within the system that can be considered to have their own bounds. That is, unless you can put a bound around a group of entities, they count individually toward the total N of the system, not just as a single occurrence.

To make it more concrete: if I design a system with ten electronic sub-systems, each of which has six component sub-parts, then provided I fully shield and isolate each sub-system, N is ten; if I don't, N rises towards 60. In practice the worst case comes down to just about every component in every sub-part of all the sub-assemblies, so it could be thousands. Deciding what does and does not count is difficult, because every electronic component that carries current has a magnetic (H) field associated with it, including PCB tracks. Likewise, wherever there is a potential difference there is a voltage (E) field associated. If these fields change - and they will in an active system - they will, if allowed, radiate out of the system, carrying any information impressed upon them.

When you look at software it's even more fun. If you have a perfect OS with perfect memory management and no interprocess communications, one that only uses timer-based multitasking, then you might consider N to be one for the OS task, increasing by one for each running task. In practice there are no OSs that work this way, as it's far too inefficient. Thus for a computer, N is somewhere between 2 and the number of microcode instructions, which is considerably more than the number of assembler instructions that are in memory and running.

As a just vaguely relevant example, an old Unix box has the kernel and scheduler as tasks 0 and 1, and each user has a shell task and one running application task; with 4 users that would, from a 20,000ft view, be ten tasks, or N being 10 or higher if printing etc. is enabled. However, in practice each task has a control loop and several sub-functions it calls, so you might be tempted to say that at this level N would be 1 for the control loop plus an additional count for each sub-function. Unfortunately, because user memory is not segregated for each task, N is in effect going to be the number of lines of code unless other constraints are used. This is an important consideration, because although an OS may provide hardware-mandated task memory isolation, tasks don't. And if, as is quite common these days, you have several sites open in a browser at the same time, then not only are they all in the same memory space, they share a lot of code and state information. Which has enabled, and still does enable, one remote site to see information regarding a second remote site through the browser.

One solution to N being the total number of instructions in memory is to write your code in a very modular way. However, whilst this might apparently reduce the complexity from 10,000^2 to 20 x 500^2, there are problems. Firstly, to get the modularisation you have to not only unshare code but also add additional code, so the number of lines increases quite a bit. Secondly, there are major issues with compilers and their libraries, which are only slowly changing.
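The arithmetic of that trade-off is easy to check directly (an illustration of the figures above, with the stated caveat that real line counts grow once code is unshared):

```python
# Monolithic: every instruction can potentially interact with every other.
monolithic = 10_000 ** 2        # 100,000,000 potential pairings

# Modularised: 20 modules of 500 lines, interactions confined per module.
modularised = 20 * 500 ** 2     # 5,000,000 potential pairings

# A 20x reduction in pairwise interaction count, before the added
# modularisation code eats into the gain.
print(monolithic // modularised)
```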

In the case of Qmail, its designer Daniel J. Bernstein had to rewrite the standard library and use a modified compiler tool chain to do it. And even with his level of expertise and the three years he spent doing it, there are at least five known bugs in the code, so even he did not hit the mark. Further, I suspect there may be other latent attack vectors when you look below the high-level language layer.

The reality is DJB is an exceptionally talented security person, and the number of other programmers who even come close won't need many hands to count. The method he used was much closer to a proper "engineering" approach than just about any other software production (parts of NASA being another). Few commercially employed programmers have any comprehension, let alone experience, of such approaches, and even if they did, their employers would never let them use them, because the process is so extraordinarily expensive, not just in programmer time but across all the production phases.

I've indicated in the past what needs to be considered for a sound engineering approach; however, it's dependent on metrics that just don't exist currently, though we have got closer in the intervening decade. Thus other formal methods are the route taken, but the current methods still don't get you security by default, nor will they ever, because of their current lack of scope over the entire computing stack. We still have a lot to learn to close the gaps, but at least we have an idea of which direction to go.

But the real underlying issue that I cannot see changing is the "marketplace". You will hear comments like "security costs" and "consumers only want features" as excuses, but the simple fact is the industry has been heading down the wrong path for so long now that I doubt it will ever climb back even if it were forced to, and as we know, the current regulatory push is in the wrong direction for various reasons.

Skeptical • October 23, 2014 1:27 PM


@Nick P: I'm doing a clean slate argument showing complexity almost always reduces security unless extremely rigorous methods are used.

Fair enough. I only quoted a portion below, but I think it's enough for discussion (and obviously the full argument is just above in the thread).

2. Anything done with a typical, EAL4-level development process is often hacked in multiple ways when it becomes popular.

4. FBI/NSA wants to use No 2 to build No 1. They say the result will be secure.

This is a much more persuasive argument. As you note, it is an argument against the inclusion of a lawful means of access to the extent that the government utilizes a development and assurance testing framework of the type specified in (2) above. And, to the extent that government utilizes a framework meeting sufficiently rigorous standards, it is not an argument against such inclusion.

I have no problem with the logic of the argument, and it comes down of course to the particulars of a proposal to include a lawful means of access.

So to a great extent, we're actually in agreement.

@Wesley: Thank you for making my point for me. By making sure of each individual component, you lessen the likelihood that the full complement will fail.

This wasn't your point as I read you, though. Your point was that complexity necessarily has an inverse relationship with security. My response is that this is true if and only if one makes several assumptions about the nature of the complexity being added, and that these assumptions are not in fact always the case.

Part of the problem here may be in what I read as a casual use of the term complexity in Vagle and Blaze's article (which is fair, given the forum in which the article is published and the authors' lack of description or definition of the term). If the term were being used in a particular way, more defined by theory, then most of the discussion in this thread would be incoherent.

And when Clive Robinson restates the same argument in terms of N^2 versus 1/N ...

I asked him to clarify what he means N to represent precisely and to indicate the derivation before I responded more fully. I'm not sure how that's inconsistent with anything I wrote to you. Given the various ways of defining and describing complexity, I don't think my question was unreasonable.

@Figureitout: I wasn't going to get involved in what appears to be you trolling, even after you just made the EXTREMELY laughable statement that complexity doesn't affect security.

My statement is a little more subtle than that, actually.

It's almost an automatic that it does

There's the rub. "Almost an automatic." If X "almost automatically" implies Y, then it is false that X (necessarily) implies Y. Do you see my point now?

Criticizing someone out of all the people here that repeatedly pwn you intellectually, the best you can do is back out after you're owned or just ignore the pwnage.

That's part of the difference in the way that you and I converse at times. I don't criticize people here - just ideas and arguments. That's what enables me to respect someone, and talk politely with them, even when I might disagree with an idea or argument they put forward.

Have you ever compiled a single program?

Gosh, never. It sounds very complicated. Is there an app for that? How does this relate to the actual discussion?

I suggest you read up and research if you're interested b/c anyone can do it if they want it bad enough.

Always happy to learn more.

Silent Underground • October 23, 2014 3:25 PM

@OldFish

For instance, isn't it a violation of the 4th to force a citizen to create papers and effects in a form that can be searched? It presumes probable cause and relevance before the fact.
Or a violation of the 5th to force a person to speak, or keep a diary, in a form that is accessible at a later date for purposes of collecting incriminating evidence? This irretrievably removes the protection provided by the 5th at a time predating the point at which a citizen would need to make an informed decision.
If we would only reason correctly it seems to me that the law is ahead of the technology by several hundred years.

Hah! Very well said, sharp observation. :-)

"Old" fish indeed... definitely an "old soul" observation on that one.

Skeptical • October 23, 2014 6:03 PM


@Clive: I appreciate the thoughtful answer!

N^2 is the dominant term in the number of relationships in a set of objects, normally given as 0.5(N^2 - N).

So to restate this slightly to be sure I understand you, N represents the number of nodes and (1/2)(N^2 - N) of course is the number of edges in a complete graph. Each increase of a node (N+1) will result in an increase in the number of edges by + N. Complexity is measured by the number of nodes.
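That restatement, including the increment property, can be verified mechanically (a quick sketch of the graph arithmetic):

```python
def edges(n: int) -> int:
    # Edges in a complete graph on n nodes: (1/2)(n^2 - n).
    return (n * n - n) // 2

# Adding one node to an N-node complete graph adds exactly N edges.
for n in range(1, 50):
    assert edges(n + 1) - edges(n) == n

print(edges(11) - edges(10))  # 10: the 11th node connects to all 10 others
```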

1/N is the simplest probability that you have got the desired "secured state", and is the same as the probability of getting a six from a single roll of a die; in practice it will be a lot worse.

Now there are a few steps here you've taken that I'm not sure I see entirely clearly. Given the model adopted here, what counts as a secure state in the terms of the model? How do we get to 1/N? In your examples a node seems to be a component within the system, but the probability you gave of a secure system seems to imply that each node actually represents a possible state of the system.

But even in advance of those answers, the first of two overall reservations I have here is that I'm unclear as to why this model is well representative of the target reality we're attempting to capture.

I grant of course that all models are essentially wrong but some are useful; still, what is the utility of such a model without knowing what we're boiling down to nodes and edges?

My second is that while I (may) understand the approach to probability you're taking here, especially given the model, I don't think that captures the thrust of Vagle and Blaze's argument, which is to assume that a more complex system is less secure, not simply that - given a random selection of possible states - a more complex system is less likely to be secure. And to me this distinction makes all the difference so far as evaluating a policy approach (or any approach).

Re NASA's software engineering - while I have very little familiarity with this subject beyond the reading of a paper or two, scraps of which linger in the dusty corners of my memory, I recall quite vividly Feynman's highly complimentary description of their approach.

Aaron McFarlane, Extra Special Agent • October 23, 2014 8:33 PM

@silent underground, please, it is rude to draw attention to the legal and constitutional turds bobbing in the punchbowl, you're going to harsh skeptical's obsessive noodling with software engineering details that he acknowledges knowing nothing about.

Skeptical is cogitating on an altogether more rarefied conceptual plane, applying the Sandusky Theorem which was developed at Penn State. It takes its canonical form when Joe Paterno goes to Sandusky and says, You have to stop raping boys in the shower, it's contrary to law, and Sandusky says, well it depends, what if I stopped using soap and used clinically tested water-soluble lubricants instead, Wouldn't that solve the problem? and Paterno says, No, no, you're gonna get us all in trouble, and Sandusky says, but what if I gave them drugs, then there would be complicated issues of consent, depending on the drugs in various combinations, and their dosages, and you really can't generalize from that into a blanket prohibition, and so on and through it all Sandusky keeps porking the poor kids because Paterno's 80 and senile and he gets confused.

Coyne Tibbets • October 23, 2014 9:45 PM

I have watched all this argument, extending apparently to the sub-bit detail, with bemusement.

To me, the whole thing seems quite easy to argue at big picture level.

We have John (Q. Public). John has a cellphone. John wants the private data on his cellphone protected; he wants exclusive control over its use. The big tech companies say John should have exclusive control over its use (well, except where they think they should have control).

NSA/FBI/CIA/DHS/DIA/DOJ/etc. (TLAs) insist John must not have exclusive control. TLAs insist every single cellphone must provide access that the TLAs can use at will. TLAs insist this is necessary because: End of the world!

So front door versus backdoor is a red herring. The real issue is: Does John have exclusive control or not?

TLAs promise that they will build an access system that is secure. This is unlikely, given that (1) they must be able to identify a particular phone and learn the correct some-door key for the phone in much less than 1 second; and (2) they will almost certainly consider it necessary to access all phones en masse.

TLAs promise that only they will use this technology. This is unlikely given the facts that (3) we have already seen that Stingray has apparently been shared with every podunk police department that can wave a check; (4) TLAs will immediately share this technology with the "favored five" foreign nations who will then share it...wherever.

TLAs promise they will not misuse the technology. This is unlikely given the fact that (5) TLA's routinely turn every single surveillance tool at their disposal to a "weapon of mass Rights destruction".

TLAs promise they will not allow misuse of John's data. This is unlikely given the fact that (6) they have already shown that they don't care about misuse of the data for "constitutionally respectful" activities like parallel construction; (7) that they already misuse personal data routinely in their queries re spouses, lovers, and celebrities.

In a sensible world, the argument would already be over: John 1, TLAs 0.

Clive Robinson • October 23, 2014 10:36 PM

@ Skeptical,

Don't think of N as the number of "nodes" on a graph; it will lead your thinking astray. Instead regard them as objects or entities within a set that should be fully independent of each other (except where required through mandated and controlled interfaces).

However, they are usually anything but fully independent; thus they have relationships with all the other entities within the set that forms the system, some intended by design but most not. It is these relationships that form the complexity of the system at any level, not the objects or entities at that level. Thus an entity at any level is defined by its bounds, without which it would be many entities at that level.

To see this in hardware, consider the power supply: it is common to most if not all entities within the system, and unless very stringent precautions are taken, the entities can "see each other" through the power supply; thus you have a large number of relationships that are not mandated in the design. Similar happens with long PCB tracks that run adjacent to each other: crosstalk provides more relationships between entities. Then there are inductive and capacitive coupling issues to be considered, and the relationships they form. These unwanted relationships between the entities provide unwanted complexity; whilst eliminating them can be difficult at best, reducing them by passive means reduces the unwanted complexity. However, in quite a lot of systems engineers are trying to "cheat the test" by using active methods, which, whilst they might sneak the system in under the EMC test masks, have added complexity in an undesirable way. Because they have not reduced the undesirable energy with information impressed on it, they have merely spread it over a wider bandwidth, leaving it for an eavesdropper to despread and recover the information.

I note you say you have never compiled a program, so some of what I'm going to say may be on the verbose side (even for me ;-)

The first hurdle to deal with is that software development is a "field of endeavour where words have specific meanings that may not be the same as common usage"; one such word is "object", another is "entity", which makes life harder than it might otherwise be.

A computer can be considered from a 20,000ft view to have three main parts: the Central Processing Unit (CPU), which has memory and Input/Output (I/O) attached to it. Active programs have their compiled instructions and current working data in memory; data is entered and results displayed via the I/O.

The important point to note is that both data and instructions are stored in the same area of memory, which has been assigned to the user by the operating system. Secondly, the CPU cannot tell the difference between data and instructions, as both consist of bits that can be set or clear, grouped together in bytes and words of various sizes that are usually a small power of 2 (i.e. 8, 16, 32, 64, 128 bits) but not always (IEEE has some that are 80 bits in size). Thus the CPU needs to be told which are instructions and which are data; this is often done by a standard header that is included in the program image at a specific offset from the beginning of the assigned memory block (look up ELF on wiki if you want the nitty gritty details).
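The header idea can be seen directly: the first few bytes of any ELF program image carry a magic number identifying it (a small sketch; the synthetic header bytes below are for illustration only):

```python
def elf_class(header: bytes) -> str:
    # An ELF image begins with the magic bytes 0x7f 'E' 'L' 'F';
    # byte 4 (EI_CLASS) says whether it is a 32-bit or 64-bit image.
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF image")
    return {1: "32-bit", 2: "64-bit"}[header[4]]

# A synthetic 64-bit ELF header prefix, purely for illustration:
print(elf_class(b"\x7fELF\x02" + b"\x00" * 11))  # 64-bit
```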

Program instructions fall into two main groups: those that work on data values, often called logical and arithmetical instructions, and those that affect the flow of processing, often called control instructions. Usually it is undesirable for the program instructions to be changed during execution, and when it happened in the past the program either behaved very oddly or tried to use memory outside its assigned block, which would cause a "segment fault" exception to be raised by a part of the computer called the Memory Management Unit. Due to the activities of various attackers, many modern systems now detect an attempt to write to the area of memory reserved for instructions, and the MMU raises a write-protect exception. Similar fixes have been found for other common attack vectors. The point to note is that if the program had been written correctly, none of these attacks would be possible anyway.

The reasons these attacks can happen are manifold, but they boil down to issues with data and a lack of checking. Or to put it another way, "programmers are optimists" and don't code defensively. But to be fair, defensive programming is generally considered a waste of resources by managers, so programmers are either not encouraged or actively dissuaded from doing so.

This problem was recognised years and years ago, which is why there are "type safe" programming languages around. But type safety is not the only problem. Programmers generally don't check the "range" of data when they change it, and quite often don't check the range, or don't check it correctly, before using it. And even when they do, they generally do not handle out of range data effectively; nor, for that matter, do they check return values when they should or develop sensible exception handlers. But worse, they often don't initialize memory correctly prior to using it. The reason for these sins is that doing the job properly takes a very large amount of extra code...
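A minimal sketch of what "doing the job properly" costs in extra code. The function and its bounds are hypothetical, but each check corresponds to one of the sins listed above (type check, sanity check, range check, and an explicit out-of-range policy):

```python
def scale_reading(raw, lo=0, hi=1023):
    """Defensively map a raw sensor-style reading onto 0.0..1.0.

    Hypothetical example: the checks outnumber the one line of real work.
    """
    if not isinstance(raw, int):
        raise TypeError("expected an integer reading")   # type check
    if hi <= lo:
        raise ValueError("invalid bounds")               # sanity-check configuration
    if not lo <= raw <= hi:
        raise ValueError("reading out of range")         # range check
    return (raw - lo) / (hi - lo)                        # the actual work

# Callers must handle the failure cases instead of ignoring them.
try:
    fraction = scale_reading(5000)
except ValueError:
    fraction = None    # explicit out-of-range policy, not a crash later
```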

In type safe languages data is not just stored in memory; it's stored in a way that allows the type of the data to be easily identified. There are various ways to do this, but nearly all are amenable to hardware based checking via metadata tags or the MMU. However it is often not done, which leaves ways open for subtle bugs to mess things up, due to the likes of not range checking pointers, loop counters, etc.
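In software, the idea can be sketched in a few lines of Python (a toy illustration of tag checking, not any particular hardware scheme):

```python
class Tagged:
    """Minimal sketch of tag-checked storage: every value carries an
    explicit type tag, and operations verify tags before acting, which
    is roughly what hardware metadata tags would enforce for free."""

    def __init__(self, tag, value):
        self.tag = tag
        self.value = value

    def add(self, other):
        if self.tag != other.tag:          # the check hardware could do
            raise TypeError(f"tag mismatch: {self.tag} vs {other.tag}")
        return Tagged(self.tag, self.value + other.value)

# Same-tag values combine; mismatched tags are refused outright.
total = Tagged("metres", 3).add(Tagged("metres", 4))
```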

The underlying problem with a lack of type and range checking is that you are not setting enforceable "bounds", thus you cannot easily group blocks of instructions in a way that reduces complexity. If you can set bounds via hardware or limited use of control instructions, then you can group blocks of instructions into a single "entity" for complexity reduction, rather than have them all count as individual entities. However, knowing how to do this sort of modularized programming effectively and efficiently takes a lot of hard won skill, and few programmers have it.
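The arithmetic behind the complexity-reduction claim is easy to illustrate: n fully independent entities have n(n-1)/2 potential pairwise interactions, and grouping them into bounded modules collapses that count. A small Python sketch (the module sizes here are arbitrary):

```python
def pairs(n):
    """Possible pairwise interactions among n independent entities."""
    return n * (n - 1) // 2

# 100 free-standing entities that may all interact with each other:
flat = pairs(100)                      # 4950 potential interactions

# The same 100 entities grouped into 10 bounded modules of 10, where
# only whole modules interact across enforced interfaces:
grouped = 10 * pairs(10) + pairs(10)   # 450 internal + 45 inter-module

print(flat, grouped)
```

An order-of-magnitude drop, just from setting enforceable bounds.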

It's one of the reasons why I advocate hardware solutions to type and range checking, and high level languages with built-in execution signature checking via hypervisor to detect problems as early as possible. Whilst we know how to build such systems, currently only research systems are including some of the smaller parts.

Nick P • October 23, 2014 10:39 PM

Re Susan Landau paper: Did NSA's IAD Bring Great Benefits?

I've just done a point by point review of these two papers: Landau's and Bell's. Landau believes NSA's Information Assurance Directorate has done a lot of good for information assurance despite SIGINT efforts. Bell says they did at first, then massively did the opposite, and still do. In other words, one is a heroic story about IAD doing everything they can for us despite the SIGINT mission, and one portrays IAD as one of the worst things to happen to INFOSEC (and best to SIGINT). Read Landau's paper first, then Bell's. I'm going to check Landau's claims using Bell's, which I've already peer reviewed and mostly agree with.

Combining Landau's and Bell's Claims Into One Timeline

Landau starts by pointing out that the NSA has two missions, SIGINT and COMSEC, and that the Information Assurance Directorate tries to build/evaluate security solutions and gets little credit. We're in agreement here to a degree, as even I benefited from their activities, largely in Orange Book days. Military, academia, and TCSEC gave us much great work early on.

She points out that much government traffic went over insecure COTS lines in the 1960's and 1970's. NSA knew Soviets were tapping microwave relays. They tried to build a secure system under the Ford administration, but it was cancelled during the Carter administration. Carter said to focus on defense contractors and government. This led to the STU. In the 1970's and 1980's, NSA told many private firms Soviets spied on them despite not being required to do so. p5-7

Nick's note: So far, they're beneficial.

Bell points out that, from the 1960's onward, Tiger Teams were pentesting commercial and government systems. They won every penetration test. I'll add that the MULTICS Security Evaluation in the early 1970's showed how thoroughly insecure computers were. Military, DOD, NSA, and defense contractors already knew they were very insecure. There was already a published risk that outsiders or insiders could attack the systems.

Landau says, in the 1970's, the private sector starts making crypto. NIST wanted to standardize an algorithm. They ended up with an IBM-supplied, NSA-modified algorithm with a 56-bit key size. It was specifically designed to be breakable by NSA but not others for a period of time. Diffie and Hellman also invented digital signatures in this time frame. RSA arrived. Government didn't want its confidentiality aspect used outside government. Crypto wars begin. p7-9

Bell says that, in 1977, Steve Walker begins the computer security initiative. "The three parts of the initiative were a technical center for product evaluations, criteria against which evaluations could be performed, and a technical conference dedicated to computer-security issues." By 1981, the Computer Security Evaluation Center was established to do this. Very importantly, the U.S. government wouldn't buy a product unless it met a high enough security standard for the use case.

Nick's note: So, in parallel, we see the NSA allowing an algorithm with known weaknesses hoping nobody else will exploit them and another in government trying to create the conditions for high security products to emerge in market. Props to the brilliant Steve Walker. The NSA are endorsing risky, temporary COMSEC while ignoring endpoint security. The former argues for extra scrutiny on them, while the latter wasn't subversive: NSA was a COMSEC culture that had to learn computers were as important as crypto (source: BLACKER paper).

Landau says NSA was worried about private sector competition in the 1970's. Yet, by 1984, they created another CSEC-style scheme, for crypto, called CCEP. The private sector could submit a product idea and justification to the NSA, then build it with their own money (and NSA help) if NSA accepted it. Also in 1984, the Reagan Administration made DOD in charge of protecting contractor systems even if they weren't secret. In 1987, Congress passed the Computer Security Act, putting NIST in charge of INFOSEC standards. NIST and NSA created a working group, half from NSA and half from NIST, which could draw on NSA expertise. NIST wanted to standardize RSA. NSA opposed this and created their own, weaker standard. They also pushed the Clipper Chip for escrowed encryption, which got blasted and failed miserably. p10-12, p23

Bell says from late 80's to early 90's, the CSEC and private market were cranking out one high security system after another. Market developed it with good incentives, NSA IAD certified its security, government bought them, and academics built on them. The market quickly boasted three high security products with almost a dozen medium security products. Steve Walker's plan (and CSEC) was working. Security was happening.

Nick's note: So, they leveraged Walker's concept in crypto after it showed success in COMPUSEC. They encouraged these results for use by them and contractors, with more of the latter now. Yet, on the crypto side for the public, NSA was once again preventing adoption of a high security standard in exchange for a weaker option. They also tried to reduce the risk of their subversion by making that option stronger than most adversaries could break. So, for NIST crypto, NSA is harming public security while IAD's expertise is invaluable to the new secure products. NSA then pushes a compromise in escrowed encryption, which still has significant risk.

Landau says the export battle was the biggest between private industry and NSA. Companies didn't want to reduce marketability by weakening keys. RSA got a speedy approval with weaker keys. Congress started backing strong cryptography. The FBI came in swinging and did more for NSA's goals than NSA did: CALEA, stronger support for key escrow, and less support for strong crypto. Studies showed encryption would have positive economic effects. Pressure on NSA grew. They moved to look into regulation rather than legislation. By 2000, retail could use strong crypto, other uses needed licenses, and government sales were still very conditional. Anything protecting communications infrastructure was restricted. Industry opposition ended at this point. p13-15

Nick's note: I'd probably clap at this good reporting if the CIA had not declassified this Top Secret document. It confirms Landau's account: private crypto as a threat to NSA, the battle with the private market, and the US Gov acknowledging the importance of crypto to the market. It diverges when it says the Clinton Administration solution in 1996 allowed strong crypto so long as there "was a key escrow system." They illustrate old and new requirements side-by-side. "Corporate third parties" would hold the decryption keys. Major companies from Apple to IBM to Sun to RSA supported this. So, this implies the end of the "crypto wars" might have gone very differently than what we publicly thought: (a) industry opposition ended in secret when they had a way to keep sales strong; (b) we got strong crypto with secret backdoors built in, instead of simply strong crypto with restricted sales.

Sidenote: I also noted when I first posted this that private citizens weren't "players" in the issue to the CIA analyst. They only cared about the competition, not the people's needs.

Backtracking to 1995: Bell says there were a number of evaluated, high security COTS products on the market thanks to CSEC. NSA's IAD then announces the MISSI initiative to develop Government Off-The-Shelf high security products. They also promote "risk mitigation" instead of "risk avoidance", where they push low to medium security products for classified information protection. A Boeing spokesperson confirms Bell's assertion that MISSI put the entire high security (A1 class) industry out of business. MISSI shifts focus away from endpoint security and toward more crypto, like Fortezza. NSA itself starts promoting the use of low security products for integration of classified networks and processing of classified information.

Nick's note: In one fell swoop, NSA IAD kills whole high security market and pushes stuff they know can be hacked. They push this even for interconnection points between high risk and high sensitivity networks. Virtually everything they do here clears the way for black hats to hit our networks.

Landau claims NSA's endorsement of ECC is an unprecedented, straightforward recommendation of security for the private sector. In 1997, Brian Snow becomes lead contact for the NIST TWG. NIST starts the AES competition to replace DES. The work ends up being adopted by the Dept of Commerce. In 2003, they allowed AES in Type 1 systems if it's a Type 1 implementation. In 2005, they created the Suite B algorithms for commercial systems and provided LMR systems with these to improve security for first responders. They created checklists for securing products via the SCAP program. They also worked with the IETF process for securing communications infrastructure, with records publicly available. p16-20

Nick's note: NSA endorsement wasn't unprecedented, as NSA had already been accepting, tweaking, or rejecting COTS INFOSEC products for security for years by that time via IAD. Further, the SIGINT people on TWG already endorsed two weak algorithms that looked strong enough on the surface. So, their endorsement was as much risk as potential reward. Once IAD takes over, they start producing basic algorithms (not implementations) for various systems. They work at this level in IETF, although they allegedly corrupted IPSEC and similar standards. They leave off implementations, and we see an endless stream of security vulnerabilities there, including one they knew of per leaks. These are the kinds of flaws their Type 1 process eliminates. These are also the kind the enemy finds regularly. If anything, NSA is increasing the use of cryptography in a very risky way that led to a bunch of compromises by many and subversions by them. It had benefits in stopping run-of-the-mill criminals, but left us vulnerable to the nation states they claim to protect us from. Hmm.

Back to Bell. DOD replaces CSEC with Common Criteria. The RAMP promise of CSEC allowed a certification to be maintained so long as the product didn't change, with recertification only of things relevant to a change. NSA backtracks and tells existing high security vendors they must recertify at potentially tens of millions in cost. They then start creating their own products (eg NetTop, DTW, HAP, MDDS) to solve the problems that the high security products already solved. These products are rated EAL4 or below: using low to medium security methods that can't beat sophisticated attackers by IAD's own standards. Then, IAD starts promoting these low to medium security products to be used in place of high security COTS on the most critical integration points and systems.

Nick's note: So, IAD is promoting algorithms in a risky way, then they do everything they can to let high security solutions wither and push low to medium security solutions for highly sensitive public and private networks. IAD certifies high security solutions, even creating internal ones. So, they certainly know what they're building will be riddled with holes, esp against nation states. Heck, EAL4 can't stop regular black hats (see Windows). Why is IAD creating and pushing low security methods across the board if they have access to *existing* high security products? And components of these got merged into very popular Solaris and Linux operating systems. It doesn't aid assurance at all, but it surely makes SIGINT child's play at every level of adoption of these solutions.

Landau says NSA originally saw private sector competition on cryptography as a problem. With the 1984 CCEP program, they made private-sector partners that built crypto for them if NSA approved their proposal. In the late 90's, that turned into the UPP efforts, where they could build it for anyone so long as IAD approved it. In 1994, SECDEF William Perry ordered that COTS must be used unless a custom military design was necessary. The 1996 Clinger-Cohen Act required use of commercial tech where possible. p23

In 2002, Landau says IAD technical director Richard George claims NSA will use COTS products wherever possible, work with the private sector to improve their security, move DOD from a risk avoidance to a risk mitigation model, and make U.S. products the best. NSA IAD's CSfC effort combines NSA tech and COTS tech in layers to improve security. COTS must meet NSA requirements. They discuss the method in public, too. p24

Nick's note: These were new to me and I'm glad she mentioned them. Previously I thought it was just a policy to use plenty of COTS for its advantages. Nonetheless, the IAD had already certified a number of high security COTS. Yet, they promoted low security GOTS and COTS products instead. Why is that?

Landau says the Cryptographic Modernization Program leveraged COTS to modernize and standardize DOD COMSEC systems. A multi-decade effort still in progress. One problem, though, is they had to ensure correct use in the field by making sure security was always on by default. p25

Nick's note: This is misleading. I've written plenty here on this program and its products. These were COTS, but not in the normal sense. Defense contractors build these using classified technology, processes, algorithms, and protocols. The NSA certifies them using a classified methodology. There are sometimes Type 2 (eg Suite B) versions of them, but they have different implementations. Only the Type 1 (eg NSA) versions and protocols are secure enough for the NSA. And you can't legally buy or possess them. Differences were mainly in RNG's, algorithm implementations, covert channels, a special version of IPSEC (without their vulnerabilities?), and EMSEC protection. EMSEC tells us they *definitely* leave us open to attack with public versions. Other stuff suggests they knew there were problems in public versions that their version didn't have. And lo and behold those areas are what hackers and academics have been slamming for this whole decade. So, IAD was again protecting them and making us vulnerable in certain ways [for SIGINT?].

Landau says, in late 90's, NSA needed a new kind of COMSEC device for temporary alliances. It should be secure enough for that use, but not reveal any important COMSEC techniques. The Cryptographic Interoperability Strategy enabled this with the Secure Sharing Suite, S.3. Industry had similar needs. p26

Nick's note: It's a good idea. I didn't look into it much, though. I was focused on the good stuff. ;)

Landau says information used to be hard to steal because it was physical. Today, it's in computers that are vulnerable to hackers. The information can be stolen more easily and in high volume. p26

This was occurring without public acknowledgement until 2005, when Time reported Chinese hackers stole classified info from "four military sites in 2004." They described other thefts of I.P. Then, dozens of companies started admitting this was happening to them. p27

Nick's note: The Cox report acknowledged in 1999 that China was grabbing our nuclear secrets and even eliciting academics. There was also plenty of coverage of hackers' exploits before 2005. If she's right, it's interesting that this particular story led to so many others admitting they'd been hit while larger claims like Cox didn't. A curiosity someone might like to explore.

"the Snowden leaks, which exposed a vast system of collection of content by the NSA’s SIGINT: bulk collection of domestic metadata, targeting of Internet communications and stored metadata of non-U.S. persons, highly targeted surveillance against close U.S. allies, tapping of U.S. Internet company inter-data center communications, etc."p29

Nick's note: I'm just dropping this one as is because I thought it was interesting that she said "bulk collection of domestic metadata." She didn't mention Snowden revelations that they were collecting bulk *data* domestically, targeting U.S. persons as opposed to just non-US, or giving it to a bunch of agencies using parallel construction to prosecute us. There's a world of difference there. Everything she presented is of the pre-Snowden view, except for surveillance against close U.S. allies. I wonder why she did that.

"NSA lost control of COMSEC field. IAD pushed stronger security out for people. SIGINT side, esp TAO, did the opposite by developing a variety of endpoint attacks that applied before encryption. These could negate IAD COMSEC protection. p28

Many suspected IAD's efforts were a cover for SIGINT capabilities. NSA leadership knew IAD's capabilities could be countered and IAD might have too. That's doesn't matter, though, because TAO tools were limited to highly targeted situations and IAD were broadly deployed to private sector. IAD capabilities could be effective even if some were bypassed by TAO. p29"

Nick's note: IAD knows secure endpoints and implementations are absolutely necessary. There's high security building blocks available in COTS. Yet, instead IAD develops and heavily promotes all kinds of stuff hackers (including TAO *and* China) can tear through. They push cryptosystems with weaknesses in protocol and implementation that would be obvious to them given they eliminate them in Type 1 processes. They block the public from getting the stronger tech, from crypto to EMSEC to certain high security products. And hackers and security researchers tear us up from protocols to OS's to firmware to everything else that high security COTS would've prevented.

Conclusion

Landau's paper was an interesting and enlightening read in quite a few ways. Unfortunately, the data from IAD activities from mid-90's onward contradicts her hypothesis. Schneier always says crypto is the strongest link. NSA was happy to provide solid crypto algorithms (without secure implementations) whose eventual COTS/FOSS implementations and underlying platforms could be shredded by hundreds of attacks. IAD knew better, had better, killed it all off except for a few for defense use, aggressively promotes what it knows will be hacked as secure, and avoids every path to a resurgence of the high security market. In practice, these have led to a ridiculous number of exploits. IAD didn't make us more secure: they destroyed every chance we'd have at it under the guise of giving us aid.

Fortunately, we have programs like Epstein's at NSF, DARPA, and even military labs (eg NRL, Air Force) that continue to fund high security work. Academics funded this way are leading the pack, producing one useful component at a time. All the critical components to a usable, high security system from hardware up have already been published. They just need to be developed, integrated and deployed via a high security lifecycle process. That some commercial vendors are already producing medium to high security COTS solutions on their own gives me hope this will happen. Meanwhile, everyone stay the hell away from IAD's recommendations unless an expert you trust looked over them and endorsed them.

Just like IAD used to do for COTS products when it actually helped us be secure. Oh, those were the good old days! :)

Figureitout • October 24, 2014 1:09 AM

Skeptical
--I get defensive and mean when I feel like I'm being attacked and someone is defending the attackers. Sometimes I feel obligated to return the favor if I can b/c even more defense-less people won't stand up for themselves.

I didn't say 100% automatic b/c I personally believe some things, like hacker bait that's isolated enough from what you're trying to protect, using obfuscated code that does way more useless jumping around, would be good for wasting an attacker's time; and then giving them things like banging your head on the keyboard and encrypting that.

I already saw your point and it's speaking w/o experience. Ask any engineer (security, hardware, software) which products they feel more confident about; you can't test all the states, many times it's not possible even in simpler systems! Ask any of them: it's a nightmare when you get a bug you can't find and it looks like code is being jumped over. Then you find it and it's a simple fix, which makes you want to scream again at all that time wasted on a dumb bug. Diagnosing bugs and security holes in escrow systems will add to this pain; the market doesn't want it anymore.

I brought up your lack of experience BTW, and it's very relevant, as it is in ANY topic. If I go to a medical forum and start spewing quackery and questioning time-tested observations and theories, I'll either get ignored or hopefully smacked down. Spreading disinfo and wasting time trolling stupid questions in security rivals doing so in medicine, 'cuz when you're dead security doesn't matter. A big example from history, which wouldn't surprise me if you used it, is early astronomy and the explorers who went around the world and didn't fall off into the abyss; they questioned the accepted reality of a flat earth and a universe that rotates around us. That and this don't compare in my eyes; and I don't see any sources (scientific ones, mind you) on your side supporting your statements.

Yeah "there's an app for that", here you go. Get one of these. Yet again, sorry showing some ignorance as programming on a smart phone would be a major pain than a laptop; just no. You develop apps on PC's w/ a keyboard. There's no beating a nice desktop set-up for your programming PC's, internet PC's, research PC's, gaming PC's, and now radio PC's. I'd recommend trying Python if you're inclined. I remember compiling my first program and being so confused at this magic, also getting off Windows and using other OS's; the thought of writing my own tiny OS or adding on/removing chunks used to seem way too hard. It's not if you push; and I still struggle w/ some basic programming as I go back-n-forth b/w what interests me, so it's nice to have saved code to refresh or re-use again.

Skeptical • October 24, 2014 5:20 AM


@Clive: Don't think of N as the number of "nodes" on a graph, it will lead your thinking astray. Instead regard them as objects or entities within a set that should be fully independent of each other (except where required through mandated and controlled interfaces).

What's the objection to using graph theory here? Nodes can be considered independent to the extent no edges connect them - and your description of the number of possible relationships among members of a set in a given system within your model actually is precisely that of a complete graph (i.e. a graph in which each node is connected to every other node).

Of course, once we start changing the model so that not every node has an edge with every other node, which we may if we know certain facts about the system we're modelling, the numbers you spoke of no longer necessarily apply.

I note you say you have never compiled a program so some of what I'm going to say may be on the verbose side (even for me ;-)

Ah, well that was sarcasm on my part in response to Figureitout's rude comment. I attempted to include some cues that it was sarcasm, but apparently the signal was not quite well formed enough.

I didn't quote the overview you gave, but I thought it was nicely done and certainly appreciate the spirit in which you wrote it.

It's one of the reasons why I advocate hardware solutions to type and range checking, and high level languages with built-in execution signature checking via hypervisor to detect problems as early as possible. Whilst we know how to build such systems, currently only research systems are including some of the smaller parts.

I'd speculate that we'll start to see more along these lines (though maybe not precisely what you have in mind) in wider distribution inside the next several years, depending on a few factors.

Skeptical • October 24, 2014 6:50 AM


@figureitout: -I get defensive and mean when I feel like I'm being attacked and someone is defending the attackers. Sometimes I feel obligated to return the favor if I can b/c even more defense-less people won't stand up for themselves.

I don't attack anyone, and in fact I don't even bother responding to the more childish attacks I read here.

It's worth noting, of course, that patterns of speech, particular language, phrases, words, references, even concepts used, can be used to link the author (authors really, though one predominates) of those childish attacks to pseudonyms relied upon for more friendly discussion in here.

I didn't say 100% automatic b/c

I already saw your point and it's speaking w/o experience.

This is really just a matter of logic. If a complex system implies (implies used in a logical sense) a less secure system, then "100% automatic" applies. If it does not - if there are factors beyond the measure being used to determine complexity such that a more complex system is not necessarily a less secure system - then the premise in Vagle and Blaze's argument is false.

Again, this is simply the application of logic.

Ask any engineer, security, hardware, software; ask them which products they feel more confident about, you can't test all the states many times, not possible even in simpler systems!

You're confusing factors you might like to see in a given system because they make it easier for you to determine how secure a system is, with the premise Vagle and Blaze used, which is a much stronger claim about complexity and its relationship to security.

It is absolutely true that, assuming zero knowledge about any two given systems and their components, the more complex system will be more difficult to evaluate. But that does not enable us to draw the conclusion that the more complex system actually is less secure than the less complex one.

Moreover, there is no reason, in many cases, to make the assumption that we're beginning with zero knowledge about a system or its components. This is why one might feel very confident that ADDING certain components to a system will increase its security - even if you may have increased the complexity of that system according to whatever measure of complexity you may be using.

I brought up your lack of experience BTW, and it's very relevant as it is in ANY topic. If I go to a medical forum and start spewing quackery and questioning time-tested observations and theories; I'll either get ignored or hopefully smacked down.

Fortunately I'm not spewing quackery, nor am I relying on any claims as to experience. I'm simply raising a question about what I continue to view as, at best, a highly contingent premise.

Yeah "there's an app for that", here you go. Get one of these. Yet again, sorry showing some ignorance as programming on a smart phone would be a major pain than a laptop; just no. You develop apps on PC's w/ a keyboard.

A keyboard?? Good Lord Figureitout, I was sarcastic with you in my earlier comment, but now you've really crossed a line. Real programmers don't use keyboards, okay? Real programmers spend days in the Yukon wilderness looking for just the right grain of hackmatack to cut, mill, and compress into punch cards, while simultaneously designing their program in machine code first. When a real programmer encounters a bug, he doesn't get frustrated and find a rubber ducky to talk to; he gives thanks for the opportunity to learn and then tramps out into the glittering snow to talk it over with a grizzly. A keyboard. Unbelievable. What's next? Vi? cc? gcc? These are luxuries real programmers know to be the weaknesses of a culture in decline. I'm simply appalled. Don't ever claim you're qualified to talk about these matters ever again.

Just to be clear - that was more sarcasm.

I remember compiling my first program and being so confused at this magic, also getting off Windows and using other OS's; the thought of writing my own tiny OS or adding on/removing chunks used to seem way too hard. It's not if you push; and I still struggle w/ some basic programming as I go back-n-forth b/w what interests me, so it's nice to have saved code to refresh or re-use again.

I think one major hurdle to learning any subject is the degree to which the subject matter is contextual, by which I mean the degree to which one part of the subject can be fully understood only if you know another part, and vice versa. Obviously, this isn't entirely the case, or we'd never learn anything. So going back and forth over different parts of a subject might actually be a useful way to approach it.

But the key is ensuring that you're learning things well enough to "chunk" them in your mind and combine them to approach new concepts and problems. For me that's always required a fair amount of discipline in addition to following my interests.

Clive Robinson • October 24, 2014 9:04 AM

@ Skeptical,

The reason I urge you not to think in terms of nodes on a graph is that in many people's minds such graphs have been imbued with assumptions --from either teaching or use-- that are not there (there are a sufficient number of pithy comments on such assumptions by mathematicians to fill a leaflet or three ;-)

I've fallen foul of people's assumptions in the past on this blog when talking about failure states. You can search for it, but basically I said that if an engine on a 747 was either functioning or failed (binary state), then with four engines there were sixteen states, of which fifteen were a failure state. Somebody could not get over the issue that I was talking about states, not the probability attached to any given state. Knowing the difference when designing safety critical systems is essential, otherwise that 10 billion to one chance will, as noted by Douglas Adams, happen about, oh, one in ten times ;-)
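The engine example can be checked by brute-force enumeration; a small Python sketch (my illustration of the state count, not a probability model):

```python
from itertools import product

# Enumerate every functioning/failed combination of four binary engines.
states = list(product((True, False), repeat=4))        # True = functioning
failure_states = [s for s in states if not all(s)]     # at least one engine failed

print(len(states), len(failure_states))
```

Sixteen states, fifteen of which contain at least one failure, exactly as stated; nothing here says anything about how *likely* each state is, which was the point of contention.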

But there is also a problem with this, which is: at what point is something too improbable to happen? It's been said that the number of possible permutations of a deck of cards is so vast that there is not enough time --expected-- in the life of the Universe for them all to happen from just shuffling a deck. Another example given of this is that if you knock a full glass of water off of a table, there is a small probability that it will land without breaking and without spilling a drop, but it is too small to expect ever to see it happen. Similar has been said about tossing a coin and having it land and stay on its rim, which is actually something I've witnessed twice in my life, very much to my surprise.
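For scale, the deck-of-cards claim checks out with one line of arithmetic; the shuffle rate and Universe-age figures below are rough, hypothetical round numbers:

```python
import math

deck_orderings = math.factorial(52)    # 52! ~ 8.07e67 possible permutations

# Generous hypothetical budget: a billion shuffles per second, sustained
# for roughly the age of the Universe in seconds (~4.4e17).
shuffles = int(4.4e17) * 10**9         # ~4.4e26 shuffles in total

fraction_seen = shuffles / deck_orderings
print(fraction_seen)                   # a vanishingly small fraction of orderings
```

Even that absurd shuffling rate covers a negligible sliver of the possible orderings, which is the sense in which "possible" and "expected to happen" come apart.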

Logically, you can say there is a possibility of a perfectly functioning piece of software... However, practically, outside of trivial examples it's not going to happen with the way we currently write software.

But in the case of the security of a front/back door, the question of the software being perfectly functional is only a very small part of the actual security of such a device. It also involves humans, who we know en masse are fallible, stupid, greedy, easily frightened, and easily conned. And it only takes one person in a very vast army to fail and the security is gone for good. As was once observed about secrets, two people can keep a secret if you kill the other one...

Matt Blaze has a claim to fame with regard to the failures of legal-access front/back doors. Back in the "key escrow" debacle of the Clipper chip, he showed that a system carefully constructed by what was regarded at the time as the best brains in the business (the NSA) had a major and easily exploited fault via the so-called LEAF.

You have to ask whether his statements about the security of the overall system are based on "practical security" or "logical security", because if it's the former, then the weight of evidence suggests he is correct.

Oh, and when talking about the security of such front/back doors, remember that, as far as we can currently tell, they will have to be designed around some kind of one-way function if unreliable and easily abused escrow systems are not to be used. The problem is that there is no solid theoretical proof that true one-way functions actually exist. Secondly, they will probably boil down to the use of the product of two prime numbers, on the assumption that it can be neither factored nor guessed. And, you will realise, this is based on a practical, not logical, viewpoint of security.
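Clive's logical point -- that any finite key is in principle recoverable -- can be illustrated with a toy factoring routine (the function name and the small primes are mine; real keys use primes hundreds of digits long, which makes this search practically, not logically, infeasible):

```python
def factor_semiprime(n: int) -> tuple[int, int]:
    """Recover p and q from n = p * q by exhaustive trial division.

    Logically this always terminates with the answer for any composite n;
    the security of prime-product schemes rests solely on the search being
    practically too slow, not on it being impossible.
    """
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("n has no nontrivial factors")

print(factor_semiprime(62615533))  # (7907, 7919)
```

Scaling n up buys time, never impossibility -- which is the practical/logical distinction the argument turns on.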

Which brings us back to your proposition: if we are to avoid a logical contradiction, we have to apply the same basic reasoning to both arguments. If we take your choice of the logical view, then we end up with a contradiction, because your argument is that logically it is possible to build a perfectly secure system. As I have just pointed out, the security of such systems holds only under a practical assumption; it is false under a logical assumption, because logically you can guess the prime, and yes, you can factor it out.

Thus, under that logical assumption, Matt Blaze is correct: such a system, no matter how correctly designed and constructed, is not secure under our current understanding. Personally, I cannot currently think of a solution to the front-door proposition that would be logically secure, because it would require the secret to be unbounded in size.

If you or anyone else can think of a way around this I would love to see the proof, and it would very probably make the discoverer of such a proof very wealthy.

Giant • October 24, 2014 2:54 PM

Comey's comments and his lack of understanding about security show how willing he is to trample rights in the name of -- well, I'm not sure what, since he hasn't provided any good examples. He's clearly unqualified to lead the FBI.

John G • October 24, 2014 3:18 PM

Comey is wrong for one big reason:

Encrypted information is no more beyond the reach of law enforcement than the equivalent information stored in the brain or in an unknowable or unprovable location.

In a ticking-time-bomb scenario or other exigency, the government can't compel a suspect to divulge the location of the victim or the bomb without risking the compelled confession being excluded at trial.

This is no different if the information is encrypted and the key or passphrase is stored in the suspect's mind.

The only practical difference may be that encrypted information can't be extracted without overt use of dirty and possibly illegal law-enforcement tactics.

Skeptical • October 24, 2014 3:23 PM

@Clive: Okay, so 15/16 possible combinations of states for 4 747 engines are failures if each engine has two possible states (run/fail) and any single fail state in a combination renders the combination of states a failure?

Sure, if we're talking about possible states (in which case only one combination of states, 1111, is not a failure, while every other - 1110, 1101, 1011, 0111, 1100, etc - is a failure).

But I'd note that you begin using probability in discussing whether a given system is secure (it only takes one weak person to render an army insecure, etc).

Enumeration of every possible state may be useful in design, but it seems to me that you'd have to start thinking about probabilities early in deciding how/whether to progress.

If not, you can arrive at some fairly bizarre consequences. Which is safer - a 747 with three engines or a 747 with four engines? We'd have to examine quite a lot to give a reasonably certain answer of course, and simply enumerating possible engine states won't give us that answer. However, we can certainly say that a 747 with 4 engines is more complex than a 747 with 3 engines.

Thus, again, it would be folly to deduce from the fact that system 1 is more complex than system 2 that system 1 is less secure than system 2. And this suffices to sink Vagle and Blaze's argument (at least as they described it in the linked article).

And ultimately, of course, the policy question of whether any system should be adopted will come down to more than its probable security - such as the degree to which the consequences of a failure in security would be mitigated, the degree to which it can be monitored, tested, and repaired, etc.

Answering these questions before there's any concrete proposal of some sort on the table? I just don't see it. To me this is like someone telling me that adding a flight computer to an aircraft will render it more/less safe without telling me anything about the specifics.

Chris Abbott • October 24, 2014 8:06 PM

@Skeptical

I have a few simple answers to the modern-jet/complexity thing, as well as to the "backdoors are a good thing" argument. Yes, modern fighters are more reliable despite greater complexity. The thing is, that reliability cost billions of dollars; more complexity = more expensive, and that's just one point. The other thing about the so-called "secure backdoor/frontdoor/whatever" is that there is a key lying around somewhere. Just like the key-escrow argument in the '90s: if somebody has access to the key, it's vulnerable. If there were a way into phones that the Keystone Cops could use for criminal investigations, it would be very easy to buy someone off for access to it. It would be a goldmine for foreign intel agencies, and they would get access to it one way or another.

Figureitout • October 24, 2014 8:33 PM

@Skeptical
--It is a matter of logic, tested logic, that you're having quite a hard time grasping due to a lack of actual engineering experience -- not untested, self-inflating logic. Also, given the abuses by authorities continually exposed and felt by citizens all over the globe, having these holes puts innocent people unnecessarily at risk.

You're simply making my point by not being able to evaluate and observe all states and complex interactions while arguing the "100%" thing; there is no 100%, period, and that's a pointless argument. Check out RF testing for an idea of some of the complexity I'm talking about.

Yes, you are spewing what is now regarded as quackery in the security field, and it is dangerous for new readers, so please stop saying it. The opposite view is accepted hands down by many people who've done more work on these problems than you and I combined.

I would pay money to see you raise your issue w/ a popular security expert.

Thoth • October 24, 2014 8:38 PM

@Nick P
So, in conclusion from your report above, that explains why China, Russia, and hackers within and without are poking so many holes in US defense systems. The NSA sells and pushes COTS and GOTS products that are low assurance, thinking they can get away with it; government organizations, including the NSA itself (probably they pretend to use them but never put them in the high-sensitivity areas), and contractors use them, and then China, Russia, and other attackers come knocking and holes appear all over. That's like shooting themselves in the foot.

If a researched technology comes out of an intel agency (NSA/CIA...), I think we can safely guess it is backdoored beforehand, and we should be extremely cautious with their stuff?

Clive Robinson • October 24, 2014 9:13 PM

@ Skeptical,

With regard to your opinion of the Vagle and Blaze article, you have moved your position from:

    This seems false. Greater complexity does not necessarily imply less security - and I'm frankly puzzled as to how the authors could write that it does as though this were a law of physics.

to:

    This is really just a matter of logic. If a complex system implies (implies used in a logical sense) a less secure system, then "100% automatic" applies. If it does not - if there are factors beyond the measure being used to determine complexity such that a more complex system is not necessarily a less secure system - then the premise in Vagle and Blaze's argument is false.

If you look at the article you will find a paragraph that begins with,

    The problem is chiefly one of engineering and complexity.

Which does not talk in absolutes, and thus in no way precludes the very minuscule probability of a complex but secure implementation. It actually talks about our practical experience to date and indicates the observed correlation between increasing complexity and the increasingly high probability that a system will not be secure.

That is it is an observed characteristic, not "a law of physics".

After all a person being inherently good does not preclude them from killing someone, an act that most would consider bad.

However, as I noted in my previous post, based on our current understanding this "golden key" idea cannot be 100% or absolutely secure in the logical or theoretical sense, because any finite key remains either guessable or deducible by brute-force searching. Thus Vagle and Blaze are correct from the logical and theoretical perspective when they state,

    The problem with the “golden key” approach is that it just doesn’t work.

Wesley Parish • October 24, 2014 9:41 PM

@Skeptical

I think the (short) answer to the jet-fighter/engine issue is that the complexity in those cases is hierarchical -- or, to cut to the chase, those cases are hierarchical, as in computer hardware and software development and deployment...

That is how the 256-times-256 problems get broken down into more manageable sixteen-times-sixteen chunks. Or how they should be broken down.

And since the sixteen-by-sixteen chunks are more easily analyzable, they are more easily solvable, so the possible errors get reduced and fall into two categories: internal and connecting.

Once that hierarchical division of labour breaks down, you get the full fury of the 256-times-256 problem -- using 256 times 256 as a convenient shorthand for the actual complexity of the issue.
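Wesley's 256-times-256 shorthand can be made concrete by counting pairwise interactions; the flat-versus-hierarchical split below is an illustrative assumption:

```python
def pairwise(n: int) -> int:
    """Pairwise interactions among n components: n choose 2."""
    return n * (n - 1) // 2

# Flat design: 256 components, any pair may potentially interact.
flat = pairwise(256)

# Hierarchical design: 16 modules of 16 components each; analyze pairs
# within each module, plus pairs of modules at their interfaces.
hierarchical = 16 * pairwise(16) + pairwise(16)

print(flat, hierarchical)  # 32640 vs 2040 pairs to reason about
```

The decomposition cuts the number of interactions to analyze by more than an order of magnitude, which is why losing the hierarchy brings back "the full fury" of the problem.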

(Mechanical and aeronautical engineers have it (relatively) easy: it takes a lot of hard work to do as Willi Messerschmitt did with the Bf109, and screw up the landing gear so thoroughly. But then he had the help of RLM's daft specifications. It's quite easy for the programmer to make a terrible hash out of writing "Hello world" if he's not thinking.)

Osama bin Laden lives! • October 24, 2014 9:48 PM

This is too funny, skeptical up early at 6:50, clickin the ball bearings with advanced textual analysis and trying to figure out which of you sneaky bastards stole his shtrawberries. This is how beltway mediocrities catch terrorists behind every bush but NORAD can't stop 4 crap Boeing cattle cars yawing around like Orville and Wilbur.

Really makes you wonder who he suspects.

Nick P • October 25, 2014 9:43 AM

@ Thoth

" That's like shooting themselves in the foot."

It seems so. The only question is "Are they aiming at the foot before each shot or is it a convulsive finger twitching that causes it accidentally?" Either way, it's best to not let them handle the gun.

"If the researched technology comes out of an Intel Agency (NSA/CIA ...), I think we can safely guess they are backdoored beforehand and to be extremely cautious with their stuff ?"

Sure. However, we must assume that *everything* is insecure, then demand an argument for its security. That's the default going back to the Orange Book. That the source might have malicious individuals in the software lifecycle was presumed in A1-class development. It's why they had so many special rules. Personally, I think they underestimated the problem as there wasn't work on the personnel and physical side of lifecycle protection. In any case, even U.S. government's own standards say we need extremely rigorous, well-specified, pen tested, and independently verified systems if malicious developers are concerned.

Only a handful of products on the market meet that standard. So, the rest are all untrustworthy until proven otherwise. See how easy that was? :)

Skeptical • October 25, 2014 4:16 PM


@Clive: I don't see how my position has changed in what you quote. I'd also point out that in one of those quotations I was responding to Figureitout, using phrases he had introduced.

As to the article, it's simply pointless to speak in the abstract about whether an addition to a system will render it more or less secure. You glide effortlessly from speaking about possible states to speaking about "minuscule probabilities", even though you've noted that these are very different things.

Let me ask you: will adding a flight computer system to an airplane make it more or less likely to crash? The answer is: we have no idea until we learn specific details.

The article is quite clear that it considers such specific details unnecessary. To quote: "if we design systems with lawful intercept mechanisms built in, we have introduced complexity to the system, and have therefore made the system inherently less secure in the process. This is true even for systems with designs that are open for all to see and inspect. Thus, the difference between a 'front door' vs. a 'back door' approach to law enforcement intercept of encrypted communications is purely semantic."

Unless the phrase "inherently less secure" means extremely little here, in which case it has no implication at all for the policy discussion, the authors clearly intend it to mean: this will make the system meaningfully less secure, so we should not do it.

You're stretching to provide a defense of the article by watering down the meaning of "inherently less secure", but the cost of the defense is that it reduces the conclusion of the article to an almost meaningless triviality.

@Wesley: Ah, but this is my point. It's not just whether we've added complexity - there are lots of additional factors (I've pointed out some in this thread) we need to know about before we can say whether adding f and g to system Y makes Y more or less secure.

There is a much better, cautionary argument that Nick P (and Chris Abbott) makes above, and which I think many, including you and me, really have in mind, which condensed is simply: there's a long history of vulnerabilities in similar programs, and it takes a lot of careful planning and resources to create and implement a system along those lines that will be secure.

The conclusion of that argument, though, is not the same as Vagle & Blaze's argument. Vagle & Blaze seem to want to dismiss the idea outright, on the basis of the reasoning that I quote here. But the empirical argument simply indicates caution. The empirical argument says to us it may or may not be possible and feasible, but we should approach a claimed solution with skepticism and lots of questions.

As I said, I have no problem with the empirical argument. My only point here is that the a priori rejection of the idea, which is what the article by Vagle & Blaze advocates, doesn't stand up to examination.

@Figureitout: Asking questions and thinking things through is not harmful to anyone, new readers or not. My attitude towards people is one of respect; ideas and arguments need to be tested, regardless of who is making them. Your comments in this thread have consisted almost entirely of insults designed to shut down questions and discussion.

As to this comment of yours -- "I would pay money to see you raise your issue w/ a popular security expert" -- unless there are economic or social ramifications that make silence or feigned agreement the better course, never be afraid to ask questions about an argument, nor to raise a counterargument to which you cannot see an answer. If an expert on a subject tells you to believe p, because of argument A, and the argument does not make sense to you, then you do yourself and the expert a disservice by not asking questions. At a certain point, you may just have to take the expert's word for something. If p happens to be a mathematical theorem, the proof of which consists of 200 pages of dense propositions drawing upon subjects you know nothing about, then eventually the expert's answer will need to start going through those 200 pages -- and at that point, you can either commit to learning enough to understand the proof, or you can take the expert's word.

But the proposition as expressed in the article does not fall into that category. And often, for propositions that do fall into that category, the popular explanation may well be designed simply to illuminate the general concept - in which case the counterarguments you have may turn out to be valid as applied to the popular explanation, but will ultimately be answered by the 200 page proof. The expert in such a case will (should) acknowledge the shortcomings of the explanation he gave you, and point to the proof as the real explanation.

If the expert is off his game, or tired, or perhaps is simply an asshole (and if you wander some of the open source software sites or lists, you'll see plenty), then he may provide a less polite and less enlightening response. Usually it's pointless to respond in kind - just smile, thank him for his answer, and seek other sources. After all, he may simply be having a bad day. Unfortunately, some of those assholes will really be quite good experts - they're just not very good teachers.

It's also easy for an expert to think he's communicating more than he really is. Someone with a lot of background in a subject can make statements that assume volumes of unspoken information - and he can be unaware that his audience, not having those volumes at their fingertips, is not understanding what he really means to say. A brave question or two can show him where his communication has broken down, and give him a chance to rectify the failure. Of course, he may also hear your question as though you do have those volumes of unspoken information at your fingertips, and in that case he'll be puzzled or annoyed by it. Awareness of this frequent difficulty is part of what separates experts who are good teachers from experts who are bad teachers.

Nick P • October 25, 2014 5:15 PM

@ Skeptical

Sorry I forgot to respond to your post.

"And, to the extent that government utilizes a framework meeting sufficiently rigorous standards, it is not an argument against such inclusion."

Our previous discussion shows I have a few more conditions than that. ;) However, purely on the technical aspects, I'd rather they do it in a provably strong way if they're going to do it at all. And something like this, which essentially bypasses all our protections, should only be implemented using the strongest security-engineering techniques known -- just out of responsibility. If they did it, though, I'd be a lot more welcoming of it if I had high confidence in its access method.

"I have no problem with the logic of the argument, and it comes down of course to the particulars of a proposal to include a lawful means of access. So to a great extent, we're actually in agreement."

Seems so. I've designed quite secure backdoors for updates and remote administration, so I know for a fact it can be done. There is one other technical issue I'm not sure I brought up: the TCB of the system has to be as secure as the backdoor. In security engineering, the system as a whole is only as strong as its weakest link. If they built the backdoor on top of vanilla firmware, Android, etc., then it won't protect the system no matter how strong it is. They'd have to build it into the chip itself, in secure firmware, on a secure kernel, etc. The untrusted stuff runs on top of it or alongside it without the ability to access it -- although the trusted part can see what those things are doing and potentially have write access to them.

So, my condition expands to requiring the backdoor *and its TCB* to be a high-assurance design. This is true of any system claiming to be secure. Fortunately, a number of such systems have been designed as well (minus the backdoor) -- unless you consider a VPN a sort of backdoor: the BLACKER VPN was one of the first A1-class systems, Aesec's GEMSOS had its "crypto seals," a Navy lab did an EAL7 version of an IPsec VPN, and so on. Like I said, there are plenty of precedents for pulling it off, but it has to be done right from the hardware up. Otherwise the likes of China and Russia will tear it to shreds, at even more cost to our country.

Figureitout • October 26, 2014 2:09 AM

Skeptical
--If they are questions that should already be answered by the questioner taking THEIR time to inform themselves from open-source material and the empirical evidence of reality so far as we can see, then they are harmful and a waste of time. It is laziness on your part, also looking for the "clean" solution; there won't be one -- mainly because, whatever you have to do under extreme constraints, attackers must at the very least follow suit.

How much do you question Newton's derivatives and integrals? Markov chains? Are they bullsh*t? What tests did you run? Bayes' theorem? How many tests did you run? Other calculus theorems -- surely there must be a case where they're not true.

I used to question everything I learned too, but I really pissed off a lot of people. Then you find yourself at ground zero, swirling in nothingness, beginning to wonder if what you see is even real -- I won't continue, but that is the path you're heading down.

Again, you're trying to argue your way around the work of other security professionals who are likely simply smarter than you. I would like to see what they say when you make your "complexity doesn't affect security; it could be that adding an FBI backdoor makes a system more secure" kind of argument.

On the "asshole" part, again, don't tell me about assholes. Part of it is that if you don't know what you're talking about, you're going to be shut down real quick, and I like that. But I know; I've been attacked and hacked more times than I even know. I would probably remain extra cautious if I were you... It got so bad and out of control that I stopped caring, for the sake of not wasting my time on anger and on what I'd do to them when I got the chance. I got back at as many of them as I could, some of whom to this day still haven't even noticed; the most important ones did. It's gotten so weird: my professors at my school seem to be informed of things, and one would repeat what I say every week in class... At my work I can see what's going on; it's not very hard. It's not cool that these attacks are extending to my work -- actual hardware has been stolen from my workplace, and more thefts are suspected. What if the f*cking CEO is reading, and has been lied to, and the other employees have all been lied to, while my work is attacked? I like where I work now and prefer to work there as long as I'm in school, but not if the Feds have infected the building. The company has sold sensors for the White House in the past, so I don't know if that means anything... I'm so tired of it; it's so worthless, so I'll continue wasting the time of everyone involved. I've found some weird code straight up talking about backdoor keys in our products -- WTF? What really pisses me off is the thought that bugs we're investigating and fixing might be false bugs induced by attackers. It's so worthless; if this is truly state-sponsored, it's no wonder this country will go broke and continue getting owned by China and Russia.

If my email was deleted, so what. School accounts, so what. My new smartphone was already hacked as I was running tests trying to evaluate some malware and how it spreads; it means nothing to me. I can still operate and get around it. Then they got my credit card. So my strategy is a completely separate account for my purchases: continue to monitor charges and cancel cards as many times as I have to. Unfortunately I have $2,000 to my name, no health insurance, and I live in mommy/daddy's basement. I'm f*cking broke. So I don't give a f*ck.

So keep crying about no one teaching you. No one cares in the security world; you have to have a thick skin. And a hardened system.

Skeptical • October 26, 2014 7:56 AM


@Figureitout: "How much do you question Newton's derivatives and integrals? Markov chains? Are they bullsh*t? What tests did you run? Bayes' theorem? How many tests did you run? Other calculus theorems -- surely there must be a case where they're not true."

If there were a part of a proof that I did not understand, I would certainly ask.

"I used to question everything I learned too, but I really pissed off a lot of people. Then you find yourself at ground zero, swirling in nothingness, beginning to wonder if what you see is even real -- I won't continue, but that is the path you're heading down."

I'm not saying that you should question absolutely everything. There's nothing intelligent about paranoia -- it's just imagination and fear. At the extreme it becomes a serious issue, and it could be a sign to see a doctor (I'm not saying you need to, but having seen a friend develop paranoid schizophrenia, I can say that it's nothing to f*** around with). I'm saying that if someone makes an argument to you -- whether as a professional or as a citizen -- and you genuinely do not understand a step in that argument, or have reason to doubt it, then, assuming there are no social or economic reasons to do otherwise, you should raise your question.

"Again, you're trying to argue your way around the work of other security professionals who are likely simply smarter than you. I would like to see what they say when you make your 'complexity doesn't affect security; it could be that adding an FBI backdoor makes a system more secure' kind of argument."

I didn't make that argument. Reread what I did say.

As far as smarter goes, that's not relevant to the issue at hand. If you think a smart person can't make a bad argument, you haven't been around much.

"So keep crying about no one teaching you. No one cares in the security world; you have to have a thick skin."

I'm saying that you should not be impressed when someone acts like an asshole, and that you should not worry about whether someone is smarter than you. I've been fortunate enough to have had great teachers.

As to the rest, what you're describing is either a law enforcement matter, or signs of paranoia and schizophrenia. Either way, you should talk to someone, and either way, things can get better. If it's a law enforcement matter, I guarantee you that those things can be stopped right quick. And if it's another kind of matter, then it's going to be tougher, but with help things can improve.

As to finances, parents' basement, I understand. It's tough for a lot of people right now. Just try not to get discouraged, don't avoid problems, maintain a positive attitude, talk to people when you need help, and realize that people have been through worse and come out quite well. Good luck.

Figureitout • October 26, 2014 10:47 AM

Skeptical
--I won't address the personal attack on me, beyond noting that you are not a practicing security professional and don't have responsibility for the sheer number of threats that someone like a CTO or CIO has to deal with now...

As to your questions, again, I suggest you read a book or three. You have to read a ton. Raising dumb questions that are already answered and are common sense wastes people's time, hence you'll probably not get them directly answered. Which is good, b/c given the question asked, someone will be a big ass in answering it.

Skeptical • October 26, 2014 4:43 PM


@Figureitout: "I won't address the personal attack on me, beyond noting that you are not a practicing security professional and don't have responsibility for the sheer number of threats that someone like a CTO or CIO has to deal with now..."

Nothing I said was a personal attack, and I'm sorry if it appeared that way.

You seemed to imply that your professors were being given surveillance reports of some kind on you and repeating them in class. One possibility is that they are, in which case it would be a law enforcement matter. Another possibility is that they are not, and that your perception is false - in which case a medical explanation may be indicated.

Neither of those two possibilities has anything whatsoever to do with a person's intelligence, ethics, expertise on subjects, etc. They are in no way intended to be personal attacks. I suggested them to be helpful.

"As to your questions, again, I suggest you read a book or three. You have to read a ton. Raising dumb questions that are already answered and are common sense wastes people's time, hence you'll probably not get them directly answered. Which is good, b/c given the question asked, someone will be a big ass in answering it."

I understand, but I don't think the question is dumb, and in fact I think a lot of the discussion in this thread has been rather interesting. You may have found the discussion beneath you, or uninteresting, and that's okay.

Wael • October 26, 2014 6:46 PM

@Skeptical,

    Greater complexity does not necessarily imply less security

There is a lot to say about that. From a principle perspective, one would want to adopt the least complex solution that attains the needed security, for reasons mentioned by @Clive Robinson, @Nick P, @Wesley Parish, @Figureitout, @QnJ1Y2U, @Dragonlord and others.

I take it the more precise statement would be that greater *undue* complexity is not good for security. It's a principle, a rule of thumb, and at times it will have exceptions. In the general case (or for a first implementation), you want a system to have enough complexity to achieve the security posture needed, but no more. Also, less complex alternative solutions need to be evaluated. The comparison is not between a complex system and a simpler one; the rule means, from one perspective, that given two systems with the same (desired) security posture, the simpler one is the better choice.

According to Occam's Razor, a principle related to this discussion:

    Among competing hypotheses, the one with the fewest assumptions should be selected. Other, more complicated solutions may ultimately prove correct, but -- in the absence of certainty -- the fewer assumptions that are made, the better.

Replace "certainty" with "awareness" and the picture gets clearer... Complexity-vs-security was somewhat related to a previous discussion we had about efficiency-vs-security...

Sancho_P • October 26, 2014 7:25 PM

IMO the discussion regarding complexity is moot.
If A (say, the U.S.) will not deliver secure systems, B will.

Wael • October 26, 2014 7:55 PM

@Sancho_P,

    IMO the discussion regarding complexity is moot.
Maybe true for this thread. But the discussion is interesting otherwise.

Wael • October 27, 2014 12:57 AM

Adding a small elaboration to the moot discussion, to reconcile the apparent difference in the points of view presented:

If it were just a comparison between complex and simple architectures, implementations of architectures, etc., then so-called "counterexamples" would be abundant (supporting @Skeptical's claim that more complex does not necessarily mean less secure). The dimension overlooked in this discussion is the security posture. Two counterexamples:

1) An OS such as DOS that runs in real mode -- no good separation between user mode and kernel mode: It's less complex than an OS that has better separation between user mode and Kernel mode. Which is more secure, given both have similar functionalities?

2) A single-tier architecture is a lot simpler than a properly configured 3-tier architecture with firewalls between zones, protocol transformations between zones, session breaking between zones, etc. The 3-tier architecture is more secure and a lot more complex.

But the discussion is not about the above sample two "counter examples" ;)

Nick P • October 27, 2014 1:40 AM

@ Wael

Interesting that I made a similar argument here in my counter to Wirth's simplicity over everything strategy. We must fight complexity in general for the many ways it hurts almost every good aspect of software. Yet, we can't oversimplify to the point that we miss out on something important. Of course, I'm usually not talking about backdoors in such a conversation...

Wael • October 27, 2014 1:59 AM

@Nick P,

Yes! The key expression in the referenced point you made is "unnecessarily complex"
I say:
If complexity reduces the surface of attack, it's probably good
If complexity expands the surface of attack, it's definitely bad -- unless you are on the other side ;)
If complexity violates a security principle, it's bad...

Then again, there are several types of complexity ;)
Complexity of nesting, complexity of layering, and simply "blind complexity",...

Paradoxically, well thought of "complexity" results in "simplicity"!

Nick P • October 27, 2014 2:05 AM

@ Wael

That there's a whole field of "Complexity Science" emerging in recent times shows even the theoretical aspects of this discussion are... complex. But, I like your way of saying my idea better:

"Paradoxically, well thought of "complexity" results in "simplicity"!"

Might even quote it in the future. It certainly is one of the more intriguing [and useful] paradoxes.

Wael • October 27, 2014 2:18 AM

@Nick P,

Thank you! But remember: Easy to say, Hard to do :)
Maybe you can save this one as well:
Complexity is like cholesterol; there is good cholesterol and there is bad cholesterol. The complexity that comes from avocados is good... LOL (at my own joke)

Clive Robinson • October 27, 2014 2:28 AM

@ Wael, et al,

With regard to Skeptical's argument, it's a bust in this case.

He argues that there are in effect two states, insecure and secure, and that unless you can prove the probability of the insecure state is 100%, then there is a small probability attached to secure. On the face of it, a reasonable logical "theoretical viewpoint" argument. However, he insisted on applying it to Vagle and Blaze's article, which is talking about "practical security" when looking at complexity based on practical experience to date.

However, we also know that in this case the front/back door is going to be a "golden key" based on a secret number of finite size; from the theoretical viewpoint, such a system is always going to be subject to random-guessing and brute-force attacks. So in its case, its theoretically secure state is always going to be less than 100%. That is, as Vagle and Blaze point out, a "golden key" system cannot be secure either theoretically or practically. Which is a valid view based on our current theoretical and practical knowledge and understanding.
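The finite-keyspace point above can be sketched numerically: any key drawn from a finite space is guessable with nonzero probability, so the "secure" state can never reach probability 1. A minimal illustration (the key size and guess budget below are arbitrary, chosen only for illustration):

```python
from fractions import Fraction

def guess_probability(key_bits: int, guesses: int) -> Fraction:
    """Chance of hitting a uniformly random key of `key_bits` bits
    within `guesses` distinct guesses (exact rational arithmetic)."""
    keyspace = 2 ** key_bits
    return Fraction(min(guesses, keyspace), keyspace)

# Even a 128-bit "golden key" is guessable with probability > 0,
# so the theoretical probability of the secure state is below 100%.
p = guess_probability(128, 10 ** 12)
assert 0 < p < 1
```

A tiny probability, but strictly greater than zero, which is all Clive's argument needs.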

So in this case Skeptical is applying his model to something we know cannot be 100% secure; even though the code might be "perfectly written", the algorithm it is implementing is not theoretically secure, so the overall code is not, nor can it ever be, secure based on our current understanding.

So the Vagle and Blaze article is correct and Skeptical is incorrect in applying his argument to it.

I hope that puts that particular argument "to bed" ( for now ).

Nick P • October 27, 2014 2:44 AM

@ Wael

"Complexity is like cholesterol; there is good cholesterol and there is bad cholesterol. The complexity that comes from avocados is good... LOL (at my own joke)"

A nice metaphor with cholesterol. Problem: I hate avocados and had about three thrown at me today. Can't go into details except for the cleanup, which tested my self-control. People virtually never bring them up, nobody has ever thrown them at me before, and I see your avocado argument after dodging avocados and two religious folks in a "what are the odds?" conversation. Is this a Jungian meaningful coincidence or what?

Clive Robinson • October 27, 2014 2:54 AM

@ Nick P,

These two religious people, did they worship "Cados" as their god?

And did you commit some kind of blasphemy as they incanted "Have O' Cados" prior to them partaking of his bounty?

If yes I'm not surprised they chucked them at you :-)

Wael • October 27, 2014 3:03 AM

@Nick P,

Is this a Jungian meaningful coincidence or what?
Check out the theory of "Attraction"

@Clive Robinson,
I'll put it to rest, for now :)
Re Avocados, it seems we have a similar sense of humor :)

Clive Robinson • October 27, 2014 3:49 AM

@ Wael,

Like you I think it would be a worthwhile discussion to have, and actually needs to be had for various reasons, so don't stop on my account.

One of the problems I see happening is, firstly, that the lack of distinct terms with recognised "domain meaning" will cause problems, since "normal usage" meanings lack both precision and agreed meaning.

Further that people will get things wrong by using the information incorrectly.

For instance, most of us accept the "weakest link" idea, but occasionally forget it applies to a "chain" or "series" event, whereas in parallel arrangements we also accept that "the product can be greater than the sum of the parts", though not in all cases. Thus identifying what is effectively in series and what is in parallel is difficult enough in simple stateless systems; it is going to be very much harder in systems with higher complexity and state as well. We have seen this problem before with engineers trying to come up with MTBF and MTTR figures used for assessing the availability of a system. Whilst for availability an inaccuracy has little practical meaning, for security even a small miscalculation can have very serious outcomes, which may not be apparent for some period of time.
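The series-versus-parallel distinction above can be made concrete with a toy probability model. This sketch assumes (unrealistically) independent components, and the numbers are purely illustrative:

```python
def series_secure(parts):
    """Chain/series arrangement: breaking any one link breaks the
    whole, so per-component probabilities multiply and the result
    is weaker than the weakest link."""
    p = 1.0
    for part in parts:
        p *= part
    return p

def parallel_secure(parts):
    """Parallel/defence-in-depth arrangement: the attacker must
    defeat every layer, so only simultaneous failures break it."""
    q = 1.0
    for part in parts:
        q *= 1.0 - part      # chance this layer alone fails
    return 1.0 - q

layers = [0.9, 0.9, 0.9]
assert series_secure(layers) < min(layers)    # weaker than any link
assert parallel_secure(layers) > max(layers)  # stronger than any layer
```

Real systems mix both arrangements and have state, which is exactly why, as Clive notes, the accounting gets hard.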

Skeptical • October 27, 2014 6:34 AM


Lots of good comments - let me address Clive's first, then Wael's and Nick P's:

@Clive: He argues that there are in effect two states, insecure and secure, and that unless you can prove the probability of the insecure state is 100%, then there is a small probability attached to secure. On the face of it, a reasonable logical "theoretical viewpoint" argument. However, he insisted on applying it to Vagle and Blaze's article, which is talking about "practical security" when looking at complexity based on practical experience to date.

This is not what I argue.

My argument pertains purely to what, on my reading, Vagle & Blaze claim in their article: that a more complex system is necessarily a less secure system. Vagle & Blaze do not write as though this is a claim contingent upon the specifics of the system and complexity in question, but as though it applies universally. This enables them to dismiss Comey's speculations (which lack any substance) without ever addressing what actual proposals might be put forth to meet those speculations.

As I've said before, my argument here is very modest and merely negative: it notes the existence of counterexamples to Vagle & Blaze's claim, and concludes that, therefore, Vagle & Blaze's claim is false.

I do make an additional claim: that whether a more complex system is less secure depends on the specifics of the system in question.

I also agree with the empirical argument made by Nick P and others that, given history and the probable nature of the endeavor in question, any proposed system that includes a means of lawful access and claims to be secure should be approached with caution and lots of questions.

Ultimately, I merely reject Vagle & Blaze's a priori rejection of the notion of a front door.

All of this is quite modest and does not imply anything as to whether an eventual proposal would be acceptable.

@Wael: I would take it the more precise description would be "Greater undue complexity" is not good for security. It's a principle, a rule of thumb, and at times will have exceptions. In the general case (or the first model implementation), you want the systems to have enough complexity to achieve the security posture needed, but no more (complexity-wise). Also, less complex alternate solutions need to be evaluated. The comparison is not between a complex system and a simpler one; the rule means, from one perspective, that given two systems with the same (desired) security posture, the simpler one is the better choice.

As a rule of thumb in design, I have no objection. But Vagle & Blaze made a much stronger claim, arguing that because X would be a more complex system, X would therefore be a less secure system.

I like the Ockham's Razor analogy, though I think the dissimilarities are telling as well. Simplicity may be preferred because it is easier to evaluate, with potentially fewer "unknown unknowns", and of course it may be more efficient. But this is a preference driven by human knowledge limitations. If we were equally able to comprehend and test two systems of different complexity, then the preference might be of less utility.

Of course, all of this discussion assumes that we're using the term complexity in an ordinary fashion. If we're using it as a term of art, much or all of this goes out the window - but I don't think Vagle & Blaze's article would make any sense under such an interpretation.

Nick P • October 27, 2014 11:22 AM

@ Clive Robinson

Let's say the conversion effort didn't go as they expected. :O

@ Wael

What does the theory of attraction have to do with synchronicities? The seduction community has attraction down to a near science. Psychologists are still catching up to them, with so little "field work" to prove their guesses. ;) Meanwhile, the meaningful-coincidence phenomenon I've experienced plenty of times, and on some weeks with many in rapid succession. Had it been isolated, I could more easily dismiss it.

Nick P • October 27, 2014 11:31 AM

@ Clive Robinson

"For instance most of us accept the "weakest link" idea, but occasionaly forget it applies to a "chain" or "series" event"

Not the case. There might be a "weakest link" concept among professional engineers like that. The one discussed in security is just a metaphor to remind people that hackers will look for any point of entry they can, and that the one part of the system that was insecure, the weakest link, causes the whole to fail.

Personally, I think it's catchy but an ineffective metaphor. The reference to a chain makes the brain naturally think about a series of things, when the real point is that you have to guard everything and get it all right. One effective metaphor for lay people is telling them they put an iron, padlocked door in the front but left one of their windows open. The burglar will look at every inch of the house, finding the one mistake that lets him in. We need more such metaphors and analogies.

Note: Ross Anderson's Security Engineering book had a nice example of my metaphor in protecting a painting in a house against increasingly determined attackers. Need metaphors and analogies like *that* too which justify higher levels of assurance. Red team exercises do fine, though. :)

Wael • October 27, 2014 12:08 PM

@Nick P,

What does the theory of attraction have to do with synchronicities?

I meant the Law of Attraction - not "theory of attraction". A Swedish ex-colleague who was into this sort of thing brought it up when some weird thing happened during a trip we took together from Lund to Hamburg. He is also a photographer and would take pictures of objects when he felt the "Positive Energy". I asked him what "Positive Energy" is. He said, I'll show you. We walked a bit, and suddenly he felt "it", then asked me to close my eyes for a few minutes and said, I am sure you can feel the "Energy of this object"!? I closed my eyes for what seemed like hours, then said: I feel nothing, man, let's get lunch - I am a bit hungry! He wasn't amused, and started talking about the "Law of Attraction". Went in one ear and came out the other. Your story brought back this memory, and I pointed you to the "Law of Attraction" because perhaps you may be more perceptive than I am.

Wael • October 27, 2014 12:49 PM

@Clive Robinson, @Skeptical

Like you I think it would be a worthwhile discussion to have, and actually needs to be had for various reasons, so don't stop on my account.
Okay, will say a thing or two about this later tonight then move on to other threads. Seems @Skeptical did not fully gather the meaning of what I wrote...

Sancho_P • October 27, 2014 3:49 PM

@ Clive Robinson, Skeptical, ...

I hope that puts that particular argument "to bed" ( for now ). [CR]

+1
Yawn! (Sorry, I’m ready)

An "old zorro" (as we call a professional deceiver here) sets fire close to the henhouse.
All rush down to control the blaze, ignoring the noise in the shed.
As the fire is extinguished there is also silence in the henhouse.

Well done, Skeptical, superb !

Nick P • October 27, 2014 6:05 PM

@ Wael

Ah, I got you. I explored some of that stuff when I was younger. Many interesting experiences. Not quite such wild beliefs for me. The universe has connections and forces we didn't expect all over. I just have a recurring thought that probability may not be what it appears to be underneath, much like we found quantum physics underneath classical physics. What seems psychic or improbably meaningful might be a force we haven't discovered yet.

Or it might be very bland stuff combined with a perception bias. I keep my mind open on it either way. ;)

Wael • October 28, 2014 12:53 AM

I changed my mind. I can't add anything meaningful to "complexity" on this thread... Like @Clive Robinson, I also see no major difference between a front door and a backdoor. Both are conduits to obtain private information. One is advertised (front door), and one is unadvertised (backdoor, but we know it exists). There are non-technical implications to this subtle difference...

Dragonlord • October 28, 2014 11:19 AM

@skeptical

Sorry it's been so long since you replied to me.

Adding 2-factor authentication to a system does indeed decrease its security by increasing its complexity, in the same way that adding more guards/patrols does in physical security. However, in all of these cases it decreases security by significantly less than the increase in security caused by adding the additional components. Eventually you get to a point where adding more security things to something actually decreases your security, because the cost of the added complexity exceeds the increase in security brought by the added component. This is especially true if part of the security of the system includes humans.

If you had a door to an office to secure that required a 4-digit passcode to open, it's fairly guaranteed that the door would remain closed most of the time. If you made the passcode more secure by making it longer (more complex), then as the complexity increases, the chances of the people who need to use the door all the time doing something to bypass the security increase as well. That could be anything from propping the door open with a chair to writing the passcode on a bit of paper and sticking it to the door.

If you add guards/patrols to a building, then after a certain point the chances of one of the guards being the problem that you're trying to guard against increase, and/or the chances of a guard on patrol deciding that they can take a quick break because someone else will be along shortly increase. Or there are so many guards that no one guard can know all of the other guards that could be on duty at the same time as them. All of which decrease the security of the system by more than the additional guard adds. As someone else mentioned above, you can mitigate this a lot by separating the guards into discrete groups that only ever guard certain parts of the system. But even that can end up being too complex if it's big enough.

The proposal by the FBI is that a front door be added to phones and other things that allows the government to access the phone when it gets a court's approval. Given that this doesn't increase the security of the system in any way, shape, or form, it is always going to be a net drain on the security of the system. If the system relies on a trusted 3rd party to hold a second secure key, then the system as a whole is only as secure as that 3rd party. If it relies on an override key, then it's only as secure as the least secure person who knows that key, which over time will be reduced to zero security. Additionally, this item cannot be a black box in the system, as by design it would need to interact with the rest of the phone, and the phone's security system in particular. You can't even prove that the component in isolation is secure, as it has to be considered with regard to the wider system.
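The net-drain argument above can be sketched with a toy model: if an attacker succeeds by breaking either the device itself or the mandated access path (the escrow holder, the override key, etc.), then, assuming independent components, the combined system can never be more secure than the device alone. The probabilities here are purely illustrative:

```python
def with_front_door(device_secure: float, door_secure: float) -> float:
    """Probability the whole system stays secure when an attacker
    wins by breaking EITHER the device OR the mandated front door
    (components assumed independent, each strictly below 1.0)."""
    return device_secure * door_secure

device = 0.99
for door in (0.999, 0.95, 0.50):
    combined = with_front_door(device, door)
    assert combined < device  # any front door is a net security loss
```

However well built the front door is, multiplying by any factor below 1 only lowers the result, which is the point: the mechanism adds attack surface without adding any protective benefit.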

Don Bockenfeld • November 24, 2014 1:23 PM

I'm reminded of a 1951 science fiction book by A. E. van Vogt: The Weapon Shops of Isher. The weapon shops sell guns that can only be used in self-defence -- they sense the user's intent and perform accordingly. Perhaps this is the kind of technology that Director Comey has in mind.


Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.