How the CIA Might Target Apple's XCode

The Intercept recently posted a story on the CIA's attempts to hack the iOS operating system. Most interesting was the speculation that it hacked Xcode, which would mean that any apps developed using that tool would be compromised.

The security researchers also claimed they had created a modified version of Apple's proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple's App Store.

The modified version of Xcode, the researchers claimed, could enable spies to steal passwords and grab messages on infected devices. Researchers also claimed the modified Xcode could "force all iOS applications to send embedded data to a listening post." It remains unclear how intelligence agencies would get developers to use the poisoned version of Xcode.

Researchers also claimed they had successfully modified the OS X updater, a program used to deliver updates to laptop and desktop computers, to install a "keylogger."

It's a classic application of Ken Thompson's classic 1984 paper, "Reflections on Trusting Trust," and a very nasty attack. Dan Wallach speculates on how this might work.

Posted on March 16, 2015 at 7:38 AM • 61 Comments

Comments

Zachary Mayer • March 16, 2015 9:17 AM

The big question here is: did they do this to ALL versions of Xcode, potentially compromising every Mac and iOS app? Or did they target specific developers (and then the question is: who?)

How could a developer tell if their version of Xcode was poisoned?

I wonder if the NSA got the backdoor into iMessage that the FBI wanted...

LMAO • March 16, 2015 10:03 AM

So much for the security of closed ecosystems (the App Store).

I suggest a movement to create a truly open platform: a hardware/software combination where one could easily run any reasonable Linux distro of one's choice, with no Secure Boot lockdowns or firmware backdoors disguised as 'security features' like 'Smart Connect' and the like.

I hope some capable group takes the initiative and gets the liberation process into the mainstream. Just imagine some mainstream process to reflash an MBA or MBP and turn it into a kickass Linux platform without the backdoors Intel chipsets come with.

Sasparilla • March 16, 2015 10:14 AM

Still taken aback by our intelligence agencies' attacks "in country", actively trying to destroy the security of products used by the general citizenry. Fantastic article linked there.

@Zachary Mayer - If you could, you'd want to poison Xcode at Apple (prior to compile if possible, otherwise after) so that there is nothing to check and fail. This is the NSA/CIA we're talking about here, and Apple almost certainly didn't think the U.S. government was actively at war with it until last week.

If they are going after Xcode they sure as heck went after Visual Studio as well (unless of course Microsoft just did it for them; based on Microsoft's recent history this actually seems quite likely, so probably no need for the CIA there):

http://www.theguardian.com/world/2013/jul/11/microsoft-nsa-collaboration-user-data

If I were the NSA/CIA I'd also make sure I'd put a "clipper chip" (i.e., a backdoor) into every firmware/BIOS/UEFI image I could (networking equipment, PCs/Macs, mobile products, TVs, Xbox Ones)... it's obvious nothing is out of bounds for the U.S. intelligence agencies when it comes to inserting vulnerabilities into products used by the U.S. citizenry (growing up hearing about the Stasi and other oppressive regimes, it's hard to wrap my head around that still...).

The other thought that occurs is that Stallman was right all along. At this point we can't trust our government (or probably anyone else's) to do the right thing with regard to our private systems vendors...we need open source from compilers through firmware through chip design/manufacturing through drivers through operating systems through applications - otherwise we can be sure (now, as the article shows) our government (or someone else's) will be actively trying to destroy the security of all major platforms/applications so they can have a guarantee of easy access when they want (which may be the unacknowledged wink and nod of all large governments on this topic - very frightening there isn't one large democracy that stands up on this).


Wael • March 16, 2015 10:31 AM

@Sasparilla,

Uh! We've come back full circle! That's where we need to start!

we need open source from compilers through firmware through chip design/manufacturing through drivers through operating systems through applications - otherwise we can be sure (now, as the article shows) our government (or someone else's) will be actively trying to destroy the security of all major platforms/applications so they can have a guarantee of easy access when they want

At least you got the "Total Assured Control" part of security identified!

Been preaching this for many moons and gave several examples in the C-v-P discussions with @Clive Robinson and @Nick P and @Thoth and @RobertT among others. I recommend you follow that thread - even if @Nick P hates it :)

keiner • March 16, 2015 10:35 AM

@Sarsa

"very frightening there isn't one large democracy that stands up on this"

...my point for months now. World War III and no "good guys" left to save... eehhhm... us, whoever that is...

Marcos El Malo • March 16, 2015 10:56 AM

@LMAO & others

Did we read the same Thompson essay? I'm not clear on why you believe FOSS is inherently safe, given the deep level of obfuscation outlined. I'm not a programmer, so maybe I misunderstand what Thompson means when he says "You can't trust code that you did not totally create yourself." I don't know; that seems pretty unambiguous. It seems to me that the safety offered by FOSS code is only incrementally greater than that of proprietary code. It's not orders of magnitude. Indeed, you might be worse off if you place trust in FOSS when you shouldn't. To use a somewhat absurd example, how do you know for sure Stallman isn't an NSA mole?

calyth • March 16, 2015 10:59 AM

@Zachary Mayer: I'm willing to bet a pint they'd target app devs for China.

iPhones are super popular there, and if you net some big shots with it, more power to an intelligence agency.

Nile • March 16, 2015 11:04 AM

I find those allegations difficult to credit: the installer files for any app go out with a signature hash.

...I mean, they do go out with a signature? Er, an actual *secure* signature?

...And someone at the vendor checks that, right?

Sure as hell, Apple are checking it *now*. Tacking a malicious payload onto the 'build' for apps and patches is an obvious attack vector.

*Deep Breath*

OK, let's take bets: someone, somewhere, has let that one slip. It's probably not Apple. But there's at least one widely-installed application or OS patch out there that has been poisoned. We just don't know which one.

I guess we should start by looking for weak-to-laughable hashes and embedded keys - 'Komodia'-level negligence - in the installer packages we've already got. After that, I've no idea how to conduct a systematic search; it might not be possible to do this at all.
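A crude version of that first search - hunting for embedded PEM private keys, the Komodia/Superfish failure mode - might look like the Python sketch below. The marker pattern and traversal are illustrative only; a real audit would also check certificate chains and hash strengths.

```python
import os
import re

# Hypothetical scan for 'Komodia-level negligence': an installed tree that
# ships a PEM-encoded private key inside it. The marker list is illustrative.
SUSPECT = re.compile(rb"-----BEGIN (?:RSA |EC |DSA )?PRIVATE KEY-----")

def scan_tree(root):
    """Return paths under root that contain an embedded PEM private key."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if SUSPECT.search(f.read()):
                        hits.append(path)
            except OSError:
                continue  # unreadable file: skip rather than abort the scan
    return hits
```

Pointing `scan_tree` at an unpacked installer directory lists every file shipping a private key that should never have left the vendor.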

Is there any framework in place for trusted third parties among security product vendors - if any can be trusted - to examine and certify installer packages? Would any of them dare to take on a vendor with a legal budget big enough to run most countries' army, navy and air force?

LMAO • March 16, 2015 11:07 AM

FOSS by itself is not enough, but it's a tool that can be used to take back ownership of hardware and software. Getting rid of firmware backdoors is a good first step towards this.

Sasparilla • March 16, 2015 11:12 AM

@Wael

You're totally right, thanks for the pointer, I'll take that in. The market is building for this now...there's serious money in search of this disjointed solution - which will obviously come in pieces/steps.

Nice quote from Sen. Wyden (on the Senate Intelligence Committee) that just came out regarding whether there are other things we don't know about yet (I don't even want to imagine):

"Asked if intelligence agencies have domestic surveillance programs of which the public is still unaware, Wyden said simply, “Yeah, there’s plenty of stuff.”"

http://www.buzzfeed.com/johnstanton/democratic-senator-obama-administration-is-failing-on-domest#.aqVGVVGvEE

@keiner

Yes indeed, it's rather hard to fathom really. There should have been some democracy that said no. Nobody to point to or go to - it makes "us and them" an uncomfortable grouping.

@Marcos El Malo

IMHO, it's really just that Open Source allows you to check things - that's all it does (it's up to us to do the checking), but it does allow that. Closed source does not, and as we've found out, our intelligence agencies are actively inserting back doors into the closed-source infrastructure (compilers, firmware and on and on).

Open Source is the only way we can check it, because we now know (beyond a shadow of a doubt) we can't trust our government intelligence agencies with regard to our computerized electronic communication devices. JMHO.

As for R. Stallman being an NSA mole, while I smugly rolled my eyes at him through the years, I really do have to laugh at that.

Nile • March 16, 2015 11:25 AM

And the Big Picture?


I cannot imagine a more damaging attack on the USA's economic interests than the lack of trust that these and other NSA actions have created, and are still creating.

Sooner or later the other shoe will drop, in the form of tangible losses to US or foreign citizens mediated by these backdoors, or by blackmail from the recorded 'take': at that point, and far too late, the voters, the consumers, and the markets will realise that the NSA is waging an economic war against the USA, and winning.

It is now inconceivable that SIM cards for phones could be manufactured in the USA. No way will any European or developed Pacific Rim country import them now.

Right now, MS and Apple products have a stable market share outside the USA: that can't possibly last.

It is entirely foreseeable that the European Union will impose a requirement for full source code disclosure and audit, via their domestic security ministries, before permitting any US-authored software to be purchased and used within the EU.

It is entirely foreseeable that the European Union will insist on physically locating Google, Twitter, and Facebook servers hosting personal details and private conversations within the EU, and subject to the host countries' data protection laws.

Neither of these would fix the problems: both would impose massive economic costs on the USA and favour the growth of local competitors.

It is entirely foreseeable that every other country on Earth will take similar steps, with similar forms of words, with malign intentions divided between protection, profiteering, and sending all their own citizens' data through servers owned by the Interior Ministry.

Wael • March 16, 2015 11:28 AM

@Sasparilla,

Forgot to add the two links:

1- June 10, 2012: Check the definition of "Security"
2- November 17, 2013, which leads me to the next comment:

@Bruce,
You may want to consider increasing the dose (with herbal tea, this time) - LOL (Oh, no! My days are numbered)

Anon • March 16, 2015 11:36 AM

Spelling Xcode correctly in your blog title would be good.

What do you think of Dan's speculation?

Peanuts • March 16, 2015 11:48 AM

A little while ago (September 2014), remember, Apple said it would encrypt everything on the iOS and OS X filesystems by default. Remember the NSA politisphere pitched a social hissy fit.

http://www.washingtonpost.com/opinions/apple-and-google-threaten-public-safety-with-default-smartphone-encryption/2014/09/25/43af9bf0-44ab-11e4-b437-1a7368204804_story.html

"Apple and Google , whose operating systems run a combined 96.4 percent of smartphones worldwide, announced last week that their new operating systems will prevent them from complying with U.S. judicial warrants ordering them to unlock users’ passcode-protected mobile devices."

Now does anyone think that law enforcement got immediate national news time by coincidence, or that the NSA was actually concerned about judicial warrants? Really! Seriously!

Warrants do not seem to be much of a moral or constitutional concern these days.

Logically, then, they were concerned that a relevant (no human intervention required), computationally negligible, working backdoor would be closed.

The local-law-enforcement complaint is a ruse, always described in the context of a clunky human-intervention workflow, one that could not apply to all devices in real time, to set everyone at ease that it will only take sheep from the flock one at a time. Lies, lies, and politics.

The lie is that the infrastructure design for all users is not the target, when of course it is the target.

I think that without evidence of the open, unencrypted access door being closed, one has to assume it's a fully functional remote compromise in use.

Assume source to sink taint until the whole end to end process does not require trust.

For the folks unfamiliar with the term "source to sink": taint analysis attempts to identify variables that have been 'tainted' with user-controllable input and traces them to possibly vulnerable functions, also known as 'sinks'. If a tainted variable gets passed to a sink without first being sanitized, it is flagged as a vulnerability.
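The source-to-sink idea can be sketched in a few lines. This is a toy Python taint tracker over a made-up straight-line instruction format, not any real analysis tool; the ops and example program are illustrative only.

```python
# Toy source-to-sink taint tracker over straight-line "three-address" code.
# Instructions are (op, dest, args): "source" taints dest, "sanitize" clears
# it, "assign" propagates taint from args to dest, "sink" consumes a value.

def find_taint_violations(program):
    """Return instruction indices where a tainted value reaches a sink."""
    tainted = set()
    violations = []
    for i, (op, dest, args) in enumerate(program):
        if op == "source":            # e.g. dest = read_user_input()
            tainted.add(dest)
        elif op == "sanitize":        # e.g. dest = escape(dest)
            tainted.discard(dest)
        elif op == "assign":          # dest = f(args...): taint propagates
            if any(a in tainted for a in args):
                tainted.add(dest)
            else:
                tainted.discard(dest)
        elif op == "sink":            # e.g. db.execute(args[0])
            if any(a in tainted for a in args):
                violations.append(i)
    return violations

# A query built from raw user input reaches the sink unsanitized:
program = [
    ("source",   "user", ()),          # user = request input
    ("assign",   "query", ("user",)),  # query = "SELECT ..." + user
    ("sink",     None, ("query",)),    # execute(query)  <- flagged
    ("sanitize", "query", ("query",)), # query = escape(query)
    ("sink",     None, ("query",)),    # execute(query)  <- now clean
]
```

Running `find_taint_violations(program)` flags only the first sink, since the second one is reached after sanitization.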

Nick P • March 16, 2015 12:00 PM

@ all

The attack is likely to work because they're focusing on the desktops. The phones use trusted boot, encryption, sandboxing, and so on. The developer's desktop is a full OS with much less security. Additionally, if the backdoor is simple, any Apple review might not catch it. An early exemplar of subversion, back when the topic was new, built the backdoor in around 30 lines of code that only activated upon seeing a certain bit sequence.

The minimum security requirements would be a secure desktop, trusted distribution/verification of Xcode, end-to-end protection of app from developer to Apple, end-to-end protection from Apple to iOS, and iOS having necessary security. These minimum requirements aren't there. So, the attackers have plenty of opportunity to go after Apple's products and supply chain from a black box perspective.

It should be harder to do it from an insider perspective, given Apple's strong OPSEC practices. They'd have to really work at it over a period of time. They'd most likely target employees' computers, gradually mapping out people's job duties, then target individual employees or systems to get to the code.

@ Wael

I hate the metaphor, not the discussion. ;)

@ Sasparilla

Zealots like Stallman cloud the issue. It's not about closed vs. open source. It's really about how it's vetted and who vets it. I tried to cut through the nonsense to help people understand the various models and tradeoffs in source sharing in this essay. The best model, inspired by Burroughs, is probably hardware makers giving the source away with the product under a *proprietary license* that lets paying customers inspect or modify it for their own use. Burroughs also let people submit changes to the company for potential inclusion into the product.

So, people got to see and build the source. The company still kept control of it to make money. The company also made money on the hardware it was tied to. Everybody wins. The only thing I'd change is a clause to let the customers keep the source in case the company stopped maintaining the product for whatever reason. Seems like the best model to push for these companies that have no interest in FOSS.

There's also my mutually-suspicious reviewers model. Each reviewer would be involved in some sort of espionage or be neutral. They all review the [closed] source, approve it, build the same image using vetted tools, and sign the resulting binary. They do reviews on whatever patches or upgrades happen as well. Development processes are also modified to make reviews for subversion easier. The company keeps its design/code closed for competitive advantage while users get increased assurance it does what it says. This is the model for those that wouldn't even adopt a Burroughs-style model. Intel and Microsoft come to mind, especially as Microsoft already shares source with Russia and China.
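The acceptance check in that reviewers model can be sketched in Python. This is a toy, with HMAC standing in for real public-key signatures and all names illustrative: each reviewer signs the hash of the binary they vetted and built, and a user trusts a shipped binary only if every reviewer's signature verifies over that exact binary.

```python
import hashlib
import hmac

# Toy model of the mutually-suspicious reviewers scheme: every reviewer
# independently signs the hash of the vetted binary; a user accepts only
# when all signatures check out. HMAC with per-reviewer secret keys is a
# stand-in here for real public-key signatures.

def sign(reviewer_key: bytes, binary: bytes) -> bytes:
    digest = hashlib.sha256(binary).digest()
    return hmac.new(reviewer_key, digest, hashlib.sha256).digest()

def accept(binary: bytes, signatures: dict) -> bool:
    """True only if every reviewer's signature matches this exact binary."""
    return all(
        hmac.compare_digest(sig, sign(key, binary))
        for key, sig in signatures.items()
    )
```

One tampered byte in the binary, or one dissenting reviewer, and `accept` returns False, which is the point: no single party can quietly swap in a subverted build.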

Anura • March 16, 2015 12:56 PM

Is it just academic on the part of the CIA, or has anyone actually detected it? If you have a relatively small application, it should be fairly easy to check if there is an issue by disassembling the binaries and looking through the instructions.

NukeItFromOrbit • March 16, 2015 1:17 PM

@Anura "...it should be fairly easy to check if there is an issue by disassembling the binaries and looking through the instructions."

The trigger conditions for backdoor insertion may not be present during such a test.

Disassemble XCode instead.

TJ Williams • March 16, 2015 1:47 PM

@LMAO
Indeed, openness is the answer, but we would still need the ability to screen the code, design, etc. If we look at the number of vulnerabilities in open source code over the last 12 or 18 months (SSL, libc, etc.), this is not an easy task.

Marcos El Malo • March 16, 2015 1:48 PM

@Wael & Sasparilla

Thanks for clearing that up. Stallman as mole was meant to be a joke, but the underlying point is still there: how do we trust the guardians?

Steven C. • March 16, 2015 2:57 PM

@Marcos El Malo

Take a look at how Debian are working towards Reproducible Builds. Debian's Free Software Guidelines already require that the binaries they distribute, the compilers, and firmware blobs are all built from source. Developers and users will soon be able to attest independently, with digital signatures, that each other's systems and the official Debian build systems all produce identical output given the same source.

Logically we still have to worry about the correctness of the source code, but the point is that without the above procedure, a vulnerability could be inserted after any auditing takes place. In my opinion this puts Debian already 10+ years ahead of proprietary software platforms.

Otter • March 16, 2015 3:09 PM

"there isn't one large democracy that stands up on this"

That's a feature, not a bug.

dbm • March 16, 2015 3:28 PM

Having access to source code, for rebuild as needed, is no guarantee. The compilers could be spiked to insert magic instruction streams at numerous points. So not only are poisoned libraries a concern, but also poisoned toolchains. You need to examine the resulting binaries. Very tedious, not being able to trust anyone...

Wael • March 16, 2015 3:45 PM

While checking the binaries, the libraries, the OS, and Xcode, don't forget OpSec.

A tedious task, to say the least.

David Henderson • March 16, 2015 6:33 PM

There were certain releases of Xcode available only to developers.

I'm not such an Apple developer, but I did download a torrent purporting to be such a release.
I used it for a while.

The provenance of such a download is completely opaque.

I've installed an OS X app called Little Snitch, and it reported Disk Utility phoning home to Apple during a DVD burn.
I'm not sure what it was reporting, but it can't be good.

I'm clearly being mined for information about my usage patterns.

I'm moving all my data to Debian Linux, and getting the closest matching apps.

It's a personal boycott of Apple.

There is a steep learning curve. It helps that I was a Solaris developer many moons ago.

Someone from TOR • March 16, 2015 7:31 PM

If Xcode is compromised, why not switch to gcc or one of its derivatives? I prefer vi and gcc over Xcode anyway, especially when using Cygwin on a Windows port. I have to agree the NSA is in a "We have to destroy US companies in order to save them" mindset right now.

Clive Robinson • March 17, 2015 2:50 AM

@ dbm,

Where are these keys generated? Do individual users generate their own primes? or are these parceled out by some centralized repository? Or is this a paid service from RSA?

Back in 2012 I wrote on this blog that I thought it likely the NSA were using such an attack on 1024bit keys.

The reasoning I used is that the NSA is more like a factory than a lone craftsman. They would use any "industrialised" process not to attack specific keys but all keys they could find...

The reason for these "common primes", as normal, is poor entropy generation; back in 2012 it was found to be because many of the keys were generated on embedded hardware when first powered up and configured...

Also remember that some major ISPs insist as part of their service policy that you have to use the key certs they generate. As we know, businesses want to cut costs, and generating primes is not a very efficient process. Thus why not generate only one prime randomly and get the other from a list of between ten and a hundred pre-computed primes... Even the randomly generated prime can be optimised: having found one prime, just keep sequentially hunting for the next prime up. It makes their life easier, and the NSA grin from ear to ear.

From previous postings I had indicated that if I were the NSA I would have obtained and reverse-engineered all embedded and software systems that generate keys and "characterized" the RNG used to find the primes. Thus they have a list of "likely candidates" that can be checked very quickly.

Another technique, as here, is to use an efficient algorithm to find certs with a common prime, put a lot of effort into breaking just one cert, then use those primes to quickly find all the other primes in the other common certs, and then use these as likely candidates for checking other certs within differing commonality groups.

This way you get a lot of key certs broken for the price of just factoring one key cert... Used as an industrialised process it can churn out reams of broken weak certs alarmingly quickly.

So between "common primes" and "limited range primes" the NSA are probably quite happily breaking a lot of certs of 1024-bit size and down.
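The "common primes" attack Clive describes is essentially a GCD sweep: if two RSA moduli share one prime p, then gcd(n1, n2) = p and both keys factor instantly, with no factoring effort at all. A minimal Python sketch with toy small-prime "keys" (the 2012 internet-wide surveys used a fast product-tree batch GCD over millions of harvested keys rather than this pairwise loop):

```python
from math import gcd

def shared_prime_factors(moduli):
    """Yield (i, j, p) for every pair of moduli sharing a prime factor p."""
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            p = gcd(moduli[i], moduli[j])
            if 1 < p < moduli[i]:   # a nontrivial common factor
                yield i, j, p

# Toy 'keys' built from small primes; n0 and n2 were generated with the
# same poor-entropy prime 101, so both fall out of a single gcd.
moduli = [101 * 103, 107 * 109, 101 * 113]
for i, j, p in shared_prime_factors(moduli):
    q_i, q_j = moduli[i] // p, moduli[j] // p   # both keys are now broken
```

This is why poor entropy at first boot is so catastrophic: one shared prime anywhere in the harvested pool breaks every key that contains it.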

Winter • March 17, 2015 3:28 AM

"It's a classic application of Ken Thompson's classic 1984 paper, "Reflections on Trusting Trust," and a very nasty attack."

There is a defense against this attack, and source code is crucial in this defense. The crucial point is to show whether the binary the compiler emits corresponds to the source code that was compiled.

All you need to prove infection is the source code of the suspect compiler and the binaries and source code of one or more different compilers, where you are confident that at least one of them is clean. Then you can show either that they are all clean or that one of them is not.

The defense was described by David A. Wheeler:

Fully Countering Trusting Trust through Diverse Double-Compiling (DDC) - Countering Trojan Horse attacks on Compilers

http://www.dwheeler.com/trusting-trust/


An Air Force evaluation of Multics, and Ken Thompson’s Turing award lecture (“Reflections on Trusting Trust”), showed that compilers can be subverted to insert malicious Trojan horses into critical software, including themselves. If this “trusting trust” attack goes undetected, even complete analysis of a system’s source code will not find the malicious code that is running. Previously-known countermeasures have been grossly inadequate. If this attack cannot be countered, attackers can quietly subvert entire classes of computer systems, gaining complete control over financial, infrastructure, military, and/or business system infrastructures worldwide. This dissertation’s thesis is that the trusting trust attack can be detected and effectively countered using the “Diverse Double-Compiling” (DDC) technique, as demonstrated by (1) a formal proof that DDC can determine if source code and generated executable code correspond, (2) a demonstration of DDC with four compilers (a small C compiler, a small Lisp compiler, a small maliciously corrupted Lisp compiler, and a large industrial-strength C compiler, GCC), and (3) a description of approaches for applying DDC in various real-world scenarios. In the DDC technique, source code is compiled twice: once with a second (trusted) compiler (using the source code of the compiler’s parent), and then the compiler source code is compiled using the result of the first compilation. If the result is bit-for-bit identical with the untrusted executable, then the source code accurately represents the executable.
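Wheeler's technique can be modelled with a toy, assuming a lot of simplification: "binaries" here are Python functions from source text to output, and the final comparison is behavioural rather than bit-for-bit as in real DDC. All names and the "login.c" trigger are illustrative.

```python
# Toy model of Diverse Double-Compiling (DDC). cs_A is the (assumed clean)
# source of compiler A; the trojaned binary misbehaves only on a trigger
# input and re-inserts itself when compiling cs_A (Thompson's trick).

CS_A = "source-of-compiler-A"

def clean_binary(src):
    """What an honest build of cs_A produces for any input source."""
    if src == CS_A:
        return clean_binary          # compiling the compiler yields a compiler
    return f"A[{src}]"

def trojaned_binary(src):
    if src == CS_A:
        return trojaned_binary       # self-perpetuating subversion
    if src == "login.c":
        return "A[login.c]+BACKDOOR" # trigger: backdoor the login program
    return f"A[{src}]"

def trusted_binary(src):
    """An independent, trusted compiler T, faithful to cs_A's semantics."""
    if src == CS_A:
        return clean_binary          # honest compile of cs_A gives honest A
    return f"T[{src}]"

def ddc_matches(suspect, trusted, compiler_source, probe="login.c"):
    """Stage 1: build compiler_source with the trusted compiler.
    Stage 2: use that stage-1 result to build compiler_source again.
    Then compare the stage-2 compiler's behaviour with the suspect's."""
    stage1 = trusted(compiler_source)
    stage2 = stage1(compiler_source)
    return stage2(probe) == suspect(probe)
```

A clean suspect matches the double-compiled result; the trojaned one does not, even though its own source reveals nothing, which is exactly the property the dissertation proves.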

No Such Agency • March 17, 2015 4:50 AM

Not finished reading all the comments yet, but I have to address this:

Why does everyone think FOSS/Linux is secure??? It drives me crazy. With the revelations over OpenSSL, IPSec, suggestions the FBI were subverting OpenBSD, and likely countless other things that might be or have been going on, along with the fact that the idea that "many eyes are checking the code" has been absolutely discredited, I'm just shocked that people still think this software is more trustworthy than closed source (IMHO the opposite is true - it's relatively easy to contribute to many open source programs; that's the whole point, after all).

Anyone vetted the major players in the OSS movement? Has Linus himself been subverted for example? Can the Linux kernel itself be trusted?

Just a hypothetical question, but something to seriously consider.

The obfuscated C competition is more than a challenge - it is a practical example of how to subvert and backdoor software in very sneaky, very hard to detect ways.

Attacking Xcode is pure genius. It's attacking the factory that produces all the software. Regardless of what it is, it will get compromised. As for the Microsoft compiler, it's certainly possible, and quite likely. The hostile compiler is one of the hardest things to detect. It already does apparently benign, yet wholly questionable things when building (such as having a dependency upon mshtml.dll).

Scott "SFITCS" Ferguson • March 17, 2015 8:06 AM

@No Such Agency

Why does everyone think FOSS/Linux is secure???

They don't all think Linux is secure (but you know that, right?).

Some of us "think" it's "less insecure" - and "more securable". The former because of access to the source code, reproducible builds, and the lack of a single subvertable body controlled by shareholders - and the latter because of its customisability and diversity. Others will point to more "secure" closed-source alternatives while IMO overtrusting the tighter interests (more easily targetable) of a smaller controlling structure; and other "others" will point to the BSDs, which may be more focused on secure builds but suffer the limitations of less code review, fewer developers to be, um, influenced - and far less diversity in the configurations. Weighing the risks and benefits of all the factors would be an interesting and complex matrix...

Those of us amongst whom I include myself know that Linux is far from secure - even when we focus on specific distributions like Debian. We also try to limit our risks by not putting all our eggs in one basket (different boxes for different tasks - each box built to satisfy only the needs of those tasks).


It drives me crazy

...

...(IMHO the opposite is true - it's relatively easy to contribute to many open source programs; that's the whole point after all).

You ignore/overlook several major factors in your opinion: Open Source is no different to Closed Source in that code can and is rejected for not being up to standards - and code can and is accepted that is sub-par. However, Open Source (which many commercial companies contribute to, and are based upon) is not as easily swayed by approaches from government bodies promising easy or unimpeded market access "if you'll be patriotic" (cough Daddy can keep competitors in a holding pattern while the kid seals the IBM deal and later cripples OS/2 cough).

It's easy for the uninformed to decide on the basis of some Open Source failings; to overlook that the discovery of those failings is itself a demonstration of the advantage of anyone with the desire being able to review the code; to forget about all the failings of Closed Source (Flash/Java upgrade v?? anyone) despite the fact that very few independent reviews of Closed Source occur - and to draw the false conclusion that they have made a balanced comparison.

Apropos of which - "if two archers shoot at a target and A misses by 2 metres, and B misses by 20 metres - who is the better archer?"

Anyone vetted the major players in the OSS movement?

It happens - quietly. Likewise for the financial interests of the Closed Source competitors. If you trust them, I've got some hot share tips to sell you. Kind of like comparing stupid with ignorant: stupid has its charms; ignorance requires dedication.


Has Linus himself been subverted for example?

Well... he's an American citizen (cough but his father is not*1 cough). Linus says no, he's never been approached (cough while saying it). But then he would say no - it's illegal to say yes.

Can the Linux kernel itself be trusted?

No.

Can you trust any other kernel (even the HURDs)? Maybe we should trust Windows, after all it's been independently reviewed by China and Russia - it's not like either of those countries have access to Linux.

Personally I'd put more trust in something that the worlds greediest companies use to trade options with, and airplanes use to stay in the sky. We should especially trust any OS that's deployed by the US government.

Just a hypothetical question, but something to seriously consider.

Rationally, and in context of course.

*1 But - he is a Communist, and, despite your total lack of research, you, assuredly, are not part of a FUD campaign. Actually - I'm not certain about much; there's so little mathematical proof (sigh).

Clive Robinson • March 17, 2015 8:35 AM

@ No Such Agency,

Why does everyone think FOSS/Linux is secure???

Not everyone does, but it has the potential to be more secure if you put the effort in.

Ignoring hardware security issues, there is nothing to stop you rolling your own toolchain; thankfully it can be very primitive, and you build it up through several iterations until you get to the point where you can add seriously complex code.

I've done this for different reasons in the past, and in total there was about two months work spread over a couple of years.

You can then get to the point where, with a compiler you trust, you can compile code you don't really trust, such as FOSS, and then study your compiler's output to check what is hitting your assembler.

The advantage of FOSS is not that people do it, but that you can do it if you wish, which you cannot do anywhere near as easily with closed source code.

Further, it is a lot harder in FOSS to hide back doors etc., whereas with closed source you only have to get it through your company code review process, if there even is one.

So FOSS is "potentially" more secure, if and only if you put the effort in...

No Such Agency • March 17, 2015 9:42 AM

@Scott:

"Everyone" was over-stating it, I agree.

I agree also that closed source mass market software also has serious problems (the extent of which is so severe it raises the question of intent).

I don't think it is accurate to say China/Russia don't have access to Linux. I'm also not sure how that makes Windows more secure. It's possible that if Windows is insecure deliberately, they could have special versions for those countries with strong crypto and any obvious but deliberate "holes" removed. I've actually seen "export" versions of Windows, and the install size is smaller. Makes you wonder what was removed.

code can and is rejected for not being up to standards

Code style? Readability? Function? Algorithmic design? I think if someone wrote good code (as in good English) then nefarious code has a better chance of making it through than if it is poorly written.

I refer back to the "many eyes" fallacy. As the TLS keep-alive flaw demonstrated, considering that was SECURITY CRITICAL code, because the developer was trusted, no one even looked at his work. Even if they had, it would pre-date any known security issue with the code, so what was the likelihood the major flaw in the code would be spotted?

As for code standards, apparently they are atrocious, with calls for (again, security-critical) code to be "abandoned" because it is so dire, with little to no documentation, that a complete re-write is easier.

@Clive:

Good points.

harder in FOSS to hide back doors

I guess it is a question of degree. In some areas I could imagine certain types of exploit being obvious (why would a text editor need SSH?), but if that hostile SSH server were buried in the net code for an FTP server, would it look out of place?

WinterMarch 17, 2015 10:17 AM

@NSA
"I guess it is a question of degree. In some areas I could imagine certain types of exploit being obvious (why would a text editor need SSH) but if that hostile SSH server was buried in net code for an FTP server, would it look out of place?"

I think the real point is that there exist processes and tools that allow us (imperfectly) to defend against the type of attacks discussed here. All of these defenses ultimately require that the tools and other software are FLOSS to work.

With proprietary tools and programs, there simply is no recourse.

BrookeMarch 17, 2015 11:05 AM

I found this read interesting and slightly scary and then I read an article on threatpost today about stealthy, persistent DLL hijacking (https://threatpost.com/stealthy-persistent-dll-hijacking-works-against-os-x/111661) The part that was alarming was this little blurb:

“My malware infects Xcode and any time a developer deploys a new binary, it would also add the malicious code,” Wardle said. “It’s an anonymous propagation vector.”

There you go! Load malware onto dev machines, infect xcode, profit or spy, whichever your flavor.

Nick PMarch 17, 2015 1:56 PM

@ No Such Agency

There's barely need for speculation as prior work has covered the topic very thoroughly. Here is the original work on subversion. It's quite thorough. Anderson built on it with this exemplary subversion of the NFS file system. The modification required only 11 lines of code split between two unrelated parts of the Linux kernel. Such modifications are less likely to get noticed.

So, you don't need a whole SSH system embedded in anything. You really just need a simple way to escalate privileges, especially turning incoming data into code. That common constructs (e.g. buffers) and mistakes (e.g. buffer overruns) lead to malicious code execution means the subversion agent's job is even easier. The agent just needs to intentionally screw up in privileged code. It's not surprising that this is one of intelligence agencies' top methods of subversion. It's also the most deniable.

@ Winter

There have been a large number of papers that offer solid security enhancements for closed source. The Proof-Carrying Code schemes can deliver typed assembler with proofs to show safety. Another scheme modified all the jumps in a Windows binary in such a way that common attacks couldn't get outside of the running application. Another applied instruction set randomization to Linux executables at the hardware level without their source. There's also MAC and separation kernel approaches to containing individual components while mediating their actions at the interface level. And so on.

It's a myth that proprietary or closed source can't be guarded. It can and to greater degrees over time. The most honest statement is that open source is easier to vet and protect than closed source products. As in my essay linked above, the actual assurance is determined by that vetting process. Most FOSS isn't vetted so the many eyes argument fails in practice. Like I told DB, only proprietary software ever achieved high assurance. FOSS maxed out at EAL4+ (low-medium).

So, in reality, FOSS rarely to never achieved strong security while many closed source products did. I bet many FOSS enthusiasts would find that hard to believe. Yet, that's what they've achieved in practice. So, who do you trust for more secure software now? :P

Leben der AnderenMarch 17, 2015 3:39 PM

NSA, CIA, Nice work, shitheads! Now a majority distrusts the products of the entire ICT industry.

57% stand for law and order and Article 17, saying it is unacceptable for the government to monitor the communications of U.S. citizens. Pew never bothers to find out if they know that's the supreme law of the land, with which law at all levels of government must come into compliance. That would let the cat out of the bag.

dbmMarch 17, 2015 3:57 PM

Frankly (and I'm sure I'm not alone), while very busy pursuing my daily business, and though the notions here seem obvious in retrospect, I find the whole idea as shocking as flying loaded airliners into skyscrapers. Why must we be surrounded by madmen?

Sometimes I long for the distant days when we had our computer in a laboratory and connectedness was just a distant dream of some others. Paper tape and front panel toggle switches brought a kind of confirming comfort. (sorry for rambling...)

dbmMarch 17, 2015 4:05 PM

...we need the computer equivalent of Dutch Doors -- those doors split horizontally in the middle, where each half could be opened or shut independently of the other half. I understand these were invented so that strangers knocking in the night would be forced to stick their head down through the lower half, whilst someone waited on the inside with a sword to lop off the heads of bandits.

65535March 17, 2015 6:58 PM

@ Nile

“And the Big Picture? I cannot imagine a more damaging attack on the USA's economic interests than the lack of trust that these and other NSA actions have created, and is still creating. Sooner or later the other shoe will drop, in the form of tangible losses to US or foreign citizens mediated by these backdoors…” -Nile

I agree. When US tech companies don't allow the Intelligence Community [IC] in their basement, the IC declares war against that company – behind closed doors. This time it is Apple. There are others, such as Google, to an unknown extent.

Cisco is one of the many shoes dropping.

"…we get another quarter of China just saying no to spending any more money on companies which are, as far as Beijing is concerned, a natural extension of the NSA. According to Reuters, China has just dropped some of America's leading technology brands from its approved state purchase lists, chief among them Cisco (which already was hammered a year ago due to the Snowden revelations), and everyone's favorite $1 trillion market cap…” –Zero Hedge

http://www.zerohedge.com/news/2015-02-25/us-espionage-blowback-china-drops-apple-cisco-state-pruchase-lists

"The 'Snowden Effect' Is Crushing US Tech Firms In China"

‘EARNINGS IMPACT’
“…IBM, which reported a 22 percent drop in third-quarter China sales, led by a 40 percent decline in hardware revenues, may be a bellwether for the 'Snowden Effect' when it reports fourth-quarter results later on Tuesday…” -Business Insider

http://www.businessinsider.com/the-snowden-effect-is-crushing-us-tech-firms-in-china-2014-1

'Cisco to lay off 4,000 workers; some will blame Snowden' -ABC

http://abc7news.com/archive/9206694/

'NSA Spying Risks $35 Billion in U.S. Technology Sales' -bloomberg

http://www.bloomberg.com/news/articles/2013-11-26/nsa-spying-risks-35-billion-in-u-s-technology-sales

@ Leben der Anderen

“NSA, CIA, Nice work, sh*theads! Now a majority distrusts the products of the entire ICT industry.” -Leben der Anderen

I concur, plus the NSA slide that said “We hunt System Admins” did not help their reputation… are you listening NSA and Lockheed Martin (NYSE: LMT)?

Some people are changing their digital preferences. I use DuckDuckGo and Ixquick HTTPS for my browsing. I rarely use Google because of Schmidt's security clearance and his declaration:

'…Schmidt stated that the government surveillance in the United States was the "nature of our society" and that he was not going to "pass judgment on that".'-Wikipedia

https://en.wikipedia.org/wiki/Eric_Schmidt#Privacy

[Schmidt’s net worth decline]

"Net worth down $9.2 billion (2015)"

https://www.google.com/about/company/facts/management/#eric

@ Nick P

Your post was very interesting and to the point. I agree with most of it.

“…There's been a large number of papers that offer solid, security enhancements for closed source. The Proof-Carrying Code schemes can deliver typed assembler with proofs to show safety. Another scheme modified all the jumps in a Windows binary in such a way that common attacks couldn't get outside of the running application. Another applied instruction set randomization on Linux executables at the hardware level without their source. There's also MAC and separation kernel approaches to containing individual components while mediating their actions at the interface level… in reality, FOSS rarely to never achieved strong security while many closed source products did. I bet many FOSS enthusiasts would find that hard to believe. Yet, that's what they've achieved in practice. So, who do you trust for more secure software now? :P” – Nick P

But, I am still not happy with Microsoft’s explanation of the NSAKEY debate.

“In September 1999, an anonymous researcher reverse-engineered both the primary key and the _NSAKEY into PGP-compatible format and published them to the key servers.'-Wikipedia

“Primary key (_KEY)”

[See: PGP 1024 key]

“Secondary key (_NSAKEY and _KEY2)” –Wikipedia

[Also see: PGP key]

https://en.wikipedia.org/wiki/NSAKEY#CAPI_Signature_Public_Keys_as_PGP_Keys

[Please excuse all of the grammar and other errors]

Nick PMarch 17, 2015 8:12 PM

@ 65535

I already posted an analysis of that here. They said they had to do it for export reasons (re key "backups"). If it was forced and for export, then it has to be NSA-mandated key escrow. That's what others were doing in that time frame if they didn't limit the strength of the encryption. That's also consistent with a declassified CIA document. So, it's best to assume that's a backdoor.

The good news about that one is the backdoor is escrowed keys. Implies they didn't have a backdoor in Windows itself at the time.

Dirk PraetMarch 17, 2015 9:50 PM

@ Nick P, @ Winter, @ 65535

So, in reality, FOSS rarely to never achieved strong security while many closed source products did.

Entirely correct, but let's not forget that EAL certification is a lengthy, time-consuming and costly process that many FOSS vendors will opt out of unless they have a very serious business case. Another thing to consider is that a higher EAL does not necessarily mean that the product, from a functional angle, is also more secure than one with a lower EAL. Unlike the fixed assurance requirements needed to achieve a certain EAL, the functional part pretty much depends on the features described in the certification's security target document submitted by the vendor. A final nuisance about the entire process is that service packs, updates and patches can invalidate the certification and may require full or partial re-evaluation.

So the question is not whether it's FOSS or proprietary software that is more secure, but to what extent the product is/has been

  • properly designed (with security in mind)
  • implemented
  • maintained
  • audited by independent third parties
  • targeted by TLA's

Nick PMarch 17, 2015 11:00 PM

@ Dirk Praet

All true. My statement was more focused on the methods rather than formal certification itself. The FOSS products don't have a precise description of design, requirements, security policy, and so on. Further, they don't have thorough audit for vulnerabilities or covert channels. Just these basic things separate high assurance proprietary from FOSS before anything else is even considered. How can one argue for the security of a program without being able to clearly describe the program's functionality or security policy?

FOSS has a long distance to travel to arrive at strong security. Unfortunately, I see no evidence of it even being on the right path to get there. At least certain companies and academics working on highly assured deliverables are releasing them as open source after the fact. That's a start for wiser developers.

FigureitoutMarch 17, 2015 11:59 PM

RE: FOSS vs. Proprietary
--FOSS wins more so w/ a toolchain you can take apart. With proprietary you get a toolchain you can't take apart (or you break a million things). Granted, GNU/GCC has grown such that it's in proprietary products, and I couldn't easily take it apart.

Proprietary delivers on better performance and obfuscation for initial attacks. Even if underlying security is bad, if new products change more obfuscating details, then attackers got new details to RE and it's a goosechase.

A lot of other things have been stated or are obvious; what we desperately need is a comprehensive PLAN. Everything PLANNED out, and counter-measures in place for attacks and unexpected changes. You need people, the best and most dedicated/committed people, in every area to come together. Every project is going to reach a point of "too many cooks in the kitchen". So it's best, like we see w/ markets, to find the area you enjoy most and deliver the *best* of that area. Personally I like C-level embedded for now and will slowly move to more assembly and dig deeper and deeper, in addition to high levels of OPSEC (and catching all kinds of intruders). An example of divvying off chunks of the project is as follows:

1) Chip/IC (high-level) development (we need *the best*)
2) Chip/IC Process (again *the best*, entire project can be ruined here)
3) Chip/IC Process OPSEC
4) Electrical/board layout design (making safe board for full operation)
5) Chip/IC Vulnerability/Testing team (these guys seek out bad OPSEC and design errors at chip facilities)
6) Toolchain Software development (important for finally getting some eyes into the chips, leverage existing probing software)
7) BIOS/Firmware team (another important group, will have to be closely watched)
8) Security of toolchain and OS's (likely Windows initially or Linux, but some of the best keeping signed versions of IDE's and keeping air-gapped/shielded PC's running normally, another critical group here, could go wrong)
9) Higher-level software team building browsers, integrated network analyzers/firewalls, text-editors, and other useful simulation software (these guys will be giving a face to the final product, so again critical, don't make it crap and annoying, give the immediate user control)
10) Physical OPSEC (watch over facilities 24/7, there will be many audits of this group)
11) Network team (they maintain and monitor external/internal networks and keep them as secure as possible, we won't be running a bunch of crap and maintain dedicated internet PC's for research (file exchange requires more scrutiny, and no bringing in files from home))
12) Business/Funding/Marketing (leaving these areas as one group, will likely be separate)

Thinking about it more, we could come up w/ better groups; w/in each is a small dedicated testing/Quality Assurance team (they have to be the most OCD ever, getting antsy over any little wrong detail). It's probably going to remain really scattered except for large engineering companies. If anyone from there is listening: we're seeing, unlike ever before (Broadcom/RasPi, TI/BeagleBone, Freescale/Kinetis, Atmel/Arduino, etc.), huge companies making their designs and software open-source (though there will still be chunks of IP in the chips or in the process of making them).

I personally think our best hope is w/ these companies and the engineers w/in them BEING HONEST and being watchdogs on the process and rooting out threats/backdoors. You buy chips from companies, not gov'ts or academia. They have all the tools you need in a lab that won't be matched in home-labs except for a few diehards, and they have relationships w/ Fab labs and can *potentially* work out a special deal for higher assurance.

Organizing such a project is beyond my means right now, and won't be cheap. You have to be prepared for attacks, of which I could consult on a few now. But damn, this is my dream job, working on *the most secure PC in the world*. That's what we paranoids need. I'm tired from school/work and generally have ~3-4 hours tops after being tired to work on my personal stuff. If someone from Google is listening, here's a project.

WinterMarch 18, 2015 2:41 AM

@Nick P
"The Proof-Carrying Code schemes can deliver typed assembler with proofs to show safety. Another scheme modified all the jumps in a Windows binary in such a way that common attacks couldn't get outside of the running application."

We are not talking about common malware defenses. This is about the Trusting Trust attack.

The whole point of the trusting trust attack described by Thompson is that if you cannot trust your tool-chain, you cannot trust your binaries. What you tell us is that we could check the safety of our binary compiler using another binary we can neither check nor control.

In the current particular case, we would try to check the XCode binary using a binary produced by XCode. Or, if it was not produced by XCode, it was produced by another firm using another compiler we cannot check for security.

That is exactly the scenario Thompson was describing.

The defense Wheeler described can actually show you that your compiler is secure, after which you can build a trusted tool-chain yourself.

Clive RobinsonMarch 18, 2015 4:33 AM

@ Figureitout,

Personally I don't think we will ever be able to get, let alone ensure, security in high density SoC and above systems. Nor do I think we will ever be able to get, let alone ensure, security in "do everything" OS's and graphical environments.

The reasons for this are,

1) the limitations of individual humans,
2) the influence that can be brought to bear,
3) the complexity of these large scale systems,
4) we can show that even when a vulnerability type is known it cannot be detected unless it's in use,
5) the simple fact that we only know a fraction of the vulnerabilities there are to be known, not just today but the new vulnerabilities of next week / month / year / decade etc.

The solution is to remove as much complexity as possible, strongly segregate / compartmentalise, fully mandate and monitor interfaces, and apply strong mitigation techniques.

History shows that this is the route taken by the likes of the NSA et al when security had a higher requirement than cost, not just in the designers' minds but in layers 8 and up.

I still program "on the command line" without many of the "must have productivity tools" that take so bl**dy long to learn. Which also change every ten minutes for "marketing" --read profit-- reasons.

I have written my own tools and limited OS's and byte code interpreters that run on hardware I have designed for customers in high assurance systems. Thus I have my own tool chain.

As I've often said I use development boards for simple microcontrolers as nodes in more complex systems with serial interfaces that I can and do monitor with other check nodes when my needs for security are increased. I only use "human readable" data protocols on interfaces and files (text / RTF / HTML) that are not from trusted and verified sources.

They lack "bells and whistles" and they are far from "optimal". However they have the advantage of simplicity, modularity and easy portability, and importantly ways to verify at each point. Which means you have a secure base to work on. You could then use other tool chains and the processes Nick P favours above this point, but pull the results out as text-based assembler or human-readable byte code files that you can "walk through" with the original high level source code. Thus you can have a more "productive" environment if required.

As I've noted before there are single chip systems from the likes of MicroChip that have about the same power as minicomputers from times past. Thus terminal based *nix etc is well within their capabilities and the chips are just a couple of dollars each. Thus putting several on a PCB in such a way you can strongly segregate them is trivial and low cost (the PCB easily costs ten or more times the cost of a chip even in moderate quantities).

As you are getting "hands on" with embedded systems have a think of going down this road as it is within an individuals capabilities.

Scott "SFITCS" FergusonMarch 18, 2015 5:17 AM

@No Such Agency

I agree also that closed source mass market software also has serious problems (the extent of which is so severe it raises the question of intent).

I should clarify that.

  • Some closed source software has serious problems - both at project/application, and at a company level.
  • The nature of the license does not determine the security of the code. I think of it as a "security by obscurity" problem. As Nick P frequently points out - there are securely coded closed source applications (perhaps even OS). My "preference" for Open Source is primarily because I like to verify when I can and then compile from source. Closed Source does not allow me to do that.

@ Nick P

We (possibly?) have a difference of opinion over definitions - by my definition a Closed Source project that releases the source code is not a Closed Source project (Open Source does not only mean GPLvx).

I don't think it is accurate to say China/Russia don't have access to Linux.

Absolutely! :) I was being ironic - the question of verification by access to the source code is misleading. It's only useful if you then compile from that verified source code. Much of the "debate" about Closed vs. Open is brain dead. Hence my analogy of the two archers - the correct answer is neither of the 'apparent' choices. Both archers are rubbish - so neither can be better or worse than the other.

I write, both as a reply to you, and, to current and future readers (also so I can "see" what I "think" in the hope I'll catch one of those rare moments of self-insight).


[returning from that tangent] The nature of the license does not inherently make the product more secure - it's the intent, capability, and honesty of the project that determine its security. Though in practice it's more complicated - i.e. what will it be exposed to? For how long? Understand that, other than qmail, no software is perfect, just relatively perfect in the context of those first two questions.


I'm also not sure how that makes Windows more secure? It's possible that if Windows is insecure deliberately, that they could have special versions for those countries with strong crypto and any obvious but deliberate "holes" removed.

The nature of the company that controls it (MS) and the companies that produce the code, and many of those that "verify" it - make it far more likely that the code should not be trusted. i.e. the security procedures for determining who contributes the code, and control of the "chain of custody" to release, are both weak (IMO). Most importantly MS is a company driven by shareholders - who by their nature are interested in short term returns (yes, I know how their dividends work, but the fact remains unchanged). Shareholders are people who will happily dump sewage upstream if it'll save them a dollar, more happily if it'll make them fifty cents. Moreover the shareholders invest the actual voting influence in a small number of individuals who care even less about enlightened self interest. Add in the "human wave" style of programming employed by MS and the companies it outsources programming to (as a means of keeping pay low and conditions cheap) - and the whole "import workers" because we want to lower costs (shareholder influences) and you add more elements of potential insecurity (bitterness, incompetence, more difficulties in auditing security of personnel and processes, more redundant code that is hard to audit).

I'm sure that some will argue that "MS wouldn't pollute their own drinking water because [insert rationale based on the triumph of optimism over experience here]"

The major weakness in terms of the NSA et al is that shareholders demand a profit - regardless of the altruism of management. That means when a government agency says "give us a font flaw backdoor for our use" and we'll put in a "good word" - they will almost certainly comply, even if it's just at a regional sales level allowing the supply chain to be subverted (a little more complicated but very do-able). Can the NSA et al "put in a good word"? Um, yes. They wouldn't even need to employ longstanding FBI type tactics in, say, Washington - it's like asking if Republicans like Baptists (or southern chicken chain outlets).

NOTE: the "traditional" Open Source (mainstream public perception of the model) has similar problems. The best way I can think of to explain the real issues is "it's not the code of football being played - it's the individual players, management, coaches, and supporters that determine the best team on the day" (and even then it doesn't cover it.)


I've actually seen "export" versions of Windows, and the install size is smaller. Makes you wonder what was removed.

I've looked superficially - mostly the export editions get less bloatware - primarily because much of the bloatware was English only. Is it more trustworthy than the non-export editions? I doubt it - very much. See my previous comments here and elsewhere about the irrelevance of source where you don't have reproducible builds and you intend to use the suppliers binaries.


code can and is rejected for not being up to standards

Code style? Readability? Function? Algorithmic design? I think if someone wrote good code (as in good English) then nefarious code has a better chance of making it through than if it is poorly written.

The easiest way for me to answer those questions is indirectly. Good code is secure code. Secure code is verified code. If it's obscure it can't be verified. If it's obscure it can't easily be maintained or tested. Maintenance and testing are the majority of the work in any project. Verification at a high level is something that's outside my experience - but it begins at the specification level, followed by review as code is incorporated into a project. In real life it's much harder to hide something in plain sight than in fiction. If it's clean, simple and clear what the code is supposed to do, then it can be tested to see that it does only what it says it does, and that it does what it should do (per specifications). Again - that is outside my realm of expertise. It's a question Nick P would be better able to answer - though I suspect Clive Robinson might have some informative "head out of box" insights.


I refer back to the "many eyes" fallacy. As the TLS keep-alive flaw demonstrated, considering that was SECURITY CRITICAL code, because the developer was trusted, no-one even looked at his work. Even if they did, it would pre-date any known security issue with the code, so what was the likelihood the major flaw with the code would be spotted?

There are a number of logical flaws in that belief (I hope you won't take that personally). There's a major difference between "blind trust" and "informed trust" - the latter requires verification, and at best only shows a degree of evidence that in the past something/someone performed a particular way, plus extrapolation (something us fleshbags are notoriously bad at) from that to predict what someone/something will do in the future (phew!).

Firstly you call the "many eyes" argument a fallacy; sorry, but that's bull. Open does not mean insecure any more than Hidden means secure - regardless of whether the scenario is physical, psychological, or language. It's a fallacy to believe that because the source is open it's going to be verified - and all the SSL problems prove that.

There is a problem of "misplaced" trust (failure to verify). And from that you've extrapolated to a concern that later validation might not have verified the code and spotted the error. The code was later verified (a Google project, from memory) and found to be flawed. I don't think Eric Raymond really meant (certainly never said) that because something could be looked at it automatically would be seen. Complex things get simplified to sound bites so that journalists and other uninformed folks can feel informed. The process is known as "dumbing down" - trying to extrapolate from that (it was a line from an entire book) is like those television detective shows where they zoom in on an 80-line TV display and extract detail (bullsh*t is the end result).

Bruce has written extensively on the trust issues - I highly recommend you read his books and in particular reconsider those beliefs in light of the plumber/cheque example he gives.


As for code standards, apparently it is atrocious, with calls for (again, security critical) code to be "abandoned" because it is so dire with little to no documentation that a complete re-write is easier.

Again - you are making the same mistake (quoting?) made by others. Not seeing it in context. If you make a circular knife for cutting pizza and I use it to create an international franchise of carpet cutting chains, it's incumbent upon me to verify that the blade is capable of doing the job. More so if you freely gave me the pizza cutter. Especially if it's all "apparent" as opposed to "factual". Yes - there are many loud opinions about how obvious (after the fact) the errors were (someone should start a Monday morning football league - the world is full of A-grade players). But like any field - you only know how good you are in comparison. It's not Linus Torvalds' fault if GIMP is flawed, or openssl - should we form a corporation/authority to ensure it doesn't happen again? (why reinvent MS?) (typo in the previous post: it should read "Linus nodded yes while saying no when answering a pre-arranged question about whether he'd been approached by the NSA").

The problem is not that the code was not able to withstand unforeseen attacks - it's partially the nature of software, but mainly a problem of failure to verify by the end-users.

That remains a problem with any software development project regardless of the nature of the license. Independent verification is the only rational basis for trust. How much trust to give independent verification is proportional to the skill and experience of the verifier, and their direct relationship to you.

Kind regards

65535March 18, 2015 5:52 AM

“The good news about that one is the backdoor is escrowed keys. Implies they didn't have a backdoor in Windows itself at the time.” – Nick P

You seem to have studied the issue. I hope that there is no back door in the OS itself. Knock on wood for good luck.

Nick PMarch 18, 2015 10:23 AM

@ Winter

The Trusting Trust attack was countered easily by the diverse double-compiling paper. Also, in the real world, it seems to not happen at all outside this XCode attack. Otherwise, they wouldn't need 0-days everywhere. It seems that commercial and FOSS compilers have enough verification that such a backdoor is hard to introduce. Even the XCode attack seems to be one where they hack developers' computers and plant the modified tool on their machines. So, the solution is a base toolchain developed and checked by diverse parties. There are many. The amount of worrying Thompson's paper generated far exceeds the risk it identified.

Far as what I described, most of them are specific enough that someone could build their own with a toolchain they trust. On rare occasions, the source code is available too. The proving tools particularly tend to come with verifiers that everyone can look at and vet: the verifier is the TCB. The ML languages and C both have certifying compilers whose output can be traced. Plenty of options for the person worried about toolchains. I also pointed out in similar discussions that the Oberon language, compiler, and System are simple enough that amateurs regularly port them to new platforms with success. Write your toolchain in *that*.

The truth, though, is that most people won't do the work necessary to fully trust their tools. So, they must place the trust in a third party. That third party better be trustworthy. Apple has a very poor record in both security and transparency. Not trustworthy at all. An open or closed-vetted model is definitely better for core toolchains. That, along with networking effects, is why all the new languages being released by big companies are doing it open-source. Except Apple's. Lol.

Nick PMarch 18, 2015 10:27 AM

@ Scott

I was going to say I'd reply to your post later as I'm about to start a long day at work. Yet, as I read it, I noticed that every quote that comes after "@ Nick P" is "No Such Agency's" words. That's kind of misleading. The last statement, though, could've been the thesis to my own essay. Worded even better, though. ;)

WhatItIsMarch 18, 2015 12:50 PM

@CIC "So a world in which everything—from bitmaps to blood—can be understood as a "form of speech" is also a world in which nothing actually is understood, a world in which what a speech act does is disconnected from what it means."

There's a profound category error in this statement. Subsuming bitmaps and blood under the same concept is invalid because the two things are essentially different in kind.

Aristotle identified two different types of being, that which exists in a primary sense and that which exists in a secondary or derivative sense.

He used the example of wax and the impression of a signet ring to illustrate the difference. The wax itself has substantial being, whereas an impression in the wax exists in the arrangement of the substance. The wax itself has primacy over the impression in that it can exist without the impression, but the impression can't exist without the wax.

There are then two basic types of being, that which simply is and that which is in other things.

Bitmaps, in whatever media they are found, are like impressions in wax. Blood, however, is substantial and so is more like the wax.

More details can be found in the excellent book "On the Several Senses of Being in Aristotle" by Brentano.

Scott "SFITCS" FergusonMarch 18, 2015 3:07 PM

@Nick P

Grammar-nazi and semantic-pedant alert - 16-minute keyboard mash! (before work)

Thanks for your time Nick

I was going to say I'd reply to your post later as I'm about to start a long day at work.

I get that :/

Yet, as I read it, I noticed that every quote that comes after "@ Nick P" is "No Such Agency's" words. That's kind of misleading.

Yes. My apologies for the lack of clarity, formatting is very limited in this implementation of the CMS - the single paragraph prefaced @ Nick P was the only one directed at you. Though comments in the preceding post in that thread about "lack of verification of Closed Source" and the difficulty of appropriately weighting a decision matrix for "Closed vs. Open" were "kind of" in response to comments you've made.

Additionally, I was writing much faster than normal due to my frustration with yet another mysteriously vanishing post. Apropos of which: Dear moderator, if you are going to remove my posts after publication, it would be instructive if you could contact me at the conveniently supplied email address so I could learn from your reasoning. It's a pain having to try to remember and rewrite posts after hours of waiting to confirm they have indeed been eaten by the internet pixies, and it might also lead to the suspicion that all is not what it might be on Bruce's site. Thanks.

The last statement, though, could've been the thesis to my own essay. Worded even better, though. ;)

:) I would hope so (worded better). Writing in these tiny textboxes, at the speed of thought ("On the Road" stream-of-consciousness style, without the assistance of adrenaline analogs), and especially without independent proofreading, leaves considerable room for improvement in clarity and succinctness(ness).

The "reason" for that, confusingly formatted, aside was to clarify my position and avoid the appearance of supporting the false dichotomy of a binary choice - Closed or Open. Not only are loose concepts of license type a poor basis for determining the "security" of the end product - they overlook the most important factors in a decision matrix: context (what are you using it for - a complex subject involving environment and stakeholder analysis/guesswork); primary operator skills and awareness; and risk management.

IMO faith in intuition is a primary human failing - so central to our sanity that challenging it invites cognitive dissonance (just watch the denial antics of those challenged to test their intuition). Most proponents of a binary license-style choice in the quest for "security" fail to consider any of the points I've mentioned. A strong background in security or programming is not a defense against that bias. Very few make the detailed, self-aware, and experienced analysis of their "belief". "Clive" would be a good example of those few that do, and he manages, whilst strolling and operating his Crackberry with both thumbs, to make his case much more succinctly than I. Kudos, Clive "it's my real name" Robinson :)

It's a fascinating subject, with a degree of difficulty similar to do-it-yourself brain surgery.


Kind regards


ModeratorMarch 18, 2015 3:26 PM

@Scott "SFITCS" Ferguson -- It's not clear what posts you're referring to, but I've just checked, and see that none of your recent comments have been unpublished, marked as spam, or otherwise deliberately deleted.

MikeAMarch 18, 2015 8:22 PM

An answer on Quora (I know, apologies) claims an example of the Trusting Trust hack in the wild:

http://qr.ae/QTeX7

Meanwhile, is GCC capable of being compiled by a standards-compliant compiler anymore? If not, the diverse-compiler fix is not applicable. I'm pretty sure that a lot of GNU/Linux code can only be compiled by GCC. (What was that about embrace, extend, ...?)

FigureitoutMarch 18, 2015 10:54 PM

Clive Robinson
--There's no such thing as absolute security, but it'll be better than not implementing the conscious process I touched on. As I say, talk's cheap; if a group can actually make that happen, then that's a serious accomplishment (there's an opportunity for someone[s] to really make a name for themselves). I think we've all had enough time for the "shockwave" to settle in. In addition to providing more work for TLAs and showing clearly who the real terrorists are - the scaremongers (who can't even predict ISIS) preventing people from having secure computers - there's principles behind it too.

So,
1) limitations go both ways (sneaking in more attacks)
2) influence brought to bear has costs
3) don't have a snarky remark here, meh
4) this is why honeypots and "traps" are useful for defenders
5) goto 3);

You don't need to tell me to get into embedded toolchains, chips, etc.; it's my favorite area. Seems as if you got lucky and were able to get paid in your day job and then use that as you'll be real familiar w/ it (I likewise believe I could modify a product I'm working on to be *very* cool and useful for various sensing or switching applications (ie: like some of the power systems you see w/ small yagi antennas)). I've been mulling over some PIC chips to dabble in, as I said to you (w/ no response, mind you), I have some OTP, very limited chips, and I need basically a "flashable" version of it to get the ROM right before seeing how these work (I've never messed w/ true ROM chips). I downloaded the "starter file" (didn't save, just to see) and it truly is minimal lol, I don't have such an ego to think I can write a perfect program from scratch 1st time and write-once, I need a relatively bug-free functioning binary (small, 4K IIRC, 192 bytes of RAM). What I'm really interested in is a parsing "node" I guess you can call it rendering only acceptable formats like you have, and rejecting others. Obviously I want it to be nearly foolproof and not subject to tricks/hacks. I figure having something like a write-protected OTP chip would be a strong start "facing the wind". I'd prefer to start off w/ a PICKIT so I can just focus on code initially, get it working, then study circuit and work out ways to make some custom programming interfaces and get a custom toolchain eventually getting off MPLAB maybe...It's one of quite a few projects I mull over, until I feel confident enough to implement.

RE: complexity
--Do you simulate circuits (think LTSpice-like programs) or simulate RF effects via the command line? If those programs can't run securely, the knock-on effects are drastic. Think of software at NASA: are they still using core-rope memory and the command line to get to Mars? I hope not. These tools have significant value, and they need security support; force the really freaky bugs to be found.

RE: "productivity tools"
--I hear you; I learned a lesson not to trust a salesperson claiming the "latest and greatest" (whose actual simulation is pretty good, just... not really necessary for us...) tool won't completely screw all your old files. Then you spend a week minimum getting used to what some insane code monkeys spent a few months on, and of course some of the "add-ons" return 404 errors LOL; f*ckin' christ... This is why I hate things like when Bruce changed his site layout (where's my blue?!); sites change things, Win8 changes things, and it makes them worse! If they'd just focused on WinXP and kept the old GUI, straight lines... etc. I'm done ranting lol.

MikeA
--Yikes, pretty funny story (nerd war lol, bet that was a rush finding that "bug" (more like an attack)). I stumbled across a pretty severe bug today (only in programming can you be so near a major bug fix, then suddenly your world shatters). I can't really describe it well enough to make sense w/o you looking over the files etc., but let's just say you suddenly can't flash your board (your programmer stops functioning correctly, yet doesn't return errors and "connects to chip"), even w/ old "safe" binaries, b/c some other variable in a different part of memory suddenly gets some insane value (I believe it "looped around" to a max value, like I've seen before using "bytes" in Arduino). This value gets multiplied and used for another address, something super super strange happens; and it kills me not really knowing what is happening (but I know what is likely causing it). Ah, just Wednesday... thank god for a recovery feature we made.

Nick PMarch 18, 2015 11:37 PM

@ Scott

It's all good. Stuff happens. I appreciate the clarification.

re point you made in previous post @ me

It's actually not quite so simple. Totally unverified, closed-source applications are clearly closed source. Applications released in full source form are clearly open source. That includes a paid app that comes with source: proprietary, open-source. What of the other models I mentioned? For instance, source shared to one or a few others for review without widespread publishing is still pretty closed off to me. I classify it as closed source and vetted. Might need a mainstream term for that one.

The other issue is development model. There's code that's open source in a literal sense. Then, as you mentioned, people use the term in the sense of a community developing the software. My argument about high assurance open vs closed was that the community-driven approach doesn't seem to work [so far]. It's always been one or more skilled people developing the system with little to no external input. The system might then be open sourced or remain closed. Yet, the security of the system wasn't usually dependent on the open or closed part as much as the team. You covered that in your overall post.

The rest of your post is mostly good points. We're in agreement about a lot.

@ MikeA

If the code is tied to GCC, then that could be a weakness. Dependence on a program that one can't understand does give the program's authors leverage. The solution, if one *has* to depend on GCC, is to vet it with your own toolset that catches the common errors. The better option is to simply ensure you use apps written in a portable way. A compromise might be a source-to-source compiler that converts GCC-specific code to general code. The two can be compared visually, then run through diverse compilers.
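As a rough aid to the "written in a portable way" option, one could scan sources for well-known GCC-only constructs before trying a second compiler. This is only an illustrative sketch; the pattern list is far from exhaustive and the matching is crude:

```python
import re

# A few well-known GCC-only constructs; illustrative, not exhaustive.
GCC_ONLY_PATTERNS = {
    "__attribute__":        r"__attribute__\s*\(\(",
    "statement expression": r"\(\s*\{",        # GNU ({ ... }) blocks
    "typeof":               r"\btypeof\s*\(",
}

def gcc_specific_constructs(source):
    """Return the sorted names of GCC-only constructs found in C source text."""
    return sorted(name for name, pattern in GCC_ONLY_PATTERNS.items()
                  if re.search(pattern, source))

# Example: a statement expression plus an attribute, but no typeof.
sample = "int x = ({ int y = f(); y * 2; }); __attribute__((unused)) int z;"
print(gcc_specific_constructs(sample))  # ['__attribute__', 'statement expression']
```

A real check would use the compilers themselves (e.g. attempting a build with strict standards flags), but even a crude scan like this flags the obvious lock-in.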

Clive RobinsonMarch 19, 2015 5:22 PM

@ Figureitout,

Whilst there is indeed no such thing as "absolute security", you can, if you work the right way, get close enough that attackers will either rattle your door then leave you alone, or end up using a more obvious approach such as "thermorectal" or "vice grips to the nads" or whatever the contents of your own personal "Room 101" are. And there are fairly trivial steps to stop those working, in that you cannot tell what you don't know.

The question thus becomes "What is close enough?", which boils down to very much the same as physical security. You use alarms and delaying tactics such that you can get a response in place before the attackers get to what they are after.

The problem with this is the issue of "locality" I've mentioned on the odd occasion ;-) With physical security the attackers have to be present at the point of attack, and thus responders can get up close and very personal with them. With ICT they can be anywhere on, or not too distant from, the earth, and can hide behind all manner of cut-outs, which makes a responder's options very limited (something I wish the US war hawks with their imbecilic "nuke the 13@5t4ds" attitude would wake up to).

Thus, unlike the physical case where the responders catch the attackers and haul them off to a place of confinement, all the responders can safely do is "pull the plug", "analyze", "patch/workaround" and cross fingers as the plug goes back in...

Thus, unlike the physical case, the information attackers have a very distinct advantage, and the defenders effectively have to sit there and take it.

Thus studying in detail the methods of how attackers attack and exfiltrate data is important. And with external attackers it boils down to two basic ways: "change flow control" and "insert rogue code". If you can reliably stop those, then an external attacker can only passively look on. However, easy as that is to say, in practice it's darn difficult because of the way we customarily go about things...

Thus "change current practice" and "reduce complexity" for journeymen code cutters is a sensible place to start, as I've mentioned on the odd occasion or ten ;-)

The simplest way to reduce complexity is "divide and conquer". If you have 10 basic functional blocks within the same code space, then you potentially have 45 pairwise relationships to sort out. Using effective segregation takes this down to 9 if they are chained. Just dividing the 10 blocks into two segregated areas reduces the relationships to two lots of ten.
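The counting behind this: n blocks that can all interact give n(n-1)/2 potential pairwise relationships, a strict chain of n blocks gives n-1 interfaces, and splitting into segregated groups leaves only the within-group pairs. A quick illustration:

```python
def pairwise(n):
    """Potential interactions among n blocks that can all see each other."""
    return n * (n - 1) // 2

# 10 blocks sharing one code space:
print(pairwise(10))        # 45 relationships to reason about

# The same 10 blocks chained strictly one-to-the-next:
print(10 - 1)              # 9 interfaces

# Split into two segregated groups of 5:
print(2 * pairwise(5))     # 20, i.e. two lots of ten
```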

It's usually a fairly simple process to do, but importantly, when doing so, other things generally become not just clearer but simpler and thus more robust...

Anyway...

With regards to not replying to a PIC question, I guess I must have missed it for some reason, thus all I can say is sorry :-(

The problem with OTP chips, as you note, is they are burn and "work or bin". The traditional way of doing development with these parts was not to... you spent the money on an ICE or emulator and wrote the code in a highly structured way, with one or two pins reserved for debug status, and various bits of test kit.

FigureitoutMarch 19, 2015 10:14 PM

Clive Robinson
RE: nad-grips
--Yeah, doesn't matter for me, I'll piss in their face if they try. I don't possess anything worth that.

Yep yep to the rest lol. I've studied some attackers quite a bit (and I don't like being studied lol).

I can't mention things since I talk too much, so I'll be unclear quite a bit, but your "design hints" don't really make sense lol. I already have general ideas and can work it out from here, so thanks anyway.

RE: OTP chips
--The point is firmware reflashing, assuming a clean write, I can saddle up and cackle at attackers trying to remotely reflash a ROM lol. Maybe fill up any remaining space w/ ASCII art lol, like a middle finger.

Clive RobinsonMarch 20, 2015 1:57 AM

@ Figureitout,

RE OTP,

As a cautionary warning, don't assume that because it's called "One Time" it really is... I've been bitten by that once or twice.

Even back in the days of "fuse link" PROMs you could often blow fuses that had not been blown if you could get the required voltage etc to do it.

With modern Flash --which many OTP parts are-- you can likewise overwrite existing programming by pulling unchanged bits down. Look at a byte as though you have eight cans on a shelf: your first "programming" knocks three of them off, so you still have five cans that can be knocked down to form a new bit pattern. That might, depending on the instruction set, give an attacker an opportunity... So you need to fill the parts of the ROM you are not using with all the bits knocked down, or write a memory checksum / hash checker that runs before any critical code (also useful for "sanity checking").
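The can-knocking analogy is just NOR-flash write semantics: programming can only pull bits from 1 to 0, and only a block erase brings them back. A toy model of that behaviour (the byte values are purely illustrative):

```python
ERASED = 0xFF  # an erased NOR-flash byte reads as all ones

def flash_program(cell, value):
    """Model a flash program operation: bits can only be cleared (1 -> 0).

    The result is the bitwise AND of the old and new values; a bit that
    is already 0 cannot be raised back to 1 without a full block erase.
    """
    return cell & value

# Fresh (erased) cell: any pattern can be written ("knock cans off").
cell = flash_program(ERASED, 0b11010110)
print(bin(cell))        # 0b11010110

# A later, unauthorized "overwrite" can still clear the remaining 1 bits...
tampered = flash_program(cell, 0b10010110)
print(bin(tampered))    # 0b10010110

# ...but a cell pre-filled with 0x00 leaves an attacker nothing to change.
padded = flash_program(0x00, 0b11111111)
print(padded)           # 0
```

This is why filling unused space with all-zero bytes (and checksumming before critical code runs) removes the attacker's room to maneuver.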

Because the silicon inside an OTP part is often identical to an EPROM / Flash part, this has been known to cause problems in the past. Basically, the original PROM OTPs were "one time" not at the UV-erasable chip level but because they were plastic packaged, not ceramic with the quartz window, and were thus very much cheaper.

But even when E/PROM parts started having a "security model", initially it was not to stop bits being changed but to stop bits being read out, and thus prevent people stealing the software. So if your security model relies on attackers not being able to change bits... don't believe the data sheet; actually test it with the production parts...

As for,

... so I'll be unclear quite a bit, but your "design hints" don't really make sense

There can be several reasons: firstly, I might not fully understand what you are asking for and thus give the wrong answer. Secondly, it might be something that has been talked about previously on the blog, and thus I just indirectly refer to it. Thirdly, I might be warning you of a "gotcha" that has previously taken a chunk out of my hide. Fourthly, I might not be able to give specific advice because I lack specific information about the problem, such as a circuit diagram etc.

FigureitoutMarch 20, 2015 8:56 PM

Clive Robinson
--I never really asked a question, just stated some things. I started reading *every single* blog post and every comment (spotted a viagra spam haha), but I don't have the time for it. I even started organizing the useful info, but that file was attacked on my PC (honeypot).
RE: "otp" chips
--I've been aware of that since at least last summer, when I started looking into ROMs that can't be reflashed. Writing something like a middle finger is a joke; in real life, if I can, I'd write 1 or F. Even still, w/ backdoors and "facing the wind" (spraying you w/ more chemicals than you can even wrap your head around), I like the odds against something reflashing a small chip I keep w/ me 24/7, programming pins protected, and fitting malware in the remaining space. That would require "external" surveillance, which I'm getting pretty good at predicting and, if I feel like it, avoiding. There's a feature that "erases flash" if someone tries to dump it; I don't really believe it, at all really. I'll play around w/ it when I can, but I need to find this motherf*cking bug that's whooping my ass and move on to some other things.

As usual, the actual implementation details pertaining to specific chips are more what I'm after. It simply takes time to get accustomed to them (then considering whether the datasheet is a f*cking lie). So secure implementations of protocols, and being sure I am actually getting a real picture of the memory, are big concerns as well.

But... a more interesting question for you (if you're up for it): have you ever had extremely troubling conflicts between different types of memory that should be separate? Say EEPROM and Flash... Let's just say I can flash the Flash mem. and EEPROM is unaffected, but EEPROM was able to very visibly affect Flash memory *across flashes*; as in, it did not matter how many times you re-flashed, until you reset some variables in EEPROM it seemed to prevent a new flash from happening. Again, details of the chip may be needed, and SoCs are a full-time job just to get a grip on their full operation (you think). It's a bit of a petri dish I'm dealing w/, but it really got me going. The actual bug is probably less interesting, I think *maybe*, but it's way too much to explain... NDA, I can't.



Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.