Comments

Martin February 11, 2015 10:37 AM

I don’t believe any of this. It’s just scare-mongering to generate more donations for the EFF. Just choose a longer password. That’s all you need.

Martin February 11, 2015 10:43 AM

More people die from insect bites than from the NSA, so don’t worry about it. Isn’t that right, Clive?

Worried February 11, 2015 1:03 PM

Yes, but insects don’t track the web sites we visit, or record the brand of deodorant we buy.

Katrina L. February 11, 2015 2:47 PM

There is literally not much anyone can do about this. Perhaps if people were willing to give up their much-beloved technology, things could change.

Anura February 11, 2015 3:18 PM

@Katrina L.

Any one person, no, but we are a democracy, and when a very large and motivated majority of the population decides they want something, they can get it – unless it hurts a majority of elected officials or their investors, and I believe that this is one of those cases. Our elected officials do not just represent the Surveillance Industrial Complex; they have a wide range of financial interests that they dedicate their careers to, and if they are at risk of losing re-election, they will happily and completely change their platform so they can win – or else they will lose their investors to someone who stands a better chance.

BoppingAround February 11, 2015 4:47 PM

Worried,

“but insects don’t track the web sites we visit”

Actually they do. Those nasty web bugs.

65535 February 11, 2015 6:30 PM

The situation looks grave.

“We are tasked to remotely degrade or destroy opponent computers, routers, servers and network enabled devices by attacking the hardware using low level programming.” – from thinkst blog

http://blog.thinkst.com/p/if-nsa-has-been-hacking-everything-how.html

This would indicate that not only are all levels of the OSI model targets, but so are popular programs, widely distributed operating systems, and the firmware/hardware on which they run.

We will have to get Nick P’s opinion on this. But, I will make a few observations.

“For most security teams, low level programming generally means shellcode and OS level attacks. A smaller subset of researchers will then aim at attacks targeting the Kernel.” – thinkst

http://blog.thinkst.com/p/if-nsa-has-been-hacking-everything-how.html

Again, we will have to get Nick P’s opinion, or other experts’ opinions, on this subject.

To my limited thinking, shellcode and kernel-level attacks should be mitigated by anti-virus vendors and IDS systems. All bets are off if AV vendors, Certificate Authorities, and IDS systems are bribed, pwned, or coerced by the NSA.

A number of AV vendors use certificates to determine if a program, executable, or other piece of code is clean [free of viruses]. That is a huge mistake if the certificates and their respective companies are owned by the NSA.

The antivirus system checks the code-signing certificates, finds that they are ‘valid’, and doesn’t check further – the malware goes undetected.
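As a concrete (and entirely hypothetical) sketch of this failure mode, compare a scanner that short-circuits on a “trusted” signer with one that lets the signature gate trust but never inspection. The signer name, trust store, and hash list below are all invented for illustration:

```python
import hashlib

# Hypothetical trust store and signature database; both are invented.
TRUSTED_SIGNERS = {"Example Software Corp"}
KNOWN_BAD_HASHES = {hashlib.sha256(b"EVIL").hexdigest()}

def scan_payload(payload: bytes) -> str:
    digest = hashlib.sha256(payload).hexdigest()
    return "malicious" if digest in KNOWN_BAD_HASHES else "clean"

def naive_scan(payload: bytes, signer: str) -> str:
    # Flawed logic: a "valid" code-signing identity ends the scan early,
    # so malware signed with a subverted certificate sails through.
    if signer in TRUSTED_SIGNERS:
        return "clean"
    return scan_payload(payload)

def robust_scan(payload: bytes, signer: str) -> str:
    # Better: always inspect the payload; the signature only adds trust.
    verdict = scan_payload(payload)
    if verdict != "clean":
        return verdict
    return "clean" if signer in TRUSTED_SIGNERS else "clean-but-unsigned"
```

With a compromised signer, `naive_scan` reports “clean” for a known-bad payload while `robust_scan` still flags it.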

Next, this leads to the survey where Bruce and his associates sent a letter to the major AV vendors asking them if they were influenced by the NSA.

I asked, “What happened with that survey of AV vendors and their answers as to their cooperation with the NSA?”

https://www.schneier.com/blog/archives/2013/12/how_antivirus_c.html

It appears that only a few AV vendors answered the survey. This leads to the unpleasant conclusion that AV vendors [or a number of AV vendors] are in bed with the NSA.

I will also note that some AV vendors are also Certificate Authorities, or have business divisions that issue certificates [including code-signing certificates]. If such combination AV vendor/Certificate Authorities are influenced by the NSA, a huge security hole exists in the antivirus industry [including IDS systems].

The uglier problem is firmware and micro-controller manipulation or corruption.

“We are also always open for ideas but our focus is on firmware, BIOS, BUS or driver level attacks.”

‘The rest of the document then goes on to mention projects like:
“we have discovered a way that may be able to remotely brick network cards… develop a deployable tool”.
“erase the BIOS on a brand of servers that act as a backbone to many rival governments”
“create ARM-based SSD implants.”
“covert storage product that is enabled from a hard drive firmware modification”
“create a firmware implant that has the ability to pass to and from an implant running in the OS”
“implants for the newest SEAGATE drives..”, “for ARM-based Hitachi drives”, “for ARM-based Fujitsu drives”, “ARM-Samsung drives”..
“capability to install a hard drive implant on a USB drive”
“pre-boot persistence.. of OSX”
“EFI module..”
“BERSERKR is a persistent backdoor that is implanted into the BIOS and runs from SMM”’ – from the Thinkst blog [see above link]

Many of these firmware/hardware modules are proprietary, diverse, and not easily tested. Worse, these firmware modules are in cars, medical equipment and other critical systems. We must evaluate how broad and deep this firmware subversion is and how to remediate it.

This also raises a number of uncomfortable questions because Americans buy a good deal of firmware/hardware from China and other countries that are not aligned with the USA.

If the NSA is inserting malware into firmware one would assume that the Chinese could do the same – but on a larger scale since they manufacture a huge amount of it.

The Chinese could theoretically flood the market with back-door’d firmware to the point where cars, medical equipment – and even the electronics that the USA government and its civilian contractors use are put in peril.

Maybe, all electronic equipment containing foreign made firmware/hardware should come with a warning label “Use at your own risk!”

If firmware/hardware is widely infected with malware from a large hostile country we may have started a “malware war” which we cannot win.

The first step is determining how broadly firmware/hardware is infected and how to remediate the problem.

We might find it is a low infection rate and we have little risk. Or, we may find the infection rate high and will have to build and test our own electronic equipment [which might be beneficial in the long run].

Thoth February 11, 2015 6:30 PM

@Martin
If we are to consider that the NSA and friends are really capable, there is no place to hide… nothing, even, to fear…

If there is nothing to fear or to hide… what is left in such a scenario, for most people, is to just lie down helplessly.

The EFF already has a pretty good pool of donations and even its own team of lawyers. I doubt they need to go around scare-mongering to generate even more donations when they are already receiving what seems to be a pretty healthy stream – enough to keep them alive and even let them lend out their legal teams to help people fight their lawsuits!

Thoth February 11, 2015 6:46 PM

@65535
If you noticed your post, there is a huge mention of ARM-based attacks and the reason is likely due to the chip manufacturers in bed with the Warhawks.

Note that after the fallout of the Snowden leaks, China decided to restrict foreign technologies and now requires the use of home-grown Chinese technology brands (e.g. Huawei), with legally mandated Chinese backdoors inserted, for electronics to be sold in the Chinese market. I am not surprised that all the other governments could find an excuse to have their backdoors inserted as well, once they figured out the FiveEyes Warhawks have already been doing it so frequently … what is left is computers and electronics ridden with so much backdoors by various nations due to their “National Security requirements” that it becomes a complete mess.

Will we see a massive flood of backdoors from so many countries? Yes … because of the “the US has it, so we shall too” argument and justification.

Regarding the compromising of CAs, how is that going to be linked to hardware attacks NSA already have ? CAs usually use standard servers (HP servers and the likes) which are presumed to be compromised for their software and there is no tamper detection anyway. The only “protective” components in their setups are usually the crypto-modules like the HSMs but again those are backdoored. How about the firewalls and such ? Nope … all are backdoored or badly configured. Likelihood of a “clean CA” is 0%. No high assurance methodologies like the use of Guards and Data Diodes in my knowledge .. yet … but those shouldn’t be hard to compromise as they are “NSA certified”.

The situation has already been in that state for years; the Snowden revelations merely gave more visibility to the issues that have silently haunted the computing industry for many years.

What’s left is back to the chipboard and start designing a provable circuitry or to implement your own electronics.

Dirk Praet February 11, 2015 7:42 PM

The second article is asking the right question: “If the NSA has been hacking everything, how has nobody seen them coming?”

We’ve seen Stuxnet, Duqu, Flame and Regin, and it is really beyond me that so little other malware, and so few exploits and intrusions, have been discovered in the wild. It’s equally bewildering that, as far as I know, we still haven’t heard of any software, hardware, appliance or chip manufacturers stating that they are actively looking into possible NSA/GCHQ subversion of their products. Which leads to the inevitable conclusions that

  • Most antivirus, antimalware, firewall, xIDS and other COTS “defense” mechanisms are either completely worthless or their vendors simply in on the game.
  • Many, if not all, US technology household names (MSFT, Apple, Google, Intel, Cisco) are either complicit or under gag order.
  • Most infosec “consultants” are incompetent or completely out of their depth when dealing with state actors.

Then again, I guess most of us here have known that for quite a while already. There seem to be no technical solutions at this moment other than giving both state and corporate spies as much trouble as you can by practicing as much digital hygiene as possible and using encryption where available. Which in essence boils down to getting rid of any mainstream platform and adopting a completely different MO with regards to telecommunication and internet usage.

@ Martin

It’s just scare-mongering to generate more donations for the EFF. Just choose a longer password. That’s all you need.

Let’s call this “Martin’s razor”: never attribute to stupidity what can also be explained by cynicism. Or the other way around, depending on what you actually meant. And thank $DEITY for organisations such as EFF, ACLU, EPIC and the like. Given the ignorance and limited interest for these matters both from the general public and the legislature, it’s mostly these people that are today in the frontline of defending the ordinary citizen’s rights against the surveillance state.

65535 February 11, 2015 7:53 PM

“If you noticed your post, there is a huge mention of ARM-based attacks and the reason is likely due to the chip manufacturers in bed with the Warhawks… what is left is computers and electronics ridden with so much backdoors by various nations due to their “National Security requirements” that it becomes a complete mess.”-Thoth

I agree on that.

“…CAs usually use standard servers (HP servers and the likes) which are presumed to be compromised for their software and there is no tamper detection anyway. The only “protective” components in their setups are usually the crypto-modules like the HSMs but again those are backdoored. How about the firewalls and such ? Nope … all are backdoored or badly configured. Likelihood of a “clean CA” is 0%.” –Thoth

If the CAs use consumer hardware, they will be backdoored. I was hoping that they would not.

“What’s left is back to the chipboard and start designing a provable circuitry or to implement your own electronics.” – Thoth

I am beginning to come to that very conclusion.

The problem is scaling those products to a number where the average Joe can purchase the product and have interoperability.

That may require setting up test beds and even full production facilities in non-Five Eyes countries, and hardening the facilities to the point of being fenced off with all employees vetted. That would be a huge project – possibly doable, at a large cost.

gordo February 11, 2015 8:38 PM

@ 65535
“full production facilities in non-Five Eyes jurisdiction countries”

…which leads me to wonder if/how the current state of malware affairs will play itself out in terms of globalization, international trade, trade blocs, protectionism, national security, neutral states, etc. [sorry; kind of off-topic]

Thoth February 11, 2015 10:23 PM

@65535
We have the EJBCA packaged in a commercial server with the Utimaco HSM PCI card (https://www.primekey.se/Products/EJBCA+PKI/PKI+Appliance/) to be sold as a CA issuing appliance. I believe the chassis itself has no tamper detection or resistance – only the HSM PCI card does – so you could even open the chassis, mess with the innards, and inject a backdoor which will MITM certificate requests.

So far, the CAs I have helped deploy in the past were using all commercial stuff … all vanilla commercial. I personally think that CAs are already compromised (from my limited experience deploying such stuff).

Hardware based corruption of commercial servers and HSMs. Software corruption of closed source (or subtle attacks on open source) CA and HSM software.

Some attack vectors include hardware/software data loggers tapping bus channels, especially the input of PINs and passwords to the HSM client software that interfaces with the HSM hardware module. Once you have stolen the PIN/password off the commercial data bus lines via software- or hardware-based exfiltration techniques, you could literally log in as an authorized HSM user and sign as many bogus keys as you want, which is rather dangerous. If the HSM is not configured at FIPS 140-2 level 3, you could (theoretically) extract the raw private key as well, but I have mentioned in the past that HSMs and FIPS 140-1/2 are nonsense standards (search my FIPS 140 related posts).

Another one I am pretty concerned about: when a client communicates with the issuing server and passes it the certificate request, the request may get intercepted internally and, through some covert channel, answered with a bogus certificate – one tagging onto a legit company with an escrowed cert.

Cryptographic hardware providers would likely expect users to utilize the crypto interface, and who knows what escrow is inside already. The only other mitigation for cryptographic hardware is to simply not use the crypto engine and write your own limited library (considering the space constraints in most crypto hardware like smartcards) if you are forced to use it. Otherwise, this stuff is already a gone case.

Buck February 11, 2015 10:49 PM

@Dirk

Just phishin’ or are ya missin’ somethin..?

  • Most infosec “consultants” are incompetent or completely out of their depth when dealing with state actors.

How ’bout “paid for”??

Nick P February 11, 2015 11:03 PM

@ 65535

I already answered those points with the framework I posted here that showed risks at each layer from hardware to firmware to OS’s to middleware to apps. The risks must be assessed, countermeasures developed, and the design implemented in a robust way to prevent attacks. Industry in general has ignored those layers so they’re easy to beat. Most aren’t doing any solid form of monitoring or use stuff that’s easy to hide covert channels in. That plus disguising as vanilla black hat attacks is how NSA kept it hidden so long. It’s all just that insecure and the market that uncaring. No liability law = fake security abounds.

AV is easy to bypass. Blackhat’s method is to try a malware variant on a bunch of different AV software to see how often it gets caught. They just keep modifying it until most don’t detect it. Then, they fire. Nation states have even more resources to do exactly this. They also tend to buy 0 days in things like browsers and routers that are overprivileged. So, they don’t really even need AV’s cooperation. They do that, too, though. 😉
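The iterate-until-undetected loop Nick P describes can be sketched abstractly. Everything below is a toy: the “signatures” are plain substrings, the “mutation” is a random byte flip, and no real evasion technique is shown – only the feedback loop itself:

```python
import random

TOY_SIGNATURES = [b"ABC", b"XYZ"]      # stand-in for AV signature DBs

def detected(sample: bytes) -> bool:
    # Stand-in for running the sample through a batch of AV engines.
    return any(sig in sample for sig in TOY_SIGNATURES)

def mutate(sample: bytes, rng: random.Random) -> bytes:
    # Toy "packer": flip one randomly chosen byte.
    i = rng.randrange(len(sample))
    return sample[:i] + bytes([sample[i] ^ 0xFF]) + sample[i + 1:]

def evade(sample: bytes, seed: int = 0, budget: int = 1000) -> bytes:
    # Keep modifying until the detectors miss it (or the budget runs out).
    rng = random.Random(seed)
    while detected(sample) and budget > 0:
        sample = mutate(sample, rng)
        budget -= 1
    return sample

variant = evade(b"stub-sample-ABC")    # initially detected; now missed
```

The point is that the defender’s verdict is free feedback to the attacker, which is why signature matching alone never holds against a resourced adversary.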

“This also raises a number of uncomfortable questions because Americans buy a good deal of firmware/hardware from China and other countries that are not aligned with the USA. If the NSA is inserting malware into firmware one would assume that the Chinese could do the same – but on a larger scale since they manufacture a huge amount of it. ”

That’s what the U.S. government has been saying all along… as they were backdooring everything they sent to the rest of the world. Chinese might be doing the same and might not. I decided to look at it as a potential solution: use Chinese hardware for privacy solutions with NSA as the threat and U.S. hardware (esp Freescale) if foreign countries are the threat. Then there’s diverse architectures with voting algorithms.
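The “diverse architectures with voting algorithms” idea reduces, at its simplest, to running the same computation on independently sourced hardware and accepting only a majority answer, so a single subverted supply chain cannot silently alter a result. A minimal majority voter (the replica values below are stand-ins):

```python
from collections import Counter

def majority_vote(results):
    """Return the value reported by a strict majority of replicas,
    or None if no majority exists (fail-safe rather than fail-wrong)."""
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) / 2 else None

# One backdoored replica lies; the true answer still wins 2-of-3.
print(majority_vote([42, 42, 41]))   # 42
# No majority: refuse to answer rather than trust any single board.
print(majority_vote([1, 2, 3]))      # None
```

In practice the replicas would be, say, a Chinese ARM board, a US Freescale board, and a third architecture, on the assumption that no single state actor has subverted all of them the same way.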

@ Thoth

“If you noticed your post, there is a huge mention of ARM-based attacks and the reason is likely due to the chip manufacturers in bed with the Warhawks.”

I disagree. I think it’s because ARM-based SOC’s have been replacing many traditional microcontrollers with unique instruction sets. That smartphones use ARM means there are both many programmers and much existing code targeting that architecture. Plus all the low cost boards and SOC’s popping up. It wouldn’t surprise me if many systems are switching to ARM-based microcontrollers to leverage that while reducing tooling costs. Add to that the fact that they’re a lot more open to attack than the chips they’re replacing: von Neumann instead of sometimes Harvard; typically abundant flash instead of limited ROM’s; more integration with standardized buses.

I mean, ARM’s often have MMU’s and high end has TrustZone. Hardly anyone uses them effectively, though. So, moving from a limited resource, Harvard architecture to an ARM-based architecture would seem to increase risk to me. That plus less defensive coding in firmware would make attacks on ARM a natural outcome.

tyr February 11, 2015 11:30 PM

I’m just as thrilled about this set of revelations
as I was to find the Mexican drug cartel was in
bed with the DEA.

So our staunch defenders have been piggybacking
off the “bad” guys for years because it somehow
secures their citizens. We need a new Orwell to
explain how this level of stupidity became the
new norm. The mad rush to embed comps into the
entire tech of the world was stupid enough and
placed us at enough risk. Now I learn some damn
fools have backdoored the whole infrastructure
just to play peeping tom.

Most comp systems are dreadful kludges of dubious
quality operated by the inept as it is, the last
thing we needed is government actively trying to
make the situation worse.

Here’s a news flash for the resident spooks here,
that clump of cholesterol between your ears is
supposed to help you survive whatever this world
throws at you, making things worse is not on the
agenda for any creature smart enough not to
shit in their own nest. Right now the smart Rus
are seeing WW3 looming on their horizon and no
one in their right mind thinks that’s a good
idea except for the tech illiterate in various
governments. You better wake up and notice the
smell you’re sitting in before it is too late.

Wael February 11, 2015 11:37 PM

@Nick P,

I mean, ARM’s often have MMU’s and high end has TrustZone. Hardly anyone uses them effectively, though.

True! It’ll take its time to evolve, like other technologies have.

Nick P February 12, 2015 12:09 AM

re thinkst article

The thing is that a number of us have been on top of them quite well. We’ve continually pushed for the larger INFOSEC industry to see and act on these risks. They were called speculative, impractical, overly paranoid, nonexistent, and so on. My own framework called for everything from custom firmware to strong TCB’s to covert channel mitigation at the cache level. The industry just doesn’t listen or learn shit compared to many other fields.

A great, recent example was the covert channel attack on cloud services that used a covert channel published years before and predicted over a decade before. Why didn’t we see those coming since we figured them out over a decade ago? Why do people keep rediscovering this same issue that attackers at NSA’s level actually know how to use? This applies to too many things in INFOSEC.
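For readers unfamiliar with the mechanism: a covert timing channel needs nothing but a shared resource and a clock. The sketch below exaggerates the delays to tens of milliseconds so the effect is obvious; real cache-based channels like the cloud attack mentioned above encode bits in nanosecond-scale cache hits and misses, but the structure is identical:

```python
import time

def covert_send(bit: int) -> None:
    # Sender: a 1-bit is a slow operation, a 0-bit a fast one.
    if bit:
        time.sleep(0.05)

def covert_recv(shared_operation) -> int:
    # Receiver: recover the bit purely by timing the shared operation.
    start = time.perf_counter()
    shared_operation()
    return 1 if time.perf_counter() - start > 0.02 else 0

message = [1, 0, 1, 1, 0]
received = [covert_recv(lambda b=b: covert_send(b)) for b in message]
```

No data is ever “sent” in the conventional sense, which is why access-control and crypto audits that ignore timing miss these channels entirely.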

A few were interested in how a high assurance security engineer would look at these points. So, let’s have a look. 🙂

  1. Adherence to classification/secrecy.

Yes, this is the norm rather than the exception. I predicted that anything this risky would be in SAP’s with dedicated personnel, paperwork, host systems, networks, and so on. The information would be behind guards with people deciding what could be released at what level. People doing these SAP’s are highly vetted people. Summaries of their results could be released under certain clearances and to certain people. It would be highly compartmentalized with few seeing the big picture or how it was used in practice.

The leaks showed the technologies were developed in SAP’s with selective release under codeword. Easy prediction given it’s the black program M.O. I’ve even posted their public security guides here. The surprise was that so much access was concentrated at Booz with so little security and monitoring. I expected at least a little more given the Manning leaks. Especially since the data is so close to SAP’s. The expectation that did pass is that the Snowden leaks aren’t SAP’s that I can see: just summaries and briefings without the full data and tech that’s still compartmentalized. That system works so long as the personnel aren’t infiltrators (esp Chinese and Russian).

  2. You thought they were someone else.

We all do that. Proxies, black hat attack tools, strategies copied out of [good] hacking guides… anything to blend in. The NSA has it better given the huge number of both organized crime and nation states involved in hacking. If I were them, I’d use their vast monitoring systems and partnership with groups like Mandiant to obtain exact MO + toolset of these organizations. Then, their own people can use them.

Another possibility was that the tools are becoming standardized enough that it’s hard to tell who is who. These range from all-in-one kits on hacker forums to professional tools sold by the likes of Finfisher. We know both types are sold to a broad customer base of people committing espionage. This might lead their attacks to look a bit similar with the customization aspects, originating IP’s, and behavioral profiles being the identifier.

  3. You were looking at the wrong level.

I proved this by posting my own framework in a discussion on secure code vs secure systems. Most developers were satisfied if it ran a NIX with enhanced security “features,” some crypto protocols, code audits of the app, and maybe things like Stackguard. The TCB concept dictates security must be baked in from the ground up, and the common standard was bogus. Extra nails in the coffin came from defense contractors and NSA classifying those as for “inadvertent and casual attempts to breach security.” They then rated the whole market at that level (or below!) of security minus a few exceptions that mainly sold to them.

So, we’ve been saying it a while now. INFOSEC pro’s were stubborn, industry just pushes nonsense customers want, and customers didn’t want to sacrifice legacy for real security. Result was predictable.

  4. Some beautiful misdirection.

That’s really just 2 with less incompetence. This might include planting evidence on the system for pro’s to find that point in a different direction. Smokescreens abound with professionals.

  5. They were playing chess and you were playing checkers.

I love this one: it’s absolutely true. NSA themselves already defined the Orange Book A1 and C.C. EAL6+ High Robustness requirements that determine when they will start to trust something against software attacks by High Strength Attackers. Their own pentesters often couldn’t beat such systems. Instead, they worked to come up with more clever integrations of those with legacy systems and ways to apply such rigorous methods more cheaply to defense systems. Their teams also leverage all that security engineering expertise to identify and hit anything not developed to such criteria.

Which brings us to most proprietary and FOSS technologies: low to medium robustness through and through. These are built by so many people doing systems development who don’t know how to do covert channel analysis, what a trusted path is, the importance of secure SCM, how precise security/design specs + simplified implementation can mitigate developer subversion, the benefits of non-x86 hardware, and so on. Even most smart people doing mainstream INFOSEC are so far removed from true security engineering that it’s like they’re playing a different, amateur game altogether. Pro tip: pro attackers can only be defeated by pro defenses. Security has no amateur league [that wins].

  6. Your “experts” failed you miserably.

This builds on 5 actually. I’ve had to explain to top, press-making people in this field why a user-mode driver is better than a kernel-mode driver, how their idea is full of covert channels, and even that secure systems can’t be built on OS’s with megabytes of privileged code. The field inherently has trouble maintaining and passing on the wisdom learned from prior generations who designed or fielded highly secure systems. We need to work on that. I’ve done my part at evangelizing high assurance design but it will take a large, organized effort to actually succeed. Specifics of that are still an open question.

The other part of this issue is the “experts.” These are people who are believed to be experts due to possessing certifications, references from clueless companies, and references from INFOSEC companies. As they’re all doing low assurance, the expert is guaranteed to be clueless on building highly robust systems. However, he or she might be quite knowledgeable on what the industry focuses on. Industry and its experts are another issue though: pushing fake or ineffective solutions is profitable, so they do that pervasively. Combine these effects and you probably have most of the professionals in the field, and their collective voices drown out naysayers like myself promoting the strong stuff.

Conclusion

Answer is No 7: all of the above. The problems all feed into each other to become quite a vicious circle. The good news is that high assurance security engineering and practical approximations of it are still around. Lots of different companies and projects are working on the strong stuff: crash-safe.org’s SAFE processor; CHERI capability processor; DARPA secure fabrication work; new networking designs like MinimaLT; easy, strong crypto like NaCl; stronger virtualization like HAVEN and SKPP kernels; driver synthesis with Termite2; fundamentally better OS architecture like GenodeOS, EROS, or JX OS. The list goes on.

Moreover, such methods have more funding and publicity than ever before. Still a blip on the larger IT and INFOSEC radar. However, the methods aren’t lost, many pro’s are focused on secure endpoints, and some are even compatible with legacy software. World’s always getting darker but future for widespread high assurance is at least a little brighter. Meanwhile, study on what’s known about building highly robust systems and apply it in every component you can.

jdgalt February 12, 2015 12:19 AM

What jumped out at me from the first article was that PBXes (private telephone exchanges) are one of the targets. No wonder the industry is resisting measures that would defeat people who use a PBX to fake their caller ID when they spam — the government uses those loopholes, so it wants them to remain open.

Securing the phone network would be a much harder job than securing the internet. Fortunately it isn’t necessary. VOIP and similar services are fast making traditional phones completely unnecessary.

Z.Lozinski February 12, 2015 3:49 AM

@jdgalt,

“Securing the phone network would be a much harder job than securing the internet.”

About the same. The international phone network was designed on the assumption there are ~200 networks, all trusted, and you just need to interconnect them. That was true in the early 1980s when SS7 was conceived. With 700-odd telecom service providers licensed by Ofcom in the UK alone, now it is not a valid assumption.

“Fortunately it isn’t necessary. VOIP and similar services are fast making traditional phones completely unnecessary.”

I disagree, for several reasons. The transition from the fixed / landline phone network to VoIP will take 7-15 years in developed countries. There is a huge amount of infrastructure installed, and the capital cost of replacing the existing voice network is significant. The transition from mobile voice to VoIP is dependent on the widespread deployment of LTE (“4G”). Again we are only a few years into the deployment of LTE. And why do you assume that VOIP is any more secure than circuit switched voice?

Phil Lapsley (author of “Exploding the Phone” which is a wonderful history of phone-hacking and blue-boxing) gave a speech at USENIX Security 2014 conference last year. While writing the book, he interviewed many of the folks who designed and operated the Bell System network (AT&T). They did not believe the stories phone phreakers told them about what they could do on the AT&T network. “You can’t do that on our network”. That was 40-50 years ago, and the same effect is there today when people consider the systems they have developed.

https://www.usenix.org/conference/usenixsecurity14/technical-sessions/presentation/phone-phreaks-what-we-can-learn-first

Grauhut February 12, 2015 4:28 AM

@Nick, Thoth: “I mean, ARM’s often have MMU’s and high end has TrustZone. Hardly anyone uses them effectively, though.”

Right. A lot of infrastructure uses ARM cores, baseband controllers, hard disk controllers, NICs…

The problem is that these devices are updateable in an insecure way. Basebands can even be trojanized over the air.

http://www.theregister.co.uk/Print/2013/03/07/baseband_processor_mobile_hack_threat/

SSDs need fast processors for wear leveling, and those are updatable; the same is true for active NICs with network offload functions.

The ARM processors used are not bad, the way they receive firmware updates is.
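The fix Grauhut is pointing at is authenticated updates: the device refuses any image it cannot verify. A minimal sketch, using an HMAC with a device-provisioned key purely to stay self-contained – real designs use public-key signatures with the verification key burned into boot ROM, so the device never holds a signing secret:

```python
import hashlib
import hmac

# Assumed key provisioned at manufacture; invented for this sketch.
DEVICE_KEY = b"assumed-per-device-update-key"

def sign_update(image: bytes) -> bytes:
    # Vendor side: authenticate the firmware image.
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def flash(image: bytes, tag: bytes) -> bool:
    # Device side: refuse to flash anything that does not verify.
    if not hmac.compare_digest(sign_update(image), tag):
        return False      # tampered or trojanized image rejected
    # ... write image to flash here ...
    return True

genuine = b"firmware v2"
tag = sign_update(genuine)
```

An implant appended to the image, or a wholly substituted image, fails the check; without such a gate, whoever reaches the update channel owns the controller.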

keiner February 12, 2015 4:36 AM

Bottom line: We know NOTHING…

In the meantime I think this whole “Snowden” story is a big fake. We have learned nothing about how any of this is supposed to work (tunneling protocols? Across all available platforms?)

No way out except pulling all the plugs and throwing the stuff out. It’s a shame, after about one year of media coverage. And no one is responsible. Nothing is ever going to happen; the more pressure, the more fake wars they will start to cover it up. The job would have to be done quick and dirty, not over years and years… (if it’s real).

Clive Robinson February 12, 2015 4:47 AM

@ Thoth,

I’ve downloaded the paper and it’s forty pages long, so I’ve just read the first bit (up to the contributors).

Two things come to mind….

Firstly, about the trojan characteristics being a black box with N lines in and X lines out: “no s41t Sherlock”, I could have told you about this back in the 1980’s, and from what RobertT has said in the past, so could he. It’s why I’ve said “functional encapsulation and segregation with strongly monitored interfaces” for years, and anyone “read into TEMPEST / EmSec crypto design” would have been told this… To visualise it, think of –a more generalised case than the paper’s authors describe– a “matched filter”: a “data pipeline” that is tapped at N points; the output of the N circuits can be 1 or more wires, which then “enable” various circuits in a chain, which open one or more covert channels; the result is either data leakage or an undisclosed action.

The second point is that whilst this is fine and dandy for designers with net lists, it does not help chip consumers buying SoCs on spec….

I will read it further to mine out other “nuggets of interest” and I hope others will as well.

Grauhut February 12, 2015 4:52 AM

Knowledge missing on my side: Does somebody have information about the hackability of flash card interface controllers in SD cards?

Clive Robinson February 12, 2015 5:54 AM

@ Grauhut,

You can find a number of cracker guides on the subject.

The takeaway is that the manufacturers make it fairly simple if you know a “secret” or you know how to connect to the circuit. They do this for economic reasons, to keep test and rework costs down as well as other manufacturing costs. This “secret” knowledge is far from secret, being obscure at best. Thus various people have made money buying small-capacity memory devices, reprogramming them as high-capacity devices and selling them back into the supply chain…

If you do go looking for links, take care, as I’ve found some are booby-trapped against Windows and other common OSs and browsers…

Army February 12, 2015 6:32 AM

@Thoth: “what is left for such a scenario to most people is to just lay down helplessly.”

Lay down and, during the next world war, watch military aircraft being downed by software attacks over the air.

Erdem Memisyazici February 12, 2015 10:10 AM

Because hardware manufacturers got in bed with big money. That’s pretty much game over. Before then it was lack of knowledge: assembly, programming processor instructions directly, is pretty much a lost art. People were encouraged (and still are) to pay attention to, say, Scala over Java over the JVM over the OS, and not to the firmware that keeps messing with your fan voltage until your computer fans burn out. That’s where “real hackers” thrived, and that’s who the NSA went about hiring. Non-social nerds who generally want to take over the world suddenly got secret jobs working for the government. Hell, 10 years ago, if the NSA had approached me when I was into trying to make people’s monitors blow up, and pretty much told me I had no legal worries and offered me an infinite source of amazing tools, I don’t think I’d have refused either. Then there is actually intercepting your products and physically putting “implants” in them; I mean, that’s not even hacking, that’s just changing the product you’re buying, which happens to still look like the product you ordered. Soon enough, the world is simply selling and buying broken hardware and software, and that, ladies and gentlemen, is what you get when you decide that breaking things is more valuable than making them and trying to protect them. The balance was broken; we have money to blame. But mostly the Bush administration. How to fix it? Open up every single piece of hardware you own and start identifying every piece of metal and silicon, including your toaster.

Grauhut February 12, 2015 10:14 AM

@Clive: Thx, I know that it’s possible to patch the size value sent by a card interface controller on SPI connect. I am more curious to find out whether these units are usable for rewriting data on the fly, in a way comparable to what the ARM controllers on SSDs/HDDs can do. How much freely programmable power do they contain?

Nick P February 12, 2015 10:54 AM

@ Grauhut

The update is only part of the problem. The reason I mentioned Harvard architectures is that they’re more difficult to code-inject on a running system. ARM is worse in that regard. So, a vendor switching from Harvard to ARM would have increased the attacker’s success rate.

Grauhut February 12, 2015 11:12 AM

@Nick: “Harvard architectures is they’re more difficult to code inject on a running system”

Correct me if I am wrong: trojanizing peripheral controllers is about persistence. If one cannot patch the firmware but only inject code at runtime, this threat would not survive reinstallation and so would be more or less pointless. No external injector after cleaning, no more injections.

Imho the main problem is insecure firmware update methods that allow persistent trojan installation on a powerful controller.

gordo February 12, 2015 11:13 AM

@ Erdem Memisyazici

…the world is simply selling and buying broken hardware and software, and that ladies and gentlemen is what you get when you decide that breaking things is more valuable than making them and trying to protect them.

There’s a starter certification for that, too: Break it! Break it! Break it!

Nick P February 12, 2015 11:37 AM

@ Grauhut

There are two persistence strategies with firmware: a springboard to rootkit the OS and/or BIOS; or that, plus persistent reinfection of the OS. The security measure you mention might stop persistence at the peripheral level. However, DMA would have already rooted the OS if an IOMMU wasn’t configured or present.

A risk I just thought about: sleep mode. Do peripherals’ microcontrollers actually shut down in sleep mode? Do they go into a low power state? Do they just keep running due to their standard, low power draw? If they don’t shut down, then the attack on them would stay persistent on machines where user always puts them in sleep mode instead of shutting down fully. That’s why I said “might stop persistence” above.

Anybody work with peripheral microcontrollers enough to know?

Clive Robinson February 12, 2015 12:13 PM

@ Grauhut,

Correct me if I am wrong: trojanizing peripheral controllers is about persistence; if one cannot patch the firmware but just inject code at runtime…

All malware first involves injecting data at run time that takes over control of the controller. If as an attacker you cannot do this, then it’s game over…

So if your system cannot put data into an executable memory space, or treat data as code –like an interpreter does– then it should be immune to “data attack malware”.

By definition a pure Harvard architecture computer cannot put data into code space as there is no connection between code and data memories. Further if you write your code as a “filter” not an “interpreter” then any data sent to it cannot cause unexpected activities.
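Clive’s filter-versus-interpreter distinction can be sketched in code (Python, with a made-up record format): the filter only validates and copies a fixed format, so hostile data can be rejected but never gains control; the interpreter lets the data select which code runs, which is exactly the property attackers need.

```python
import struct
from typing import Optional

# Toy fixed-format record: 2-byte type, 2-byte length, payload (assumed format).
def filter_record(raw: bytes) -> Optional[bytes]:
    """Filter style: validate a fixed format and pass data through unchanged.
    Data is only ever inspected and copied, never used to select behaviour."""
    if len(raw) < 4:
        return None
    rtype, length = struct.unpack(">HH", raw[:4])
    if rtype != 0x0001 or length != len(raw) - 4:
        return None                      # malformed input is dropped, not "handled"
    return raw[4:]                       # payload out, nothing else happens

def interpret_record(raw: bytes, handlers: dict):
    """Interpreter style: the input chooses which code runs. One bad handler
    (or a forgotten debug opcode) and data has become code."""
    opcode = raw[0]
    return handlers[opcode](raw[1:])     # data-driven dispatch: the risky pattern
```

The filter can at worst be fed garbage; the interpreter hands the attacker a dispatch table to probe.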

So to get persistence in a Harvard architecture would require either some path from the data memory into the code memory, or some way to have the data stored and re-run by “interpreter” code on startup etc.

Sadly there are many examples of Harvard systems being abused in one or other of these ways, the main reason being to do remote functional updates to remove or reduce “returns or rework costs”. Whilst there are ways to reduce the possibility of unofficial updates –via code signing etc.– almost invariably some coder will put in hooks during development to allow this to be done quickly and trivially for “development & test”. Then, for any one of a number of reasons, the hooks don’t get taken out again for production code. Thus the systems are “insecure by design” from the get-go, irrespective of whether it’s in the design spec or not.
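The code-signing countermeasure Clive mentions, as a minimal sketch (Python; an HMAC stands in for the asymmetric signature a real vendor would use, and the key and tag layout are invented for illustration). The essential property is that verification gates the flash write, with no leftover development hook that bypasses it:

```python
import hmac
import hashlib

VENDOR_KEY = b"not-a-real-key"   # stand-in: real devices would verify an asymmetric
                                 # signature against a public key baked into ROM

def verify_and_flash(image: bytes, flash_write) -> bool:
    """Accept an update only if its appended 32-byte tag verifies. Note there is
    no 'test hook' path that skips verification -- that hook is exactly what
    attackers go looking for on the JTAG port."""
    if len(image) < 32:
        return False
    body, tag = image[:-32], image[-32:]
    expected = hmac.new(VENDOR_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False                     # reject unofficial firmware
    flash_write(body)                    # only verified code reaches flash
    return True
```

Usage: the updater calls `verify_and_flash(downloaded_image, device.flash)`; a tampered image simply never reaches the flash-write step.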

Whilst these “hooks” are generally not made known, anyone with access to the JTAG pins and even a little knowledge of the chip can download the code, disassemble it and spot the hooks.

There is an example up on the web of someone reverse engineering an ARM-based controller chip on a bog-standard hard drive. They did this without any knowledge of the chip, as they found the JTAG pins by “visual inspection with prod and poke” checking. Whilst not a Harvard architecture, they did discover the chip had three ARM processors and shared memory etc.

Grauhut February 12, 2015 2:22 PM

@Clive: “All malware first involves injecting data at run time that takes over control of the controller. If as an attacker you cannot do this then it’s game over… By definition a pure Harvard architecture computer…”

Nope, you just need OS-level access to flash a controller. If you just use an ordinary online firmware update function to install your trojanized firmware version for a device and then trigger, for instance, a blue screen afterwards, your job is done after the reboot, without leaving any kind of evidence on the HDD if you used an in-memory attack on the main OS.

This kind of insecure online update functions:

howtogeek.com/196916/how-to-check-your-bios-version-and-update-it/
crucial.com/wcsstore/CrucialSAS/firmware/m4/070H/crucial-m4-windows-utility-w8-070h-en.pdf

Theoretical SSD attack:

  • you are the big bad No Such Agency, so you redirect your target’s traffic to a transparent injection proxy
  • the proxy does some traffic fingerprinting
  • the proxy injects the latest working 0-day loader into the target
  • the loader fingerprints your hardware, loads a fitting firmware and installs it over a regular firmware update channel
  • the loader then pulls an exception
  • the firmware trojan gets active after reboot, opens a hidden partition on your SSD (blocks marked bad, for instance), loads a toolkit into these blocks and injects its loader after every reboot
  • reinstall this machine with another OS and the loader will try to update the toolkit
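One modest counter to the persistence step above: dump the controller firmware (where the vendor’s tooling allows it) and cross-check it against a known-good hash obtained out of band. A minimal Python sketch; the reference image and digest below are placeholders, and a fully compromised controller can of course lie about its own contents:

```python
import hashlib

# Sketch: cross-check a dumped controller firmware image against a
# known-good SHA-256. The reference image here is a placeholder; a real
# check needs a digest published out of band by the vendor, and a
# subverted controller may serve a clean image while running dirty code.

def firmware_matches(dump: bytes, known_good_sha256: str) -> bool:
    return hashlib.sha256(dump).hexdigest() == known_good_sha256

# usage (placeholder data standing in for a real dump)
reference = hashlib.sha256(b"vendor firmware 070H").hexdigest()
print(firmware_matches(b"vendor firmware 070H", reference))   # True
print(firmware_matches(b"trojanized build", reference))       # False
```

It is a weak check against a capable adversary, but it does raise the bar from “nobody ever looks” to “the trojan must also fake its own dump”.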

And life is too short for academic hopes. If some company took the money in hand to build secure Harvard chips, they would come and tell it what to do if it wanted to stay in business.

Hopeless.

We have to look for ways to get the best security out of what we have.

Some kind of “survive the first bullet and shut down before they can fire the second” systems.

Grauhut February 12, 2015 3:33 PM

@ Dirk: The second article is asking the right question: “If the NSA has been hacking everything, how has nobody seen them coming?”

They see it. Ronald Prins:

“”We didn’t want to interfere with NSA/GCHQ operations,” he told Mashable, explaining that everyone seemed to be waiting for someone else to disclose details of Regin first, not wanting to impede legitimate operations related to “global security.””

Freedom is just another word for nothing left to lose,
Nothing, that’s all that Bobby left me, yeah,…
” 😉

Sancho_P February 12, 2015 5:53 PM

Wait a moment, no paranoia please, the situation is sufficiently serious.
But as we know from any computerized device, there are bugs in SW, however “simple” the device may be.
Computer’s OS and application software is no exception, on the contrary.
So we need and do updates.

“Bonus SW” is no exception, there are bugs, too.
They are not Gods.
Everything will come to light, it may take time. Or an innocent update of any SW on that machine.
Nothing can be hidden forever if it has to go on stage.

High level SW is a common victim because of monopolization.
Of course it is possible – but will be finally detected (e.g. Stuxnet).
As a result of this act of aggression the victim not only learns about methods and tools but also becomes motivated and justified for retaliation.
It’s a win – win.
Tit for tat – keeps business alive. (/sarcasm)

There are still too many competing hardware vendors to keep them all in bed with different spy organizations. Low-level computer malware is possible, but only in a very limited scope; one has to know the victim’s environment exactly [1]. AFAIK we have no evidence of any such “BadBIOS” in the wild so far?

Same goes for hardware implants. We haven’t heard of any … why?
That’s strange.
Do you think the victims – e.g. China, NK, Russia, Germany – would remain silent?

“Bonus FW” on low level common chips (MCU, …), in large scale:
If really useful it would be costly, error-prone and detected anyway. So many people are out there working with logic analyzer and scope, fighting for each byte in assembler; who knows what the chips are used for.
It would ruin the manufacturer, who would take that risk?

A hidden kill command, um, very difficult to hide for years – but feasible in dedicated high-level parts only (BIOS, CPU, chipset, GPU).

Conclusion:

They are out there, but if you have nothing to hide … 😉
From my other postings you know what and how I mean that:

“If they want you they will get you.”

So if you are working in the “defense” industry, aerospace, atomic energy, automotive, AI, robotics, biotech, pharmacy, medicine, military, industrial automation, machinery, R&D, high tech generally, news agency, media, legislative, jurisdiction, drugs, porn, money laundering, politics … OMG any “important” business:
Take care!

@ Nick P
Yes, but would you call that “persistent” and useful for a very important target?

[1]
What @ Clive Robinson wrote (which is IMO the same as what @ Grauhut wrote) is true, but it needs exact knowledge of the victim’s system (e.g. HD) and its flaws. This is possible because of our standardized machines and OSs (“System Report” will tell you all).
On a PC you’d need OS malware first.
But I guess it would be difficult to extract useful data by such an exploit without other (detectable) persistent malware on the computer. Likely not useful for broad surveillance of PCs.
Again, the kill switch(es) may be there already.

Figureitout February 12, 2015 6:49 PM

Do peripherals’ microcontrollers actually shut down in sleep mode?
Nick P
–No, they don’t. They wait for a wake-up signal. There is at least one, maybe two, RTCs running always. Out in the field it’s a running risk, but disregarding physically plugging in (there are ways to make reverse engineering a bit of a pain, for minimal changes to the original design) and infecting the development environment, the best they can do is jam, continually activate, or try to catch the signal (for specific things; for others I have some troubling theories due to recent events, and I can’t say). Made a troubling discovery a couple of weeks ago digging out yet more cell phones I may use for something: one didn’t have enough charge to turn on, but when powered up it had the correct date. I suppose it could have connected to a network, but then I think it would’ve had the correct time too. On older PCs sitting in a closet where the CMOS coin cell died, it saved the date it died, and I’m leaving its timing off (I killed that board anyway by plugging in a PCI card while powered on, and though I really want to, I can’t afford the time right now to diagnose what I blew up; it’s really not worth it when I can replace the board for $20 on eBay).

Other threats include RFID implants, and I don’t like research into inductive charging (Nintendo Wii remotes use it) and some of the wireless-power research ( http://en.wikipedia.org/wiki/Wireless_power ); I’m not digging it up now. Also “energy harvesting” switches: they require no power source, as physically pressing the button generates enough power to transmit. (These could go underneath floors, in loose door stops, or even in the door locks themselves if the door is heavy enough, and transmit to an exfil implant in a safer area, telling when someone enters an area.)

I don’t support that research b/c it affects EMSEC so much (unplug your PC and then it’s not really off, and still vulnerable to code execution while you sleep or are gone), and I don’t think it’ll be making huge waves for a long time (fingers crossed).

Figureitout February 12, 2015 7:10 PM

Nick P RE: inductive charging
–One trick for these things: if there are loops in the pavement near stop lights, the way to get the strongest current is to come in over them as fast as possible (the metal in your car induces a current, just like power generation – a concept that blows my mind) and brake hard. Works, sends a signal to the stop-light controller. Until the loops break.

Nick P February 12, 2015 7:33 PM

@ Figureitout

Appreciate the confirmation.

re induction and stop lights

I’ve seen people go back and forth over them attempting light changes. I might try your technique in an area with few cops. Traffic tickets are too expensive where I’m at so holding off. If it works, there’s a small chance the designer of the technique did it on purpose: people in a hurry get a light change faster. No evidence but an interesting thought.

Note: One prank for a manufacturer to play would be to design one to only have that effect on turning green to red. Light goes green, people start going faster through it, and it immediately starts changing. Add random variable to make it happen only 20-30% of the time for hard reproducibility. Drivers will be arguing with each other about the light for years. Mwahaha.

@ Sancho_P

I’m not sure how valuable it is. It’s just worth remembering it’s there in the toolbox if useful in a scenario on the attacked machine. I see the use of the peripheral as a springboard to sensitive RAM as more useful. Especially if it’s networking hardware with subverted firmware that activates after a certain series of incoming data. Could be disguised to look like a traffic handling bug.

“This is possible because of our standardized machines and OSs (“System Report” will tell you all).”

RobertT also noted that, for a given type of hardware, there’s usually only 4-6 vendors offering it with any market share. Also, one vendor or chipset usually dominates the market in a way that gives attackers a high probability of success by targetting it.

65535 February 12, 2015 8:01 PM

“The risks must be assessed, countermeasures developed, and the design implemented in a robust way to prevent attacks. Industry in general has ignored those layers so they’re easy to beat. Most aren’t doing any solid form of monitoring or use stuff that’s easy to hide covert channels in.” –Nick P

Yes, I agree. Micr0soft would let their customers “beta test” their insecure products from the date of purchase, wait for the problems to be spotted, and then – and only then – would their products get patched. The MS SDL is a step in the right direction.

The problem with Micr0soft is sheer attack surface. From memory, I think MS extolled the fact that Windows 2000 Pro had some 500,000 different application programs documented to run on their system [I assume this data was gleaned from system scans when updates occurred]. I shudder to think how many of those were infected with malware.

“AV is easy to bypass. Blackhat’s method is to try a malware variant on a bunch of different AV software… keep modifying it until most don’t detect it.” –Nick P

Cough – like easily testing malware against VirusTotal’s free collection of AV products [or links to said AV products]. VirusTotal is a two-edged sword: it can be used to hide malware or to detect it.

“That’s what the U.S. government has been saying all along… as they were backdooring everything they sent to the rest of the world. Chinese might be doing the same and might not. I decided to look at it as a potential solution: use Chinese hardware for privacy solutions with NSA as the threat and U.S. hardware (esp Freescale) if foreign countries are the threat.” –Nick P

That is a thought provoking solution.

“I disagree [with Thoth]. I think it’s because ARM-based SOC’s have been replacing many traditional microcontrollers with unique instruction sets. That smartphone’s use ARM means there’s both many programmers and existing code targeting that architecture.” – Nick P

I don’t know enough about the difference between a full ARM chip and the full capabilities of the hundreds of types of MCUs deployed on everything from thumb drives to wear-leveling SSDs to board controllers. I guess both can be pwned if enough effort is put into owning them. I’ll leave that to you and the other experts. Thanks for the interesting input on this subject.

Figureitout February 12, 2015 8:04 PM

Nick P
I’ve seen people go back and forth over them attempting light changes
–What, lol, that’s funny. Yeah, I have a good eye for cops just from experience… but they also have unmarked Mustangs, trucks, etc. The designer[s] didn’t take that into account, I bet. But they are bad in areas that get cold, as pavement freezes and settles and breaks the wires, rendering the loop useless.

Another “trick” is cameras at night: you can flash your brights at them many times and the “dumb” camera thinks there are 20 cars there. Works like a charm, b/c I’m a freak who memorizes the timing of stoplights. This trick was also evident with the new cop lights (they’re super-bright LEDs, so bright I think they actually damage your eyes), and they frickin’ held up a light at night I was trying to get through.

I want to try the IR receivers on stop lights (they look exactly like this: http://upload.wikimedia.org/wikipedia/commons/7/73/Millersville_opticom.jpg ) but it’s illegal… (only by myself). They put the stoplight into flashing red all around, and that light flashes too. Meant mostly for ambulances.

RE: stoplight pranks
–I can’t laugh though, these need to operate reliably (and securely) all the time. People do die from stupidly designed intersections and these electronics on them. It’ll make engineers sad and nervous. All I’ll say.

Dirk Praet February 12, 2015 8:07 PM

@ Nick P

Lots of different companies and projects are working on the strong stuff: crash-safe.org’s SAFE processor; CHERI capability processor; DARPA secure fabrication work; new networking designs like MinimaLT; easy, strong crypto like NaCl; stronger virtualization like HAVEN and SKPP kernels; driver synthesis with Termite2; fundamentally better OS architecture like GenodeOS, EROS, or JX OS. The list goes on.

But this unfortunately raises the question: how realistic is it to hope that all of this eventually makes it to the mainstream? In the absence of any serious business case, it’s probably never going to get out of high-security environments and applications.

@ Grauhut

We have to look for ways to get the best security out of what we have.

I think you hit the nail on the head. At least for now.

Nick P February 12, 2015 8:57 PM

@ Figureitout

re IR receivers

One of my old hacker buddies used to build lots of interesting gadgets. He told me about “MIRTS” that he designed for use by first responders that had a secondary market in street racing. He said they sold for around $300. They only worked on a subset of lights. That led to them profiling those to plot race tracks throughout the city. So, it can certainly be done.

“I can’t laugh though, these need to operate reliably (and securely) all the time. People do die from stupidly designed intersections and these electronics on them. It’ll make engineers sad and nervous. All I’ll say.”

The risk is low on the electronics end: failures usually just clog traffic up. It’s the people behind the wheel doing most of the killing.

@ Dirk Praet

“But this unfortunately raises the question: how realistic is it to hope that all of this eventually makes it to the mainstream?”

Thing is, though, stronger security is on the market right now; it’s just a minority player. You have the SKPP vendors licensing and demoing their wares. I think Sentinel Security’s HYDRA web app firewall probably still exists. Secure64’s OS + DNS system got quite the review from Matasano Security. On the embedded end, both Sandia and a number of commercial firms sell Java processors to eliminate certain native-code issues from the start. So, the stuff is happening, albeit in a limited way.

I think we could see stuff marketed so long as it delivers a result without too much extra price and clearly showing its benefit.

Nick P February 12, 2015 9:00 PM

@ 65535

Good idea on VirusTotal: it’s what crooks did with it in the past. VirusTotal caught on and started sharing their samples with AV providers to counter that. The crooks just use real AV software for testing far as I know. Haven’t looked into it in a while.

name.withheld.for.obvious.reasons February 12, 2015 9:21 PM

Why isn’t anyone jumping up and down, with their hair on fire, about the worst NON-LEGISLATIVE, NON-STATUTORY authority that is the text of Presidential Policy Directive (PPD) 20? The next worst offender is EO 12333… followed closely by HR 4681, signed into law this year.

From my read of PPD-20 – and from some recent experience that Nick P can back me up on regarding HR 4681 out of the 113th Congress – PPD-20 effectively violates the First, Third, Fourth, and Fifth Amendments (albeit, as part of a court proceeding, by suppressing evidence) of the Bill of Rights of the U.S. Constitution.

This policy makes it OK for the NSA to trample on ALL systems, computing platforms, and networks owned and operated by ANYONE!!! It not only affects platforms, but allows the NSA to access and/or take control of private/public data.

WAKE THE Freak Up People!!!

Thoth February 12, 2015 11:51 PM

@keiner, Nick P
We know that almost every other thing can be compromised on the most atomic levels (hardware) and the leaks confirmed the suspicions. It seems as though nothing has moved because:

1.) The influence they have is so overwhelming that it hinders any security progress. They can corrupt education to teach tainted technology, they can make engineers and programmers do things they would never have thought of doing, and so forth… which means ignorance (general ignorance) can continue to spread…

2.) The corruption is too massive to fix on an observable scale.

3.) Economics is a huge factor hindering motivation and progress. They can tell the banks to delay/deny payment, by coercion or by rewards, and gladly ensure critical security projects never exist – not by hacking them but by starving them dead of funds.

4.) You can add more methods to hinder here ….

In short, the corruption is deep in the bone marrow…

All “secure” technologies not in the interest of the Five Eyes warhawks would likely be downplayed and made to disappear. DARPA and related defense projects may develop “secure” technologies, but they can be prevented from taking root in the world. Why hasn’t MinimaLT, or many of those technologies, caught on in the mainstream? Why are companies reinventing the wheels that these good projects have researched?

Marketing, power, resources, people, politics, technology… they all work hand in hand. You can’t simply invent something good and expect it to spread. There was no push on the MinimaLT authors’ side, and their sponsor (the NSF) did nothing to market it, though spreading such technology to the masses should be the NSF’s responsibility.

I made a post last year regarding the usefulness of having power over the masses, sufficiently huge resources and pioneering technology to stage a huge overhaul but most people only have technology but lack the other two sides of the “triangle of change”.

@Grauhut, Nick P, ARM-TrustZone-related:
That is, if you can trust ARM TrustZone, Intel SGX and whatever TPM technologies are out there not to have been tainted by the Five Eyes warhawks. A lot of applications run on ARM TrustZone, especially critical Android applications like eWallets and banking apps, to leverage the ARM chips inherent in most smartphones and to avoid being tied to SIM cards as Secure Elements, or to Secure Elements in the form of microSD-card HSMs like SecuSmart, SmartCard-HSM (using the certgate microSD card), Datacard’s microSD secure element and so forth. Do you want to place all your trust in a black-box chip for TPM/SEE applications? Personally I won’t, but note that they are widely used in critical sectors’ applications. The touted benefit of ARM TrustZone, or of TPMs integrated into CPUs, is that you don’t need to root the phone or change the OS to run a trusted mode alongside the normal mode, since the TPM handles the trusted mode itself, and TrustZone integrates a TrustUI to “securely” handle GUI inputs directly into the TPM trusted mode.

A tiny SoC based microkernel with very small TCB as some form of secure “startup engine” would be the best method. It has been implemented in a couple of security processors. This way you can’t inject and compromise the entire base of trust.

@Army
It’s quite common these days to attack wartime communication lines with electronic warfare. Fighter planes have been adapted by advanced countries to carry out electronic warfare, but those capabilities are no longer exclusive to certain countries. Most equipment carries some form of lousy COTS chips or architecture inside, and it’s only a matter of time before those weak links snap and bring the castle walls down.

Frank February 13, 2015 12:30 AM

“But mostly the Bush administration. How to fix it? Open up every single piece of hardware you own and start identifying ever piece of metal and silicon, including your toaster.”_Erdem

I’d blame Obama too, if not MORE than Bush. Whatever Bush did, Obama magnified it exponentially.

I’d be happy if Obama had just rolled it all back to Bush levels.

Figureitout February 13, 2015 12:35 AM

most people only have technology but lack the other two sides of the “triangle of change”.
Thoth
–The problem isn’t 2-dimensional. Defenders will be left trying to analyze a polymorphic-self-replicating malware using an equivalent of Pythagorean theorem…

A tiny SoC based microkernel with very small TCB as some form of secure “startup engine” would be the best method.
–No it won’t. Way, waaaayyyyy too much is possible in modern SoCs; I don’t care how small you think it is. You have to build your PC with much smaller chips. All the mathematicians wanting secure PCs need to join the embedded engineers and the analog ones making chips. Otherwise you’re just trusting an infected process, and soon those chips won’t be sold anymore.

Welcome to the new world. Actual hope rests on home-manufacturing. That is our only hope (assuming those machines aren’t infected, which is wishful thinking).

Thoth February 13, 2015 1:43 AM

@Figureitout
It is true that manufacturing plants are most likely infected by those who want to corrupt them, and the best option would be to make your own chips – if that could ever be easily done by those who are not technically savvy.

If we assumed everyone had equal technical knowledge and were good at it, we wouldn’t be in this scenario now; but in reality not everyone knows, so the gap between those who know and those who don’t will continue to grow. Of course that can be used as an excuse not to learn security knowledge and implement it, but that’s how it is: so many people just refuse to listen to the security advice given by Nick P, Clive Robinson and the many of us here.

Regarding my “triangle of change”: it is a high-level view of the problem. If you want to zoom into the microscopic levels of what’s going on, my triangle is not enough to explain the fine details; you would need a polygonal model of sorts, or even an infinite array, to drill into the deep end of the details.

Grauhut February 13, 2015 2:24 AM

@all: We need something today that works like a PC and is connected to some kind of network to get our jobs done.

What is the best combination of hard- and software we can use for this, based on our new knowledge about “the new surveillance normal”?

That’s the real question.

Which items do we have that are not known to be broken?

tyr February 13, 2015 3:39 AM

@FigureItOut
The traffic signal inductive loops
generate a magnetic field about
20 inches high, the vehicle robs
the field by inducing eddy currents
in the vehicle metal. Robbing the
field changes the loop frequency
the detector unit sees the frequency
change and sends a digital signal
into the rest of the system.
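tyr’s description corresponds to the LC resonance formula f = 1 / (2π√(LC)): eddy currents in the vehicle lower the loop inductance L, the oscillator frequency rises, and the detector fires on the shift. A back-of-envelope sketch (all component values invented for illustration):

```python
import math

def loop_frequency(L_henry: float, C_farad: float) -> float:
    """Resonant frequency of the detector's LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# Illustrative values only: roughly 100 uH of loop inductance and a tank
# capacitance chosen to put the oscillator in the tens-of-kHz range.
L_empty = 100e-6                     # loop inductance with no vehicle (assumed)
C = 0.5e-6                           # tank capacitance (assumed)
f_empty = loop_frequency(L_empty, C)

# A car over the loop "robs the field", lowering L by a few percent...
f_car = loop_frequency(L_empty * 0.97, C)

# ...and the detector fires on the resulting upward frequency shift.
shift = f_car - f_empty
print(round(f_empty), round(f_car), round(shift))
```

With these assumed values the shift is a few hundred hertz on a carrier of a few tens of kilohertz, which matches the need for the hand tuning (and later self-tuning) tyr describes.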

Since we did the alpha test on them
and that version turned out to be
the most effective (cost/reliability)
it got adopted all over the place.

You actually had to hand tune the early
models to get them to work. Later units
are self tuning but not instantaneously.

One other interesting part of that job
was hand selecting transistors by beta
to force a gray code counter into a known
state on startup. that’s why they call
them the goode olde dayes.

Clive Robinson February 13, 2015 9:08 AM

@ Grauhut,

We need something today that works like a pc and is connected to some kind of network to get our jobs done.

That’s an over generalisation…

I frequently use a couple of old 486DX machines with 8Mbyte of RAM, 100Mbyte HD, four serial ports and a six-foot 10MHz “thin net” coax network between them, with no other connectivity. Both run an old version of Consensys Sys5R4.2 Unix and DosMerge, as this enables me to conveniently use a whole bunch of “RS232”-based test equipment, ICE boards and their DOS software, ROM programmers and a whole load of other stuff. Whilst both boxes have X Windows on them, I frequently run them using just the command line or the Wyse VT52-compatible terminal that hangs off one of the serial ports.

Something tells me there is a vanishingly small number of newbie development people who would be willing to work without an all-singing, all-dancing high-end laptop with a 17” screen, 12Gbyte of RAM and half a Tbyte of SSD, or a multi-screen high-end desktop with multi-Mbit Internet connectivity.

The rest fall mainly somewhere between those extremes. However, the closer to the old 486 boxes you are, the more likely you are to be secure in your hardware (the BIOS etc. is in ROM, not Flash).

As I said even before the Ed Snowden revelations, I seriously doubt I could sufficiently harden a modern laptop to survive the attention of the NSA et al at the US border if they got their hands on it for an hour or two. Since the revelations, I doubt even the likes of the NSA could harden a laptop against their own people… (it’s also the reason I tend to believe Ed Snowden does not keep a copy of the document trove on the laptop he uses).

The simple fact is, if you’ve got hardware from the mid-90s that has not been connected to the Internet this century, and an OS on CD-ROM from then, you are considerably less likely to have “been got at”, either in the “supply chain” or otherwise.

To be on the safe side I would work on the assumption that any hardware made since 2010 has been “got at” in the “supply chain”; and even if not, if it has been connected to the Internet at any point it has been got at remotely, even if the AV and other anti-malware software was kept up to date. The only question is who has got at it, when, where and how…

BoppingAround February 13, 2015 9:19 AM

name.withheld.for.obvious.reasons,
Jumping won’t help and I personally don’t know what will apart from tearing all those spooks out of existence.
Laws won’t restrain them — they ignore or bend them. Same for tech fancies.

keiner February 13, 2015 9:23 AM

@Grauhut

Zero. Null. Nothing. Niente. Everything is bad.

Buy a Raspberry Pi, buy a Micro-SD card, download a Linux distro. All three likely to be f*cked up.

Buy an appliance with more than 1 NIC, download pfSense, burn it to a CF-card. All three highly likely to be clusterf*cked.

Install Snort (Cisco-NSA) or Suricata (US-Homeland Security) and the f*ck-up increases exponentially…

Face it: There is nothing left in hard- or software to trust.

keiner February 13, 2015 9:30 AM

PS: I’ve been convinced for more than half a year now: all the “firewalling” is f*cked up from the bottom, for to the NSA inside is outside and outside is inside; they literally walk through (fire)walls.

Same with WLAN: they are simply inside and have one large WLAN, world-wide.

See those funny MS protocols on Windows, tunneling IPv6 packets inside IPv4 packets, which cannot be controlled by any firewall. That’s how all this works. And we simply have no way to control what’s going in and out.
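The Windows tunnelling protocols keiner is presumably alluding to are Teredo (RFC 4380) and 6to4 (RFC 3056), which wrap IPv6 packets in IPv4. Their addresses are at least recognisable, and each embeds an IPv4 endpoint that can be recovered. A sketch using Python's standard `ipaddress` module (the sample addresses are illustrative documentation addresses):

```python
import ipaddress

TEREDO = ipaddress.ip_network("2001::/32")   # RFC 4380 Teredo prefix
SIXTO4 = ipaddress.ip_network("2002::/16")   # RFC 3056 6to4 prefix

def classify_tunnel(addr_str: str):
    """Flag tunnelled IPv6 addresses and recover the embedded IPv4 endpoint."""
    addr = ipaddress.IPv6Address(addr_str)
    raw = int(addr)
    if addr in SIXTO4:
        # 6to4 embeds the router's IPv4 address in bits 16..47 of the address.
        v4 = ipaddress.IPv4Address((raw >> 80) & 0xFFFFFFFF)
        return ("6to4", str(v4))
    if addr in TEREDO:
        # Teredo stores the client's IPv4 in the low 32 bits, XORed with 1s.
        v4 = ipaddress.IPv4Address((raw & 0xFFFFFFFF) ^ 0xFFFFFFFF)
        return ("teredo", str(v4))
    return ("native", None)

print(classify_tunnel("2002:c000:0204::1"))  # ('6to4', '192.0.2.4')
```

A border device that refuses IPv4 protocol 41 and UDP/3544, and logs any address matching these prefixes, closes off the most common of these side doors; it does not help against tunnels it cannot parse.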

Nick P February 13, 2015 10:29 AM

@ Grauhut

I posted a list of hardware options in response to the leaks. The graphical machine I went with was a 2003 Mac laptop, with the justification that they weren’t popular among NSA targets and the NSA wasn’t doing much vendor subversion back then. The concept was to get one without built-in wifi to use airgapped behind a custom guard. Problem: the seller didn’t list the built-in wifi, and Rendezvous worked entirely too well. (sigh) At least it’s a backup if my main laptop goes out…

Besides, I’ve got a present for people in the next Friday thread that surpasses Control Flow Integrity’s protection with less overhead, and it has already been applied to FreeBSD. One could combine it with a simple UNIX or Oberon OS on older Intel hardware to stop nearly all code-injection attacks. I’d still put the thing behind a guard, though.

Nick P February 13, 2015 11:14 AM

@ Grauhut, Clive

I used to brag about the SGI workstations I messed with to friends who didn’t know what true graphical power was. Their architecture was so far ahead of everyone else’s it was mind-boggling. Here’s a demo for Bill Gates of their real-time capabilities in 1998. Such workstations are quite usable today (minus modern app support) and all over eBay. Just port one critical library after another to them, plus maybe Python, and you have plenty of workstation to work with. Except for CPU-critical loads, as they’re dual 100-200MHz 64-bit CPUs. Maybe use an embedded CPU as a coprocessor.

Note: Best to mentally compare the graphics to simulation software rather than gaming. The system is optimized for accurate rendering rather than just faking it from one angle. Still better effects than my video games of the time, though.

Grauhut February 13, 2015 11:49 AM

@Keiner: “Zero. Null. Nothing. Niente. Everything is bad.”

No, I don’t think so. Look at China, for instance. If they caught an NSA clusterf*cker in the production chain, they would first squeeze him out, copy the stuff, and then give him a bullet in the neck in public. And send the NSA a bill for the bullet afterwards.

It’s not that easy to 0wn everything under the conditions of global markets.

And you can’t cheat all of us all the time; there are no invisible packets on the Internet.

@Clive: “I would work on the assumption that any hardware made since 2010 has been “got at” in the “supply chain””

With legacy PC hardware this is right; Intel, for instance, included a lot of extra stuff beginning with Sandy Bridge.

What about SBCs without on board firmware?

Grauhut February 13, 2015 1:22 PM

@Nick: Come on, the world needs something that could go into mass production! 😉

What would you put on a hard to 0wn mini-itx board?

Nick P February 13, 2015 3:22 PM

@ Grauhut

Easiest route is the CHERI processor and the CheriBSD FreeBSD port. I’d just put in an I/O unit that transparently adds, removes, and enforces capabilities during I/O operations. You can then run a bunch of OSS code through their toolchain with whatever the maximum settings are. Add devices with easy-to-add open-source drivers. Boom. The same can be done with any of the clean-slate secure chip designs I’ve been posting here (esp. SPARC). Gaisler’s Leon cores are designed to make that easy, have GPL versions available for experimentation, support Linux, and have sometimes been synthesized into ASICs with no defects on the first fab attempt (rare in general).
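As a caricature of the object-granularity protection CHERI enforces in hardware: every pointer becomes an unforgeable capability carrying base, length, and permissions, and every load or store is checked against it. A toy software model (the names and structure are my own illustration, not the CHERI ISA):

```python
from dataclasses import dataclass

MEMORY = bytearray(1024)  # flat memory for the sketch

@dataclass(frozen=True)
class Capability:
    """A 'fat pointer': bounds and permissions travel with the reference."""
    base: int
    length: int
    perms: frozenset  # subset of {"load", "store"}

def load(cap: Capability, offset: int) -> int:
    if "load" not in cap.perms or not 0 <= offset < cap.length:
        raise PermissionError("capability violation")
    return MEMORY[cap.base + offset]

def store(cap: Capability, offset: int, value: int) -> None:
    if "store" not in cap.perms or not 0 <= offset < cap.length:
        raise PermissionError("capability violation")
    MEMORY[cap.base + offset] = value

buf = Capability(base=64, length=16, perms=frozenset({"load", "store"}))
store(buf, 0, 42)
print(load(buf, 0))    # prints 42: in-bounds access succeeds
# load(buf, 16)        # one byte past the end: raises PermissionError
```

The point of doing this in silicon rather than software is that the checks cost little and cannot be skipped by buggy or injected code, which is why it blunts whole classes of memory-corruption attacks.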

Anyway, the CHERI system is already running on the Terasic FPGA board, with hardware and software source available on the web site. The board is around $2,000, I think. As FPGAs are expensive per unit, an ASIC would probably bring the price to half that or less. Custom OSs, esp. capability or microkernel systems, could be incrementally developed on the system. Alternatively, the system can be put on anti-fuse FPGAs that can only be written once, for extra security.

That’s the short route using existing hardware and software with better security. Have fun doing all that!

Grauhut February 13, 2015 6:35 PM

@Nick: Yes, object-granularity memory protection and library-granularity sandboxing based on MIPS64 sounds like great stuff!

But softcores on FPGAs, whose hardware code can be trojanized, are a little bit nightmarish for me; please take these fantasies back, Nick! 🙂

AFAIK the GlobalFoundries Dresden fabs are always happy about new jobs, the EU talks about more independence in the microelectronics sector, some former OSRC and hardware research workers could be available, and GF already took EU money in 2011.

Maybe it’s the right time. Someone should present this to the EUrman security fish-and-chips mafia; Fraunhofer could consult… Yeah, nice idea!

WD February 14, 2015 10:26 PM

Can anyone comment on the effective resistance of cjdns to all creeps, private or government? I’m seriously considering the construction of a family internet permanently “airgapped” from the real Internet.

This regular Internet is just losing all utility…

I’d be really happy if some experts outlined how the “rest of us” could opt out.

jdgalt February 15, 2015 9:13 PM

@Z. Lozinski:
“Securing the phone network would be a much harder job than securing the internet.”

About the same. The international phone network was designed on the assumption there are ~200 networks, all trusted, and you just need to interconnect them. That was true in the early 1980s when SS7 was conceived. With 700-odd telecom service providers licensed by Ofcom in the UK alone, now it is not a valid assumption.

“Fortunately it isn’t necessary. VOIP and similar services are fast making traditional phones completely unnecessary.”

I disagree, for several reasons. The transition from the fixed / landline phone network to VoIP will take 7-15 years in developed countries. There is a huge amount of infrastructure installed, and the capital cost of replacing the existing voice network is significant. The transition from mobile voice to VoIP is dependent on the widespread deployment of LTE (“4G”). Again we are only a few years into the deployment of LTE. And why do you assume that VOIP is any more secure than circuit switched voice?

I’m mainly looking at the first step in the process: getting trustworthy IDs of the persons sending (potentially untrustworthy) traffic to you on the net. The vast majority of Internet providers, at least in Western countries, now filter their outgoing packets and discard any packet with a source IP address that is not in a range belonging to their network or a network downstream of them. But a business with its own PBX on the telephone network can send pretty much whatever data it likes out onto the network, and no phone company is willing to filter that information in the same way (because phone companies have all decided that junk calls are legitimate traffic and that any means a spammer uses to get their calls answered is OK).
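The outgoing-packet filtering jdgalt describes is BCP 38-style egress filtering: a provider forwards only packets whose source address falls inside its own, or a downstream customer's, prefixes, and drops the rest as presumptively spoofed. A sketch of the check with Python's standard `ipaddress` module (the prefixes are illustrative documentation ranges, not any real provider's):

```python
import ipaddress

# Hypothetical prefixes this provider and its downstream customers own.
OWN_PREFIXES = [ipaddress.ip_network(p) for p in ("198.51.100.0/24",
                                                  "203.0.113.0/24")]

def egress_permitted(src_ip: str) -> bool:
    """BCP 38 check: forward only packets whose source address we could
    legitimately originate; anything else is presumed spoofed and dropped."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in OWN_PREFIXES)

print(egress_permitted("198.51.100.7"))  # our customer's address: forward
print(egress_permitted("192.0.2.99"))    # not ours: drop as spoofed
```

The phone-network analogue jdgalt wishes for would be the originating carrier refusing to pass caller-ID values outside the number ranges it has actually assigned.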

If I replaced my phone with a VOIP phone, I might still see phony “caller ID” but I will also see the caller’s IP address, which is certainly legitimate (or at least belongs to the caller’s own ISP or one of its upstream providers).

Please, blow holes in this if you can. If I’m wrong I want to know about it.

Nick P February 15, 2015 11:13 PM

@ Z, jdgalt

A variety of defense and private firms have deployed secure communications over legacy infrastructure. I’m not sure if voice was the easiest but it was done by all major players with tech ranging from POTS to GSM to VOIP. All of these treat the middlemen as untrusted while leveraging a secure construction over the middlemen’s infrastructure. Changing everything across the board might be harder but it seems unnecessary: motivated parties need only mimic or exceed what’s been proven to work for the application area of their choosing.

I say this because the incentives of private and anonymous communication will rarely align with those who control infrastructure or the laws governing it. Best not to depend on them at the design level.

Clive Robinson February 16, 2015 4:20 AM

@ jdgalt,

As per your wish… with regards,

If I replaced my phone with a VOIP phone, I might still see phony “caller ID” but I will also see the caller’s IP address, which is certainly legitimate…

Err, not really: all you actually see is what the next node upstream of you puts onto your wire.

The next node up is possibly a box in the street that you cannot see inside, and which you and your ISP assume is just a line card or bridge; that is why you assume what you see is a reliable path to your ISP’s router, which it is not. In reality that line card or bridge converts the signals that come across your wire into something altogether different, and in some cases there can be an entire network hidden transparently in the “physical layer”. If an attacker can get into the line card, bridge, or hidden network, then they can put on your wire whatever they please, and neither you nor your ISP will be aware of it.

What you need, to ensure you are not suffering from some kind of “active” man-in-the-middle attack or impersonation, is a “reliable” side channel between you and the person calling you, over which you verify a shared secret and also build a secure channel.
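One common realisation of verifying a shared secret over a side channel is a short authentication string: both ends MAC the in-band session's key fingerprint with the pre-shared secret and read a short numeric tag to each other over the side channel. A MITM who substituted their own key cannot produce a matching tag without the secret. A sketch (the protocol framing is my own illustration, not any specific product):

```python
import hmac
import hashlib

def auth_string(shared_secret: bytes, session_fingerprint: bytes,
                digits: int = 6) -> str:
    """Short tag both parties compute locally and compare out of band."""
    tag = hmac.new(shared_secret, session_fingerprint, hashlib.sha256).digest()
    return str(int.from_bytes(tag[:4], "big") % 10**digits).zfill(digits)

secret = b"hand-delivered shared secret"
fp_alice = hashlib.sha256(b"negotiated session key").digest()
fp_bob = hashlib.sha256(b"negotiated session key").digest()

# Matching tags: no one tampered with the in-band key exchange.
print(auth_string(secret, fp_alice) == auth_string(secret, fp_bob))  # True

# A MITM substituting its own key yields a different fingerprint, and
# almost certainly a different tag (collision chance ~1 in 10**6).
fp_mitm = hashlib.sha256(b"attacker key").digest()
print(auth_string(secret, fp_alice) == auth_string(secret, fp_mitm))
```

As Clive notes next, this only pushes the problem to the side channel and the initial secret delivery; the sketch assumes those were not themselves intercepted.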

In large organisations with a “central authority” they can do this via various public key certificates or systems they issue to their employees / personnel.

However, in the wilds of real life you are unlikely to share a central authority you can both trust, so the side channel is not reliable and you are back to MITM attacks, this time on the side channel. This is the problem we see with phoney CA certs, which have caused people significant problems if not actual physical harm.

Thus you get back to the issue of “trust” and what it means. In the past you would “hand deliver” the trusted secret to the party you communicate with and “cross your fingers” that they are trustworthy and remain so… It’s why, in the past, spy traffic had “check traffic”, which implemented a covert channel such that the spy could indicate that they were “under duress”; but even that was not reliable (have a read of Leo Marks’ “Between Silk and Cyanide” [1]).

But on the “torture front”, it is also known that special forces can be fooled by a clever adversary into believing their mission is over etc., and thus unwittingly reveal secrets. So, prior to a mission, they agree a private code word with a person they trust, and give nothing other than “name and number” until the person debriefing them gives the code word.

The problems with trust and communications are legion, but in more recent times various political idiots have tried to do the impossible, which is to assume a person can be tied to an identity reliably (they can not [2]). Even a previous Director General of the UK’s MI5, Stella Rimington, effectively said it was a fool’s errand [3]. But many greedy and dishonest people saw lots of money, and thus kickbacks to them and their political parties. So we still see cuckolded idiots tilting at windmills with helms of lost battles long past [4]…

[1] Or have a brief overview via http://en.m.wikipedia.org/wiki/Leo_Marks

[2] The problem is that, at the end of the day, a “Digital ID” is a number, which is an intangible “information object”, while a human is a tangible “physical object”, and as the old saw has it, “ne’er the twain shall meet”. That is, there is no reliable way to map the physical and information objects securely, and anyone who thinks they can does not understand either the human ability to fake things or the laws of nature as we currently know them.

[3] http://news.bbc.co.uk/1/hi/uk_politics/4444512.stm

[4] http://en.m.wikipedia.org/wiki/Tilting_at_windmills
