The Equation Group's Sophisticated Hacking and Exploitation Tools

This week, Kaspersky Labs published detailed information on what it calls the Equation Group—almost certainly the NSA—and its abilities to embed spyware deep inside computers, gaining pretty much total control of those computers while maintaining persistence in the face of reboots, operating system reinstalls, and commercial anti-virus products. The details are impressive, and I urge anyone interested to read the Kaspersky documents, or this very detailed article from Ars Technica.

Kaspersky doesn’t explicitly name the NSA, but talks about similarities between these techniques and Stuxnet, and points to NSA-like codenames. A related Reuters story provides more confirmation: “A former NSA employee told Reuters that Kaspersky’s analysis was correct, and that people still in the intelligence agency valued these spying programs as highly as Stuxnet. Another former intelligence operative confirmed that the NSA had developed the prized technique of concealing spyware in hard drives, but said he did not know which spy efforts relied on it.”

In some ways, this isn’t news. We saw examples of these techniques in 2013, when Der Spiegel published details of the NSA’s 2008 catalog of implants. (Aside: I don’t believe the person who leaked that catalog is Edward Snowden.) In those pages, we saw examples of malware that embedded itself in computers’ BIOS and disk drive firmware. We already know about the NSA’s infection methods using packet injection and hardware interception.

This is targeted surveillance. There’s nothing here that implies the NSA is doing this sort of thing to every computer, router, or hard drive. It’s doing it only to networks it wants to monitor. Reuters again: “Kaspersky said it found personal computers in 30 countries infected with one or more of the spying programs, with the most infections seen in Iran, followed by Russia, Pakistan, Afghanistan, China, Mali, Syria, Yemen and Algeria. The targets included government and military institutions, telecommunication companies, banks, energy companies, nuclear researchers, media, and Islamic activists, Kaspersky said.” A map of the infections Kaspersky found bears this out.

On one hand, it’s the sort of thing we want the NSA to do. It’s targeted. It’s exploiting existing vulnerabilities. In the overall scheme of things, this is much less disruptive to Internet security than deliberately inserting vulnerabilities that leave everyone insecure.

On the other hand, the NSA’s definition of “targeted” can be pretty broad. We know that it’s hacked the Belgian telephone company and the Brazilian oil company. We know it’s collected every phone call in the Bahamas and Afghanistan. It hacks system administrators worldwide.

On the other other hand—can I even have three hands?—I remember a line from my latest book: “Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.” Today, the Equation Group is “probably the most sophisticated computer attack group in the world,” but these techniques aren’t magically exclusive to the NSA. We know China uses similar techniques. Companies like Gamma Group sell less sophisticated versions of the same things to Third World governments worldwide. We need to figure out how to maintain security in the face of these sorts of attacks, because we’re all going to be subjected to the criminal versions of them in three to five years.

That’s the real problem. Steve Bellovin wrote about this:

For more than 50 years, all computer security has been based on the separation between the trusted portion and the untrusted portion of the system. Once it was “kernel” (or “supervisor”) versus “user” mode, on a single computer. The Orange Book recognized that the concept had to be broader, since there were all sorts of files executed or relied on by privileged portions of the system. Their newer, larger category was dubbed the “Trusted Computing Base” (TCB). When networking came along, we adopted firewalls; the TCB still existed on single computers, but we trusted “inside” computers and networks more than external ones.

There was a danger sign there, though few people recognized it: our networked systems depended on other systems for critical files….

The National Academies report Trust in Cyberspace recognized that the old TCB concept no longer made sense. (Disclaimer: I was on the committee.) Too many threats, such as Word macro viruses, lived purely at user level. Obviously, one could have arbitrarily classified word processors, spreadsheets, etc., as part of the TCB, but that would have been worse than useless; these things were too large and had no need for privileges.

In the 15+ years since then, no satisfactory replacement for the TCB model has been proposed.

We have a serious computer security problem. Everything depends on everything else, and security vulnerabilities in anything affect the security of everything. We simply don’t have the ability to maintain security in a world where we can’t trust the hardware and software we use.

This article was originally published at the Lawfare blog.

EDITED TO ADD (2/17): Slashdot thread. Hacker News thread. Reddit thread. BoingBoing discussion.

EDITED TO ADD (2/18): Here are two academic/hacker presentations on exploiting hard drives. And another article.

EDITED TO ADD (2/23): Another excellent article.

Posted on February 17, 2015 at 12:19 PM

Comments

Nicholas Weaver February 17, 2015 12:36 PM

One interesting thing: it’s ALREADY the stuff of Ph.D. theses. The big innovation is the firmware bootkit (aka IRATEMONK). There are already multiple proof-of-concept implementations of such malcode.

I’d set the over/under at two weeks until we have a demo of malicious drive firmware combined with the “exploit a signed driver to (re)gain root” trick, doing a full Windows trojan install protected by an HDD firmware attack.

Bob S. February 17, 2015 12:41 PM

It appears that attacks via corrupted firmware, hardware and hard drives are the wave of the future for most users, and clearly already a major threat for some special targets.

It’s probably time to start turning over vulnerable hardware every 6 months or so. Somehow you would need a way to regenerate programs and data files without losing too much and still maintain security. Or maybe not. The idea is to replace hardware and drives. Other measures would be necessary to protect files and data.

I am thinking modular construction, everything is an add-on (and removable) from something else. Simply moving stuff around the shop might confuse attackers, somewhat.

Oddly, the goal many times will be to keep your own government out of your pockets more than some teenybop cracker across the ocean.

Those were the days!

AlexT February 17, 2015 12:43 PM

Wonder if there is any hard drive on the market with sealed firmware? If not, it will most likely come soon…

steve37 February 17, 2015 12:56 PM

I’m wondering whether the drive manufacturers will publish firmware updates to fix these security holes, and whether DVD drives are affected too.

Grauhut February 17, 2015 12:58 PM

@Bruce: “There’s nothing here that implies the NSA is doing this sort of thing to every computer, router, or hard drive.”

I don’t think so. It’s a two-step mechanism.

http://cdn.arstechnica.net/wp-content/uploads/2015/02/evo_doublefantasy.png

“DoubleFantasy: a validator-style trojan designed to confirm if the infected person is an intended target. People who are confirmed get upgraded to either EquationDrug or GrayFish.”

If they have a first foot in the door with the DoubleFantasy trojan, they will try to keep it open. No one risks losing a potential source of their own free will; they don’t know if they would get a second chance.

Dave February 17, 2015 1:36 PM

I recall reading Tom Clancy’s Threat Vector in late 2012 or early 2013. One of the key bits of spy tech was a tampered hard drive.

albert February 17, 2015 2:05 PM

@Bruce

“…On one hand, it’s the sort of thing we want the NSA to do…”
.
What you mean ‘we’ Kimosabe?
.
‘We’ invade and decimate sovereign nations, and then get pissed when terrorists from those nations attack us. Sauce for the goose is sauce for the gander.
.
Our business and industrial infrastructure is dangerously non-secure. The NSA budget would be better spent on increasing our security, not wasted on chasing bogeymen and waging war on every little nation that can’t fight back.
.
Stuxnet was cyberwarfare. Totally illegal. For proof, just consider what ‘we’ would do if Iran did that to us (or actually, to Israel, who was the real instigator for using Stuxnet). There’d be boots on the ground. US boots of course.
.
One thing that would help tremendously would be for ‘us’ to stop viewing every nation we don’t control as ‘the enemy’. It’s easy to make enemies, but more difficult to make friends.

The Clapper Clap February 17, 2015 2:34 PM

Let’s hope Bruce is using ‘targeted’ tongue-in-cheek. Targeted at 30 countries. Countries we’re not at war with. Countries including the USA. And that notorious existential threat Great Britain.

The map clearly confounds NSA IT-sabotage intensity with IT diffusion. You really think Mongolia is less infected, sandwiched between and dependent on eternal enemies Russia and China? Of course not. You don’t have a computer in your ger, that’s all. You think Equatorial Africa is less infected? If you saw their security standards you’d pass out. They simply have fewer computers, shared by more users. Those boxes get everything there is. Aid workers will take the NSA sabotage toolchain home with their new STDs. If we had a world war with weaponized smallpox, this map is exactly what it would look like. In the early stages.

me February 17, 2015 2:50 PM

We need someone to come up with some simple instructions on how to modify the board on some semi-popular hard drives so the firmware can’t be updated (cut a pin, cut a track, etc.). Same goes for the rest of the system (BIOS, DVD, network, etc.). Although, I know some Western Digital drives actually store part of their firmware on the platters; drives that load firmware from disk should be listed so they can be excluded. Then throw it all out and try again.

sidd February 17, 2015 4:07 PM

Mr. Weaver wrote:
“I’d give over/under of two weeks until we have a demo setup …”

I referenced

http://spritesmods.com/?art=hddhack

in the last squid post, where Linux was induced to partially boot from drive firmware a couple of years ago, so I would not take the other side of the bet. Along these lines, I would be very surprised if network adaptor firmware exploits aren’t lurking about too.

I also imagine that thirty seconds after the news hit the web, a large majority of (competent) admins began monomaniacally checking for those signatures, and all those C&C servers came under surveillance and/or attack. I expect more reports of NSA malware shortly.

These exploits are very short-sighted, and will be turned against their creators as well as bystanders. Far, far better would have been for the zero-days and firmware exploits to be publicized and the vendors shamed into fixes. But I suppose, although defense wins games, offense sells tickets. And the tickets are for much more lucrative budgets, which grow ever larger the longer the game continues. There is no profit for the spooks in ending the game.

sidd

steven February 17, 2015 4:12 PM

Asking to write-protect the firmware or make it permanent is an odd request. Especially if the vendor can’t be trusted anyway. Surely we’d prefer the opposite: source code to compile our own, trusted firmware image; and ability to overwrite the firmware of a disk even if it has been compromised.

Of course, you’d still need a trusted system to build and flash firmware from. Live CDs should help a lot here, especially if you have old media lying around that predates the malware in question. Share and compare firmware source and compiled images with your friends. A diversity of hardware architectures and age of systems is also good to have, as malware has many constraints and can’t target everything conceivable.

Once you have a trusted system, having barely enough trusted storage to boot the OS, you can even use untrusted disks quite safely – as encrypted storage. That way you never write anything sensitive to that disk, and you authenticate the data returned to you by its firmware. Just so long as your BIOS won’t ever try to boot from it.
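
A minimal sketch of that last idea, in Python with the pyca “cryptography” package (the sector layout and names are illustrative, not a real block driver): every sector is encrypted and authenticated on the trusted side before it touches the untrusted disk, and the sector number is bound in as associated data so a malicious firmware can’t silently serve one sector’s valid ciphertext for another.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    SECTOR = 4096
    key = AESGCM.generate_key(bit_length=256)   # lives only on the trusted system
    aead = AESGCM(key)

    def write_sector(disk: dict, lba: int, data: bytes) -> None:
        nonce = os.urandom(12)
        # LBA as associated data: relocated sectors fail authentication.
        disk[lba] = nonce + aead.encrypt(nonce, data.ljust(SECTOR, b"\0"),
                                         str(lba).encode())

    def read_sector(disk: dict, lba: int) -> bytes:
        blob = disk[lba]
        # Raises InvalidTag if the drive returned anything it wasn't given.
        return aead.decrypt(blob[:12], blob[12:], str(lba).encode())

    disk = {}                                   # stand-in for the raw device
    write_sector(disk, 7, b"secret notes")
    assert read_sector(disk, 7).rstrip(b"\0") == b"secret notes"

One gap remains even then: nothing above stops the firmware replaying an older ciphertext for the same sector, so a real design would also keep per-sector version counters on the trusted side.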

Michael February 17, 2015 4:24 PM

I thought most hard drive firmware was written outside the US, in India for instance. How does the NSA coerce programmers in India to create a malware hook, and then stay silent about it?

If I were cracking Linux, I think I would put the hook into graphics drivers.

Maybe the company itself does not have to be involved, just a private payment to someone. Getting hooks into Windows might only require greasing an individual’s palm to get the malware hook in.

I have never heard of anyone conjecturing that servers have been altered in some way to aid NSA tracking.

tyr February 17, 2015 4:44 PM

I was particularly struck by the leftover detritus, as Kaspersky got hold of a bunch of abandoned domains where the infected machines were still reporting back.

Bruce was looking good on RT television; the soundbite nature of their format makes it hard to get much info across, but at least it isn’t the hysterical spin version.

One possibility that might work is scrounging up some obsolete hardware and using that as your TCB basis. I recall lots of new websites trying to push Windows crap onto the Unix I was using at the time; they were assuming everybody used Windows. Now they assume we all use some modern disc tech. If you don’t, it will at least give you some small space of invulnerability.

On the political horizon: if such things are considered acts of war, this is the most dangerous kind of insane stupidity. The Atomic Scientists’ Doomsday Clock is far too close to midnight as it is, and major hacks against nuclear powers are a really bad idea. Leaving your cutesy ID codewords behind is even worse.

steven February 17, 2015 5:09 PM

The FBI could learn a lot from the Kaspersky PDF: if you want to attribute a hacking incident to a nation state – convincingly – then this is how you should go about it.

Clive Robinson February 17, 2015 5:32 PM

@ Bruce,

We simply don’t have the ability to maintain security in a world where we can’t trust the hardware and software we use.

There are a couple of issues behind this…

The first is the “permissive business model”: basically, computers have been sold into business not on the idea of segregation/security but of connectivity/insecurity. That is, the greater the degree of connectivity and access, the more useful computers are as “business enablers”, which is what executives pay for. As the industry so often shows (Sony being one of the more obvious examples), management are too short-sighted to care about security other than in name, for audit and compliance.

As for “trust”, forget it; it’s a pipe dream that hasn’t happened and is not going to happen, plain and simple. The systems are too complex and the supply chains too long and opaque. The solution, as with placing trust in humans, is to mitigate the problem in some manner.

However, mitigation is going to be expensive, and that is going to be a really tough sell to people whose main interest is in maximizing “shareholder return” for the next couple of quarters, as that’s the way to keep their jobs… Likewise, at all levels down from executive row, people want systems that enable them to keep up with the targets they have been set, and thus do not want systems that get in their way.

Yes, we can have improved security, but only at a price few are prepared to pay, and then, as with infrastructure companies, only because they are required to by legislation.

Thus if we want more secure systems we need legislation that will force a level playing field by requiring it of all, not a few. However, legislation brings many pitfalls and dangers. Personally I have no faith in our legislators getting it right; there are too many well-heeled vested interests that will lobby to neuter any such legislation.

Further, I don’t believe that either of the two main US parties is even remotely interested in raising the bar on computer security; in fact, just the opposite, as their deeds rather than their speeches have clearly shown.

wp February 17, 2015 6:15 PM

I would like to see hardware vendors, especially of hard drives and motherboards, use parts that have a write-protect pin, and allow the user to disable the write protection only through a physical jumper or switch in case an update is needed; otherwise keep it enabled. If you did this for all non-volatile storage in the system (except the hard disk itself), then you could at least prevent malware from being stored in NV memory, and have some assurance that a system is sanitized by wiping the hard drive (assuming the hardware didn’t have implants to begin with). I’d also like vendors to sign their firmware and publicly post the signature, though you might have to pull the part off the board and read it manually to prove anything.
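
The verification half of that is simple enough to sketch in Python, assuming the image was dumped off-board with an external programmer rather than by asking the (possibly compromised) firmware for a copy of itself; the filename and published digest are placeholders:

    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    PUBLISHED = "placeholder-digest-from-the-vendor"   # posted out-of-band
    print("OK" if sha256_of("dump.bin") == PUBLISHED else "MISMATCH")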

Sancho_P February 17, 2015 6:16 PM

HD firmware, BIOS hacks, subverted chips – OK.
Lock one door, they will use the other.

So the computer (targeted means a small (my) machine, not a server system) is pwned.
It also means there must be “something” very intelligent within the machine, undetected by AV software, collecting “specific” information on the machine – not a simple task, anyway. It’s nearly unbelievable that this could hide in HD firmware alone.

But the data (which? [1]) finally has to leave the machine via Internet (?).

Would that mean we need another device, much simpler than a universal PC, to monitor our computer’s networking activity?
Similar to a router or a proxy, and unlike e.g. “Little Snitch” (for OS X), which runs on the (supposedly) pwned machine, where it would be useless?

It would check all outgoing connections, cross-check the destination IP against a manually configured whitelist, and check the amount of outgoing data to each connection for plausibility and/or confirmation by the user.
The device would only listen, like Wireshark, probably able to cut the line in real time, and could not be directly updated via the Internet.

Could such a device detect suspicious activity?
(not a device for Jane & John Doe, however)
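
A rough sketch of the core check such a device might run, in Python with scapy on a mirror port; the whitelist, local prefix and volume cap are purely illustrative:

    from collections import defaultdict
    from scapy.all import IP, sniff   # passive capture on a mirror/span port

    WHITELIST = {"192.0.2.10", "198.51.100.25"}   # manually configured
    LOCAL = "10."                                 # our side of the wire
    MAX_BYTES = 10 * 1024 * 1024                  # plausibility cap per target
    sent = defaultdict(int)

    def check(pkt):
        if IP in pkt and pkt[IP].src.startswith(LOCAL):   # outbound only
            dst = pkt[IP].dst
            if dst not in WHITELIST:
                print("ALERT: unexpected destination", dst)
                return
            sent[dst] += len(pkt)
            if sent[dst] > MAX_BYTES:
                print("ALERT: implausible volume to", dst)

    sniff(prn=check, store=False)   # listen-only; this box never transmits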

[1]
I’m asking because nearly everything interesting has to leave the computer – and the target is interesting, too (metadata). They already have it from the backbone.
So it must be looking for passwords and encryption keys / OTP’s?

Grauhut February 17, 2015 6:55 PM

@Sancho: “…there must be “something” very intelligent within the machine, undetected by AV software, collecting “specific” information on the machine – not a simple task, anyway.”

The attack has two stages, stage one “DoubleFantasy: a validator-style trojan” does the scanning.

Do we already know “DoubleFantasy” is not DualUse AV software? Some kind of “Malicious Software Removal Tool”? 🙂

If you want to monitor your traffic, get a switch with a monitoring port (a cheap gigabit web-smart switch will do) between your systems and your Internet link. Let a primitive machine do the scanning. Something like this combo: openbsd.org/armv7.html

Nick P February 17, 2015 7:00 PM

@ Michael

They could offer them a large sum of money or a better job later on if they do the subversion. Telling them the risk is low for them (if done properly) while offering a violent alternative to compliance might increase odds they go along with it.

Nate February 17, 2015 7:04 PM

Bruce, you said:

“On one hand, it’s the sort of thing we want the NSA to do. It’s targeted. It’s exploiting existing vulnerabilities.”

Sorry Bruce, but I beg to differ. In the strongest possible terms.

We don’t know that EquationGroup is only ‘exploiting existing vulnerabilities’. We know that they ARE exploiting vulnerabilities, and we know that they are not reporting the vulnerabilities that they discover (so they’re certainly not making the cyber-world more secure).

We also know from the Snowden documents that other groups within the NSA are tasked to introduce new vulnerabilities, and there’s absolutely no reason to believe that the group which creates vulnerabilities and the group that exploits them are in any kind of tension. Common sense would argue the opposite: a capability requiring vulnerabilities will feed demand for the manufacture of vulnerabilities. It’s at the VERY least a ‘perverse incentive’, and most likely far worse.

We also know from this very Kaspersky report that EquationGroup are additionally interdicting physical CDs in the post and introducing malware into them. We knew this from the TAO catalog but this should frighten us. We now can have no trust in the integrity of the postal service. Is this a good thing for democracy? I don’t believe so. The implications of this kind of intervention run very deep, and very dark.

Finally, ‘targeting’? Are you really okay with this? (p15 of the report):

“One such incident involved targeting participants at a scientific conference in Houston. Upon returning home, some of the participants received by mail a copy of the conference proceedings, together with a slideshow including various conference materials. The [compromised ?] CD-ROM used “autorun.inf” to execute an installer that began by attempting to escalate privileges using two known EQUATION group exploits. Next, it attempted to run the group’s DOUBLEFANTASY implant and install it onto the victim’s machine. The exact method by which these CDs were interdicted is unknown. We do not believe the conference organizers did this on purpose. At the same time, the super-rare DOUBLEFANTASY malware, together with its installer with two zero-day exploits, don’t end up on a CD by accident.”

Scientific conferences in the continental USA? Is that REALLY a legitimate war/espionage target? I’m not American, but surely some things should be sacred to the US military, and science happening on US soil would be up there, wouldn’t it? And yet, nope.

p21:
“Victims generally fall into the following categories:
Governments and diplomatic institutions
Telecommunication
Aerospace
Energy
Nuclear research
Oil and gas
Military
Nanotechnology
Islamic activists and scholars
Mass media
Transportation
Financial institutions
Companies developing cryptographic technologies”

Are you ABSOLUTELY sure these are ‘legitimate’ war targets? They look like civilian infrastructure to me.

So please speak for yourself, but I do NOT want the NSA or any other nation-state group doing this kind of subversion of the civilian infrastructure I depend on for peaceful living.

DEFCON 8 bis February 17, 2015 7:07 PM

Can’t wait to find out why dkk was carefully, precisely targeted by country and forum membership

https://forum.lowyat.net/topic/1488855/all

in 2010, while the KLWCT happened to be marshaling forensic evidence of universal-jurisdiction crimes by the US and UK command structure.

Monitoring threats to US impunity – the sort of thing we want the NSA to do!

JonKnowsNothing February 17, 2015 8:04 PM

My first computer was a terrific (RIP) Radio Shack one with 8″ floppies and a notoriously bad floppy drive where nearly every write cycle corrupted the file directory structure. Following simple directions in a popular magazine, anyone who had one of these used “defensive recovery” methods: re-writing part of the drive routine to make a copy of the file directory in a hidden/unused area of the floppy. If the main directory failed, you executed your recovery script and rewrote the file directory.

The same techniques worked quite well when the IBM PC made its appearance; its 5.25″ floppies weren’t any more reliable than the 8″ ones.

Years ago the baddies found that they could stash their malware in GPU memory and other non-volatile memory areas beyond reach and detection.

Anyone with a bit of ddg-fu (or google-fu if you must) can download the specs from nearly any corporation that makes firmware and read all-about-it. It’s not scintillating reading but with a bit of perseverance you learn that: Anything in Firmware is Anything but Secure.

The best news though is that Kaspersky has broken ranks with the Silent Partners of the NSA – the AVCos. The US News outlets are busy pushing this news off the front pages as fast as they can but it keeps filtering upwards even though a goodly number of people probably couldn’t find a hard drive even if it landed on a plate in front of them.

What’s even better is that by “using your little grey cells” you can figure out a lot more about what is and isn’t happening simply by reverse engineering: What would it take to do X? For many complex problems stepwise increments can be very simple to understand.

What comes from this is what a lot of people have been saying: THE INTERNET IS BROKEN

There is no protocol, crypto or other solution that will “fix” this problem. A while back someone posted that the Internet 5 years from now won’t look like it does today. Once corporations begin to consider the magnitude of the problem and users are finally freed from the “I have nothing to hide…” ideology a Newer Net may have a chance to evolve.

That’s a big MAY … there will be a lot of skeptics who do not want to give up their bread and butter, and governments who find the current incarnation so convenient that they will stick as many spanners in the works as they can.

There isn’t any security at all on the net. It’s all MAYA

ht tp://en.wikipedia.org/wiki/Maya_(illusion)
(url fractured to prevent auto-run. remove the space from the header)

Nick P February 17, 2015 8:09 PM

@ Nate

“Are you ABSOLUTELY sure these are ‘legitimate’ war targets? They look like civilian infrastructure to me.”

Absolutely. Being able to selectively or totally disable those during a conflict would be tremendously valuable to a military organization (eg NSA). Some are more appropriate than others, not to mention blowback. Problem I have is, as you point out, they aren’t just targeting these in hostile countries. Even worse, they’re firing at their own people. (!)

The funny thing is this was something that I didn’t seriously contemplate until I saw Enemy of the State. Even then, I thought it would only be used for the most exceptional (or just criminal) situations if Americans were the targets. The movie damaged their reputation and domestic power for years. They spent those years convincing us they wouldn’t do anything like that to Americans. After 9/11, they asked (and were allowed) to do way more than that. So, they voluntarily became exactly what they spent years telling us they had no intention of being. And this time their intel supported indefinite detention, torture, and murder by drones!

Note: Movie focused on satellites and CCTV surveillance where reality mostly ended up being Internet and cell phones. Gene Hackman using a faraday cage, air gaps, and countersurveillance proved to be accurate on minimum defense necessary. Blowing up one’s office because someone made a phone call: yet to be tested for effectiveness but his motivation makes more sense now than then.

@ All

Stumbled upon this hilarious Snowden remix of Enemy of the State trailer.

DA February 17, 2015 8:35 PM

On one hand, it’s the sort of thing we want the NSA to do.

I know you’re going to get a lot of pushback on this, but as one of those guys in the middle on this issue, I appreciate it. And you’re right.

There’s a pretty good chunk of the public who will believe that anything the NSA (and the rest of the IC) does, no matter how sketchy or draconian, is absolutely a good thing that protects us from all manner of horrible people. And there’s a pretty good chunk of the public who believes that anything the NSA does is by definition evil, that all they do is twirl their mustaches and think of ways to create a turnkey totalitarian state.

I do not myself buy into “the truth is always in the middle” fallacy. But in this case it is in fact true that a nation has to be able to suss out the secrets of other nations. It’s been absolutely necessary since the modern nation state has been a thing. At the same time, it can be a civil liberties danger. We need to be responsible citizens and try to see that this country meets both challenges.

Dirk Praet February 17, 2015 8:42 PM

@ wp

I would like to see hardware vendors especially of hard drives and motherboards to use parts which have a write protect pin and allow the user to disable the write protect only through a physical jumper or switch in the hardware in case you need to update it, otherwise keep it disabled.

Technically feasible, but highly impractical. Admins and power users can open up servers and desktops to change jumper, dipswitch or pin settings, but that sort of stuff is beyond the average user and a complete nightmare in a corporate environment with hundreds or thousands of machines to service. Removing a hard drive from most laptops is not too difficult either, but what about other infectable components such as NICs, controllers and the like? That essentially boils down to taking the machine apart. Same goes for tablets and smartphones.

We know from the ANT division catalog that the NSA also has a series of BIOS exploits (e.g. ARKSTREAM + DEITYBOUNCE, IRONCHEF), which begs the question why BIOSes are not write-protected by default. I am also pretty baffled by nls_933w.dll, the Equation Group module that re-writes the HD firmware on Windows. I always thought the UEFI+TPM combination was specifically designed to prevent this kind of subversion. On some HP Proliant servers, for example, HP SUM (Smart Update Manager) utilities for iLO, Smart Array, NIC and BIOS will detect if a TPM is enabled on the system and will warn users prior to a flash. If Bitlocker is enabled, the recovery key is even needed to access the user data upon reboot. Which obviously indicates that it is very much possible to detect any firmware tampering before it happens.

The really bad part is that because of the very nature of the infection, it is almost impossible to detect if a hard drive is infected, or to re-flash it. Which totally changes the game of data sanitization and incident response.

@ Nick P., @ Nate

Are you ABSOLUTELY sure these are ‘legitimate’ war targets? They look like civilian infrastructure to me.

Under the assumption that these are legitimate war targets, that would imply that the NSA is effectively preparing for such a scenario. It is however more likely that they have infiltrated many of these organisations for economic and industrial espionage. As for domestic targets, I guess that’s the NSA’s interpretation of “data sharing” between corporate entities and the government.

Mit M Fisher February 17, 2015 9:28 PM

Re. “Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.”

A number of today’s top secret programs are also today’s contractor employees’ outside-of-work ‘toys.’ This includes both weaponized software and hardware. This is likely one conduit by which the exploits find their way into the wild, but is also a significant problem in its own right. Many of those with ‘recreational access’ clearance are at best juvenile sociopaths, some are much worse. You can really do a lot of damage to a community with these things.

From Nov 14, 2014 article regarding NSA Director Rogers speaking at a RAND Corp. conference:
“Asked how he pitches jobs at the NSA to top tech recruits, Rogers cracked another joke. “We’re gonna let you do some really neat stuff,” he said. “Some really neat stuff that, quite frankly, you can’t legally do anywhere else.””

It might have been framed as a joke but it is very clearly a common recruiting angle. If NSA ($11B budget) or Defense Intelligence ($80B budget) recruits this way, what sort of behavior should we expect from them at home or afield? And where exactly does the allowance for illegal use of these weapons end?

Buck February 17, 2015 10:29 PM

All this talk of ‘(il)legitimate’ domestic/foreign targets is such a yawn… The lines have already been drawn:

DoD Strategy for Operating in Cyberspace (DSOC) (July 14, 2011)

Strategic Initiative 1: Treat cyberspace as an operational domain to organize, train, and equip so that DoD can take full advantage of cyberspace’s potential.

Press Release: http://www.defense.gov/releases/release.aspx?releaseid=14651

Official Download (.pdf): http://www.defense.gov/news/d20110714cyber.pdf (I’ve previously had some troubles with this specific file getting corrupted on the mobile phone, but surely a search for ‘d20110714cyber.pdf’ will result in the proper document 😉)

The ‘great’ information war is well underway! The instant you step into the virtual world (unavoidable), you are most certainly considered a potential enemy combatant. How ’bout this one: Congressmen Seek To Lift Propaganda Ban (May 18, 2012)
Yes, I am very much aware of my patriotic duty to support the party lines, yet I still can’t help but feel that Snowden’s revelations are more policy-based continuations than any real attempt at bureaucratic/political subversion…

Wael February 17, 2015 11:55 PM

@Dirk Praet, @ Nick P

I am also pretty baffled by nls_933w.dll, […] On some HP Proliant servers, for example, HP SUM (Smart Update Manager) utilities for iLO, Smart Array, NIC and BIOS will detect if a TPM is enabled on the system and will warn users prior to a flash. If Bitlocker is enabled, the recovery key is even needed to access the user data upon reboot. Which obviously indicates that it is very much possible to detect any firmware tampering before it happens.

Several things to keep in mind:

  1. The TPM is a “passive” device (slave); it executes commands.
  2. The BIOS / UEFI controls what the TPM does in the early stages of power-on starting from the time the CPU comes out of reset
  3. The TCG specifications state what is “measured” in each PCR
  4. The TPM only measures and doesn’t “Detect” — that’s one of the differences between trusted boot and secure boot; secure boot will detect and flag an anomaly, but trusted boot won’t. It’s possible to have both trusted and secure boot.[1]

Regarding the hard drive firmware: As far as I remember (and it’s been close to 7 years since I worked on this), this isn’t an “option rom” that’s measured by the TPM. Some firmware isn’t exposed (reachable) to be measured. You can ask the hosting board to update it, but not read it. If it’s not reachable by the BIOS / UEFI, then it can’t be measured in the TPM’s PCRs.
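
For readers who haven’t met “measurement” before: the PCR extend operation is just chained hashing, so a PCR’s final value commits to the whole ordered list of measured components. A small Python sketch of the TPM 1.2-style (SHA-1) semantics:

    import hashlib

    def extend(pcr: bytes, component: bytes) -> bytes:
        # TPM 1.2 semantics: PCR_new = SHA1(PCR_old || SHA1(component)).
        return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

    pcr = b"\x00" * 20                           # PCR reset value at power-on
    for blob in (b"BIOS code", b"option ROM"):   # whatever the platform measures
        pcr = extend(pcr, blob)
    print(pcr.hex())

Hard drive firmware the BIOS/UEFI can’t read never enters this chain, which is the point above: unmeasured means invisible to sealing.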

As for:

NIC and BIOS will detect if a TPM is enabled on the system and will warn users prior to a flash.

It’s because of the possibility that a measured option rom will change one of the PCRs (PCR2, 3), which the decryption key is sealed to. And if the PCR value changes, then the decryption key won’t be “unsealed”, resulting in the drive data being indecipherable.

[1] “Trust” does not imply “Goodness”! It simply means that the challenger can cryptographically verify the reported state is correct. The challenger trusts that the state reported by the “challenged device” is true (good or bad state… is irrelevant, in this context)
[2] Hard drive stealth paper

pb February 18, 2015 12:35 AM

Hi,
Thanks for the link,

A note in the article states that the code remained on the compromised disks even after they were wiped by a military-standard wiping procedure.

” The malicious firmware created a secret storage vault that survived military-grade disk wiping and reformatting, making sensitive data stolen from victims available even after reformatting the drive and reinstalling the operating system. ”

As I understand it, a military-grade re-write consists of at least 7 passes of pseudo-random data over the whole span of the disk (Gutmann, yours, DoD Unclassified Computer Hard Drive Disposition, etc.).

How is it possible for data to remain on the infected disk even after 7-35 wipes, since such a procedure is usually carried out from another, physically separate system?

Thanks.

Ole Juul February 18, 2015 1:24 AM

@pb I too find it hard to believe that “military standard” wiping is inadequate, but it is certainly possible. Their standard wipe could cover just the regular part of the disk where data is written. It is possible to create other areas, particularly at the ends, which are not touched by that. There may also be a little space left at the end of firmware (not data) sections. A full wipe (e.g. the whole disk made blank) would render the drive useless for any data storage purpose. It’s easier to just physically destroy the whole unit.

Grauhut February 18, 2015 1:33 AM

@pb: “How is it possible for data to remain on the infected disk even after 7-35 wipe outs”

The surviving part of the disk is invisible to the format/wiping program, so it isn’t wiped, not even once.

Transparency February 18, 2015 2:25 AM

The answer is easy enough. Just as we have GPLv3 software, where we demand the ability to examine and change the source code, we need GPLv3-like hardware designs, where we have the ability to specify AND verify that the hardware conforms to publicly accepted standards of security.

I think the market for such open hardware design specs will grow in the coming years, much as Linux grew, though in a more open manner than the GPLv2 kernel and other licenses that fall short of an accountable license model.

937da4fc4d February 18, 2015 4:04 AM

How can this firmware implant be detected? Is it enough to compare the geometry reported by the HDD/SSD to the one printed on the attached label?

937da4fc4d February 18, 2015 4:09 AM

To be more precise, some HDDs are larger than announced (e.g., some 2TB drives are really 3TB ones whose capacity has been limited in firmware) and SSDs usually reserve some space for overprovisioning. Are these areas used by the Equation Group implant, or is it less clever, just taking the space for its encrypted filesystem from the normally available capacity?

Max February 18, 2015 6:35 AM

The Reuters story quotes Kaspersky researcher Costin Raiu saying that the hackers (presumably NSA) must have had access to hard drive source code. Really? What do you need source for? You put your code in some unused space and patch the existing code to jump to it. Or you can fully disassemble the code, modify it, and reassemble it (but that’s more work and riskier).
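
In other words, the classic binary-patching routine. A deliberately abstract Python sketch of the idea (the “branch” written here is a made-up 4-byte placeholder, since a real patch would have to encode an actual jump for the drive CPU’s instruction set, often ARM):

    # Conceptual only: find a run of still-erased flash (0xFF) to host a
    # payload, then redirect one existing code path to it; no source needed.
    def find_cave(image: bytes, size: int) -> int:
        return image.find(b"\xff" * size)          # unused, still-erased region

    def patch(image: bytearray, payload: bytes, hook_off: int) -> bytearray:
        cave = find_cave(bytes(image), len(payload))
        image[cave:cave + len(payload)] = payload  # drop payload into free space
        image[hook_off:hook_off + 4] = cave.to_bytes(4, "little")  # fake "jump"
        return image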

Bob S. February 18, 2015 6:46 AM

“NSA, is taking advantage of the centralization of hard-drive manufacturing to the US, by making WD and Seagate embed its spying back-doors straight into the hard-drive firmware…”

“Kaspersky claims that the new backdoor is perfect in design.”

Source: NSA Hides Spying Backdoors into Hard Drive Firmware

When the Revelations first came out I was vocal about it here and other places. In a matter of a few months without warning TWO hard drives went BSOD on me. I wondered if that was a coincidence. I am still wondering, more.

The implications of a government having an undetectable, unbeatable backdoor in almost every PC in the world are mind-boggling. And depressing.

Dirk Praet February 18, 2015 6:58 AM

@ Wael

You can ask the hosting board to update it, but not read it. If it’s not reachable by the BIOS / UEFI, then it can’t be measured in the TPM’s PCRs.

Which is exactly why it’s so hard to determine whether or not a disk has been infected. In the light of what we are seeing today, perhaps that design specification/implementation should be reconsidered, unless for some reason it would be technically impossible to do so.

@ Buck

All this talk of ‘(il)legitimate’ domestic/foreign targets is such a yawn… The lines have already been drawn: …

Which makes it all the more necessary to get a tad more serious about international talks, covenants and treaties governing the subject matter, in order to avoid certain nations imposing their dominance in the field upon others. Not everyone on the globe agrees with the vision of the US DoD and DoJ that they can pretty much do what they want wherever they want.

It may be interesting in this context to point to a current DoJ initiative that would make it possible for the US to “legally” search and seize data on computers worldwide. It’s not even being debated in Congress, but in the Judicial Conference’s Advisory Committee on the Federal Rules of Criminal Procedure. For more details, see this article by the Center for Democracy & Technology.

steven February 18, 2015 7:00 AM

@937da4fc4d:
someone (on Reddit I think – thank you) suggested reading the first blocks (MBR) from the disk, power-cycling the disk so that its firmware thinks the system is booting (how else can it know?), then reading them again and comparing.

Maybe the exact block size and sequence of reads has to match how BIOS would read it at boot. For GPT partitioning or UEFI systems maybe the process is different again.

I suggest booting from a Live CD in order to try this. (p.s., I wonder how safe optical drive firmwares are).
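
A minimal Python sketch of that test, run as root from a Live CD (the device path is illustrative, and a careful version would bypass the OS page cache, e.g. with O_DIRECT):

    import hashlib

    DEV = "/dev/sdb"   # the suspect drive, opened read-only

    def first_blocks(dev: str, count: int = 64, size: int = 512) -> bytes:
        with open(dev, "rb") as d:
            return d.read(count * size)

    before = hashlib.sha256(first_blocks(DEV)).hexdigest()
    input("Power-cycle the drive so its firmware expects a boot, then press Enter...")
    after = hashlib.sha256(first_blocks(DEV)).hexdigest()
    print("unchanged" if before == after else "MBR region DIFFERS after power-cycle")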

Surely there are old disks out there from machines that got infected and then taken out of service, and the malware would have no means to remove itself unless it booted to the OS (then reads system clock, realises much time has passed, panics and erases or disables itself).

An infected disk may have been sold on the second-hand market, having been ‘securely erased’ but still having bad firmware on it. If a different/newer OS was installed, one that the malware doesn’t know how to re-infect, it may be unable to erase itself then. Or similarly, if the hard drive had bad firmware installed from the start, or was interdicted, etc. and the implant was unsuccessful.

Still it may be more stealthy than this. It might only modify the MBR if the MBR on-disk exactly matches an OS it knows how to infect. It might count the number of boot attempts, and disable itself (return to normal behaviour) if it fails too many times to reinfect the OS.

Clive Robinson February 18, 2015 7:10 AM

One thing that does worry me is the repeated “Only the NSA had the budgets to do this” refrain from journalists, and it’s a real problem.

Let’s get down to brass tacks: all that’s needed is a brain, a computer and some information, all of which were available more than 14 years ago. In theory, one keenly motivated person could have produced such code.

Thus repeatedly emphasizing “super state-level resources” is in part like allowing people to use “I was God’s instrument” excuses, thus giving them a pass on their poor security practice. It also breeds both apathy and a false sense of security: “nobody can stand up against such resources, and I would not be a target, so why should I bother”.

Whilst Bruce has pointed out that “these particular attacks” are being used by the NSA/GCHQ today and will be somebody’s PhD thesis tomorrow, I don’t agree about the attacks the day after. Personally I think there are potentially far better attacks already around, and they have been for quite some time.

As I frequently point out, as far as we can tell (apart from one specialised case) the TAO are not very advanced technically; everything we have seen about them falls into the “already known publicly for some time” category. That is, you can find sufficiently detailed information on the methods they use on this and other blogs, in academic papers and printed books.

Now does this strike you as odd? Not if you think the NSA/GCHQ etc. are “followers” not “leaders”, doing what most national IC organisations have done for centuries: stealing and reusing ideas from other people.

The thing is, we also know that the NSA/GCHQ et al. are not employing the brightest and best people; they cannot offer the pay and packages that industry, especially start-ups, can. We also know that criminals can offer more convincing terms in quite a large part of the world.

This gives rise to two interesting possibilities…

Firstly, the NSA/GCHQ might be “keeping their powder dry” or playing for “plausible deniability”; that is, they have far better techniques they are deliberately not using, for various reasons of which I can think of a few. One is that they work their way up an attack list, using steadily stronger attacks until they gain access, so the real high-end attacks rarely if ever get used.

Secondly, private industry/criminals likewise have far better techniques that they are not using, for various reasons…

Does the suggestion that the criminal fraternity are all willingly “hanging back” on new, far more powerful attack tools sound likely?

Well, no, it does not, which leaves us with some interesting thoughts.

One of which is that for some reason we are not picking up on these attacks… That is, the AV companies and academic researchers are not spotting these attacks, for a variety of reasons.

Now, both Stuxnet and this current batch of Equation Group attacks really tend to suggest that AV companies and academic researchers are a long way behind the curve…

Thus I suspect that far better tools are not waiting on PhDs etc. but are already in use, being used cautiously enough that their difficult-to-spot signals are not sufficiently visible above the other “nuisance noise”.

Several years ago on this blog I pointed out that you could use what looked like a brain-dead script-kiddie attack to enumerate networks and discover whether you were looking at a network of real independent machines or some facsimile of one, really just virtual machines running on a single machine. That would give you a very good idea as to whether you had found a “honeynet” trap or not. I also went on to say that if I had one or two very good new exploits, I would take quite a bit of care not to waste them by using them in such obvious traps…

Unfortunately, when you look at quite a bit of academic research into what attackers are up to, the instrumentation they use nearly always comes up short in a way that is detectable if you think about it… which suggests there is a probability their experiments get detected, and thus their results will be skewed towards the script-kiddie end of the attack spectrum, which will paint too rosy a picture of what may be going on.

The AV companies have an advantage in that people who think they might have been attacked send in samples of code for the AV techs to look at. However, there are far too few techs, so they use what are in effect statistical methods to find attack-ware, which means they will see the more common lower-level attacks far more frequently than the high-end attacks. Also, it’s highly likely they will only recognise attack code where it uses some kind of familiar exploit code. We have evidence of this in the fact that Equation Group attack code was discovered back in 2006, but people did not act on it.

Thus if a new high-end attack method is found and used judiciously, it will go either undetected or ignored until it rises above the noise. As a sneaky attacker I might use my new attack as an APT backdoor, but cover my activities by putting an already-known attack, weaponised to pass AV detectors, onto the system and using that instead. When the AV techs go through the code they see the known attack and deal with that; I as the attacker see this avenue closed off, so I keep my head down for a bit before using the backdoor to get in again at a later date. I might even make the backdoor a “pull system”, so that it reaches back to me at some point after its timer has not been reset. As I’ve shown in the past, you don’t need a dedicated command server to do this; you can use Google or any number of mainstream services instead.

Thus my viewpoint is that there are more than sufficient grounds to believe much higher-level attack code is already out there, because, importantly, it does not need vast, almost impossible-to-imagine financial resources. A brain, a computer and information will do it; you could find the required information in any number of ways, including your own research using various testing tools on commercial software, which takes nothing more than copies of the software and a little patience. Large financial resources do, of course, also allow you a degree of exclusivity, by buying such information from exploit brokers, giving a time/money trade-off, but this has “trust issues”.

Ern February 18, 2015 7:51 AM

“There’s nothing here that implies the NSA is doing this sort of thing to every computer, router, or hard drive.” If they were, it would probably have been discovered years ago.

Let’s hope anti-virus developers will focus much more strongly on this low-level area of computing.

65535 February 18, 2015 8:04 AM

“[The Equation malware kit/botnet] appears [to] attack by corrupting firmware, hardware and hard drives, [which] is the wave of the future for most users and clearly a major threat for some special targets already. It’s probably time to start turning over vulnerable hardware every 6 months or so.” – Bob S

This firmware malware is extremely troubling. Tossing out hardware [HDD boards and even motherboards] could get expensive.

“We need someone to come up with some simple instructions on how to modify the board on some semi-popular hard drives so the firmware can’t be updated…” –me

I agree. But, I think we are a decade too late. I would guess that the NSA has even more firmware viruses at its beck and call.

“As for “trust” forget it it’s a pipe dream that’s not happened and is not going to happen…” – Clive

I concur. I would not trust anything within the tentacles of the NSA/GCHQ. They will just NSL you and gag you.

“A former NSA employee told Reuters that Kaspersky’s analysis was correct, and that people still in the intelligence agency valued these spying programs as highly as Stuxnet. Another former intelligence operative confirmed that the NSA had developed the prized technique of concealing spyware in hard drives… NSA spokeswoman Vanee Vines declined to comment.” –Reuters

http://www.reuters.com/article/2015/02/16/us-usa-cyberspying-idUSKBN0LK1QV20150216

“Update: Reuters reporter Joseph Menn said the hard-drive firmware capability has been confirmed by two former government employees. He wrote: A former NSA employee told Reuters that Kaspersky’s analysis was correct, and that people still in the intelligence agency valued these spying programs as highly as Stuxnet.” –Arstech

http://arstechnica.com/security/2015/02/how-omnipotent-hackers-tied-to-the-nsa-hid-for-14-years-and-were-found-at-last/4/

“Technically feasible, but highly impractical. Admins and power users can open up servers and desktops to change jumper, dipswitch or pin settings, but that sort of stuff is beyond the average user and a complete nightmare in a corporate environment with hundreds or thousands of machines to service. Removing a hard drive from most laptops is not too difficult either, but what about other infectable components such as NICs, controllers and the like? That essentially boils down to taking the machine apart. Same goes for tablets and smartphones.” – Dirk Praet

Yes. The situation looks grave. A lot of hardware and software will have to be inspected or simply discarded.

Nick P has an opportunity of getting funding for his “secure” equipment project. I just hope it is not from the NSA. We need secure systems.

[Other observations]

“The information stolen from the PC and prepared for transmission to the C&C is stored in encrypted form throughout several fake font files (*.FON) inside the Windows\Fonts folder on the victim’s computer” –Kaspersky
see pdf page 9

https://securelist.com/files/2015/02/Equation_group_questions_and_answers.pdf

“When the computer starts, GrayFish hijacks the OS loading mechanisms by injecting its code into the boot record . This allows it to control the launching of Windows at each stage. In fact, after infection, the computer is not run by itself more: it is GrayFish that runs it step by step, making the necessary changes on the fly…” –Kaspersky, pdf p. 12

GrayFish implements its own encrypted Virtual File System (VFS) inside the Windows registry. An interesting observation: the first-stage GRAYFISH loader computes the SHA-256 hash of the NTFS Object_ID of the system folder (%Windows% or %System%) one thousand times. The result is used as an AES decryption key for the next stage. This is somewhat similar to Gauss, which computed the MD5 hash over the name of its target folder 10,000 times and used the result as the decryption key. -Kaspersky
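
That key derivation is easy to picture; a small Python sketch of what Kaspersky describes (the Object_ID bytes are per-machine, so the value below is just a stand-in):

    import hashlib

    def derive_key(object_id: bytes, rounds: int = 1000) -> bytes:
        digest = object_id
        for _ in range(rounds):
            digest = hashlib.sha256(digest).digest()
        return digest              # 32 bytes, usable directly as an AES-256 key

    # Keying on a per-machine NTFS Object_ID means the payload only ever
    # decrypts on the machine it was built for.
    key = derive_key(b"example-object-id-bytes")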

“At least four of these were used as zero-days by the EQUATION group. In addition to these, we observed the use of unknown exploits, possibly zero-day, against Firefox 17, as used in the TOR Browser.” -Kaspersky

“An interesting case is the use of CVE-2013-3918, which was originally used by the APT group behind the 2009 Aurora attack. The EQUATION group captured their exploit and repurposed it to target government users in Afghanistan.” -Kaspersky

“How do victims get infected by EQUATION group malware?”

“The Equation group relies on multiple techniques to infect their victims. These include:
“• Self-replicating (worm) code – Fanny
“• Physical media, CD-ROMs
“• USB sticks + exploits
“• Web-based exploits”

“The attacks that use physical media (CD-ROMs) are particularly interesting because they indicate the use of a technique known as “interdiction”, where the attackers intercept shipped goods and replace them with Trojanized versions…
“One such incident involved targeting participants at a scientific conference in Houston. Upon returning home, some of the participants received by mail a copy of the conference proceedings, together with a slideshow including various conference materials. The [compromised?] CD-ROM used “autorun.inf” to execute an installer that began by attempting to escalate privileges using two known EQUATION group exploits. Next, it attempted to run the group’s DOUBLEFANTASY implant and install it onto the victim’s machine.” –Kaspersky

[the USPS probably did the interdiction and spread spyware to a group of innocent scientists – along with photographing every individual’s piece of mail – thanks a lot.]

HDD firmware implant details: see page 18 of the PDF for LZMA HDD firmware implant details [neat info on the compressed implant for the HDD board]

“All C&C domains appear to have been registered through the same two major registrars, using “Domains By Proxy” to mask the registrant’s information…”–Kaspersky

[Domains By Proxy seems to be owned by the infamous “G0D@dd@y” owners – who just happen to own their own CA, handle certificate signing requests (CSRs), and issue SSL/TLS certificates to their captive customers.]

https://en.wikipedia.org/wiki/Domains_by_Proxy

[RC6 cipher exploit]

“In most publicly available RC5/6 code, this constant is usually stored as 0x9E3779B9 , which is basically – 0x61C88647…Since an addition is faster on certain hardware than a subtraction, it makes sense to store the constant in its negative form and adding it instead of subtracting” -Kaspersky pdf p. 23
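
The two constants really are the same value in 32-bit arithmetic, which is what makes this such a usable fingerprint; a quick Python check:

    # 0x9E3779B9 (the usual RC5/RC6 constant, derived from the golden
    # ratio) and -0x61C88647 coincide modulo 2**32:
    assert (-0x61C88647) & 0xFFFFFFFF == 0x9E3779B9
    assert (0x9E3779B9 + 0x61C88647) & 0xFFFFFFFF == 0
    print("add-the-negative and subtract are interchangeable mod 2**32")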

[Please excuse the grammar and other errors. I am pinched for time]

E2 busywork February 18, 2015 8:12 AM

Say hello to our special guest, NSA persona DA! He’s here with the official propaganda line straight from Big Brother. He’ll show us how it’s done.

First, acknowledge that your position has no credibility – clumsily, using the telltale bureaucratic jargon ‘pushback.’ Then, caricature the law and ethics of government privacy interference into two camps at opposite extremes. Extra points for implicitly dumping Bill Binney into the bin with the cartoon antis by appropriating ‘turnkey totalitarian state.’

Next split the difference in the crudest way, while acknowledging that that’s stupid. Present your unsubstantiated opinion as ‘a true fact,’ namely, dragnet surveillance is necessary. Absolutely necessary, that clinches it. Assert absolute necessity in general, and not in each specific case, as required by the supreme law of the land. But sneak in the discredited idea of balance – don’t come out and say it, or everyone will know you’re an enlisted puke following orders. Then close with some heartwarming buzzwords: challenges, responsible citizens, that’ll do it. Now anyone in your audience with a sub-95 IQ is putty in your hands. They’ll be mouthing your bumper-sticker slogans like a pop song.

That’s the theory, put out pre-chewed opinions for the dopes. Wait, hold on there, Gomer Pyle. This is not like your unit, there are smart people here.

Stick your rehashed balance up your ass. Each individual interference with our privacy must meet specific tests. You don’t balance anything, you meet the tests. Or else we’ll storm your fucking coward’s fort like the Stasi it is and shut it down.

Solomon February 18, 2015 8:44 AM

I don’t believe Kaspersky just found all this. Somebody told them first, then they dug into it.

34uifi3ufh3i February 18, 2015 11:11 AM

I’ve always believed the reason there isn’t a lot of BIOS firmware malware isn’t complexity, but the expense of development. You first have to RCE a lot of ROMs, then build a few generic infectors.

There are only a few BIOS providers, and often the only differences within each vendor’s solutions are bit-fields and signing.

I also see zero-day payload techniques, infrastructure management and obfuscation as the focus of the real advancements.

This group only gets away with HDD firmware infection because there are no signatures for it, and nobody is looking at dumps in IDA.

Grauhut February 18, 2015 12:21 PM

@65535 “Nick P has an opportunity of getting funding for his “secure” equipment project.”

If Nick could offer a simple SATAwall he would get rich quick! 🙂

A transparent SATA bidirectional I/O filter, USB powered, passing only standard commands and answers over the SATA bus, dropping all others and sending warnings via USB about what was dropped and why. It could be accompanied by a software SATA driver sniffer logging which thread sent unusual commands.

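Something like this, maybe — a minimal C sketch of the filtering logic only; the whitelist below holds a few common ATA opcodes, and a real device would need the full command set it intends to support:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* A few well-known ATA command opcodes to let through. */
    static const uint8_t allowed[] = {
        0x20, /* READ SECTORS    */
        0x25, /* READ DMA EXT    */
        0x30, /* WRITE SECTORS   */
        0x35, /* WRITE DMA EXT   */
        0xE7, /* FLUSH CACHE     */
        0xEC, /* IDENTIFY DEVICE */
    };

    static bool opcode_allowed(uint8_t op) {
        for (size_t i = 0; i < sizeof allowed; i++)
            if (allowed[i] == op) return true;
        return false;
    }

    /* Inspect a host-to-device Register FIS; pass it through, or drop it
       and report the opcode over the USB side channel. */
    bool filter_fis(const uint8_t *fis, void (*warn)(uint8_t op)) {
        uint8_t op = fis[2];     /* command field of a Register H2D FIS */
        if (opcode_allowed(op))
            return true;         /* forward to the drive */
        warn(op);                /* vendor-specific flash commands land here */
        return false;            /* drop it */
    }

The point being that the vendor-specific commands a firmware flasher needs simply never reach the drive.
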
Bob S. February 18, 2015 12:48 PM

The Kaspersky report on Equation Group is an absolute MUST read.

Search for: “Equation group: questions and answers” PDF
(available from various sources, pick one you trust, save a copy, some are already disappearing.)

I think Kaspersky deserves vast credit for their research and reporting on this situation. And we should buy their product to show it. I did.

937da4fc4d February 18, 2015 1:13 PM

@steven:

Thanks a lot for this information. It makes a lot of sense.

Agreed, we should worry (at least slightly) about optical drive firmware. There are too many Live CDs and operating systems — I am not using Windows, OS X or Linux, so, indeed, there are plenty of choices. However, there are not so many computer architectures right now, and there is a chance this implant overwrites the boot record to hide its presence and only shows itself in response to an undocumented low-level command. I don’t know a lot about how firmware works, but it seems plausible that it allows arbitrary code execution if written in ARM9-like assembly targeting the HDD/SSD processor.

Kaspersky should release any information they have about the chances of detecting a firmware implant (at least for the ones they have analyzed) and, of course, about the way the firmware implant destroys itself… perhaps there is a way to elicit implant self-destruction.

HDD manufacturers should release firmware for reflashing their HDDs too (even if only a single firmware release has been developed for a given model), together with a flashing tool that erases the remaining space in the flash area. Just a dream — I know they will not do it.

Since I am not using any of the mainstream operating systems (Windows, OS X, Linux, …), I learned years ago to flash firmware from bootable CD-ROM media or, if a Windows executable is required, from a bootable WinPE 3.0 CD-ROM. Never on a networked system (one of the few advantages of not managing a large set of computers!).

Lastly, I keep an off-line collection of firmware releases. I may try reflashing everything I can, from the BIOS up to HDD firmware, not forgetting optical drives, external drives, network/video adapters or even my small eight-port KVM server.

steven February 18, 2015 2:43 PM

@937da4fc4d: if some HDD/SSD vendor takes this as a PR opportunity and releases open-source firmware, a flashing tool and hardware documentation, that would be a wonderful outcome of all this. (Even if it is not freely licensed or cross-platform, someone could at least reverse-engineer it to produce something that is.)

JustMe February 18, 2015 5:03 PM

Hi, I have been here many times but never thought I could say anything you don’t already know, so I’ll keep it short, as reminders. What is important in security is to know your enemy; after my investigations, here are some mitigations for these threats. They lean more towards non-state threats, but apply in general.
– Use an operating system that does not involve Windows or Android; today’s threat model also means no iPhones or any Apple products. To put it short: make a fingerprint that is not standard, and do that at the OS level in whichever way is convenient for your task.
– Now, this is perhaps hardcore, but I have never seen it mentioned: ditch DNS and use hosts files only, as whitelists to where you want to communicate (this seems to mitigate all of these threats even if you are using Windows). A hypothetical sketch follows after this list.
– Disable Flash, PDF, UTF-8, Java, JavaScript and any other scripting; be careful with fonts in general.
– Disable exec rights in any temp directories.
– OK, that environment might sound like it sucks, but it doesn’t; it can actually be done and stays manageable. The hard part is the hosts file that needs to be maintained, but with virtualisation it’s easy: you can have a BANKING VM that has no DNS, only the ten hosts entries needed to do the job. Again, it depends on who you want to protect yourself from.

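For illustration, a hosts-only whitelist might look like this (all hostnames and addresses here are hypothetical; 203.0.113.x is a documentation range):

    # /etc/hosts as a whitelist for a banking VM
    127.0.0.1     localhost
    203.0.113.10  online.example-bank.com
    203.0.113.11  auth.example-bank.com
    # With DNS disabled, any name not listed here simply fails to resolve.
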
Hmm, that gives me some other thoughts, because all the fuss is about three-letter organisations. Let’s put it this way: even if I were a Mustafa al Bubu, if I don’t trigger any known subsets I will stay under the radar, so Mustafa mostly needs to protect himself from the local ISP.

So, local ISP protection. Number one:
– Block at least DNS requests (use Tor for that).
– Block port 53 on the firewall for outgoing traffic to make sure no leaks can happen (example below).
– Then, depending on who you are and what resources you have, make sure that all the traffic takes as many different paths as possible. If you can only use Tor and don’t have access to IPsec or other VPN tunnels, you can use Squid proxies to make sure that traffic for one destination (for example the USA) goes through one Tor proxy chain, while traffic for another country goes through another chain. Because of legislation rules it is preferable to make each proxy chain end in a Tor exit node in that particular country (read: Five Eyes regulation). Arab, Israeli, Russian and other countries don’t care, and maybe the Five Eyes don’t care either, but from a legislative point of view it makes sense.

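The port 53 block mentioned above could be done on Linux with iptables, for example:

    # drop all outgoing DNS so name lookups can only go through Tor
    iptables -A OUTPUT -p udp --dport 53 -j DROP
    iptables -A OUTPUT -p tcp --dport 53 -j DROP
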
Fingerprinting: OK, there are many ways to do that, more are covered every day and more will follow, trust me, but just some things:
– Check your browser’s fingerprint at http://ip-check.info; timezones must be set to UTC.
– My personal opinion about Firefox is: BAD! It’s the same as Google. Mozilla is, in my opinion, in bed with the NSA, so be very careful, especially after upgrades.
– If you take the hosts-file-only approach you will already be protected against the unknowns, but if you insist on using DNS then pay attention to some common tracking traps such as these:
* all sorts of updates
* all sorts of telemetry
* OCSP
* “safe” this and “safe” that (many brands have many names; it’s useless and a sort of tracking)
* authentication headers (some in Firefox can’t be disabled — why??)
* SSL-to-HTTP fallback
* canvas
* URL headers in general, which are not even implemented the same way between the most important competitors — why the fuck not?
Now we have only touched on tracking mechanisms; then there are the actual vulnerabilities in operating systems such as Android, iPhone, Windows etc. — and yes, *nix too.

I am kind of pissed off right now at the fact that I actually told my customers to use Firefox with some plugins, but the more I see of what that company is doing, the more it feels like they are in bed with the enemy, making it only just barely possible to “opt out”. For every damned upgrade I have to go through all the configs to see what has changed, and most of the new features, as far as I am concerned, are not privacy-oriented at all — quite the opposite. So: a RED flag for Mozilla.

BTW, most of my machines use DNS, but I also use hosts files and dnsmasq; my hosts files are around 40k in size and consist of whitelists, redirection lists and blocklists.
But for the banking stuff I use a virtual machine with the hosts-only approach — and even that feels scary!

OK, I actually have more points to cover, but I am tired and I don’t want to bore people rather than have them actually read.
One final note I do have is encryption: I use only two encryption tools.
EncFS for cloud networks (whatever those might be), and for my containers I use a Swedish zip thing called AxCrypt. I was using bcrypt before and then “upgraded” to TrueCrypt, but that only makes sense for USB drives; all my data is in my own cloud.
For the cloud, right now I use BTSync version 1.3.0.9. You might argue against it, but I prefer it to any other cloud thing out there that I have tried, and if you put EncFS on top you can access it from Windows and *nix, no problem, so it’s doable — and I can use my Android camera to back up my photos very nicely…

Last but not least, given the fuss around the Equation Group, check out LOKI and THOR. I don’t have a license for THOR so I can’t say how good it is, but I do know that LOKI works.
Then a disturbing reminder about USB stick insecurity: we all know about autorun.inf and Stuxnet, but for whatever reason I have not seen much talk about U3 sticks.

I still have an image that I can copy to any USB stick to turn it into a U3 stick, with payloads on it that autorun no matter what you do — and it still works, very well! This was perhaps forgotten, but I have used it loads of times and it still works today. Some antiviruses find it, but very, very few. Go figure.

Hasta la vista, babes

Sancho_P February 18, 2015 5:26 PM

@ Grauhut

Re: Switch
Only we’d have to find a manufacturer that wasn’t blessed by the good guys.
Cisco?
Do they offer both, remote access and auto update?
Would they lend me their golden key in case I lost my pwd?

Regarding the “DoubleFantasy”: I agree, the AV software would be the only one not being detected by the AV software when searching for specific data on the HD.
Also, to infect the HD one has to have admin access to the machine first; the drive can’t change its own firmware just because it somehow learns it is in a targeted computer.
And again, the malware, not the HDD, has to send some data to someone.

Conclusion: the victims either didn’t use AV, or AV software is snake oil.

But the first question remains:
Would that suggested “safety box” (similar to an external firewall) help?
Is it technically feasible to delay / scan the uploaded stream with a switch and a small machine at 100 Mbps?
Or to run the software that inspects uploads within the switch itself?
(Same problem — a trusted vendor; probably we’d have to buy from Russia 🙂
– Oh, that brings me back to Kaspersky 🙁

Could we forget about spyware on our machine because the external box takes care of leaks we are unaware of?
The box could also warn the user if data that is supposed to be encrypted (sent to specific targets, as TFC would use) does not look like random data.

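As a sketch of how such a box might test whether an outgoing stream “looks like random data” — my own illustration; the 8-bits-per-byte ceiling is standard, but any alarm threshold would be an assumption — one could estimate the Shannon entropy of the byte stream:

    #include <math.h>    /* link with -lm */
    #include <stddef.h>

    /* Shannon entropy of a buffer's byte histogram, in bits per byte.
       Well-encrypted or compressed data measures close to 8.0;
       most plaintext and structured data falls well below. */
    double byte_entropy(const unsigned char *buf, size_t len) {
        size_t hist[256] = {0};
        for (size_t i = 0; i < len; i++)
            hist[buf[i]]++;

        double h = 0.0;
        for (int b = 0; b < 256; b++) {
            if (hist[b] == 0) continue;
            double p = (double)hist[b] / (double)len;
            h -= p * log2(p);
        }
        return h; /* 0.0 .. 8.0 */
    }
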
@ Clive Robinson

Yep, a single brain may be sufficient (to infect the hard drive) and what is generally attributed to TLAs could be in part the work of ordinary criminals (industrial / financial espionage).
Only the mentioned targets …

Interestingly, we learn each year about “the most modern and sophisticated malware implant” and the “highly sophisticated threat actor” — which still, to some extent, uses vulnerabilities closed by Microbrain years ago. The rest sounds a bit like a PR campaign for AV software.

However, to be useful to the attacker, particular data has to be found on, and uploaded from, the victim’s computer. That’s common to all spyware.
But why does no one seem really interested in the recipients of the outgoing data?
A registrant’s personal information will be released in case of a legal subpoena.

Setting up honeypots (victims) should be relatively easy for someone who really wants to catch blowflies, regardless of how sophisticated the most recent attack software is.

Let’s not only search inside victims after the attack has already gone cold (@ Kaspersky).

Justme February 18, 2015 5:31 PM

I forgot to mention a product, and I don’t really like products — I am a Linux open-source person.
Also, sorry for my bad Engrish.

Heh… well, the product is Sandboxie. I personally only use the latest version made before the company was bought by the American defence industry, but that is your own choice.

This is for Winblows machines, especially W2K and XP — and yes, there are still W2Ks out there, and in my opinion it is a very good operating system compared to all the other crap Microsoft has made 🙂

Sorry, good night

albert February 18, 2015 5:33 PM

@Buck
“…Congressmen Seek To Lift Propaganda Ban…”
.
Hilarious! Belongs in The Onion. Can someone point out anything from the gummit/MSM that isn’t propaganda?
.
@anyone
1. I guess I missed those Microsoft TV ads with Orson Welles proclaiming, “We will release no software before its time.”
2. We need rewriteable ROM memory. Done!
3. We’ll redefine ‘beta testing’ as ‘first product release’.
4. Our EULAs will prevent any accountability for anything.
.
It is simple to use ‘fusible link’ firmware memory, which can be made read-only after the unit is tested. But wait! The firmware needs to be bug-free; this is a problem. Of course a state actor could tamper with the firmware before it’s loaded.

@Grauhut
I like your idea; call it SATA-Sentry(tm) 🙂
I suspect that a ‘standard commands’ filter might not be enough, though…

Regarding encrypted storage: why not go back to the old battery-backed SRAM? We had memory modules with BB-SRAM (and the battery was backed up with a 1-farad capacitor). You’d still have to trigger it manually, but that’s always a tradeoff. The memory is gone once power is removed, so it’s not an ‘active’ system.

Loki February 18, 2015 5:55 PM

Hmm, just realized after googling that neither LOKI nor THOR is easy to find.
LOKI is a really good tool for finding all sorts of stuff, including writing your own YARA rule sets:

https://www.bsk-consulting.de/loki-free-ioc-scanner/

Included IOCs

Loki currently includes the following IOCs:

Equation Group Malware (Hashes, Yara Rules by Kaspersky and 10 custom rules generated by us)
Carbanak APT - Kaspersky Report (Hashes, Filename IOCs - no service detection and Yara rules)
Arid Viper APT - Trendmicro (Hashes)
Anthem APT Deep Panda Signatures (not officially confirmed) (krebsonsecurity.com - see Blog Post)
Regin Malware (GCHQ / NSA / FiveEyes) (incl. Legspin and Hopscotch)
Five Eyes QWERTY Malware (Regin Keylogger Module - see: Kaspersky Report)
Skeleton Key Malware (other state-sponsored Malware) - Source: Dell SecureWorks Counter Threat Unit(TM)
OpCleaver (Iranian APT campaign) - Source: Cylance
More than 180 hack tool Yara rules - Source: APT Scanner THOR
More than 600 web shell Yara rules - Source: APT Scanner THOR
Numerous suspicious file name regex signatures - Source: APT Scanner THOR

Nick P February 18, 2015 6:37 PM

@ Grauhut

It’s pretty straightforward if you use a custom driver that simply won’t pass risky commands to the HD: a good interface and/or an inline reference monitor. That’s on top of an IOMMU for the DMA portion. The best route, though, is a general-purpose I/O processor with interfaces to a number of devices and interrupt handling. That gives you all the benefits of Channel I/O (see the Wikipedia article) while letting you add silicon for security purposes.

A side benefit can be had: I/O protection on systems without IOMMUs. If interfaced right, the device can emulate the Linux driver model for hardware support while using its own driver for security and portability. So you could provide secure, modern device support for older or novel systems without device manufacturer support. With support, it’s even easier.

Many possibilities. I’d start with an FPGA-based product focused on the PCI protocol. If it worked and sold plenty, it could be converted to S-ASIC or ASIC for lower unit cost.

Buck February 18, 2015 10:56 PM

@Sancho_P

We’ll just have to agree to disagree…

Regarding the “DoubleFantasy”: I agree, the AV software would be the only one not detected by the AV software when searching for specific data on the HD.

Well, that’s not quite right… Generally, the data that somebody would be looking for is loaded into working memory from time to time (so as to actually make some use of one’s ‘secret’ data). Since the exfiltrated bits and malicious code are encrypted and polymorphic, AV signatures are sure to be useless!
While I think that you may have been hinting at this later in your comment, I still feel like it should be stated here explicitly – We would all be incredibly foolish to focus our collective efforts on 15+ year-old spook frameworks… Doing so will mean missing out on the present.

Grauhut February 19, 2015 1:44 AM

@Nick: “I’d start with an FPGA-based product”

There is open-hardware SATA controller IP for FPGAs, but an FPGA could be overkill for this. 🙂

A programmable port multiplier chip like the JMB572, for instance, could be enough.

And there is also the “BadUSB” stuff, which could be reworked… 😉

Tony February 19, 2015 10:18 AM

Hi,

I think a bigger problem here is that anything that has firmware can become tainted. For a government, spying might not be the only goal; code that renders a device broken on command would be a tool worth planting.

If a government/hacker could cause 200 million devices to go offline, that would be something I would worry about. With the amount of connectivity in a multitude of firmware-based control systems, appliances and computers — not to mention the so-called Internet of Things — we should all be concerned. Anything that allows firmware to be written to is subject to remote infection.

vdsn892 February 19, 2015 11:03 AM

@Tony “Anything that allows firmware to be written to is subject to remote infection.”

The problem isn’t quite that extensive.

Many embedded devices that are firmware-programmable require a special programming adapter to do the job. Sometimes a specific write voltage needs to be applied to certain pins. Typically they are programmed at the factory using custom-built equipment, to save even the cost of putting a programming socket on the board.

Due to the extra expense of supporting field upgrades to firmware, many products do not have this capability.

937da4fc4d February 19, 2015 2:13 PM

@steven.

Agreed, the open source model is the only way to go. Not only for firmware but also for anything that is serious about security.

In my humble opinion open-source drivers work better than their closed-source counterparts, so why not firmware? We have been fighting for years to work around bugs in the firmware of wireless network adapters. Open-source firmware will allow not only auditing these devices but also fixing problems where they lie (in the firmware) instead of writing workarounds at the wrong level (in the drivers).

On the other hand, the availability of open-source firmware will let hackers improve firmware quality (and they are very good at it). Currently we depend on manufacturers for firmware updates: some provide decent support (ranging from four up to eight years for high-end gear) but others hardly release a single update (if any!).

Off-topic… what will happen now that current systems have UEFI BIOSes? Will manufacturers fix security weaknesses that turn up years after computers are EOL’d? In the past a buggy BIOS meant that the computer could not boot from an external device, or that it would hang when a PCI card was attached. Sad, but we could live with it. Now a buggy BIOS means remote code execution before the operating system takes control of the hardware (see the latest UEFI BIOS vulnerabilities). I feel lucky, as I have a ThinkPad T430s that is (still!) supported. Lenovo provides ~four years of firmware updates. We will see what happens if a serious 0-day exploit against UEFI BIOSes is discovered next year — something that will quite possibly happen, as UEFI source is full of integer overflows.

In short, I agree with you. We need open-source firmware and (for people and businesses serious about security) open-source operating systems and tools. We do not need POSIX certification; we need a computing base that can be trusted. And trust can only be achieved by public review.

sidd February 19, 2015 6:02 PM

I like the bit about suppressing billing. They used to screw this one up in the ’90s.

Anyone still believe the certificate authorities are not penetrated? Might as well switch to self-signed certs now…

sidd

Sancho_P February 19, 2015 6:04 PM

@ Buck

First I will fully agree that finding virus signatures in encrypted data / program structures will not work 🙂
But hopefully the ancient — and from the beginning insane — concept of an endlessly growing virus signature database is nowadays not the only tool of AV software. How do they fill their database? Yes, after 2000+ infected machines and someone investigating, they may find a “fingerprint” to add. A small virus modification … and we’re back to the start.
Checking dead executables on disk is a nice attempt; however, dead (encrypted) “data” on the HD can’t run as malware — the “data” has to be in RAM to be executed.

Running malware detection software on an infected machine is generally a problem: the (sophisticated) malware will answer all questions to perfection — or may kill the (known) AV program entirely.

E.g. “Little Snitch” on my Mac has two problems:
– I don’t know about connections it doesn’t report (the malware),
– I may accept “xyz wants to connect to port 80 of wu.apple.com”, but I would not allow (and wouldn’t know about) an upload of more than a few bytes.
The same goes for NTP (UDP on port 123), allowed to any server — I love it, but I would like to know about “unusual” use of it.
It’s not the connection that’s dangerous in the first place, it’s the upload.

Only an external, clean and trusted observer is able to identify irregularities in destination, amount or content.
The network connection is the ideal (and likely the only) point to observe and prevent data theft.

Buck February 19, 2015 6:23 PM

@Sancho_P

Whilst I think I agree with much of what you’re saying, I can’t help but feel that something has been lost in translation… Sorry, I’ll try my best to clarify soon. Thanks for your input!

Wael February 20, 2015 12:16 AM

@Dirk Praet,

unless for some reason it would be technically impossible

Few things are technically impossible. It would be rather expensive to “fingerprint” an extension card which contains firmware running on an internal bus not mapped to the host memory address space.

Clive Robinson February 20, 2015 1:25 AM

@ Dirk Praet, Wael,

Whilst few things are technically impossible, some are so difficult to do that they might as well be…

For instance, you have an IO device with its own CPU: how do you ensure that the running CPU’s ROM/RAM does not contain modified code?

The obvious answer is to halt the CPU and connect up to its internal buses… But what if you need the IO CPU running at the time for some reason, such as it being the display or disc controller?

Whilst there are technical solutions, they are generally neither easy nor cheap, and in a cost-sensitive market short cuts will be made…

Buck February 20, 2015 9:50 PM

@Sancho_P

I think one of @Clive Robinson‘s recent comments summarizes the issue quite succinctly:

Thus when the AV techs go through the code they see the known attack and deal with that; I as the attacker will see this avenue closed off, so will keep my head down for a bit before using the back door to get in again at a later date. I might even make the backdoor a “pull system” so that it reaches back to me at some point after it has not had its timer reset. As I’ve shown in the past you don’t need a dedicated command server to do this, you can use Google or any number of main stream services instead.

(I can personally attest that Clive has indeed gotten to the real crux of the problem in the past, but I can’t seem to find any of those references off the top of my head)
@Wael also hit at it recently too:

@Leia Organa,

Wireshark should, but probably won’t, add an audio recording feature…

That may be a good start. What if the audio is encrypted or encoded in a proprietary format? All you can tell is the existence of some audio communication channel.

Basically, connection monitoring is doomed to suffer the same fate as AV signatures. Novel exfiltration sequences will also be cryptographically encoded, so in effect we will be limited to looking only for known classes of previously discovered data smuggling routines. Whitelisting hosts won’t help here either, because whitelisted hosts will just be abused to attack other whitelisted hosts until the target has been breached.

Unless we terminate operation of all information producing or observing systems, the leaks will continue to drip. Thus, the real trick is to not focus so much on prevention of attacks themselves, but on mitigation of any potential fallout from the theft of that data to begin with.

Buck February 21, 2015 4:52 PM

Thanks guys! 😉

I do remember that one Clive, but I know that I’ve definitely seen the same theme come up in a few more comments before… No need to dig them all up though. 😛

Wael February 21, 2015 5:08 PM

@Buck,

Thus, the real trick is to not focus so much on prevention of attacks themselves, but on mitigation of any potential fallout from the theft of that data to begin with.

Perhaps you’ll have better chances convincing @Nick P that his Castle is dripping? What you described is a Prison type mechanism! Tell him the Castle won’t be bulletproof unless we have access to HW and SW, including all Firmware and protocols, communication channels, transports, etc… High assurance, my ankle (or a little higher) 😉

Nick P February 21, 2015 5:21 PM

@ Wael

” What you described is a Prison type mechanism! ”

He actually described the monitoring and recovery aspects of overall security. Strong prevention is another aspect. My schemes all call for having both.

“Castle won’t be bulletproof unless we have access to HW and SW, including all Firmware and protocols, communication channels, transports, etc… ”

The [in]security assumptions behind your critique also apply to prison architectures given the tendency of almost everything in hardware design to result in a monoculture or oligopoly for each market segment. At worst, the strong attackers fund a few more projects to cover the differences. The only counter to that kind of threat is distributed, transactional, diverse, obfuscated, verifying, P2P computation of each work unit with the voter itself distributed. The interface-level security might need to be stronger in this model. In this model, you’re trusting high level design correctness, interface, and probability that small percentage gets compromised.

A start on these are the recovery-oriented computing and software diversity papers I posted.

Sancho_P February 21, 2015 5:46 PM

@ Buck

Thanks for coming back to that point.

Probably we were (are) a bit asynchronous, so I’ll try to go through your comment in detail (my jumping mind, poor English and “short” postings often contribute to confusion, sorry):

Where you cited @Clive, he was talking about the fire brigade only coming when there is already a severe fire: you need 2000+ infections (= ”above the noise”) to trigger the AV boys’ investigation. One (or 500) dedicated, targeted breach(es) will never cross their radar except by chance.

-> AV software is not made to protect data of individuals. It’s made to prevent a pandemic.

In your quote @Clive wrote about offering a scapegoat: if it’s taken, hide in cover and wait. A nice idea, but it won’t work against AV software, because the AV program won’t stop at the first hit (as a human probably would). So the malware has to have an unknown “signature” when trying to hide.
And “at a later date” the AV may still be watching …
(To make use of encrypted malware there has to be something unencrypted “hidden” in RAM, before (to trigger) and while decrypting / manipulating files. AV should be able to detect that, but we are back at the 2000+ signature issue — plus, the rootkit malware “… may kill the (known) AV program entirely” whenever it gets down to business.)

Then @Clive hinted at avoiding dedicated C&C servers, something our friends from the Equation Group were not up to at that time (see page 34ff. of the PDF report).

Clive’s linked post pointed at strange postings, which can indeed be seen here in this forum: pretty useless text (“thank you so much …”), probably with a spam (ad) link in it so the Mod would hopefully find and delete it soon (you could call it an “ephemeral posting”), posted at a certain time; the “recipient” knows, checks the last 100 postings some seconds later, finds the posting and grabs e.g. the user name to append (like a salt) to the already-known key phrase for today’s message (on a different channel).
As Clive’s other hint (no crime in spy circles) is also true, I will only reveal that there is another salt in the message, and that it also matters how you use the salt shaker.
Disclaimer: I (we) do not abuse Bruce’s forum for such bad bad bad things, trust me!!! 😉

Re: Connection monitoring is doomed (the second half for @ Nick P).

I dunno what @Leia Organa meant — it sounds insane to me at first (many clever things do, no sweat) — but probably @Wael’s comment took you far away from my proposal and led you to the hilarious idea of fingerprinting communication, similar to AV databases 🙂

Connection monitoring isn’t about content, because no machine can evaluate communication content as good, bad, correct or malicious.
Even a human observer could not, as no one can know whether a conversation is “encrypted” or not. I may write “Yes, it’s going to happen tonight” and the observer wouldn’t know ‘what’ was going to happen; and there is no way they could know that “tonight” always means “this week on Friday”, because we (the recipient and I) only ever meet on Fridays at the pub.
We need a lot of surrounding information to understand human communication, and we often fail (see our comments).

The idea of banning encryption is a long (political) shot to get other measures through —
mind you, they are not stupid.

My proposal concerned outgoing data only; the observer must be external (not inside the rootkit-controlled universal Win or whatever PC).
The data stream may be encrypted / compressed, and the observer may inform us about the most common types of content, which may be interesting but wrong / misleading anyway (e.g. the port used doesn’t mean the data is what we’d expect).
Digging deeper (e.g. frequency analysis) would involve too much time / delay for all connections, but may be feasible for certain destinations, to verify that a message is indeed encrypted.

Anyway, content wouldn’t be the first question to ask an automated observer. The first questions would be “how much” goes to “which destination + port”, and “is this a known + confirmed upload”.
The latter is tricky and involves some learning (“please confirm upload of 2.5 kB to https://www.schneier.com at 204.11.247.93”) to accept it as a known size and destination; but we do not upload that much stuff — most traffic is just sending requests — and that could be basic knowledge for the observer.

You wrote ”… because whitelisted hosts will just be abused to attack other whitelisted hosts until the target has been breached”, but sorry, I did not understand that: if you mean MITM attacks (the CA model), or a destination already owned by the adversary, those are different issues — the observer can’t help in either case.

However, there is another point I’d like to make for the external observer:
I’d add a simple timer to the “confirmation” button, so that the PC can only upload for a couple of minutes after the button is pressed. If there is no outgoing (HTTP or whatever) request from the PC, the timer runs down and prevents any upload until the user is present again. A sketch follows below.
Of course, an LED would indicate any upload above a couple of bytes to the user (chilling — reminds me of the sound of modems).

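The confirm-button logic could be as simple as this (a sketch only; the five-minute window is an assumption, and a real box would also check size and destination):

    #include <stdbool.h>
    #include <time.h>

    #define UPLOAD_WINDOW_SECS (5 * 60)

    static time_t last_confirm; /* 0 until the button is first pressed */

    void on_confirm_button(void) { last_confirm = time(NULL); }

    /* The box forwards outgoing data only while the window is open;
       once the timer runs down, uploads are blocked until the user
       presses the button again. */
    bool upload_permitted(void) {
        if (last_confirm == 0) return false;
        return difftime(time(NULL), last_confirm) < UPLOAD_WINDOW_SECS;
    }
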
“Unless we terminate operation of all information producing or observing systems, the leaks will continue to drip. Thus, the real trick is to not focus so much on prevention of attacks themselves, but on mitigation of any potential fallout from the theft of that data to begin with.”
(Buck)

I will slightly agree if you are thinking about avoiding nude selfies on your PC, but I cannot accept the basic idea that my machine is open to any investigation without my consent.

Long posting, thanks for reading!

Wael February 21, 2015 10:10 PM

@Buck, @Nick P

The rain will stop when there are no clouds or when there is no one left to witness the “rain”!

Buck February 21, 2015 10:35 PM

@Wael

Ahhh… So when that tree falls in the woods, it won’t make a sound..?

Seems to me as though you two generally agree – at least when you’re not pouring gasoline on each other! 😉

I think your C v P argument is deeply entrenched with now outdated assumptions and divisive generalities based on analogies/metaphors/trivialities… But hey, who am I to speak on such matters!? 😛

Wael February 21, 2015 11:18 PM

@Buck,

Ahhh… So when that tree falls in the woods, it won’t make a sound..?

Funny! I almost referenced this unperceived existence link when I posted, but was too lazy. Now you bring it up; it was meant to be brought to attention, I guess.

Seems to me as though you two generally agree – at least when you’re not pouring gasoline on each other! 😉

Yes we do! The gasoline is just us fooling about — don’t take it seriously 😉 I like @Nick P, even @Figureitout (masquerading as me) can attest to it… lol

I think your C v P argument is deeply entrenched with now outdated assumptions and divisive generalities based on analogies/metaphors/trivialities.

I’m coming to realize that @Nick P’s design isn’t a pure “Castle” it does contain elements of a “Prison”. I was simply trying to breakdown and model the elements of security and map them to the definition then go from there before we talk about “Firewalls”, “Firmware”, “subversion” etc… Or at least to understand at the high level what our best weapons are!

But hey, who am I to speak on such matters!? 😛

Didn’t stop @Nick P from speaking about them 🙂 didn’t stop me either. It’s all good.

Nick P February 21, 2015 11:22 PM

@ Clive

“@Nick P is the one with the “link farm” ;-)”

I’d prefer to think of it as a prestigious, well-curated museum of IT and INFOSEC history. Then, I remember the barely organized mess it is and… I’ll clean it up eventually. I got the search feature meanwhile. Haha.

@ Sancho_P

I don’t see myself posting anything to you in this thread. What specifically are you responding to and which points in your post? (Quite a few names in “the second half.”) Just trying to be clear about what points you’re directing at me and why.

@ Buck

re the video

Nice song. Hadn’t heard it in a while. Regardless, no need to wait for someone to stop the rain: America should just wake up and act like it’s raining.

“I think your C v P argument is deeply entrenched with now outdated assumptions and divisive generalities based on analogies/metaphors/trivialities… ”

That’s what I’ve been telling them. I don’t like the metaphor. The closest buzzphrase to what I attempt is Correct by Construction: using highly assured methods to design the thing right with a proof showing it has claimed properties. Here’s a case study on one of those methodologies. My old strategy focused on isolation kernels with assured pipelines of communication between components. The TCB would be built with methods similar to above case study. Now, I focus on applying such rigor to hardware-software combinations that make enforcing the security properties easier.

I’m sure there’s a good metaphor for that somewhere out there. Meanwhile, I just keep calling it high assurance security engineering: an engineered solution to a security problem whose design and implementation justify high confidence that it works. That simple.

Buck February 21, 2015 11:42 PM

@Nick PMeanwhile, I just keep calling it high assurance security engineering: an engineered solution to a security problem whose design and implementation justify high confidence that it works. That simple.Is it really so simple though? Your “distributed, transactional, diverse, obfuscated, verifying, P2P computation” can actually be derived from our abstract prison’s methods to account for inherent human uncertainties/vulnerabilities! Confidence maybe, but assurance? Certainly not…

Buck February 21, 2015 11:44 PM

Accidentally started with an end tag :-\ this should be a bit more legible:

@Nick P

Meanwhile, I just keep calling it high assurance security engineering: an engineered solution to a security problem whose design and implementation justify high confidence that it works. That simple.

Is it really so simple though? Your “distributed, transactional, diverse, obfuscated, verifying, P2P computation” can actually be derived from our abstract prison’s methods to account for inherent human uncertainties/vulnerabilities! Confidence maybe, but assurance? Certainly not…

Wael February 21, 2015 11:58 PM

@Buck,

Your “distributed, transactional, diverse, obfuscated, verifying, P2P computation”

Ummm, Buck! Don’t forget @Nick P is from the south! You know, the “Orange Book” belt?

Nick P February 21, 2015 11:59 PM

@ Buck

Oh no, that thing is a monstrosity created in response to the assumption that absolutely everything is going to be compromised by various attacks. There are smaller assumptions that allow for much simpler designs. An older example trusts the hardware to do its job while providing strong containment with separation kernels. A simple version of my monstrosity comes from Clive’s and my voting schemes: the same highly assured software running on diverse hardware/software implementations, with a simple voter checking the results. The obfuscation might involve which hardware, firmware, OS, drivers, language, and so on. There are, in existence and in development, various methods to automatically diversify software which can be tied into high assurance development. P2P is just a system of systems, which we have modeling techniques for.

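For concreteness, the voter in that scheme can be tiny — an illustrative C sketch (names and replica count are placeholders), comparing the outputs of diverse replicas and accepting only a bit-for-bit majority:

    #include <stddef.h>
    #include <string.h>

    #define REPLICAS 3

    /* Returns the agreed result, or NULL if no majority exists
       (treated as a fault or compromise: fail safe, don't guess). */
    const void *vote(const void *results[REPLICAS], size_t len) {
        for (int i = 0; i < REPLICAS; i++) {
            int agree = 0;
            for (int j = 0; j < REPLICAS; j++)
                if (memcmp(results[i], results[j], len) == 0)
                    agree++;
            if (2 * agree > REPLICAS)
                return results[i]; /* majority found */
        }
        return NULL;
    }
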
And so on and so forth. The models with simpler assumptions are easier to handle with high confidence. The full monstrosity addition is medium assurance at best. There is potential for high assurance in between the two. I try to select components in such a way as to minimize the number of worries I have and move things toward the simpler problems.

Nick P February 22, 2015 12:11 AM

@ Wael

“I’m coming to realize that @Nick P’s design isn’t a pure “Castle” it does contain elements of a “Prison”. I was simply trying to”

FINALLY! And to think that my starting point (Orange Book) used high assurance methods to implement a security model that was mostly containment of information in various levels and compartments (a prison?). Only took you several years to notice.

“Ummm, Buck! Don’t forget @Nick P is from the south! You know, the “Orange Book” belt? ”

You motherf…

“Yes we do! The gasoline is just us fooling about — don’t take it seriously 😉 I like @Nick P”

Of course! I like you too, buddy. Otherwise, the “…” would’ve been a lot longer and NSFW. 😉

Buck February 22, 2015 12:22 AM

@Nick P

Maybe, but I’m not sure if I can believe you, unless you can clearly define your concepts in terms of human capabilities…

Or as Wael might say:

breakdown and model the elements of security and map them to the definition then go from there before we talk about “Firewalls”, “Firmware”, “subversion” etc…

P.S.: It’s OK! 😉 I’m from the south too…

Nick P February 22, 2015 12:38 AM

@ Buck

It’s already been done in a variety of situations. What we lack is a place that puts all the knowledge and wisdom together in a way that’s easy for the aspiring to make use of. Major government programs are unlikely to fund such a thing (personal experience) unless one can show that it will have buy-in. Private groups might. Who knows. If I ever get a chance, I’ll give you exactly that.

“P.S.: It’s OK! 😉 I’m from the south too…”

I know. 😉

Nick P February 22, 2015 1:12 AM

@ Wael

“Perhaps I inadvertently offended you – my apologies…”

The metaphor was annoyingly distracting as usual. Past that… no offense at all. All in jest. 🙂

Wael February 22, 2015 2:26 AM

@Nick P,

C-v-P: Quit calling it an “analogy” — it’s not! It may have started that way between you and @Clive Robinson, but we decided long ago to drop the analogy and move to a “model”.

FINALLY! And to think that my starting point (Orange Book) used high assurance methods to implement a security model that was mostly *containment* of information in various levels and compartments (a prison?). Only took you several years to notice.

Your “high assurance” methods are sound and all. However, they depend on one false assumption: trust in other entities. Unless you control everything including the “trust assurance certifications” and the “Formal testing and proofs”, your systems will be vulnerable to the attacks we are currently witnessing. You surely felt warm and fuzzy with an EAL 5+ SIM or smart card. Tell me how you feel about your “high assurance” now that you know someone has the private keys? What else don’t you know? In other words, what other assumptions have you made that turned out to be untrue today? Take the hard drive firmware holes we saw recently… Are you yourself going to code review all the firmware? Are you going to pen test it? Or will you just accept a report that gives it a FIPS rating? The best solution you have right now is to complement your high assurance systems with additional OPSEC procedures, which will limit the functionality of the systems.

How about the advice of using “older” hardware? That’s also based on an assumption, although some people give it a probabilistic confidence level higher than newer hardware — without proof, mind you. You know I am still waiting for someone to tell me at what point in time chips were subverted!

Clive Robinson February 22, 2015 4:21 AM

@ Buck,

@Nick P does not like “Castle-v-Prison” — or, as Wael contracted it, “CvP” — as a name, not as a series of ideas (one day I’ll ask him what he thinks of the name “The Cathedral and the Bazaar”, as long as we can both “duck n cover” from the “most outrageously offensive” Fossie brigade 😉

I started thinking about how to deal with subversion in the computer stack a while ago (although it feels like eons, it was only last century). Back then few people thought about what goes on down under the nebulous physical layer, or above the presentation layer, of the ISO OSI “seven layer” stack. At some point somebody defined layer 8 as “wet ware”, and these days you quite often hear people joke about “layer 8” user problems, “layer 9” management problems and “layer 10” political problems, which no doubt will be further expanded with time.

When it comes to “Intrinsically Safe” systems design — which I used to do — you quickly learn that the physical problems start with the energy and mass and work up from there. One lesson you learn is that there is no such thing as “safe energy” or “unsafe energy”, just energy in environments. The underlying principle of IS is that entropy happens to matter, and thus “faults occur and you have to mitigate them safely”; you look at a design from that perspective.

As people are starting to realise, “safety” is a quality issue, as are “reliability” / “availability” and “security”; they are in effect all facets of the same problem, and quality is a mitigation process. Further, quality is a “day -1” aspect of any project: a quality process has to be in place before the project is even thought of.

With that in place you then start thinking about the computing stack, what quality means and how you obtain it. One thing you end up realising is that there are two approaches you can take to a desired outcome: “bottom up” or “top down”. You also realise that there is an implicit assumption in nearly all “security reasoning”: you start from a secure base or point, and reason forward from it.

What is usually not stated is how you get that secure base or point to start with, especially when you remember that neither energy nor matter is “secure” by default — and even if something is currently secure, entropy says its state will change…

Which is a problem, because it tells you that a secure base or point does not exist naturally: you have to create it. Further, even having created a secure base or point, you have to ensure it remains secure from that point on…

Which is probably why @Nick P says,

Oh no, that thing is a monstrosity created in response to the assumptions that absolutely everything is going to be compromised by various attacks.

Yes it’s a monstrosity, but like all real monsters it consists of many interwoven, interworking and interdependent parts and as such there is a certain minimum of parts you need to ensure it works.

Look at it this way: a monster’s heart is not the monster, and to function as the heart it needs the blood, lungs, liver, digestive tract etc. to function; but those other parts need the heart functioning, or all you have is a rotting pile of monster bits…

The real question is thus “what is the minimum?”, to which the answer is we don’t really know.

For instance, you see a lot of talk about “secure software” in design, build and use, but it is only valid if the platform it runs on is secure.
As has just been made publicly clear with the NSA HD firmware attack, you cannot trust the standard hardware, because you cannot establish trust. The COTS motherboard CPU cannot see what is on the HD platters; it has to ask the HD controller. The controller’s functionality comes from the firmware in its flash memory and from a hidden place on the HD platters. The motherboard CPU has no way to reliably check what firmware the controller flash or HD platters hold, because the controller has to be functioning for the motherboard CPU to communicate with it. The reason it’s not reliable is that the controller will tell the motherboard CPU whatever the running controller firmware tells it to… Catch-22.

Likewise the COTS motherboard CPU will only tell you what its software tells it to, and that software came via the HD controller…

There is no real way for you to check that your once-secure software is still secure, because you do not have an independent, transparent and verifiable verification mechanism.

You could add a PC card that uses the COTS motherboard DMA channel to examine the memory — and this is just part of the “Prison” process. You could argue that such a PC card would have its own controller… however, you could design the PC card with its own verification port.

That said, as long as the verifying system is Turing complete, you cannot trust it. Thus you have to design a state-machine verifier where all states are known and cannot be changed.

That can fairly easily be seen to establish an initial chain of trust. However, unless the system checks the chain periodically, the trust cannot be assumed to remain — a major failing of all current COTS systems.

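For illustration only, here is the checking step such a verifier would implement — shown as C, though the whole point above is that it must run as a non-Turing-complete state machine in independent hardware, not as software on the CPU being checked:

    #include <stddef.h>
    #include <stdint.h>

    /* FNV-1a over the measured region (any fixed hash would do). */
    static uint32_t fnv1a(const uint8_t *mem, size_t len) {
        uint32_t h = 0x811C9DC5u;        /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            h ^= mem[i];
            h *= 0x01000193u;            /* FNV prime */
        }
        return h;
    }

    /* Periodic check: alarm if the region no longer matches the
       reference taken when the chain of trust was established. */
    int region_unchanged(const uint8_t *mem, size_t len, uint32_t reference) {
        return fnv1a(mem, len) == reference;
    }
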
The question then is how you limit the ability of malware to get into the system between the checking times… Well, there are several ways, but I think this post is long enough as it is.

And before you ask: yes, I was well aware of the problem of modified firmware nearly a third of a century ago, when I designed and got approved the first 16-bit Intrinsically Safe RTU. It was based around an 8086 and originally had a number of 8051 microcontrollers to do various types of IO — reading panel controls and some types of instrumentation, as well as controlling communications equipment for both reliable and secure communications across open channels. Needless to say I had a few run-ins with management over it, and they vetoed the security aspects… something they might think differently about today, if they are still working…

Wael February 22, 2015 1:03 PM

@Clive Robinson, @Nick P, @ Buck,

Evidently @Nick P’s designs, approach, and recommendations cannot be accurately described as a “Castle”. If you want to drop the labeling, then you will need to shift the discussion to “My approach” vs. “His approach”. Feel free to use different terminology.

I repeatedly tried to abstract out the characteristics of the Castle and the Prison, without much success or agreement. Looking back at the discussion thread, I realize I was trying to achieve a different goal — one that I outlined on several occasions — than what you two originally aimed for. Perhaps that’s the reason we haven’t progressed much in the past three years, although it was a fun discussion.

Nick P February 22, 2015 1:30 PM

@ Wael

I’ll address smaller statements before the big one.

“Unless you control everything including the “trust assurance certifications” and the “Formal testing and proofs”, your systems will be vulnerable to the attacks we are currently witnessing.”

You have to put your trust in something. Still, I don’t have to control everything. I only need my distributed, secure SCM concept with entities involved that I can trust to do their part. Bonus points for their trustworthiness if they’re ideological, paranoid, and/or have skin in the game (e.g. they use the deliverable). I’d also have members of intelligence services involved to reduce the likelihood that one state actor could shut everyone up. If everyone’s product comes out the same, then the result should inspire high confidence that it does only what it claims to.

“You surely felt warm and fuzzy with an EAL 5+ SIM or smart card. Tell me how you feel about your “high assurance” now that you know someone has the private keys?”

EAL5+ is medium assurance. The plus signifies that a few pieces are higher assurance. I feel the same about them: they offer significantly higher security against most attackers than the competition. I’d also avoid the key escrow option, or request a method to ensure I control keys generated on-chip. That must be independently evaluated; I’m pretty sure some company already offers it. Anyway, even if I used vanilla smart cards, I’d have avoided this compromise, because you and I already agreed that another company was the best at security ICs (esp. TPMs).

“Take the hard drive firmware holes we saw recently… Are you yourself going to code review all the firmware? ”

If you trust the firmware, you’re Doing It Wrong. My advice has always been to use a host IOMMU, a per-device guard with PIO + a trusted driver, or a typical network guard mediating a machine dedicated to that function. I also advise mixing up which manufacturers and software you use for each. Combined with EAL6+ trusted software, the vast majority of their TAO catalog wouldn’t have worked.

“The best solution you have right now is to complement your high assurance systems with additional OPSEC procedures, which will limit the functionality of the systems.”

Yep: diversity and obfuscation as I said above. Also makes for good insurance against surprises in trusted software.

“How about the advice of using “older” hardware? That’s also based on an assumption”

I’ve already explained it to you. I’ll add that all leaks indicate they have an ongoing program to break into different stuff that took until 2008 to get some critical successes. Recent leaks show their subversion programs began years after 9/11. No assumptions: just concrete evidence from the inside along with probability. There’s a risk that a datapoint, esp in a SAP, hasn’t leaked yet and the subversion happened a few years before. I documented that in my essay, though.

“You know I am still waiting for someone to tell me at what point in time chips were subverted!”

2001-2004 for the mainstream; 2004-ongoing for anything else. The chips themselves might not even be subverted: the NSA just attacks errata, firmware, and so on.

Security models are next.

EDIT: Just noticed several posts showed up while writing this. I’ll respond to them later, maybe. I promised the security models and I’m halfway into it.

Nick P February 22, 2015 1:56 PM

@ Wael

re CvP and security models

Oh, you want a model. A single model that governs the security of every conceivable type of system? Two models that divide the entire field into two neat categories? I hate to tell you, Wael, but there’s no such thing. There are many types of systems, attributes, and security policies. Each one is easy to model using one set of techniques, but hard to model using others. Even within one system, we find that formal verification (which includes models) must use different tools for, say, computation and I/O. So, there’s no reductionist cheating in our field.

I can give you some security models to play with, though.

MLS/MILS non-interference model

  • All active entities in the system are subjects
  • All entities they can manipulate are objects
  • They are isolated by default
  • A security policy exists for how subjects can manipulate objects
  • A security kernel (or mechanism) mediates every access
  • Hard to express many security policies using pure isolation
  • Implemented by Aesec GEMSOS, Boeing SNS Server, XTS-400, and INTEGRITY-178B

Capability-security model

  • Builds on capabilities (pointers) that act as keys
  • Access to individual resources or methods requires a pointer to it
  • Pointers are unique, only created via trusted software/mechanisms, can’t be modified by untrusted data, and must be unforgeable in general
  • Capabilities might be passed as arguments to functions or used to build data structures
  • Special techniques exist for controlled propagation and revocation
  • Can express arbitrary security policies
  • Closest thing to your “One Model to Rule Them All” preference
  • Implemented by KeyKOS, CAP, System/38, EROS, and CHERI (a toy sketch follows below)

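A toy illustration of the capability idea in C (everything here is made up for the sketch): untrusted code never holds a raw pointer, only an opaque handle into a table the trusted mechanism controls, and every access is mediated against the rights the handle carries:

    #include <stddef.h>

    enum { CAP_READ = 1, CAP_WRITE = 2 };

    struct cap { void *object; unsigned rights; };

    static struct cap table[64]; /* reachable only by trusted code */

    /* Trusted code mints a capability, returning an opaque handle. */
    int cap_mint(void *obj, unsigned rights) {
        for (int h = 0; h < 64; h++)
            if (table[h].object == NULL) {
                table[h].object = obj;
                table[h].rights = rights;
                return h;
            }
        return -1; /* table full */
    }

    /* Mediated access: no valid handle with the needed right, no object. */
    void *cap_access(int h, unsigned need) {
        if (h < 0 || h >= 64 || table[h].object == NULL)
            return NULL;
        return (table[h].rights & need) == need ? table[h].object : NULL;
    }
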
Information flow control model

  • System activity is a series of information flows between entities in system
  • Sources and destinations of flows are labeled
  • A security policy dictates permissible flows
  • A reference monitor (hardware or software) ensures only permissible flows happen
  • Reference monitor might also look for risky flows (taint-based methods)
  • Like assured pipelines, it can express many security policies
  • Implemented by SAFE w/ Breeze and Cornell’s SIF

Language-based security model

  • Identify correctness or security properties system needs
  • Build a type system that prevents or detects insecure constructions
  • Integrate that with an existing or new programming language
  • Construct tools that convert that to an executable embodying same properties
  • Build system or application in that language and tools do the rest
  • Researchers stretch this into new use cases every year
  • Risk is in abstraction gaps, translation, and/or runtime
  • Examples include SPIN OS in Modula-3, JX OS in Java, and ASOS in Ada (also fits MLS)
  • Tagged processors enforcing language-related typing (eg SAFE, SSP) are also examples

Security through artificial diversity model

  • Assumes success of single attack goes down as diversity of targets increases
  • Identify the various attributes targeted or leveraged in attacks
  • Diversify those as much as possible
  • Amount of manual effort, diversity, and granularity varies between techniques
  • Leads to probabilistic security argument
  • Weaker than others: best used in combination with strong model

Security through cryptographic protection of TCB

  • Usually just a MILS, capability, or info-flow model with crypto
  • Gets its own category because the researchers all do similar things
  • Encryption is used for confidentiality of data and/or integrity of instructions
  • Design makes unauthorized reads get scrambled data
  • Unauthorized attacks on control flow causes integrity checks to fail
  • Designs might make processes, devices, and/or everything outside CPU untrusted
  • Used in Aegis, SP, HAVEN, SecureME, and many modern prototypes
  • Modern work either models them with MILS or information flow control

Control flow integrity (weaker)

  • Included because all successful models must have this property
  • This property by itself prevents external data from hijacking a system
  • Can be applied at hardware, firmware, and software levels
  • Can be compatible with legacy code or use purpose-built languages
  • If hijacking is prevented, basic software and language techniques can do the rest
  • Potentially the easiest and cheapest option if no others are available
  • Examples include the Burroughs B5000 (1961), NaCl, CPI, and many academic prototypes

So, there you have some models to play with. The next thing for you to do is to map a given system’s features and security requirements onto one of the models. Then you implement it using high assurance techniques. Boom: a [hopefully] secure system.

Sancho_P February 22, 2015 2:27 PM

@ Nick P

Um, I’m a bit confused now, here’s what I saw in this thread:

Buck, 17, 09:29 PM (@all, short)
Buck, 17, 10:29 PM (@all, but does it have 2 parts?)
– 18, 05:26 PM (Sancho, reply @Grauhut)
Buck, 18, 10:56 PM (comment @Sancho – Grauhut)
– 19, 06:04 PM (Sancho, reply @Buck)
Buck, 19, 06:23 PM (short @ Sancho)
Buck, 20, 09:50 PM (reply @Sancho)
– Clive, 21, 05:35 AM (reply @Buck – Sancho)
Nick P, 21, 12:17 PM (@Buck: “That whole second half of your post was spot on.”)
– 21, 05:46 PM (Sancho mentioning Nick P for the second half)

My guess was you were referring to @Buck 20, 09:50 PM replying to me, the second half of his posting starting “Basically …”.
In fact there is a third part in Buck’s posting, but it seems you didn’t address that either?
Just wanted to give you a handle to find my comment, which would also disagree with your “was spot on”.

Sorry if I got that wrong!

Nick P February 22, 2015 3:23 PM

@ Sancho_P

You’ve just illustrated the advantage of sub-threading on forums quite nicely. 😉 Alright, here’s what I agreed with:

“Basically, connection monitoring is doomed to suffer the same fate as AV signatures. Novel exfiltration sequences will also be cryptographically encoded, so in effect we will be limited to looking only for known classes of previously discovered data smuggling routines. Whitelisting hosts won’t help here either, because whitelisted hosts will just be abused to attack other whitelisted hosts until the target has been breached”

“Thus, the real trick is to not focus so much on prevention of attacks themselves, but on mitigation of any potential fallout from the theft of that data to begin with.”

I thought his critique of connection monitoring was spot on. I’ve used variations on this approach and let’s say it’s ridiculously hard to do with strong security. You have to greatly constrain the communication protocol’s data formats and operations. You also have to strongly enforce integrity and authentication. Each application will have its own type, list of permissible operations, and so on. Guard software and configurations must be done per app. Covert channel mitigation must be applied at the transport level if using TCP/IP. Only then can connection monitoring hope to achieve something without being bypassed by existing or novel techniques. I doubt you will find any major app or commercial network meeting these requirements. So, connection monitoring by itself can’t counter strong attackers hitting such apps and networks.

His whitelisting argument is true, too. For one, the whitelisting method might be attacked. An example is forged IPs for IP-based whitelisting. Two, the system might be compromised by incoming data’s effect on lower layers before the whitelisting code even runs. Three, DDoS attacks on whitelisted hosts might force users or admins to work around whitelisting with a homebrew solution [which is insecure]. So, whitelisting on insecure machines isn’t going to stop strong attackers. It’s great for stopping some social engineering and spearphishing attacks, though. DefenseWall showed it can block some malware, too.

Additionally, I agreed that it’s good to “mitigate potential fallout from theft of that data.” The fallout comes in a number of forms. The exposure of the data itself can be mitigated by keeping it encrypted in storage and/or untrusted processing. This would’ve mitigated leaks from physical hard disk failures, hacks of legacy database machines, and some attacks on virtualized systems. Another aspect is recovery: ensure you can recover your machines and data to a clean state rapidly. I’m sure you understand the benefits and issues here enough that I don’t need to explain further.

So, that sums up my thoughts on what I saw. Now, let me look at your comment in light of that.

Your comment basically describes a NIDS that tries to identify malicious leaking of outgoing data. Am I right or wrong? As is typical there, you try to leverage the metadata of the connections (including past history) to estimate how legitimate a connection is. This has many of the issues I describe above, and I’ll add that these systems have already been beaten. There’s a huge cat and mouse game here where the cats win more often. One example technique is disguising the C&C as HTTPS sessions with traffic behavior consistent with web applications or browsing. It takes a certain set of skills and setup to reliably spot such a leak. And that’s just one method among many, with dozens of techniques across about as many OSs and protocols!

You also mention an external device that mediates uploads, where the user has to physically authorize them. Now you’re getting closer to The Right Thing approach to this situation. The problem is that you’re just looking at upload time. You should look at the data’s characteristics and metadata about the destination like you did above. Then combine that with the external device, leveraging its trusted functionality to validate those things and even facilitate the transfer. For an example, look up how Nexor combines their Outlook-compatible proxy with a robust mail guard for an easy-to-use, secure email solution. Think of how you might do something similar for uploads.

EDIT: This is actually already a solved problem. I was enjoying the conversation and overlooked the obvious fact that there’s a whole industry dedicated to this: cross-domain solutions. This usually involves a guard with both automated and manual inspection available. The endpoints need some strong isolation capability (e.g., MLS, MILS, SKPP) with labels on different partitions. The networking system, isolated in its own partition, puts the labels on the packets. Smart implementations do IPsec-style protection of packet data/metadata. The guard makes decisions on information flow based on metadata, data, and labels. An example would be a MILS workstation + BAE’s XTS-400 + SAGE. Boeing’s OASIS architecture modernized it with a more clean-slate approach. The E programming language does something similar at the object level for distributed systems and could be ported to, say, the JX Operating System.
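
To make the guard’s decision step concrete, here’s a rough sketch. The labels, policy table, and “dirty word” inspection are invented for illustration; real guards in this class do far more (protocol validation, human review queues, audit).

```python
# Toy cross-domain guard: permit a flow only when the label policy AND an
# automated content inspection both agree. Field names are hypothetical.
from dataclasses import dataclass

POLICY = {
    ("UNCLASS", "UNCLASS"): True,
    ("UNCLASS", "SECRET"):  True,   # flowing up is fine
    ("SECRET",  "SECRET"):  True,
    ("SECRET",  "UNCLASS"): False,  # flowing down needs review, so deny here
}

@dataclass
class LabeledPacket:
    src_label: str
    dst_label: str
    payload: bytes

def inspect(payload: bytes) -> bool:
    return b"SECRET//" not in payload          # stand-in for content inspection

def guard(pkt: LabeledPacket) -> bool:
    allowed = POLICY.get((pkt.src_label, pkt.dst_label), False)
    return allowed and inspect(pkt.payload)

print(guard(LabeledPacket("UNCLASS", "SECRET", b"status report")))  # True
print(guard(LabeledPacket("SECRET", "UNCLASS", b"status report")))  # False
```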

Wael February 22, 2015 5:17 PM

@Nick P,

Oh you want a model. A single model that governs the security of every conceivable type of system. Two models that divide the entire field into two neat categories?

Yes! I am after that model! It doesn’t have to be two, it can be more.

I hate to tell you, Wael, but there’s no such thing.

I know, let’s make such a thing! But I think you are saying: Such a thing cannot exist — It’s impossible.

Each one is easy to model using one set of techniques, but hard to model using others.

Great! Now substitute these in a definition of security of your choice, then tell me how secure or insecure the system is!

Even within one system we find that formal verification (includes models) must use different tools for, say, computation and I/O. So, there’s no reductionist cheating in our field.

This is about validating the low level work to be done later. It’s not reductionist cheating, it’s a validation vector that tells us what it is we’re trying to achieve and how it fits in the big picture. It tells us what other gaps exist (in terms of the security definition) that need to be bridged.

I can give you some security models to play with, though.

I’m familiar with some of them — not all. But I’ll humor you and continue…

A security policy exists for how subjects can manipulate objects

  1. Where did the policy come from?
  2. Who protects the policy?

A security kernel (or mechanism) mediates every access

  1. Who developed the kernel?
  2. Who verified its functionality?
  3. Source code available?

What you describe is protection against bugs, security flaws, and unintentional weaknesses. Does it really cover subversion and defending against an adversary who has control of everything from the silicon all the way to layer 12?

Then, you implement it using high assurance techniques. Boom: a [hopefully] secure system.

Uh! But I’m not willing to implement every component of the system by myself! Bang! The system doesn’t exist, therefore it’s unconditionally secure 🙂 — ok, @Buck: You can chime in about “The possibility of unperceived existence” 🙂

However, the model + the definition will help you identify what “minimal” component you need to control to give you the desired security properties of the platform! I detailed an example once on confidential “Texting”.

We will be in agreement if you identify the “people” involved in designing the system using your approach, then describe the trust relationship between them. And then tell me how such a system can be shared with the uninformed! And after you do that, then:

  1. Tell me how some entity or organization with close to omnipotent powers can subvert it
  2. Go back and add the needed mechanisms to mitigate identified weaknesses.
  3. Repeat 1 – 2 as necessary 🙂

After a few iterations, bring the system to this blog for us to poke a few more holes in it, and… oh well Repeat the process again.

Nick P February 22, 2015 5:45 PM

@ Wael

“What you describe is protection against bugs, security flaws, and unintentional weaknesses. Does it really cover subversion and defending against an adversary who has control of everything from the silicon all the way to layer 12?”

What I describe is how to protect the system. The high assurance (eg CC EAL6-7) processes apply to each layer of it. The people applying it and physical security are separate aspects of the process. At silicon level, you have to trust the people making the tools, masks, and so on. That’s inevitable. So, getting control of that situation is a prerequisite for silicon-up security. If it’s just firmware up, then hardware/software/system design as I described it should suffice.

“Uh! But I’m not willing to implement every component of the system by myself! ”
“We will be in agreement if you identify the “people” involved in designing the system using your approach”

The important point is that the process is followed properly (requirements to implementation), it’s reviewed properly, you can trust at least one reviewer, and you have trusted distribution from them to you. If requirements were right and process was followed, the trust portion reduces to one or more parties of your choosing with the skills capable of evaluating it and distributing it to you. That’s my whole trust model for security evaluation. Simple, eh?

Far as people, there are actually a lot of people who can do the review, whose morality and loyalties run the gamut. Take your pick and pay them if you lack the time slices. Don’t have the money? Convince a group to invest in funding the design or review with a beneficial outcome for them. Crowdfund it through potential users. You could also pick a guy that’s survived plenty of government intimidation + has IT brains to do it. (I vote Appelbaum or Moxie.) You think that person is going to take money from the government to subvert what they themselves will use? Not likely.

I think what you and a lot of others are missing is that the problem isn’t really one of technology: it’s one of psychology. It’s trying to understand whether a given person will do what he or she says. If that’s a high probability, you can trust the outcome. If it’s uncertain, don’t trust outcome unless a bunch of them with different attributes come to same conclusion. Even then, watch out.

“However, the model + the definition will help you identify what “minimal” component you need to control to give you the desired security properties of the platform! I detailed an example once on confidential “Texting”.”

I get the concept. Yet, that was one type of device and problem domain with relatively narrow security requirements. Even then you had to work quite a long time to produce a model. And you’re asking me to do something similar for general-purpose systems running arbitrary (even self-modifying) code in an NP-complete number of hardware, software, and I/O combinations? Where do I even begin…? So, instead, I give you models that you apply to a specific problem while (if smart) greatly limiting the complexity the above factors introduce.

“After a few iterations, bring the system to this blog for us to poke a few more holes in it, and… oh well Repeat the process again.”

Alternatively, I can keep coaching people on how to build, evaluate, and distribute such systems. I might do both, though, as people keep asking those questions. Clearly they need a bit more than my security framework and the Common Criteria. Gotta get back to work, though.

Buck February 22, 2015 5:46 PM

@Wael

Your ‘unperceived existence’ link really made me wonder about a surveillance equivalent of Heisenberg’s Uncertainty Principle or the Quantum Zeno effect! 😉

Sancho_P February 22, 2015 5:47 PM

@ Nick P

Unfortunately there seems to be a basic misunderstanding, so from your detailed reply (the parts before and after the lightbulb moment) only the sentence mentioning the NIDS is somewhat close to the point:

No, it’s not a NIDS, as that term is occupied by a fantasy, like worldwide peace and no crime. Not realistic. These devices are too bloated (see your EDIT).

I’m talking about a small-budget device with two RJ45 sockets, an RS232 port, a small LCD (text only), an LED (or two), and a big button.
No “OS”, no games, no fun.
Not a device for everybody, but for those who do not want their data leaked.

A single PC (in my case a Mac, but it doesn’t matter which OS) is connected to the box which then connects to the LAN / Internet.
To simplify thinking we assume:
Only the upload (Tx) from the PC is connected to the electronics (probably an FPGA) and investigated.
Download from the Internet is not interesting.

We can’t fix our workhorses, they were lost from the beginning, we only didn’t know [1],
but we may block unauthorized uploads (?).

[1]
This makes me think again about highly specific malware to destroy data, alter documents (!!!), or place “evidence” on our machines.
Horrible.

Wael February 22, 2015 6:06 PM

@Nick P,

I think what you and a lot of others are missing is that the problem isn’t really one of technology: it’s one of psychology.

Is that a fact? Would you like me to give you links that show the opposite?
Basically you need technical solutions for protection against “hackers” or “crackers” — You need political solutions for state actors.

Wael February 22, 2015 6:14 PM

@Buck,

made me wonder about a surveillance equivalent of Heisenberg’s Uncertainty Principle or the Quantum Zeno effect

That’s some profound sh*t! Lemme help you think about it. It really is an amazing observation! 🙂

Wael February 22, 2015 8:49 PM

@Nick P,

A while back you posed a question about subversion.
My answer then was It’s impossible for an individual to protect against it.

I get the concept. Yet, that was one type of device and problem domain with relatively narrow security requirements. Even then you had to work quite a long time to produce a model.

The model isn’t complete, still! And I didn’t do it alone; it came out of our discussions.

And you’re asking me to do something similar for general-purpose systems running arbitrary (even self-modifying) code in an NP-complete number of hardware, software, and I/O combinations? Where do I even begin…?

Oh, no! I’m not asking you to do it in one step! Neither will I press @Clive Robinson to detail to an implementable level how a “prison” – in all its manifestations – works! I’m picking your brains – or whatever is left of them 😉 Where do we start? That’s the good question! We start from the basics, I claim 🙂

Don’t take my persistence as an attempt to show your recommendations are “weaker” than others; it’s not the case! I learnt a lot from them!

Wael February 22, 2015 9:18 PM

@Buck,

I took a glance at this. That Zeno dude was pretty slick for his time! Perhaps if we convince people that observing us changes the results, or stops us from committing the acts they want to catch? Or should we find someone to observe the observers?

Buck February 22, 2015 9:22 PM

@Sancho_P

My proposal was only regarding outgoing data, the observer must be external (not in the root kit controlled universal Win or whatever PC).

Anyway, the content wouldn’t be the first question to ask an automated observer. The first question would be “how much” would go to “which destination + port” and “is this a known + confirmed upload”. The latter is tricky and involves some learning (“please confirm upload of 2.5 kB to https://www.schneier.com at 204.11.247.93”) to accept it as “known size and destination”, but we do not upload so much stuff, most is sending requests – that could be basic knowledge of the observer.

Sounds good in theory, but it’s a bit more complex in practice, and it doesn’t really help much that the observer is external… User input is still required to verify your expected upload data, so how do you propose to validate the actual data that’s leaving your unsecured endpoint? Are you really prepared to manually count each outbound byte for every legitimate connection you initiate? Even a single bit of information lost each outbound request is likely to add up real quick. I have been told that patience can be a virtue, and thus I believe ‘the leaks will continue to drip’.

“You wrote ‘… because whitelisted hosts will just be abused to attack other whitelisted hosts until the target has been breached.’ but sorry, I did not understand that: When you think of MITM attacks (CA model) or that the destination is already owned by the adversary – these are different issues, the observer can’t help in both cases.”

These are not totally distinct issues – the observer can’t help unless it knows precisely what it’s observing, and the observed can’t communicate that clearly unless it knows exactly what is happening!

What you’ve missed from @Leia Organa and @Wael’s exchange is, there’s no guarantee that uploaded data will even go through your external observer. It could be transmitted via electromagnetic radiation, power usage, sound waves, or any number of various other side-channels.

I will slightly agree when you think about avoiding nude selfies on your PC but I can not accept the basic idea that my machine is open to any investigation without my consent.

Your use of the word ‘investigation’ seems to imply some sort of legal procedure, while at the same time ignoring the threat of potential criminal activities or the possibility of radically different policies in other countries.

Clive Robinson February 22, 2015 10:11 PM

@ Wael,

Just to chip in and help out 🙂

Oh, no! I’m not asking you to do it in one step! Neither will I press @Clive Robinson to detail to an implementable level how a “prison” – in all its manifestations – works! I’m picking your brains – or whatever is left of them 😉 Where do we start? That’s the good question! We start from the basics, I claim 🙂

Have a look at what I’ve said to @ Grauhut in answer to a different but similar question,

https://www.schneier.com/blog/archives/2015/02/friday_squid_bl_466.html#c6689851

Buck February 22, 2015 10:22 PM

@Wael

I think I may have inadvertently touched upon your last question in my most recent reply to Sancho_P. 😛

I can’t claim to profess an optimal solution to the problem, but I definitely do feel that more transparency will be the best route forwards!

Nick P February 22, 2015 11:38 PM

@ Wael

“Is that a fact? Would you like me to give you links that show the opposite?”

They’d be wrong. The solution is multipart: political, psychological, and technical in that order. The law (political) must allow people to protect their data and not force weakness into systems. People, yourself or others, must put in honest effort to protect the data. Technology gives them increased capability to do that. It’s not a prerequisite, though: the Amish have the least of it and the most privacy of information. 😉 Security technology certainly helps protect data moving through other technology.

Alright, to the specifics. Your counter is funny because you essentially said what I did with your questions. So, there’s a protection, a process, a review. By WHO? And why do you trust them? Were they incompetent? Were they malicious? How do you know this? If they left traces, you might use technology and logic. Otherwise, you’re going on your assessment of people. Welcome to counterintelligence and personnel security. 🙂

It’s critical that you understand that it boils down to who you trust and why. Psychology, reason, and probability constituting the why. Who wrote the tools you trust in the assessment? Who said the security proofs of the design check out? Who said the configuration management was in order? Who (a bunch of who’s) said the design was properly converted into a mask? Who physically moved the hardware and/or ensured seals were properly checked?

All these who’s clearly argue that it comes down to you trusting a person’s word or actions at some point. With subversion as a risk, they might turn on you for any number of reasons. Make the wrong choices, get a subverted design. The current President and the technology they promised they used to build it had no effect. You just trusted the wrong person to vet or do something critical. So I maintain, whatever the tech or political position, it’s human psychology that’s the most important part of assessing those that you end up trusting.

re subversion

Ahh, the thread where you first asked about CvP.

” The (necessary, but not sufficient) condition is, they have to fully control the design, manufacture, test, and deployment of the hardware / software. This implies nothings outsourced including the FABs.” (Wael)

Correction: control or verification of that. Or verification of controls that ensure the process works correctly. You have a lot more flexibility than you think. This lets you outsource about all of it so long as it’s done in a way that’s verifiable and you trust your verification. A simplistic example is how verifying isolation and communication mechanisms in a MILS kernel lets you ensure policies are enforced for arbitrary computations in partitions. One verification of someone else’s work keeps applying to new work. Doubtful that it would be a single individual for a whole, modern system as you pointed out in the old thread.

Yet, Moore and Wirth have both done the whole thing from language down to silicon by keeping it simple. The systems were usable, too, albeit in a very minimalist way. SAFE is going from application-level, info-flow language all the way down to dedicated, custom hardware. That’s quite a complex effort with under 15 people total. A single author once wrote a paper on verifiable translation of high-level HDL to RTL. The reverse engineering firms seem motivated by money more than ethics and could do a tear-down of the fab’s work. So, a whole new platform down to gates with 16 people plus one or more companies reverse engineering it for extra effort. That’s the easiest and cheapest way to keep trust down.

I have better ones, but they’re not easy and might not be cheap. Gotta hold off on publishing them for now.

” I’m picking your brains – or whatever is left of it 😉 Where do we start! Thats the good question! We start from the basics, I claim 🙂 ”

Good catch haha. I might take a stab at it. There was one great paper that I can’t find or remember the name of that integrated much of it. It literally worked from the most fundamental issues to the specifics of high assurance design in PowerPoint style. Best one I’ve ever seen. Anyway, I’ll have to put together a different one lol.

“Don’t take my persistence as an attempt to show your recommendations are “weaker” than others; it’s not the case! I learnt a lot from them! ”

I appreciate it. I counted you as one of my proteges in high assurance engineering. Older, smarter, and teaching me back more than most. I labeled that link in my archive: “I teach Wael how to make secure reference monitors. He, Clive and I have many enlightening discussions from that point.” Clive and I had started more practical discussion on high assurance security than anything I’d seen in academia or commercial sector. Your entry made it even more interesting and productive. It was all fun, too. 🙂

re Zeno Effect

For malware, I’d say it’s the NSA Equation Group malware that disappears the second you look to see if it’s there. Kaspersky, using the Unified Surveillance Theory, was able to undermine the effect and grab a sample of the stuff. Meanwhile, the physical Zeno Effect was one of the many things the Ph.D.s at the Cracked Institute of Technology warned us about.

Nick P February 23, 2015 12:01 AM

@ Sancho_P

I still don’t get it. The reason being that most networking and Internet activity is built on two-way protocols. For instance, TCP always starts with an upload (three-way handshake) and keeps sending more (i.e., ACKs). Protocol layers above it do as well. So, most of the Internet doesn’t work on your machine. If you change it, then you’re allowing both download and upload by protocols susceptible to covert channels. Allowing download also lets them hit you with malware. So, they hit you with malware, then they leak your secrets over covert channels.

The designs in my EDIT didn’t get more complex just for fun: they needed the extra functionality to deal with real-world use. Not to mention bloat was in some specific examples rather than the architecture itself. The MILS kernels are usually under 10Kloc, guards can be built with them + limited extra stuff, and the automated software can be similarly small. The whole thing can be kept reasonably simple, modular, layered, and with interface checks if you tolerate the overhead. The TCB’s can always be small and easy to verify. That’s been proven by many past works.

“We can’t fix our workhorses, they were lost from the beginning, we only didn’t know”

Sounds beautifully poetic but thankfully not believable. There are a number of projects that prevent, limit, contain, and/or recover from damage to workhorse software (esp Linux/BSD). They have achieved many interesting results. Our workhorses might be fixable after all. Yet, regardless of method, we will have to gradually re-engineer them over time to get the most assurance of the method. Although, someone might find a way to fight the need for that.

Wael February 23, 2015 12:17 AM

@Buck,

I think I may have inadvertently touched upon your last question in my most recent reply to Sancho_P

I noticed and thought about it a bit. You must have some kind of ESP. I guess it’s one of those strange synchronicities…

Wael February 23, 2015 2:55 AM

@Nick P,

They’d be wrong.

Well, Excuse me! I know you guessed the link before you clicked it, but you won’t guess the next link correctly 🙂

The solution is multipart: political, psychological, and technical in that order […] the Amish have the least of it and the most privacy of information.

Nonsense!

Alright, to the specifics. Your counter is funny because you essentially said what I did with your questions […] Otherwise, you’re going on your assessment of people. Welcome to counterintelligence and personnel security. 🙂

Nope! The decision has been made. We can’t trust anyone or anything because of subversion, interdiction, and other tactics. The rules of the game are: design a secure system under these conditions. The system doesn’t have to be a general purpose system — it could be a specific purpose system at least initially. This is what @Clive Robinson mentioned recently, and is the exact reason I asked @Figureitout this question. When we have a few of these systems, perhaps we can think about combining them later to get our “general purpose” system, if it’s doable.

It’s critical that you understand that it boils down to who you trust and why. […] Who physically moved the hardware and/or ensured seals were properly checked?

1- Isn’t it evident I understand that when I said: If you are forced to relinquish[1] control of a component of the system, divide control among parties that are not likely to cooperate against you?
2- When a gov agent interdicts your hardware, you’ll be none the wiser. They aren’t “schmucks”, see! They won’t put a label in the package saying this package was inspected by xxx!

Correction: control *or verification* of that. Or *verification of controls* that ensure the process works correctly…

Now where is that link to the bull video? 🙂

There was one great paper that I can’t find or remember the name of that integrated much of it.

I want no papers! I have them coming out of my ears and can’t keep up with the amount I receive! What I want is your brain.

I appreciate it. I counted you as one of my proteges in high assurance engineering. Older, smarter, and teaching me back more than most.

Older? Probably! Smarter? Unknown and very subjective. But thanks for the compliment — as long as @Clive Robinson and his country folk don’t read it as “more handsome” 😉

Zeno Effect…

Read that stuff. They came across as whack jobs. Maybe I’ll change my mind later, but for now, the site name they chose isn’t exactly a misnomer 😉

My turn to go to sleep. Gotta wake up in 4 hours!

[1] I know “forced to relinquish” sounds oxymoronic.

Nick P February 23, 2015 11:28 AM

@ Wael

“Nope! The decision has been made. We can’t trust anyone or anything because of subversion, interdiction, and other tactics.”

I call bullshit right back at ya, homie. As Bruce pointed out, you trust way more people than you give credit to. He was flying too. 😉 Your decision can’t work in the real world unless you’re pulling a Moore- or Wirth-style job on custom hardware. Even the Magic-1 homebrew CPU wouldn’t meet your criteria because you’d have to trust the TTL chips. You’d be back to hand-wiring circuits that ran at kilohertz with so little memory the command line overflows! 😛

Bruce’s point is even stronger when I think about your engineering degree: all that math, theory, and those methods you’ve trusted your whole career without independently re-running the experiments and checking the proofs. Now you tell me you’ll leverage that in determining secure INFOSEC methods, but in the new field you can’t trust any single person? Lmao…

Our infrastructure will be built by more than one paranoid person living in a faraday cage. You will need ways to assess those people or vet what they produce. You’re going to trust someone or something at some point. That’s a fact that you can’t escape. Now, you must determine how to make human nature work for you instead of against you. Who and what do you trust? Where do you begin in your work?

I trust TTL and FPGA chips sold in lots before 2004. That’s back when they were just focused on delivering products. 🙂

“It could be a specific purpose system at least initially. ”

That was my plan too. Except it’s nearly as easy in hardware to design a secure, general purpose system. So, I went straight for such designs.

“I want no papers! I have them coming out of my ears and can’t keep up with the amount I receive! What I want is your brain.”

I’ll give you a paper on my brain. Jk. Meanwhile, I’ll work on the model in the future.

“Read that stuff. They came across as whack jobs. Maybe I’ll change my mind later, but for now, the site name they chose isn’t exactly a missnomer ;)”

It’s an infotainment site: part informing, part bullshit, always funny.

Nathanael February 23, 2015 12:52 PM

“But I suppose, although defense wins games, offense sells tickets. And the tickets are for much more lucrative budgets, which grow ever larger the longer the game continues.”

This seems to describe the dynamic. It’s quite self-destructive in the long run. The question is how much these NSA/CIA/DOD clowns are going to take down with them when they go down. It could be the entire US government. It could be the world, if they manage to trigger a nuclear war (so let’s not let them).

Sancho_P February 23, 2015 2:51 PM

@ Buck

”… but it’s a bit more complex in practice …” – Bingo!
As I have learned from my boss it’s always easy when one has no idea about the details 😉

“Even a single bit of information lost each outbound request is likely to add up real quick. I have been told that patience can be a virtue, and thus I believe ‘the leaks will continue to drip’.”

Yes, that’s really dangerous.
However, the combination of “destination + amount + frequency” should suffice to trigger an alarm,
whereas the destination alone could only be a once-confirmed destination, say Facecrook or Dr. Goo, and needs either a MITM or a compromised service / host / destination (e.g. my friend there with a gun at his head).
Anyway, this point needs special attention and knowledge.
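
A rough sketch of that trigger logic, just to pin down the idea; the thresholds and the table are invented, and the real thing would live in the FPGA, not in Python:

```python
# "Destination + amount + frequency" alarm: anything outside the confirmed
# table or outside its learned size/rate gets held for the button press.
import time
from collections import defaultdict

CONFIRMED = {("www.schneier.com", 443): 4096}   # max bytes per confirmed upload
history = defaultdict(list)                     # (host, port) -> timestamps

def check_upload(host: str, port: int, nbytes: int, max_per_hour: int = 20) -> bool:
    key, now = (host, port), time.time()
    history[key] = [t for t in history[key] if now - t < 3600] + [now]
    if key not in CONFIRMED:
        return False                            # unknown destination: ask the user
    if nbytes > CONFIRMED[key]:
        return False                            # unusually large: ask the user
    if len(history[key]) > max_per_hour:
        return False                            # too frequent: ask the user
    return True                                 # pass quietly

print(check_upload("www.schneier.com", 443, 2500))  # True: known, small, rare
print(check_upload("evil.example", 443, 2500))      # False: LED on, wait for button
```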

Not sure how Wireshark (@Leia Organa) and various side channels are related but I think we’ll agree on the following:
When I’m such a high-value target that they rent the apartment above mine and have their van parked downstairs 24/7, then I’m lost, whatever communication I try, including the waggle dance, and even when I don’t communicate at all.

By using “any investigation” I really meant “any”, as I see the content of my PC (Personal Computer) as my intellectual property, no different from my brain. All my ideas, thoughts, artwork, fantasies, and personality are there, and no one but me could sort or interpret the content to make sense of it.
The content per se can’t harm anybody and therefore is off limits to anybody without my consent.
Regardless of (man made) laws and legal hairsplitting I deem any attempt to secretly access it as a crime and a personal and hostile attack.
What’s in my brain can be right or wrong, sane or ill, genuine or altered by mistake or intention.
No “evidence” can be taken from my brain.

Self-destructing malware, as demonstrated by the Equation Group and usable by any attacker, renders whatever “evidence” there is void anyway.

@ Nick P

Yep, the handshake is tricky but standardized (I don’t know much about it, so that’s “the easy part” for me, but from what I see in Wireshark it is not rocket science or magic). There is the possibility of leaking data in snippets, and as @Buck pointed out, add drop to drop …
Even access to well known sites (“get”) may leak some information in the URL, but not to an “unknown” C&C server.

”Allowing download also lets them hit you with malware.”

We must start one step before: The malware is already on the PC.
So there’s no need to download it, but yes, add more, no problem.
To keep the workhorse usable is a different task.

My “we can’t fix our workhorses” was meant as
– we can’t go back in time, the machines are already there
– the systems were not made with adversaries in mind (“lost from the beginning”)
– there is a monoculture of only three systems and a billion machines
– none of these systems was built from scratch as it is now; they grew up over the years
– the task of building a universal, modern machine (PC) is tremendously complex
– running nearly any low-level (compiled) software is insecure per se
– the machines are all out there, both cheap and useful.

What we’d need are simple boxes which are not subverted from the beginning, created with today’s experience, knowledge and caution.

I just think that would make the crooks’ job not impossible, but it would be much, much harder for them to access, e.g., my proposal to Anesrif before I send it to the intended destination.
What would the malware look for on my HD? Drawings? Text? Encrypted stuff?
Upload 200GB by appending slices to URLs and collect them from the backbone?
Without transport control? Good luck!

Nick P February 23, 2015 7:07 PM

@ Sancho_P

” Drawings? Text? Encrypted stuff?
Upload 200GB by appending slices to URLs and collect them from the backbone?”

Fair enough. It’s unlikely if they can’t use any air gap jumping techniques against you. They can destroy the data or systems, though. You also still have to integrate your apps with the upload system so it only lets out traffic from the exact apps you want it to. Tricky stuff on a machine you assume is compromised. It will complicate the design.

Btw, the more typical use of covert channels is leaking keys, passwords, and so on. Then, they just use them on data they’ve intercepted on the network or on an encrypted disk they’ve physically stolen. They might also leak a description of your hardware, OS, and running software to make for a better attack. That’s one of my old tricks.

Far as rate of leaking, basic TCP channels can leak over 1Kbps on top of any IP or HTTP channels in use. Let’s use the 1Kbps per TCP connection number. A typical web session makes hundreds of connections. File uploads also make a steady stream of connections. The result is that, after even 10-15 minutes, they could leak a lot more than encryption keys or passwords. Most of the above data would get out. Now, if they get a copy of your transfers or storage, they can decrypt it.
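
Back-of-the-envelope, using that 1 Kbps figure; the connection counts and durations below are assumptions, not measurements:

```python
# Rough covert-channel throughput: bits/sec per connection times connections
# times seconds, divided by 8 for bytes.
RATE_BPS = 1_000                       # ~1 Kbps per TCP connection (claim above)

def leaked_bytes(connections: int, seconds: int) -> int:
    return connections * RATE_BPS * seconds // 8

print(leaked_bytes(1, 60))             # one connection, 1 min: 7,500 bytes
print(leaked_bytes(200, 900))          # busy session, 15 min: 22,500,000 bytes
```

So even a modest session leaks megabytes, far more than keys or passwords.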

I’ll admit this is a targeted attack, though. This isn’t likely to be used in fire-and-forget QUANTUM attacks although it could be done that way. Same can be said about guard architecture, though, which gives you more options. Most common thing employed by non-experts is OpenBSD with a particular firewall configuration just because they probably got most of the bugs out. You can always take apps and even kernel code out if it’s not needed.

Figureitout February 25, 2015 12:12 AM

Nick P
As Bruce pointed out, you trust way more people than you give credit to.
–Which is wrong too. I don’t trust them; there’s a difference between having a choice and not having a choice to do certain things when monopolies take over. Doing something b/c there’s no other way and placing trust in something are 2 separate things.

For instance, almost any “clean slate” computer security effort is going to start w/ likely infected tools b/c handwiring and amount of circuits and building the logic necessary for a real re-do isn’t feasible (and even discrete components have large space for all kinds of crap due to the market focusing on SIZE when it doesn’t frickin’ matter much anymore).

The math and electrical theory have had quite a testing (though I think we all know there are some holes lurking).

One thing you’re right about is going back to boring circuits, no frickin’ command line, no normal keyboard, no screen, just LEDs (still susceptible to EMSEC issues as they are not ideal diodes, not to mention emanations from simple circuits will be very revealing), resistors, and transistors (power supply components too, preferably hand-wound transformers). There are analog ammeters capable of measuring microamps, but it’s boring, eh? Digital screens are so much cooler. Fck this sht, it’s annoying. I’d rather code and order pristine, perfectly manufactured boards and cables. Hand-wiring 1000’s upon 1000’s of turns of wire is frickin’ dull and annoying, let me tell you. And you have to be a craftsman heat-shrinking your cables or they’re going to suck ass and break and not be portable, which is another security risk. Going backwards.

Wael April 11, 2016 1:18 AM

@Nick P,

I trust TTL and FPGA chips sold in lots before 2004. That’s back when they were just focused on delivering products. 🙂

Was digging around in search of something[1]… Quick question: how can you tell when the lots were sold? Can’t subversion include bogus chip markings — I would put a 2003 date on a chip I subverted!

[1] Things I said I’ll do later…

Nick P April 11, 2016 10:16 AM

@ Wael

Good thinking. It’s actually the date of manufacture that matters. The listing can be faked. The main defense is heuristics I have no intention of publishing. However, one is acquiring hardware from people that are low risk of subversion but know when they bought it. Just gotta buy a bunch extra cuz used breaks more.

Wael April 11, 2016 11:15 AM

@Nick P,

It’s actually the date of manufacture that matters.

Wouldn’t you think the manufacture date can be tampered with?

Nick P April 11, 2016 11:44 AM

@ Wael

I just said that. 😛 One of the other benefits of stuff like TTL and old FPGAs is that a small outfit can implement them on 350nm or 500nm processes. Visually inspect some of them.

Btw, accidentally discovered ECL logic chips when Googling for this response. Apparently, it was even faster than TTL for use in things like mainframes. Also better side-channel resistance. So, make that two runs on old fabs. 🙂

Clive Robinson April 11, 2016 11:57 AM

@ Wael, Nick P,

Wouldn’t you think the manufacture date can be tampered with?

It is quite frequently, along with speed markings etc.

Sometimes though the manufacturer lies when they mark the package up which is why the scammers get away with it.

Take a company like Intel or similar, they manufacture chips for the best spec and then due to pricing issues mark many premium parts down into the higher selling lower spec groups. It’s why quite a few “overclockers” get away with running lower spec chips at top spec performance.

So if you buy second hand stock of low to medium spec parts you can quite quickly “test them up” to a higher spec, then you have to re-etch the chip cap to reflect that and make a nice little earner. The problem is their “quick tests” will not always pick up on issues, so such marked up chips will in a number of cases die an early death.

In the PC game this does not really matter, as businesses can write them off tax-wise in as little as 18 months, so upgrade them quite cheaply overall. Likewise home PCs have only a three-month warranty in some parts of the world. So a chip dying at a couple of years rather than ten or twenty years is not going to attract any attention in the majority of cases.

Nick P April 11, 2016 12:07 PM

@ Clive

You got experience with Emitter-Coupled Logic? The Wikipedia page listed some impressive benefits. Power use was a drawback. That’s expected. Any uses or issues a quick Google would miss?

Clive Robinson April 11, 2016 12:10 PM

@ Nick P,

At the time ECL was up to 450 MHz whilst TTL was lucky to push 30 MHz; however, there were several prices to pay for that speed. Firstly, PCBs of the required spec were very expensive, tracking was a pain due to transmission-line impedance matching, the power requirements were a pain, etc., and the chips were invariably gold/ceramic at MIL spec, so they cost about ten to thirty times as much as commercial-grade TTL of the same gate count.

I’ve still got some tucked away somewhere, but they are worth more for their precious-metal content than for their functional value. Though I’m told NASA used to pay through the nose for them, which was just one reason the Space Shuttle got too expensive to keep going.

Wael April 11, 2016 12:21 PM

@Clive Robinson, @Nick P,

Sometimes though the manufacturer lies when they mark the package …

That’s exactly my point. It seems easier to etch a chip than to subvert it. So what assurance will you have that the “old” chip you got isn’t a subverted/re-etched one? Manufacture date isn’t such a strong control to use, then! You’ll have to work with the assumption that the chips you’re building a system with are “evil”.

Nick P April 11, 2016 1:53 PM

@ Wael

My question is instead what are the odds that THEY… who I get it from… were given a subverted unit. Run of the mill, cheap, older chips are unlikely to have been done that way because physical behavior would’ve been different.

Today, they might be able to pull some shit on new chips or new supplies of older ones. Not the ones already printed that I have good reason to believe are what they appeared to be.

Nick P April 11, 2016 1:58 PM

@ Clive

Damn. Didn’t know it was gold and stuff. You could still implement it on one of the modern CMOS or bipolar processes with similar benefits, though, right?

Clive Robinson April 11, 2016 4:22 PM

@ Nick P,

To rework an old saying…

    Thar be gold in them thar chips

Early chips were full of precious metals; gold was used not just to heavily plate leads but also to provide the links from the carrier to the chip, where they were “cold welded” by hand under a microscope.

As for the ECL masking etc, I’m not sure, if RobertT was still around I’d say chat to him.

One of the reasons ECL was fast is that the output stages were linear and not driven into saturation, but the price was quite high currents, etc.

Perhaps oddly, we regarded ECL as obsolete around the time the first 16-bit bit-slice processor chips with inbuilt register files, such as the Am29116, became readily available in the early ’80s.

The major use I had for ECL after that was in digital video and RF systems such as frequency synthesizers and counters, used as pre-scalers and early pulse swallowers and other digital radio stuff. For various reasons there is not much I can talk about on what such systems were to be used for, other than I still expect some of it will fall on my or somebody else’s head eventually.

Nick P April 11, 2016 7:11 PM

@ Clive

Thanks for the tips and insight. Interesting stuff. Another resource gave actual numbers: ECL was literally 10x the speed of TTL at something like 60x the power. Still acceptable power at 60 mW per MHz – 1 GHz would’ve been about 60 W, right? Just might need Cray’s cooling for multicore. Then they subvert THAT with an EMSEC attack. 😉

@ Wael

Thanks for the reminder. I sometimes forget little things. Periodic reminders keep that to a minimum.

@ both

Last thing for today was finding that old Z80s, etc., implemented XOR in pass-transistor logic. Apparently that makes it and some other gates common in crypto faster or cheaper. Might be worth remembering for anyone working on old nodes. Needless to say, I quickly found a synthesis method. 😉

Clive Robinson April 12, 2016 11:29 AM

@ Nick P,

Talking of the Z80, did you know that originally its ALU was actually 4-bit, not 8-bit, and it did some other quite nifty non-logic-gate tricks? Which is why it had a minimum clock speed of around 120 kHz, which made “single-stepping debug” well nigh impossible…
