Hijacking the PC Update Process

There’s a new report on security vulnerabilities in the PC initialization/update process, allowing someone to hijack it to install malware:

One of the major things we found was the presence of third-party update tools. Every OEM we looked at included one (or more) with their default configuration. We also noticed that Microsoft Signature Edition systems often included OEM update tools, potentially making their distribution larger than other OEM software.

Updaters are an obvious target for a network attacker; this is a no-brainer. There have been plenty of attacks published against updaters and package-management tools in the past, so we can expect OEMs to learn from this, right?

Spoiler: we broke all of them (some worse than others). Every single vendor had at least one vulnerability that could allow for a man-in-the-middle (MITM) attacker to execute arbitrary code as SYSTEM. We’d like to pat ourselves on the back for all the great bugs we found, but the reality is, it’s far too easy.
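The bug class the researchers describe can be made concrete with a short sketch. Nothing below is taken from any vendor's actual updater; the manifest format, file names and helper names are invented for illustration. The point is the minimum a safe updater must do: fetch its manifest over an authenticated channel (HTTPS, or better, a vendor signature with a pinned key) and verify each payload against it before executing anything.

```python
import hashlib

# Hypothetical manifest format: one "sha256  filename" pair per line.
# Over plain HTTP a MITM can swap both the manifest and the payload,
# which is exactly the failure mode in the report.

def parse_manifest(text: str) -> dict:
    """Map each filename to its expected SHA-256 digest."""
    expected = {}
    for line in text.splitlines():
        digest, _, name = line.strip().partition("  ")
        if digest and name:
            expected[name] = digest.lower()
    return expected

def ok_to_install(payload: bytes, name: str, expected: dict) -> bool:
    """Refuse any payload whose hash is absent from, or wrong in, the manifest."""
    return expected.get(name) == hashlib.sha256(payload).hexdigest()
```

The vulnerable pattern is the absence of any such check: the tool downloads an executable over HTTP and runs it as SYSTEM.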

News article.

Posted on June 6, 2016 at 6:10 AM

Comments

Kai Howells June 6, 2016 6:44 AM

I fail to understand what OEMs actually hope to achieve by installing all their crapware on new machines. I don’t know one single person who uses any of it, and no-one actually likes having it there.

You can’t add crap like this to Windows and expect it to run better – so what purpose does it serve?

Surely the OEMs expend non-trivial amounts of resources in developing this software – why do they continue to do so?

Nothing will run as well straight out of the box as a machine with a fresh install of Windows and nothing else.

I can understand that there are bundling deals between 3rd-party software vendors and OEMs – that’s why we get things like Evernote, CD-burning software, Outlook plugins, trial versions of Office and crappy AV software – but this doesn’t explain why the OEMs feel they have to put their own software in the mix as well.

With all the marketing that Microsoft is putting behind Signature Edition (basically a clean install of Windows with minimal other 3rd party crapware) – why doesn’t any OEM realise the value in this and go one step further – just install Windows and nothing else?

http://www.microsoftstore.com/store/msusa/en_US/cat/Signature-Edition-Computers/categoryID.69916600

keiner June 6, 2016 6:55 AM

@ Kai Howells

Money$? They might simply get paid for installing crap – a kind of subsidy on the price, huh?

Your perspective is totally WRONG; the customer is just the idiot who pays the money for the crap he buys. Nobody cares about your “product experience” or related marketing bullshit bingo. Proof: try to get adequate service for your “high-level” products, and you are out alone in the desert…

Capitalism is not there to make customers happy. But the shareholders.

steve June 6, 2016 7:41 AM

Isn’t this one of the ways FinFisher attacks the OS? By their FinFisher ISP appliance product? It intercepts MS updates at the ISP and deploys itself on your PC from there.

Mike Gerwitz June 6, 2016 8:03 AM

There’s a new report on security vulnerabilities in the PC initialization/update process

There is no one process for “the PC”; would you mind changing that to “the Windows”?

For example, on many GNU/Linux distributions, this report would be a non-issue.

Cormacolinde June 6, 2016 8:27 AM

@ Kai Howells

This isn’t about the usual crapware, though. We’re talking about the self-update software – about the only OEM software you’d want to leave on a system, since it lets users easily update drivers and various other software to protect them from the vulnerabilities that often crop up in those.

keiner June 6, 2016 8:34 AM

@ Mike Gerwitz

Same thing irritated me in the beginning. But count together 1+1 (IBM + Microshyte) and you’re done!

Btw, there’s nearly no consumer hardware you can buy with Linux out of the box, anyways…

Anonymous Cow June 6, 2016 10:38 AM

I just scanned the Duo paper Bruce linked to. The coverage on mitigation strategies is generic and high level.

Does anyone have concrete mitigation procedures for popular PC brands, especially ones that don’t involve wiping the system and installing a “clean” copy of the Windows?

de La Boetie June 6, 2016 10:46 AM

@Anonymous Cow – very sorry, but the only mitigation is to reinstall from scratch, with as clean and fresh a copy of Windows as MS will let you have, then install the necessary drivers as needed. A royal PITA, especially with update times, but necessary.

I don’t feel there’s an alternative; all the other controls you add afterwards are worthless unless you have a more solid base. And for laptops, it’s an opportunity to pop in a larger SSD than you’d otherwise get, given the fabulous margins they put on them.

If you’re corporate, this will naturally be rather more industrialised and slick.

K15 June 6, 2016 11:46 AM

Bruce, how about a standing post on the sidebar or someplace, about the government internet security org whose presence and activity would have averted this?
What are the utilities now, and how do we keep them reliable?

MrC June 6, 2016 12:08 PM

@ Anonymous Cow:

  1. First and best option is to reformat the drive and replace Windows with Linux (or, better yet, something like Genode when it’s ready for prime time).

  2. Second-best option is to reformat the drive and do a clean install of non-OEM Windows.

  3. Third-best option is to identify and manually uninstall all of the OEM crapware before connecting the computer to the internet for the first time.

  4. Fourth-best option is to identify all of the OEM crapware, figure out which registry entry, etc., is causing it to run on start-up, and manually disable its ability to run on start-up before connecting the computer to the internet for the first time. Then remember to never update manually. (Though, if you’re going to do all this, I don’t see any reason not to just uninstall the crapware instead.)
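As a rough illustration of option 4: assuming, hypothetically, that you have already read the name → command entries out of a Run key (with `winreg` or a tool like Autoruns), the selection step is just a blocklist match. Every entry name and marker below is invented for illustration.

```python
# Hypothetical OEM-crapware markers; on a real machine you would maintain
# this list by hand from what you find in the Run keys.
OEM_BLOCKLIST = ("updater", "assistant", "support center")

def entries_to_disable(run_entries: dict) -> list:
    """Return startup-entry names whose name or command matches the blocklist."""
    flagged = []
    for name, command in run_entries.items():
        haystack = (name + " " + command).lower()
        if any(marker in haystack for marker in OEM_BLOCKLIST):
            flagged.append(name)
    return sorted(flagged)
```

The actual disabling (deleting the registry value, or moving it to a `Run-Disabled` key) would then be done per flagged name.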

albert June 6, 2016 1:53 PM

If you’re forced to use Windows at work, then rule number one is ‘don’t do ANY personal stuff’ on that computer, EVER’. Now, -you- are relatively safe (HR drones notwithstanding). I don’t understand why folks use Windows for their -personal- business.
. .. . .. — ….

Richard June 6, 2016 2:12 PM

@ Mike Gerwitz

There is no one process for “the PC”; would you mind changing that to “the Windows”?

For example, on many GNU/Linux distributions, this report would be a non-issue.

You’re so right Mike ! – Linux is soooo much better…

For example with Linux Mint, you don’t even have to wait for an ‘update’ to get your malware; you can download a backdoor’d O.S. right from the source!

linux-mint-hit-by-malware-infection-on-its-website

Of course the Linux fanboyz will say that this was just a one-off freak occurrence.

Uh huh, NOW, all you have to do is verify your Linux Mint download ISO against its “MD5”.

The only problem is – MD5 as a verification mechanism is TOTALLY BROKEN AND COMPLETELY INSECURE.

With even the very modest computational power of a single desktop PC, it is trivial to generate a colliding pair – a “good” file for validation purposes and a “malware” file for distribution – BOTH OF WHICH HAVE THE IDENTICAL MD5 HASH.

So even after their recent disgraceful security screw-up – THEY ARE STILL CLUELESS ABOUT SECURITY.

In my experience, Ubuntu, Debian, Arch and others are only a little better security-wise, and ALL have had remarkably stupid security screw-ups – in part because the whole POSIX “C” language based development model STINKS and leaves a LOT of vulnerabilities – and partially because Linux twits are so stupidly arrogant about security.

Trust me on this – ‘open source’ projects are amongst the EASIEST to hack.

Easiest because it’s ridiculously easy to hide malicious code written in a cryptic language like ‘C’ in a bloated O.S. which has millions of lines of poorly documented code, managed by arrogant twits who think their O.S. is soooooo wonderful that it MUST be absolutely secure – and who cling to this belief as a matter of religious faith, despite obvious facts to the contrary.

I say these things as someone who uses Linux on a daily basis, but who no longer harbors any illusions about its so-called ‘security’.

Ray Dillinger June 6, 2016 2:34 PM

Wow.

I honestly had no idea. For the last umpteen years, every time I’ve gotten a new computer I’ve been installing a new hard drive in it before starting it up for the first time. (Step Zero: replace the drive. Step One: Install an operating system … ). I have vague memories of a few little pieces of extraneous crapware packaged by OEMs, but I hadn’t thought about them in years.

This is just a simple cost-saving thing, really; a supposedly lower-end machine built on a good motherboard, plus hard drive and RAM upgrade, is usually indistinguishable from a high-end machine that would come at a greater total expense.

The drives that I take out are usually good enough to stick into external enclosures for use as removable media. The first thing I do with removable media though, is format it.

huffed glue till my eyes turned blue June 6, 2016 4:13 PM

@ keiner

“How long until Win 10 is officially marked as “MALWARE”?”

“People are aware that Windows has bad security but they are underestimating the problem because they are thinking about third parties. What about security against Microsoft? Every non-free program is a ‘just trust me program’. ‘Trust me, we’re a big corporation. Big corporations would never mistreat anybody, would we?’ Of course they would! They do all the time, that’s what they are known for. So basically you mustn’t trust a non free programme.”

[Apple and Microsoft] “Those companies are very powerful. They are cleverly finding new ways to take control over users. Nowadays people who use proprietary software [programs whose source code is hidden, and which are licensed under exclusive legal right of the copyright holder] are almost certainly using malware. The most widely used non-free programmes have malicious features – and I’m talking about specific, known malicious features.”

[malicious features] “There are three kinds: those that spy on the user, those that restrict the user, and back doors. Windows has all three. Microsoft can install software changes without asking permission.”

  • Richard Stallman, a few quotes from an interview:

http://www.newint.org/features/web-exclusive/2012/12/05/richard-stallman-interview/
https://web.archive.org/web/*/http://www.newint.org/features/web-exclusive/2012/12/05/richard-stallman-interview/

Erik June 6, 2016 4:23 PM

Fortunately, nearly all of this software registers itself through Microsoft’s software installation processes – which means it’s fairly easy to get rid of via scripting. We have a script that runs on all machines regularly that gets rid of all of the crapware. We simply update it as necessary. It’s a few hundred lines that look like this:

start /WAIT “Uninstall Evil Bing Bar” msiexec /x {77F8A71E-3515-4832-B8B2-2F1EDBD2E0F1} /quiet
start /WAIT “Uninstall Trend Micro Client /Server Sec Agent” msiexec /x {BED0B8A2-2986-49F8-90D6-FA008D37A3D2} /quiet

etc.
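A list like Erik’s can be generated rather than hand-typed. A small sketch: the helper name is mine, and on a real machine the name → product-code table would be read from the Uninstall registry key (or `Get-Package`) rather than hard-coded.

```python
# Sketch: emit uninstall lines in Erik's format from a table of MSI product
# codes. The table here would normally come from
# HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall on a real system.

def uninstall_lines(products: dict) -> list:
    """products maps display name -> MSI product code GUID."""
    return [
        'start /WAIT "Uninstall {0}" msiexec /x {1} /quiet'.format(name, guid)
        for name, guid in sorted(products.items())
    ]
```

Feeding it Erik’s first entry reproduces his line exactly, so the generated script can be dropped straight into the same batch file.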

Yoshi June 6, 2016 5:51 PM

This reminds me of how most BIOS/firmware updates that I run into are NOT checksummed and are also delivered in the same non-secure ways.

To me the BIOS/firmware is probably the most important part of the system, and yet it’s given the least amount of care by some tech supporters/manufacturers.

It really makes me kind of sad and mad at the same time.

jdgalt June 6, 2016 7:05 PM

In other articles linked from this blog, it is made clear that operating systems behaving as malware are just the tip of the iceberg. We have BIOSes that can be updated remotely, disk drives that can be addressed over the Internet, and probably back doors in CPUs themselves.

Clearly the community needs a whole array of secure technologies that don’t yet exist (or have existed in the past but were abandoned). We need to generate enough demand that the market produces all of them — and do enough checking up that they can’t lie to us and get away with it. The job of building good security really hasn’t even begun yet.

Clive Robinson June 6, 2016 9:50 PM

@ Yoshi,

To me the BIOS/firmware is probably the most important part of the system, and yet it’s given the least amount of care by some tech supporters/manufacturers.

It’s not just the main BIOS/firmware to consider, there is a whole load of other “hidden” computers in I/O devices as well including the keyboard. Most of these can be messed with these days. Oh and don’t forget the “Smart batteries” like those found in Apple portable equipment.

But it also goes lower than this, there are CPU “microcode upgrades” to consider as well…

@ All,

The base assumption to work from these days is that you do not in any way own your computer; those days went at the turn of the century. Likewise your router etc.

Thus you have to consider how you are going to behave at the physical OpSec level.

Complaining and moaning about OS XXX is a little bit like arguing the shape of the hole below the waterline in the side of your boat is important when you are sinking.

As with the boat you need to “beach it” and make sure it’s “ship shape” from the lowest levels up. Only you can’t…

Because everybody gave a pass to “bells-and-whistles-tomorrow marketing” instead of insisting on “security-today engineering” – and likewise with the hardware – we have grown the computer industry of our current nightmares one purchase at a time for the last thirty or so years. It is too late to scurry off backwards and do anything about it; you can no longer wind that clock back.

That is the reality of it, so accept it and make your plans based on the fact that you don’t own your computers at any level the majority of people can understand, let alone control.

For instance, it is known that the PC architecture has had a security flaw in its original design which is certainly still there today in Win 10 operation. It was put in for good reason: so that “added hardware” via interface cards could add software to the computer so they could work. This feature was a “direct steal” from the Apple ][ design that preceded the PC by a long way and was the most popular computer of its time, back in the 1970s and early ’80s.

It was only a short while ago that a Chinese laptop manufacturer got “rapped over the knuckles” for using this mechanism to hide advertising-type malware in the BIOS image, so this backdoor into your PC is well known about. Even taking out the hard drive and installing a new OS was not going to stop the malware getting up into the freshly installed MS Win OS.

Thus, as has been described on this blog before, you have to develop an entirely different strategy. So you have to ask how to use PCs to get your required degree of privacy and security…

In essence you need an “off-line PC” and an “on-line PC”, and a way to securely transmit data between the two PCs in a mediated way, to have any chance of a little privacy.

However, as always with ICTsec and OpSec in general, “the devil is in the details”… And if your particular demon has state-level resources and a picture of your face on their wall with “cross hairs”, then you have a harder problem on your hands. Which means you have to take your security further than just the “off-line PC”. It needs to go back before the PC, through your eyes, head and hands (see previous posts about securely authenticating transactions).

However, for the majority, a little change in behaviour will get them a lot of privacy and some security. Firstly, remember everything you type, every picture you post, and nowadays every word you utter close to your computer’s microphone will be whisked away to the likes of Google, then onwards to some US Government agency etc. etc. George Orwell might have expressed it in a more dystopian way, but he was not wrong about the technology, or the politics, when he wrote 1984 back in 1948.

So step one would be: treat your computer not as a confidant but as a potential mugger on the street.

Thus, for most of you: give up both the faux and real convenience the PC may give you if you need “security”; or, if you must use the PC and want only some measure of privacy and a little security, then practice sensible OpSec and don’t bleed every drop of your private and financial life through it…

Clive Robinson June 6, 2016 11:41 PM

@ Yoshi, All,

I forgot to add a link to the three blatant “crapware / snoopware / malware” offences of Lenovo in 2015:

http://www.computerworld.com/article/2984889/windows-pcs/lenovo-collects-usage-data-on-thinkpad-thinkcentre-and-thinkstation-pcs.html

Of the three offences,

1, Superfish adware.
2, Service Engine Software.
3, Lenovo Customer Feedback program

It’s number two you need to have a think about, because it was effectively “put in the BIOS” so you could not remove it…

Whilst the Lenovo Customer Feedback Program 64 is “spyware”, it’s on their “professional / business” line of machines (ThinkPad etc.), not the consumer ranges that the first two were on. This of course has legal implications for businesses in all sorts of ways, and it shows that Lenovo really do not think or act appropriately. Something Microsoft should heed with their Win10 “spyware” and “crapware” forced updating in Win7 and beyond, which has caused some people to unknowingly incur huge data charges when MS downloaded Win10 in the background without the user’s express, or for that matter implied, permission.

Whilst it is possible to own something you cannot control due to your own limitations, having something you cannot control unless you have well-above-normal, almost “super-human”, abilities means in effect that not only do you not own it, but others do not care about their “duty of care” when they force such “rent-seeking” malware down your throat.

Figureitout June 6, 2016 11:44 PM

Clive Robinson
–Yeah, for targeted attacks maybe, but I think that’s still a bit overblown. There’s no way in hell they can store all this data (unless there’s some ultra-compression algorithm) and do something useful w/ it; more likely it’s all simply a massive waste of power, and humans analyzing crap data producing a lot of useless reports.

If this were true, why aren’t all the embedded chips all around us failing nonstop? Internet goes down? Our world would simply crumble if this level of hacking was actually feasible b/c we all know some people would do it if they could.

I agree w/ the gist of your sentiments, just the reality of making use of all that data. I don’t believe it.

Clive Robinson June 7, 2016 12:27 AM

@ Figureitout,

I agree w/ the gist of your sentiments, just the reality of making use of all that data. I don’t believe it.

Like all bubbles, “Big Data” has kind of had its day from the commercial perspective, though the IC want their “time machine” so will not give up on storing it (on the “just in case” principle that costs more than any king’s ransom ever did).

The whole “Big Data” idea hung on two assumptions. The first was that there was data that could be sensibly “commoditized”. The second was that the “numerati” could come up with algorithms to do the commoditization. Of the two, the second has nowhere near been achieved in a “cost effective” way, and the first looks fairly shaky due to “data pollution”, both accidental and deliberate, as people try to stop themselves being commoditized.

But even where worthwhile information has been obtained, it appears that its value, due to other constraints, is worth far less than was originally expounded. Interestingly, that and the “ham-fisted” sociopathic behaviour of the FBI/DOJ with regard to Apple has made a number of businessmen think (funny how potential ruin and jail time focuses some of the smarter people’s minds). They have suddenly realised that “the little extra” income they were getting from the “collect it all” policy on user data is not worth the attached risk of the likes of the FBI/DOJ, or of bad public opinion sending customers to competitors in “safer” countries. Thus some are stopping the collect-it-all, as the risk is now seen as too high.

However that said, it will not be long before they find ways to “externalise the risk”. One way is to use a third-party data collector within their software that sends the data, in an encrypted form they never see, to a data aggregator. The aggregator then pays them a few cents but takes on the risk of the FBI/DOJ as part of that. Which raises the question again of whether there is sufficient money to be made in the aggregation of data to keep the likes of the FBI/DOJ at bay (Microsoft are fighting this one at the moment, so watch for a judgment).

Like you my thinking is that there is not enough money in it and the bubble will either pop or deflate as the pendulum swings back the other way (think on it as kind of the “ringing” on a fast rising edge, that will eventually settle at some mean level hopefully at near zero).

keiner June 7, 2016 2:10 AM

@Yoshi

Yesterday I wanted to update a brand new Gigabyte board to the latest BIOS and found:

  • download via http
  • no checksums on site or anywhere to be determined from the homepage with reasonable effort.

But it’s a UEFI BIOS update. So what could go wrong here?

Winter June 7, 2016 4:39 AM

@Clive
“In essence you need an “off line PC” and an “On line PC” and a way to securely transmit data between the two PC’s in a mediated way, to have any chance of a little privacy.”

All true, but more than a little disheartening. Where to start to solve these security challenges?

What would be a good model for investigating this?

You do not trust the hardware of either PC, and your solution seems to rely on isolating the off-line PC from all communication channels (including direct power access) and then carefully control all communication between the PCs. I was wondering whether such an approach could be used between the hidden computer components inside a PC?

Maybe some things could in future be done with homomorphic encryption?
https://en.wikipedia.org/wiki/Homomorphic_encryption

Clive Robinson June 7, 2016 5:03 AM

@ Winter,

I was wondering whether such an approach could be used between the hidden computer components inside a PC?

Probably not.

We used to call it “air gapping”, but more strictly it needs to be called “energy gapping”. As energy in one form or another will travel through just about any medium you can think of, one of the best defences is distance, because signals tend to fall off or attenuate with between the square and cube of distance.

Whilst shielding does work, you have to know what it is you are shielding against. For instance, you decide it’s EM radiation, so you put things in a nicely made metal box with internal metal partitions that are nicely welded/brazed and silver plated. It does not work if in fact you have magnetostriction in one coil causing mechanical vibration that gets coupled into the box, which then gets mechanically coupled to another coil that picks it up via the process of microphonics… In fact, the more solid the metal construction, the more likely this is to happen.

Similar arguments go for ventilation slits/holes and acoustic energy.

There is a lot you have to know when doing security engineering that most engineers don’t get to think about normally. Our host Bruce calls it “thinking hinky”; whilst that does come into it, you need to understand what to “think hinky about”. To do this you need a very good grounding in “test and measurement techniques”, as well as a deep understanding of what a transducer is and why.

But a good rule to remember is “distance” be it real or synthetic is usually good.

Who? June 7, 2016 8:01 AM

@ Yoshi, Clive Robinson

Yoshi has a valid point here: firmware updates must be signed, or at least have secure hashes available. Not only BIOS or UEFI updates – all other firmware classes should be authenticated and provide secure downloading methods (AMT, ECP, HDD/SSD, LAN, WLAN, WWAN, USB controllers, video, processor microcode, battery, diagnostics, optical drives…).

At least we should have valid strong hashes and the ability to download the firmware images from an HTTPS server; in a perfect world all these blobs should be signed so they can be easily verified by the computer during the upgrade process.

UEFI firmware is usually signed, as are the ECP, processor microcode or video firmware when they are part of the same bundle, but in most cases firmware cannot be automatically verified. Even in this case, secure download channels and hashes should be provided. I download firmware over HTTPS and verify its hash when possible – even if just to rule out a transfer problem – before applying an update, even when the firmware is signed.
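That download-then-verify habit can be sketched in a few lines. The URL and hash values are placeholders, and a rejected check obviously just means “do not flash”, not that the image is proven clean:

```python
import hashlib
from urllib.parse import urlparse

def safe_to_flash(url: str, image: bytes, published_sha256: str) -> bool:
    """Refuse plain-HTTP downloads outright, then check the vendor-published hash."""
    if urlparse(url).scheme != "https":
        return False
    return hashlib.sha256(image).hexdigest() == published_sha256.lower()
```

Only after this returns True would the image be handed to the vendor’s flashing tool; signature verification by the firmware itself remains the stronger protection.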

Hopefully NIST has some good advice that has been followed by most manufacturers in recent years:

http://dx.doi.org/10.6028/NIST.SP.800-147

But, as said, this is a recent set of recommendations. Even some printers (like the HP LaserJet P3005) have signed firmware updates now. With regard to signed firmware updates this printer is a valuable example, as its first firmware versions were not signed but the feature was added later. I would invite any serious hardware manufacturer to re-release its most recent firmware with a signature-verification algorithm, if not supported yet, making unauthorized modification of its code more challenging.

By the way, recently I worked with a few cheap i3 computers (Acer?) that had flash BIOS and ME write protection switches on the motherboard. But it is unusual these days.

Another problem is our confidence in these binary blobs. Can we trust UEFI, AMT, our processors’ microcode or other firmware? I try running my computers firewalled, using a strict egress-filtering policy where possible, to minimize the impact of a backdoored firmware, but I cannot be sure it is enough. On the other hand, if something odd happens with our firmware it should be known to security researchers, right?

With regard to Lenovo I understand why they add bloatware to their low-end computers. Each bloatware manufacturer pays Lenovo to add its software, so these computers can be relatively cheap. Anyone is free to change the operating system that runs on these computers. In my humble opinion, Windows itself is the most dangerous malware installed on these computers.

A different matter is the dangerous management software provided in the UEFI firmware of these computers. I know, Lenovo Service Engine (LSE) has been widely criticized, and with strong reasons. But it was quickly removed from the firmware of affected Lenovo systems (again, low-end series only). Please, do not point your finger at Lenovo: LSE is an implementation of the Windows Platform Binary Table (WPBT). Why are users accusing Lenovo while Microsoft is the author of this abhorrent technology?

Allowing control of the operating system from the computer firmware is a poor design choice. LSE just uses a technology that has been implemented in Windows by Microsoft, and this technology has not been exploited only by Lenovo’s LSE. I remember how, some time ago, Microsoft remotely changed a few files on all Windows systems through a backdoor in the Internet Explorer browser to recover from a bad patch that blocked further system updates. This time the technology was used for good, but it can obviously be used in very wrong ways, or even exploited by criminals and intelligence services to plant malware, child pornography or other compromising files on computers running Windows.

Anyone running Windows 7 was able to test this widely deployed PC-update-process hijacking on his own computer two years ago. Good for Microsoft that it helped its users, but I would avoid an operating system that gives its manufacturer such a high degree of control over our computers.

Clive Robinson June 7, 2016 11:28 AM

@ Who?

Please, do not point your finger to Lenovo, LSE is an implementation of Windows Platform Binary Table (WPBT). Why are users accusing Lenovo while Microsoft is the author of this abhorrent technology?

As I noted above, Microsoft did not invent the idea; Apple did, for the Apple ][ back in the 1970s, so that IO cards could load software into the system from onboard ROMs (it’s why the Apple disk IO card only had seven chips, as opposed to the thirty or forty other drive controllers had). IBM filched the idea for the PC so it too could load ROM code to control the cards. Microsoft inherited this, and you’ve been able to put ROM code into a PC ever since. So back in the early days of DOS and later Win you could overwrite the likes of autoexec.bat etc. (which some early video cards and hard drive controllers did, and which is alleged to have happened in BadBIOS more recently). Microsoft just evolved the technology into WPBT.

Now the question is: who do you blame when cookies get stolen from the cookie jar? The kid that took the cookies, the person putting cookies in the jar, or the jar maker for not putting a child-proof lid on it?

When you look at WPBT, MS put it in Win8, not Win7; it was Lenovo or one of their suppliers that effectively backported the idea into Win7. So personally I think Lenovo get the blame for being not just the kid stealing from the jar but also the one supplying the cookies and, along with IBM, an inferior jar maker, whilst MS was only making a new jar to retain compatibility over thirty years…

With regards,

Yoshi has a valid point here, firmware updates must be signed or at least have secure hashes available.

Hashes are of no real use when you have a state-level or equivalent attacker upstream of you doing a MITM attack; the network is a pipe, not a security mechanism, and is inherently insecure. As Stuxnet showed, neither hashes nor code signing are of any security use there. Also, as I pointed out for a long time before Stuxnet, neither the hash nor the code signature means squat diddly about the code quality or security, and does nothing to stop insider attacks.

With regards to NIST SP 800-147: have you actually read and thought about section 3?

Because it does not solve these security issues. And most of all it does not solve what you note with,

Another problem is our confidence on these binary blobs. Can we trust UEFI, AMT, our processors microcode or other firmware?

Which as you ask it in a rhetorical fashion I’m sure you know the answer is “No” plain and simple. Which brings us onto your question of,

On the other hand if something odd happens with our firmwares it should be known to security researchers, right?

Not really. How long and how often has “suspect code” been uploaded to various AV research lists etc., only to basically be ignored for months, years and in effect decades for some classes of attack vector (i.e. loading ROM code, boot-sector attacks etc.)? Put simply, there are too few researchers and too much malware; that’s why “China APT” made so much noise several years ago, and proved, if proof were required, that the “squeaky wheel” principle applies to AV companies. So from an attacker’s point of view, keep it “low and slow” or “tight and out of sight” and you will boil any number of selected frogs before the rest ever catch on.

Which brings us to your “operational” comment,

I try running my computers firewalled, using a strict egress filtering policy if possible, to minimize the impact of a backdoored firmware, but I cannot be sure it is enough.

I can tell you for free it’s nowhere near enough, even for everyday Jo’s privacy/security, as is becoming obvious to more and more people. Modern commercial OSs, not just MS offerings, try too hard “to be all things to all men”. Which means the attack surface of commercial OSs is generally way bigger than most people can even realise, let alone understand sufficiently to mitigate.

Which is why I basically say “don’t try, most won’t succeed” against many of the low-hanging crook attacks. As for your better-than-average crook, they are always going to find a zero-day you’ve not prepared for. And even if you have prepared, if you are targeted by a state-level attacker it’s game over. They will, if required, do either a “black bag” job against your systems when you are not around or, if that does not work, a “wet work” job against you in person: if you are lucky your pocket will get picked, if you are less lucky it will be a wet cloth over your face, and if you are unlucky the $5 wrench or “potty trainer” will be only a short-term issue that you will not get to tell anybody about before they drop you in their chosen waste-disposal system…

It’s why I talk about the need for human OpSec as much as I do technical measures. The squeaky-wheel issue applies to everybody: if you don’t make noise, the chances are you will not become a target. Likewise, if you don’t squeeze every last drop of private and financial information into social networking and online purchasing / banking, you will be much less of a target.

However, not having an online presence is almost deemed suspicious these days, so you do have to be slightly visible. But what you mainly say and do can be kept inoffensive (unless you live in the likes of Thailand, where anything other than glowing praise for the monarch is treated as a worse crime than murder).

It’s why I advocate as a minimum a “two machine” solution, where you have an effectively sacrificial “On-Line” PC to establish a presence, and an “Off-Line” PC which you never connect to any potential communications path and on which you keep your private thoughts and actions. Whilst this second PC is often called “air gapped”, it needs to be a lot more than that, hence I refer to it more accurately as “energy gapped” these days.

Depending on your level of need you may have to get data to or from the Off-Line PC “online”, but this does not mean you have to use your “On-Line” PC; in fact I would suggest you do otherwise. As for your On-Line PC, I would recommend a stripped-down turn-of-the-century PC with a lot of RAM, no hard disk, and an older DVD-ROM (i.e. read only) drive, running one of the smaller Linux OSs.

If you do need to get data on or off it “locally” there are various ways to do this. One is to use an RS232 serial connection through a mediating device which acts as a Guard. Others are a printer, or a scanner feeding OCR software. As a matter of “OpSec cover” it is best to keep a reasonable quantity of the printouts in a filing cabinet etc., so that anyone who gains access to your property sees just a load of paper records, not a hidden-away hard disk or other electronic storage. You will also need to shred some of the more innocuous printouts and put them in the household rubbish, or get a rabbit/hamster, use the shreddings as bedding in its cage, then chuck them out in the refuse.
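Clive’s guard is a hardware device sitting between two serial ports; as an illustrative sketch only (not his design), the filtering core of such a guard might look like the following, with the two serial links modelled as lists of byte-string lines and a deliberately simple “short printable text only” policy:

```python
# Illustrative sketch of the filtering core of a serial "Guard".
# A real guard runs on a separate device (e.g. a microcontroller with
# two UARTs); here the serial links are modelled as lists of lines.
import string

# Whitelist: printable ASCII plus CR/LF only.
ALLOWED = set(bytes(string.ascii_letters + string.digits +
                    string.punctuation + " \r\n", "ascii"))
MAX_LINE = 256  # reject anything that does not look like a short text line

def filter_line(raw: bytes):
    """Pass a line through only if it is short, printable text."""
    if len(raw) > MAX_LINE:
        return None                       # oversized: drop, could hide binary
    if any(b not in ALLOWED for b in raw):
        return None                       # non-printable bytes: drop
    return raw

def guard(inbound):
    """Apply the filter to every line crossing the guard; drop the rest."""
    out = []
    for line in inbound:
        kept = filter_line(line)
        if kept is not None:
            out.append(kept)
    return out
```

The point of the default-deny whitelist is that anything the policy does not explicitly recognise as harmless text simply never crosses to the protected side.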

But as with all OpSec it is all too easy to be in a hurry or some such and blow your cover, hence keep stuff away from home and thus don’t dirty your own doorstep.

Clive Robinson June 7, 2016 11:41 AM

Oh darn…

@ Who?,

In my above I mucked up the blockquote tags when pasting in part of your post, thus this,

Hopefully NIST has some good advice that has been followed by most manufacturers on the last years:

Got left out between “With regards,” and “Have you actually read…” at the start of the indent for the rest of my comment back to you.

Note to self :- Remember to use Preview… 🙁

wumpus June 7, 2016 2:51 PM

@Mike Gerwitz

It doesn’t help that Mint is likely the place potential new Linux users would go. Supposedly Canonical is cleaning up some of the more blatant issues with Ubuntu (presumably sending all your searches to Amazon), but I’d still point a newbie at Mint.

The bigger problem is that a user has to be willing to learn a whole new set of software. Sure, much of it is just chrome, and lots of other stuff is straightforward, but the details are killers. And I don’t think even a third of my Steam games work in Linux (although with Kerbal Space Program and Civ5, that should be enough). Anyone who has stuck with Microsoft this long has to be seriously motivated to build a [slightly] more private machine.

Who? June 7, 2016 3:12 PM

@ Clive Robinson

I can assure you that you know better than me. I am glad to be able to learn from people like you and other readers of this blog every day, so let me start by saying a big THANKS for the valuable information you are sharing here.

However, I do not fully agree with you on who invented the WPBT-style technologies. What Apple did on its early computer systems is not comparable to Microsoft and its WPBT technology. Apple just allowed customers to install internal expansion cards on the Apple II. We have similar firmware cards today, say the ROM cartridges in the DEC VT-xxx family of serial terminals. I see nothing wrong with allowing a user to install a card that expands the features of a computer; it is comparable to installing a software package.

However, WPBT is a very intrusive technology that permits software to be installed remotely without user authorization or knowledge. My point is that firmware should not be allowed to interact with an operating system in this way, and an operating system should not provide this kind of unrestricted access to its supposedly private resources. Firmware should not be a means of persistence, at least not without user acknowledgment. Customers who care about theft may hire a corporation like Computrace or Intel to track their devices, but this should be a choice, and it should be disabled by default.

Now the question is: whom do you blame when cookies get stolen from the cookie jar? The kid that took the cookies, the person putting cookies in the jar, or the jar maker for not putting a child-proof lid on it?

Both Lenovo and Microsoft are to blame. LSE would not have existed without collaboration from Microsoft. It is a highly intrusive technology. Perhaps it has its place in remote management, but it is a feature that should be disabled by default and certainly restricted to a few trusted management stations. I do not know Lenovo’s goal. Perhaps they were testing this technology on low-end systems to analyze users’ response before applying it to more expensive, and critical, gear.

Hashes are of no real use when you have a state-level or equivalent attacker upstream of you doing a MITM attack; the network is a pipe, not a security mechanism, and is inherently insecure. As Stuxnet showed, neither hashes nor code signing is of any security use. Also, as I pointed out for a long time before Stuxnet, neither the hash nor the code signature means squat diddly about the code’s quality or security, and neither does anything to stop insider attacks.

Sure, firmware should be protected by means of digital certificates and ideally, once installed on a computer, by write-protection jumpers like the ones protecting Sun’s OpenBoot PROM on sparc64 systems. I said strong hashes and secure download channels should be a minimum, just to assure integrity in the case of a failed download and to provide some basic protection against less advanced attackers.

Have you actually read and thought about section 3 of NIST SP 800-147?

Sure, I am not saying this document is perfect, nor am I saying in any way that NIST must be trusted. However, in recent years more and more manufacturers have been providing secure methods to upgrade the firmware on their devices through signed images. This document has at least improved the situation compared to previous years, when any binary could be written to the flash chips. This mechanism, even being mathematically sound, is not perfect either; consider the sleep-mode configuration bypass on UEFI firmware.

Indeed, we should not trust firmware. It can be backdoored (in fact, I would be really surprised if it is not, as most firmware manufacturers are U.S. corporations just one small step from a government NSL). Even where firmware has no backdoors, it is not written by security-focused developers and, in recent years, it has become extremely convoluted.

About firewalls:

I can tell you for free it’s nowhere near enough, even for everyday Joe’s privacy and security, as is becoming obvious to more and more people. Modern commercial OSs, not just Microsoft’s offerings, try too hard to be “all things to all men”. Which means the attack surface of a commercial OS is generally far bigger than most people can even realise, let alone understand well enough to mitigate.

I know, commercial operating systems are not really protected even behind the strictest egress filtering policies. These operating systems may be compromised by just downloading the wrong Excel or PDF file, not to mention email attachments or hijacked updates. However, a restrictive filtering policy (on both outbound and inbound traffic) implemented in an OpenBSD firewall running on simple devices (PC Engines APU systems, old i386 gear, arm, sparc/sparc64 hardware…) and protecting a small network of OpenBSD computers is a good starting point, in my humble opinion. It is not perfect, but it should be enough for Internet-facing computers.
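For readers unfamiliar with pf, a minimal default-deny ruleset of the kind described might look roughly like this (a sketch only: the interface name and the list of allowed services are placeholders you would adapt to your own network, not a recommended production config):

```
# Minimal default-deny pf.conf sketch for OpenBSD pf.
# "em0" and the port list below are placeholders.
ext_if = "em0"

set skip on lo
block all                     # default deny, inbound and outbound

# Outbound: allow only DNS, NTP, HTTP and HTTPS, with state, so
# replies come back but nothing else gets in or out.
pass out on $ext_if proto { tcp udp } to any port { 53 123 80 443 } \
    keep state
```

The value of the default-deny shape is exactly the point made above: anything a compromised host tries to send that is not on the short allowed list is simply dropped at the firewall.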

Of course air-gapped (or, as you suggest, energy-gapped) computers are important too. I have a few, in the same way I have off-line storage (usually on CD-ROM) for critical files, like firmware images, which I download periodically from different networks on different days just to compare the checksums against the images stored on non-writable media. When using ISO images from a manufacturer is not possible, I run the firmware updates from a Windows PE 3.0 medium written to a non-writable CD-ROM a few years ago. In all cases the firmware is applied off-line and moved to the air-gapped computer on non-writable CD-ROMs, to avoid the risk of USB devices. I have a few “air-gapped” USB devices too, the only ones I use on the air-gapped computers, to provide the firmware updates to the WinPE CD-ROM when writable media is required (e.g., when certain installers need to be used to run the firmware upgrade tool). Of course, as soon as one of these air-gapped flash drives is used on a non-air-gapped computer it is never used again on the air-gapped systems. I know it is far from perfect, but it is the best I can do, as there is a risk in anything downloaded from external sources.

Sometimes I download firmware images years after they were applied, just to verify that the checksums of the original media have not changed. Where a corporation gives a choice, I get the firmware images from different sources.
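The cross-checking described above is easy to script; a minimal sketch (mine, not Who?’s actual tooling) that hashes several independently obtained copies of the same firmware image and flags any mismatch:

```python
# Sketch: compare SHA-256 digests of the same firmware image obtained
# from several independent sources/downloads. Matching digests only show
# that the copies are identical; as discussed elsewhere in this thread,
# they say nothing about whether every source served a tampered image.
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of one downloaded copy."""
    return hashlib.sha256(data).hexdigest()

def all_copies_match(copies) -> bool:
    """True if every downloaded copy hashes to the same digest."""
    digests = {sha256_of(c) for c in copies}
    return len(digests) == 1
```

Comparing downloads fetched from different networks on different days raises the bar for an upstream MITM, since the attacker must now tamper with every path consistently.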

Indeed, I try to maintain a low profile on all my activities. And I can assure you all these activities are completely legal ones, I just try to be careful.

The attack surface of a simple operating system like OpenBSD should be easy to understand for an advanced system administrator. So the firewall is mainly there to stop unexpected activity from the few software packages or bits of firmware installed on my computers. Just basic operational security, nothing advanced.

One of which is to use an RS232 serial connection through a mediating device which acts as a Guard.

This sounds clever. Would you mind telling us a bit more? How is the serial connection used? I understand you are not encapsulating full TCP/IP over it. Are you using it just to transfer files with, let us say, Kermit?

Thanks!

Yoshi June 7, 2016 4:44 PM

@Clive R:

Surely you aren’t advocating the abolition of checksums, signatures, etc. just because of Stuxnet-type attacks? The whole Internet isn’t Stuxnet.

So yes, valid checksums still have their value. Don’t throw the baby out with the bathwater. Yes, there are multiple other points of vulnerability in a transfer of data, but it makes no sense to drop one security feature that is still valid just because a different vulnerability may (sometimes) exist elsewhere.

Most people would value some form of validation, even if imperfect, that their software isn’t corrupted, infiltrated with malware, or replaced with something else entirely.

Yes, it’s important to have data integrity and security throughout the chain, but just because one vulnerability might sometimes exist doesn’t mean you give up and add more vulnerabilities.

Some things aren’t mutually exclusive.

pwned June 7, 2016 5:24 PM

Holy crap, I had never heard of WPBT. I guess I am never running Windows 8 or 10 then. Why would MS intentionally introduce such a dangerous vector for malware? Did the NSA insist on it or something?

moz June 7, 2016 5:31 PM

@richard’s comment is interesting: it shows clear commercial bias and looks like a reminder that Microsoft will never give up on its FUD strategy outlined in the Halloween Documents. It mixes a small amount of truth (some Linux distros have had real security problems / many are not nearly careful enough) with a large part of FUD, and ignores the key facts.

The key fact is that RHEL is basically the only place where you can get a complete signed system (no, kernel + GUI + some admin tools as delivered by Apple and Microsoft does not a system make) with properly considered and implemented security. Their signing infrastructure has actually been shown to stand up to compromise, and they do appear to take things seriously. It is also, alongside OpenBSD, one of the few places that take security seriously and attempt to deliver the entire source code for their system, allowing you to audit who caused a system compromise once it is discovered. You can expect to get a system where you know the entire code base is built by one organization on one set of infrastructure, from source to delivered, signed, verified package.

In this area, as in many others, you get what you pay for. RHEL security is almost acceptable. Fedora, with fully working SELinux and a serious attempt at a proper release process, is live-with-able. No other commonly available operating system (and I am staring hard at OpenBSD, which lacks MAC and only recently implemented working binary updates for normal admins) comes close.

Linux is not nearly perfect. Multics was almost certainly better. VMS probably came close and was made much longer ago. However, in a world in which the default language for writing large parts of operating systems remains C, and competitors like Microsoft continue to leave Office macros on for commercial reasons even though they are a major cause of compromise, Linux is the slightly glowing coal in the gloom of Mordor. Let us give it the credit for that until we get a better option. Maybe one of the new proposed operating systems based on Rust? Maybe a future version of Qubes? At least they seem to care about their build process.

Nick P June 7, 2016 6:48 PM

@ moz

Overall, good outlook of it. I’ll add a few points.

re Windows

Richard is mostly right that there are tons of issues in FOSS right now, outside a few high-quality projects and OSs. I was just countering the many-eyeballs myth earlier, where someone thought it meant something for security. It does for debugging, but not for security. Security takes responsibility by a team. That’s uncommon in both proprietary software and FOSS, sadly.

Now, Microsoft started as a clone of the best OS of the 80’s. As far as I can tell, its problems developed from a rush to market with no concern for security, weird stuff required for backward compatibility, and code to make broken third-party stuff work. The reason was to get a near monopoly, which they did. Then they had to fix it once the problems got too big. Lipner’s SDL turned it around quite a bit. They did mandatory integrity controls, sandboxing, some whitelisting, safer languages for apps, transformations for unsafe languages, formal verification of hypervisors… all sorts of stuff. Now there are hardly any 0-days found in the Windows kernel compared to Windows apps and FOSS software. That speaks wonders for its current quality.

If only they’d stop being such subversive and expensive assholes, eh? Then I might become a customer again. As you said, specific Linux or BSD OSs offer a much better package for users, plus source for anyone willing to inspect it.

re build process, securing it, and… the software supporting that

Like “Reproducible Builds,” the Qubes work is another effort that is barely adequate and ignores history. You seem to know some OS history, as you named one that still beats modern ones in some respects. Wheeler already wrote the definitive guide on securing SCM and builds, with many relevant links. Here’s a start on principles and methods for ensuring the correctness of systems and software while eliminating subversion from the lifecycle. I take it further here with a post countering “reproducible builds” that describes, immediately and in a follow-up comment, how to do subversion-resistant, verifiable builds. It is a much larger issue than OSS or most proprietary shops are willing to tackle, despite tools already built to help with some of the components.

Red Hat doesn’t have it, though it is better than many. Qubes certainly doesn’t have it. Most FOSS doesn’t have it. Most proprietary shops not only are missing secure SCM but take steps to prevent us from knowing. 😉 Shapiro’s OpenCM, Aegis, and others like them that try get ignored and wither. So this is one of the software lifecycle’s most longstanding problems. Unusually, they have everything they need to know and use to deal with it, too. There is just outright refusal, outside some outliers that deploy a half-assed or decent custom process approximating the prior work.

Figureitout June 7, 2016 9:35 PM

Clive Robinson
–The fundamental flaws in your advice are a) use older computers (they’ll all be dead in 10-15 years regardless) and b) how do you develop your offline PC with a PC that has verifiably never been connected to another PC that has malware? Solving a) is only possible by creating new hardware and being able to check it, which is a problem that’s yet to be solved. Solving b) is again more or less impossible. Something I do at my work is send commands to a small infrared receiver that actually has an “MCU” in it (this thing is so tiny you would not believe it) from a completely separate chip over I2C. We write to registers from one firmware to another. It’s weird, writing code for two chips at once. This kind of thing makes you wonder whether the registers of, say, 5-6 chips could be touched by just one chip, so long as the connections are correct on the motherboard; the actual chips can be disguised as a power transistor or op-amp or even a rectifier. It’s a false sense of security even if there are minimal ICs on the board. F*cking impossible.

People know code signing is not a silver bullet, but there are no better ideas, nothing practical to implement. Insider attacks are impossible to prevent short of strip searches on entry/exit and total surveillance (and surveillance on the surveillance; it’s surveillance all the way down); no business would survive that, as no one would work in that kind of place. So it’s not some revelation that code signing isn’t secure in itself if the computer signing the code is infected. It rather means that that specific computer needs to be compromised, and that binary targeted, for the attack to work.

State-level attackers aren’t game over; some of them suffer from a severe Dunning-Kruger effect. It’s pretty clear when it happens, since they don’t know how to be really paranoid the way everyday attackers do, because they can sometimes operate with no fear of getting arrested. Once you go through it and get your free training in, you can sneak quite a bit past them and they won’t have a clue, because again, there are simply too many holes to slip through; it’s just a fact of life lol. It’s a game of attrition at that point: how much of their time and money can you get wasted? A losing game all around, devolving into mutually assured destruction by resource depletion where we all lose, just losers all around, and the world becomes a darker, more death-filled place. Living in fear of a black bag job or someone mugging you or whatever is no way to live. It’s a two-way street. It’s just like bullies in grade school: fight back. Learn self-defense and hopefully kill the bastard/coward if they try anything physical. Proper self-defense can be very effective. Always love hearing the stories in the US of some home intruder getting killed by an 80-year-old granny; one less scumbag threatening to break into your home.

Nick P June 7, 2016 9:58 PM

@ Figureitout

“Always love hearing the stories in the US of some home intruder getting killed by a 80 year old granny, one less scum bag threat to breaking into your home.”

Agreed. Another fun example. 🙂

Richard June 8, 2016 12:06 AM

@ Mike, Nick P, moz

I don’t want to leave the wrong impression – I DO run Linux on my PCs, and I do consider it reasonably secure given the current threat model (which includes a certain amount of security through obscurity) – but don’t fool yourself: if Linux desktop users were targeted and attacked with the intensity and sophistication that MS Windows users are on a daily basis, then given the current pathetic state of Linux desktop security, Linux would fare worse – FAR WORSE.

So, I’m sorry if I seemed a bit crotchety, but the idea that the typical “Linux Desktop” is somehow a paragon of security is simply ludicrous.

The problem is that the Linux community seems to be totally clueless regarding the simple fact that SERVER SECURITY IS NOT THE SAME AS DESKTOP SECURITY, so the security measures you need to keep pizza munching nerds from hacking into your precious server at 3:00 A.M. are not the same as those you need to secure a user’s desktop while they surf around the net clicking things at random.

As a result, ALL the major Linux Desktop Distributions lack even the minimal essential features you would expect in a modern secure desktop environment. Features like an outbound application level firewall and effective real time heuristics based virus protection.

But given that MS Windows systems HAVE these features, where Linux Desktops do not, then why does Windows get hacked all the time and Linux only rarely?

Simple answer; Linux Desktop users don’t get hacked BECAUSE NO ONE CARES TO WASTE THEIR TIME.

Why would any self respecting malware author waste their time writing malware to attack the Linux Desktop, when Windows is where the money is, and Linux has such a minuscule portion of the desktop PC market?

When they do try, such as during the recent Firefox PDF Zero Day…

mozilla-patches-firefox-zero-day

… Linux users had their critical information stolen to the same degree as their Windowz neighbors (or possibly worse) – so as a Linux user, I think some humility is in order.

Clive Robinson June 8, 2016 2:39 AM

@ Yoshi,

Surely you aren’t advocating for the abolishment of checksums and signatures etc just because of Stuxnet types of attacks? The whole internet isn’t Stuxnet.

No, I’m saying, as I’ve said for a very long time, that code signing is not an indicator of the quality of code, or of its security, at any time. All it really says is that a collection of files was hashed and the overall hash was signed by a private key. Further, as we know, hashes can be vulnerable to falsification, and likewise the signing keys, which can also be stolen. Further, we know from other attacks on CA systems that it is possible to hide from a user the fact that a signing key has nothing whatsoever to do with the organisation the user is misled to believe it does.

Put more simply, “Code signing is not fit for its stated purpose as currently implemented”.

If we are going to make code signing more fit for purpose, we need as a first step to sort out the CA issues, which as we know significantly affect security in other areas. Yet a quarter of a century down the road nobody has addressed the CA issues; in fact they have scurried in the direction of making them worse in nearly all cases, under the excuse of either making things easier for users or not scaring users with details. Need I say that security is all about the details? Especially those the devil hides in.

Go and read NIST SP 800-147 (it only takes about twenty minutes), then see what it actually does (very little), what its weak points are (many), and how you might attack it…

So just to reiterate: I am saying code signing does not currently do what people think it does and, worse, it is based entirely on assumptions known to have failed.

Clive Robinson June 8, 2016 4:31 AM

@ pwned,

Holy crap, I had never heard of WPBT. I guess I am never running Windows 8 or 10 then.

As I’ve indicated, it started back in the late 1970s, over a third of a century ago, and it has evolved with time, OS complexity and other changes.

It is a solution to a quite real problem, but has real side effects.

The problem is fairly simple to describe: “new technology”, especially in storage technology.

Let’s say I come up with a new way to store data. It needs an interface to the computer, part of which is a “software driver”. The driver will not be in the standard OS, so how do you get it into the OS?

Well, the original option was ROM code on the interface card that would be mapped in at a specific area of memory, with a “magic number” to tell the OS it should be run. OS functions such as reading from mass storage are often dispatched through either a jump table in RAM or a software IRQ, both of which can be overwritten to redirect the execution of code.

Thus you would write your ROM code so that it replaced the jump table entry to point to the functions in your ROM code.

When the system boots up, the ROM code gets run by the BIOS, which hooks your ROM code into the BIOS code via the jump table. The BIOS then loads the OS image using the disk driver in your ROM. When the OS image starts it also runs the ROM again, which updates the OS jump table to use the ROM code; thus the OS now has your ROM-code driver in it, and things will work despite the fact that the OS does not have a driver for your new mass storage system.
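The hook mechanism described above can be modelled in a few lines; this is only a toy illustration (real systems patch interrupt vectors or jump tables of machine addresses in RAM, not Python dictionaries):

```python
# Toy model of the option-ROM hook described above: the OS keeps a jump
# table of driver entry points, and ROM init code patches one entry so
# that calls to "read from mass storage" land in the ROM's own driver.

jump_table = {}

def builtin_disk_read(sector: int) -> str:
    """The driver the BIOS/OS installs by default."""
    return f"builtin driver read sector {sector}"

jump_table["disk_read"] = builtin_disk_read

def rom_init():
    """Run at boot: the ROM swaps its own driver into the jump table."""
    original = jump_table["disk_read"]     # the ROM may chain to this
    def rom_disk_read(sector: int) -> str:
        # A real ROM would drive the new storage hardware here, and could
        # fall back to `original` for devices it does not handle.
        return f"ROM driver read sector {sector}"
    jump_table["disk_read"] = rom_disk_read

rom_init()
```

The same indirection that lets a legitimate vendor add a driver also lets anything that runs at boot silently interpose itself, which is exactly the security hole being discussed.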

This need is not going to go away, the fact that it is a massive security hole is likewise not going to go away.

Whilst there are ways to close the security gap, doing so involves extra work, liability or both for the OS-supplying company. So in a cost-saving and risk-averse environment, the closing of the gap is not really going to happen.

The fact that it is also a godsend to the likes of the NSA, GCHQ et al. for doing updatable implants is just an unfortunate side effect of an otherwise useful, if not necessary, function…

Clive Robinson June 8, 2016 8:08 AM

@ Figureitout,

The fundamental flaws in your advice is a) use older computers (they’ll all be dead in 10-15 years regardless) and b) how to develop your offline PC w/ a PC that has verifiably never been connected to another PC that has malware.

As you well know, I have solutions to these problems, BUT they are not practical for most people. For instance, have you ever taken a microcontroller development board with two serial interfaces and made a “Guard” from it? I know that you have access to the resources and some of the knowledge, but would you consider it a practical project? Not just for you, but for the 99.8% of computer users who don’t have the resources, or for that matter the knowledge / ability to code it all up…

So odd as it might seem, it’s a more practical suggestion for most people to work with, and as old computers can be purchased for as little as $10, it does not carry a financial risk for most people.

But let’s examine your two points in a little more depth.

a) use older computers (they’ll all be dead in 10-15 years regardless)

Whilst newer computers do die alarmingly fast, the older ones were often built to a better build quality. I’ve a couple of 386 HP computers that are not only going strong, but whose level of EMC shielding means I can use them inside an RF cage without messing up readings on the other instruments. Admittedly they run my own weird OS and C compiler, but they do their jobs admirably. Importantly, though, they are – compared to modern surface-mount computers – fairly easy to repair at the PSU and motherboard level, and I have plenty of spares of old serial and display cards, some of which I’ve already made “socketed” for easier repair.

As for storage, they use floppy and QIC tape drives, which you can still get hold of and which work off the old floppy drive interface. I also have another with a cheap, well-known NE2000 “knock off” network card (the chips of which are still very much in manufacture), which has a PXE boot NetROM in a 27C128, of which I have bucketloads.

Whilst this does sound very “hobby shack”, there are loads of older ham radio guys still using and cherishing such kit; rather more of them, I suspect, than can write guard code on a modern microcontroller.

b) how to develop your offline PC w/ a PC that has verifiably never been connected to another PC that has malware.

It depends on the vintage of the motherboard and cards, but there are lots of boards that never had flash PROM or EEPROM on them. Thus, after you make them run on floppies or CD-ROM only, the fact that in a previous life they were connected without protection to that festering disease pit of the Internet does not actually matter.

Thus the main concern with vintage kit is the one you did not mention, which I’ll add,

C) How do you get connectivity?

The answer is via several methods. SneakerNet might be old and venerable, but it still works; so does direct connectivity via serial, parallel or Ethernet ports, with SLIP/PPP or PLIP if you need IP connectivity, or Kermit / Xmodem etc. if you don’t. Surprisingly, if you know where to look on the Internet, it’s all described in excruciating detail, with usable source code that an individual can get to grips with, all nicely presented.

D) Performance of old kit
The important point to note with computers even a couple of generations old is that you will get a performance hit; it’s unavoidable. However, the reality is that for most things you might want to do in terms of “productivity”, not “entertainment”, performance is not an issue, and that is possibly the most important consideration to think about.

For instance, I still use WordStar, a version of which, including the spell checker, fits happily on a 1.44 Mbyte floppy and will run within 64K of memory on an 8086 machine I have. Likewise I’ve older K&R-style C compilers, and more modern ANSI-compatible Turbo C, that all run off floppies, including the IDE. As for lower-level code, DEBUG can be used to write assembler in, and will import text files from the likes of Edit/Basic. You can, with a little hunting, find all of those for free on the Internet, as well as Forth (yeah, I know you don’t like it, but it has many advantages when considering privacy and security).

As for OSs, you can still find MS-DOS 5 and earlier if you look, and there are also “free DOSs” out there; one works well not just on its own but in WINE or Qemu etc. And there are early versions of Linux and similar *nix that will run off either a CD, or stripped down on a floppy or two, and will mount whatever else you need read-only via NFS etc. across SLIP at 9600 baud. You can get these early versions of Linux on a CD out of the back of older second-hand books (Slackware being a notable early contender). Which gives you a certain degree of confidence that any bugs are bugs, not built-in backdoors.

E) The modern Approach
But if you do want to go down the modern route and want to avoid the security madness that is modern Intel / ARM hardware, you have the option of the microcontroller route. For instance, there is a copy of RetroBSD for PIC32 chips (MIPS core) which, even though the chips are SMD, can be purchased on “break out” boards for the PicKit, Explorer 16, etc. This is about the cheapest way I know of getting *nix up on the cheapest of microcontrollers without getting your soldering iron out.

Though you might want to look up MMBasic ( http://geoffg.net/Maximite_Story.html ); it will give you the full GWBASIC experience on a PIC32. The guy that developed it allows people access to the code to extend it in various ways, so a crypto extension or two would be of interest to him. But perhaps more importantly, there is another person with a crowd-funded project to produce a rather nifty little computer with keyboard, screen and an electronics break-out prototyping area ( http://www.theverge.com/circuitbreaker/2016/6/1/11829996/ello-2m-retro-computer-crowdsupply ).

So yes there are options which cater for most skill / risk / productivity points. But I will leave you with another consideration.

On the assumption your modern PC is owned by Microsoft, Google, GCHQ, NSA et al, that does not make it unusable. Providing your "entertainment" requirements are not particularly "morally offending" then you can use it for that. And to be honest, how many people really do "productivity" at home?

That is, you by and large don't need MS Office/365 or the Google equivalent on an entertainment box. Thus it's no great loss to not have them on it. Especially when you consider they are slurping up every key stroke back to the "Corporate Motherships", and thus to the NSA, GCHQ et al in transit, thus you will hemorrhage any privacy/secrecy if you use them.

By and large most "productive" work on a computer is destined for "human readable" print-outs, or text files for the likes of emails etc. This means old and slow tools that were more than usable in the mid to late 1980's are still more than adequate nearly a third of a century later. The only real issue is "file formats". Which, unless you really need leading-edge fancy effects, can be a non-issue if you use standard "old as the hills" human-readable file formats.

As a matter of habit I only generate "human readable" files, so CSV, RTF, PostScript, which I can check with text-based tools (occasionally converting to early HTML if I do need to pull in pictures and graphics).

Whilst it's not impossible to hide malware in such files (source code is human readable, after all), it makes such attempts more obvious to the eye and to the various tools I've developed over the years.
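A minimal sketch of the kind of check such a tool might perform (the allowed-byte set and the function name are my own assumptions, not Clive's actual tooling): flag any bytes in a supposedly human-readable file that fall outside printable ASCII plus common whitespace.

```python
# Sketch: report bytes in a "human readable" file (CSV, RTF, PostScript)
# that are outside printable ASCII plus tab/LF/CR. Anything flagged is
# worth a closer look before trusting the file.
ALLOWED = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0D}

def suspicious_offsets(data: bytes, limit=10):
    """Return up to `limit` (offset, byte_value) pairs outside ALLOWED."""
    hits = []
    for off, b in enumerate(data):
        if b not in ALLOWED:
            hits.append((off, b))
            if len(hits) >= limit:
                break
    return hits
```

A clean text file returns an empty list; embedded NULs, escape sequences, or other binary payloads show up immediately with their offsets.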

Thus your "productive" private secure machine can be quite old and not really slow you down. The only difficulty left to solve is printing. To which I say get yourself a PostScript Level 2 printer with a network interface; most will directly print out plain text files or earlier levels of PostScript (if it won't directly print text, you can find online a PostScript wrapper you can hand edit into a text file in seconds).
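As a rough illustration of that wrapper idea (font, margins and the lack of pagination are assumptions of this sketch; the wrappers found online do it properly), a few lines of Python can prepend a tiny PostScript prolog and emit one `show` per text line:

```python
# Sketch: wrap plain text in minimal PostScript so any Level 2 printer
# will render it. Escapes the characters PostScript strings care about.
def text_to_ps(text: str, font="Courier", size=10, leading=12) -> str:
    out = ["%!PS-Adobe-2.0",
           f"/{font} findfont {size} scalefont setfont"]
    y = 750  # start near the top of a US-letter page (assumption)
    for line in text.splitlines():
        esc = (line.replace("\\", r"\\")
                   .replace("(", r"\(")
                   .replace(")", r"\)"))
        out.append(f"72 {y} moveto ({esc}) show")
        y -= leading
    out.append("showpage")
    return "\n".join(out) + "\n"
```

The result is itself a human-readable text file, so it stays inspectable with the same text-based tools.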

Mike Gerwitz June 8, 2016 9:04 AM

@Richard:

I encourage you to substantiate your claims with concrete examples. Lumping all GNU/Linux systems together, and lumping all software together, does not make sense.

I can’t possibly address such blanket claims.

Debian is known for its stringent controls, and GNU Guix is well on its way.

@wumpus:

I’m referring to the standard package system on that particular OS. If you have other software (like Steam), they’re subject to their own practices.

Who? June 8, 2016 1:42 PM

@ Clive Robinson

I have nothing to add this time, I just wanted to write to say thank you. Your approach to security is sane and logical.

I do not have enough old hardware right now. My 286 was destroyed fifteen years ago as a consequence of a known design flaw in its power supply that fried the motherboard (two decades ago it was much harder getting a recall for a defective product from a company). Now I own just a Siemens Nixdorf Pentium computer I bought in 1997 to run Solaris on (hopefully one of the last machines designed without the memory sinkhole vulnerability), an IBM 5140 (Convertible) with a poor quality display and, worst of all, without ISA slots or serial ports, and some old (but reliable) SPARC/SPARC64/PA-RISC/MIPS gear. There is nothing like DOS on these computers, but they can run isolated, have serial ports and a physical switch to protect the BIOS (the Siemens Nixdorf PC has a switch for this purpose too), and they run OpenBSD nicely. These machines are airgapped (well, all except the Sun Ultra 30), but they are still my main gear.

I have not been very lucky with that old hardware. Only a few, but reliable, computers, as you can see. Surprisingly, I have been luckier with software (PC/TCP, WordPerfect 5.1, Lotus 1-2-3, DOS from 4.01 up to 6.22, Microsoft Word for DOS, MASM 6…). All these software packages are complete, with full media and documentation. Most of them were gifts, some acquired by me over the years.

I will certainly make room for a few reliable computers. Siemens Nixdorf had exceptional PCD and PCE series (the PCD-4T/66 was a superb computer, well engineered and easy to repair, and the PCE-5T was a fine server), but these old computers are very difficult to get right now. I will get a few of these Siemens, HP, Unisys, IBM systems if I have a chance, but it seems they have just disappeared from the earth.

I understand your HP 386 computers are Vectras… these are very nice systems. Heavy, but quiet and reliable (and, if you have the right models, small desktops too). It is very nice to see you still have them. These are good systems, like the HP 150 and 150-II. These machines were expensive years ago, but their quality was higher than that of modern computers… and they were more reliable too:

https://en.wikipedia.org/wiki/Capacitor_plague

The old computers approach seems right. Better not to try connecting one of these machines to the unfriendly Internet using PC/TCP, but certainly these systems have their place in our infrastructure.

Sometimes I like running DOS (DR-DOS and PC-DOS mainly) on a hypervisor.

Who? June 8, 2016 1:53 PM

@ Clive Robinson

About the “modern approach”

I like it a lot too. It is easier right now, cheap and widely available, and works as expected in terms of performance/features. However, I try to get "old hardware"-like machines for the firewalls. I think the Alix/APU boards are a good choice. I have never worked with an APU, but its predecessor (the Alix) had decent firmware. There is nothing related to UEFI or the Intel Management Engine on them, they have AMD Geode processors (the APU has a quad-core Jaguar), so they should be sinkhole-free too, and they run OpenBSD.

My first step when setting up a network exposed to the wild Internet is making a firewall using one of these small computers. Then I build the internal systems using OpenBSD too. At least I know the expected AMT ports are blocked for both inbound and outbound traffic.
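For reference, the handful of pf.conf lines I mean look something like this (the port numbers are Intel's published AMT/ASF defaults; treat it as a sketch rather than my exact ruleset):

```pf
# Block Intel AMT/ASF management traffic at the firewall.
# 623/664 are the ASF-RMCP ports; 16992-16995 are the AMT web and
# redirection ports.
amt_ports = "{ 623, 664, 16992:16995 }"
block in  quick proto { tcp, udp } from any to any port $amt_ports
block out quick proto { tcp, udp } from any to any port $amt_ports
```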

Not perfect, but it is a good compromise between performance, features and hardware availability.

Of course, I am talking only about networks connected to the Internet. I like running most computers airgapped.

Gerard van Vooren June 8, 2016 3:22 PM

@ Richard,

… Linux users had their critical information stolen to the same degree as their Windowz neighbors (or possibly worse) – so as a Linux user, I think some humility is in order.

Wtf? It's easy to forget that the Windows security problems that started with Windows XP being "internet ready" (when Bill was caught with his pants down) are self-inflicted. Today we still see "security updates" for Windows 7. How is it possible that a multi-billion-dollar company still needs to update the security of their operating systems? It's bullsh*t.

So humility? Yeah right. What is there to be humble about in the first place? W10 and Office365 are tracking (=stealing) ad platforms with legal fine print up to Mars. I can continue if you want, but I am not humble when it comes to Microsoft software.

Figureitout June 8, 2016 11:02 PM

Nick P
–Ha, yeah. Dude got owned.

Clive Robinson
–They can be practical, someone just has to take the time to make step-by-step tutorials. No I've not made a guard b/c I'm not 100% sure how they work or what to filter for (have to update filters all the time like web firewalls..? B/c that's a nightmare), but I told you that I got lucky and my next project is going to be something very much like a guard. I'd be taking in comms of one protocol from an external MCU (w/ another older chip handling the critical timing of it, then another being that first serial interface) and my job will be to make one serial output for debugging and another passing on data to an internal MCU. I'm going to lean on our team a bit but want to do as much as I can (well commented and clean). The chip I choose needs a bare minimum of 3 serial ports.

No I wouldn’t consider it practical now though (but maybe I’ll be the first one to post instructions on how to build one here, who knows), once I get nRF_Detekt deployed for testing and my radio connected to my PC I’m going to try to make a data diode like M. Ottela made, and transfer a file (preferably zip file) via putty or some popular terminal program on an internet connected “infected” PC one-way to a connecting PC. Then vice-versa.
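The one-way framing I have in mind looks something like this (sequence number plus CRC32 per chunk so the receiver can detect gaps and corruption without a back-channel; the frame layout is just my guess, not TFC's actual protocol):

```python
# Sketch of data-diode file framing: the receiver can never ACK, so each
# chunk must carry enough metadata (sequence number, length, CRC32) that
# missing or corrupted chunks are at least detectable on the far side.
import struct
import zlib

CHUNK = 256  # small chunks suit slow serial links

def frame_file(data: bytes):
    """Split data into self-describing frames: seq, length, CRC32, payload."""
    frames = []
    for seq, off in enumerate(range(0, len(data), CHUNK)):
        payload = data[off:off + CHUNK]
        header = struct.pack(">IHI", seq, len(payload), zlib.crc32(payload))
        frames.append(header + payload)
    return frames

def check_frame(frame: bytes):
    """Return (seq, payload) if the CRC matches, else None."""
    seq, length, crc = struct.unpack(">IHI", frame[:10])
    payload = frame[10:10 + length]
    if zlib.crc32(payload) != crc:
        return None
    return seq, payload
```

In practice you'd also repeat each frame a few times, since retransmission requests are impossible over a diode.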

RE: old pc’s
–I told the blog about my oldest one in case it helps someone debug a broken PC. I just removed one bad RAM card and it booted up again. Then I let it be as I was saving it, well won’t boot again lol. The HDD, I’m not sure how it’s still alive lol. I’m not sure if the RAM cards I see online for ~$1 will work so I didn’t pull the trigger (carders getting my debit card and making fraudulent charges on paypal had something to do w/ it too; I’m not going to live in fear of those scum either). If I could boot off a floppy disk (I’ve got like 5-10 floppies) and just run “live” w/ FreeDOS or DOS, then hell yeah but I’m not feeling it right now.

Another one still works but I took it offline myself, and I've gotta replace like 8 caps around the CPU that are oozing like hell (ugh, why?! stupid design…). It's, you know, got wires galore, so I need to take a picture etc., then just make sure I get the right polarity on them.

Another one that’s working, not sure if I want that to be an openBSD machine or a firewall, saving it. It’s got a nice serial port so it’d make a nice endpoint. Cool design too, very hacker friendly. I just want to replace the HDD, another project I don’t really wanna do but gotta do it…my gf now takes up more of my free time and money so…bah can’t win :p

Regardless, there’s no getting around that that strategy is temporary and relying on dwindling supplies of parts. Depressing. I definitely would’ve been stocked if I was an adult in the 70’s/80’s.

It depends on the vintage of the motherboard and cards
–Still doesn’t account for if the computer burning the ROM was itself infected…pretty easy to point to impossible situations eh? Highly unlikely, still feasible.

So yeah, you know I'm going to be doing the modern approach w/ some dev board w/ a chip I really like and a toolchain supported by Linux. Yawn. Oh well, if someone can do better and actually lay it out here… otherwise it'll be most future-proof, and I look forward to the attacks on air-gapped MCUs that don't involve known methods.

CallMeLateForSupper June 11, 2016 12:47 PM

@Who?

“[…] IBM 5140 (convertible) with a poor quality display and, worse of all, without ISA slots or serial ports […]”

Thanks for the memory; the "ragtop" was the second PC in my arsenal. The stock LCD was indeed eye-strain-in-a-box. Knowing that ahead of time, I ordered both the "super-twist" display and a CRT. Also a backpack and the serial- and parallel-port "slice". [1] The 5140 fed my EPROM burner for a number of years.

I made two productivity enhancements. First, I added a certain terminate-and-stay-resident (TSR) program to every boot diskette that greatly sped up the coffee-grinder diskette drives' seek time. I think that TSR originated at Toshiba (or maybe not; some well-known Japanese company at any rate), and one could find it at many a BBS. Second, I added "turbo" to the processor with a DIP oscillator and an SPDT toggle. Switch to standard clock; boot; switch to turbo.

I trashed the whole system when I tired of replacing “reversed” NICAD cells. The only thing I kept was the backpack. A cyclist, I carry groceries and all manner of other things in it, so I’m tempted to say that the ($$ optional) backpack has been the most appreciated part of the Convertible.


[1] A colleague gave me a new-in-the-box Convertible printer. (I didn't ask how/where he got it.) It snapped onto the 5140 like the other "slices". It was a curious thing: a very pricey, mono-directional dot-matrix that not only was glacially slow but also consumed very expensive, proprietary ribbon at an alarming rate. I immediately cannibalized it for the motor, screws and a few other general-use parts.

Nerdicus Baticus June 12, 2016 9:40 PM

I once had access to the private FTP server that a certain hardware manufacturer used for their code monkeys to upload drivers and BIOS updates for distribution to their various support websites. I was just given the password, without verification of who I was, so I could access a newer BIOS version with updated hardware and memory capacity support from the FTP server. I scanned all the drivers and BIOS versions on the FTP server, starting with the most recent network driver, and found the driver had been injected with a quite versatile Trojan/worm for its time.

Router manufacturers rarely bother to patch exploits within a year, and more often they never fix the issue at all. I don't imagine hardware manufacturers are any better these days at securing who has access to the network assets containing their BIOS and driver versions.

