On Not Fixing Old Vulnerabilities

How is this even possible?

…26% of companies Positive Technologies tested were vulnerable to WannaCry, which was a threat years ago, and some even vulnerable to Heartbleed. “The most frequent vulnerabilities detected during automated assessment date back to 2013-2017, which indicates a lack of recent software updates,” the report stated.

26%!? One in four networks?

Even if we assume that the report is self-serving to the company that wrote it, and that the statistic is not generally representative, this is still a disaster. The number should be 0%.

WannaCry was a 2017 cyberattack, based on an NSA-discovered and Russia-stolen-and-published Windows vulnerability. It primarily affects older, no-longer-supported products like Windows 7. If we can’t keep our systems secure from these vulnerabilities, how are we ever going to secure them from new threats?

Posted on March 9, 2021 at 6:16 AM • 48 Comments


Untitled March 9, 2021 8:17 AM

I guess that the reason many companies are still on Windows 7 – or even XP – is that long ago they bought very expensive applications which won’t run on later versions, whose manufacturers can’t be bothered to update them, or are demanding prohibitive fees for doing so, or even have gone out of business. So those companies are faced with enormous cost of switching to new applications. Or companies long ago bought very expensive equipment whose manufacturers won’t provide Windows-10-compatible software, or have gone out of business. So those companies are faced with a choice: either completely replace some wildly-expensive equipment which still works perfectly well, simply because there’s no driver for it, or stay with Windows 7 or XP. For top management that choice is a no-brainer, and the CIO has no chance of persuading them otherwise.

Neal Krawetz March 9, 2021 8:22 AM

Not trying to defend the practice, but I understand why it happens. People and companies invest in the current products and not in the costs for upgrades.

I have had many Android smartphones. None have ever had vendor patches released, and most have no option for upgrading the operating system. The theory is that you’ll only use the phone for a year or two and then get a new model. (Why patch when you’ll be replacing the entire thing?) The flaw is that many people don’t change models regularly. And even if you do change models, you’re still vulnerable to the latest exploits until you upgrade again.

With Linux, upgrading often requires fixing code that has broken dependencies. (The SBOM problem.) I’ve seen people apply patches as long as the OS is supported, and then stop applying patches. By that point, upgrading from Ubuntu 2010 to Ubuntu 2020 becomes a large step and big effort. It’s made worse if they have 3rd-party drivers or devices that are no longer supported.

Linux usually does give you the option to download the source code, but there is often a point where that won’t help. “Compiling this requires autogen 1.2.3 and you have autogen 1.1.1” or some other dependency is out of date. Manually upgrading all of the dependencies is usually not practical and may not be feasible.
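The quoted failure boils down to a version comparison that the build tooling does for you. A toy sketch in Python (the `autogen` name and version numbers come from the hypothetical error message above):

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string like "1.2.3" into (1, 2, 3) so that
    tuple comparison matches numeric version ordering."""
    return tuple(int(part) for part in v.split("."))

required, installed = "1.2.3", "1.1.1"

if parse_version(installed) < parse_version(required):
    print(f"Compiling this requires autogen {required} "
          f"and you have autogen {installed}")
```

Real packaging tools use far richer version schemes (epochs, pre-release tags, distro revisions), which is exactly why upgrading a whole dependency tree by hand is rarely practical.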

With Windows, there is usually no simple upgrade path. Copy off all of your files, write down all of your apps, and then reinstall. Hopefully you can put it all back. (You should allocate days for this effort. That’s days of downtime and resources tied up in the upgrading.)
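The “copy off all of your files” step is at least scriptable. A minimal sketch using only the Python standard library (the paths in the example comment are hypothetical):

```python
import shutil
from pathlib import Path

def back_up(source: Path, dest: Path) -> int:
    """Copy every file under source into dest, preserving the directory
    layout and timestamps. Returns the number of files copied."""
    copied = 0
    for f in source.rglob("*"):
        if f.is_file():
            target = dest / f.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves modification times
            copied += 1
    return copied

# e.g. back_up(Path("C:/Users/me/Documents"), Path("E:/backup/Documents"))
```

Of course, this only covers the files; reinstalling all the apps and losing days to the process is the part no script fixes.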

With Mac, OS upgrades usually last 1-4 years; then you need new hardware, even though the current hardware runs the current OS fine. It’s the same problem as with Android smartphones, but with much more expensive equipment.

You want “0%”? That’s never going to happen. Even 25% really doesn’t surprise me.

Etienne March 9, 2021 8:22 AM

It’s been my experience, that no small business should even have computers.

Most only need a typewriter, carbon paper, a calculator, and a FAX machine.

Peter March 9, 2021 8:39 AM

Additionally, it is completely impossible to upgrade the software on computers that control mechanical equipment. One device in the next office won’t run unless the attached computer has IE6-8 installed. Another one has software that only runs on NT.

Both machines are airgapped, but it would be very tempting to connect them to the network so that the data wouldn’t have to be exported by hand once a week.

Yes, newer instruments are an option, but it is hard to convince management that we should spend 200k replacing an instrument that works perfectly.

wiredog March 9, 2021 9:14 AM

Back in the 90’s I worked in industrial automation. The machines I wrote software for were mainly used in the circuit board industry, but one was a chrome plater for truck bumpers. Very large, 3 hoists with 2 motors per hoist, controllers, digital I/O. A PC running all of it. At the time (mid-90’s) DOS was much more stable than Windows, so the software ran on MS-DOS. The security was an air gap. It was 15 years before the system was replaced. This is typical in capital equipment installs. You spend a million dollars on the hardware and, once it’s running, you don’t mess with it.

Matthias Hörmann March 9, 2021 9:26 AM

I disagree with some of the previous comments.

This is not understandable.

These are all regular old business problems, to be solved by actually learning about the risks and benefits of the different products on the market, by getting custom software written, or by joining some sort of industry group that pools resources to get custom software written for a shared purpose.

Any operating system or software has various risks attached to it that are not particularly technical and can be understood even by non-technical management if they apply their existing business knowledge to it.

Software is a risk to the business when:

  • it can not be changed, even when new security holes, compliance requirements, bugs,… appear
  • the hardware it runs on can not be purchased (replaced) anymore
  • nobody, or only a single person (low bus factor), knows how it works anymore because it is outdated
  • it is unsupported, so nobody is tracking exploits targeting it anymore

And there are risks you cannot opt out of:

  • other stakeholders in the software (e.g. the company or group writing the software, others paying for it, the relevant lawmakers,…) might have interests and might make decisions that go against the interests of your own company
  • the world around your software will always keep changing, no matter how much you wish it wouldn’t. There are new exploits, new laws, the person who knows retires, falls ill, is in an accident,…

So the sensible thing from a business perspective is to

  • use software where you have or can gain access to the source code so at worst you can hire someone to fix things for you
  • use software that runs on hardware that is still available
  • use hardware where the specs for the hardware/driver interface are available so replacement drivers can be written for newer operating systems where necessary
  • use software where the work of maintaining it and tracking security holes and exploits is split among all the users, i.e. supported software
  • document your own uses of said software
  • only use skillsets where you can hire replacements that can get up to speed using that documentation if necessary

Too many business people ignore simple truths like that because they do not want to examine the risks (possibly scared of what they might find) but that doesn’t mean the current state of affairs is in any way inevitable or understandable.

Carl Fink March 9, 2021 9:50 AM

“If we can’t keep our systems secure from these vulnerabilities, how are we ever going to secure them from new threats?”

We won’t. That’s an easy question.

Bcs March 9, 2021 10:06 AM

Matthias Hörmann: I would like to live in a world where I could agree with you. But I need to live in a world where companies stay in business.

In reality, a lot of companies are in a position where they have a choice between going against everything you described or shutting down. They run equipment that is out of support because it’s what they can afford, or because it’s the only thing that does the job. They run insecure software because it was custom-built when nobody knew any better, or when they didn’t have the resources to do it right, and now they can’t fix that because nobody alive and locatable understands it.

The kind of practices you describe are good ideals and, for large companies and even smaller software companies, are something they should be aspiring to. But for the majority, the small business, the machine shop, the coffee shop, the specialized retailer for some strange corner of the market? They are lucky if they have someone who knows enough to even understand what we are talking about here. And the cost of doing what you suggest is so far from practical for them that it is a joke.

tfb March 9, 2021 11:03 AM

If we can’t keep our systems secure from these vulnerabilities, how are we ever going to secure them from new threats?

Betteridge’s law applies: we won’t.

Gregorio March 9, 2021 11:25 AM

26%!? One in four networks?

It’s not the whole network that’s vulnerable. It’s that one in four companies (of those tested) had some old system kicking around, which had not been updated. Who knows whether anyone would even notice or care if it were ransomwared? It could be a forgotten system under someone’s desk, or an old VM image with less-than-ideal network configuration. I’ve worked with developers who just hadn’t had the time or motivation to upgrade some ancient VM images—they “worked”, after all. If they stopped working, the developers would likely have a fresh installation up within a day.

Of course, such systems are occasionally important. I was once at a company where they’d marked the account of the former software build manager as “no password expiry”, because the password was hardcoded in too many build scripts. The person had quit years before. And the systems were running outdated software, because ISO processes required that software be fully reproducible. Building with a newer libc would produce software that wouldn’t run on the customer OSes we were targeting; newer compilers and linkers couldn’t rebuild some of the code at all (third-party code being especially problematic). These problems can all be solved with enough work, but they didn’t have the time or expertise to do it and didn’t feel it was worth hiring for. So, who knows, maybe you’re relying on embedded software built on an ancient Red Hat or Ubuntu image.

It’s also important for GDPR et al. that companies don’t have private data lurking around on unknown systems.

AL March 9, 2021 11:54 AM

I read:

However, 26% of companies Positive Technologies tested were vulnerable to WannaCry


A little more than a quarter of companies have TCP port 445 open

So it sounds like they’re saying that computers are vulnerable to WannaCry simply because port 445 was open. But that doesn’t necessarily mean SMBv1 is enabled. That said, it’s pretty stupid to have file-sharing ports open to the internet. Since Windows 2000, IPsec can restrict connectivity to ports, apart from any firewall solution. I needed a public IP address on my XP machine back in the day, and I had those ports blocked by both IPsec and the built-in firewall, so no single point of failure. There is more than an update problem here; there is a configuration problem as well.
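The check the report describes amounts to a TCP connect test. A minimal sketch in Python (the `port_open` helper name is hypothetical; standard library only):

```python
import socket

def port_open(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A True result only means SMB is reachable, not that the host is
# exploitable: confirming SMBv1 requires actual protocol negotiation.
```

Which is exactly the gap between “port 445 open” and “vulnerable to WannaCry”.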

MK March 9, 2021 12:28 PM

Some time ago I had software that needed to work with Windows XP. Why wasn’t the OS updated? “Security reasons.” It was a real pain to get software that took advantage of Windows 7 features to work on XP.

On the other hand, my wife just got bit by upgrading her Mac to Big Sur. All the USB to RS-232 cables have quit working, and Apple isn’t interested in helping the software developers fix the problem. So when should we upgrade? And how will we know?

Clive Robinson March 9, 2021 12:33 PM

@ Bruce,

How is this even possible?

It’s been clear for several decades, since before “Windows” even existed, that this was going to be a problem of major proportions.

Look at the average lifetime of,

1, Railway engine – 50 years
2, Utility meter – 30-50 years
3, Large industrial systems – 50-100 years
4, Medium-sized industrial systems – 10-50 years
5, Transport signalling systems – 10-50 years
6, Telephone systems – 15-30 years

And the list goes on and on until we get to “Sillyville”,

A, Smart device – 0.5-3 years
B, PC – 0.5-3 years
C, Operating system – 0.5-1 year
D, Application software – 0.25-2 years

Remembering that the first lot are dependent on the second lot, you can see there is an approximate “months for years” issue. That is, applications and OSs last about 1/12th of the time that physical machinery and systems do…

Remember it’s an even worse time conversion in “critical systems”, “safety systems” and “medical systems”, all of which have to be re-certified on anything other than minimal changes… Which is an expensive thing to do for what is effectively a small customer base…

Not being nasty, but you are of an age where you will get implanted medical devices that are going to stay in your chest for at least 30 years, because no surgeon wants to be cracking your chest to change batteries or do a software upgrade…

Think back to 1991: do you even have any functioning PC hardware or software from that time 30 years ago? Would you want your life to depend on it?

Something to think about, especially with just how easy such devices can be hacked…

As a general rule, software companies need to ship around four new products or major upgrades every year to be profitable. The same is not true of even Fast Moving Consumer Electronics (FMCE), where a minimum three-year life is expected of each product. And anything that needs to be “installed”, like utility meters, is expected to have up to a twenty-year life, otherwise it won’t break even on installation costs.

Until these two sets of time periods are brought into line, problems such as this will arise. But anyone who has worked on the hardware or mechanical side as a design engineer has been well aware of this since before the turn of the century.

As a first step, a law ensuring that all software meets the basic requirements for “merchantability” that electronic and mechanical hardware have to meet would be a start. But expect a fairly vicious set of cutbacks on jobs, salaries, security and benefits for software developers. As for “free/open” software, it will not be able to meet the “merchantability” requirement, thus issues arise. The obvious way around that is for the end user to “DIY build”, thus no executables etc. Whilst there are Linux and BSD distributions that can meet these requirements, you still have the executables-in-the-build-chain issue to deal with.

Jesse March 9, 2021 12:35 PM

When that Win2K3 box generating licenses for legacy customers is earning half a million a year in revenue, you do whatever it takes to keep it on. Big corporate IT is really messy and we often don’t even have a full list of what systems we own or who is responsible for their care. It’s easy to say “upgrade all of your assets and purge the ones that can’t be”, but the business needs come first and we just have to work around that.

Wayne March 9, 2021 1:26 PM

Things like this are why I’m retired from active IT. I was a database/network admin for over 25 years. I consolidated and shut down old boxes, I locked instances up tight, I employed best practices, and I had the most secure servers when we were pen-tested.

And my advice was routinely ignored when management asked for it.

I’m now quite happy working in a university library running inter-library loan. I work with cool people, read cool books, and pursue personal computer projects that I think are neat. And I hear about database instances at a former employer burning because they don’t think they need a DBA.

Mat March 9, 2021 1:39 PM

There is a difference between the real world and the lab. PhDs have a hard time understanding the real world!

Clive Robinson March 9, 2021 1:53 PM

@ Mat,

PhDs have a hard time understanding the real world!

Not all PhDs, you might be surprised; some have become rather wealthy applying their knowledge domain to the real world.

Much though we might want to think otherwise, neither the commercial nor the academic world has much use for “absent-minded professor” types these days. In fact it’s getting hard to say which is getting more cut-throat, especially where universities are turning themselves into hedge funds.

Chairs tend to get handed out to those that bring in considerably more “profit” cash than students cough up or research grants give…

JohnnyS March 9, 2021 1:53 PM

There’s another dimension to this: In North America, senior management over the last couple of decades has made a strenuous effort to avoid responsibility for ITSec. There’s a lot of C*Os who have adopted the mantra that “getting hacked is inevitable and there’s nothing we can do about it”.

What this does is reset the bar for responsibility: if getting hacked is “inevitable”, then that inevitability justifies doing nothing about ITSec at their level, and that becomes the “common business practice” in the eyes of the regulatory bodies and courts. So they can continue avoiding responsibility for their company getting pwned, and push the responsibility and negative consequences down to the rank and file.

The flip side of this is: No C*O is going to spend big on ITSec: That would mean they actually take responsibility for ITSec, and can be blamed when it all goes wrong.

It’s going to take real regulatory actions to force this responsibility onto the senior levels in business.

TRX March 9, 2021 3:15 PM

It’s been my experience, that no small business should even have computers.

Most only need a typewriter, carbon paper, a calculator, and a FAX machine.

We should be so lucky, yes…

It’s hard to do without e-mail now, for dealing with suppliers and the State if nothing else. Or electronic payments. Or computerized inventory and billing. And modern accountants use software, not ledgers. And at least the secretaries want word processing and a printer.

The camel has been inside the tent for a good long while, and the rest of the herd is shuffling in and making themselves at home.

JonKnowsNothing March 9, 2021 3:19 PM

@All @Clive

While we are all here decrying vulnerabilities it might be useful to consider how and why some of them (not all) happen or worse, become a problem years onward.

@Clive has mentioned the relative duration-of-use problem, where systems intended to have a lifetime of decades are crippled by upgrades whose lifetime is measured in months.

There is another component rampant in every major (and minor) software and hardware system, which is: The CUSTOM BUILD.

This albatross (not nearly as interesting as Wisdom age 70) occurs when a Big Bucks Company approaches Your Company to purchase The Product but If And Only If you make changes X Y Z just for them and them alone.

Tied to this Flash of Cash, is a stipulation that Big Bucks Company gets all enhancements, updates and improvements N-Months before anyone else.

The feathers of this start to gum up everything quickly. Forking the code is easy. Maintaining the fork only works if the same person works on the changes in each fork and that nothing goes pear shaped during the builds and testing.

Just because the tests don’t fail, doesn’t mean the change worked correctly, it just means the change did not trigger a collapse in regression testing.

Once you start down the mutations path, things get bogged down with plenty of mutually exclusive conditions, all of which are met with “We’ll worry about that in the next release”. Everyone involved is hoping they will have rotated on to the next company before that happens, leaving some other unlucky soul to deal with the leftovers.

Vulnerabilities from Custom Builds for customers are not that different from Custom Builds for platforms, devices and systems; they just have far less visibility.

There is more than one way to turn a water wheel. It works better if you don’t use two methods at the same time.

ht tps://www.theguardian.com/environment/2021/mar/05/wisdom-the-albatross-the-worlds-oldest-known-wild-bird-has-another-chick-at-age-70

ht tps://en.wikipedia.org/wiki/Water_wheel
ht tps://en.wikipedia.org/wiki/Water_wheel#Summary_of_types
(url fractured to prevent autorun)

TRX March 9, 2021 3:25 PM

> it would be very tempting to connect them to the network so that the data wouldn’t have to be exported by hand once a week.

Stick a serial or parallel port in and use one of the old file transfer utilities to move the data. Some of the comm programs used to be able to do that as well.

I moved a lot of data between incompatible networks in the old days, with just a cable and LapLink.

TRX March 9, 2021 3:46 PM

> Think back to 1991, do you even have any functioning PC hardware or software from that time 30years ago? Would you want your life to depend on it?

The last version of my text editor shipped in 1986. It ran under PC-DOS 2.1. I use the editor every day; I’ve written three books, a bunch of magazine articles, and who knows how much code with it. It has file size and name space limitations by modern standards, but it’s essentially an invisible link in the brain-to-file chain after 34 years.

I’ve used the same editor binary through PC-DOS 2.1, MS-DOS 3.31, and DR-DOS 5. Under DESQview. In Windows 3, 95, and 98. In every major version of Linux since 1995-ish, in DOSemu/FreeDOS.

I don’t recall ever encountering a bug or having the program do something unexpected, even when running in an emulator under an alien OS. It “just works”, and I expect it will continue to do so.

TexasDex March 9, 2021 4:23 PM

How about we stop linking security updates to unpopular user interface changes and increases in privacy invasion? I stuck with Windows 7 until the last possible moment–even though the upgrade was free–because Windows 10 is user-hostile: it pushes the app store model onto the desktop so MS can get their cut of all app sales; it includes ads right out of the box; it installs games without my consent; it comes bundled with some voice assistant that I wouldn’t touch with a 39.5′ pole; it seems to be designed for touchscreens when I hate touchscreens; it sends all sort of data to Microsoft that I can’t really limit or stop or even view; it’s just plain ugly.

This is why I have sympathy for people who are still running older versions of software: Manufacturers are incentivizing people to NOT upgrade by abusing their customer base more and more in every new version for short-term profits.

SpaceLifeForm March 9, 2021 5:23 PM

@ Clive, ALL

Pay attention to the attack


Stick a serial or parallel port in and use one of the old file transfer utilities to move the data.

Have you seen any PCs that have either Serial or Parallel recently?

No, you have not.

@ MK

Note what MK noted above.

All the USB to RS-232 cables have quit working, and Apple isn’t interested in helping the software developers fix the problem

My USB to RS-232 will work, because I can use it on old hardware and old software.

Do you all see the attack?

I think most do not see the attack.

Get off of the treadmill. Keep your copper dry.

Clive Robinson March 9, 2021 5:54 PM

@ SpaceLifeForm,

Pay attention to the attack

There are many to fend off…

But I suspect that you might be referring to the fact that Android by default only does “wireless”; you can not plug it into a USB-2-Ethernet dongle.

Which is most annoying when you have only a wired network, for security reasons.

Though the new network build will be wireless but not glassless, fiber looks like it will be less costly when you consider the security gain you get from it.

Dave March 9, 2021 5:58 PM

It’s not just the legacy stuff that’s causing people to hold up on upgrading, it’s the massive instability that Microsoft have built into Windows 10 where it’ll randomly reboot itself, download and apply changes that break functionality, even brick itself due to toxic updates, a game of Windows roulette that many businesses can’t risk. So the quite reasonable decision is to stick with a version of Windows that won’t randomly brick itself at 3am.

Dave March 9, 2021 6:07 PM

@Clive Robinson: An associated problem is that a lot of the IETF standards groups that set standards for this area have been taken over by large web vendors for whom nothing exists beyond about six months out. That is, pretty much the entire installed base can be updated and replaced within a period of months so we can create neverending churn in every protocol we design, throwing out old features and adding new ones secure in the knowledge that the entire world will keep up.

Except for the entire world that isn’t the web, which won’t. I recently saw a comment on an IETF mailing list which pointed out that, just as with any new medical research announcement you need to add “in mice” to the end of it, so with any new security proposal in the IETF you need to add “on the web” to any arguments being made for it.

Dave March 9, 2021 6:24 PM

@Mat: I don’t think that’s the main problem any more today. For one thing, there’s an awful lot of PhDs working out in the real world, but more importantly the main problem is no longer ivory-tower academics but pocket-universe big corporates trying to push their ideas on everyone else. Yes, great, it works for Google or it works for Facebook, but it doesn’t work for anyone who isn’t Google or isn’t Facebook.

Fed.up March 9, 2021 6:34 PM

@ JonKnowsNothing

You are 100% right – so much bespoke enterprise software out there. If it isn’t built from the ground up, then it is extremely customised COTS (commercial off-the-shelf) software.

How did this happen? Ask the body shops who spend 7-10 years deploying a single enterprise application. I’ve seen a SharePoint project take almost a decade. Major ERP projects run from $50 million to hundreds of millions.

Meanwhile these major IT projects are discussed each quarter by enterprise CEOs on their Wall Street analyst calls, because IT spending is often the excuse for reduced profitability. Wall St. analysts closely track IT project status. So when a project takes nearly half a decade to deploy, and is already 10 years old at go-live because it was built on mature (now end-of-life) technology, there’s no way it is going to be replaced anytime soon. Nor should it be.

The whole patch management ideology is flawed. Enterprise cannot just patch. If you don’t already understand why, then nothing I say will make sense.

Problem is Big Tech doesn’t understand their customers. They should stop creating new products and perfect those that they have. Constant upgrade cycles have killed major IT vendors, especially when the migration path is very disruptive.

The enterprise is burned out. Especially in the time of COVID, no new IT projects or migrations. That’s way too much risk.

ht tps://www.cio.com/article/2389430/microsoft–accenture-joint-venture-avanade-sued-over-alleged-erp-project-failure.html

Scott March 9, 2021 8:01 PM

While I’m sure it’s self-serving, sometimes I can’t blame people when it comes to Windows. MS has decided to push non-security updates, including recently full-blown adverts, through the update pipeline. Not to mention the possibility an update will break software, or just happen at a bad time. That users would go out of their way to disable and hide this noise doesn’t surprise me.

Clive Robinson March 10, 2021 12:40 AM

@ JonKnowsNothing, ALL,

There is another component rampant in every major (and minor) software and hardware systems which is: The CUSTOM BUILD.

It’s a variation on the custom build: the “second-line product”, often seen as the “Next Generation build”.

Put simply: I have a long-life product, and every two to three years I “obsolete” the current version with a new version that has major “feature upgrades” designed to keep revenue flowing through the door.

But not only do I keep core base components, I have to keep upgrading what are now two or three product lines.

Everything is fine unless a software fault is found in a base unit that has to be fixed, but differently for each revision/version.

Mayhem can result, as you almost always have to split working across teams within teams. The team members obviously do not want to be “taking care of business” on a product that is heading towards “End Of Life” (EOL), as their employment prospects are heading for the same set of buffers. Management knows this as well, and in effect tells employees their days are over by how they select those who stay on the old revision of the product. Those on the old product either want to get into the “next revision team” or to jump ship whilst their skill base and knowledge still have market value. Either way, the support of the EOL’d revision “does not get the love it needs”, unless management finds some way to keep all their development teams motivated, which can be hard.

MrC March 10, 2021 12:44 AM

I can’t help but wonder what proportion of the “the driver software for our industrial equipment only runs on WinXP” and “our irreplaceable industry-specific software only runs on WinXP” problems could be resolved with a VM or WINE.

wiredog March 10, 2021 5:24 AM

As I pointed out above, and as Clive said, industrial hardware lasts decades. So I wrote software to run that hardware (including writing my own assemblers for some hardware) and the software ran on the best available OS at the time, MS-DOS. 15 years later (in 2010) it was still running on 16-bit MS-DOS (5.5 IIRC). That’s just how it is in industry. You build to (hopefully) the best practices of the time.

Later I ended up in the intel community. I worked on one highly classified project doing very secret-squirrel stuff. 10 years later it was so OBE’d (overtaken by events) that the things the government paid millions for are now built into the OS (much to the government’s dismay). Meanwhile, 15-to-20-year-old motion control software that communicates over RS-485 is still running.

Peter March 10, 2021 8:29 AM


This is an excellent suggestion, but runs into a few problems.

In my industry, it is ruled out by “Computer Systems Validation”. Basically, any changes made to hardware or software must be documented and tested in the dumbest and most expensive way possible before we can use them. This is all done by the vendors, and absolutely no one wants to do this in house.

Also, setting up a custom VM with an ancient OS/drivers may be beyond the skill levels of IT at a small shop. This is not the sort of thing you can get done at your local PC repair place either.

Not Surprised March 10, 2021 9:09 AM

I’m surprised you’re surprised, Bruce. Nothing in the security world surprises me. I often look at stuff and say … what could POSSIBLY go wrong?! I think we’d all be in a better place if that question were asked more often.

Supply chain. Maintenance contracts. If the integrators knew what they had, then there wouldn’t be a problem. Or they know it’s a problem, but it’s too expensive to fix. Or a fix is available, but someone forgot to pay their maintenance contract so they don’t get it. Or the person that got the notice that there’s a fix left the company, and no one else knows the fix exists.

If security were easy, then we’d all be jobless. I’m also a firm believer that there is no such thing as new threats. It’s just the same weaknesses hidden under new interfaces and features.

Fed.up March 10, 2021 10:50 AM


The USA doesn’t have the issue of American software engineers job-hopping. Over a decade ago they were forced to train their offshore replacements within a week or two and then were mass-fired. It is very common to walk onto a software dev floor at the biggest American companies and not see a single American engineer. It is safe to say that every major breach in the US involves some element of outsourcing. The only thing you can blame the customers for is buying into the outsourcing model and not joining forces to fight back against deceptive trade practices, which is ultimately what this is. Microsoft doesn’t sell directly to the enterprise; there is always an integrator involved, which just goes to show the cost and complexity of getting up and running.

Software licenses should now include a life span, with warranties like a car’s, where the vendor guarantees the EOS date and commits to supporting the software until then. I once worked for a vendor that sold multi-million-dollar systems to one of the most powerful institutions in the world. The vendor retired their system 4 years later, and as a result the vendor never worked in that industry again. The fallout was sudden and swift after 30 years of dominance.

If Microsoft doesn’t want to support on-prem anymore, then ultimately another vendor will step up and fill that need. It’s not like email is even that important anymore. But for legal purposes it cannot go to Microsoft’s cloud, and in the cloud it also loses search functionality and other necessary features, such as embedding an email within an email. As I said, Microsoft doesn’t understand their customers. Perhaps because they don’t have access to them, or probably because they don’t care.

Microsoft was sued for antitrust in the 1990’s. They made it too difficult for competitors to use their OS. The proposed remedy was to break the company in two: one company for the OS and the other for the apps. If you read this, it sounds like this case needs to be revisited. Their customers cannot be on upgrade cycles arbitrarily decided by Microsoft. htt ps://www.investopedia.com/ask/answers/08/microsoft-antitrust.asp

Fed.up March 10, 2021 11:34 AM


There’s records retention laws in each country. In some sectors in Germany some data needs to be kept more than 25 years. Systems maintaining legal records need to be built with this in mind. There needs to be a universal data dictionary that goes beyond ISO 15022 and covers any type of data covered by universal records retention laws. Records such as this benefit from structured data. It doesn’t make sense to put them in the cloud. Because these legacy records do not need to be instantly accessible. And records that need to be kept for decades are often very sensitive information.

Many mature data centers look like IT museums because they need to keep retired legacy equipment to restore records from decades ago. But these systems are often air gapped.

Unstructured data is a nightmare. It cannot be secured when you migrate it to the cloud, because you lose the metadata, so you don’t even know the real create date or real author, nor what you have or where it is. This is also a violation of SOX if it controls financial or risk records. I know the push has been towards unstructured data for the past 15 years, but then tell me why Facebook’s data is structured (look at the archive)? The truth is you cannot perform machine learning on unstructured “noisy” data. But everyone is pushing cloud because then they can supposedly use artificial intelligence, which is the furthest thing from the truth. For it to be machine learning, the model needs to learn from the data itself, not from a ‘scientist’ instructing it what to look for or what the associations should be.

Cloud is not the solution to these Microsoft attacks. Quite the contrary. Solution is keep your data where you can see it and make sure you know exactly what you have, where it is, who has access to it and where it travels.

Clive Robinson March 10, 2021 12:46 PM

@ Fed.up,

There’s records retention laws in each country. In some sectors in Germany some data needs to be kept more than 25 years.

Actually it’s a lot longer than that; you’ve forgotten,

1, Land “Title Deeds”.
2, Land/Property “leases”.

Title deeds are “in perpetuity”, and the longest lease I know of off the top of my head is that of the Guinness Brewery in Ireland, which if memory serves correctly is 9,999 years.

The UK Gov got rid of title deeds in favour of electronic records that are in no way to be considered to be even remotely as secure as a bit of paper, which as a physical object can be tested in various ways to see if the tests show up fabrication or modification.

As was being discussed the other day on the squid thread, you will be extraordinarily lucky if digital storage media makes it to 25 years these days …

Thus you have to consider how you secure information in an immutable and preferably traceable way…

Well, we do not know how to do that. On average crypto systems have about 25 years of “industry life”, with an average of 17 years to become accepted in the industry.

So RSA, published in 1977, is in its fifth decade, and 17+25 is 42 years, so I’d say it’s getting close to being “past its ‘best before date'”… The fact people are talking about signatures of 2 kbytes kind of suggests it’s already swimming in troubled waters…
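The “2 kbytes” remark can be grounded with a little arithmetic. A minimal sketch, assuming the NIST SP 800-57 equivalent-strength table and the fact that an RSA signature is as long as the modulus:

```python
# RSA signature size grows with the modulus, so chasing higher
# security strengths makes signatures balloon.
# NIST SP 800-57 equivalence: symmetric-strength bits -> RSA modulus bits.
rsa_modulus_bits = {80: 1024, 112: 2048, 128: 3072, 192: 7680, 256: 15360}

for strength, bits in sorted(rsa_modulus_bits.items()):
    sig_bytes = bits // 8  # an RSA signature is one modulus-length value
    print(f"{strength}-bit security -> {bits}-bit RSA modulus, "
          f"{sig_bytes}-byte signature")
```

At 256-bit strength the modulus is 15,360 bits, giving a 1,920-byte signature, i.e. close to the 2 kB figure in the comment.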

JonKnowsNothing March 10, 2021 2:08 PM

@Clive @All

re: The UK Gov got rid of title deeds in favour of electronic records

In the USA, during the collapse of the housing market, banking, derivatives and asset stripping of citizens all over the globe, one of the peculiarities of the collapse was the way house mortgages were Sold On to the investment market.

iirc(bad) recap:

  • A person wants to buy a house, and if they do not have enough cash they go to a Lending Agency (bank) and get a Loan.
  • In the USA, many home loans are backed by a US Gov agency, which gives the funds to the banks specifically to finance house sales.
  • The Lender(bank) holds the deed or has a lien (claim) on it.
  • The home buyer effectively gets a 30 year long term rent-to-buy setup which in the USA lasts 5-7 years before it is recast.
  • The loan is bundled with other loans of varying risks and sold to Institutional Investors. If the bundle has a “great credit score” the interest payback is small. If the bundle has a “poor credit score” the interest payback is high (risk).
  • As these riskier bundles were traded on with higher and higher risk factors, eventually the house of cards collapsed. The results are still sitting in Greece waiting for another blow up.
  • As the banks collapsed, so did the loans that needed recasting, and people lost their jobs, and foreclosures became standard fare and fully automated with the courts in various jurisdictions.
  • What was found was that oftentimes the legal documents were RoboSigned, with forged signatures, and filed with the courts as fait accompli.
  • Someone got curious, because in order to reclaim the property someone had to have had the title and a lien on the title. These RoboSigned documents and filings had neither. No one knows where the title is or who owns it. All somewhere in the bowels of Greece.
  • The banks have since found ways to officially forge signatures that are missing and to re-claim ownership of property without full proofs normally required (Title Search).
  • The electronic Title is worthless unless you are a Bank that can forge signatures and file claims.
  • The cycle is soon to repeat as part of the Pandemic.

A useful counter is to demand that the agency provide Full Proof of Title and Complete Unbroken Chain of Ownership and Verified Title Search. Quite often the lender cannot provide all the required documents proving their claim and so the eviction/foreclosure may be voided.

Fed.up March 10, 2021 2:23 PM

@ Clive Robinson

Yes, I know of Paper Records Archives that go back hundreds of years.

It will never be possible or legal to migrate some sensitive data to the cloud. For Government and regulated industries, which is most multinationals, this is the reality. I’ve been interviewed by big cloud players who wanted to know the secret of how to “force this migration”, and short of rewriting hundreds of US laws, I don’t think it is possible. There are even more onerous laws in the EU.

Also, I don’t know how eDiscovery or Legal Hold works outside of the US, but here lawyers can put a hold (do not alter or destroy) on blocks of data. Litigation in the US often takes decades, and I’ve seen petabytes of data on legal hold. In one place I worked there was so much litigation that 70% of their data was on legal hold. But this is because data is unstructured, so lawyers scoop up way too much even if they use specialized software to do so. I am not going to comment on how Azure legal hold works, but I invite everyone to look at it.

Unstructured data causes all of our ills. We cannot comply with GDPR or any data privacy law because of it.

ht tps://www.law.com/legaltechnews/2020/05/27/how-gdpr-and-ccpa-apply-to-unstructured-enterprise-data/?slreturn=20210210152723

Should legacy regulated data be migrated to the cloud? Never, IMHO. Much safer and cheaper to put it on tape, just like Big Tech does. Yes, they all still use tape.

Github has an on-prem version, I wonder if Microsoft uses it? If not, why? If so, was their on-prem version breached? https://www.enterpriseready.io/github/deployment-options/

JonKnowsNothing March 10, 2021 5:55 PM

@Fed.up @Clive @All

re: Fallout over Destruction of Hard Copy Legal Documents

A now, infamous case of deliberate destruction of legal documents continues in the UK. A policy called Hostile Environment enacted by the then Home Secretary Theresa May (later PM, now ex-PM) deliberately destroyed the only existing legal immigration papers of an entire generation, including their children and grand children.

Ms May destroyed the hard-copy manifests and documents of nearly 500,000 legal immigrants to the UK. Many of these people, their children, and grandchildren qualified for UK citizenship even under the arcane rules of the UK. Having had their bona fides destroyed, they were deported, denied citizenship, and were unable to work, get medical care, rent a house, go to school, or challenge these actions.

One hardly expects one’s citizenship to be mulched, destroyed, and negated, only to find oneself exiled and banished as a result.

The scandal continues unabated by the UK Government and the department, now run by Priti Patel.

The sad part is that the USA does similar things but is a bit better at covering them up. However, as more US citizens are deported, exiled, and banished, it is getting harder to redirect inquiries.

Many governments seek to disenfranchise their citizenry by any means possible, quite a bit of that is currently in progress (yet again) in the USA.

ht tps://en.wikipedia.org/wiki/Home_Office_hostile_environment_policy
ht tps://en.wikipedia.org/wiki/Theresa_May
ht tps://en.wikipedia.org/wiki/Windrush_scandal

ht tps://en.wikipedia.org/wiki/Priti_Patel

ht tps://www.theguardian.com/uk-news/2021/mar/05/windrush-victim-denied-uk-citizenship-home-office-admitting-error-trevor-donald
(url fractured to prevent autorun)

Dave March 11, 2021 2:22 AM

@Peter: Ran into an analogous problem recently with replacing a firewall in an aircraft that had minor cracks from, most probably, a too-hard landing. Cost to get it made in a local shop from 316 stainless: Under a thousand dollars. Cost to get the official thing, made from some SS alloy that was current circa WWII: Seventy thousand dollars. The price difference, and reason for use of some museum-grade alloy, was that one was type-approved and certified and the other wasn’t.

No Clue March 11, 2021 10:30 AM

This is a very disappointing blog post. Complaining and pointing to vulnerabilities is the easy part of security.
Prioritizing security efforts in the real world is the much more difficult task. And it sometimes includes accepting to leave some things vulnerable.

JonKnowsNothing March 11, 2021 1:24 PM

@No Clue

re: Prioritizing security efforts in the real world is the much more difficult task. And it sometimes includes accepting to leave some things vulnerable.


  • no doors?
  • no walls?
  • no livestock?
  • no fodder?

What do you have left if the barn is “vulnerable by choice”?

Once it’s all gone, taken, re-homed, what exactly are you protecting?

Leaving something insecure may be an option, if you have no ability to change that.

  • If you don’t own the land because the NSA owns it.
  • If you don’t own the barn because M$ owns it.
  • If you don’t own the contents because FB claims it.
  • If you don’t own the usage because Google controls it.

That’s pretty much the picture of an empty barn that doesn’t exist, except in VR.

nicola March 15, 2021 5:11 AM

I’ve seen things you people wouldn’t believe. Old servers on line off the shoulder of Orion. I watched old software glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears…in…rain. Time to die. 🙂
