On Vulnerability-Adjacent Vulnerabilities

At the virtual Enigma Conference, Google Project Zero’s Maddie Stone gave a talk about zero-day exploits in the wild. In it, she talked about how often vendors fix vulnerabilities only to have the attackers tweak their exploits to work again. From an MIT Technology Review article:

Soon after they were spotted, the researchers saw one exploit being used in the wild. Microsoft issued a patch and fixed the flaw, sort of. In September 2019, another similar vulnerability was found being exploited by the same hacking group.

More discoveries in November 2019, January 2020, and April 2020 added up to at least five zero-day vulnerabilities being exploited from the same bug class in short order. Microsoft issued multiple security updates: some failed to actually fix the vulnerability being targeted, while others required only slight changes that required just a line or two to change in the hacker’s code to make the exploit work again.

[…]

“What we saw cuts across the industry: Incomplete patches are making it easier for attackers to exploit users with zero-days,” Stone said on Tuesday at the security conference Enigma. “We’re not requiring attackers to come up with all new bug classes, develop brand new exploitation, look at code that has never been researched before. We’re allowing the reuse of lots of different vulnerabilities that we previously knew about.”

[…]

Why aren’t they being fixed? Most of the security teams working at software companies have limited time and resources, she suggests—and if their priorities and incentives are flawed, they only check that they’ve fixed the very specific vulnerability in front of them instead of addressing the bigger problems at the root of many vulnerabilities.

Another article on the talk.

This is an important insight. It’s not enough to patch existing vulnerabilities. We need to make it harder for attackers to find new vulnerabilities to exploit. Closing entire families of vulnerabilities, rather than individual vulnerabilities one at a time, is a good way to do that.

Posted on February 15, 2021 at 6:14 AM • 20 Comments

Comments

me February 15, 2021 6:57 AM

I work with certifications of a sort (unrelated to computers and computer security).
When the inspector comes, he says, for example, “let’s see if the expiry date of item X is valid,” and if you forgot to check that, he says, “OK, I found that this one was invalid; now it’s up to you to check whether all the other expiry dates are correct.” He will not check every expiry date; he will check only one of them and then move on to a different class of problems, saying, “I’m doing a vertical check; it’s up to you to expand it horizontally.”
I think it should be the same in security: if someone reports a buffer overflow to you, you don’t fix THAT overflow on that one line; you think, “OK, I coded a buffer overflow; if I did it once, I probably made the same mistake over and over throughout the code.”
The same goes for other classes of vulnerabilities. For example, if you did not know what SQL injection was, you don’t just fix the one that someone found; you assume that your code is full of them and search for them.
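To make that concrete, here is a minimal sketch (hypothetical Python/sqlite3 code, invented for illustration) of the difference between patching the one reported query and fixing the whole class by moving every query to parameterized form:

    import sqlite3

    def find_user_vulnerable(conn, name):
        # The reported bug: untrusted input concatenated into the SQL text (injectable).
        sql = "SELECT id, name FROM users WHERE name = '" + name + "'"
        return conn.execute(sql).fetchall()

    def find_user_fixed(conn, name):
        # The class-wide fix: a parameterized query, applied to every query in the
        # codebase, not just the one that appeared in the report.
        return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        conn.execute("INSERT INTO users VALUES (1, 'alice')")
        payload = "' OR '1'='1"                     # classic injection payload
        print(find_user_vulnerable(conn, payload))  # leaks every row
        print(find_user_fixed(conn, payload))       # returns nothing

The same habit applies to the buffer-overflow case: once one unchecked copy is found, search the whole tree for the pattern (every strcpy or sprintf call, every unvalidated length) rather than patching the single call site in the report.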

Rj February 15, 2021 7:18 AM

The problem is that unless it is a very small company, the guy who wrote the code probably doesn’t even know there is a vulnerability. Somebody else, likely on a different continent, is charged with fixing the bug. A “ticket” system vends the next problem in the queue to the next guy who logs into that system and asks for another problem to fix. These guys get rated on how many problems they can fix in a given interval of time, not on how good the fix is, and there is no extra credit for doing a good job and fixing all similar problems in the same body of code. It’s just how many tickets you closed today.

This mentality encourages the tactic of knowingly leaving similar problems untouched, as they will be easier to fix later when someone writes a ticket on them. Remember, they only count tickets, not instances of problems, nor types of problems.

Furthermore, I have seen situations where the fix introduces additional problems, but full regression testing after every fix is too time-consuming, so they batch a bunch of fixes together before regression testing the whole mess and then write more tickets for the problems they find. As you can imagine, these tickets are vended impartially to essentially random junior workers with no connection to the particular fix that caused the failure, because many fixes were regression tested all at once.

If each fix required individual regression testing before it was considered fixed, as is the case in certain regulated environments such as aviation, medical, or functional safety, then we would have a much more secure computing environment, but alas, one that would be prohibitively expensive. After all, MOST of these problems are rather trivial, and have only annoying consequences. They do not kill the patient, crash the airplane, or blow up the chemical plant.

Look at the cost to develop the software for even a rather simple computer controlled medical device that is attached to a patient, or implanted inside a patient. Compare that to the cost of your typical consumer software program that connects to the internet. The consumers would never pay for that level of security.

tac February 15, 2021 7:28 AM

Why would anyone expect anything different?
1) Many companies do not put their ‘A’ team on product support. The original developers are considered too valuable to work on the support side of the house. Also, the support teams are understaffed and have more than just security vulnerabilities to fix — many of these are from customers with critical functional problems. Seems to me that these teams would concentrate their efforts on making customers happy.

Security bugs are just another manifestation of poor SDLC practices management has implemented to satisfy the wet dreams of its CEO.

2) What are the criteria for considering a security vulnerability fixed? For many people, it’s that the given PoC is correctly handled. Yes, developers should horizontally expand their analysis, but this takes time and an understanding of the original design. It may even take a complete redesign of the system in place. So, in order to get the fix out the door, these teams are OK with playing whack-a-mole. This situation exists both in open source and with proprietary software: for open source, I hold up jackson-databind’s issues with RCEs over the past couple of years.

3) Where are the incentives to do the right thing? It’s not like a company faces any penalties for not completely fixing a class of vulnerabilities. Who cares if a product has multiple patches for security vulnerabilities? It will have numerous patches for non-security issues as well. It doesn’t matter what the patch contains as long as the process to apply it can be automated.

Tom Henderson February 15, 2021 8:44 AM

It’s about culture, not a monolithic set of directives.

In corporate environments, a Product Manager holds sway over activities. It’s about product life cycle and revenues, and often not about Quality. The minimum is done to get past the huge red stain on the product. There, that’s finished! What’s for lunch?

Although it’s true that some code is poked more frequently because it represents a gateway to profitable dark hacking, you can look at the daily list of CVEs to find which organizations are deeply involved in quality products by architecture.

Conversely, little praise is given to those organizations whose teams are wholly responsive and whose names do not appear with the repetitive citations of goofy bugs, week after week after week. It’s culture. We don’t laud positive quality architectural culture. We need to do this.

Clive Robinson February 15, 2021 9:37 AM

@ Bruce, ALL,

This is an important insight. It’s not enough to patch existing vulnerabilities.

The real problem is there should not be any vulnerabilities to fix. In theory that is what Beta1, Beta2, and Alpha testing is supposed to cure before release date…

But management-dictated release dates are all… So we end up with games of “Patch Whac-A-Mole”…

Not exactly the way you would run most other manufacturing programs; in fact, you legally would not be allowed to do so. After all, who is going to wait for “Patch 3.51” to have brakes that work on their car…

I know this has all been said before… BUT management are still not getting the message. The reason for that is a whole different conversation, but if it was properly addressed we would not be having this one…

uh, Mike February 15, 2021 10:06 AM

Patches are always incomplete.
There is a popular operating system that, itself, constitutes a family of vulnerabilities.
Like the IBM 360, it hangs onto its past. Unlike the IBM 360, it’s expected to repel attacks.
Oh well.

humdee February 15, 2021 10:21 AM

Bingo. Clive nailed it. There is a sick symbiosis between those who break things and those who fix things, in a dance of mutual need. Security professionals need liars and outliers to justify their jobs. So there is no real interest in wholesale improvements.

JonKnowsNothing February 15, 2021 10:32 AM

@All

Others have put forth many of the reasons why this is not going to change.

Not only is the Bug Fix Team distant, they do not know the entire code section or even the application itself. They get hired to “FIX something” not “KNOW something”.

Even huge companies like Google and Apple rely on Temporary Contract Labor. In the USA there are specific rules about when someone is or isn’t a contractor, and recent laws in California attempted to prevent “GIG contractor exploitation”. This last was overturned by the Gig Economy Companies to ensure they can continue to underpay their drivers (Short Change AI algorithms), overcharge their customers (surge pricing), and create a new Employee-Debtor class of workers (required car purchase, lease, upgrades).

In the programming contractor world, you get a 3- or 6-month contract with a company and then a blanket No-Hire period lasting from 6 months to Never Again. Although these folks are expected to write code faster than Pac-Man eats dots, they can never know more than the immediate task set for them. Everything is Proprietary and Secret. So there’s no chance for them to “take something away”. There’s also no chance they will realize that the same error exists in n other places.

Contractor Companies only provide a middle-man to protect companies like Microsoft from having these workers declared Employees. The only thing Contractor Companies do is to provide the Raw Beef on demand and to rake in profits by skimming the fees.

The rake-off used to be 60%.
If the rate charged to the hiring company was $100:
  $40 went to the contract worker
  $60 went to the contracting company.
Is it worth $100 for Google to hire contractors instead of employees?
Look at the number of contractors they hire to answer that rhetorical question.

There is no contractor, and hardly any employee, who is going to win an “Engineering Discussion” about a design issue with the CTO, Architect, or System Designer.

One might get into the Mix, and that One will certainly be shown The Exit.

Even high-level, highly paid VEEPs have mortgages to pay. They may not worry too much about the cost of the BMW or the Mercedes or the Florida Country Club, but they “Like Their Money” and they are not going to stick their hands up and declare that their design is faulty.

Faulty designs are features to be added in the next release.

xcv February 15, 2021 10:46 AM

vendors fix vulnerabilities only to have the attackers tweak their exploits to work again

That’s the Microsoft Patch Tuesday attitude, and it’s the gospel truth.

“No man also seweth a piece of new cloth on an old garment: else the new piece that filled it up taketh away from the old, and the rent is made worse” —Mark 2:21.

wumpus February 15, 2021 12:05 PM

@Clive Robinson

“Beta1, Beta2, and Alpha testing is supposed to cure before release date…”

In days gone by, alpha testing was done in-house, with beta testing done by customers. Unless customers are doing real penetration testing, they will never find security issues (and if they do find them, it is a security disaster; you might as well dump your stupid IoT device and start from a clean sheet of paper…).

I’m pretty sure this is something that Bruce has been harping on for 20 years: ordinary testing finds the problems customers will face. Security testing requires testing the nasty corner cases that an attacker will hammer in hopes of finding something that breaks and lets him modify something. It won’t happen with ordinary testing, and it will never happen with beta (now called alpha) [customer] testing.
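A minimal sketch of that difference, using only the standard library and a made-up toy parser (nothing here comes from a real product): a happy-path unit test on the parser passes, while a small fuzz loop of the kind a security tester or attacker would run finds the corner cases immediately.

    import random

    def parse_length_prefixed(buf: bytes) -> bytes:
        # Toy parser standing in for real code: first byte is the payload length.
        # A happy-path test (b"\x03abc" -> b"abc") passes; the corner cases an
        # attacker would hammer (empty input, truncated payload) do not.
        n = buf[0]              # crashes on empty input
        return buf[1:1 + n]     # silently returns short data if truncated

    def fuzz(iterations: int = 10_000) -> None:
        # Hammer the parser with random and boundary-size inputs; anything other
        # than a correct result or an explicit, expected rejection is a bug.
        rng = random.Random(1234)
        for _ in range(iterations):
            size = rng.choice([0, 1, 2, 255, rng.randint(0, 300)])
            buf = bytes(rng.randint(0, 255) for _ in range(size))
            try:
                out = parse_length_prefixed(buf)
                if buf and len(out) != buf[0]:
                    print("bug: truncated payload silently accepted, input length", len(buf))
                    return
            except IndexError:
                print("bug: crash on empty input")
                return
        print("no bugs found in", iterations, "iterations")

    if __name__ == "__main__":
        fuzz()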

Finally, I’m pretty sure Microsoft can’t fix the inherent flaws in Windows without killing the entire API from XP and obsoleting mission-critical apps right and left. They’d probably have to obsolete the .NET API as well, but at least they presumably made an effort towards security, unlike Windows’ default “run any code you ever see” basis.

It looks like Windows merely made it hard enough to find zero-days that cybercriminals go elsewhere (like Android), and the big attacks are for bigger hauls like ransomware and state-level attacks. I guess that once a userbase is convinced to install any old app they see, you can manage to have even worse security than Microsoft.

Mike February 15, 2021 12:13 PM

Most in the Infosec industry including these so called CISOs lack any security engineering background. Just filled with all sorts of mostly useless certifications.

Just look at companies like Twitter, Facebook and check their security executive backgrounds. Mostly hired through connections and not really the talent.

Especially, these silicon valley companies filled mostly with Indians and tend to continue their native dirty hiring practices based on caste, religion, language, as such.

Check this link out: https://www.brightworkresearch.com/how-indian-it-workers-discriminate-against-non-indian-workers/
Each and every word is true in that report.
This is at least one of the major root cause of this issue which cuts across the industry in silicon valley.

MikeA February 15, 2021 12:13 PM

Some time ago (2006 or so), I ran into a tool that would make a dent in this problem. It was something like a “semantic patch”. That is, it would search (and potentially replace) code based not on specific text, variable names, etc., but on a localized parse tree. Sort of like a “peephole optimizer”, but a peephole inspector. The company I was working for could have really used such a thing to find instances of “patterns”, but they were circling the drain, so I didn’t get the time to try the tool out while dealing with emergencies.
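(For what it’s worth, the description resembles Coccinelle’s “semantic patches” for C, though the tool in question may have been something else.) A rough sketch of the idea in Python, matching a structural pattern in the parse tree rather than literal text, so renamed variables or reformatted code still hit; the scanned snippet is invented for illustration:

    import ast

    SOURCE = '''
    def handler(db, req):
        query = "SELECT * FROM t WHERE id = " + req.args["id"]
        return db.execute(query)
    '''

    class StringConcatFinder(ast.NodeVisitor):
        # Flag every place a string literal is concatenated with an expression,
        # regardless of variable names, spacing, or surrounding code.
        def __init__(self):
            self.hits = []

        def visit_BinOp(self, node):
            if isinstance(node.op, ast.Add) and any(
                isinstance(side, ast.Constant) and isinstance(side.value, str)
                for side in (node.left, node.right)
            ):
                self.hits.append(node.lineno)
            self.generic_visit(node)

    finder = StringConcatFinder()
    finder.visit(ast.parse(SOURCE))
    print("string concatenation found on lines:", finder.hits)

A real semantic-patch tool would also generate the replacement, not just the report, which is what makes it useful for closing a whole class of bugs at once rather than one reported instance.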

Yeah, all the management stuff is so true. On my very last day, I responded to a product manager who had been stonewalling me on data I needed to complete a release to a major customer. I pointed out that, as it was 4:45 on my last day, the problem he had created was now solely his.

Great feeling. Did feel a bit sorry for that customer, but I am pretty sure the root was their inability to deliver timely and accurate specs, compounded by his not bothering to follow up.

The very first ingredient needs to be giving a damn.

If someone else recalls that tool, I’d love to check it out in my now spare time.

jdgalt1 February 15, 2021 1:58 PM

The root of this problem is the incentive structure. The law needs to take away the ability of software vendors to weasel out of liability for problems that only they have the access and/or knowledge to cure. Software licenses disclaiming that liability must be rendered null and void. Until that happens no system can be trusted.

Ismar February 15, 2021 2:49 PM

Being a software engineer who has changed jobs more often than the average fellow engineer, I can attest to the problems of fixing bugs created while some other developers were doing their best to write the original code.
You are, at best, presented with poorly documented and difficult-to-maintain code while working under tight delivery deadlines.
Hence, I am not at all surprised that patches are both necessary and incomplete.

The solution, or at least the only one I can see, is to build less complex software, at least in those scenarios where security matters the most.

This may go against immediate financial benefits, but not once we realise that all of these immediate financial benefits can easily be undone by damages caused by major security breaches.

In other words, factor in the cost of a potential security failure when planning the development process.

This is currently not done, as discussed before on this blog, simply because the cost of the security failure is not borne by the software manufacturers but by society as a whole (the SolarWinds saga).

Hence, to get the desired level of security, let’s put some effort into designing separate, as-simple-as-possible software systems that don’t have generic bells and whistles but contain only the minimum functionality needed to, say, perform a simple communication task between two endpoints.

To hope that the same can be done securely by complex systems such as Windows or even Linux distros that try to be everything to all people is but to delude oneself.

Pete Forman February 16, 2021 6:20 PM

When I fix a bug I also look beyond the immediate report to search for similar instances. I suspect that is a luxury I am afforded; others may be constrained to do an allocated task and not think beyond it.

RealFakeNews February 18, 2021 8:30 PM

I n+1 all the above!

As I often tell clients I work with: I’m not the fastest, because I try to ensure good quality software.

I use building a high-rise or a house as an analogy for explaining how and why software quality can differ so radically, yet the end user is oblivious because they click a button and what they expect to happen happens (most of the time).

What they have zero clue about is HOW it does that thing, nor do they care.

On one level I’ve stopped trying to explain how bad things are, and just ensure that my own stuff is good.

I agree that waiving liability needs to go. You wrote a piece of software that is bad or went bad? You must fix it.

The worst thing that happened to software was the ability to auto-update. It has made developers lazy, and companies give even less of a damn than they did back in the days of “release is final”.

I also think that too much emphasis is placed on testers and test harnesses (or whatever), and that developers are not doing enough testing themselves to find problems.

You only need to look at some software to realize that certain bugs were never found because someone just straight didn’t try that piece of code.

I’ve worked with some developers who literally wrote code, and NEVER tested it. They were quite happy to push it out the door. I left soon afterwards.

The question is: aside from what we can do individually on our pieces that we work on, what can we do to collectively improve the situation?
