Mac OS X, iOS, and Flash Had the Most Discovered Vulnerabilities in 2015

Interesting analysis:

Which software had the most publicly disclosed vulnerabilities this year? The winner is none other than Apple’s Mac OS X, with 384 vulnerabilities. The runner-up? Apple’s iOS, with 375 vulnerabilities.

Rounding out the top five are Adobe’s Flash Player, with 314 vulnerabilities; Adobe’s AIR SDK, with 246 vulnerabilities; and Adobe AIR itself, also with 246 vulnerabilities. For comparison, last year the top five (in order) were: Microsoft’s Internet Explorer, Apple’s Mac OS X, the Linux Kernel, Google’s Chrome, and Apple’s iOS.

The article goes on to explain why Windows vulnerabilities might be counted higher, and gives the top 50 software packages for vulnerabilities.

The interesting discussion topic is how this relates to how secure the software is. Is software with more discovered vulnerabilities better because they’re all fixed? Is software with more discovered vulnerabilities less secure because there are so many? Or are they all equally bad, and people just look at some software more than others? No one knows.

Posted on January 11, 2016 at 2:33 PM • 29 Comments

Comments

Z January 11, 2016 3:09 PM

The interesting discussion topic is how this relates to how secure the software is. Is software with more discovered vulnerabilities better because they’re all fixed? Is software with more discovered vulnerabilities less secure because there are so many? Or are they all equally bad, and people just look at some software more than others? No one knows.

Yes, indeed, no one knows. There’s no relationship between the security of a piece of software and the number of vulnerabilities found in it, aside obviously from the trivial observation that a vulnerability discovered and patched can no longer be exploited. But how significant this is for the overall security of the software isn’t clear – what if there are tens or hundreds of unknown vulnerabilities for every discovered one? If that is the case, then finding a new one would be only marginally useful, or could even be counterproductive, since discovered vulnerabilities end up being massively exploited…

Also known as “the big elephant in the room of the infosec industry that so many fail to acknowledge.”

BlackJack January 11, 2016 3:42 PM

Right…
But I wonder how many users keep their systems and apps constantly up to date (regardless of auto-update and patch delivery). Because if you don’t do that, with such a high number of vulnerabilities… you can be severely bashed.

Thomas January 11, 2016 4:37 PM

Ah, nostalgia….

I miss the days of “X is more/less secure than Y because it has more/fewer/smaller/larger patches”.

Can we do “X has a lower TCO than Y” next?
How about “license X is better than y because it does/doesn’t cause communism/cancer/cruft/colour-blindness”?
And finally: “vi vs emacs”

jdgalt January 11, 2016 5:02 PM

Software should be built like a good building: from the ground up, on a solid foundation. A system like Windows is just a lot of kludges thrown together; it lacks that solid foundation, so if it’s still in use 50 years from now, they’ll still be finding defects in it.

Unfortunately, the same goes for approximately all systems newer than 1970.

Nox January 11, 2016 5:46 PM

@jdgalt
I’d challenge you to find any “relevant” (i.e. non-academic-playground) software meeting your “specification”. I dare say that any OS, and indeed any big SW project nowadays, is a patchwork. It may start out monolithically but will never stay that way.

@BlackJack
Without auto-update less than 50%?

Regressing in Upgrades January 11, 2016 6:51 PM

The latest software is full of vulnerabilities because it is constantly changing. Testing cannot find many vulnerabilities simply because of schedule and cost constraints.

Each user/system administrator needs to ask themselves if they really need this update.

Those who choose a mature OS and stable applications don’t need to ‘upgrade’. They enjoy long-term stability, less risk and headache. Newer is not necessarily better, just different. Investors demand continuous new ways to monetize users.

To lessen risk, people are also keeping their phones longer and installing fewer applications.

Windows 10 not only spies, but FORCES a continuous stream of new code ripe for (NSA) zero-day exploits. Many of these updates don’t benefit users but rather sneak in new ways to data-mine. Can back-doors be secretly installed on targeted machines?

I also stuck with Windows 8.0 when MS made local File Explorer searches PUBLICLY shared, so I removed the Search functionality in the Control Panel. In Windows 8.1 these forced shared searches cannot be uninstalled.
The quiet signs NOT to upgrade were apparent years before the boiling-frog public finally became aware with Windows 10.
Is it safe now (eight years later) to use ‘secure’ software from Juniper? LOL

Buck January 11, 2016 8:35 PM

@Regressing in Upgrades

Windows 10 not only spies, but FORCES a continuous stream of new code ripe for (NSA) zero-day exploits. Many of these updates don’t benefit users but rather sneak in new ways to data-mine.

That rather depends on those users’ individual jurisdiction, does it not? If the NSA and cooperating partners are constantly patching up the old exploits and ensuring that no other entity has access to their current backdoors, that’s not necessarily a bad thing for people residing in countries with coercive power and ideologies opposed to Microsoft, the NSA, and friends…

In all likelihood though, I’d suspect that many of these persons probably also work for a living at some corporation. If those companies are in competition with American businesses, then yeah, it may not be good for their livelihoods to ever use Windows…

The truly tricky part however is deciding what other options are safer to use!

Jacob January 11, 2016 9:18 PM

  1. The vast majority of SW shows, on each revision, some bug fixing and some new features. The new feature set is the scary part, so the overall rate of improvement of software in the field is extremely elusive.

In addition, if detailed info is available about a specific critical bug – such that the reader can assess how stupid it is and whether it reflects on the vendor’s prowess, dev procedures, and release policies, taking into consideration the company’s response to said bug – then one can assume some fuzzy metrics regarding trust in that vendor’s products.

How many of you would trust Trend Micro, for example?
https://code.google.com/p/google-security-research/issues/detail?id=693

(The problem is that not many bugs are opened to the public to such an extent)

  2. This is also relevant – CPU bugs:
    http://danluu.com/cpu-bugs/

Kyle Rose January 11, 2016 9:24 PM

@Regressing in Upgrades: A “mature OS” stops being patched at some point. How do you deal with that problem? Security patches only go back so far, and you generally can’t upgrade components piecemeal.

You sort of point this out with your last line:

“Is it safe now (eight years later) to use ‘secure’ software from Juniper? LOL”

SocraticGadfly January 11, 2016 9:33 PM

@Regressing, there’s also the issue of older software not being able to handle newer versions of files, etc. Pre-Intel Macs (really it’s OS 11 after that point, but that doesn’t sound kewl to the kids in Cupertino) won’t run today’s Internet browsers. Older versions of Photoshop won’t do RAW, JPEG 2000 or other formats.

Colin January 12, 2016 3:57 AM

@Kaan – “Mac OS X 10.1 to 10.11 versions are counted as one but xp, vista, 7, 8, 10 are seperate.”

Yes, but that’s a Mac minor version difference, so it’s still the same operating system. The Windows operating systems are entirely different entities and therefore cannot be counted as one.

But if they were the same entity, you wouldn’t sum up the counts for each security flaw; you’d count each distinct hole only once, since fixing it for one would fix it for both.

Mic Channel January 12, 2016 4:19 AM

@Colin – “but that’s a Mac minor version difference, so it’s still the same operating system”.
OS X 10.0 was released before Windows XP, and you’re saying that it’s the only operating system Apple has produced in almost 15 years?
Fact – just because Apple markets all the operating systems as 10.something does not mean they are point releases.

Neill Miller January 12, 2016 4:33 AM

just comparing total counts is almost useless –

one would need to consider the # of users, the severity of the problem, and the location of that system, e.g. a server at a hospital gets a much higher ranking than a single user system

global threat = #users x #severity x #ranking

but as we know n>1 is one too many

we all have become “beta” testers now …
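Neill’s rough formula can be sketched as code. Everything below – the function, the weights, and the example figures – is illustrative assumption, not data from the article:

```python
# Illustrative sketch of the comment's "global threat = #users x #severity x #ranking".
# The weights and example figures are made-up assumptions, not real data.

def global_threat(users: int, severity: float, ranking: float) -> float:
    """Weight a raw vulnerability by user base, severity (e.g. a CVSS-style
    score), and deployment ranking (e.g. hospital server vs. single-user
    desktop)."""
    return users * severity * ranking

# The same bug matters far more on a widely deployed, critical system:
desktop = global_threat(users=1, severity=7.5, ranking=1.0)
hospital = global_threat(users=10_000, severity=7.5, ranking=5.0)
print(desktop, hospital)
```

Even with such weighting, as the comment notes, n > 1 is still one too many.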

jeremyp January 12, 2016 6:43 AM

@Colin

Yes, but that’s a Mac minor version difference, so it’s still the same operating system. The Windows operating systems are entirely different entities and therefore cannot be counted as one

This misunderstanding is one reason why Apple use the code names rather than the version numbers now. The difference between OS X 10.x and OS X 10.(x+1) is usually of the same order of magnitude as the difference between any two “major” versions of Windows. I put “major” in quotes because the reality is more complicated. In fact XP, Vista, 7, 8, 10 are all releases of one operating system: Windows NT. Their version numbers are 5.1, 6.0, 6.1, 6.2 and 10.0 respectively (Microsoft skipped versions 7, 8 and 9 altogether). So you see, many of the “entirely different entities” are also minor version differences.
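The mapping jeremyp describes can be written down as a small lookup table (a sketch; the version strings are as stated in the comment):

```python
# NT kernel versions behind Windows marketing names, per the comment above.
NT_VERSIONS = {
    "Windows XP": "5.1",
    "Windows Vista": "6.0",
    "Windows 7": "6.1",
    "Windows 8": "6.2",
    "Windows 10": "10.0",
}

# Three "major" marketing releases are in fact minor bumps of NT 6:
nt6_releases = [name for name, v in NT_VERSIONS.items() if v.startswith("6.")]
print(nt6_releases)  # ['Windows Vista', 'Windows 7', 'Windows 8']
```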

Who? January 12, 2016 7:03 AM

The interesting discussion topic is how this relates to how secure the software is. Is software with more discovered vulnerabilities better because they’re all fixed? Is software with more discovered vulnerabilities less secure because there are so many? Or are they all equally bad, and people just look at some software more than others? No one knows.

There are a few facts to take into consideration.

Does the operating system provide some sort of proactive security approach to mitigate the effect of exploiting a previously unknown vulnerability? How seriously is proactive security being implemented? For example, address space layout randomization (ASLR) is not the same on Linux and OpenBSD; the latter has a more robust ASLR implementation. Use of system calls that force a process into a restricted-service operating mode (like pledge(2) on OpenBSD -current) helps stop unwanted behaviour in running processes.

Is the source code being audited? Finding one hundred vulnerabilities in source code audited by a large team of highly skilled developers is not the same as finding them in code that is only occasionally reviewed by a few committers.

In my humble opinion, the number of lines of code does not really matter a lot. Obviously, the larger the code base, the greater the number of bugs that may become exploitable. However, even really small software tools may have serious bugs. The number of bugs itself is not a very good metric.

Is software being coded using modern security-oriented programming practices? Doing so helps reduce and identify software bugs before code is put into production.

Are processes being run on restricted (sandboxed, chrooted, jail) environments? Do processes follow a privilege separation model? In this case, even exploitable bugs may have a reduced impact on system security.

Marco January 12, 2016 7:20 AM

My 2 cents regarding error detection and overall software quality: think of the new Tesla software release (7.1). They claim to have fixed a number of bugs, but also to have improved usability thanks to their capability to collect vehicle data.
An example: auto-steering was not very efficient in curves when the angle was too sharp and the speed too high; they collected usage data (thanks to the constant monitoring of the connected software) and corrected it without even a single user complaint. Did you ever have a Ford/Chrysler/Toyota firmware upgrade?
IMHO, software quality can better be measured by its continuous-improvement process than by the blunt number of defects over time (even though software packages usually reach a maturity level, with the number of defects decreasing over time).

herman January 12, 2016 7:47 AM

@Colin: “The Windows operating systems are entirely different entities and therefore cannot be counted as one.”

That is a marketing myth. If every major Windows version were different, then why do the latest versions suffer from the same old bugs as the older versions?

Who? January 12, 2016 12:49 PM

@ herman

That is a marketing myth. If every major Windows version were different, then why do the latest versions suffer from the same old bugs as the older versions?

Good catch! Staff at Microsoft are not very good with version control systems. 🙂

Who? January 12, 2016 12:53 PM

I was thinking about how some (supposedly fixed) bugs sometimes reappear a few years later in new Windows versions.

After re-reading your post, I understand now that you mean the same bugs being fixed in different Windows releases… indeed, Windows versions share bugs (lots of them), not to mention zero days.

Anura January 12, 2016 2:08 PM

@jeremyp

Wait, are you suggesting that version numbers are not based on universal, rigorously determined criteria, but are, in fact, entirely arbitrary?

Anyway, there are major differences between Windows versions, but they aren’t complete rewrites and do, in fact, share most of the same underlying code and components. However, that’s irrelevant; the question is how they are counting the vulnerabilities. For OS X, are they counting only the exploits present in the latest release, or also those that exist only in prior releases? In Windows, an exploit typically appears once for every product it affects; so a vulnerability in Windows 10 will also show as a vulnerability in Windows 8, Windows 7, Server 2008, Server 2003, Windows XP, and Windows 2000.
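Anura’s counting point can be made concrete with a toy example. The CVE IDs and product lists below are invented for illustration, not taken from any advisory database:

```python
# Toy illustration of per-product vulnerability counting: if one flaw is
# recorded once per affected Windows release, totals inflate relative to
# counting distinct flaws. The CVE IDs here are fabricated.

advisories = [
    ("CVE-2015-0001", "Windows 10"),
    ("CVE-2015-0001", "Windows 8"),
    ("CVE-2015-0001", "Windows 7"),
    ("CVE-2015-0002", "Windows 10"),
]

per_product_entries = len(advisories)                 # 4 listed entries
distinct_flaws = len({cve for cve, _ in advisories})  # only 2 real flaws

print(per_product_entries, distinct_flaws)  # 4 2
```

Whether a counting methodology deduplicates like this is exactly what determines how the per-product totals compare.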

Clive Robinson January 12, 2016 2:45 PM

@ Herman,

If every major Windows version were different, then why do the latest versions suffer from the same old bugs as the older versions?

For the same reason that some attacks bring down various versions of multiple OS’s such as *nix, Windows and MacOS.

Either the developers filch Open Software (like the BSD networking code) or the standard / protocol they write to is ill defined or incorrectly designed / specified, and interoperability testing makes them all fail according to specification.

Then of course there is the favourite of all managers, “the holy grail of code reuse”. Just like the C lib, most library code rarely gets touched once it has been written and tested. If the testing is none too thorough – and it tended not to be until a decade or so ago – then bugs would stay in the library, either ignored or with work-around code in the program body so other code would not break.

There are a few people around who can tell you tales of “horror and derring-do” with MS Win MFC, secret/hidden entry points, and a distinct lack of documentation, where MFC programmers would hoard fixes, work-arounds and knowledge and withhold them from other programmers like misers hoarding pennies.

There really was an ethos amongst MFC programmers of “I went through hell and back to get my MFC cred, there is no way I’m giving you a free ride!”, and in the middle of the mess sat the poison spider queen of Microsoft, avoiding litigation where they could and making false claims in court about the integration of IE and how it could not be removed… which in turn affected all the MS OSs that IE was “included with”.

Yup, those were the days of “vinegar and thorns”, not “wine and roses”.

Then, if you believe such things, Bill Gates had a vision of how code should be written, and lo, he came down the mountain with tablets of stone etc etc… The real story was that Microsoft’s reputation was lower than a snake’s belly in a wheel rut at the bottom of Death Valley. Malware was chewing up product development time, and as with all Red Queen’s races, the MS developers were running as hard as they could just to stay where they were. They had bugs seeping out of every pore, each one just another little cut through which money and reputation bled.

So Microsoft actually started to sort their act out. To be fair, they still get bugs and malware, but then they do have the major chunk of the desktop and graphical server admin interfaces. Thus they are in effect “target zero” for exploiters, and even with this, and carrying the “backwards compatibility” burden, the number of bugs and program errors is effectively dropping.

Sadly, just as they were improving their reputation for security, they went and ruined it all again with Win 10. Arguably it does everything FBI director Comey wants for his “front door”, so all the FBI and NSA have to do is get the data from Microsoft, or “sniff it off the wire” on its way to Microsoft…

@ Z,

aside obviously for the trivial observation that a vulnerability discovered and patched cannot be exploited any longer.

If only that were true…

Look at it this way: you can fix the problem or fix the symptoms, and often it’s the latter not the former, especially when backwards compatibility is required or the problem is in effect an architectural issue. I’ve seen one fix undo an earlier fix, even when there was no potential malice involved [1].

[1] Juniper Networks is perhaps the most recent demonstration of fixing one problem but opening another in the process. Some have gone as far as saying that what happened at Juniper could not have happened by accident; it’s too improbable [2]. Thus the implication is that the bugs are there at the behest of a state-level agency.

[2] The problem is we don’t have any real information on cascade vulnerabilities. All we know is that some do not get found for years. To reason about whether a four-step vulnerability is deliberate or not, we would need to know how many “near misses” there are where three out of four steps get through testing and code review etc., but by luck the fourth step does not happen. The thing is, we are never likely to know, because if the other steps are seen in-house they will in effect be silently fixed.

SocraticGadfly January 12, 2016 6:29 PM

@jeremyp, your statement, on the other hand, is an overstatement. Really, there’s only one major divide, and that’s between 10.5 and 10.6. Realistically, this is like the break between XP and 7. Now, there may be lesser transitions between 10.6, 10.7, etc., but not every Windoze transition is equally large, either.

tyr January 12, 2016 11:54 PM

Clive is far too kind to Microsoft and their business model. I’ve been running their software since CP/M days and have seen them deliberately break their own legacy code and fix it again in the next release.

As a business model it makes the company rich; as a method of ruining your customers’ business models it is also quite successful. I keep hoping people will learn enough to demand software that works instead of chasing the next upgrade fix for bad coding and crooked business practices.

Thomas Schneider January 13, 2016 5:44 AM

Determining which software is more or less secure can’t be done on statistics alone, but only by the people who find the vulnerabilities.
If you take a quick glance at a product and can gain remote code execution, it is less secure than a product where you have to spend 3 months and chain 4 bugs to execute something with a probability of 50%, if you catch my drift.
An analysis that questions the finders of the vulns would indeed be interesting, to get a better picture of the situation – even though I’m 100% sure that adobe * still suckz.

MrTroy January 13, 2016 7:38 PM

@jdgalt, @Nox,

Indeed, I’m still waiting for a capabilities-based microkernel to get to the stage of being usable. What comes after Coyotos/CapROS?

