Comments

Winter November 10, 2015 7:24 AM

In summary, Linus is convinced that security comes at a cost, and he wants to see a positive cost-benefit analysis before he will implement it.

But this is nothing new. Linus has always insisted that new (and old) features needed a solid user base and that the benefits would be worth the costs of the code.

Sounds to me like a very sensible position.

Sasparilla November 10, 2015 7:39 AM

Frankly, Bruce, I was looking for your take on the article. I don’t know enough about OS development to tell whether Linus isn’t seeing that the general security environment has been changing over the last decade and isn’t taking it seriously enough (success is a terrible teacher when it comes to noticing changing conditions), or whether he is and was just playing around with the interviewer.

Seeing Microsoft go this way (& remember that new U.S. law giving legal cover to companies sharing with the government) makes me further write them off:

http://www.theregister.co.uk/2015/09/01/microsoft_backports_data_slurp_to_windows_78_via_patches/

Linux is the only open source PC OS with a bit of mainstream appeal, so it was very unsettling to read the article in light of all we’ve learned since Snowden. I had assumed security was a main goal in Linux kernel development, but the article made it sound like security is viewed as a lower-priority convenience requirement (if it doesn’t impact other requirements, performance and usability in particular, then we’ll consider it). I could only think the folks at the NSA must have been smiling after reading the article.

Torvalds_Idolatory_Is_Bad November 10, 2015 8:07 AM

Linus is simply wrong and has surrounded himself with sycophants who stroke his ego and indulge his nonsense as pearls of wisdom.

Security is essential for everyone using a device that has access to personal and financial data. That is, almost everyone connected to the net.

Unless Linus wants the kernel to come under increasing attack from the Stasi who don’t give a shit about his views, he had better start taking GRSecurity and other options seriously.

Why not overhaul the kernel properly: look at a major rewrite and harden it, with an efficiency pay-off? Surely there are savings to be found among 19 million lines of code.

Every major distro should ship hardened kernels as a default install option, instead of forcing users to run vanilla kernels with a (hopeful) patch.

PS You know X-KEYSCORE runs some Red Hat Linux variant with Apache, standard SQL programs etc. It can therefore be sabotaged by inside administrators just as badly as any other Linux box… just saying…. 😉

Uhu November 10, 2015 8:10 AM

I have read the Slashdot thread, and my take on the comments (not the article, which I haven’t read) is that the article is very pro-Microsoft and attempts to show Linus in a bad light.

I read today in another Slashdot article that of the top 10 exploited vulnerabilities, eight are in Flash, one is in Silverlight and one is in Internet Explorer. So I think it is a bit early to panic over alleged security problems in Linux (maybe a lot of desktops are running Windows, but a lot of servers are not; if there were a fundamental security issue with Linux, I think we would see more hacked Linux machines).

Aris November 10, 2015 8:54 AM

I read today in another Slashdot article that of the top 10 exploited vulnerabilities, eight are in Flash, one is in Silverlight and one is in Internet Explorer

You can bet the top 10 exploited vulnerabilities on Android include at least 3 or 4 nasty kernel exploits that could have been avoided with existing security technologies despised by Linus.

Linux kernel security is a joke because of their “a bug is a bug” stance, which is out of touch with the real Linux world, in which nobody uses the latest release (= the highly unstable development branch). Distro maintainers are left alone to triage thousands and thousands of commits to find which ones solve security problems and should be backported to their stable branch (the one upstream does not provide).

Ken November 10, 2015 9:03 AM

I worked for a long time as a systems programmer, but the last dozen or so years of my career have been immersed in computer security.

I have no problem with Torvalds’ point of view and generally agree with him.

There are some aspects of the Linux kernel design that could better facilitate security: Torvalds himself probably would do things differently if he started from scratch today.

It’s actually nice to be able to have the kernel source and rewrite or remove things. I know that is beyond most everyday users, but companies that have millions, or even billions, of dollars running through their Linux systems can certainly afford it. Even closed source operating system companies — yes, even THAT one — will give you the OS source under nondisclosure agreements.

Chris November 10, 2015 9:08 AM

TLDR: people who are heavily invested in security think everyone should have the same priorities.

Ken November 10, 2015 9:10 AM

re: my previous post

I didn’t mean they will give you the OS source for free. You have to negotiate a “nominal” charge, a topic well beyond the scope of this thread.

David November 10, 2015 9:25 AM

Security is expensive.
It takes time and money to develop.
It makes developing features and tools harder as you have to consider the security implications of everything you do in addition to the challenges of building whatever tool you are actually trying to design.
And it slows down the operation of the system and makes it harder to use.

The argument that people want systems that are fast and easy to use, and will often pick a less secure but easier and faster system if given the choice, is probably very true. And it applies pretty much universally: people want cheaper, easier-to-use stuff and systems.

However both individually, and more importantly as a society, we require an appropriate level of safety and security from the systems and stuff that we use.

The way to get that is to introduce regulations that [one way or another] require that level of security and thus promote a level playing field.

If all systems MUST have a given level of security, your competitors can’t undermine you by producing a less secure but faster and cheaper product.

Of course while our governments are prioritising Surveillance over Security and Privacy they will never ever introduce the required regulations needed to produce this level playing field. And market forces will continue to ensure that we never get decent security.

Anoni November 10, 2015 9:31 AM

Linux has had SELinux since around 2000. Everybody turns it off. (Unless you have a front-line attack surface under constant fire.) SELinux makes it impossible to get anything done. Just about every little program or library does something SELinux doesn’t like. Should they be fixed? Maybe. But the resources required, both time and money, don’t justify the results. Those resources are better spent elsewhere.
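
(For the curious, here is a minimal C sketch for checking what a given box is actually doing; it assumes selinuxfs is mounted at the usual /sys/fs/selinux location:)

    #include <stdio.h>

    int main(void)
    {
        /* selinuxfs is normally mounted at /sys/fs/selinux on current
           distributions; older systems used /selinux instead. */
        FILE *f = fopen("/sys/fs/selinux/enforce", "r");
        if (!f) {
            puts("SELinux disabled (or selinuxfs not mounted here)");
            return 0;
        }
        int enforcing = (fgetc(f) == '1');
        fclose(f);
        puts(enforcing ? "SELinux: enforcing" : "SELinux: permissive");
        return 0;
    }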

And indeed that’s what we see here. Those who favor security above all else will kill productivity and profitability for little gain. That may be what they’re ultimately after. Bog down Linux with worthless security initiatives that are bypassed almost before they’re invented.

You’ll notice that Linux is Open Source. These folks complaining can go off and write a more secure Linux themselves. (That’s exactly what the NSA did to create SELinux.) But no, they want somebody else to spend the money & time. And in the meantime they want to make Linux look unsafe or bad. Whereas in reality, Linux security is vastly superior to the commercial alternatives.

Kudos to Linus for standing up to these jerks!

Tommy Dohn November 10, 2015 9:45 AM

Linus is a strong advocate of “security through name-calling”.

As in, if you write “bad” code, Linus is going to personally insult you. This type of security is only slightly better than security through obscurity.

Sancho_P November 10, 2015 10:13 AM

Now it may not take further years for the good guys to find and subvert the garage 🙁

Basically I’m with Linus: A bug is a bug.


Security: Don’t expect to steer what you don’t pay for.

Fix the broken system first:
1. Those who use Linux commercially should pay for it.
2. Those who use an OS for lulz should not be obliged to pay for an OS they don’t want.

Restore the free market, restore capitalism!

herman November 10, 2015 10:16 AM

@Anoni: Your views against SELinux may have been true when it was first released, but I haven’t had an issue with it in years. I guess that you are an Ubuntu user. SELinux is a RedHat product, so you should use Fedora instead, if you want to give it an honest try.

David November 10, 2015 10:25 AM

@Sancho_P

The idea that all bugs are equal is absurd.

A bug that causes a toolbar to be the wrong colour is not as important to fix as a bug that allows someone to take over your system, steal/alter your data, and/or trash your system.

One bug causes a minor annoyance [if you even notice].

The other can lead to ID fraud, financial loss, and, depending on how you are attacked, could even lead to people committing suicide, as has happened with people whose webcams got hacked and who had nude/pornographic video/pictures of them leaked online.

This example proves to any sane and rational person that all bugs are not equal.

It also proves that it is possible to classify bugs, in terms of seriousness of effects [or potential effects] if nothing else.

Risk assessment and mitigation should prioritise more serious threats over less serious ones where resources don’t allow fixing all bugs simultaneously. You should do triage.

Claiming all bugs are equal and not prioritising fixing those that pose the greatest threat over those that pose lesser or no threat needlessly reduces security.

And claiming that time spent doing triage is always a waste of time is also nonsense. The situation is directly and exactly analogous to triage in battlefield surgery, and you will find no experts in that field who think triage is a waste of time. It isn’t, and it saves lives. To make the point concrete, see the toy sketch below.
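
(A toy C sketch of the kind of ordering triage produces, worst first, with fix effort following; the impact scale and entries are invented for illustration:)

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy illustration only: the impact scale and entries are invented. */
    struct bug {
        const char *title;
        int impact; /* 1 = cosmetic ... 5 = remote takeover */
    };

    static int by_impact_desc(const void *a, const void *b)
    {
        return ((const struct bug *)b)->impact
             - ((const struct bug *)a)->impact;
    }

    int main(void)
    {
        struct bug queue[] = {
            { "toolbar renders in the wrong colour",      1 },
            { "buffer overflow in network-facing parser", 5 },
            { "crash when config file is missing",        3 },
        };
        size_t n = sizeof queue / sizeof queue[0];

        qsort(queue, n, sizeof queue[0], by_impact_desc); /* worst first */
        for (size_t i = 0; i < n; i++)
            printf("%zu. [impact %d] %s\n", i + 1, queue[i].impact,
                   queue[i].title);
        return 0;
    }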

Oskar Sigvardsson November 10, 2015 10:29 AM

I would also love to hear Bruce’s take on this, but from where I’m sitting Linus’s take seems entirely reasonable.

This is one of those issues where the discussion benefits greatly from specifics, and the article was lamentably short on them. One they did provide was this:

The best known of these techniques, called address space layout randomization, reshuffled each computer’s memory regularly. So even when hackers attempted to penetrate a system, it was difficult to steal files or implant malicious code

If an attacker can read and write a system’s memory at will through buffer overflows or the like, making the kernel shuffle the memory around periodically will only mitigate the security risk, not eliminate it (the attacker will, after all, still be able to read and write memory directly, which would still be an unacceptable hole in security). And the security flaw is clearly located in the userland software that allowed the buffer overflow in the first place, not in the kernel. Like Linus says, if a nuclear reactor running Linux were successfully attacked, it probably would not be the kernel’s fault.
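
(To be concrete about what such a userland flaw looks like, here is a deliberately unsafe toy sketch in C, not taken from any real program:)

    #include <stdio.h>
    #include <string.h>

    /* Classic stack buffer overflow. ASLR only makes the overwritten
       addresses harder to guess; the out-of-bounds write still happens. */
    static void vulnerable(const char *attacker_controlled)
    {
        char buf[16];
        strcpy(buf, attacker_controlled); /* no bounds check */
        printf("copied: %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            vulnerable(argv[1]); /* >15 bytes of input corrupts the stack */
        return 0;
    }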

Making this change would result in a significant performance hit (all that moving around of memory doesn’t happen by magic) which would be unacceptable to most people running Linux, for very limited security gain. It seems to me that it was entirely reasonable to reject this change.

Regardless, this is the perfect example of the benefits of open source. If you highly value security over performance, no one is preventing you from using OpenBSD or the grsecurity patches to Linux.

ianf November 10, 2015 10:32 AM

ADMINISTRIVIA @ Ken

As a “long time systems programmer, with the last dozen or so years of your career immersed in computer security” you ought to know that, in archived hypertextual forums such as this one, THERE IS NO SUCH THING as “re: my previous post,” but there is something better: a hyperlink to that previous post. You know it already, so why don’t you use it?

(You never know when someone will come across the latter, and find an orphaned reference to the former. Let’s say it wasn’t all that important, but then why post at all. If I could do it tapping with me left toe floating upside down in a shower suit on the ISS, so can you in less extreme conditions).

bob November 10, 2015 10:34 AM

@herman (& @Anoni)

I agree. In the early years of SELinux I just switched it off. After a couple of years, front-facing boxes running just httpd and iptables had it on. When permissive (report-only) mode became available, every box had at least that enabled. And, finally, as the reports became more manageable, I started working directly with it to resolve problems. There’s little excuse these days to leave it off.

** CentOS user: if you’re not on a flavour of Red Hat your mileage may vary **

paul November 10, 2015 10:50 AM

Tragedy of the commons?

Security “costs too much” because the risk/benefit tradeoff for any given person is not that good — even if I pay tens of thousands of dollars (tens of millions?) for a seriously secure kernel, and tools, and the interfaces to use them effectively, my money will be mostly wasted unless all my friends and people I do business with also use it. So a bunch of individual decisions lead to a collective mess.

This is one place where a top-down organization can potentially do a better job, because security decisions can be amortized over everyone (and can be monetized fairly directly).

blake November 10, 2015 11:14 AM

@Tommy Dohn

if you write “bad” code, Linus is going to personally insult you and not merge your commits

Important part added. On another tangent,

“People don’t really care that much,” Spengler later said. “All of the incentives are totally backward, and the money isn’t going where it’s supposed to.”

That’s not about Linus at all then, but about how the entire market values security. There’s also a bit of irony about the use of the phrase “the money isn’t going where it’s supposed to” in a market context. Is he implying corruption? Who supposes where everyone else’s money should go?

@David

The idea that all bugs are equal is absurd.

Your example compares bugs of quite different scales. If one bug was “database input not sanitised, malicious user can destroy data” and another bug was “changing font size before clicking Save button deletes customer data” then yeah, those bugs are doing similar bad things. One is security and one is not; they’re both really bad bugs and both need to be fixed. If a bug was “malicious user can change toolbar color” then I don’t think anyone would care (but would check what else a malicious user could possibly do).

it is possible to classify bugs, in terms of seriousness of effects [or potential effects]

I might be overly charitable to Linus, but I think that’s his exact point. He doesn’t care if it’s a race condition or a malicious hacker or an accident, if it has serious implications it’s a serious bug.

blake November 10, 2015 11:28 AM

@David

If your point is that security bugs often have more serious implications than an “average” bug, then yes.

It’s probably also important to distinguish between an edge behaviour that an attacker can actively exploit and a bug which, for example, accidentally makes all private messages public. The latter could be considered “security” because of the personal security implications for users, even though there are no hostile hackers involved. But such bugs are important because of their serious implications, not because they’re security bugs.

Thierry November 10, 2015 11:30 AM

@paul • November 10, 2015 10:50 AM

Security “costs too much” because the risk/benefit tradeoff for any given person is not that good

That statement assumes that the cost of risk is properly evaluated. We all know that such evaluations are at best very approximate, at worst guesswork, and in all cases simply wrong. Even insurance companies, behind their seriously presented analyses, use guesswork. They are more experienced, though, than the Linux kernel maintainers and CSOs (not to mention CFOs).

Risk evaluation always includes some “financial theater” which, just like “security theater” is required by the target population before acceptance.

Andrew November 10, 2015 11:59 AM

Interestingly enough at about the same time as this one another security post popped up in my feeds:

http://sobersecurity.blogspot.bg/2015/11/you-dont-have-nixon-to-kick-around-any.html

In short: FOSS is FOSS and can’t handle security like big-bucks corporate firms – it’s a bazaar, not a cathedral, right? Also, in all likelihood there’s no single big security Armageddon in the future.

I wonder if Linux security enhancements/frameworks/IDS/IPS can and will become the next cash cow for the antivirus companies.

Nick P November 10, 2015 12:31 PM

Linus’s position is indefensible, as the evidence against it has piled up for decades. His arguments seem more grounded in personal preference and unsubstantiated guesswork, as can be seen here and here in one debate. There have already been systems designed without the problems his has: better architectures/implementations that were easier to analyze and did great in pentesting. They used some consistent principles to do that. Linux and Windows did the opposite, with consistently bad results. Switch to methods that work, deploy them as pervasively as possible, and make the changes incrementally if necessary. Seems obvious, but Linus will make excuses instead.

Meanwhile, around the time Linus wrote Linux, the QNX microkernel demonstrated ultra-fast IPC, a tiny kernel-mode attack surface, isolation of driver/component failures, and POSIX compatibility. It, being designed for reliability rather than security, got smashed when put online. (Mostly in the code imported from UNIX, interestingly enough.) However, such methods combined with security engineering led to a series of security and separation kernels that got the job done. Post-2000 examples had nano-kernels, fine-grained isolation for security-critical components, and Linux/POSIX VM’s for untrusted, legacy code. The small MINIX 3 team also started with the MINIX codebase, like Linus did, while leveraging microkernel architecture to create a foundation more reliable than what UNIX achieved over decades. Shapiro built on the KeyKOS capability line & similar principles to produce EROS: a very secure architecture w/ trusted GUI, networking, fast IPC, and persistence for system state.

This doesn’t even count alternative schemes that modify development practices to increase the security of monolithic architectures. For consistency and reliability rather than security, both Wirth and Hansen did a series of OS’s with type-, memory- and concurrency-safety respectively. Hansen even did a Wirth-style one, Edison, with more safety on the same machine UNIX & C were invented on! Trusted Xenix used Intel’s rings and segments to counter many problems in a UNIX while eliminating issues like setuid. The SPIN OS team wrote their OS in type-safe, memory-safe Modula-3 with safe linking and dynamic loading mechanisms. Microsoft did two similar ones with Singularity and VerveOS. Work like Criswell’s SVA and KCoFI tries to immunize the existing kernel against unsafe usage, focusing on the most bang-for-buck issues. Even trimming the kernel can help dramatically, but its current state makes that difficult without a dedicated team. There are also a ton of tools that, combined with a certain coding style, can often find serious defects or auto-insert checks that block them, with Linus making little use of them.

So, there are all kinds of methods that can be used. There are even academic teams that test new methods against FreeBSD and Linux as a benchmark while submitting the results to them. Overall, though, Linus and the major UNIX teams invest very little in proven methods to eliminate reliability and security problems. Continuous reliability and security problems result. I can understand trying to preserve backward compatibility and doing stuff incrementally. However, their main excuse is that they just don’t agree with those methods and will continue doing what they’ve always done. Indefensible from a technical perspective.

This leads to the conclusion that Linux is so insecure because Linus and his people want (or need) it that way for political reasons. Anyone wanting security should bake it in themselves or replace Linux, as the aforementioned groups were doing. You can’t rely on Linus or the mainstream UNIX’s to do it for you. They’re too heavily invested in practices and code that fight reliability & security at every turn.

Clive Robinson November 10, 2015 2:24 PM

On Penguins, Lemons and Dodos

From the article,

    Security of any system can never be perfect. So it always must be weighed against other priorities — such as speed, flexibility and ease of use — …

Firstly, a point I’ve made before: in some languages there are not two words, “security” and “safety”, but just one; that is, to be safe you have to be secure, and the other way around. They are but the two sides of the same coin, and it matters not a jot which side up a coin is to determine its worth.

Now, the article points out that Linus drives a car with registration “DAD OF3”.

I wonder if Linus thinks about the brakes on it; after all, they don’t add to the speed or flexibility, or, if your intent is just to get from A to B, the usability of the car. From his perspective, all the brakes do is slow you down and stop you getting from A to B, so they serve no purpose in his world view. Perhaps he should follow that philosophy on his car and remove the brakes, and whilst he’s at it, he could save some time and add an “E” to the registration to make it “DEAD OF3”.

As has been pointed out, that sort of thinking was prevalent in the US auto industry fifty or more years ago. The result was some very expensive court cases and the oft-quoted “Lemon Laws”. What you don’t hear about is that those lemons helped engineers think differently and in new ways. The result was that the “safety” features, whilst initially bolted on, quickly became the built-in mainstay of innovation in the auto industry; via engineering sweet spots they turned those bitter lemons into sweetly desirable lemonade, and thus the whole industry and its customers were refreshed.

The lemon laws worked for two reasons: firstly, they dragged the auto industry out of its near-profitless, terminal “race for the bottom” free-market failing; and secondly, they forced the auto companies to “spend on innovation”, which actually delivered considerably more profit in a fairly short period of time.

Which brings me around to dodos: the reason they died out was the sudden change in their environment caused by the introduction of the most wantonly destructive of apex predators, “man”.

Linux is in the same position. If its safety/security failings continue unabated, then Software Lemon Laws will eventually appear, and the free OSS movement will get that dodo feeling as legislators set up mandatory testing that will be beyond the means of those who take no payment for the OSS they create.

So, Linus, if you are reading this: you might want to stop and consider that “DEAD” might be the epitaph of your life’s work, if you don’t start “trimming your sails” for the bad weather to come…

But if you do trim, then like the auto industry you will probably find new innovation that pays considerably more dividends long term than your current path can deliver.

Nix November 10, 2015 2:56 PM

Aris, your reference to Android would be more compelling if the Android ecosystem wasn’t a total nightmare security-wise, with every phone running a horrifically outdated kernel and next to none of them supplying any fixes whatsoever.

I think fixing that comes first.

GEdward November 10, 2015 2:57 PM

I wonder if computer security must always remain poor, because if we invest enough resources to cut intrusion down to low levels, then people will not want to spend so much on security to prevent the very unlikely intrusions. So then less will be spent on security until the intrusions get bad enough again.

The other big problem is that 80% of people are very reckless, as shown by the fact that before seat belts were legally mandated, 80% of people didn’t wear their seat belt. How can we expect people to expend many hours and lots of money securing their systems when we couldn’t even get them to accept a few seconds of insignificant discomfort, for free, to save their own lives?

Another indication that it will be tough to get people to expend much effort on security is that few people regularly back up their computers. Doing backups is far easier than doing strong security. I’ve heard numbers as low as 3% of people doing backups. Even people who understand the need for and importance of backups tend not to do them, or put them off much longer than they know they should.

Worst of all is that it may simply not be possible to make systems with strong security, at least given any remotely reasonable amount of investment. Even organizations like the NSA use Windows and Linux. Why is that? Is it because they think Windows and Linux are good enough, or is it because they can’t come up with anything better? It has been claimed that the Chinese have hacked into the plans for every weapon system the US has, and much more, so they don’t seem to be good enough. It seems that to have any chance at it, we will need to abandon the bolt-on attempts to secure legacy operating systems like Unix, which incorporate many risky legacy operations for compatibility and were not designed from the ground up for security.

But people shouldn’t just say that if you want a secure computer you can unplug it or melt it down, as if that truth is some kind of justification for lax security. There can be no question whatsoever that we must expend significant effort at security, because to leave our systems completely open would have obviously unacceptable consequences. The question is how much effort should be expended. I don’t think Linus and many others put enough effort into security, but I’m not sure how much should go into it.

It seems like some obvious and easy steps should be taken at least. For example, our hardware should check the boot sector and load a verifiably clean kernel, which should in turn verify the integrity of the other operating system files during boot. Why is this only recently being implemented, with the Trusted Platform Module? And our hardware should not have firmware that is both programmable and impossible to inspect for viruses. HTML and Web browsers should be simplified and hardened for security, with dangerous features like JavaScript disabled except maybe when really needed.
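
(To illustrate the verify-before-loading idea in miniature, here is a self-contained C sketch: it checks a file against an expected digest and refuses to continue on mismatch. The FNV-1a checksum is a stand-in so the example stays dependency-free; a real boot chain would use a cryptographic hash such as SHA-256 plus a hardware-anchored signature.)

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* FNV-1a: compact but NOT cryptographically secure. */
    static uint32_t toy_digest(FILE *f)
    {
        uint32_t h = 2166136261u;
        int c;
        while ((c = fgetc(f)) != EOF)
            h = (h ^ (uint32_t)c) * 16777619u;
        return h;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <kernel-image> <expected-hex>\n", argv[0]);
            return 2;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror(argv[1]); return 2; }
        uint32_t got = toy_digest(f);
        fclose(f);

        uint32_t want = (uint32_t)strtoul(argv[2], NULL, 16);
        if (got != want) {
            fprintf(stderr, "digest mismatch: refusing to continue boot\n");
            return 1;
        }
        puts("image matches expected digest; continuing");
        return 0;
    }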

Daniel November 10, 2015 3:10 PM

I’m with Linus on this issue and I think Clive’s analogy is incomplete. No one disputes the importance of security, not even Linus. The key question everyone has to ask themselves is why that security needs to take place in the kernel. This is the whole point Linus is trying to make about the nuclear power plant. When Laura Poitras was working with Snowden she didn’t go whining to Linus to fix bugs in the kernel, she went internet dark. Now, I’m sure that someone like Clive will say, “But Daniel that is like saying the way to make a car safe is to take off the wheels.” Well, yes, that is exactly what it is saying. And that is exactly what smart cultures do.

We don’t rely on just brakes in the car…in some cases we take the keys away from our teenager, in other cases we put on interlock devices to stop the drunk driver. So not every security bug needs to be fixed in the kernel, any more than every method for making driving safer involves improving the brakes. What matters is the security of the entire ecosystem. This is the whole point of software like TAILS or Qubes–we don’t need to trust the kernel because the attacker has other, bigger problems.

Linus is right that there is no good reason to privilege a certain class of bugs (security bugs) over other bugs.

Scott November 10, 2015 3:20 PM

I know Linus casually through geographic proximity and being a kernel contributor myself. If there’s one guy on the planet who doesn’t surround himself with sycophants, it’d be Linus; I have no idea where anyone would get that idea about him. He likes brutally frank exchanges. It’s simply his way of being efficient by avoiding the hand-holding millennials are used to. Alan Cox and some of the other early tree minders are pretty much the same way.

The Linux cost/benefit approach to implementing security stems from the ideas of Simson Garfinkel and Gene Spafford, who were fairly influential in academic circles when Linux was emerging as a collaboration on USENET. Simson is currently with NIST working on similar topics, and Spafford is still publishing papers and teaching.

That said, given my familiarity with some aspects of the Linux kernel, I use BSD for my websites for a reason.

John Macdonald November 10, 2015 3:51 PM

Classifying bugs by security impact is not easy.

Sure, there are some that are obviously serious security problems because it is clear that the bug can be used to break security; and there are others that are obviously serious security problems because an exploit has been found that uses the bug to break security.

But the vast majority of bugs are fixing something that is important to someone, but with no obvious way of exploiting for a security issue. Of those, a significant proportion will turn out to actually be exploitable as a security problem in a not immediately obvious way.

Linus would rather people fix many bugs than spend large amounts of time trying to classify whether the bugs they fix might be exploitable, and thereby fix fewer bugs because of that “wasted” time. Sure, if a bug has been found to be exploitable, it is generally treated as urgent to fix, just as a bug that causes the system to not work for large numbers of people is generally considered urgent to fix. But spending much time trying to classify whether a bug is a security issue is a waste of time: most often the answer will appear to be “no”, but with a sufficiently clever attacker the answer is “yes”. Trying to pick bug fixes so that you only apply the security fixes guarantees that you will miss some that later turn out to be non-obvious security issues. (And that leads to complaints that the classification system is not working well and needs to be made even more time consuming.)

That is the thrust of “bugs is bugs” – critical security bugs should be fixed as quickly as possible, critical operational bugs should be fixed as quickly as possible, other bugs (tilted in either direction) should be fixed quickly too. Spending much time to try to classify the security issues is a waste of that time and will simply lead to more bitching anyhow.

What is the advantage of changing 100 bug fixes (10 known security problems) into 70 bug fixes (15 known security problems, 55 labelled as not being security problems, but 20 of them actually are)? Fewer bugs have been fixed, the same number of a priori known problems have been fixed, and the others have been “classified” with a label that is probably right if it says that the bug can be exploited and often wrong if it says that the bug cannot. (Those numbers were absolutely created off the top of my head – but the basic issue, that many exploitable issues are not obviously exploitable, is certainly true; and spending more time classifying them does not improve the detection much.)

Nick P November 10, 2015 4:42 PM

@ GEdward

“Even organizations like the NSA use Windows and Linux. Why is that? Is it because they think Windows and Linux are good enough, or is it because they can’t come up with anything better?”

It was part of both DOD pushes and, IIRC, legal requirements to shift to Commercial Off The Shelf (COTS) software where possible. Plus, the secure systems required each component to be designed, implemented, and evaluated for security. That made time-to-market terrible, with the secure stuff looking like ugly terminals or Win3.x-style graphics when the market was moving toward things like Windows NT. Support for any standard always lagged because they had to improve its inevitable, horrible security. Worst, NSA started actively competing with the private sector in a way that undermined return on investment for high assurance products. The overall combination killed the high assurance market, left a few products from defense contractors for the most sensitive uses (eg cross-domain guards), and allowed huge uptake of insecure garbage across everything. And stuff got hacked en masse. 🙂

These links explain some of that:

http://www.acq.osd.mil/dpap/Docs/cotsreport.pdf

(Faster, cheaper, and more agile even if totally insecure. So, let’s load up on it for security-critical apps!) 😉

http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-Back-Addendum.pdf

(Traces rise and fall of high assurance security industry with recommendations.)

https://blogs.microsoft.com/cybertrust/2007/08/23/the-ethics-of-perfection/

(Lipner on what he learned from high assurance security meeting business realities. Time-to-market and feature disadvantage are huge issues even DOD won’t ignore.)

D G November 10, 2015 5:31 PM

Reading both the Washington Post article “The kernel of the argument”, relating an interview with Torvalds, and the ZDNet article “Linus Torvalds vs. the internet security pros”, you’d get the impression the major beef here is that if Linus did all the heavy lifting on security issues, all those with Linux applications requiring security wouldn’t have to. It appears someone wants something for nothing.

There’s also a conflation between kernel vulnerabilities and those of distributions or targeted software; exploiting kernel vulnerabilities requires security vulnerabilities in one of the latter. Linus doesn’t control the entire platform.

Fielding a ‘secure’ product might involve the use of virtual machines, where the product producer can be responsible for the entire thing. Instead, trusting Linux to be secure where someone else is paying more attention to security than before is just another example of Magic Thinking.

Anura November 10, 2015 5:42 PM

@Nick P

That’s why I want the government to fund an open-source project to make secure software from the ground up with the intention that all government computers use that software (thus ensuring driver/software compatibility for the public too!).

Phase 1: lower level, lightweight, performance-oriented, high assurance programming language/formally verified compiler (C alternative)

Phase 2a: Formally verified operating system – the bare minimum to have a usable, fully POSIX-compliant OS (NO GUI; just the kernel, boot loader, shell, terminal, file system, services, programs, and APIs necessary to achieve that) written in the above language

Phase 2b: Higher-level language suite, including a feature-oriented language designed to balance performance and usability (C++ alternative), a fully managed language (C#/Java alternative), and a scripting language. All with formally verified runtimes/interpreters/compilers.

Phase 3: GUI, core graphical software (Browser, web server, calculator, terminal emulators, text editor, etc.) written entirely in Phase 2b languages. Not all formally verified, but heavily vetted in design and testing.

Phase 4: Productivity software (word processor, spreadsheets, email, project management, etc.), RDBMS, Reporting tools, IDE, anything else that is widely used by government agencies. This phase would not necessarily have an end.

Give them a few hundred million dollar annual budget, and the government would end up saving huge sums of money on licensing in the long run, and individuals and businesses that are not directly competing with those products would see huge benefits.

Earl Boebert November 10, 2015 5:45 PM

I’ve studied this problem for a bit (since Linus was 3, to be exact) and have essentially given up on any hope of the balanced solution that everybody agrees is necessary.

The problem, in my view, is not Linux or Microsoft, it’s Intel. The facilities needed to provide a robust foundation for hard-to-attack code have been known for decades. The problem is that they cost cycles, they cost memory, and they cost real estate on the chip. If you live and die by application benchmarks these facilities are not features, they are bugs.

So we have fragile stacks, fragile memory management, fragile execution control and levels of multiplexing that preclude affordable a priori analysis of execution streams.

If you do a dependency analysis of any reasonably robust approach to the above problems you’ll find that as you go down the dependency tree in today’s architectures things get weaker instead of stronger, just the opposite of what you want. So kernel implementations put facilities in mutable code that should be down at a level of abstraction that requires physical access and multiple hurdles to change. And that code is “protected” by these fragile mechanisms (read implies execute, anyone?).

The consequence is an endless cycle of patch and pray. And since attack is an adventure while defense is just a job, the outcome is inevitable.

Dirk Praet November 10, 2015 5:58 PM

@ Scott

That said, given my familiarity with some aspects of the Linux kernel, I use BSD for my websites for a reason.

That statement pretty much reflects my own feelings. However much I respect Linus, I completely disagree with him on the importance of classifying and fixing kernel security issues. Linux is no longer a hobby project for basement geeks, but a ubiquitous operating system running critical infrastructure all over the world. With the IoT upon us, it’s becoming even more widespread. Which in my opinion requires a different and more professional approach to security than the philosophy of “natural evolution” Linus and his kernel team keep sticking to.

@ Daniel

This is the whole point of software like TAILS or Qubes–we don’t need to trust the kernel because the attacker has other, bigger problems.

Most certainly not. You can secure userspace as much as you want, but it’s game over when someone manages to exploit a kernelspace or hypervisor vulnerability. Just like the nuclear powerplant is toast with one or two malicious insiders with the proper access and clearances. Remember Manning and Snowden?

Sancho_P November 10, 2015 6:26 PM

@David

I’d suggest reading Linus again.
The fact that a bug is a bug doesn’t mean all bugs are equal.
Do you think Linus and his gang don’t prioritize bugs?
They probably just won’t discuss their reasoning in public.

Funny to see Linus working on toolbar colors 🙂

There’s a huge advantage of a good imperator versus a weak democracy.
Maybe the time is ripe to rethink.
Linus will.

Spooky November 10, 2015 7:23 PM

I tend to side with Linus on the issue of priorities. He’s the designated maintainer of the production source tree, it’s totally his call. Since this is open source software, after all, those who disagree with his priorities can copy the source, patch it, and produce their own security-focused kernels. If their kernels can offer better security and stability without catastrophic losses in performance, then perhaps consensus opinion will change. If they’ve overstated their case, perhaps it will not.

Daniel November 10, 2015 7:51 PM

@Dirk P.

but it’s game over when someone manages to exploit a kernelspace or hypervisor vulnerability.

The game is never over, for a bug is nothing without an exploit.

If the kernel bug is detected, one can roll back, refuse to update, or do many other things to protect the system. If the bug is not detected, TAILS features non-persistence. Any malware that exploits that kernel bug is wiped upon reboot. The user would have to repeatedly download the same malware endlessly for it to become a problem, and if that happened it would not be a kernel problem but a stupid-user problem. The same is true for Qubes or any other set-up featuring virtual machines: just reset the VM back to its original state and poof, the exploit is gone.

It really is amazing to me how otherwise sane individuals lose all perspective when the topic of Linus and kernel come up. Linus is correct–people keep trying to push off non-kernel problems, especially bad opsec, onto the kernel.

Fazal Majid November 10, 2015 8:52 PM

@Anura: The only OS that has ever been validated using formal methods is the seL4 microkernel, which is extremely minimalist. As for government design, in the post-Snowden era that would automatically disqualify the OS for consideration.

To a certain extent, this debate is irrelevant. The overwhelming number of server vulnerabilities come from user-space components above the kernel, packaged by companies like Red Hat that Linus has no control over. One example was the Debian bug where Debian helpfully “corrected” code in OpenSSL that used uninitialized variables, and in the process destroyed the randomness of keys.
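
(Not the actual OpenSSL code, but a self-contained C toy showing why that Debian change was so devastating: with the uninitialized buffer no longer mixed into the entropy pool, the process ID was effectively the only seed material left, and a default Linux PID range allows at most 32,768 distinct values:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical key generation seeded the way the patched code
           effectively behaved: process ID only. An attacker can simply
           enumerate every possible seed and regenerate every "key". */
        srand((unsigned)getpid());
        printf("generated \"key\": %08x\n", (unsigned)rand());
        return 0;
    }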

It’s not as if there aren’t alternatives. I use OpenBSD for my firewalls and Illumos for app servers. A lot of the grousing sounds like ego-driven complaints by some that their pet features were not incorporated.

Dirk Praet November 10, 2015 9:08 PM

@ Daniel

The game is never over, for a bug is nothing without an exploit.

That’s exactly the word I used: “exploit a vulnerability”. Any particular reason you are assuming I don’t know the difference between the two?

Any malware that exploits that kernel bug is wiped upon reboot.

You’ve never heard about malware that installs itself in hardware/BIOS/UEFI and how hard it is to detect?

It really is amazing to me how otherwise sane individuals lose all perspective when the topic of Linus and kernel come up.

Indeed. Everybody would be screaming bloody murder if Microsoft or Apple took the same relaxed approach to security as Linus and his kernel crew are doing. Theo de Raadt may be a git, but he does take security seriously, which is why folks like @Scott (and myself) will prefer OpenBSD over Linux anytime when it comes to critical or internet-facing infrastructure.

Jack Dodds November 10, 2015 9:18 PM

The reporter who wrote this article does not take the time to educate his readers about the GNU General Public License. He presents Linus Torvalds as an arrogant gate-keeper who is standing in the way of better security for all computer and Internet users. This sells papers, but it’s not reality.

Any of the critics of Linux or of Torvalds are absolutely free to copy the product of the work of Torvalds and thousands of other contributors, add their pet security enhancements, and release their improved kernel code to anyone who cares to use it.

What the critics are complaining about is that Linus Torvalds will not use his influence to promote their code. That is, they want Torvalds to use his own reputation to sell what they can’t sell on its merits.

Clive Robinson November 10, 2015 10:42 PM

@ Spooky, Jack Dodds,

Since this is open source software, after all, those who disagree with his priorities can copy the source, patch it, and produce their own security-focused kernels.

There are a number of problems with this idea of “Forking” a project.

Forking has come up in FOSS projects in the past and has effectively ended up killing both branches, for various reasons, primarily divergence in the source.

However, it causes other marketplace issues. Companies developing products don’t want to have to develop for what are effectively two “similar but not quite the same” kernels, because it causes extra effort and quite a few support issues. Thus they will favour what is seen as the “default” path, unless there is an overriding reason to use the other.

As it happens, there have been people releasing security patches to go onto the stock kernel, but they have seen no reward for their efforts. In fact they face the same problem the kernel itself faced a few years ago: “freeloading” by commercial organisations who take the code and the good name of the developers and give nothing back. And often worse, the companies fail to keep the code they use up to date, and pass support issues back to the original developers.

As the article notes, Linus and a few of the maintainers of the stock kernel get paid for their efforts via various means. But it does not mention the “Peter robs Paul” issues this creates for others developing enhancements, as there is no “trickle-down effect”. And as history has shown, the “drawbridge effect” comes into play, whereby those who do get payment wish to protect their patch and are prone to NIH syndrome.

As others have noted, there are alternative, less well known OS kernels with improved security out there. Whilst from a code perspective this does provide “hybrid vigour”, as the history of *nix shows it causes marketplace issues.

DISCLOSURE: In the past I’ve used some of the “other kernels” for embedded products, for various reasons, not just security. I personally found them easier to work with, but they suffered from the non-technical branding issue of “Not Linux/Android inside”, which other “CV conscious” developers got behind their “marketing departments” on…

Clive Robinson November 10, 2015 11:58 PM

@ Daniel,

The key question everyone has to ask themselves is why that security needs to take place in the kernel.

Err, no, that is most definitely not the “key question”. The “key question” should be,

    As the kernel is the foundation stone of the Operating System, through which everything normally passes, what effect does its lack of security have on me?

Followed by,

    What mitigations does this lack of security force me to take?..

As for your “take off the wheels” comment: don’t try forcing statements I would not make into my mouth. It’s not polite, and it really damages your credibility.

It’s also a very, very silly argument, and one I suspect the Linus portrayed in the article would disagree with. Because if you “take off the wheels” from a car it ceases to be a viable means of transportation; thus it would destroy the userland experience, which the article indicates is so important to Linus, along with speed etc…

Further, I suspect from the “nuclear power station” comment that you have not worked in “Safety Critical Systems Design” either. Whilst it might seem a good example argument for both perimeter and air-gap security, it’s not. As I noted earlier, the security/safety coin has a single value. If security is weak because the designers devalued it, then almost certainly safety is likewise devalued. It’s an “attention to known details” thing that is at the heart of “Quality Design”.

Mind you, I’m on record about my views on so-called “Computer Engineering” and the distinctly artisanal, as opposed to engineering, bias of most software developers. Search for “code cutters” and my name and you’re sure to find some of my past comments on the subject.

Tom Bortels November 11, 2015 12:18 AM

The idea that the kernel can be “secure” is a myth. Security is not a checkbox, or a boolean value, something the kernel is or is not; it is an emergent property of systems, of which the Linux kernel is only one part. An important part, surely, but still just a part. If somehow the kernel were made a perfect paragon of security, all it would take is one insecure userspace program to blow things wide open. In the end, a chain is only as strong as its weakest link.

Could the Linux kernel be more secure, in and of itself, than it is today? Of course. It could be faster too. Or smaller. Or more reliable. And each of those goals has an associated cost, in time and effort, and to some extent working on one of them steals time and effort from the others, if it doesn’t directly conflict with them. Each of these features has to be balanced against the others; none trumps the others on its own.

Want more Linux security? Sweet! Submit a patch that does not adversely affect other aspects of the system; I bet money it’ll get merged. Failing that, pick up the mantle yourself: fork the kernel, add your patches, and distribute your version. If it is better, people will use it, and you have a new, rather burdensome, hobby. But forcing your specific needs on a community, especially when those needs can be met by other means, is unlikely to be successful or a wise use of your time.

Clive Robinson November 11, 2015 12:41 AM

@ Tom Bortels,

The idea that the kernel can be “secure” is a myth. Security is not a checkbox, or a boolean value, something the kernel is, or is not…

Whilst, as you note, security is evolving, you can say whether something has been tested against known instances or classes of attack, and how it fared in those tests.

You can also compare the code to other code of its type, say what features it does and does not have in comparison, and note the likely outcomes from the inclusion or not of certain methods. Which in turn can give rise to further testing.

Thus you can certainly rate a given piece of code.

You raise the point of hobby-v-employment, and one or two other related issues I’ve already covered in one of my comments above.

Grauhut November 11, 2015 4:05 AM

@Dirk: “which is why folks like @Scott (and myself) will prefer OpenBSD over Linux anytime when it comes to critical or internet-facing infrastructure.”

Me too. If one needs to be anal about internet security, he should use a BSD dialect, OpenBSD preferred if possible on the driver-availability side of the equation.

If an attacker ever gets into ring 0 on one’s box, one has to treat it as toasted, FUBAR (with the R translated as recovery, not recognition), because of that funny APIC remapping bug/feature in x86 that exposes SMM RAM to ring 0.

https://www.blackhat.com/docs/us-15/materials/us-15-Domas-The-Memory-Sinkhole-Unleashing-An-x86-Design-Flaw-Allowing-Universal-Privilege-Escalation-wp.pdf

SMBs usually do not have the money to buy new servers after p0wnage, so…

When it comes to security we have to make wise choices and there is enough to choose from nowadays.

But I can also understand Linus; they compete with commercial OSes and want to win on the user-base side. That’s fine.

Whatever brings people and software into the provable open source arena is good.

Linux is better than any closed-source OS crap.

Marcos El Malo November 11, 2015 5:53 AM

@Nix

It’s like coming upon a house on fire and remarking, “I don’t think that foundation will survive the next earthquake”. The Linux kernel is probably the least of Android’s problems.

Doug Coulter November 11, 2015 6:43 AM

Strangely, I find myself in agreement with posters here who I respect, but who are disagreeing with one another. (sorry if someone slipped in between the time I opened this last night and this AM when I’m posting).

I found the news article disingenuous. Yes, many of these hacked systems were running Linux. So is my whole outfit. To my knowledge, even a professional pen tester (I have no idea how good) could not crack my LAN in many tries, even with my IP address in hand, remotely. Yet he could crack my ISP/web presence provider in well under a minute. They run Linux too. But they run some fairly cheesy software on top: huge spaghetti packages that let users create web sites with a few clicks and drags, without understanding the implications of using this or that software whose internals they have no idea of.

What I found disingenuous about the reporting is that the famous successful attacks weren’t against Linux itself (the part Linus is responsible for), but against poorly written web-facing user applications and ancillary programs provided by various distros – not the kernel. The kernel, while doubtless attackable, isn’t the weakest link or the attack vector in nearly all reported incidents. Yet even this site has members who want someone else to provide huge inputs that cost money to harden what’s already essentially the hardest part, when in fact, if you cared and were competent, you’d do it yourself. Or you’d at least analyze in a practical sense where the problems were, instead of just pointing out various things you’re pushing as “better” when they would suffer from the same primary issues.

I’d urge those people to go spend some time fixing, say, WordPress plugins (for which at least you can get the code) or Adobe flaws. Or help get the latter banned. It would serve security goals much better. The list of web-facing software that stinks security-wise is of course far longer than the above.

While I know from my own systems programming that the Linux kernel isn’t perfect, I don’t think it’s honest to blame it for things like SQL injection attacks allowed by what I call “monkey code” – written by so-called programmers who don’t understand the languages and libraries they use, which were simplified so monkeys could write code that seemed to work – supposedly saving some PHB some bucks since he didn’t have to hire real IT staff. Make any OS kernel or microkernel as hardened as you like, and these guys are still going to be the weak link, in my (rarely humble) opinion.

Why not fix it where it’s broke? Yes, you could make operating systems that have more in userspace, don’t make it seem like some app in userland needs to run as root or equivalent, and so on – and that would improve things a little bit. But only a little: the best locks on earth do no good if you leave the door to your database open, for example.

NickF November 11, 2015 8:40 AM

@David:

I think we need to keep in mind the distinction between Linux the O/S and Linux the Kernel.

If the taskbar is the wrong colour, that almost certainly is not a kernel bug, and therefore (from Mr. Torvalds’ point of view) can safely be ignored.

On the other hand, if a kernel bug is resulting in the taskbar colour being corrupted, then the kernel is probably corrupting memory in userspace processes. Which means the bug is at least as serious as root escalation since it stands to both corrupt your data and render the system unusable.

It’s a matter of perspective.

Anoni November 11, 2015 10:16 AM

@herman et al: I’ve been using Red Hat, pretty much exclusively, since the 90s. SELinux may be great for internet-facing production boxes, but it’s pretty horrid for development work.

@clive: “Forking has come up in FOSS projects in the past and has effectively ended up killing both branches, for various reasons, primarily divergence in the source.”

You mean like Gnu-emacs & X-emacs? Or OpenOffice & LibreOffice? Or are those just bad examples?

Forking can lead to a split of resources from the original project, leaving too few for either fork to succeed. But that doesn’t HAVE to happen. And indeed, here, we are not talking about removing resources from existing Linux development. This is so-called security folks demanding somebody else do the work they aren’t willing to do themselves.

One thing about open source. It’s very democratic. If your ideas won’t stand on their own, if you need to play these sort of games to get attention, well it says something about your ideas…

Linus has always said: “If you think you can do better, do it. If it works, he (they) will incorporate it.” And I’ve seen people submit ugly code to Linus. He said “I’ve seen worse” and used it.

 

One more point: If Linux is so insecure, why do I constantly read about bugs in Windows but almost never in Linux? Even when there is a problem with Linux, it’s never in the kernel. It’s always in some application layered on top. Heartbleed, or those folks who thought unsanitized inputs to BASH in an internet-facing webpage was a good idea.

8 of the 10 Top Security Flaws Used By Cyber-Criminals This Year Were Flash Bugs. Vulnerabilities in Microsoft’s Internet Explorer and Silverlight were also major targets. Source: http://it.slashdot.org/story/15/11/10/0218207/8-of-the-10-top-security-flaws-used-by-cyber-criminals-this-year-were-flash-bugs

So where is all this hatred for Linux security coming from? Where’s the data to back up that this is a problem? It’s got all the substance of a soap bubble.

I mean, Windows, we’ve got drive-by malware. Visit the wrong webpage with Internet Explorer and get powned. But Linux, Linux is solid as a rock. My machines stay up for years. Unlike Windows, which crashes every few days, my Linux boxen go down only when the power goes out for an extended period of time. And Linux does that with thousands of windows(*) being displayed, with many hundreds of processes running, including Windows & DOS programs under WINE & DOSBox.

Windows crashes at the desktop without any applications being run. It’s night and day.

(*) Yes, thousands. Many are iconified. They’re development boxes, with multiple concurrent projects spread across a great many virtual desktops. Side effect of machines that never crash on a UPS with work spread across many years.

Gerard van Vooren November 11, 2015 11:05 AM

In the meantime it's awfully quiet when it comes to Ethos-OS. The site's SSL certificate has expired, and not a single word about MinimaLT.

Clive Robinson November 11, 2015 11:22 AM

@ Anoni,

You mean like Gnu-emacs & X-emacs?

No, those are user-level applications. The point I was referring to was change in the OS or other below-user-application code. A fork there means application developers have to, in effect, support two or more similar but different platforms even though they both say "Linux" on the box.

With regards,

If your ideas won’t stand on their own, if you need to play these sort of games to get attention, well it says something about your ideas…

It depends on what you mean by "stand on their own". Your own point on the unreliability of MS Win OSs, and the fact that there is a paucity of user applications being professionally developed for *nix platforms, shows that that argument works whichever way you want it to, so is fairly irrelevant.

Which leads on to,

One more point: If Linux is so insecure, why do I constantly read about bugs in Windows but almost never in Linux?

Ever hear of the “principle of the low hanging fruit”?

The simple fact is that few malware attacks get reported, and those that are get reported mostly because of their obvious side effects. APT malware tends to lack immediate or obvious side effects, so it goes unreported or even undetected.

Thus the majority of reports are about low-level criminal malware targeting individual users, which need go no further than the apps they use – not sophisticated intelligence/espionage attacks that have to get deep within a system, at ring 0 or lower, to remain covert.

The nuclear power station argument for what is effectively perimeter security is about low-sophistication malware attacking user applications. Not the high-sophistication intelware that gets to the kernel on your firewall via an application zero-day, then exploits a kernel vulnerability and worms in below the level that AV etc. can find –if and when the original zero-day becomes known– cleans up, and thus remains covert…

unsepulchered pleasurist November 11, 2015 12:37 PM

There’s something very fishy about the insinuations in that article. “His three-car garage,” “surrounded by sycophants,” “dead like a dodo,” “security problems”…

Now let me think, who would benefit from Average Joe reading an article like that in the Washington Post…?

Nick P November 11, 2015 3:01 PM

@ Anura

It would be nice, and I've encouraged them to do the same. They did at least fund commercial and academic development of high-assurance security back during Orange Book days. They still fund a lot in academia, with prototypes sometimes released for public use. However, a lot of it gets locked up in spin-off companies or patents due to greedy universities. Europe's recent attempt at a similar demonstrator was Verisoft. It produced a verified instruction set, compiler, microkernel, small OS, and some applications with a top-to-bottom correctness argument. U.S. projects like crash-safe.org and the U.K.'s CHERI team are doing something similar with clever mechanisms.

I doubt it will happen in general, though. That’s despite it being one of the best routes to better infrastructure. Besides, it probably would turn into another corrupt form of spending benefiting contractors paying bribes. 😉

@ Earl Boebert

You're definitely onto something. As I pointed out here, hardware of the past was configured to protect key primitives with plenty of efficiency. The part you're off on is putting it on Intel. It was actually the markets themselves that demanded the insecure constructions we have today. They also punished most companies that dared fight against it, including Intel.

It started with computers being so expensive. The originals were just calculating machines, as you may remember. The IBM System/360 and Burroughs B5000 were both designed around the same time to define business computing. Burroughs had a great architecture for security for its time, even better than today's in critical ways. Businesses demanded these machines get faster, get cheaper, and maintain backward compatibility. The way to do it, as Brian Snow sagely observed in the first link, was to share as many physical resources as possible (defeating isolation). The other side effect was removing bottlenecks the market didn't care about (eg safety protections) while differentiating on hardware by accelerating what they were using (eg see the mainframe & x86 chip families). The result was a strong, consistent push away from secure chips/OSs and toward cheaper, faster stuff compatible with the insecure constructions they were building.

To their credit, both Intel and IBM tried to break the cycle to create more robust architectures. Comp Sci had already figured out by the late 70's that large software demanded languages suited for safety/maintenance, mechanisms for isolating faults, protection of key OS software, and so on. IBM's answer was two-fold: tell customers to continue spending money on profitable mainframes (haha); otherwise, and/or for branch offices, buy a revolutionary System/38. That succeeded thanks to compatibility with IBM stuff and overall great design. True to my meme, market push for "faster and cheaper" led them to drop the capability-secure hardware in favor of the POWER architecture in the successor, the AS/400. Still around as IBM i. So, it's the only successful capability architecture in business and proves the future-proofing approach described in the link.

Intel wasn't so lucky: iAPX 432, while brilliant, failed in the market despite 100 man-years of work invested. The chip was too slow due to its complexity & the process node limitations of the time. Subsequent papers showed design alterations could've knocked out many of those delays, but it was too little, too late. It also refused backwards compatibility with prior processors, languages, and OSs in an attempt to use only the most robust and maintainable components. It sold almost nothing and the market canned it. Intel still wasn't done, though.

Next was BiiN with Siemens. It was a simplified iAPX 432 for parallel and fault-tolerant processing with a RISC processor. The market rejected it, too, at around a $1 billion loss. That's despite the i960 CPU having a nice combo of simplicity, speed, fine-grained isolation, support for prior OSs/langs, fault-tolerance, and lockstep. Personally, I think they should revamp it and push it more for safety-critical embedded, where it had successes. See the BiiN Architecture Reference Manual in the External Links of the i960 article if you're interested in its features. Still supported for legacy apps, and by Green Hills tools IIRC, due to use in the F-35, etc.

Next they tried a straight, UNIX/Win/C/C++-compatible chip. That was Itanium. The biggest failure was the gamble on VLIW, which didn't pay off. Otherwise, though, it got rid of lots of x86 baggage, let you map any model to it given its RISC-like design plus tons of registers, and still had good extensions for security. They got buy-in from top names in databases, supercomputers, legacy UNIXes, and so on. Although Secure64 built a secure OS on it, the market as a whole rejected it in favor of Xeons that sped up legacy x86 while maintaining most of its problems. The cost was at least $300+ million, but probably more than they said.

Conclusion: Intel has probably invested more into alternative, safer architectures than any other company. The market consistently rejects them for the same reasons: resistance to change and a preference for fastest and cheapest. The best bet at this point is modifications to a legacy ISA that let one run and protect legacy applications while using better methods on new projects or individual components. There's a lot of work in that space, such as CHERI or Hardbound. For clean-slate, architectures like SAFE or SSP have strong advantages. Waiting on industry adoption. They're not eager to repeat Intel's mistakes, though. 🙁

@ Daniel

“If the kernel bug is detected”

Most people who weaponize kernel bugs don't design them to be detected. People weren't even sure NSA was bypassing or breaching the Linux kernel until leaks said so. Enough said. TAILS also doesn't have a good track record, with quite a few researchers finding problems without a huge investment in time or money.

@ jdgalt

“Where can I get Spengler’s improvements?”

grsecurity has been available and improved on for some time. Enjoy!

@ Fazal

“The only OS that has ever been validated using formal methods is the seL4 microkernel”

There were actually many OSs validated using formal methods back in Orange Book days. They usually stopped at the design, but had a near 1-to-1 correspondence with source code functions. PSOS got the trend started IIRC. GEMSOS (see sections 4, 6, 7, 8) and LOCK are also good examples. The VAMOS microkernel of Europe's Verisoft project was formally verified for correctness at the code and assembler level with model-based proof and simulation. seL4's contribution was using a full, theorem-prover approach to verify correspondence between a high-level design (Haskell), a low-level implementation (C), the compiled representation, a security policy, and correctness criteria. A major achievement that exceeds EAL7 requirements on development, but plenty was certainly done before it.

“As for government design, in the post-Snowden era that would automatically disqualify the OS for consideration.”

They'd be fools to do so. What matters is being able to review the design and its assurance argument. Most methods for high security were invented with U.S. military or government funding. Governments here (U.S.) and abroad have been the main sponsors of every improvement computer scientists have made. Origin doesn't matter so much as the ability to vet the proposal or product for security. I for one am grateful to the U.S. government and even NSA for what good work they did in the past and are still doing (eg DARPA, NSF, even NSA a tiny bit).

All the crap on the SIGINT side? Yeah, I'd be fine with that all going away. It's why I evangelize highly-assured INFOSEC instead of crap like Linux, Windows, C/C++, and x86 that makes their job easier. Funny how much the anti-NSA side leverages what aids their enemies. 😉

“I use OpenBSD for my firewalls and Illumos for app servers. ”

Smart move if one is worried about kernel 0-days. They probably have the least.

@ Jack Dodds, Tom Bortels

“What the critics are complaining about is that Linus Torvalds will not use his influence to promote their code. That is, they want Torvalds to use his own reputation to sell what they can’t sell on its merits.”

It has been done many times, with merits proven plenty. Linux and some distros even occasionally adopted methods those teams pioneered. Mostly, they ignore them and tell them to get lost while letting the 0-days pile up.

Meanwhile, many teams in academia keep producing ways to secure kernels like Linux, with little take-up, and even report the 0-days they find with static analysis tools. Lots of 0-days. Those get taken and fixed, but there's no change toward a proactive approach leveraging such things. Linus and the Linux community, outside security-focused distros, don't care about security in practice. They actively fight it even if one forks and codes up an improvement.

Clive explained well, above, what happens, and we've seen lots of companies/projects tank from such forces. I think a handful remain that are mostly disconnected from core Linux.

@ Doug Coulter

re cheesy software on kernel being big problem

It's true that the reporting wrongly ties the kernel to poorly-written apps on top of it. That's unjustified. However, the kernel has had plenty of CVEs that were easily preventable with diligence, and nation-states currently have attacks on it. So, it's worthy of its own criticism.

re security of other software

“Yet even this site has members who want someone else to provide huge inputs that cost money to harden what’s already essentially the hardest part”

I'd rather toss it, but they won't do that. 😉 It's not an either-or situation, though.

The kernel is the main part of the TCB, with 0-days that can bypass everything, unlike user-mode, which can be contained with MAC, compiler transformations, jails, etc. The assurance argument requires that they dramatically improve the assurance of the kernel and drivers any way they can, if not isolate those components away with a microkernel architecture. In parallel, people can work on the other components, which have all kinds of problems as you said & are frankly easier to improve than OS kernels. There's actually a ton of work on both in Comp Sci, with stuff regularly submitted to Linux (the Saturn team alone found 100+ bugs) and published for OSS to use/improve (see SAFEcode, Softbound + CETS, Code-Pointer Integrity). Little uptake by either despite tools getting easier and easier to use… even automatic. Overhead is lower than ever, too, even though significant.
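
Even short of those research tools, ordinary toolchain hardening is nearly free. A minimal sketch of what distros could turn on by default (exact flags depend on the GCC version):

gcc -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 \
    -fPIE -pie -Wl,-z,relro -Wl,-z,now -o app app.c

That gets stack canaries, checked libc string functions, ASLR-friendly position independence, and a read-only GOT – none of it requiring source changes.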

” Adobe flaws”

I do agree that much of the low-hanging fruit is in user-space, with desperate need for work. Adobe is an interesting example, though, that illustrates the resistance we face on major software instead of toy FOSS projects.

Just getting Adobe's stuff into Linux distros was an uphill battle. I can only imagine the difficulty of (a) convincing them to rewrite their stuff in a way we prove is better or (b) a clean-slate implementation that's backward compatible, OSS, and uses modern security tech. However, we both know hackers love to cheat around obstacles: many teams developed binary-protection methods like control-flow integrity that they applied to versions of Adobe software, much like they did for Linux. They showed how they stopped or contained real vulnerabilities or attack classes. Neither Adobe nor Linux has adopted or cloned such methods, unless the Adobe sandbox is one of them. Results suggest it's not, or they chose unwisely. No push from the FOSS community on this outside requests for basic sandboxing.
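
For what it's worth, control-flow integrity has started reaching stock toolchains. A sketch, assuming a Clang recent enough to ship -fsanitize=cfi (it requires link-time optimization):

clang -O2 -flto -fsanitize=cfi -fvisibility=hidden -o app app.c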

Not sure if it’s a demand or volunteer problem for stuff like this. Been blaming demand so far.

@ Anoni

“One more point: If Linux is so insecure, why do I constantly read about bugs in Windows but almost never in Linux?”

Because you're probably reading media outlets that either have a pro-Linux bias or cover a desktop OS market that is around 90% Windows boxes, with attacker focus following accordingly. Nonetheless, post-SDL and the hacks of the NT/2000 days, Windows has a record low of 0-days being discovered, due to a combination of quality improvements and most low-hanging fruit being gone. Although I prefer Linux, the CVEs and reports from bug hunters show thousands of flaws discovered that might have resulted in code execution. By 2015, NVD's numbers showed Windows was ahead in vulnerability metrics (38 WinServer vs 119 Linux kernel).

They're both monoliths with MBs of kernel code written in unsafe languages, though. So, they're both bad designs. It's why they need to apply the best vulnerability mitigation tech possible or replace it with a more secure architecture. I've given examples above. Two more for fun. See how one side tries to make security & reliability easy by default while the other makes it hard even for elite developers? Monoliths (and Linux) refuse to join the former camp.

Justin November 11, 2015 3:03 PM

@Anoni

I mean, Windows, we've got drive-by malware. Visit the wrong webpage with Internet Explorer and get pwned. But Linux, Linux is solid as a rock. My machines stay up for years.

You’re right about Windows, but you’ve got a dangerous and false sense of security about Linux.

my Linux boxen

are probably, statistically speaking, owned and part of a botnet. (Those distributed ssh login attempts coming from everywhere??? Or are you so hacked that you don’t even see them?)

Jack Dodds November 11, 2015 5:25 PM

@ Clive Robinson, Nick P

Clive claims that only user software projects get forked. What about XFree86 which forked to X.Org? In that case the motivation for the fork was a licence change, not a security issue. The fact remains that it was forked.

It is quite clear that most of the users find the level of security in Linux to be acceptable for their purposes.

Nick P November 11, 2015 6:49 PM

@ Jack Dodds

Most of what Clive said applies to kernel fork attempts as well as user-mode ones. I gave a link with all kinds of attempts to change Linux security for the better. Most were ignored by the main kernel.

That's also a funny counter-example you chose. The X Window System that most of BSD and Linux depended on had an issue that, in their minds, put it in jeopardy. So, they forked it to get away from the license change. All the momentum going in its direction met a slight obstacle that was eliminated to continue going in the same direction. Both a very exceptional type of situation and totally the opposite of people pushing heavy modifications of kernel code for security against the tide of the main developers and Linus.

Justin November 11, 2015 7:12 PM

@Jack Dodds

It is quite clear that most of the users find the level of security in Linux to be acceptable for their purposes.

Right. And what they don’t know won’t hurt them. That is the prevailing attitude. Sure, for most purposes, a Linux box is still quite usable, even if it’s making automated distributed ssh login attempts to other Linux “boxen” or sending out the occasional spam.

Dirk Praet November 11, 2015 8:17 PM

@ Jack Dodds

It is quite clear that most of the users find the level of security in Linux to be acceptable for their purposes.

Let’s change that to “It’s quite clear that most of the users think the level of security in Linux is acceptable for their purposes”.

Many Mac aficionados are still convinced their beloved machines are impervious to viruses and malware. They're dead wrong, and so are the Linux fans thinking in similar terms about their favorite OS. Not to mention the fact that most of the Linux folks I know – whether home users or sysadmins – actually know very little about securing and hardening both OS and applications, let alone about kernel-related issues. Most of them just assume they are safe by definition because "Linux is a secure platform".

Whether you are with or against Linus on the kernel security issue does not change the fact that, unlike Microsoft, Apple and other tech giants, the kernel crew does not have a controlled program in place to efficiently deal with critical vulnerabilities. Which, in terms of risk management, is a serious liability.

As @Clive has explained, forking the kernel is not the solution. A professional, industry-standard approach is. But for that you first have to acknowledge the problem, which in my opinion Linus is either still in denial about or – understandably – reluctant to relinquish his full control over the kernel for.

Justin November 11, 2015 9:31 PM

@Dirk Praet

Whether you are with or against Linus on the kernel security issue does not change the fact that, unlike Microsoft, Apple and other tech giants, the kernel crew does not have a controlled program in place to efficiently deal with critical vulnerabilities. Which, in terms of risk management, is a serious liability.

All Linux distributions do have “program[s] in place to efficiently deal with critical vulnerabilities.” In terms of risk management, whom do we sue when we get hacked? Oh yeah. Good luck suing “Microsoft, Apple and other tech giants.”

Trouble is that neither Windows, nor Linux, nor iOS, nor anything else created by the tech giants, has a process and practices in place for designing and building in resistance to critical vulnerabilities in the first place. Critical vulnerabilities are looked at by all these companies as a P.R. issue for which some token response or "patch" or fix is to be made after the fact, presumably to avoid a civil lawsuit. That's why there is so much emphasis in the industry on the "zero-day" aspect – whether the vulnerability has been disclosed to the general public or not. If it hasn't been disclosed to the general public, it isn't seen as a problem at all, even if it is actively but covertly being exploited. And the "hackers" are blamed for the vulnerability, not the vendors of the insecure system.

It’s like blaming the burglar for the flimsy design of the lock on your front door. All the vendors insist that cheap locks are adequate for most people’s home security needs, and by the way, they sell expensive home security solutions….

cdmiller November 11, 2015 11:27 PM

Heh. Most server-level breaches are via the social-engineering human element. Next come crap web software, followed by crap sysadmins using crap configurations. UNIX heredity has tools out the wazoo to mitigate kernel and userland problems, and Linux-based systems are no exception. Sysadmins need to be using the tools to provide defense in depth, detection, adaptation, resiliency. Devops in particular makes it easier to take a Ranum-like approach to creating and managing hardened systems, Linux or otherwise. In software I'll take open vs closed in general, and trust code I can see, audit, and correct more than code I cannot.

All the concern is kind of amusing given this blog has pointed out time and again the more insidious attacks are now happening in the hardware, bypassing OS kernels. Corporations are probably already taking that documented government approach to compromising systems.

Dirk Praet November 12, 2015 4:51 AM

@ Justin

All Linux distributions do have “program[s] in place to efficiently deal with critical vulnerabilities.

None of which have any control over the kernel. And fortunately so, or we would have ended up with the same rotten ecosystem Android has today. They can have as many programs in place as they want, they depend entirely on Linus & co. to acknowledge and fix things.

Trouble is that neither Windows, nor Linux, nor iOS, nor anything else created by the tech giants, has a process and practices in place for designing and building in resistance to critical vulnerabilities in the first place.

Which is an entirely different thing that in no way prevents a professional approach to software life cycle and vulnerability management. It's not like Linus doesn't know how to do such things – remember, he's the man behind Git too – he just totally hates that kind of stuff. Like most engineers do.

Clive Robinson November 12, 2015 6:31 AM

@ cdmiller,

All the concern is kind of amusing given this blog has pointed out time and again the more insidious attacks are now happening in the hardware, bypassing OS kernels.

Whilst I among many others have pointed out quite frequently that attacks below the CPU level –in the computing stack– are not just possible, but are often persistent and not detectable by the CPU, or by the AV etc software that runs on it, there is usually a caveat or two given,

1, The attacker has had physical access to the system at some point, after it has come into your possession (ie evil_maid).

2, The attack was put in in the supply chain (ie interdiction).

Unfortunately there are other ways which had not really been talked about prior to Bad_BIOS, and even since then few have picked up on the implications.

For "below CPU" attacks to work, there needs to be some kind of hardware that works below the "main CPU". In times past that would have been some kind of addition that worked like a DMA device, often present in high-speed IO devices such as hard drive, video or network controllers. Thus finding them was possible by the mark one eyeball spotting the actual hardware, or by observing both the main CPU and memory control interfaces with a logic analyzer (to see the DMA "cycle stealing").

Whilst this observation is still possible, attackers don't actually need to add "attack" hardware these days, just "hijack IO SoC CPU flash memory" instead. Which means that on some operating systems (MS-Win, Apple-Mac) and some hardware platforms (x86 PC architecture) it can be done without much difficulty via a simple phishing attack.

However, in addition, Intel's later IAx86 chipsets have a dirty set of secrets introduced to correct Ring0 security and backwards compatibility issues. This resulted in a second hypervisor CPU for SMM, sometimes called Ring -2, that just like IO SoC CPUs works below the main CPU, but unlike them shares main memory, which can be "got at".

As FOSS OSs tend not to support IO SoC flash memory writing, they were marginally more secure than the common commercial OSs. However, this additional security is now being sidestepped by the use of "userland IO", introduced to get around other security weaknesses or the OS bottleneck in high-bandwidth servers etc.

Such second CPUs have a much more unfortunate side effect than most realise, one not much talked about: they can be used to defeat "code signing", TPM and "Fritz-chip" DRM systems, including main CPU "microcode updates" (which Intel has mandated as essential for IAx86).

It's why FOSS OS kernels like Linux need to ensure "firmware updates" are not possible when the kernel is up and running, and userland IO is avoided or properly secured. Thus kernel security is the last line of defence against state-level and equivalent APT attackers, and increasingly against targeted criminal attacks. So kernel security matters, especially on externally connected systems, especially those used for security like routers, firewalls, diodes/guards/sluices and instrumentation. Just the sort of areas Linux is getting a lot of mileage in as "network appliances" etc…
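
As a small example, Linux does expose a couple of one-way switches that narrow that window once the system is up – a partial measure only, as they stop module and kexec loading, not writes from below the CPU:

sysctl -w kernel.modules_disabled=1      # no more module loading until reboot (one-way)
sysctl -w kernel.kexec_load_disabled=1   # likewise for kexec, on kernels that support it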

Anoni November 12, 2015 11:25 AM

@Grauhut “The Linux Kernel NEVER had priv escalation probs! Never ever! :)”

It has. Last time I heard about one, many years ago, somebody had bypassed the repository checkin to insert a privilege escalation bug. It was detected quite quickly and corrected.

It’s not really a question of my selling you insurance so much as my deciding between a Volvo (Linux) and a Yugo (Windows). Which do you think will survive a crash better?

 

@Nick P

I always thought of it more as Linux provides a deeper hole to climb out of.

Windows has a lot of stuff talking to the net, a lot of ports open, and an awful lot of applications that need to run as Administrator, most of which were written with a "what security?" attitude.

Linux, well first you've got to find a way into the system. A port that's open, or a program that will talk to your server that has a bug in it that you can exploit. There's a reason social engineering is the favored approach. Then you have to be able to exploit that bug and get out into the system. Then you have to find a second bug, a privilege escalation bug, that you can exploit before you can actually do anything. And that's assuming, big assumption here, that the original application wasn't chroot'ed into a jail, which really limits your options for privilege escalation.
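
BIND is the textbook worked example of that layering: one flag chroots the daemon and another drops root once the privileged port is bound (paths here are illustrative):

named -t /var/named/chroot -u named   # -t chroots first, -u then switches to an unprivileged user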

It’s not impossible, but it’s a lot harder.

Microsoft has, for decades, pushed features over security, often with horrid beyond belief security implications. (Remember ActiveX? Silverlight still makes the top ten.)

Linux has gone the other way, and it really shows.

You talk about "Windows has a record low of 0-days being discovered". Yet you ignore the record exploits of applications running under Windows, usually as Administrator. Flash led the list last year.

It’s like you’re looking myopically at this one little paper cut on your pinkie while ignoring the part where your legs have been blown off.

I'd like to know more about what these vulnerabilities are! Your link says Win8 has 36 whereas Linux has 119. But Windows lets anyone on the net break into your machine as administrator – it actually took a matter of minutes with an unpatched machine just a few years ago – whereas Linux requires someone to be at the console with the right packages installed. You know, that's night and day.

I notice when I try to click through your links and view the NVD vulnerabilities for Linux that their web server fails to deliver, with an ASP error. Perhaps they're not the most unbiased source? Wouldn't be the first time we've seen such things. A few years ago Microsoft's web server was claiming record adoption rates. Only when you looked into it, their growth was entirely in parked domains. Active webpages stayed the same or declined slightly. Why do you suppose they did that?

 

@ Justin “You’re right about Windows, but you’ve got a dangerous and false sense of security about Linux.”

Maybe. Maybe not. There’s a lot of very simple things you can do to pick up on problems early.

I have dealt with Linux boxen being owned. (Stupid managers sending passwords in plaintext over FTP.) One of my favorite tricks, back in the day, was to boot off DVD and manually verify all the RPMs against the originals.
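
The same trick still works, roughly like this, run from rescue media with the suspect root mounted (bearing in mind the rpm database itself sits on the suspect disk, so it's a screen, not proof):

mount -o ro /dev/sda2 /mnt/sysimage   # device name is an example
rpm -Va --root /mnt/sysimage          # verify size/digest/permissions of every installed package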

Anyway, I really doubt my current machines are part of a botnet. Doesn’t show on third-party tcpdump. (Hub & third machine sniffing.) Doesn’t show on ethernet lights.

Remote login attempts fail. I disabled external password access decades ago. ssh-agent is so much more convenient, and makes it so much easier to send public keys through email.
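
For anyone who hasn't made the switch, it's only a few lines – generate a keypair, install the public half, then turn passwords off server-side (ed25519 needs OpenSSH 6.5 or newer):

ssh-keygen -t ed25519
ssh-copy-id user@host   # appends the public key to ~/.ssh/authorized_keys
# then in /etc/ssh/sshd_config on the server:
#   PasswordAuthentication no
#   PermitRootLogin no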

Nowadays (my personal) ssh ports are internal only, hidden behind multiple NAT boxes & firewalls which have to be breached first. The most likely route of entry is through a bug in a web browser. But there are good web browsers out there that try like hell to avoid that. Flash is disabled, of course.

Those distributed ssh login attempts coming from everywhere??? Or are you so hacked that you don’t even see them?

Back in the 90s when I ran a website out of my bedroom, aside from disabling ssh passwords, I also logged to console. And ran tcpdump. I saw everything.

One of my other favored tricks was non-writable storage for anything static. Makes it harder to change things. Or, back in the day, tripwire.
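
Both tricks still work today – something along these lines (paths are examples):

mount -o remount,ro /srv/www    # read-only filesystem for static content
chattr +i /srv/www/index.html   # ext immutable flag; even root must clear it before writing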

@ Dirk Praet

It’s been some time since I’ve worked in Linux Systems Administration. But, back when I did, one of the problems I ran into was that every distributor had the stock Linux kernel along with tons of patches and extensions. Made things difficult when you needed to recompile to support newer hardware.

For decades now I’ve been hearing this FUD. It’s not true. Critical Vulnerabilities are dealt with very quickly! I’ve seen relatively minor bugs confirmed and patched to development branch within the hour. Some of the lesser players may be a bit slower to roll things out as they depend on upstream sources. But we’re talking hours not years.

Things get fixed extremely quickly in Linux. There’s a lot of people with a lot of money, tens of billions of dollars, at stake here with a vested interest in getting things fixed really quickly and really right! And they hire the best, the very best. They can afford to.

 

What we are not seeing is Linus redesigning the entire kernel from the ground up around a microkernel architecture because that’s magically more secure.

Curious November 12, 2015 11:51 AM

I don't use Linux, but at the very least I would expect security to be considered important, so that there is at least a critical understanding of the state of Linux security. I would have liked to see some kind of periodic review of anything security-related in Linux, to keep everyone informed.

Grauhut November 12, 2015 12:54 PM

@Anoni: “It’s not really a question of my selling you insurance so much as my deciding between a Volvo (Linux) and a Yugo (Windows). Which do you think will survive a crash better?”

When it comes to critical information security: none of them.
I prefer BSD tanks, because I have a tank driver license. 🙂

And trust me, I know Linux. Today I had to help a junior devop debug an SELinux problem on an updated mail AV gateway, RHEL7 using MailScanner. Not funny! But better than seeing them switch it off, because nobody reads man pages anymore…
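
For the record, the read-the-man-pages way to debug it instead of switching it off looks roughly like this (and do review the generated local.te before loading anything):

ausearch -m avc -ts recent                          # show recent SELinux denials
ausearch -m avc -ts recent | audit2allow -M local   # draft a policy module covering them
semodule -i local.pp                                # load it – only after reading it!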

Nick P November 12, 2015 1:56 PM

@ Anoni

The topic is Linux security. Focusing on its issues isn’t myopic: it’s the topic of discussion.

Regarding Windows, that was the first link I got in Google with metrics, but not the only one out there. I used it because it was consistent with everything else I've read on the subject in the past few years. It's a fact that Windows' code quality is better than ever, even on new features, due to their multi-million dollar investments into QA (esp Lipner's SDL). Industry's annual reliability studies usually had IBM's AIX, then Linuxes, then Windows in terms of uptimes. Recent years had Windows Server ahead of enterprise Linux distros. Also, more than one firm specializing in investigating breaches has said attacks are almost always on applications instead of the system, due to the security improvements. Now, there's certainly plenty more to be found, but there's no denying it's apparently gotten more difficult, to the point that Linux is providing higher returns for kernel bug hunters.

Regarding difficulty, the results at hacker challenges don't support your claim. Exploitation of vanilla Linux vs Windows seems straightforward once they have a vulnerability, with any difference being trivial vs finding that vulnerability. I'm guessing you haven't used Windows in a while if you think all the core apps run as Administrator or that Admin vs User/Limited is the only control they have. They've added whitelisting, sandboxing support, Mandatory Integrity Controls, the SLAM verifier for drivers, the EMET technology for legacy, formal verification tools for apps… all kinds of things to prevent or limit attacks, in addition to 3rd-party products like SandboxIE and DefenseWall.

Aside from ignoring that, what you’re doing is conflating security of Windows and the often-attacked apps. The latter are the problem of the app authors. Standard Windows security advice is to avoid Flash, Acrobat, Java, etc because they’re just untrustworthy. There’s also sandboxing solutions for various programs and higher-quality alternatives to common ones. That a few vendors have terrible security despite everything available to them implies nothing about what Microsoft makes. It just says… use different apps and follow security advice.

Now, your false claims aside, let’s focus on what real advantages exist for OSS and Linux platforms. That will be my next comment.

Nick P November 12, 2015 2:01 PM

Real security advantages of popular OSS such as BSD or Linux

  1. Source code is inspectable and modifiable. Contrary to wide belief, this provides no immediate advantage whatsoever to security. What it represents is potential that can be realized when someone steps up to review, improve, or extend that source. That potential is still a real benefit given it's the foundation for others.
  2. Many eyes argument as applied to debugging. Users spotting problems might submit detailed bug reports. With both source and those, these problems might be fixed at a faster rate than otherwise. Many eyes might fail for security in general, but it's especially effective here: any potential vulnerability that is also a user-visible bug might get noticed and fixed. Past that, the OSS track record isn't much different from commercial in terms of quality.
  3. Widespread hardware support. Enemies target specific code on a specific ISA. That code might be an app, the OS, drivers, or firmware. I advocate taking advantage of this via Security via Diversity and Security via Obfuscation. With diversity, a problem in one implementation of a standard (eg SSL, POSIX) kernel might not be in another. Diverse hardware/software combos can reduce risk as such. Further, everyone using one combo (eg Win/x86/ARM) creates a monoculture that lets one attack work on all with no further investigation or work. Obfuscating what you're using while picking among many hardware/software combos increases the odds that their attack (a) won't work or (b) will be detected as unusual crashes or network activity. So, portable and OSS systems have an advantage here.
  4. Control over kernel and user-land attack surface. Solutions such as Windows Embedded do let one strip out much of what they don't need. However, they control the cut-off point for that in terms of what's visible and what's achievable with reasonable resources. Projects like Linux From Scratch, kernel config tools, and Poly2 take advantage of OSS's potential by (a) letting you strip unneeded kernel functions and (b) letting you decide exactly what you want above it (see the sketch after this list). This can dramatically reduce the number of 0-days in the system, along with providing early warning signs of attack when non-existent functions get called and sound alarms instead of activating.
  5. Ability to harden kernel against attack. The best security approaches usually require the source to provide the best balance of protection, compatibility, and performance. Unsurprising that many academics, individuals, and companies have modified OSS software to improve its security. These included everything from compiler transformations to hardware-based security to mandatory access controls. As stated above, these rarely go anywhere and OSS projects often fight against them. Nonetheless, the potential exists and occasionally works out in practice in a way that closed-source, legacy OS’s virtually never deliver.

  6. Easier re-engineering of existing software. On top of "many eyes for debugging," Eric Raymond also identified another benefit of OSS that comes from its nature. Most features start out as an itch someone wanted to scratch for their own needs or fun. Lots of those means tons of code out there at varying stages of completion. Like ESR did, the next developer is likely to get further ahead in OSS by having a good starting point forking a prior project. That lets one avoid many unknowns the project might have already tackled. Further, many complete and documented projects will exist for other needs that might simply be re-engineered for security/quality. Many automated tools exist, as I listed in my first post.

  7. All of this can happen with less risk of lawsuits. The standard OSS licenses eliminate the kind of liabilities that come with modifying or remarketing proprietary software. Groups like BSA are very likely to come after companies over licensing payments, even hiring informants. Use of OSS requires nothing at all with redistribution requiring no real burden in most licenses. Even GPL just requires release of source upon distribution. FSF usually issues warnings to violators rather than do highway robbery with legal teams. Overall, the legal risk of OSS w/ standard licenses is much lower than proprietary.

  8. Support. Both companies and communities are more likely to support the effort. If it’s a major project, this can mean an increase in innovation, bugfixes, and even product/project-specific funding. The companies that offer paid support for these often deliver higher satisfaction to customers because the customers are their core business rather than an afterthought from licensing revenues. Nonetheless, the support level should be assessed on a project-by-project basis due to variances.
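
To make point 4 concrete, the stock kernel build system already supports this kind of stripping – a minimal sketch:

make localmodconfig   # trim .config down to the modules loaded right now
make menuconfig       # then prune remaining subsystems by hand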

So, there are some concrete benefits of OSS over proprietary platforms. This includes, but is not limited to, Linux. They range from technical to legal to political. Anyone pushing OSS should focus on these real benefits instead of those that don't exist or misleading statements about the competition. After all, who can argue against real, concrete benefits? At this point, the discussion becomes one about which open-source approach is best suited to solving the problem at hand.

Hence, my analysis of Linux and its community against competing approaches in terms of achieving its security objectives. That analysis showed problems all over, with plenty of room for improvement. That causes me to call on Linus and the Linux community to recognize and respond to those issues. This isn't to say Linux has no security benefit. It simply is nowhere near where it needs to be if people want to claim it's secure against even skilled blackhats, much less nation-states. It's on them to get it there, given other projects and products have already amply shown how to do that. The Linux community so far is simply unwilling to take those steps.

Grauhut November 12, 2015 4:14 PM

@Nick P "It's a fact that Windows' code quality is better than ever, even on new features, due to their multi-million dollar investments into QA (esp Lipner's SDL)."

Correct, Windows has become a lot more stable. But the newer versions also leak a lot more data by design. And when it comes to security, yes, it's more secure against unchartered access. Chartered access – can we know if the NSAKEY was removed? We can't. Trust without knowledge? No? -> OSS 😉

Nick P November 12, 2015 4:25 PM

@ Grauhut

Oh definitely many leaks in the newest ones, and for who knows what nefarious reasons. I always told people the character of an organization is very important. Microsoft, IBM, Oracle… all organizations of poor character known to abuse or exploit their users in inexcusable ways. Many were stuck with Windows, so I just told them to use Windows 7 with certain stripping, hardening, and other security advice. Otherwise, alternative desktop OSs are good enough for business if one is using portable apps. Specific Linux desktops on pre-tested hardware are a much better deal. I plan to investigate PC-BSD soon, as well.

The safest bet, if 0-days are the concern, is probably an OpenBSD-based desktop with most minimal and security-focused components on top. Linux is simply too inclusive, fast-moving, and focused on features to maintain code security required. In effect, they and Google’s Android are becoming the new Microsoft in the two categories. History repeats.

At least there are new projects starting from the kernel up with much better TCBs and odds of damage containment. Hopefully they get some momentum.

Dirk Praet November 12, 2015 5:05 PM

@ Anoni

For decades now I’ve been hearing this FUD. It’s not true. Critical Vulnerabilities are dealt with very quickly! I’ve seen relatively minor bugs confirmed and patched to development branch within the hour. Some of the lesser players may be a bit slower to roll things out as they depend on upstream sources. But we’re talking hours not years.

Unfortunately, your claims are not entirely backed up by the facts. There are currently 1322 CVEs for the Linux kernel, some of which have been open for years.

Nick P November 12, 2015 5:36 PM

@ Dirk Praet

Holy crap, I didn’t know it was that bad! Gotta love all of those that say it nets root, requires no real access ahead of time, and compromises every part of CIA triad. Must have been seriously complex stuff nobody could’ve prevented with code audits, compiler transforms, or a safer language w/ checks on. (looks closely) No bounds check, “multiple integer overflows,” “buffer overflow”… oh dear, it’s the low-hanging fruit that’s been preventable for over a decade.

I’ve seen secure coding and… that… ain’t… it…

Grauhut November 12, 2015 7:50 PM

@Nick P: “I plan to investigate PC-BSD soon, as well.”

I don't know why, but I don't like PC-BSD. For a simple desktop I would give GhostBSD a try.

Screenshot GhostBSD Mate: http://up.picr.de/23692944yi.jpg

For the flexibility of FreeBSD with some extra security, HardenedBSD (FBSD's sec kindergarten):

Screenshot HBSD Xfce: http://up.picr.de/23692970it.png

pkg   # first run asks to bootstrap the pkg tool itself – answer yes
pkg install nano
pkg install xfce
pkg install xinit
pkg install xorg
startxfce4

You may need some extra rules in /etc/secadm.rules

https://github.com/HardenedBSD/hardenedBSD/wiki/Non-Compliant-Applications

Wael November 12, 2015 8:14 PM

@Grauhut, @Nick P,

I don't know why, but I don't like PC-BSD.

I also share your sentiment. I only install it for "cheating". When I have trouble setting the optimum resolution on FreeBSD (a virtual machine installation), I install PC-BSD and copy the Modeline parameters to the X.org config on the FreeBSD instance. I then remove PC-BSD.

Give TrustedBSD a try. I haven’t yet, although I’ve been using FreeBSD for several years.

Clive Robinson November 13, 2015 10:16 AM

@ Nick P,

At least there are new projects starting from the kernel up with much better TCBs and odds of damage containment. Hopefully they get some momentum.

Even though it might not appear so, they do have momentum these days. Thanks to the Ed Snowden trove, the giant of public opinion is slowly awakening with disgust, due to the likes of the UK's ham-fisted Home Sec Theresa May MP desperately trying to push through the Stalinist wet dream of the Snoopers' Charter.

With regard to "History repeats": yes, history goes around, but it is more like a wheel than a top. That is, it has both momentum and direction, and usually does not cover the same ground in the same way twice.

IBM won the first battle but lost the second to Microsoft, which, whilst winning the second, ceded the third to Google, Facebook, etc. Now Microsoft is trying to play catch-up on seizing personal data, but it may be too little too late, as its falling market share in OS take-up shows.

Meanwhile Google and Facebook have learnt different lessons about privacy from "the old countries" in Europe. Microsoft knows from bitter experience that what riles a European court can have repercussions back in the US. Whilst US political appointees with less brains than "god gave a goose" may blab "F**k Europe" in a way that gets internationally known, others are painfully aware the US is outnumbered and losing influence on the world stage. Hence the behaviour surrounding TTIP etc: get in quick whilst they still have the ability to "fix the game" in their favour.

Battles are won and lost; winning kings and their generals rarely learn from a win, whilst losers innovate new ways to win the next battle and sometimes the war.

The Internet founders did not have the resources to do anything but break ground for tracks and paths. Fences were not an option. In Europe, old experience and the lessons learned gave foresight, hence the OSI model long before the resources to support it were available. However, the older lesson which gave us "The Great Game" meant that vested IC interests kept fences from being designed.

Now, however, the implications of "The Great Game" and the stupidity of the kings and generals are more widely known. And as Pandora found, the box once open is not just difficult to close; things that got out can, like quicksilver, escape the grasp that tries in vain to put them back in.

So whilst history does repeat for those that fail to learn the lessons and think on the future, for those that learn, envision and innovate, it brings fresh opportunities. Thus kings, their generals and empires do fall, and at each rebirth they lose to those who can dream of futures new.

The power moves; absolute power is no longer held by a "god-head king" – even dictators and tyrants know that to stay on top, power has to be shared. Science, and the technology it spawns, may be a double-edged sword agnostic to its use, but in the process it takes power, changes it and spreads it out further. We are now in a time where power is in the hands of those who pay the technologists and scientists. But even they are starting to lose that, hence the laws they buy to try to stave off the future and corral what power they can. But like the kings of the past, the writing is on the wall even for these "Kings of Commerce" and their "Trading Empires". Power, like heat, suffers from the consequences of a closed system and the entropy of finite resources.

It is the "entropy of power" at work that gives the notion of "information wanting to be free". It's also the higher-order function that Bruce is looking for in his search beyond the first-order "Hawks and Doves" model. Likewise, it is also the unrecognised law behind the various faux axioms of "economies" that various people fail to model.

Anoni November 13, 2015 11:59 AM

@ Nick P

I’ve seen this same garbage so many times and so many ways.

You guys are so concerned about the quality of the lock on the front door (Kernel) while the back of the house is outright missing.

“Windows is secure and we can prove it.” Yeah, but for some reason automated botnet attacks are prevalent on Windows while social engineering and password guessing is favored for Linux attacks.

Why are people trying to guess SSH passwords if there are all these security holes in the Linux kernel?

It doesn’t add up. Nor does it match my firsthand experiences.

"I'm guessing you haven't used Windows in a while" I use Windows for many hours every week, unfortunately. That may change. Win8 & Win10 have been a disaster. WINE is maturing very nicely.

In my experience, there’s always a lot of hype about how great and stable Windows is. It’s like a soap bubble. It’s all hype. It doesn’t stand up to the least scrutiny.

Windows, in my experience, is consistently highly unstable, requiring multiple reboots every week, often every day, often with hard crashes requiring powering down the box. Linux, on the exact same box, on the exact same hardware, is stable for years. And, in point of fact, I need Linux and ntfsclone or dd to reliably back up and restore my Windows partitions. Also to detect problems, as every tool, every single tool, under Windows (Win7) swore up and down that my hard drive was fine, while Linux smartctl showed tens of thousands of errors in the on-disk SMART log and a large chunk of the drive as unreadable.
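
(For anyone wanting to check their own drives, smartmontools makes it a one-liner or two:)

smartctl -H /dev/sda         # overall health verdict
smartctl -l error /dev/sda   # the on-disk SMART error log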

This is night and day man!

Microsoft always has somebody else they can point the finger at for their problems. It's the graphics driver. It's third-party software. But you know, on Linux, when something dies, it doesn't take out the whole freaking computer, complete with multiple reboots and hard-drive repair programs. The offending process ends and life goes on. In my experience, Microsoft just does a really crappy job.

It’s a bit like when Microsoft required permission for every little thing in a pop-up box, and then claimed it wasn’t their fault you got powned because you clicked yes on the wrong one of those tens of thousands of pop up windows.

Have you even seen the list of telemetry domains that need to be blocked for Win10? It’s like over a hundred. And Win10 bypasses the hosts file and the local firewall to reach them. You have to block them upstream. It’s just a nightmare.

“what you’re doing is conflating security of Windows and the often-attacked apps.” One of the top 10 attack-vector apps from last year, noted in my earlier link, was Microsoft’s Internet Explorer.

You know, under Windows, when I'm running the app as Administrator, as that's required, well, when it's penetrated, my computer is pwned. Whereas on Linux, I'm running as nobody, and they need a second privilege-escalation hack before they can do anything. And they'll have to break out of the chroot jail too. It's a better security model by design.

Again I ask: All these bugs in Linux. Is it the sort of bug like we saw with winXP where you couldn't put the winXP box on the internet to get it patched without it being owned before patching completed? Or is it the sort of thing where somebody who tried really hard and for a really long time could eventually maybe crash the ntpd time daemon and it would have to be restarted?

"They've added whitelisting, sandboxing support, Mandatory Integrity Controls, the SLAM verifier for drivers, the EMET technology for legacy, formal verification tools for apps… all kinds of things to prevent or limit attacks, in addition to 3rd-party products like SandboxIE and DefenseWall."

Great. Why doesn’t it work? Why is windows still such a piece of junk?

And why are folks trying to guess SSH passwords if Linux is so vulnerable?

And why is a company like Red Hat, with a market cap of 14.28 billion, blithely allowing these bugs to exist that could utterly destroy them?

Or is this the same thing we’ve seen since the ’90s of folks who can’t get security right to save their ass claiming the competition is insecure because they haven’t jumped through all the right hoops TO HIDE THEIR OWN MASSIVE FAILURE!

Thanks but no thanks. I’ll stick with what actually works.

 

@Dirk Praet

Have you even read what you linked to?

First one I clicked on was CVE-2000-0506: https://www.cvedetails.com/cve/CVE-2000-0506/

The “capabilities” feature in Linux before 2.2.16 allows local users to cause a denial of service or gain privileges by setting the capabilities to prevent a setuid program from dropping privileges, aka the “Linux kernel setuid/setcap vulnerability.”

Whoa. Looks serious. Okay. Where’s the details? First detail link I clicked on from that page was: http://archives.neohapsis.com/archives/bugtraq/2000-06/0062.html

Quote: While it has not been proven that any software shipped with TSL 1.0x can be used to exploit this bug, we know that many of our users add their own packages to their servers, making us indirectly susceptible to exploits

We have therefore chosen to release packages for TSL 1.01 containing version 2.2.16 of the kernel, in which the hole is fixed. These files can be found at:

It’s dated “Fri Jun 09 2000”. Fixed over fifteen years ago.

Are they all like this? Ancient bugs dating from the Win98 era, fixed over a decade ago? Let's try something more recent…

CVE-1999-0524 ICMP information such as (1) netmask and (2) timestamp is allowed from arbitrary hosts.

Gee, click on that and you can see it’s also a problem with MacOSX, HPUX, AIX, All Windows, etc.

You can also see that it gets blocked by the firewall under Linux.
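
The firewall rule in question is one line per probe type that CVE describes (iptables syntax shown):

iptables -A INPUT -p icmp --icmp-type timestamp-request -j DROP
iptables -A INPUT -p icmp --icmp-type address-mask-request -j DROP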

CVE-2012-6542 The llc_ui_getname function in net/llc/af_llc.c in the Linux kernel before 3.6 has an incorrect return value in certain circumstances, which allows local users to obtain sensitive information from kernel stack memory via a crafted application that leverages an uninitialized pointer argument.

Click on it and there’s a link to the Github patch dated Aug 15 2012: https://github.com/torvalds/linux/commit/3592aaeb80290bda0f2cf0b5456c97bfc638b192

Most of these you don’t even need to click through. Look at the description and they require an ancient 2.x series kernel. Version 3.0 was released back in 2011.

 

Enough folks. Enough. I've wasted enough time on this topic. If you think a bunch of fancy new technologies magically makes you secure in spite of reality pointing to the contrary, if your giant list of dreaded kernel bugs is so outdated they were fixed decades ago, or completely inaccessible due to other mitigating technologies like a simple firewall, well, don't let the door of reality hit you in the ass on your way out. Reply if you must. I've had enough.

I for one will stick with what actually works.

Nick P November 13, 2015 2:38 PM

@ Anoni

“You guys are so concerned about the quality of the lock on the front door (Kernel) while the back of the house is outright missing.”

I said use highly-assured OSS with detail from hardware to kernel to apps. It’s what I always push. Where does your sentence fit there?

"Windows is secure and we can prove it."

That’s the second comment where you push your agenda with strawman claims. I said Windows’s coding has improved to the point that it’s easier to find 0-days in Linux than Windows. That’s according to 0-day hunters, security companies that fight them, and the organization that tracks them. It’s a fact.

However, I didn't say Windows is secure. My comment to Grauhut also acknowledges backdoors and lack of trust in Microsoft. So, you need no strawmen or misrepresentations of my statements to argue against trusting Windows: I already did that for you. I'm not about to let you pretend, though, that Linux's kernel quality is on some higher level when the evidence contradicts that. Or keep acting like every advance in Windows security that counters real malware doesn't exist.

“Why are people trying to guess SSH passwords if there are all these security holes in the Linux kernel?”

Red herring. Why do people try to guess passwords or spearphish on Windows with all the holes in the Windows kernel? Because it's easier than finding 0-days. They always go for the lowest-hanging fruit.

“Windows, in my experience, is consistently highly unstable, requiring multiple reboots every week, often every day, often with hard crashes requiring powering down the box. Linux, on the exact same box, on the exact same hardware, is stable for years.”

"But you know, on Linux, when something dies, it doesn't take out the whole freaking computer, complete with multiple reboots and hard-drive repair programs."

It's funny how that works. My Windows XP Pro SP2-3, Vista SP1, and Seven Professional installs, with careful choice of hardware and drivers, worked for years with almost no crashes. The Linux desktops had issues more often, even freezing entirely. I haven't had a freeze on Windows since Vista, when they re-did their driver architecture to reduce that. Many problems went away after the rough transition period. As far as files go, I've lost GBs of files to heisenbugs in both OSs. I don't trust either in isolation for protecting important data.

Of course, I haven’t had to endure the pain of Windows 8 or 10 except working on relatives’ laptops. All the questionable behavior and leaks are why I discourage use of Microsoft software. I switched to using one of several Linux distros since they were finally stable enough. I get whoever I can on that. They’re still far from secure and I have a whole list of security precautions I offer to people who need them. Far as I can tell, most security professionals using Linux or BSD don’t stop at “Install Linux.” There’s usually 100+MB of updates/patches, security software, and hardening steps right after. Remind you of another OS experience? 😛

“All these bugs in Linux. Is it the sort of bug like we saw with winXP where you couldn’t put the winXP box on the internet to get it patched without it being owned before patching completed?”

There’ve been bugs like that in Linux. It also happened to QNX when they first used NetBSD code for networking. Know why they don’t get hit most of the time, though? Same reason users didn’t get hit on Mac OS, Amiga, BeOS, SkyOS, Syllable, TempleOS, and so on. Here it is: almost all attackers focused on Windows w/ its 90+% market share & small number of configurations to maximize bots collected & profit made for effort or money expended. Soon as Mac OS X got a newsworthy market share, it got slammed with a huge botnet and attacks. It still sees hardly any due to low overall share. Linux desktops still represent almost nothing in the marketplace, and they’re all different software setups. The diversity means malware authors have to deal with 100+ configurations instead of the few for Windows or Mac. And with fewer bots to show for it.

So, for economic reasons, malware authors focus on Windows for attacks casting a wide net. Even then, with all that labor on it, the coding quality is up enough that the attackers are focusing on widely-installed, insecure, privileged apps. The same ones that hackers at Pwn2Own compromised to control Mac and Linux systems with ease. There’s just more reward in hitting apps on Windows, and so that’s what they hit.

Note: In mobile, the situation is the opposite: the Linux-based system gets slammed across the board while the Darwin-based system sees far fewer attacks.

“And why is a company like Red Hat, with a market cap of 14.28 billion, blithely allowing these bugs to exist that could utterly destroy them?”

Another red herring: Red Hat has no liability, legally or realistically, for vulnerabilities in the Linux kernel. Both Linux and Windows networks get hacked all the time. Your comment presumes Microsoft or Red Hat might be issuing press releases begging companies not to ditch them over it. Yet, this is not the case. People instead expect hacks. They’ve been conditioned to think getting owned by easily preventable bugs is just a fact of life rather than an intentional choice by proprietary vendors (for profits) and FOSS pushers (to focus on fun stuff, and/or profits). Getting owned by clever or unforeseen attacks is different: only good detection and recovery helps at that point. Yet, most vulnerabilities I saw in Linux were due to missing bounds checks, buffer checks, overflow checks, input validation checks… the same bad coding attackers always hit because it’s easiest to spot.
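(To make the “bad coding” claim concrete, here’s a hedged illustration of the missing-bounds-check pattern in plain C, nothing specific to any kernel:)

    #include <string.h>

    void parse_buggy(const char *input) {
        char buf[16];
        strcpy(buf, input);                    /* no bounds check: overflows on long input */
    }

    void parse_checked(const char *input) {
        char buf[16];
        strncpy(buf, input, sizeof(buf) - 1);  /* bounded copy */
        buf[sizeof(buf) - 1] = '\0';           /* always NUL-terminated */
    }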

Clearly they aren’t trying very hard, don’t care very much, lack skill at writing robust software, or some combination. Nonetheless, it has zero effect on Red Hat’s or Novell’s bottom lines. The remedy is to invest large sums of money into revamping security from the bottom up. Nobody will do that. So, customers stay and the status quo remains. That their competition, both FOSS and proprietary, mostly has the same attitude helps. It’s how oligopoly markets work.

Summary: Windows sucks all around. They did improve their kernel vulnerability metrics to at or above Linux’s level, according to people who respond to hacks and track vulnerabilities. They and 3rd parties did provide ways to fight or contain many attacks. Linux and the BSDs, except OpenBSD, put features over kernel security while refusing to eliminate or fix anything risky. They also provided features to fight or contain many attacks. Similar benefits and problems to Windows. Due to OSS advantages & Microsoft’s backdoors, the Linux and BSD distros are the lesser of two evils if one wants mainstream features, with OpenBSD best if one wants the fewest defects. None have a security architecture designed to reduce problems from the ground up, despite countless worked examples over decades from academia and industry.

So, I push for improvements based on what’s worked while telling people to leverage Linux or the BSDs as an interim solution with hardening, backups, monitoring, and pre-tested recovery processes. All that can be done while the market cries “Worse is Better!”

But, still, don’t take my opinion on what mainstream OSS software is like. Here’s a top OSS figure giving plenty of detail of life in the bazaars. With rare exceptions, all my buddies that work on Linux/BSD FOSS report the same types of experiences and battles. Plenty of the code is good, with legacy and compatibility issues dominating. Interestingly, when Windows’ source leaked, an analysis showed it to generally be “excellent code” with the “hacks” all being for legacy and app compatibility. They are all being held back by the same problem despite opposite philosophies and goals. What irony.

BoppingAround November 13, 2015 4:20 PM

Nick P,

They’re still far from secure and I have a whole list of security precautions I offer to people who need them

Any chance you have them posted somewhere already?

Nick P November 13, 2015 7:01 PM

@ Grauhut

“I don’t know why, but i don’t like PC-BSD”

Now that’s scientific.

re GhostBSD, HardenedBSD

I’ll try them out. Thanks for the tip.

re picks with my name

Goofy mofo haha.

@ Wael

“I also share your sentiment [re PC-BSD].”

That’s two of you practicing science. Where is the data, though? What are you two leaving out?

” When I have troubles setting the optimum resolution on FreeBSD (a virtual machine installation), I install PC-BSD and copy the Modeline parameters to the X.org config on the FreeBSD instance.”

Reminds me of how I used Visual Basic 6 to develop GUI skeletons for console BASIC, C, and C++. 😉
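For anyone wanting to replicate that trick: the bit being copied is a Modeline in /etc/X11/xorg.conf. A hypothetical fragment (the values below are just the standard 1920x1080@60 timings, not anything PC-BSD-specific):

    Section "Monitor"
        Identifier "Monitor0"
        # Modeline copied over from the PC-BSD-generated config:
        Modeline "1920x1080" 148.50 1920 2008 2052 2200 1080 1084 1089 1125 +hsync +vsync
    EndSection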

“Give TrustedBSD a try. I haven’t yet, although I’ve been using FreeBSD for several years.”

If I do FreeBSD, I’ll be giving that and Capsicum a try. Plus I have the benefit of moving to CheriBSD if I suddenly have $2,000 to spare for the Terasic. I’ll dig in and around the couch just in case. 🙂

@ BoppingAround

I had a habit of jumping distros a lot for the diversity advantage. Don’t do it as much anymore given the capabilities of my adversary and my situation. Nonetheless, going through them showed Linux to have gotten really complicated and inconsistent across the board. So, I came up with a simple strategy for hardening:

  1. Keep any books or guides I had for the general case.
  2. Type “security guide,” “hardening guide,” “locking down,” etc. with Linux and the year.
  3. Type that with the names of the distro and specific programs I plan to use instead of Linux.
  4. Sift through the resulting recommendations to determine applicability and trustworthiness.
  5. Apply them.
  6. Strip out anything I don’t need.

That’s how I do it for a basic rig (a sketch of what the result looks like follows below). Assumption being the box will be owned by spooks anyway. The truly critical stuff I do is on dedicated, mobile machines and easily hidden storage. Physical separation. I don’t even connect those to my internal network, except throwaways. Probably going to have to make some changes if I switch to the BSDs. More Googling to follow. 😉
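To give a flavor of what steps 5 and 6 produce on a Debian-family box, a hypothetical minimal pass (real guides go much further; this assumes ufw and OpenSSH are installed):

    sudo apt-get update && sudo apt-get -y upgrade      # the 100+MB of patches
    sudo ufw default deny incoming                      # firewall: drop unsolicited traffic
    sudo ufw enable
    sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
    sudo systemctl restart ssh                          # no direct root logins over SSH
    systemctl list-unit-files --state=enabled           # step 6: spot services to strip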

Note: I keep some stuff on 16-bit DOS because modern malware can’t fit in it. Security via Lack of Storage. Just kidding haha.

@ Clive Robinson

“Which whilst they won the second, they ceeded the third to Google, Facebook, etc. Now Microsoft are trying to play catchup on ceasing personal data”

I don’t see it that way. I think they were all fighting different battles (eg markets). The overlap certainly surprised Microsoft and put it in catch-up mode. The catch-up is for data collection, though, not cessation. They started including stuff in Windows that seemed harmless. They turned their Xbox platform into a full advertising system. They pushed Bing with its ad-driven model. Now they’ve put all kinds of leaks into Windows. They’re collecting more than ever while using the legacy effect to get away with it.

Google and Facebook continue trying more stuff. Apple is the most interesting player given that they’re both doing collection in some ways and ending it in others. That’s probably because their money is in selling devices rather than ads.

“Microsoft know from bitter experiance that what riles a European Court”

Been a long time, but didn’t they get hit with a billion dollar fine or something? Or am I misremembering?

“We are now in a time where power is in the hands of those who pay the technologists and scientists. But even they are starting to lose that, hence the laws they buy to try to stave off the future and try to corral what power they can. ”

Good point. Seems to be their strategy.

“But like the Kings of the past the writing is on the wall even for these “Kings of Commerce” and their “Trading Empires”. ”

We’ll see. The model has worked for decades with bandage after bandage. Even the times it fell apart over here (U.S.) it still came back together well enough to survive. So, I have no idea what’s going to happen. There’s a lot of players involved at various levels of cooperation and competition. Complex game. Unfortunately, it’s people like us that have to deal with whatever headaches the Kings and Generals’ games cause us.

Wael November 13, 2015 8:59 PM

@Nick P,

That’s two of you practicing science.

“Sentiments” are hard to justify with “Science”. Let’s just say it doesn’t “feel” right 😉

Figureitout November 13, 2015 9:19 PM

Nick P
an analysis showed it to generally be “excellent code” with the “hacks” all being for legacy and app compatibility.
–Happens in the embedded world too, of course. And of course you need some medium- to long-term experience implementing a product through a few years to see why this is nice (otherwise you aren’t touching the details enough). Otherwise, every few years (or less, more like 6 months in the software area) your work is rendered worthless and you have to start over and see which registers need what values for how long, etc. You can’t share your work w/ people, or the build instructions decay and it’s just fossilized history…

So there’s a damn good reason the market (aka the people implementing products, mostly) wants backwards compatibility; not that hard to see. Your tools constantly changing every 6 months is extremely frustrating, forcing “administrative” installing and sh*t like that, not real work focusing on “the good stuff”. If the cycle gets even shorter than 6 months and I have to spend an entire day fiddling w/ installs and configuring, common sense says I’m going to…complain on the internet (lol) or try to force not using it.

Not to say I don’t get burned by it too: truly terrible backwards-compatible macros build up these psychotic pieces of logic. I get bugs I can’t debug, where what should execute separately seems to “smush & smear together,” which has led me to seek out ways to turn off “optimizing” for tiny chunks of code while I’ve read to “leave it on always,” and I have to make other sacrifices and bring in more peripherals to get what should be a 2-second job. Extremely irritating.

So for some things it’s good, others, not. Never clear cut or definitive even though it makes a nice media soundbite…

Grauhut November 13, 2015 11:22 PM

@Wael, Nick: “What are you two leaving out?”

Let me give it a name: Imho it “feels” a little like patched hobbyist crap, not like engineered software. Feels like it underperforms on my standard platforms, hardware i know very well. If something doesn’t fly on my old lab Z400, i don’t need it.

The rest is instinct: if it feels crappy i don’t trust it, and i am not willing to invest time in verifying things that feel crappy if i have other choices. I prefer testing things i like. 🙂

Wael November 14, 2015 12:01 AM

@Grauhut, @Nick P,

The rest is instinct, if it feels crappy

That’s right! I have the same feeling about Linux too. That’s the reason I prefer BSD 😉

Clive Robinson November 14, 2015 4:48 AM

@ Nick P,

The catch-up is for data collection, though, not cessation

Sorry, a phonetic spelling brain fart due to being time lagged (illness causing irregular sleep) completely changed the meaning; it should have been “seized” not “ceased”…

Wael November 14, 2015 6:10 AM

@Clive Robinson,

being time lagged (illness causing irregular sleep)

Same here… Hope we feel better 🙂

Figureitout November 14, 2015 6:38 AM

Nick P Addendum
–For more of your areas, what about web HTML5 consolidating a lot, for compatible code? The few browsers: code works for one, not the other (I’ve had to use gov’t services just on IE, not Firefox…they finally got it compatible w/ Firefox recently). Doing web stuff, I’d flip sh*t if my code constantly broke due to someone else constantly changing builds and updates, etc.

Nick P November 14, 2015 9:22 AM

@ Grauhut

Ahh, that makes more sense. The Z400 has CPU specs that should be fine with any vanilla BSD or Linux unless doing heavy stuff. Nice evaluation strategy, too. People taught me over a decade ago that trying to benchmark on old hardware is a good indicator of efficiency and robustness.

Another reason to collect junkers. Got Core Duos, P4s, Athlon XPs, PPC G4s… all in varying states of usability, to be sure. Worst case, I use them for burn-out runs of input validation, compilation, password cracking, whatever. Better a shit box die than a good one.

Note: You could squeeze even more efficiency out of your setups if you went back a little further with a PA-RISC, HP 9000. 😉

@ Figureitout

Yeah, embedded and browsers are good examples. Especially browsers. IIRC, Google’s GWT was originally made to try to smooth over the differences between browsers. It wasn’t smooth haha. There’s 4GLs and stuff doing a lot better now. Yet, it’s still a nightmare. I’m sure Microsoft’s Edge work will add to the fun.

@ Clive Robinson

That puts us in more agreement then. 🙂

metaschima November 14, 2015 10:15 AM

@ Uhu

I totally agree. Not only that, but the writer of the article seems to know very little about Linux and security, so perhaps they should not have written about it at all.

Example:
“Ashley Madison was reportedly running on Linux servers and suffered a data breach.” What does this have to do with Linux, exactly? It has to do with incompetent IT staff who failed to secure the servers, no matter what OS they were running.

The topic is important though. I do run Linux and I greatly admire Linus Torvalds for all that he has done to create probably the most amazing collaborative operating system ever. However, I know that he can be stubborn, hard-headed, and insults people that mess with his kernel. The Linux kernel is his baby and he wants to raise it the way he sees best.

I think both sides of the argument have valid points. It’s true that:
1) Security should be improved in the Linux kernel.
2) Stability, performance, and usability should be maintained.
3) 1 & 2 are really hard to implement together.
4) Submitting patches for the Linux kernel can be intimidating. A solution would be to change Linus’ attitude a bit, and help him understand how the proposed security patches will improve security without significantly impacting stability, performance, and usability.

Solution: Talk it out. Discuss with Mr. Torvalds, help him understand the benefits of proposed security patches. Make sure to take into account Scandinavian culture when dealing with Mr. Torvalds. Don’t let him intimidate you, stand up, give him logical explanations and solid reasoning and he’ll work with you (even though he may insult you a few times along the way).

Dirk Praet November 14, 2015 11:11 AM

@ Anoni

if your giant list of dreaded kernel bugs is so outdated they were fixed decades ago, or are completely inaccessible due to other mitigating technologies like a simple firewall, well don’t let the door of reality hit you in the ass on your way out.

Right back at you, mate. One of the CVEs you mention apparently was at some point fixed for TLS, not for the mainstream kernel. Another one you refer to was committed five months after original publication. And claiming that you can just ignore vulnerabilities in older kernels because there are newer versions does not change the fact that there are still scores of older machines and embedded stuff out there that never get updated.

By all means feel as secure as you want to, but it’s a good thing you’re not posting under your real name because if ever someone were to vet you for a linux security gig, these are the sort of comments that would indeed jump up and bite you in the ass.

@ Nick P, @ Grauhut, @ Wael

I don’t know why, but I don’t like PC-BSD

I was a bit apprehensive at first but am quite happy with it today. It’s somewhat of an acquired taste. Ghost’s pretty neat too.

Wael November 14, 2015 11:59 AM

@Dirk Praet, @Grauhut, @Nick P,

PC-BSD is really easy to install and set up, and that’s what I liked about it.

Grauhut November 14, 2015 3:08 PM

@Nick: “Note: You could squeeze even more efficiency out of your setups if you went back a little further with a PA-RISC, HP 9000. ;)”

On the underpowered non-Intel side of the equation i have my ARM SoC collection. 🙂

Anoni November 23, 2015 11:27 AM

Once more unto the breach, dear friends… I’ve been debating whether to reply further or let sleeping dogs lie. Perhaps I choose poorly, but nevertheless…

Fundamentally we have a problem defining the problem. We talk about “Windows Kernel Security” knowing full well that 99% of the population won’t recognize the “Kernel” limitation in there, will equate “Windows Kernel Security is better than Linux” with “Windows Security is better than Linux”, and will conveniently ignore the continued gaping holes in Flash, Internet Explorer, and the rest of the Windows ecosystem.

The kernel, Linux or Windows, in and of itself, is pretty useless. Heck, most students write a simple kernel as part of their OS class. A kernel won’t play games or balance the checkbook or browse the web. The kernel is one component of a larger system that, ultimately, is useful. Talking about kernel security is like talking about putting a bank vault door on the front of your house. If the rest of the house is similarly secured, then yes, a bank vault door could improve things. But it’s not worth a hill of beans if the back door to the house is missing. (Vis-à-vis Flash, Internet Explorer, and the need to run everything as Administrator.)

This is where a lot of our grief comes from. Folks want Linux to implement all these outlandish kernel security features where it’s deeply questionable whether they will improve overall security for the house or just for that one door, while they come with really awful tradeoffs with respect to CPU cost, overhead, slowed computation, reliability, stability, etc. What good is it to implement new security if it comes with bugs that allow root access from the network? The development costs involved here are not trivial. The gains may very well be trivial (they have not been established), but the costs sure as hell aren’t.

Then there’s another point regarding the differences in design philosophy.

Linux does everything in the kernel. From filesystems to sound to ethernet to obscure hardware, it’s all compiled or loaded into the kernel. Everything is there. As a consequence, you can pull a drive from a working Linux computer, install it on completely different hardware, and it will just work. Well, unless the hardware on the new box is too new for your outdated kernel. But even then, you can grab the latest Fedora DVD, boot into rescue mode, copy (dd) the old drive over to a new drive from the command line, chroot to the new drive, and start firing up processes from /etc/init. The backward compatibility and capabilities of Linux are just amazing. Unlike Windows, you can use almost any hardware, almost any filesystem, and it just works.
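Roughly, the rescue-mode transplant described above looks like this (a sketch; device names are hypothetical, so verify with lsblk before pointing dd at anything):

    # From the rescue environment; /dev/sda = old drive, /dev/sdb = new drive
    dd if=/dev/sda of=/dev/sdb bs=4M status=progress   # raw clone, old -> new
    mount /dev/sdb1 /mnt                               # mount the cloned root
    mount --bind /dev  /mnt/dev                        # expose devices, procfs,
    mount --bind /proc /mnt/proc                       #   and sysfs inside
    mount --bind /sys  /mnt/sys
    chroot /mnt /bin/bash                              # now working inside the old system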

In comparison, Windows is very limited. With Windows, every stupid little thing is a different driver: keyboard, mouse, trackpad, sound, everything. (If you can even get it to work with Windows. Try ext4 or HFS filesystems.) Often the drivers have to be loaded from the manufacturer’s website, which often only supports some greatly outdated version of Windows. The manufacturers seem to think “security” means “lost sales”: if their drivers don’t support newer versions of Windows, then you will buy their newer products.

What good is it to have all this kernel security only to turn around and load literally dozens and dozens of drivers that have made their way through an extended supply chain and could have been corrupted or infected anywhere along the way? There are attempts to have Microsoft Verified Drivers, but that deeply impedes actually getting Windows to work.

It’s just, what the hell? In what world is this crazy bullshit model more secure?

 

And how can you compare total bugs in Linux, which supports dozens of filesystems and all these different devices, with Windows, which supports so very little, and then say Linux is bad because it has more bugs? It’s like comparing a train with a bicycle and saying the train has more points of failure. Well, it also moves a lot more cargo! Add in all the drivers for Windows and see how your bug counts compare then! And realize Linux still runs rings around Windows in terms of what it supports and what it can do!

Then you’ve got reliability and stability. Linux is legendary. Servers, even desktops, stay up for years and years. To the point where updating the kernel without rebooting is a significant and ongoing concern.

Windows talks about reliability. They have gotten better. I remember in the old days when Windows used to crash a dozen times a day. Now you can sometimes go for a day or two between crashes. But that’s still a joke. Blue screens are still a regular occurrence. Of course, Microsoft blames the drivers. They tell us that’s where the problem lies. But come on, how can they keep building a kernel that can’t stay up?

 

To the folks who talk about how bugs go unfixed for so long in Linux. You need to understand how this works. The bug is identified. Programmers working for the major distributions fix it and roll out a release for their distribution(s) quite quickly. The bug patch is pushed upstream and eventually incorporated into the next kernel release. There’s no rush with that kernel release, since everyone using Linux already has access to a fixed kernel in a matter of minutes or hours after the bug was identified.

Whenever you compile your own kernel, and you get the sources from your distributor, those sources always come with a ton of patches. Over time, those patches get rolled back into the mainline kernel.
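You can see this for yourself on an RPM-based system (stock Fedora/RHEL tooling; the source download needs the dnf download plugin, and the package name is illustrative):

    rpm -q --changelog kernel | head -20              # distro-applied fixes, newest first
    dnf download --source kernel                      # fetch the kernel source RPM
    rpm2cpio kernel-*.src.rpm | cpio -idv '*.patch'   # extract and list its patch pile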

It’s a good system that allows for rapid deployment on critical front-line systems, while still allowing for review and consideration of complications that may arise before incorporation into the mainline kernel.

 

The belief of some people that Linux bugs from 15 years ago or more can never be taken off the lists because that kernel was not deleted from the internet and someone somewhere might still choose to use it just shows… You know, I think we all know what that shows…

Wael November 23, 2015 12:35 PM

@Anoni,

Fundamentally we have a problem defining the problem.

It seems we have a more fundamental problem with “definitions”! Linux is a “Kernel” — a modular, monolithic kernel. So the comparison between Windows kernel and Linux is a valid comparison. KDE, Gnome, Firefox, … aren’t part of Linux; they are a part of the “distribution”.

The other point is that the kernel acts as the “foundation”, and whether a freshman or a sophomore can write a “kernel” is totally irrelevant; they can write a browser or a mail client just as easily, if not more easily. If the foundation is weak, the building will be weak. The right comparison would be either between a given distribution and Windows, or between Linux and the Windows kernel.
