Comments

Fazal Majid February 22, 2016 3:12 PM

I don’t use Linux on my computers (Mac OS X and Solaris) and am not directly vulnerable there, but many IoT devices in my household are, and if past experience is any guide, manufacturers will be slow at best in releasing updates. Mitigation strategies to protect these would be helpful. It seems using a hardened DNS resolver like unbound would help, as would dropping TCP DNS packets and UDP DNS packets larger than 512 bytes.

Kurt Seifried February 22, 2016 3:44 PM

Good thing all the IoT (many based on Linux) will never get an update for this. Once someone creates a working exploit we’ll have botnets forever (so nothing out of the ordinary).

Doug D February 22, 2016 3:52 PM

Many (most?) IoT devices do not use glibc. It’s a big, hefty libc, with a lot of features, many of which are overkill for IoT. Android uses Bionic instead of glibc, for example.

Gunter Königsmann February 22, 2016 10:46 PM

Similar, but not the same. Still, testing for a missing range check might not be a bad idea anyway: back in the ’90s, Linux and Windows independently had the same bug of not checking the length of simple TCP packets. Linux and BSD are even related…

Clive Robinson February 23, 2016 4:31 AM

@ super max,

Isn’t MAC OSX and FreeBSD somewhat similar?

All *nix have a common ancestry, and what we would now call open source saw a lot of code crossing over between the various branches.

Of special note, however, was “networking”: it came somewhat later, and most systems used the BSD network code. Due to the BSD licence, it also ended up virtually unchanged in other operating systems (MS OSs being one of the more obvious). This commonality of code, while good for interoperability, also meant commonality of vulnerabilities, as early DoS attacks demonstrated.

keiner February 23, 2016 7:24 AM

OT: Headline switching from “gilbc” to “giblc” is only a minor improvement. 😀

Does anybody want to scrub in and explain to Dr. Schneier the logic of Linux file names?

keiner February 23, 2016 7:27 AM

@blake

fuked is fuked, no help in applying any patches.

You have to set up a completely fresh system to get rid of any implants/malware. Nobody is going to do that, so the only thing left is: no new infections of patched (!) systems, nothing else.

The NSA’s exploits will remain functional forever, and new infections of embedded devices will work for DECADES without new firmware.

jbmartin6 February 23, 2016 8:50 AM

Just because the library is present does not necessarily mean any given IOT or other device is vulnerable. There still must be a mechanism to induce it to make some arbitrary DNS lookup, and then return the payload as a response to the lookup. An email server doing reverse lookups on IP addresses would be vulnerable. A refrigerator that only uses DNS to look up the configured mail or IM servers is not much of a concern.

jeremyp February 23, 2016 9:20 AM

@super max

Isn’t MAC OSX and FreeBSD somewhat similar? I mean MAC OSX may contain glibc

No. Apple’s libc is a completely different implementation of the C standard library and OS X contains very few Gnu components (on account of the licensing terms).

Here is a list of Apple’s Open Source components

Nix February 23, 2016 9:29 AM

As a fairly relevant aside, I’m working on a patch series (first posting, with numerous bugs I’m testing fixes for: https://sourceware.org/ml/libc-alpha/2016-02/msg00585.html and followups) to compile most of glibc with GCC’s -fstack-protector stack-canary buffer-overrun protection facility. (Actually I first wrote it in 2008 then had too severe impostor syndrome to get it in: I’ve just revived and improved it). This wouldn’t solve all security problems in glibc, or even stop all overruns, but it would certainly stop this one, or rather convert it to a DoS attack by calling __stack_chk_fail() and aborting the affected program.

ld.so and small parts of static binary initialization and thread initialization are not stack-protected even after this series (fixing that is a place for future improvement); nor are functions written in pure assembly and a few C functions called directly from assembly. But the rest should be protected if this patch series gets in, including the nss facilities that do the low-level name/user/group database lookups and that were buggy in this case. libresolv and nscd already were protected this way, but nss was not; nor were all the other library components of glibc.

At least, they’ll be protected after it’s had a good few months to have the bugs kicked out of it on as many architectures as possible!

Nick P February 23, 2016 11:46 AM

@ Nix

Why use tactics (canaries) when you can use elite strategy (memory safety)? Try compiling it or utility apps with Softbound + CETS to see what happens. Cutting-edge tools need more testing so their problems can be fixed. Only issue I see is that Softbound is LLVM, not GCC. Someone once told me you could get portable code out of GCC compiler arguments but idk. Might be a solution there if any LLVM vs GCC issues turn up.

Note: If performance hit is too much, try the lesser invariant of Code Pointer Integrity that still provides quite a bit of protection usually at single-digit percentages of performance loss.

Marc Espie February 23, 2016 1:31 PM

The most annoying thing in that bug is that it’s been known since July.

Even though Drepper is no longer involved, the developers of glibc still have that “know it all” attitude. And they still badly fail at prioritizing bug fixes.

I mean, why was this bug not fixed earlier?

There is something deeply wrong about the mentality “show me a proof of concept exploit for that bug or I won’t fix it timely”.

keiner February 23, 2016 1:45 PM

@Marc Espie

It’s the engineering attitude in the computer business: never look for something safe and sound, just kick out the cheapest, fastest, easiest solution. No real quality control system, no independent review (until it’s too late).

The same thing leads to deaths and accidents, e.g. in the automobile industry and its related product recalls. Sick system.

CoincidenceAgain February 23, 2016 5:30 PM

Is it really a COINCIDENCE that Google discovered this bug independently from Red Hat? I remember a similar coincidence when the Heartbleed bug was discovered. Or maybe Google just knows what other researchers are working on?

Clive Robinson February 23, 2016 6:08 PM

@ CoincidenceAgain,

Or maybe Google just knows what other researchers are working on?

The chances of that are higher than most would think…

I used to work for an organisation that supplied databases to research organisations. Customers had various options for using any given DB either on a personal workstation, on an organisation hosted server or via an online server.

All major research organisations –like drug companies– went down either of the local DB options, never down the online option.

I got to chatting with a researcher at an organisation that went down the personal workstation route for all their researchers and asked why, as the local server route would have been cheaper and more effective. What they told me raised an eyebrow at the time: the difference between the cost of giving everybody their own personal DB and the cost of losing a research edge, even within the organisation, was so large that the former was effectively less than pocket change in their general computer and data security costs.

Later, when involved with another DB supplier, I suggested at a meeting that a new online product should have an “environment” giving a researcher citation tools (like EndNote etc.); there was lukewarm interest. Until somebody pointed out that knowing their searches and which citations they indexed would tell us what they were thinking and researching, and that was “market gold”, at which point the meeting became very high energy…

In short they were in effect talking about becoming insider traders…

So yes, there is a good chance that Google knows rather more about others’ research than you would expect. I suspect it would take very little to correlate what people at Red Hat etc. searched for and have a very good idea of who they are or what they do at RH, and thus what their searches related to…

Marc Espie February 23, 2016 7:43 PM

@keiner: please, don’t generalize the glibc attitude to every developer team.

What you describe is not an engineer. Specifically, an engineer is supposed to have ethics and standards.

People who write software libraries should be concerned about actual security these days (heck, they should always have been, but 30 years ago, the machines were slower, the ram was cramped, and you didn’t have computers and internet everywhere).

We should call out obvious mistakes and not let them pass with the cynical view “oh, everybody does the same shit, so it’s okay”. These guys should be held accountable. There is no excuse that holds for letting a bug with possible security implications fester like that for six months.

How do you expect the situation to improve if you don’t at least try to make those guys feel ashamed of themselves?…

Mark Mayer February 23, 2016 9:25 PM

@Clive Robinson
@CoincidenceAgain

Are either of you referring to the Gmail Special Branch? Weren’t they shut down in 1844?

keiner February 24, 2016 3:04 AM

@Marc Espie

…and btw, which piece of software is NOT poorly written? We have had unbelievable glitches (?) in nearly every piece of essential software, from kernels to essential security suites (OpenSSL pure trash, etc. pp.).

Even Microsh*t, drowning in money, is not capable of providing some reliable code.

So name a software project with a reasonable quality control system.

And this won’t change until the infamous US product liability lawyers take the case 🙂

These guys sue each and everybody over the trashiest case that can shake some dollars out of (foreign?) companies. But the US software industry is sacrosanct. Maybe because it’s in bed with the most reliable procurer in the world, the CIA/FBI/NSA complex?

Marc Espie February 24, 2016 4:07 AM

@keiner OpenBSD of course. The old existing code can be a bit crufty (filesystem layer in the kernel), but we’ve been cleaning the code for decades, and we do try much harder than the others to not let anything pass (systematic eradication of anything that’s broken).

keiner February 24, 2016 6:41 AM

@Marc Espie

Nice to hear, as my router is built on BSD (pfSense) 😀

The downside: PC-BSD, the only usable desktop version of BSD (afaik), is a bit, eeehm, difficult to handle. I have one machine running version 10, but e.g. setting up TrueCrypt or VNC is beyond my capabilities, while it’s not a real problem on Linux…

z February 24, 2016 11:34 AM

@keiner

PC-BSD is far from the only usable desktop BSD. I’ve been running OpenBSD on half a dozen laptops, some ethernet switches, and some wireless APs for years and it works very well for all of it. It’s high quality code and well documented. It is much more usable out of the box on my ThinkPads than FreeBSD or its derivatives, including PC-BSD.

Since he’s here, many thanks to Marc Espie and the rest of the OpenBSD devs for all the great work you do.

Marc Espie February 24, 2016 12:50 PM

@keiner most of KDE4 runs on OpenBSD, thanks to the tremendous work of Vadim Zhukov over the last two years.

There are a few bumps. More testers might be a good idea.

The main issue these days is… the systemd plague, with more software each day thinking they only need to run on linux.

keiner February 24, 2016 2:27 PM

@Marc Espie

I hate this systemd thing, but there’s no way around it any more; this war is lost. Next I will have to move my Raspberry Pis to Raspbian with systemd, although with a very bad feeling. Using some machines with openSUSE, the struggle to avoid systemd is already somewhat lost.

Marc Espie February 24, 2016 3:49 PM

There is some GSoC project aiming at creating a systemd-like interface in userland so that gnome/kde/whatevergui can use it without all the security risks systemd incurs.

(I’m serious, putting all kinds of security stuff into one gooey stuff that deals with user accounts, timezones, dbus-like xml shit is totally insane, yet the guys behind systemd advocate for it. The hubris!)

Anon February 24, 2016 4:59 PM

These guys should be held accountable. There is no excuse that holds for letting a bug with possible security implications fester like that for six months.

…but the fanboys still think OSS is superior to closed-source! Sad reality is that you get what you pay for.

Unfortunately, on both sides of the aisle, software quality can be non-existent. It’s about time we called out the rubbish software devs and made it impossible for them to work in our industry.

I still haven’t forgotten when the OpenSSL code quality was being condemned in the strongest possible terms after Heartbleed. Unfortunately, it seems no one listened, and nothing got done about it (the code quality was worse than junk, and a total re-write was recommended).

There are too many people out there that think knowing the C++ standard backwards makes a good software engineer. I know how to drive a car, but don’t ask me to build one…

Dirk Praet February 24, 2016 6:08 PM

@ Anon

I still haven’t forgotten when the OpenSSL code quality was being condemned in the strongest possible terms after Heartbleed. Unfortunately, it seems no one listened, and nothing got done about it…

Although pretty much everyone agrees that OpenSSL is beyond repair, there was a massive code audit by the Open Crypto Audit Project in the wake of Heartbleed, and several forks emerged, like LibreSSL by the OpenBSD foundation and Amazon’s s2n.

If you don’t believe in FOSS code, stick to Apple and Microsoft. If you want to make a difference, learn how to code and join a project. If more people did, maybe code would be better and bugs squashed faster.

Anon February 24, 2016 8:14 PM

@Dirk Praet: I’m not saying bugs can’t be fixed; I’m referring more generally to the basic code. Maybe we are more aware/enlightened now than before, but all we are doing is patching the Mary Celeste instead of building a totally new ship.

I’d prefer to see an attempt made to re-write core software from scratch, knowing what we do today. If it breaks compatibility with some things, tough. Those things that broke will either need correcting or re-writing as well.

Until we get beyond being afraid of breaking old code, we will be unable to build better systems in the future.

ada February 24, 2016 10:34 PM

It is time to recognize that it is not possible to write safe or secure code in C/C++, just like it is impossible to safely juggle a running chainsaw.
Currently we pretend that it is possible to do so; each time someone’s head gets cut off, we blame him for making an ‘obvious’ rookie mistake. Of course, mistakes continue to be made, and heads keep rolling.

This is exactly what the aerospace industry did in its early days: each time a pilot made a mistake and crashed a plane, the pilot was blamed for an avoidable ‘operator error’.

Luckily, aviation has moved beyond blaming the pilot and has recognized that no one is perfect. If a system is susceptible to human error, it is not a safe system.

It is time to drop the heroics and confess that we all make mistakes. Then we can move on to using tools that don’t cut off our heads when we make one.

Ada is a mature language that provides compile-time protection against all of the C-typical vulnerabilities. With Ada, there would have been no Heartbleed bug. There would have been no glibc bug. There would have been no Debian OpenSSL bug.

Also, Ada comes with many tools that allow one to mathematically prove code properties. Safety-critical code used on millions of servers deserves this kind of scrutiny.

Is it a coincidence that there is an Underhanded C Contest, but no similar contest for Ada?

Clive Robinson February 25, 2016 12:17 AM

@ ada,

It is time to recognize that it is not possible to write safe or secure code in C/C++,

Actually it’s true of all languages that are of any use.

There are two reasons for this. Firstly, language designers are human, not omniscient with perfect foresight. Secondly, all useful tools are dangerous, because what makes them useful is agnostic to use; it’s the controlling mind, or lack thereof, that makes their use safe or dangerous in any given situation.

Thus whilst Ada has got “safety guards” they can be bypassed or removed.

To use your style of analogy, they are like manual-feed table saws with guards. For the saw to do its job, there has to be a gap in the guards for the operator to work through. It is therefore down to the operator to use a safe “push stick” to protect not just their fingers but the saw itself.

keiner February 25, 2016 1:56 AM

@Dirk Praet

” If you want to make a difference, learn how to code and join a project.”

That is absolutely unacceptable as an attitude toward the users of software, although I see it in every tech forum, be it firewall, OS or something else: shut up, or learn to code and write something better.

It’s unbelievable that these guys in the software industry get away with that. If you don’t like your headache pills, learn something about drug development and invent a better one? Utter nonsense. That is why we have a system called “division of work” in modern societies: everybody does their best in their own profession, but has to TRUST (!) that others also do their best in theirs. And here is where (many! 🙂 ) software engineers betray society (as do bankers, btw), in looking only for a fast job, taking the money and running away from responsibility for the trash they produced. And then telling people: move on, do it better if you can!

Marc Espie February 25, 2016 2:47 AM

Thinking that if you change language you will get better security is a common fallacy. Get off your soapbox. There are obvious issues with every language. The main reason you mostly talk about the C/C++ ones is because they’re more publicized and easier to exploit (the low hanging fruit model of offensive security).

Take PAM for instance. Its configuration language is more high level than C/C++, obviously. Yet every distribution that integrated it managed to fuck up its configuration in a big way at some point.

There’s also absolutely nothing C or C++ specific to the way certificate authorities work. Yet they got broken regularly.

Or take the GnuPG issue a few years back. I’m talking about mixing up what you trust a certificate for. Chains of trust have complicated semantics. It was not language-specific at all.

BTW, WRT openssl, there’s a reason OpenBSD forked that pile of poo into libressl and started removing all the bad code…

But hey, the rest of the world still waits for the openssl crew to magically get their shit together. Oh, and in some cases there’s money involved. Not a surprise.

keiner February 25, 2016 3:26 AM

OPNsense (forked from pfSense) had a LibreSSL arm of the project, but currently I don’t see it available for download any more (maybe enough other problems?). I would love to switch to something based on LibreSSL for my router, but…

Gerard van Vooren February 25, 2016 3:32 AM

@ Marc Espie,

Thinking that if you change language you will get better security is a common fallacy. Get off your soapbox. There are obvious issues with every language. The main reason you mostly talk about the C/C++ ones is because they’re more publicized and easier to exploit (the low hanging fruit model of offensive security).

First of all, Ada is a safe language. Its safety doesn’t only come from bounds checks but also from code organization, readability, contracts and you name it. If Ada had been used in computing, we would probably have half of the security breaches we have today. Yes, half of all security breaches are directly related to C. So using Ada eliminates a significant vector of problems.

Talking about PAM, GPG or OpenSSL doesn’t make the problem of C any smaller; on the contrary. The more code that is written in C, the worse it becomes.

Clive Robinson February 25, 2016 5:06 AM

Speaking of “Safe Languages” there is a point that few consider about their security from attack.

Most “attack scenarios” you read about assume the attacks come down the computing stack: either via program input (smashing the stack, or out-of-range/bounds accesses causing branch errors etc.), or via other programs obtaining timing side channels.

What you don’t see mentioned very often are attacks going up the computing stack. The best known of these come via the DMA controller used in high-speed I/O (FireWire etc.).

Such attacks are fairly lethal, in that it’s very, very difficult to do anything at a higher level to mitigate them.

Look at it this way: you have an application that has no vulnerabilities to inputs or I/O issues (yes, I know there is no such program, but this is a thought experiment ;-). The code is protected by a signing process, so it will only load, and thus execute, if it is unaltered. That is about the limit of what we generally do with current general-purpose computer systems and OSs, though we are starting to put in detection of writes to static segments such as code, and to act sensibly on detecting them (well over a decade or two after the hardware could support it). But none of that works if the attack comes from below those mechanisms in the computing stack, or targets read-write memory in the heap etc. Nor does it work against attacks like Rowhammer, or certain types of soft hardware faults (EM fault injection etc.) that well-resourced entities can develop and deploy as they see fit.

Thus an attack from below will allow “input checking” to be bypassed, and most of those “safe language” features as well. In fact, programs developed in “safe languages” are arguably more vulnerable, for a couple of reasons. Firstly, programmers know they are working with a safe language and become over-reliant on its features. Secondly, as a rule of thumb, “safe languages” have a more structured and organised memory map, so directed attacks are easier to mount.

It’s only when you look at people developing space and aerospace mission-critical systems that you see this sort of attack class starting to be mitigated. Not because they are considering this type of attack, but because they know hardware, no matter how well built and protected, is unreliable in certain environments. Unfortunately, in many cases their solutions result in even more highly structured low-level code out of their toolchains, because it makes in-operation hardware fault checking easier, but it also makes deliberate attacks easier in the process.

Yes, it all sounds theoretical, until you remember that OS design is changing. Traditional OSs are seen as a “bottleneck” these days, so we are getting “userland I/O”… which moves the security foundations around a lot and opens up cracks where “bubbling up” attacks will work.

The message, as always, is “security is hard, very hard” and “very, very few can write secure code” at the best of times. It’s not helped by over-reliance on tools such as “safe languages”, where the programmer abdicates much responsibility to the toolchain and thus fails to code defensively.

But worst of all is “management expectations”: in many areas the field of endeavour has turned into “sausage machine, code-cutting shops”, much of it based on “code reuse” to crank product out. Unfortunately, much of this old code is bad, very bad, because nobody dares fix it in case they break other things, and it gets “hidden away” in lower-level libraries. Putting a “safe language” program on top will not solve these “code cutter”, “old code reuse shop” problems either.

All of which is not to say I’m against “safe languages”; it’s the “not using them properly”, which applies to all languages, safe or otherwise, that I’m against.

And that’s the point at the end of the day: if programmers used a language properly, within its limits and known environment, we would not have the number of vulnerabilities we have.

When I hear people say things such as “XX% of problems would not happen if we used language YY”, what it really says is that the industry is failing to educate practitioners in “the proper use”, or that it is demanding too much of practitioners, or that practitioners are not behaving as engineers in other fields of endeavour do…

As was observed a long, long time ago, “A bad workman blames his tools”. Back then, artificers and craftsmen, and those who would later become the first engineers as science progressed, made their own tools; so by blaming their tools they were telling people that they were bad workmen… I think the software industry has a lot of catching up to do.

Gerard van Vooren February 25, 2016 5:39 AM

@ Clive Robinson,

Sorry, but I disagree entirely (about PLs, that is). The fact that the Underhanded C Contest exists AND that nothing has been done to solve the underlying issues says, by itself, that C is an unfixable dead end. But it’s also a language that sticks like dog poo, and it’s just as smelly. C doesn’t go away because people are still defending that stinky language. With Ada you are designing a program; with C you are hacking a program. THAT is the main difference between programming in Ada and in C.

With Ada you spend 4 hours designing and 4 hours testing. You go home and sleep well because you know your program works as specified.
With C++ you spend 2 hours hacking, 2 hours satisfying the compiler and compiling and 4 hours testing. You go home with the feeling that cross-platform compilation might break down the line.
With C you spend 1 hour hacking and 7 hours testing. You go home with the thought that you might have forgotten something. 6 months from now you get a phone call that the world is on fire because you made a typo.

BoppingAround February 25, 2016 9:36 AM

keiner,

It’s unbelievable that these guys in software industry get away with that. If you don’t like your headache pills, learn something about drug development and invent a better one. Utter nonsense, that is why we have a system called “division of work” in modern societies, everybody does the best in his profession, but has to TRUST (!) that others also do their best in theirs.

It is believable, because it is probably just as bad in other areas; the difference is that they are less vocal about it. Count in the number of people ‘not in their place’, i.e. working at $PLACE because $PLACE was hiring and because those people needed money to survive. I find it hard to believe that those people will work their arses off because ‘division of work’ tells them that they must. Count in other variables like incompetent management, bike-shedding [A], and other circumstances that may influence decision-making at whatever level.


[A] https://en.wikipedia.org/wiki/Bikeshedding

Nix February 25, 2016 10:04 AM

@Nick P, because compiling glibc with Softbound + CETS is almost certain to be a total nightmare to get working, that’s why. glibc doesn’t compile with LLVM yet, for starters, and even if it did significant parts of it are assembler and it does all sorts of deeply evil things, particularly in early startup (and we have to be aware of that even when ld.so is not being considered, because the static library does many of the things ld.so does at startup using un-rebuilt objects from the core libc — things like memcpy() which you really do want to keep an eye on in the general case).

In this case, the perfect would very much be the enemy of the good. Stack-protecting glibc found real bugs when I first wrote the patch (e.g. glibc bug 7066), and would probably have defended us against this too. Yes, it’s not utterly ideal, but it’s a hell of a lot better than what we have now, which is next to nothing.

Nix February 25, 2016 10:11 AM

@keiner, that’s nonsense. glibc has never been about ‘kick[ing] out the cheapest, fastest, easiest solution’. If anything, it’s historically been about insanely high walls that prevented contribution, and about premature optimization far beyond the bounds of sanity. That, at least, has ended now, which leaves us with the usual free software problem: everyone wants to write code, nobody wants to review it.

If you’d ever worked on glibc, you would never have claimed that glibc was easy to work on. It’s a C library, and a notably involuted and arcane one.

@Marc Espie, I don’t think it’s know-it-all in this case. I think it’s more likely to be “oh crap we have to notify everyone in the world first”, or possibly a belief that the bug was not exploitable… or simply “there are so many bugs left open or wrongly closed from Uli’s ostrich reign that we are drowning in them”. Honestly… this bug was there since 2008, and you’re complaining about a few months’ delay? Ideally it would have been fixed faster, but this was hardly covered up once it was clear how serious it was.

Nix February 25, 2016 10:16 AM

@Marc Espie, ah yes, OpenBSD, the project which shipped the client side of a feature with no server side and with serious security holes since, oh, 2008 in OpenSSH (“UseRoaming”), switched on by default but left wholly undocumented.

Honestly, though OpenBSD at least has a review policy it is not a flawless angel here either, and the review policy clearly does not prevent all bad decisions and most especially does not cause the decisions to be magically reverted once the code has got in. This is not a fault that only some terrible teams have. It is a general problem with software development (more specifically, with development in languages in which specific classes of security holes, in this case stack overruns, are possible).

(glibc, post-Drepper, has an extremely active development team and a similarly creditable review policy too — on the rare occasions when people do jam stuff in without it getting reviewed they are invariably castigated for it and don’t do it again, even when the person doing it was the original author of glibc.)

Marc Espie February 25, 2016 10:32 AM

@Clive Robinson about the “sausage management school of work”.

Over recent years, I’ve become convinced that one of the things we need to do is teach youngsters to always code for posterity.

Face it: if you’re vaguely successful, your code is going to be reused. So when you write that first project, if there are corner cases that would never happen to you, MAKE SURE YOU PROTECT AGAINST THEM anyway, because it’s certain the code is going to be reused in the wrong context.

To wit:
– don’t use malloc in signal handlers. YES, that will work in Linux. But nowhere else.
– don’t use wait when waitpid is appropriate. Sooner or later, you’re going to integrate with code that has other processes running (libraries, etc) and you’re going to lose.
– don’t assume fd 0, 1, 2 are already taken (bug in kdesu helper a long time ago)…

etc

yes, all of that is Unix specific (that’s my background, after all), but we really should try to weed out as many bad idioms as we can, and replace them with correct stuff…

I’m pretty certain high-level languages are not the solution, precisely because their runtimes get too complicated. Speaking of runtimes, ever try to figure out why glib2 was a performance hog under some circumstances? There are so many abstraction layers that it’s well-nigh impossible.

About the glibc team: I’m happy to know that the remnants of Drepper’s reign are slowly fading away. His text about ELF programming was about the opposite of what I’m advocating here…

keiner February 25, 2016 10:34 AM

@Nix Your argumentation is 100 km away from my arguments. Another try: they were sitting on this bug report for more than 6 months; how does that square with

“…and a similarly creditable review policy…”

?

Sorry, but this industry is self-regarding beyond all limits…

Nick P February 25, 2016 11:38 AM

@ keiner

“20 years software review by the NSA, that’s hard!”

I don’t see NSA in the linked document at all. Besides, they wouldn’t get a real pentest unless they were certified at EAL5 or higher. Basically nobody does that. So, NSA does limited black-box testing and feature checks based on paperwork (eg user guides) submitted to them. Common Criteria EAL4 and below are a sham.

“The downside: PC-BSD, the only usable desktop version of BSD”

I just reinstalled my system. Went with a Linux distro because I expected exactly that.

@ Marc Espie

re OpenBSD

“OpenBSD of course. The old existing code can be a bit crufty (filesystem layer in the kernel), but we’ve been cleaning the code for decades, and we do try much harder than the others to not let anything pass (systematic eradication of anything that’s broken).”

I’ve always given you all credit for that. The proactive approach shows much lower defects and better documentation. It’s still a UNIX with a huge TCB, plenty of covert channels, and underutilizing mitigation technology coming out of CompSci. My old scheme for producing vulnerabilities for it was just to watch the mailing list then weaponize what was posted before a patch went in. That people used it fire-and-forget sometimes neglecting updates meant the attacks should work. I’ve always been surprised to not see anyone doing that. I wrote it off due to a combo of almost non-existent market share and high effort required to beat its mitigations.

Nonetheless, it’s the highest quality of the UNIXen. I’ve always advocated someone put it on top of a secure microkernel in Nizza architecture style to get the best of both worlds. One could escape both monoliths and C by running security-critical components directly on the microkernel in a language (or C subset) amenable to the highest analysis or full mitigation. Good middleware can handle the interface security automatically or semi-automatically.

re languages

“Thinking that if you change language you will get better security is a common fallacy. Get off your soapbox. ”

That’s not true at all. Both CompSci and industry studies going back to the 60’s proved you wrong conclusively: choice of language had a huge impact on number of defects, ease of integration, and ease of maintenance. Some studies, esp by defense contractors, looked at various languages in use, including Fortran, C, C++, and Ada, to determine if it was worth a switch. Ada had fewer defects… usually half… than any other, except one outlier where C++ and Ada had the same while doing better than C.

Other examples. Burroughs in 1961 coded the Tron villain, err their MCP OS, in a version of ALGOL which didn’t have C’s undefined and unsafe behavior in default use. It kept pointers safe, bounds-checked arrays, protected the stack, and so on. MULTICS used PL/I because of safer string handling (you’d know about that) and made it use a stack where incoming data flowed away from the stack pointer. Wirth’s languages, used in OS’s, had all kinds of checks on data types and interfaces while compiling & executing very fast. Hansen’s Concurrent Pascal in Solo OS was immune to data races and usually deadlocks. Eiffel’s SCOOP was immune to data races, with some projects showing deadlock & livelock freedom. Army Secure OS combined a tagged CPU, security kernel, and high-assurance runtime for Ada to let one write the rest of the apps in Ada without worrying about what’s underneath. SPIN used Modula-3 for OS-wide safety, also with type-safe linking. Most recently, an OS was developed in Haskell: a language whose type system & easy, formal verification can provably eliminate more problems than I can list here.

So, history is definitely not on your side there. There was one system after another that was straight up immune to the problems that plague UNIXes and C-based projects. They did that with a combination of better hardware, software design, and/or language choice. The language choice eliminated much low-hanging fruit. Plus, certain choices made further analysis easier rather than so hard that people got their Ph.D’s trying to detect basic problems. 😉

Now, that doesn’t give reliability or security by itself. Plenty more areas to worry about and mitigations to perform. It just establishes a foundation that makes a secure system easier to build and maintain. Not to mention, one’s tools should make the common things (eg pointers, arrays) easy to work with rather than hard.

Note re Gerard’s claim: I challenge you to skim the chapter titles and methods of that Ada book I linked. Your words indicate you’ve never really looked at Ada to see just how many issues it eliminates or catches. Remember that it was designed by experts in response to industry’s real-world problems, to systematically eliminate as many as possible before the program ran. Whether the best design or not, it’s safer than C in so many ways that one has to write a whole book to describe them. And that’s not even the height of what language design can prevent, which is happening in functional programming, including functional systems languages.

@ Anon

“I’d prefer to see an attempt made to re-write core software from scratch, knowing what we do today. If it breaks compatibility with some things, tough.”

I semi-agree. I listed many architectures better than UNIX here. Knowing C’s history, there’s also good reason to ditch it for one of the old (Modula-2/3, Ada, PreScheme) or new (Rust, SPARK, ATS) languages for systems programming. Hell, there’s been typed assembler languages that are both faster and potentially safer. Crazy, eh?

That said, what we have in our UNIX OS’s and libraries are decades of accumulated wisdom. They ran into all kinds of problems in hardware, software, networking, etc. The solutions to those problems might get little to no documentation. Yet, their fixes are in the code. Clean-slating the code, like MirageOS for instance, gives the benefit of new tooling but drops much of that prior wisdom. Teams could spend a long time re-inventing that stuff. Hence, the other strategy of trying to evolve and polish the existing stuff.

Strategies for doing that include safe C’s, virtualizing them, obfuscating them, hardware reliability/security tricks, and so on. They work to a variety of degrees. They just haven’t been applied to a whole kernel and toolchain yet. Might get us further than doing everything from scratch. That’s why projects like Cyclone, SVA-OS, Nizza, Softbound + CETS, etc are invaluable in letting us gradually improve what we have in case other stuff doesn’t get written.

@ Clive Robinson

“What you don’t see mentioned very often is attacks going up the computing stack. The most known about such attacks are via the DMA controler used in high speed IO (FireWire etc).”

That’s on purpose. There’s a whole literature of techniques dealing with that side of things. Essentially, the strategy is to force all that into either a known safe or known failed state by the time it hits memory. Also, restrict what memory it reads or writes while applying tags if applicable. Then, the valid data or failures are dealt with at a higher level of abstraction that’s safe in how it operates.

“Thus an attack from below will alow “input checking” to be bypassed etc and most of those “safe language” features as well.”

What you’re describing is not a failure of safe languages but a failure or consequence of existing approaches to developing hardware. There are examples of hardware that do not have these problems because they were specifically designed to avoid them. Further, a RAM expert told me problems like Rowhammer weren’t unavoidable so much as the result of industry cutting corners on QA for a long time. So, the solution here is to use the same mindset when developing hardware: languages that make safe expression & analysis easier; correct-by-construction transformation/optimization tools; formal & test-based equivalence checking; gate-level testing; physical testing of various modules to ensure DSM or manufacturing problems don’t change logic; rad- or fault-tolerant logic for key modules with TMR internally (aka AAMP7G) or externally (aka NonStop).

Systems have been built with first pass, inherent safety/security, and at comfortable abstraction for developers. Two I like to cite are Sandia’s SSP/Score processor for embedded Java and Rockwell-Collins’ AAMP7G for stack machines. They were exhaustively analyzed to find logic, environmental, etc faults plus at least one done on rad-hard process. Result is no failures in field on the record so far while all coding can be done at a higher, safer level. So, the model is to combine secure methods for developing hardware with safe/secure languages or abstractions for software to get an integrated hardware/software system that’s immune to most problems of both.

And then install backup, recovery, and monitoring mechanisms to handle the inevitable failures called known unknowns and unknown unknowns. And then add those into the requirements of the next release. 😉

@ Nix

” even if it did significant parts of it are assembler and it does all sorts of deeply evil things, particularly in early startup (and we have to be aware of that even when ld.so is not being considered, because the static library does many of the things ld.so does at startup using un-rebuilt objects from the core libc — things like memcpy() which you really do want to keep an eye on in the general case).”

I’ve read about that mindset and resulting problems before. The solution is to begin fixing that so it’s at least possible to analyze or transform it. Just piece-by-piece improve it so the cruft and horrors will eventually be gone. The result is something all these wonders from CompSci and industry can work with. Then there’s efforts like musl replacing it entirely. Interesting results there. Might be able to work on those if not glibc. 🙂

Note: Your name also reminds me of a distro that’s finally fixing huge, unnecessary issues in both package management and filesystem organization. Amazing things happen when people see one of the root problems then solve it.

Nick P February 25, 2016 11:47 AM

@ Marc Espie

On second thought, while we have an OpenBSD dev here, let’s try to learn something from him too. 🙂 Just saw you dropping little warnings in your comment to Clive. A prior interview also called for more people contributing to OpenBSD but acknowledged the difficulty of kernel coding. Plus, I’ve been away from C coding for a while and could use some up-to-date recommendations in case I get back into it.

So, let’s say there’s some people doing C programming & playing with OS code that want to contribute to some projects. I’ve run into some here and there, esp on Linux side. I’d like to steer them in a direction where they can contribute to something like OpenBSD. Let them experience a bit of high quality coding and work toward slowly earning the privilege of being elite C coders/reviewers. There’s the motivation, anyway. 🙂 What would be your minimum set of recommended books or articles that would explain safe/secure C coding, UNIX pitfalls to avoid, and how OpenBSD team wants code submissions to look? If you have something, I’ll give it to anyone thinking of contributing to kernels with a nudge in OpenBSD’s direction.

Marc Espie February 25, 2016 1:43 PM

if you want to contribute to OpenBSD, that’s actually easy. Learn C, read Stevens’s Advanced Programming in the UNIX Environment, and use OpenBSD. Peruse its manpages (best in the world!) Hang out on the mailing lists. Pretty soon, you’ll find something that doesn’t quite work like you think it should, and you should try to fix it.

That’s how you become a developer. You send patches, using diff -u. People poke holes in your patches. They get better. If you have something to contribute, as in actual bug fixes or wanted new functionality, they will work with you. Not a lot. They will just tell you what you do wrong, and point you in the right direction (by giving you examples of good code, and occasionally, by insulting your poor coding practice).

Thick hide. No misplaced pride. That’s what you need mostly.

keiner February 25, 2016 2:04 PM

Vulnerabilites:

I use Adobe Acrobat X Pro for printing selected emails (Thunderbird on Win 7 pro) and whenever I print my Amazon order confirmation emails with subjects something like

Your order of “A wider shade of grey”

I get error messages that the printer driver tried to execute everything after the leading quote ” as a program in the directory of the Adobe Acrobat.

Isn’t that a security hole, as the subject line of the email can trigger execution of arbitrary programs?

ianf February 25, 2016 4:39 PM

@ keiner […] “Isn’t that a security hole, as the subject line of the email can trigger execution of arbitrary programs?

No, that’s (a) a feature of Acrobat X; and (b) the consequence of using shoddy Adobe software by choice.

Nick P February 25, 2016 5:26 PM

@ Marc

Makes sense. Thanks for the tip on that book. I didn’t have that one.

@ All

Here’s a link to the book he referenced: Advanced Programming in the UNIX Environment by W. Richard Stevens.

@ keiner

“Isn’t that a security hole, as the subject line of the email can trigger execution of arbitrary programs? ”

Not only a security hole but a perfect example of why Gerard and I push languages that prevent that by design. 🙂

Dirk Praet February 25, 2016 7:25 PM

@ keiner

That is absolutely unacceptable as an attitude against the users of software,

Reality check: there simply is no such thing as a free ride. The majority of FOSS developers are not heavily paid job hoppers jumping from one gig to the next, but unpaid hobbyists and volunteers either struggling to make a living off grants or doing their FOSS coding in their spare time. You can’t possibly expect them to give away stuff for free and also pick up the bill if it blows up in your face. Which goes for both freeloading private individuals and commercial entities.

If everyone using OpenSSL decided to donate even 1 measly dollar, they could probably bring in (and pay) an entire team to re-write the bloody thing from scratch and have it thoroughly audited.

Isn’t that a security hole, as the subject line of the email can trigger execution of arbitrary programs?

Yes it is. Adobe is not exactly known for writing secure code, in either its free or its paid software. Acrobat and Flash were removed from all of my devices a long time ago. I have one copy of each left on one heavily secured VM on a separate segment completely shielded off from the rest of my home network.

@ Nick P, @ Marc Espie, @ Gerard Van Vooren

First of all: thanks for joining the discussion, Marc. It’s quite a treat to have you here.

I tend to agree with @ Nick P and @Gerard Van Vooren. Over time, I have mostly given up on C, which I have come to consider more as a passion and a religion than the best or most efficient way to get the job done, especially for greenhorns. It’s like a dark art that takes years to master, especially if you want to write secure code too. @Clive and @Marc do however have a point that high level languages may be more vulnerable to “attacks from below” and that you have little control over their runtime environments.

That said, the fact of the matter remains that C is at the core of pretty much all unices and therefore simply cannot be discarded until the next really disruptive technology comes along that not only is C-free, but also is secure and has enough impact to persuade our internet overlords to get rid of legacy OS’es, software and infrastructure.

@ Nick P

Here’s a link to the book he referenced: Advanced UNIX Programming by Stevens

What do you mean you didn’t have that one? Even I’ve got a copy of it 😎

That’s why projects like Cyclone, SVA-OS, Nizza, Softbound + CETS, etc are invaluable in letting us gradually improve what we have in case other stuff doesn’t get written.

I second that emotion. As usual, wide adoption remains a problem as most devs keep focusing on getting stuff to work instead of getting it to run securely. Budgets, deadlines, cost-cutting, you know.

@ Clive

All of which is not to say I’m against “safe languages” it’s the “not using them properly” which applies to all languages safe or otherwise that I’m against.

I guess that sums it up pretty well. The learning curve for low level languages of course does tend to be a wee bit steeper than that for high level ones. Try explaining the concept and usage of pointers in C to someone who has learned .NET or Java in school. Last time I tried they looked at me as if I were from Mars.

keiner February 26, 2016 1:34 AM

But (to earn some money) I have to use .pdfs a lot and manipulate them (including encryption with passwords etc.). What is a SECURE software tool for these .pdf operations?

I removed Flash some years ago, but without Acrobat I can’t get my daily work done…

Marc Espie February 26, 2016 6:17 PM

The best pdf parser I’ve seen so far is mupdf and its associated tool suite.

It is unfortunately not complete (wrt form handling, for instance), but it’s one of those few open source pdf packages that are not just yet another fork of xpdf.

The code is actually readable, which is another plus.

Nix February 27, 2016 6:24 AM

@keiner, a ‘similarly creditable review policy’ relates to whether a patch gets reviewed by someone else with experience of that bit of the system before it gets in. It has no relevance whatever to management of incoming bug reports, and most especially not to any part of security response other than writing patches to fix the bug. I can’t imagine what you’re driving at, or how you can imagine that strong code review has any relevance to speed of fixing security holes (other than perhaps to slow it down as the fixes are reviewed).

It’s also not ‘self-regarding’ to say that glibc has more or less the same review policy as OpenBSD now (everything has to be reviewed before it gets in, no exceptions). It’s simply a statement of fact. Of course, more of OpenBSD’s codebase has been reviewed that way, since their policy has been in place for a great deal longer, but that doesn’t necessarily fix everything, as the UseRoaming flaw demonstrates.

@Nick P, given that much of the really complex and involuted stuff in glibc was written by two people (Roland McGrath and Ulrich Drepper) in a distinctly cathedralic style, and that both are among the smartest software developers you’ll find anywhere (though Uli was not a good maintainer for other reasons), I don’t see how you could possibly describe glibc as “a pile of old festering hacks, endlessly copied and pasted by a clueless generation of IT “professionals” who wouldn’t recognize sound IT architecture if you hit them over the head with it”, as PH-K hilariously describes various things in the build-system stack. (Which do have rationales which were good in the early 1990s, when their sole competitor was the pile of cruft which was imake).

glibc certainly has old festering hacks, like any large software system, but they are being identified and removed as they cause trouble, insofar as glibc’s extremely severe ABI-compatibility requirements permit (which is, of course, a problem the BSDs don’t have because they don’t care about ABI compatibility: libc changed, oh just recompile everything!)

TBH, I find it hard to believe that PH-K doesn’t know that the entirety of the GNU build system, Automake, Autoconf, GNU make, you name it, was written by gnarled old hackers more or less like, uh, him, and most definitely in the same generation.

(btw, my nom de net has nothing to do with the Nix distro, though I think it’s excellent. It predates the Nix distro by about twenty years and I see no reason why I should change it because some upstart has reused it for their project 😛 )

@Marc Espie:

Thick hide. No misplaced pride. That’s what you need mostly.

YES, this indeed. Even non-misplaced pride is problematic: Uli had a lot of that, justifiably (his work on NPTL alone justified it), but it made glibc a horror to work on for more than a decade.

Your praise of mupdf is justified too. 🙂

@Dirk Praet:

The majority of FOSS developers are not heavily paid job hoppers jumping from one gig to the next, but unpaid hobbyists and volunteers either struggling to make a living off grants or doing their FOSS coding in their spare time.

This hasn’t been true for many years. The traffic on free software mailing lists now peaks during the working week, and on the order of 80% of contributions to the core toolchainey projects, the kernel, etc now come from people who are paid to do so. There are a lot of crucial projects (like ntpd, openssh etc) where this isn’t true, but most developers who want a job doing free software development can get one, and the most active projects are almost without exception most active because of paid contributions.

Heck, even I got a job in this area in the end, and I have an interview phobia and intense dislike of job-switching. If I managed it, everyone who wants to can.

Taken February 27, 2016 10:15 AM

From googlesecurityblog:

glibc reserves 2048 bytes in the stack through alloca()[…].
Later on, if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated. Under certain conditions a mismatch between the stack buffer and the new heap allocation will happen. The final effect is that the stack buffer will be used to store the DNS response.

Would it be fair to argue that this process has too much complexity, and that a simpler process (with a single buffer, always on the heap) would have been safer?

Dirk Praet February 27, 2016 11:57 AM

@ Nix

This hasn’t been true for many years. The traffic on free software mailing lists now peaks during the working week, and on the order of 80% of contributions to the core toolchainey projects, the kernel, etc now come from people who are paid to do so.

I’ve been doing some digging and it would indeed appear that you’re right. Thanks for pointing this out to me. I guess I have been overly generalising from the situation of folks like Werner Koch, Harlan Stenn and the like. That said, there still is no such thing as a free ride. One can only hope FOSS communities and developers adopt software lifecycle, version and release management best practices and pay sufficient attention to code review and independent audits. But that goes for commercial software just as well.

I do get people being upset about the sometimes questionable quality and maintenance of ubiquitous stuff like OpenSSL, but whining about it is not helping. Getting involved or making donations is.

Gerard van Vooren February 27, 2016 1:04 PM

@ Nix,

glibc certainly has old festering hacks, like any large software system, but they are being identified and removed as they cause trouble, insofar as glibc’s extremely severe ABI-compatibility requirements permit (which is, of course, a problem the BSDs don’t have because they don’t care about ABI compatibility: libc changed, oh just recompile everything!)

Being backwards compatible certainly has its benefits, but the cost is that maintainability and review-ability become close to impossible. When it comes to security, simplicity is always the best answer. I agree with Dirk that some schedules would be really beneficial, but refactoring, and heck, using a safe language (think Ada or Modula), is very desirable. This approach is of course very much against the current anarchy, but we don’t live in the nineties anymore. The codebases need to shrink, not expand any further. I believe that cooperating is the real answer; it’s not gonna happen of course, but it is. There are too many flavors of Linux, too many distractions. If the organizations behind the major Linux systems could cooperate and manage to set some goals, the world would be a better place.

TBH, I find it hard to believe that PH-K doesn’t know that the entirety of the GNU build system, Automake, Autoconf, GNU make, you name it, was written by gnarled old hackers more or less like, uh, him, and most definitely in the same generation.

The GNU Autocrap tools are indeed crap. That’s all there is to say about it and the likes of cmake only makes it worse. I really like the simple Makefiles of BSD systems. There is nothing hidden and there is no fancy stuff in it, only a couple of LOC.

Nick P February 27, 2016 10:42 PM

@ Dirk Praet

Don’t let Nix’s statement mislead you. Most OSS projects are people scratching an itch. Many important, widely-used projects barely have any developers. Some get corporate contributions. Some with corporations largely behind them almost totally get corporate contributions. A handful have successful foundations raising real money to work on them.

Most are barely getting by with one or a small number of volunteers keeping them alive, though. Many still get shut down due to the burden of this. Werner Koch etc are representative of the wide-use-but-little-support problem before the donation. Here’s a great article by Nadia Eghbal showing that critical categories like infrastructure and many important projects get almost no money. The developers themselves are quoted. Another shows the resistance volunteers encounter, with some not-so-great numbers.

So, open source is all over the place plus simultaneously being shown to be a terrible business model or even way to keep things alive. The ones doing the best are the hybrids that make the money & get developers with proprietary stuff while pushing/using a key OSS technology. That’s the model I push since it’s most proven.

Nix February 29, 2016 7:09 AM

@Taken: Yeah, a lot of alloca() premature optimizations have been turned into malloc()s of late, for exactly the reason you suggest. (In some places, malloc() is actually impossible: among other things, functions that must be async-signal-safe cannot call malloc(), but can call alloca(). And even when a function is not guaranteed to be async-signal-safe, if people often act as if it is, and it otherwise would be but for a call to malloc() that could easily be translated into an alloca(), that translation will often be done. But nobody expects hostname lookup to be async-signal-safe!)

@Dirk Praet, oh yes, you are entirely right. A whole lot of critical projects do have only a few largely-spare-time maintainers — but thanks to CodeSourcery back in the day, and later Red Hat and others, this hasn’t been true of the core toolchain (nor of glibc) for a long, long time. Heck, glibc originated in a paid-for project (Roland McGrath was paid by the FSF to write the first version).

@Gerard van Vooren, simplicity is indeed the best option where security is concerned, but if we broke glibc’s ABI at this stage (requiring every single Linux program to be relinked simultaneously), nobody would ever use the resulting library. Compatibility is not an optional addon for software like this: it is its single most important feature.

Equally, well, if you’d looked at glibc’s build system you wouldn’t be complaining about Autoconf! glibc’s configury is really very simple and is aggressively pruned of no-longer-needed checks: it really is just checking for toolchain features that compilers/assemblers/linkers within the supported version range may not always support on all platforms glibc works on. The difficult part of the glibc build system is the very makefiles you are praising here. Roland also wrote GNU Make, you see, and was the maintainer of both GNU Make and glibc for a very long time. For all that time, if he needed a new feature in Make to make glibc’s makefiles smarter, he added it. So the glibc makefiles use more or less every GNU Make feature you can possibly imagine, and are ferociously complex, without the least doubt the most intricate makefiles I have ever seen. (They are comprehensible, and undoubtedly elegant, with things like dynamic overriding of platform-independent code with platform-dependent replacements without any changes needed to the makefiles at all — but they are not in any sense simple.)

Gerard van Vooren February 29, 2016 2:52 PM

@ Nix,

simplicity is indeed the best option where security is concerned, but if we broke glibc’s ABI at this stage (requiring *every single Linux program* to be relinked *simultaneously*), nobody would ever use the resulting library. Compatibility is not an optional addon for software like this: it is its single most important feature.

It needs to be done. There is no other way to do this, at least not a way that I am aware of. How to do this? Just create a new version! (and keep updating today’s version as well, of course)

Equally, well, if you’d looked at glibc’s build system you wouldn’t be complaining about Autoconf!

I am not talking about glibc but everything else in GNU user land.

The difficult part of the glibc build system is the very makefiles you are praising here. Roland also wrote GNU Make, you see, and was the maintainer of both GNU Make and glibc for a very long time. For all that time, if he needed a new feature in Make to make glibc’s makefiles smarter, he added it. So the glibc makefiles use more or less every GNU Make feature you can possibly imagine, and are ferociously complex, without the least doubt the most intricate makefiles I have ever seen.

Ask yourself why gmake and the Autocrap tools are there in the first place (really do this). I can give you a hint: think about complexity itself. Once this problem is finally recognized, the answer is quite simple. I will give you another hint about that: BSD Makefiles.

Nix March 2, 2016 3:58 PM

@Gerard, unfortunately ‘just create a new version’ will not suffice for glibc, because it maintains shared data structures (such as the heap) which must be consistent and maintained by the same code across all shared libraries loaded. That more or less means that you cannot just bump the soname: you need to simultaneously rebuild every shared library that will ever be simultaneously loaded into the same address space at once. That’s a flag day, and for Linux at least that’s not going to happen without a very, very good reason. (It seems unlikely that any reason would ever be considered good enough. The theoretical possibility of improved security is not such a reason. If you want that, use a completely different library, like musl.)

BSD Makefiles… well, suffice to say that you have not convinced me that they are in any sense preferable to what we already have. Argument by assertion is unconvincing.

Gerard van Vooren March 2, 2016 4:36 PM

@ Nix,

unfortunately ‘just create a new version’ will not suffice for glibc…

It would for platforms such as Debian where you get a new release each two years or so with different stages that shift (unstable -> testing -> stable).

BSD Makefiles… well, suffice to say that you have not convinced me that they are in any sense preferable to what we already have. Argument by assertion is unconvincing.

The problem is complexity, mostly introduced by mixing different platform-specific code inside header and source code files. Think how OpenBSD solves this with their LibreSSL and OpenSSH products. They have one native branch (for the GNU platform that would be GNU) and then for each other platform they have specific wrappers. That keeps all that complexity out of the source code, and for the build system it’s only a matter of picking the files that belong to the platform. This is pretty easy with Makefiles.
