Shane September 25, 2014 11:16 AM


Though what they *should* be doing is praising the fact that, despite the time it has taken to find these bugs in each of these latest worst cases, we are still actively auditing these code bases and pushing fixes faster than ever. This, in direct opposition to their closed-source counterparts, whose list of vulnerabilities unknown to all but their exploiters would no doubt boggle the mind, and whose zero-day vulnerability windows can last the lifetime of the software itself.

squarooticus September 25, 2014 11:26 AM

So, let’s be clear here: as much of a fan of open source software as I am, and as many advantages as it has with regard to the security of the most visible, highest-profile projects, there is an awful lot of garbage and atrophied legacy code out there. Most projects have only a handful of people working on them, not the hundreds or thousands of eyeballs that are envisioned when people talk about OSS security.

mark September 25, 2014 11:33 AM

Let me note that, unlike Certain Organizations homed in Redmond, Washington, or even Silicon Valley, four distros (RH, CentOS, Debian, and Ubuntu) had patches out yesterday. In fact, I read about the vulnerability on a CentOS mailing list… and that email mentioned that the package update was already available….

I also think, in spite of what the author for the NYDaily News thinks, that at this point in time, overwhelmingly, Linux is run by folks with a clue, who will patch… or who have autoupdate turned on.


Andrew2 September 25, 2014 11:45 AM


What about embedded devices? There are probably some routers/printers/whatever out there that use CGI shell scripts.

Calum September 25, 2014 11:57 AM

It’s also fixed at source now, and at least if you’re on a Debian derivative it’s probably updated already, so apt-get upgrade all round. My server was vulnerable, but by the time I read the story yesterday the fix was in the repo.

jggimi September 25, 2014 11:59 AM

Andrew2 asked above “Does sshd run bash (and, therefore, the forkbomb) before you log in?” Not to my understanding. From OpenSSH’s sshd(8) man page, highlight mine:

If the client successfully authenticates itself, a dialog for preparing the session is entered. At this time the client may request things like allocating a pseudo-tty, forwarding X11 connections, forwarding TCP connections, or forwarding the authentication agent connection over the secure channel.

After this, the client either requests a shell or execution of a command. The sides then enter session mode. In this mode, either side may send data at any time, and such data is forwarded to/from the shell or command on the server side, and the user terminal in the client side.

Joe September 25, 2014 12:03 PM

IF ((open source software is actually audited) AND (open source software has an equally open bug tracking and patch mechanism) AND (users of open source software really have the luxury of time and money to fix it instead of simply using it via JIT and rapid development schemes)) THEN
    OK, let’s not fault open source software
ELSE
    You get what you pay for with open source
    and it does deserve its bad rep

Anura September 25, 2014 12:20 PM


“WOOHOO! MY WEBSERVER HAS A TWO-YEAR UPTIME! (and hundreds of unpatched kernel, LibC and other vulnerabilities)”

Don’t tell me you haven’t come across that before, although granted the patch for this vulnerability may not require service restarts/system reboots.

Linux sysadmins can be just as dumb as, or dumber than, Windows sysadmins; the difference is that Linux sysadmins are more likely to have a false sense of security.


Historically speaking, Linux has a much better track record than Windows, Apache has a better track record than IIS, and Mozilla has a better track record than IE. To make that kind of statement, you have to do a much more thorough analysis than just “Hey, there have been vulnerabilities in open source, therefore don’t use open source.”

parrot September 25, 2014 12:23 PM

We should avoid the knee-jerk that the media might wrongfully shame open source software. Maybe they’d be right? Or, maybe on a more practical note, who cares? If they start shaming open source, let’s use that interest to fuel some good assurance efforts in OpenSSL, the Linux Kernel, and dustier projects like bash.

The thing we should definitely challenge the media on, though, is any claims that closed source software is somehow a better alternative, at least without some solid facts around the claim.

Gerard van Vooren September 25, 2014 12:34 PM

@ Joe, Open Source is IMO still the right model. The problem with Linux however is that it is scattered (lots of distros) and the GNU history is that of Freedom and making software run on every system. All in all this leads to very large and bloated software. OpenBSD and MINIX3 however focus on code correctness. To me as an engineer, that is the right approach. That said, I really admire what RMS has achieved.

@ parrot, you are absolutely correct.

Anonymouse September 25, 2014 1:14 PM

@Joe — I’d accept your logic if you added “at least as good/well as the average closed-source software” to the end of each of the inner-parenthesized booleans.

Jacob September 25, 2014 1:26 PM

A warning from another forum: It is not enough to fix your system – many vulnerable web sites are now exposed to malware insertion, and when you visit them with your browser (scripting/flash/java enabled), you are in danger.

shadowfirebird September 25, 2014 1:42 PM

I’m not vulnerable. I run Crunchbang, which is based on Debian Wheezy.

So this is hardly a universal problem…

Q September 25, 2014 1:46 PM

I can’t help but notice that all the complaints with respect to Open Source code are found in really dusty unused areas of the codebase. Areas where nobody ever goes.

It’s a bit like owning an apartment building and finding a SINGLE roach in a disused utility closet that hasn’t been opened in 50 years. In contrast, my experiences with Win7/XP/98/95/NT/etc are a bit more like finding millions of roaches in the main entrance-way. Good lord, windows likes to crash on me (bluescreen) at the desktop with no applications running. But dual boot the same box to linux and everything’s stable as a rock.

Perhaps that’s why we see so much made out of so little. Microsoft’s reputation has been in the toilet for decades. I can’t even begin to count the number of security holes, viruses, trojans, and pieces of malware currently out there for windows.

Microsoft is not known for quality or security, and everybody knows it. If they can’t sell their own products, perhaps their advertising machine can hurt their adversary.

name.withheld.for.obvious.reasons September 25, 2014 2:08 PM

As some have commented I see two primary issues and a related underlying concern.

1.) Mono-cultures of any stripe represent inherent risks (NASA learned this a long time ago and has probably forgotten what history teaches).
2.) Having an AWARE community is the best defence.

My concern here is that we castigate hackers, once again, and forget the invaluable services the COMMUNITIES provide. I for one do not see casting stones as productive. Anyone can stick out their tongue and say nan-nan-nan. If you’ve got something to contribute, even if there may be some errors, it’s better than listening to complainers (who just complain without offering counter/debate-level discourse) or crickets.

Anura September 25, 2014 2:34 PM

What the open source world needs is more structure. Many eyes are great, but only if those eyes look at the code. The problem is that the corporate structure doesn’t really work in an open source environment, so I think this is what we need:

1) Coding Standards designed to minimize mistakes.

2) Standardized development processes
2a) Specifications for each change
2b) A test plan, reviewed to verify it covers the scope of the changes
2c) Automated unit testing wherever possible
2d) Code review procedures to verify the code covers the specification, and has no obvious potential issues
2e) Test Plan review required to verify tests cover all paths in changed code

3) Formal verification where possible for major components (kernel and shell included, but also filesystem, compiler, major libraries, etc.).

The thing is, this is great going forward, but what about all this legacy code? It has to go. We need to start building new components making sure they conform to strong standards (especially critical components), and we should preferably do this with languages that fail more safely and are designed to prevent stupid mistakes (which we’ve gone over before in great depth).

name.withheld.for.obvious.reasons September 25, 2014 2:35 PM

@ figureitout
I’d like to give a shout out to you for bringing it up! If I could permissibly speak for the community I’d like to say thanks!

You go dude/dudette

Thomas September 25, 2014 2:35 PM

Apparently this one is called “Shellshock”.
Still waiting for the graphic.
Hopefully it will feature a turtle.

To exploit this bug from a login shell (“NASTY=… ssh root@victim”) you have to get that shell in the first place.
That makes it post-auth, so if you’re vulnerable then the bad-guys already have a foot-hold on your system.

The predominant pre-auth exploit would be via a CGI script that uses bash.
bash CGI scripts are so difficult to get ‘right’ (escaping all untrusted inputs correctly etc.) that it’s easier just to port them to another language.
If you’re running bash CGI scripts this bug may be the least of your worries.
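To make that pre-auth CGI vector concrete, here is a minimal local simulation of what a web server does when it hands a request to a bash CGI script (no real server involved; the variable name matches what CGI servers actually set, and the echoed strings are illustrative):

```shell
# A web server copies attacker-controlled request headers into environment
# variables (User-Agent -> HTTP_USER_AGENT) before exec'ing the CGI script.
# This simulates that hand-off locally:
env 'HTTP_USER_AGENT=() { :;}; echo INJECTED' bash -c 'echo handling request'
# An unpatched bash imports the "function definition" and executes the
# trailing command, printing INJECTED before the script even starts; a
# patched bash only prints "handling request" (perhaps with a warning).
```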

name.withheld.for.obvious.reasons September 25, 2014 2:39 PM

@ Anura
WELL SAID!!! This is what I am talking about. We teach and learn from each other. It’s too bad that the political and corporate royals can’t take a lesson from these types of communities.

Thomas September 25, 2014 2:44 PM


What you say makes sense, but it’s not limited to open source.

“goto fail” demonstrated that even companies with money to burn don’t always get this right, so expecting perfection from projects run on less than a shoestring budget may be just a little unreasonable.

name.withheld.for.obvious.reasons September 25, 2014 2:52 PM

@ Figureitout
Sorry for the misspelling–don’t want to be the recipient of the same ire Nick P received. ;>)

Anura September 25, 2014 3:02 PM


These are designed to work with small organizations. Each project needs reviewers, and then anyone can submit the specification, the test plan, and the code. Each component just has to be reviewed after that. The goto fail issue is a problem with C and/or the coding standards, one which could easily be solved by a language that offered better cleanup functionality:

cert = get_cert(foo);
if (check_signature(cert))
    if (check_host(cert, current_hostname))
        if (check_revocation(cert))
            return OK;
        return REVOKED;
    return BAD_HOST;
return BAD_SIG;
finally //Always executed

Paul September 25, 2014 3:11 PM

Can Mac users do anything to take action on their own? Or do we have to wait for apple to make an update?

Anura September 25, 2014 3:40 PM

Scratch that; it looks like you are still vulnerable as long as bash is installed. So you will have to wait for a patch.

Thomas September 25, 2014 4:54 PM


Software is always written under resource pressure (mostly because it’s life, and life always happens under resource pressure).

If you’re volunteering then you spend your limited free time.
If you’re being paid then you have someone who wants a product in exchange for your salary (and preferably wants it before $COMPETITOR releases theirs).

I wish “getting it right” were as easy as some flow-chart or certification.
In the end it’s about managing limited resources, a problem mankind has always struggled with.

Tim Bradshaw September 25, 2014 5:19 PM

Although this is clearly a horrible vulnerability, I think that bash (certainly) and other Unix shells (probably) are just a mass of awful security bugs. In particular the whole ability to take random crap from the environment and turn it into definitions even when those definitions override shell built-ins is pretty nasty, as you can invent environments where nothing is what it seems and you can’t find out that it’s not.

Really you don’t want a Unix shell on any call stack you care about.

alo September 25, 2014 5:41 PM

It is actually quite easy to binary-patch bash. Open bash with e.g. emacs and search for the string “() {” and replace “(” with a null character. This disables the horrible “function definition from the environment” feature altogether.

Anura September 25, 2014 6:42 PM


It’s just a matter of making sure there is enough pressure for well-built open-source software. With large enough communities, especially those that are making distributions, we can enforce those rules. Start a community to list all major operating system components, from shells, to filesystems, to browsers, to kernel components, etc. and identify what software is available that meets the standards (little to none at first). Host repositories and project management software, and get volunteers to start working on them.

Mimic what Stallman did in the 1980s and construct an entire operating system component by component using strict coding standards. Start with the low-hanging fruit like libraries, shells and terminals, and gradually work up to making a new compiler, new filesystem, new browser, new graphical shell, remote shell, webserver, and new kernel. Once you have enough, you can have OS projects that require these standards for contributions.

Of course, this is a great time to move away from C. Ada? Maybe. New language? It would probably take too long for some of the components, at least.

Yusuke Shinyama September 25, 2014 6:49 PM


I’ve been a reader of your articles for a long time and really appreciate it, but when you put a bunch of links to various sites that are more or less reporting the same thing, this practice bothers me. I tend to check the URL of each link before clicking, but this is neither very readable nor easy to check. Do you do this to imply that it doesn’t matter which article you read? I prefer each link to be numbered if there are multiple reports.

Yusuke Shinyama September 25, 2014 6:55 PM

I was talking about the style that you put a link on each word, not the practice of citing multiple sources. Sorry for confusion.

name.withheld.for.obvious.reasons September 25, 2014 7:11 PM

@ Thomas

I wish “getting it right” were as easy as some flow-chart or certification.
In the end it’s about managing limited resources, a problem mankind has always struggled with.

I don’t accept your premise. In fact, I believe it to be simple and easy to “solve”. Based on your comment I’d not expect water to come out of the faucet. It is the failure of imagination AND the markets. The classic “winner take all” crap is fine in a world where cognition is not a requirement…oh yeah, cognition isn’t a requirement on this planet.

Most of our markets are captive to some interest or person. The fallacy that is the “freedom of the markets” is just that, FALLACY.

I know, for example, that brokers of every sort “manage” or “control” markets. Wherever a handle, knob, or lock can be placed (physically or logically) there will be someone asserting their (not your or our) interests. Our systems are so corrupted by power that there is little to trust–period. One good example is where data brokers (holding information about you) will sell you insurance (live lock) or “monitor” the data THEY already control. What a f’ing racket.

nr September 25, 2014 7:16 PM

The fork bomb won’t do anything other than eat a little CPU for most users these days. Every sane system has default user limits to prevent a fork bomb from opening enough processes to crash the system. You would have to escalate privileges in most cases to actually crash a system with a bash fork bomb.
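Those per-user limits are easy to inspect from a shell; a quick sketch (the specific numbers vary by distro, and 1000 below is just an illustrative value):

```shell
# Show the cap on the number of processes this user may create:
ulimit -u
# A shell can lower its own cap (children inherit it); non-root shells
# cannot raise it back, which is what contains a runaway fork bomb.
# "|| true" keeps this harmless if your cap is already below 1000:
( ulimit -u 1000 2>/dev/null || true; ulimit -u )
# Distro-wide defaults usually live in /etc/security/limits.conf ("nproc").
```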

name.withheld.for.obvious.reasons September 25, 2014 7:17 PM

Would making an alias filter in /etc/bash_completion.d work to solve this in the interim? Or could setting the perms on the completion directory to octal 000 work?

JimBob September 25, 2014 7:36 PM

I’ve got an ancient system running bash 3.0. After grabbing the source for 3.0.16 and patch 17, patching and compiling, the resulting bash binary still fails the env x='() { :;}; echo vulnerable' bash -c "echo this is a test" test.

Running the same process for different versions of bash on different systems results in similar output to the below:
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

If I’m overlooking something, I’d appreciate someone pointing it out before I head over to the GNU mailing list and (more) publicly make an idiot out of myself.

not_my_real_name September 25, 2014 8:50 PM

I admit that, at first blush, this seems to be a sign of a weakening of FOSS security and/or an example of a time when FOSS is just as insecure as the other guys in town (Windows). Again, at first blush, you can almost hear a bell tolling somewhere.

However, upon reflection, I believe that this is merely a symptom of the lesson we learned from heartbleed. Namely that we use certain software all the time, without thinking about it. Both Bash and OpenSSL are VERY VERY popular and are installed (if not used) on many modern linux OS’s. The code for these programs needs to be scrutinized. Vulnerabilities like these, and their subsequent patches make us more aware (and thus more secure) with time. Heartbleed was a lesson, shellshock is a lesson. I hope the FOSS communities learn from them.

Props to M$ for implementing the SDLC; however, I think that something similar for the FOSS community would be a wise choice at this time.

T September 25, 2014 8:51 PM

Uh, @JimBob :), I think from what I’ve read today that means your system is good.
Y’all having fun with me?
Honestly, I don’t know why anyone actually thinks one command line test and response makes sure that anything is actually secure, but yeah, that’s what the innernets are suggesting today with regard to the Bash problem.

T September 25, 2014 9:07 PM

Hey, something I don’t get: if the shell doesn’t have to be root (or does it?), why even bother with setting the function? Why not just enter the command once you have command-line terminal access? I don’t have time or access to try that right now.

And about the shell and its ownership: to fix this, do we need to fix it at every user level if we don’t have a system patch yet? For instance, if Sally the janitor has an account and she’s using bash, but we’ve changed the shell for everyone at higher levels, Sally’s account must still be vulnerable, right?
Does changing the shell have any effect on the problem? And if you leave the bash code there, can’t anyone just change the shell back to bash again?

The more I think about this, the more my head gets stuck in a blender of confusion. How big is this back door?

Now I am JaneBob.

Chris Abbott September 25, 2014 9:19 PM

This is discomforting since my webserver runs Ubuntu LTS 12.04. I know this sounds terrible but I’ve been busy and forgetting to run security updates lately (enter tar and feathers). I know it’s affected by Shellshock. The good thing is my website doesn’t use any CGI or other scripts. However, it looks like it could still be possible to get in. I’m working on it now, but what I don’t want is something on it that’s persistent and opens up access to everything else on the server. I don’t feel like starting from scratch on my server. Soon, I will be separating the duties of that server to another server that’s semi-air gapped.

name.withheld.for.obvious.reasons September 25, 2014 10:02 PM

Does this apply to /bin/ash? If so, updating /etc/passwd with ash instead of bash would be a simple–process space only–restart. Yeah, uptime now 4 years and 137 days and 14 hours.

JimBob September 26, 2014 12:45 AM

@T – I discovered my mistake. I was attempting to verify the 3.0.17 patch fix after compiling but before installing, and I failed to test it properly. All clear, onwards to the next adventure!

Now I can look like a public idiot on Bruce Schneier’s site!

uh, Mike September 26, 2014 12:45 AM

@T, I’m with you, for different reasons.

There are things in the system like the shell, the editor, awk, and so forth, that MUST NOT allow input, environment included, across a security barrier.

Has that been forgotten?

name.withheld.for.obvious.reasons September 26, 2014 12:57 AM

@ JimBob

Now I can look like a public idiot on Bruce Schneier’s site!

You’re going to have to do a whole lot better than that! We have 535 congressional members that are MUCH further along the elliptical curve than you–they’re on the record (check the Library of Congress if you don’t believe me). And that’s just for starters.

eCurmudgeon September 26, 2014 1:16 AM


Of course, this is a great time to move away from C. Ada? Maybe. New language? It would probably take too long for some of the components, at least.

Better yet, the SPARK 2014 dialect of Ada:

SPARK is a software development technology specifically designed for engineering high-reliability applications. It consists of a programming language, a verification toolset and a design method which, taken together, ensure that ultra-low defect software can be deployed in application domains where high-reliability must be assured, for example where safety and security are key requirements.

Vance September 26, 2014 3:22 AM


As I understand it (and I may not be understanding it well), there are two main concerns.

  1. CGI web applications where untrusted user input is stuffed into an environment variable, then a shell script is called.
  2. Non-CGI web applications which stuff untrusted user input into an environment variable and then use the system() call. A shell (sh) is invoked regardless of the command being run.

I think that environment variables were not generally thought to be vectors for code, so applications may not have been designed to avoid assigning hostile input to them. Or they may have been relying on the receiving command/script to sanitize the input, which is too late in this case.

Note that in both cases, if “sh” is not bash and if bash is not explicitly called by the application, this exploit should fail.

bob September 26, 2014 3:43 AM

@Anura Errr, my webapp has a 4 year uptime and auto-patched this bug before it was publicly known and just auto-patched the “oooops, the first patch missed something” bug this morning. It’s still got a 4 year uptime. Last week it installed a kernel patch without interrupting its uptime. Do you have a point?

Stuart September 26, 2014 6:28 AM

Correct me if I’m wrong, but as I read the various commentaries and articles on this bug/exploit, I get the distinct impression that the simplest (and hence, most likely) path to exploit this is via CGI scripts that use system() or popen() calls – because these calls use /bin/sh to execute commands. Correct?

That being the case, it seems to me that the obvious short-term mitigation is to make sure that /bin/sh is something other than bash, and that no remotely accessible scripts use bash explicitly for anything.

Of course, this then creates the problem that there are far too many Linux shell scripts that assume that /bin/sh is bash, and hence will break (because they use #!/bin/sh rather than #!/usr/bin/bash or similar) – but, bluntly, I’d rather deal with that than with a remote exploit, thank you very much.

Yes, you would want to get a properly patched bash in place in due course, but wouldn’t getting a sane (ie: smaller) shell into /bin/sh close off most of the vulnerabilities and hence give some breathing room?

(So glad I’m not in the sysadmin game at the moment…)
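Checking what /bin/sh really resolves to takes one command; a sketch of the check (the Debian-specific swap command is real, but whether it applies depends on your distro):

```shell
# system() and popen() run commands via /bin/sh, so see what that is:
readlink -f /bin/sh   # e.g. dash on Debian/Ubuntu, bash on RHEL/CentOS
# On Debian-family systems, dash can be selected as /bin/sh with:
#   sudo dpkg-reconfigure dash
# After that, #!/bin/sh scripts that rely on bash-isms may break, which
# is exactly the trade-off Stuart describes.
```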

Carlos September 26, 2014 9:27 AM


“Apache has a better track record than IIS”

Yeah, no it doesn’t.

Apache HTTP Server 2.0.x – 43 vulnerabilities
Apache HTTP Server 2.2.x – 62 ”
Apache HTTP Server 2.4.x – 17 ”

IIS 4 – 0 vulnerabilities
IIS 5.x – 12 ”
IIS 6 – 11 ”
IIS 7.x – 8 ”
IIS 8.x – 0 ”

(source: Secunia)

Doing the math for Windows vs. Linux and IE vs. Firefox will probably yield interesting results too. But I’m too lazy to do it. Err.. I mean, busy.

Miksa September 26, 2014 10:24 AM

I’m surprised no one has mentioned DHCP as the exploit avenue, I would consider that the biggest threat. I would suspect many internal networks aren’t protected against someone bringing in a malicious DHCP server and even some ISPs may not have required protection in place, if your ADSL/cable modem is set to bridging mode.

Szigi September 26, 2014 11:16 AM

People argue that closed source is even less secure. I think we are looking at the wrong problem. It’s a common belief that open source is automatically more secure because of those thousands of pairs of eyeballs looking at it.
What OpenSSL and Bash have shown is that this might not be true. We had horrible vulns lurking there for years.
It’s high time that some kind of “This opensource project has been security reviewed” program comes forward, so that trust remains in opensource.
I’d much rather see a change freeze and security review than a new --with-another-never-used-feature option in bash for syntax-highlighting directory listings containing device files.
And the same goes for a lot of software.

Metzelder September 26, 2014 12:26 PM

Some local RedHat-aligned fellas (the author of the original post and its general tone do not inspire too much trust in him, but I may be wrong) are advocating moving Linux from textual interaction between utilities to binary interfaces, since

…glueing binaries with pipes carrying unstructured text data is absurd and archaic. We have proposed shell pipeline modifications. We have been talking about the need to move from text-orientated systems to binary ones (see journald and lumberjack). And, of course, our colleagues have always been asserting that this unix-way “launching applications from other applications” is fallacious (see Andy Grover’s presentation (pdf)).

We continue to uphold the view that all this “unix way” has to be replaced with tightly coupled but independent (systemd components, D-Bus applications) and isolated (SELinux, cgroups etc.) applications communicating using standardised binary messages (JSON, GVariant, BSON, XML, whatever) over the standard data bus (D-Bus, kdbus), and whose events are operatively logged by the OS’s database (journald). This is what we call a Linux platform, and that is where we lead the opensource world.

From the viewpoint of security, how sane or insane are their ideas? They seem to have been preaching these ideas for quite a while, the latest being Revisiting How We Put Together Linux Systems (this one seems somewhat complex to me).

T September 26, 2014 12:53 PM

Apparently there is another “T” on this forum…

@JimBob — I didn’t mean you were a fool!!! Nobody knows all the details of this thing yet! Well, actually, my favorite joke has always been:
What’s the difference between a Car Salesman and a Tech Support Person?
Answer: The Car Salesman KNOWS he’s lying.

And I have seen some wild speculation today and yesterday, and people suggesting Fork Bombs to users, and wow, it’s been hard to grab a hold of the tail of this thing and shake it for useful info.

You didn’t say what OS you were using, but
Redhat has a good page on this.

Here is the recent info on Redhat patching:

Diagnostic Steps

To test if your version of Bash is vulnerable to CVE-2014-6271, run the following command:

$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"

If the output of the above command contains a line containing only the word vulnerable you are using a vulnerable version of Bash. The patch used to fix this issue ensures that no code is allowed after the end of a Bash function.

Note that different Bash versions will also print different warnings while executing the above command. The Bash versions without any fix produce the following output:

$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)'
bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable'
bash: error importing function definition for `BASH_FUNC_x'

The versions with only the original CVE-2014-6271 fix applied produce the following output:

$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
bash: error importing function definition for `BASH_FUNC_x()'

The versions with additional fixes from RHSA-2014:1306 produce the following output:

$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `BASH_FUNC_x'

The difference in the output is caused by additional function processing changes explained in the “How does this impact systems” section below.

The fix for CVE-2014-7169 ensures that the system is protected from the file creation issue. To test if your version of Bash is vulnerable to CVE-2014-7169, run the following command:

$ cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo
bash: x: line 1: syntax error near unexpected token `='
bash: x: line 1: `'
bash: error importing function definition for `x'
Fri Sep 26 11:49:58 GMT 2014

If your system is vulnerable, the time and date information will be output on the screen and a file called /tmp/echo will be created.

If your system is not vulnerable, you will see output similar to:

$ cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo
cat: /tmp/echo: No such file or directory

If your system is vulnerable, you can fix these issues by updating to the most recent version of the Bash package by running the following command:

yum update bash

Jacob September 26, 2014 3:42 PM

I am positive that the TLAs are having a field day, owning all the Linux web servers on their target lists in no time.

BTW, the bash issue hasn’t been resolved yet, and in the last few hours it has become an even bigger issue due to related processes which inherently call it.

JaneBob September 26, 2014 4:26 PM

I called the Apple store near us this afternoon to ask about the Bash problem, having found little online about any fix.

The first support person had no idea what the Bash vulnerability was. I was transferred to another Tech Support person at a Call Center who actually said “I have no idea what you are talking about”, but was kind enough to look it up on the Internet for a minute or two, and then said “This is absolutely not true”.

I might just as well have hung up there, but instead was transferred to a Supervisor.
I have to say that ALL Apple support personnel are NICE and try to be helpful, I’m not complaining about them. I am concerned that Apple left them on the front lines without a hint about this latest issue.

The Supervisor said they are working on a patch. I challenged, said I’d read somewhere that GNU had already provided the patch. Bottom line–no one could tell me what might break if I managed to change the shell I was using, as a stopgap, and I was told that while Apple does not condone user modifications, I am free to compile a bash fix on my own. sigh. I will do that only after I am comfortable that I can fix whatever breaks.

Regarding the iMore announcements today that “the vast majority of Apple users are not vulnerable”, I take issue with a couple of the statements from Apple.

“Although the Bash command shell is included in OS X, it is not vulnerable unless users specifically enable it.”
THAT IS NOT TRUE–my system uses bash as the default shell, and sh points to it, as a DEFAULT–whether I know about it or use it or not–it’s the default. It’s there.
And if it’s there and is vulnerable, which “user” do you think is going to go for it?

“With OS X, systems are safe by default and not exposed to remote exploits of bash unless users configure advanced UNIX services.”

Advanced UNIX services? Which ones? Maybe the speaker meant “users are less likely to be a target if they don’t enable basic common services that Apple makes readily available”–like ssh, remote access, Apache, whatever. But that sure does not make anyone SAFE.

I am very disappointed with Apple. Maybe it’s time to go Linux only again. It seems so much worse to have inane statements of reassurance made than it would to hear that Apple is struggling to make sure any patch they ship out is perfect.
Or, even, that they are busy on multiple fronts right now and getting to it as soon as they can—just don’t tell users that bash isn’t the default shell–that’s a damn lie.

BoppingAround September 26, 2014 4:53 PM

I am concerned that Apple left them on the front lines without a hint about this latest issue.

Totally expectable, I’d say. There aren’t too many people using idevices that are concerned about this, I suppose.

Regarding the iMore announcements today that “the vast majority of Apple users are not vulnerable”, I take issue with a couple of the statements from Apple.

I’ll add a little here, though it is off-topic but the attitude is similar:

Apple knew as early as March 2014 of a security hole that left the personal data of iCloud users vulnerable, according to leaked emails between the company and a noted security researcher.

Am I from Mars? September 26, 2014 4:55 PM

You will just have to wait for Apple to push out a fix. If you are not running a public web server from your computer with an (IMHO) bad implementation of its web services, you don't have much of a vulnerability.

— Expounding on the obvious follows —
#1, for a web service to be vulnerable, the service must accept input that is not cleaned up.
#2, the input must be placed into an environment variable before a Bash script is executed.
#3, the Bash script has to be executed with root privileges, or with privileges elevated enough to do some harm.

So there are at least three things that must be skipped in the implementation, setup, and deployment of the web service. I stand by the statement that it's a problem with incompetent developers and administrators. Really, it's like Bobby Tables. Stop leaving the door wide open; there are real jerks out there.
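A minimal sketch of condition #2, mimicking how a CGI gateway copies untrusted request headers into the environment before spawning a script (the header and payload here are illustrative; the name mangling follows the CGI convention of prefixing with `HTTP_`):

```python
import os

# Sketch: a CGI gateway maps request headers to environment variables
# (User-Agent -> HTTP_USER_AGENT) with no filtering -- condition #2.
def cgi_env(headers: dict) -> dict:
    env = dict(os.environ)
    for name, value in headers.items():
        env["HTTP_" + name.upper().replace("-", "_")] = value
    return env

env = cgi_env({"User-Agent": "() { :;}; echo owned"})
# On an unpatched bash, merely spawning a shell with this environment
# would run the trailing "echo owned" before the script itself.
print(env["HTTP_USER_AGENT"])
```

The prefix is exactly why Dewey's point below matters: even a `HTTP_`-prefixed, apparently harmless variable reached bash's function-import code.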

The other part is a dhcp client that doesn’t check its input, and uses a Bash script for some of its settings.

This "bug" was implemented as a feature, and it has been in the manual pages for as long as Bash has had this feature. Just now someone has noticed it, and of course others have capitalized on it. I learned the importance of checking input before the web became popular. And now people are noticing that maybe it's a good idea?

Oy, vay.

JaneBob September 26, 2014 6:30 PM

@BoppingAround Thanks for that link–that is interesting and sounds like Apple missed a good opportunity.
To be fair, though barely, Balsic's translations of English would raise red flags with me–some of his translations don't make sense if he was using Google Translate, a dictionary, phonetics, etc. Pardon me Balsic, Asifa, please, minfudlick, but even I had a second of wondering if you were faking your translations to appear to be someone else. (I have made many messes of Arabic, Mandarin, and Russian, worse than most non-native English speakers do with English)
BUT, Apple should have ignored what they thought of the person and looked at his data.

Thanks again for the link!

@Am I From Mars: First off, try being from Venus and loving IT–that is a painful place to come from!
Thanks for your thoughts, I really hope you are right. But the DHCP part, that is something I still worry about, no one has much control over their internet connection anymore. Know of any good ways to test what DHCP vulnerabilities might be with bash? Yet another protocol I only know the basics of.

Dewey September 27, 2014 12:23 AM

Am I from Mars? wrote:

This “bug” was implemented as a feature, and it has been in the manual pages for as long as Bash has had this feature.

Not really, from what I can tell. The string “() {” doesn’t appear once. There are vague references to “importing function definitions from the shell environment” and the like, but I can’t find any description of how it works. There’s certainly no huge warning like “bash will look at literally every environment variable and import anything starting with ‘() {‘ as a function”.

“Everyone knows” that the environment is just a series of null-terminated key=value strings which, in general, are not interpreted except as documented in man pages for libc and each program that reads specific keys. Duplicate keys can exist, which can create interesting security vulnerabilities, but historically only a subset of variables like PATH have been viewed as requiring special caution. Prefixing strings, e.g. with “HTTP_”, has always been seen as enough (which also explains OpenSSHd’s default “AcceptEnv LANG LC_*” setting—without LOCPATH they can only refer to system paths—and the similar cleanup done by glibc for setuid programs).

If the vaguely-defined “importing function definitions” feature had used “BASH_”-prefixed names, or names with any prefix, these things would never have passed a whitelist. Or if it had been properly documented, someone would have probably figured out earlier that this was a terrible idea (and last I heard, the patches don’t completely disable it, but only fix the bug of trailing definitions being executed immediately). But we have to infer from statements like “The export and declare -x commands allow parameters and functions to be added to and deleted from the environment”, and that’s not nearly good enough for such a dangerous behavior.

Vance September 27, 2014 5:22 AM


From what I understand, you are spot on. For example, Debian and Ubuntu use dash as /bin/sh, not bash, and so are not vulnerable. (Unless, as you note, the remotely-accessible application calls bash explicitly.)

Also note that this is not a privilege-escalation bug – any exploit would be running with the permissions of whatever daemon was attacked (probably the web server). Of course, the injected commands could attempt to exploit further vulnerabilities to gain privileges.

alo September 27, 2014 7:06 AM

As I wrote earlier, a simple binary patch removes the whole "automagic function definitions from the environment" feature. I made a simple Python script to make the patch. It simply finds the null-terminated string "() {" and writes a null over the first "(".

The script assumes that the first occurrence is the correct one. At least in the bash shells I have, that is the only occurrence, and it is the correct one. This can also be used to disable the feature in already-patched, vendor-supplied bash binaries.
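Something along the lines alo describes would look like this (a sketch, not verified against any particular bash build; it carries over alo's assumption that the first match is the right one, and the paths in the comment are illustrative):

```python
# Sketch of the binary patch: locate the NUL-terminated "() {" marker in
# the bash binary and overwrite its first byte with NUL, so bash never
# matches environment values against it.
MAGIC = b"() {\x00"

def disable_env_functions(data: bytes) -> bytes:
    i = data.find(MAGIC)
    if i < 0:
        raise ValueError("magic string not found")
    return data[:i] + b"\x00" + data[i + 1:]

# usage (paths illustrative -- keep a backup of the original binary):
# patched = disable_env_functions(open("/bin/bash", "rb").read())
# open("bash.patched", "wb").write(patched)
```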

David Henderson September 28, 2014 12:25 AM

zsh is not vulnerable to shell shock; I checked.

I’m on OSX 10.7. After creation of an admin user using an existing version of zsh as the shell, I fixed the bug using a zsh driven terminal window to get to the following configuration:

bluejay% ls -l /bin/*sh*
lrwxr-xr-x 1 root wheel 8 Sep 27 18:58 /bin/bash -> /bin/zsh
-r-xr-xr-x 1 root wheel 1371648 Jul 1 02:18 /bin/bashOld
-rwxr-xr-x 2 root wheel 772992 Jul 1 02:18 /bin/csh
-r-xr-xr-x 1 root wheel 2180736 Jul 1 02:18 /bin/ksh
-r-xr-xr-x 1 root wheel 1371712 Jul 1 02:18 /bin/sh
-rwxr-xr-x 2 root wheel 772992 Jul 1 02:18 /bin/tcsh
-rwxr-xr-x 1 root wheel 1103984 Jul 1 02:18 /bin/zsh

I wonder at the write privs for tcsh, zsh and csh; I left them alone anyway.

JaneBob September 29, 2014 4:12 PM

Hey, anyone have any idea what THIS is?

in /bin on my mac:

-r-xr-xr-x 2 root wheel 43120 Mar 28 2013 [

$ file [
[: Mach-O universal binary with 2 architectures
[ (for architecture x86_64): Mach-O 64-bit executable x86_64
[ (for architecture i386): Mach-O executable i386

Wikipedia says this about Mach-O:
Under NeXTSTEP, OPENSTEP, OS X, and iOS, multiple Mach-O files can be combined in a multi-architecture binary. This allows a single binary file to contain code to support multiple instruction set architectures. For example, a multi-architecture binary for iOS can have 6 instruction set architectures, namely ARMv6 (for iPhone, 3G and 1st / 2nd generation iPod touch), ARMv7 (for iPhone 3GS, 4, 4S, iPad, 2, 3rd generation and 3rd – 5th generation iPod touch), ARMv7s (for iPhone 5 and iPad (4th generation)), ARMv8 (for iPhone 5S), x86 (for iPhone simulator on 32-bit machines) and x86_64 (64-bit simulator)

But it’s a little rattling to see a file named [ in my /bin directory. It hasn’t always been there.

Anura September 29, 2014 5:01 PM


Mach-O is the executable format; everything compiled for OS X will be Mach-O, so it tells you very little about the program itself.

JaneBob September 29, 2014 6:35 PM

@Anura, thanks, that was my worry. So, how in the heck do I determine what that program is? I don’t know how to open it or see what it does. Does running it tell me anything?

The terrible suck about being interested in computer security is the depressing feeling that the battle can never be won. What started as curiosity and enthusiasm has over the years turned into a defeatist attitude, one of hopelessness.

There are so many files on any operating system…and only so many hours in a day.

Anura September 29, 2014 6:50 PM

It’s generally not a good idea to run unknown executables. You can try running md5sum and googling the hash. Odds are you accidentally copied or renamed a file, and that should pick it up. You can also try running strings and looking at the output, although you can’t trust that the strings will not be made to look legit.
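Anura's hash-and-google check, sketched in Python (MD5 is fine for a lookup like this, though not as an integrity guarantee; the path in the comment is only an example):

```python
import hashlib

# Hash a suspect binary in chunks so large files don't load into memory;
# paste the hex digest into a search engine to identify known binaries.
def md5_of(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. print(md5_of("/bin/["))
```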

Nick P September 29, 2014 6:51 PM

@ JaneBob

You’re not going to have such assurances if you use binary software from a third party. You need to code the OS yourself or get one that has a book/docs describing each source file. Then you compile/link them with a trustworthy toolchain. Then you install it. Then you do this for all updates.

Seems like you got a lot of work ahead of you.

JaneBob September 29, 2014 6:59 PM

@Anura, thank you, I had forgotten to try strings, and hadn't even thought to run md5sum and google the hash; I greatly appreciate the help. I don't go to terminal too often these days, been a rough year, so the possibility that I accidentally did this is kind of slim I think, though not at all impossible. I'll search what history I can find.

@Nick P: Sorry Nick, I have no idea what you are talking about there. Wanna try again to help me get what you are saying?

thevoid September 29, 2014 7:47 PM


are you absolutely sure that it was not there before? on most unix systems this
is just a hardlink to another program called test.

-r-xr-xr-x 2 root wheel 43120 Mar 28 2013 [

note the ‘2’ right after the permissions. that indicates there are 2 hardlinks
to the file. do an ‘ls -l /bin/test’ and it should show the same file. maybe
‘sum /bin/[ /bin/test’ to show it is the same file.

just do a ‘man [‘ or ‘man test’. in fact to test if a file ‘/bin/test’ exists,
you use ‘test -e /bin/test’ which will return true if the file ‘/bin/test’
actually exists (or it will not run at all if it doesn’t!). ‘test’ can be used
for scripts ie ‘test -f somefile && more somefile’ will test to see if a
regular file named ‘somefile’ exists, and if so (that is ‘test’ returns true,
indicating the file exists) it will run 'more somefile'. otherwise it will not
run 'more' (test returns false, 'somefile' did not exist or was not a regular
file).
the reason for the program being called '[' is so that you can do things like
if [ -f somefile ]; then
  echo "file 'somefile' exists and is a regular file"
else
  echo "file 'somefile' does not exist or is not a regular file"
fi
instead of doing something like this:
if test -f somefile; then
which is the same thing.

instead of my first example:
test -f somefile && more somefile

you could use:
[ -f somefile ] && more somefile

JaneBob September 29, 2014 10:23 PM

hey thanks, I really appreciate your help. Either I’m seriously Janebob on this, or I just can’t figure this out logically. I did a few of the steps you mentioned, here is the result:

$ sum /bin/[/bin/test
sum: /bin/[/bin/test: No such file or directory
$ man[

(Yeah, this jumped to the man page for test–I didn’t understand all of it, but
I will need tons more coffee before I can zen “The test utility evaluates the expression and, if it evaluates to true, returns a zero (true) exit status; otherwise it
returns 1 (false). If there is no expression, test also returns 1 (false).

What does that mean? I wonder to myself.

The [ file has the same size and date stamp as the /bin/test file

-r-xr-xr-x 2 root wheel 43120 Mar 28 2013 /bin/test

compared to [

-r-xr-xr-x 2 root wheel 43120 Mar 28 2013 [

the test -e /bin/test came back with nothing, just the prompt:
$ test -e /bin/test

So I think you are right, I just never saw that file before. I did have to replace my harddrive last spring, I think it was in late July, definitely not March, and I think the Mac guys gave me an update on the OS and that’s where the change could have appeared. I hate it when I don’t have time to do the nitty gritty details but have to guess or go along. I have to apologize that my details are fuzzy, I had a major personal crisis in August and after that all computer memories were fuzzy.

I will check my backups and see if I can find when the change occurred, unless it turns out its just my mistake.
thanks so much for helping me with this, one less thing to stress about.

Nick P September 29, 2014 11:04 PM

@ JaneBob

Your goal seems to be to ensure what’s on your computer is only what should be there. The thing is that you have to understand the whole operating system source code, check it yourself, compile it yourself, install it yourself, and then measure all the files that are there. Now you know what each component does, the files it produces, and what should be where. You have to do a similar process for anything you update. If you don’t do this, then developers or hackers can slip backdoors into the system during the software lifecycle.

If you are worried about vanilla malware, the situation is easier. In that case, you do a vanilla install of the operating system. Install security software. Do all the updates. Install Chrome or Sandbox+Firefox+NoScript+HTTPSeverywhere. Install other software you will need. Now, do a backup to external media, preferably onto write-once media (DVD-R's). This gives you a clean initial state and a backup hackers can't mess with. Only then do you use the computer.

In that model, you will regularly need to do updates or maybe install new software. Go ahead and do them immediately. However, you will periodically (daily, weekly, or monthly) load a rescue CD to avoid a compromised OS, restore your system from a known clean backup, redo updates/new-stuff on it, and then back the new system up again onto external media. This is how you continuously ensure infections don't last and are recoverable.

Note: There are also programs for Windows or Linux that claim to be able to tell when files have been modified. Of course, you need insight into the processes to know whether that indicates a threat.

So, ensuring your system hasn’t been tampered with isn’t easy. Preventing or eliminating stealthy malware on a Windows, Mac, or Linux system takes a lot of work. Preventing subversion in the system itself takes an incredible amount of work. Actually, that takes so much work only a few systems try to do it and you still have to trust other people to have evaluated it. The problem you’re wanting to solve is outside the scope of most desktop operating systems. It’s not a problem worth solving for a user. You’re better off just maintaining good security software (esp whitelisting software), avoiding risky sites/apps, and using a strong backup strategy as I outlined.

Btw, your posts indicated you used Apple. That’s one of the worst to use for security, although vanilla malware target them less. Apple has a long history of simply not caring. Their desktops were trivially easy to exploit. Their server OS had flaws like an administrative service required a username/password, but didn’t check to see if password actually matched. (?!) Their response time on bugs doesn’t seem all that great. They also regularly fail to design security services well. They’re basically like Microsoft 10-15 years ago in terms of security expertise. Unless common malware is your only worry, you’re better off not using Apple products because they don’t really care about security. Bling, usability, and higher margins are their forte.

thevoid September 30, 2014 12:20 AM


$ sum /bin/[/bin/test
sum: /bin/[/bin/test: No such file or directory
$ man[

there should be a space after the ‘[‘
$ sum /bin/[ /bin/test
$ man [


$ ls -li /bin/[ /bin/test

will display the inode number, which is essentially the file’s real name on
the file system. if the programs are the same exact file, they will have the
same inode number. in my case, on an openbsd system:

$ ls -li /bin/[ /bin/test
2520449 -r-xr-xr-x 2 root bin 95028 Sep 10 19:42 /bin/[*
2520449 -r-xr-xr-x 2 root bin 95028 Sep 10 19:42 /bin/test*

the first number (2520449) is the inode.
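the same hardlink check is available from python, if you prefer -- comparing device and inode is exactly what 'ls -li' lets you eyeball (os.path.samefile in the standard library does the same comparison for you):

```python
import os

# Two directory entries name the same file iff they share both the
# device (st_dev) and the inode (st_ino).
def same_file(a: str, b: str) -> bool:
    sa, sb = os.stat(a), os.stat(b)
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

# print(same_file("/bin/[", "/bin/test"))  # True on most unix systems
```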

(Yeah, this jumped to the man page for test–I didn’t understand all of it,
but I will need tons more coffee before I can zen “The test utility evaluates
the expression and, if it evaluates to true, returns a zero (true) exit
status; otherwise it returns 1 (false). If there is no expression, test also
returns 1 (false).

What does that mean? I wonder to myself.

every program returns some value on exit. if successful it returns a ‘0’, and
an error usually returns some other number (to allow for different error codes
on return).

there are programs on most unix called ‘true’ and ‘false’. ‘true’ always returns
true ie 0; ‘false’ always returns false (in this case ‘1’).

test does the same thing. if what it was testing was true, it returns true (0).
if the condition was false, it returns false (1).

so in my previous example:
if [ -f somefile ]; then
  echo "file 'somefile' exists and is a regular file"
else
  echo "file 'somefile' does not exist or is not a regular file"
fi

‘[ -f somefile ]’ (which is just an alias to ‘test’ and equivalent to ‘test -f
somefile’) returns true (0) if the statement is true (that is the file exists)
or false. if true above, it prints the first ‘echo’, if false the second one.

another example:
if true; then
  echo "this branch always runs"
else
  echo "this branch never runs"
fi

also, to display the exit status of the last command, do 'echo $?':

$ true
$ echo $?
0
$ false
$ echo $?
1
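the same convention is visible from python -- subprocess exposes the exit status directly (assuming a unix sh is on the PATH):

```python
import subprocess

# A shell's `true` and `false` just set the exit status; any nonzero
# status reads as "false" to the shell's if/&&/|| constructs.
ok = subprocess.run(["sh", "-c", "true"]).returncode
bad = subprocess.run(["sh", "-c", "false"]).returncode
print(ok, bad)  # 0 1
```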

chances are it was always there, and you may just not have noticed it. i’ve
done a double-take before myself when i saw it once, before i remembered.

there are many filesystem integrity tools available, so that might be something
to look into if you want to ensure your system is consistent.

Adjuvant September 30, 2014 2:34 AM

Just a note: I’ve made a post on the squid thread that touches upon several topics raised here, including Ada 2012/Spark 2014 and alternative shells. Replies should probably go on that thread.

Somebody September 30, 2014 10:46 AM


You may also want to know that in most modern shells test and [ are shell builtins. They are evaluated directly by bash rather than forking /bin/test as a new process. In theory there may be odd cases where

test some_complex_expression
/bin/test some_complex_expression

do different things. The reasons for this date back to Version 7 Unix, when the world was young and resources were scarce. You might want to try

$ type test
test is a shell builtin

name.withheld.for.obvious.reasons September 30, 2014 3:34 PM

@ Nick P

They’re basically like Microsoft 10-15 years ago in terms of security expertise. Unless common malware is your only worry, you’re better off not using Apple products because they don’t really care about security. Bling, usability, and higher margins are their forte.

I’m afraid all COTS and FOSS platforms fail the assurance and reliability expectations for even the simplest of uses. We both know that in today’s environment the DO178B/256 platforms are still brittle due to integration costs; qualified hardware is passed to qualified platform vendors using certified binary sources. How could anything possibly go wrong.

How about this…
During a site visit to the new lab, the project scientist and I meandered about the old CIA space satellite imagery building purchased by our senior management. We set about the task of mentally resourcing the lab for research, design, and engineering. Walking down the first floor corridor a “young-ish” man (mid-thirties) was painting the walls.

I'd no idea who this person was, but he informed me that he had been to the atolls for six months and had been developing a flight system for Trident. My queries led him to state that they'd used an FPGA platform for the control, flight, guidance, and targeting, and he expressed confidence in the FPGA vendor's security model. When he told me the device family–it had to be said–I quickly described why his confidence was misplaced. After describing the problem in detail, he stopped painting, put the brush in the paint pan, turned around, looked me square in the eye, and said: "You are a threat to national security."

Nick P September 30, 2014 3:45 PM

@ name.withheld

Wow. What a reaction. A nice example of what's wrong with how classified security work is done. You should've shown him the paper I linked about the military-grade FPGA with the backdoor in it.

pkoning October 1, 2014 12:07 PM

I’m puzzled by much of the discussion around this bug.
I believe the general rule is NOT to allow unchecked strings into the system from the outside without filtering, quoting, or otherwise protecting. The example exploits that have been quoted all start with the premise that you have a system that just blindly allows this weird crud into its shell. If you made an error that large, don’t you have issues worse than the misparsing of function or symbol definitions in bash? Don’t you have shell code injection problems generally? If yes, then changing the bash version will do exactly nothing for your system security.
Of course, that excludes cases where the string in question is passed as an ssh command-to-execute string. In that case, the purpose of the command is to execute an unrestricted shell command, and the ssh access checks were set to allow that particular requester to do so. In other words, for that example you’re looking at a feature, not a bug.
I also don’t quite understand the narrow focus on bash, as if it were the only way to exploit such basic design issues. For one thing, it doesn’t appear to be the only shell with this precise parsing bug. And I say “basic design issues”, because (for example) if you feed the string in question through Python standard library method shlex.quote(), the problem goes away. And the use of that method, or its equivalent in other languages/libraries, is a well known basic best practice.
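pkoning's quoting point, sketched (shlex.quote is Python 3.3+; older code used pipes.quote, and the log command below is purely illustrative):

```python
import shlex

payload = "() { :;}; echo gotcha"
# Interpolating the raw payload into a shell command line would hand an
# attacker command injection; single-quoting keeps it inert data.
cmd = "logger -- " + shlex.quote(payload)
print(cmd)  # logger -- '() { :;}; echo gotcha'
```

Note this protects command-line interpolation, which is pkoning's scenario; the CGI attack vector above passes the string through an environment variable, which no amount of shell quoting by the caller touches.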

Adjuvant October 2, 2014 3:44 PM

@Anura, eCurmudgeon
Mimic what Stallman did in the 1980s and construct an entire operating system component by component using strict coding standards.

I’ve identified a body of existing work which might possibly be cannibalized for the cause.

From OSNews, 2009

Ever wanted a simple, compact, small, yet usable and relatively full-featured operating system using a SunOS kernel with most of its utilities written in Ada? Whatever the answer, now you can. “AuroraUX is a SunOS-derived kernel and userland. The core of the project are its utilities written in Ada. Other, poorly implemented features have been fixed or rewritten, too. Ada was chosen over other languages because it sucks the least.” At least they’re honest.

Sadly, according to Phoronix 2012 (pre-Snowden, natch), it’s dead.

Archived ghosts here:

Anyone know a good software necromancer?
