Open-Source Software Feels Insecure

At first glance, this seems like a particularly dumb opening line of an article:

Open-source software may not sound compatible with the idea of strong cybersecurity, but….

But it’s not. Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll—in extreme cases—sneak back-doors into the code when no one is looking.

Of course, these statements rely on the erroneous assumptions that security vulnerabilities are easy to find, and that proprietary source code makes them harder to find. And that secrecy is somehow aligned with security. I’ve written about this several times in the past, and there’s no need to rewrite the arguments again.

Still, we have to remember that the popular wisdom is that secrecy equals security, and open-source software doesn’t sound compatible with the idea of strong cybersecurity.

Posted on June 2, 2011 at 12:11 PM · 57 Comments


Spaceman Spiff June 2, 2011 12:26 PM

Yes, the old myth of security by obscurity at its best. Bruce, you have been trying to educate people for years that security by obscurity is no security, and I laud you for it. Unfortunately, I don’t think it matters how much you beat that drum. People are always going to look for the easy answers, even if there aren’t any. 🙁

Captain Obvious June 2, 2011 12:30 PM

“… the U.S. Department of Homeland Security sees such software, which anyone can tinker with, as a possible tool for defending government networks from both online thieves and professional cyberspies.

A new five-year, $10 million program ….”

Most of us in the business have been using a combination of proprietary and open-source software for … well … always! Where have you been all this time, US-DHS, and why is it taking you so much time and money?

Hasufin June 2, 2011 12:44 PM

Something I told a customer just yesterday – “Real security is not compromised when people become aware of it.”

Steven Hoober June 2, 2011 12:59 PM

I presume your new book will delve into this in detail, but I’ve come to believe humans are simply not wired for security. We get secrecy (bury the bone), privacy (hide in a bush), and safety (I haven’t been killed yet) and draw not necessarily true relationships between the several.

It’s very much like my cat’s understanding that staring at doors makes them open. Could take a while, but works every time. Understanding true mechanisms is harder, and requires knowing what’s occurring on the other side of the door.

Victor June 2, 2011 1:07 PM

After seeing what software is actually like in two billion-dollar companies, I’m not at all surprised: the only criterion for software is that it works well enough for a non-technical person to use it. Any kind of automated testing is frowned upon because it adds code which “doesn’t do anything useful”, refactoring is seen as striving for “perfection” rather than bringing a little sanity to brittle code, and SQL injection and execution of unvalidated user-provided strings are kosher because “how will anyone find out?”.
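The injection pattern described above is easy to sketch. Here is a minimal illustration (the table, column, and input are hypothetical), using an in-memory SQLite database:

```python
import sqlite3

# Hypothetical example: a minimal sketch of the SQL-injection pattern
# described above, using an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input is spliced straight into the SQL text,
# so the OR clause becomes part of the query and matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % user_input
).fetchall()

# Safer: a parameterized query treats the input as a value, not as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] – the injected OR clause matched the row
print(safe)        # [] – no row is literally named "alice' OR '1'='1"
```

The fix costs nothing: every mainstream database driver supports placeholders, which is exactly why the “how will anyone find out?” attitude is so hard to excuse.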

Tim June 2, 2011 1:21 PM

Security by obscurity isn’t no security. It’s just that it provides a false sense of lots of security.

James Turnbull June 2, 2011 1:29 PM

Of course the article is also sort of redundant. FOSS is already used widely in the IC, Defense, and Government – the prime example I’d cite is Snort.

Andrew B June 2, 2011 1:35 PM

Open Source software has the same problems as its closed-source competitors; only we, the public, have a better view of how the sausage gets made. That said, I have grave concerns about the way security is, or is not, included in the development of many essential open source projects. Say what one will about the people from Redmond, they have pushed forward the process of incorporating security into the software development lifecycle to the benefit of us all. So long as vocal managers of open source projects believe that bugs are bugs the whole world round (without security impact) and that security researchers are “masturbating monkeys”, open source will not be making the swiftest progress toward a “more perfect” security posture.

John Campbell June 2, 2011 2:00 PM

Many eyes make for shallow back doors.

Many eyes make for shallow vulnerabilities.

The real question, of course, is where you get your patches (as source code or not) from… which, again, is dependent upon trust.

Mark R June 2, 2011 2:06 PM

I have seen this knee-jerk “Open Source is insecure” reaction many times. It’s accompanied by an absurd desire to pay money for everything, even if a superior free (as in beer) alternative exists. Perhaps there’s also some disdain for “dirty hippies” under the surface.

In my previous job, we wanted HIDS and didn’t have much money, so I suggested OSSEC.

“No no no,” I was told. “We don’t use any Open Source Software.”

This was after about half our infrastructure had been converted from HP-UX to Red Hat.

I pressed for an explanation why, and the response was, “How do you get the software and updates and patches? Just download them over the internet? Don’t you see a problem with that?”

I hope nobody tells him where most of our commercial software comes from… he’ll hit the roof.

Dana Blankenhorn June 2, 2011 2:10 PM

You’re right, and god bless you for continuing to make the argument.

I just hope some don’t misread your statements that “popular wisdom” is based on “erroneous assumptions” as your endorsing security through obscurity.

As I read it, you’re saying just the opposite.

Peter June 2, 2011 2:38 PM

@John Campbell: The problem with the nostrum is that in reality most FOSS has almost no eyes looking at it – there is too much software, and too few (competent) eyes.

No mention has been made of another point made in the article – price. Even if one accepts the claim that FOSS is not as secure – good security widely used is almost certainly going to be more effective than excellent security rarely used.

R Cox June 2, 2011 2:57 PM

I wonder if the people who feel that knowing the inner workings of a system makes it easier to attack have ever done any black-box analysis. To me, knowing the details of a design can make something harder to understand. It is as with the human mind: we know something about its workings, but we don’t really know how the individual molecules add up to that working.

Software, even really well written software, takes a long time to understand. Most software is really well written. The errors in software that allow attack are usually far from obvious unless one knows exactly what one is looking for. Contrary to the opinion of people who have never created anything of value, writing code is hard, and obvious mistakes are caught during normal debugging.

In fact, even the people who write the code tend to treat it as a black box: attack it with a bunch of inputs and look for undefined behavior. Once an unwanted behavior is found, look at the code and figure out why the program is behaving this way.

What open source software allows is the potential for immediate correction of unwanted behavior by the end user. It also allows the end user to remove potential security vulnerabilities that provide no benefit.

Lan Colshaw June 2, 2011 3:06 PM

Skype relied on security by obscurity for a long time, until some folks reverse engineered their code.

jb June 2, 2011 3:25 PM

The problem with security by obscurity is that you have to remain obscure. Lan Colshaw’s comment about Skype shows this–Skype got less obscure until it was worth people’s time to reverse engineer their code, but it would have been a poor business practice for Skype to try and remain sufficiently obscure to avoid that, even had it been possible.

Clive Robinson June 2, 2011 3:54 PM

I wonder if the “closed source” fans actually realise just how much open source software ends up in their products?

The classic example being Microsoft and their network utilities software from the Win 3.11/NT 3.5 days?

@ Steven Hoober,

I like your cat’s methodology; it’s got you well trained 8^)

dumb user June 2, 2011 4:15 PM

Does anybody here worry that once the government “adopts” a piece of open source software, and adapts it to their needs, that it will suddenly become “classified”, and removed from the public domain due to its new duty in protecting the security of the country?

Richard Steven Hack June 2, 2011 4:29 PM

Nothing like closed-source software. Microsoft Windows is so wonderful.

Installed Windows 7 on a new build last night for a client – it can’t see the network until I log in to every machine on the network from it – then it can see them. Google for this. There are pages and pages of people who can’t get Windows 7 to see XP networks without a variety of utterly stupid random fixes. For two years now.

Then, when I rebooted after creating two RAID 0 volumes in the BIOS, it hosed its Boot Manager, which took a variety of incantations from the Recovery CD command line to fix. Wait – to fix Windows, you have to use – wait for it! – the command line!

So, yes, Windows 7 continues the tried and true Microsoft way of being a complete POS.

And security? Microsoft’s way is “security through unreliability”. It’s such crap no one can penetrate its security because no one can get it to work correctly in the first place.

wrb June 2, 2011 5:02 PM

I think it’s naive to think that all commercial software has an open source equivalent that is “better”. I love open source and find it to be a necessary part of the “equation”, but I also find that it, like all things, has flaws that many simply deny or ignore. I don’t think open source is insecure, but rather that it provides many with a false sense of security, because they assume the source has been picked over by people who know what they are doing. The reality is that many pieces of open source code have only ever been looked at by the developer, who may or may not have malicious intent. As somebody already mentioned, it’s about trust.

Isaac Rabinovitch June 2, 2011 5:29 PM

“Security by obscurity isn’t no security. It’s just that it provides a false sense of lots of security.”

Which is actually worse than no security.

Rob Slade June 2, 2011 5:54 PM

Ah, yes. The myth is still with us. I recall in the 1991 lead-up to the Michelangelo virus (twenty years ago? My, how time flies …) that we were trying to get people to scan their computers. Michelangelo being a simple BSI (boot sector infector), that would have worked. The technology “thought leaders” of the day objected to this advice, since the only major antiviral scanners of the time were shareware and therefore suspect because you had to get them from bulletin boards. (An early form of Websites. Go ask your grandmother.) The voice of those who had access to newspaper and magazine columns suggested that people make a backup.

Generally speaking, good advice at any time, but, given the backup technology of the day (floppy disks), the wrong advice for dealing with a destructive BSI.

Sudanese Son June 2, 2011 6:05 PM

This is definitely not correct. Open source pushes its owner to design the best and not to hide or embed malicious code. One thing that helps even more is that it is open to everyone across the globe, which allows many developers to check and test it. Even for closed source, flaws are flaws and can be figured out; I don’t think there is a professional developer nowadays who couldn’t reverse-engineer closed source looking for flaws. So flaws are flaws, and open source pushes you harder to come up with the best. And we saw last week a list of vendors (Siemens, HP, …) who intentionally left back doors open.

Jack June 2, 2011 6:23 PM

“Still, we have to remember that the popular wisdom is that secrecy equals security, and open-source software doesn’t sound compatible with the idea of strong cybersecurity.”

You can cherry-pick statements by others to make any point you wish. I really do not find that people believe that just because software is proprietary, that makes it more (or less) secure. In fact, few possessing significant knowledge of security science are developers. Some, when they do know something about programming, speak as if they’re from the ’80s. I don’t say this to insult; I say this to make the point that even if all software were open source, that wouldn’t mean anything. And I am in good company when I say this. A leading person from a testing firm that specializes in network security and teaches classes etc. made this same statement to me recently – how difficult it was to find good developers who also knew and understood security.

Look at the way people who know a lot about finance and banking react to BitCoin. All the software for BitCoin clients is open source. This has no effect whatsoever on their reaction – because they cannot evaluate the software. It’s like investors: they have no ability to evaluate ideas and proposals no matter how they sound. They can only watch whether others are investing or whether sales are good, then maybe jump in. Only after that do they suddenly “understand” it all. Same thing with open source.

The fact is most companies doing development obscure their software to keep others from stealing it. I hope you’re not saying code obfuscation has no value. Companies spend tons on software development, and no, they don’t have the luxury of taking infinite time and man-hours to test it. Notice that after something breaks, experts get to swoop in and point out why it failed. Why is it they never know before it fails?

Too many comments here are from groupies who post stuff they anticipate everyone will agree with. Like the typical anti-MS nonsense. MS has made more proprietary software public than many companies have EVER produced.

Tony June 2, 2011 6:39 PM

Skype’s security doesn’t matter – the bad guys just introduce malware to pick up the conversation in the clear before Skype gets a chance to encrypt it.

tommy June 2, 2011 7:09 PM

@ wrb, and In General:

OMG! I’d better ditch this Firefox browser I’ve been using all these years, along with its security add-ons, like NoScript, RefControl, Request Policy, Certificate Patrol, SafeHistory and SafeCache (on some versions), JSView, etc., and get back to IE in a hurry!

Dirk Praet June 2, 2011 8:00 PM

@ Jack

“how difficult it was to find good developers who also knew and understood security”

Because for a very long time the market didn’t ask for them. Over the last couple of years we have seen a big change where, under pressure from all sorts of regulation and from customers, this is slowly becoming more and more of a requirement. As are certifications, the sense or nonsense of which is beyond the scope here.

Compare it to playing in a band: you can be a great guitar or piano player and as such be asked to join many rock bands, but good luck joining a philharmonic orchestra when you can’t read music. If for some unknown reason tomorrow all bands go philharmonic, you’re pretty much out of a job unless you start working on your solfège skills or the orchestra decides to educate you.

Proprietary and open source software both have their pros and cons, and the choice between them should always be based on a business case and rational rather than religious arguments. As for scrutiny, the simple fact remains that the more (competent) eyes can review your code, the better the chances of proactively revealing vulnerabilities and other shortcomings alike. This is even more true for developers and companies that don’t have the means to go through rigorous and time-consuming product lifecycle processes. As regards the fallacy that open source code primarily benefits bad actors, it should be noted that any determined opponent generally has enough techniques available to find and exploit vulnerabilities, including but not limited to reverse engineering.

One funny anecdote I have always remembered in this context was a large enterprise account where former Sun Microsystems CEO Scott McNealy did a Star Office (now Open/LibreOffice) presentation before a group of decision makers. Although most were quite impressed with the software’s features, there was one person who expressed the entire audience’s unease with the fact that it was free. McNealy just smiled and told them: “Well, if that’s the only problem, I can change that just for you.” They eventually threw MS Office out.

tommy June 2, 2011 8:27 PM

@ Dirk Praet: LOL!

The whole post was well-stated, although it’s good you added (competent). Tons of reviews by non-security-aware teams are useless…

Now, a not-so-funny story, one of many: the April MS Patch Tuesday update, dated April 12, 2011, included MS11-029, a vuln in the system-critical file GdiPlus.dll. (Try changing it to .bak and rebooting. You won’t even get to Safe Mode.) The patched version (for XP, anyway) had a “Last Modified” date of October 22, 2010.

Yes, patches need to be tested, to make sure they fix the problem and to ensure compatibility across many platforms, but almost six months? … They say it was reported privately by responsible disclosure, but in six months, isn’t it possible that an “irresponsible” party could have discovered this also, and quietly used it, with the victims not even knowing they were part of a botnet or whatever, and having no clue why?

Just one example of many. I remember one Patch Tuesday where they saved some updates for the next month, because they’d already hit some self-imposed limit, like 15 or 20 different Bulletins/Updates, and they “didn’t want to inconvenience users”. … Like having your machine compromised, your bank account drained, being arrested for spamming or kidporn that you didn’t do, etc., aren’t “inconvenient”?

My experience with good, reputable FOSS is that such things tend to be patched much more quickly. (If you have a patch ready anyway, why wait until the second Tuesday of next month to release it? – rhetorical question). And if the FOSS doesn’t, it quickly loses that good reputation (and the voluntary donations that support some).

Vles June 2, 2011 8:31 PM

“I presume your new book will delve into this in detail, but I’ve come to believe humans are simply not wired for security. We get secrecy (bury the bone), privacy (hide in a bush), and safety (I haven’t been killed yet) and draw not necessarily true relationships between the several.”

I like what you say.

I agree, and also believe humans do not possess ‘security’ in themselves. I do believe we are wired for ‘security’, but those wires don’t reach inside; they reach outside. Feeling and being secure is part of why we want to connect and be in a group, like a school of fish. But there’s no such ‘security’ if you’re purely by yourself. (Group) security is a result and a quality we get from giving something of ourselves to a group in order to get a greater good in return. (But, like trust – see the paper “Reflections on Trusting Trust” by K. Thompson – it’s an intangible, it’s relative, and it is, or at least should be, subject to the group’s scrutiny.)
Everyone feels insecure when travelling hostile territory alone. Walk alone, naked and unarmed, into a forest knowing it’s full of deadly snakes, grizzly bears, lions, fire ants, etc., and the first thing you wish for is a buddy, someone to watch your back and share your burden. Did you see the movie Cast Away? Draw a face on a ball… And who felt a sense of relief when he re-entered society? No more facing the unknown alone…
No one is perfect and 100% invulnerable. I believe security can never be obtained solely by static products or devices. We all know the dangers of a moat mentality. Your General Patton of old proclaimed that “fixed fortifications are monuments to the stupidity of man”. Just so with static implementations of technical security systems. We now like security to consist of ‘layers’. Regardless, if those layers are static, they are still as vulnerable. Stuxnet proved that.
By its very nature security has to be dynamic and flexible, because it faces a dynamic, always changing environment. In achieving a level of security, the emphasis should be on educating and training people; only then do security products start to make sense. Neglect your finest asset – people – in favor of products and you risk hollowing out ‘the reality of security’ while building up ‘the perception of security’. If you start to view security as a commodity, to be paid for by individuals to other individuals or by individuals to companies, there’s something seriously wrong with the group you are in.

Nobodyspecial June 2, 2011 10:46 PM

@wrb – so do you trust algorithms like AES, Twofish, SHA, etc., that have been picked over by experts, or would you rather trust something you invented yourself – like the al-Qaeda guy they caught who used a Caesar cipher because all that published stuff was by the American government and so wasn’t secure.

Or, assuming you read and understood the algorithm papers, do you implement them yourself? Or do you use well known and well tested open source crypto libs? Or do you use a commercial app and hope they aren’t lying about what algorithm they use or how well they tested it – like a certain three-letter agency that had an AES256 security system. Unfortunately it only encrypted your password with AES and then used the same 256 bits to XOR each 256 bits of your data.
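The flaw described – reusing the same 256 bits as an XOR pad for every block of data – can be demonstrated in a few lines. This is a toy sketch, not the actual product’s code; the plaintexts are invented:

```python
import os

# Toy illustration: if every 256-bit block of data is XORed with the
# SAME 256 bits, XORing two ciphertext blocks together cancels the
# "key" entirely, leaving the XOR of the two plaintexts.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(32)                           # the reused 256-bit value
block1 = b"Attack the east gate at dawntime"   # 32 bytes of plaintext
block2 = b"Retreat to the harbor, 2nd squad"   # 32 bytes of plaintext

c1 = xor_bytes(block1, pad)
c2 = xor_bytes(block2, pad)

# The attacker never sees the pad, yet:
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(block1, block2)  # pad cancels out completely
```

From `leak`, classic two-time-pad techniques (crib dragging, frequency analysis) recover both plaintexts; the AES encryption of the password buys nothing once the pad is reused.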

Anton June 3, 2011 2:55 AM

I’d put my money on open source in preference to proprietary software any day.

Best bet is to find out what the geeks are using.

szigi June 3, 2011 3:21 AM

Actually, the security of software is more about the quality of the code and the knowledge of the designers, coders, and operators than about open or closed source. Unfortunately, most of the people making programs are not skilled enough to be able to write secure code; most of them lack basic knowledge of security. Just look at the code of most open source software: some of it looks like the author would fail Programming 101 immediately. To be extreme, creating software should only be allowed for appropriately skilled people, similar to medical practice. Most software is not properly designed, security-wise or otherwise, and most of the people creating and operating it are not at all skilled enough to do it properly.
However, with open source there is a greater chance that someone who actually knows security might take a look at the software.

Adam June 3, 2011 3:59 AM

Open source can certainly aid security in large projects where you can reasonably expect the code to be reviewed by people with good intentions.

I wonder if that holds true of obscure open source projects. Maybe no one looks at the code at all, or the only people who do are those with malicious intent, and by exposing the code you’re putting yourself in more danger than if you hadn’t bothered at all.

Richard Steven Hack June 3, 2011 4:58 AM

For those comparing “obscure open source” systems as opposed to something like Microsoft, how about all those one- or two-man COMMERCIAL software companies – where no one has time or the security expertise to vet the code?

It doesn’t matter whether the code is open source or commercial – what matters is the skill of the developers.

And the entire industry is doomed as a result because, like every other industry, 98 percent of the output is crap.

I’ve said this for years: Windows is CRAP. Linux is ALSO crap. BUT Linux is FREE crap.

Ted Nelson, at a West Coast Computer Faire back in the ’80s, said there was no acceptable software on the market – commercial or otherwise. That statement is still true today.

‘Nuff said.

Clive Robinson June 3, 2011 5:26 AM

@ Vles,

The problem with security is boundaries. There is an innate but incorrect assumption that ‘what is inside is secure and what is outside is not’.

The false assumption comes from the fact that we are physical beings, and we implicitly assume most of our security is derived from the physical objects around us.

And we learn this implicitly from an early age by the actions of our parents.

We teach our children that playing in the playground is safe, but playing outside the playground is unsafe; that is, we teach them that being inside is “security” and outside is “insecurity”, and further that those inside with us are also safe. The child therefore implicitly assumes that those inside are also trusted, and that those outside are unsafe and therefore implicitly untrusted.

Such thinking gave rise to physically defensive places such as castles and forts, where those on the inside were assumed to be “trusted” and those on the outside “untrusted”. But castles and forts are also prisons, in that we can be trapped in them against our wishes.

This is one thing army instructors have difficulty teaching soldiers: before running into a building for cover under fire, be sure you have another way out that’s not under fire; otherwise you have trapped yourself and are of no use to anyone, including yourself.

Thus they try to teach you to run around the corner of the building to get out of the current direct fire, not to go into it, and thereby retain the freedom to act both offensively and defensively and the freedom to move, the latter being the most important to long-term survival and effectiveness.

But people tend not to talk about the difference between a castle and a prison; they are, after all, both only buildings, and one can be used as the other. Most assume it is just the function to which they are put, and thus the people inside.

Well, there is one major difference: a castle is generally like a shared playground where “trusted people play” safely away from the “untrusted people outside”. A prison, on the other hand, has tiny playgrounds, or cells, for each very untrusted person to be incarcerated in by trusted warders, who also have the job of keeping untrusted people from outside getting access to those incarcerated in the cells.

In a prison you have untrusted people both inside and outside, and a group of trusted (warders) and semi-trusted (trustees) people controlling access between all the untrusted people.

Now, one of the problems with incarcerated people is that they are still people, with basic physical and mental needs that must be addressed – which, due to their incarceration, they cannot do for themselves. Thus the warders and trustees have to provide food, water, and things for the prisoners to do in their cells.

As I have said before, one of the problems we have with information security is that we tend to think of castles, not prisons; thus trusted insiders can do a lot of damage, and likewise an intruder, once inside the castle, can also do a lot of damage.

If however we think in terms of prisons, then we can get significantly higher security. The problem then becomes how we do it effectively (please note there is a lot of difference between effectively and efficiently, even though one can look like the other).

Now one obvious difference between humans and computers is that computers don’t come with a load of emotional touchy feeliness, and computers have a bad habit of doing exactly what they are told, be it good or bad.

This means that at the computer level it cares not whether it is a prisoner or a king, and it will accept and act out either role equally, as it is told.

However, at the human level people care very much whether they are a king or a prisoner, and will frequently strive to be a prince or king rather than a serf or prisoner.

In essence, humans have the desire to be in control, not controlled, and to reap the apparent rewards of that position.

Thus for security to be effective it has to treat everything as untrusted, whilst giving the humans the illusion of being at least princes of the realm.

From the bottom-up view, the key to this is effective segregation of information by controlling resources and communications. From the top down it’s effective segregation of roles, and this is achieved by controlling resources and communication.

Thus at a more abstract level, security is achieved by control of the information and role of a function via effective control of the resources and communications required to carry out the function: essentially, good management.

But the difference between management and security is to also treat the information as a controllable resource.

Thus security can be considered as the control of roles by the effective management of communications, information and other resources between roles.

The weasel word is “effective”, and this is where you have to deal with the touchy-feeliness of humans and their desire to be kings of all they survey. And the trick behind this has nearly always been providing the illusion of being in control.

At a low level we see this in the machine shop, where the larger tools have safety guards that, in the main, protect us from what we are doing without being overly restrictive on our ability to do a job or be creative.

Less obvious are the engine management systems in high-performance vehicles that stop the driver from holding the engine at damaging or inefficient operating points, etc.

And further, we see this through to the likes of modern fighter aircraft that cannot possibly be flown by humans: the computers actually fly the aircraft, and the pilot simply indicates the current vector to fly on, unless it is dangerous, in which case the aircraft takes control to prevent the danger (i.e. stalling or crashing into objects).

The trick in all cases is “to give the human the illusion of control” whilst “not overly restricting their ability to do things” in a “safe and secure” way.

Thus the problem is getting the right balance between “safe and secure” and “restricting freedom” of action, without stifling the ability to work or learn.

And if you spend enough time around children you will know that this is difficult because children learn their envelope by coming to sufficient harm that they know certain actions have consequences that will cause them much worse harm, hurt or pain.

Clive Robinson June 3, 2011 7:51 AM

@ Nobodyspecial,

“Or assuming you read and understood the algorithm papers do you implement them yourself? Or do you use well known and well tested opensource crypto libs?”

Two problems you could have mentioned,

First off the algorithm papers are only about the algorithm, not how you implement it securely.

Secondly, part of the “competition” was to achieve a very fast speed of execution for the algorithm on a series of known test platforms.

However, the law of “unfortunate consequences” came into play. As some know, I have a doctrine I give voice to quite often, which is “Efficiency -V- Security”. Well, the code for the speed trials had been optimised to use the hardware as “efficiently” as possible, with the aim of maximising speed. The security of the code implementation was not given any consideration.

Thus many, many code libraries lifted the speed-optimised code without any further thought, under the “hey, the code’s fast and the algorithm is secure, what’s to think about” developer thought process.

The result was that many, many implementations were (and still are) subject to timing attacks and leaked key-related information through time-based side channels onto the network, enabling an observer to find the key in very short order, often from a very great distance.
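The flavor of such a timing leak can be sketched with something much simpler than AES table lookups: an early-exit comparison whose running time depends on how many leading bytes of a guess are correct. This is an illustrative sketch only, counting loop iterations as a stand-in for wall-clock time:

```python
# A deliberately simplified sketch of a timing side channel, using an
# early-exit comparison rather than AES table lookups. The number of
# loop iterations (a stand-in for execution time) depends on how many
# leading bytes of the guess are correct.
def leaky_equal(secret, guess):
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps      # early exit leaks mismatch position
    return len(secret) == len(guess), steps

def constant_time_equal(secret, guess):
    # Examines every byte regardless of where a mismatch occurs,
    # so the running time does not depend on the secret's contents.
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0

secret = b"s3cret"
_, t_wrong_first = leaky_equal(secret, b"x3cret")  # mismatch at byte 0
_, t_wrong_last = leaky_equal(secret, b"s3crex")   # mismatch at byte 5
print(t_wrong_first, t_wrong_last)  # 1 vs 6: the "time" leaks progress
```

In practice one uses a constant-time comparison (e.g. Python’s `hmac.compare_digest`) for secrets; the AES side channels described above arise analogously, from data-dependent cache and table-lookup timing rather than early exits.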

Which is why I tell people to only use AES (and many other crypto algorithms) “Off Line”. That is to do your encryption and decryption on a standalone system with a sufficient “air gap” and other precautions and do the sending and receiving of encrypted traffic on another machine.

[I should add a disclaimer here: I have what appears to be a quite strong / biased opinion. All I ask is that you accept that side channels in implementations are a very real issue (as has been demonstrated a number of times). Further, I also hold the view that the AES competition was significantly flawed in this area. NIST ran the AES competition under the NSA’s guidance (if not control). The NSA, without a shadow of a doubt, were fully cognizant of time-based and other side channels and of how they are an almost natural consequence of most efficient designs. So why did they not say anything at the time, or later? You can actually deduce that they are fully aware of the AES time-based side channel failings by the way they license AES for use in the likes of their IME and other products.]

Clive Robinson June 3, 2011 8:11 AM

@ szigi,

“To be extreme, creating software should only be allowed for appropriately skilled people, similar to the medical practice. Most of the software are not properly designed security or otherwise, and most of the people creating and operating them are not at all skilled to be able to properly do it.”

I’m glad you started with “To be extreme”, because although your opinion is correct in the long term, we are nowhere even close to that point in time.

Currently the dynamics of the software “free market” are (like most “information”-based free markets) completely wrong.

I could give a whole host of reasons and long winded explanations as I have in the past but I won’t (just to keep the majority of readers happy).

I will simply say that people need to look into the history of their comparative suggestions and see the how and the why of the comparison example arriving at where it is today.

And more importantly look at a counter example and it’s history.

For this I will suggest other freehand creative activities that are not licensed (yet): the various forms of art, such as music, drawing, painting, sculpture, acting and writing.

It is interesting to look at the how and the why of money creation in these activities, especially music, to look for significant pitfalls into which we could easily fall, both by accident and through the machinations of others seeking control for their own benefit.

John Campbell June 3, 2011 9:51 AM


The real point is that, for the most part, you can get closer to confirming security on your own.

The real point is that, should a vulnerability be mentioned, there will be a lot of discussion and at least 17 corrections proposed within the first 24 hours.

The people involved (cats herding themselves) will discuss these and form a consensus, allowing the vulnerability to be closed within, say, 72 hours.

How many months will a closed-source system stay w/o corrections?

The real advantage when using FOSS is that a vendor– be it Red Hat or Canonical– will be driven to provide any patches ASAP ‘cuz the vulnerability can be seen by others which drives them to close the gap. For proprietary software where the code is “sekrit” there will be committees meeting that have to agree that there is a problem and then bean-counters working out the cost of fixing it versus the costs of leaving it un-fixed… long before any programmers might get the RFC.

In open source software the question isn’t whether there is a problem or not, the question is whose fix will be blessed by the coordinator for inclusion?

There are apparently some serious bragging rights for the person whose fix gets blessed.

Techies are a deviant form of Engineer: we live to work, not work to live, so it is often the emotional benefits that outweigh the financial. My wife, effectively, is paid by the company to let me out of the house with some money for expenses (gas, etc) because the social and technical challenges at work are emotionally rewarding.

FOSS plays into this “emotional rewards” scenario. (Somehow I doubt that companies think about ways to ensure that an employee’s spouse is kept happy to ensure that an employee’s loyalties do not become divided.)

Miguel June 3, 2011 1:35 PM

Is the phrase “Smile you’re being filmed” a security threat to all the convenience stores around the world?

John David Galt June 3, 2011 6:20 PM

I don’t like “security by obscurity” either — though it can sometimes be the best available option if the system vendor isn’t willing to do the work to install real security — but there are other reasons it’s quite correct to view open-source software as insecure or at least untrustworthy.

(1) Most open-source projects are by people who assume their product and anyone working on it is no threat to anyone’s system security. Thus they make no effort to vet their volunteers, nor do most of them minutely examine their code to prevent threats from being put in.

(2) Many open source projects, such as Mozilla, have made the deliberate choice to enable as many new features as possible even though the effect is to create new ways to infect your system with malware faster than you can learn about those new features and decide if you want to allow their use. Requests to make the new features optional get replies to the effect of, “Mistrusting web-site authors is lame.”

(3) Some open source projects have included intentional spyware or other malware from the word go, but don’t tell their users so. (FileZilla is one.)

and of course,

(4) Many software providers, including some open source distributors, do make “security” efforts but have their own definitions of “security” that amount to taking control of your computer away from you and giving it to themselves. (The whole field of “DRM” comes to mind here. “DRM” ought to be renamed to something like Digital Access Control, since it is often used to deny you capabilities that intellectual-property law doesn’t really give them any right to deny you.)

I would never install open-source software and use it to protect anything important unless I first know how to vet it for all these threats, and have done so.

Rob June 3, 2011 6:57 PM

@Sudanese Sun

What about the backdoor that was in the OSS Phoenix DB for a year and a half? (hard coded password, not obfuscated)

Or the exploitable double free in zlib that was there for 4-5 years?

Tony Zbaraschuk June 4, 2011 4:57 PM

I seem to recall (from David Kahn’s The Codebreakers) that one of the late 19th-century specifications for a secure coding method was that “secrecy must reside in the key alone”; you should just assume that the enemy knows everything about the device and how it works except for the (changeable) key.

How far we have yet to go…
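That 19th-century rule (Kerckhoffs's principle) can be illustrated with a toy construction in Python. This is purely my own illustration, not a vetted cipher and not anything from the source being discussed: the algorithm below is completely public, and everything hinges on the (changeable) key.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand the key into a pseudorandom stream; the algorithm is fully public."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt (XOR is its own inverse) under the given key."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ciphertext = xor_cipher(b"the secret key", b"nonce", b"attack at dawn")
# Full knowledge of keystream() and xor_cipher() is useless without the key:
assert xor_cipher(b"a wrong guess", b"nonce", ciphertext) != b"attack at dawn"
# The key holder recovers the plaintext trivially:
assert xor_cipher(b"the secret key", b"nonce", ciphertext) == b"attack at dawn"
```

Publishing the two functions costs the legitimate users nothing; only losing the key does. That is the sense in which "secrecy must reside in the key alone."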

tommy June 4, 2011 7:57 PM

@ John David Galt:

I never knew John Galt’s middle name. IIRC, it was not mentioned in the 1100+page novel. Thanks for the enlightenment.

“(1) Most open-source projects are by people who assume their product and anyone working on it is no threat to anyone’s system security. Thus they make no effort to vet their volunteers, nor do most of them minutely examine their code to prevent threats from being put in.”

Whereas Microsoft, for example, assumes their product to be inherently dangerous, minutely examines all code to prevent possible exploits, and vets all employees, including the summer interns who wrote much of at least some versions of Windows, very thoroughly… (seriously?)

At least with OSS, outside eyes can minutely examine the code as they wish. No guarantee that they will in any particular case, but try doing that with Windows….

Odd how the US Govt, notorious for paranoia about its secrets, probably because so many of them are ugly, chose the open-source AES-256 for encryption of all material up to and including Top Secret classification, instead of hiring MS or another obscurity-vendor to write it for them, or even doing it in-house. Perhaps because the entries were vetted by competing entrants, each with a strong interest in cracking the others (professional prestige), including the guy whose pic is in the upper right of this page?

“(2) Many open source projects, such as Mozilla, have made the deliberate choice to enable as many new features as possible even though the effect is to create new ways to infect your system with malware faster than you can learn about those new features and decide if you want to allow their use.”

Whereas Microsoft, Adobe, etc. never add new, often useless, and very frequently dangerous, features, presented to non-tech-savvy users, and sometimes with no way to disable them without breaking the product, or skill at tweaking/hacking your own puter?

Google Chrome is full of spyware, but I never understood why anyone would use, much less trust, a browser produced by an advertising agency.

Mozilla isn’t 100% innocent, but generally it’s opted for precisely because the user has some level of tech knowledge, at least enough to know that IE is terrible. Just the fact that Firefox doesn’t support ActiveX OOB is a huge plus, even if you change nothing. It’s easy enough to disable most things through the GUI, and they have knowledge base articles for the about:config settings — perhaps you’d care to cite a few examples?

“(3) Some open source projects have included intentional spyware or other malware from the word go, but don’t tell their users so. (FileZilla is one.)”

And no closed-source project has ever done this? Including the Windows Update that changed the user’s Firefox configuration? The Windows Notification tool that phoned home every day, to make sure that the licensed copy of Windows that you were running yesterday didn’t suddenly turn into an unlicensed copy today, on the same machine? Etc….

(4) is far more common among closed-source: MS was a “leader” in breaking machines with DRM.

“I would never install open-source software and use it to protect anything important unless I first know how to vet it for all these threats, and have done so.”

You haven’t told us what OS you use. Windows? Mac? Linux? Did you thoroughly vet the entire OS yourself before allowing it to run “live”? If the answer was one of the first two, it’s impossible. If you use some *nix or other open-source flavor, and *have* vetted it, please let us know your credentials, and tell us which OS exactly is bug-free. We’d all be very grateful.

Note the comment above from frequent poster Richard Steven Hack that both Windows and Linux are crap. So again, what OS do you use, and did you vet it thoroughly, and if it’s a Linux-based, could you please provide sufficient proof to satisfy Mr. Hack?

Clive Robinson June 5, 2011 2:45 AM

@ tommy, Richard Steven Hack,

Yup, most commodity OSs are code-bloated and not designed in a way that is amenable to being secure.

For those long enough in the tooth to have had their fangs chopped a couple of times, the exciting time for computer security was the late 1960s through the 1970s.

Most are vaguely familiar with the write-up / read-down MLS model of D. Elliott Bell and Leonard J. LaPadula; the first report/version came out in 1973. Back in 2005, Elliott Bell gave the invited ACSAC lecture/address, which he called

“Looking Back at the Bell-LaPadula Model”

You can get a PDF of it at,

It makes an interesting read.

tommy June 5, 2011 4:41 AM

@ Clive Robinson:

Interesting, indeed. Thank you for the link, Sir. Highly recommend to all. Highlights for this reader:

1960s “Tiger Teams”, which today would be called “Penetration Testers” — and their 100% success rate at hacking everything in existence, with the same attack sometimes working across different platforms.

Late 70s:

“At the time, no one felt it necessary to state that pinning one’s security hopes on having smarter geeks than the opposition was a failed concept. Neither did most experts believe that market forces would produce secure systems…”

How prophetic.

The use of the term “cloud” as far back as the 1980s, along with the recognition of inherent weaknesses in this concept. (I feel even better now about all my comments here saying that if there is a choice between home and cloud, home is always the choice.)

That security must be built in, not added on, which Bruce and others have long said.

21st Century:
“Technology’s increasing pace of development and shortening product cycles have made most computer users full-time beta testers.” Amen, Brothers. Which is a good reason never to buy the first version of anything, a personal rule in this household.

A thought: since market forces will not produce security, due to the well-known problem that the costs of insecurity fall on the wrong parties:

The US Government has found it to be to its advantage to hold competitions for secure encryption algorithms and secure hash algorithms (Go, Bruce and Co.!), which are also freely available to vendors and consumers. Suppose they were to hold a competition along the same lines for the most secure OS, which they definitely need (China scare! China scare!) — with the proviso that it be practicable for home users as well. A very substantial monetary reward, to the winner and to close finishers, would probably actually save the Gov money in the long run, vs. buying COTS or custom-built systems, but at the very worst, make a minuscule impact on the Federal budget. (Say, one day’s cost of the troops in Iraq?)

Naah, MS donates too much to politicians – would never happen. However, if the cost of insecurity to the country as a whole — not even counting the rest of the world — is factored in, something could be worked out… and if sw vendors were to become liable for defective products, MS is either out of business quickly anyway, or rushes to join the competition, and to sell users and OEMs their implementation of the system, just as my online bank had to pay somebody to implement the FOSS AES-256 on their web site.

Regarding bloat: The 4 GB Windows folder on this XP system has been debloated to about 175 MB. That 95% reduction may not translate linearly into 95% less attack surface, since it’s the critical files that are the juiciest targets, but it’s amazing how often an MS Update produces a list of files of which half or more no longer exist on this machine.

Nick P June 5, 2011 3:48 PM

@ tommy

It was a nice report. But, in spite of the market, high-assurance systems were being produced at one point. If you want to know why that stopped and how to fix it, you’ve got to read Bell’s second Looking Back paper.

As for the China scare, China’s more an inspiration in this area: they already started and finished a “secure OS” project. It’s basically modifications to an older version of FreeBSD, and they created their own processor as well to reduce the risk of subversion. It’s a MIPS processor that features x86 emulation for legacy compatibility. They ported tons of Linux apps to the new Kylin OS. So, China illustrates that a little government commitment can get the job done. Our government just isn’t committed to real security like that.

“Regarding bloat: The 4 GB Windows folder on this XP system has been debloated to about 175 MB.”

How exactly did you do that without breaking critical functionality? Is there a guide available somewhere? Well, I’d need one for Windows 7 because I’ve migrated everyone to it. The security of Win7 is vastly improved over previous OS’s.

anon June 6, 2011 10:09 AM

And Closed source is better how? Note that for Closed source, security is a cost (i.e. the cheaper / less there is, the more the company makes). For Open Source, you can HIRE someone to do security audits, not depend on the vendor. Anyone you want to pay – and who now makes money from security (profit center vice cost center).

@John David Galt
I assume you do the same level of scrutiny with Closed source – oh wait, you can’t, because you don’t have the source code. I won’t go over the other points, as others have thoroughly debunked them for me.

To find vulnerabilities only requires the program. To fix them requires the Source Code – which you get with F/LOSS. So you can hire whoever you want, not dependent on the original company (which may go out of business) or not have time to fix the problems, or want you to upgrade to the newest version with all the features you don’t want.

Shane June 6, 2011 1:26 PM

Surprise surprise, and some people still believe the world is flat and was created in a few days by an old white guy in the sky. Go figure.

tommy June 6, 2011 10:19 PM

@ Nick P:

I shall indeed read that follow-up, thank you. What’s needed is high-assurance systems for the AHU (Average Home User), although of course,

“There’s no system foolproof enough to defeat a sufficiently great fool.” (meaning, a foolish or careless user) – Edward Teller.

Gee, I wonder why I never heard of the Chinese secure OS? Our Gov embarrassed at being so far behind? Afraid of losing the campaign contributions from the clo$ed-$ource vender$? Afraid of users having systems the gov can’t crack, just as they formerly prohibited encryption they couldn’t crack? It might not be a lack of commitment, but rather an actual antipathy to widespread deployment of high-assurance OSs. (The trrrists will use them! Just like they use cell phones, and cars, and box-cutters!)

Is the Chinese system all open-source, and can we get it, along with the recipe for the more secure processor?

My main guide for debloating XP was:

There are a lot of caveats, but you’ll have no trouble seeing them. I didn’t follow every recommendation, because everyone’s usage, hw, etc. differs, and there were some things that I liked having around, even though they could be deleted. E. g., some of the DOS commands, calc.exe, etc. But I also experimented and found a few more that could be deleted, but weren’t on his list.

This guy apparently has a stand-alone machine directly connected to the modem (no router), and apparently, no printer, given the advice to delete the %windir%\system32\Spool folder. Most of us would probably want to have a printer. Etc. But I’ve heard of such bare-bones setups getting to less than 100 MB in the Windows folder. Mine supports a wireless laptop with WPA2 encryption, a remote printer/scanner/fax that is Ethernet-wired to the router and accessible wirelessly by the laptop, plus a stand-alone USB printer.

You don’t need the following, but in case any readers with a more modest level of tech savvy than Nick P. are tempted, I’d feel irresponsible not to post some precautions: Before every significant change, be sure to make a full-disk-image backup using a program that can boot itself from a CD, in case you crash the whole thing. Using incremental backups will greatly reduce the number, time, and space of full ones, so make sure your chosen program supports incremental backups. There is definitely some trial-and-error involved, and one discovers the most amazing, non-intuitive dependencies in Windows — the hard way.

Also, before even starting, make backups of everything on the machine in standard desktop format, i. e., actual folders and files that can be drag-‘n-dropped from a CD, DVD, USB flash or external HDD, because not every “required” file lives in the Windows folder or a subfolder thereof. If you do break something, sometimes you can just slide these files in from your backup without having to do a complete restore of the full disk image.

And since we all add new files and folders regularly, frequent data-only backups to a flash drive are good, too. (Of course, true even if you’re not tinkering with your system, but how many of us actually do so regularly, despite good intentions?). I wrote myself a simple batch script that copies the specified files and folders to a flash drive quickly with two clicks, which encourages frequent backups.

Unfortunately, I don’t think you’ll be able to get anywhere near that small a footprint in Win 7. Vista added a lot of new files to the “required” list, so I assume W7 did, too. You can look at many MS Updates that cover all three systems, and see how many more files are required for the newer systems vs. XP, for the same vuln. And more so with x64. (I’m x86.)

Even “upgrading” to IE 7 vs. my OOB IE 6 adds to the “you no longer can delete this” list, even though I never use IE, because of the well-known “tight integration” of IE and Windows. I refused the “upgrade”, and I’ve deleted IE itself (the .exe) and some supporting files, but others are needed by Windows. Let’s assume that IE 8 and 9 will expand that list.

If there is a similar guide for Win 7, I’m not aware of it, and I don’t believe the author of that Guide intends to do the same for Vista and 7. He must have put thousands of hours of research into it (you’ll see when you read it), and I assume that he, like me, is happy with our debloated, fully-functional, light, fast systems. But it might get you started on some ideas of your own.

Finally, it sounds like you’re administering multiple machines, not just helping friends and family, so you’ll need to keep Group Policy Editor and a lot of other stuff that was in XP Pro but not in XP Home.

I feel bad about hijacking Bruce’s thread, so if you want to talk about any details, etc. that wouldn’t be of general interest to the crowd here, you can click my sig in this or any of my posts of the past few months. The landing pages have a reCaptcha at the bottom of my various posts there, which will give you an e-mail address to contact.

If you would like to see my own personal project that debloated ZoneAlarm Home Free firewall from 100+ MB (some users reported getting into the multi-GB range), it’s at

I haven’t updated that in a while. The folder %windir%\Internet Logs, listed as “less than 2 MB” on that page, is now kept in the few-hundred-k range.


John June 7, 2011 4:21 PM

As RSA’s SecurID shows, the whole closed-source “secrecy”-based model is clearly more secure than an open-source model.

I feel much better hearing from RSA that their SecurID tokens were not compromised after they were hacked.

Of course, learning that RSA was lying the entire time, and that it took the breach of a Defense contractor relying on RSA’s assurances to get RSA to ‘fess up the truth really drives the point home.

Presumably, Lockheed Martin will be suing RSA for damages tied to failure to disclose the security vulnerability in a timely fashion, along with product defect.

pete sandoval June 8, 2011 8:43 PM

Open-source software may not sound compatible with the idea of strong cybersecurity, but….

But it’s not.

Did you mean, “…but it is”?

tommy June 8, 2011 10:30 PM

@ Nick P.:

Finally got to read David Bell’s Addendum on the history of high-assurance systems. Pathetically sad that NSA once led by example, then dropped that ball entirely (especially with MISSI), along with DoD and CIA surprisingly lowering their standards. Summarized in one line,

“‘On time, on budget, but unacceptably weak’ is not a defensible compromise.” Amen.

Re: “MS couldn’t find a video game hidden in Excel”.
OpenOffice hid a video game in its Calc program (its Excel equivalent). See:

(I might know the person who posted that… )

In summary: The US Government is a huge consumer of IT products. If it were to use its mass purchasing power only on true high-assurance products, this would change market incentives dramatically. The cost of developing such — or even better, of using existing products that have been discarded for no apparently good reason — would be further amortized by sales to enterprises eager to show compliance, avoid the negative publicity of breaches (coughSonyRSAetc.cough), and avoid the loss of revenue that accompanies loss of customer confidence. Even though many home consumers are not security-aware, if these systems become mainstream, or if presented with a choice between products similar to existing consumer OSs and those certified by NSA as high-assurance, surely more consumers would choose the latter.

And since most home computers are bought OEM-preloaded, an OEM selling machines with a weak OS would be at a strong market disadvantage to those whose ads, web site, machines, boxes, etc. proudly claim, “Certified by NSA”. (Yes, there are still the problems of implementation; 3rd-party apps not applicable to Gov needs, such as video games; foolish users, etc., but it’s a start, and a truly hardened system would reject, or sanitize, weak apps, and there would be pressure on those vendors to comply.)

I think this is in line with my previous analogy: NIST invites submissions of encryption and hash algorithms, and NSA approves them for use in classified situations. Requiring all Gov purchases to be of high-assurance systems with similar approval may be the way back, very similar to Bell’s proposed action plan.

Thanks for the link. Highly recommended reading for anyone here interested in IT security, which is probably everyone.

GregW June 9, 2011 6:14 AM

@Tommy, I agree that if the government only purchased high-assurance products, there would be a sufficient market for enterprises to develop them.

That said (and this is just my two cents), I tend to disagree with your subsequent comment. The existence of high-security alternatives does NOT imply that “surely more consumers would choose [them]”.

If the high-assurance offering does not provide key functionality (in the 90s, it was networking, not available in A1/B3/etc; today it’s probably (per Bell) filesharing/database/webserving or (groan) SAP/”cloud”-access/etc), then high assurance will continue to be the available-but-widely-unused stepchild it was in the 90s. (And which security feature sets will be implemented in this cross-pollinating “high assurance” product, and will “more consumers” accept it? For example, the pervasive military requirement driving MLS is that people need to write documents that others can’t read, while the pervasive consumer requirement is that everyone can read a created document. You can support both technically by having consumer content default to the lowest label, but can you do so in a consumer-acceptable UI?)

Ironically for Bell, since 2005, the business model making the most progress towards a true mandatory access control (albeit at the OS-only level) is not a government-driven effort, but a corporate-driven “IP protection” market as seen in attempts to create TCBs when locking down cellphones and Playstations. While those approaches ultimately do not protect content for rather fundamental reasons, they do ensure a stakeholder in the tech development process who cares about assurance and has a funding stream and business model dedicated to it. Will this motivation and funding eventually extend to securing non-OS layers of the network and application stack? Perhaps, although as Bell laments, that probably is a 10+ year away scenario (for a full high-assurance tech stack) so we are left with Bell’s recommendations.

(I can see corporations, in theory, buying into use of some sort of higher-assurance MLS system for internal-company “confidential” information, but in practice, my experience with corporate VPNs, extranets, backup technologies, database security practices, etc suggests that the cultural and best practices shift to make that truly work would be a LOOONG time coming.)

Clive Robinson June 9, 2011 8:32 AM

@ GregW,

From my point of view MLS is to little to late.

Whilst it might be appropriate for “gateway functions” as a hub between secure but non MLS systems it fails in many other respects.

It was quickly realised after MLS was proposed that secrecy classification was but one axis of an n-dimensional problem. The second axis, “compartments”, was known well prior to MLS, and the MLS solution to compartments was one MLS machine for each compartment.

We have since moved on from those two dimensions to add another axis, “roles”, within the two dimensions of security level and compartment; that is, you may well have technical, maintenance, accounts, resources etc. within each level-and-compartment pigeonhole.

However the issue with roles is that a person may have multiple roles and be cleared for different levels in the same or different compartments.

You quickly come to realise that data objects or documents need to be effectively “self-redacting”. That is, each meaningful part (word / sentence / paragraph / chapter / index entry / etc.) needs to be multiply classified for secrecy + compartment + role, and this needs to be done at a very fundamental level in the OS.

As others will point out, there are further dimensions to the security / assurance problem space, but just getting these three dimensions working properly on a single machine is going to be extremely difficult, without even considering side effects such as the various forms of side channel.

Then, when you network machines, how does one machine trust another? The method Bell assumed was to have them all at the highest security level required within a given domain, but this is neither practical nor tenable in anything but a secure and isolated area with no access except for those cleared for all levels and all traffic (and they are rarer than hen’s teeth, or should be). The solution has to be the minimum security level consistent with a particular role or function, not just on the machine but within the domain, and this requires trusted hubs or arbiters to ensure correct handling of credentials and data between machines.

All of this can be done, and not in an overly long time frame, but the security has to be done at the logical CPU + memory + comms level, controlled by segregated hardware hypervisors. And you can be sure that this sort of architecture most certainly does not fit in with current CPU offerings in commodity products. You need to start looking at architectures like the Seymour Cray-designed system used in Sun’s Starfire systems and IBM’s Z servers as a starting point.
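To make the first two axes concrete, here is a minimal Python sketch (my own illustration, not any certified implementation) of the classic lattice check with levels plus compartments. The role axis and the side-channel problems discussed above are exactly what such a toy omits.

```python
from dataclasses import dataclass

# Hierarchical levels; higher number = more sensitive.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Label:
    level: str                              # hierarchical secrecy classification
    compartments: frozenset = frozenset()   # need-to-know categories

def dominates(a: Label, b: Label) -> bool:
    """a dominates b iff a's level is at least b's and a holds all of b's compartments."""
    return LEVELS[a.level] >= LEVELS[b.level] and b.compartments <= a.compartments

# Bell-LaPadula simple security property: a subject may read an object
# only if the subject's label dominates the object's ("no read up").
def can_read(subject: Label, obj: Label) -> bool:
    return dominates(subject, obj)

# Bell-LaPadula *-property: a subject may write an object only if the
# object's label dominates the subject's ("no write down").
def can_write(subject: Label, obj: Label) -> bool:
    return dominates(obj, subject)
```

A SECRET/CRYPTO analyst can read a CONFIDENTIAL/CRYPTO memo but not write to it (that would be writing down), and cannot read a TOP SECRET document at all; the dominance relation forms the lattice the model is built on.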

Nick P July 7, 2011 2:05 PM

@ Clive Robinson

I disagree. The MLS problem was handling compartments and segregated information from the get-go. I just re-read the SCOMP Final Evaluation Report last night. It was dated 1985, implemented Bell-LaPadula, supported compartments, applied segmentation to manage device labels/access, used the ring model for integrity, and mediated everything. Later systems, like Trusted Xenix & XTS-300, had some of the extra functionality you describe.

Roles are new & discretionary. Hence, they wouldn’t be covered by MLS security guarantees anyway: they would merely be encompassed by the mandatory policy. BAE added RBAC in the XTS-500 and STOP 7 OS. Its assurance level has dropped, though, to EAL5+ (originally the line was EAL6+ equivalent). Numerous MLS cross-domain, chat, wiki and collaboration systems run on this platform.

It shows that the problems you mention can be solved, but it might be better to compromise at some point. For instance, keep the number of clearance levels in one document minimal by taking out unnecessary information or upping its clearance level. I think proper use of software reclassifiers can make the sharing problem easier to solve. I’ve always seen MLS as a good way to implement a DLP scheme, especially if combined with thin clients. Hardware rings have also proven to be excellent for integrity if all of them are properly utilized.

Antti July 27, 2012 3:15 AM

Plenty of commercial software packages contain statically linked OSS components. In the Windows application world, vendors don’t use the Microsoft-provided APIs for graphics decoding etc., most likely for portability’s sake (or because Microsoft’s APIs are junk; I’m not sure of the real reason), but instead statically link in OSS libraries. Now, when vulns are found in those libs, vendors may or may not patch them. Many popular Windows apps ship, for example, with a vulnerable version of libtiff or libpng.

