US Also Tried Stuxnet Against North Korea

According to a Reuters article, the US military tried to launch Stuxnet against North Korea in addition to Iran:

According to one U.S. intelligence source, Stuxnet's developers produced a related virus that would be activated when it encountered Korean-language settings on an infected machine.

But U.S. agents could not access the core machines that ran Pyongyang's nuclear weapons program, said another source, a former high-ranking intelligence official who was briefed on the program.

The official said the National Security Agency-led campaign was stymied by North Korea's utter secrecy, as well as the extreme isolation of its communications systems.

Posted on June 1, 2015 at 6:33 AM • 29 Comments

Comments

wiredog • June 1, 2015 7:36 AM

And there are still people who claim that cyber warfare is a fantasy.

bickerdyke • June 1, 2015 8:35 AM

On the upside: it's not bad news that nuclear weapons facilities all over the world are well secured and can't be accessed by any ol' hacker.

Clive Robinson • June 1, 2015 10:33 AM

I hate to say it folks, but I've been saying this from virtually day one, and everybody who said anything disagreed.

I suspect that NK was always the original target and Iran got used as the "back door", because it was known that Iran and NK were talking about enrichment and delivery technology, due to things said by A.Q. Khan and others. That is, the Iran-NK technology swap was the only route into NK, and the US knew this because the original "type 1" Khan enrichment system was very crude and grossly inefficient. Both Iran and NK were working on the "type 2" system, with NK being technically more advanced.

Iran was the secondary target for the US, but later the primary target for Israel, who were brought on board to amalgamate separate efforts. That is, Israel had already started on their own virus to go after the A.Q. Khan technology because of the various Middle East countries that had bought it (note the plural). The type 1 system destined for Libya got intercepted, and the Israelis got hold of more than just specs, and thus built a working prototype for their code development, on which Stuxnet got tested.

People have wondered why Stuxnet is less advanced than earlier code (Flame, etc.) the US had produced. The answer is that parts of the US IC really do not trust the Israelis as far as they could spit them, and would certainly not put anything that poisonous in their mouth in the first place. However, they were working under direct orders from the top to co-operate sufficiently to get things up and running, but there was no way the IC was going to give the Israelis access to the crown jewels, for several reasons, not least that certain parts of the IC and US Mil have plans to target Israel should top control of the Middle East become uncertain (this much is assumed on information from various sources).

It's the old "you're independent as long as we say so" arrangement, which you can see around the world with the US's supposed allies. For instance, the supposedly "independent" UK nuclear deterrent called Trident --which has been in the news recently-- cannot actually be used without the US agreeing to the launches and providing certain key input. Similar can be seen with other systems in other countries: if you buy US high-end weapons systems, you know they have final control one way or another, so they cannot be used against the US. Obvious as this is, it's surprising just how many people think otherwise, especially when the French amongst others can demonstrate this and have let potential customers know in the past (Argentina being one).

Clive Robinson • June 1, 2015 11:15 AM

Oh, I forgot to mention: the article is misleading on one point, which is one of the delivery methods the US tried.

However, NK has a fairly good idea of exactly how attempts were made, that is, via a probably unwitting UN inspector and a USB key (apparently the UN has already made a complaint to the US about this).

NK, when "thumbing its nose" at the US by inviting in UN inspectors to see their type 2 cascades shortly after the Stuxnet story broke into world news, did two things. Firstly, they made it clear that certain behaviours by the inspectors were unacceptable, and secondly, they refused to allow inspectors any kind of access to the control systems and other computers. It was also commented at the time that NK had decided, some time prior to the Stuxnet story breaking, to go a different route to the Iranians' for the control systems.

Making such a change mid-project would have been expensive and would have carried significant extra risks, so why did NK do it when they did? The most likely explanation is that NK had already become aware of a previous US attempt and taken prudent action.

The question then arises as to why Iran did not take similar action. Possibly because NK chose, for quite good intelligence reasons, not to tell them...

Whilst a few other things can be surmised about the Stuxnet events --like China's and other nations' involvement-- the chances are that reliable confirmation is unlikely in our lifetimes, if ever.

kingsnake • June 1, 2015 11:31 AM

Apparently the North Korean policy of executing f**k ups works. U.S. take note ...

addon to clive • June 1, 2015 12:00 PM

@Clive Robinson

Thank you! Finally someone else who saw what I did! I've been saying for years, ever since the late '70s/early '80s when everyone who was anyone was buying US warplanes:
"You can't use these against the US; their planes won't show on the radar they sell you, etc."

These little electronic backdoors have been in US exports for quite some time.

Slug Crawling Across a Razor Blade • June 1, 2015 1:49 PM

@Clive Robinson

Whilst a few other things can be surmised about the Stuxnet events --like China's and other nations' involvement-- the chances are that reliable confirmation is unlikely in our lifetimes, if ever.

I would be surprised if you were not dead on right, on all of these points. [With some skepticism only about 'reliable confirmation' comment.]


On a related post you made, about 'China and NK' hacking, I am still pondering that one. I think you saw a line of events likely to happen, but I am not entirely sure what that is. Some possible jigsaw-puzzle pieces that come up for me: it might be convenient for the US, and possibly other nations, to have a multiple-front diversion. Reading some stories on the 'South China Sea' situation pinged up for me when hearing the reasonable-seeming China response; the fact that they were actually aiding NK hackers framed that response as disingenuous. There is very strong evidence that either NK was behind the hack, or was made to look like they were behind the hack (the latter, I do not think, is a theory US Intel would take seriously in the slightest). Fake or real, if NK (or an NK clone proxy) performed that sophisticated a hack over "The Interview", then what, on earth, might they do after seeing the US attempted to hack their nuclear systems and sabotage them, and that through UN inspectors?

A few other factors: the Sony hack, I do not think, had strong anti-NK propagandist advantage. But it *did* strongly impress Michael Rogers, head of the NSA, according to some recent talks he gave and the way those talks were quoted in the news. Fuel only works on a fire; if there is no fire, fuel is largely wasted.

I also found it convincing that China could somehow be involved in such a future event, and what is even more interesting to me is that whatever might happen would likely mean both China's and NK's hands are revealed. In a manner which might greatly add fuel to US Intel's running fire, and might even start a fire in the larger population (though attempting to imagine what they might do to get *that* going is more difficult, if not for the gruesome possibilities necessary, considering how little the population has been moved by so many everyday espionage stories out there).

Normally, when I have heard of NK haxor capabilities, including mentions of "they have 6,000 hackers", I have reacted with unbridled skepticism. Little fish, in a little pond. How competent can a sophisticated hacker force get in a deeply isolated nation?

But, on that point, the Sony hack very clearly shows that they do have some scary capabilities now.

(The South Korean hacks were 'kind of scary', but could be explained as relying on intel in SK, which is relatively easy for them to gather. But the Sony hack, that is scary and sophisticated. They are improving, and this time it does appear whatever teams they have in SK did not need to offer their advice or provide services.)

That the Sony hack effectively received no response has likely emboldened them, as successful actions by smaller countries against the US historically have tended to do. And if China is wanting to use NK as some sort of patsy, it does very much spell a troubling storm arising on the horizon, pretty damned sure to go awry.


Me • June 1, 2015 2:24 PM

bickerdyke: "Security through obscurity works again."

Related to that, I ALWAYS had a disagreement with security experts (with me not being one):

I agree that one should not rely on obscurity only to be safe (although I can see many non-commercial cases in which this could be a viable solution).
And also that an encryption/protection method should not be evaluated under "black-box" conditions.
BUT: once a method for protection/encryption is proven to be secure enough (with all modern methods of cryptanalysis, etc.), I really believe that incorporating this method in an obscure system (while being careful enough not to weaken the method by doing so -- I don't think this is so hard if you know what you are doing...) is the right thing to do for anyone who considers encryption to be vital in his line of work, and not simply a matter of fashion, mainstream practice or a chore.

Any thoughts on that?
*feeling heretic again* :D

And, in order not to be off-topic here, some extra comments:

I think that seeing what the western secret services do every day, everyone should do their best to make them powerless. I think that China and Russia starting to build their own computer hardware is a nice move that will make us all sleep better at night.

Also, despite the fact that I am not a fan of great and unwanted diversity in technological solutions (e.g. having 100 different mainstream browsers, 100 different mainstream OSes, etc. for no good reason other than re-inventing the wheel), I think that building an open-source OS that is better than the existing ones (more stable than Linux, more user-friendly than Windows, more secure, etc.), and that is not linked to any NSA/CIA-front corporation (like Google, Facebook, etc.), is a MUST. I think Linux has wasted enough time from our lives trying to replace mainstream desktop OSes. Perhaps now with mobile phones it gets more users, but having this thing on my desktop was never pleasing (despite the fact I can use the console, write C++ and C# programs in it and even play around with Metasploit).

I want an OS that is good-looking, performs well, is user-friendly, is concerned for the user's privacy, has all the tools for developers that Windows has, has all the tools for penetration testing that Linux has, does not have a monolithic kernel (monolithic kernels are only good for servers, I guess), only accepts one endianness (enough with this hardware madness), gives the user full insight into what happens inside it (probably with tools like Windows' Process Explorer, Process Monitor, etc.), is highly configurable, is highly recoverable, only accepts open-source drivers, and whose shell is sufficient to allow the user to do every job without using commands (if one loves commands, better to link the terminal to a compiler and link against the OS's API; I find LINQPad 100 times more flexible and appealing than any Linux terminal), etc.

Is that too much to ask? :(

activités préjudiciable à la sécurité de l'Etat • June 1, 2015 2:59 PM

@Clive re 11:15, Exactly. The Six-Party Talks initially flopped when the US refused to commit to non-aggression. China resuscitated the talks, the US promised not to invade, and by 2008 DPRK had completed 8 of 11 agreed disablement tasks. All parties signed a joint communiqué on verification July 12, 2008. Verification issues in a subsequent US proffer ultimately derailed the talks. Sometime in there, the USG attempted its sneak attack in breach of the Hague III Convention. DPRK restarted its reprocessing plant, shot off a Taepo Dong-2, and conducted a bigger nuke test.

Nice work, NSA!

https://www.armscontrol.org/factsheets/6partytalks

Slug Crawling Across a Razor Blade • June 1, 2015 4:36 PM

@"Me" (You)

'Security by obscurity is invariably bad meme'

Obscurity is a profound and very important concept to understand in security. I use obscurity as a factor in computer security at many levels when making risk estimates and performing threat analysis.

I think the meme got its strongest force 'way back when', when pushing open source, as well as software security, was an uphill battle. Ten, fifteen years ago.

Which is 'way back when' in the technical field.

To forgo understanding of 'security by obscurity' is deeply ill-advised. But, it is but one layer of security. And any good security approach should be multi-layered.

really believe that incorporating this method in an obscure system (while being careful enough not to weaken the method by doing so -- I don't think this is so hard if you know what you are doing...) is the right thing to do for anyone who considers encryption to be vital in his line of work and not simply a matter of fashion, mainstream practice or a chore.

I am not entirely sure what you are talking about, but if you mean you implement a well peer reviewed encryption solution in a well peer reviewed implementation manner, but include in the overall system a level of uniqueness about it so the system is "obscure", and guard those details of obscurity, then yes, of course, that can even substantially increase the security of the system.

There are a wide variety of caveats here, however. For instance, I am a coder and can create my own systems which utilize encryption properly. Say I do so and circulate this to a small circle of associates. That can seem to enhance security, because if anyone sees the communication, they do not have either the client or the source code so it is much more difficult for them to find security vulnerabilities in it.

Whereas, if they have the client and the source code, then they can much more easily find security bugs in the system and exploit those.

A better example is a system which uses a method of obscurity to even hide the encryption in the first place -- like steganography. Under the steganographic traffic the message is further encrypted, and the application is set on a very limited release. Obviously, that increases the overall security of the system that much more. And further, if the actual method of using the system is itself obscure, such as via Bluetooth or, say, in YouTube videos uploaded via often-changing dummy accounts and posed as music or other videos... then yet another layer of obscurity (if not several) adds even more.
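The steganographic layer just described can be sketched in a few lines. This is a hedged toy example, not a real stego tool: the carrier is a bare byte buffer standing in for image or audio samples, the payload is assumed to be encrypted already, and `embed`/`extract` are invented names for this sketch.

```python
# Toy least-significant-bit steganography: hide an (already-encrypted)
# payload in the low bits of a carrier buffer.

def embed(carrier: bytearray, payload: bytes) -> bytearray:
    """Write each payload bit into the LSB of one carrier byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, set the payload bit
    return out

def extract(carrier: bytes, length: int) -> bytes:
    """Read `length` payload bytes back out of the carrier's LSBs."""
    payload = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (carrier[b * 8 + i] & 1) << i
        payload.append(byte)
    return bytes(payload)
```

To anyone who does not know the embedding scheme, the carrier looks like ordinary data; only someone who knows it would think to read the low bits back out.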

But... even in these far flung situations, obscurity is a dangerous method to *rely* on, because once discovered, the entire communications network can be compromised.

And the additional obscuring measures used then can become used against the network: depending on the level of confidence given to the 'security by obscurity' layers of security.


Slug Crawling Across a Razor Blade • June 1, 2015 4:46 PM

@"Me" (You)

'secure, open source operating system not linked to cia/nsa front company'

I do not think that everyday linux/similar is linked to a cia/nsa front company, as I do not see Google, Apple, and other such firms as plausibly being genuine 'cia/nsa front companies'... they may have influence, these and other organizations may even get the capacity to put in backdoors, but they are not 'front companies'.

That aside, I agree with the old sentiment that 'monolithic software' is inherently dangerous, but I have to point out something (besides arguments on the actual definition of 'front companies') about the real problem here, which is:

Such a project would be extremely focused on by intelligence agencies, worldwide. That would be considered to be a very high value target for compromise. So, even attempting to do such a project has that danger right from the first.

In fact, that manner of project is on the 'good idea, let's really fund this' level for intelligence agencies.

So, if you want to be 'really paranoid', consider that. And if you are unaware of such a priority as being there... think about it.

rgaff • June 1, 2015 5:28 PM

As others have said... "security by obscurity" is actually a good thing, when applied merely as one added layer of a comprehensive security system. Obscurity keeps out the riff raff, and makes it so that the rest of your security system is only having to resist smarter and/or targeted attacks (which still do grow in numbers over time, but at a slower rate).

When "security by obscurity" gets a bad rap is when people try to rely on it exclusively for their security, and there are no other layers in place (or only very weak ones) to resist intrusion once obscurity is cracked wide open like an egg. This is a terrible practice.

What you really want to do is to have a security system/level/layer that can and has been well proven to withstand full-on open attack without any obscurity... and then apply an obscurity wrapper around that in a specific installation. When done properly you have both the benefit of obscurity, and the benefit of openness at the same time.

Examples include: TCP/IP firewalls that drop packets they don't expect (so they look offline), instead of replying with an error response (which would announce their presence). Apache servers that don't say what version of Apache they are. Web apps that don't publicly reveal configuration details, paths, or anything else important in any error messages generated.
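The first of those examples -- silently dropping unexpected packets rather than answering with an error -- can be modelled in a few lines. This is a hypothetical sketch, not a real firewall: the packet model (source IP plus destination port) and the names are invented for illustration.

```python
# Toy "look offline" packet filter: expected traffic gets a reply,
# everything else gets silence rather than an error response.

ALLOWED_SOURCES = {"192.0.2.10"}   # documentation-range example address
ALLOWED_PORTS = {443}

def filter_packet(src_ip: str, dst_port: int):
    """Return a reply for expected traffic, or None (silent drop) otherwise."""
    if src_ip in ALLOWED_SOURCES and dst_port in ALLOWED_PORTS:
        return "ACCEPT"
    # No ICMP unreachable, no TCP RST -- to a port scanner the host
    # simply appears to be offline.
    return None
```

In iptables terms this is the difference between the DROP target (silent) and REJECT (which sends an error back and thereby announces a host's presence).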

winter • June 2, 2015 1:08 PM

@"Security by obscurity"

To analyze your security under "obscurity", treat the "obscurity" exactly like a shared password.

How secure will you be when you share your password with others?

@Me
If you want to create a new OS that is better than Linux you will need a developer effort worth roughly $1B or more.

Me • June 2, 2015 2:46 PM

@Slug Crawling Across a Razor Blade

But, it is but one layer of security. And any good security approach should be multi-layered.

...if you mean you implement a well peer reviewed encryption solution in a well peer reviewed implementation manner, but include in the overall system a level of uniqueness about it so the system is "obscure", and guard those details of obscurity, then yes, of course, that can even substantially increase the security of the system.

Yes, that's exactly what I mean. Not depending on it alone, but strengthening the security of a tested encryption with extra layers of security (some of which will contain obscure methods).


I do not think that everyday linux/similar is linked to a cia/nsa front company, as I do not see, Google, Apple, and other such firms as plausibly genuinely 'cia/nsa front companies'

I don't know about Linux or Apple in general, but I tend to consider Google and secret agencies very close in attitude and ideologies, and there are also some claims that Google has been linked to CIA from its very beginning. But I see your point and agree with it; I just still don't trust these guys.
I cannot say I like Apple or its products (perhaps I am an endangered species or something... I don't know :D Most people seem to be crazy about Apple and their stuff), so I am not so excited at the thought of using MacOS and the like.

Such a project would be extremely focused on by intelligence agencies, worldwide. That would be considered to be a very high value target for compromise. So, even attempting to do such a project has that danger right from the first.

Yes, I know. But I guess there must be a way. I don't know whether many governments would agree to build an open-source OS in order to avoid using the ones that exist out there (that are either closed source or not good enough), which may be threats to their security (or threats to their productivity...).
Having funding from multiple governments and having many teams of source code reviewers and testers MIGHT be able to pull this off. I do not know if there are many (or any for that matter...) paid people that code-review the code of Linux, but in the case described above you can have many groups of people, many times mistrusting each other, doing nothing else than trying to find security holes or bugs. If that can't do the job, I don't know what will... :(

@rgaff

As others have said... "security by obscurity" is actually a good thing, when applied merely as one added layer of a comprehensive security system.

Yes, that was my understanding too. :)
I am glad I find people that I agree with.
My experience so far was that whenever I mentioned anything about putting extra obscure methods in front of RSA (for example), I got doors and windows shut in my face...
I don't know if people trust their methods so much that they are offended by your implying that you want something more on top of them...

rgaff • June 2, 2015 4:08 PM

@Me

My experience so far was that whenever I mentioned anything about putting extra obscure methods in front of RSA (for example), I got doors and windows shut in my face

If you are in any way altering the workings of RSA (for example) so that you can't even tell it's RSA... that's not the kind of obscurity I was talking about. That's taking a free and open product, proven to have a given level of security, and modifying it to be a completely new product that hasn't been proven to be secure at all. You destroy the benefit of RSA being so open and well reviewed and well used.

What I was talking about was only a "wrapper" around it, done in such a way as to not actually alter the openly-done security inside it. For example, a firewall that dropped packets that didn't match the exact parameters of what you are expecting, would be a valid way to wrap it with obscurity, without destroying the benefits of being open.

Now, that said, there may very well be ways to alter the inner workings of things that are still safe, but the average joe programmer wouldn't know how to do that, he'd need to be a real cryptographer too. Otherwise he's just asking for trouble. Even then he still could be asking for it...

If you are talking about simply renaming method names, well that just creates a maintainability problem. I know I wouldn't want to hire someone to write me a program that I then could not maintain due to all the methods being renamed to something obscure and confusing. This creates a bigger problem than it solves. Everything has a balance.

Slug Crawling Across a Razor Blade • June 2, 2015 9:22 PM

@"Me" (You) -> Existential realities or theories aside.

I don't know about Linux or Apple in general, but I tend to consider Google and secret agencies very close in attitude and ideologies, and there are also some claims that Google has been linked to CIA from its very beginning. But I see your point and agree with it; I just still don't trust these guys.

Lol, what claims? The CIA is foreign-focused. Sorry, should be no "the" before CIA...

The princess and the pea. How does one know it is the princess? Because she is the one who will be bothered by the pea at the bottom of a stack of twelve mattresses she is sleeping on. The "Universe" is always surveilling everyone. I know it may not look like it. But, I am sure whenever the "Universe" feels a pea stabbing her back, she responds. Just often in a way we mere mortal human beings can not exactly understand. The "Universe" knows and thinks on matters in a much more long term way.

But, the consideration that dirty, slimy people might be doing this looking to get information for trade or extortion, I think unnerves people. Especially if they are dealing drugs or dealing in arms. And other stuff like that.

I don't really bother about it too much my own self. In corporate security, in the US, they are not much of a likely attacker, and even if they were, they probably would not show they hacked us or do something like steal money from us.

If there were substantial, persuasive evidence the US Intel establishment was angel investing major corporations (covertly, not In-Q-Tel shit), that might also imply they might be doing domestic industrial espionage. Then, I would be concerned.

It would also be nice, however, to also get evidence that they were literally doing domestic industrial espionage.

So... barring that, you have many other, more credible threats, like the many criminals always trying to own your systems.

I will note, however, there has been some evidence Eric Schmidt is friendly with US Intel, even some whom the public does not regard friendly.

And Google itself, like all such free sites, most surely does spy on you and sell your data. They do swear they anonymize it, however.

That is the price we pay for using their services, their free ones, anyway.

It seems a decent enough model, we also get targeted advertising. Not too targeted, or that would be creepy. (As Schneier pointed out, in 'Data and Goliath').

Could there be some secret conspiracy there, maybe some secret funding by some US three letter agency? Sure. But, then, again, on that realm of thinking, for all you know, you could have just materialized now, fake memories and rips in jeans and all.

Roswell really might have had a UFO crash.

Obama might really be Muslim and a deep cover spy from Indonesia.

Oliver Stone could be working for the FBI all along.

And anyone denying the conspiracy is part of it!

Hehhehehheh... :-)

Having funding from multiple governments and having many teams of source code reviewers and testers MIGHT be able to pull this off. I do not know if there are many (or any for that matter...) paid people that code-review the code of Linux, but in the case described above you can have many groups of people, many times mistrusting each other, doing nothing else than trying to find security holes or bugs. If that can't do the job, I don't know what will... :(

I did some security reviews on a major open-source OS, but for various corporations. It is open source, so you can easily do that. The problem is the firmware and similar areas, from what I understand. Black-box testing is also gimped in comparison because you cannot easily see a lot of the traffic on the wire without your own stingray.

The multi-national approach does sound like a viable idea... but to get something like that going there has to be shown reason, strong evidence, of compromise. Otherwise, why even begin to bother? I can see NK or China bothering. Neither has shown their reasons for why they did bother. Did they find security vulnerabilities? Do they have evidence US Intel puts moles in developer pools? Or are they just thinking from their own mindset (the most probable answer): 'if they were the US, they would put moles in the developer pool and have them put intentional security vulnerabilities in there which would be extremely hard for even a nation state to find'? I think they would.

But, Google development team comes from all over the world. Many from India, many from China.

Still... it is open source.

Easiest route, at least, does seem to have an international consortium or even a peer led movement paid to find bugs. Someone could raise this as a project on a crowd funding site.

Otherwise, some corporations have performed such analysis for their own publicity. It is worth it. Others could be encouraged to. See how many security vulnerabilities their bug finding tool finds.

Barring all that... there are small projects people can engage in to secure small parts of the systems. Like secure an IM client. Secure an email client. (Sure, if the system is root compromised by a nation state, take China and NK's stance -- 'Good Luck with that!')

Me • June 8, 2015 4:17 PM

@rgaff

If you are in any way altering the workings of RSA (for example) so that you can't even tell it's RSA... that's not the kind of obscurity I was talking about.

I don't think I am. What I was suggesting was taking RSA as an untouched black box and sticking other black boxes in front of and/or behind it (in a pipeline manner). So RSA will still encrypt the data 100% the same way it did before, but it will encrypt different data (e.g. because there is another processing stage before it that compresses the data and does some simpler but obscure encryption on it). I do not think this should affect RSA or its reliability in any way, if you keep the two encryption methods totally independent from each other (unless you do something profoundly stupid).

I know that these things can be tricky, and I guess that you may reduce the strength of RSA in some exotic scenarios by performing the above process. But, again, I have the impression that you would have to do something really stupid or, more likely, something highly sophisticated in a deliberate attempt to weaken RSA to reach that point. From my view, RSA does not care if it encrypts the raw data or a compressed and scrambled version of it. No matter how many modifications the data may endure before it reaches the RSA encryption, it is still data. And RSA claims it encrypts it securely.
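The pipeline being described might be sketched like this. It is an assumption-laden toy: the XOR scramble is a deliberately simple stand-in for whatever obscure pre-stage one chooses, and the final step -- handing the output to an untouched, vetted RSA (hybrid) implementation -- is deliberately not shown.

```python
# Sketch of an "obscure" pre-stage run before a standard cipher:
# compress, then scramble with a secret key. The standard cipher
# afterwards sees only an opaque byte string.
import zlib
from itertools import cycle

def obscure_stage(data: bytes, key: bytes) -> bytes:
    """Compress, then XOR-scramble -- the obscurity layer before RSA."""
    compressed = zlib.compress(data)
    return bytes(a ^ b for a, b in zip(compressed, cycle(key)))

def unobscure_stage(data: bytes, key: bytes) -> bytes:
    """Invert the scramble, then decompress -- run after RSA decryption."""
    descrambled = bytes(a ^ b for a, b in zip(data, cycle(key)))
    return zlib.decompress(descrambled)
```

The point of the sketch is the independence of the stages: RSA would see only an opaque byte string, exactly as argued above. (One real-world caveat: compressing before encryption can leak information about the plaintext via ciphertext length, so even this "obvious" pre-stage deserves review.)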

@winter

If you want to create a new OS that is better than Linux you will need a developer effort worth roughly $1B or more.

Well, I guess the whole world wants it. Or they should be wanting it. Certainly, I don't think the vast majority of users are happy with Linux or Windows (although crappy drivers usually seem to be the most frequent reason behind their dissatisfaction at this stage -- smaller bugs and security holes seem a luxury to get rid of in comparison...) and I don't see so many people going to Apple either. I certainly wouldn't go there, even if I got paid to do so. But that's just me.

Also, consider the amount of money NASA gets. And they are just an American institution. I wouldn't bet my money on the assumption that more than 30% of American citizens really want to give so much money to NASA in order to enable them to send junk into space and bring back dirt or data from space rocks. I can see the benefits in doing those things and I would enjoy doing them myself, but having NASA do them for you is just not fulfilling enough for the bill paid. I am not asking to shut down NASA or anything (in this post at least... :D), but I want to compare it with this fact:

I bet more than 90% of computer users worldwide want a better OS than the one they currently use. And, from my experience, they will not find it no matter where they search, simply because it does not exist. Doesn't this problem deserve MUCH better funding than NASA's private adventures? I would suggest that it does, despite the shiny and wishful claims of people saying that "we will inhabit other planets and will preserve our species" (and not so many people really care about their species anyway...). And even if it doesn't, even a tiny fraction of NASA's funding could be more than enough.
So, I don't think the problem is the money. The problem, as always, is in people's heads: from simple users, to developers, to businessmen, to politicians.

@Slug Crawling Across a Razor Blade

The multi-national approach does sound like a viable idea... but to get something like that going there has to be shown reason, strong evidence, of compromise.
...
But, Google development team comes from all over the world. Many from India, many from China.

I think there are numerous examples in the military where reason is given no place at all. The amount of military expense in most countries is outrageous, and usually without any good reason (I guess the U.S. is the best example nowadays). I think having a secure OS is something to be taken seriously in the military. If they are not paranoid about this, I don't know why they bother with the rest. Going for Linux might be considered a solution, but I don't think they would/should simply pick a Linux distribution from the countless out there by lottery and start using it. That might involve work as well (research before the adoption, code reviews). I would like to believe they would not just "plug and play" with it when dealing with weapon systems or sensitive facilities. I don't know if that would be more work or less. It is an each-country-for-itself process, and the security holes found will not be reported and fixed in Linux until they are heavily exploited (that's what I believe, at least).

The problem with Google dev teams is that they are all paid by Google, to work for the benefit of Google. There is none of the mutual mistrust between independent teams that honest review depends on. It's like using the developer team to do the testing on a project they develop, but even worse. Also, even Google doesn't spend money without good reason. So they will not use 100 code reviewers when they can do the same job with just 2.

But I agree with you that building a new OS is not easy (unless you want to make something simple - DOS is an OS too). But nothing is easy, and I think someone must do it at some point in time. And there is more than security that makes an OS good or bad. Even if Linux were the most secure, most people still wouldn't use it.

MeJune 8, 2015 4:22 PM

Also, even Google doesn't spend money without good reason. So they will not use 100 code reviewers when they can do the same job with just 2.

The same job but with worse results, that is....

Clive RobinsonJune 8, 2015 7:09 PM

@ me, winter,

If you want to create a new OS that is better than Linux you will need a developer effort worth roughly $1B or more

Or maybe just twelve years or so of dedication to come up with your own OS:

http://www.codersnotes.com/notes/a-constructive-look-at-templeos

Please note this is not a recommendation, just an example of what some people do. I could have picked one of several RTOSes, but they are usually cross-compiled with other OSes' tool chains.

Nick PJune 8, 2015 9:30 PM

@ Clive Robinson

Or a new core with an isolated Linux ABI. That's how the separation kernels did it. Security-critical stuff can happen outside the Linux box using tech designed for that. TCB is pretty small while reusing others' Linux-compatible software. CHERI team took a similar approach using FreeBSD.

Clive RobinsonJune 9, 2015 6:38 AM

@ Nick P,

Yes, CHERI etc. are other ways of gaining an increase in security that --probably-- cost less than $1B.

The question, as always, comes down to a balance: security is exponentially expensive and probably can never be 100%, so you have to pick a point at which the security comes at an acceptable cost. But that has the unfortunate problem of all "Defence spending": you know when you are spending insufficiently, but not when you are spending excessively and thus missing other opportunities...

The trick --as with crypto-- is to find a method of privacy / security that costs you marginally whilst making the work factor and its attendant costs impractical for an attacker.

The problem is that few practical processes show the desired asymmetric cost advantage that crypto does.

Thus we try to use more linear processes in combination to reach an acceptable "sweet spot". Which brings us back to the question of whether obscurity is a valid security mechanism or not, and the attendant issues.
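As a toy illustration of that asymmetry (my own sketch, not something from the thread): with a symmetric cipher, each extra key bit costs the defender a roughly constant amount, while it doubles the attacker's worst-case brute-force work.

```python
# Toy model of crypto's asymmetric cost advantage (illustrative only):
# defender cost grows linearly in key length, attacker work exponentially.

def defender_cost(key_bits: int) -> int:
    """Defender pays roughly one unit per key bit (generate/store/transmit)."""
    return key_bits

def attacker_work(key_bits: int) -> int:
    """Worst-case brute force must try every key in a 2^n key space."""
    return 2 ** key_bits

for bits in (64, 80, 128):
    print(f"{bits}-bit key: defender cost {defender_cost(bits)}, "
          f"attacker work 2^{bits} = {attacker_work(bits)}")
```

The point of the sketch is the ratio: going from 64 to 128 bits merely doubles the defender's cost while multiplying the attacker's work by 2^64. Most non-crypto security measures scale nothing like this, which is why they get combined linearly instead.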

MeJune 21, 2015 4:27 PM

@Clive Robinson
Thanks for the link. :)
TempleOS seems very cool to me.
It may not be secure, but I think it achieved the aims of its creator.
Apart from its aesthetics, I really like it when used for what it was designed for.
If I manage to locate its source code and get permission from God or from Terry to change the screen resolution, I will seriously think of using it in a separate partition (I have some ideas on what I can do with it).

I think that's a very good example of what people can do indeed. I guess Terry built his own HolyC compiler from scratch as well?!!? And his own 3D graphics engine from scratch?!?!

I really admire his insistence on simplicity. Certainly security is not one of his concerns and that's understandable. But I find his OS quite exciting. It won't replace my current OS, but it definitely seems like a good tool to have around.

I have found some other posts on this site as well suggesting those OSes for security oriented purposes:
http://www4.cs.fau.de/Projects/JX/
http://genode.org/download/index

I have never heard of those either. The truth is that the thought of dealing with Java makes me sick, so I probably won't like JX, but who knows...

Clive RobinsonJune 21, 2015 5:41 PM

@ me,

I'm glad it's of interest.

When you start to hunt around a bit you do find quite a few limited OSs developed by individuals or small teams. This is especially true of Real Time Operating Systems on "single chip systems".

Sometimes when you develop simplified OSs for embedded systems, which I've done in the past, you end up with something that is not really an OS but is certainly more than a BIOS.

I have a sort of rule of thumb: if there is no MMU and thus no virtual memory, then it's effectively a BIOS, not an OS. Likewise, if the devices don't have a sufficiently abstract application-side interface, it's a BIOS, not an OS. And if it does not preemptively multitask, it's more a BIOS than an OS.
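That rule of thumb can be written down as a three-question checklist; the sketch below is my own hypothetical encoding of it (the function and parameter names are mine, not an established taxonomy):

```python
# Hypothetical checklist version of the OS-vs-BIOS rule of thumb above.
# All three criteria must hold before the system counts as an OS.

def classify(has_mmu: bool, abstract_device_api: bool,
             preemptive_multitasking: bool) -> str:
    """Return 'OS' only if every rule-of-thumb criterion is satisfied."""
    if has_mmu and abstract_device_api and preemptive_multitasking:
        return "OS"
    return "BIOS"

print(classify(True, True, True))    # all criteria met
print(classify(False, True, True))   # no MMU / virtual memory
print(classify(True, True, False))   # cooperative scheduling only
```

By this test, many embedded "single chip" kernels land on the BIOS side of the line, which matches the point above about simplified embedded OSs.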

I'm sure there's many a developer who will disagree, but at the end of the day there is effectively no "official" rule on what is an OS or a BIOS, so it's up to the end user to decide...



Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.