Schneier on Security
A blog covering security and security technology.
October 15, 2010
India is writing its own operating system so it doesn't have to rely on Western technology:
India's Defence Research and Development Organisation (DRDO) wants to build an OS, primarily so India can own the source code and architecture. That will mean the country won't have to rely on Western operating systems that it thinks aren't up to the job of thwarting cyber attacks. The DRDO specifically wants to design and develop its own OS that is hack-proof to prevent sensitive data from being stolen.
On the one hand, this is great. We could use more competition in the OS market -- as more and more applications move into the cloud and are only accessed via an Internet browser, OS compatibility matters less and less -- and an OS that brands itself as "more secure" can only help. But this security-by-obscurity thinking just isn't true:
"The only way to protect it is to have a home-grown system, the complete architecture ... source code is with you and then nobody knows what's that," he added.
The only way to protect it is to design and implement it securely. Keeping control of your source code didn't magically make Windows secure, and it won't make this Indian OS secure.
Posted on October 15, 2010 at 3:12 AM
"Western operating systems that it thinks aren't up to the job of thwarting cyber attacks"
Let me guess...
I wonder if they'll modify FreeBSD or Linux (with SELinux included). If the latter, will they violate the GPL?
If the Indian defense ministry took Linux, modified it, and used it for their own purposes only, then I don't think they would violate the GPL.
As soon as they distributed binaries without source to persons or institutions outside their organisation, they would violate the GPL.
Is that correct?
I wish that were true; at least it would be technologically interesting. Most news about "new operating systems" is just marketing for new Linux distros :)
Oh, I just read the post (other commenters didn't). It's a proprietary OS with closed architecture and source code. A bad idea, and a boooring one :P
I think they're after Windows compatibility at the API/ABI level, so ReactOS might be a better starting point than either Linux or one of the BSDs.
With the source code of ReactOS, Mono, Wine, etc. freely available to study, they could deploy a team to write API specifications for these existing projects, then do a clean-room re-implementation for their OS.
India's got plenty of competent coders so they should be able to create their OS in a reasonably short time once they have a specification.
Making it secure is left as an exercise for the reader.
This announcement, like most announcements by Indian government agencies, is a joke, and a scam for some officials to make some money. I wish it weren't taken seriously.
It has already been parodied online.
Surely this is just going to make the OS a prime target for attack. When you brag, you get what's coming to you.
Keep your operating system close, keep your source code closer.
The only way to create a drop-in Windows replacement would be to reproduce many of the same hacks, glitches and backwards compatibility modes that the real thing does. You only have to look at Wine or Reactos to realise what a horrifically complex and probably futile task the Indian government would be engaging in by doing this. Wine has been going for years now and struggles to support even a handful of core apps and games.
On top of that, say you *do* emulate Windows. The chances are you'll be dragging in 3rd party DLLs and files anyway. For example apps that use the MS Visual C++ runtime, or ODBC drivers from Oracle, or graphics drivers, or Java runtimes, or fonts etc. etc. etc.
It seems to me far better to virtualise some genuine copies of Windows but isolate them so they are restricted as to what IP addresses and other network services they are able to run against. That, or dump Windows entirely and produce a distribution around Linux or a BSD variant, which can be as restrictive or liberal as they like in what it allows to run. Audit the code, put extra logging in, contribute the findings back to the community, open up the source to review, offer cash bounties for bugs, rinse and repeat until it passes whatever security certification the thing needs to run at.
These are the same people who jailed the researcher who investigated serious vulnerabilities in their electronic voting machines?
Considering this, and the average level of IT products there, this could be a masterpiece with regard to software vulnerabilities...
Wasn't there an OS that came out with an identical slogan three years ago?
What happened to them?
Why not patent the zero, which they invented, and then take over all the other OSes?
Another suggestion: wake Zuse from the dead and start from scratch.
Moot. Every operating system will inevitably have faults, and cyberattackers targeting India will research and exploit any operating system they happen to be running.
It might not be as useful in all places as Windows exploits are, but if the target is India, this doesn't matter very much.
Heh. You assume the actual purpose of this project is what they say it is - to make a "hack-proof" OS. This is not true. The goal is to make an OS _they_ can hack, and then force everyone in the country to use it, for domestic spying purposes. That's the only goal.
Of course, within a year, anyone with a Speak-n-Spell will be able to run scripts that hack it as well, but I really don't think they've considered that far ahead...
I'm a little curious...
When I had a discussion with an outsourcing vendor's employee, that person referred to development work in India as a low-end job: if someone was a developer for more than two years and not trying to move into management, they were considered a failure. (Again, this was not my opinion; this came from a discussion with an employee of an outsourcing provider.)
So I'm left wondering: will they finally understand the issues they cause their customers? Or is it only the outsourcing providers who stress the two-year developer role?
The meanings they ascribe to "protect" and "secure" may not be limited to those that a crypto algorithm designer assumes.
The protection may be against competitive practice, and the security may be against potential litigation.
Anyone remember the HUGE marketing campaign by Apple claiming that OSX was "more secure"? This despite the fact that, to this day, it provides poor heap execution protection and a very poor ASLR implementation. When it was first released, it provided neither.
Yeah, this new OS is going to be bullet-proof...
My guess would be that their main goal isn't an OS hardened against arbitrary attackers, but rather making sure that there aren't any back doors placed in the OS by the vendor at the behest of the vendor's government.
The Indian approach is flawed on so many levels it may be wiser to see it as a form of 'make work' project. The US engages in war to employ people. The India government apparently has opted to hire programmers on a fool's errand. To paraphrase: "Let it be written; let it be broken."
Really???? It's funny to me, because I don't think they're any smarter about computers, networks and software than the 'western world.' I don't think India has EVER built any technology that wasn't a piece of shit right out of the gate. I bet the Chinese are laughing their asses off.
I can see it now...
Based on my experience with pure Indian development teams, every time they need the length of a string, they'll drop into a while loop right at that point, looking for nulls.
They'd be better off with WinME....
Closed source code -- 100% in the wrong direction, agreed. What I don't understand is, with all these brightest minds here, why there are so many patches released month after month, year after year. Are we incapable of doing a code walkthrough and fixing it once and for all, or do we have no clue what to look for? Well... I forgot about that: we have outsourced the coding, so that's someone else's problem now. Let's get back to the blame game.
Linux is modified for specific purposes all the time. Brocade switches run a form of Linux, most home security DVRs are running embedded Linux, and BackTrack is a custom load of Linux.
As for a government keeping control over a proprietary OS designed for security from the ground up, it's been done. One word: Honeywell. Mind you, that OS never saw much daylight, much less exposure to a commercialized Internet, but it is well known that it was NSA Orange/Blue Book certified as an A1 system.
One wonders how much India will rely on pre-existing security evaluation systems such as the NSA rainbow books, or whether they will implement their own.
but maybe they will create one from scratch?
What kind of OS would the Indian folks actually manage to come up with... looking at the disorganization and not-so-great end results in the preparation for the Commonwealth Games...
You can not do that from here.
You must reformat your hard drive.
@VT: You severely underestimate the complexity of modern software. Here's the analogy I use when I have to explain to non-programmers why their software contains bugs.
Imagine a machine with 2 million moving parts. Now imagine that you have 50 or more people all changing the machine at the same time -- each making dozens or hundreds of new parts every day and sticking them in there somewhere, or modifying or removing existing parts. Is it any surprise when the machine occasionally doesn't work perfectly given how complex it is and how many changes are being continuously made to it by fallible humans?
And millions of parts really is the kind of scale we're talking about, especially for something as large and complex as an OS.
Individual drivers, background services and utility programs will each have tens of thousands of lines of source code. Even a bare-bones OS has hundreds of these; something like Windows has literally tens of thousands of them. If you add up the source code for all of the drivers and services and utilities in a modern Windows distribution, it's probably somewhere between 50 million and 200 million lines of code.
Now realize that most programs contain at least 1 bug per 1000 lines of code (in some programs, it's more like 1 bug per 100 lines of code). NASA-like reliability is more like 1 bug per 100,000 lines of code, and it costs them an incredible amount of money and time to make their software that reliable.
It's an intractable problem because, well, it's big and intractable. The best we can do is to adopt better development processes and techniques, be rigorous about fixing vulnerabilities when they are found, and learn something from the experience (i.e. improve our development process to reduce the likelihood of similar vulnerabilities in the future).
True... So as you said, if they're making a new OS for themselves for security reasons, it's right that they can't keep it a secret. Like you said, as in Little Brother, you need to open it to the world and put it to the test. Unfortunately, someone's going to be able to break it sometime, some way or the other... so what makes them think that if they build it, it will be more secure than the others? Perhaps they'll feel better and more reassured that they did all they could. Ha!
Also, I should have mentioned in case it's not obvious -- security vulnerabilities don't always look like bugs, and are often not even recognized as vulnerabilities until someone shows how it's possible to exploit them. Most programmers are not used to thinking about the code they write as if it is under attack from hostile outsiders, but that's the mindset you have to have if you want to write code with fewer security vulnerabilities in it. Also, vulnerabilities (and also regular bugs) are sometimes non-obvious even to people carefully scrutinizing the code; after all, if there was something *obviously* wrong with it, the programmer who wrote it would have fixed it, right? The defects that made it past internal testing and QA and reached end users are, almost by definition, the ones that are "hard to find". Sitting down for a few months and going over millions and millions of lines of code with a fine-toothed comb would find some fraction of the vulnerabilities; maybe 20% of them or something. It definitely won't find them all, probably not even most of them. And it would be colossally expensive and time-consuming.
Also, there are vulnerabilities that don't exist in any single component and only come into play when two or more components interact together in an unexpected way. Each one might be behaving completely legitimately, in the intended way, and yet the combination of the two might be vulnerable in a way that would not be obvious from reading the source code of either component.
Basically, preventing vulnerabilities is hard.
I stumbled on this blog site and find it interesting; will try to visit often..
Everyone worth his name, govt or enterprise, has tried to make its own OS from time to time; there are enough pointers to that in the posts here.
But I wonder: why the sharp tongue against the Indians in many 'western' posts...
Sure, India has its share of flaws, some major ones... but they have managed to keep a democracy alive and kicking, and in today's world that's almost a miracle. Despite the flaws so well pointed out in the posts here, they have the largest pool of fairly good English-speaking and fairly good quality programmers, developers, testers and integrators.
Just 10 years back, Indians themselves considered computerisation of their own railways and their own banks impossible. They have broken their own boundaries....
Their press is ferocious and free, that's why their scandals tumble out regularly.
Agreed, the fact is their DRDO has not too much to show compared to NASA, but for a country where real technology arrived just about 15 years back, their progress in missile, nuclear, IT, bio-technology etc. should not be dismissed. And all this despite the corruption, poverty... snakes, elephants, and whatever else people who are unaware of the new India can imagine.
And their GDP has been growing at more than 8% for more than a decade now. And the recession that's still gripping us very strongly barely touched them. Their IT giants, with workforces of 100,000-plus, managed to shed just about 1% of employees.
I think many of us need to change our attitude towards the world out there. India is fast moving and fast changing. I see that many of my Indian contacts have themselves already changed their attitude of hopelessness about their own system; some have returned to equal or higher-paying jobs in Bangalore, Gurgaon or Hyderabad...
The new generation of Indian youth is getting involved with govt projects. The Indian govt is one of the major investors in infrastructure, technology and power, compared to its industry counterparts.
There are young people there who are making it their life's mission to work in backward parts of India, connecting its people to its resources and facilities. Such people are the backbones and building blocks of the emerging India.
It's a people whose time has arrived; maybe an OS is not rocket science after all...
Why don't they start with CP/M-80? It had all the apps a government could need (Wordstar and Multiplan) and I never had any security issues with it ...
@ Trichinosis USA
You are referring to SCOMP. This isn't the only very secure, closed, government-protected OS. Its successor, BAE Systems' XTS-400, is in many installations and is Linux compatible. They now have an XTS-500 with more capabilities, and a number of high-assurance hypervisors exist. The real problem is that the US government classifies systems at B3/A1 level as munitions and restricts their export. SCOMP only sold 30,000+ units in the US. If they could have sold it to allies, they might have recovered enough investment to produce more high-assurance systems.
As for the Indian OS, it will be an epic fail like their military.
It's a great news :)
Proud to be an Indian !!!
I think you'll find that most of the criticism (not all; some of it is clearly stepping out of line...) being leveled against India right now is extremely similar to the criticism people here have of just about every other country. Delusional government officials with no concept of security are an international problem.
I don't deny that an OS might have millions of lines of code, bug-to-code ratios, etc. At the same time, it is not acceptable for commercial OS vendors to be unable to do a code walkthrough if they have modularized their product. By the same token, unless the vulnerabilities are inherent in the coding language, then with proper input and output validation and simple, modularized logic, 90% of the code can be vulnerability-free. If you look at the exploits, most of them fall into the validation category. These can be caught while developing, or during various modes of testing.
legion is surely right... the only reason to keep the code closed is to own the back door, and make it more difficult for adverse entities to find one.
Bet they use GPL code in it to save time, and try to obfuscate that use in the compilation. Can a gov be sued for GPL violation?
Excellent. India-bashing by many. Should I remind you to look at each of "your" governments to see if some dumb bureaucrats ever made some dumb comments at the behest of some dumb politicians? Indians are pretty dumb, sure. People still die here from malnutrition. For such a poor and dumb country we have done pretty well over the eons. Thanks.
@moo u r absolutely right! Thanks for a gr8 post!
Nick, things were a little different when I was active duty many moons ago. Thanks for the update.
I really don't think the nation-bashing is appropriate. Anyone who examines India's history will see they're quite capable of building things that last and impress if they can stop fighting amongst themselves and put their collective minds to it. I have a lot of respect for their culture (the only thing I really don't like is the caste system). Western world centrists have very short memories. India is one of the world's oldest and most historically significant civilizations. In many respects, to diss India is to diss oneself if one is European or an American of European descent.
As for America, our technical people (along with everyone else) have been handicapped for the last several decades by a culture that focuses more on dumbed-down infotainment, fear-based politics, cronyism, religious extremism and a pathetically failing "corporate" approach to our educational system than rewarding actual competence and merit. Pride goes before a fall, and I suspect these displays of arrogant opinion will be proven wrong.
We do need competition in this area - it keeps people honest if nothing else. I hope India succeeds and will watch with great interest.
Competition is good, and writing an OS is not rocket science.
I think this will be an interesting story, assuming that they quickly move to a security-by-design approach. India has the talent, the numbers and the national pride to create something new and innovative in this field.
And I don't think that racist bias is a recommended engineering practice.
Methinks that India noticed the Stuxnet problem, and is looking for a way to close the backdoors that Stuxnet took advantage of.
Not that this attempt has much likelihood of working, as noted above.
But I think this is the first visible part of the global security community's response to Stuxnet.
They're going for Windows compatibility? Yikes! Any vulnerability that's a design flaw, rather than a bug, will then be present in the new product as well.
I'd bet it'd be easier to write new applications for the new, secure OS, rather than try to emulate Windows's quirks well enough to run Windows application binaries, and then you can eliminate design-based flaws as well.
"Methinks that India noticed the Stuxnet problem, and is looking for a way to close the backdoors that Stuxnet took advantage of"
So they're rewriting the SCADA systems too? It wasn't just Windows vulnerabilities.
"Keeping control of your source code didn't magically make Windows secure" - interesting use of the past tense.
Years ago I saw an Indian OS -- HCL, I think -- which was bought out by HP.
It was a hacked CP/M. This new Indian OS is just going to be a hacked *ix; a few bums will become directors or managing directors of whatever.
Get real: just use Linux. It's good enough. And yes, it's Western... like the language you yap away in all day long to feel educated.
Isn't there another whole class of vulnerabilities inherent in the interaction of hardware-level design flaws and susceptibilities to glitches, EMI...?
Are they going to call it Hindux or HindOS?
They would be better off doing like China and starting with a solid OS like a BSD or Linux and working from there. Train their programmers to audit code and find vulnerabilities and let them go wild on the OS to certify it doesn't have back doors.
Anyway, it's probably just some gov't department's wet dream. Western countries' gov'ts aren't any better with their ideas, but I think they have more accountability. I can understand them wanting to get out from under Windows, though. I just think they're naive to think they can create something as secure as some BSD variants in any useful time frame.
"Isn't there another whole class of vulnerabilities inherent in the interaction of hardware-level design flaws and susceptibilities to glitches, EMI...?"
Yea but I thought western countries (by the way is Russia classified as a "western" country?) also have their own military hardware that their systems run on...
But you are right that just getting the OS would not necessarily be enough.
What they could do is to take Windowze and rebrand it with a new theme.
It *might* fool some virii...
For the many making comments about "rocket science" -v- "OS development": I know "It ain't rocket science" is a hackneyed old cliché, but...
Do I really need to point out that both India and Pakistan have developed rockets (and the nuclear weapons to go on top), as have a number of other nations, but no nation has actually developed a secure operating system that is of any use?
Developing a secure OS, in the way we currently do it, is considerably more difficult than developing a rocket and warhead that can hit a small village.
Likewise the comments about "millions of parts" with people simultaneously changing them... Have the people posting them never flown in a modern wide-body made by Airbus or Boeing?
Some of these aircraft have in excess of 7 million parts, some of which are highly specialised, and ground crews and maintenance staff swap them in and out all the time, yet they stay in service quite happily -- in the case of the 747, through goodness knows how many upgrades in both design and service.
Have you ever considered why there is this marked difference?
The problems with designing rockets etc. are constrained by the laws of nature/physics, and these don't change that much (think Newton to Einstein). The main changes their designers face are in "materials and methods", for which well-established engineering processes exist, many of which are expressly designed to reduce and manage interaction and thus complexity. Also, rockets etc. are effectively closed systems, where most of the attack vectors are either "known knowns" or "unknown knowns", because they are all considered to be in the "class" "physical in nature" and "sub-classes" such as "projectile" and "blast wave". Thus the engineering design process deals with "unknown knowns" by addressing the general characteristics of a "known class", not the specifics of one "instance" of an attack vector in that "known class".
Although, as some have pointed out above, the APT crowd's "poster child" Stuxnet has just made obvious that even these closed systems have become reliant on computers and operating systems (something the design, system and maintenance engineers have known and discussed quietly for over thirty years, to my knowledge).
So why are secure software systems so much more difficult?
Well, the honest answer starts with "because of the way we do it": we just do not follow any kind of engineering practice with sufficient rigour in the mainstream software industry. I have said before that the majority of software writers are "code cutters", not "engineers".
Even the best of the coders behave more as artisans or craftsmen, not engineers and scientists. Cart wheels became what they did by artisans making tiny incremental improvements to establish a "pattern" to which all other cart wheels were made. The pattern was grudgingly taught to journeymen and apprentices by master wheelwrights. Take an honest look around you, especially with the likes of the Windows MFC: do you see any differences in the way code cutters learn their trade?
Even in "realtime", "embedded" and "safety" system design, engineering practice is being degraded to artisanal "code cutting". We appear to be regressing to the attitudes of Victorian and earlier boiler makers. We learnt back then that the attitude of "when something breaks, don't investigate why, just guess and bolt a bit on" gets people killed. Back then it took several Acts of Parliament to change industry attitudes. We don't have the excuse they had back then of lack of knowledge; ask yourself, do we really need people dying in large numbers to make the software industry change its attitude?
And for those suggesting *nix solutions, what on earth makes you think they are even remotely close to being secure?
If you look at the scale of "known knowns", "unknown knowns" and "unknown unknowns", *nix is still vulnerable at "known knowns".
Unix was never designed for security; it was designed for efficient multitasking, with lip service paid to multiple users in its file system. Everything to do with its security has been "bolted on" since the initial design. It is a testament to the original design that it is still hanging in there and thriving under all that baggage.
And thereby hangs one of the software industry's problems: "efficiency". As with all things, efficiency is a double-edged sword. At the low "nuts and bolts" level efficiency is good, but at the structural and system level it can be bad: if optimised in the wrong way, it makes things very brittle and prone to failure. Oddly to some people, efficiency can sometimes best be gained by increasing redundancy.
In a nut or bolt or other basic component, the design requirement is for effectiveness in "all classes" of use. To do this, the design is stripped down to just the very few fundamental basics, which are then optimised for efficiency. As you work your way up a system design you develop modules for more specific purposes, such as engines and other sub-assemblies. Again these are designed to be efficient, but they are also designed to be reliable and resilient, and this tempers the drive for certain types of efficiency. Above a certain point outright efficiency is not possible, as the basic properties of the components are insufficient; the design process moves over to redundancy to ensure resilience, reliability and, above all, safety.
In physical systems, this switch away from a sole focus on certain efficiency aspects happens because of very real and unavoidable "physical limitations" due to the fundamental properties of materials etc., which often have subtle issues at the component level that have devastating effects at the system level. This is often demonstrated to engineers by the Tacoma Narrows bridge disaster of 1940 ( http://en.wikipedia.org/wiki/... )
The problem with software design is that there are no real "physical limitations" other than clock speed and memory size, and we assume these are going to double about every 18 months. Thus software system designs generally are not forced to move away from a single viewpoint on efficiency: "maximum bang for your buck".
Nor does the software industry feel constrained to say no to "Marketing's" desire to be all things to all people in the "bang for buck" chase. This features-over-all-else attitude appears to be worse in software design than in any other industry, and it is a second major aspect of why the development of software is so disaster-prone.
For many years a big proponent of the "bang for buck" attitude was Microsoft, based at Redmond -- which, oddly, is at the opposite end of Puget Sound to Tacoma...
I have often noted that incorrect consideration of efficiency gives rise to insecurity, and that when designing a secure system you should consider efficiency as considerably subservient to security.
Another aspect of "features" is "complexity and interdependence"; although they should be treated separately, they are the yin and yang of the problem with features.
A guiding rule in a secure system is "there is only one path". One of the problems with "features" is that often a particular task can be started from many different points in many different ways. This gives rise to unintended differences which potentially open up attack vectors.
Also, "there is only one path" applies to "feedback" in its many forms. For instance, at the user level, let us say that a particular basic function such as saving a file has many steps in the procedure. If you have the ability to "back out" of a choice, how do you ensure you return to the original point of departure in exactly the same state?
It is desirable, not just from the aspect of security but of reliability and maintenance, that complexity and interdependence be reduced to the minimum required to achieve a well-defined function.
There are other aspects where the software industry should sit up and note what occurs in other industries regarding good engineering practice and sound design, but what is going to make the software industry get out of the rut it is in...
"Do I really need to point out that both India and Pakistan have developed rockets (and the nuclear weapons to go on top)"
Yes, but as I am sure you know, neither India nor Pakistan developed rockets or nuclear weapons on their own. Both countries got that technology from abroad, and whatever development work was done domestically was done with assistance from other countries.
With that in mind, what does the fact that they have developed missiles or nukes have to do with OS development? Nothing much.
In fact neither India nor Pakistan would have modern technology if they had not obtained it from abroad.
Besides that, both countries have been around for so long that one would have expected them to develop a culture of governance and organization by now. But not so yet, from the looks of it.
That's a lame statement from the DRDO guys. It's another way of wasting taxpayers' money. (I am from India.)
I would have loved to see them embrace an open-source GNU/Linux or other operating system, and build on top of it or customize it -- giving good things back to the community, while still keeping what made sense.
I don't think a bunch of smart guys can beat thousands of smarter guys from across the world. GNU/Linux is better because it's open-source.
Don't they realize we have Macs? Not even computer systems created by invading space aliens are safe from being infected by a virus delivered from Macs.
@ Trichinosis USA
India definitely has a great history and innovation potential. My comments were not simply nation-bashing. The military analogy is quite important: they design and deploy many excellent pieces of equipment that end up unusable due to no maintenance money or poor integration planning. So far, India has been satisfied with spending money on PR campaigns claiming the problem doesn't exist rather than fixing or preventing the problems. This is poor management. I was just wondering how *those* bureaucrats will pull off creating/deploying a whole OS with a significant software stack and hardware support.
Another thing about this is their idea of creating a new OS. That's entirely unnecessary. I mocked the Chinese when they said the same thing. I think they eventually noticed my or similar mockery on the web and just used a version of FreeBSD as the start of Kylin. India should follow the same "build on an existing stable & low-defect OS" strategy. They seem to want control of the IP as well. That disqualifies Linux.
So, if they asked me, I would tell them to build a QNX- or MINIX-style microkernel with Linux ABI/API compatibility if they want something with any security enforcement assurance. If they wanted reuse, they could build on the L4 family like OCL4, TUD OS, the Perseus Security Framework, or OKL4 3.0 (w/ commercial license). All of these have paravirtualized Linux and provide a wrapper for Linux device drivers. Hardware problem partly solved. If they wanted less investment, then they could pick up NetBSD. It's got the cleanest, easiest-to-modify kernel of the BSDs, plenty of hardware support, Linux compatibility, a helpful community, and works closely with OpenBSD's bug finders. A side effect of choosing L4 or NetBSD is that those could continue to improve if India submitted any of their improvements. This could also get India good press in IT circles.
I just think the fact that they are trying to write a secretive OS from scratch shows just how ill-informed they are about the digital threats of the modern age.
"Don't they realize we have Macs? Not even computer systems created by invading space aliens are safe from being infected by a virus delivered from Macs."
LMAO! Cracked.com mentioned it as one of the top 5 things "Hollywood thinks computers can do." The list went from 5 to 1 and I bet you can guess which hacking attempt got No. 1. Here's what they said about it:
"The Earth is under attack by a race of vastly advanced aliens, so Jeff Goldblum creates a virus from his PowerBook that disables the entire apparently Macintosh-compatible fleet of ships.
Why It's Ridiculous:
This is difficult to wrap our minds around. The aliens in Independence Day were not only thousands of years ahead of us technologically, but also were an entirely different species. Therefore, Goldblum's feat was the equivalent of a colony of baboons in the Congo hacking into CitiBank using tree bark and clumps of their own feces."
"Both countries got that technology from abroad, and what-ever development work was done domestically was done with assistance from other countries"
Hmm an interesting statment have you considered the historical perspective.
Inda got reactor technology and fuel from amongst other places Russia. However Packistan is not clear cut and could have got some of it from China via other countries.
However Packistan developed it's own enrichment technology and to the anoyance of the US then (alledgadly) exported it to the axis of evil countries via an organisation based in Switzerland allegadly run by Packistan's father of nuclear technology.
The Russians got the technology from a spy in the Manhaten Project and China supposadly got it from second and third generation Chinese going to US Universities.
If you look at the 1973 edithion of the Encyclopedia Britanica most of what you need to know to start on in making a nuclear device is in there simply because the US Government decided that there was little point keeping it secret. You could as I did get further information simply by asking for it (Project Y the Frenchman,s Flat experiment, being one such document)
The simple fact is no country can claim to have developed nuclear technology unassisted by any other country one way or another. The only reason the US got their first was it had the industrial resources and was sufficiently far from any other nation to have areas not subject to bombing etc. The actual development was started in the UK (see Tube Alloys Project, and leo Slizard) and was shipped to the US for the aformentioned reasons. It was because of this that the state of Israel was formed in the Palestine Protectorate.
The ideas and the brains behind much of the development where from various European countries and Jewish scientists as both General Leisly Groves and Robert Openhimer freely admitted.
As for rocket technology with one exception (Nazi Germany) the same is true.
And guess what the same is very much true of computers and their operating systems.
With regards to India and Packistan both are young nations formed from what was once called the British Empire and later the British Comenwealth. Interestingly Britain did not want an Empire in either Asia or Africa they came about due to various machinations of private enterprises and the French.
It was once claimed that India (as was) was run entirely by 1000 British Civil Servant's who had been trained in the English Education system that had been very much set up to train such bureaucrats.
History sometimes makes the odd behaviour of nations and their peoples explainable and for those sufficiently wise the lessons it teaches us can help us avoid costly mistakes (one modern example being Afganistan, another Kashmir).
@ Clive Robinson
I second your points on nobody being able to claim their weapons or nuclear technology was entirely in-sourced. As far as US tech goes, you neglected to mention the extreme significance of Project Paperclip. All those Nazi rocket scientists and such that we brought in were quite useful. We were also pretty good at stealing research from other countries via covert means. We also brought in some of Europe and Asia's best scientists to work on improving the technologies that we bought, traded or stole. Afterward, the other countries just copied, bought or stole our stuff. The process repeats, history repeats, etc. The names of those involved might change, but the game always stays the same.
Where's the Friday squid post!?
Who are you and what have you done with the real Bruce Schneier?
Speaking from very slightly indirect personal experience (and I can't elaborate on that, sorry), the DRDO is full of Windows, and pirated Windows at that. At least one lab, I have reason to believe, orders assembled systems from local vendors, who -- of course -- put pirated Windows on them.
It's not like they couldn't use the Linux version of Matlab and so on if they were forced to by policy.
And speaking of Matlab -- that's one piece of software (a) they haven't been able to pirate and (b) that doesn't really have an open-source equivalent that's close enough, so if the US govt wants to backdoor something, they should go for that one!
Anyway, I believe DRDO want Windows compat because I suspect all the senior officers buy high-priced laptops and then take them home for the kids to play with.
I can't say this for sure, but with my very small window into the world of DRDO senior people, it's quite likely this is the case.
Too bad I can't mention any names...
Well, I can understand why they don't want to go with a closed-source O/S (like Microsoft or Apple), but writing your own O/S is a massive undertaking. If they want security, they would be better off starting with a known highly secure O/S (like OpenBSD for example), and then add the functionality that they need.
@ Ken Peek,
"If they want security, they would be better off starting with a known highly secure O/S (like OpenBSD for example), and then add the functionality that they need."
Although I agree they would be better off starting with a known O/S that is FOSS or Public Domain, OpenBSD is not "highly secure".
Although OpenBSD has some good coding practices which have resulted in its good vulnerability record, it is based on *nix.
As I said further up, *nix was not designed with high security in mind, and as Nick P has pointed out in the past, there are better systems available.
I'm in favour of stripped-down microkernels with very limited functionality, with all their resources controlled by a separate security hypervisor structure; the removal of any kind of privilege from I/O and drivers; and the turning off of things like DMA or other direct access to process memory by other processes, etc.
The result won't be as efficient as the current promiscuous *nix, MS & Mac O/S's, but it will be able to avail itself of another thirty years of O/S security research and other advances.
As I'm frequently heard to say, "efficiency is the enemy of security", and although secure designs can be made efficient, you are very unlikely to make an efficient design secure, and most current O/S's fall into the latter group.
Like quality, security has to be built into the design process before it even starts...
Got to agree with the article: obscurity doesn't provide any security. Microsoft is the perfect example there.
Creating your own OS is work; a number of books are out on it, and no, Linux isn't required. But you've got to wonder: do they really think they are that much smarter? It shows a lot of arrogance.
@ Clive Robinson
Yes, I would call OpenBSD the most secure UNIX that's widely available, but not "secure." People often say it's only had about 2 security holes in the default install in over a decade, but that's propaganda. OpenBSD is known for proactively hunting and fixing bugs. Any of those bugs may have been a security vulnerability. If someone wants a zero-day on OpenBSD, they just need to look at actively reported bugs, or hit patched ones before OpenBSD users patch their systems. They've had probably hundreds of bugs. A production system with even one serious known bug isn't secure. What does this make OpenBSD?
That said, I have always wanted a paravirtualized version of OpenBSD on a high assurance hypervisor or high quality microkernel, with drivers and trusted services outside of OpenBSD. This would be secure for the apps with trusted services as their TCB and pretty good for apps trusting OpenBSD. We already have something close with all the Linux paravirt implementations and stripped down Linux kernel in user-mode is usually safe enough. Need more open-source high assurance components like TCP/IP, USB 2.0, filesystems, etc. Developed outside the US to prevent arms control and encourage foreign adoption. I don't mind helping them stay out of botnets that DDOS my stuff, even if NSA doesn't like it that way.
Microsoft appear to be running scared of the growing awareness of 'almost as good and sometimes better' open-source OS's. They're ploughing a lot of money into discrediting this software -- which is wonderfully serving to raise people's awareness of it. Paid-for software (at least the boxed-product kind) won't survive in the longer term.
"Paid for software's (at least the boxed product kind) won't survive in the longer term."
Yes it will in some forms, for instance there will always will be a demand for vertical market packages, especialy in emerging markets.
The past few years has shown that even quite specialised applications need not be fully "bespoke".
That is a "boxed product" with minor "scripted" customisation will be a viable first option over fully bespoke.
Obviously with time if an emerging market becomes sufficintly viable then programers will potentialy develop their own alternatives. However you only have to look at the number of orphaned projects that don't get beyond the first couple of prototype revisions.
Then there is the question of what will happen in regards of hardware platforms, FOSS is almost exclusivly done on and for what is effectivly COTS hardware.
Thus there will I think always be a market for the closed source "Boxed packaged kind" of software. But it may not be sold, it may simply be leased or used as part of a suppoort contract, but closed source it will remain.
@ Taskado on Proprietary Software Dominance
That's kind of shallow thinking. Proprietary isn't going away. Companies are out to maximize their return on investment. This means they need to get as much money out of what they produce as possible. They can't do this with open source software. They need ways to get a leg up on the competition (trade secrets) and, if possible, patented improvements (read: monopoly). GPL software essentially requires that they disclose the trade secrets and let others use their patented technologies. They do all the R&D, then their competition immediately benefits from it. In the proprietary model, the competition must catch up, allowing temporary higher profits.
Another issue is vendor lock-in. Proprietary vendors try to create this by keeping their source and maybe protocols closed. A successful lock-in strategy builds on existing platforms or software with widespread deployment and (very important) trained operators. For instance, in the Mid-South, most companies are Microsoft shops. These companies will always choose a Microsoft-compatible, closed-source product over a non-Microsoft, open-source product.
Another strategy for lock-in is to create a family of products that meet many needs with incredible integration and extensibility, but which is incompatible with open standards or open source. Again, Microsoft is the leader in this, but SAP is another good example in enterprise application development. The reason the strategy is successful is that it's cheaper to pay the licensing fees for the proprietary software than to port the entire legacy codebase to an open-source alternative. That's one reason that COBOL programming is a safe career choice: maintenance and minor extensions are cheaper than a rewrite. (Note that there are automated translation tools available, but matching COBOL's numerical reliability in C++ or Java is hard for the average code cutter.)
These are just a few classes of situation where proprietary comes out on top of open-source. There are others. The important thing to understand is how a business executive looks at things and how market forces work. These will sometimes work in favor of open-source, but they will usually work against it. AFAIK, these are the primary inhibiting factors of open-source: usually alpha or beta quality (exceptions exist, obviously); often poorly documented; poor integration with other apps; require overhauls of existing corporate enterprise stacks; scarce supply of specialists in open-source software; weak marketing. These issues must be addressed by a large number of open-source ventures in a very public way before open-source will sweep the consciousness of the likes of Dilbert's boss.
Of course, they will have to write/validate the compiler as well; otherwise somebody may put a backdoor in that...
@ Ian Woollard,
"... have to write/validate the compiler as well otherwise somebody may put a backdoor in that.."
Yup, they will have to bootstrap up an entire tool chain from scratch, which is no easy task (and probably harder than writing an OS in some respects).
But to stop that being "bubbled up" they need a hardware platform where they have designed and validated the hardware; otherwise they will still have an open attack vector that can be exploited...
Then there is the design of the case etc.; after all, active side-channel attacks are known, including those using fault injection by RF carrier/modulation.
Really secure systems are very, very difficult, and with over 50 years of security research we are still finding major new classes of attack vector...
It's a tough job.....
With so many examples to pull from, now would be a pretty good time to try building an OS from scratch.
There is a wealth of, not just theory, but code in practice to examine and learn from.
If this is a project they plan on taking years to finish, I applaud them.
Remember that it must be Windows compatible. That's why it's a joke rather than a serious and potentially successful project. They would be better off using or designing Microsoft-compatible software with low-defect development methods, then running them on existing hardened UNIX's, security-focused virtualization or safety-critical RTOS's. It would be a hell of a lot easier/cheaper, too.
@ Ian Woollard
Not necessarily. Subversion attacks on Linux have been caught in the past due to good controls. This is a start. Wheeler also wrote a paper on defeating the Thompson attack called (I think) "Countering Trusting Trust through Diverse Double-Compiling." High-assurance correspondence mapping from requirements to design to source code to object code, as seen in DO-178B & CC EAL7, can also beat many forms of subversion and faults at every level. There are also numerous compilers designed for verification, including formally verified systems like CompCert C and VLISP Scheme. Manually mapping it, double-checked by numerous engineers, followed by a hash of the resulting binary, seems to be the most feasible method. Indian labor is also cheap: if we can do it, they can do it. ;)
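Wheeler's diverse double-compiling can be illustrated with a toy model. Everything below is a stand-in, not a real toolchain: a "compiler binary" is modeled as a Python function from source bytes to output bytes, which is just enough to show the bit-for-bit comparison at the heart of the technique.

```python
# Toy model of diverse double-compiling (DDC). A real DDC run compares
# actual binaries produced by real compilers; here a "compiler binary"
# is simply a Python function, so the check can be shown in miniature.

def honest_compile(source: bytes) -> bytes:
    # A correct compiler: output is a deterministic function of the source.
    return b"bin:" + source

def trojaned_compile(source: bytes) -> bytes:
    # A Thompson-style subverted compiler: quietly alters its output.
    return b"backdoor:" + source

def honest_parent(compiler_source: bytes):
    # An honest parent compiler faithfully realizes the compiler source.
    return honest_compile

def trojaned_parent(compiler_source: bytes):
    # A subverted parent propagates its trojan into compilers it builds.
    return trojaned_compile

def ddc_agrees(compiler_source: bytes, parent_a, parent_b) -> bool:
    """Build the compiler-under-test with two independent parents, then
    rebuild the same source with each result. For a deterministic,
    unsubverted compiler, the final outputs must be bit-for-bit equal."""
    stage1_a = parent_a(compiler_source)  # compiler built by parent A
    stage1_b = parent_b(compiler_source)  # compiler built by parent B
    return stage1_a(compiler_source) == stage1_b(compiler_source)
```

With two honest parents the check passes; if either parent is trojaned, the stage-2 outputs differ and the subversion is exposed. Note the known limitation: two identically-trojaned parents would still agree, which is why DDC requires the parent compilers to be independently sourced.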
"The only way to protect it is to design and implement it securely. Keeping control of your source code didn't magically make Windows secure, and it won't make this Indian OS secure."
Exactly! As for me, give it a little more time and the binaries will be available for torrent download. And when the binaries reach the world, the hackers will have a new doll to play with and we'll see who is better at this. I just hope that a nuclear missile from India won't be launched by hackers.
I wonder if that OS will allow folders to be named either 'CON' or 'LPT1' ... ;)
My problems with open source software:
1) Code might be good and well checked, BUT at best it addresses publicly "known" attacks.
2) There is no way to add features to ensure that as-yet "unknown" attacks are not enabled by the coding styles or algorithms in use.
3) Strange code that addresses specific "unknown" attack vectors is often rejected, because it adds no value....
My problem is that the first time I saw someone break a code using "Power analysis" was in 1985. The method was unknown to me, but was already a well-developed technique at the time. If you look at the publicly disclosed literature, I think mention of this class of attack first surfaces in about 1995. That's at least a 15-year lag after the attack was developed, maybe 20 years, I don't know...
I think it is accurate to say that general awareness of the effectiveness of DPA techniques did not reach the security coding community till maybe 2005. That's at least a 25 year lag after the attack was probably developed!
I'll be prepared to bet that the professional hacking community has not sat still for 25 years, so this means there are probably several classes of devastating attacks that are completely "unknown" to the open source community.
What's the point of exhaustive peer review if the reviewer is unaware of certain attack vectors, and cannot be made aware of the vector. (for either legal or business reasons)
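For readers unfamiliar with the technique being discussed, single-bit differential power analysis can be sketched in a simulation. This is a hedged toy, not a real attack: the "traces" are synthetic Hamming-weight leakage plus Gaussian noise, and the S-box is a hash-derived stand-in rather than any real cipher's.

```python
import hashlib
import random

# Simulated single-bit DPA (difference of means). Leakage model: each
# "power trace" is the Hamming weight of an S-box output plus noise.
# The S-box and key below are illustrative stand-ins, not a real cipher.
SBOX = [hashlib.sha256(bytes([x])).digest()[0] for x in range(256)]
SECRET_KEY = 0x3C

def hw(x: int) -> int:
    # Hamming weight: number of set bits.
    return bin(x).count("1")

def capture_trace(plaintext: int, rng: random.Random) -> float:
    # The device leaks the Hamming weight of SBOX[p ^ key], plus noise.
    return hw(SBOX[plaintext ^ SECRET_KEY]) + rng.gauss(0, 0.5)

def dpa_recover_key(n_traces: int = 3000) -> int:
    rng = random.Random(1)
    data = []
    for _ in range(n_traces):
        p = rng.randrange(256)
        data.append((p, capture_trace(p, rng)))
    best_guess, best_delta = 0, -1.0
    for guess in range(256):
        # Partition traces by the predicted LSB of the S-box output.
        ones = [t for p, t in data if SBOX[p ^ guess] & 1]
        zeros = [t for p, t in data if not (SBOX[p ^ guess] & 1)]
        if not ones or not zeros:
            continue
        delta = abs(sum(ones) / len(ones) - sum(zeros) / len(zeros))
        # Only the correct guess partitions the traces in a way that
        # correlates with the leakage, giving a large mean difference.
        if delta > best_delta:
            best_guess, best_delta = guess, delta
    return best_guess
```

The point relevant to the thread: nothing in the attacked code is "buggy" in the conventional sense, so ordinary peer review would not flag it; the vulnerability lives in the physical leakage model, which a reviewer unaware of the attack class cannot check for.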
"What's the point of exhaustive peer review if the reviewer is unaware of certain attack vectors, and cannot be made aware of the vector. (for either legal or business reasons)"
Put simply, businesses practice risk reduction and code review of open-source software helps that. Being unable to beat unknown classes of attacks is unimportant for the majority of users. They've been trained to find vulnerabilities to be inevitable and acceptable to a degree. Only high assurance projects must worry about that and it's a niche. So, even if we only eliminate "known" classes of attack, that's still nice. I mean, that is what the vast majority of casual and sophisticated hackers go for, is it not? Even most industrial espionage doesn't use such esoteric techniques.
We also get the obscurity benefit that hackers usually target whatever software is most used... the low-hanging fruit. I believe that this obscurity benefit, rather than inherent security, is the reason OpenVMS claimed it only had 26 vulnerabilities in its history. Nobody cared, so few looked, and it was safer as a result. Even open-source software benefits from this until it becomes popular. Once popular, it gets attacked, and then the many-eyes principle begins taking effect as more white hats attack the code.
I don't disagree with your analysis, far from it, we're largely in agreement. BUT as an individual (or company), what do I do when I'm made aware of a class of attacks for which my existing product is defenseless ?
Business reality says I patch the vulnerability, regardless of the relative security / insecurity created by the patch, or patching process.
Legal necessity also dictates that I patch it quickly AND, furthermore, that I keep NO record of classes of attack against which I know my product is defenseless (the PLEASE-sue-me list).
Now with respect to Open source, security OS's and software. How do I implement the patch, quickly and effectively, ( from both a business and legal perspective) without necessarily revealing the nature of the attack that I'm patching?
The legal problem is far more complex than the technical problem, because once it can be shown, that I have been made aware of a vulnerability, I MUST, in good faith, quickly rectify this problem. Unfortunately the details of the attack are typically business confidential, so an "open source" patch solution is impossible.
Within the Legal context of liability, the relative occurrence rate of attack vectors, has very little meaning, especially WRT business consequential losses.
Developing a secure OS (within the cost/benefit requirements and limitations of a user PC; i.e., a nation-state-level attacker could compromise it in a week but a script kiddie can't at all) is not a difficult task.
Developing a secure OS that a person can put on any cheap random Wintel computer platform with random monitors, drives and accessories, and still have it seamlessly run a DVD movie player, MS Flight Simulator v1, WoW v4 (+ list 20 more complex and varied applications written across the last 20 years) and the entire MS Office suite while printing color graphics, would be complex to the point of impossibility.
I believe the first step would be to build a secure voting machine. This would be a single known hardware platform: only one model of display, keyboard and printer (and it would have to validate itself as part of POST), performing an extremely limited range of functions (accept candidates and issues along with descriptive text at an admin level; allow user-level selections and changes [while facilitating handicapped access]; then print results and tally totals). It would not need pretty graphics, high-performance audio or internet connectivity, so 98% of Windows would be wasted even if it could be made secure. It would not be built on -ANY- existing platform. If you HAVE to use an existing platform (and I think you want to avoid that at all cost), a 1976-era Docutel ATM would be a good starting point.
If you can't do that (and nobody has even tried to, as far as I can tell), then you certainly can't do a general-purpose OS.
I've seen the quality of work coming out of outsourced software from India. This "OS" will just crash before it even boots.
> So they're rewriting the SCADA systems too? It wasn't just Windows vulnerabilities.
I don't pretend to assert that this is a response to StuxNet that might actually work.
I just notice that this looks like a government/military trying to insulate itself from StuxNet-style attacks.
Exhibit A: StuxNet showed that even non-networked Windows Systems can be targeted by carefully-designed worms. These worms can also target non-Windows systems monitored or managed by Windows systems.
Exhibit B: A government announces that its military and security apparatus are trying to do something to keep the gov/military from running machines which are prone to such risks.
All the evidence against this being easy/possible/workable has been presented above, by other commenters.
My own thoughts:
Closing the national security apparatus (and related military/industrial/vital-technologies apparatus) against StuxNet style attacks is impossible, if any element in that apparatus uses a computer which can accept portable data media.
StuxNet has shown that unplugging your system from the global Internet (or even an organizational intranet) does not make it safe from targeted, malicious software. If anyone plugs a memory stick into the machine, and that memory stick has been on a machine exposed to an insecure network, then the local machine is at risk.
Random thought: was there a variety of StuxNet capable of spreading itself on a 1.44 MB (or 1.2 MB, or 720 kB) floppy?
@"I've seen the quality of work coming out of outsourced software from India. This "OS" will just crash before it even boots."
I absolutely disagree. I've worked with multiple teams in multiple different software environments, and the people I've worked with produce good code and actively provide feedback on improvements. With a landed team I noticed the same distribution as everywhere here: 20% brilliant people who form the heart of a project and 80% who "just do the work". Same everywhere. You just need to identify the 20% to talk to for anything that requires thinking -- which is also identical everywhere -- and you are good to go.
on-topic: I hope they base it on minix (http://www.minix3.org/) !
Given what the Indian outsourcing industry has done in the past, this is more likely a cynical method of preserving market advantage. Tata Consultancy is well known for using 'Tata Tools' in their outsourcing projects. They use being non-Indian as an excuse for not hiring, with the explanation that you have to know their tools to work for them, and you can only learn the tools if you work in India. This is probably just another version of that gambit.
@ bob (the original bob)
"I believe the first step would be to build a secure voting machine. This would be a single known hardware platform: only one model of display, keyboard and printer (and it would have to validate itself as part of POST), ..."
Nearly impossible without some kind of verifiable human intervention.
All the computing unit can do is perform a particular protocol with its connected devices. Any device that can perform the identical protocol will be considered an identical device, even though the actual device is an impostor. You can add cryptographic authentication to the protocol, but it's still just protocol. Anyone who can determine the secrets of the authentication protocol will be able to mimic the protocol.
And where is the code stored for the computational element to perform the protocol? If that device isn't secure, then how do we know the codes it provides for execution are correct? How do we know the device is secure, except by some trustworthiness indicator or tamper-evident seal? How do we know that those indicators, which are indirect indications of trustworthiness, haven't been tampered with?
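The point about protocol-level authentication can be made concrete. The sketch below is a generic challenge-response design (all names illustrative, not any real voting machine's protocol): the host verifies a device by HMAC over a random nonce. Note that it authenticates knowledge of a shared secret, not the physical device, which is exactly the limitation described above.

```python
import hashlib
import hmac
import os

# Generic challenge-response device authentication. The host checks
# that the responder knows a shared secret; it cannot distinguish the
# genuine device from an impostor that has extracted that secret.

def new_challenge() -> bytes:
    # A fresh random nonce per authentication attempt (prevents replay).
    return os.urandom(16)

def device_respond(device_secret: bytes, nonce: bytes) -> bytes:
    # The device proves knowledge of the secret via HMAC over the nonce.
    return hmac.new(device_secret, nonce, hashlib.sha256).digest()

def host_verify(shared_secret: bytes, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_secret, nonce, hashlib.sha256).digest()
    # Constant-time comparison avoids a timing side-channel in the check.
    return hmac.compare_digest(expected, response)
```

A genuine printer and an interposing impostor holding the recovered secret produce indistinguishable responses, so physical inspection and tamper-evidence remain necessary, just as the comment argues.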
The humans are also needed to perform checks for physical tampering, which includes side-channels added to physical devices that send to eavesdroppers, as well as MitM or impostor devices that sit between a device and the computing unit. For example, an interposing device might sit between the printing unit and the computing unit, and subtly change what's printed in some unpredictably occurring but statistically significant way. Or the single model of keyboard may have a leaking side-channel that doesn't affect operation, but which gives a listener some indication of voting choices. If your candidate is behind, maybe you activate the line-scrambler that alters uploads to the main collating host, or you pay off the confederate who's sitting on a box of mail-in votes.
Secure voting is a lot more than just securing individual point-of-presence voting machines. In some ways, securing voting machines is the easy part, despite the prodigious difficulties in doing so.
@ Robert T,
"My problem is that the first time I saw someone break a code using "Power analysis" was in 1985 The method was unknown to me, but was already a well developed technique at the time."
I can confirm that what we now call Power Analysis was around before that. I independantly discovered it and quite a few other equivalant issues back in 78-82. To me it was an obvious consiquence of the way certain things worked (or did not work). I later found out it was known "officialy" back in Churchill's day when the UK ultra secure telex system was shown to be deficient. Basicaly it used Post Office type 600 relays to XOR the plantext tape and the One Time Tape together. The relay had different pull in and release times and this caused a small amount of asymitary in the signal waveform which allowed the OTT to be removed leaving the plaintext (this has been public knowledge for thirty years or more). However what has apparently not been public knowledge is that the first attempts to fix the problem although successfull with the "on the line" signal made the power supply noise considerably worse, and this in turn caused a signal that allowed the OTT to be stripped. What made the problem worse is that the power supply was a seperate unit to the relay unit and thus the connecting leads radiated the signal especially when various longer term environment issues had done their work and increased circuit resistance.
" If you look in the publicly disclosed literature I think mention of this class of attack first surfaces in about 1995"
It was due in the main to "lack of interest", not a desire to keep it hidden. I had demonstrated Power Analysis to be just one of a number of attacks against the "electronic wallets" that were being prototyped. The lack of interest came from it being perceived as a "physical attack"; they were more interested in the attack where I showed that an RF carrier, on passing through the electronics, became "modulated" by the circuit's functioning, and that you could, with a synchronous direct-conversion receiver, see the power spectrum at a multiple of the clock frequency and elicit details of their proprietary encryption.
They chose to try to eliminate the radiated signal and deliberately ignored the other attack issues as being "impractical to carry out".
"I think it is accurate to say that general awareness of the effectiveness of DPA techniques did not reach the security coding community till maybe 2005."
In the smart card industry they were only too aware of it, but chose not to do very much about it, relying instead on PR and other marketing of known-to-be-ineffective obscuring and masking techniques rather than solving the actual problem. The problem, by the way, still exists even in some security-evaluated products; the trick is knowing how to tease the desired signals out.
"That's at least a 25 year lag after the attack was probably developed!"
Well, it could be, and actually is, worse than that. In '82 I showed a fault-injection technique using a modulated RF carrier to cause the execution of a simple program to be predictably changed in a low-cost micro system (based on the 1802 processor, which is still in use).
It was only just last year that there was a paper by some bods over at the Cambridge labs demonstrating a simple form of RF fault-injection attack against a 32-bit random number generator, which had the effect of making it less than 8-bit random.
Based on the usual uptake time (~8 years), it will be well over a third of a century before the industry gets its act together on RF fault-injection attacks using controlled modulation to change the execution path in a known way.
Philip Zimmermann has rightly quoted an Indian in his PGP User's Guide "Why I Wrote PGP.."
"Whatever you do will be insignificant, but it is very important that you do it."
Thanks for clarifying your points. So far, I agree. The problem with that perspective is that I don't see any satisfactory solution. The only real solution is for businesses to accept that there will be delays for patches offering real assurance if a new class of attack shows up. Fortunately, this rarely happens in well-thought-out systems. Many "new" types of attacks, like cross-site scripting, are rehashes of old concepts and would have been eliminated in secure designs. However, if something like an RF injection attack shows up, I don't see anything short of an expensive, cumbersome redesign and a recall fixing the problem. Software could get a patch, but a new class of attacks might require a total redesign.
Temporary hotfixes might help, though. Diverse implementations do as well. For instance, the attacks on SMM mode for Intel chips created a new class of attack for businesses, but a group with a combination of Intel and POWER chips wouldn't sweat that much. They could just order a bunch of cheap, used POWER chips to make up for lost capacity until the Intel attack was sorted out. Of course, Invisible Things Lab didn't exactly make an original discovery: these weaknesses in Intel architecture appear in papers dating back over two decades, from the development of A1-class systems, available on IEEE and ACM. Building more on the hotfix approach, we can also use gateways to act as guards or scramble the data in a way that prevents a leak. We have options.
Diversity is what I usually recommend. I've recently been looking into using PCI or VME cards and backplanes for enterprise server infrastructures. Put the critical application on three different cards with different processors and software stacks, but a compatible (POSIX?) interface. That's one backplane. Have another in hot standby in another rack with different power supply (or at another facility with a dedicated link). Voting algorithm is used to isolate for errors and individual cards can be excluded when a vulnerability is found. Cost is an issue, but certain applications don't take a ton of compute resources.
Of course, even if we don't use three cards for one app, a mixture of processors and software running the same app or several compatible apps can help prevent or reduce the costs of new attack classes. For instance, one can split web servers between OpenBSD/Apache/PHP on POWER and HardenedLinux/AppWeb/PHP on x86. If good scripting is used, the extra maintenance isn't as burdensome as one would suspect. This kind of strategy also does wonders for common attacks and worms, as the attackers get headaches trying to figure out why their signatures keep changing. I love giving attackers headaches. ;) What do you think of diversity or gateways as temp fixes for new attack classes (mostly excluding EMSEC)?
@ bob (the original bob)
Nice analysis but...
"If you can't do that (and nobody has even tried to as far as I can tell) then you certainly cant do a general-purpose OS."
...are you sure that this statement reflects the security goals of most users? Most users would like all of the stuff you mentioned to have no holes, etc., but realistically they would accept a good method of damage control. Personally, I'm most concerned about knowing my OS isn't subverted, my keystrokes aren't intercepted, my encryption/signing keys are protected, my screen is unspoofable, and an attacker can't get code executing without my permission. This is doable (and has all been done) in general-purpose OS's.
This turns a system-level attack from devastating to annoying or time-consuming to deal with. Even if software vendors only do *this*, we will have a *huge* improvement in our security profile. This also provides a necessary building block for web 2.0 security strategies: if the systems themselves are insecure, then how can the web security features built on them claim to be secure? Hence, we start with system security at least in key areas, then we can do the other stuff with some sense of confidence.
For instance, OS with strong process isolation and capability based security (think EROS, OKL4, or INTEGRITY) can make a browser design like ChromeOS seem nearly invulnerable to attackers. On the other hand, mainstream OS's with primarily discretionary access controls on hardware or kernels with poor process isolation give the developers headaches and users little assurance that the security policy will be enforced correctly and consistently. Greatly improving the security of existing systems is actually pretty simple: companies just don't want to invest in it because consumers won't spend extra for it.
If this obscurity idea doesn't hold water, then I think we've been looking at a purely political remark. What they really want is to develop more of their software domestically.
"@Bruce I know you think on a higher plane these days, but it would be nice for you to sum up your opinion of hypervisors as a security sandbox in a blog post ;)"
Well, I'm not Bruce, but if you google my name and "schneier" you will find me talking extensively about this subject and mentioning many secure hypervisors. It's currently one of my main hobbies in security engineering. QubesOS is an interesting project but I wouldn't trust it for anything requiring serious security. If you allow me to explain some basics, I think you will understand where I'm coming from.
To understand things like this, you must understand the concept of a Trusted Computing Base (TCB). This is all of the code or functionality that a given app/component depends on in order to meet its security requirements. Anything that, if subverted, could be used to defeat security requirements is part of the TCB. Large or complex TCB's generally lead to security problems or situations where you can't even estimate how secure a system is. In a typical Windows app, here's what's in the TCB: processor hard-coding; processor microcode/firmware; BIOS/other-firmware; bootloader; OS kernel; OS user-mode code; key user-mode libraries. This assumes the app is isolated and doesn't interact with other apps. Take a look at that list and look into how much functionality/complexity each has. That's a HUGE TCB. That's why Windows desktop systems have vulnerabilities popping up all over the place.
Now, let's look at QubesOS. I've only given it a passing glance, admittedly. This has taught me some things, though. In addition to hardware/firmware/BIOS/bootloader, its TCB contains these components: Xen hypervisor; Dom0 OS instance; Dom0 support software; QubesOS extensions like the GUI; DomU OS code; DomU support software (like virtual drivers & stuff). A failure in any layer can keep the app from functioning properly. Notice how many avenues of attack there are here. I mean, Dom0 is the size of an entire operating system by itself. If anything, a Xen-based solution *increases* the attack surface and odds of vulnerabilities. Attacks on x86 architecture, including memory leaks & errata, are also not addressed. The only way a hypervisor-based solution can work is if it reduces the TCB or makes it manageable, which Xen does in some ways and doesn't in others. Let's see a high[er] assurance virtualization scheme.
The Nizza security architecture (google for the paper) is one of my personal favorites because it's available now. It starts with a microkernel: L4/Fiasco. Microkernels are the best strategy for secure systems: they provide the bare minimum of functionality in kernel mode, then run everything else in user mode in a client-server model. L4 is about 15KB, has a native interface for apps (including drivers), has a runtime for higher-level apps, and has a Xen-style paravirtualized Linux layer that runs in either Ring 1 or 3. They also have a wrapper that allows Linux drivers to run in isolated address spaces, although it doesn't stop malicious drivers. Nizza is built on L4, a few trusted components running on L4, and a Linux VM for the main interface.
In a Nizza setup, the security-critical part of an app runs isolated directly on top of L4 while the non-trusted part runs in the Linux VM. A trusted path GUI system ensures you know which you are looking at. Trusted path gives control of the screen and input hardware to a simple, isolated app. Other apps pass their screen renderings to it or ask for input. Only the app with focus gets to interact, and an unspoofable display shows which app has focus. TUD-OS is a demonstrator LiveCD from Dresden showing Nitpicker, L4Linux, and a Nizza eCommerce app. The eCommerce app has you doing a purchase where you do the digital signing with a native L4 task. The trusted path ensures that the Linux app, compromised or not, doesn't get the password that decrypts your private signing key or modify the data you are signing before you see it. The TCB for the security-critical portion is very small, inspiring confidence. The Perseus Security Architecture web page shows a similar design with great explanations of assurance arguments and such. It led to the Turaya Security Kernel, already used in crypto devices, VPNs, and end-point security systems. Micro-SINA VPN (google the paper) uses the same concept.
If you want to do Intel, there are options. The most recent processors have the fewest errata. I would pick a Nehalem chip, disable everything unneeded in the BIOS, kill all but one core to prevent sync attacks, and put a hypervisor like LynxSecure or INTEGRITY RTOS w/ Padded Cell on top of it. Make sure a TPM verifies the trusted software stack upon load. Then, Nizza-style, put security critical functionality directly on the RTOSes/hypervisors and the untrusted code in one or more Linux/POSIX VM's. The safety-critical microkernels also usually support C, C++, Ada, Real-time Java, and POSIX interfaces, along with higher assurance CORBA ORB's to let them communicate in a high level way.
Personally, I'd use an aerospace- or telecom-grade POWER chip from vendors like Freescale, Curtiss-Wright (see the VxWorks MILS board), or AMCC. Pick one that has a good board support package for one of the safety-critical OS's or MILS kernels with a Linux layer. Then, carefully design your security-critical components to leverage the assurances of the kernel, BSP and support software. You must design each component to be secure, as well as its interactions with others. Make sure crypto is immune to cache attacks and registers are cleaned when switching from trusted to untrusted processes. Sanitize all inputs, check for errors everywhere possible, and use languages or tools with very strong static typing and possibly formal methods to validate the functionality and information flow. After all this, you might have a secure system.
If you want to see what it takes to build a real secure (or just correct) system, look into papers on "SCOMP", "GEMSOS", "Secure Ada Target" (or the ASOS kernel), Type 1 cryptographic devices, "LOCK" by Smith (free papers online), the VAX A1 security kernel (free docs), seL4 (free docs), AAMP7G (free docs), or the TX window manager (some of QubesOS is very similar to this, except two decades later). These systems all addressed issues from the firmware up, minimized/managed the TCB wherever possible, removed as many flaws as possible, and dealt with covert channel attacks that neither Xen nor QubesOS address at all. If hypervisors and their associated TCB aren't correct by construction, then they are just extra complexity that should be looked at with suspicion. Theo de Raadt (OpenBSD founder) has also publicly bashed x86 virtualization for this reason on the kerneltrap mailing list.
However, it might be good for resource consolidation or job security. ;)
That's an interesting take; it reminds me a lot of how beautiful an idea Symbian's thingie was, and how many holes were left unpatched in the old, too-trusting code that ran on top of it.
Rather than pitting you against Joanna - I can't find the link, but she once replied to a forum question I asked saying the attack surface of QubesOS was about 10K lines - I'll instead ask you how you think a chink in the usability should be handled:
Imagine I am a power user, and I have a hypervisor-based VM running with two sessions. I have an external, USB-attached storage that I plug in. How do I manage which VM it becomes available to - or can it be both? - and how is this secured?
WRT RF fault injection attacks.
I wrote some code to prevent single-point fault injection attacks from controlling critical decision points in program execution.
The code used "computed gotos" and vectored branch tables. The resulting assembly code was beyond "strange". However, it did eliminate the problem of forcing execution control changes by branch flag manipulation. Unfortunately, the original problem was replaced with a different one: illegal branch vectors. The range of the computed goto had to be confined to the branch table's vector space. In the end this was done with special hardware to check the branch address range. (Including these registers probably increased the vulnerability space, so I'm not sure that anything was really fixed...)
What they should do is create several functioning operating systems, each designed to be secure, and use these various operating systems along with other secure operating systems. However, security through obscurity doesn't work. (Especially since it may be harder to figure out whether they know the code, as that itself would be kept secret.)
"...saying the attack-surface of QubesOS was about 10K lines..."
It was either misreported or she was referring to a particular critical component that interacted with untrusted code. These two guesses come from the fact that the smallest microkernel out there, L4, is still about 15KB. Separation kernels can be as small as 4k LOC, but she doesn't use one. Everything with privileged interaction with the processor or hardware is part of the TCB in any design. That's why you have to count Xen, drivers, Dom0, etc. An attack on any of these may violate the security policy.
"Imagine I am a power user, and I have a hypervisor-based VM running with two sessions. I have an external, USB-attached storage that I plug in. How do I manage which VM it becomes available to - or can it be both? - and how is this secured?"
This is a very good question and I've wrestled with this. I'm assuming you meant a hypervisor running two VM's, each representing a session. If so, we can take a page out of VMware's or VirtualBox's book. These two programs can control whether a guest VM has USB access statically or dynamically. With paravirtualization, it may be a bit easier: we can use virtual USB drivers in each guest that connect to an isolated USB stack with access control enforcement. If a specific guest doesn't have USB access, then it simply looks like nothing is connected. In Intel-VT virtualization with a hypervisor like LynxSecure, we can use traps to catch USB access (allowing or denying it) and use Intel's new IOMMU (VT-d?) to prevent guests from misusing USB stacks, among other things.
Regardless of the method, it seems that a particular USB device must be locked to a particular VM. Sharing them is possible in many cases but introduces extra complexity and breaks noninterference. Noninterference is the academic jargon saying that each VM is isolated from another except through permissible communication mechanisms. With multiplexed USB devices, one VM may subvert or sabotage another indirectly. This is called a covert channel. It's dangerous enough to necessitate the one device per VM rule.
So, how do we do it usably? We could have an automated version that says "Give control to whichever VM has focus." Or we could, if using Xen's console-guest model, let the Dom0 always have control and give indirect access to DomU with focus or permission. For USB storage, we could possibly reuse Xen's existing virtual storage technology to let DomU's access files on USB much like they already do filesystems.
If we have a trusted path GUI and are running everything through it, we could have a notification pop up that asks which VM to give the device to. Then, the software takes care of all the plumbing in the background and the guest VM just suddenly sees the device. This seems pretty easy for a power user and shouldn't require a ton of complexity on the trusted components. Personally, I'd prefer this approach because I'm not always looking at the VM I want to get the device. I might know the device will take a minute to load and I'm doing something else while waiting for it. So, being asked which VM to give it to would be nice.
In a secure design, we'd also have to address termination of USB devices. Have you seen that "safely remove device" feature? This would have to be in the console VM or in a trusted app so we could ensure proper termination in the case of a rogue or faulty VM. Don't want to kill the VM, then have a corrupted device, eh? The system, likely to use capability security, should also keep track of the capabilities each guest has. This way, upon termination, the system immediately starts revoking and RAM-overwriting capabilities of the terminated guest VM to ensure a fail-safe situation.
Again, this should be usable enough for power users. After all, many of them screw around with command lines and esoteric GUI options all the time. If they couldn't figure this out, then I'd refer them to Apple. ;) I wonder if the user-facing part is simple enough for a lay user to operate. What's your opinion?
Also, after reading these details, do you see how far existing open source and many commercial solutions are from "secure" in USB handling alone? The quote I always give comes from Brian Snow's paper "We Need Assurance" (free online):
"The problem is innately difficult because... computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys' stuff! So today, making a computer secure requires imposing a "separation paradigm" on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels." That's why this is so tough and we have to deal with every little detail.
"What they should do is offer creating several functioning operating systems, each designed to be secure."
Why? What does this get them? Considering their aims, they would be better off just improving on existing projects. OpenBSD/NetBSD, TUD:OS, QubesOS, OKL4 3.0 w/ OK Linux, the CapDesk desktop, EROS/COYOTOS and others all have excellent assurance in specific areas and great potential in others. They just need investment.
For instance, India would do itself a lot better if it just wrote a high assurance software stack for one of the microkernels, safe interfaces to untrusted VM's, and a trusted path system that lay users could understand. It would cost a few million, prevent keylogging, control damage to untrusted VM's, protect secret keys, and stop many privilege escalation attacks. Or they could just spend a few million licensing all of Green Hills' and partners' RTOSes, middleware, drivers, etc. Then, test, extend and port it to some specific cheap desktops and notebooks. Then, they could spend a few million more porting security-critical apps to run directly on the INTEGRITY RTOS or to leverage the high assurance components. The total cost would be between $15 and $25 million (a wild guess, admittedly), resulting in a very usable, medium-robustness platform whose security-critical apps would only be vulnerable to covert channel leaks or manipulation, if that.
So, building new OS's is a dumb idea. I don't want to see more OS's built when we already have a bunch of good ones. There's plenty of diversity out there. We just need those good ones to be improved and used properly. Even if each country picks a different one and offers a standardized API (POSIX?), then the robustness of the Internet as a whole increases.
As far as a Windows clone goes, nobody is going to successfully build one without pirating the Windows source code. The best bet is a rewrite of all critical portions of Windows and the must-have apps. The source code would be licensed, everything's behavior (intended and actual) modeled precisely along with dependencies, and the entire system would be rewritten with medium to high assurance techniques from the ground up. Internals would be modularized, layered and simplified. Anytime something on top broke, it would be rewritten enough to work (or rewritten entirely). The project would probably take hundreds of man-years of effort and cost tens of millions of dollars. The funding would be distributed amongst the governments that wanted access to this version in their countries. Anything less than another Manhattan Project won't produce a Windows- and backwards-compatible OS with any assurance of functionality, reliability or security. It's a sad reality, but India must accept it or face utter failure.
I hope they call it the "nobody knows what’s that" OS or NkwtOS
"The DRDO specifically wants to design and develop its own OS that is hack-proof to prevent sensitive data from being stolen."
You're already starting off wrong, so I'm not terribly sanguine you're going to get anywhere farther down the road than anyone else has...
Good news. Good luck to the Indians with their OS. I personally think that any country that is serious about its telecommunications and IT infrastructure should have its own OS.
"Have its own OS" is a meaningless phrase - such a complex project, undertaken by a government with the usual range of government motives, has little chance of success. The project, among other issues, starts with a list of enemies (those being the enemies of the Government of India) that very few other projects would have to contend with.
""The only way to protect it is to have a home-grown system, the complete architecture ... source code is with you and then nobody knows what's that," he added."
I was prepared to take this "OS" seriously until I saw this quote about closed source meaning no one will be able to penetrate it. It's clear that anyone stupid enough to think that simply closing the source of the OS will prevent penetration is too stupid to write or supervise any kind of OS work.
This project is thus already over and not worth further consideration.
Most of the ideas above, including just adding some layers onto FreeBSD, are more sensible. I think this project has been proposed simply to piss on RIM's BlackBerry and QNX operating systems, which these people cannot penetrate... which suggests what we really should be using...
This is a legitimate Indian OS, but since it was a Debian distro, I imagine by now all of its language features etc. are in at least one other distro:
For more fun read the Satish Chandra comment here
"Robert Crowley, former Assistant Deputy Director of Clandestine Operations of the CIA, gave documents of his own top secret operations to his friend, historian Gregory Douglas and described in detail how the C.I.A. has done “business” with Russian intelligence agencies for many decades, how the C.I.A. directly arranged the plane crash which killed Homi Bhabha but relied on Russian intelligence agencies, with which it did “business”, to assassinate Shastri who had given a go ahead for an Indian nuclear weapons program. The Russian intelligence agencies -- large parts of which were brought on the C.I.A.’s payroll -- brought down the Soviet Union. After a letter of mine in Indian Express in the early nineties which appeared under the editor's heading “Grab This Opportunity” regarding a Russian proposal to form a Russia-China-India alliance, P. V. Narasimha Rao sent the head of India’s submarine-launched ballistic missile program to Russia to get help, where he died as Shastri did. When, in a letter to the press, I pointed out that this was the “help” the Russians had provided, the Russians hastily withdrew a delegation that was visiting India. It will be the easiest thing in the world for Russian or other intelligence agencies to install devices in submarines etc. with which they can track them and, in keeping with their “business” relationship with U.S. intelligence agencies, enable the Americans to track them, too. I have said there should be an iron-clad rule against importing ANY defence equipment and I have also described the perils of importing..." (more or less everything).
So there may be good reasons for some Indian defense professionals to feel this is their "only" path, though it can't work.
And, um, anyone remember "Red Flag Linux"?
Maybe it's time to develop an ASIC that we could combine with a revived version of Multics....
Hi, does anyone have an idea what the security order of the operating systems below is?
CDAC has already tried customizing Linux. The distro is called BOSS. It still does not fit DRDO requirements, as is evident from the ever-increasing cyber attacks from mainland China. Even the Pentagon is jumping. I personally think the idea of a new OS, especially for defence-sector computers, is a good one. I would expect it to have a very spartan UI (a la Win 3.1 / Apple Lisa), but very robust security at all interfaces.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.