"The world is run on insecure and badly-designed software because writing all software the way the space shuttle software was written would be utterly impractical."
And if you remember, even NASA's Space Shuttle software has had a hiccup or two...
The world is an imperfect place where the only absolutes we thought we had appear to rest on the "roll of God's dice", which is why Albert got so upset.
We either live with it, or, as Albert did, "paint ourselves into a corner".
We knew from before the first electronic computer was ever built that we were going to have a fun time with them. Kurt Gödel showed that any system of logic that was effectively practical to use could not be used to prove itself (his incompleteness theorems). Church and Turing (the halting problem) showed slightly later that this held even for deterministic systems.
And if you take it forward you will realise the implication is that any moderately complex system cannot really be known. That is, if the flow of the program can be controlled by the input data, the program itself becomes as unknown in its behaviour as the input data. Further, if this data can be stored within the program in some way, and the data can make the program access other data, you are basically back to the Turing machine...
Only the simplest of programs can be "proved" in any way, and this is generally within several constraints, with the proof itself often resting on the assumptions we call axioms.
You can take these ideas forward to show that no useful program can be 100% error free, and thus in turn cannot be 100% secure.
Is this a problem? Well, actually, no, I don't think it is.
Back when they were designing the atom bomb at Los Alamos, they ran into a problem with some of the mathematics involved. In effect, they could not get answers deterministically in a meaningful way. So they used probability instead, and ended up with what we now call Monte Carlo methods.
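As a toy illustration of the Monte Carlo idea (my sketch, nothing to do with the actual Los Alamos calculations): rather than integrating deterministically, you sample at random and let the statistics converge on the answer.

```python
import random

def estimate_pi(samples: int) -> float:
    """Estimate pi by throwing random points at the unit square and
    counting how many land inside the quarter circle of radius 1."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # inside/samples approximates pi/4, the quarter circle's area
    return 4.0 * inside / samples

random.seed(42)
print(estimate_pi(100_000))  # close to 3.14159, within sampling error
```

No single run is exact, but the error shrinks predictably as the sample count grows, which is the whole point: a probabilistic answer that is good enough, where a deterministic one is out of reach.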
I think the same reasoning can be applied to the functioning of all moderately complex programs, and thus in turn to their security.
I won't go into the details but consider this.
We have a very limited number of programmers who can write secure code. Is this an issue?
Currently yes, but does it need to be?
I think not. Consider the following proposition.
We get programmers to use what are effectively scripting languages, but these are slightly different from the normal scripting languages.
They actually have two parts: the conventional subfunction to be used in the program, but also a security function that gets run in a hypervisor system. This carries a signature of how the subfunction behaves and the level of resources, such as memory and CPU cycles, that the subfunction needs.
If the subfunction tries to exceed the limits, or its execution signature becomes abnormal, the hypervisor stops the function in its tracks and then goes in to perform a sanity check on the subfunction's current data and memory contents.
If things are not as they should be, then it kills the subfunction and chucks it up the hypervisor stack for human analysis.
If the subfunctions are written in the correct way, then both the execution signature and the limits can be set by the scripting engine as it compiles down to its version of byte code.
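A minimal sketch of the limit-enforcement idea, with names and budgets of my own invention (a real system would meter CPU cycles and memory, not Python loop steps):

```python
class SignatureViolation(Exception):
    """Raised when a subfunction strays outside its declared signature."""

class Hypervisor:
    """Toy monitor: runs a subfunction over a data stream under a
    declared per-call step budget, halting it if the budget is exceeded."""

    def __init__(self, max_steps: int):
        self.max_steps = max_steps  # set by the scripting engine at compile time

    def run(self, subfunction, stream):
        steps = 0
        results = []
        for datum in stream:
            steps += 1
            if steps > self.max_steps:
                # abnormal signature: stop the function in its tracks
                raise SignatureViolation("step budget exceeded")
            results.append(subfunction(datum))
        return results

hv = Hypervisor(max_steps=10)
print(hv.run(lambda x: x * 2, range(5)))  # [0, 2, 4, 6, 8]
```

A well-behaved subfunction never notices the monitor; one that runs long, whether through a bug or subverted input, trips the budget and gets handed up the stack.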
But this "right way" has many advantages, in that each subfunction can, just like those in a Unix shell script, run on any available CPU core, so you get implicit parallel processing with just a few other constraints on how the scripts are written.
Further, the script subfunctions need have no knowledge of the outside world: their input data is "drip fed" to them by the hypervisor stack, and their output is fed back a drip at a time to the hypervisor stack. This critically limits the need to store data in any function, and thus the ability to subvert the function through its input.
And because each subfunction is decoupled from the world and from every other subfunction, the hypervisor can halt it at random and sanity check its code and data memory. If it has become suspect, it can be stopped and kicked back up the hypervisor stack.
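The drip feed and the random halt-and-check can be sketched with a Python coroutine; this is a minimal sketch, assuming the subfunction can be written as one, and both the squaring transform and the non-negativity check are placeholder examples of mine:

```python
import random

def subfunction():
    """Drip-fed worker: receives one datum per step via send() and
    yields one result back; it never sees the whole input stream."""
    result = None
    while True:
        datum = yield result
        result = datum * datum  # placeholder transform

def hypervisor_drive(stream, check_probability=0.2):
    """Feed data to the subfunction a drip at a time, randomly pausing
    to sanity check its output before anything reaches the outside world."""
    worker = subfunction()
    next(worker)  # prime the coroutine
    for datum in stream:
        out = worker.send(datum)
        if random.random() < check_probability:
            # stand-in sanity check: a square can never be negative;
            # a failure here would mean killing the worker and escalating
            assert out >= 0, "state corrupted, kill and escalate"
        yield out

print(list(hypervisor_drive([1, 2, 3])))  # [1, 4, 9]
```

The worker holds no state beyond the current datum, so there is very little for hostile input to latch onto, and the hypervisor can interpose its checks without the worker ever knowing.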
Now have a think about each subfunction being written by three different teams, with the hypervisor randomly using one of the three versions to execute each iteration of the subfunction.
If a signature or limit exception is raised, the hypervisor can present the same data to all three versions of the subfunction, take a vote, and decide what to do.
Further, the hypervisor can randomly send one small data set to all three versions; if they all agree in the way they behave, then the chances are the program is behaving functionally correctly.
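The three-team voting step might look something like this (a minimal sketch; the team implementations are invented examples of three independently written versions of the same subfunction):

```python
from collections import Counter

def vote(implementations, datum):
    """Run the same datum through independently written versions of a
    subfunction and take the majority answer; if no majority exists,
    escalate for human analysis."""
    results = [impl(datum) for impl in implementations]
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:  # all three versions disagree
        raise RuntimeError("versions disagree, escalate for analysis")
    return winner

# three independently written versions of "double the input";
# a bug (or subversion) in any one version gets outvoted by the other two
team_a = lambda x: x + x
team_b = lambda x: x * 2
team_c = lambda x: 2 * x

print(vote([team_a, team_b, team_c], 21))  # 42
```

The point of the independent teams is that the same bug, or the same exploitable flaw, is unlikely to appear in all three versions at once, so a single compromised version stands out.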
There are a whole load of other things the security hypervisor can do, such as out-of-order execution, adding random time delays, and running through nonce data.
Effectively, the script programmer is blissfully unaware of any of this background activity by the hypervisor. However, any malware writer wishing to put data into the script to subvert it in a meaningful way is going to have a much, much harder time.
In this way the scarce resource, those programmers capable of writing the subfunctions securely, can concentrate on doing just that; the everyday jobbing programmer need not get involved with the actual security at this level.
Thus you end up with a script that is both inherently more secure and probabilistically checked for incorrect function by the hypervisor.
Such a system would not be perfect, but it would certainly be orders of magnitude more secure against external malware attacks than current methods.
Have a think on it and see if you can come up with further benefits.
I'm aware that it has certain issues as well, such as not being "CPU cycle" efficient, but then neither is Java, Perl or just about any other post-Java language.
Oh, and one other benefit: you can make the subfunctions of the scripting language much more high-level. It is known that the number of "bugs" a programmer introduces into any program is related more to the number of lines of code than to just about any other metric, which is why the likes of LISP programmers are generally more productive than C++ programmers, and they in turn more than assembler-level programmers.