@ Rob Lewis,
"However, a patched OS is not necessarily an inherently secure system, just a less vulnerable one."
Absolutely, but unlike the majority of user applications, at the OS level there are at least security mechanisms that are understood and available (or there should be, in a modern OS).
Although OSes are not as secure as they could be (even when tightened down by an expert), the malware battle has by and large moved on, because the OS is no longer the low-hanging fruit.
Many applications are multithreaded, have access to multiple resources, and can have multiple instances of themselves running not just at the same privilege level but in the same unrestricted memory segment.
Thus they have little or none of the segregation that a sensible OS would normally enforce between multiple processes running under the same user.
And so it is the application, such as the web browser, that is currently the low-hanging fruit, especially as it allows programs to be downloaded and run as part of normal user activity.
Even "sandboxed" scripts can communicate via access to shared resources, and often leverage themselves or exert influence "outside of their box" in one way or another.
A year or so ago people were saying "OK, it can be done, but to what end?"; currently we are starting to see code that is site- and user-specific and that hides transactions on electronic bank statements.
What is the next step for malware developers?
How long, say, before scripts from two different sites become aware of each other through a user's browser?
Let us say that you are shopping on one site, and a script from that site becomes aware of another site you have open and opens a covert channel to pass information across. In essence, one site could influence the behaviour of another site.
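To make the idea concrete, here is a minimal sketch in TypeScript of one way such a channel could work, assuming both sites can reference the same cacheable third-party resource (the URL is hypothetical): the browser's shared HTTP cache becomes the "shared resource" the two scripts signal through.

```typescript
// Hypothetical sketch: a one-bit covert channel between two otherwise
// isolated scripts, using the browser's shared HTTP cache as the medium.
const SHARED_URL = "https://cdn.example.com/common.js"; // hypothetical

// Sender (script on site A): signal a 1 by pulling the resource into
// the shared cache; signal a 0 by doing nothing.
async function sendBit(bit: number): Promise<void> {
  if (bit === 1) {
    await fetch(SHARED_URL, { mode: "no-cors" }); // warms the cache
  }
}

// Receiver (script on site B): time a fetch of the same resource.
// A fast response suggests a cache hit, i.e. the sender signalled a 1.
async function receiveBit(thresholdMs = 20): Promise<number> {
  const start = performance.now();
  await fetch(SHARED_URL, { mode: "no-cors" });
  const elapsed = performance.now() - start;
  return elapsed < thresholdMs ? 1 : 0;
}
```

The timing threshold is purely illustrative, and a real channel would need calibration and error correction, but the bandwidth required to link two shopping sessions is tiny.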
I cannot immediately think of a really good example of why malware writers would find this advantageous, but I'm fairly sure that at some point one will...
But the point is that we have yet to see what "fruit" the next malware will use, and whatever it is, the chances are it will only be "obvious with hindsight".
And this is the hidden problem with mobile devices: with low CPU and memory resources and at best modest connectivity, patching applications on the mobile device is problematic.
Now if you think about it, what Google is trying to do is make a lite / thin client, with the application running on a server with plentiful resources.
If you patch the application on the server, then effectively all the mobiles that use it get the patched version immediately.
I would be the first to admit the concept is not new, and that lite / thin clients (and X-terms) never really had much market traction, due mainly to the lack of a price differential between a thin client and a full-blown PC.
However with mobile devices the situation is very different.
Take the ubiquitous phone handset, for example: due to the limitations of batteries and physical size it is extremely resource-limited, and there is little that can be done to change that within the constraints of current cost-effective technology.
It therefore makes sense to devote those meagre resources to the user interface and to connectivity, which in essence is what a thin client is.
Some modest applications can be written as scripts to run within the "lite browser"; however, "heavy lift" applications would run on a server along with storage and other back-end productivity resources (the dreaded "groupware" etc.).
With regard to the server end of things, the trick is to have a framework where an application need have no security awareness; the framework deals with it.
Which is very much what you describe. However, the framework should provide more services than would normally be expected of an OS; in effect it should provide access to the usual business back ends.
Thus what the application really becomes is middleware, where the "application logic" is effectively "scripted" together from filters / tools, in a similar way to the "Unix philosophy".
The application developer concentrates on the application logic and error/exception handling; the framework provides the security and the "heavy lift" such as DB searches etc.
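As a rough sketch of what that might look like, assuming a hypothetical framework-supplied context object (the `search` call stands in for whatever back-end services the framework actually exposes):

```typescript
// Hypothetical sketch: application logic as a pipeline of small filters,
// with security and "heavy lift" owned by the framework behind `ctx`.
interface Context {
  user: string;
  // Framework-provided service: access control and the DB search happen
  // behind this call, not in application code. Hypothetical signature.
  search(query: string): Promise<string[]>;
}

type Filter<A, B> = (input: A, ctx: Context) => Promise<B>;

// Compose two filters into one, Unix-pipe style.
function pipe<A, B, C>(f: Filter<A, B>, g: Filter<B, C>): Filter<A, C> {
  return async (input, ctx) => g(await f(input, ctx), ctx);
}

// Example filters: the "application" is just their composition.
const findOrders: Filter<string, string[]> =
  (customer, ctx) => ctx.search(`orders for customer ${customer}`);

const summarise: Filter<string[], string> =
  async (rows) => `${rows.length} orders found`;

const app: Filter<string, string> = pipe(findOrders, summarise);
```

The point of the shape is that the filters contain no security code at all; anything sensitive has to go through the context the framework hands them.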
Which leaves the user interface, which is perhaps best done as an abstracted "virtual display" anyway, as this allows various levels of hardware resource to be used transparently: very light, low-bandwidth mobile devices can have most of the work done by a server that keeps a frame buffer and sends just "diffs" down to the mobile (VNC style), through to more capable devices using a higher-level protocol such as you would expect from a full-blown web browser.
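For the low end of that range, the server-side diffing is straightforward; a minimal sketch, assuming a raw RGBA frame buffer and an illustrative 16-pixel tile size:

```typescript
// Hypothetical sketch of the VNC-style "diffs only" idea: compare the
// last frame sent with the new one and ship only the tiles that changed.
const TILE = 16; // tile edge in pixels (illustrative choice)

interface TileUpdate {
  x: number;          // tile origin, in pixels
  y: number;
  w: number;
  h: number;
  pixels: Uint8Array; // raw RGBA for just this tile
}

function diffFrames(
  prev: Uint8Array,
  next: Uint8Array,
  width: number,
  height: number,
): TileUpdate[] {
  const updates: TileUpdate[] = [];
  for (let ty = 0; ty < height; ty += TILE) {
    for (let tx = 0; tx < width; tx += TILE) {
      const h = Math.min(TILE, height - ty);
      const w = Math.min(TILE, width - tx);
      const tile = new Uint8Array(w * h * 4);
      let changed = false;
      for (let row = 0; row < h; row++) {
        const src = ((ty + row) * width + tx) * 4; // byte offset of row segment
        for (let i = 0; i < w * 4; i++) {
          const b = next[src + i];
          if (b !== prev[src + i]) changed = true;
          tile[row * w * 4 + i] = b;
        }
      }
      if (changed) updates.push({ x: tx, y: ty, w, h, pixels: tile });
    }
  }
  return updates; // only the changed tiles go down the wire
}
```

Only the changed tiles go down to the mobile, so a mostly static screen costs almost no bandwidth.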
Surprisingly, all of this can be done with the current level of technology we have; however, we have to accept that there is a significant cost to "security" in terms of "efficiency".
Security takes CPU cycles, and proper segmentation usually means "one task per module", where each module has its own kernel and memory and accesses all other resources through the framework. Simplicity dictates a "one size fits all" methodology for the modules, which is unlikely to lead to optimal usage.
However, is this lack of efficient utilisation of resources really an issue?
In reality, only for the marketing department's "bang for your buck" figures.
After all, we already accept worse resource utilisation to gain "high availability" through fault-tolerant redundant hardware.