Ars Technica on Liabilities and Computer Security

Good article:

Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it’s hard to mandate, or even to measure, “security consciousness” from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it’s not likely to be effective unless management’s heart is in it.

This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year’s attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don’t repeat that mistake in future.

I’ve been talking about liabilities for about a decade now. Here are essays I’ve written in 2002, 2003, 2004, and 2006.

Posted on July 27, 2011 at 6:44 AM • 46 Comments

Comments

Richard Steven Hack July 27, 2011 7:57 AM

It would be nice if civil liability were an issue for the entire IT industry in terms of usability and reliability as well as security.

In fact, it would be nice if the “we bear no responsibility for anything because our product is total crap – and by the way you don’t even own it” EULAs were grounds for civil liability.

The bottom line: None of this is going to happen as long as Microsoft, Apple, Oracle and other multi-score-billion-dollar companies make massive campaign contributions to politicians.

Just as Congress will never loosen the overly restrictive intellectual property laws because the even LESS massive entertainment industry has paid them off to make sure that never happens.

You want liability for insecure software? Start by demanding Congress pass a law outlawing all campaign contributions by corporations of any kind on any level – city, state and Federal.

Which might also at least make one small step to preventing the military-industrial complex, the oil industry, the financial services industry, and the banking industry from running the US Congress as their errand boys. The current President of the United States is an errand boy for the Crown and Pritzker families in Chicago, nothing more. Which is why we’re in – and will stay in – Iraq, Afghanistan, Yemen, Somalia, Libya and heading for Pakistan and Iran.

Naturally, the odds of getting any such legislation through are approximately zero point zero.

Bottom line: The software industry, let alone any of these other industries, is not going to change – ever. And neither is the US Congress, or the electorate who puts these scum into office every four years.

“There is no security” is a meme which is dependent on a number of far, far lower level memes that deal with the real world.

Suck it up.

S July 27, 2011 8:07 AM

@ RSH

That’s far more of an indictment of capitalism in general (with which I entirely agree!) than just the software industry.

Is it really surprising that when profit is your main motive, quality slips?

Wayne Conrad July 27, 2011 9:55 AM

Using lawsuits as an incentive for good security? Sounds good. Might even work, but there may be unintended consequences.

Ask any health care professional how many of the expensive tests they run are not medically indicated, but are ordered as a future defense for any possible lawsuit.

Clive Robinson July 27, 2011 10:17 AM

ON Topic 😉

For some considerable time I have been saying that security is a quality issue.

That is, the same framework as quality assurance, but with different tasks, will ensure that you have the ability to build security into your products.

And like the physical manufacturing industry it will actually pay for itself, if and only if it gets buy-in from the biggest desk in walnut corridor all the way down to those actually putting goods into the customers' hands.

Now what made physical manufacturing buy into quality control so quickly and effectively was “return rates”. Due to the very asymmetric cost of returning defective physical goods (a 1% return rate could completely wipe out any and all profit in FMCE) manufacturers had to get on top of quality at all stages.

Contrary to what many think, quality control was not a Japanese invention; it was actually a British invention, but by the time we formalised it Maggie Thatcher and her cohorts had destroyed the manufacturing industry in the UK. However we did have service industries, and contrary to many people's expectations quality systems are just as effective in the service sector as well.

Now unfortunately the management view in many software houses is that the measurand of software productivity is the quantity of code cut and features approximated. This is because there are no asymmetric costs in shipping defective goods.

In fact, due to the way software can be patched, almost the entire defect cost falls on the customer, not the manufacturer of the defective software.

The question is how to either return the asymmetric cost of rectifying faulty goods to the manufacturer, or how to show that both a good quality process and a good security process will pay dividends.

Personally I would like to avoid the “liability” angle enforced under law if at all possible, because of the effect it would have on FOSS. Liability legislation would effectively either outlaw it, or hamstring FOSS to the benefit of the current inept software manufacturers. And as RSH and others would point out, they would get the laws sufficiently watered down as to have little effect on them but a significant effect on FOSS, thereby effectively creating a legal monopoly.

Clive Robinson July 27, 2011 10:32 AM

Oh I forgot to mention one thing,

The whole of this conversation is effectively pointless for one simple reason,

A lack of usable measurands.

That is, before you can start talking about liability or any other proactive method of improving security, you need to actually be able to use the scientific method as originally developed and used by Sir Isaac Newton.

The scientific method cannot be used without good measurands that work the same way irrespective of the software under test.

We have nothing even close.

@ Bruce,

Not being funny or rude, but if you really, really want to improve the state of the security industry at all levels you really should be talking about reliable measurands, not liability.

It is not having effective measurands that allows the likes of the DHS and large software organisations to get away with “Security Theatre”, nothing else.

You invented the term; I now invite you to do what would consign it to the annals of history.

As the old adage has it, “Put the horse in front of the cart”.

Clive Robinson July 27, 2011 10:40 AM

@ Richard Steven Hack,

“Military chip crypto cracked with power-analysis probe.”

Sadly it is nothing new; I’ve known about the issues for a number of years with other chips that use crypto etc to hide various design functions (specifically loading DSP software into combined DSP/microcontroller chips in embedded and then encapsulated systems).

Robert T can give you a very good rundown on the issues involved in FPGAs and other similar chips, and why nonlinear activities will almost always have side channels by which they leak information via their power spectrum.
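
A toy simulation of the idea, assuming the usual first-order Hamming-weight power model (everything here is invented for illustration; a real attack needs many noisy traces and proper statistics):

    import random

    # Toy demo of why data-dependent switching leaks key bits.
    # Power model: consumption ~ Hamming weight of the S-box output
    # (the usual first-order approximation). All values invented.
    SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

    def hw(x):
        return bin(x).count("1")

    def power_sample(plain, key):
        # "Measured" power for one 4-bit S-box lookup.
        return hw(SBOX[plain ^ key])

    SECRET = 0xA
    plains = [random.randrange(16) for _ in range(200)]
    traces = [power_sample(p, SECRET) for p in plains]

    # Attacker: pick the key guess whose predictions best match the traces.
    best = max(range(16), key=lambda g: sum(
        power_sample(p, g) == t for p, t in zip(plains, traces)))
    print(hex(best))  # 0xa: the "secret" falls out of the power profile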

echowit July 27, 2011 10:44 AM

I agree (I think) with several of the above: liability is probably a better solution than legislation. But couldn’t some standardisation be brought to the privacy/liability/user agreements we all try to read but usually bail out of before hitting “I accept”?

Not a single one-size-fits-all standard of course, but a set of known, universally understood standard policies from which the manufacturers could choose the option they want to apply to the agreement/contract, and maybe even price accordingly.

The current snarl of legalese in this area is IMO the biggest roadblock to any punitive or financial incentive to marry (and improve both) quality and security.

S July 27, 2011 11:01 AM

@ Clive re. measurands:

Ever done much with e.g. ISO9001?

It relies a lot less on quantifiable metrics than it does on processes; e.g. you document the procedures that should be followed, and also document a procedure for improving your processes, etc.

Could be more difficult with the pace of change in technology, but you could take a more holistic/wider-ranging view and plug whatever technologies/problems were current into your documented, auditable framework.

Of course, most companies of medium size or larger will be documenting their procedures, but I’ve found through implementing both 9001 (quality) & 14001 (environmental) that they really help formalise the process.

(of course, the lack of objective measurements allows you to gain ISO14001 certification whilst dumping 10000 barrels of nuclear waste a week into the sea, as long as you have a flowchart that says you’re going to reduce it to 9000 barrels this financial year….)

Apologies for my brief foray into sounding like a management consultant there folks, I’ll try not to let it happen too often.

Dilbert July 27, 2011 11:49 AM

@Mod,

Thanks for the cleanup. I’ve thought for quite some time that it would be nice if there were a discussion forum here, so the comments section doesn’t become a free-for-all.

Clive Robinson July 27, 2011 12:03 PM

@ S,

“Ever done much with e.g. ISO9001”

Quite a bit more than I would have wished to, back whilst it was a BSI standard, long before it became an ISO standard.

However, you asked a question about “measurands”.

My comments about those were in a separate post from those about security ~= quality.

The BSI/ISO standards provide the very important frameworks, not the actual processes that control the work in hand.

There are other standards meant to plug into the frameworks; the best examples of frameworks and plugins are to be found in the likes of the EU R&TTE legislation and the CEPT/CCITT standards.

You need different measurands for the frameworks than you do for either the processes or the work in hand.

Sadly, in the IT industry, whilst we might claim some measurands for frameworks, we don’t really have them for either the processes or the work in hand.

This is why we keep wittering on about “best practice”: it is at best saying “we rolled the dice and we got six in a row”. That is, we look around for X number of organisations that self-claim not to have been attacked. We then try to find what they have in common and call that “best practice”. Obviously there are three significant failings in this process, with regard to those organisations, that actually make the “best practice” process worthless:

1, Have they actually been targeted for attack?
2, Have they the ability to detect an attack?
3, Have they honestly reported attacks?

Now, as we have recently seen with many, many organisations thanks to Anonymous and LulzSec, it does not take much skill to succeed, so we know that 1 above is a significant issue.

We also know from recent APT disclosures that many, many organisations cannot detect attacks; they only see the effects. That is, they only know they’ve been botted when the DDoS or SPAM ups their network load. So we know that 2 above is a significant issue.

Finally, we know from the introduction of legislation over the disclosure of loss of PII that there is a long and inglorious history of not only keeping attacks quiet but paying off the attackers. So we know that 3 above is a significant issue.

So “best practice” is clearly based at best on self-delusion and chance. Which puts it in the evisceration-and-reading-of-the-entrails-of-goats-and-chickens league of pseudo-science.

Real science relies entirely on reliable and quantifiable measurement, by which reliable observations can be made and hypotheses can be both proved and disproved by anybody who cares to apply the process at any time.

chicopanther July 27, 2011 12:10 PM

It’s actually funny to see people bashing corporations, capitalism, profit-seeking, etc, especially by folks who are using computers on a world-wide network, neither of which would exist except for corporations who made those computers, network gear, etc!

chicopanther

squarooticus July 27, 2011 1:26 PM

The economic illiteracy on this thread is appalling. Quality slips because of the profit motive? What drugs are you guys on, so I know to avoid them?

Jason July 27, 2011 2:30 PM

@squarooticus

Quality slips not because of profit, but because of a lack of consequences. There is no compelling motive to be “perfect” when “good enough” is really good enough.

Corporations like Adobe, Oracle, Microsoft, and Apple can produce buggy, insecure software because we will still buy it and we will still use it.

And we will be blamed when something goes wrong and (thanks to those license agreements no one reads) we will be on the hook for any damages.

Why fix it if we’ll buy it anyway? I guess that does point to a profit motive.

squarooticus July 27, 2011 2:53 PM

Jason:

That there doesn’t exist any “perfect” spreadsheet/OS/MMORPG despite the robustness of competition in those markets should tell you one thing: that the cost of developing such a thing would price it out of the market with existing technology.

Maybe you’ll see a huge leap in software reliability when tools and languages incorporating constraint provability become both useful and commonplace, but with a large pool of (e.g.) C++ developers and a huge existing codebase in said language keeping their respective companies competitive on features and price and the lack of a robust market for rebuilding everything from the ground up with a better architecture, this is unlikely to happen in short order.

What will legislated liability for the user’s refusal to recognize that all software sucks and that backups are necessary get us? Lots of things, but probably the most important is “less software.” People are not going to be able to, much less willing to, pay the price for your preferred level of reliability.

So you’re right that the market is the reason why perfect software doesn’t exist, but it isn’t some grand conspiracy: it’s simply that customers won’t pay the price for the level of reliability you want. They, in fact, demonstrate that the software is “good enough” by continuing to buy it at McDonald’s prices instead of paying Four Seasons’ prices to get to five 9’s.

S July 27, 2011 2:58 PM

@ squarooticus: Jason said it, really. Apologies for not explaining my point better. The driving motive is to make money, not to produce the ultimate in quality product. These goals are not always exactly identical, although I accept they coincide in a lot of cases.

@ Clive: totally, and I accept your arguments about best practices in a general sense. But remember that a lot of the attacks we’ve been seeing in the news aren’t really the hi-tech stuff that you & Nick chat about several miles over my head; it’s things like SQL injection. Which is analogous to leaving a window open when you lock the building.
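
To make the open-window analogy concrete, here's a minimal sketch using Python’s sqlite3 module (a generic toy, not any particular breached app):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    name = "' OR '1'='1"  # hostile input

    # The open window: string concatenation lets input rewrite the query.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
    print(len(rows))  # 1 -- the WHERE clause was bypassed

    # The locked window: a parameterised query treats input as data only.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    print(len(rows))  # 0 -- nobody is literally named "' OR '1'='1"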

I was just musing on the possibilities of some sort of framework, either to sit inside 9001 or be a rough parallel. So your template procedures would be things along the lines of: once a month, verify everything is patched up to date. Once a year, hire some external pen testers. Once every x days, conduct desktop PC audits on a random basis. (I’m just pulling generic stuff out of my hat here, since this is far from my field!)

As well as being a big help for smaller companies – since you’d get firms that would help them draw up the manuals, same as the other standards – it would be a much saner thing to try and audit.

In summary: ‘best practices’ aren’t always the best, but they’re probably better than what 90% of people are doing, so can we at least start there?!

NZ July 27, 2011 3:29 PM

@S

“That’s far more of an indictment of capitalism in general”

Developed socialism is even worse 🙁

@squarooticus

“People are not going to be able to, much less willing to, pay the price for your preferred level of reliability.”

Note that “price” is not only the price of the software itself; after all, OpenBSD is both secure and free…

Clive Robinson July 27, 2011 5:07 PM

@ squarooticus,

“it’s simply that customers won’t pay the price for the level of reliability you want. They, in fact, demonstrate that the software is “good enough” by continuing to buy it at McDonald’s prices instead of paying Four Seasons’ prices to get to five 9’s.”

Oh dear the fallacy of “supply and demand theory”.

Something has to be available for people to purchase; not only that, they have to be aware of its existence to purchase it.

If for some reason nobody chooses to supply or sufficiently advertise a product then there quite naturally won’t be a demand for it.

Now it could be argued that a global market alters the normal market dynamics, which tend to assume that markets are sufficiently local that several suppliers, if sufficiently non-local with respect to each other, may develop products of similar maturity and so compete on equal terms.

But an examination of mainstream software supply, and more certainly Internet online services, tends to indicate that the first to market tends to take the market and hold it unless it significantly errs or earns its customers’ ire. This is generally the behaviour you would expect in a purely local market, especially one where the development time and margins are sufficient that prices can be dropped to keep commercial competition out of the market.

But the interesting thing is that the near-zero cost of duplication does allow “labours of love” to enter the market at what is to the developer zero cost of manufacture (that is, the customer pays the cost of copying) plus the cost of the development time they bear for personal reasons.

Most people would think that the consumer would go for the near-free option, but they don’t.

The reasons for this are many but boil down to time invested in the use of a product. That is, the consumer actually gives worth to the time they have spent learning the commercial product, for whatever reason, and chooses to pay some fiscal value to continue using that investment rather than scrap it and re-invest in a new product.

Thus in the software market the simplistic view of supply and demand is distorted out of effective reality by other economic rules.

squarooticus July 27, 2011 6:17 PM

Clive, I think in your colossal arrogance you’re actually making my point for me. “Cost” includes many things, not just currency. It also includes (for example) the effort required to change from one vendor to another, or the effort required to re-engineer a codebase to incorporate reliability guarantees.

This is why Windows dominates the desktop market despite being technically inferior to most of the other major choices: inertia of users and applications.

I’m simply saying there isn’t anything conspiratorial about the relative scarcity of software with correctness guarantees: people simply haven’t demanded it because the cost to get it (whether measured in dollars, effort, ponies, or unicorn farts) is too high.

Forcing Microsoft or Oracle—or Ubuntu—to make their software reliable at the point of a gun may result in some better quality software, but the one thing it will result in is less software, and almost certainly the end of free software.

You have the freedom to contract with software houses to have them produce what you want, so if what you want is an OS with medical device-grade correctness guarantees, go for it, but be prepared to shell out. What I want is freedom: freedom to innovate, freedom to use cool and new but broken software, freedom to get what I prefer instead of having your preferences forced on me, and still have money and time left over for the other, far more important things in my life.

Clive Robinson July 27, 2011 8:41 PM

@ Squarooticus,

“Clive, I think in your colossal arrogance you’re actually making my point for me. “Cost” includes many things, not just currency”

If you can be bothered to go back and look, I picked you up not on “cost” but on,

“by continuing to buy it at McDonald’s prices instead of paying Four Seasons’ prices to get to five 9’s.”

Now I don’t know about your McDonald’s but the ones I’ve used only take a monetary payment be it in cash or some form of payment card.

So I’m sure others will forgive me for taking your view to be “monetary cost”, not any other kind of cost, which I, not you, chose to amplify on.

As for my views on FOSS, I think you will see from a couple of my posts on this blog page that it is something I regard as a valid part of the market, and it should remain as such, even though you do not have to make a monetary payment to the developers to use it.

As for the commercial software suppliers’ view of FOSS, I believe at least one senior person has referred to it as a “cancer” and also compared it to “communism”. Even though nearly all the network code the company they work for sold initially was lifted from Unix code released by the Regents of the University of California (and there is still a question of whether they even honoured the requirements of the Regents properly). But of recent times the company has started to “embrace” FOSS (though what sort of embrace is yet to be determined)…

Magnum July 27, 2011 9:38 PM

“It’s actually funny to see people bashing corporations, capitalism, profit-seeking, etc, especially by folks who are using computers on a world-wide network, neither of which would exist except for corporations who made those computers, network gear, etc!”

Those corporations wouldn’t even exist without publicly funded R&D by CERN, DARPA, Bletchley Park, etc.

Nick P July 27, 2011 10:17 PM

@ Clive Robinson

“A lack of usable measurands.”

There are definitely measurands. One is defects per thousand lines of code. High-quality development processes produce software with fewer defects. Additionally, there are known best practices that apply in certain contexts to prevent entire classes of errors. A combination of a “best practices” checklist/audit and a low-defect development process would result in software that’s harder to hack & fails less often.
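
As a worked illustration of that measurand (the figures are hypothetical, purely to show the calculation):

    # Defect density: defects per thousand lines of code (KLOC).
    def defects_per_kloc(defects, lines_of_code):
        return defects / (lines_of_code / 1000.0)

    print(defects_per_kloc(350, 50000))  # ordinary shop:   7.0 per KLOC
    print(defects_per_kloc(10, 50000))   # low-defect shop: 0.2 per KLOC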

Note: I just noticed you mentioned a critique of best practices. To be clear on my term, I’m talking about tactics that catch or prevent certain classes of vulnerabilities. Examples include managed code, input validation, proper implementation of authentication schemes, etc.
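
For instance, whitelist-style input validation, sketched generically (the character class and limits are invented, not from any specific checklist):

    import re

    # Whitelist validation: accept only known-good input, reject the rest.
    USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")

    def validate_username(raw):
        if not USERNAME_RE.fullmatch(raw):
            raise ValueError("invalid username")
        return raw

    print(validate_username("alice_99"))    # passes
    # validate_username("'; DROP TABLE--")  # raises ValueError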

@ S

“But remember that a lot of the attacks we’ve been seeing in the news aren’t really the hi-tech stuff that you & Nick chat about several miles over my head, it’s things like SQL injection.”

Thanks for the compliment. 😉 To be clear, though, many of the technologies and approaches I discuss prevent common attacks. In the easiest, the Fagan Software Inspection Process (invented in the ’70s) works like this: produce some code; do a rigorous code review looking for many specific kinds of flaws; prioritize those found; fix; repeat the review; if all is well, next set of code. Cleanroom (invented in the ’80s) uses a combination of a mathematically-based development method, code verification, and usage-based testing. Cleanroom developers never even execute their own code, yet the defect rate is usually several times lower than the industry average!
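
Usage-based testing is easy to picture: test cases are sampled from a model of how users actually behave, so the certified failure rate reflects real use. A minimal sketch, with an invented usage model:

    import random

    # Usage model: a Markov chain over user operations, weighted by how
    # often real users perform them. States and weights are invented.
    USAGE_MODEL = {
        "start":  [("login", 1.0)],
        "login":  [("browse", 0.8), ("logout", 0.2)],
        "browse": [("browse", 0.5), ("buy", 0.3), ("logout", 0.2)],
        "buy":    [("logout", 1.0)],
    }

    def sample_test_case(max_steps=20):
        state, steps = "start", []
        while state != "logout" and len(steps) < max_steps:
            ops, weights = zip(*USAGE_MODEL[state])
            state = random.choices(ops, weights=weights)[0]
            steps.append(state)
        return steps

    # Each sampled sequence is one statistically representative test case.
    for _ in range(3):
        print(sample_test_case())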

Empirical studies of both of these methods showed they were easy to learn, developers had low defect rates on the first try, defect rates were statistically certifiable over time, and cost wasn’t significantly increased (it decreased in many cases). Praxis has a method, Correct by Construction, that can be applied to longer, critical projects to produce very low defect software that demonstrably meets requirements. Praxis and some Cleanroom shops offer warranties on their code. (Yeah, you weren’t hallucinating when you read that.) So, we have two development methods in particular that greatly increase quality & requirements met, but can decrease cost. WHY AREN’T THEY IN WIDESPREAD USE?

Note: Software Inspection Process could easily be retargeted for web app development by looking for common web dev. flaws, like XSS or SQL injection vulnerability. Matter of fact, it would be easier for web development because the flaws are often easier to spot than system-level coding errors.

In the area of web development, we still have easy opportunities for improvement. The use of managed code & mature frameworks helps. But many turn to Ruby on Rails, Python & PHP because they’re “cool”, easier to use or faster to develop with. New platforms & strategies have been emerging for web development that eliminate much of the difficulty in building secure apps. Examples include CMU’s SIF/JIF, the OCaml web runtime, the E platform, the Tahoma browsing system, WISC’s “SWIM,” Univ. of Penn’s AURA language, Ravenscar Ada, and a ton of more specialized formal methods. I’ve linked to the SIF project because it offers a nice combination of features, ease of use for ordinary programmers, and automatic partitioning of the app between client & server.

SIF: Enforcing Confidentiality and Integrity in Web Applications
http://www.cs.cornell.edu/andru/papers/usenix07-html/paper.html

So, high quality software at a reasonable, even cheaper, price is doable for the software industry. Many companies are actually doing this in practice, especially smaller firms trying to differentiate on quality. As I told Clive, there’s also pretty good measurements that can raise the baseline a bit. So, why are companies spending tons of money on flawed methods and producing a low baseline? Well, it usually has nothing to do with technology, let me tell you. 😉

Richard Steven Hack July 27, 2011 11:02 PM

Nick P: “A combination of a “best practices” checklist/audit and low-defect development process would result in software that’s harder to hack & fails less often.”

And how is that working out for the IT industry in general? In my view, not so much.

“Best practices” and “low-defect development” are symptoms, and poorly implemented ones at that, of what is really needed to ENGINEER (as opposed to “develop”) software.

Engineering is a different process altogether. Engineering relies on taking materials with known properties, applying known transformations to those materials to produce artifacts and effects with known properties.

The key words are “known properties”. Software which performs a function is vapor. There are no known properties. The people producing it can barely describe how it’s supposed to do its main function – which is limited to a list of “features” – without any regard to objective measurement of things like usability, reliability and security.

But there could be. It is quite possible to apply science and engineering to design a software development process which precisely and completely models ALL the effects – including usability, reliability and security – of a given project, and then turns that over to an automated code generation system which produces correct code that provably implements the model to the same degree of precision.

They do it in various manufacturing processes for hardware of all kinds all the time. A computer checks a design for compliance with known constraints.
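
Nothing like that exists end-to-end for software, but the flavor can be sketched: a declarative model, machine-checked constraints, then code emitted straight from the model (a toy in Python; the model format and checks are invented):

    # Toy model-to-code pipeline: check a declarative state-machine model
    # against constraints, then generate code from it.
    MODEL = {
        "states": ["locked", "unlocked"],
        "initial": "locked",
        "transitions": [("locked", "coin", "unlocked"),
                        ("unlocked", "push", "locked")],
    }

    def check(model):
        states = set(model["states"])
        assert model["initial"] in states, "initial state must exist"
        for src, _event, dst in model["transitions"]:
            assert src in states and dst in states, "dangling transition"

    def generate(model):
        lines = ["def step(state, event):"]
        for src, event, dst in model["transitions"]:
            lines.append(f"    if state == {src!r} and event == {event!r}:")
            lines.append(f"        return {dst!r}")
        lines.append("    raise ValueError('illegal transition')")
        return "\n".join(lines)

    check(MODEL)                   # refuse to build from a broken model
    exec(generate(MODEL))          # defines step()
    print(step("locked", "coin"))  # unlocked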

This has been the holy grail for at least the last three decades. Academics have built various prototypes for this sort of thing. One can read the articles in the various academic computing journals (as opposed to the IT industry trade journals, which universally ignore this stuff).

But no one in the actual software industry is using it. Why?

Because the industry prefers to think of itself as an “art” and academic efforts to turn it into “software engineering” have failed miserably. It’s a cultural issue more than anything else.

Although I do think economics is involved as well, since “retooling” all the brains of people in the IT industry to be able to understand the highly technical nature of actual “software engineering” – which would be comparable to the technical nature of real physical engineering – would be expensive. And a lot of the current people wouldn’t make the cut.

The examples you cited of better development processes are a start, definitely. More attention should be paid to them. But in my view, someone needs to go back to ground zero and redesign the entire software engineering process from the ground up.

Which is precisely why I hate Microsoft and the rest of the multi-score-billion-dollar IT companies that produce CRAP as a result. They have the MONEY and the PEOPLE to translate academic software engineering into a real industrial process.

But Gates and Ballmer and the rest couldn’t care less because the users of software don’t have an option to “return crap” and thus cause them unacceptable “return rates” which wipe out their profit. And this is mostly because the industry has succeeded in getting the consumer legal issue settled that they don’t have to be liable for anything – even giving money back – due to their “get out of jail free” EULAs.

Not to mention that the consumer is so befuddled by software that most of them put up with this, whereas if their car fails they blow a fuse and shoot the local mechanic.

Richard Steven Hack July 27, 2011 11:10 PM

How about this? Every time a corporate or home user Windows PC crashes because of a driver or registry error and requires a reinstall, said user can go back to Microsoft and charge them some reasonable amount for the time it took to do the reinstall?

I suspect we’d damn sure see a change in how Windows is designed.

Clive Robinson July 28, 2011 6:19 AM

@ Nick P,

“There’s definitely measureands. One is defects per thousand lines of code”

Yes, and what does it actually tell you?

It’s a quantitative not qualitative measure, therefore not a usable measurand.

That is, some defects don’t have security implications, others do, and others might depending on what they are used in or with, etc. The number of bugs is only indicative of a potential lack of care by the code cutters.

That aside, the measurands you are talking about are those (seldom if ever) used at the design end, not those used at the other end where the users are, and it is that end where it all goes horribly, horribly wrong.

It’s interesting to read RSH’s post a couple up about art-v-engineering; I’ve said almost exactly the same many times before (except I usually throw in some references to van Gogh, prima donnas, Victorian boilermakers, or wheelwrights and coopers).

The thing is, like quality, security is a process of mind state; it should start long, long before the design process, when people are trained (as it is in engineering), and continue down through usage to and past the demise of a particular design.

One engineering tool that could be used immediately is “project history files”. In engineering they look through what went wrong and right from previous designs and carry the knowledge forward in a formal way; you rarely see this in software because nine times out of ten new software projects are not “clean slate” at the start.

One of the worst things in software is “reuse”: most code cutters don’t have a clue when it comes to writing library code, and likewise most suppliers of commercial library code don’t either. Worse, we seem to want to turn it into a methodology via things like “patterns” and “agile coding”.

A library should have a very clear well defined API and you should never have to use “hidden knowledge”. That is it should work on the principle of “least surprise”.

I’ve looked at so much “object oriented” code and thought “Oh F***, what’s this, where’s the structure, where are the APIs, where is the separation…”.

Likewise users suffer from the same WTF moments when software does something odd, or just dies, or worse explodes taking the system with it.

Years ago I was looking forward to a web world because I naively thought it would provide a good separation between the user interface and the functional code… well, we can see how that mess turned out.

I occasionally bang on about first principles of channels, data sources and sinks; and a good understanding of ADTs and the implications of RIP-ICE and pyramid-to-diamond coding.

I also point out that data should move only in the intended direction; if anything is passed back in the unintended direction, it should only be how a program deals with exceptions, errors and omissions.

I also moan about how errors are reported to users, how programmers try to put all the error checking as far to the left as they can and then don’t do any error checking from then on.

Likewise how things should fail gracefully (or safe) and not blow up taking the whole system and the user’s nerves and sanity with it.
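
The contrast is easy to show schematically (illustrative Python, not any real product):

    # Checking only "at the left": validate once, then trust everything.
    def fragile_read(path):
        data = open(path).read()     # any I/O error kills the caller
        return data.split(",")[2]    # any short record kills the caller

    # Checking at each boundary and failing gracefully with a message.
    def robust_read(path):
        try:
            with open(path) as f:
                data = f.read()
        except OSError as e:
            return None, f"could not read {path}: {e}"
        fields = data.split(",")
        if len(fields) < 3:
            return None, f"malformed record in {path}: expected 3+ fields"
        return fields[2], None

    value, err = robust_read("settings.csv")  # hypothetical file
    if err:
        print(err)   # report it, degrade gracefully, keep the system up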

All of this requires measurands that have qualitative not quantitative meaning.

I could carry on but I don’t have my soap box with me 😉

Nick P July 28, 2011 12:04 PM

@ Richard Steven Hack

“And how is that working out for the IT industry in general? In my view, not so much.”

That statement makes no sense. The industry isn’t using low defect methodologies or the best practices recommended by security engineers. We won’t know how it’s “working out” until the industry starts doing it.

“Engineering is a different process altogether. Engineering relies on taking materials with known properties, applying known transformations to those materials to produce artifacts and effects with known properties.”

Partly true, partly semantics. And the “known properties” part is hard to impossible if one is using complex legacy or proprietary software. Accurate interface-level, behavioral specifications solve the problem & allow true engineering, but they rarely exist. Unsurprisingly, these are built into both Cleanroom & Praxis’ method.

“It is quite possible to apply science and engineering to design a software development process which precisely and completely models ALL the effects – including usability, reliability and security – of a given project, and then turns that over to an automated code generation system which produces correct code that provably implements the model to the same degree of precision.”

Each of these has been done in isolation & sometimes together. So, I agree there should be a way to integrate several of these frameworks. Automatic code generation is the easiest part: model-, ontology- and specification-based code generators and automatic programming systems already exist in the marketplace & academic circles. On the high end, the CompCert compiler was automatically extracted from its mathematical specifications after each compiler pass was mathematically proven correct. The end result is that it’s the only compiler to have never messed up a compilation, even during fuzz testing. So, yes, these things are doable.

“One can read the articles in the various academic computing journals (as opposed to the IT industry trade journals which universally ignore this stuff.)”

So you’ve apparently seen a few of what I describe. And I rarely read IT trade journals: they’re normally crap for my purposes.

“The examples you cited of better development processes are a start, definitely. More attention should be paid to them. But in my view, someone needs to go back to ground zero and redesign the entire software engineering process from the ground up.”

No, they’re exactly what you’re looking for because they WERE a ground-up & top-down redesign of the process. Praxis’ Correct by Construction, in particular, is strong on requirements gathering, spec-to-requirements mapping, iteratively refining specs to concrete specs in a property-preserving way, and making low-defect code that corresponds to those specs. They often use their custom-built (open-source now) SPARK toolset. It’s a safe language with formal semantics, an automated tool for proving absence of certain runtime errors, and a manual tool for proving other aspects of specifications. Their track record is pure excellence.

In any case, you said you wanted software engineering to be like hardware engineering. The hardware guys use a very formalized process, make use of behavioral specifications, make use of formal verification technology (esp. since the floating-point bug recall), iteratively refine their design into low-level stuff, and have statistical quality control in place. Sound similar to the above? Both Cleanroom and Correct by Construction use a similar process & their quality levels can be certified statistically just like hardware, hence the warranties that are often offered. In fact, the inventor of Cleanroom even named it after the cleanrooms whose defect levels he hoped to achieve.

“But Gates and Ballmer and the rest couldn’t care less because the users of software don’t have an option to “return crap” and thus cause them unacceptable “return rates” which wipe out their profit. ”

“Not to mention that the consumer is so befuddled by software that most of them put up with this, whereas if their car fails they blow a fuse and shoot the local mechanic.”

Little market demand + negative supplier incentives = little to no supply of high quality software. The only beacon of light is the DO-178B certification process required for aviation. This incentivized many companies to produce very robust software like OS’s, graphics drivers, networking stacks & file systems. That’s a start & an expensive one. The only way to get the market as a whole to do similarly good work is to increase consumer demand & supplier liability.

Nick P July 28, 2011 12:14 PM

@ Clive Robinson

“It’s a quantitative not qualitative measure, therefore not a usable measurand.”

It certainly is, as a compromise. Reducing defects to a minimum = reducing exploitability. There are other aspects to it, of course, but this by itself would have prevented a large number of vulnerabilities in previous systems. Most were design, interaction & coding errors that these methodologies prevent. So, whatever our measurand is, the defect rate will at least be part of it.

“The thing is, like quality, security is a process of mind state; it should start long, long before the design process, when people are trained (as it is in engineering), and continue down through usage to and past the demise of a particular design.”

The Orange Book A1 process essentially did that for design. The Lessons Learned papers on various projects addressed the “demise” part. So, we already have usable processes for embedding security & correctness in from requirements to design to implementation. And we have a few verified compilers and runtimes. 😉

“One engineering tool that could be used immediately is “project history files”, in engineering they look through what went wrong and right from previous designs and carry the knowledge forward in a formal way”

Good advice. The OpenBSD team uses that approach. Upon finding a bug, they look for similar coding situations in the rest of their code. This often results in finding more bugs & preventing them in future situations.
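
In spirit, that sweep is something like the following crude sketch (the real audits are manual and far smarter; the pattern and source tree here are hypothetical):

    import os, re

    # After fixing one bug, sweep the tree for the same risky pattern.
    # Example pattern: unbounded strcpy in C; adjust per bug class.
    PATTERN = re.compile(r"\bstrcpy\s*\(")

    def sweep(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.endswith((".c", ".h")):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if PATTERN.search(line):
                            print(f"{path}:{lineno}: {line.strip()}")

    sweep("src")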

“A library should have a very clear well defined API and you should never have to use “hidden knowledge”. That is it should work on the principle of “least surprise”. I’ve looked at so much “object oriented” code and thought “Oh F***, what’s this, where’s the structure, where are the APIs, where is the separation…”.”

I feel you and totally agree. Fortunately, methods I mentioned solve that problem. Additionally, technologies are being developed in the static checking area to automatically extract behavioral specifications from source code & compare them to, say, abstract interface-level specs. In other words: “does it do what I think it does?” Should help when the tools mature.

“I also moan about how errors are reported to users, how programmers try to put all the error checking as far to the left as they can and then don’t do any error checking from then on.”

Lack of incentives.

“Likewise how things should fail gracefully (or safe) and not blow up taking the whole system and the user’s nerves and sanity with it.”

Lack of wisdom.

“All of this requires measurands that have qualitative not quantitative meaning.”

I contend that we need both. The quantitative portion, however, will solely measure the success of the qualitatively sound engineering processes.

NZ July 28, 2011 2:41 PM

@RSH

“How about this? Every time a corporate or home user Windows PC crashes because of a driver or registry error and requires a reinstall, said user can go back to Microsoft and charge them some reasonable amount for the time it took to do the reinstall?”

More often than not it’s a buggy driver. So it’s hard even to find out who is to blame.

notmyopinion July 29, 2011 4:18 AM

This seems like the sort of proposal that will have many unintended consequences.

Sometimes lawsuits can right a great wrong – but sometimes they are just sand in the gears of the economy.

Anything that increases the number of lawsuits should be viewed with a great deal of caution – especially in environments like the US where rich organisations can use expensive lawsuits as a strategic weapon against less rich targets; not so much to recover actual losses, as to prevent competition. Bear in mind that the cost of determining the merits of a “strategic” lawsuit may be unaffordable for many defendants (particularly where they cannot recover those costs). Consider, as Bruce Schneier often asks us to, how such a mechanism would fail… how it might be exploited…

One of the most worrying things about this proposal is the effect it would have on Free and Open Source software, especially where it is not produced as an economic activity (though some Open Source software IS very commercial).

And what would be the effect on someone who puts sample code or code examples on a website or in a book? Or someone who submits a patch? Could they be sued?

Or would you sue the distributor? The free software mirror? The distribution repository? The author of the line with the bug?

I believe that strict liability for the consequences of bugs would represent an impossible burden.

Even ordinary liability, where one can avoid a claim if one proves to the satisfaction of a jury of random non-technical citizens that one was “careful enough” (or not “negligent”), would open many cans of worms.

We need to find better ways to improve software quality. Liability is superficially attractive, but is a Really Bad Idea.

Doug Coulter July 29, 2011 5:34 PM

Linux is arguably more secure than most other operating systems out there, and adoption is pretty good, even though it’s free. Clive, you paying attention? Sure, it hasn’t dominated the world, and that’s fine by me, as it attracts fewer attacks to the few vulnerabilities it has that way.

From the file system on up, it’s security from the inside out, the opposite of what Windows attempts. And of course Apple is BSD, which ain’t bad at security, even though they mostly work on making it slick rather than secure and have a crappy update system.

The Linux community actively seeks out security issues and fixes them – I get updates almost daily where a zero-day has been fixed before any attack has been made on it.

I think it’s more that most people don’t care and just use what’s on the machine when they get it. Here I do support for most of the neighborhood, but it’s only free on Linux. It’s like being the Maytag repairman, and it’s driving adoption. I’m not hearing complaints about “it can’t do something” because in all cases so far, it’s merely a matter of knowing what to download free and how to set it up.

Nick P July 29, 2011 7:46 PM

@ Doug Coulter

Hey man, when are you gonna put up a link to that guide you’ve often talked about on setting Linux up and avoiding all the little gotchas in the process? You’ve been talking about it and smooth migrations for a while, but still no post on the specifics. I know a bunch of guys who would love to have that info, including me.

Richard Steven Hack July 29, 2011 7:47 PM

NZ: But Microsoft also makes drivers, and I recall one of the Microsoft execs had an email that ended up in court explaining how their drivers sucked, too.

Also, Microsoft signs and certifies the drivers included with the OS. Third party drivers can come in from the user – but if Microsoft were serious, they would not even load unless certified.

So it’s still Microsoft’s fault in my opinion.

Nick P: “The industry isn’t using low defect methodologies or the best practices recommended by security engineers. We won’t know how it’s “working out” until the industry starts doing it.”

I was referring to the general use of low-defect methodologies. What I’m recommending goes well beyond that, to true engineering.

“And the “known properties” part is hard to impossible if one is using complex legacy or proprietary software.”

The point is that “legacy” shouldn’t exist. And if you redesign software engineering, you’re likely to have a way to automatically convert legacy software.

I recall that back in the ’80s or ’90s (can’t remember when), a big insurance company sued the COBOL standards committee for introducing a new version of COBOL, since the company had spent TEN YEARS converting their code to the LAST version. And this company had a 250-man computer science department WITHIN their IT department.

I said if they can’t convert their code in less than ten years, then they need to start over. I said if they pay me enough, I’ll convert their stuff a hell of a lot sooner than that. Because they’re morons.

Beyond that, yes, it is possible to precisely model the behavior of even the most complex software. It just takes enough brains and the right modeling software – which is precisely what I’m saying needs to be developed. There may be cases where the project is so large that it would take decades to formally prove it – well, that’s what supercomputers are for. Rent the time. Most of these projects cost millions, scores of millions or hundreds of millions anyway – adding in some computer time seems like a no-brainer, especially if you can’t afford to have the project fail.

And only the initial development of this new engineering methodology is going to be expensive. The key is automation. You don’t have a lot of bodies doing this once the software is made. It’s all automated. And once automated, the software shouldn’t cost a hundred grand like some of these specialized development environments, but $1.95 in OSS duplication costs. So every individual OSS project could afford to use it.

Which would reduce the cost of functionality, usability, reliability and security to a manageable level.

Which would make user demand and company liability a non-issue.

The chicken-and-egg problem really isn’t one – it’s pure greed and stupidity on the part of the management of companies like Microsoft. Because if Microsoft promoted this sort of thing, it would end up with much higher quality software than anyone else and could justify the price they charge for it. Which in the end would give them far higher profits than whatever they spent on the technology.

But Bill Gates has built a company based on doing as little as possible to make the highest profit possible. It’s his nature, as every biography of him has established throughout his career. His company inherited that corporate culture directly from him.

But it’s not just the proprietary software companies. It’s the industry. However, most of what constitutes process in the industry has been inherited from proprietary software companies (and to a lesser degree, academia). Although OSS has a lower defect rate (at least for large projects) than equivalent proprietary software, it’s still junk compared to what could be produced.

I’m unfamiliar with the products you cite, but I have to believe that if they’re as good as you say, they would be in wider use than aviation software. Even if myopic companies like Microsoft weren’t using them, a lot of smaller companies would be. And the quality difference between their software and the big boys’ would get noticed.

So I have to believe that the products you cite don’t go far enough to the level I’m talking about: where everything is precisely modeled and fully automated.

But it is good that someone is trying. Maybe it’s just too early in the process for me to see the impact. It’s just disappointing to see it take over 30 years to get this far.

NZ July 29, 2011 11:12 PM

@Doug Coulter

“it’s merely a matter of knowing what to download free and how to set it up.”

Exactly! But I have to conclude that this cost is too high.

@RSH

“But Microsoft also makes drivers”

My point was that when a driver crashes, the user has to figure out whether it was an MS driver or a third-party one. While not rocket science, this requires some level of knowledge.

“Also, Microsoft signs and certifies the drivers included with the OS. Third party drivers can come in from the user – but if Microsoft were serious, they would not even load unless certified.”

Believe it or not, that’s more or less how the NT6 x64 kernel works: a driver must be signed in order to be loaded. And that decision used to be highly controversial — a lot of hardware didn’t have signed drivers. If NT5 had worked the same way, it would never have been so popular. Bill Gates didn’t want to have the most secure OS, he wanted to have the most popular one.
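
The gating logic itself is simple to state. A toy analogue (an HMAC stands in for the signature purely for illustration; real driver signing uses Authenticode certificates and kernel enforcement):

    import hashlib, hmac

    VENDOR_KEY = b"demo-key"  # hypothetical signing key

    def sign(driver_bytes):
        return hmac.new(VENDOR_KEY, driver_bytes, hashlib.sha256).digest()

    def load_driver(driver_bytes, signature):
        # Refuse to load anything whose signature doesn't verify.
        if not hmac.compare_digest(sign(driver_bytes), signature):
            raise PermissionError("unsigned or tampered driver: not loading")
        print("driver loaded")

    blob = b"\x90\x90fake driver image"
    load_driver(blob, sign(blob))       # loads
    # load_driver(blob, b"\x00" * 32)   # would raise PermissionError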

Now Theo de Raadt has engineered arguably the most secure and stable OS. And what?

Nick P July 30, 2011 11:29 AM

@ Richard Steven Hack

“The point is that “legacy” shouldn’t exist. And if you redesign software engineering, you’re likely to have a way to automatically convert legacy software.”

Legacy exists for economic reasons. So does the difficulty of converting it. It’s always been this way. There was a recent paper in the ACM about pushing a “system discontinuity” paradigm instead of “keep everything forever.” It’s a good idea, but not for the bottom line. As for automated conversions, they exist. I even designed one a long time ago using a GLR parser. I was trying to improve it when I landed on the DMS software reengineering toolkit, which automates language A to B conversion, esp. COBOL. It actually does a lot more.

DMS Toolkit
http://www.semdesigns.com/Products/DMS/CodeGenerator.html?Home=DMSToolkit

“Beyond that, yes it is possible to precisely model the behavior of even the most complex software. It just takes enough brains and the right modeling software – which is precisely what I’m saying needs to be developed. ”

They’re working on it. This won’t be a redesign of software engineering. The process will remain very similar to existing formal development methodologies. The difference will be more automation. There are already tools that produce a fully working application from precisely stated requirements & environmental assumptions. Those tools are limited but show what can be done in theory.

“And only the initial development of this new engineering methodology is going to be expensive. The key is automation. You don’t have a lot of bodies doing this once the software is made. It’s all automated. ”

They tried this before. I was actually in that field for a time. It’s called “automatic programming.” I have bad news for you: it probably won’t happen. Many of us, hell entire universities, quit at that. The reason wasn’t that we couldn’t accomplish it to some degree. The problem was that system design incorporates a lot of knowledge about people, domains, physics, computers, etc. Our brains do all of this “common sense” and stuff easily. Computers don’t. And try to find an efficient solution to a complex problem while embedding common sense in the process? Good luck.

Projects like Cyc and OpenMind have been at that for a long time without the kind of success they are looking for. We’ll need a working artificial general intelligence technology (Novamente, maybe?) and years worth of properly structured knowledge before we can even start on automating software engineering. All we can do is automate certain transformations, problem detections, analyses, documentation and testing. Even these require human involvement, although they increase productivity. Having an AI background, I just don’t see your scheme working. The last thirty years have failed to produce a counterexample to this status quo. (Maybe I missed one in a publication somewhere while I’ve been doing all this security research, but I’d expect it to make the news.)

Processes like Cleanroom/Praxis/EAL7, spec-based code generation, spec-based test generation, static analyses, and formal verification are the best we have right now. They’ll be the best we have in 10 years, just more capable.

Besides, I’m pushing stuff that’s worked for 20+ years and proven in practice. That’s the kind of stuff we need businesses to start adopting TODAY. I don’t see how you’re going to get businesses to improve their processes by promoting vaporware from the future that is barely theoretically possible on existing hardware. We have to give them real, existing solutions and hard data to show it’s cost effective. Otherwise, nothing changes.

Clive Robinson July 30, 2011 11:59 AM

@ Doug,

“Clive, you paying attention”

Amongst other “commercial grade” OS’s I run some variations on Linux, BSD, and SunOS (all free to some extent).

Because I still get asked from time to time to develop/support, I still have MS “commercial” OS’s from DOS 6.x and Windows 3.x, NT 3.5x and 4.x, Win 95 (ME, spit, vomit, spit), 2000, XP etc loaded on machines, oh and earlier ones such as Win 2.0 and even GEM OS, as well as historic Acorn and Apple OS’s (if you can call them that).

My view is none of them is really secure, and you are living on borrowed time if you connect any of them to the Internet, or any other network for that matter.

However it is a “commercial” OS world out there, and customers have their whims and fancies over and above cold hard business logic; although we might not like it, they pay the bills, so the old adage “the customer is always right” is often the most pragmatic approach (although you can sneak Linux in for NAS and all sorts of other things as long as it has a reasonable web admin interface).

The *nixes are generally by design more secure than MS OS’s, but that is not to say you cannot strip MS OS’s down and make them more secure; however all *nix OS’s beat the MS OS’s hands down on being able to do this, for a whole variety of reasons (not least not having a non-human-readable registry).

However OS’s more secure than commercial grade have their problems, and to be honest cause support issues with ordinary users and usage. They are however very good for “devices” and specialised hosts on networks, for perimeter control, zoning and providing secure network-to-network bridging across insecure networks (that is, any and all you do not have absolute control over).

But at the end of the day a commercial environment has costs to manage, users to support and a whole host of departments that should be, but are not, segregated because of the issues arising from doing so.

Worse are the issues of prestige and privilege that humans have and will not relinquish without floods of tears before bedtime. There are few commercial organisations that use “role based” security, and of those few, very few enforce it strictly.

As I’ve pointed out frequently, long before Chrome was around, it’s pointless having a secure OS etc if the app (web browser) has shared insecure memory. A user just has to have a window open on the intranet to a server with highly confidential information on it, and another to an insecure network such as just about any Internet host, for the confidential data to be nestling right next to god alone knows what, with no chance of stopping it being leaked.

One of the key secrets to good security is “segregation”, from right down at the silicon level all the way up and through a human's head.

The dirty little secret of the commercial world is that the attacks that do most harm are committed by insiders, either knowingly or unknowingly, and you can talk about “training” till you are blue in the face and beyond; humans are fallible even at the best of times, and thoroughly dangerous when tired and under stress.

Thus if commercial organisations want to be secure they need strongly enforced minimum-scope roles, strong segregation, and very effective audit before they start looking at more secure OS’s. And to be honest I can’t see any of them doing it.

You only have to look back over recent newsworthy security breaches (HBGary, RSA, et al) to see that convenience/low cost wins hands down when it conflicts with security. But worse, all these hacked organisations seem to forget that their lack of security makes others in turn insecure, and other people get hurt as a consequence of their inaction.

Richard Steven Hack July 31, 2011 1:13 AM

Nick P: “The problem was that system design incorporates a lot of knowledge about people, domains, physics, computers, etc. Our brains do all of this “common sense” and stuff easily. Computers don’t. ”

I’m well aware of the conceptual processing problem in AI research. And yes, it would be nice if such a capability existed, but it probably won’t until we have much more detailed live analysis of brain function via nanotech tools.

However, I do think much better system designs could be produced based on automated CHECKING of human specifications against databases of real-world constraints. I don’t think you’d need full AI for that – just a very big and complex set of rules for specifying things we already know about in system design.
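
To make that concrete, here is a toy sketch of the idea (everything in it, the fields, the rules, and the thresholds alike, is invented for illustration and not taken from any real tool): a design “spec” reduced to machine-checkable fields, run against a table of constraint rules.

    #include <stdio.h>
    #include <stddef.h>

    /* A hypothetical system "spec" reduced to machine-checkable fields. */
    struct spec {
        int max_users;
        int session_timeout_s;
        int password_min_len;
    };

    typedef int (*rule_fn)(const struct spec *);

    /* The "database of real-world constraints": three toy rules. */
    static int users_positive(const struct spec *s) { return s->max_users > 0; }
    static int timeout_sane(const struct spec *s) {
        return s->session_timeout_s > 0 && s->session_timeout_s <= 3600;
    }
    static int pw_len_ok(const struct spec *s) { return s->password_min_len >= 8; }

    struct rule { const char *name; rule_fn check; };

    static const struct rule rules[] = {
        { "max_users must be positive",         users_positive },
        { "session timeout within one hour",    timeout_sane },
        { "minimum password length at least 8", pw_len_ok },
    };

    int main(void) {
        struct spec s = { 500, 7200, 6 }; /* a spec with two violations */
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (!rules[i].check(&s))
                printf("spec violates rule: %s\n", rules[i].name);
        return 0;
    }

A real tool would need vastly more rules and a richer spec language, but the checking itself stays mechanical; that is the point.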

I agree with your last paragraph. But I don’t even see that happening anywhere except in niches such as aviation software where high reliability is critical. Consumer stuff doesn’t even come close.

I see it more as a “chicken-and-egg” issue. The software is expensive so consumer software companies won’t use it. The consumer software companies won’t use it, so the market is niche which makes the software expensive. It’s the latter that makes the difference.

This is where a company like Microsoft could step up and fund something and make it available to the industry for a low price (or even free). It’s not like they could go broke doing this. It’s not even like they would take some sort of competitive hit from doing so, since their monopoly is based on already being big. Not to mention that already being big, they could use the technology to stay that way themselves.

No, there has to be another reason – a cultural one – why this stuff isn’t everywhere. It’s not just expense. The problem is no demand from those companies that would most benefit from the technology. I continue to believe it’s because industry people are frankly scared of being replaced by the very computers they program.

It’s been said before: “The last industry to use computers effectively is the computer industry.”

Clive Robinson July 31, 2011 6:10 AM

@ Richard Steven Hack,

“I see it more as a “chicken-and-egg” issue. The software is expensive so consumer software companies won’t use it.”

No, consumer software companies are not afraid of putting their hands in their pockets, IF and only IF they can see a return on it.

And the problem they see is not that the software and the methodology won’t pay a dividend; it’s the lack of programmers who will do it.

Code-cutting monkeys are some of the worst “artistic types” around. They basically push out macho code, pulling multiple all-nighters and all the rest of that crap, because at the end of the day the metrics by which their pay is judged encourage that sort of behaviour.

For some reason (which I’ll avoid going into for now) the defect rate is counted against lines of code cut and, importantly, appears to be roughly constant irrespective of the language used.

However, the functionality per line of code goes up dramatically with higher-level (and often more abstract) programming languages. So from a productivity standpoint you would be better off producing code in Lisp than in Occam, Pascal, or C…
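
As a back-of-envelope illustration (every number here is made up purely to show the arithmetic, not measured from anywhere): if defects scale with lines written, but functionality determines how many lines you need, the line-hungry language ships more bugs for the same product.

    #include <stdio.h>

    int main(void) {
        /* Assumed constant defect density, per the argument above. */
        const double defects_per_kloc = 15.0;
        /* Size of a hypothetical project, in "feature points". */
        const double feature_points = 100.0;
        /* Invented lines-of-code cost per feature point. */
        const double loc_per_fp_c    = 130.0;
        const double loc_per_fp_lisp = 40.0;

        double defects_c    = defects_per_kloc * feature_points * loc_per_fp_c    / 1000.0;
        double defects_lisp = defects_per_kloc * feature_points * loc_per_fp_lisp / 1000.0;

        /* Same product, same defect density, very different bug counts. */
        printf("expected defects in C:    %.0f\n", defects_c);    /* 195 */
        printf("expected defects in Lisp: %.0f\n", defects_lisp); /*  60 */
        return 0;
    }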

But programmers like to program in C or C++, both of which are way, way down on productivity per line of code (only marginally more productive than some assembly languages).

So you have managers rewarding “lines per day” and programmers using the least productive programming languages, which consequently produce just about the maximum number of lines for any given level of functionality…

Can you realistically see programmers going for more productive languages and methodologies unless managers change the rewards system?

Likewise, can you see managers buying the tools if they know they will be resisted directly, or subverted in some manner?

There are ways out of this particular chicken-and-egg problem, but it hasn’t happened in the past fifteen years or so, even though the issue was discussed to death in the early nineties…

Andy July 31, 2011 3:24 PM

It isn’t just the programmers; the compilers and the libraries they ship with are just as much to blame, if not more, because of obscurity.
Windows had a bug in a string function that scanned for nulls: you could trigger the overrun without any nulls at all. Similarly with the HeapAlloc function when you got close to 7fdeff: if the error-handling code didn’t check for the strange return code, you could crash the program.

Programmers would have no way of finding these unless they attached a debugger.
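
Something like the following (made-up code, not the actual Windows function) shows the shape of the bug I mean: the scanner tests only even offsets for a 16-bit null, so a lone zero byte at an odd offset never matches, and without an external bound the scan would run straight past the end of the buffer.

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical scanner: steps two bytes at a time looking for a
       16-bit null. A zero byte at an odd offset is never tested, so
       only the max bound stops the loop here; in real code with no
       such bound this is a buffer overrun. */
    static size_t wide_scan(const unsigned char *buf, size_t max) {
        size_t i = 0;
        while (i + 1 < max && !(buf[i] == 0 && buf[i + 1] == 0))
            i += 2; /* BUG: odd offsets are never tested */
        return i;
    }

    int main(void) {
        /* The only zero byte sits at an odd offset, so it is never seen. */
        unsigned char data[] = { 'a', 'b', 'c', 0x00, 'd', 'e' };
        printf("scan stopped at offset %zu of %zu\n",
               wide_scan(data, sizeof data), sizeof data);
        return 0;
    }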

Andy July 31, 2011 3:52 PM

A bit strange, but, if there are any OpenSSH developers listening: there is a function that has four values that get XORed with the incoming packet and a place in memory. You know the four values and what you sent in the packet, so you should be able to craft a null, which should trigger a bug.
I can’t find it in the latest version though; might want to check anyway.
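
The arithmetic behind that observation is simple; something like this (an illustration only, not code taken from the OpenSSH sources) shows why controlling the input when you know the mask lets you force a chosen byte:

    #include <stdio.h>

    int main(void) {
        /* If an incoming packet byte is XORed with a value the sender
           can predict, sending that same value forces a zero byte
           into memory. */
        unsigned char known_mask    = 0x5A; /* assumed known to the sender */
        unsigned char attacker_byte = 0x5A; /* chosen equal to the mask */

        unsigned char stored = known_mask ^ attacker_byte;
        printf("byte written to memory: 0x%02X\n", stored); /* prints 0x00 */
        return 0;
    }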

Richard Steven Hack July 31, 2011 6:53 PM

Clive: “it’s the lack of programmers who will do it.”

That’s part of what I mean.

And the reason programmers don’t use Lisp rather than C++ is that Lisp is hard to use (not to mention absurd from the point of view of readability). The same goes for the “functional” class of programming languages: their paradigms are too different from “normal” languages like C, C++, C#, Java, etc.

Which goes right back to my point that REAL “software engineering” is so technical – like REAL physical engineering – that a lot of existing programmers probably couldn’t cut it without major retraining.

In fact, retraining probably wouldn’t be possible. The methodology would have to be taught in university engineering schools, not “computer science” courses, and a new generation would have to grow up with the new approach.

But since a lot of managers in the IT industry came out of programming (promoted to their level of incompetence), that’s another reason management won’t buy into it.

The only solution, as I indicated, is that the new way of engineering has to be automated (so you don’t need highly trained “programming hordes”), and then the software has to be cheap or free so that even small groups or individual programmers can use it effectively with less technical training.

But you’re right, there is little indication we’re going to see this any time soon. Still, at some point, the computer hardware is going to be so powerful – and then more complex, a la parallel processing – that software development is going to lag so painfully behind that some movement will have to be made. My guess is this will occur within the next twenty years, if not sooner.

So far, advances in computer power have been overshadowed by just more bloated bad software. It’s not clear that can go on forever – but I assume the industry will try!

Dirk Praet August 1, 2011 5:32 PM

Whatever the appropriate and feasible solution to the problem, it’s definitely not more litigation. As lawsuits over the last few decades have become more of an additional weapon in the arsenal of big corps and plutocrats than a means of actually seeing justice done, such an idea merely shows off intellectual poverty on the part of its author. Much like raising taxes in my country seems to be the answer to nearly any issue thrown at the average politician unable to come up with real answers.

I personally believe it would be a good start for consumers to stop buying and using stuff that has a history of proving itself defective or insecure. That’s why I tossed Windows years ago. Linux or BSD may not be the most user-friendly – or most secure – experience for the average user either, but at least they are free. As to OS X, I like it just as much as most people do, and arguably Apple is not doing too bad a job. Then again, I have no ambition whatsoever to lock myself into yet another monopolist, especially one with such a bad record on privacy.

NZ August 1, 2011 6:08 PM

@Clive

GemOS
Sorry for the off-topic question, but what is GemOS? Google didn’t help.

General comment: “a chain is only as strong as its weakest link”.

@Andy
http://openssh.org/list.html

@RSH

And the reason programmers don’t use Lisp rather than C++ is that Lisp is hard to use
I feel that most programmers don’t even use C++ to its full extent.

Bruce Clement August 1, 2011 11:23 PM

@NZ
Probably he means DRI’s Graphical Environment Manager http://en.wikipedia.org/wiki/Graphical_Environment_Manager also known as GEM, unless he means their GEMDOS, which became Atari’s TOS (The Operating System) http://en.wikipedia.org/wiki/Atari_TOS

If you want to see GEM in action, you can download a FreeDOS VM with GEM in it from the FreeDOS people. Be warned that GUIs weren’t as nice 30 years ago as they are now: they were designed to work with slower processors, smaller memory, lower-resolution displays, fewer colours, etc.

HTH

Supachupa August 2, 2011 5:45 AM

It’s one of the reasons I’m such a big fan of PCI-DSS: it gives businesses the fear of a big stick, addresses risk in a prioritised manner based on real-world experience, and helps less mature companies develop an ISMS.

Clive Robinson August 2, 2011 8:55 AM

@ NZ,

Bruce Clement has it partially correct.

GemOS, as it became known, was DRI’s GEM windowing system on top of DRI’s version of DOS, called DOS Plus (it was like MS-DOS but had some extra features akin to CP/M), as specially licensed for Amstrad computers such as the PC-1512.

Looking back over all those years, it’s hard for people to remember that Amstrad was a very, very major player in PCs. At the time an IBM PC would set you back around 2000 GBP for a low-res monochrome system without a windowing environment. Alan Sugar released the 1512 for a quarter of the price and threw in, IIRC, three games and a basic word processor.

What did Amstrad in was that when he started putting HDs into later systems, the Far Eastern HD suppliers decided to pass on returns from other manufacturers. What is not known is why these dodgy drives made it out into Amstrad products, but the whole debacle ruined the Amstrad name, even though Alan Sugar eventually won compensation in court against the HD manufacturers and others. By then, though, the opportunity had passed.

However, as well as having an original Amstrad PC1640 monitor, I also have the Amstrad PPC640D “luggable” that I still use for generating printed one-time pads on three-part stationery on a dot-matrix printer.
