Comments

Clive Robinson June 14, 2018 11:18 AM

It would appear Thomas Dullien is either a little out of date or he has more depth in some areas than breadth.

For instance he poses a couple of things,

1, Do I need to trust my CPU vendor?

2, We do not know how to write real-world programs that operate on untrusted hardware.

Neither is true. There are ways to mitigate untrusted hardware that are over a hundred years old. Also, NASA and others know how to write very real-world programs that really do run on untrusted hardware.

The reason these known mitigations and methods are not used much is the cost involved. That said, the cost of such systems is actually less than that of hardware from around three to five years ago.

One reason unnecessary complexity, with both its legacy and carry-forward issues, happens is that software developers are not taught how to write code in a correctly balanced way, which, although it makes the code slightly harder to understand when crossing over the 20,000 ft barrier, actually reduces complexity and those legacy and carry-forward issues.

It’s very much to do with “left to right sequential thinking”, which results in bad decisions. Specifically, input error detection and occasionally correction get moved far too far to the left, and exceptions are rarely handled in a way that is not treated as “show stopping”.

Various tests over the years show that the majority of errors are caused by incorrect handling of input, either at the data level or in the program logic.

Put simply, the average developer moves the error correction all into the same place, as close as possible to the actual data input. Once through that, the program logic is written on the very fatal assumption that the input data contains no errors… Whilst this might be true for a program written from scratch, when you move to “code reuse” the input filtering and the program logic have been de-coupled, thus the wrong or insufficient input checking happens for the program logic assumptions to be correct. Which means that you get exceptions that are not handled correctly or at all.

Programs should be written such that the error checking and program logic are very tightly coupled, but wrapped in an exception handling process that can re-wind to any previous point where the error can be handled. This type of programming even survives things like the failure of hardware very far downstream to the right. It gives the best possibility of correcting any and all failures that conceivably can be corrected without dropping data / core, or out of the sky onto some unfortunate’s head.

Further, those hardware engineers who have been trained to design EmSec / TEMPEST hardened equipment very much understand how close coupling of related functionality, though it increases complexity locally, significantly reduces complexity in the larger system the close-coupled code is part of.

Humdee June 14, 2018 1:43 PM

“Dullien gave an excellent talk on the subject with far more detail than I’ve ever provided”

So you are saying he gave a complex talk on complexity. Figures.

me June 15, 2018 2:17 AM

@Clive Robinson

Also, NASA and others know how to write very real-world programs that really do run on untrusted hardware.

Do you have any link? Because I don’t think it is possible.
Operating on buggy hardware, I think yes, but not malicious hardware.
For example, you can add error-correcting codes, a cosmic ray detector, double all the things so that you don’t have one register but two copies of the same register plus ECC, so that you know if it is invalid because of a cosmic ray or some power fault.

I’m not in the field and I’m not sure that what I’m writing makes much sense.

But when we talk about “evil” hardware, things are different.
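As a rough illustration of that duplication idea, here is a hypothetical C sketch (nothing like real flight hardware): keep two copies of a value plus a parity word, and flag a fault whenever they stop agreeing. It catches random bit flips, not deliberate malice, which is exactly the distinction drawn above.

```c
#include <stdint.h>
#include <stdio.h>

/* A "register" stored as two copies plus a simple parity word.
 * A single bit flip (e.g. from a cosmic ray) in any one of the three
 * words is detectable because the copies or the parity will no longer
 * agree. This only detects random faults; it says nothing about
 * deliberately malicious hardware. */
typedef struct {
    uint32_t copy_a;
    uint32_t copy_b;
    uint32_t parity;   /* here just the value XORed with a fixed mask */
} dup_reg;

static void dup_write(dup_reg *r, uint32_t value) {
    r->copy_a = value;
    r->copy_b = value;
    r->parity = value ^ 0xFFFFFFFFu;
}

/* Returns 0 on success, -1 if the stored copies disagree. */
static int dup_read(const dup_reg *r, uint32_t *out) {
    if (r->copy_a != r->copy_b || (r->parity ^ 0xFFFFFFFFu) != r->copy_a)
        return -1;                /* fault detected: caller must recover */
    *out = r->copy_a;
    return 0;
}

int main(void) {
    dup_reg r;
    uint32_t v;

    dup_write(&r, 12345);
    r.copy_b ^= 1u << 7;          /* simulate a single bit flip */

    if (dup_read(&r, &v) != 0)
        puts("fault detected in duplicated register");
    else
        printf("value = %u\n", v);
    return 0;
}
```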

Alyer Babtu June 15, 2018 5:51 AM

The talk prompted recall of the (inspiring) paper “… Dynamics and Evolution of … Sociotechnical Systems” by Elliot Montroll from 1982 (edited 1984). See the pdf link at
https://projecteuclid.org/euclid.bams/1183553669

As the title suggests, the paper discusses quantitative patterns seen generally in technological change. It suggests any technology (in response to social demand) develops to a saturation point, at which point it is replaced by a new technology which overcomes the limiting factors causing the saturation of the previous technology. Montroll makes some of the same points made by Dullien.

Alyer Babtu June 15, 2018 6:12 AM

previous comment inadvertently truncated

Montroll asks why some transitions are unduly delayed beyond the typical 10 years or so, e.g. fusion power has been under development for nearly 70 years. He suggests this has something to do with the presence of numerous dimensionless parameters in the new model.

Could the problems with reliability and security in computing systems be an instance of Montroll’s suggestion?

Marshall June 15, 2018 9:20 AM

@Alyer Babtu fun fact for anyone who has done origami, Elliot Montroll is the father of the famous Origami artist John Montroll.

Fred P June 15, 2018 10:20 AM

@me

  • one way to protect against malicious hardware is to have other, independent hardware observing the outputs to see if the hardware is acting maliciously, with the power to shut down the other hardware if it is, and backup procedures to handle a failure. That said, very careful maliciousness could cause problems that are hard to detect, but they would hopefully also have limited effects. Also, the observing hardware could be subverted. One advantage, though, is that it tends to be easier to inspect the observing software and hardware than the active hardware and software.

I’ve worked on a system that uses this type of approach for an operating room medical device. Needless to say, this type of engineering is rare in the medical field outside of devices which can cause immediate death or harm.
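A minimal sketch of that observer idea in C (all names and limits are hypothetical): the monitor does not redo the active unit’s computation, it only checks each output against an allowed envelope and, on a violation, triggers the shutdown and backup paths.

```c
#include <stdbool.h>
#include <stdio.h>

/* Sketch of the "independent observer" idea from the comment above.
 * The observer does not reproduce the active device's computation;
 * it only checks each output against a simple allowed envelope and,
 * on a violation, cuts power and starts the backup procedure.
 * All names and limits here are hypothetical. */

#define DOSE_MAX 50   /* maximum permitted output per step, in made-up units */

static bool observer_check(int commanded_dose) {
    return commanded_dose >= 0 && commanded_dose <= DOSE_MAX;
}

static void shut_down_active_unit(void) { puts("observer: power cut to active unit"); }
static void run_backup_procedure(void)  { puts("observer: backup procedure engaged"); }

int main(void) {
    /* Outputs as they might stream past the observer; the last one is out of range. */
    int outputs[] = { 10, 12, 9, 11, 500 };
    for (unsigned i = 0; i < sizeof outputs / sizeof outputs[0]; i++) {
        if (!observer_check(outputs[i])) {
            shut_down_active_unit();
            run_backup_procedure();
            return 1;
        }
    }
    puts("observer: all outputs within envelope");
    return 0;
}
```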

Major June 15, 2018 10:53 AM

@Alyer Babtu

Thank you very much for the link! I am largely self-educated in many areas; I have never seen entropy used as a calculus for complex systems, and I found the exposition clear and entertaining. It is a terrific tool for my problem-solving tool belt.

@Clive

You have many interesting things to say with great assurance!! I wish I could google your mind for details!!

I have taken a pretty great hardware security class, and it gave no indication, however, that software can overcome subverted hardware. You seem to suggest that input cleaning and throw/catch error handling come somewhere near a solution to evil hardware, when to me they are only the start of a partial solution.

You frequently assail programmers and perhaps I am being defensive!

It is my intuition that if hardware is evil with a probability p, adding n pieces of hardware watching the main hardware will at most reduce the probability of error by a factor on the order of 1/log2(1+n).

wumpus June 15, 2018 12:25 PM

@Fred P. – one way to protect against malicious hardware, is to have other, independent hardware observing the outputs to see if the hardware is acting maliciously, with the power to shut down the other hardware if it is, and backup procedures to handle a failure.

This assumes that you have the complete documentation of the hardware in such a form (presumably VHDL was designed for this) that the independent vetting hardware can compare it with the specs for correct action. This of course flies in the face of the original article which points out that we certainly aren’t going to be able to prove the VHDL code trustworthy, nor are we likely to be able to vet the software running on the hardware.

Even worse is trying to vet the operation of a random number generator: the operation is effectively impossible. Intel has even presumably crafted an RNG that can be changed from valid to invalid by modifying the process (while the rest of the chip remains unaffected).

While it is easy to distrust the hardware (in a “who watches the watchmen” sense), I feel that if you somehow crafted trusted hardware it would be of questionable worth (although it might be sufficiently valuable to build encryption specific hardware this way). Hardware is just one part of an unvettable stack, and easily the smallest part. There’s far too much engineering/coding that goes into the most trivial parts, and expecting hardware to vet all that code is simply not going to happen. Expecting “many eyeballs” to vet all that code is equally invalid.

Clive Robinson June 15, 2018 1:38 PM

@ me,

Do you have any link? Because I don’t think it is possible.
Operating on buggy hardware, I think yes, but not malicious hardware.

I’ve talked about it many times on this blog in the past with @Nick P, @Wael and several others. If you search for my name and “voting protocols” you will find plenty of comments.

To get back to NASA: they built on work by New York Telephone, using redundant systems in parallel and voting protocols based on majority vote.

NASA used the notion that you could take three entirely different hardware platforms of similar performance and, by using clean-room design techniques and three different software teams, come up with three systems to run in parallel to feed the voting protocol system.

If an external attacker wants to infect the system, they need to,

1, Write three pieces of malware for totally different systems.

2, Infect the three systems simultaneously with the three pieces of malware.

To avoid detection.

If you think about it, step 1 is quite difficult, but step 2 is not really possible, because the infections would have to be done sequentially, which in turn means the voting protocol would catch it in progress.
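A minimal sketch of the 2-of-3 voting idea in C (illustrative only; in the real arrangement the three channels are independently developed programs running on three different CPUs/ISAs, not three functions in one process):

```c
#include <stdio.h>

/* Minimal 2-of-3 majority voter. In the real arrangement the three
 * "channels" are three different CPUs/ISAs running independently
 * developed code; here three C functions stand in for them. */

static int channel_a(int x) { return x * x; }
static int channel_b(int x) { int r = 0; for (int i = 0; i < x; i++) r += x; return r; }
static int channel_c(int x) { return x * x + 1; }   /* simulated compromised/faulty channel */

/* Returns the majority value and reports which channel, if any, disagreed. */
static int vote(int a, int b, int c, int *odd_one_out) {
    *odd_one_out = -1;
    if (a == b && b == c) return a;
    if (a == b) { *odd_one_out = 2; return a; }
    if (a == c) { *odd_one_out = 1; return a; }
    if (b == c) { *odd_one_out = 0; return b; }
    *odd_one_out = 3;          /* no majority: terminal fault, fail safe */
    return 0;
}

int main(void) {
    int x = 7, bad;
    int result = vote(channel_a(x), channel_b(x), channel_c(x), &bad);
    if (bad == 3)      puts("no majority: fail safe");
    else if (bad >= 0) printf("channel %d voted out, result = %d\n", bad, result);
    else               printf("all channels agree, result = %d\n", result);
    return 0;
}
```

The diversity is what matters: a single piece of malware cannot make two channels lie in the same way at the same time, so the compromised channel simply gets voted out.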

I hope that helps clarify things a bit for you.

Clive Robinson June 15, 2018 2:08 PM

@ Major,

You need to start from a point way down the computing stack and think: can I detect malicious behaviour here?

The answer is yes only if you can see the results of the malicious behaviour a lot further up the computing stack.

The question then becomes: what if the malicious behaviour is triggered? That is, it behaves normally until a crafted signal turns the malicious behaviour on. It quickly becomes clear that with modern silicon you cannot see, with test equipment, carefully crafted circuitry put there to give malicious behaviour.

The question then arises as to what you could do at such a low level.

Well, we already know from Rowhammer that you can use the failings of the device physics to flip bits in the core memory, below not just the CPU level but also below the MMU level. Just flipping a single bit in the MMU tables stored in core memory will completely wreck system security that is absolutely dependent on the correct functioning of the MMU. Thus software can in effect reach down and around any protections the CPU and MMU can give you.

Thus it can be seen that a single-CPU system cannot in any way protect itself from attack.

It all sounds doom and gloom until you consider a parallel system where one CPU/MMU executes the OS and application code and a second acts in effect as a hypervisor. If the second CPU halts the first CPU, then reads memory and calculates a hash of critical blocks that can be checked against known-good hashes, then low-level changes to memory can be detected.
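A toy sketch of that hash-check idea in C (hypothetical: a real monitor would halt the main CPU, read physical memory rather than its own arrays, and use a cryptographic hash instead of FNV-1a):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy sketch of the hypervisor check described above: hash each critical
 * memory block and compare against hashes recorded when the system was
 * known-good. FNV-1a is used only because it is short; a real monitor
 * would use a cryptographic hash and halt the main CPU while sweeping. */

#define BLOCK_SIZE 64
#define NUM_BLOCKS 4

static uint64_t fnv1a(const uint8_t *p, size_t n) {
    uint64_t h = 1469598103934665603ull;
    for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 1099511628211ull; }
    return h;
}

int main(void) {
    static uint8_t memory[NUM_BLOCKS][BLOCK_SIZE];   /* stand-in for critical blocks */
    uint64_t known_good[NUM_BLOCKS];

    memset(memory, 0xAB, sizeof memory);
    for (int i = 0; i < NUM_BLOCKS; i++)             /* record known-good hashes */
        known_good[i] = fnv1a(memory[i], BLOCK_SIZE);

    memory[2][5] ^= 0x01;                            /* simulate a Rowhammer-style bit flip */

    for (int i = 0; i < NUM_BLOCKS; i++)             /* periodic hypervisor sweep */
        if (fnv1a(memory[i], BLOCK_SIZE) != known_good[i])
            printf("block %d modified: halt and recover\n", i);
    return 0;
}
```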

This is just a tiny fraction of what can be done security-wise with a hypervisor that is in many respects little more than a state machine that is not Turing complete.

I’ve talked about this in depth on this blog previously with @Nick P, @Thoth, @Wael and others. Have a search for “Castles-v-Prisons” or “C-v-P”, or just “Probabilistic Security”.

It turns out that @Thoth proposed using SIMs as the parallel CPUs for C-v-P, and a bunch of academics at University College London have misappropriated the ideas without any acknowledgment, so you can buy a system if you wish.

Major June 15, 2018 2:40 PM

@Clive

Thank you for the kindly detailed response. I’ll do some googling!!

I’d love a heterogeneous triple-CPU trusted Linux system!!

Clive Robinson June 15, 2018 3:43 PM

@ Major,

You frequently assail programmers and perhaps I am being defensive!

I do assail certain types of programmers I call “Code Cutters”.

My viewpoint is that they behave like artisans, not engineers or scientists.

Back in the Victorian era, artisans decided they could build steam engines just by banging bits together using non-standard, hand-crafted parts. The results were fairly dire; the worst part was the deaths and injuries caused by parts failing and boiler explosions. It was this that caused the UK Parliament to take action, and the space between artisans and the newly established practitioners of practical “Natural Philosophy” was filled by the engineers who designed by what we call the scientific method.

The problem with the software industry is that by and large it is a “sausage machine”, slicing and dicing existing work and stuffing the result into skins or patterns. Nary a sign of engineering in sight. Thus errors in existing code get passed on through multiple generations of reuse because they were not found by what is in effect “standard engineering” practice.

It’s one reason I tend to look for software people not with CS qualifications but with hard science and engineering backgrounds. Oh, and I have a view not too dissimilar to Edsger Wybe Dijkstra, who in 2000 observed,

    The required techniques of effective reasoning are pretty formal, but as long as programming is done by people that don’t master them, the software crisis will remain with us and will be considered an incurable disease. And you know what incurable diseases do: they invite the quacks and charlatans in, who in this case take the form of Software Engineering gurus.

My only objection to the above is the inclusion of the word “Engineering”, as those he describes cannot by definition be engineers.

Clive Robinson June 15, 2018 4:23 PM

@ Wumpus, Fred P,

This assumes that you have the complete documentation of the hardware in such a form…

You will never get “complete documentation of the hardware” even if you are the chip designer.

So your argument is problematic at best.

However, you do not require anything more than the ISA and circuit documentation to get functionality out of a basic computer system. In fact we know that System on a Chip (SoC) based systems can be built into products without knowing much of the documentation. The Raspberry Pi with its Broadcom chip demonstrated that, as large parts of its functionality were held under Non-Disclosure Agreement (NDA).

The point about voting protocol systems is that you use different CPU chips, that is, three unrelated ISAs. You then use three development teams that have no contact with each other. As part of the specification they write what you might call tasklets that have clearly defined functionality, including timing. Thus each tasklet has a time-based signature which is designed, when functioning correctly, to be as near identical as possible across the three CPUs.

It’s the signatures that you put into the voting circuit. Aberrant behaviour from a change in either the hardware or the software will cause the signature to change.

As three different pieces of malware would be required for the three different ISAs, and they cannot be loaded by an external attacker in parallel, when the first is loaded its CPU gets voted out. Then you enter the terminal fault state as the second CPU gets loaded.
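A rough sketch in C of what signature-based voting might look like, assuming a signature is just a result plus an elapsed cycle count (the structure and the tolerance value are illustrative, not from any real system):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of signature-based voting: each channel reports a signature made of
 * its result and the time (here, a fake cycle count) it took. A channel whose
 * result OR timing drifts outside the allowed window is voted out. */

typedef struct { uint32_t result; uint32_t cycles; } signature;

#define CYCLE_TOLERANCE 5   /* allowed timing spread across channels */

static int matches(signature a, signature b) {
    uint32_t d = a.cycles > b.cycles ? a.cycles - b.cycles : b.cycles - a.cycles;
    return a.result == b.result && d <= CYCLE_TOLERANCE;
}

int main(void) {
    signature sig[3] = {
        { 49, 100 },   /* channel 0 */
        { 49, 102 },   /* channel 1 */
        { 49, 450 },   /* channel 2: right answer, wrong timing (suspicious) */
    };

    for (int i = 0; i < 3; i++) {
        int agree = 0;
        for (int j = 0; j < 3; j++)
            if (i != j && matches(sig[i], sig[j])) agree++;
        if (agree == 0)
            printf("channel %d signature deviates: voted out\n", i);
    }
    return 0;
}
```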

Alyer Babtu June 15, 2018 7:00 PM

@Clive Robinson

a time based signature

My dumb question: is the triple then a kind of real-time system? And are real-time systems generally harder to hack, and hard-real-time ones even harder?

I refuse to give my name June 15, 2018 8:56 PM

with far more detail than I’ve ever provided

So it was complicated?

Clive Robinson June 16, 2018 12:03 AM

@ Alyer Babtu,

My dumb question: is the triple then a kind of real-time system ?

It’s one of those awkward questions that are easy to ask but hard to answer 😉

There is a lot of talk about Operating Systems that is often little more than fluff in a specific situation. Thus you first have to work out what level you are working at, then decide what level of OS, if any, is required.

Real-time OSs are for when fast response to input changes is desirable, but their real function is to guarantee a maximum, not an average, response time. Often they are considered very inefficient, as they just sit there doing nothing most of the time.

The oft-touted example is a system to control the braking system on a vehicle. If you put your foot on the brake then you want not just a fast response but a guaranteed response within a very short time frame. The rest of the time you mostly don’t care what the control system is doing. Thus, as you usually don’t put your foot on the acceleration pedal at the same time as the brake pedal, you might add part of the engine management to the same control system to save hardware etc… To do it, though, you need a task switcher, which adds overhead even in a hardware switch from background to foreground with interrupts.

The reality is you can keep doing this only to a certain point before response time guarantees fail.
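To make the guarantee-versus-average distinction concrete, here is a minimal, hypothetical C sketch (POSIX clock_gettime) that times one brake-handling step against a hard deadline and trips a fail-safe on a miss:

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* Sketch of the hard-deadline idea: what matters is not the average
 * response time but that every single response finishes inside the
 * deadline. The "brake handler" below is a placeholder and the numbers
 * are illustrative only. */

#define DEADLINE_NS 2000000L   /* 2 ms worst-case budget for the brake handler */

static void handle_brake_input(void) {
    for (volatile int i = 0; i < 10000; i++) { }   /* stand-in for the real control computation */
}

int main(void) {
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    handle_brake_input();
    clock_gettime(CLOCK_MONOTONIC, &end);

    long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000L
                    + (end.tv_nsec - start.tv_nsec);

    if (elapsed_ns > DEADLINE_NS)
        puts("deadline missed: engage fail-safe braking path");
    else
        printf("handled in %ld ns, within the %ld ns budget\n", elapsed_ns, DEADLINE_NS);
    return 0;
}
```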

So for many safety-critical systems you don’t have an OS and often not even a BIOS. If you use the word “safety” in the way that some European languages do, it covers both of the English words “security” and “safety”. Thus you get the first glimpse that such systems apply equally to the safety-critical and the security-critical.

The important aspect you need to add in this case is that the three CPUs and their software stay in “lock step” at a quite low level, one that would in effect be below most of the BIOS level.

Once you have this building block of three different CPU types and their low-level software in “lock step”, you can look on it functionally as a single CPU and add OS-type requirements on top.

It helps to understand this if you have ever worked on other parallel systems at a low level.

For many programmers the jump from sequential thinking to parallel thinking is either not going to happen or will be inefficient / inelegant at best. However, Moore’s Law has started to approach the limits of the laws of physics, thus it is “The future is parallel at all levels” or stagnate, which is just one of the reasons the likes of Intel have multi-core CPU offerings. With an acceptable loss of efficiency per core, greater throughput can be achieved with multiple cores; all that is needed in most cases are the “tools” to take the sequential high-level code a programmer has produced and reduce it down to lower-level parallel code of a “desired form”.

If you search this blog with my name and the word “tasklets” you will find I’ve discussed this issue in more depth in the past as part of the Castle-v-Prison model.

Alyer Babtu June 16, 2018 5:44 AM

@Clive Robinson

Thanks. I, and everyone else, appreciate your detailed explanations, which multiply repay study.

jim dallas June 16, 2018 6:56 AM

The three (or 5 or ..) heterogeneous CPU discussion reminds me of how Haile Selassie, emperor of Ethiopia, was reputed to govern: by having multiple separate intelligence systems which he communicated with privately (and by speaking in person). When I read this as a student I thought it ridiculous, but actually if you live in an untrusted context, maybe it was a good idea: wasteful, but safer.

Security Sam June 16, 2018 3:30 PM

The left to right sequential mode thinking
Brings to mind the infamous 1802 chip
Since the left to right shift was missing
The only choice left was a barrel shift.

Nick P June 16, 2018 5:45 PM

Re backdoored hardware

On top of what Clive said, my design ideas added that the hardware be made with different EDA tools in different fabs and jurisdictions, with preferably minimal cooperation on things like backdoors. The RF-level attacks make me think that model is ultimately broken, though, since even one device in the TCB can leak data.

I’ll also note that DOD’s money let them go a more direct route with the Trusted Foundry Program. They use US defense contractors or other trusted firms to make the stuff. They can also use formally verified CPUs like AAMP7G or VAMP on process nodes that are still human-inspectable. Those tend to be more reliable, too, with less-broken physics.

questions June 16, 2018 10:42 PM

@Clive Robinson

the “tools” to take the sequential high level code a programmer has produced and reduce it down to lower level parallel code of a “desired form”.

Do you think it’s possible, or even desirable, to engineer reliable tools that reduce clever high-level code down to proven low-level code?

If I remember correctly, in the past, you have defined “tasklets” as being well engineered single-purpose functions that can be safely combined into a high-level program – somewhat similar to how shell commands are often piped into one another.

Are the two directions fundamentally the same, or does one necessarily precede the other?

Nick P June 17, 2018 2:57 AM

@ questions

Languages like Concurrent Pascal (old) and Cray’s Chapel (active) let you describe concurrent or parallel stuff. Chapel, like X10, tried to cover a lot of situations, too. In hardware, high-level synthesis tools generate low-level descriptions of hardware from high-level forms close to programming. The hand-made stuff is better on average but there’s steadily been people using that stuff. FOSS tools even exist.

So, both concepts not only exist: they’re commercial products and FOSS projects in active use. It’s more a question of whether the method or tool is right for your project’s requirements. I think things like Chapel just didn’t get the attention they deserved. Many would’ve found them useful for capitalizing on multicore or clusters easily. Stuff like HLS may not make the cut because it’s not good enough for the job, too pricey, or for many other reasons.

Clive Robinson June 17, 2018 5:12 AM

@ questions,

Do you think it’s possible or even desirable to engineer reliable tools that reduce clever high-level down to proven low-level code?

It depends on what your final objectives are…

Let’s start with a bottom-up view, as @Nick P has given a top-down view.

In essence we do it already with compilers that reduce high-level code down to any given CPU’s ISA, and the ISA is reduced down by the CPU’s internal microcode interpreter to Register Transfer Language (RTL) operations and as few as eight arithmetic and logic instructions that are hard-coded into the CPU’s ALU(s). That part most high-school kids get routinely mentioned in the way they are taught “Computer Science” these days, though it was not too long ago that they were taught little or nothing of it even in college CS courses that just went with high-level languages.

Digging in a little further, some ISAs are quite rich and some are quite lean instruction-wise, and at one point people used to talk of Complex Instruction Set Computers (CISC) and Reduced Instruction Set Computers (RISC) and their respective architectural advantages. In reality the instructions had little to do with the underlying architectures; it was actually about how to most quickly get instructions and data from core memory into the internal CPU structures, typically the register file, and what the CPU could do with them. Cray and other CPUs had very large, very flexible register files and allowed quite complex functions to run entirely in the CPU repeatedly, thus avoiding the core-to-register bottleneck.

The reality today is multiple RISC-type CPU cores with moderately large register files, surrounded by mind-boggling cache systems and extra instruction decode to feed them, which do all the clever speedup stuff Intel, AMD and ARM are having so many attack vectors with currently. In essence they have engineered themselves into an evolutionary cul-de-sac [like the sabre-toothed tiger was alleged to have done].

I cannot remember what the current tally of instructions for the Intel architecture is, and I suspect few can, but I do know they all end up as RTL and ALU instructions that are actually quite few.

More importantly, looking to the future there is no reason why an ISA should not get more complex and start replacing the base code libraries found in programming languages. People are talking about doing the equivalent of having FPGA logic cells built directly into CPU chips as the next generation of both cloud and supercomputer devices. This will enable qualified programmers to write algorithms directly in logic and get a five to fifty times speed improvement.

So along with going “parallel”, the industry is going for even richer ISAs that are in part designed by the programmer.

My intent with “tasklets” was to go massively parallel with very small, very efficient CPU cores surrounded by very fast memory that could be used as a very flexible register array or as what looks like conventional cache/core memory. In part this is because we’ve already crossed the point where processing logic density is restricted by clock speed and heat death. Memory, being inherently lower power as storage logic is mostly static, can be packed considerably more densely or run at higher clock speeds. With judicious design there is no reason why the internal clock speed could not go through the 10 GHz barrier. But the real intention was twofold: firstly to get rid of conventional task switching, which kills CPU performance, and secondly that the tasks have clear signatures for security hypervisors to monitor for aberrant behaviour.

The problem, as with parallel programming, is that there are very, very few people who can do low-level security coding. Thus there are not enough to securely code even OSs, let alone the uncountable number of application programs.

Back some decades ago, in 1981, when the Apple ][ was still the must-have personal business computer, a language was released called “The Last One”[1]. It was allegedly a “Fourth Generation Language” (4GL) so high level that you’d never need another language… The reality was that it was a program generator, taking in user flow charts and producing BASIC code, the then “lingua franca” of the computer world. The point is it started as a form of scripting language that hid low-level details behind prewritten tasks and evolved its own form of early AI to become something that would generate those tasks. A similar approach would be for a programmer to write a script and a generator to produce not just the required code but code that also carries clear security signatures and hooks for the security hypervisor. Thus taking the scarce resource of security programmers and making their work more widely available.

You could consider it to be a security-focused version of Model-Driven Engineering (MDE)[2], but that would be missing the point somewhat.

[1] http://www.tebbo.com/presshere/html/wf8104.htm

[2] Model-driven engineering is considered an awkward cuss lurking between third and fourth generation languages by a number of people who consider themselves unfortunate to have come across it. One main gripe is that it is far from user friendly and the processes behind it obtuse at best. However, it also has its ardent fans. Whichever side of the divide you are on, it does unfortunately have a very steep learning curve and makes you feel like your brain has been through the equivalent of a couple of rounds with the world heavyweight champion. Thus it shows usability issues that may mean it never really hits “prime time”.

PeaceHead June 17, 2018 7:13 PM

This is a great topic.
It brings to mind also the thorny issue of interoperability, where previously unseen and untested complexities (and conflicts) are born and escape to die another day.

me June 18, 2018 2:25 AM

@Fred P

to have other, [SECURE], independent hardware observing the outputs to see if the hardware is acting maliciously

So I can have insecure hardware if I have secure hardware checking what the insecure one does?
Why can’t I use the secure one in the first place?
That’s not a solution, I’m sorry 🙂

me June 18, 2018 2:28 AM

@Clive Robinson

I hope that helps clarify things a bit for you.

Yes, thanks.
Now that I think about it… something similar was proposed for programs: use multiple compilers to compile the same source.
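A minimal sketch of that idea in C (hypothetical: it assumes POSIX popen and two binaries, ./calc_gcc and ./calc_clang, built from the same source with different compilers):

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

/* Sketch of the "same source, multiple compilers" idea: run two binaries
 * built from the same source by different toolchains on the same input
 * and refuse the answer if they disagree. Paths and names are hypothetical. */

static int run(const char *cmd, char *out, size_t n) {
    FILE *p = popen(cmd, "r");
    if (!p || !fgets(out, (int)n, p)) { if (p) pclose(p); return -1; }
    pclose(p);
    return 0;
}

int main(void) {
    char a[256], b[256];

    /* ./calc_gcc and ./calc_clang are the same program compiled twice */
    if (run("./calc_gcc 42", a, sizeof a) || run("./calc_clang 42", b, sizeof b))
        return 1;

    if (strcmp(a, b) != 0)
        puts("outputs differ: one toolchain (or binary) is suspect");
    else
        printf("both builds agree: %s", a);
    return 0;
}
```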

Thunderbird June 18, 2018 11:37 AM

Clive’s comments on how you can–at tremendous expense–implement multiple versions of code using clean room techniques to run on untrusted hardware are interesting, but don’t address the situation the original presentation was talking about. Specifically, it discusses the current state of the world, where microprocessors are (much) cheaper than purpose-built hardware, and cheaply-implemented programs that work in enough cases can simulate said purpose-built hardware in (again) many cases. Those cases where the simulation fails likely lead to security holes.

What I am afraid we may lose track of in quibbling about individual bullet points is that the presentation describes a thing happening in the real world for understandable reasons. I found it insightful and helpful. Like many great ideas, it made me wonder why I didn’t think of it myself.

MarkH June 18, 2018 2:11 PM

@Security Sam:

The RCA COSMAC 1802!

Simply seeing that number, sent me for a moment on an excursion down “memory lane” …

I wrote software for a project (battery powered, of course) which required quadrature addition (square-root of sum of squares) for three accelerometer axes. As you can imagine, I was face-to-face with the peculiar limitations of the 1802.

I was so methodical in those days that I would add a hundred or so lines of assembly code at one go (of course, we were working without debugging tools), and fairly often the new code would test out the first time.

Heavens Above, I was so young in those days …

Clive Robinson June 18, 2018 2:19 PM

@ me,

So I can have insecure hardware if I have secure hardware checking what the insecure one does? Why can’t I use the secure one in the first place?

You can use insecure hardware if you can verify its correct functioning. A voting system does this.

You are making an incorrect assumption about the checking hardware. It can be a very simple state machine made of components that can be verified more easily, such as diodes and resistors. Due to its simplicity, verifiability and non-Turing-complete design it can reliably verify the complex Turing-complete CPUs and other complex logic.

Thus the incorrect assumption incorrectly leads to your conclusion of,

[T]hat’s not a solution, I’m sorry

I can go through it with you in greater depth, but based on past experience it’s going to occupy a large amount of this thread and I would like to leave it a few days to let others get their comments in first.

Security Sam June 19, 2018 10:53 AM

@MarkH
Yes indeed, it was peculiarly unique. I happened to be at an Intel seminar, where someone in the audience made a humorous reference to it.

Noone June 21, 2018 10:21 AM

@Clive Robinson

I think the solution you point out is for a different world (e.g. “NASA”) where cost and time do not matter.
Halvar pointed out in his talk that the real world goes for ONE cheap CPU at still-acceptable unreliability, for cost reasons. So replacing that with three different CPU architectures and three different SW architectures to build a voting system is a bit too far from reality.

Clive Robinson June 22, 2018 12:40 PM

@ Noone,

So, replacing that by three different CPU architectures with three different SW architectures to build a voting system is a bit too far from reality.

Actually it’s more dependent on the system requirements analysis than it is on anything else, like relative costs. That is, it should consider financial risk as a proportionate factor, which has had a very major shove since May 25th this year.

The fact that SOHO and home users have rarely done a full requirements analysis up to now is something that is likely to start changing over the next few years, due to regulation with real teeth and just one or two business-terminating cases. That is exactly the same reason many of the larger businesses have done so recently and will continue to do from now on.

The biggest group of high-end CPU purchasers currently is those involved with supplying online services. Many are subject to various legal constraints; just the latest in a long line is the EU GDPR, which unlike earlier legislation has some very real teeth that scare the likes of Google and Facebook and similar.

Thus the question of “Best Practice” likewise gets some very real teeth to chew with. Best practice would be to use such a voting system, no ifs, doubts or maybes; the technology is in place one way or another from a limited number of security and high-availability vendors in other high-risk areas (think aerospace / health / etc). Thus management of large organisations has to make the choice on investment / risk outcome. The risk is now way, way higher, so much so that any two-bit lawyer could persuade a court that a company had not followed best practice in representations to the EU…

The point being, this has a tipping-point effect that moves not just the cost but the tipping point downwards, making it more available, thus widening the best-practice risk circle, bringing the cost down further, and so on until it gets a lot closer to the equivalent commodity pricing.

We have seen this before with the auto industry, where such legislation actually pulled the entire industry out of a “race to the bottom” death spiral. It not only encouraged improved safety, it actually caused engineering to find new methods with interesting “sweet spots” that reduced manufacturing costs and improved production techniques, likewise reducing costs whilst also upping safety and fuel utilisation…

The right sort of legislation, although initially appearing draconian, actually forces markets to face the realities of life and thus up their game quite a bit, which contrary to the “free market mantra” actually improves the market for both customers and suppliers of goods and services.

There are other examples, such as QA, which whilst not mandatory de jure quickly became mandatory de facto as customers were given choice and voted with their feet.
