name.withheld.for.obvious.reasons November 15, 2021 1:27 PM

I know, Bruce, that your contribution will be substantive, but my reaction to a symposium on securing “physical capable computers” is that it puts the cart before the horse. How about securing computers in general? We are still so far away from any notion of trustworthy computing that it undermines any effort to secure some newly cast environment. In the past I have tripped over significant infrastructure vulnerabilities that exist around physical operations. All manner of industrial and military systems are woefully inadequate today, let alone tomorrow. Until integrity with an honest tenor meets corporate, social, and manufacturer practice, with EULAs where accountability is anything but… and I’ll skip the whole consumer electronics morass and resign myself to what is obvious.

Clive Robinson November 15, 2021 6:27 PM

@ name.withheld…,

We are still so far away from any notion of trustworthy computing that it undermines any effort to secure some newly cast environment.

The problem is that computers have never been designed to be secure. We simply do not know how to do it. The best idea we currently have is “Fully Homomorphic Encryption”(FHE)[1], and to be frank that is not going to turn out as secure as people hope it will[2]. That is, it will be,

“Secure in theory, but not in practice”.

Add to that, you have the somewhat thorny issue of “Malleability” within an encapsulated data object, which by necessity has to be authenticated[3] by some secure checksumming / hashing algorithm.

Whilst there are a number of other issues, it’s uncertain whether the issues mentioned can actually be securely resolved. If they cannot, then despite the theoretical promise of FHE the practical results will not be secure, though they might still in effect be confidential.
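As a toy illustration of the homomorphic idea, here is an additive one-time-pad sketch in Python. It is nowhere near real FHE (the scheme, modulus, and values are invented for illustration), but it shows the core property: someone can compute on ciphertexts without seeing the plaintexts.

```python
import secrets

# Toy additively homomorphic scheme (illustration only, NOT real FHE):
# each value is blinded with a random pad modulo N, and sums of
# ciphertexts decrypt to sums of plaintexts because the pads combine
# predictably.
N = 2**32

def encrypt(m, key):
    return (m + key) % N

def decrypt(c, key):
    return (c - key) % N

k1, k2 = secrets.randbelow(N), secrets.randbelow(N)
c1, c2 = encrypt(100, k1), encrypt(23, k2)

# A server can add the ciphertexts without ever seeing 100 or 23...
c_sum = (c1 + c2) % N

# ...and the key holder decrypts the sum with the combined pad.
assert decrypt(c_sum, (k1 + k2) % N) == 123
```

Note that this very property is the malleability problem in miniature: anyone can add a chosen offset to a ciphertext and shift the hidden plaintext by the same amount.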


[2] Whilst homomorphic encryption protects the data you process, the actual processing will still leak information, by the simple facts that work is being done and in specific ways.

[3] Ordinarily, data that is either stored –at rest– or communicated –in transit– needs to be “armoured” against several types of attack by an adversary. This armouring is done by checksums or hashes etc. prior to encryption: modify the data in any way and its pre-modification checksum fails. Worse, any homomorphic operation on encrypted data produces encrypted results that are unknown, which means a new checksum cannot be generated. At which point all the attacks the armouring previously protected against become possible again.
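A minimal sketch of the malleability problem described in [3], using an XOR keystream as a stream-cipher stand-in (the message and the flipped byte are made up for illustration):

```python
import hashlib
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# "Armour" the data with a hash computed before encryption.
plaintext = b"PAY 0100 TO ALICE"
digest = hashlib.sha256(plaintext).digest()

# Encrypt with a one-time keystream (stream-cipher stand-in).
keystream = secrets.token_bytes(len(plaintext))
ciphertext = xor_bytes(plaintext, keystream)

# An attacker flips bits in the ciphertext without knowing the key:
# '0' (0x30) XOR 0x09 becomes '9' (0x39).
tampered = bytearray(ciphertext)
tampered[4] ^= 0x09
recovered = xor_bytes(bytes(tampered), keystream)

assert recovered == b"PAY 9100 TO ALICE"             # targeted change worked
assert hashlib.sha256(recovered).digest() != digest  # the armour catches it

# With a homomorphic operation, though, the resulting plaintext is
# unknown to everyone, so no fresh digest can be computed for it and
# this line of defence is lost.
```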

okmarts2 November 16, 2021 3:29 AM

With the development of computer technology and digital circuit technology, the interface capability of HMI products is getting stronger and stronger. In addition to the traditional serial (RS232, RS422/RS485) communication interfaces, some HMI products offer data interfaces such as network, parallel, and USB interfaces, which can be connected to industrial control devices with the same interfaces to realize human-machine interaction. ABB HMI CP620

name.withheld.for.obvious.reasons November 19, 2021 2:20 PM

@ Clive
Yes, malleability is possibly a bug and not a feature. To my thinking, the fidelity of data, as it is stored, recalled from an indirect source, or queried from a knowledge base, is not a structural thing. A general methodology where resources, data, and information from prescribed sources meet specific and mutable criteria (dynamism) is not a thing, at least not in the way I think of it. And this goes back to your idea about a distributed database I mentioned that Sun Microsystems was working on, Web Fibre Database or some such: a kind of distributed disk and storage network, much flatter than existing network-based storage schemes.

There are encoding standards (we have recently seen how well that works), there are transport protocols such as IDL/NIDL, and a suite of presentation-layer tools such as SQL and Hadoop. Where this falls down is when I make a comparison with a physical data set, for example from a paper data logger or books and publications, where a number of cascading and direct relationships to the data’s integrity (I like to think of it from an audiophile perspective), or fidelity, are maintained, or at the very least “recoverable”. I can pull the paper logs, books, or other tactile source that does not share the same risk profile when it comes to data integrity (and fidelity).

Today, the distance from the source inputs, whatever form they take, to the coalescing, computing, and subsequent analysis is long, and, as we see with machine learning, nearly indeterminate. That’s not good.

Clive Robinson November 19, 2021 6:27 PM

@ name.withheld…,

Today, the distance from the source inputs, whatever form they take, to the coalescing, computing, and subsequent analysis is long, and, as we see with machine learning, nearly indeterminate. That’s not good.

The distance needs to get shorter for various quite fundamental reasons (bandwidth and the speed of light). But that is not the way the major industry players want to move. That is, they want your data and the control it gives them over you.

Which is why this idea of centralising stuff in “the cloud” is really a bad idea at oh so many levels.

If we look at nature and how it has worked things out over several million years, we see a highly distributed model with as much processing done at the source as possible, and in some cases local control overriding distant central control in several ways. This can be seen in the human body: tapping nerve bundles in the knee and elbow causes an autonomous reaction, and the lizard/monkey brain causes “instinctive” rather than “conscious” action.

The trick is working out what needs to be done at what distance away, and why. One mistake we keep making over and over is assuming higher bandwidth gives more “useful” information… Whilst yes, it does give more information, much of it can in reality be seen as “noise” to a given function. The real problem arises when you try to run two entirely different functions at the end of a distance, because the signal for one may well be the noise for the other and vice versa. This suggests “filter early and travel on separate paths” might be a strategy to investigate; unfortunately such a system is inherently unstable unless the processing happens as part of the filtering, and thus is local to the source.

To see why this might be, consider an amplitude tracking loop that has three lossy integrators producing fast, medium, and slow integration times. The trick is to try to maintain the greatest dynamic range, so the fast loop is there to get rid of transients or clicks that are very probably not part of the desired signal, and so on. What is not immediately apparent is that there has to be a delay through the system proportional to the time of the longest integration period. If you have several inputs to a downstream system that are filtered in this way, you need to be darn certain you get the delays right, or all hell breaks loose and feedback becomes oscillatory (howl in audio systems).
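A rough sketch of the three-integrator idea in Python (the alpha values and the test signal are invented for illustration, not tuned):

```python
# Three lossy integrators (leaky exponential averages) with fast,
# medium, and slow time constants, all fed the same signal.
def lossy_integrator(alpha):
    state = 0.0
    def step(x):
        nonlocal state
        state += alpha * (x - state)  # leak toward the input
        return state
    return step

fast, medium, slow = (lossy_integrator(a) for a in (0.5, 0.1, 0.01))

# A steady level of 1.0 with a single large "click" at sample 10.
signal = [1.0] * 50
signal[10] = 100.0

peaks = {"fast": 0.0, "medium": 0.0, "slow": 0.0}
for x in signal:
    for name, integ in (("fast", fast), ("medium", medium), ("slow", slow)):
        peaks[name] = max(peaks[name], integ(x))

# The fast loop chases the click (and recovers quickly); the slow loop
# barely sees it, preserving the long-term level estimate.
assert peaks["fast"] > peaks["medium"] > peaks["slow"]
```

The differing peak responses make the delay point concrete: the slow loop’s output lags events by roughly its integration period, so downstream combination of these outputs has to account for that lag.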

Interestingly, nature has elected to split parallel processing and serial processing apart. Very roughly, the right side of your brain does the parallel processing of semi-raw sensory input and the left-hand side does the sequential logic for reasoning and action. We don’t yet know to what extent this makes sensory processing easier, but we can see parallels in human activities[1].

Therefore we need to learn not just how to process in parallel but how to do it in a distributed way.

As most of us cannot do parallel processing effectively[2], we end up with very slow, grossly inefficient systems, in the main because we cannot even think about how to distribute behaviours, let alone do it.

There is only one direction we can go in: highly parallel, highly distributed, semi-autonomous systems acting collaboratively. The problem is we have not a clue how to do it. And so far the solution has been head-in-the-sand thinking: trying to make sequential systems not just more complex but run faster, until we’ve crashed into the constraints of the laws of physics we currently know…

[1] Think in terms of a company accounts department: individual transactions come in and are processed, and anything that breaks certain basic rules gets stopped there. At the end of the week etc. these individual transactions get grouped into small groups where different rules get applied. This process repeats over increasing time periods and larger groupings, with different rules each time. When done correctly the company appears to have a well managed financial situation, albeit slowly overall and on what appears at the top to be very limited data. The reality though is that the response can be very fast for individual inputs, with other responses likewise happening at speeds appropriate to the aggregate data grouping. In effect, what would appear as a very chaotic and noisy input gets filtered and processed in a way that is distributed appropriately for most conditions.

[2] Due to the fact we try to do everything centrally and serially, we are fairly hopeless at doing things in parallel. Therefore we tend to decimate in time: we split things into separate serial tasks and then centrally process at the rate of the slowest, so that we just delay and delay until the control loop is too slow to respond as required.
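The tiered accounts-department filtering in note [1] above can be sketched as follows (all rules, limits, and figures are hypothetical):

```python
# Tiered processing: each level applies its own rules at its own rate,
# so only summaries travel upward, never the raw feed.
transactions = [120, 80, 5000, 60, 95, 70, 9000, 40]

# Tier 1: per-transaction rule; anything breaking it is stopped here.
accepted = [t for t in transactions if t <= 1000]

# Tier 2: "weekly" groups of four, with a rule on each aggregate.
weekly_totals = [sum(accepted[i:i + 4]) for i in range(0, len(accepted), 4)]
flagged_weeks = [w for w in weekly_totals if w > 300]

# Tier 3: the top level sees a handful of numbers, not the raw feed.
summary = {"weeks": len(weekly_totals), "flagged": len(flagged_weeks)}

assert accepted == [120, 80, 60, 95, 70, 40]
assert weekly_totals == [355, 110]
assert summary == {"weeks": 2, "flagged": 1}
```

Each tier reacts at a speed matched to its data: individual bad transactions are stopped immediately, while aggregate anomalies surface on the slower weekly cadence.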

