Hyundai Uses Example Keys for Encryption System

This is a dumb crypto mistake I had not previously encountered:

A developer says it was possible to run their own software on the car infotainment hardware after discovering the vehicle’s manufacturer had secured its system using keys that were not only publicly known but had been lifted from programming examples.


“Turns out the [AES] encryption key in that script is the first AES 128-bit CBC example key listed in the NIST document SP800-38A [PDF].”


Luck held out, in a way. “Greenluigi1” found within the firmware image the RSA public key used by the updater, and searched online for a portion of that key. The search results pointed to a common public key that shows up in online tutorials like “RSA Encryption & Decryption Example with OpenSSL in C.”
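The technique generalizes: because standards and tutorials publish their example keys, you can mechanically scan a firmware image for them. A minimal sketch, assuming nothing about the real firmware layout (the blob below is fabricated); the AES-128 key is the genuine published example key from NIST SP 800-38A:

```python
# Sketch: scan a firmware image for well-known example key material.
# The key below is the published AES-128 example key from NIST SP 800-38A;
# finding it verbatim in firmware strongly suggests the vendor copied the
# standard's test vectors. The firmware blob here is simulated.
import binascii

NIST_SP800_38A_AES128_KEY = binascii.unhexlify("2b7e151628aed2a6abf7158809cf4f3c")

def find_known_keys(firmware):
    findings = []
    if NIST_SP800_38A_AES128_KEY in firmware:
        findings.append("NIST SP 800-38A AES-128 example key")
    # RSA keys usually ship as PEM text; an embedded public key can be
    # extracted and a fragment of its base64 body searched for online,
    # which is essentially what greenluigi1 did.
    if (b"-----BEGIN PUBLIC KEY-----" in firmware
            or b"-----BEGIN RSA PUBLIC KEY-----" in firmware):
        findings.append("embedded PEM public key (search a fragment of it online)")
    return findings

# Simulated firmware blob containing the NIST example key:
blob = b"\x00" * 64 + NIST_SP800_38A_AES128_KEY + b"\xff" * 64
print(find_known_keys(blob))  # ['NIST SP 800-38A AES-128 example key']
```

Real key-scanning tools work from much larger corpora of known keys, but the principle is the same: a published key offers no secrecy at all.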

EDITED TO ADD (8/23): Slashdot post.

Posted on August 22, 2022 at 6:38 AM • 42 Comments


Q August 22, 2022 6:48 AM

So nothing wrong here. All is as it should be.

People who buy a vehicle should be able to do whatever they want with it. Including running their own code. The manufacturer shouldn’t ever be permitted to lock the owner out from anything. It’s none of their business what people do with their own cars.

Michael August 22, 2022 7:03 AM

I disagree for 2 reasons.

Firstly, owners shouldn’t be allowed to do things that are dangerous or illegal. Neither should car makers of course.

Secondly, owners may not know what the software does, at best they know what it is advertised to do. Of course there is no guarantee that Hyundai does either.

There are also guarantees and insurance to consider. I don’t think that companies should be penalised for owners’ stupidity.

Q August 22, 2022 7:54 AM

“Firstly, owners shouldn’t be allowed to do things that are dangerous or illegal.”

Owners already aren’t allowed “to do things that are [..] illegal”. There are these things called laws. You also aren’t allowed to exceed the speed limit; do you suggest that all cars should be governed (controlled by whom?) to prevent you speeding? Currently we rely upon the driver to not jab the accelerator too hard, and it works fine. If people don’t follow that rule, they get punished.

You also aren’t allowed to stab people with a knife, but we don’t ban knives. Instead we have laws that forbid you to do that.

Perhaps you are conflating “allowed” with “able to”. They are different things. You aren’t allowed to stab people, but you are certainly able to. It should be the same with cars. How does reprogramming it to disable all telemetry suddenly lead to you doing illegal things? It doesn’t, it is a false conclusion.

The real crime here is the manufacturers retaining control. If you don’t control it, then you don’t own it.

Clive Robinson August 22, 2022 8:16 AM

@ Bruce, ALL,

“This is a dumb crypto mistake I had not previously encountered”

Yes it is, but…

It’s also “expected” in a way.

If you are developing a cryptographic application you’d want to run “the standard tests”. So at some point the standard keys would get used.

Now the real questions to ask are,

1, How did the test key get left behind?
2, Was it by accident or design?

That is,

1, Is it a very stupid mistake.
2, Is it made to look like a stupid mistake.

As gets pointed out from time to time, “Plausible deniability” is very useful.

But this is so stupid, it’s almost too stupid.

I can see one way for it to happen and that is the “two team handover” issue.

Generally when one team hands code over to another team, these days they also hand over a bunch of “tests” for the second team to run and check they have not broken anything later.

The “Agile Alliance” and similar strongly push “Test Driven Development”(TDD) which very loosely is,

1, Read the spec
2, Write a bunch of tests[1]
3, Write the code to pass the tests
4, If a test fails fix code
5, Refactor the code
6, Keep testing and refactor looping
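The loop above, and the way a test fixture can quietly outlive the tests, can be sketched as follows. Everything here is hypothetical (a toy XOR stand-in for the real cipher), but note that the fixture key is the published NIST SP 800-38A example key, and nothing in the TDD loop itself ever forces it to be replaced before shipping:

```python
# Hypothetical TDD sketch. Step 2: write a test (it must fail first);
# step 3: write just enough code to pass it. The danger: TEST_KEY sits
# in the codebase, one lazy import away from production.
TEST_KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")  # fixture only!

def encrypt(data, key):
    # Toy XOR "cipher" standing in for AES, just enough to pass the test.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def test_roundtrip():
    ct = encrypt(b"hello", TEST_KEY)
    assert encrypt(ct, TEST_KEY) == b"hello"  # code was written to this test
    assert ct != b"hello"

test_roundtrip()
# All tests green -- and TEST_KEY is still hard-coded in the module.
```

The tests pass, the feature ships, and the “test key” has become the production key.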

Yes it produces code that passes the tests but…

1, Are the tests correct?

And perhaps worse,

2, Do the tests in effect become built into the code?

The answer to the last point is often “yes” which is where the danger arises.

But even if it’s not direct artifacts of the tests getting left in the code, the code structure and more are clearly a product of the TDD methodology in use.

As for are the tests right? Well often most do not even know what they should be testing for or how, just that a new test should fail and the code changed to pass[1]…

Thus the tests are prescriptive, and other testing techniques which in experienced hands are way more effective get ignored.

To say,

“Test artifacts get left behind in TDD”

Is, shall we say, an understatement with some developers…

So yes I can see “Stupid being too Stupid” but that’s to be expected with “features over functionality” driven management with commensurately short development cycles.

There are also other ways such things can get “baked in” as development moves down a chain.

[1] One mantra of TDD is “new tests must fail”; it’s based on a false argument that a test is not functioning if the existing code passes it… Therefore by definition you are not writing a test to the specification, but to make the existing code fail in some way. This is dangerous because when you “refactor” you can cause other tests or code to inherit something undesirable from the new test, which may actually introduce a fault.

R. Cake August 22, 2022 9:28 AM

I think it is as easy as this: outsourcing to an incompetent entity.
During the (outsourcing) sales process the car supplier responsible for the unit told the prospective contractors that they would be responsible for a unit with AES encryption. Of course they said “yes yes, no problem” and did nothing else.
Upon award (to the lowest bidder of course), they dumped the spec on the desk of their developers. These had never in their life done anything with crypto yet.
So naturally, they turned to textbooks to cobble together some quick and dirty “solution”. Not even bothering to try and understand what they were doing or which part of it mattered, they just took any examples they could find and copied them 1:1.

They had fulfilled the spec at face value, and (nearly) everyone was happy ever after.

Erdem Memisyazici August 22, 2022 10:14 AM

Does anybody find it strange that only people who work on gambling machines produce flawless software when it comes to security?

Clive Robinson August 22, 2022 10:51 AM

@ Erdem Memisyazici,

Re : Bug free coding.

“Does anybody find it strange that only people who work on gambling machines produce flawless software when it comes to security?”

Err not true at all.

1, Some gambling machines have had bugs, especially to do with random number generation.

2, I’ve written software for various embedded systems including “Fast Moving Consumer Electronics”(FMCE) that have not had any faults reported.

But also consider: when you take away all the unimportant bells-and-whistles nonsense used to jazz up what is a very, very simple interface, gambling machines are not exactly complicated.

Because the electronics in gaming machines replaces mechanics on which several legal constraints are placed, including payout rate etc., and these machines are tested before they are approved for use. The software is of very limited functionality and written as a “state machine” with every state mapped and verified.
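A minimal sketch of that “state machine with every state mapped” style, with hypothetical states simplifying a payout flow: every legal transition is enumerated up front in a table, and anything not in the table is rejected rather than guessed at, which is what makes exhaustive verification feasible:

```python
# Every (state, event) pair that is allowed appears in this table;
# the table itself is the artifact that gets reviewed and verified.
TRANSITIONS = {
    ("idle", "coin_in"): "credited",
    ("credited", "spin"): "spinning",
    ("spinning", "result"): "settling",
    ("settling", "payout_done"): "idle",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        # Unmapped input is an error, never a silent default.
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")

s = "idle"
for e in ("coin_in", "spin", "result", "payout_done"):
    s = step(s, e)
print(s)  # idle -- the cycle closes, and every path is checkable by inspection
```

Certification testing can then walk the table exhaustively, something that is impossible for sprawling general-purpose code.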

Mexaly August 22, 2022 10:55 AM

@Erdem, banks are pretty good at infosec, too.
Enough is at stake to make the inconvenience of security tolerable.
That inconvenience includes firing people who won’t operate securely.
If you don’t do that, your security won’t hold up.

SpaceLifeForm August 22, 2022 3:59 PM

@ Q

Currently we rely upon the driver to not jab the accelerator too hard

. . .

If you don’t control it, then you don’t own it.

So, 8 weeks ago I was driving. As a habit of mine, unless WX conditions are really poor, I at least keep my driver’s side window open a bit, because I like to listen to the traffic around me. Just hearing tire or engine noise is useful information, especially if someone slipped into your blind spots unnoticed.

On this particular day, it was hot, so I had the A/C on, but still had the window open an inch or so. In this particular stretch of road, it was 2 lanes in each direction, but no center median at that point. I had just come into this section from one lane in each direction. Going the other direction, you lose a lane.

I hear this whack noise, and immediately look into my side mirror, and watch this car that had just passed me going in the other direction, crash head-on into a telephone pole. No brake lights observed.

The car must have been going 40 in a 35 mph zone. It was horrible to see, and I will never be able to forget it.

The whack sound was the car running over the ‘right lane ends’ sign, which was only 25 feet from the telephone pole. That is how fast I reacted to look into my mirror.

The car hit with so much force, it broke the telephone pole, then the car twisted 90 degrees to its left back out into the street.

I concluded the driver must have had a stroke or heart attack, and was likely dead. I did not call EMS because there were plenty of other drivers behind me that stopped in shock, plus EMS was right across the street, and they probably heard it.

3 days later, the Electric Utility came out and planted a new pole, and attached the power lines to the new pole.

Attached to the broken pole at a lower level, are cable and phone lines.

The Electric Utility, after moving the power lines, cut off the top half of the broken pole.

Now, two months later, the cable and phone lines are still attached to the broken pole. Those companies have not come out to attach to the new pole.

Maybe this is because the only thing keeping the broken pole still upright is the fact that the cable and phone lines are actually holding it upright and neither wants to go first. Maybe it is lack of staff. The ‘right lane ends’ sign is still missing, which is a County issue.

I did not notice the car well enough, but I now wonder if it was a Tesla.

I worry about self-driving cars messing up and causing an accident, especially a head-on accident. I do not like them.

lurjer August 22, 2022 5:20 PM

I presume the Allybank story was intended for the squid thread; but one line from Ars sticks out:
“Card fraud is an almost accepted fact of modern life.”

Skip August 22, 2022 5:57 PM

Wait, isn’t this just the outsource-to-cheap-labor model (not really cheap in the long run) that has been standard for more than 20 years, as most people living in the real world are well aware?

SpaceLifeForm August 22, 2022 7:20 PM

@ lurker

I could have put it on squid, but I keep my tabs at the ready, locked and loaded these days. I have plenty of ammo tabs, just waiting for stories to break. In This Modern Day, it does not take long before they become relevant.

I pay attention, and connect dots. It is rare these days that something pops up here that I have not already read about days or weeks ago.

When something pops up, and if I have a dot, then I will note it.

It was in response to Mexaly and this

banks are pretty good at infosec, too.
Enough is at stake to make the inconvenience of security tolerable.

Not if they are crooked.

lurker August 22, 2022 7:21 PM

Interesting story of how it was done. I note
1. Hyundai seem to have taken all the usual steps to avoid unauthorised entry, then do a version of “password” as their password …
2. Google seems to be doing nicely in their effort at “indexing the world’s information.”

lurker August 22, 2022 7:52 PM

@SpaceLifeForm, @Mexaly
“banks are pretty good at infosec, too.”

Some of the banks, some of the time. I have personally experienced behaviour at both ends of the scale from different banks for delivery of new cards.

Lynnette August 22, 2022 8:05 PM

This is a dumb crypto mistake I had not previously encountered

Lucky you; it seems other cryptographers have had different experiences. (Checking private keys and API-tokens into Github is also surprisingly popular.)

Peter Gutmann’s got a timely internet-draft about this: “The widespread use of PKCs on the Internet has led to a proliferation of publicly-known but not necessarily acknowledged keys used for testing purposes or that ship preconfigured in applications. These keys provide no security, but since there’s no record of them it’s often not known that they provide no security. In order to address this issue, this document provides a set of widely-known test keys that may be used wherever a preconfigured or sample key is required, and by extension in situations where such keys may be used such as when testing digitally signed data. […] The intent of publishing known keys in this form is that they may be easily recognised as being test keys when encountered. It should go without saying that these keys should never be used or relied upon in production environments. The author awaits the inevitable CVEs.”

(I see no definition for PKC; I’m guessing “public-key cryptosystem”.)

Z August 22, 2022 10:09 PM

It’s quite possible the programmer put in something quick & fast to get it working as a demo, intending to update the keys later on, but was denied the time by pointy-haired-boss management. I’ve seen that before.

Or perhaps they were laid off. Or left for a better job. I’m not too inclined to volunteer my time to help my ex-employer after getting sacked.

Or they may just have been evil. Or incompetent. Doing as little as possible to get by.

There are a lot of vulnerabilities here nobody ever thinks about.

Thankfully this was not brakes, acceleration, or steering. I saw what happened with Toyota. This can get a lot scarier.

Ted August 23, 2022 12:13 AM


  1. Hyundai seem to have taken all the usual steps to avoid unauthorised entry, then do a version of “password” as their password …

Lol! Right?

Getting into the encrypted zip file (enc_system_package_{version}.zip) was harrowing.

But then finding the encryption method, key, and IV for CBC AES-128 in the shell script… Awesome.

I wasn’t quite sure how he found part of the RSA private key. But when he Googled it, I’m wondering how many pages deep the interesting results were.

The cool thing is “greenluigi1” can now lock and unlock his doors from his car’s infotainment system.

I was halfway expecting Hyundai/Hyundai Mobis to have a comment on this. Is this a responsible disclosure kind of deal? No CVEs yet?

Clive Robinson August 23, 2022 12:48 AM

@ ALL,

The general drift behind this thread is it’s

“Testing code left in, for some reason X.”

Where X can be a multitude of slips, trips, forgetmes, lack of time, or a deliberate vulnerability etc.

I mentioned the dangers of development methodologies –specifically TDD– above.

But there are other dangers, “code re-use” being one, especially via “code libraries”, which also brings in “code refactoring” as yet another danger.

One such set of exploitable vulnerabilities is via the transfer of information between parts of a system.

In order to transfer “Complex Information Objects” held in memory, we convert them to a bit stream, send them down a Shannon Communications Channel, then convert the bit stream back to a complex information object in memory in a different part of the system, also doing any “big-endian” etc. adjustments…

We often glibly call the process “serialization” and “deserialization” and UNFORTUNATELY it has become ubiquitous, to the point it is often done when it need not be and in fact should not be. Part of the reason for this is to make “code reusable” in libraries etc.

Part of the hidden problem is that “Shannon Communications Channel”. Programmers rarely understand its properties or nature, especially when it comes to security.

What they do know however is the likes of JSON works and is just a library call away.

But there is a little more to also consider. Most programmers with any real world experience know they should check user input. However, as I’ve mentioned before, they also tend to push such checking to the left in their designs. That is, away from where the checking is actually needed. Then having pushed the checking to the left they “assume it away”; that is, it is seen as “done and dusted”.

Then having put such user input into a complex information object they serialize it and pass it across the system.

They forget that you cannot simply deserialize it back into a complex information object. Because the implication of the Shannon Channel is,

“The data is now untrusted”

Importantly, not just from “noise on the line” type random errors that checksums and similar easy physical-layer-in-the-stack testing might pick up. But also Eve and other third parties “with agency” who can hand craft their malicious data to pass the checksums and similar, thus get it all the way up the stack to where it does its “magic” for them.

So to be secure you have to do all the user input checking on the data again[1]. Only, is it “the same checking”? Are you going to run all the TDD tests for it?

Of course not. Thus you have just opened a major security hole in your system. Don’t think it will never happen… because there are a lot of Eves out there and, as log4j demonstrated, way too much unchecked code reuse…
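The point can be made concrete in a few lines. A minimal sketch with hypothetical field names: the sender validates, serializes onto the channel, and the receiver must treat the deserialized object as untrusted all over again, because a well-formed JSON document is not the same thing as a valid one:

```python
# Data that crosses a channel is untrusted again, no matter how carefully
# it was validated before serialization. Field names are hypothetical.
import json

def validate_user(obj):
    # The checks live in one function so both ends can reuse them.
    if not isinstance(obj.get("name"), str) or not obj["name"]:
        raise ValueError("bad name")
    if not isinstance(obj.get("age"), int) or not (0 <= obj["age"] < 150):
        raise ValueError("bad age")
    return obj

# Sender side: validated, then serialized onto the channel.
wire = json.dumps(validate_user({"name": "alice", "age": 30}))

# Receiver side: a checksum only catches line noise, not a malicious peer,
# so the object is validated again after deserialization.
user = validate_user(json.loads(wire))
print(user["name"])  # alice

# An attacker "with agency" can craft bytes that parse cleanly:
try:
    validate_user(json.loads('{"name": "eve", "age": -1}'))
except ValueError as e:
    print(e)  # bad age
```

The JSON parser happily accepted the attacker’s document; only the repeated semantic check caught it.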

It just so happens that there is a timely article about the dangers of code reuse, refactoring and serialization in Java libraries.

Just remember it’s not just Java where this happens… Think any programming language, also the likes of JSON as an example of over-reuse as well. Sadly JSON has really become an “anti-security pattern” when you think about it, something Douglas Crockford is now saying…

[1] If the user input checking had not been pushed to the left, but actually been done where it is needed, then this expensive duplication of work would not have had to be done. It’s not just with serialization that this occurs; one of the dangers of both refactoring and code reuse is it encourages this splitting out of input checking and moving it away to be forgotten.

Erdem Memisyazici August 23, 2022 2:23 AM

@Clive Robinson

Rarely do you see any software flaws in gambling machines, and when you do, it’s usually a newsworthy story that ends with arrests. Comparing that ratio to phones, IoT, and laptops/desktops, the difference is vast.

Like banks, it’s taken more seriously. The simple answer is likely “due to resources dedicated to it.”

Nobody cares if I install a remote admin tool on someone’s phone to talk to them every night as they enter R.E.M. sleep, because it doesn’t cost anybody millions. Yet still the concept is just as wrong.

Clive Robinson August 23, 2022 9:46 AM

@ Erdem Memisyazici,

Re : Independent Certification.

“Rarely do you see any software flaws in gambling machines”

No, nor do you see it in “intrinsic Safety” electronics, or “medical equipment”, and most other electronics that has to go through rigorous “Independent Certification”.

Even aircraft systems when “Independently Certified” don’t have critical software flaws. Shame that Boeing software was not “Independently Certified”…

Oh and you don’t see it in the airside of phones, which is likewise rigorously tested in “Independent Certification”, but you do on the user side of Smart Phones and the like that does not get rigorously tested, and certainly not “Independent Certification”.

Which is the point I was gently making to you earlier with,

“Because the electronics in gaming machines replaces mechanics on which several legal constraints are placed, including payout rate etc., and these machines are tested before they are approved for use.”

In fact they have to be “certified” in most places which requires them being “Independently Certified”.

As I’ve indicated in the past, I’ve worked in electronics and software design in all these areas before, and I have been a major innovator in them by bringing new technology and features which previously were not attempted because others thought the “Independent Certification” was too rigorous to allow them. The only reason I could do so was twofold,

1, I know how to not just read the certification requirements, but how they are tested[1].

2, I know in depth how to design intrinsically safe, fail safe electronics and intrinsically safe, fail safe software.

As a result I’ve had no qualms about putting my name and professional reputation on the products I’ve designed in these areas.

Designing safe and secure software requires,

1, The correct knowledge.
2, The correct state of mind.

As I’ve said before the design of secure systems “Is a Quality Issue”, likewise so is the design of safe systems.

As we all should know one of the first things required of a process for such “Quality Issues” is,

“Total buy in from all involved”

If that is not there from before the project starts, from the most junior to the most senior involved then your chances of success diminish correspondingly.

[1] I’ve also done testing to a very high level and been involved with designing new testing techniques to minimise testing cost and improve testing accuracy.

John tillotson August 23, 2022 10:39 AM


“Not if they are crooked.”

No, they are very good at security ESPECIALLY if they are crooked.

They are only poor at security if they are incompetent.

LS August 23, 2022 12:32 PM

@Clive Robinson,

The general drift behind this thread is it’s

“Testing code left in, for some reason X.”

The most common example I see is the busybox test code: many modules have comments along the lines of “DO NOT SHIP THIS!”, yet they often ship in commodity devices. Some of these modules effectively give root via the web interface. The people who make electronics that go into pretty boxes at Walmart spend more on the box design than they do on the engineering. The direct result is mistakes like this.

Erdem Memisyazici August 23, 2022 1:40 PM

@Clive Robinson

Indeed true. A real world example is the very phone I’m typing this on. I haven’t even finished paying for the device itself, yet my service provider decided to no longer provide me with updates, leaving a handful of CVEs applicable to the setup. I’m supposed to keep buying a new phone.

That’s just phones though. Desktops/laptops have a CVE cycle where there always is some way to get in with the next patch applied. It’s not uncommon to see a patch fix 1 thing yet break 2 more.

In fact there are seemingly legitimate companies who work in the opposite direction, paid to provide persistent access to anyone’s computer; we hear about them on the news quite often as some “Group” or other. That’s the opposite of trying to fix problems.

I wonder what it would be like if every computer, from fish tank thermometers to the new body-attached insulin dispenser, were secured like casinos and banks.

Personally I think it would be exactly how we expected them to work anyways.

Clive Robinson August 23, 2022 3:30 PM

@ L.S.

Re : Reason X

“The direct result is mistakes like this.”

In the past I’ve been lucky, in that management listened to me, albeit grudgingly, because I not only got results, but also, very importantly, Awards for their companies.

One of the reasons I moved out of engineering design was because “marketing” and “human resources” were making too many gains in the “power grab” game, and it was becoming clear to me which direction things were moving.

Sad to say, all but one of the companies I worked for no longer exist… The fact that I saw it coming last century, along with the dangers of out-sourcing etc., but could not stop it, is I guess my cross to carry. But even back then it had an inevitability about it, which is why I got out.

I see the same inevitability in the ICTsec industry, and it does not bode well. I guess it’s time to see which direction to jump in next.

Oddly it may be back in the direction of engineering design, because the events of the past couple of years may have made their managment more receptive in places.

lurker August 23, 2022 3:33 PM

@Clive Robinson, ALL

The ElReg article is a dismal list of the perils of Java. But the more I read, the more it nagged a pet bugbear: Quality Control, and how the different rules, systems, methods of QC produce different outcomes in proprietary vs. open source code. I see you keep emphasising the problems with “moving input validation to the left”. But who is responsible for putting in either a block to stop leftward movement, or a validator over there? In closed or open source?

Of course Java gets the worst of both worlds …

MrC August 23, 2022 8:11 PM

@Erdem Memisyazici, Clive:

My prior post went to moderation purgatory and never returned, but here’s a shorter version:

Years ago I used to work as a programmer at a small company that made video gambling machines. It was a dumpster fire. Our products were nowhere close to secure.

I think the perception of video gambling machines as secure is largely due to casinos tending to have mob-affiliated goons on hand. This discourages a lot of research into these machines’ security. Try to pen test a machine on site, and you will be “asked” to leave. Buy a machine, pen test it at home, then publish your results, and most likely nothing will happen. But a visit from the goons is the second-most-likely outcome, far ahead of “get paid a bug bounty” or “nice thank-you e-mail.” On the flip side, the existence of the goons does slightly improve the vendor’s focus on security. In my lost post, I had an amusing story about this, plus a couple programming practices we did get right. (Yes, this situation has strong “Liars and Outliers” vibes.)

Q August 23, 2022 8:14 PM

It amazes me the number of people commenting here and on slashdot about how it is so bad that users can modify their own things. And how the manufacturers’ control over “our” devices must be preserved.

Is it some sort of collective brainwashing? If so, then the marketing departments have really done their jobs well. Sadly.

Wake up. Stop letting companies dictate your lives. Take control of your stuff. Do your own things with it.

Clive Robinson August 23, 2022 9:34 PM

@ lurker,

Re : A step to the left and a jump to the right[1].

… problems with “moving input validation to the left”.

It’s a form of laziness some incorrectly made into a virtue; now it’s a very pathogenic disease that is almost genetic in behaviour.

The argument once was and sometimes still is,

“It gives clarity”

But that was at best a fig leaf, which goes back at least as far as the 1970s to my certain knowledge, if not, I suspect, further to the 1950s.

The reality was to try to get some kind of “efficiency” with highly limited, eye-wateringly expensive resources.

This was known to be a bad idea when C was developed and certainly discussed back in the earliest days of Algol.

The argument was over the location of,

1, low level logic.
2, supporting logic.
3, business logic.

in a high level language program.

Which, whilst it might sound esoteric, had very real performance issues back then, as it still does today. Even though few realise just how CPU-and-below architecture dependent it actually is (think executable code caching for instance).

Back then resources were not just speed limited but memory limited, and Virtual Memory was a very new idea, not really available except in the most expensive of mini-computer systems and only some high end “Big Iron” mainframes costing the equivalent of a hundred years or more of professional level salaries.

However, primitive code block swapping –without hardware address translation– was in use. And it actually quickly gained use in early Personal Computing, in the Apple ][ in the late 1970s and in IBM PC clones in the mid to late 1980s. It was usually done by “overlay”, where code at a high memory address would get loaded at an appropriate time, then as the program progressed this would get overwritten by a new set of code appropriate to the part of code executing. Those who programmed the likes of the 8086 realised that was why there were four segments for addressing (as a “poorman’s MMU”).

Obviously you could not overlay “low level logic” code, because that would be IO and similar BIOS/OS code needed throughout a program’s execution. Nor could you really overlay the high end “business logic” code, as that would need to contain the appropriate overlay logic to pull in the overlay blocks of code. Which left the “support logic” code.

To understand this better you need to look at code from a higher level. Imagine you are writing every piece of code without macros or much in the way of functions. What you have is three parts,

1, Set up,
2, Business logic
3, Tear down / clean up.

The first and last are “administrative functions” and belong to the OS, so often do not get addressed in high level languages.

The second is what most of us would consider the “program” these days. If you were to draw it out as a diagram you would end up with a control loop at the top, that calls or loads various functional blocks at the next layer and so on down through horizontal layers to the actual hardware control. It forms a pyramid of code where each lower horizontal layer is larger than those above.

However the lowest layers contain almost the same code over and over, which although fast in terms of CPU cycles is very inefficient usage of memory which was even in the 1970’s tens if not hundreds of dollars a word. Thus speed at the lower horizontal layers took a back seat to minimizing memory usage. Thus the hardware code got pulled into a single generalised code block and so on back up the layers of the pyramid, making it look more like a diamond.

It was in those “fat middle” layers where all the support logic was. So the same rationalisation was tried, but… these middle horizontal layers had sub-functions in them which were distinct. Which made it easy to partition up several horizontal layers vertically. Thus easy to convert into overlay code that only needed to be loaded into “CORE Memory” when that sub-function was used.

So to aid in this process, the idea of separating out the high level “business logic” and putting the “support logic” into an “overlay library”, where it would be grouped in memory, came about.

To do this the core business logic had to be separated from the support logic… And this is where a fatal mistake that still haunts software development happened.

Ask yourself what “support logic” is and it boils down to “errors and exceptions”. Some of the error checking is “range checking”; if you actually strip out the ranges you end up with code that can be generalised, or as it gets called these days “refactored” for “reuse”, which is seen as some kind of “Holy Grail Mantra” that sells all sorts of nonsense to management as “labour saving”, “resource optimisation”, “productivity improvement” etc. Or to be blunt, appearing to save costs by employing less skilled labour[2]…

But this nonsense also got sold into education… Explaining code or writing “example code” becomes oh so much easier if you strip out all that support logic. Few actually bother to find out what the ratio between support logic and business logic is. Depending on how you define low level logic, roughly 20% or less of the code is business logic… “Code reuse” can reduce that to 15%, but the required generalisation code in the refactoring process to do it takes it back up again (so the actual saving in resources is often not there; in fact the opposite).

Thus shifting “error and exception” handling appears to make things less messy when you design the code, as you can just concentrate on higher level business logic.

Shifting “errors” to the left and shunting “exceptions” to the “Blue Screen of Death” code makes the business logic design even simpler, thus from both management and educator perspectives it is a gift from heaven (and a disaster for the rest of us).

One danger is it falsely partitions code. For those who remember going from the original client-server model through “middleware” to the mess we are in today… where “validating user behaviour” is done in JavaScript on the client (see the recent famous “hit Ctrl-F/F12 to see SSNs” security vulnerability)… that should tell you that this “move to the left” is a really bad idea.

Earlier today I was talking about the perils of TDD and Shannon communications channels[3]; it is exactly the same problem in a different disguise. Partitioning means implicit “communications”, and “communications” can go from “in thread” to “inter system” later, which the original designers never envisioned or intended. The result is that you open a vulnerability via the communications channel behind your user input checking, and a massive security hole appears, just waiting to be found and exploited.
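A minimal sketch of that channel problem (the message format and names here are invented purely for illustration): whatever checks the client side claims to have done, the receiving end of the channel must re-validate its own inputs, because the “in thread” call of today may be the “inter system” message of tomorrow.

```c
#include <stddef.h>

/* Hypothetical message arriving over some communications channel. */
struct msg {
    unsigned id;
    size_t   len;
    const unsigned char *payload;
};

#define MAX_PAYLOAD 1024u

/* Returns 1 if the message is acceptable, 0 otherwise.  The check is
 * done here, on the receiving side of the channel, regardless of what
 * the client side (JavaScript or otherwise) claims to have verified. */
int msg_is_valid(const struct msg *m)
{
    if (m == NULL || m->payload == NULL)
        return 0;
    if (m->len == 0 || m->len > MAX_PAYLOAD)
        return 0;
    return 1;
}
```

The point is only structural: the validation lives at the trust boundary itself, so moving the boundary later does not silently strip the checks.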

The sad thing is the developers of C were well aware of this issue; it is why they emphasised the use of both functions and macros. Thus “errors and exceptions” and all sorts of validation could be put directly in the actual business logic where it really belongs.
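A hedged sketch of what that looks like in practice (the `REQUIRE` macro and the example function are invented for illustration, not taken from any real codebase): a small macro lets the range check sit on the same line as the business logic it protects, instead of being shifted off into separate “support logic”.

```c
#include <stdio.h>

/* Report a failed check and bail out of the enclosing function.
 * This is the "support logic" kept in place, not moved to the left. */
#define REQUIRE(cond, msg)                         \
    do {                                           \
        if (!(cond)) {                             \
            fprintf(stderr, "error: %s\n", (msg)); \
            return -1;                             \
        }                                          \
    } while (0)

/* Business logic with its validation kept inline: scale a raw 12-bit
 * sensor reading to a 0..100 percentage. */
int scale_reading(int raw, int *out)
{
    REQUIRE(out != NULL, "null output pointer");
    REQUIRE(raw >= 0 && raw <= 4095, "raw reading out of 12-bit range");
    *out = (raw * 100) / 4095;
    return 0;
}
```

Because the checks are a one-liner, there is no temptation to strip them out of the “example code”, and they cannot drift away from the logic they guard.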

Which brings us to your question,

But who is responsible for putting in either a block to stop leftward movement, or a validator over there?

The actual answer is “everyone”, as it is fundamentally a “Quality Issue”.

The reality is, though, that it has to be senior management getting to grips with the drivers of these behaviours and purging or ameliorating them. But to be ready to do this they need to stop listening to neo-con mantras pushed by very short term thinking shareholders and the like (which, as that affects senior management benefits etc, is not going to happen in the current corporate world).

But also, as designers and developers, we have gone way, way too far down both the “code reuse” and “code refactor” paths.

These days software is not so much “developed” as “plumbed together”, which is not healthy for the ICT industry or for designers/developers.

I actually cringe when I see the C++ and now C standard libraries. As for Java, JavaScript and Python, does anyone know of something for which there is a use but not a library?

But have people actually looked not just at the quality of these libraries, but at just how inefficient they are?

The problem with “generalisation”, especially for “code reuse”, is that it is always horrendously inefficient in both memory and CPU cycles. Worse, nearly everyone who uses the libraries will never learn how to do the things they require without them. Thus all sorts of design errors come “built in”, because in the main nobody knows better, and even if they do, they are not allowed to use that knowledge.
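One small, well-known instance of that generalisation cost (function names here are made up for the example): the generic `qsort()` must call its comparator through a function pointer for every comparison, which the compiler usually cannot inline, whereas a type-specialised routine compiles down to direct compares.

```c
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Generic: one indirect call per comparison. */
void sort_generic(int *v, size_t n)
{
    qsort(v, n, sizeof v[0], cmp_int);
}

/* Specialised: plain insertion sort with direct comparisons,
 * written for the small arrays it will actually be used on. */
void sort_ints_small(int *v, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = v[i];
        size_t j = i;
        while (j > 0 && v[j - 1] > key) {
            v[j] = v[j - 1];
            j--;
        }
        v[j] = key;
    }
}
```

Both produce the same result; the difference is purely in the overhead the generalised interface imposes, which is the trade-off being described above.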

[1] See the Rocky Horror Show if you don’t get the reference 😉

[2] You could write several books on the very short sighted and false economy of this notion. In fact several people have, but the neo-con mantra of “Don’t leave money on the floor” wins almost every time and costs mega bucks down the road, only a fraction of which is covered by the term “technical debt”.

[3] See my comments above, but specifically,

And my first comment on this thread about the perils of methodologies taken too far, for which I used TDD (it’s not the only one; in fact the same issue appears with nearly all methodologies that get overly formalised or prescriptive by those on the make and their fanbois).

Clive Robinson August 23, 2022 10:36 PM

@ Q, ALL,

Re : Brainwashing

“… how the manufacturer’s control over “our” devices must be preserved.”

Actually it’s not really that at all, but it has been perverted to become that.

It’s actually a question of safety that underlies what you can and cannot do. Engineers understand this, as they get taught engineering history.

Software developers, and most others in the ICT industry, do not get taught industry history, and in the consumer sector mostly know next to nothing about safety or security. Like as not they have never seen “obligatory standards” used for “certification”, because there is no consumer software certification with legal teeth.

The oft given idea that being able to repair/renovate a 1950s car is the same as DIY software is way, way off, and is not even close to an “apples to oranges” comparison.

People who repair/renovate 1950s cars, for instance, do not make their own “brake pads” or most other bits in the vehicle. They buy in pre-made sub-assemblies.

Thus they do not have to do any of the very many hidden engineering steps to ensure what they are doing is safe.

Write software, and you are doing the equivalent of smelting, refining, alloying[1] and all those other highly critical steps that, if you get them even fractionally wrong, will have you “off the road and wrapped around a pole or abutment”.

But worse, whereas vehicle manufacturers have thousands of compliance steps to keep them in the safe zone, there is no such thing for consumer software.

In another area, consider infrastructure and “demarcs”. You are allowed to rewire your home, but not to connect a generator up to it as an emergency backup without a specialised set of components and switches. Not only are you not allowed to make these, as they have to meet critical safety requirements and thus certification, in most places you are not even allowed to fit them; that has to be done by a trained and certified person. The reason is so you do not kill, maim, or injure people in other houses or who work on the infrastructure.

Thus there are very good reasons why you are not allowed to change some software, and it has absolutely nothing to do with,

“how the manufacturer’s control over “our” devices must be preserved.”

It’s to stop even you getting fried alive or blown up, or not having phone or Internet service because some idiot thought it OK to write their own code.

People need to remember that there is a trade off of,

“Personal Rights -v- Societal Responsibility”

And it applies every bit as much to “freedom to tinker” as it does to “driving under the influence”.

Not understanding this indicates a lack of maturity akin to that seen in a kindergarten playground.

[1] Look up, for instance, “white metal” and its myriad of uses, including for safety critical components.

Q August 24, 2022 12:39 AM

It isn’t about “safety”. That is just the cover story. It never was about safety. It’s all just marketing and misdirection.

If it really was about safety then no ordinary person would be permitted to change their own brake pads. You would be required to go back to the manufacturer and pay some exorbitant price for only the “authorised” and “genuine” pads. And only the original manufacturer could change the pads, because of the special proprietary bolts and fixings they used to prevent you from ever doing it.

Currently, if you change your brake pads and botch it up, then you face the consequences. That’s what laws are for. It is no different with software. Locking you out and removing your access is not for “safety”; it is to enrich the manufacturer.

I can see the marketing has done so well at brainwashing the unwashed that people are actually defending the manufacturers’ actions. Sigh; we are all doomed if we continue down the path of allowing ourselves zero control over our own stuff.

JonKnowsNothing August 24, 2022 10:35 AM


re: brake pad and consequences

iirc(badly) RL, tl;dr

A long time back….

In San Francisco there are many steep hills, always a trial for brakes, and often a context for the Trolley Car Problem, which appears more ominous if the trolley is rolling down the hill rather than coasting along the flat.

A famous person was riding in a car when they were struck by a car descending a steep hill. The person died and several passengers were severely injured.

The car traveling down the hill had no brakes. The person driving the car knew the brakes needed fixing but had no funds to do so. The person driving the car was not seriously injured.

There was a court case but nothing happened, legally. As the person was impoverished there wasn’t anything there either.

Your supposition that Actions and Consequences have determinate outcomes needs a few more olives… (1)


1) The olive eating scene in The Matrix series.

Cyber Hodza August 29, 2022 12:37 AM

From the article

“figured out with my IVI that I could enter its Engineering Mode by going to the Software Update screen, quickly pressing to the left of the Update button 10 times, and then once to the right of the button.”

Who is going to buy this as a true version of events?

JonKnowsNothing August 29, 2022 9:16 AM

@ Cyber Hodza

re: quickly pressing to the left of the Update button 10 times, and then once to the right of the button.

Who is going to buy this as a true version of events?

iirc(badly) tl;dr

Some years back a clever person discovered that, when using a well known social media platform, a simple slash at the end of the input line would allow extra code to be inserted. The person gleefully inserted code to change the colour of the display text (the default was black). The happily crafted messages in bright colours were instantly noticed, not just by the user community, which also started crafting colourful messages, but by the corporate tech groups, which squashed the extra add-on code path, returning the text to basic black.


Search Term

Web colors
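For what it’s worth, the usual fix for that sort of markup injection is to treat user text as data and escape anything that could be interpreted as markup before it reaches the page. A minimal sketch (the function name and buffer handling are invented for illustration):

```c
#include <string.h>
#include <stddef.h>

/* Copy `in` to `out` (capacity outsz), replacing HTML-significant
 * characters with their entity forms.  Returns the output length;
 * stops early rather than overflow the buffer. */
size_t html_escape(const char *in, char *out, size_t outsz)
{
    size_t n = 0;
    if (outsz == 0)
        return 0;
    for (; *in != '\0'; in++) {
        char one[2] = { *in, '\0' };
        const char *rep;
        switch (*in) {
        case '<':  rep = "&lt;";   break;
        case '>':  rep = "&gt;";   break;
        case '&':  rep = "&amp;";  break;
        case '"':  rep = "&quot;"; break;
        default:   rep = one;      break;
        }
        size_t rl = strlen(rep);
        if (n + rl + 1 > outsz)   /* leave room for the NUL */
            break;
        memcpy(out + n, rep, rl);
        n += rl;
    }
    out[n] = '\0';
    return n;
}
```

With that in place, a user typing `<font color=...>` gets their literal text displayed instead of having it executed as markup, which is presumably what the platform’s fix amounted to.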

William Malik September 15, 2022 10:47 AM

So code reuse is going a bit too far, eh?
In the early days people copied the sample and example code from documentation and were surprised that it didn’t work. While in IBM development I spent many hours just cleaning up the samples and examples (IBM MVS sysgen and IOgen decks, particularly) so they actually worked in the simplest environment.
