Comments

Gunter Königsmann September 19, 2019 6:43 AM

Every software exploit is impractical until someone decides it is important enough to exploit.

On the other hand most people being rational is what keeps us all alive on a daily basis.

Clive Robinson September 19, 2019 7:56 AM

There are only two ways to stop a “real” vulnerability becoming an “exploit”.

1, Never deploy the system.
2, Deploy the system in total isolation.

As the second is in most cases not achievable in any “practical” way for a deployed system, then if you need the system it must not have any critical effects if it is exploited.

Any other measure of “practicality” is meaningless in the face of an inventive mind with sufficient motivation.

The risk analysis on “criticality” is thus the dominant decider, not the use of any security devices, which will have their own “real” vulnerabilities when deployed.

9/11 made it clear to the world that all aircraft are “Guided Missiles” with high fatality rates even when not augmented by destructive payloads.

For some reason what should be common knowledge is being wilfully ignored by at least one modern aircraft manufacturer that we know of[1]. It’s also likely, given the FAA cutbacks and rule changes, that other manufacturers of modern aircraft are ignoring this common knowledge as well.

The arguments for “small government” and “safe technology” are obviously and logically at odds with each other. That is, the more potentially vulnerable technology we deploy, the more people it takes to ensure it’s not deployed in a way that lets the vulnerabilities become exploits. As we get reminded on an almost daily basis, corporations cannot be trusted to self-regulate, and most private organisations funded to do regulation become lowest-price “check-box auditors”… That only leaves entities financed entirely from the taxpayer purse to carry out such regulation to the required depth. Which is obviously not happening, or we would not be having this discussion.

As I’m no longer allowed to fly on modern passenger aircraft for medical reasons, I’m thankful that I cannot be put into the position of having pressure applied to me to get on the more modern aircraft that are potentially susceptible to these “real” vulnerabilities, which can and almost certainly will be turned into working exploits when an inventive mind decides to do so.

[1] Yes, I’m aware the same logic applies to modern cars and other “tech improved” road vehicles; it’s why I tend to travel by trains, buses and my crutches.

Rj Brown September 19, 2019 9:12 AM

I worked on the 787, and I have significant experience with AFDX and those other “timed networks” the author mentions. These are indeed complex systems, but we have a full set of prints/documentation for them. Contrast that with the pilot, crew, and passengers: we have very little understanding (comparatively) of how they work internally. They are much more complex than the 787 machine. The weakest part of any non-autonomous air vehicle is the human part. If you don’t believe me, read some of Kevin Mitnick’s books.

me September 19, 2019 9:24 AM

@Clive Robinson

Yes, I’m aware the same logic applies to modern cars and other “tech improved” road vehicles; it’s why I tend to travel by trains, buses

I don’t get this part.
Yes, modern cars can be hacked, and I’d love to be able to turn off the car’s Bluetooth to achieve this:

2, Deploy the system in total isolation.

Unfortunately, while my car has the option to turn off Bluetooth, the setting is forgotten as soon as you turn the car off and on again ‘-_-
I’m worried that in the not-too-distant future someone will cryptolock my car. For my computer I have the necessary skills and software to prevent this, but the car is a closed black box, and the only idea I can think of is cutting the Bluetooth antenna, though I’d need to find it first… and considering the very low risk, I just don’t care for now.

I think the problem also applies to trains: they have a lot of electronics in them, and the same applies to railway switches, which are probably remotely controlled by some software or a human, probably over the internet.
So I don’t get why you don’t like cars but trains are OK?

If you are talking about future, state-of-the-art cars with automated driving, I think they are the perfect recipe for a disaster: AI is not that intelligent, and even without adversarial AI there are simply too many unexpected cases that will cause car crashes and kill people for sure.

me September 19, 2019 9:38 AM

@Rj Brown

The weakest part on any non-autonomous air vehicle is the human part

I disagree with this; just two examples:
The Schiaparelli Mars lander crashed because an integer overflow prevented it from correctly knowing its altitude (I have not read the details about this).
And also:
https://news.sky.com/story/ethiopia-crash-pilots-repeatedly-performed-correct-procedures-report-11683805
Both planes had an automated system that pushed the aircraft’s nose down when sensor readings detected the danger of an aerodynamic stall.

I’m no expert, but apparently the faulty reading was the cause; there was no second sensor because that cost extra.

Please note that my car has a dual sensor on the accelerator. I know because one day one of the two was faulty and the car greatly reduced the engine power due to the misreading. The technician said “just turn the car off and on to reset the emergency mode; if it keeps happening we will replace the sensor”. It never happened again.
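The dual-sensor plausibility check described above can be sketched roughly like this (the tolerance, the limp-home value and all names are invented for illustration, not taken from any real ECU):

```python
# Hypothetical sketch of a dual-sensor plausibility check, like the
# accelerator-pedal example above. Thresholds and names are invented.

def read_pedal_position(sensor_a: float, sensor_b: float) -> float:
    """Return pedal position in 0.0-1.0, or a limp-home value on disagreement."""
    MAX_DISAGREEMENT = 0.05  # hypothetical cross-check tolerance
    LIMP_HOME = 0.1          # hypothetical reduced-power fallback

    if abs(sensor_a - sensor_b) > MAX_DISAGREEMENT:
        # The sensors disagree and we cannot tell which one is lying,
        # so fail safe with greatly reduced engine power.
        return LIMP_HOME
    return (sensor_a + sensor_b) / 2.0

assert read_pedal_position(0.5, 0.5) == 0.5  # healthy, agreeing sensors
assert read_pedal_position(0.5, 0.9) == 0.1  # one sensor faulty: limp home
```

The design choice is exactly the one the technician described: on disagreement the system does not guess which sensor is right, it drops to a safe state and lets a power cycle clear the latch.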

As I said, AI is not that smart. I remember a documentary about the Moon landing where they said that at one point the people ignored the computer readings, because there was unexpected extra speed caused by the air pressure released when the two parts detached (sorry for my bad English if it’s not clear).

I understand that modern planes are simply impossible to fly without computer assistance, but don’t tell me the computers are better than humans.
They are better only if everything is perfect and nothing unexpected happens.

gggeek September 19, 2019 10:14 AM

It seems to me that there’s a lot of hand-waving and assumptions in the linked post, but precious little proof.

As much as I appreciate the fact that aviation engineering deals with safety issues pervasively, and IT security is just one of the many safety-critical ones, I dispute the idea that the “obscurity” observed at play with Boeing does add a significant amount of security, and that we can all sleep well.

We have seen time and again that only real security can withstand dedicated attacker teams, and that this can hardly be achieved even with proper development practices (case in point: the KRACK attack on the “mathematically proven” WPA2 protocol).

Obfuscation and opaqueness are a recipe for disaster as they simply allow the IT developers to keep the quality of their work hidden for a prolonged time. In my own experience, this always leads to poor quality, not high.

It should also be accepted as fact in 2019 that no CPU is perfectly safe, nor are routers and networking devices; cue Spectre and its siblings for showing the world how creative thinking can reveal side-channel attacks that the silicon designers were oblivious to.

The only way to prove that avionics buses are really bulletproof, and that there are no bugs in the switching fabric, is to give independent researchers a nice monetary incentive and let them try the systems out to their hearts’ content.

[disclaimer: worked in IT at a major European airport for 7 years]

me September 19, 2019 10:44 AM

@gggeek

I dispute the idea that the “obscurity” observed at play with Boeing does add a significant amount of security, and that we can all sleep well.

I noticed that too. They keep saying “knowing the sector, it’s no surprise that they hid details/prevented access/refused to answer questions”. This is not a positive thing, and it doesn’t make them look any better.

It should also be accepted for a fact in 2019 that no cpu is perfectly safe

This is another problematic thing. I know that mistakes happen, but according to some people on the internet, Intel cut back on testing to be faster and compete with ARM.
But there is a problem: ARM makes mobile phone CPUs, and nobody cares if your phone crashes; you just reboot it.
Intel chips, however, are used in the desktop computers that engineers all over the world use to design planes, medical devices, cars, trains and all the other important stuff.
An Intel CPU that makes wrong calculations, like the FDIV bug, simply cannot be accepted.
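For reference, the FDIV bug mentioned above could be demonstrated with a one-line check; the test values below are the famous ones, and the flawed result is as reported at the time (on any correctly working FPU the residue is essentially zero):

```python
# The classic Pentium FDIV check. On a flawed chip this residue came
# out around 256 (the division returned ~1.33374 instead of the
# correct ~1.33382); on a correct FPU it is essentially zero.
x, y = 4195835.0, 3145727.0
residue = x - (x / y) * y
assert abs(residue) < 1e-6  # holds on non-buggy hardware
```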

Nameless Cow September 19, 2019 12:27 PM

@Clive Robinson

As I’m no longer allowed to fly on modern passenger aircraft for medical reasons, …

Sorry to hear that. If you don’t mind me asking, what’s the concern behind the prohibition? Is it the possibility of a medical emergency that they can’t handle onboard? Or is it that some medical equipment you need is not safe to carry on a passenger plane?

Brad September 19, 2019 12:39 PM

As usual, this article leaves much to be desired. The author laments that security researchers simply do not understand aircraft avionics and systems, so even if they were allowed access, they might not understand the mitigations in place. However, is the reverse not equally true? For example, this excerpt is particularly enlightening on the so-called aircraft avionics expert’s lack of understanding of the security researcher’s field of knowledge:

it’s essential to understand that aircraft systems can’t be updated in-flight or on the fly. This means that if an attacker is seated on the airplane, there’s no way they could modify systems. There are only three ways to update a system on a modern aircraft:

How does the expert know this to be true? Nothing in the article suggests these systems have some method to prevent modification of their running code. Nothing in the article says the expert understands how the manufacturer actually performs an update, either in the factory or on site, from a technical rather than a process perspective. Hackers don’t follow the procedure or process; they break stuff. What prevents an attacker with enough access to a system from bypassing physical switches in software? These aren’t mechanical systems. For example, the position of a switch can easily be made irrelevant to the software by an attacker with enough access to said system. Unless that switch physically disconnects WRITE lines or something like that, I’m gonna call a BIG DOUBT on that claim.
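Brad’s point, that a switch checked only in software (rather than one that physically gates the write lines) is no barrier to an attacker who already has access to the system’s state, can be shown with a toy sketch. Nothing here is real avionics code; every name is hypothetical:

```python
# Toy illustration of why a software-checked switch is weaker than a
# physical interlock. All names and behaviour are hypothetical.

class ToyAvionicsUnit:
    def __init__(self):
        self.maintenance_switch_on = False  # mirrors a physical switch position
        self.firmware = b"original"

    def write_firmware(self, image: bytes) -> bool:
        # The "protection" is only a software flag check ...
        if not self.maintenance_switch_on:
            return False
        self.firmware = image
        return True

unit = ToyAvionicsUnit()
assert unit.write_firmware(b"evil") is False   # the check blocks the write

# ... so an attacker with enough access to the unit's state simply
# flips the flag, making the physical switch position irrelevant.
unit.maintenance_switch_on = True
assert unit.write_firmware(b"evil") is True
assert unit.firmware == b"evil"
```

Only a switch that electrically breaks the write path removes that second half of the sketch from the attacker’s reach.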

There is a long history of system admins and software engineers who are EXPERTS in their field claiming X or Y can’t be done. Hackers have almost always proved them wrong, and will continue to do so. These avionics experts are not hackers, period. They achieve nothing by continuing to claim their systems are perfectly impenetrable. Only by working together will we improve aircraft security. They said “hackers don’t understand”. So help us understand, just as we want to help you understand our body of knowledge.

Brad September 19, 2019 12:46 PM

Also, I just want to add what a fool Ben Rothke is. He works in security, he should know better. Shame on him for continuing to allow avionics experts to claim their security flaws can’t exist while hiding behind obscurity. He does us all a disservice.

Clive Robinson September 19, 2019 1:20 PM

@ gggeek,

It seems to me that there’s a lot of hand-waving and assumptions in the linked post, but precious little proof.

Which one? Of the two links @Bruce gives, the first or the second? If the second, I would agree.

The author’s bio at the bottom gives no indication of his working in the field. In fact he talks about a friend telling him. As you might know, one of the biggest problems in technical journalism is journalists not asking the questions that will elicit the correct answer, and unqualified journalists misinterpreting what they have been told.

But worse, he uses the expression “sui generis”, which I suspect a lot of people will need to look up[1]. In effect it’s the same as claiming “I’m special” or “He’s special”, with the accepted meaning of “exception to the rules”. The SigInt and IC agencies believe they are “sui generis”, which quickly tells you it’s not a good thing, as they use it to evade the law, lie in legal proceedings, and exclude any oversight and thus any kind of liability for their actions.

Aerospace is in no way “special” or in a class of its own; it’s an engineering profession just like many others. Having worked in telecommunications, medical electronics, petrochemical safety engineering and aerospace engineering, including “space payloads”, I can tell you that they all could claim to be “sui generis” for the exact same reasons, which means none of them are “sui generis”…

The author also appears to understand squat diddly about communications; as I’ve mentioned a few times here in the past, they are way more complicated than most understand.

For instance, what most call plain old RS-232 can be forced to be just a unidirectional TX-to-RX comms interface, but only by abusing the control signalling at two or more levels (look up DB25 wiring for a two- or three-wire serial interface, and what the various shorting wires in the plug wiring do).

Nearly all low-level signalling protocols, especially those running at network speeds, use various error signalling in the reverse direction. Thus systems that are thought by many to be “TX only” can be badly abused by an attacker generating errors in the control signalling, which is in fact an input to the TX-only device.

I’ve come across quite a number of devices like “data diodes” that will survive having their comms cables pulled out or cut, but are quite susceptible to error- and exception-signalling attacks. Worse, the systems behind them tend to be transparent to such attacks, passing the errors and exceptions further back up that TX-only chain, often right back to some key device whose software was never designed to deal with such attacks and thus behaves in an unknown way.

I see no sign that the author is cognizant of this class of attack, which is not surprising; few outside the SigInt entities are. Thus I doubt he asked his friend questions relating to the signalling of errors and exceptions, and the transparency of systems from their outputs to their inputs. And even if his friend was aware of it, he would probably not have mentioned it, not having been directly asked.

Thus I likewise have doubts about the author’s expertise or advice in this particular domain.

The same applies to many supposed “security experts” who have not explicitly come across it even when training, the same with many software engineers.

The truth is, though, they have come across one form of it by other means. Older readers here will probably remember the DoS attack against the TCP/IP stack that, by “breaking the protocol”, caused the stack to request too many buffers from kernel space. The result was, to put it politely, “abnormal behaviour”. Now imagine the reverse, where due to error tricks the TX side of the stack runs out of buffers… The result is usually first an exception the software ignores, shortly followed by a segfault or other exception that causes the kernel to dump the program. If the program is better written, it passes the exception back up the TX path until something else has an exception and causes core to be dropped.

That is just one of very many tricks in the class of error injection attacks.

As was noted long ago “It’s comparatively easy to develop a protocol engine that performs correctly to the published protocol, however developing a protocol engine that performs correctly under all error states is usually orders of magnitude more difficult”.
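A toy model of that asymmetry, a “protocol engine” that is correct on the happy path but falls over under injected errors, might look like this (entirely invented for illustration, not any real stack):

```python
# Toy "protocol engine" that handles the happy path correctly but,
# like the TX-buffer-exhaustion example above, misbehaves when a peer
# injects errors. Entirely hypothetical.

class NaiveTxEngine:
    MAX_BUFFERS = 4

    def __init__(self):
        self.unacked = []  # frames awaiting acknowledgement

    def send(self, frame: bytes):
        self.unacked.append(frame)
        if len(self.unacked) > self.MAX_BUFFERS:
            # The designer "knew" ACKs always arrive, so this path was
            # never considered: the engine just dies.
            raise RuntimeError("TX buffer pool exhausted")

    def on_ack(self):
        self.unacked.pop(0)

engine = NaiveTxEngine()
# Happy path: every frame is acknowledged, buffers never pile up.
for _ in range(100):
    engine.send(b"data")
    engine.on_ack()

# Error injection: the peer (or an attacker on the line) simply
# withholds ACKs, and the "correct-to-spec" engine falls over.
crashed = False
try:
    for _ in range(10):
        engine.send(b"data")
except RuntimeError:
    crashed = True
assert crashed
```

Exhaustive conformance testing of the first loop tells you nothing about the second; that is the “orders of magnitude more difficult” part of the quote.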

So arguably some have started in on communications “error injection” attacks, but we’ve not yet openly seen them progress beyond the equivalent of vandalism. Thus remember: “Whilst you might not see something, it does not mean it’s not occurring”, or if you prefer, “Whilst you can prove something is happening if you know what to look for, you cannot prove something is not happening if you don’t know what to look for”.

[1] “sui generis” has the ambiguous meaning of “in a class alone”, which is not at all helpful.

Clive Robinson September 19, 2019 1:34 PM

@ Brad,

These avionics experts are not hackers, period. They achieve nothing by continuing to claim their systems are perfectly impenetrable.

The funny thing is the author shoots himself in the foot on that score with the,

    99% secure is 100% insecure

You cannot argue the other way and then make a statement like that; it blows credibility out of the water. It also falls into that darn silly “sui generis” thinking that there is something uniquely special about aerospace engineering. There’s not; it’s just another area of “intrinsic safety” engineering.

In fact aircraft systems design usually falls well short of satellite payload design and launch vehicle design. As most know, range officers on unmanned launches have their finger close to the self-destruct button… That is, they know a brutal truth about launch vehicle and payload engineering reliability.

Clive Robinson September 19, 2019 1:45 PM

@ Nameless Cow,

Sorry to hear that. If you don’t mind me asking, what’s the concern behind the prohibition?

The likelihood of blood clots caused by low pressure; I’ve had them in the legs, lungs and brain. Whilst the first is called DVT and the other two HAPE and HACE, they boil down to the same issue: my blood clotting is, at the best of times, extremely erratic. The result is I’m on four different drugs for different parts of the clotting pathways. The downside is that if I breathe in second-hand smoke I get nosebleeds, and likewise other diseases of old age cause significant bleeding. So I’m caught between the frying pan and the fire on that medical issue alone.

Then there is the heart doing other strange things, resulting in not just a pacemaker but also medical equipment that the DHS rules cause issues with. Oh, and there are the medications… All in all it would be easier to just stay in bed in hospital; I’ve nearly got enough “bed miles” to have my own “hospital time share”.

MarkH September 19, 2019 2:34 PM

No security geek is going to feel very satisfied, on the basis of the publicly available information. My personal opinion is that feasible attacks are unlikely to be found, because of the conservative design policies used in safety-critical systems.

A few thoughts …

• To be clear, neither Rothke nor the industry has (to my understanding) claimed “security by obscurity.” Rather, Rothke and others say, in essence, that because those claiming insecurity don’t sufficiently understand the systems, they are imagining attack vectors which don’t exist.

• A particular point made by Rothke is that the security critique supposes avionics networks to function like their ground-based home and office counterparts, whereas in fact they are in some respects very different.

• A specific instance of this is that the physical networks used for flight controls are redundant (at least triply), to the extent of running by widely separated geometric paths through the airframe. These redundant networks are completely isolated from each other (and all other physical networks on the plane), except to the necessary extent of having common electrical safety grounds etc.

Traffic to and from these networks must pass through a kind of router. Because these networks are absolutely critical to safety, unless there was extreme and gross negligence, no packet is put onto the network unless it comes from an avionics box that has a damn good reason to send it … and the architecture (for safety reasons) protects against one box “spoofing” another.

• One comment above proposes that “the position of a switch can easily be made irrelevant to the software by an attacker with enough access to said system.” I agree, IF the attacker can modify the avionics software … but if the switch prevents such modification, then how does the attacker go about making the first modification? And if the attacker has enough access to bypass the switch in the first place, then why bother disabling it?

• Another comment observes the limitations of data diodes. Reportedly, in at least one airliner family, the copper Rx pair is simply disconnected. I don’t know how the cleverest software hack in the world can get around that, but maybe some genius does!

Even where the barrier is not so relentlessly physical, my bet is that the design is in fact robust against “error and exception signalling attacks.” This is exactly the kind of design requirement that is applied to safety-critical systems: they must be designed to function when a node has “blown up,” and is making any kind of random, noisy, or high-frequency signal completely inconsistent with expected function. I recall a colleague designing against precisely this scenario more than 30 years ago, in a system far less critical than avionics.

The robustness requirements will be even stricter for connections to the IFE, where not only is the hardware far less reliable, but the software is also 100% unverified and untrusted.

• The above is one instance of a more general phenomenon. When ordinary computerized systems are tested at all, it is almost exclusively against expected (typical operation) cases, and very inadequately against atypical operation (that should never happen!) cases.

Decades of painful experience have taught that for safety-critical systems, design and testing against abnormal conditions are absolutely indispensable.

• Although industry representatives have a maddeningly smug tone in their reassurances, the technical people inside the industry worry all the time about what can go wrong.

They sometimes fly on airliners. So do their family, and friends.

The guys who wrote the vulnerability analysis didn’t have the opportunity to try anything on a ground development lab, or an actual airframe. Likewise, most of us don’t have such access either.

But can you imagine that nobody at Boeing, who in fact does have such access, checked whether the specific steps are feasible?


Every time the question of airliner hacking comes up, I repeat this observation: systems which are designed to be robust against invalid inputs and internal failures are much more difficult to hack, even if they are not designed with a “hacker” in mind.

The converse of this is obvious: so many network exploits have resulted from idiot designs in which inputs are assumed always to be “normal,” messages are always of the right length, hardware is assumed never to break, etc. etc.

Designing for extreme robustness closes hatches through which hackers crawl in.
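The classic instance of the “messages are always of the right length” assumption is a parser that trusts a length field. A minimal sketch, over an invented two-byte-length wire format (nothing here corresponds to a real protocol):

```python
import struct

# Toy parser for a hypothetical wire format: a 2-byte big-endian
# length field followed by that many payload bytes.

def parse_naive(msg: bytes) -> bytes:
    (length,) = struct.unpack_from(">H", msg, 0)
    return msg[2:2 + length]   # silently truncates if msg is short!

def parse_robust(msg: bytes) -> bytes:
    if len(msg) < 2:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from(">H", msg, 0)
    if len(msg) - 2 != length:
        raise ValueError("length field disagrees with actual payload")
    return msg[2:]

good = struct.pack(">H", 5) + b"hello"
bad = struct.pack(">H", 500) + b"hello"   # lying length field

assert parse_naive(good) == b"hello"
assert parse_naive(bad) == b"hello"       # the naive parser never notices
assert parse_robust(good) == b"hello"
try:
    parse_robust(bad)
    assert False, "should have rejected the inconsistent length"
except ValueError:
    pass
```

In Python the naive version merely returns a short slice; in C the same assumption is a buffer overread. Rejecting any inconsistency between the header and reality is the “extreme robustness” habit described above.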

Clive Robinson September 19, 2019 4:12 PM

@ me,

I don’t get this part.

First off, sorry for the late reply; it’s a busy day, and I’m in a part of London that has as many RF holes as there are foxes raiding the trash cans.

The problem with most road vehicles is speed and the ability to swerve easily. Suburban buses are both long and slow, and whilst they do have accidents, the fatalities and injuries are considerably low per number of travellers. Further, due to the height of the seats, most cars will crash in with their bumpers about floor level and will often subduct, which I’ve seen first hand when wearing the green, when a car smashed into the side of the minibus by my feet. I suffered only a degree of indignation at the fact it disturbed my train of thought.

Trains run on tracks, and whilst you can cause two trains to run on the same track, it’s been very rare in the UK, and if you know where to sit (about 1/3 of the way from an end) your survivability is greater. The fly in the ointment is derailment; the safe place to sit is very dependent on where it derails, how, and crucially at what speed. In the UK derailment is very rare compared to the number of trains, and again suburban trains are relatively slow, so survivability is increased.

Clive Robinson September 19, 2019 6:48 PM

@ Mark H,

Another comment observes the limitations of data diodes. Reportedly, in at least one airliner family, the copper Rx pair is simply disconnected. I don’t know how the cleverest software hack in the world can get around that, but maybe some genius does!

Disconnecting the RX pair in a multilevel signalling system, whilst stopping the ordinary data path back to the sensitive system, does not in many data signalling systems stop the error control signalling, which can be and often is detected on the TX pair. Such are the joys of differential signalling (bi-phase over a two-wire pair using Manchester or similar coding).

It’s almost a “newbie mistake” to think that cutting the RX pair would stop error detection, because it’s been described in the IEEE documentation from the earliest Ethernet standards, which used coax (RG-213-type cable for thicknet, RG-58 for “cheapernet” / 10Base-2) or, as in the IBM Token Ring system, two conductors with an outer sheath.

These Ethernet systems, which became IEEE 802.3, date from at least four decades ago and used half-duplex communications with hardware-based CSMA/CD (Carrier Sense Multiple Access / Collision Detection) as the collision detection method, directly on the wire in the Medium Attachment Unit (MAU). Even when CAT-3 twisted-pair wiring (10Base-T) was used, the hardware collision detection remained, so that hardware bridges between coax and twisted-pair wiring in the same segment would remain compatible (effectively a pair of back-to-back MAUs did the job). It was retained in the 100 Mb/s specs as well, where in some cases data was split across four twisted pairs, two of which switched direction depending on the data flow. Such signalling is still used to detect whether a cable is plugged into a switch, even in full-duplex mode.
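For readers unfamiliar with it, Manchester coding (mentioned above) puts a transition in every bit cell, so the clock, and with it some error information, travels on the same pair as the data. A minimal sketch, using one common convention (0 sent as high-to-low, 1 as low-to-high):

```python
# Minimal Manchester encoder/decoder sketch. Convention used here:
# a 0 bit is a high->low half-cell pair, a 1 bit is low->high, so
# every bit cell contains a mid-cell transition. A missing transition
# is itself detectable as a line error on the very same pair that
# carries the data.

def manchester_encode(bits):
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(halves):
    bits = []
    for i in range(0, len(halves), 2):
        pair = (halves[i], halves[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            raise ValueError("no mid-cell transition: line error")
    return bits

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data

# Kill one transition and the decoder sees the fault immediately.
bad = manchester_encode([1, 0])
bad[1] = bad[0]
try:
    manchester_decode(bad)
    assert False, "should have detected the missing transition"
except ValueError:
    pass
```

This is why “the data pair” is never only a data pair: error conditions are observable on it by anyone listening, which is the point being made about supposedly one-way links.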

Further, in many cases the error control signalling is required so that the sensitive system knows its data is actually going out, because higher-level application-specific protocols may depend on such knowledge.

As I mentioned in my posting above, many systems are designed to deal with cables being cut or removed, but are rarely tested with intermittent or burst-type error signalling. It’s something those who deal with more arcane signalling systems over quite unreliable communications channels tend to see. Without going overly into details, it’s around the point where the communications channel/path is considered degraded enough, about 16 dB above the point at which you would consider moving to otherwise bandwidth-inefficient “Forward Error Correction” (FEC). Such points depend on many operational factors and are thus difficult or impossible to model or predict. The result is that the error protocol layers have wide gaps between them.

In a cabled network system, such as you find used for normal digital signalling, most developers or test technicians would never consider such conditions, or even know how to test for them. In such systems the cable’s native bandwidth easily exceeds the data bandwidth, and intermittent failures due to “dry joints” or “contact noise” are of sufficiently low frequency compared to the data rate that hundreds if not thousands of packet times would pass. To the data system these would look like hard faults, equivalent to a cut or pulled cable. Thus a different level in the protocol stack would enter a circuit re-initialization and potentially force a full restart on the sensitive system (like a brown-out), which, being the equivalent of a “power-on reset”, would clear any built-up error issues like running out of TX data buffers etc.

As I also noted, this is not a subject area very many come across. Of the few who do, many work in specialized areas of the defense industry, and what they know is usually covered not just by NDAs but, in the US and allied nations, by the ITAR and EAR legislation or equivalents.

To see how you get around such draconianly restrictive legislation[1] have a look at,

https://openresearch.institute/itar-and-ear-strategy/

Which is what the likes of GNU Radio etc. have to do, and to a lesser extent people who develop communications protocols for what are in effect Low Probability of Intercept (LPI) communications systems, such as the Amateur Radio FT8 / JS8 / WSPR protocols, and hardware designs destined for amateur satellites. (This is currently a hot-button subject in the American AMSAT-NA elections, where the likes of Bruce Perens firmly believe the current officers have become hung up on it for all the wrong reasons, the European AMSATs in England, Germany and Belgium having gone down the “open route”.)

[1] The funny thing is that the legislation has a fault in it which allows you to quite legitimately DoS-attack those that regulate it (the US Constitution likewise contains a similar fault, which you can find by searching for stories about Einstein sponsoring a foreign national into US citizenship).

Clive Robinson September 19, 2019 7:04 PM

@ Mark H,

A specific instance of this, is that the physical networks used for flight controls are redundant (at least triply), to the extent of running by widely separated geometric paths through the airframe. These redundant networks are completely isolated from each other (and all other physical networks on the plane), except to the necessary extent of having common electrical safety grounds etc.

Sounds nice, but the “are completely isolated from each other” is unfortunately demonstrably not true, as I have mentioned in the past. All the systems that need to talk to “over the air” interfaces get multiplexed together. As this is usually of considerably lower bandwidth than even the individual networks, the muxing switch provides an interesting attack point in its own right. At the very least a “blocking attack” can be run.

As noted, these networked communications systems are highly complex, and few really understand the implications of even apparently very minor “down the line” changes, which often will not show up in run-of-the-mill testing.

The auto industry is beginning to wake up to this, because they have seen bus congestion issues with real-time systems on the CAN bus as more nodes are added.

Rj Brown September 19, 2019 8:11 PM

@Me et al: When I spoke of the human being the weak link, I was not referring to their ability to act as a control system; I was talking about their ability to be deceived or otherwise used as unknowing pawns in a security breach. Consider Kevin Mitnick’s book “The Art of Deception”…

me September 20, 2019 3:27 AM

@Clive Robinson
about cars vs bus and train.

Thanks for the answer, now everything makes sense.

But don’t live constantly worried; I mean:

if you know where to sit (about 1/3 of the way from an end) your survivability is greater.
Just get on the train at a random position like others do.
Don’t let your knowledge “ruin” your life.

Says the one who, where possible, avoids walking on floor grates because it’s safer, and who sits in the second train carriage because the first is full of “long trip” people who never get off, while in the second a seat is almost always guaranteed. I also avoid the central part of the train, because it’s level with the station entrance, so most people just keep walking straight and board by the shortest path, and it’s almost always super full of people.

Seems that we tend to think a lot; we should try to live more freely, just my two cents.

Clive Robinson September 20, 2019 5:55 AM

@ me,

Seems that we tend to think a lot; we should try to live more freely, just my two cents.

Well… I’m an engineer in more than one domain of that field of endeavour, and I also “live inside my head”, unlike many who try to live in others’ heads. Ever since I was too small to really remember, I was not happy with something unless I knew how it worked. I taught myself how to pick some types of locks when I was around eight, and was an avid reader. I was lucky in terms of electronics: I was at the right age when things were switching from valves/tubes to transistors, so I got a lot of radios, TVs and tape recorders from junk/rummage sales, and got to the point where I was earning pocket money repairing them or doing them up to sell.

A slightly later interest in water sports led on to boat building, again at the time that fiberglass and plywood were becoming the way to do things. What I learnt from that helped me, a few years later, to build microwave dishes and other antennas you just could not get any other way. I did well in what we called “higher education”, especially as I’d picked up digital electronics whilst still at school, and thus knew how to build the likes of frequency counters and digital RF synthesizers. This led into pirate radio, which enabled me to build broadcast equipment for people whilst in higher education, which rather helped as I was an orphan by then.

Being on the leading edge of computer design, with the likes of bit-slice processors and ECL along with the new-fangled microprocessors, I was designing high-end tech not just for defence but also medical electronics such as body scanners etc.

The result is I am a natural at what our host @Bruce calls “thinking hinky”, and can almost glance at any system, mechanical, electrical, software or human, and tell you where you are going to find problems and, importantly, how to avoid or fix them. At times I’ve had jobs where I was a fireman/troubleshooter, brought in to turn what were near disasters into something that was either recoverable or a success. Both of these skills are, even these days, still reasonably marketable.

The point is, whereas some people would find thinking about what is safe to travel on, and why, quite arduous, to me it’s easier than breathing and comes just as naturally and without any kind of stress.

Not doing so would actually be stressful to me, because it would be like being constrained, or in effect locked out of my own head.

me September 20, 2019 6:14 AM

@Clive Robinson

tell you where you are going to find problems and importantly how to avoid or fix them

This comes at exactly the perfect moment: a few hours ago i was thinking about asking for your help with an IoT project that i have built for my personal use. it works, but i’d like to add some extra safety, and your help would be a great added value.
if you don’t want to help (by email?) because of time/money/whatever just tell me, otherwise i’ll post more details in the friday squid blog post.

@Rj Brown
Thanks for the clarification

@Clive Robinson @MarkH

The auto industry is beginning to wake up to this because they have seen bus congestion issues with Real Time systems across the CAN bus as more nodes are added.

i remember reading a paper about an attack that abuses this.
I don’t remember the title, but the tl;dr is that the CAN bus is a shared bus: they spoofed signals so that they appeared to come from device A to device B, while the actual sender was C.
then, for safety-by-design reasons, device B started to ignore every message from A because it thought that A was simply broken.
in this way they hacked a car and disabled the ABS (or something similar). unfortunately i don’t remember the full paper.
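A rough sketch of the mechanism being described (this matches the published CAN “bus-off” attacks; the counter rules below follow the standard CAN error-handling scheme, where a transmit error adds 8 to a node’s transmit error counter and a node removes itself from the bus once the counter exceeds 255 — the simulation itself is illustrative, not any particular paper’s code):

```python
# Simplified model of CAN error handling: an attacker who can collide
# with the victim's frames drives the victim's transmit error counter
# (TEC) up by 8 per error until the victim goes "bus-off" and, in
# effect, silences itself.

BUS_OFF_LIMIT = 255  # node goes bus-off when TEC exceeds this

class CanNode:
    def __init__(self, name: str):
        self.name = name
        self.tec = 0  # transmit error counter

    @property
    def bus_off(self) -> bool:
        return self.tec > BUS_OFF_LIMIT

    def transmit(self, collided: bool) -> bool:
        if self.bus_off:
            return False              # node has removed itself from the bus
        if collided:
            self.tec += 8             # transmit error: TEC += 8
            return False
        self.tec = max(self.tec - 1, 0)  # successful transmit: TEC -= 1
        return True

victim = CanNode("A")
attempts = 0
# Attacker injects a conflicting frame every time the victim transmits:
while not victim.bus_off:
    victim.transmit(collided=True)
    attempts += 1

print(attempts)  # 32 forced collisions: 32 * 8 = 256 > 255
```

The unsettling part is that the “attack” is just the bus’s own fault-containment logic doing exactly what it was designed to do.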

RealFakeNews September 20, 2019 7:58 AM

A certain airliner in the late 80s was designed such that even the wash tap in the toilet was connected to the ARINC bus that handled all the avionics.

It took a series of incidents for them to discover that using the tap was the source of ILS glideslope reception failures and flight director problems.

The solution was to disconnect the tap from the bus at build.

They put the tap on the bus because it was easy to do and it signalled the water pump to run.

My only question is: is the in-flight entertainment definitely linked to the avionics busses in any way for control?

I’m sure design engineers and technicians try and do a good job, but too often convenience defeats them.

Clive Robinson September 20, 2019 8:06 AM

@ me,

otherwise i’ll post more details in the friday squid blog post.

As long as it’s “security” related, in keeping with the blog, you keep the details brief, and you do it on last week’s squid after this week’s squid has been posted, it may not get moderated.

me September 20, 2019 9:05 AM

@RealFakeNews
My only question is: is the in-flight entertainment definitely linked to the avionics busses in any way for control?

as far as i know, having watched some presentations from:
https://twitter.com/AndreaBarisani
there are data diodes in place to isolate the important parts from the less important parts.
but as Clive pointed out, cutting the RX wire so that you cannot receive data from outside might not be enough.
quick example:
the landing gear might have one switch that detects the closed position and another that detects the fully open position:
-it can be closed (one switch toggled)
-opening/closing (no switch toggled)
-opened (the other switch toggled)
but what if both switches are toggled at the same time? logically this should be an impossible case, but it could happen anyway.
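A minimal sketch of how defensive code handles that example (state names are made up for illustration): the “impossible” both-switches-active case is decoded as an explicit fault rather than assumed away.

```python
# Decode two landing-gear limit switches into a state, treating the
# logically "impossible" combination (both switches active) as a
# detectable fault instead of undefined behaviour.

def gear_state(closed_switch: bool, open_switch: bool) -> str:
    if closed_switch and open_switch:
        return "FAULT"       # electrically possible, logically "impossible"
    if closed_switch:
        return "CLOSED"
    if open_switch:
        return "OPEN"
    return "IN_TRANSIT"      # neither switch made: gear is moving

print(gear_state(True, True))  # FAULT
```

The point is that safety-critical decoding enumerates all four input combinations, not just the three the designer expects.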

to conclude, i have no access to any project. i think vulnerabilities exist, and trying to hide behind “avionics is special” is nonsense, but i trust the people working there to prioritize safety above everything.

MarkH September 20, 2019 9:20 AM

@Clive:

I was careful to write that the redundant physical networks are completely isolated. Of course, the traffic among them is supposed to be substantially identical, so in that respect they’re not isolated at all.

Does anyone suppose that the elevator motors need to talk to the satellite terminal? Or even worse, that any over-the-air packets would ever be allowed on a flight control drop?

Going from memory here (apologies in advance for any errors), in the early days of F-16 fly-by-wire testing, the equipment and design were done quite robustly, because everyone understood that if it failed altogether, the result would be a smoldering pile of wreckage.

Notwithstanding this careful engineering, in more than one case the DC power supply cut out … resulting in smoldering piles of wreckage.

Among aircraft designers, there is no doubt or confusion that common-mode failures of flight control systems will destroy the aircraft, and for that reason they are engineered at a high level of paranoia.

Obviously, message latency must be controlled, and I would be very surprised if the latency controls in place are not very stringent indeed.

This is yet another example in which robust design protects against malevolent attack even if the designers weren’t thinking about hackers. What they must think about is a wide variety of possible hardware and software faults, which in some cases could result in lots of spurious network traffic.

Designing for correct function even when certain system components have failed will protect against DOS (intentional or accidental) as well.

MarkH September 20, 2019 9:33 AM

@RealFakeNews:

As far as I’m aware (I’m not an industry insider, and my knowledge is limited to what I learn as an aviation technology enthusiast), there are two cases:

  1. The avionics network accepts no data whatsoever from IFE systems.
  2. IFE fault information is forwarded to the plane’s maintenance information system. (I don’t know whether this is done presently, but I understand that it was proposed for future designs.)

In case 2, fault messages will pass through a router, because IFE is on its own network. Unless the designers of the system are actually insane, this router will

• handle IFE packets like high-level radioactive waste, because the IFE hardware and software are absolutely untrusted

• forward IFE fault messages only to the maintenance system (which will be a separate LRU) and to no other address

• obey traffic throttling rules limiting the size and frequency of packets from the IFE network

Note that the maintenance information system is there for economic reasons, and is not in any way critical to safety of flight.

Ari Trachtenberg September 20, 2019 10:16 AM

Bruce, I’m surprised that you’re willing to take Ben’s word on authority – this goes directly against the “security mindset” that we constantly inculcate in cybersecurity students.

Boeing might not need to prove to IOActive that they are not vulnerable, but we all fly these planes and are entitled to have confidence that they have been properly vetted by as broad a range as possible of people who have the technical skills to evaluate them.

A plane that can be taken over by malicious actors can kill not just its passengers, but also innocent people on the ground.

Clive Robinson September 20, 2019 11:24 AM

@ me,

My only question is: is the in-flight entertainment definitely linked to the avionics busses in any way for control?

That depends on your point of view…

Take a passenger aircraft with Internet connectivity for passengers and Rolls Royce Trent engines “under remote engineering”.

The engine is most definitely connected to the avionics system so that the pilots and their support systems can see what is going on. Each engine also has a two way path via the satellite system to Rolls Royce in real time.

The passenger Internet connection is also two way and is not just part of the entertainment system it also has two way paths via the satellite system.

Thus they share the available satellite bandwidth.

In effect the engine and Internet data networks are multiplexed via a mux switch into the much lower bandwidth satellite system, thus the data from both, in both directions, “share the channel”.

Now my point of view is that as they are both on the same wire, they are connected.

That is, the only thing separating them is in effect data slots. To see this, just for argument’s sake visualize the engine management taking slot 1, the ordering system taking slot 2, certain avionics and control features in slot 0, and the Internet taking the rest of the 32 slots when they are not in use for other functions. That is, the mux switch is implementing something approximating QoS over multi-channel VLANs…
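A toy model of that slot scheme makes the point concrete (the slot assignments here are Clive’s illustrative numbers, not any real aircraft’s):

```python
# Fixed-slot TDM sketch: slot 0 avionics, slot 1 engine management,
# slot 2 the ordering system, and the remaining slots go to passenger
# Internet whenever they are otherwise idle.

NUM_SLOTS = 32
RESERVED = {0: "avionics", 1: "engine", 2: "ordering"}

def owner_of(slot: int, internet_active: bool = True) -> str:
    if slot in RESERVED:
        return RESERVED[slot]
    return "internet" if internet_active else "idle"

frame = [owner_of(s) for s in range(NUM_SLOTS)]
print(frame[:4])  # ['avionics', 'engine', 'ordering', 'internet']
```

Note that the segregation lives entirely in `owner_of`: it is a software invariant enforced by the mux, not a physical separation, which is exactly the point being made about “sharing the same wire”.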

Thus the question is how reliable the QoS and VLAN approximations are. As we are aware, the likes of the NSA are masters at making software-controlled switches dance to their tune. These days all routers and muxes are in reality just switches with different software… Likewise many data diodes are switches as well.

So the point of view rests not on the actual hardware, but on the software and how bullet proof you think it is.

I know that switches can be got at, as should every other reader on this blog if they’ve been around for a few years. I’ve actually demonstrated non-authorised changing of QoS and VLAN features on switches in the past.

Heck, the general conclusion is that the NSA and GCHQ have got into every switch that Cisco and Juniper have ever made, along with a whole bunch of SME / SoHo / consumer routers/switches from other US-HQ’d manufacturers (the likes of Linksys etc).

Not being funny, but in reality those switches in aircraft systems are in fact little different. At best they have received more detailed testing during development.

The problem with testing is “knowing what to test”: just making measurements without very specific intent is unlikely to stop the more refined attacks, or even see them…

So to answer your question again: we know that the data paths at one point “share the same wire”; we also know that testing during development is at best imperfect, irrespective of how much of it you do… So we should now ask what your point of view is as regards the data being segregated, knowing the above…

Gunter Königsmann September 20, 2019 1:43 PM

What I don’t understand is:

1) One of the linked articles more or less said data paths are redundant (which means if one is broken the other will take over), and in case data is injected into one channel, only the data from the unmodified channel is used.
2) All parts of the aircraft only accept data from the appropriate senders. Do they encrypt or sign the data to make sure?
3) That there are no unnecessary links, and there are network separations, which might make it impossible to manipulate any security-relevant parts. But data diodes often fail.
4) As has already been stated, firmware updates can only be done according to the right procedure. The same was true of my Galaxy Note, until someone found out that every single app had write access to the internal storage. Do they physically disable the charge pump of every flash memory, or just tell the OS not to write?

MarkH September 20, 2019 2:29 PM

@Gunter:

In response to your second question, in at least some versions of aircraft networks the addressing scheme is very difficult to “spoof”.

In keeping with my theme here, this is absolutely necessary from a flight safety standpoint, even when the designers aren’t thinking about hackers. Any LRU might fail in any of a myriad of ways, and the networks are designed to maintain safe operation with malfunctioning modules.

Even if some aircraft are now using networks with less rigorous protocols, spoofing would still require the ability to reprogram an LRU, which gets to your fourth question.

The ability to reflash your Galaxy note depended on all sorts of people being able to load software onto it! Avionics simply don’t provide that capacity in flight.

People keep proposing chicken-and-egg vulnerabilities, which in essence say “if I could load my own software onto the avionics, then I could open the door to make it possible to load my own software onto the avionics!”

MarkH September 26, 2019 4:42 AM

@Clive:

I’ve been thinking about the assertion that “in reality those switches in aircraft systems are in fact little different” (from LAN and Internet equipment).

How can that be?

Any switch or router in the path of the flight controls is obviously critical to flight safety. Failure is actually worse than a broken cable, because if it’s a software fault, it might cause common-mode malfunction, nullifying all of the intended redundancy.

Safety critical avionics software is made (to the best of my knowledge) by extremely rigorous processes including various formal verification methods.

I expect that you know the COTS systems far more deeply than I … in my limited experience, they are typically built on some version of *nix, and have grotesquely bloated feature sets.

I was appalled recently to look at a Cisco manual that was the best part of 1000 pages in length.


I expect that no version of *nix will EVER be accepted for flight safety critical systems.

And the giant baroque menus of exotic features and options are absolutely unnecessary for flight control network paths.

Given a functioning network stack (which will already be DO-178 certified), the software needed for switching or routing might comfortably be printed on a few sheets of paper, and practical to analyze and verify with great thoroughness.
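To illustrate how small that software could be (the node names and port numbers below are invented for the sketch, not taken from any real avionics design):

```python
# Sketch of a statically configured, default-deny forwarding table of
# the sort described above: every permitted (source, destination) pair
# is fixed at design time, and anything not listed is dropped.

ROUTES = {
    ("flight-control", "elevator-actuator"): "port-1",
    ("flight-control", "aileron-actuator"):  "port-2",
    ("air-data",       "flight-control"):    "port-0",
}

def route(src: str, dst: str):
    # Default-deny: unknown (src, dst) pairs return None and are dropped.
    return ROUTES.get((src, dst))

print(route("flight-control", "elevator-actuator"))  # port-1
print(route("ife", "flight-control"))                # None
```

With no dynamic routing protocols, no management interfaces and no configuration at runtime, the whole forwarding policy really does fit on a page and is amenable to exhaustive review.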

The complexity of COTS equipment makes it impossible to assure, and in fact guarantees that it must be saturated with errors and vulnerabilities.


So, what part of this thinking is wrong?

Is the idea that aircraft network equipment is similar to COTS based on specific knowledge of avionics internals? Or is it surmise?

MarkH October 7, 2019 7:58 AM

@Clive … or anybody with knowledge of current airliner avionics:

Are networking gadgets (like switches and routers) connected to flight controls and other safety-critical systems modified or repackaged from ordinary ground-based equipment?

Or are they clean-sheet-of-paper designs under DO-178?
