The Insecurity of Secret IT Systems

We now know a lot about the security of the Rapiscan 522 B x-ray system used to scan carry-on baggage in airports worldwide. Billy Rios, director of threat intelligence at Qualys, got himself one, analyzed it, and presented his results at the Kaspersky Security Analyst Summit this week.

It’s worse than you might have expected:

It runs on the outdated Windows 98 operating system, stores user credentials in plain text, and includes a feature called Threat Image Projection used to train screeners by injecting .bmp images of contraband, such as a gun or knife, into a passenger carry-on in order to test the screener’s reaction during training sessions. The weak logins could allow a bad guy to project phony images on the X-ray display.
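To make the credential failure concrete: the long-standard alternative to plaintext storage is a salted, iterated hash, available in the Python standard library. This is a minimal illustrative sketch; the function names are mine, not anything from the scanner's software.

```python
# Sketch of salted, iterated password storage: the system keeps only
# (salt, digest), so a leaked credential file does not reveal passwords.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # slows down offline guessing

def store_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; store the salt and digest, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("guess", salt, digest)
```

Even in 1998 this technique was decades old; storing credentials in plain text was a known failure, not a period limitation.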

While this is all surprising, it shouldn’t be. These are the same sort of problems we saw in proprietary electronic voting machines, or computerized medical equipment, or computers in automobiles. Basically, whenever an IT system is designed and used in secret – either actual secret or simply away from public scrutiny – the results are pretty awful.

I used to decry secret security systems as “security by obscurity.” I now say it more strongly: “obscurity means insecurity.”

Security is a process. For software, that process is iterative. It involves defenders trying to build a secure system, attackers—criminals, hackers, and researchers—defeating the security, and defenders improving their system. This is how all mass-market software improves its security. It’s the best system we have. And for systems that are kept out of the hands of the public, that process stalls. The result looks like the Rapiscan 522 B x-ray system.

Smart security engineers open their systems to public scrutiny, because that’s how they improve. The truly awful engineers will not only hide their bad designs behind secrecy, but try to belittle any negative security results. Get ready for Rapiscan to claim that the researchers had old software, and the new software has fixed all these problems. Or that they’re only theoretical. Or that the researchers themselves are the problem. We’ve seen it all before.

Posted on February 14, 2014 at 6:50 AM • 37 Comments


jones February 14, 2014 7:10 AM

In Europe they use quantum key cryptography in their voting systems:

In Europe, the struggle to create a public realm out of the monarchy’s private government extends back to populist movements in the Middle Ages such as the Ranters and Diggers and Brethren of the Free Spirit; they seem more likely to view their government as something that really belongs to them, with the potential to work for them.

In the United States, our struggle to create a public government really begins with the 14th Amendment. Between then and the civil rights era is when we obtained universal suffrage. The franchise was highly exclusive in the Revolutionary era — so much so that “WE, the People” probably only represents the will of 5-7% of the population at the time. Since then, the conservative battle cry has been “smaller government” and “privatization.” Notwithstanding that we had private government once before — when we were owned by Britain — we have this myth of obtaining self-rule by fighting tyranny, when, in fact, the road to self-rule has been a much more complicated struggle. But the myth prevails over history.

kronos February 14, 2014 7:42 AM

I still remember the first time I ran smack into security-by-obscurity. My boss at the time put me in charge of the most secure system we had at work. It was kept behind a heavy locked door and I was admitted only after a strong lecture on how important it was to keep it as secure as possible.

Even behind a closed door, he felt it necessary to whisper: “and the password is ‘secret’, which of course we can’t tell anybody and they would never guess…”

hugh February 14, 2014 7:45 AM

The idea of continual improvement driven by the hostile nature of the operating environment seems very similar to evolution in the natural world. I don’t mean analogous to, I mean another form of evolution, survival of the fittest. Obscurity, obfuscation, and political lobbying are all attempts to exclude a product from the security evolutionary process, but even at the very highest levels – government-printed currency, NSA information – it is impossible to isolate anything from security evolutionary forces.
It is better to embrace this process and to continually, iteratively evolve and grow stronger and wiser than to try to hold the driving forces of evolution at bay. The driving threat forces of security evolution themselves evolve, adapt, and get stronger over time. You can hold them at bay temporarily through obscurity and obfuscation, but then it becomes only a matter of how long; e.g., the Sony PlayStation 3.
The PlayStation 3 is an interesting example in that it was finally cracked due to a sloppy cryptography implementation (what was meant to be a random number generator was implemented as a constant). I say interesting because peer/open review would have exposed the obvious flaw and it would have been fixed. In this case it was the obscurity and obfuscation that were ultimately responsible for the security being broken.
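The constant-nonce failure has a neat arithmetic consequence worth spelling out: in ECDSA, if the same nonce k signs two different messages, anyone holding both signatures can solve for the private key with modular arithmetic alone. A toy sketch over a tiny textbook curve; the curve, key, and hash values here are all illustrative, nothing like the console's real parameters.

```python
# Toy ECDSA over y^2 = x^3 + 2x + 2 mod 17, generator (5,1) of prime order 19.
# Demonstrates private-key recovery when the nonce k is reused.

p, a = 17, 2            # field prime and curve coefficient a
G, n = (5, 1), 19       # generator point and its order

def ec_add(P, Q):
    """Point addition; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam ** 2 - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def sign(z, d, k):
    """ECDSA signature of hash value z under private key d with nonce k."""
    r = ec_mul(k, G)[0] % n
    s = pow(k, -1, n) * (z + r * d) % n
    return r, s

d = 7                        # the secret signing key
z1, z2 = 10, 3               # hashes of two different messages
r1, s1 = sign(z1, d, k=5)    # the flaw: the same "random" nonce ...
r2, s2 = sign(z2, d, k=5)    # ... is used for both signatures
assert r1 == r2              # identical r values betray the reused nonce

# Recovery: s1 - s2 = k^-1 (z1 - z2), so solve for k, then for d.
k = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_recovered = (s1 * k - z1) * pow(r1, -1, n) % n
print(d_recovered)           # prints 7 -- the private key
```

With a constant nonce, every signature ever published leaks the key this way, which is exactly why one afternoon of open review would have caught it.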

beatty February 14, 2014 8:21 AM

OK, so if Kaspersky or Symantec or TrendMicro or Norton doesn’t hand over all their source code I shouldn’t buy the product… right?

Kristof Provost February 14, 2014 8:27 AM

@beatty Well … yes, although not just because you don’t have source code. Virus scanners typically only search for known exploit code (and perhaps a few variations on it). They’re fundamentally reactive and in my view not worth the effort.

@vincent You jest. The difference between secret passwords and secret algorithms has been explained to death already.

vas pup February 14, 2014 8:31 AM

@vincent: not really. That is yours, your privacy for your own usage/protection. But when somebody offers some kind of security product for public usage, it should be available for public scrutiny of potential security threats/weaknesses and invasion of privacy (like Samsung recently agreeing to provide an LED indicator on its smart TVs when the camera is on. I hope that activation is hardware, not software).

Gweihir February 14, 2014 8:31 AM

The meta-problem here is that many (in my experience most) software “engineers” are incompetent and do not qualify as engineers. The result is that the typical software system sucks badly.

This machine is just a standard example. However, anyone who thought Windows (no matter what version) was suitable as an embedded OS has no business working on the software or the surrounding systems.

Mark A. Hershberger February 14, 2014 9:01 AM

@beatty — you shouldn’t buy their products if you don’t trust them. I don’t, and the systems my family uses haven’t had any problems. But they don’t hide what they’re doing, and they have some pretty vigorous competition.

@vincent — I hope you don’t think there is anything especially “secure” about your SSN. But hiding secret information used to access a system is different than hiding or obscuring information about the system itself, which is what this post is about.

Vincent February 14, 2014 9:19 AM

No, this is just another cherry-picked instance of failed product development that is being used for self-serving purposes. It’s easy, isn’t it, to swoop in after the fact and point out everything that went wrong. Do you even know what the original threat model was? For all you know this failure was the RESULT of anal security engineers who suffocated product development until the project collapsed. You don’t know. You pick up on all this pop news junk and fling it any way you want.

Nicholas Weaver February 14, 2014 9:24 AM

Oh, it gets better. I keep thinking that if you control the software, you can probably control where the X-ray beam is. And observe that TSA agents walk back and forth through the scanner all the time.

So you keep it ON bouncing back and forth scanning at roughly chest level when not actually scanning the full body. When you see something metallic (say, a TSA badge), you immediately drop the emitter down to crotch level for the next 10 seconds…

moocow February 14, 2014 9:26 AM

“For all you know this failure was the RESULT of anal security engineers”

Win98 and plaintext passwords are not the result of anal security engineers. Unless, of course, you mean actual anal security engineers, in which case it is not surprising, as they only know stuff about how to use a variety of rubber corks.

z February 14, 2014 9:44 AM

I somewhat disagree that engineers try to hide their bad designs through obscurity. I don’t believe they even think about security.

There seems to be this idea of “Well, why would anyone attack that?” that is prevalent far too often. It’s why we have empty passwords on internet-facing SCADA stuff, hopelessly outdated operating systems that can’t be updated on embedded systems, etc. Nobody thinks like attackers. Closed systems are perfectly acceptable to people who don’t think they will ever be a target.

Jason February 14, 2014 10:21 AM

It would be like me inventing my own door lock, and because no one has seen one before, I can assume I’m protected. (Of course this could also be said – Since no one has attempted to break into it yet, I hope I’m protected)

Rather than the alternative

Buying a door lock that has been proven in the real world. Paying attention to security bulletins so that if an exploit is found, I can replace it with a fixed version.

@vincent – Either way, I’m not giving you my key.

vas pup February 14, 2014 10:56 AM

@Jason. All you said is valid when you are a random target, making you more protected than the next target in the phishing scheme. Just to lighten the mood: two young ladies were in the jungle and spotted a lion. One started running; the other asked whether she really thought she could run faster than the lion. “Nope,” she replied. “Just faster than you…” If you are NOT a random target, everything depends on the resources available to the actor trying to break your security (physical or informational): local thugs, organized crime, LEA local or state, LEA federal, foreign agents, etc.

Jason February 14, 2014 11:11 AM

@vas pup:

Well, now we’re just getting deeper into security concepts. I don’t think the threat of a targeted attack is a reason to abandon tried and tested methods. I think it’s a reason to bring in additional expertise, add some additional layers of protection. (Instead of just a good door lock, add a surveillance system, alarm system, maybe a stronger door, bars on windows). The security system you use, no matter what you’re protecting has a cost that must be weighed against the risk, and to be effective almost certainly will be layered.

Clive Robinson February 14, 2014 11:22 AM

@ Bruce,

    The truly awful engineers will not only hide their bad designs behind secrecy, but try to belittle any negative security results

That statement is a little unfair, because when it comes to hardware, the closer you are to the metal, generally the more competent you are as an “engineer”.

The problem generally starts and ends with management, because,

1, Like quality, security has to be there fully functional from project day 0.

2, Security processes, training etc “cost”.

You have to be an “old engineer” to remember the days before quality processes were considered part and parcel of the job. And unfortunately the area where quality processes are least frequently found is “software engineering”. Just take any modern software methodology and find the bits that are actually about “Quality Assurance”…

The simple answer is that all you will find is an illusion or mirage paying lip service to any real quality process. It’s also the reason grizzled old veterans of software coding will tell you that most software development methodologies are at best “make work”, and that you will get better results where team members share a common non-adversarial goal and thus trust each other.

And when you look back at the development of QA systems, it was the teams who bought into it and trusted the others that saw the most benefit.

The reason QA actually got going was two fold,

1, Management saw the financial benefit before the factory door.

2, Those who saw the benefit used QA as part of their purchase decisions.

Neither of these conditions is currently true for “security”, thus management treats it as “a non-productive inefficiency”, and the “management mantra” says it should be ruthlessly expunged from the work process “to increase productivity”.

The way to get security into the design process as a norm is by making it the most profitable path to walk; that way, as with QA, the “management mantra” will change.

Until that time, blaming people for “keeping their jobs” is a little unfair.

G. Bailey February 14, 2014 1:07 PM

I think the article is dead wrong about the threat projection system being a big issue.

The purpose of this system is to keep the screener alert. In a normal airport, a contraband item like a gun might occur at most once a day; rare contraband like a bomb is probably seen less than once in a lifetime. Hence it would be natural for a screener to simply ‘pass’ all luggage, even if they are being diligent. Adding these “false positives” gives the screener something to do, and increases security by “impedance matching” the task at hand to the psychology of the operator.

It’s true that an attacker could have the system inject innocuous items, or perhaps have it inject items at a very high rate. I suspect that either of these new behaviors would be quickly noticed.

EH February 14, 2014 1:17 PM

Actually, in any airport, contraband like this would occur at most 24hrs/airport-lockdown-time per day.

G. Bailey February 14, 2014 1:19 PM

Looks like I was wrong. The other article gives more details about the system, and it is pretty crappy.

It’s one thing to superimpose false images that are removed after alarming on them. It’s another entirely to allow some other person to choose the time when the false image will be shown, and to replace rather than modify the image.

yesme February 14, 2014 1:40 PM

Hacking is illegal. Selling poorly secured soft-/hardware isn’t (wearing my black-and-white glasses now).

Mike Amling February 14, 2014 1:49 PM

“Upon seeing a weapon on the screen, operators are supposed to push a button to notify supervisors of the find. But if the image is a fake one that was superimposed, a message appears onscreen telling them so and advising them to search the bag anyway to be sure. If a fake image of a clean bag is superimposed on screen instead, the operator would never press the button, and therefore never be instructed to hand-search the bag.”

If the training software assumes that the .bmp images have simulated contraband, one would think that the training software would do something if the operator doesn’t press the button when a .bmp is displayed. Or does the attacker who introduces a “clean” .bmp file also modify the software?
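Mike Amling's point can be made concrete: if the injection software tracked which frames carried a superimposed image, the missing check is almost a one-liner. A hypothetical sketch; none of these names come from the actual product.

```python
# Sketch of the audit step the TIP feature apparently lacks: compare the
# frames known to carry an injected contraband image against the alerts
# the operator actually raised, and flag every missed injection.
from dataclasses import dataclass

@dataclass
class Frame:
    bag_id: int
    tip_injected: bool   # True if a contraband .bmp was superimposed

def audit_operator(frames, alerted_bag_ids):
    """Return bag ids where an image was injected but no alert was raised."""
    return [f.bag_id for f in frames
            if f.tip_injected and f.bag_id not in alerted_bag_ids]

frames = [Frame(1, False), Frame(2, True), Frame(3, True)]
alerted = {2}                          # operator pressed the button only for bag 2
print(audit_operator(frames, alerted)) # prints [3] -- the missed injection
```

With a check like this, an attacker substituting a "clean" image could not silently suppress the operator's response: the mismatch between injected frames and raised alerts would itself be an alarm, unless, as Mike Amling notes, the attacker also modifies the software.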

DB February 14, 2014 2:11 PM

Thank you, Bruce… “Obscurity means insecurity” is exactly what I’ve always meant, when I said “closed source by definition is insecure”…. only open source can be secure (which doesn’t guarantee that it is, only that it’s at least possible).

yesme February 14, 2014 2:33 PM

What really worries me is that we haven’t really learned a lot.

The 1983 movie WarGames could happen today. Maybe not in the US (although I doubt that), but there are more countries in the world. The problem with security by obscurity is that you just don’t know whether there is a WOPR out there with a backdoor login of “Joshua”.

How secure are these nuclear platform systems? Just look at the Stuxnet virus. Is “the west” capable of protecting itself against this kind of thing? I don’t think so (looking at this news item).

And is the JSF/F-35 capable of dropping a nuke? It also contains 20 million lines of C++ code.

I think this is way more worrying than any “terrorist attack”.

I don’t know. Maybe it’s just BS that I am talking about. I am not a security expert. But I do know that you can’t trust computers. Not yesterday, today or tomorrow.

Michael Toecker February 14, 2014 5:57 PM


Saw you at SAS, thanks for speaking!

Did you notice on the way out that all the machines in the Punta Cana airport were the make and model Billy and Terry evaluated?


David Surovell February 14, 2014 8:10 PM

Your comment about engineers (smart vs. awful) was unfortunate. Well-established companies such as Diebold produce software with a workforce that is salaried and university-educated. The software produced usually conforms to management’s priorities. If QA isn’t part of the software process, the software produced will tell the tale. The company with good management and a weak engineering staff is a rare beast. Unicorn rare. If Diebold has crappy software, then Diebold is to blame, not some mythical bumbler.

To say nothing of the procurement process.

Wesley Parish February 15, 2014 4:19 AM

Okay, @vincent, I fess up. You can have my unique computer network address: and the password: Top_Secret_Passwd to my administrator account: Gallipoli alias First_Lord_Admiralty alias Total_Cock_Up


@Gary I’d be interested in whether or not the software development process itself improved in the ten years between the Rapiscan 522B and the 620DV. If it’s still a cow-horse-elephant designed by a committee then it’s still a disaster waiting to happen, no matter how glitzy it may appear.

Juhani February 15, 2014 5:41 AM

How did the money pyramid go:

User interface

Now where is security? Why should anybody even remotely sane make software that is theoretically reliable?
You say the worst engineers belittle others. Why shouldn’t they? A good engineer is the one who uses the least effort and perhaps gets to do something interesting.

Comments mention the PlayStation 3’s eventual hack. Does it really matter? Sony was able to sell PlayStations for years. If they had disclosed the code, they would have been hacked just sooner.
And being secretive helped them keep costs down while selling less secure software. It was their product and their risk.

About code openness helping: there was the Debian OpenSSL key generation problem, for years…

The sellers are always going to try to sell the biggest crap they can get away with. The agreements are made; if you can find the crap and point to it, then they will say sorry, how do you want me to fix it? The more they can get away with, the better (less work) for the manufacturer.

Now for the real sad joke.
Once the client has bought the software, the problems are most painful for the client.
The bill for problems will go to the client (did you have support, did you have the latest model, etc.). Now we all know the real money is made from support.
In essence, this article’s ideology about openness in security means big money flowing from clients to suppliers.

A smart client will not go down the open road. Better insecure, but not publicly so, than being forcibly milked for money, having to replace stuff, etc.
The best case (for the supplier) is if the client is forced to buy support/software updates by vulnerabilities being disclosed in his system; then the client just has to pay, perhaps 20%/year (regulations, audits).
So why should the manufacturer want a too-secure, well-written system?

IMHO the problem is elsewhere. Clients want MORE functionality, never less, never something that just works for 20 years and does not require patching.
The military wants high-grade encryption and satellite connections, not couriers (the most secure option).

Mike the goat February 15, 2014 6:47 AM

yesme: unfortunately, with control equipment for water treatment plants, power substations, etc. connected to the internet and administered over unencrypted channels like telnet, I suspect that a “real” tragedy has to occur before governments wake up to the very real threats that exist. It isn’t WarGames, but it is close, and the devastation (or, more likely, mass inconvenience) caused may not be comparable but would certainly get attention.

… and personal experience shows me that even places that have reduced their exposure to the open internet by using VPNs etc. often have an old 33.6k modem connected to the console “just in case”.

I often wondered what would happen if – in 2014 – someone deployed a phreaking style wardialer in a large city just to see what answers. Hell, do it long distance – I am sure we can cope with a 9600bps connection to KREMVAX 🙂

Autolykos February 17, 2014 4:04 AM

@yesme: What I do know for a fact is that you can’t trust any C++ code to be secure unless it’s short and simple enough to keep in mind completely and verify its correctness on a piece of paper (and even then you should be wary). I still like C/C++ because it’s fast, efficient, has lots of good libraries available and trusts me to know what I’m doing, but secure it ain’t.

yesme February 17, 2014 11:24 AM


In the JSF/F-35 they use a special subset of C++, so it’s probably a lot safer than regular C++. But still it’s a massive, massive amount of code.

However, that is not really what I meant. It’s the big picture.

“The west” is capable of creating Stuxnet. And I am quite sure that any other player with enough capacity and determination is able to create something like that too.

The problem is that there is too much diversity. Too much diversity in hardware, software, protocols etc..

Just look at this article.

Wesley Parish February 18, 2014 2:24 AM

Didn’t rival lock manufacturers use to hold public contests to see which lock was the easiest to compromise? IIRC, that’s how Yale locks managed to get their fine reputation.

About programming languages and secure usability – there’s an Ada variant out there called SPARK that’s supposed to be worth investigating.

JeffH February 18, 2014 6:48 AM

“Smart security engineers open their systems to public scrutiny, because that’s how they improve.”
An open iterative security process like the above assumes that you can withstand the first few iterations failing.

If your company is dealing with PCI or PII data, I’m not sure a standpoint of ‘we’ll try it, be really open & honest about our vulnerabilities, and hope we get better next time around’ is going to cut it from a legal perspective. Maybe if you replace ‘public’ there with ‘external NDA audit’…

The individual infrastructure pieces such a system is built of, sure, they themselves can be open & iterative, and I applaud the call there – but I think the nature of just about any corporate IT system is never going to be publicly available information.

Nick P February 18, 2014 12:07 PM

@ JeffH

“Maybe if you replace ‘public’ there with ‘external NDA audit’…”

That’s what I promoted. Independent evaluation by people who know what they’re doing is the part of openness that boosts security. However, full openness also makes the job easier for opponents, among other issues. Selective disclosure to professional bug hunters under NDA provides the benefits without most of the risks. Good ones tend to cost good money, though.
