Security by Default
Nice essay by Christopher Soghoian on why cell phone and Internet providers need to enable security options by default.
Mark • August 17, 2011 6:35 AM
Sorry, but this is a poor-quality article. It only mentions end-to-end security for connections to cloud services (ignoring the whole problem of keeping our data on these platforms in the first place) and web-based voicemail, which I’ve only ever really seen widely deployed in the US. Out of all the issues out there, this is a poor ‘essay’ which doesn’t even mention that carriers often disable useful features in their custom ROMs in an attempt to ensure the ‘integrity’ of the product and service, which, compared to something like a normal computer, would be considered “security by default” or anything else for that matter.
David • August 17, 2011 7:44 AM
It’s not an article – It’s an Op-Ed.
[as it states clearly in the opening section]
It does not prove anything, but rather conveys a message.
I believe the message is well-conveyed – brushing over the fine-print and related small details.
This Opinion-Editorial is one my mom could read, and perhaps she could better understand both the risks and the solutions in what she probably perceives as “secured environments”.
Clive Robinson • August 17, 2011 7:50 AM
The important point to take away from this is,
In any market there is a “race to the bottom”, that is, all constraints get removed by default to aid usability / support; so if insecure is the bottom position, then that is what the market will get.
Thus, as I keep banging on about “protocols”, they need to be secure not just by default but from the very beginning (otherwise “backwards compatibility” becomes the insecure bottom).
We are at a point where protocols are being invented for controlling not just the utilities by which we live but also medical equipment and medical implants.
Due to technology limitations (such as connectors wearing out and IR LEDs getting dirty or obscured) many of these protocols have a base physical layer of “wireless”, and as they are designed by field-of-endeavour engineers, not security engineers, the default mode is “plaintext, no authentication”.
If this does not worry people then they are either overly complacent or not sufficiently worldly wise.
grymoire • August 17, 2011 8:24 AM
Isn’t this really about the usability of security? If security is easy to use, people would use it more often. But if it’s complicated, consumers get confused. See Peter Gutmann’s book.
Another David • August 17, 2011 10:35 AM
@David – Nitpick (clearly), but Op-Ed is short for “Opposite the Editorial”, not “Opinion-Editorial” – referring to the original placement of the section.
@Another David: Really? Wow. I’m glad I never had to answer that one on Jeopardy!
Richard Steven Hack • August 17, 2011 11:38 AM
I’m with Clive. Security is going to fail anyway due to my meme, but if it’s not baked in from the start it’s going to fail sooner and worse and affect even more people.
And then comes the typical human response: “let’s fix it!” Which only makes things worse as mistake is piled on top of mistake. This is such a common human behavior pattern it should be considered one of humanity’s defining characteristics.
Gabriel • August 17, 2011 12:11 PM
Insecurity by design is a given in most devices, even critical ones such as medical implants and the controllers responsible for infrastructure. For industrial controllers and anything that can be plugged in or powered by a rechargeable battery, you could afford to build in (or, more the norm sadly, throw in) security, including encryption and authentication. Devices like medical implants, however, have to function for many years on a single battery. (Are they still using radioactive decay?) What are the best security options for that? Would most crypto algorithms require more power than the devices could supply?
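Gabriel’s power question can be bounded with some back-of-envelope arithmetic. The figures below (battery capacity, per-byte cipher cost, telemetry volume) are illustrative assumptions, not measurements from any real implant:

```python
# Back-of-envelope check: does encrypting implant telemetry blow the
# power budget?  All figures are illustrative assumptions, not measured
# values for any real device.

BATTERY_WH = 1.0           # assumed implant battery capacity, watt-hours
LIFETIME_YEARS = 10        # assumed service life
AES_NJ_PER_BYTE = 50.0     # assumed software AES cost on a low-power MCU, nJ/byte
BYTES_PER_DAY = 64 * 1024  # assumed daily telemetry volume

battery_j = BATTERY_WH * 3600                           # Wh -> joules
crypto_j_per_day = AES_NJ_PER_BYTE * 1e-9 * BYTES_PER_DAY
crypto_j_total = crypto_j_per_day * 365 * LIFETIME_YEARS
fraction = crypto_j_total / battery_j

print(f"crypto energy over {LIFETIME_YEARS} years: {crypto_j_total:.2f} J")
print(f"fraction of battery budget: {fraction:.2e}")
```

Under these assumptions the cipher itself consumes well under 1% of the battery over the device’s whole life; the radio, not the crypto, is usually the dominant energy cost.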
John Campbell • August 17, 2011 12:13 PM
“When a unified theory of Human Behavior is finally published it will consist, in its entirety, of exceptions.”
Security falls into this, just like competing standards.
Security isn’t a feature, it is a mindset, one for which too many incompatible memes exist.
(I have a .jpg of the Security poster with Bruce and “Security – You keep saying that word but I don’t think it means what you think it does”.)
“Security is a many-splendid meme.”
Verizon, BTW, is infamous in the USA for taking a high-feature smart-phone and turning it into a semi-inert chunk of electronics in order to keep the phone’s features from being used in an unexpected fashion (like installing your own ring tones). Every carrier would like to “secure” their customers to their service.
Like I said…
Security means different things to different people.
Steven Hoober • August 17, 2011 12:45 PM
Agree with grymoire that most of these security features are unusable. Even voicemail passcodes are accepted by systems so barely full-duplex that they can barely hear, so some users have to enter them multiple times. Satisfaction increases massively when passwords are disabled.
The apparently clever solution of not requiring a password when calling from your own phone is pretty good thinking. I’ve been part of such decision-making. Except that later we find the voicemail systems are not really on the network, so they can be spoofed by caller ID. What?! Why have the secure in-network features if we can’t tell what’s in-network?
Terrifying, and exemplary of not doing security from the ground up. Admissibility failures? Sad.
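The broken trust decision Hoober describes can be sketched as a tiny policy check. This is an illustrative model, not any carrier’s actual logic; `network_verified` stands for a hypothetical carrier-level guarantee that the call really originated on-network, which plain caller ID does not provide:

```python
# Toy policy sketch of the voicemail-access decision described above.
# "network_verified" is a hypothetical carrier-signalled guarantee that
# the call originated inside the network; plain caller ID carries no
# such guarantee and can be spoofed.

def may_skip_pin(caller_id: str, subscriber: str, network_verified: bool) -> bool:
    """Allow PIN-less voicemail access only when the origin is verified."""
    return network_verified and caller_id == subscriber

# Spoofed caller ID on an off-network call: it matches the subscriber's
# number, but must still be challenged for a PIN.
assert may_skip_pin("+15551234567", "+15551234567", network_verified=False) is False
# Genuine in-network call from the subscriber's own handset.
assert may_skip_pin("+15551234567", "+15551234567", network_verified=True) is True
```

The failure mode in the comment is exactly a system that behaves as if `network_verified` were always true.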
Anonymous 1 • August 17, 2011 1:57 PM
Of course you’ve also got to depend on the provider not to change your security settings on you, as Facebook has a reputation for doing. It’d be nice to make something like getting security right a legal requirement (or give people the right to sue companies which screw up, especially if it is a deliberate screw-up like Facebook’s various security-setting resets), but I’d be worried as to what side effects that might have and whether it’d end up worse.
Gabriel: IIRC RTGs aren’t being put into humans any more (though the NRC claims they are still licensed in the US), although batteries have improved and the power usage of implanted medical devices has been coming down a bit. They still don’t last as long as RTGs, but the extra surgery is riskier than having a little bit of plutonium in you (try telling that to a radiophobe). Work is being done to power such devices from the body, which should do away with the need for a battery, assuming they can get it to work and, more importantly, stay working for decades.
vasiliy pupkin • August 17, 2011 3:22 PM
I have questions:
(1) Why can’t a regular customer (landline or cell) have ANI on their phone, as on 888 and 800 numbers, so there are no ‘unavailable’, ‘blocked’, ‘private’, etc. numbers, for the same service charge as caller ID?
(2) Can ANI be spoofed?
(3) With current cell/smart phones having a lot of memory, is it possible to store voicemail on the phone itself, with access by PIN, rather than on the provider’s server (like an old-fashioned answering machine)?
(4) Why do e-mail providers require plain-text security questions as a back door to an e-mail account?
Anonymous 1 • August 17, 2011 3:41 PM
vasily pupkin’s questions:
1) Because ANI can’t be blocked (the ability to block caller ID was implemented for a reason).
2) ANI shouldn’t be spoofable, though maybe some phreaker has found a weakness in some phone network’s implementation (then again, caller ID shouldn’t be spoofable either, only blockable).
3) I imagine you could store voicemail on the phone, but that’d either require the phone to be on all the time, or still require messages be stored on the provider’s system until you turn the phone on again.
4) You’d have to ask them, though typing gibberish will work well enough if you don’t want to leave plain text.
Clive Robinson • August 17, 2011 4:04 PM
@ Anonymous 1, Gabriel,
With regard to radioisotope thermal generators, or nuclear batteries: due to US scares that they might be turned into dirty bombs, they were not developed much there, unlike in Russia, which became quite expert in their design and manufacture.
From what I’ve been told, although they are still legal to manufacture and implant, they have some significant issues. The first and most recent is that they can and do set off overly sensitive detectors known to be in use in various places. The real problem, however, is what happens when a patient with one in them dies for some reason (there are a little under a hundred still in people). Apparently there are all sorts of regulations about the removal, handling and transportation of these devices, as they are not allowed to remain in the body for burial or cremation.
Of interest with traditional RTGs is that they are very inefficient at converting heat to electricity, with even specialised thermocouples being at best 10% efficient. Attempts at improving them by adding IR-sensitive photocells can get them up to around 20%, but the energy output drops much faster than the isotope half-life because radioactivity degrades the photocells and thermocouples. NASA, amongst others, is looking at advanced “Stirling engine” designs that might well achieve 30% conversion of heat to electricity, but it is an open question what the life expectancy of such mechanical devices would be.
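The efficiency figures above can be turned into rough numbers. A minimal sketch, assuming an example 100 W(thermal) Pu-238 source (the 87.7-year half-life of Pu-238 is real; the starting power is an assumption, and converter degradation, which outpaces the decay, is ignored here):

```python
# Electrical output of a Pu-238 RTG over time at the conversion
# efficiencies mentioned above: 10% plain thermocouples, ~20% with IR
# photocells, ~30% hoped for from Stirling designs.  The 100 W thermal
# starting point is an assumed example; converter degradation is ignored.

HALF_LIFE_YEARS = 87.7    # Pu-238 half-life
P0_THERMAL_W = 100.0      # assumed initial thermal power

def thermal_watts(years: float) -> float:
    """Thermal power remaining after simple exponential decay."""
    return P0_THERMAL_W * 2 ** (-years / HALF_LIFE_YEARS)

for eff in (0.10, 0.20, 0.30):
    w0 = P0_THERMAL_W * eff
    w30 = thermal_watts(30) * eff
    print(f"{eff:.0%} efficiency: {w0:.1f} W at start, {w30:.1f} W after 30 years")
```

Even after 30 years, decay alone only costs about a fifth of the output; as noted above, in practice the converters fail faster than the fuel.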
Gabriel • August 17, 2011 7:19 PM
@Clive: thanks, I couldn’t recall if the RTG batteries were still in use. I’ll have to look into what they provide now. I do wonder, either way, if such a battery could power a microcontroller powerful enough to implement any strong crypto, or a crypto engine. Of course, I could picture one of the medical device manufacturers implementing a proprietary version of a weak algorithm and protocol and calling it secure due to layers of obfuscation that would be relatively trivial to reverse engineer. I recall something like that in an RF payment card used by Exxon a few years ago.
Clive Robinson • August 18, 2011 4:00 AM
I suspect the issue of security and power consumption is better looked at as CPU cycles per completed comms transaction.
The reason for this is that we tend to separate security out into layers, just as we do with networking, which, like the majority of “stack” systems, is grossly inefficient.
There have been systems proposed in the past that can do error checking/correction, authentication and encryption as one function with two APIs, not three functions with six APIs, and as a result use considerably less than half the CPU cycles.
But by and large they get ignored and left on the shelf to gather dust because of various factors on both the product-engineering design side and the security design side, which tend to boil down to the methodology used to manage complexity.
So we end up with the usual messy comms stack consisting of the data-encapsulating stack and the error-correction stack, and then we add a third security stack. The security stack either gets interleaved with the comms stack or added on top of it just below the application.
It’s all very messy and has significant complexity of its own, with all the API code, buffering and flow-control code.
It all needs stripping out for low-power applications, as it’s a complete waste of power and time and a great inconvenience for little or no gain.
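The layered-versus-fused point can be illustrated with a toy sketch that simply counts buffer traversals. The SHA-256-counter keystream here is pedagogical only, not production cryptography; the point is the pass count, not the cipher:

```python
# Toy sketch: separate error-check / encrypt / authenticate layers
# traverse the buffer several times through several APIs, while a fused
# design does one pass through one API.  Pedagogical crypto only.
import hashlib, hmac, zlib

KEY = b"k" * 32
passes = 0  # counts full traversals of the message buffer

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter-mode keystream (toy cipher)."""
    global passes
    passes += 1
    out = bytearray()
    for off in range(0, len(data), 32):
        ks = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[off:off + 32], ks))
    return bytes(out)

def layered_send(msg: bytes) -> bytes:
    """Classic stack: CRC layer, then cipher layer, then MAC layer."""
    global passes
    passes += 1                                   # pass 1: error-check layer
    framed = msg + zlib.crc32(msg).to_bytes(4, "big")
    ct = keystream_xor(KEY, framed)               # pass 2: encryption layer
    passes += 1                                   # pass 3: authentication layer
    return ct + hmac.new(KEY, ct, hashlib.sha256).digest()

def fused_send(msg: bytes) -> bytes:
    """Fused design: XOR the keystream and feed the MAC in one traversal.
    The MAC already detects corruption, so the CRC layer disappears."""
    global passes
    passes += 1
    mac = hmac.new(KEY, digestmod=hashlib.sha256)
    out = bytearray()
    for off in range(0, len(msg), 32):
        ks = hashlib.sha256(KEY + off.to_bytes(8, "big")).digest()
        chunk = bytes(a ^ b for a, b in zip(msg[off:off + 32], ks))
        mac.update(chunk)
        out += chunk
    return bytes(out) + mac.digest()

msg = b"telemetry" * 100
passes = 0; layered_send(msg); layered_passes = passes
passes = 0; fused_send(msg);   fused_passes = passes
print(layered_passes, fused_passes)  # -> 3 1
```

Three traversals (CRC, encrypt, MAC) collapse to one, which is roughly the cycle saving Clive describes, before even counting the eliminated buffering and inter-layer API code.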
However, in the case of medical devices it’s something we need to do fairly quickly, because there are something like a couple of hundred different medical implants currently all doing things their own way. And hospitals with A&E and acute care cannot afford to have two hundred different bits of “console” equipment with different UIs and nomenclature.
You only have to look in a ward today to see three or four different types of basic observation machines (pulse / BP / SpO2 / temp / resp rate), all with “value added features” that don’t get used and that actually cause erroneous readings to be taken (i.e. an alarm holds the display flashing at the alarm trip level, not the actual current patient reading or reading max/min).
With respect to UIs, we all have a bit of a laugh over “dad and the video recorder”, but it’s not funny when it’s the “nurse and dad’s failing pacemaker” as he slowly turns blue in front of our eyes.
But now consider that on top of the nurse/UI issue you also have a basic comms issue, because the console is PumpUright Rev 6.1 and dad’s pacemaker is PulseUok Rev 7.4.
It is more of an issue in the US currently, because the use of cardiac and other implants is being led by the medical insurance companies, presumably because they think the devices will save them money long term…
Gabriel • August 18, 2011 7:39 AM
@Clive: regarding the different proprietary protocols, I suspect that has mostly to do with vendor lock-in. I wouldn’t be surprised if they make a killing not only selling consoles to hospitals, but also the RF/modem gateways for sending data back to the doctor from home, plus the receiving equipment and analysis software at the doctor’s. So that’s at least three times they can make money off one class of implant. Once they all start talking the same protocol, I’m sure they would lose a huge revenue stream. I’m not sure what the best driver to resolve this would be: regulation, or medical practitioners banding together to put pressure on the device makers.
One thing is for sure: security is very critical for medical implants. My father-in-law had a pacemaker which went off due to arrhythmia last year (he passed away this February). At the hospital, it was necessary to shut the pacemaker down so they could externally defibrillate him. This was done wirelessly, of course. Medical personnel need total administrative (“god”) powers over these devices. Without security, of course, that means anyone can trigger or shut down a critical device. It’s not a pleasant idea to know that someone could have a god button for a critical life-preserving function inside of you. And of course the multi-UI, multiple-protocol nightmare you mentioned is even worse, since there is a high probability that it will kill someone eventually.
Anonymous 1 • August 18, 2011 7:58 AM
Clive: The Soviets didn’t exactly do a very good job on their RTGs (or at least their ⁹⁰Sr ones used for remote power have managed to kill quite a few people who found them and didn’t know what they were) while the US tended to actually bother with safety (at least where they did use RTGs).
The Plutonium powered pacemakers were designed to survive cremation just in case someone forgets to take it out (stories of funeral home workers being scared of them when someone with one gets sent their way exist).
On the big issue with security on medical devices (and also medical information systems) needing to be accessed quickly in emergencies (a delay while everything gets sorted out isn’t tolerable), I’d probably tend to have the encryption key stored on the medical identification tag, perhaps in the form of a barcode for easy machine reading (and also in the memory if it’s a USB device, but having the barcode would be a useful backup). Of course you’d need standards for each part of that, which all equipment (implant and reader) would have to support.
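A minimal sketch of the key-on-the-tag idea, assuming an entirely hypothetical format (128-bit key, CRC-32 check, base32 text suitable for a barcode); no real standard is implied:

```python
# Sketch of the "key on the medical ID tag" idea above: the implant's
# access key is printed on the tag as a short base32 string (easy to
# render as a barcode) with a CRC appended so a misread is detected
# rather than silently accepted.  The format and sizes are assumptions
# for illustration, not any real standard.
import base64, secrets, zlib

def key_to_tag(key: bytes) -> str:
    """Encode key + CRC-32 as barcode-friendly base32 text."""
    crc = zlib.crc32(key).to_bytes(4, "big")
    return base64.b32encode(key + crc).decode("ascii")

def tag_to_key(tag: str) -> bytes:
    """Decode a scanned tag, rejecting it if the checksum fails."""
    raw = base64.b32decode(tag)
    key, crc = raw[:-4], raw[-4:]
    if zlib.crc32(key).to_bytes(4, "big") != crc:
        raise ValueError("tag misread: checksum mismatch")
    return key

key = secrets.token_bytes(16)   # hypothetical 128-bit device access key
tag = key_to_tag(key)
assert tag_to_key(tag) == key   # round trip succeeds
print(tag)
```

The checksum matters for the emergency scenario: a reader that quietly accepted a misscanned key would lock clinicians out at exactly the wrong moment.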
Gabriel: I wouldn’t be too surprised if it has killed someone already.
A lot of the reason for having proprietary protocols is simply that there isn’t a standard anyone could use so all the companies making devices have just got to make their own up, hopefully that’ll change soon (of course the standard we get will need to be a good one).
Clive Robinson • August 18, 2011 9:46 AM
@ Anonymous 1,
“The Soviets didn’t exactly do a very good job on their RTGs (or at least their ⁹⁰Sr ones used for remote power have managed to kill quite a few people who found them and didn’t know what they were) while the US tended to actually bother with safety (at least where they did use RTGs)”
Safety against people taking angle grinders etc. to them was not really a requirement for the Russians; their priority was, as we know, not safety but functionality. They took the view that the RTGs they designed were for remote government-controlled locations where scrap-metal hunters were not a consideration.
It is thus a difference in philosophy, not in the device’s ability to produce power as efficiently as possible for a given price point.
Look at it this way: if the design of trains, cars or planes had originally had the “safety requirements” we have today, they would never have been built, let alone reached the level of functionality we have today.
With regard to medical implants and equipment, we’ve been aware of, and ignoring, the problem of console compatibility and security since at least the 1980s.
Just recently at Black Hat, somebody reverse engineered the comms protocol on an insulin pump. That alone should be a wakeup call to the industry, but…
Look at medical electronics and EMC regulations: medical electronics effectively got exempted, with the result that a lot of it was susceptible to interference from other electronics that generate EM fields.
I don’t know the details, but there have been reports of “continuous medication” pumps that should be giving no more than a couple of ml per minute or hour suddenly, and for no accountable reason, pumping out tens of ml per second in short bursts. It’s been put down to the use of RF-generating equipment in the vicinity, but without any real evidence (that is, it could just as easily have been a software bug).
This does not give me confidence that any standards they come up with will even be practical, let alone good.
Anonymous 1 • August 18, 2011 2:25 PM
Clive: You’re not kidding that trains, cars and planes wouldn’t be at the level they are today (or even around) if they were expected to follow current regulations.
It seems obvious that medical devices (which are safety critical systems) should be held to higher standards for everything, including coping with electromagnetic interference.
Fixing it does sound like it’ll be hard to do; the regulatory agencies seem to have been asleep for so long that it’ll probably be hard for them to wake up. (Of course, if a company decides to pay the extra attention to get things right, they’ll likely spend more time and money developing their product, and then they’ll have a hard time convincing people it’s better.)
Some high profile deaths are probably what it’ll take to get things fixed.