Cory Doctorow on Software Security and the Internet of Things

Cory Doctorow has a good essay on software integrity and control problems and the Internet of Things. He's writing about self-driving cars, but the issue is much more general. Basically, we're going to want systems that prevent their owner from making certain changes to it. We know how to do this: digital rights management. We also know that this solution doesn't work, and that trying introduces all sorts of security vulnerabilities. So we have a problem.

This is an old problem. (Adam Shostack and I wrote a paper about it in 1999, about smart cards.) The Internet of Things is going to make it much worse. And it's one we're not anywhere near prepared to solve.

Posted on December 31, 2015 at 6:12 AM • 39 Comments

Comments

Jeff Martin • December 31, 2015 8:07 AM

I'm not sure this is such a tough problem. Healthcare faced this issue some time ago, e.g. a bug in the operating software caused some patients to receive a lethal dose of radiation. The solution was to enforce critical paths in the hardware, not in the software. That way bugs or alterations in the software would not enable horrible outcomes in the hardware.
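Jeff Martin's fix can be sketched in a few lines (a toy model with made-up names and limits, not real medical-device code): the safety clamp lives conceptually below the software layer, so even a badly buggy or maliciously modified controller cannot push the hardware past its limit.

```python
MAX_SAFE_DOSE = 2.0  # hard limit, imagined as burned into an interlock circuit

def hardware_interlock(requested_dose: float) -> float:
    """Toy stand-in for a hardware interlock: the clamp sits *below* the
    software layer, so no firmware bug or modification can exceed it."""
    return min(max(requested_dose, 0.0), MAX_SAFE_DOSE)

def buggy_controller(prescription: float) -> float:
    # A software fault (here, a bogus unit conversion) requests 100x the dose...
    return prescription * 100

requested = buggy_controller(1.5)      # software asks for 150.0
delivered = hardware_interlock(requested)
print(delivered)  # 2.0 -- the hardware path caps the harm
```

Whether such a clamp can exist depends on the system: a radiation machine has one obvious dangerous output to bound, while a self-driving car's "safe envelope" is much harder to express in hardware.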

B • December 31, 2015 8:32 AM

I do not see why the inability to create systems that say "no" to their owners is a problem. A hammer does not say "no" when its user bashes someone's skull in with it; a knife does not say "no" when its user stabs someone; a pen does not say "no" when its user writes an anti-government polemic. What is it about software that should change any of this?

We have laws to deal with people who use their tools in dangerous or harmful ways. If someone reprograms their self-driving car and it winds up killing someone, we can hold a trial and decide whether or not their custom firmware caused the death. Or if we do not want to take the chance, we could just ban self-driving cars entirely.

What we absolutely should not create is a world in which our tools preempt our own interests to benefit others. That is what Cory is warning about -- that the current direction of technology and the current state of the law are setting us up for that situation.

H.L. Mencken • December 31, 2015 9:50 AM

"We're going to want systems that prevent their owner from making certain changes to it."

Really? Speak for yourself buddy!

A more accurate way to phrase this is that Hollywood wants systems that prevent their owner from making certain changes to it.

I'm with Stallman: You control the code, or the code controls you.

Sancho_P • December 31, 2015 10:38 AM


@B, Jeff Martin

So you compare a trackless, self-driving car to a simple human-operated tool like a hammer?
Or autonomous, trackless mass transportation to a fixed machine?
A “device” for the masses to handpicked and certified machinery?

Beware of AI; we humans are stupid enough.

L. W. Smiley • December 31, 2015 11:04 AM

No user-serviceable parts? No souped-up cars? I've removed the finger guard from my KitchenAid coffee grinder so the beans don't clog in the hopper during operation. How about upgrading the hard drive to a 500 GB SSD in an iPod and Rockboxing it? Of course we want to mod our products and repair them ourselves. Give us standardized parts and open architecture, etc. Of course we should be able to mod the software ourselves or with a 3rd party. What are you talking about? And if my self-driving car is programmed to kill me to avoid greater carnage in a no-win accident situation...

Anonymous Cow • December 31, 2015 11:13 AM

"We're going to want systems that prevent their owner from making certain changes to it."

Uh, no, we want systems that prevent anybody other than the owner and creator from making changes. Take self-driving cars, for example: the only way their software/firmware should be updatable is over a physical connection, with no wireless connections allowed. A hacker might still be able to alter the system, but they would have to have physical access to do so.
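This "only the owner or creator may update" model is essentially update authentication rather than DRM. A minimal sketch in Python (all names and keys hypothetical; a real device would verify asymmetric signatures from a hardware root of trust, not a bare HMAC):

```python
import hashlib
import hmac

# Hypothetical owner-held secret; on a real device this would live in
# protected storage, and production designs use asymmetric signatures.
OWNER_KEY = b"secret-known-only-to-the-owner"

def sign_firmware(image: bytes, key: bytes) -> bytes:
    """Produce an authentication tag over the firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def accept_update(image: bytes, tag: bytes, key: bytes) -> bool:
    """Install only updates signed with the owner's key (constant-time check)."""
    return hmac.compare_digest(sign_firmware(image, key), tag)

firmware = b"new-brake-controller-build"
good_tag = sign_firmware(firmware, OWNER_KEY)
bad_tag = sign_firmware(firmware, b"attacker-guess")

print(accept_update(firmware, good_tag, OWNER_KEY))         # True
print(accept_update(firmware + b"!", good_tag, OWNER_KEY))  # False: image tampered
print(accept_update(firmware, bad_tag, OWNER_KEY))          # False: wrong key
```

Note the contrast with DRM: here the key answers to the owner, so the device says "no" to strangers rather than to the person who bought it.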

albert • December 31, 2015 1:04 PM

@Jeff Martin,
I believe you're referring to the Therac-25 machines; essentially, particle accelerators. (There are several detailed studies available online.) Ironically, the previous versions used hardware interlocks to determine the position of the various mechanical components.
.........
One issue is whether a software-controlled device may injure or kill other people. So, no, I don't want some nameless goober messing with 'their' auto software. Altering software that may have public safety ramifications should be a felony.
.
AFAIC, the major issue with software in the IoT is reprogrammability (not discussed enough, IMO). This major security issue has been virtually ignored. When you make things reprogrammable over the internet, trouble can't be far behind.
.
I'm certain that the main reason for remote reprogrammability is the fear that there are bugs in the code that may need fixing someday. Why?

1. Today's systems are way too complicated, requiring huge code bases, and impossible for one person to understand.

2. The trend toward contracting out software development. Very often a few people, or even one person, have sole responsibility for coding, and there's no expertise on staff to check it.

3. Bean Counters (Marketing People) often have the last word in software decisions.

When money is the prime criterion, everyone suffers...and some die.

. .. . .. _ _ _ ....

Thomas • December 31, 2015 4:18 PM

This is like a certain other subject that occasionally comes up.

We want to retain our own ability to do these things, because we trust ourselves to do it properly, and because we foresee a time when we disagree with how things are.

We need to prevent unsuitable people from doing these things, because they cannot be trusted to do so safely.

In the end there's an argument about how society as a whole is best served, by preserving the individual right or by playing it safe and locking things down.

Just because I want to be able to install software of my choice on hardware I paid for (i.e. install Linux on my laptop) doesn't mean I want the guy down the road installing custom software on his self-driving car.

From a technology point of view the cases are identical, as are the means of allowing/preventing user modification.
In the real world, the second computer being modified has some rather interesting peripherals attached.

jdgalt • December 31, 2015 4:32 PM

Speak for yourself. No gadget has any business limiting what its owner can do with it, and we should be fighting to make constitutions say so. And disobeying the law until that day comes.

Clive Robinson • December 31, 2015 6:49 PM

With regards,

We're going to want systems that prevent their owner from making certain changes to it.

And people saying "no we don't", I don't think you have thought it through sufficiently far.

One of the jobs of electronic hardware design engineers is to ensure components stay well within working parameters. The reason for this is to reduce the likelihood of failure. After all, you do not want your car's electronic brakes to suddenly fail when you stamp your foot hard on the brake pedal in an emergency, because the current rush through the driver transistor made it open circuit faster than a fuse.

Electronic engineers have all sorts of ways of making systems "fail safe", and many are not obvious from just looking at the circuit diagram. Likewise electromechanical and mechanical engineers have similar systems, such as limiters, end stops and safety valves. History shows that it was not unknown for "operators" to defeat safety systems, by amongst other things "wiring down" limiters and safety valves, to get a few more percent performance. The downside is that by so doing they caused boiler explosions etc. that killed people or left them badly injured. The legislators of the time were faced with a choice between "personal freedom" and "general safety"; they opted for "general safety" as this minimised the potential for harm.

Many, many years later researchers realised that some people overestimate their abilities, and that they can be extremely dangerous because of it. Eventually it was written up and is now called the Dunning-Kruger effect.

Do you really want such people flying their latest "drone modifications" over your back garden where your kids are playing?

I suspect not.

Engineers spend lifetimes learning not just how to make systems safe, but safe in a very wide range of conditions. They spend incredible amounts of money on testing the systems they design to ensure they meet not just the specifications but the edge conditions that affect safety as well.

What gives an individual the right to change such systems, not test them sufficiently and then put others at risk by using their modified systems near others?

But note I am not saying that you must not change anything; I firmly believe you should be able to change things where it is safe to do so. The crux of the problem is knowing when it is and is not safe to change something, and taking limiting or mitigating steps where there is uncertainty.

And I suspect most people would rather you did your experiments etc. a long, long way away from them, their loved ones and their property. Preferably where the only person you can harm is yourself, and the only damage you can do is to your own property.

altjira • December 31, 2015 9:00 PM

I'm not a design engineer or even a software security specialist, but I feel fairly safe in making a few predictions.

1. Corporations are going to try preventing people from altering their products, and for the most part, most of the time, they will succeed, with the help of legislators and regulators.
2. A significant number of people will modify or attempt to modify those products. Most of those modifications will result in bricking or no harm. Some will succeed and tout their achievements, resulting in generally more bricking.
3. An elite few are going to be able to do whatever they want.
4. Sooner or later, somebody's modification is going to result in a terrible disaster. There will be media fanfare, strident condemnation, libertarian defensiveness, and a lot of social media memes. And political hearings. Then we'll all move on.

So, what's new?

cf • December 31, 2015 11:32 PM

@Clive Robinson, and anyone else freaking out about how the slightest change to the code could kill someone:

Every single time you've driven somewhere for more than 15 minutes or so, you've driven near a car that was mechanically maintained by an amateur mechanic. Or not maintained properly, or at all, for years. Or customized/modified by some idiot with a blowtorch in his garage (those Hondas you see cruising one inch off the ground, and pickup trucks with five foot tall tires, you know those don't come like that from the factory, right?) I know because I was one of those idiots. When I was a kid I had a VW Baja bug I built myself from a wreck and a pile of rusty, junk parts left in a field.

In the course of three or four years I had a rear wheel come off because I was running tires far too large for the car and sheared the studs off the hub; hit a hard dip and had the front suspension break away from the body; caught some air off another dip and snapped both transaxles in the rear; had the generator fall off on the freeway; had fumes leaking from the gas tank through the hole where my radio should have been and a loose fusebox that would sometimes short and blow all the fuses at once; a brake pedal that fell apart while I was driving and no emergency brake; fenders that weren't wide enough to be street legal; and a stinger exhaust that shot flames a foot in the air when I'd floor it. Oh, and my turn signals weren't real. Didn't have a flasher, just a toggle switch, and if there was a cop behind me I'd manually rock it back and forth until I'd made my turn. No cop around? I wouldn't signal at all...

And that's just the stuff that broke. Everything else was barely hanging by a thread.

Now I know you're going to say I shoulda been put in prison or I shoulda this, that and the other thing, but that's kinda like someone saying they shoulda been born rich and pretty. But they weren't, just like I was never caught for any of these things, and no amount of wishing will change it. Because the fact is that it's far beyond the abilities of any agency to manually inspect every vehicle in America for compliance, and only the most egregious violations can be found by a cop cruising by you on the street. So the answer to:

>What gives an individual the right to change such systems, not test them sufficiently and then put others at risk by using their modified systems near others?

Is absolutely none. It just can't be stopped. It can't be done now, and I doubt it will be any easier in the future. It can't even be accomplished for commercial vehicles, where they have to be inspected on a yearly basis, with surprise inspections at road side scale houses, and ruinous fines for failure to comply. The infrastructure and the expenditures for commercial vehicle safety are tremendous, and do pay benefits I'm sure, but you would have to increase it by an incredible amount to apply that same scrutiny to every vehicle in the US. We can't even keep our roads and bridges maintained as we stand, so I doubt we're going to find the money for this as well.

If you take it as an axiom that someone who has skill and physical access to a system is almost sure to be able to compromise it, and believe what I said above about how it's not going to be possible to regularly, physically inspect all of these vehicles, then you really are just going to have to accept that in the future you will, on a daily basis, be driving near automated cars that have been tinkered with by their owners. Just as you have always been driving near unsafe vehicles, your whole life, and weren't aware of it.

Clive Robinson • January 1, 2016 4:51 AM

@ cf,

Now I know you're going to say I shoulda been put in prison

Actually, no, because it would not solve the problem or be useful to society in general.

It's why I said,

    But note I am not saying that you must not change anything; I firmly believe you should be able to change things where it is safe to do so. The crux of the problem is knowing when it is and is not safe to change something, and taking limiting or mitigating steps where there is uncertainty.

The solution to terminal curiosity only happens in one of two ways: education or death. Death is a waste of skill and resources, thus education is the preferred way to go, as it also moves society forwards.

The problem with this is the folks that fall into the Dunning-Kruger issue. Telling them that what they are doing is dangerous will only elicit a "but I can handle it man" response. The way to resolve this is by either a different form of education, or by giving them a place far enough away to do things where they cannot hurt unaware bystanders. One way to do this is to turn it into a sport with race tracks etc.

Which is why I'm very much in favour of people learning to tinker in safe environments. Curiosity and to a certain extent risk taking are what makes humans what we are, not just individually but as a society. The hard bit is directing risk taking into safe channels. To a certain extent that is what the various "Maker" and "sporting" groups are about.

As the old saying has it, "cats will play", but there is no reason why they should do it in the china cabinet, or on the furniture. It's why cat toys and cat doors minimise the problem to the point that it's entertaining to watch or get involved with, and just one of the reasons we have them; likewise children.

Thomas_H • January 1, 2016 6:21 AM

Clive Robinson wrote:
The problem with this is the folks that fall into the Dunning-Kruger issue. Telling them that what they are doing is dangerous will only elicit a "but I can handle it man" response. The way to resolve this is by either a different form of education, or by giving them a place far enough away to do things where they cannot hurt unaware bystanders.

The problem is that in corporate life the standard response currently seems to be "promote them upstairs" instead of "limit the damage they can cause" or "educate them to handle things better" (this last one doesn't work with some people due to a major lack of self-insight), which results in many more people having to deal with their stupidity and its consequences (like other employees taking their stupidity as an example...).

So first there needs to be a change of societal culture, one that actually makes people consider long-term effects instead of that year's dividends to shareholders.

Clive Robinson • January 1, 2016 7:58 AM

@ Thomas_H,

The problem is that in corporate life the standard response currently seems to be "promote them upstairs"...

That could be for several reasons, one of which is that those who promote them are just like them. Which might account for all the consultants to senior management we see.

Another is the "safety in numbers" issue. We saw this with the sub-prime lending that led to the First Banking Crisis. If you did as everybody else was doing, it would be a near certainty that you would not be singled out if things went badly. However, to do differently was very risky, because in the short term you would have to show clear benefit over what was at the time quite good business. Thus doing the right thing was most likely a career killer in the short term, but the wrong thing fitted in with the crowd, and in the long term there was "The Insurer of Last Resort" to be used, if not blackmailed, with the "Too Big to Fail" ploy: pay us or the economy will be irrevocably wrecked.

But also some pathological climbers see advantages in having what are effectively "yes-guys" to take the fall etc., because they don't have the self-perception to realise that is where their usefulness to psychopathic senior management lies. The trick to looking good is to jump ship before the rats. What you do is "Have a Vision" that nobody else understands --because it's nonsense / a high-risk gamble-- get it started as a major project, and about a third of the way through, having networked your way up in others' perceptions as a visionary, jump ship. If despite the odds the project succeeds, then it's due to your good vision. If however it fails, it's because those you left behind "dropped the ball". Either way you don't lose, and those Dunning-Kruger types you left behind can easily be seen to have been at fault, so in effect you also pull the drawbridge up behind you... Whilst not perfect as a plan, it's still a fairly good one, as long as you have a place to jump to, and as we know psychopaths are very, very good at networking, so for them it's generally not a problem.

Then there is what I call "The Tony Blair PM Plan": when you get to the top, logically there is nowhere to jump to, so what do you do? Simple: surround yourself with second- or third-raters who you make sure know that their promotion/position is down to you and you alone. Part of that is you get the glory, and if it should go wrong they take the blame and fall on their sword, knowing that you will bring them back again in the near future provided they keep their nose clean and don't make waves... In the meantime you have your exit plan all sorted out and your nest well feathered, part of which is engineering the start of a self-destruct device left behind. You get clear, things implode on those left behind, and the resulting infighting pushes you ever further from the mess whilst polishing your image...

There are a couple of other reasons such as family and favours and one or two other human failings, but the above cover most of it these days.

Slime Mold with Mustard • January 1, 2016 1:45 PM

I believe there may be an aspect of the "safety" argument that has not been touched on, and it may even involve a "hidden hand".

That issue is replacement parts.

Car manufacturers fought like hell from the 1960s through the 1980s to prevent after-market manufacturers from making parts that fit their cars.

The Supreme Court said "No element, not itself separately patented, that constitutes one of the elements of a combination patent is entitled to patent monopoly, however essential it may be to the patented combination and no matter how costly or difficult replacement may be".

However, as I understand it, DRM/IP law is completely different. Everything breaks eventually. It may be that replace/reload OEM is the only option.

The Hidden Hand:

See This for an outline of a dirty tricks campaign. I have seen them in action. Safety is a great cover. "Think of the children!"

dsd • January 1, 2016 4:46 PM

"Basically, we're going to want systems that prevent their owner from making certain changes to it."
Why? I want to upload my own firmware to every device I have. And to ones I don't have. lol

Ole Juul • January 1, 2016 8:04 PM

In the case of driverless cars, I don't see any problem. Nobody, as an individual, would want to own one of those anyway. They're useless from a driving or personal car perspective. They're only useful from a transportation point of view.

I foresee driverless cars as being more like public transport. If they get implemented on any scale then they will be owned by companies providing that service - similar to bus, taxi, or train services. The rules regarding modification and maintenance will then apply to commercial outfits and not be infringing on the personal rights (perceived or otherwise) of the passengers.

dsd • January 1, 2016 8:17 PM

James Comey doesn't want to weaken encryption. He wants a Trojan in every computer. He wants this Trojan to be installed even before you install the OS. The Trojan will be in firmware.

If you look at any modern computer, you will see that it contains 20-50 subsystems. Look for schematics. Every subsystem has firmware. Every firmware may contain a Trojan, or at least bugs.

Firmware is closed-source software. Reverse-engineering every piece of firmware is impossible because:
1. Too many products - huge amount of work.
2. Manufacturers update firmware.

Also, we don't have a mathematical calculus to find "bugs". Maybe we will someday, but we don't have it right now.

This means that you cannot trust your computer. It's insecure by design.

There is some way to harden computers, of course.

Sorry for my bad English.

Jesse Thompson • January 1, 2016 10:44 PM

@Clive Robinson

Please be advised: we are moving FROM an era where every car is nothing but a gigantic, primarily hardware puppet responding instantly to the needs of a human pilot, TO an era where cars can have AI brains and tons of software to augment their hardware parts.

There exists NO capability for causing harm that the self-driving variant has which the old-style variant did not already have! Today, right now, this very second, we have hundreds of MILLIONS of 0.5-2 ton death machines hurtling scant feet past one another in every part of this country, THOUSANDS OF TIMES PER SECOND, at relative speeds in excess of 120 miles per hour.

The ABSOLUTE WORST case scenario of allowing the *owner* of the device to make changes to his product is the situation we already have: where the owners of the devices make every god damned split-second decision on behalf of their products for every single moment of their operation.

Why is this a dimension which terrifies you so badly that you and/or Mr. Doctorow show concern that self-driving cars should never exist unless we could prevent *their owners* from being able to alter them? That's like launching a relief effort that states "unless we can deliver 10x more food than is required per capita, some people might conceivably sometimes experience hunger for up to 15 minutes at a time, which is a tragedy... and to avoid that tragedy we should scrub the mission and leave the entire population at their current worst-possible rate of starvation instead."

ianf • January 1, 2016 11:46 PM


@ Ole Juul

[I am not discussing Doctorow's article, only your comments in regard to it].

You “don't see any problem. Nobody, as an individual, would want to own [a driverless car] anyway.”

Rephrase it to "I (=you) wouldn't want to own one", or show us the mandate for being the spokes/wo/man for that Nobody, or perhaps all nobodies of this world.

They're useless from a driving or personal car perspective. They're only useful from a transportation point of view.

No shit. Remind me what cars are primarily for, please, since it apparently can't be transportation. And, while you're at it, what is that undefined "personal car perspective" of yours… not the new-car smell that one apparently can buy in a can @ Walmart?

I foresee driverless cars as being more like public transport.

And what would be wrong with that, I wonder. Once upon a time a horse buggy, or, in town, a hansom cab, was practically a necessity for journeys beyond one's walking perimeter. Then automobiles appeared, which, for a time, required that a footman(?) run in front of each with a red flag warning passers-by of an automobile approaching. Yet somehow, and within a couple of decades, we adapted to a practically horse-buggy-less world, and the footmen became chauffeurs with a mission to seduce no-hoper second daughters of landed TV-photogenic gentry (the flags were bought up in bulk by the Commies, who know a good deal when they see it).

What you seem not to have understood is that the emergence of secure driverless cars will upend several industries and ways of living and, yes, affect both quality of life (positively) and unemployment (adversely). That last reason alone may prove the biggest sociological obstacle to their acceptance. But in the end, with the advantages of driverless vehicles outstripping the disadvantages, privately owned "human-drive" cars (ones compatible with, thus able to move in, a mesh network of driverless ones) might become an elite status symbol… because, when you think about it, what's so exciting about having to control a fast-moving vehicle among others on the road when you could simply instruct the car to "take you there" and do something else while en route… nap perhaps, or why not—with a like-minded accomplice of the contradictory gender and smoked all-around windows—make some babies?

Ole Juul • January 2, 2016 3:04 AM

@ianf I must say your take on what I said puzzles me - or rather your somewhat rude assumptions about me. :)

So you agree that they're only good from a transportation point of view. Apparently we're on the same page in that regard. I'm glad you also understand the potentially disruptive effect on industries and society. Your statement about what I seem not to have understood is just really odd.

However, your assumption that people buy cars primarily for transportation does puzzle me. Most marketing, and indeed design, is aimed at the driving experience and the social status of the vehicle. Perhaps I misunderstand the marketing, but I see dreams being sold. And may I add that many people buy cars despite there being alternate ways for them to live. I personally drive a 1987 K-Car and put in about 50 miles a month, because I'm in a remote area and have 12 miles to the store. If I were back in the city, I'd have no need for such a sexy piece of social status, particularly if I were young again and could ride a bike.

To rephrase what I said earlier, I believe that driverless cars have a future as a type of public transportation. At this point in history they're not going to appeal to people undergoing midlife crises or 30-something youngsters pretending to be race drivers.

I thought my hints at exactly what you said were sufficiently clear. Next time I'll try to include sarcasm tags, and animated emoji.

Florian • January 2, 2016 3:30 AM

I also think the issue is not making sure that no one modifies any part of the software, but giving the owner a way to ensure that no one but him modifies the car. You can make a car dangerous without any change to the software, and the only things preventing some people from doing so are their own safety and laws that punish people who hurt others (even if it is just out of stupidity). I don't see why software changes anything here. The same goes for any other tool -- you can make them dangerous in a number of ways, and preventing us from changing the software doesn't make anyone safer. I want to be in control of my devices and not be controlled by them; that doesn't mean I will modify my car (I don't, because I don't know how), but I want to be free to do so. Just require an inspection after modification to make sure the changes don't endanger anyone.

Clive Robinson • January 2, 2016 4:21 AM

@ Jesse Thompson,

The ABSOLUTE WORST case scenario of allowing the *owner* of the device to make changes to his product is the situation we already have: where the owners of the devices make every god damned split-second decision on behalf of their products for every single moment of their operation.

Sorry you are wrong, and the fact you don't know that is the issue I'm talking about.

You need to go and look up some engineering concepts such as "hunting", "race conditions", "critical damping", "oscillatory feedback/feedforward", "dynamic loop gain" and "negative resistance".

A relatively simple gotcha is the response time of two or more amplifiers in series in a feedback loop. Get them the right way around and the loop is stable; the wrong way around and the loop becomes unstable and can start to exhibit oscillatory behaviour.

Similarly, feedback loops within feedback loops. You find this issue arising in overload protection systems. In adjustable power supplies the primary design spec is to keep the output voltage stable, whilst also having both power and current limiting. However, if the user decides to charge certain types of battery, they will wind the voltage up to the maximum for the battery, then turn the current limit hard on, connect the battery and raise the current to the recommended "constant current" of the battery.

Most hobby electronics designers make assumptions about loads on amplifiers and power supplies being passive resistors with no energy storage ability. Thus they only design for two, not four, quadrants. That is, they don't realise that with the feedback loops they can end up designing a "negative resistance" with a degree of reactance; thus, if a load with the opposite reactance is added, it can make a very high power oscillator.

So if the "owner" does change something, there is a very real danger that when there is an unexpected effect the system will self-oscillate, and the owner will have no ability to stop it before the results are destructive. Such an unexpected effect might be the Venturi effect when a car gets between two high-sided vehicles like container lorries.

But even professional engineers get it wrong; have a look at why some of the largest vehicle manufacturers have had to issue critical product recalls for software corrections.


I could go on at length, but if you have a hunt around you will find PDFs of postgraduate control theory, with all the theory and equations you need to know to design stable loops, which is just one aspect of designing systems that are "Fail Safe".
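The stability point shows up even in the simplest possible discrete feedback loop. In this toy Python model (not automotive code; the gains and step counts are arbitrary illustration values), a proportional controller converges for modest loop gain but diverges with growing oscillation once the gain crosses the stability boundary:

```python
def simulate(gain: float, setpoint: float = 1.0, steps: int = 50) -> float:
    """One-state proportional loop: x += gain * (setpoint - x) each step.
    The error shrinks by a factor of |1 - gain| per step, so the loop is
    stable for 0 < gain < 2 and diverges with alternating sign beyond that."""
    x = 0.0
    for _ in range(steps):
        x += gain * (setpoint - x)
    return x

print(round(simulate(0.5), 6))   # 1.0 -- converges smoothly to the setpoint
print(abs(simulate(2.5)) > 1e6)  # True -- same loop, higher gain, blows up
```

Real control loops have many interacting states and reactive loads, as described above, which makes the stable region far harder to see; the point is only that a plausible-looking parameter tweak can flip a loop from stable to destructive.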

ianf • January 2, 2016 6:36 AM


@ Ole Juul

Certainly, I questioned your car-is-everything claims, hence by definition I must have been rude.

BTW. please spare me your patronizing pats on the head (because "you're glad I also understand…" etc), or I will turn rude.

your [=i.e. my] assumption that people buy cars primarily for transportation does puzzle me.

Oh, really? What else would they buy cars for, to impress the next door Joneses? To put away in car-sized bank vaults to protect their investments and wait for them to accrue in value? TELL ME, as you obviously have THE answer.

[I…] put in about 50 miles a month because I'm in a remote area and have 12 miles to the store.

Shame on you then for abusing The Saintly Status Symbol for mere transportation to and from the bodega.

Ole JuulJanuary 2, 2016 6:48 AM

@ianf
OK. I must admit that your angry attitude is difficult to understand. I'm not sure what it is you're trying to achieve.

ianfJanuary 2, 2016 7:08 AM


ADMINISTRIVIA @ Ole Juul,

Listen, let's make a deal: when I feel the need to have my opinions "psychoanalyzed" by some nincompoop who's run out of arguments, I'll dispatch a minion to fetch you… how's that sound?

Just sayin'January 2, 2016 5:58 PM

@ianf

I got where Ole Juul was coming from prior to his/her extended explanation and happen to agree with the points made. Self-driving cars will have their place. But there's no way they can deliver the "freedom of the open road" experience that many drivers desire. While I'm certain they'll be just the ticket for many people and certain applications, there's no way in hell I'd ever buy one. And from talking to others, I know Ole Juul and I aren't alone in this - rather obvious btw - sentiment.

I get you're a regular on this blog and have much to offer. But geez guy, do us all a favor and tone it down a notch. It's not Ole Juul that's sounding unreasonably angry and defensive here.

ianfJanuary 2, 2016 10:59 PM


@ Just sayin'

    […] if you glance straight away at the end of any posting and find the two-word sentence “Just saying” you don’t have to read the confident statements that lead up to it.

    Clive James on net.trolls (made easier if a posting begins with that.)

MCassanitiJanuary 3, 2016 5:30 PM

I see a few common issues here, and I'm not about to take a solid stance in either direction, but merely highlight them all.

  1. A self-driving car should be modifiable by the individual owner, since we already have that ability with our current cars, and the risk of harm is no less for modifying a self-driving car.
  2. Modification of any car can harm others and the owner if those making the modifications do not sufficiently understand how the individual components should operate. Indeed, we already have engineers and certifiers for after-market modifications to cars today.
  3. Stopping modifications to the software of a self-driving car is seen as a legalistic limitation that restricts user freedom, creates vendor lock-in, and may also be anti-competitive in the long term. The potential for the vendor to prevent needed changes (think non-genuine parts) or impose a fake product-based limit (you bought the cheaper version, which can physically drive as far as the more expensive one but is software-limited to distance X) is quite high.

I noticed no one has raised any concerns about other IoT devices. I guess the risks of me modifying my fridge controller just aren't the same.

The part that concerns me more is the legal front. If I'm in a crash with one other car and it goes to court, the fight is between me and the other driver (and the lawyers). If two self-driving cars are in a crash, who is responsible then? I would assume the vendor, since they are responsible for the process of driving the vehicle, and they have many more resources to fight this out in a courtroom. What happens when one of those lawyers points the finger at me and says "we can't reliably prove that the software wasn't modified by the owner prior to this crash"? That is my primary concern: not my ability to change the software, regardless of the freedoms I wish to have or whether I will ever exercise them, but the PITA legal problem when someone can still blame me for a crash of a car I didn't modify and didn't operate beyond 'go home please'.

That is a much harder problem to solve, and I agree that adding DMCA-style restrictions to the software, plus every other restriction conceivable that we've seen from the game-console world, will probably not resolve that issue.

ianfJanuary 3, 2016 8:33 PM


@ MCassaniti

[…] “A self-driving car should be modifiable by the individual owner, since we already have that ability with our current cars, and the risk of harm is no less for modifying a self-driving car.

#fuggedaboutit, ALL OF IT. If self-driving cars are to become the default road transport/ trucking option (to begin with perhaps only on certain routes, later everywhere), and the economics ARE DEFINITELY in their favor, then the individual vehicles will have to mesh (verb) with one another into that mesh (noun) that they're autonomous dynamic nodes of. It's like a network that constantly rearranges itself around all currently active, forever moving cell-tower sites in it… presently a tall order even for mere GSM phones on the move talking to stationary cell towers.

The vehicles will be of different makes, but needing to talk to, take feedback from, cooperate with, AND BLINDLY TRUST one another. Think a murmuration of starlings, birds in a 3D fast-turning flock, each instinctively keeping track of, and responding to staggered positioning feedback from just 8 (or is it 9?) others around itself. That is roughly how autonomous self-driving cars WILL HAVE to behave on the road, though in linear 2D, rather than the far harder "birdy 3D."
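That murmuration rule can be sketched as a toy consensus loop (my own illustration; the function name, numbers, and the 1-D simplification are all assumptions, not anything from a real vehicle protocol): each vehicle repeatedly nudges its speed toward the average speed of its k nearest neighbours, and the whole platoon settles on a common speed with no central controller, just as each starling responds only to the few birds around it.

```python
# Toy sketch (all numbers invented): 1-D neighbour-averaging "platoon".
# Each vehicle nudges its speed toward the average of its k nearest
# neighbours; repeated locally, the group converges on a common speed.

def consensus_step(speeds, positions, k=3, gain=0.3):
    """One update: every vehicle averages the speeds of its k nearest neighbours."""
    new = []
    for i, (p, s) in enumerate(zip(positions, speeds)):
        # k nearest neighbours by position (excluding the vehicle itself)
        nbrs = sorted((j for j in range(len(positions)) if j != i),
                      key=lambda j: abs(positions[j] - p))[:k]
        avg = sum(speeds[j] for j in nbrs) / len(nbrs)
        new.append(s + gain * (avg - s))   # nudge toward the local average
    return new

positions = [10.0 * i for i in range(8)]                       # cars spaced along a road
speeds = [25.0, 31.0, 28.0, 22.0, 35.0, 27.0, 30.0, 24.0]      # m/s, initially scattered
for _ in range(50):
    speeds = consensus_step(speeds, positions)

print(max(speeds) - min(speeds))   # the spread shrinks toward zero
```

The catch the comment is driving at: this only works if every node runs the agreed rule. One vehicle with modified firmware that ignores the averaging step never converges with the rest, which is why the mesh has to treat it as untrustworthy.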

Getting there, even just agreeing on and implementing a single common control protocol, will take time: no less than a decade, perhaps starting a decade from now (the USA and Japan will go it alone, but the EU/Eurasia will have no option but to agree on a common global standard).

So it goes without saying that a transport system like that cannot allow any DIY-modified, HENCE BY DEFINITION UNTRUSTWORTHY, "vehicle nodes" just because the Ole Juuls and MCassanitis of this world feel like showing off their driving reflexes to impress one another.

    Unlike "the Internet" treating censorship as an obstacle, and routing around it, the future accomplished road self-drive system will treat suddenly appearing untrusted vehicular nodes as broken-down debris, and call in heavy-duty robot removal drones to simply lift them off the macadam, so the temporarily slowed-down transport flow can resume.

Just Sayin'January 9, 2016 8:43 PM

@ianf

I'll leave it to the reader to determine which of us comes off as more trollish.

ianfJanuary 25, 2016 9:55 AM


ADDENDUM 25 January:

Anyone harbouring wishful thoughts of, essentially, the continued presence of human drivers among driverless cars on the roads (ubiquitous in the foreseeable future) had better read this notice off /. on how the insurance industry is already planning to survive the, to them apparent, future downsizing of demand for its services. Hence, if human drivers are permitted on these "self-steering" roads at all, their insurance rates will probably be sky-high, pricing them out of the market.

Insurance Companies Looking For Fallback Plans To Survive Driverless Cars

    Driverless cars could mean a huge downsizing of the auto insurance industry, as the frequency of accidents declines and liability shifts from the driver to the vehicle's software or automaker. This is compounded by the rise of ride-sharing services. Once summoning a vehicle to take you somewhere isn't limited by the number of people available to drive them (and are correspondingly cheaper), car ownership is likely to decline. Many major automakers and tech companies are throwing billions of research dollars into making this happen […]

WazowskiJanuary 25, 2016 11:44 AM

@ianf
Driverless cars could mean a huge downsizing of the auto insurance industry, as the frequency of accidents declines and liability shifts from the driver to the vehicle's software or automaker.

There's no chance of getting Google, at least, to assume "responsibility" towards an individual. They don't do that. It would be damaging to their fake brand as an "IT company".

More likely the user will have to accept a disclaimer stating something like this:

"...By riding in this vehicle you accept that we collect data about your person. You further acknowledge that Google, Inc. or its subsidiaries are not to be held responsible for any loss of health etc..."

ianfJanuary 26, 2016 4:44 AM


@ Wazowski

Don't conflate (confound?) Google-the-research-entity with a (future driverless, or any) car manufacturer. They would like to provide THE operating system for driverless cars, but once the technology seems to solidify, more heavyweight players will appear in that field. (A couple of years ago I read some financial maven's speculative prognosis on what Apple might do with its steadily growing earnings surplus. One of the, admittedly fanciful, ideas was to "get into the personal transportation business." Given the stakes, and Apple's proven record of seamless hardware/software integration and manufacturing excellence, it doesn't sound as strange to my ears as it once did.)


[…] More likely the user will have to accept a disclaimer stating something like this [a short paragraph].

Not a chance. The driverless vehicles' EULA/ToS script will run into tens of ALL CAPS PAGES, AND REQUIRE SIGNING IN BLOOD DRAWN BY A REGISTERED NURSE UNDER DIRECT SUPERVISION OF A NOTARY PUBLIC (upon which the blood group and other derivative data will become the property of the car supplier).


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.