Comments

vas pup • September 14, 2017 9:30 AM

@Elon Musk • September 14, 2017 8:41 AM

I see only one way to prevent it: embed a kill switch [of a kind] into robot/AI designs. Just learn from the lessons of the bad security of smartphones, tablets, laptops, you name it.

I love the real Elon Musk for his unique combination of STEM knowledge and business skills.
I guess we need to clone this guy (and 'bleep' political correctness on that)

Clive Robinson • September 14, 2017 12:04 PM

Why are humanoid robots considered any different from other computer controlled machines? Like, for instance, a Roomba vacuum cleaner, a John Deere tractor, an automated forklift, a level-3-or-above driverless car, an aircraft on autopilot, an automated self-propelled gun, or a pilot-assisted drone...

The point is they all have computers that control the mechanical device. The mechanical devices are all mobile, and they are all implicitly weapons of destruction and carnage. The problem is that, for technical reasons, all those computers are hackable.

Maybe it's that robots are "futuristic" and look vaguely human in entertainment that makes us view them differently?

If we go back over a hundred years, we had "mobile force multipliers" that replaced horses and carts. They were seen as so dangerous that in England early automobiles had to have a man carrying a red flag walk in front of them.

You only have to look at the number of fatalities on the road in the USA and ask yourself a question of "How many of the drivers had hacked their own brains with drink, druks, lack of sleep or thinking about work etc?"

Just because we can make machines that look like us does not make them even remotely unique when it comes to the ability of machines in general to kill us.

Maybe if we thought about machines in general in the same way we fantasize about "killer robots", then the world would be a safer place in general.

vas pup • September 14, 2017 4:04 PM

@Clive:
"Why are humanoid robots considered any different from other computer controlled machines?"
Clive, I see the difference when a machine/humanoid robot can make its own decisions by having AI capability, not just a computer. That means a robot with AI could make random decisions on its own, not just follow pre-loaded commands.
According to Asimov, the three laws of robotics are:
1. A robot may not harm a human.
2. A robot must obey the commands of a human.
3. A robot must protect its own existence.
A robot should follow (2) only if it does not contradict (1).
A robot should follow (3) only if it does not contradict (1) and (2).
(1) has been violated already: https://www.fastcompany.com/3059484/this-robot-intentionally-hurts-people-and-makes-them-bleed
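That strict precedence can be sketched as a tiny filter chain. This is a hypothetical illustration only; the `Action` fields and the `choose` function are invented for the sketch and are not any real robot API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool     # would this action violate Law 1?
    obeys_order: bool     # does it satisfy a human command (Law 2)?
    preserves_self: bool  # does it protect the robot (Law 3)?

def choose(candidates):
    """Pick an action under the three laws, in strict priority:
    Law 1 is an absolute filter; Law 2 is preferred only among
    lawful actions; Law 3 only among lawful, obedient actions."""
    safe = [a for a in candidates if not a.harms_human]        # Law 1
    if not safe:
        return None  # no lawful action exists
    obedient = [a for a in safe if a.obeys_order] or safe      # Law 2
    prudent = [a for a in obedient if a.preserves_self] or obedient  # Law 3
    return prudent[0]
```

A lower law never overrides a higher one: an obedient but harmful action is filtered out before obedience is even considered.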

That is somehow related to the development of weapon systems where humans are excluded altogether. As you know, there were cases (on both sides of the globe during the Cold War) when human involvement prevented a nuclear counterstrike based on false alarms generated by machines. I am not sure that Asimov's Laws are the final word on robotics today, but I guess you have higher-level expertise on that.

albert • September 14, 2017 4:18 PM

@vas, et al,

I doubt that robot designers know how to design a 'kill switch'. They'll probably do it in software :)

Here in the US, we've had safety standards for factory automation for decades: hard-wired Emergency Stop (E-STOP) buttons all over the machine to kill power. Mechanical systems must be designed so that a power failure cannot create dangerous conditions.

@Clive,
"..."How many of the drivers had hacked their own brains with drink, druks, lack of sleep or thinking about work etc?"..."

The stresses of modern life. Back in the day, your horse could take you to work all on his own, and you could make out with your ladyfriend on the way back to her place......

I know you meant 'drugs', but just for the helluvit, look up 'druks' :)

. .. . .. --- ....

Julia Clement • September 14, 2017 8:36 PM

@Clive Robinson

Why are humanoid robots considered any different from other computer controlled machines? Like, for instance, a Roomba vacuum cleaner,

My Roomba may hate me and want to kill me, but so far the worst it has managed is to slam into my toe at low speed a few times. Unpleasant, but not painful. I doubt anyone can see them as anything other than harmless.

Industrial robots can and have caused deaths, but most people never encounter one, so they are not seen as a threat.

My guess about humanoid robots is that for as long as we've been around as a species we have been threatened by self-moving, humanoid-shaped beings: other humans. Given what we know of human history, it seems wise to regard strange humans as a potential threat until we know they are safe. A robot that looks like a very strange human is going to invoke this fear as soon as it looks enough like a person.

Drone • September 14, 2017 11:35 PM

The robot in my house just showed up one day, and now it refuses to leave. I live in a Robot Sanctuary city, so the cops refuse to take it away. It doesn't speak English, but it does seem to communicate with that Google Home thing we're all required to have now. When I go to sleep the robot just stands there and stares at me. Kitchen knives go missing.

Andrew • September 15, 2017 1:04 AM

They used to say machine learning is biased; actually, the predictions ML provides are based on statistics from similar circumstances, so it has to be biased to be accurate.
This doesn't mean that decisions will be made on that kind of statistics alone.
Future robots will have an enforced list of "principles", very similar to Asimov's laws, which will sit above the biases computed from big data.
For example, a police robot won't think "we only shoot black people", like two weeks ago, even if statistics tell it a black person is more likely to commit a crime.
People often mistake decisions for predictive results.

Andrew • September 15, 2017 4:32 AM

One more thing: I'm not as afraid of hacking robots as I am of fooling them... think how easy it is to fool humans...

vas pup • September 15, 2017 9:16 AM

@Andrew
How will AI decide moral questions? Whom to choose to save when the robot can save just one? Will it consider the life of a university professor (yes, including what kind of science (s)he teaches, e.g. history versus STEM) more valuable than a plumber's, or an old person's life versus a young person's? The whole point is that in real life humans usually have to choose among several bad options - the lesser evil - not between good and evil.
It is key what kind of moral 'principles' will be added, and some of our respected commenters recently provided a citation that religion hijacked morality. I don't have any suggestion for now, but I'll be glad to hear others' opinions, with respect.
@all
http://www.bbc.com/future/story/20170914-spotting-cancer-stopping-shootings-how-ai-protects-us
“We should view AI not as something competing with us, but as something that can amplify our own capabilities,” says Takeo Kanade, a professor of robotics at Carnegie Mellon University. This is because AI has a tolerance for tedium, plus an ability to spot patterns – an ability that’s far beyond anything humans are capable of.
An automated system that listens for the sounds of gunfire with arrays of sensors can pinpoint where gunshots came from and alert the authorities within 45 seconds of the trigger being pulled. The ShotSpotter system uses 15 to 20 acoustic sensors per square mile to detect the distinctive "pop" of a gunshot, using the time it takes the sound to reach each sensor and algorithms to reveal the location to within 25 meters.
Machine learning technology is used to confirm the sound is a gunshot and count the number of them, revealing whether police might be dealing with a lone shooter or multiple perpetrators and if they are using automatic weapons.
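The time-difference-of-arrival idea behind such systems can be sketched in a few lines. This is a toy illustration, not ShotSpotter's actual algorithm: the brute-force grid search, the sensor layout, and the `locate` function are all assumptions made for the sketch.

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def locate(sensors, arrival_times, area=1600.0, step=5.0):
    """Grid-search the point whose predicted time differences of
    arrival (relative to sensor 0) best match the measured ones,
    by least-squares residual.
    sensors: list of (x, y) positions in meters
    arrival_times: arrival time at each sensor, in seconds"""
    t0 = arrival_times[0]
    measured = [t - t0 for t in arrival_times]  # measured TDOA vs sensor 0
    best, best_err = None, float("inf")
    n = int(area / step)
    for i, j in itertools.product(range(n + 1), repeat=2):
        x, y = i * step, j * step
        # propagation time from candidate point to each sensor
        d = [math.hypot(x - sx, y - sy) / SPEED_OF_SOUND
             for sx, sy in sensors]
        predicted = [t - d[0] for t in d]       # predicted TDOA vs sensor 0
        err = sum((p - m) ** 2 for p, m in zip(predicted, measured))
        if err < best_err:
            best, best_err = (x, y), err
    return best
```

Because only time *differences* are used, the unknown moment the trigger was pulled cancels out, which is why such systems need several sensors rather than one.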

Mike Barno • September 15, 2017 10:09 AM

The seagoing robots called aircraft carriers, battleships, cruisers, and destroyers are perhaps being hacked, according to this report in Foreign Policy.

http://foreignpolicy.com/2017/09/14/u-s-navy-investigating-if-destroyer-crash-was-caused-by-cyberattack/

Naval investigators are scrambling to determine the causes of the mishaps, including whether hackers infiltrated the computer systems of the USS John S. McCain ahead of the collision on Aug. 21, Tighe said during an appearance at the Center for Strategic and International Studies in Washington.

(Vice Admiral Jan Tighe is the US Navy's deputy chief of naval operations for information warfare.)

"I like combat vessels that weren't captured by Russians."
[or substitute Chinese, Vietnamese, North Koreans, Anonymous, or IS]

Andrew • September 15, 2017 12:20 PM

@vas pup
"How will AI decide moral questions? Whom to choose to save when the robot can save just one? Will it consider the life of a university professor (yes, including what kind of science (s)he teaches, e.g. history versus STEM) more valuable than a plumber's, or an old person's life versus a young person's?"

This is a much-disputed issue; I can only give you a personal opinion. Anyway, AI development is not stuck on this - either solution will be OK.
Humans may not make the best choice in the most critical situations. If we can improve this, why not? If, in the case of an imminent accident, the AI has to choose between a driver's life and the lives of several kids, what's wrong with saving the kids? Yes, AI will act like God sooner than we expect. It will make life-and-death decisions for us. Maybe better than we do.
The German guidelines for AI specify that the AI should just let things happen without interfering in extreme situations like these. I do not agree; if it can minimise losses, it should do so.

Imagine that a car's only choices forward are a wall or a person. There is one driver - a perfect 50/50 - so what should the AI decide?
The answer?
I am sure that, far after the decimal point, one probability will be higher than the other. The airbag, the car's safety features, the physical condition of the pedestrian, or something else will make the difference. Not age or race.

A Nonny Bunny • September 15, 2017 3:40 PM

@Andrew

Firstly, few people would want to buy a car that will actively try to kill them, or their loved ones, to save someone else.
Secondly, that kind of utilitarian thinking would lead to cars killing people so doctors can use their organs to save the lives of multiple people.

I think a car should be loyal to its owner (or guest) to the full extent of the law. (And if he/she so chooses, the owner/guest can instruct/request it to save the other's life in case of an accident.)

Sancho_P • September 15, 2017 5:22 PM

@Julia Clement, re Roomba, the toe-stubbing companion

Your feeling for the device is understandable, considering your personal experience to date. But you know it has a very powerful rechargeable battery and could, e.g. due to a defect, especially under the sofa, set your home on fire; and regardless of whether you are at home or not, day or night, you would very likely not be able to extinguish the blaze.
Good luck dealing with the manufacturer for compensation.

Industrial robots, on the contrary, have improved over a long time, after fatal accidents, and are nearly safe today because of strict regulations, safety features, limited access and well-trained operating personnel.
This is why the average observer does not see them as a threat.
And, in case of an accident, very often we can find out who is liable.

Regarding interaction with other humans (and the like), we should not underestimate the non-verbal communication between all living beings - sparsely understood in science, but totally absent from any man-machine communication.
Just look into the tabloids to see how often we fail, regardless of years of education and training (divorced persons, raise your hands).

However, criminally tampering with machines is far from our present comprehension.

Sancho_P • September 15, 2017 6:01 PM

Re A(G)I in machines / robots:

Indeed, the average person sees humanoid robots as (wo)man-like, expecting reason and morals, a probable companion, and on rare occasions a (job) killer.
All the ingredients for "hair standing up on the neck", plus endless fantasy/fiction.
In fact, the real "job killers" are plentiful right next to us, present in daily life.

So I will use the term "machines" also for "autonomous" moving robots, in contrast to living creatures.

Apart from deliberately tampered-with machines (I don't like "hacked" for a criminal activity), our expectation of machines (and the law) is based on deterministic behavior:
- until the first undetected fault occurs and,
- if the machine can pose a risk to humans (depending on how severe, how likely, and how many humans could be affected), a second, independent and undetected fault occurs.
This is what we have today in both production processes and private environments (e.g. elevators, public transport).

Today the manufacturer is fully liable for the construction (according to up-to-date technical rules), documentation and operating instructions of any machine that poses a risk to humans, animals or the environment.

But the detailed behavior of an AI-powered machine isn't predictable or reproducible; it is not deterministic, and so the liability ends up with the machine itself. It would be an immense and possibly fruitless effort to find out whether the reason for any wrongdoing, harmful or not, was a design or technical fault, a criminal mind, or just bad luck.

I don't oppose AI; on the contrary, it has its place.
I'll try an example:

Forget attacking low-knowledge jobs that involve combining a bunch of extremely complex sensors; low-to-average human brain power and speed; knowledge, human interaction, experience, and a kind of morality, with near-real-time and deterministic (or at least comprehensible) reactions; and extremely powerful, speedy actuators with real-time feedback and control loops for all moving parts.
Instead they should focus on much simpler targets:

Imagine an A(G)I-driven President.
All problems with sensors and actuators aside, the simple-minded brain structure should be easily learned by a neural network, and the results could simply be compared to the living human, to find flaws and advantages in AI and human behavior.

But only when the AI President finds out that we are doomed because "our future is based on eternal growth" will I tip my hat in respect to AI.

vas pup • September 16, 2017 1:03 PM

@Anders
I am proud of posting on this blog the very first link to Harmony (from the BBC).
Thank you for the link you've provided.
The danger is having the doll's software in the cloud - that is a (bleeping) "invention": you have a license agreement on the software, so each time you use it you need to fetch it from the cloud, which creates a record (how long, when and where you access it, etc.). Most important is the hacking danger, since being connected to the web is a prerequisite of using the doll in such a paradigm.
Security could be provided if you got the software once, directly from the manufacturer; then you own it, and there is no need to be connected to the web while using any device (doll included).
That provides privacy and security.
Updates could be sent to you by the manufacturer through regular mail and loaded through a USB port, so the doll is never connected to the web. Not 100% secure, but 100% private (excluding the EM radiation of a doll in use).
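One sketch of how such a mailed update could at least be authenticated before loading: compare the file on the USB stick against a checksum the manufacturer publishes separately (say, printed in the mailed letter). The `verify_update` helper below is hypothetical, not any vendor's actual API:

```python
import hashlib

def verify_update(path, expected_sha256):
    """Hash a USB-delivered update file in chunks and compare it
    to a SHA-256 checksum the manufacturer published out-of-band.
    Returns True only on an exact match."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # read in 64 KiB chunks so large firmware files don't
        # need to fit in memory at once
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

A bare checksum only protects against corruption and casual tampering; a manufacturer's digital signature would be stronger, but the principle (verify before loading, no network required) is the same.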

Rachel • September 16, 2017 1:24 PM

I appreciate the discussion of ethics in AI and the Asimov context.
Two productions address these matters; I recommend:
1. A highly regarded film with an extremely intelligent, multilayered and sophisticated script that rewards repeat viewings. The acting is first class and the cinematography is breathtaking. It is essential viewing for anyone interested in AI.
'Ex Machina', only a few years old. DON'T read any synopsis; I've said enough, and you'll ruin it for yourself if you read further.

2. A TV series set in present-day London. Robots called Synths, which mostly look and act like everyone else, have increasingly taken over labour tasks from humans. Many houses have a live-in one as a servant/carer. But there is growing unease and tension, personally and collectively, about the role Synths play in society, and people feel threatened. People are protesting. The show is about that conflict between humans and their creation. There's also a thread about the Synths being more sentient than believed.
It's called 'Humans' and is a highly engrossing drama, based on the Swedish original 'Real Humans'.

JG4 • September 17, 2017 2:41 PM


Robot hacking falls into the threat-model class that I previously labeled "projected intent." Two obvious branches are taking over other systems (including, but not limited to, the power grid), and building custom systems like drones for the robotic delivery of hellfire and brimstone (or just Hellfires, which are dangerous to friendlies for 1,000 yards downrange).

I actually posted this on Friday, but it fell through the cracks after being blocked. Perhaps etiquette precludes links to relevant previous comments that are on point.

My previous comments on projected intent assumed custom hardware, but it is much more economical to repurpose existing hardware by subverting the control system:

https://www.schneier.com/blog/archives/2015/08/friday_squid_bl_1.html#c6704747

https://www.schneier.com/blog/archives/2017/06/friday_squid_bl_581.html#c6754687

https://www.schneier.com/blog/archives/2017/06/friday_squid_bl_582.html#c6755288

Your government projects a different intent than its people do. The last thing the military-industrial-psychopath complex wants is peaceful solutions to our common problems.

I've come pretty close to saying that the problem on your planet is unaccountable power, which is true for everything from Marxist states to extreme capitalism and various organs of government. They aren't held to even the most minimal levels of accountability, with or without negative feedback. We could hope that dangerous hardware and dangerous software would be managed wisely.

One of the encouraging aspects of constant surveillance is that various organs of government get caught breaking the rules. If they can get away with it, they change the rules. But each time it happens - from police planting evidence in "reenactment videos" to finding Deep State fingerprints on human-rights victims - it raises awareness. There is still hope.

vas pup • September 18, 2017 4:06 PM

The creators of a new artificial intelligence programme hope it could one day save democracy
http://www.bbc.com/news/uk-politics-40860937
British politicians on a House of Lords committee are set to investigate the economic, ethical and social implications of artificial intelligence over the coming months.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.