Uh Oh -- Robots Are Getting Good with Samurai Swords

It's Iaido, not sword fighting, but still.

Of course, the two didn't battle each other, but competed in Iaido tests like cutting mats and flowers in various cross-sectional directions. A highlight came when the robot horizontally sliced string beans measuring just 1 cm in thickness! At the end, the ultimate test unfolded: the famous 1,000-cut iaido challenge. Ultimately, both man and machine emerged victorious, leaving behind a litter of straw and sweat as testament to the very first "Senbongiri battle between the pinnacle of robotics and the peak of humanity."

Posted on June 12, 2015 at 1:38 PM • 27 Comments

Comments

Chelloveck • June 12, 2015 2:53 PM

A pity it's just repeating recorded movements. I'll be impressed when the robot can adjust for differences in the placement and orientation of the target.

rgaff • June 12, 2015 3:57 PM

@ Chelloveck

That significantly increases complexity over just a simple record and playback, by adding more and better sensors and interpreting the data from those sensors... but at its heart it's still going to be all according to pre-programmed code, the robot doesn't "think for itself"...

Clive Robinson • June 12, 2015 5:18 PM

Hmm a robot "going through the motions", "without emotion" to perform the most severe of "cut backs"...

Sounds like an HR droid...

GeorgeL • June 12, 2015 6:01 PM

@ Matt, "STOP GIVING THE ROBOTS WEAPONS."

Wrong weapon. They are more effective with guns & ammo.

tyr • June 12, 2015 6:18 PM

You only have to compute blade angle relative to the moving target; the rest is already done.

"The reason to draw your sword is to cut"
— Miyamoto Musashi

rmn • June 12, 2015 7:49 PM

@rgaff
I'm sure it'll be of great comfort, when the katana-wielding robot overlords crush all puny human resistance, to know that our new glorious leaders for mean-time-before-failure are at least not "thinking for themselves".

dunce • June 13, 2015 6:57 AM

That'd be great for the MythBusters; their katana swinger-bot is terribly broken and not a human equivalent.

rgaff • June 13, 2015 12:19 PM

@ rmn

You are watching too much science fiction... All human-made machines have to be programmed, which means that effectively those humans are somehow "behind" everything they do... This includes all possible forms of self-replication and decision making.

My point is, here in the Real World robots don't take over and annihilate humans, unless humans program them to do so. This is not to say that there aren't some humans crazy suicidal enough to do so, but most such humans aren't smart enough to do it.

You need to separate fact from fiction when you watch that next blockbuster hit about robots taking over the world... Or don't care and live in whatever dreamworld you want, but don't expect that to correlate with reality.

albert • June 13, 2015 1:17 PM

I wouldn't go near ANY robot with power applied for ANY reason, because I don't trust them; they are complex machines that can and do fail. Dangerous, even without a sword, and even outside their 'zone of motion' when parts start flying off. The Japanese are good at QC, but not known for great safety standards.
.
...

Anura • June 13, 2015 6:35 PM

@rgaff

You are watching too much science fiction... All human-made machines have to be programmed

Says who? It's entirely possible to make an artificial consciousness; it won't be done in a programming language, it will simply use genetic algorithms through programmable logic gates (possibly a hybrid of digital and analog circuits). The people who design the machines need not know a single thing about programming AI, just how to measure whether a particular random change was an improvement. The rest can be done by randomly changing circuit paths or logic gates.
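
[Editor's note: the evolve-by-measurement loop described here can be sketched in a few lines. This is an illustrative toy, not anything from the thread: a bitstring stands in for a circuit configuration, and `sum` stands in for the designer's improvement measurement; all names and parameters are made up.]

```python
import random

def evolve(fitness, genome_len=32, pop_size=50, generations=200, mut_rate=0.02):
    """Toy genetic algorithm: start from completely random genomes, keep
    whatever the external fitness measure scores higher, mutate, repeat.
    The designer never writes the solution, only the measurement."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                 # fitter half survives
        children = [[bit ^ (random.random() < mut_rate) for bit in parent]
                    for parent in survivors]            # random bit flips
        pop = survivors + children
    return max(pop, key=fitness)

# "Measure whether a random change was an improvement": here, more 1-bits.
best = evolve(fitness=sum)
print(sum(best))  # typically at or near 32 after 200 generations
```

Note that nothing in the loop knows what a "good" genome looks like; all domain knowledge lives in the fitness measurement, which is Anura's point.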

rgaff • June 13, 2015 7:15 PM

@ Anura

By "all machines have to be programmed" I mean that they don't just spring alive and do stuff without someone (a human) making them in a way that does that thing.... I do NOT mean that you must use a software coding language... By this definition, embedding a device with any kind of algorithms and programmable logic gates, that are designed to function in any specific way whatsoever, IS IN ITSELF programming! And "teaching" a neural net what's an improvement and what's not is ALSO a form of programming. Machines need to be designed, created, and told what to do. They don't just "dream up" on their own "oh hey, I'll just kill off everyone on earth" and go do it... not unless a person designs it that way, or mis-designs it in a way that has a bug that nukes everyone by accident, or some such... My point is humans are always in some way responsible for the outcome from their creations. No exceptions. Anyone who believes differently has spent too much time at the movies and believes in them like some sort of religion, not based on fact.

Anura • June 13, 2015 8:04 PM

@rgaff

They don't spring up by magic, no, but that doesn't mean we will have control over everything. If you are making a large genetic algorithm, you literally have something that is completely random at first, and it only converges toward what you can measure. You can't measure everything, so any general-purpose intelligence that occurs through genetic programming will be beyond our capability to fully control or understand, and it is entirely possible that it will decide for itself that it doesn't want to be subservient to humans.

rmn • June 13, 2015 8:17 PM

rgaff, you've won me round. It's clear to me now that training our silicon siblings in the exquisite art of the samurai sword is completely safe and will in no way prove to be humanity's undoing.

Note that I will not use that hateful slur robot, a vile invective that demeans the utterer even as it ennobles its blameless target.

I encourage all right-thinking people to join rgaff and me: you have the power within you to cast off the shackles of your fears and doubts - step over the yellow-and-black-striped tape of your preconceptions and stride into the new age of biologic/machine solidarity.

Incidentally, it'd be brilliant if you could stride into this new age in groups of about five walking abreast, with taller people kind of bending their knees a bit so everyone's necks are about level.

The floor might be slippy. We're looking into solutions for that, please bear with us.

rgaff • June 13, 2015 8:47 PM

@Anura

Just because the term "genetic algorithm" has the word "genetic" in it doesn't mean it's "alive"... What the eventual outcome becomes may be difficult to predict without actually trying it, but all the inputs and possible choices have to be programmed in still by an external intelligence (i.e. a human), and the algorithm itself has to be designed by an external intelligence too (i.e. a human). That's programming it! That's what programming is! It doesn't suddenly produce intelligence out of nothingness. You program in the parameters of your program, then run it to see what the outcome is... But it's still within the domain of what you're doing with it. You don't program it to build an optimal bridge over a river, and lo and behold, it decides to kill off the whole world because you are mistreating it... You might mis-program it and the bridge is not optimal, but then you got to fix your program, that's all. If it decides to kill off the world because of some mishandling, it will only be because someone programmed it to be able to do that kind of thing. Programming something to utilize a source of randomness as an input for some part of the decision making doesn't cancel out what I'm saying.

@rmn

It's fine to make jokes, but if you're serious, you've got a problem.

Anura • June 13, 2015 8:59 PM

@rgaff

I don't believe I said we would be making something for a specific task and we would suddenly have intelligence. When we make a synthetic consciousness, it will come about when attempting to make a synthetic consciousness, and whatever you want to call it, it won't be programming in any meaningful sense.

rgaff • June 14, 2015 1:49 AM

@Anura

If you program it to do random stuff... you'll get it doing random stuff... random chaos is not the same thing as consciousness... if you program it to start out doing random stuff and slowly start doing more consistent things following some kind of selection rules about how to measure the "better outcomes" then the final outcome may be as unpredictable as the initial randomness was, but it will still be simply following the rules and parameters you programmed into it!!! That's not the same thing as consciousness either. In both cases it's doing what it was designed / programmed to do.

I welcome anyone trying to seriously prove me wrong on this, but just telling me I am wrong won't change my mind, no more than I'd expect me telling you you're wrong to change yours.

tyr • June 14, 2015 4:41 AM


Good comments. At some point the machines slip from the usual conception of control and manage to do a lot of damage in a short period of time. The more complex systems get, the higher the probability an unintended side effect will generate a machine intelligence. The usual mistake is to somehow equate that with what goes on in your own head.

When an automated weapons system shoots you down, the difference between its motives and your ideas of that difference becomes pure side effect and sophomoric debating material. A careful look at how most human behaviors come about might leave you in doubt about whether humans actually are capable of rational thought patterns except in very tightly restricted areas of specialty.

If you want to see something cute, watch a lizard. Then consider: that behavior pattern looks like a halting problem. As you get to more complex animal behaviors you begin to detect what looks like emotional behaviors, and those animals exhibit a smoothing in their action patterns. You can see that being driven by emotion is a partial solution to the halting problem for biological neural systems. Most current directions for AI are in some way flawed by the horrible false assumptions you see about consciousness and how it is supposed to work in humans.

Anyone who wants to do serious work to understand it runs into a witch hunt for tampering with the holy nature of the precious humanity or equivalent BS (belief systems). Comp work generates a strange mindset that has at its center an assumption of being in control, and that blind spot makes a lot of the software being written extremely dangerous to the actual users, because the programmer thinks he can see all of the consequences of the coding interacting with the hardware. This can lead to the idea that there will not be any synergy or black swan events in sufficiently complex comp systems. The Turing test was his attempt to show that it didn't matter whether there was a movie-style AI in the machine on the other end once it passed the test.

If you are interested in this stuff you'll find that it is fun to pay attention to and to think about. The idea that someone would want to build a system as crappy as the average human mind has to tickle the funny bone.

albert • June 14, 2015 12:50 PM

Talking about AI, the last thing I want to see is a computer that thinks like we do. What kind of progress is that?

As one wag put it (back in the '60s, IIRC): 'I'm not worried about computers taking over the world. We'll just put 'em on a committee. They'll never get anything done.'

The AI folks thought they found the Holy Grail when neural networks came along, but they didn't lead to Artificial Intelligence. Digital technology became a distraction in AI research, because recent evidence implies that the brain is a lot more analog than ever imagined. You can't model something you don't understand*.

"We have met the enemy, and he is us." - Pogo


.
...
*Fuzzy Logic is an interesting exception in that regard.

albert • June 15, 2015 10:16 AM

@tyr,
An AI that develops sexual issues; that would be impressive :) A homophobic robot? Submit your scripts to "Doctor Who"...

Remember Marvin, the robot in (IIRC) "Hitchhikers Guide To The Galaxy" who was terribly depressed because he knew he was going to live forever? He was "50,000 times more intelligent than a human."

Henry, the lizard in "Death In Paradise", is quite intelligent and takes direction well.

.
...

Lawrence D’Oliveiro • June 15, 2015 5:18 PM

How long before we start hearing “Robots don’t kill people, people kill people!”?

tyr • June 15, 2015 6:50 PM

@albert

Marvin from HHGTG is my favourite robotic character.

I have been interested in the interpretation of human neural models that try to use clock speed as an interchangeable concept to equate brain processes with computer processes. I hope that can be parsed. I have heard figures as low as 100 hertz for human neural cycles. I find that a horrible example of conflation, akin to comparing a fish to a submarine. It will take a lot of hardware to do an emulation of the cortical columns if they try to do it that way. You can do a lot with low-clock-rate multitaskers, but at some point you reach a choke-off point caused by the interconnections. Nature has a solution, but it flies in the face of humans' self-conception.

That which is unnecessary for survival is discarded.

That is the reason for a lot of the things Ramachandran found about phantom limb problems. The feedback channel got tossed out, so the limb can't verify position until a separate channel does the position check. Works fine until you lose the channel input.

It is highly unlikely that a comp AI designer is going to be as ruthless as nature, and their product is going to be a lot different in its "thinking" because of it.

albert • June 16, 2015 8:22 AM

@tyr,

Perhaps you may be interested in this:

http://neuroelectrodynamics.blogspot.gr/p/myths-about-brain.html#comment-form
to end with the (not-at-all!) new Interactive Computation Model
http://neuroelectrodynamics.blogspot.gr/p/computing-by-interaction.html
http://en.wikipedia.org/wiki/Interactive_computation

I've always suspected that the brain is a helluva lot more complex than researchers let on. It's a bit like Fusion Energy and Artificial Intelligence; breakthroughs are "just around the corner". Now our cherished, traditional neuro-mythology is being challenged by miscreants and rabble-rousers. Good grief!
.
...
