President Obama Talks About AI Risk, Cybersecurity, and More

Interesting interview:

Obama: Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cybersecurity continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we’ve got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.

What I spend a lot of time worrying about are things like pandemics. You can’t build walls in order to prevent the next airborne lethal flu from landing on our shores. Instead, what we need to be able to do is set up systems to create public health systems in all parts of the world, click triggers that tell us when we see something emerging, and make sure we’ve got quick protocols and systems that allow us to make vaccines a lot smarter. So if you take a public health model, and you think about how we can deal with, you know, the problems of cybersecurity, a lot may end up being really helpful in thinking about the AI threats.

Posted on October 20, 2016 at 6:16 AM

Comments

r October 20, 2016 6:43 AM

Joking of course, but maybe he’s been bitten by the programming bug. He might be plugging his way into a job with Kaspersky or somebody. :O

Maybe Juniper.

Dr. I. Needtob Athe October 20, 2016 7:29 AM

I have no doubt that the president is influenced, either directly or indirectly, by this blog.

Clive Robinson October 20, 2016 8:21 AM

Pres Obama has been known to make jokes about sorting algorithms, so unless he’s got a very geeky script writer, he has more than a passing interest in the more theoretical side of computing.

That said, I'm not sure a public health pandemic model, such as that used by the likes of the CDC for influenza and other airborne pathogens, will be that useful in cyber security scenarios.

@ Bruce.

In the last sentence of the quote you have "AI" rather than the more likely "AV". Do you know what was originally said?

peter October 20, 2016 9:21 AM

@Clive Robinson

“Then there could be an algorithm that said, “Go penetrate the nuclear codes and figure out how to launch some missiles.” If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems.”

Obama talks about AI throughout the article, including the above quote, so I think it’s AI, not AV.

ab praeceptis October 20, 2016 9:51 AM

Pardon me, but that's a classic example of why major parts of the eurasian intelligentsia consider us-americans, uhm, let's politely say, easy to wow, and why quite a few have their own understanding of what "science" is over there.

So, what do we have here? Basically a president and a "major light in science" (Ito) doing some "sophisticated" blabla in a magazine that eurasians would hardly see in the context of science (but rather as a tech tabloid).

The president is painted this way for pretty much two reasons: a) it pleases him to be seen like that, and b) it pleases the us-americans to be seen like that. His real tech know-how is largely unknown and, frankly, not of importance anyway; after all, a president's job qualifications don't call for tech geekiness.

The "light of science", Ito, is a compsci dropout from a not exactly elite university. He may be a very, very bright man, I don't doubt that, and may be very successful in his diverse (mostly tech) business endeavours, but does that make him someone to listen to in the field of compsci? The us-americans seem to think "yes" (hardly surprising); the eurasians seem to think "no".

Again, this is in no way against Mr. Ito, who is a successful and quite probably highly intelligent man. Probably there really are reasons to admire him, but in other fields.

Where I come from we have an idiom, "Reader's Digest stuff", and that's what this is. Entertaining to read, maybe, with more or less interesting figures, maybe, but basically irrelevant from a professional perspective.

Plus, the president is leaving, and neither of the candidates to be the next president offers us any reason to consider them tech savvy.
Nothing to celebrate here, and little to talk about.

Clive Robinson October 20, 2016 2:07 PM

@ Peter,

Obama talks about AI throughout the article, including the above quote, so I think it’s AI, not AV.

The problem is that AI does not fit the two paragraphs, whereas AV does. That's why I'm wondering if it is a transcription error, or something misheard by the person making the written transcript.

If Obama did say AI then it flies in the face of most of the rest of what he has said. As noted, AI in any real sense is at least a decade away, as it has been for the last sixty or so years (arguably eighty if you consider the musings of Alan Turing and others about "electronic brains" in the period between the two World Wars).

Further, to say "AI threats" in the current cyber security context comes across as "whoa, there be dragons" type mysticism.

Hence applying the razor suggests to me it's a mistake made after the utterance, though others may disagree (depending on their perspective).

EvilKiru October 20, 2016 3:27 PM

@Clive: What are your definitions for AI and AV? When I see those in the context of a security blog I infer them to mean Advanced Intrusion and Anti-Virus. In that context, "Advanced Intrusion threats" makes a lot more sense to me than "Anti-Virus threats".

Nutty Professor October 20, 2016 4:46 PM

More to the point of this blog, I found this Obama quote significant…

“Figuring out how we regulate connectivity on the Internet in a way that is accountable, transparent, and safe, that allows us to get at the bad guys but ensures that the government does not possess so much power in all of our lives that it becomes a tool for oppression—we’re still working on that. Some of this is a technological problem, with encryption being a good example. I’ve met with civil libertarians and national security people, over and over and over again. And it’s actually a nutty problem, because no one can give me a really good answer in terms of how we reconcile some of these issues.”

Clive Robinson October 20, 2016 5:42 PM

@ EvilKiru,

Have you read the opening sentences of the article?

They are,

    IT’S HARD TO think of a single technology that will shape our world more in the next 50 years than artificial intelligence. As machine learning enables our computers to teach themselves, a wealth of breakthroughs emerge, ranging from medical diagnostics to cars that drive themselves. A whole lot of worry emerges as well. Who controls this technology? Will it take over our jobs? Is it dangerous? President Obama was eager to address these concerns.

From which it can be reasonably assumed that the AI in question is “Artificial Intelligence”.

anonymous October 20, 2016 6:00 PM

Clive Robinson • October 20, 2016 8:21 AM
Pres Obama has been known to make jokes about sorting algorithms,
so unless he’s got a very geeky script writer, he has more than
a passing interest in the more theoretical side of computing.

Obama’s display of knowledge about sorting algorithms was staged.

“The proceedings at Google are not unremittingly serious affairs. Mr. Schmidt asked Senator McCain, ‘How do you determine good ways of sorting one million 32-bit integers in two megabytes of RAM?’ Immediately signaling that the question was asked in jest, Mr. Schmidt moved on. Six months later, Senator Obama faced the same question, but his staff had prepared him. When he replied in fluent tech-speak (‘A bubble sort is the wrong way to go’), the quip brought down the house.” (“For the 2008 Race, Google Is a Crucial Constituency”. New York Times. December 02, 2007).

see also https://www.quora.com/How-does-Obama-know-about-bubble-sort

The most memorable moment came during the Q and A. “What,” asked a Googler to the politician, “is the most efficient way to sort a million 32-bit integers?”

It was a hard-core programming question an engineer might be asked in a job interview at Google. But the candidate squinched up his face in concentration, as if racing through various programming alternatives. “Well,” he finally said, “I think the bubble sort would be the wrong way to go.”

The crowd erupted in appreciative laughter. The exchange had obviously been staged. Indeed, Andrew McLaughlin had briefed the candidate. And before the session, Schmidt had prepped him on how he might answer such a question.

-In the Plex: How Google Thinks, Works, and Shapes our Lives by Steven Levy
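
For context on why the quip works: bubble sort is O(n^2), so a million elements would need on the order of 10^12 comparisons, versus roughly 2 x 10^7 for an O(n log n) sort. Here is a minimal illustrative sketch in Python (not from the question, the answer, or the article, and using a small sample rather than a million integers; the 2 MB memory constraint in the original question, which would force an external or radix-style sort for 4 MB of data, is not addressed):

    # Contrast an O(n^2) bubble sort with Python's built-in O(n log n) sort
    # on a small sample of random 32-bit integers.

    import random
    import time

    def bubble_sort(a):
        """Classic O(n^2) bubble sort, included only for the comparison."""
        n = len(a)
        for i in range(n):
            swapped = False
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:
                break
        return a

    if __name__ == "__main__":
        data = [random.getrandbits(32) for _ in range(5000)]  # small sample, not a million

        t0 = time.perf_counter()
        bubble_sort(data.copy())
        t1 = time.perf_counter()
        sorted(data)  # Python's built-in Timsort, O(n log n)
        t2 = time.perf_counter()

        print(f"bubble sort: {t1 - t0:.2f}s   built-in sort: {t2 - t1:.4f}s")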

goro October 20, 2016 7:16 PM

@ Clive,

I think AI is correct and the talk timely. President Obama is making the rounds.

PREPARING FOR THE FUTURE OF ARTIFICIAL INTELLIGENCE
Executive Office of the President
National Science and Technology Council
Committee on Technology
October 2016

The Current State of AI

Remarkable progress has been made on what is known as Narrow AI, which addresses specific application areas such as playing strategic games, language translation, self-driving vehicles, and image recognition. Narrow AI underpins many commercial services such as trip planning, shopper recommendation systems, and ad targeting, and is finding important applications in medical diagnosis, education, and scientific research. These have all had significant societal benefits and have contributed to the economic vitality of the Nation.

General AI (sometimes called Artificial General Intelligence, or AGI) refers to a notional future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. A broad chasm seems to separate today’s Narrow AI from the much more difficult challenge of General AI. Attempts to reach General AI by expanding Narrow AI solutions have made little headway over many decades of research. The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades. (p. 7)

Today's Narrow AI has important applications in cybersecurity, and is expected to play an increasing role for both defensive (reactive) measures and offensive (proactive) measures. (p. 36)

https://www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf

As well, this recent reporting on a security summit in Washington DC:

Intel Chiefs Warn on Cyber, ISIS Threats
The Cipher Brief | SEPTEMBER 8, 2016 | MACKENZIE WEINGER

AI and machine learning will be particularly crucial in the area of cyber defense, Rogers noted, helping to optimize resources and improve quickly to address the problem. But while AI and big data will become important to intelligence tasks, they will not replace the “human dynamic,” he said. “It’s got to be some combination of the two,” Rogers said.

Sapp said it is key to have “ground systems that can learn and operate at the speed of cyber, not at the speed of human beings making decisions.”

“That’s been a big change for us, and I think it’s going to be a huge change for the community in terms of the way we task and do our business. And they’re not only systems that can learn and think, they are systems that can integrate across intelligence disciplines and across domains,” she said.

https://www.thecipherbrief.com/article/exclusive/intel-chiefs-warn-cyber-isis-threats

LOL October 20, 2016 8:08 PM

re: AI vs AV

Just for a moment, leave aside the specific context of this prepared speech about “AI threats” and think about what may have been indicated unintentionally…

There be dragons, but it’s not necessarily all woo mysticism.

https://fas.org/blogs/secrecy/2016/08/dsb-autonomy/

You might not personally use the term “self-learning” but AI as it exists today is already playing a role here!

What exactly is the GENIE Initiative?
Look at the progression from THINTHREAD to TURBULENCE.
Consider TURMOIL, TRAFFICTHIEF, & TURBINE.
Are SURPLUSHANGAR diodes the divide between QUANTUM and QUANTUMTHEORY!?

LOL October 20, 2016 9:10 PM

@r

Did I forget? If I’m not mistaken, you may have personally influenced this particular line of inquiry 😉

r October 20, 2016 9:39 PM

I admit, the quoted line from Nutty Professor is suspect. I don't have time to follow this nutty stuff right now.

They could be feeding IT strategy, troop movements, training IT. Who knows; what bothers me about what he said (as quoted) are the implications that it is being used in a not-very-kosher way (e.g. a losing battle where those who are spying can't justify or curb the other side's concerns at all).

I wish I had more time, hopefully tomorrow.

We are most certainly at a tipping point for a lot of things. I don't see why, if we can train AI on image recognition, we can't figure out a way to plug financial transactions or troop movements into a data diode to see what patterns emerge.

Like I said, both ends of the equation.

JPA October 20, 2016 9:39 PM

I think the infectious disease analogy with security can make a lot of sense. The security processes are like the immune system. When the immune system is too weak it can't protect adequately. When it is too active it can lead to an autoimmune disease, which is extremely debilitating if not lethal.

Security has the same issue: how to avoid being too weak while at the same time not becoming so active that it damages or destroys the society it is supposed to protect. This balance, which is dynamic, can be achieved by a push for increased security from government and strong opposition from those who support civil liberties. These forces are opposing but not opponents.
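
One way to make that trade-off concrete is a detector with a tunable threshold: set too leniently it misses attacks (the weak immune system), set too aggressively it blocks legitimate activity (the autoimmune failure). A toy sketch, with made-up scores and thresholds rather than any real detector's output:

    # Vary a detection threshold and count false positives (benign activity
    # blocked) versus false negatives (hostile activity allowed).

    import random

    def evaluate(threshold, benign_scores, hostile_scores):
        """Return (false positives, false negatives) for a given threshold."""
        fp = sum(1 for s in benign_scores if s >= threshold)   # benign, but blocked
        fn = sum(1 for s in hostile_scores if s < threshold)   # hostile, but allowed
        return fp, fn

    if __name__ == "__main__":
        random.seed(0)
        benign = [random.gauss(0.3, 0.1) for _ in range(1000)]   # normal activity scores
        hostile = [random.gauss(0.7, 0.1) for _ in range(50)]    # attack activity scores

        for threshold in (0.2, 0.5, 0.9):
            fp, fn = evaluate(threshold, benign, hostile)
            print(f"threshold={threshold}: {fp} benign events blocked, {fn} attacks missed")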

r October 20, 2016 10:25 PM

That paper you posted from fas.org is a memo for … acquisition, technology and logistics.

“”
The study focused on three areas:

institutional and enterprise strategies to widen the use of autonomy[1];

approaches to strengthening the operational pull[2] for autonomous systems[3];

and an approach to accelerate the advancement of the technology for autonomy applications and capabilities[4].

The study concluded that action is needed in all three areas to build trust[5]
and enable the most effective use of autonomy[6]
for the defense of the nation.
“”

I think that’s right?

[2] pull? like pulling your own weight but for a computer?

EvilKiru October 21, 2016 3:16 AM

@Clive: I had not read any part of the article other than the parts Bruce quoted. Wow, was I off base!

Clive Robinson October 21, 2016 4:21 AM

@ EvilKiru,

Wow, was I off base!

No worries, the real problem is there are not that many useful two-letter "TLAs" 😉 So like all useful objects they get heavily overloaded.

As for reading articles, I've had a bit of a bump with that. Yesterday I was away from my usual "habitat", using a public PC to look at that ASLR paper from the previous post on this blog. I skim-read it and thought it needed more thought, so later I downloaded it onto the phone and tried to open it whilst traveling, only to get blank pages… So it will have to wait a couple of days until I can use another system that does read it, and then I can make the comments I said I would… "Don't you just love the rapid pace of technology NOT".

@ All,

Anyone else remember Linus Torvalds' insightful and tactfully –for him– worded comment on Artificial Intelligence? B-)

    “The whole ‘singularity’ kind of event? Yeah, it’s science fiction, and not very good sci-fi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really.”

I guess he and Pres Obama see things slightly differently 🙂

Snarki, child of Loki October 21, 2016 7:18 AM

Am I the only one seeing DNS problems this morning? Did Putin finally get fed up and decide to go on an all-out attack?

muhammad usman October 21, 2016 9:42 AM

It is great that the information security challenge is acknowledged at the highest possible level. Recently I was reading about embedded medical device security, and I was quite alarmed at how defenceless embedded devices are. With their limited operating systems, memory, and buffers, it is going to be life-threatening some day.

So yes, the challenge is huge and the response is not what it should be. People do not even consider medical devices as something related to information technology, or as something that can be accessed maliciously.

r October 21, 2016 11:02 AM

I think Obama is going to be even greater out of office, despite not sticking to the line he took prior to being elected (a la the NSA). A constitutional lawyer who was president and who codes? I'm in love.

Tom Corwine October 21, 2016 12:22 PM

@Snarki, child of Loki

Yes, I was experiencing DNS problems, too. I ended up pointing to a DNS server in Germany to get DNS back.

Then, I got a notification that CBSN (CBS’s online news) had a “breaking news” story about it featuring a “former NSA” employee explaining a massive ongoing DNS attack. He gave some bizarre advice for everyone to change their passwords.

r October 24, 2016 4:23 PM

@Clive,

Remember too, that when a president has questions there are people just waiting to sell him or her their rehearsed answers. Linus? Not so much, imo.

Clive Robinson October 25, 2016 12:18 AM

@ r,

Linus is popular with a certain crowd because of his sometimes colourful outbursts. Some call it "character", others "entertainment". Either way it raises a smile when I read the more publicized ones.

It also needs to be said that he is more often right than wrong, which makes him immensely irritating to some, who thus tend to over react when he does make a mistake in their eyes.

But when all is said and done, he does a job for which few have either the talent or the stamina, and thus his short pithy statements can be seen as a time saving measure.

vas pup October 25, 2016 11:32 AM

A study suggests there are parallels between the way youngsters turn into hackers and how youths become addicted to drugs and alcohol:

http://www.bbc.com/news/technology-37752800

“[The hormone] dopamine can be released quickly as vulnerable youth achieve frequent and rapid successes online, and if these successes are linked to anti-social acts, such as hacking, they will be reinforced to pursue further ends to obtain their gains,” it states.
The study suggests a large part of the problem is that many youngsters see the internet as a place that is not watched over by guardians.
The report adds that often their goal is not financial gain, but rather to boost their reputation among other hackers in order to compensate for what might be a lack of self-esteem in the rest of their lives.
The authors also suggest educators develop new tests to identify which children have the highest potential for technological skills when they are as young as four, so they can be “nurtured and rewarded” for using their talents in ways that benefit society.
“So, rather than trying to change what people are interested in, we should be steering them to pro-social activities rather than criminal ones, and looking to what’s in their surroundings that influences the path they go down.”

Clive Robinson October 25, 2016 3:58 PM

@ vas pup,

The tell in what you’ve quoted is,

    … not financial gain, but rather to boost their reputation among other hackers in order to compensate for what might be a lack of self-esteem in the rest of their lives.

Whilst there may be a few "400lb chair benders" amongst them, many will be intelligent kids picked on by those who, shall we say, have more brawn than brains and hunt in packs for the fun of inflicting pain of one form or another on "specky four-eyes brain box".

The hacking is an outlet for those bullied to get "self esteem" amongst those they regard as their peers, mentors or betters. Thus the way to prevent this unhealthy problem is not "selection for segregation" of those with intelligence at an early age, but "selection and correction" of those who bully (something most education establishments are actually quite bad at, despite having policies etc in place).

The thing is, if you allow an intelligent child to be bullied, a number who do not gain an outlet or justice will turn their thoughts and actions to revenge. And revenge driven by intelligence is sometimes not focused on the bully but on those who failed to stop the bully, and that can be very nasty indeed, with many other lives being adversely affected.

vas pup October 26, 2016 11:12 AM

@Clive:
"The thing is, if you allow an intelligent child to be bullied, a number who do not gain an outlet or justice will turn their thoughts and actions to revenge. And revenge driven by intelligence is sometimes not focused on the bully but on those who failed to stop the bully, and that can be very nasty indeed, with many other lives being adversely affected."
Clive, I agree. My guess is that combined brain power is the most valuable resource of any organization, company, or country as a whole, and if people in power understand the idea of {"nurtured and rewarded" for using their talents in ways that benefit society}, they have to take care of the security of those talents as part of that nurturing, and protect them from bullies as well.
When you have a fair, affordable, and reasonable (civilized) conflict resolution system, the space for revenge is minimized (not eliminated altogether, for sure).
