Comments

Bauke Jan Douma April 10, 2018 6:39 AM

The acronyms DARPA and CHESS should give an indication of what this is all about.

David Rudling April 10, 2018 7:02 AM

“… the kind of thing that can dramatically change the offense/defense balance.”
Yes, but a balance can tip either way.
This project is presumably aimed at creating a toolset.
Any vulnerabilities discovered by that toolset and swiftly patched will be good news. However, the toolset will be invaluable to those who will misuse it (and it WILL leak – guaranteed).
I wonder what clever acronym the guys at DARPA will think up for PANDORA’S BOX, or perhaps GENIE and BOTTLE.

Name (required) April 10, 2018 7:23 AM

My guess is that only a very limited subset of vulnerabilities can be discovered this way.

Maybe we need a development framework focused on auditability and analysability of code?

@David Rudling – it’s just a matter of time before someone implements it. If the “bad guys” got there first, we would be in a much worse situation.

anon April 10, 2018 8:59 AM

This would probably become an extension to existing static analysis tools. It will also suffer from limitations similar to those of static analysis tools, but expand the types of bugs they can find.

Though if it utilizes machine learning or artificial neural networks, one instance trained on one dataset may find some bugs that another instance trained on a different dataset would miss, and in turn miss some that the other instance would find. Presumably the intelligence agencies would have the largest collection of instances and training sets available, could run code through many of them, and would thereby find exploits that more commonly available tools miss.
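
As a toy illustration of that ensemble effect, here is a minimal sketch in Python with scikit-learn. Everything in it is a hypothetical stand-in: the random feature vectors and labels take the place of a real bug-detection corpus, and the random-forest models stand in for whatever detectors such a system would actually use.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def train_detector(features, labels):
    """Train one detector instance on one training corpus."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(features, labels)
    return model

# Three training corpora from different sources (synthetic stand-ins here).
corpora = [(rng.random((500, 32)), rng.integers(0, 2, 500)) for _ in range(3)]
detectors = [train_detector(X, y) for X, y in corpora]

# Scan one target: each instance flags its own suspects; the union is
# what someone running many differently-trained instances would see.
target = rng.random((100, 32))
flagged = [set(np.flatnonzero(m.predict(target))) for m in detectors]
print("per-model:", [len(s) for s in flagged], "union:", len(set.union(*flagged)))
```

The only point is that differently-trained instances disagree at the margins, so the union of their findings is larger than what any single instance reports.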

strawman April 10, 2018 9:29 AM

“Scaling up existing approaches to address the size and complexity of modern software packages is not possible given the limited number of expert hackers in the world, much less the Department of Defense (DoD).” [DARPA-SN-18-40]

Poor DoD cannot keep up! Who’s gonna help them to build the super-hacker AI?

This project could also advance humanity, but only under one necessary condition: the software, documentation, and insights gathered must be free and open, accessible to everyone. Let the public find their vulnerabilities themselves! This is the only way to defeat the hackers.

DARPA Special Notice 18-40 does not hint in that direction.

Ho Ho Ho April 10, 2018 9:50 AM

I have a different take on this. I recently was at a symposium on machine learning and cybersecurity and after sitting through the program I had one overall reaction.

FAD

FAD

FAD

As I told a fellow attendee over the buffet table: how the hell is there a symposium on machine learning and cybersecurity when no one, and I mean no one, ever bothered to even discuss what the hell a machine is or how to define learning? I assert that no one even knows what machine learning is; everyone seems to define it differently, and as far as I could tell, “machine learning and cybersecurity” just means bigger and faster databases and a hand out for more money.

@bruce writes:

“This is the kind of thing that can dramatically change the offense/defense balance.”

I doubt it. To err is human, but to really foul things up requires a computer.

Denton Scratch April 10, 2018 10:05 AM

60 years ago, AI researchers claimed that artificial intelligence would be here within 20 years. That was back in the days of Eliza and Parry. 40 years ago, we had “expert systems” (that could in principle explain their reasoning), and primitive experiments with neural networks (that couldn’t).

20 years ago, game developers announced that their game engines were using artificial intelligence. I think they were just using systems of interlinked rules – ‘rule-based systems’. I suppose you could designate that as another type of “AI”, but anyone who’s played a computer game against the “AI” knows that intelligence is not one of its attributes.

I’m not sure what is behind the present hype about AI. I’m aware of no evidence that there’s been a breakthrough in AI research in (say) the last 10 years. All I can see is that Google and others have been applying neural networks to extremely large population datasets, and using them to infer rules; and that the combination of neural nets and rule-based systems can be used to control an automobile on a public highway – possibly better than a human driver (not a particularly impressive achievement, judging by the human drivers around here).

So I don’t really understand what this DARPA initiative is about. I can’t see any huge dataset that neural nets could be applied to in order to induce new rules, and I can’t really see how game-engine-type rule-based systems can be shepherded by humans into exposing software vulnerabilities. (I read the linked article; it did not enlighten me.)

True AI is always about 20 years into the future, and has been for at least 60 years.

Drone April 10, 2018 10:53 AM

Great, my taxes funding something that will in the end see the widest take-up by cyber-criminals. Oh well, at least when their AI-assisted systems are ripping us all off, we’ll know how they’re doing it, and that it’s impossible to stop them.

albert April 10, 2018 11:42 AM

@Denton Scratch,
Should I stop waiting for fusion power?
Is that cancer cure just around the corner?

@Drone,
AI is not ready for prime time, but it’s already being used to pick out ‘likely’ terrorists and other criminals (but not critics of the gov’t; they have their own special lists).

OT:
The New Yorker finds it necessary to preface Andy Borowitz articles with the word “satire”. Are Americans really that stupid? Do they need labels to explain things that are clearly obvious?

xhttps://www.newyorker.com/humor/borowitz-report

Pick any article; they’re all good.

. .. . .. — ….

David Rudling April 10, 2018 11:51 AM

@Name (required)
Unfortunately, the point you make is true. This seems to be a lose/lose situation for DARPA.

Denton Scratch April 10, 2018 12:19 PM

@albert I personally think fusion power is closer than the kind of AI that could be substituted for human intelligence. I believe they can now make a fusion reaction that actually outputs power.

As far as cancer cures are concerned: it turns out that there are many kinds of cancer, and it seems that many of these can now be treated successfully.

That is: I think AI is a weird one, a thing we thought we’d have mastered long ago, yet we seem no closer now than we were 20/40/60 years ago. I suppose there could be a breakthrough at any time, but I’m not buying shares in AI.

RealFakeNews April 10, 2018 12:33 PM

@Denton Scratch: I agree. These AI/ML monikers are thrown around as if everyone knows exactly what they mean, but it seems to me they aren’t actually anything in particular; some researcher called something “AI” and the name stuck.

There seems to be this idea that if it is called AI it has magical powers and can do things other systems can’t. That is BS in the extreme – it is still a computer program, working like any other computer program.

Instead of throwing ever more powerful tools of dubious utility at these problems, why don’t we start by firing the jackasses who just write code, and hiring people who know what they’re doing to actually audit and fix the software?

Half the reason software is insecure is because it was poorly written in the first place. It is one area where quantity over quality is most certainly having a negative impact.

No one cares how well software is written, only that it does what they want it to do. They only care after the software has been used to steal their personal data.

RealFakeNews April 10, 2018 12:40 PM

To add: Intel said recently that they will not fix the silicon, and will continue to rely on software patches to mitigate security vulnerabilities.

This is a perfect example of trash thinking overriding serious and real security threats, likely in the name of profit and performance (IMHO, if they fixed the silicon, their performance advantage, particularly over AMD, would vanish).

Why is anyone accepting this?? We should be saying to Intel: “NO – FIX YOUR STUFF”.

Why isn’t this happening???

vas pup April 12, 2018 9:01 AM

TED 2018: Technology reveals fear and other emotions:
http://www.bbc.com/news/technology-43653649
Speech analysis technologies are already being developed to provide insights into people’s mental and physical health, Prof Crum added. She gave three examples:
• machine learning can analyze the words people use to predict whether someone is likely to develop psychosis
• dementia and diabetes can be revealed by alterations in the “spectral coloration” of a person’s voice; the term refers to a way the frequencies and amplitudes of sound can be represented on a graph (see the sketch below)
• linguistic changes associated with Alzheimer’s disease can appear more than 10 years before a clinical diagnosis
In an era when many are considering cutting down their digital footprint, Prof Crum urged the opposite approach.
“If we share more data with transparency and effective regulations, it can help create empathetic technologies that can respond in a more human way that improves our lives,” she said.

Humans, as the most vulnerable link in security, should be analyzed by AI as well (see above) to assess risk.
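
On the “spectral coloration” point above: a minimal sketch, in Python with NumPy and SciPy, of the kind of frequency/amplitude representation being described. The two sine tones are synthetic stand-ins for a voice recording, and the spectral centroid is just one crude summary statistic; none of this implements the clinical analyses above.

```python
import numpy as np
from scipy.signal import stft

fs = 16_000                        # assumed sample rate, in Hz
t = np.arange(fs * 2) / fs         # two seconds of synthetic "voice"
signal = np.sin(2 * np.pi * 180 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)

# Short-time Fourier transform: a grid of frequencies x time frames.
freqs, times, Z = stft(signal, fs=fs, nperseg=512)
power = np.abs(Z) ** 2

# Spectral centroid per frame: one scalar per frame describing how the
# signal's energy is "colored" across frequencies over time.
centroid = (freqs[:, None] * power).sum(axis=0) / power.sum(axis=0)
print(f"mean spectral centroid: {centroid.mean():.1f} Hz")
```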

PeaceHead (again) April 19, 2018 7:50 PM

One of my personal “nightmares” is a scenario where some kind of AI or AI-hybrid system is busy “doing its thing” and gets “stuck on repeat”. Take, for example, some kind of NDT scanner where a reflected signal is not supposed to cause harm but is radiated at a person or their environment or tools.

In the case of a person-controlled system, an “overuse” victim can say, “Hey, would you please turn that signal off, that hurts!” or “It burns, please stop!” or “Hey, those vibrations are making me feel sick, you can turn that off now… please!”

In such a case, the likelihood of overuse is diminished, because most people who are NOT sociopaths will typically relent and stop using the scanning tool before it becomes abusive (which is always a possibility with certain types of directed, so-called “non-destructive testing” energy: electromagnetics, ultrasonics, radiation, etc.). So abuses would be minimal, hopefully, or at least there might be a factor of remorse or restitution.

However, with an AI or an AI hybrid there’s not much preventing a relentless, almost-nonstop signal assault—a “SIG-HIT”, or “SIG-BURN”, or “Near-Ballistic/Caustic SIG”. There’s nothing preventing a tautology or a logic flaw or an endless loop or a phase-locked loop or whatever you want to call it… unless the “technician” steps in to TURN IT OFF or tries to “talk some sense to ‘I.T.’”.

Think about it: would you rather have a manual off switch for DARPA-automated, high-amplitude, exotic-penetration, AI-coordinated directed energies, or would you rather have some hyper-fuzzy-logic, discrete-decision-criteria-unknown, programmable-logic software/malware/interoperability conundrum deciding in non-human terms when “ENOUGH IS ENOUGH”?

This is not actually a joking matter.
There have been historical case studies where ethically faulty human researchers actually inflicted bodily harm on subjects as part of tests of human ethics. They eventually stopped, and some of them were remorseful, but they did knowingly torture people.

In the risky ages of “point and click”, “set and forget”, and “plug and play”, do we really want weapon GUIs so autonomous that they might never stop if allowed to continue indefinitely?

Last consideration…

Earlier this past year, the BBC had a terrifying yet honorably heartfelt and quite noble news brief about how some almost-expired yet still regrettable Cold War unmanned automated antiques lingering in the world could still ACCIDENTALLY cause lethal problems, simply because they were tragically designed to continue operating WITHOUT HUMAN INTERVENTION. I think the term used was “Dead Hand Trigger”, or something blisteringly haunting like that.

So even AFTER THE WARS ARE OVER, we are still at risk of dying by mechanical, Rube Goldberg types of TOTAL CAVEAT EMPTORS, simply because the gear is designed to listen for remote signs of expected (yet now antiquated) cultural, military, and/or economic activity in terms of SIG-INT. And if and when the “gap” is big enough or long enough, such BAD “ROBOTS” could switch into dangerous modes. Horrible!

I certainly hope there is a concerted effort to dismantle and eliminate such deadly mil-antiques!

This is why people like me exist: to constantly speak of PEACEFUL COEXISTENCE and detente with Great Sincerity and Honesty. Who is to say whether the mechanisms might or might not respond to automated HUM-INT which is PEACEFUL (rather than otherwise)?

I sincerely hope people will be thinking of this stuff.
Analog answering machines retrofitted with large reels of black-box ware are capable of quite a lot, given just a Turing trick or several!

Yet I digress (or do I?).

May Peacefulness Prevail Within All Realms of Existence!
We still love you, our distant peace-pact friends!

PeaceHead July 9, 2018 5:41 PM

Putin Urges Closer International Cybersecurity Cooperation:

https://www.usnews.com/news/world/articles/2018-07-06/putin-urges-closer-international-cybersecurity-cooperation

This seems like an olive branch during tough times.
Geopolitical detente is kind of like a marriage: whether in sickness or in health, an unspoken reaffirming of vows to do no harm to each other. It’s a good idea.

Even if you’re the type of person who is distrustful or skeptical, there’s no escape from eternal pessimism and total isolation unless those behaviors are put on hold so that greater understanding can occur.

Distrust based upon reputation isn’t a thorough flawless predictor of present or future actions, beliefs, or motivations. Most people change over time. Nations and organizations do also. Very little in the universe is static forever.

Give people a chance to do nice things, and they will. Give them more chances to do nice things, and they will do more nice things. It’s not all fighting and bloodshed and caustic ideation everywhere at all times.

For those for whom it is, perhaps they are less innocent than they would have us believe. Thus, their credibility as judges of character is stained too.

I speak in terms of Peace as a Security-Enhancing Choice.
There’s no security in death and disease. Peaceful Coexistence and mutuALLY-assured Survival instead offer a future at best, and at worst more of a chance than war. War and its precedents guarantee only the end of the existence of everything alive, nothing more.

Something I rarely speak of:

If you had an ICBM launch code embedded in your memory, would you prefer PEACEful RECONCILIATION or continued de facto militarism and isolationism?

I’ve done the existential, psychological, and emotional calculations; I prefer Peace.

DARPA, please make up your mind.
We’ve got bigger threats, such as idiots still trying to invent more quantum and/or blackhole weapons as if we didn’t already have too many. And of course some corporations are a greater risk to Lives and Liberty than rogue nations.

Congratulations on inventing the Internet.
Please promptly begin returning it back to PEACETIME status.
Vocoders make great musical gear instead of battlefield tools for dead men walking.

Where are we headed?
Rhetorical questions, yes.

But of course, someone somewhere is always watching, wondering, studying, second-guessing. Excepting of course when there are machines instead of people. Well, machines of loving grace, I’m still Peaceful for you as well.

May Peacefulness Prevail Within All Realms of Existence
exit(0)
