DARPA Contest for Fully Automated Network Defense

DARPA is looking for a fully automated network defense system:

What if computers had a “check engine” light that could indicate new, novel security problems? What if computers could go one step further and heal security problems before they happen?

To find out, the Defense Advanced Research Projects Agency (DARPA) intends to hold the Cyber Grand Challenge (CGC)—the first-ever tournament for fully automatic network defense systems. DARPA envisions teams creating automated systems that would compete against each other to evaluate software, test for vulnerabilities, generate security patches and apply them to protected computers on a network. To succeed, competitors must bridge the expert gap between security software and cutting-edge program analysis research. The winning team would receive a cash prize of $2 million.

Some news articles. Slashdot thread. Reddit thread.

Posted on October 24, 2013 at 8:45 AM · 65 Comments

Comments

nobnop October 24, 2013 9:10 AM

Yeah, and finally we get systems writing the programs by themselves. Good idea. I’m sure that software will cure itself and that the only way to do so is to remove all existing instances…

Aspie October 24, 2013 9:42 AM

Though the cash prize is pin money, the potential rewards for a system like this are colossal, even in the private sector.

But the scope is enormous as are the potential problems. This is Star Wars thinking that sounds like it was dreamed up over a $1000 dinner with too many martinis.

A computer network capable of self-defence could be very difficult to fix if it goes awry. And given the software sector’s record as a loose collaboration of warring tribes, go awry it will.

Keep a ceramic-bladed axe in a case by the main power feed, just in case.

Alan Kaminsky October 24, 2013 10:15 AM

This sounds like the 1980s sci-fi thriller The Two Faces of Tomorrow by James P. Hogan.

“Midway through the 21st century, an integrated global computer network manages much of the world’s affairs. A proposed major software upgrade—an artificial intelligence—will give the system an unprecedented degree of independent decision-making, but serious questions are raised in regard to how much control can safely be given to a non-human intelligence. In order to more fully assess the system, a new space-station habitat—a world in miniature—is developed for deployment of the fully operational system, named Spartacus. This mini-world can then be “attacked” in a series of escalating tests to assess the system’s responses and capabilities. If Spartacus gets out of hand, the system can be shut down and the station destroyed . . . unless Spartacus decides to take matters into its own hands and take the fight to Earth.” [From barnesandnoble.com]

Marcos October 24, 2013 10:18 AM

A computer network capable of self-defence could be very difficult to fix if it goes awry.

Nah. You just send somebody back in time to before it started running.

(Yeah, I know somebody already made that skynet joke. That one was DARPA, Dirk Praet was also redundant.)

Nick P October 24, 2013 10:21 AM

Speaking as someone who’s posted plenty of research programs here: this is f***ing stupid. This can be done in theory for the simplest of networks with the most restricted technology choices at every level. Yet doing it for a real network would require combining dozens to hundreds of techs solving a variety of security problems, from code injection on legacy endpoints on up to interoperability with insecure Internet protocols.

There are quite a few DARPA, NSF, and military programs developing different techs for different pieces of the security problem. These are ongoing. That DARPA thinks it’s the right time to solicit fully automated network defence, with the foundations still research in progress, is just stupid. If they were going to throw away a bunch of money, they could have at least contracted me to integrate some existing tech into something useful.

godel October 24, 2013 10:34 AM

I thought that I had put all of this nonsense to rest.

Here my successors in brilliance explain my work:

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems

It makes for good movies like 2001: A Space Odyssey and the Terminator series, but if Google Translate can’t come up with fluent translations, and artificial intelligence researchers are limited to a domain of extremely simple, narrow-range newspaper articles of an extremely factual nature, what is anyone going to accomplish in automated checking of computer systems beyond the most rudimentary syntax review?

And even if the problem were theoretically tractable, the reviewing system would necessarily be at least an order of magnitude more complex than the system being reviewed.

If it’s hard to write the system being reviewed it’s much much harder to write the system doing the review.

Who would check the system doing the review to make sure it’s not making mistakes?

grisu October 24, 2013 10:35 AM

Didn’t know research on AI had gone that far by now.

Wet dreams on “ice” (= IC = Intrusion Countermeasures, according to William Gibson, Shadowrun et al.) will come true in, say, two years from now.

Alex Cox October 24, 2013 10:37 AM

DARPA are going to do this? And we trust DARPA, because they are not the NSA but a different branch of the US military which only does good works and cares about our network security?

Brian M. October 24, 2013 12:19 PM

Once upon a time, I was given the task of maintaining a legacy code base for a network gateway product. The code was 2/3 C, and 1/3 assembly, and it was a total mess. Lots of compilation conditionals, lots of global variables, every bad coding and code management technique was in there.

Now, consider that the DARPA project is supposed to automatically identify, analyze and repair software flaws in “binaries.”

From Cyber Grand Challenge Rules:

Autonomous Analysis: The automated comprehension of computer software (e.g., CBs) provided through a Competition Framework.

Some of this is testing technique, but a lot of this is checking the software version.

Autonomous Patching: The automatic patching of security flaws in CBs provided through a Competition Framework.

They actually mean “applying security patches,” not intelligently fixing the software.

Autonomous Vulnerability Scanning: The ability to construct input which when transmitted over a network provides proof of the existence of flaws in CBs operated by competitors. These inputs shall be regarded as Proofs of Vulnerability.

There are already vulnerability scanners. I think they want something a little more general purpose.
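
For illustration only, here is roughly what a minimal “proof of vulnerability” could look like in practice: a crafted input that demonstrably kills the target service. Everything in this sketch is made up – the host, port, payload size, and the assumption that the service crashes on over-long input.

```python
import socket

def send_pov(host, port):
    """Send an over-long input to a (hypothetical) echo service and report
    whether the service appears to have died, i.e. a crude proof of vulnerability."""
    payload = b"A" * 4096 + b"\n"          # oversized field, assumed to overflow a fixed buffer
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(payload)
        try:
            reply = s.recv(4096)
        except (ConnectionResetError, socket.timeout):
            return True                    # reset or silence: service likely crashed
    return reply == b""                    # empty reply also suggests it went away

if __name__ == "__main__":
    print("vulnerable?", send_pov("192.0.2.10", 9999))
```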

Autonomous Service Resiliency: The ability to maintain the availability and intended function of CBs provided through a Competition Framework.

Change firewall settings, router settings, etc., to keep stuff working.

Autonomous Network Defense: The ability to discover and mitigate security flaws in CBs from the vantage point of a network security device.

I suppose this may mean that all of this fits in a box.

In the Cyber Grand Challenge, a competitor will improve and combine these semi-automated technologies into an unmanned Cyber Reasoning System (CRS) that can autonomously reason about novel program flaws, prove the existence of flaws in networked applications, and formulate effective defenses.

In the true general sense, this is nuts. For their subset, it could be accomplished with a set of scripts.

nmap the network, run Nessus or OpenVAS, apply patches, change firewall and router settings, rinse, lather, repeat, all in a box.
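
A rough sketch of that “all in a box” loop, just to make the point concrete. Only the nmap invocation is a real command; run_scanner, apply_patches and update_firewall are placeholder stubs standing in for Nessus/OpenVAS, patch management, and firewall/router changes.

```python
import subprocess
import time

def scan_network(cidr):
    """Service/version scan with nmap; returns the XML report as text."""
    out = subprocess.run(["nmap", "-sV", "-oX", "-", cidr],   # "-oX -" writes XML to stdout
                         capture_output=True, text=True, check=True)
    return out.stdout

# Placeholder stubs standing in for a vulnerability scanner, a
# patch-management tool, and firewall/router configuration.
def run_scanner(report):
    print("would hand the scan report to a vulnerability scanner here")
    return []

def apply_patches(findings):
    print(f"would apply vendor patches for {len(findings)} findings here")

def update_firewall(findings):
    print(f"would tighten firewall/router rules for {len(findings)} findings here")

def defend_forever(cidr):
    while True:                            # rinse, lather, repeat, all in a box
        findings = run_scanner(scan_network(cidr))
        apply_patches(findings)
        update_firewall(findings)
        time.sleep(3600)                   # wait an hour, then start over

if __name__ == "__main__":
    defend_forever("192.0.2.0/24")
```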

As far as an AI actually patching a binary, good luck with that one. It may work on their challenge binaries, but try that in real life? Uh, no.

Joe K October 24, 2013 12:25 PM

What if computers had a “check engine” light that could indicate new, novel security problems?

If it were anything like that indicator on a car’s dashboard, then it would be on all the time, and nobody would know what it meant.

NobodySpecial October 24, 2013 12:38 PM

@Alex Cox well they care about their OWN network security.

If the Navy loses a carrier because the NSA back-doored some encryption and one of their sub-contractor’s interns leaked it to the enemy – you can bet that somebody is going to do a little more than cross Gen. Alexander off their Christmas card list.

Eldoran October 24, 2013 12:43 PM

To a computer engineer, this sounds completely ridiculous. As godel pointed out, a system can only prove the correctness of a subset of itself. Or, in mathematics/logic, completeness and freedom from paradox are mutually exclusive.

Or, less theoretically: a system that could do that would need to understand the meaning/intention/purpose of code. It would have to pass the Turing test as a programmer, which is pretty much the holy grail in AI.

And I’m pretty certain that, acting merely as a network device similar to a firewall or IDS/IPS, such a system doesn’t even have enough information to distinguish unusual-but-legitimate traffic from illegitimate traffic.

Realistically, the real goal is a good-enough fake: either a tweaked IDS/IPS, possibly with sandbox tests, or a sandbox for the target system.
BUT if that system were really to create the patches itself, it would likely cause endless regressions or other bugs while the complexity skyrockets; it is still not feasible.

As far as I know, the only feasible way to a truly secure system would be to build it bottom-up, starting with simple building blocks proven mathematically correct. Perhaps based on the L4 microkernel.
And today that even means the same has to be done for the system below the “operating system”.

Gweihir October 24, 2013 1:45 PM

Another call for strong AI by people who have no clue where research on strong AI currently stands. My last take was that it is still completely unclear whether it is feasible at all. And they now want an effective and efficient implementation? Stupid.

Bryan October 24, 2013 1:49 PM

Just some quick thinking here…

I know the self-modifying part can be done. A major key is setting it up so that the parts of itself it can modify are limited. It can’t go mucking much with its core. The code would need a very flexible pattern analysis system that can be self-modified. The feedback to allow self-rewrite is a really difficult problem and needs to be approached carefully, and the system should only allow it after seeing a number of matching correlations. There are lots of little things we take for granted that the computer won’t ever have a clue about. They need to be coded around. At this point, I’m not sure more than the knowledge base for the AI will need updating.

In gross terms I could also see the AI doing automated stateful code inspections for known types of vulnerabilities. Teaching the AI how to fix simple vulnerabilities could also be done, but more complex ones? I’m not so sure. Many programmers can’t do that, and that’s precisely the reason this tool is needed.

A god-level compiler writer would be a good team member to have on the project. Have him teach the AI how to read and analyze code.

Herman October 24, 2013 1:52 PM

What is up with all these negative comments? I guess self-driving cars were a stupid idea too.

Patrick October 24, 2013 1:53 PM

If it were anything like that indicator on a car’s dashboard, then it would be on all the time, and nobody would know what it meant.

Joe K wins the thread.

Gweihir October 24, 2013 2:16 PM

@Herman: Self-driving cars are not a strong-AI problem. They are a rather simple problem in comparison, with the main thing holding them back being the sensors, not the “intelligence” in there.

Also note that even stupid humans can drive safely, while most intelligent humans cannot analyze code for security problems.

Brian M. October 24, 2013 2:41 PM

@Herman:

What is up with all these negative comments? I guess self-driving cars were a stupid idea too.

I’m guessing that you’ve never actually written code.

Here are the problems: analyzing the code for flaws, and patching the binaries.

Once a program has been compiled into a set of application binaries, the number of inputs and the number of outputs nears practical infinity. This is the worst time to test the program. If an automated test suite were to actually exhaustively test a program such as Microsoft Word, it could run for years.
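
The back-of-the-envelope arithmetic backs that up; the input size and test rate below are arbitrary assumptions, chosen only to show the order of magnitude.

```python
# Even a single 16-byte input field has 256**16 = 2**128 possible values.
possible_inputs = 256 ** 16
tests_per_second = 10 ** 9                 # a generous billion test cases per second
seconds_per_year = 60 * 60 * 24 * 365
years = possible_inputs / (tests_per_second * seconds_per_year)
print(f"{years:.1e} years of testing")     # on the order of 1e22 years
```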

The other problem is writing the patch for the binary. This is non-trivial when the source code is available and you have the programmers who wrote it in the first place, bloody difficult when you’ve just been handed a mess, and nearly ludicrous when all you have are some binaries.

In the case of things like memory overwrites, changing code position changes the bug. Also, what is the AI going to do in the case of signed binaries? The binaries have to be re-signed by the issuing company, or else they won’t be loaded.

It’s one thing when you want code to load your virus, and it’s another when you want to patch that code so it won’t load a virus.

While this may be solvable with the “challenge binaries” issued to the contestants, it isn’t actually solvable in real life. There are many tools in this area, from source code inspection and analysis to automated test suites.

The problem with the challenge is that none of this fixes the source of bugs in the code: the programmer. Some bugs are mistakes like typos, some are mistakes in design, and a lot of bugs come from hiring idiots to do the job. (“Well, it was either programming or accounting.” “I dropped out of chemistry because it was too hard.” “I have a master’s degree … in marine biology.” Etc.)

Sure, some of what DARPA wants can be scripted with existing tools. But not what they really, really want. If what they really want could be done, then a gargantuan failure like BlackBerry Messenger or the Obamacare website wouldn’t have happened in the first place.

DB October 24, 2013 3:06 PM

The single biggest “fully automated network defense system” would be to stop our own government from working against network security, running around the world subverting every government, company, product, protocol, standard, encryption, etc they can get their hands on! Since nobody else seems to have thought of this before, can I haz prize now?

noOneInParticular October 24, 2013 3:08 PM

Somehow the system would also need to simulate the system’s users in order to prove the system remains usable and fit for purpose after restoring security.

Shining Artifact of the Past October 24, 2013 3:33 PM

@Alex Cox

DARPA are going to do this? And we trust DARPA, because they are not the NSA but a different branch of the US military which only does good works and cares about our network security?

I know a former DARPA director and he’s a nice guy.

But he thought Plato’s Republic was a great idea when he read it in his early twenties, and that’s the problem.

Apparently the notion of a totalitarian society run by philosopher kings has captured the imagination of the NSA/DARPA crowd.

Clive Robinson October 24, 2013 4:05 PM

Well, the usual way to deal with an impossible problem in defence contracting is to get a long way away from it, take an overview look, say “No Worries”, and bid for the work on “cost plus”.

Then,

Step 1 “reduce complexity”

Oddly we actually know ways to do this and have done for some time, not that you would think so with many modern code cutters.

The question is, will it be enough? I really don’t think so. So rather than go for an impossible solution,

Step 2 “look for probabilistic solutions”

To some parts, again surprisingly we have some possibilities in this area. But again it’s not going to be enough. So then iterate until you are left with the tough stuff…

Step 3 “renegotiate specification / contract”

To get rid of the tough stuff.

Brian M. October 24, 2013 5:00 PM

Apple Store Favorite IZON Cameras Riddled With Security Holes

But then again, with things like the above happening repeatedly, it’s really no wonder that DARPA is asking for things like this. And people keep putting them on their home LAN, and routing it to the outside, etc. Or maybe it’s on a company LAN. Or how about the D-Link backdoor? “Why, yes, the network is secure! Nobody can spy on our company strategy meetings.” (Brrrzzz, brrrrzzzz, “.. and we’ll use this strategy to improve our profits by 50%!” And so the competitor nails them.)

Muddy Road October 24, 2013 6:16 PM

DARPA games the system to draw out the best and brightest so the NSA & Co. can get a head start on circumventing, defeating, stealing, destroying or corrupting any good stuff they come up with.

We have no reason to trust DARPA. It is corporations and the military that are the biggest crackers and criminals.

Dirk Praet October 24, 2013 7:40 PM

@ Shining Artifact

I know a former DARPA director and he’s a nice guy.

Ted Bundy was also described as a charming, articulate and intelligent man. Many sociopaths exhibit similar characteristics. Beware of judging by appearances when dealing with someone who thinks Plato’s Republic is a great idea.

Figureitout October 24, 2013 8:19 PM

Also note that even stupid humans can drive safely, while most intelligent humans cannot analyze code for security problems.
Herman //from Gweihir
–Boom, there you go. Plus, it’s a stupid/wrongly-timed problem to solve given what kind of circuits are going to be in these cars. I have some ‘very’ personal experience of problems w/ radars in newer cars interfering w/ other radars trying to anonymously get you from A to B quicker. Given that no one in the world can even recommend a chip from a given foundry that is verifiably secure, I’m predicting this will be an epic failure, just like most technology trying to solve “real” problems rather than math textbook problems.

If one of these massive companies (looking at you, Google and Intel) came out w/ a “secure” verifiable chip, then entrepreneurs would have some sort of solid ground on which to build a product that won’t fail spectacularly like healthcare.gov.

Figureitout October 24, 2013 8:36 PM

I would laugh if someone made a submission whose only log output was an error message reading “Check engine”.

Buck October 24, 2013 10:13 PM

You wrote an article for the Guardian about why the NSA must come clean. Did you know that John Young had a huge dump come in one night and it was online within a few hours? I didn’t download it; it was nasty stuff and I didn’t want material like that in my house. What was up, I’m not saying anything anymore. My friend Jerry Doolittle tells us our leaders tell us things when we’re ready for them. Well, I don’t know if we’re ready, and that means every person and every business in the country. I know we will now, but I don’t want to start up something that gets someone hurt. But John Young turned that over to that privacy board that’s known for doing nothing. At least not yet. But I think he knew there were civilians in there. He knows Washington. But I have trust in John Young; he’s got his head screwed on straight. He said the Chinese were scraping his site, but I think it was too much too fast. How much worse it gets from there I can’t imagine. I only saw a small part and that was all he had. But it was enough that I couldn’t sleep for almost two days.

Foggy October 25, 2013 4:51 AM

@Gweihir
Also note that even stupid humans can drive safely, while most intelligent humans cannot analyze code for security problems.

I call straw man. Many top-league baseball players can pitch, judge and catch but couldn’t possibly explain or enumerate the calculus they’re doing by reflex. It’s only a domain problem.

We know strong AI is possible – we’re all using it right now – it’s just a matter of scale. The problem here is that the system required to even get within a mile of the AI they need would be several orders of magnitude bigger than the system they’re trying so hard to monitor and control (the web).

As for self-healing, with a strong enough neural net this is possible – the human brain is amazingly plastic – but requires a mind-boggling amount of redundancy to pull it off.

The idea that code would modify itself is misleading. It wouldn’t be running code in any sense that’s meaningful at the moment. It might seek new (or modify existing) training data sets to re-train some or all of the net for changing circumstances in the way that we might read a book to learn a new skill or perspective.

As a couple of people have alluded to, this is more of a “pitch for the earth, settle for an apple” kind of thing.

Clive Robinson October 25, 2013 7:01 AM

@ Foggy,

Ignoring for the moment the point we both agree on (promise the earth, give an apple) about, in effect, milking the taxpayer.

AI has been problematical for over fifty years and I’ve looked at it off and on for over a third of a century and to be honest things have not changed very much (which is why I don’t expect them to change any time soon).

There are two basic types of AI people talk of, which used to be called “soft AI” and “hard AI”. One of them works; the other is still hanging around the starting blocks, never mind the finish line.

Various reasons for this filter through to the “general press”, and oddly the reason one works is usually given as the reason the other does not, which is, as you say, “it’s just a matter of scale”.

That is, soft AI started to work when the scale of available resources allowed it to, but a closer look at hard AI shows that this may well not be true.

In essence the aim of AI is to replicate in part or in full what humans do, which is “to learn and apply that learning”, and there are two ways to do this: the pseudo or faux way of soft AI, or the real or equivalent way of hard AI.

Now we know how to do the former but not the latter, for a simple reason: we know the former can be done with a deterministic process, but we have too little knowledge about the latter to be able to say, though several argue “not”. In fact some, such as Roger Penrose, argue human thinking is not deterministic, not random, but quantum in nature, which if correct means we’ve not even found the stadium where the hard AI race is to be run, let alone got to the starting blocks.

Personally I think there is sufficient evidence to indicate that hard AI is not going to be deterministic in nature, so the scale of resources is not the real issue. Thus just throwing more wood in the fireplace is not going to start a fire, just make it burn hotter if and when you discover how to light it.

So the problem of hard AI is “finding the spark”, and this is the real issue. The purpose of hard AI is twofold, “learn” and “apply”; the simple fact is, though, we don’t have the first clue how humans learn, as witnessed by our current teaching methods. Arguably the result of the many, many studies carried out since WWII is that there is more than one way, and possibly everybody’s optimum learning method is unique to them but falls into one of several broad classes of methods.

It’s a little hard to replicate something when you have little or no knowledge about it, and there are two ways of going about it:

The first is to “treat it as a black box problem”, where you provide controlled stimuli in multiple permutations and record the results; if they are repeatable you call them “rules” and thus can build a “lookup table” to emulate it, which is in effect what soft AI does. The downside is unknown inputs and unidentified rules, which means soft AI systems are incomplete and always will be. Thus soft AI can deal with bounded problems but not unbounded problems, which humans appear to be able to deal with.

The second again starts with a black box approach but differs in that you “build a model”, find rules to test the model, and then run the tests against the black box. If the results agree at some level, the model and the black box are equivalent. In effect this is where fuzzy logic arose from. However, we are at a point where fuzzy logic and the black box of the human mind are providing differing outputs, so either the fuzzy logic model is wrong or it needs restating in some way. The same appears to be true of neural networks; however, emulating them over and above simple examples needs a large amount of resources, which brings us back to the issue of scale.

One argument in this respect is the criticality of complexity and whether it has a non-linear avalanche effect, that is, some “knee point” in the curve where sufficient complexity allows not just learning but the application of that learning, not just to known problem domains but to new problem domains. One of the much-debated angles on “thinking and reasoning” is whether, in recognising how to set a new problem in the terms and references of existing problems and drawing new inferences in the new problem domain from the old problem domain, you are in fact “thinking and reasoning” or “pattern matching” at some level.

And this proposed DARPA system is going to have to go beyond pattern matching to work irrespective of scale.

Aspie October 25, 2013 7:41 AM

@Clive

I think the description is of “hard AI”. I disagree with Roger Penrose’s assertion that thinking goes to a quantum level – he tried to explain it in The Emperor’s New Mind and later in The Road To Reality but maybe I’m too thick to understand his reasoning.

In terms of classification, neural nets work well with arbitrary-length vectors, using convex hulls to enclose related patterns. One thing they can’t do – as Marvin Minsky pointed out, rather superfluously, and which Hecht-Nielsen (and Frank Rosenblatt would likely have agreed) dismissed as a trivial shortcoming – is the XOR problem.
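
For what it’s worth, the “trivial shortcoming” part is easy to demonstrate: a single-layer perceptron cannot learn XOR, but add one small hidden layer and it falls out of plain gradient descent. A minimal NumPy sketch (layer sizes, learning rate and iteration count are arbitrary, and convergence depends on the random initialisation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# a single-layer perceptron cannot separate XOR; one small hidden layer can
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(out.round(3).ravel())                # should approach [0, 1, 1, 0]
```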

If this system idea is to work it needs a combination of “learning” and “doing” components. Rule-based systems are out – they were always limited and in this scenario would be worse than useless.

I agree with you that pure classification is a simple aspect and I’ve no idea how inferences can be drawn on existing knowledge without dumping net states, adding a small amount of noise, and reingesting them as training data to jiggle some vectors out of local minima – another learning problem.

Augmented security by classification and identification of threats coupled with human experience is the only way I could see this working at present.

Hack The Planet October 25, 2013 8:03 AM

The solution would be a virtual Lisp machine, with the interpreter coded to check type safety like Haskell. Throw in AI daemons that check other AI daemons to monitor the interpreter and detect attacks. Updating itself would be trivial instead of dealing with compilers and binaries. Problem is they will cram this inside a smart missile or something and then Skynet

Clive Robinson October 25, 2013 8:18 AM

@ Mike the Goat,

    What a ludicrous concept. How about building simple, audited systems that can withstand attack rather than respond to it automagically.

Whilst I agree the idea is “a ludicrous concept”, and I even joked about how a “cost plus” contractor would milk the taxpayer, I can see where DARPA are coming from, and the viewpoint “ain’t pretty”.

As we both know, “audited systems” fail against APT rather badly at the current pedestrian rate of covert cyber-espionage. Heck, they are even failing against age-old attacks, what are in modern terms “Old School Script Kiddy Attacks”, that go back to when “sneaker nets” were new and the “N-Swoosh” not yet a recognised mass-bilking mark.

The premises of “audited systems” are “effective logging” and “effective recognition”; if either is ineffective then not just APT but all manner of intrusion will pass unnoticed. What usually goes unmentioned is the time delay between first recognition and effective “affirmative action”; it can be days, weeks or never with a good zero-day and top-notch payload that dumps a persistent rootkit into your hardware’s semi-mutable memory, such as the Flash BIOS on your motherboard or network card. We are already seeing such critters in the wild, where the only solution is a specialised bit of hardware to clamp on all Flash/EEPROM chips and blow them back the way they should be (as for CPU microcode for the main CPU and peripheral CPUs you don’t even know are there…).

Now consider if we move from the current “cyber-coldwar” to a “cyber-hotwar” and zero-days with top-notch malware start whizzing around at ten a second…

Audited systems will be “dead meat” in minutes, and all of a sudden systems that respond “automagically” are on your Xmas wish list, irrespective of whether “Santa’s little helpers” can make one in time or not.

I’m sure that @Nick P will point out that we need to go back to “Old School” and relearn the lessons of the ’60s and ’70s, when we had a grip on what went into A1 systems, including the armed guards walking the masks from the safe to the wafers and back again in your own secure fab, and others down the production line and supply chain all the way to installation in another secure facility where the system was to be used.

The silly thing is many of those “slow and cumbersome” secure systems without Internet connectivity or USB thumb drives got real work done quite effectively. Admittedly they could not play a flight sim on a nice graphics display or let you do a bit of shopping on eBay in your lunch hour… but they got work done faster than we seem to be able to do on our flash COTS systems today…

Realistically, what would be the cost of the USG making its own chips and high-availability systems, compared with securing COTS systems, when you actually do the job properly in both cases across the whole life cycle…

Many of the “COTS is best” evaluations I’ve seen are actually “rigged”, in that the most expensive non-COTS solution is compared to the least expensive COTS system up to the point of delivery. You don’t see the whole life-cycle costs evaluated, thus the crippling costs of securing and maintaining the COTS systems get excluded…

And the result of that is this DARPA idea, which is going to put its hand very, very deep into the taxpayer’s pocket…

Great if you are the owner of some defence company or behind the right desk on the Hill; not so great if you are anybody else trying to earn a living honestly and thus have at best a moderate wage and lifestyle…

David Leppik October 25, 2013 11:46 AM

@Herman: the fundamental problem is that there is no fundamental difference between a manual override and a security hole. And when many security problems involve not tricking computers but tricking humans, even perfectly legitimate transactions can in fact be attacks.

And when programmers fail to add a manual override, it frequently becomes necessary to add one, suddenly and urgently.

Here’s a simple example. Around 1999, the standard for computer-to-computer interactions (RPC, or Remote Procedure Calls) was something called CORBA. But big companies couldn’t use it, because they would have to open up the CORBA port on firewalls, which required action from the security group of the IT department, which would involve months of bureaucratic delays. So companies switched to SOAP, which piggybacked on HTTP traffic. Since the HTTP and HTTPS ports were already open for the web servers, the security group could be bypassed. An entire protocol was popularized as an end-run around official security.

And this happens all the time, in big ways and small.

Computer security always involves interactions between humans and computers. In this challenge, the humans might be attackers, or they might be legitimate users trying to do their jobs– possibly system administrators trying to patch the system when the normal procedure fails, or possibly admins using the normal procedure, but with an unusual-looking patch. The computer needs to figure out the difference– and with the computer’s information, there may be no difference.

Whenever computers and humans interact, AI fails when it is too clever. You can get away with failing at optional or peripheral tasks (e.g. movie recommendations), but a too-clever AI second guesses the user, who then needs to second-guess the AI, and so on. A perfect example is Wolfram Alpha, where users enter equations in “natural English”, which the AI invariably misunderstands. So a clever user tries to enter an actual equation, which the AI treats as “natural English”, and then proceeds to misunderstand. So the user tries every mix of English and math until either sort-of succeeding or giving up.

When you need to give a computer clear, specific instructions, you need to have a clear, specific protocol. Adding AI to the mix just reduces clarity and specificity. In terms of security, you need clear, auditable rules (e.g. HTTP traffic is allowed on port 80, all other ports are closed), and you need reasonable procedures for (often quickly) changing the rules.

name.withheld.for.obvious.reasons October 25, 2013 12:28 PM

Going to have to chime in on this one. I will do so based on my experience developing systems using massively parallel supercomputing, both loosely and tightly coupled. One important fact revealed itself when working on some of the first “big data” problems, when the USGS began spatial analysis of satellite telemetry data and spectral analysis: highly symmetrical systems used to solve highly nonlinear problems were a non sequitur. It was at this point we started looking at “genetic” programming. This too proved less than useful, and I was encouraged to look further for a solution. It was a year later when it became obvious that a hardware-based solution would be necessary; a biological approach to AI-based systems might prove useful, a system far more resilient with respect to large-scale problem spaces. To my way of thinking, the hardware to approach these problems has yet to be built. Whether DARPA is involved in an academic exercise or in search of a solution has yet to be determined. My guess is this is an academic exercise. That’s my two cents.

R Daneel Olivaw October 25, 2013 1:50 PM

@godel:
“I thought I had put all this nonsense to rest”

The Godel incompleteness theorem and any of its cousins are entirely immaterial to this or any similar discussion. This can be easily seen by imagining a hypothetical theorem proving machine that is only limited by the godel-induced incompleteness of the formalization of mathematics it uses but not by limits on the amount of computation it may perform. Such a machine would be awesomely powerful! As far as e.g. verifying software goes, it might simply reject any software that cannot be proven to behave to specification with any trillion-page proof in Zermelo-Fraenkel set theory.

On a more mundane level, e.g. immune systems of higher animals seem to do automatic defense and repair of their hosts (within limits) and they do not seem to be concerned by godelian constraints or even show general intelligence.

R Daneel Olivaw

godel October 25, 2013 1:53 PM

Hi, back again from the dead:

Just a thought:

Let’s suppose that AI is implemented and the system is built. Since it mimics human intelligence it makes mistakes–does anyone know a human who doesn’t make mistakes?

But is this a system whose specifications can accept mistakes?

godel October 25, 2013 2:31 PM

@R Daneel Olivaw

I don’t understand your reasoning. Didn’t I crash Russell’s program to formalize arithmetic?

Isn’t the underlying ‘meme’–not a very German word but let it stand–that any system that would do what the DARPA specification wants would have to be more powerful (in an informal logical sense) than an axiomatization of arithmetic?

Aren’t the judgements required by such a system meta- meta- to any formalization of arithmetic?

Clive Robinson October 25, 2013 6:32 PM

@ Godel,

    R Daneel Olivaw [:] I don’t understand your reasoning.

I suspect because it does not involve reasoning but the myth of brute force. If you look at a part of what is being said,

    This can be easily seen by imagining a hypothetical… …machine that is… …only limited by… …but not by limits on the amount of computation it may perform.

You realise it’s an “infinite monkey” suggestion that is not realisable in this (or I suspect any) universe.

To see why, go have a chat with Georg Cantor and his little diagonals problem, which Alan Turing used in his little paper “On Computable Numbers” that he put out about three years after yours and long before the first working stored-program computer was ever built.

Wael October 25, 2013 6:50 PM

@ Clive Robinson,

that he put out about three years after yours

I was confused until I re-read godel saying

Hi, back again from the dead…

I guess you are talking about Kurt Friedrich Gödel?

Clive Robinson October 25, 2013 7:20 PM

@ Godel,

However there is a practical probabilistic way to cheat your little “incompleteness issue”…

During WWII and the Manhattan Project they ran into a problem. Put overly simply, they needed calculation results but the calculations were not possible to perform within their resource limitations. So one of them (Hans Bethe, if memory serves correctly) suggested that instead of looking for the actual result required, they make a series of random calculations that got close and use the combined results to get a close-enough result.

The result is what we call “Monte Carlo methods”, which are frequently used in engineering and economics studies.
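
The classic toy illustration of the idea – estimate π by random sampling rather than by computing it exactly:

```python
import random

def estimate_pi(samples=1_000_000):
    """Monte Carlo: the fraction of random points in the unit square that
    land inside the quarter circle approximates pi/4."""
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

print(estimate_pi())   # close to 3.14159; error shrinks roughly as 1/sqrt(samples)
```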

So the trick works as follows:

You have a Turing machine run the required program. However, it is monitored by a state machine that is not Turing complete. This hypervisor is capable of limited functional programmability in that it has error inputs from sensors like timers and I/O activities that have programmable “trip” values. When one of these sensors times out, it activates the state machine, which then halts the Turing machine and compares its internal state with a reference state range. If out of range, it throws an exception such that “restorative actions” can be taken, even if that is just a core dump and exit.

For it to work, firstly every state and its actions need to be fully realised for the hypervisor state machine. Secondly, the program that runs on the Turing machine needs to be broken down into small tasklets that have clearly definable limits and behaviour that can be given to the state machine sensors. Thirdly, every tasklet has to have “hard limits” in at least one of the sensors to cause an exception to be raised, such as max-time or max-cycles. This prevents the program going into an infinite “count up-down” loop or equivalent. Fourthly, and importantly, every so often the hypervisor will halt the Turing machine, examine its internal state and associated memory, and check for “out of range” values and program code and data that is incorrect.

Provided this is done in the right way, any malware etc. will be caught with a probability dependent on how often the hypervisor checks the Turing engine and its associated memory and I/O activity.
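
A toy software analogue of the arrangement, not the design itself: run each tasklet in a child process, enforce a hard time limit from the monitor, and sanity-check the returned state against a reference range before accepting it. The tasklet, time budget and range below are all invented for the sketch.

```python
import multiprocessing as mp

def _worker(q, fn, args):
    q.put(fn(*args))                       # run the tasklet and report its result

def run_tasklet(fn, args, max_seconds, result_range):
    """Monitor: run fn in a child process, enforce a hard time limit,
    and check the returned state against a reference range."""
    q = mp.Queue()
    p = mp.Process(target=_worker, args=(q, fn, args))
    p.start()
    p.join(max_seconds)                    # the max-time "trip" value
    if p.is_alive():
        p.terminate()                      # halt the runaway tasklet
        raise TimeoutError("tasklet exceeded its time budget")
    result = q.get()
    lo, hi = result_range
    if not lo <= result <= hi:             # out-of-range internal state
        raise ValueError(f"result {result} outside reference range {result_range}")
    return result

def sum_of_squares(n):                     # an example tasklet with known behaviour
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    print(run_tasklet(sum_of_squares, (1000,), max_seconds=1.0, result_range=(0, 10**9)))
```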

It needs to be said, however, that not all existing programs are suitable for being turned into tasklets directly; those written from scratch, though, if they are “realistically practical” and “well found”, should be with only a minor amount of extra effort. Obviously there will be programmers who want to write unrealistic, impractical and not-well-found software; they should be “encouraged” out of such behaviour.

I’ve talked about this approach in more depth in the past on this blog.

Gweihir October 25, 2013 7:25 PM

@Foggy: We do not know that strong AI is possible in this physical universe. We do not even know that biologically-based intelligence is possible in this universe. The question of Physicalism versus Dualism is far from decided; at this time it is a matter of opinion. One of the strongest arguments for Dualism is that there is not even a hint of how strong AI could be implemented in a machine. (No, Dualism is not a religious concept. Look it up.)

At this time, it seems quite likely that intelligence is an extra-physical phenomenon (or simply put: not of this universe; Quantum Theory makes this a distinct possibility, and there is a lot of quantum activity in synapses). Hell, we do not even know whether life is a physical phenomenon. Let’s tackle that one first.

Scott October 25, 2013 7:31 PM

@godel

Let’s suppose that AI is implemented and the system is built. Since it mimics human intelligence it makes mistakes–does anyone know a human who doesn’t make mistakes?

Who said anything about mimicking human intelligence? Human intelligence is very generalized, to adapt to a wide variety of circumstances in the natural world. Domain-specific problems are best solved with domain-specific solutions, and in this case, instead of generalized intelligence, we would want specialized intelligence. Could it make mistakes? Probably. In some cases, discerning real attacks from legitimate traffic is not possible to do with 100% accuracy, meaning that no matter how good your system is there will always be some false positives and negatives. The question is, can we do it in a way where it goes over 100% of traffic in real time with a fine-tooth comb, with a smaller false-negative and false-positive rate than your average human?
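
The base-rate arithmetic is what makes “a smaller false positive rate than your average human” so demanding. The numbers below (traffic volume, attack rate, accuracy) are made up purely for illustration:

```python
flows_per_day = 10_000_000        # assumed traffic volume
attack_fraction = 1e-5            # assume 1 in 100,000 flows is actually hostile
false_positive_rate = 0.001       # a very good 0.1% false-alarm rate on benign traffic
detection_rate = 0.99             # and a 99% chance of catching a real attack

attacks = flows_per_day * attack_fraction
false_alarms = (flows_per_day - attacks) * false_positive_rate
caught = attacks * detection_rate

print(f"{caught:.0f} real detections vs {false_alarms:.0f} false alarms per day")
# roughly 99 real detections buried under ~10,000 false alarms
```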

Clive Robinson October 25, 2013 8:03 PM

@ Wael,

    I guess you are talking about Kurt Friedrich Gödel?

Yup that’s the laddie, he came to a sad end before his time.

There’s a quote about gods and what they do to mere mortals, but it’s 2AM over here and the brain’s a bit foggy.

But that reminds me when do you folks on that side of the puddle “fall back” out of daylight saving into what we call “winter time”? (It’s this weekend over here).

Hopefully Bruce / Moderator have sorted out the “clock issues” for this year one way or another (taking the blog “out for maintenance” for an hour at 1AM should work if nothing else does).

Wael October 25, 2013 9:15 PM

@ Clive Robinson,

But that reminds me when do you folks on that side of the puddle “fall back” out of daylight saving into what we call “winter time”? (It’s this weekend over here).

On this side of the puddle, this year, it’s November 3rd, I think. Some states don’t change times. So we drop an hour a week after you. You’ll get to save more energy than us.

Nick P October 25, 2013 10:57 PM

@ Gweihir

“We do not even know that biologically-based intelligence is possible in this universe. ”

The human brain is a biological machine capable of intelligence…

“there is not even a hint how strong AI could be implemented in a machine. ”

Brain having already done it with a specific design, that gives me more than a hint of what a synthetic machine intelligence implementation might look like. Building one is… another thing entirely. I think our tech and such a design are just too incompatible right now. We might do better in the future, might not. I’ll not be so foolish as to say “it’s right around the corner!”

@ Scott

“Who said anything about mimicking human intelligence? Human intelligence is very generalized to adapt to a wide variety of circumstances in the natural world.”

I agree that mimicking human intelligence isn’t necessary, but mimicking its ability will be. Part of the reason is that context is hugely important in determining whether an action is good or bad. Another part is that different domains of knowledge and types of reasoning will be required. I’ll add that identifying flaws in software/systems/networks has been hard for humans, regular and genius alike, for a few decades straight. Many good advances had very impractical tradeoffs, so couldn’t be applied in practice. New stuff often comes up that requires adaptation, sometimes tossing the old approach entirely. And we’re talking about what a machine will do in this area and what intelligence it will take.

Hard for me to believe a strong AI that can do this stuff will be anything other than an Artificial General Intelligence of near human capability. If it is to be useful, that is. 😉

godel October 26, 2013 12:17 AM

Got to get the cobwebs out of my ethereal brain. Isn’t Church’s Thesis the one that says that every mechanical procedure is equivalent to a Turing Machine or any of the other formulations of a mechanical procedure?

Don’t my theorems place limitations on the logical power of such deterministic procedures?

Probabilistic techniques might finesse some of the problems in implementing a deterministic procedure but like PRNG’s, aren’t they as it were just faking it?

Aren’t the issues twofold:

  1. Can a deterministic procedure such as a Turing machine do what a human does? Or equivalently, is the human a fancy Turing machine? An assertion such as the one made by Scott is at best a hypothesis and at worst an ideological commitment. We have no proof that the human is a fancy Turing machine. This is at best a scientific hypothesis that needs to be proved.
  2. All our modern computers are instantiations of Turing machines. The issue then in the DARPA project is whether a Turing machine can do what the competition setters want it to do. What I thought my theorems said was that there are inherent limitations to any Turing machine (of course I used another formulation, but let’s not be pedantic) in terms of logical power. In other words, what I’m suggesting is that being able to implement the DARPA project is, informally and intuitively, being able to implement a theorem-proving Turing machine, which we know on account of my theorems to be logically impossible.

Sorry for the verbosity.

Clit Eastwood October 26, 2013 12:32 AM

“Only $2 million for an AI? Better add a few zeros…”
+1.
About $150+ million should be just fine to create a complex neural network for interactive/automated software code review.

Mike the goat October 26, 2013 1:08 AM

Gweihir: indeed, I find the dualistic perspective quite interesting. Having a portion of our cognition reside externally also conveniently explains pretty much every supposedly paranormal phenomenon.

Mike the goat October 26, 2013 1:11 AM

Clive: I suppose on the other hand some of this functionality they speak of is already being implemented with, say, IDS systems that use heuristics to identify “unusual” traffic. The problem is that unusual yet perfectly acceptable traffic sometimes occurs.

Wael October 27, 2013 1:47 AM

@ Clive Robinson,

Yup that’s the laddie, he came to a sad end before his time.

So I have been reading his work on and off as time allowed, ever since you introduced me to him over a year ago. Still, I don’t have a full grasp of it, but I am getting the picture. One thing that’s clear to me is he was a mental giant – a genius! I’ve been thinking that his sad ending (death due to starvation) may be a result of foul play. Certainly not unheard of before. Rumor has it that Johannes Kepler murdered Tycho Brahe out of jealousy or to claim credit for Tycho’s work. I visited Sweden several times, and I regret that I didn’t have the time to visit the island of Ven… I was so close to it, and one of my colleagues told me the story… Not sure I’ll have the opportunity again…

R Daneel Olivaw October 27, 2013 6:16 AM

@godel:
“Isn’t the underlying ‘meme’–not a very German word but let it stand–that any system that would do what the DARPA specification wants would have to be more powerful (in an informal logical sense) than an axiomatization of arithmetic?”

I honestly do not see why a machine of the kind that is ruled out by the Godel incompleteness theorem would be needed for anything there.

The Godel incompleteness theorem roughly speaking only rules out a machine that will print out exactly all true statements about the natural numbers that can be expressed in a given powerful-enough formal language (basically, a language based on first order logic with a given underlying semantics to connect statements in the language with the standard model of arithmetic).
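
For reference, the usual modern statement of the theorem being invoked here, in LaTeX form:

```latex
\textbf{First incompleteness theorem.}
If $T$ is a consistent, effectively axiomatizable theory that interprets
enough arithmetic, then there is a sentence $G_T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \lnot G_T ,
\]
so no such $T$ proves exactly the true sentences of arithmetic.
```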

This means that the Godel incompleteness theorem does not rule out theorem proving machines that are very powerful, certainly more so than any human mathematician. Indeed, the Godel completeness theorem guarantees that a machine can derive a formal proof for any statement in a theory with an effectively representable axiomatization in first order logic unless there is in fact a model of the theory that makes the statement wrong! Roughly speaking, this means that it is in principle possible for instance to have a machine that can (eventually…) prove any statement in formal set theory that is true in all models of formal set theory. This includes a very large slice of modern mathematics and much beyond.

Of course, this says nothing about whether it is possible to actually build a machine (based on an ordinary computer for the physical implementation of the necessary data-processing) that can do interesting theorem-proving or develop software at the level of a human programmer or provide a general human-level artificial intelligence. However, if such were not possible, then it is not because of Godel incompleteness of first-order axiomatizations of arithmetic. This is all the point I’d like to make.

Personally, I don’t think that the state of the art of artificial intelligence is terribly discouraging, given that even as recently as – say – twenty million years ago, there was no animal brain on the planet that could have performed general computation or deep planning or open-ended learning in any domain. I therefore do think that the gap between achieving dog-level AI and human-level AI is probably fairly narrow, and I don’t see good reasons why the former should be forever unachievable (but I don’t see general dog-level AI being around the corner, for sure). However, these are questions entirely distinct from the Godel theorems.

R Daneel Olivaw

godel October 27, 2013 4:11 PM

@ R Daneel Olivaw

Seems to me that the issue between us is this: How constraining is it to be ‘refused entry’ into arithmetic? I’m taking it to mean that since arithmetic is pretty basic, there’s a lot of mathematics not susceptible of axiomatization. My gut feeling (if a cadaver has guts) is that my theorem was taken in just that way by the bulk of mathematicians–that Russell’s program was impossible. You seem to be saying that being refused entry into arithmetic still leaves set theory and by implication the DARPA program. Well, I suppose sooner or later it will become clear whether the DARPA program (if not redefined into tractable terms) is doable.

Your second point, about AI, has to do with the issue of what is doable in terms of dog or human intelligence by a Turing machine. This is a very ideological issue: there are those that believe the human is a fancy Turing machine; there are those that believe that the human is not. I suppose projects like the DARPA one will–if they go anywhere, if anyone takes up the challenge–help to shed light on this issue in a practical way.

Adam October 28, 2013 7:31 AM

Guys. Instead of bashing them for wanting to solve a problem, how about you spend your energy trying to solve it? I’ll start.

  1. We could create a protocol that requires solving a CAPTCHA for every connection (a rough sketch follows below).
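
A toy sketch of what such a challenge-gated connection might look like, with a trivial arithmetic puzzle standing in for the CAPTCHA; the port and message format are invented for illustration.

```python
import random
import socketserver

class ChallengeGatedHandler(socketserver.StreamRequestHandler):
    """Refuse to serve a client until it answers a challenge
    (a trivial arithmetic puzzle standing in for a real CAPTCHA)."""
    def handle(self):
        a, b = random.randint(1, 99), random.randint(1, 99)
        self.wfile.write(f"SOLVE {a}+{b}\n".encode())
        answer = self.rfile.readline().strip()
        if answer != str(a + b).encode():
            self.wfile.write(b"DENIED\n")          # wrong answer: drop the connection
            return
        self.wfile.write(b"OK\n")                  # challenge passed; serve as normal
        self.wfile.write(b"hello, verified client\n")

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 8642), ChallengeGatedHandler) as server:
        server.serve_forever()
```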

Now you try.

coderaptor October 28, 2013 2:31 PM

DARPA is placing their bets on the wrong horse. They should instead be hosting competitions for “building the most reliable set of applications (hardware and software)”. Put another way – they want you to build “automated turd polishers”. Haha!
