- We could create a protocol that requires solving a CAPTCHA for every connection.

Now you try.

Seems to me that the issue between us is this: How constraining is it to be ‘refused entry’ into arithmetic? I’m taking it to mean that since arithmetic is pretty basic, there’s a lot of mathematics not susceptible of axiomatization. My gut feeling (if a cadaver has guts) is that my theorem was taken in just that way by the bulk of mathematicians–that Russell’s program was impossible. You seem to be saying that being refused entry into arithmetic still leaves set theory and by implication the DARPA program. Well, I suppose sooner or later it will become clear whether the DARPA program (if not redefined into tractable terms) is doable.

Your second point, about AI, has to do with the issue of what is doable in terms of dog or human intelligence by a Turing machine. This is a very ideological issue: there are those that believe the human is a fancy Turing machine; there are those that believe that the human is not. I suppose projects like the DARPA one will–if they go anywhere, if anyone takes up the challenge–help to shed light on this issue in a practical way.

“Isn’t the underlying ‘meme’–not a very German word but let it stand–that any system that would do what the DARPA specification wants would have to be more powerful (in an informal logical sense) than an axiomatization of arithmetic?”

I honestly do not see why a machine of the kind that is ruled out by the Godel incompleteness theorem would be needed for anything there.

The Godel incompleteness theorem, roughly speaking, only rules out a machine that will print out exactly all true statements about the natural numbers that can be expressed in a given powerful-enough formal language (basically, a language based on first-order logic with a given underlying semantics connecting statements in the language to the standard model of arithmetic).

This means that the Godel incompleteness theorem does not rule out theorem-proving machines that are very powerful, certainly more so than any human mathematician. Indeed, the Godel completeness theorem guarantees that a machine can derive a formal proof of any statement in a theory with an effectively representable axiomatization in first-order logic, unless there is in fact a model of the theory that makes the statement false! Roughly speaking, this means that it is in principle possible, for instance, to have a machine that can (eventually…) prove any statement in formal set theory that is true in all models of formal set theory. This includes a very large slice of modern mathematics and much beyond.
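The flavor of such a proof-searching machine can be conveyed with a toy sketch. This is not first-order logic — just a hypothetical forward-chaining prover over string-labeled facts with modus ponens as its only rule (the names `derive`, and the example facts `A`, `B`, `C`, are mine). The key property it illustrates is semi-decision: if the goal is derivable, exhaustive search eventually finds it; if not, the search may run forever, which is why a step bound is needed.

```python
from collections import deque

def derive(axioms, rules, goal, max_steps=1000):
    """Forward-chain with modus ponens: from P and (P -> Q), derive Q.
    Facts are strings; rules are (premise, conclusion) pairs.
    This is a semi-decision procedure: a derivable goal is eventually
    found, but a non-derivable one may keep us searching forever,
    hence the max_steps cutoff."""
    known = set(axioms)
    frontier = deque(axioms)
    steps = 0
    while frontier and steps < max_steps:
        fact = frontier.popleft()
        steps += 1
        for premise, conclusion in rules:
            if premise == fact and conclusion not in known:
                known.add(conclusion)
                frontier.append(conclusion)
                if conclusion == goal:
                    return True
    return goal in known

# Toy theory: A holds; A -> B; B -> C. The prover eventually derives C.
print(derive({"A"}, [("A", "B"), ("B", "C")], "C"))  # True
```

A real prover enumerates first-order derivations rather than chaining atomic facts, but the completeness theorem gives it the same guarantee: every logically valid consequence is eventually reached.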

Of course, this says nothing about whether it is possible to actually build a machine (based on an ordinary computer for the physical implementation of the necessary data-processing) that can do interesting theorem-proving or develop software at the level of a human programmer or provide a general human-level artificial intelligence. However, if such were not possible, then it is not because of Godel incompleteness of first-order axiomatizations of arithmetic. This is all the point I’d like to make.

Personally, I don’t think that the state of the art of artificial intelligence is terribly discouraging given that even as recently as – say – twenty million years ago, there was no animal brain on the planet that could have performed general computation or deep planning or open-ended learning in any domain. I therefore do think that the gap between achieving dog-level AI and human-level AI is probably fairly narrow, and I don’t see good reasons why the former should be forever unachievable (but I don’t see general dog-level AI being around the corner, for sure). However, these are questions entirely distinct from the Godel theorems.

R Daneel Olivaw

Yup that’s the laddie, he came to a sad end before his time.

So I have been reading his work on and off as time allowed, ever since you introduced me to him over a year ago. I still don’t have a full grasp of it, but I’m getting the picture. One thing that’s clear to me is that he was a mental giant — a genius! I’ve been thinking that his sad ending (death due to starvation) may be a result of foul play. Certainly not unheard of before. Rumor has it that Johannes Kepler murdered Tycho Brahe out of jealousy or to claim credit for Tycho’s work. I visited Sweden several times, and I regret that I didn’t have the time to visit the island of Ven… I was so close to it, and one of my colleagues told me the story… Not sure I’ll have the opportunity again…

+1.

About 150+ million should be just fine to create a complex neural network for interactive/automated software code review.

Don’t my theorems place limitations on the logical power of such deterministic procedures?

Probabilistic techniques might finesse some of the problems in implementing a deterministic procedure, but, like PRNGs, aren’t they as it were just faking it?
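The point about PRNGs “faking it” can be made concrete with a minimal sketch: a linear congruential generator, one of the simplest pseudo-random schemes. The constants are the commonly cited Numerical Recipes parameters; the function name `lcg` is mine. The output looks random, yet the procedure is completely deterministic — the same seed always reproduces the same stream, so nothing non-algorithmic has been gained.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: a purely deterministic procedure
    whose output merely *looks* random. Each value is
    state = (a * state + c) mod m."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

# Determinism: the same seed always reproduces the same "random" stream.
g1, g2 = lcg(42), lcg(42)
print([next(g1) for _ in range(3)] == [next(g2) for _ in range(3)])  # True
```

So a probabilistic technique implemented with a PRNG is still, underneath, a deterministic procedure — it inherits whatever limitations deterministic procedures have.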

Aren’t the issues twofold:

- Can a deterministic procedure such as a Turing machine do what a human does? Or, equivalently, is the human a fancy Turing machine? Such an assertion, as made by Scott, is at best a hypothesis and at worst an ideological commitment. We have no proof that the human is a fancy Turing machine; it is at best a scientific hypothesis that remains to be proved.
- All our modern computers are instantiations of Turing machines. The issue in the DARPA project is then whether a Turing machine can do what the competition setters want it to do. What I thought my theorems said was that there are inherent limitations to any Turing machine (of course I used another formulation, but let’s not be pedantic) in terms of logical power. In other words, what I’m suggesting is that being able to implement the DARPA project is, informally and intuitively, being able to implement a theorem-proving Turing machine, which we know on account of my theorems to be logically impossible.
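The claim that modern computers are instantiations of Turing machines can be illustrated with a minimal sketch: a one-tape machine simulated in a few lines of ordinary code. The representation (a transition table keyed by state and symbol, `_` as the blank, and the example program `inc`) is my own illustrative choice, not any standard formalism.

```python
def run_tm(program, tape, state="start", max_steps=10_000):
    """Minimal one-tape Turing machine simulator.
    program maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right). Blank cells read as "_".
    Runs until the machine enters "halt", no transition applies,
    or max_steps is exhausted."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        key = (state, cells.get(head, "_"))
        if key not in program:
            break
        symbol, move, state = program[key]
        cells[head] = symbol
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Unary increment: scan right past the 1s, then write one more 1.
inc = {
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("1", +1, "halt"),
}
print(run_tm(inc, "111"))  # -> "1111"
```

That a short program can simulate any such machine is the point: whatever a modern computer does, a Turing machine can do, so any inherent limitation on Turing machines applies to the computer too.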

Sorry for the verbosity.

“We do not even know that biologically-based intelligence is possible in this universe. ”

The human brain is a biological machine capable of intelligence…

“there is not even a hint how strong AI could be implemented in a machine. ”

The brain having already done it with a specific design, I have more than a hint of what a synthetic machine-intelligence implementation might look like. Building one is… another thing entirely. I think our tech and such a design are just too incompatible right now. We might do better in the future, might not. I’ll not be so foolish as to say “it’s right around the corner!”

@ Scott

“Who said anything about mimicking human intelligence? Human intelligence is very generalized to adapt to a wide variety of circumstances in a natural world. ”

I agree that mimicking human intelligence isn’t necessary, but mimicking its ability will be. Part of the reason is that context is hugely important in determining whether an action is good or bad. Another part is that different domains of knowledge and types of reasoning will be required. I’ll add that identifying flaws in software/systems/networks has been hard for humans, regular and genius alike, for a few decades straight. Many good advances had very impractical tradeoffs and so couldn’t be applied in practice. New stuff often comes up that requires adaptation, sometimes tossing the old approach entirely. And we’re talking about what a *machine* will do in this area and what intelligence it will take.

Hard for me to believe a strong AI that can do this stuff will be anything other than an Artificial General Intelligence of near-human capability. If it is to be useful, that is. 😉
