[net.ai] AI halting problem

Robert.Frederking@CMU-CS-A@sri-unix.UUCP (10/07/83)

From:  Robert.Frederking@CMU-CS-A (C410RF60)

        Actually, this isn't a problem, as far as I can see.  The Halting
Problem says that there cannot be a program for a Turing-equivalent
machine that can tell whether *any* arbitrary program for that machine will
halt.  The easiest proof that a halts(x) procedure can't exist is the
following program (due to Jon Bentley, I believe):
        if halts(x) then
                while true do print("rats")
What happens when you start this program up, with itself as x?  If
halts(x) returns true, the program loops forever and never halts; if
halts(x) returns false, it halts immediately.  Either way halts(x) gave
the wrong answer, which is a contradiction, so halts(x) can't exist.
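
        For concreteness, here is the same diagonal argument sketched in
modern Python, using the usual two-argument form halts(program, input).
The names are mine, and of course the hypothetical halts() cannot
actually be implemented; the sketch only shows why any candidate would
contradict itself:

        def halts(program, data):
            # Hypothetical oracle: True iff program(data) eventually halts.
            # Assumed to exist only for the sake of contradiction.
            raise NotImplementedError

        def contrary(program):
            # The "rats" program: do the opposite of whatever the oracle
            # predicts the program will do when fed itself as input.
            if halts(program, program):
                while True:
                    print("rats")
            # otherwise fall through and halt at once

        # contrary(contrary) breaks any candidate halts():
        #   halts(contrary, contrary) == True   ->  contrary(contrary) loops forever
        #   halts(contrary, contrary) == False  ->  contrary(contrary) halts
        # Either answer is wrong, so no correct halts() can be written.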

        My question is, what does this have to do with AI?  Answer: not
much.  There are lots of programs which always halt; you just can't
have a single program which can tell you *for* *any* *program* whether
it will halt.  Furthermore, human beings don't want to halt, i.e., die
(though that isn't really the issue, since the question is whether their
mental subroutines halt, not the person as a whole).
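
        To make the first point concrete: undecidability only bites when
you demand an answer for *every* program.  Below is a minimal,
conservative sketch (the names and restrictions are my own, not part of
the argument above) that certifies halting for straight-line Python
programs containing no loops, no function definitions, and no calls;
such programs run a bounded number of steps and must halt, and anything
richer simply gets the answer "don't know":

        import ast

        LOOPY = (ast.While, ast.For, ast.AsyncFor,
                 ast.FunctionDef, ast.AsyncFunctionDef, ast.Lambda, ast.Call)

        def obviously_halts(source):
            # Conservative syntactic check: answer True only for straight-line
            # code that cannot loop, recurse, or call anything; refuse (False)
            # otherwise, which keeps the check sound but incomplete.
            return not any(isinstance(node, LOOPY)
                           for node in ast.walk(ast.parse(source)))

        print(obviously_halts("x = 1 + 2\ny = x * x"))    # True:  must halt
        print(obviously_halts("while True:\n    pass"))   # False: can't be sure

Real termination checkers accept far richer classes than this, but the
shape is the same: restrict the programs and halting becomes checkable;
drop the restriction and the diagonal program above is waiting.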

        So as long as the mind constructs only programs which will
definitely halt, it's safe.  Beings which aren't careful about this
fail to breed, and are weeded out by evolution.  (Serves them right.)
All of this seems to assume that people are Turing-equivalent (without
pencil and paper), which probably isn't true, and certainly hasn't been
proved.  At least I can't simulate a PDP-10 in my head, can you?  So
let's get back to real discussions.