markg@nvuxf.UUCP (M. Guzdial) (05/18/85)
The discussion on the Net recently concerning Hofstadter's comments about machines composing music has brought to mind a puzzle that occurred to me in reading GEB and in studying theoretical computer science. I imagine that the question is not original with me (perhaps not even new to this newsgroup, as I'm yet a rookie around here), and I'm confident that someone in AI has puzzled this one out. Perhaps someone can enlighten me with a solution.

According to the Turing/Church-Kleene/Post Thesis (and derivatives thereof), "Any computation one might naturally regard as possible to carry out can be performed by some Turing machine having a suitable set of instructions" (all quotes in this posting are from "Machines, Languages, and Computation" by Denning, Dennis, and Qualitz). In other words, anything which is "computable" may be simulated by a Turing machine.

If we are attempting to "simulate" human intelligent activity on a machine (activity including composing music), we must be assuming that such activity is "computable," since we accept that anything our machines do is a computable activity.

But Turing also showed that "The halting problem for Turing machines is unsolvable," where the halting problem is defined as "Given a Turing machine M and an initial tape T, does M halt when started on tape T?" I take all this to mean that one computer, given another computer's hardware and software specifications and its input, cannot determine whether the program will halt or will go into an "infinite loop." In other words, one computer cannot "debug" another.

Human beings can "debug" programs, and are capable of determining if a program goes into an "infinite loop." If human beings can perform an activity that Turing machines cannot, then human beings must not be computable. Therefore, human intellectual activity cannot be simulated on a machine.

There are three solutions that I see to this problem. First, what the Thesis defines as "computable" is wrong.
Second, perhaps the machines we are developing today are more powerful than Turing machines, which raises the question of what is truly "computable." Third, perhaps humans cannot always "debug" programs. Douglas Lenat has stated that "AM" did things that he did not suspect; perhaps it was the computable process "Lenat" that found it could not determine the "halting function" for the Turing machine "AM."

-- Mark Guzdial {ihnp4, houxm}!nvuxf!markg (201) 949-5471
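Guzdial's premise rests on Turing's diagonalization argument, which can be sketched in modern Python. (The decider `halts` below is hypothetical; the whole point of the argument is that no such total, infallible decider can exist.)

```python
# Sketch of the diagonalization behind the unsolvability of the
# halting problem.  halts() is an *assumed* oracle -- Turing's
# argument shows it cannot actually be implemented.

def halts(program, data):
    """Assumed oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no such infallible decider exists")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts for a
    program run on its own text."""
    if halts(program, program):
        while True:      # oracle said "halts", so loop forever
            pass
    else:
        return           # oracle said "loops", so halt at once

# Asking about diagonal(diagonal) yields the contradiction:
# whichever answer halts() gives, diagonal does the opposite,
# so halts() cannot be both total and correct.
```

This is why no single machine can pass judgment on an *arbitrary* machine-plus-input pair, which is the precise content of the theorem Guzdial quotes.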
sidney@linus.UUCP (Sidney Markowitz) (05/20/85)
In article <113@nvuxf.UUCP> markg@nvuxf.UUCP (M. Guzdial) writes:
>According to the Turing/Church-Kleene/Post Thesis (and derivatives thereof),
>"Any computation one might naturally regard as possible to carry out can
>be performed by some Turing machine having a suitable set of instructions"
>(All quotes in this posting are from "Machines, Languages, and Computation"
>by Denning, Dennis, and Qualitz), or in other words, anything which is
>"computable" may be simulated by a Turing machine.

Any function computable by one Turing machine is computable by any other. This is a corollary of the proof that the halting problem is unsolvable by a Turing machine, since the proof involves having a Turing machine simulate another Turing machine. Since such a simulation can be constructed, all Turing machines are equivalent. As I explain further below, however, this is not the same as saying that anything computable by any machine is computable by a Turing machine, unless you use a strictly formal definition of "computable function" which is defined in terms of what Turing machines can do anyway.

>If we are attempting to "simulate" human intelligent activity on a
>machine (activity including composing music), we must be assuming
>that such activity is therefore "computable," if we accept that anything
>our machines do is a computable activity.

We are forced to assume that human intelligent activity can be supported by the hardware we are using to simulate (or imitate) it. We don't know for sure that the hardware of the human brain is Turing-equivalent. A Turing machine is a single-processor sequential machine with finite state control and infinite memory. It is generally accepted that sequential computers with what is now the conventional (von Neumann) architecture are also Turing-equivalent, given sufficient time and memory.
But see the article by Carl Hewitt (of the MIT AI Lab) in the April, 1985 issue of _Byte_ for a fascinating discussion of parallel asynchronous architectures (like that of the brain) and how they cannot be simulated by Turing machines. Hence the current interest in AI in parallel architectures and connection machines.

>But Turing also stated that "The halting problem for Turing machines is
>unsolvable" where the halting problem is defined as "Given a Turing
>machine M and an initial tape T, does M halt when started on tape T?"
>I take all this to mean that one computer, given the inputs of
>another computer's hardware and software specifications and its input,
>cannot determine if the program will halt, or will go into "infinite
>loop." In other words, one computer cannot "debug" another.

The halting problem states that we cannot program a Turing machine to *infallibly* predict, given the specifications of an arbitrary Turing machine and its input, whether it will halt. This is philosophically like Goedel's theorem. It does not mean that computers cannot debug one another, only that they may not be able to be infallible at doing so. If the brain were Turing-equivalent, it would only mean that there are problems we cannot solve. I think it is quite reasonable to speculate that we have some limitations.

>Human beings can "debug" programs, and are capable of determining if
>a program goes into "infinite loop."

Not infallibly. Remember that the Turing machine is an abstract automaton with infinite memory. The "infinite loop" may be arbitrarily long. Could a human *always* determine if a computer is in an infinite loop that takes a *very* long time to repeat?

>If human beings can perform an
>activity that Turing machines cannot, they must not be computable.
>Therefore, human intellectual activity cannot be simulated on a machine.
>
>There are three solutions that I see to this problem. First, what the Thesis
>defines as "computable" is wrong.
Not wrong, just a restricted, formal mathematical definition that should not be confused with the more general English word "computable".

>Second, perhaps the machines we are
>developing today are more powerful than Turing machines, which begs the
>question of what is truly "computable."

As Hewitt points out, that is true of the newer architectures we are looking at now for AI work.

>Third, perhaps humans cannot
>always "debug" programs.

Perhaps that is also correct - or at least humans cannot always predict what programs will do.

-- Sidney Markowitz
ARPA: sidney@mitre-bedford
UUCP: ...{allegra,decvax,genrad,ihnp4,philabs,security,utzoo}!linus!sidney
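Markowitz's "not infallibly" can be made concrete: a bounded-effort loop detector is often right but never guaranteed. A minimal sketch (the step/state interface here is my own invention, purely for illustration):

```python
def guess_halts(step, state, budget=10000):
    """Fallible halting 'debugger': run a program, presented as a
    step function mapping state -> (done, next_state), for at most
    `budget` steps.  Returns True if it was seen to halt, or None
    if we gave up -- the program may loop forever, or may simply
    need more steps than the budget allows."""
    for _ in range(budget):
        done, state = step(state)
        if done:
            return True
    return None
```

A countdown is correctly judged to halt; a program whose loop outlasts the budget gets the only honest answer available, "don't know". Humans debugging by inspection are in the same position, just with a larger budget.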
prem@eagle.UUCP (Swami Devanbu) (05/21/85)
> I take all this to mean that one computer, given the inputs of
> another computer's hardware and software specifications and its input,
> cannot determine if the program will halt, or will go into "infinite
> loop." In other words, one computer cannot "debug" another.
>
> Human beings can "debug" programs, and are capable of determining if
> a program goes into "infinite loop." If human beings can perform an
> activity that Turing machines cannot, they must not be computable.
> Therefore, human intellectual activity cannot be simulated on a machine.

Aha, but can a human "debug" himerself? If s/he can't, s/he falls under the curse of the said thesis as well. (Take a bow, Socrates.)

Swami Devanbu
colonel@gloria.UUCP (Col. G. L. Sicherman) (05/21/85)
["Those who refuse to go beyond fact rarely get as far as fact." --H. Jackson]

> But Turing also stated that "The halting problem for Turing machines is
> unsolvable" where the halting problem is defined as "Given a Turing
> machine M and an initial tape T, does M halt when started on tape T?"
> I take all this to mean that one computer, given the inputs of
> another computer's hardware and software specifications and its input,
> cannot determine if the program will halt, or will go into "infinite
> loop." In other words, one computer cannot "debug" another.
>
> Human beings can "debug" programs, and are capable of determining if
> a program goes into "infinite loop." If human beings can perform an
> activity that Turing machines cannot, they must not be computable.
> Therefore, human intellectual activity cannot be simulated on a machine.

Debugging is one thing; looping is quite another. Nobody has figured out whether a (sound) program that searches for an odd perfect number will halt, and perhaps nobody ever will. The halting problem is purely mathematical. Mathematics won't serve as a model for all human activity.

-- Col. G. L. Sicherman
...{rocksvax|decvax}!sunybcs!colonel
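The Colonel's example is easy to write down; whether the loop below ever halts is an open question of number theory, not a debugging puzzle. A minimal sketch (the function names are mine):

```python
def is_perfect(n):
    """True iff n equals the sum of its proper divisors,
    e.g. 6 = 1 + 2 + 3 and 28 = 1 + 2 + 4 + 7 + 14."""
    return n > 1 and sum(d for d in range(1, n // 2 + 1) if n % d == 0) == n

def find_odd_perfect():
    """A sound program whose halting status nobody knows: it
    returns the first odd perfect number, if one exists."""
    n = 3
    while not is_perfect(n):   # no human or machine knows if this ends
        n += 2
    return n
```

`is_perfect` correctly recognizes the even perfect numbers 6, 28, and 496, but no odd perfect number has ever been found, so `find_odd_perfect()` has never been observed to return.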
chavez@harvard.ARPA (R. Martin Chavez) (05/21/85)
Let's get some facts straight:

(1) The Church-Turing Thesis states that "no computational procedure will be considered an algorithm unless it can be presented as a Turing machine" (Lewis and Papadimitriou, _Elements of the Theory of Computation_, p. 223). NO computational model that carries out finite labor at each step has yet been shown to accept non-recursive languages. Any such demonstration would absolutely revolutionize the field of computational complexity.

(2) The PRAM is an exceedingly general model of parallel computation: we are allowed an unbounded number of processors and an infinite global memory. (Each processor can perform arithmetic and load/store operations.) Probabilistic simulations of PRAMs on fixed-degree networks are known. More important for our purposes: the class of problems solvable in polynomial time on a PRAM is exactly equivalent to the class of problems solvable in polynomial space on a sequential Turing machine. I very much doubt that Hewitt has discovered a finite computational model that cannot be simulated on a Turing machine.

(3) The notion of a "computable function" has been defined in terms of various models (e.g., unrestricted grammars and mu-recursive functions). The abundance of alternative characterizations suggests that the Turing-decidable languages form a stable, natural class. The notion of computability, then, is not an artifact of the Turing-machine model.

R. Martin Chavez
Aiken Computation Laboratory, Harvard University (chavez@harvard.ARPA)
sophie@mnetor.UUCP (Sophie Quigley) (05/21/85)
> According to the Turing/Church-Kleene/Post Thesis (and derivatives thereof),
> "Any computation one might naturally regard as possible to carry out can
> be performed by some Turing machine having a suitable set of instructions"
> (All quotes in this posting are from "Machines, Languages, and Computation"
> by Denning, Dennis, and Qualitz), or in other words, anything which is
> "computable" may be simulated by a Turing machine.
>
> If we are attempting to "simulate" human intelligent activity on a
> machine (activity including composing music), we must be assuming
> that such activity is therefore "computable," if we accept that anything
> our machines do is a computable activity.
>
> But Turing also stated that "The halting problem for Turing machines is
> unsolvable" where the halting problem is defined as "Given a Turing
> machine M and an initial tape T, does M halt when started on tape T?"
> I take all this to mean that one computer, given the inputs of
> another computer's hardware and software specifications and its input,
> cannot determine if the program will halt, or will go into "infinite
> loop." In other words, one computer cannot "debug" another.
>
> Human beings can "debug" programs, and are capable of determining if
> a program goes into "infinite loop." If human beings can perform an
> activity that Turing machines cannot, they must not be computable.
> Therefore, human intellectual activity cannot be simulated on a machine.
>
> There are three solutions that I see to this problem. First, what the Thesis
> defines as "computable" is wrong. Second, perhaps the machines we are
> developing today are more powerful than Turing machines, which begs the
> question of what is truly "computable." Third, perhaps humans cannot
> always "debug" programs. Douglas Lenat has stated that "AM" did things
> that he did not suspect. Perhaps computable process Lenat was finding that
> it could not determine the "halting function" for Turing machine "AM."
There is a fourth solution, the one you stated in the previous paragraph: it is possible that human intellectual activity cannot be simulated on a machine. (I personally think that this is quite probable anyway, even without worrying about Turing machines.)

-- Sophie Quigley
{allegra|decvax|ihnp4|linus|watmath}!utzoo!mnetor!sophie
simon@psuvax1.UUCP (Janos Simon) (05/21/85)
[]

There's been a lot of needless confusion in recent postings. Church's thesis says that anything that can be precisely described can be computed by a Turing machine. The question, then, is: "Is the human mind equivalent to a Turing machine?", and there is a strong urge in some people to respond "no". Justifications for a negative answer can be 1) "we're not Turing machines" or 2) "because we can do X, but Turing machines can't".

Objection 1) is clearly valid, and can be forcefully put (e.g. we have feelings, we have an immortal soul, we are creative, we are organic and think in parallel). Nonetheless, it should be discarded. What matters is not what motivates us or how we do things, but the result of such actions. That is, in order to prove that Church's thesis is not applicable to human minds, we must give a justification of type 2). (Read Turing's original paper on AI for a MUCH better debunking of such objections.)

Objection 2) has never been successfully used. For example, the version X = "debug programs" is clearly false: we can debug SOME programs. So can computers. (The unsolvability of the halting problem does NOT mean that a machine cannot decide whether another will halt or not - it says that the decider cannot do it for an ARBITRARY machine. Compilers routinely decide that certain programs will run forever, and refuse to run them. The fact that these programs are easily detectable is irrelevant.)

Previous postings fall into one of these categories. For example, Hewitt's examples of asynchronous parallel computation can be easily simulated by a Turing machine - it will be less efficient, but Church's thesis says nothing about efficiency. The fact that the brain does not work like a Turing machine does not mean it cannot be simulated by one: the motion of springs has little resemblance to electric current in circuits, but analog computers use the equivalence of the respective differential equations.
As for our purported ability to decide whether programs will halt, consider the procedure below. Let Next(a,b,c,n) be a subroutine that returns the next 4-tuple of positive integers in some standard ordering (e.g. increasing values of a+b+c+n, ordered lexicographically within a given value), starting with (1,1,1,1).

    while 1 |= 0 do
        (a,b,c,n) := Next(a,b,c,n);
        if n > 2 then
            if a**n + b**n = c**n then halt
    od

I would warmly welcome a proof that this procedure halts (or that it loops forever).
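Simon's procedure can be rendered in modern Python (a sketch; `tuples` is my implementation of his Next ordering). It halts exactly when it finds a counterexample to a**n + b**n = c**n with n > 2:

```python
from itertools import count

def tuples():
    """Yield all 4-tuples (a,b,c,n) of positive integers, in order
    of increasing a+b+c+n and lexicographically within each sum;
    the first tuple yielded is (1,1,1,1)."""
    for s in count(4):
        for a in range(1, s - 2):
            for b in range(1, s - a - 1):
                for c in range(1, s - a - b):
                    yield a, b, c, s - a - b - c

def search():
    """Halts iff some a**n + b**n == c**n with n > 2 exists."""
    for a, b, c, n in tuples():
        if n > 2 and a**n + b**n == c**n:
            return a, b, c, n
```

Every line of this program is trivial to "debug", yet deciding whether `search()` halts is exactly as hard as settling Fermat's conjecture.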
jbn@wdl1.UUCP (05/24/85)
Undecidability is confusing to many people. It helps to remember that formal undecidability can only arise for machines which are either nondeterministic or have infinite memory. This follows from the observation that a machine with finite memory and deterministic state transitions must eventually either repeat a previous state, there being a finite number of such, or halt. So this is a total nonissue for all real computers.

Second, Turing's proof that the halting problem is undecidable implies only that there exist SOME programs for which the halting problem is undecidable. For most programs, the question can be decided, even assuming infinite memory.

There are practical methods for proving that a program halts. The usual approach is to associate with each loop a single value, derived from the values that change in the loop, which can be shown never to go negative while decreasing by at least 1 every time through the loop. If you can construct such an expression for each loop and each recursion in a program, then the program must eventually halt. If you can't find such an expression, your program has a very good chance of containing a bug. Steve German did his PhD thesis at Stanford on mechanical methods for finding such expressions by automatic examination of Pascal programs.

The whole issue of undecidability is a red herring in the AI and computer field, and should be put to rest.

John Nagle
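Nagle's variant-expression method in miniature (a sketch; the assertion spells out the termination argument as a runtime check):

```python
def gcd(a, b):
    """Euclid's algorithm, annotated with the termination argument
    Nagle describes: the variant expression is b, a value that
    never goes negative and strictly decreases every time through
    the loop (because a % b < b when b > 0), so the loop must
    eventually halt."""
    while b != 0:
        variant_before = b
        a, b = b, a % b
        assert 0 <= b < variant_before  # the variant decreases
    return a
```

Finding such a variant is a proof of termination for this one program; the halting theorem only says no single method finds one for every program.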
peter@yetti.UUCP (Runge) (05/26/85)
> Hewitt's
> examples of asynchronous parallel computation can be easily simulated by a
                                                        ^^^^^^
> Turing machine - it will be less efficient, but Church's thesis says nothing
> about efficiency.

I agree with everything in this article except the point about simulating asynchronous parallel computation. In almost all of the literature, the TM models are "clocked"; the entire system changes state synchronously. The only *easy* way I know of simulating an asynchronous process on a TM requires the ASSUMPTION that all the events in the asynchronous process can be clocked by a global clock.

(Imagine what a mess it would be if they couldn't be! How would we know how many states the process as a whole actually had? How could we define temporal orderings between parts of the whole process which are not in direct communication? Think of the number of times your site has received a followup to an article before you got the article itself. Are you confident you could design the software so that this could NEVER happen without appealing to a Universal Time stamp?)

What justifies the assumption that universal synchronization is always possible? Note that my objection is to the word "easily", not to Church's thesis.

I believe that C. Petri (of Petri net fame) constructed an asynchronous model of a TM in which each square of the tape was a finite automaton connected asynchronously to its neighbors and to the read/write head. This was a Ph.D. thesis (in German) from the University of Bonn in the early 60's. It was translated (by a man named Green, I believe), but I don't know where the English version appeared. My recollection is that Petri showed that the asynchronous TM could carry out any TM computation, even in the face of indefinite delays in the signals between its components. Of course, for Church's thesis, it's the converse that is important.

Anybody know of any more recent rigorous results on this issue? [This really belongs in net.automata!]

-- Peter H. Roosen-Runge, Department of Computer Science, York University, Toronto (Downsview), Ontario
zrm@prism.UUCP (05/27/85)
All that the halting problem may mean is that proving the correctness of a program may be a futile pursuit. But don't tell that to the Ada clones making a living pursuing that goal.
debray@sbcs.UUCP (Saumya Debray) (06/12/85)
> Undecidability is confusing to many people. It helps to remember that
> formal undecidability can only arise for machines which are either
> nondeterministic or have infinite memory. This follows from the observation
> that a machine with finite memory and deterministic state transitions must
> eventually either repeat a previous state, there being a finite number of
> such, or halt. So this is a total nonissue for all real computers.

Technically, you're right. However, the virtual address space of most "real" computers is so large that it is, for all practical purposes of checking repetition of previous machine states, infinite. In other words, not many people I know will spend a lot of time writing code to check whether my VAX is repeating states.

> The whole issue of undecidability is a red herring in the AI and
> computer field, and should be put to rest.

I wish undecidability would go away; it'd certainly make life easier for me (and, I'm sure, a lot of other people). Unfortunately, it's a very pertinent issue for programs that manipulate other programs, e.g. optimizing compilers. Simple questions like "will this variable be instantiated at this point in this Prolog program?" are undecidable in general (as most "interesting" program properties are; ref: Rice's theorem), so such optimizers must be careful to err on the side of caution, in order to guarantee soundness.

-- Saumya Debray, SUNY at Stony Brook
uucp: {allegra, hocsd, philabs, ogcvax}!sbcs!debray
arpa: debray%suny-sb.csnet@csnet-relay.arpa
CSNet: debray@sbcs.csnet
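Debray's "err on the side of caution" can be sketched in miniature: a sound analysis answers "unknown" when it cannot decide, and an optimizer acts only on proved answers. A toy example (the statement tags and function names are invented for illustration, not any real compiler's API):

```python
def provably_halts(stmts):
    """Toy analysis of a program given as a list of statement
    tags.  Sound but incomplete, as Rice's theorem forces:
    "unknown" is always a safe answer, while "yes"/"no" are
    returned only when the analysis is certain."""
    if "loop_forever" in stmts:
        return "no"        # an obvious infinite loop, caught
    if "loop" not in stmts:
        return "yes"       # straight-line code must halt
    return "unknown"       # a general loop: undecidable, so punt

def may_optimize(stmts):
    """A transformation that is only valid for halting code is
    applied only when halting is *proved*, never guessed."""
    return provably_halts(stmts) == "yes"
```

The asymmetry is the point: an unsound "probably halts" would miscompile some programs, while a conservative "unknown" merely forgoes an optimization.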
debray@sbcs.UUCP (Saumya Debray) (06/12/85)
> The only *easy* way I know of simulating an asynchronous process on a TM
> requires the ASSUMPTION that all the events in the asynchronous process
> can be clocked by a global clock. (Imagine what a mess it would be if
> they couldn't be! How would we know how many states the process as a whole
> actually had? How could we define temporal orderings between parts of the
> whole process which are not in direct communication?

Individual processes in an asynchronous computation may have incomplete knowledge about the states of other processes (e.g. their local clocks), but this has nothing to do with a TM simulation of the computation: in any particular asynchronous computation, events have to happen with respect to a (possibly external) global clock, e.g. the one at Greenwich (*), and there's no reason why this can't be handled by a NTM.

(*) Please, let's assume an inertial frame of reference.

-- Saumya Debray, SUNY at Stony Brook
uucp: {allegra, hocsd, philabs, ogcvax}!sbcs!debray
arpa: debray%suny-sb.csnet@csnet-relay.arpa
CSNet: debray@sbcs.csnet