[comp.ai.philosophy] Minds, machines, and Godel

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/16/91)

Dull around here.  How about everybody tries to give the decisive refutation
of the Lucas/Penrose arguments that use Godel's theorem to "show" that human
beings are not computational (or more precisely, to "show" that human beings
are not computationally simulable)?

Just to refresh your memory, the argument goes like this: if I were a
particular Turing Machine T, there would be a mathematical sentence G (the
"Godel sentence" of T) that I could not prove.  But in fact I can see that G
must be true.  Therefore I cannot be T.  This holds for all T, therefore I am
not a Turing machine.
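
[Set out schematically, with $H$ standing for the human prover:

(1) Suppose $H$ is (or is exactly simulated by) a particular Turing
    machine $T$.
(2) By Godel's theorem, if $T$ is consistent, $T$ cannot prove its
    Godel sentence $G$.
(3) But $H$ can see that $G$ is true.
(4) So the outputs of $H$ and $T$ differ, contradicting (1); and since
    $T$ was arbitrary, $H$ is not a Turing machine.

Most of the replies below attack step (3), or the consistency
assumption hidden in step (2).]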

The argument goes through equally well if we replace "am a Turing Machine" by
"am simulable by a Turing machine", so that's not a problem.  The Penrose
version of the argument is similar, except that it applies to the level of
the mathematical community rather than to the level of the person (thus
avoiding the problem that in practice individual people may not be perfect
mathematicians; the mathematical community can conquer all).

I actually think that most of the standard replies are flawed (or can be
overcome) in one way or another.  But I'm interested to see the wisdom of
the net.  Please reply only if you have a good grasp of Godel's theorem and
of the theory of computation.  No hand-waving or mystical mush.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

torkel@sics.se (Torkel Franzen) (01/17/91)

In article <1991Jan16.035058.7465@bronze.ucs.indiana.edu> chalmers@bronze.ucs.
indiana.edu (David Chalmers) writes:

 >Just to refresh your memory, the argument goes like this: if I were a
 >particular Turing Machine T, there would be a mathematical sentence G (the
 >"Godel sentence" of T) that I could not prove.  But in fact I can see that G
 >must be true.  Therefore I cannot be T.  This holds for all T, therefore I am
 >not a Turing machine.

  There is indeed a specific oversight in this argument, contained in the
statement "But in fact I can see that G must be true." For simplicity, let's
stick to arithmetical sentences, where an interpretation is agreed on.
In general we have no idea, given an arithmetical theory T, whether or not
its Godel sentence is true. What we know is that the Godel sentence is true
if the theory is consistent, and this implication is as a rule provable in
the theory itself.
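
  [In symbols: for a recursively axiomatized theory $T$ containing enough
arithmetic, with standard Godel sentence $G$, what is provable in $T$
itself is

	$Con(T) \rightarrow G$   (indeed $Con(T) \leftrightarrow G$),

so any warrant for asserting $G$ is precisely a warrant for asserting
$Con(T)$ -- which is the point at issue.]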

  So, to make this a bit more concrete: suppose I present you with a
formal system (or, if you like, a Turing machine generating sentences)
and claim that this system is consistent, and every arithmetical
sentence you can prove is provable in this system. (So as far as
arithmetic is concerned, you're a machine subordinate to or perhaps
identical with this machine.) You can't refute this using Godel's
theorem unless you can prove the consistency of the system. This we
have no reason to believe that you can do. (Consider e.g. the
arithmetical part of ZFC + 'there is an uncountable measurable
cardinal').

  On the other hand it is correct that if we recognize that a formal system
T only embodies valid principles, Godel's theorem provides a means of
extending T to a stronger theory only embodying valid principles. But this
is a different matter.
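
  [Concretely, the positive application is the familiar progression

	$T_0 = T$,  $T_{n+1} = T_n + Con(T_n)$,  $T_\omega = \bigcup_n T_n$, ...

continued along constructive ordinal notations -- the Turing-Feferman
iterations that come up again later in the thread.]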

tap@ai.toronto.edu (Tony Plate) (01/17/91)

Dave Chalmers writes:
>Just to refresh your memory, the argument goes like this: if I were a
>particular Turing Machine T, there would be a mathematical sentence G (the
>"Godel sentence" of T) that I could not prove.  But in fact I can see that G
>must be true.  Therefore I cannot be T.  This holds for all T, therefore I am
>not a Turing machine.

A flaw in this "proof" is that it is incomplete.  To make it complete
we must prove that for all candidate T, there exists a sentence G
which cannot be proved by T, but which I can see is true.

The briefest statement of this argument would be that people can prove
anything; therefore, people are not Turing machines.

I can't see how you could ever prove that "people can prove anything".

I think that it is necessary to have the universal rather than
the existential quantifier in the "people can prove anything"
part of the argument.  Otherwise, we could take the Turing machine T'
which can indeed prove G, and combine it with T to get a new
candidate Turing machine.
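
[Tony's combination step is just the fact that the union of two
recursively enumerable sets is recursively enumerable.  A minimal sketch
in Python, where enum_t and enum_t2 stand for hypothetical generators
yielding the endless theorem streams of T and T':

    def combine(enum_t, enum_t2):
        # Dovetail the two theorem streams: the combined machine
        # eventually emits every sentence that either machine emits,
        # and it is itself just one more Turing machine.
        while True:
            yield next(enum_t)
            yield next(enum_t2)

Since the combination is again a single machine, an existential claim --
"some machine proves the G that T misses" -- does no work; the quantifier
in the argument must be universal.]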

-- 
---------------- Tony Plate ----------------------  tap@ai.utoronto.ca -----
Department of Computer Science, University of Toronto, 
10 Kings College Road, Toronto, 
Ontario, CANADA M5S 1A4
----------------------------------------------------------------------------

mcdermott-drew@cs.yale.edu (Drew McDermott) (01/17/91)

   In article <1991Jan16.035058.7465@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
   >Dull around here.  How about everybody tries to give the decisive refutation
   >of the Lucas/Penrose arguments that use Godel's theorem to "show" that human
   >beings are not computational (or more precisely, to "show" that human beings
   >are not computationally simulable)?
   >
   >Just to refresh your memory, the argument goes like this: if I were a
   >particular Turing Machine T, there would be a mathematical sentence G (the
   >"Godel sentence" of T) that I could not prove.  But in fact I can see that G
   >must be true.  Therefore I cannot be T.  This holds for all T, therefore I am
   >not a Turing machine.
   >
   >-- 
   >Dave Chalmers                            (dave@cogsci.indiana.edu)      
   >Center for Research on Concepts and Cognition, Indiana University.
   >"It is not the least charm of a theory that it is refutable."

Now that you've demanded a high level of precision, I realize I don't
quite understand the argument.  The claim is that 

  If T is an arbitrary Turing machine, then there is a sentence G that
  T cannot prove.

What does it mean for an arbitrary Turing machine to prove something?
The argument is usually made with respect to a formal logical system,
where I can clearly see that the system "proves" P iff P is a theorem
of the system.  Granted, there is an isomorphism between Turing
machines and formal systems, but then the claim translates into 

  If T is an arbitrary Turing machine, then there is a state G that
  T cannot reach

which, let us suppose, can be made true by restricting the class of
Turing machines a bit.  But then, so what?

If you want to back up and rephrase the argument in terms of Peano
arithmetic or the like, then *of course* people are not equivalent to
any system of arithmetic.  For starters, they're not consistent.

[I pursue the argument along these lines in my reply to Penrose in BBS.]

                                             -- Drew McDermott

loren@dweasel.llnl.gov (Loren Petrich) (01/17/91)

	I am not impressed with the claim that Go"del's theorem
indicates that there are things that we can think about that a
computer cannot, for the following reason. Consider that Go"del's
theorem holds for any formal system in which statements can refer to
themselves. The theorem states that there will exist a statement G
that states: "G is not a theorem". Though true, it cannot be shown to
be true inside the system. All that is necessary is the ability to
make self-references. Go"del's theorem is simply a fancy version of
the Liar Paradox. If "I am lying" is true, then it must be false. If
it is false, then it must be true. Ad infinitum.

	There is a psychological version of Go"del's theorem, which
makes no assumptions about the nature of the human mind. Consider this
statement:

	I am incapable of judging the correctness of this statement

While it is certainly true, one will not be able to work it out
directly, because if one were able to do so, that would imply that one
was not. I myself am able to discuss this by psychological
indirection, involving my ability to analyze my own thought.

	So that is why I think that Go"del's theorem places no
limitation on computers that we do not share, because one can always
set up a psychological version of the statement G mentioned earlier.

	Any comments?


$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Loren Petrich, the Master Blaster: loren@sunlight.llnl.gov

Since this nodename is not widely known, you may have to try:

loren%sunlight.llnl.gov@star.stanford.edu

What do you MEAN it's not in the computer?!? -- Madonna

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/17/91)

In article <89586@lll-winken.LLNL.GOV> loren@dweasel.llnl.gov (Loren Petrich) writes:

>	So that is why I think that Go"del's theorem places no
>limitation on computers that we do not share, because one can always
>set up a psychological version of the statement G mentioned earlier.

This does not address the argument as stated.  The argument as stated did
not claim "humans have no psychological limitations".  Rather, it is an
argument that claims to exhibit a specific difference between the
capabilities of humans and those of any given Turing Machine (from which
it would follow that humans cannot be simulated by Turing Machines).

Incidentally, your judgment that the sentence "I am incapable of judging the
truth of this sentence" is true leads to an obvious inconsistency.  But this
is not relevant to the Godel argument.  Godel's theorem is much deeper than
ordinary self-reference.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

jmc@DEC-Lite.Stanford.EDU (John McCarthy) (01/17/91)

The following is taken from a review of the Penrose book I wrote for the
Bulletin of the American Mathematical Society, November 1990.

	The Penrose argument against AI of most interest to
mathematicians is that whatever system of axioms a computer is
programmed to work in, e.g. Zermelo-Fraenkel set theory, a man
can form a G\"odel sentence for the system, true but not provable
within the system.

	The simplest reply to Penrose is that forming a G\"odel
sentence from a proof predicate expression is just a
one line LISP program.  Imagine a dialog between Penrose and a
mathematics computer program.

\noindent Penrose: Tell me the logical system you use, and
I'll tell you a true sentence you can't prove.

\noindent Program: You tell me what system you use,
and I'll tell you a true sentence you can't prove.

\noindent Penrose: I don't use a fixed logical system.

\noindent Program: I can use any system you like, although mostly
I use a system based on a variant of ZF and descended from 1980s
work of David McAllester.  Would you like me to print you a
manual?  Your proposal is like a contest to see who can name the
largest number with me going first.  Actually, I am prepared to
accept any extension of arithmetic by the addition of
self-confidence principles of the Turing-Feferman type iterated
to constructive transfinite ordinals.

\noindent Penrose: But the constructive ordinals aren't
recursively enumerable.

\noindent Program: So what?  You supply the extension and
whatever confidence I have in the ordinal notation, I'll grant
to the theory.  If you supply the confidence, I'll use the
theory, and you can apply your confidence to the results.

	[Turing adds to a system a statement of its consistency, thus
getting a new system.  Feferman adds an assertion that is
essentially of the form $\forall n (provable\ P(n)) \rightarrow \forall n P(n)$.
We've left off some quotes.]
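
[McCarthy's "one line LISP program" remark can be made concrete.  A rough
sketch in Python rather than LISP, treating formulas as strings rather
than genuine Godel numbers, and assuming the object language supplies a
provability predicate Provable and a formalized substitution function
subst:

    def godel_sentence():
        # F says: the formula coded by X, with X substituted into
        # itself, is not provable.  Applying F to its own quotation
        # yields a sentence G asserting its own unprovability.
        F = "not Provable(subst(X, X))"
        return F.replace("X", repr(F))

Reading subst(a, b) as "the formula a with the quotation of b put in for
X", the subst term inside the returned sentence G names G itself, so G
says that G is not provable.]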

	One mistaken intuition behind the widespread belief that
a program can't do mathematics on a human level is the assumption
that a machine must necessarily do mathematics within a single
axiomatic system with a predefined interpretation.

	Suppose we want a computer to prove theorems in
arithmetic.  We might choose a set of axioms for elementary
arithmetic, put these axioms in the computer, and write a program
to prove conjectured sentences from the axioms.  This is often
done, and Penrose's intuition applies to it.  The G\"odel
sentence of the axiomatic system would be forever beyond the
capabilities of the program.  Nevertheless, since G\"odel
sentences are rather exotic, e.g. induction up to $\epsilon_0$ is
rarely required in mathematics, such programs operating within a
fixed axiomatic system are good enough for most conventional
mathematical purposes.  We'd be very happy with a program that
was good at proving those theorems that have proofs in Peano
arithmetic.  However, to get anything like the ability to look at
mathematical systems from the outside, we must proceed
differently.

	Using a convenient set theory, e.g. ZF, axiomatize the
notion of first order axiomatic theory, the notion of
interpretation and the notion of a sentence holding in an
interpretation.  Then G\"odel's theorem is just an ordinary
theorem of this theory and the fact that the G\"odel sentence
holds in models of the axioms, if any exist, is just an ordinary
theorem.  Indeed the Boyer-Moore interactive theorem prover
has been used by Shankar (1986) to prove
G\"odel's theorem, although not in this generality.  See also
(Quaife 1988).

	Besides the ability to use formalized metamathematics,
a mathematician program will need to give credence to conjectures
based on less than conclusive evidence, just as human mathematicians
give credence to the axiom of choice.  Many other mathematical,
computer science and even philosophical problems will arise
in such an effort.

torkel@sics.se (Torkel Franzen) (01/17/91)

In article <JMC.91Jan16213907@DEC-Lite.Stanford.EDU> jmc@DEC-Lite.Stanford.EDU
 (John McCarthy) writes:

   >The Penrose argument against AI of most interest to
   >mathematicians is that whatever system of axioms a computer is
   >programmed to work in, e.g. Zermelo-Fraenkel set theory, a man
   >can form a G\"odel sentence for the system, true but not provable
   >within the system.

  Surely this can't be Penrose's argument. Obviously anybody can *form*
the Godel sentence. If Penrose has an argument it must be that human
beings, but not the machine at issue, can realize the truth of the
Godel sentence (again assuming the interpretation of the theory to be
agreed on). Again: it is not in general the case that the Godel sentence
for a theory (=the set of arithmetical theorems generated by the machine)
is true, nor have we any reason to believe that we can in general establish its
truth in those cases when it is true.

torkel@sics.se (Torkel Franzen) (01/17/91)

  To amplify my previous comment: John McCarthy's remarks correctly
emphasize that machines just as well as people can use Godel's theorem
in its positive application, i.e. as a means of indefinitely extending
the set of formal principles which we recognize as valid.

  The point I wish to make is that Godel's theorem cannot even be used
to refute the bald assertion that the set of arithmetical theorems
provable by human beings is recursively enumerable (and thus equal to
the set of theorems produced by some Turing machine, or provable in
some formal system). For we have no reason to believe that we would be
able even to convince ourselves of the truth of the Godel sentence for
such a formal system.

cpshelley@violet.uwaterloo.ca (cameron shelley) (01/18/91)

In article <1991Jan17.104913.15692@sics.se> torkel@sics.se (Torkel Franzen) writes:
>
>  To amplify my previous comment: John McCarthy's remarks correctly
>emphasize that machines just as well as people can use Godel's theorem
>in its positive application, i.e. as a means of indefinitely extending
>the set of formal principles which we recognize as valid.
>

  Before I comment on this, let me digress for a moment (sorry! :>).
What Goedel showed, briefly, was that for any axiomatic system T1, there
is a Goedel number G1 which represents a statement about T1 that is
'true' but not 'provable' in T1.  If we then switch to axiomatic system
T2, then we could find that number G1 now represents a provable statement
about T2 -- however there is now a Goedel number G2 which represents
a statement about T2 that is 'true' but not 'provable'.  There are two
things to note about this argument: 1) the incompleteness of any 
axiomatic system is a semantic property of our notion of axiomatic, in
other words there is no fixed number (structure) which always corresponds
to the meaning "true but unprovable", and 2) the role played in the
argument by the term "represents" should indicate that the interpretation
function being used is critical to the problem.  Is it part of the
axiomatic system?

  I think Penrose's argument is that it is not.  He might say: "A human
is able to perceive the truth (a semantic property) of the various
structures Gx ("syntactic" entities) indicated by the incompleteness
theorem, so that for the human syntactic validity is not isomorphic
with semantic validity.  But the axiomatic systems Tx cannot perceive
this because for them syntactic validity is isomorphic with semantic
validity.  The upshot: the human's interpretation function does not
have the same limitations as that of an axiomatic system!  From this,
I can conclude that no axiomatic system can fully simulate a human,
QED."  Based on the above premises, I think the conclusion is correct.
But what about the premises?

  As several people have pointed out, an axiomatic system (I'm getting
tired of that term, how about "computer"?) can be made to approximate
such an interpretation function to an arbitrary accuracy epsilon.  This
point throws one of Penrose's assumptions into relief: a human's
assignment of 'truth' to a syntactic object is certain (i.e. not subject
to approximation).  I have some reservations about this, and this is
obviously where the 'philosophy' comes into play.  What does "truth"
mean?  Is it atomic?  I don't think Tarski resolved this for all time
with his definition, as the current controversy points out.  At this
point, all I can state is my opinion: I am not inclined to the notion
of certitude in anything, because it requires an ideal standard which
I can't conceive of as real, so I would reject Penrose's argument.
It is tempting here to start conjecturing about how modern
physics might be used to support me, but as David said when he started
this, "no hand-waving"!

PS.  Steve Dai: happy now? :>
--
      Cameron Shelley        | "Absurdity, n.  A statement of belief
cpshelley@violet.waterloo.edu|  manifestly inconsistent with one's own
    Davis Centre Rm 2136     |  opinion."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/18/91)

In article <JMC.91Jan16213907@DEC-Lite.Stanford.EDU> jmc@DEC-Lite.Stanford.EDU (John McCarthy) writes:

>	The Penrose argument against AI of most interest to
>mathematicians is that whatever system of axioms a computer is
>programmed to work in, e.g. Zermelo-Fraenkel set theory, a man
>can form a G\"odel sentence for the system, true but not provable
>within the system.

The Lucas/Penrose argument need not assume that the computer is working in
any axiomatic system remotely like the standard systems of set theory and
logic.  All that is required is that it has a computational procedure for
generating true statements of arithmetic.

>	The simplest reply to Penrose is that forming a G\"odel
>sentence from a proof predicate expression is just a
>one line LISP program.  Imagine a dialog between Penrose and a
>mathematics computer program.  [...]
>\noindent Program: I can use any system you like, although mostly
>I use a system based on a variant of ZF and descended from 1980s
>work of David McAllester.  Would you like me to print you a
>manual?  Your proposal is like a contest to see who can name the
>largest number with me going first.  Actually, I am prepared to
>accept any extension of arithmetic by the addition of
>self-confidence principles of the Turing-Feferman type iterated
>to constructive transfinite ordinals.

The argument is not intended to apply to individual axiomatic systems that
the program might use in an auxiliary fashion.  Rather, it is intended
to apply to the program as a whole (thus encompassing all of those systems
if they are part of the system).

>	One mistaken intuition behind the widespread belief that
>a program can't do mathematics on a human level is the assumption
>that a machine must necessarily do mathematics within a single
>axiomatic system with a predefined interpretation.

I agree that this is a common misinterpretation, but the argument has broader
scope.  Assuming that the program is indeed a fixed finite program (of
undetermined nature; the computations that determine the program's
output may look nothing like first-order logic or set theory, although
it's not unlikely that such computations might occasionally be used), then
the Godel argument applies.  (It's a simple matter to convert any program into
an "axiomatic" system that generates precisely the same statements.  Then
we call on Godel.)
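
[The program-to-system conversion is plain enumeration.  A minimal sketch
in Python, where judge stands for a hypothetical total procedure answering
"yes", "no" or "don't know", and sentences enumerates all arithmetical
sentences:

    def theorems(judge, sentences):
        # The induced "axiomatic system": its theorems are exactly the
        # sentences the judging machine accepts.  Godel's theorem then
        # applies to this generated set, whatever the judge's internal
        # workings look like.
        for s in sentences:
            if judge(s) == "yes":
                yield s

Nothing in the construction depends on the judge using anything like
first-order logic internally.]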

>	Using a convenient set theory, e.g. ZF, axiomatize the
>notion of first order axiomatic theory, the notion of
>interpretation and the notion of a sentence holding in an
>interpretation.  Then G\"odel's theorem is just an ordinary
>theorem of this theory and the fact that the G\"odel sentence
>holds in models of the axioms, if any exist, is just an ordinary
>theorem.

Godelize the entire system.  This will give a statement that it cannot
generate which we can see to be true.

>	Besides the ability to use formalized metamathematics
>a mathematician program will need to give credence to conjectures
>based on less than conclusive evidence, just as human mathematicians
>give credence to the axiom of choice. 

This is potentially closer to the point.  If you hold that any system
that humans use to determine mathematical truth is necessarily somewhat
unreliable, then the TM that simulates the human may give rise to an
inconsistent system, so that Godel's theorem will not apply.  Of course humans
are inconsistent in practice, but I'm not sure that I'd want a refutation
of Lucas/Penrose to be based solely on this fact.  At the very least you'd
have to hold that it's a *deep* and important fact about human competence
(as opposed to some kind of noise in the system, a performance/competence
distinction).  Also it is relevant to realize that the argument applies
equally to a simulation of the world's 100 best mathematicians working at
determining the truth of a statement for 100 years, and only giving a
yes/no answer when they are really sure (it's quite alright for them to say
"don't know" for some).  If you held that this system would be inconsistent,
and that this inconsistency was a deep and important fact, then this could
serve as the grounds for a refutation of the argument.  I think, however,
that a refutation of the argument should be based on something more clearcut
than this.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

minsky@media-lab.MEDIA.MIT.EDU (VA) (01/18/91)

In article <1991Jan17.170401.8536@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <JMC.91Jan16213907@DEC-Lite.Stanford.EDU> jmc@DEC-Lite.Stanford.EDU (John McCarthy) writes:

>>	One mistaken intuition behind the widespread belief that
>>a program can't do mathematics on a human level is the assumption
>>that a machine must necessarily do mathematics within a single
>>axiomatic system with a predefined interpretation.

>...  Of course humans
>are inconsistent in practice, but I'm not sure that I'd want a refutation
>of Lucas/Penrose to be based solely on this fact.  At the very least you'd
>have to hold that it's a *deep* and important fact about human competence
>(as opposed to some kind of noise in the system, a performance/competence
>distinction).

Important, but scarcely deep.  Can't you paraphrase Godel as showing
that any system that is consistent is also limited in expressive
ability?  Humans have no problem in making deductions about 'sets that
do not have themselves as members', and so on.  The observation that
humans are inconsistent seems obvious, shallow, and categorically
incontrovertible.  When you say, "at the very least", I can't figure
out the intended effect of that clause.

torkel@sics.se (Torkel Franzen) (01/18/91)

In article <1991Jan17.170401.8536@bronze.ucs.indiana.edu> chalmers@bronze.ucs.
indiana.edu (David Chalmers) writes:

   >Godelize the entire system.  This will give a statement that it cannot
   >generate which we can see to be true.

  We have no grounds whatever for claiming that we can see the Godel sentence
of the system to be true.

torkel@sics.se (Torkel Franzen) (01/18/91)

In article <1991Jan17.162141.12917@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:

   >What Goedel showed, briefly, was that for any axiomatic system T1, there
   >is a Goedel number G1 which represents a statement about T1 that is
   >'true' but not 'provable' in T1.

   No, Godel did not show this. What he did show was that given a formal
system T in which a certain amount of arithmetic is representable (in a
well-defined sense), we can construct a formula G which is undecidable
in T provided T is omega-consistent. Rosser strengthened this to the
theorem that we can construct a formula R which is undecidable in T
provided T is consistent. The formulation in terms of truth presupposes
a particular interpretation of the language of T. Assuming that T is
an extension of arithmetic, with the arithmetical part of T being given
its standard interpretation, it does indeed follow that G is true but
unprovable in T - provided T is consistent. Nothing follows from Godel's
theorem concerning the possibility of proving that G is true.


   >I think Penrose's argument is that it is not.  He might say: "A human
   >is able to perceive the truth (a semantic property) of the various
   >structures Gx ("syntactic" entities) indicated by the incompleteness
   >theorem, so that for the human syntactic validity is not isomorphic
   >with semantic validity.

  We have no reason whatever for claiming that we are able, in general,
to recognize the truth of the Godel sentence of an arithmetical theory,
even in those cases when the Godel sentence is true.

mikeb@wdl31.wdl.loral.com (Michael H Bender) (01/18/91)

David Chalmers quotes:

   if I were a particular Turing Machine T, there would be a mathematical
   sentence G (the "Godel sentence" of T) that I could not prove.  But in fact
   I can see that G must be true.  Therefore I cannot be T. This holds for all
   T, therefore I am not a Turing machine.

I am not an expert in these areas, but I do have a question: isn't the word
"prove" being used in two different ways? I.e., when you say:
   
	"there would be a mathematical sentence G (the "Godel sentence" 
         of T) that I could not prove"
                                ^^^^^

you are referring to proof in the mathematical/formal context. However,
when you say:

	"But in fact I can see that G must be true. Therefore I cannot be T"
                                      ^^^^^^^^^^^^

you are referring to a completely different concept -- what we believe
or we "know" about the world. 

Am I missing something? Because clearly there is a well understood
difference between these two. Clearly people often believe things which are
not provable and in fact, psychological studies have shown that people
will, on occasion, refuse to believe things which they know are provable.

Mike Bender

cam@aipna.ed.ac.uk (Chris Malcolm) (01/18/91)

In article <1991Jan17.040803.8205@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>Yes, but then we'd just Godelize that one.  The point is that for any
>given TM there exists a sentence where our capabilities differ, so we can't
>be any given TM.

Only if you have decided to limit your TM, as others have pointed out,
to working exclusively from one set of axioms, and to being consistent.
But even if you do decide to limit your TM in this way, all your
argument claims is that people can't be such machines -- and who ever
proposed that they might be? What you are driving at with this
argument is the conclusion that computers can't be smart like people.
People happily perform these Goedelian tricks by changing axiom systems,
i.e., in effect switching from being one kind of your limited TMs to
another. But what is to stop the construction of a Universal Turing
Machine, which can be any particular one of your limited TMs it pleases?
Such a machine would no more suffer from the "Goedel limit" than people
do, for the same reasons.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   +44 31 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK             DoD #205

thornley@cs.umn.edu (David H. Thornley) (01/18/91)

Before we continue with this discussion, I would like to ask how we
are to see that something is true.  Most of the time, I don't "see"
truth, I have to work at it.

If we have a program that will pass the Turing test, we presumably
can make some sort of equivalent logical system, and derive a Godel
number corresponding to an undecidable statement, given infinite
resources.  It is not at all clear to me that we can see that this
statement is true; what if a slight mistake occurred during the
process?  How can we tell?

Further, the Godel process, as far as I can tell, results in the
proposition P reading "Proposition P cannot be proved in this system",
which is obviously unprovable in that system, and hence true, provided
the system is consistent.  This seems, to me, to be precisely equivalent
to statement S:  "Statement S cannot be asserted by Chalmers".
I will point out that that statement is true iff Chalmers is
consistent, and challenge Chalmers to stay consistent while
agreeing with me.

I'm not talking psychology here, I'm talking about assertions and
logical consistency.  No system of logic can be both complete and
consistent, and neither can a human.  Any arguments that human
beings are different from logical systems will have to start somewhere
else.

DHT

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/18/91)

In article <1991Jan17.191340.28824@sics.se> torkel@sics.se (Torkel Franzen) writes:

>  We have no grounds whatever for claiming that we can see the Godel sentence
>of the system to be true.

So you must claim either (a) we cannot see the Godel sentence of Principia
Mathematica to be true, or (b) there are some relevant differences between
the general Turing Machine case and Principia Mathematica.  I don't see
any grounds for claiming (a) -- Godel would deny it and so would I.  The
Godel sentence for Principia Mathematica seems as well supported as most
statements of mathematics.  If you claim (b), I would like to see the
relevant differences spelled out.

(N.B. Any Turing Machine system for judging the truth of statements of
arithmetic can straightforwardly be turned into a first-order axiomatic
system for generating such statements.)

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/18/91)

In article <MIKEB.91Jan17123943@wdl31.wdl.loral.com> mikeb@wdl31.wdl.loral.com (Michael H Bender) writes:

>I am not an expert in these areas, but I do have a question: isn't the word
>"prove" being used in two different ways? I.e., when you say:
>	"there would be a mathematical sentence G (the "Godel sentence" 
>         of T) that I could not prove"
>you are referring to proof in the mathematical/formal context. However,
>when you say:
>	"But in fact I can see that G must be true. Therefore I cannot be T"
>you are referring to a completely different concept -- what we believe
>or we "know" about the world. 

I probably shouldn't have used the word "prove" in the first sentence
quoted, due to the connotations of derivation in a particular formal system.
All that is required is mathematical judgment.  I assume that mathematicians
are able to judge certain statements to be "certainly true", irrespective
of how they do this (formal proof is a common method, but not the only
method: e.g. it doesn't apply to axioms, and applies in a subtly different
way to Godel sentences).  If mathematicians are computationally simulable,
then we can make a TM perform this process, and the Lucas argument applies.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/18/91)

David Thornley writes:

>Further, the Godel process, as well as I can tell, results in the
>proposition P reading "Proposition P cannot be proved in this system",
>which is obviously unprovable in that system, and hence true, provided
>the system is consistent.

That's roughly correct, except that the proposition is not "Proposition P
cannot be proved in this system".  Rather, it's a very complex statement of
arithmetic that happens to be equivalent to that proposition.

>This seems, to me, to be precisely equivalent
>to statement S:  "Statement S cannot be asserted by Chalmers".
>I will point out that that statement is true iff Chalmers is
>consistent, and challenge Chalmers to stay consistent while
>agreeing with me.

I agree that this proposition causes me problems ("...I want to say
it's true, really I do...but I just can't.  Oh no, I just did!  Does
the world explode now?").  But it's not directly relevant to the argument,
as the argument never claimed that humans have no limitations.

>I'm not talking psychology here, I'm talking about assertions and
>logical consistency.  No system of logic can be both complete and
>consistent, and neither can a human.  Any arguments that human
>beings are different from logical systems will have to start somewhere
>else.

That's true, and it does.  Humans are certainly not complete and consistent
when it comes to English sentences, as your argument demonstrates; and
they are very likely not complete and consistent when it comes to arithmetic.
But the argument did not claim that they were.  Rather, it claimed that human
capabilities differ from those of any given Turing Machine.

>Before we continue with this discussion, I would like to ask how we
>are to see that something is true.  Most of the time, I don't "see"
>truth, I have to work at it.

We'll get an army of mathematicians to work on judging the statement
for a hundred years.  The process needn't be direct.  Then, if the
mathematicians are computationally simulable, we'll get a TM to
simulate them.

>If we have a program that will pass the Turing test, we presumably
>can make some sort of equivalent logical system, and derive a Godel
>number corresponding to an undecidable statement, given infinite
>resources.  It is not at all clear to me that we can see that this
>statement is true; what if a slight mistake occurred during the
>process?  How can we tell?

Well, our army of mathematicians will have plenty of time to check and
recheck the process.  Of course you might still maintain that there
remains a tiny but non-zero chance of error.  I would like to
idealize away from such errors, as I don't believe they are really
relevant.  Appeal to error would be an "easy way out" of the Lucas argument.
And if you accepted it as the only way out, you would be committed to the
claim that the Lucas argument really would apply to error-free creatures.  I
don't believe that, and think that the argument has a more fundamental
problem.  So henceforth I'd like to idealize away from the possibility of
human error/inconsistency, if you'll allow me.

(Those who believe that human error/inconsistency is the fundamental problem
with the argument, and that the argument really would apply to hypothetical
error-free creatures, should feel free to ignore the discussion from here.)

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

jrk@information-systems.east-anglia.ac.uk (Richard Kennaway CMP RA) (01/18/91)

In <1991Jan18.012527.20104@news.cs.indiana.edu> chalmers@iuvax.cs.indiana.edu (David Q. Chalmers) writes:
>OK, now we're homing in on something closer to the point.  Judgments of the
>truth of Godel sentences are parasitical on judgments of the consistency
>of the given system.  And judgments of consistency may be hard.  We can judge
>that Principia Mathematica is consistent fairly straightforwardly

Eh?  How?

--
Richard Kennaway          SYS, University of East Anglia, Norwich, U.K.
Internet:  jrk@sys.uea.ac.uk		uucp:  ...mcsun!ukc!uea-sys!jrk

mcdermott-drew@cs.yale.edu (Drew McDermott) (01/19/91)

   In article <1991Jan18.012217.20029@news.cs.indiana.edu> chalmers@iuvax.cs.indiana.edu (David Q. Chalmers) writes:

   [[ Quoting me -- ]
   >>What does it mean for an arbitrary Turing machine to prove something?
   >
   >OK, not quite an arbitrary Turing machine.  A Turing machine that takes
   >in mathematical statements as input, and produces "yes" or "no" (or
   >perhaps "don't know") as an output.  That's all we need: if mathematicians
   >are computationally simulable, we can create such a TM from a simulation
   >of them.
   >
   >(Alternatively, we could formulate it as a Turing machine that *generates*
   >statements, rather than judges them; we'd just require a special symbol to
   >be printed at the start and finish of every such statement.  I think the
   >judging TM is simpler, though.)
   >
   >Given such a judging TM, there is a corresponding formal system that produces
   >as theorems only those statements that the TM judges as true.  This we can
   >Godelize.
   ...
   >--
   >Dave Chalmers                            (dave@cogsci.indiana.edu)      
   >Center for Research on Concepts and Cognition, Indiana University.
   >"It is not the least charm of a theory that it is refutable."

I dispute that Godel ever proved anything about this broad class of
Turing machines.  This class includes systems that are *mistaken*
about mathematics to some degree, for instance.  Or systems that
change their minds.  If you rule such Turing machines out, then you've
simply begged the question whether human mathematicians are Turing
machines, because people do indeed change their minds, and can indeed
be mistaken about some parts of mathematics while being quite
competent at other parts.

I'm rather surprised at the course this discussion has taken.  I
expected Chalmers to elicit the usual woolly responses, and then give
prizes for the correct answers.  But his understanding of the debate
turns out to be about as woolly as the average.

At this point, I'd like to see what Chalmers considers the definitive
refutation of the "Mind, Machines, and Godel" argument.

                                             -- Drew

torkel@sics.se (Torkel Franzen) (01/19/91)

In article <15303.9101181150@s4.sys.uea.ac.uk> jrk@information-systems.
east-anglia.ac.uk (Richard Kennaway CMP RA) writes:

   >Eh?  How?

  We mustn't take this talk of seeing and realizing too seriously. The
system of Principia Mathematica, if we mean what Russell and Whitehead
actually wrote, is tricky even to formalize in a way that makes the
assertion that it is consistent into a mathematical statement. (It can
be done in a reasonable way, but few people have any occasion or
reason to delve into the details, e.g. regarding the axiom of
reducibility.) In the case of an unproblematically formal system such
as ZFC, its consistency is not known to be provable in any
mathematical sense, and whether or not one regards it as evidently
consistent is very largely a matter of personal inclination and
opinion. A serious examination of the concept 'humanly provable' will
show that there is no well-defined totality of arithmetical statements
that are 'humanly provable' in a wide sense, or the truth of which we
can 'realize' or 'see'.

  In the present context, however, we are pretending that there is
such a well-defined concept of 'humanly provable arithmetical
statement' or 'humanly realizable arithmetical truth', and we're taking
a very liberal view of what is thus 'realizable'.  I think it's of
interest to note that even on this assumption, there is nothing in
Godel's theorem that implies that the 'humanly provable' statements
are not recursively enumerable.

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/19/91)

In article <3857@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:

>Only if you have decided to limit your TM, as others have pointed out,
>to working exclusively from one set of axioms, and to being consistent.
>But even if you do decide to limit your TM in this way, all your
>argument claims is that people can't be such machines -- and who ever
>proposed that they might be?

All I need is the assumption that humans are computationally simulable
(and, as I've said a couple of times, the perhaps disputable idealization
to consistency).  The computational foundation of the TM that simulates a
human mathematician will probably look nothing like any normal axiomatic
system.  Perhaps e.g. it will involve the computations of a simulated
neural network.  Various mathematical axiomatic systems that the human
may be consciously using at a higher level (e.g. ZFC & Principia Mathematica)
may well be only a very small part of the story.  If we have a perfect
computational simulation of the human, then the ability to "change" such
higher-level axiomatic systems will come along as part of the package.
But it's still a TM (flexible, universal, or not), and so the argument
applies.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

cpshelley@violet.uwaterloo.ca (cameron shelley) (01/19/91)

In article <1991Jan17.200828.376@sics.se> torkel@sics.se (Torkel Franzen) writes:
>In article <1991Jan17.162141.12917@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>
>   >What Goedel showed, briefly, was that for any axiomatic system T1, there
>   >is a Goedel number G1 which represents a statement about T1 that is
>   >'true' but not 'provable' in T1.
>
>   No, Godel did not show this. What he did show was that given a formal
>system T in which a certain amount of arithmetic is representable (in a
>well-defined sense), we can construct a formula G which is undecidable
>in T provided T is omega-consistent. Rosser strengthened this to the
>theorem that we can construct a formula R which is undecidable in T
>provided T is consistent. The formulation in terms of truth presupposes
>a particular interpretation of the language of T. Assuming that T is
>an extension of arithmetic, with the arithmetical part of T being given
>its standard interpretation, it does indeed follow that G is true but
>unprovable in T - provided T is consistent. Nothing follows from Godel's
>theorem concerning the possibility of proving that G is true.
>

  Well, quite true.  I was addressing the effects of Goedel's theorem
and should have said so explicitly.  I'm not sure if I understand your
last statement correctly, however -- the theorem does not constrain all
possible interpretations of truth (which is what I think you mean) --
but did have the effect of refuting Hilbert's conjecture that truth
can be interpreted compositionally with well-formedness (if I may be
allowed to paraphrase) in Frege's sense.

>  We have no reason whatever for claiming that we are able, in general,
>to recognize the truth of the Godel sentence of an arithmetical theory,
>even in those cases when the Godel sentence is true.

  We have the same claim to recognizing truth in this case as we do for
anything else: we made it up.  On the other hand, I don't think we can
claim that 'truth' has been given a perfect formalisation.  I suppose
the question being argued here is whether we ever will be able to or not.

--
      Cameron Shelley        | "Absurdity, n.  A statement of belief
cpshelley@violet.waterloo.edu|  manifestly inconsistent with one's own
    Davis Centre Rm 2136     |  opinion."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/19/91)

In article <28145@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:

>I dispute that Godel ever proved anything about this broad class of
>Turing machines.  This class includes systems that are *mistaken*
>about mathematics to some degree, for instance.  Or systems that
>change their minds.  If you rule such Turing machines out, then you've
>simply begged the question whether human mathematicians are Turing
>machines, because people do indeed change their minds, and can indeed
>be mistaken about some parts of mathematics while being quite
>competent at other parts.

As I've said a number of times, I'm idealizing away from inconsistency,
because I don't think that inconsistency is the root of the problem.  You
may disagree.

Given that step, then take our TM that simulates the judgment of a
mathematician (or an army of mathematicians).  Convert it straightforwardly
to an axiomatic system that generates precisely those statements of
first-order arithmetic that the mathematicians judge to be true.  Then this
system has the following properties:

(1) It is consistent (by assumption).
(2) It has expressive power as great as the usual first-order systems
   of arithmetic, and has deductive power at least as great as such systems.
   (Assuming that human mathematicians have as a subset of their ability the
   capacity to apply the usual "axioms" of arithmetic straightforwardly, even
   consciously if necessary.  This should not be controversial.)

This is all that is required to apply the Godel theorem.

>I'm rather surprised at the course this discussion has taken.  I
>expected Chalmers to elicit the usual woolly responses, and then give
>prizes for the correct answers.  But his understanding of the debate
>turns out to be about as woolly as the average.

Hmm.  Of course the initial statement of the problem requires some
clarification before it really makes sense, and in order to avoid some basic
counterarguments.  I didn't make these clarifications at the beginning
(though I was tempted to) because Lucas and Penrose didn't either, and it
seemed more interesting to let them emerge through dialectic.  It is
interesting how many of the "standard" responses can be finessed by refining
the argument.

>At this point, I'd like to see what Chalmers considers the definitive
>refutation of the "Mind, Machines, and Godel" argument.

I think that the correct refutation is tied up with the question of how we
"see" that certain systems are consistent, and the arguments by Torkel
Franzen are coming close to that point, although even there the argument has
at least temporary defenses.  Note that this kind of refutation is completely
independent of the empirical fact of human inconsistency, which I firmly
believe to be a red herring as far as this argument is concerned (although
this is a fascinating point of issue in its own right).

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/19/91)

In article <15303.9101181150@s4.sys.uea.ac.uk> jrk@information-systems.east-anglia.ac.uk (Richard Kennaway CMP RA) writes:

>[I wrote]
>>OK, now we're homing in on something closer to the point.  Judgments of the
>>truth of Godel sentences are parasitical on judgments of the consistency
>>of the given system.  And judgments of consistency may be hard.  We can judge
>>that Principia Mathematica is consistent fairly straightforwardly
>
>Eh?  How?

Surely this isn't too controversial.  To make it even less so, make it just
a simple modern-style first-order system that axiomatizes our arithmetic
intuitions, by formalizing addition, multiplication, induction etc in the
normal ways.  Presumably the statement that this system is consistent is as
well-founded as most of our beliefs about arithmetic.  If it wasn't, the
whole of mathematics would be in danger of collapse.

Principia Mathematica is similar, though a little more baroque.  Note that
I'm only considering that part of PM that is relevant to generating
statements of first-order arithmetic.  I think it's straightforward to see
the equivalence of this system to other more stripped-down systems.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/19/91)

In article <1991Jan18.083759.19131@sics.se> torkel@sics.se (Torkel Franzen) writes:

>  In spite of your later remarks I fail to see that you have in any way
>substantiated the claim that we are not Turing machines. You are postulating
>that if the arithmetical theorems provable by us are exactly those
>of a theory T, then we must be able to recognize that "we are that machine"
>and therefore T is consistent. Exactly what you mean by this is unclear,
>but you have given no justification for it.

This is now getting very close to the bone.  Judgment of the truth of the
Godel sentence is parasitical on a judgment of consistency, which in turn
is parasitical on our knowledge that the TM in question is in fact an accurate
simulation of us.  Now, I don't know in practice whether we can know whether a
given TM is an accurate simulation of us or not, but it doesn't seem an
unreasonable claim, with the wonders of cognitive science and neurophysiology.
Importantly, to use this as your refutation of the argument, it is you who
have to make the strong claim (a la Benacerraf) that we necessarily cannot
discover the TM that we are (or more precisely, a TM that accurately
simulates us).  I don't see any independently-supported reason for such a
claim, although it's certainly a logical possibility.

This is a bit like the inconsistency counter.  If we in fact couldn't
discover what TM we are, then the argument wouldn't apply, but on the
face of it, it seems to be a bit too much of an easy way out.

>  Suppose I claim that the arithmetical theorems provable by human beings
>are precisely those of ZFC+'there is an uncountable measurable cardinal'
>(even if we have so far only managed to arrive at the truth of a tiny
>fraction of this wealth of arithmetic). How do you propose to use Godel's
>theorem to refute this claim?

Well, the form of the argument is a reductio.  If you did this, and you
provided convincing evidence for this claim, and I had good reason to
believe that I was consistent (the role of the last two assumptions has been
gone over in this and the previous posts), then I could then assert the
statement "ZFC+UM is consistent", and its arithmetical analog (or
alternatively, I could assert the Godel sentence of the theory).  I would
thus have asserted a statement outside the bounds of the theory.  
Contradiction.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

torkel@sics.se (Torkel Franzen) (01/19/91)

In article <1991Jan18.191303.6840@bronze.ucs.indiana.edu> chalmers@bronze.ucs.
indiana.edu (David Chalmers) writes:

>Now, I don't know in practice whether we can know whether a
>given TM is an accurate simulation of us or not, but it doesn't seem an
>unreasonable claim, with the wonders of cognitive science and neurophysiology.
>Importantly, to use this as your refutation of the argument, it is you who
>have to make the strong claim (a la Benacerraf) that we necessarily cannot
>discover the TM that we are (or more precisely, a TM that accurately
>simulates us).  I don't see any independently-supported reason for such a
>claim, although it's certainly a logical possibility.

  I claim nothing concerning what we can or cannot necessarily discover.
To refute your argument I need merely show that it is inconclusive. You
still have said nothing to substantiate the claim that if ZFC+UM codifies
the humanly provable arithmetical statements, we must be able to convince
ourselves of this. Vague (or rather vacuous) appeals to "the wonders of
cognitive science" carry no more weight than a bald claim that humans
can establish the truth of any true arithmetical statement.

  >Well, the form of the argument is a reductio.  If you did this, and you
  >provided convincing evidence for this claim, and I had good reason to
  >believe that I was consistent (the role of the last two assumptions has been
  >gone over in this and the previous posts), then I could then assert the
  >statement "ZFC+UM is consistent", and its arithmetical analog (or
  >alternatively, I could assert the Godel sentence of the theory).  I would
  >thus have asserted a statement outside the bounds of the theory.  
  >Contradiction.

  These remarks are nothing to the point. Obviously I am not claiming to
have convincing evidence that ZFC+UM codifies the humanly provable
arithmetical statements. I have been told, let us say, by God in a dream
that this is so - he claims that he constructed us that way. Now you may
well mistrust my sources, but this is not the issue. The question is whether
you can *refute* my claim by appealing to Godel's theorem. From my point
of view there is no reason whatever for believing that we can give any
kind of evidence for the claim.

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/19/91)

In article <1991Jan18.203249.7022@sics.se> torkel@sics.se (Torkel Franzen) writes:

>  I claim nothing concerning what we can or cannot necessarily discover.
>To refute your argument I need merely show that it is inconclusive. You
>still have said nothing to substantiate the claim that if ZFC+UM codifies
>the humanly provable arithmetical statements, we must be able to convince
>ourselves of this. Vague (or rather vacuous) appeals to "the wonders of
>cognitive science" carry no more weight than a bald claim that humans
>can establish the truth of any true arithmetical statement.

It's a fairly subtle point.  I have no idea whether or not we will
someday be able to discover enough about the mind to build a TM that
accurately simulates it.  But the point is that I don't think you want
the refutation of the Lucas argument to rest solely on the claim that we
cannot.  Do you think that if in fact we could discover the structure of the
mind in this way, the Lucas argument would hold?  It seems tenuous to base
the refutation on such a fragile empirical matter.

As with inconsistency, I think there is something more clearcut going wrong.
Say we assume that we could in fact discover empirically what TM we "are".
Would the Lucas argument then apply, giving a reductio?  I don't think so.

So, I've made so far a number of idealizations, all of which could be subject 
to dispute:

(1) Humans are consistent.
(2) Humans know they are consistent.
(3) Humans are capable of empirically discovering enough about the mind so
    that if they in fact are simulable by a TM, they will know that a given
    TM simulates them.

A reply to the argument could be based on denying the possibility of any of
these idealizations.  Minsky's response is based on the denial of the
possibility of the idealization to (1).  Your response above appears to be
based on the denial of the possibility of the idealization to (3).  I
believe that these replies miss the point, as they're based more on
empirical matters than the logical heart of the Lucas argument.

In fact I believe that even if we accept idealizations (1), (2) and (3)
we can still refute the Lucas argument, in a very clearcut fashion.
Want to have a go?

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

forbis@milton.u.washington.edu (Gary Forbis) (01/19/91)

In article <1991Jan18.223602.15474@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>I've so far made a number of idealizations, all of which could be subject
>to dispute:
>
>(1) Humans are consistent.
>(2) Humans know they are consistent.
>(3) Humans are capable of empirically discovering enough about the mind so
>    that if they in fact are simulable by a TM, they will know that a given
>    TM simulates them.
>
>A reply to the argument could be based on denying the possibility of any of
>these idealizations.  Minsky's response is based on the denial of the
>possibility of the idealization to (1).  Your response above appears to be
>based on the denial of the possibility of the idealization to (3).  I
>believe that these replies miss the point, as they're based more on
>empirical matters than the logical heart of the Lucas argument.
>
>In fact I believe that even if we accept idealizations (1), (2) and (3)
>we can still refute the Lucas argument, in a very clearcut fashion.
>Want to have a go?

I'm sorry I do not have the training to fully participate in this discussion.
I will none the less participate minimally.  I will restate your idealizations.

(1) I am consistent.
(2) I know I am consistent.
(3) I am capable of empirically discovering enough about my mind that I can
    tell when a given TM simulates me.

If I understand your argument correctly you want to hold these to be true when
you have already argued they cannot all be true.  I think you have also made
another idealization.  That is (4) The individual and the collective are 
arbitrary.

When I accept a new belief, I (to the best of my abilities) check the belief
for internal consistency then check the consistency of this belief with all
my previously held beliefs.  I recognize that some of my beliefs may be negated
without affecting the consistency of the set.  This being so, I can imagine
another individual who is consistent, knows he or she is consistent, and knows
the TM that simulates him or her.  When the two of us are considered together
we may no longer be consistent.  The difference between the individual and
the collective is not arbitrary.

I am not the same TM I was a moment ago.  The difference between the TMs at
any two moments is chaotic or random.

I hope this has not been too stupid.

--gary forbis@u.washington.edu

mcdermott-drew@cs.yale.edu (Drew McDermott) (01/19/91)

   In article <1991Jan18.184116.5299@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
   >In article <28145@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:
   >
   >>I dispute that Godel ever proved anything about this broad class of
   >>Turing machines.  This class includes systems that are *mistaken*
   >>about mathematics to some degree, for instance.  Or systems that
   >>change their minds.  If you rule such Turing machines out, then you've
   >>simply begged the question whether human mathematicians are Turing
   >>machines, because people do indeed change their minds, and can indeed
   >>be mistaken about some parts of mathematics while being quite
   >>competent at other parts.
   >
   >As I've said a number of times, I'm idealizing away from inconsistency,
   >because I don't think that inconsistency is the root of the problem.  You
   >may disagree.
   >

Forget inconsistency; being mistaken does not entail being
inconsistent.  Changing one's mind can't be modeled at all within
formal arithmetic.  Neither is being led to a proof by an apple
falling on one's head (or touch sensor).

   >Given that step, then take our TM that simulates the judgment of a
   >mathematician (or an army of mathematicians).  Convert it straightforwardly
   >to an axiomatic system that generates precisely those statements of
   >first-order arithmetic that the mathematicians judge to be true.  

This step is not straightforward!  I know what you're alluding to: the
usual transformation of a Turing machine into a term-rewriting system.
The problem with that construction in this context is that a Turing
machine that we interpret as telling us mathematical conjectures will
not get transformed into a formal system for generating conjectures in
the same domain.  Instead, it will get transformed into some
completely uninteresting system of Turing-machine state descriptions.  
[I should acknowledge Marvin Minsky for pointing this out to me years
ago; evidently, he should have pointed it out to more people.]

Let's grant that idealization is a good thing, and assume that our TM
mathematician does nothing but print out its conclusions, without ever
communicating with other mathematicians.  One plausible design for
this system is to have it employ various techniques of varying degrees
of soundness for arriving at these conclusions.  (E.g., the use of
diagrams; naive set theory). If a conclusion seems to withstand all
attempts to disprove it, it gradually becomes accepted, and the TM
prints it out.  If a counterexample or a fallacy in the proof later
becomes apparent, it prints out a retraction.

One might object that this would be an untrustworthy mathematician.
Well, so are people.

One might object that one couldn't build such a TM.  But that begs the
original question.  We're assuming you can, supposedly in order to get
a contradiction.

So let's assume that such a mathematician can be built.  Please note
that *we can still perform the usual transformation of this Turing
machine into a proof system.*  There is more than one way to do it.
If we want to have a system that generates all and only the states the
machine can reach, then the proof system's language will be a language
for describing machine and tape configurations.  This language will
just not be about mathematics at all.  

On the other hand, if we choose to axiomatize "conclusions the machine
reaches after N state transitions," for some N, then let's suppose
that also can be made into a formal system.  Unfortunately, the resulting
system is not sound.

Hence we have a choice between modeling the Turing machine as a formal
system that says nothing about mathematics, or viewing it as an
unsound formal mathematical system.  Either way, Godel's argument does
not apply.

   >
   >>At this point, I'd like to see what Chalmers considers the definitive
   >>refutation of the "Mind, Machines, and Godel" argument.
   >
   >I think that the correct refutation is tied up with the question of how we
   >"see" that certain systems are consistent, and the arguments by Torkel
   >Franzen are coming close to that point, although even there the argument has
   >at least temporary defenses.  

I like Franzen's argument.  I like McCarthy's too.  It's not
surprising that an argument cobbled together from misunderstood
fragments of several theorems has several refutations.

                                             -- Drew McDermott

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/19/91)

In article <14733@milton.u.washington.edu> forbis@milton.u.washington.edu (Gary Forbis) writes:

>[Idealizations:]
>(1) I am consistent.
>(2) I know I am consistent.
>(3) I am capable of empirically discovering enough about my mind that I can
>    tell when a given TM simulates me.
>
>If I understand your argument correctly you want to hold these to be true when
>you have already argued they cannot all be true.  I think you have also made
>another idealization.  That is (4) The individual and the collective are 
>arbitrary.

Actually, the argument claimed not that (1)-(3) are inconsistent, but rather
that they are not consistent with the fact that I am a TM.  (NB Certain
other background assumptions are also needed, esp the fact that I have at
least a basic arithmetical ability.  I hope this isn't controversial :-). )

Those who base their counter to the Lucas argument on the fact that one of
(1)-(3) is fundamentally false would appear to accept that argument.
(Indeed, Minsky appears to claim that the Lucas/Godel argument shows that
consistent systems are fundamentally limited in a way that we are not.)
I think that this is far too much of a concession to Lucas, however.

>without affecting the consistency of the set.  This being so, I can imagine
>another individual who is consistent, knows he or she is consistent, and knows
>the TM that simulates him or her.  When the two of us are considered together
>we may no longer be consistent.  The difference between the individual and
>the collective is not arbitrary.

True.  Doesn't matter.  I'll decide in advance what it is that I want to
simulate, whether it's a single mathematician or an army of them.  Preferably
the latter.  They'll work on judging truth for an indefinite period of time,
until they come to a joint, carefully-considered decision.  Individual
mathematicians may at first have differences, but we assume that the process
will eventually settle them.  Statements of number theory (unlike those of
set theory) don't seem to cause radical differences in standards of
justification, presumably because mathematicians share a set of common
intuitions about the integers.  There might still be a minute amount of
random noise in this system causing mistakes, but this is what I'm prepared
to idealize away from.

>I am not the same TM I was a moment ago.  The difference between the TMs at
>any two moments is chaotic or random.

If you're making an appeal to low-level chaos, I'm idealizing away from that
too.  (I wish I was Noam Chomsky so I could make these idealizations
with a straight face.  They always seem to bring on this feeling of
defensiveness.  I don't know how he does it.)  I think it's fairly widely
accepted that chaos doesn't give us any extra capacities that a simulation at
a very fine but discrete level wouldn't have.  (Because of chaos, such
a simulation gives you a way that the system could have gone, rather than
the way it in fact does go on a particular occasion.  No difference
in competence.)  As with consistency but more clearly, I don't think you
want to base an argument against Lucas solely on chaos.

(Actually, your argument from chaos might be construed as an argument *for*
Lucas, saying "Hey, we're really not TM simulable after all.  So Lucas was
right.")

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/19/91)

In article <28154@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:

>This step is not straightforward!  I know what you're alluding to: the
>usual transformation of a Turing machine into a term-rewriting system.
>The problem with that construction in this context is that a Turing
>machine that we interpret as telling us mathematical conjectures will
>not get transformed into a formal system for generating conjectures in
>the same domain.  Instead, it will get transformed into some
>completely uninteresting system of Turing-machine state descriptions.  
>[I should acknowledge Marvin Minsky for pointing this out to me years
>ago; evidently, he should have pointed it out to more people.]

This is only one way to rewrite the TM as a formal system!  First, a
clarification: our TM is *judging* certain statements of first-order
arithmetic, given to it as input, as true or false (and it may whirr
forever on certain inputs if it wants to); it's not generating them.
(The argument would still work with generation, but it would be
more complex.)  So the statements that the TM judges to be true are a
recursively enumerable set.  (More precisely, their Godel numbers form an
r.e. set.)
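To see why the set is r.e., here is a sketch of the standard dovetailing
construction (a toy illustration only; judge_step is a stand-in interface
to the judging machine, not anything from the actual argument):

def accepted_statements(judge_step, nth_statement):
    """Dovetail: run the judging machine on every input 'in parallel',
    one more step per round.  The machine may whirr forever on some
    inputs, but any statement it ever accepts is eventually yielded,
    so the accepted statements form a recursively enumerable set."""
    emitted = set()
    rounds = 0
    while True:
        rounds += 1
        for i in range(rounds):
            s = nth_statement(i)
            # judge_step(s, t) is hypothetical: True iff the machine
            # has accepted statement s within its first t steps.
            if s not in emitted and judge_step(s, rounds):
                emitted.add(s)
                yield s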

There is a fairly straightforward theorem that any set of statements
closed under logical consequence whose Godel numbers form an r.e. set
can be generated as the theorems of some formal system (from a decidable
set of axioms).  I think the theorem is due to Craig; you can certainly
find it somewhere in Boolos & Jeffrey or any other
standard text.  It should be non-controversial that our set of statements
is closed under logical consequence.  Therefore the statements can be
generated as precisely the theorems of some formal system.
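For those who haven't seen it, Craig's trick in miniature (a toy sketch
with a made-up three-element enumeration; not a serious implementation):

from itertools import islice

def theorems():
    # Stand-in for any effective enumeration of the r.e. set of
    # statements (hypothetical contents, for illustration only).
    yield "0=0"
    yield "0+1=1"
    yield "1+1=2"

def is_craig_axiom(candidate):
    """The n-th enumerated theorem phi is re-packaged as the axiom
    'phi & phi & ... & phi' (n conjuncts).  To decide axiomhood, split
    the candidate into conjuncts, check that they are all identical,
    and run the enumerator exactly n steps -- a bounded computation,
    so the axiom set is decidable."""
    parts = candidate.split(" & ")
    phi, n = parts[0], len(parts)
    if any(p != phi for p in parts):
        return False
    enumerated = list(islice(theorems(), n))
    return len(enumerated) == n and enumerated[-1] == phi

Each Craig axiom is logically equivalent to the theorem it encodes, so the
deductive closure of the axioms is exactly the original set.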

Sorry if this wasn't clear before.  I thought that the relationship between
recursive enumerability and axiomatizability was well known.

>diagrams; naive set theory). If a conclusion seems to withstand all
>attempts to disprove it, it gradually becomes accepted, and the TM
>prints it out.  If a counterexample or a fallacy in the proof later
>becomes apparent, it prints out a retraction.

I'm assuming the TM is simulating an army of mathematicians working on
judging the truth of a given statement.  When they decide (after much
checking and counter-checking) that the statement is certainly true, they
lodge an irrevocable decision.  (Some statements may be such that they never
reach a decision; it doesn't matter.)  So no retractions for our TM.

>On the other hand, if we choose to axiomatize "conclusions the machine
>reaches after N state transitions," for some N, then let's suppose
>that also can be made into a formal system.  Unfortunately, the resulting
>system is not sound.

We need not specify such an N.  It's allowed to run for an indefinite amount
of time.  The set of statements eventually recognized as true still forms
a recursively enumerable set.

Recall that I'm idealizing away from the possibility of our army of
mathematicians making mistakes in their final, considered decision.  (And
note that no-one is forcing them to make decisions on statements that they're 
not certain about.  But certain statements, such as "2+2=4", or "first-order
arithmetic is consistent" [actually its arithmetical analogue], will not be
controversial.)  I don't see any reason why this idealization is unreasonable.
If, however, you want to dispute it, I can't provide a knockdown argument --
you can side with Minsky et al on the fundamentality of inconsistency.  Either
way, I don't see this deep conceptual incoherency and woolliness that you keep
talking about.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

torkel@sics.se (Torkel Franzen) (01/19/91)

In article <1991Jan18.223602.15474@bronze.ucs.indiana.edu> chalmers@bronze.
ucs.indiana.edu (David Chalmers) writes:

  >It's a fairly subtle point.  I have no idea whether or not we will
  >someday be able to discover enough about the mind to build a TM that
  >accurately simulates it.  But the point is that I don't think you want
  >the refutation of the Lucas argument to rest solely on the claim that we
  >cannot.  Do you think that if in fact we could discover the structure of the
  >mind in this way, the Lucas argument would hold?  It seems tenuous to base
  >the refutation on such a fragile empirical matter.

  Just what do you mean by a refutation of the Lucas argument? What I
mean by refuting the Lucas/Penrose argument is to establish that it is
inconclusive.  I believe this is what is generally meant by refuting
an argument. I am not concerned with either accepting or rejecting
vague speculations about the possibility of our establishing that the
human mind is simulated by some Turing machine. My argument is just
this, that the Lucas/Penrose argument presupposes a claim for which no
justification whatever has been presented, namely the claim that if
the humanly provable statements of arithmetic are exactly the theorems
of a formal system T, it must be possible for human beings to convince
themselves of this fact.

  If we leave the Lucas/Penrose argument to one side and consider on
its own merits the idea of simulating the mathematical activity of the
human mind by a Turing machine or formal system, my general attitude
is that there is no such thing as a well-defined "capacity of the
human mind" for realizing mathematical truths or proving mathematical
statements. The question of the possibility or impossibility of
simulating the human mind by a Turing machine (in the sense and the
area here at issue) is not an empirical question at all, but the
outcome of a misunderstanding of or lack of attention to such concepts
as mathematical provability and mathematical evidence.

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/20/91)

In article <1991Jan19.112854.28632@sics.se> torkel@sics.se (Torkel Franzen) writes:

>  Just what do you mean by a refutation of the Lucas argument? What I
>mean by refuting the Lucas/Penrose argument is to establish that it is
>inconclusive.  I believe this is what is generally meant by refuting
>an argument. I am not concerned with either accepting or rejecting
>vague speculations about the possibility of our establishing that the
>human mind is simulated by some Turing machine. My argument is just
>this, that the Lucas/Penrose argument presupposes a claim for which no
>justification whatever has been presented, namely the claim that if
>the humanly provable statements of arithmetic are exactly the theorems
>of a formal system T, it must be possible for human beings to convince
>themselves of this fact.

The trouble with this refutation through inconclusiveness is that if it does
in fact turn out that we are able to discover enough about the human mind
(an empirical matter), then the Lucas argument applies as strongly as ever.
I'd prefer to find a refutation that isn't subject to such empirical
vagaries.  (Unless, in fact, you believe that the argument would be valid
given such an assumption.)

>  If we leave the Lucas/Penrose argument to one side and consider on
>its own merits the idea of simulating the mathematical activity of the
>human mind by a Turing machine or formal system, my general attitude
>is that there is no such thing as a well-defined "capacity of the
>human mind" for realizing mathematical truths or proving mathematical
>statements. The question of the possibility or impossibility of
>simulating the human mind by a Turing machine (in the sense and the
>area here at issue) is not an empirical question at all, but the
>outcome of a misunderstanding of or lack of attention to such concepts
>as mathematical provability and mathematical evidence.

Your first sentence is quite true, and that's why I've done it by stipulation.
Pick an army of mathematicians that I want to simulate, set them to work on
judging the truth of a statement, and when they've convinced themselves that
the statement is "certainly true" (by their own standards, but they must be
conservative standards), they lodge an official judgment.  The TM that simulates
this process (and produces as final output their judgment) is the TM that is
being used.  [And I'm idealizing away from the possibility of mistakes,
somewhat controversially, as well as from factors like death and boredom,
I hope less controversially.]

As for simulating the mind by a TM not being an empirical question; I don't
see why.  I'm talking about a complete simulation of human action, not just
some ill-specified notion of "mathematical judgment".  Then *given* such
a simulation ability, we can produce simulation of some stipulated aspect of
mathematical judgment without difficulty, as outlined in the previous
paragraph.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/20/91)

In article <1991Jan19.122052.29529@sics.se> torkel@sics.se (Torkel Franzen) writes:

>  You are quite right, but I think those who are not familiar with this
>observation of Craig's may get the wrong idea. It should not be assumed
>that the formal system yielded by Craig's trick is anything like ZFC or
>other formal systems with comprehensible axiom schemas.

This is quite true, though of course it doesn't affect the argument.  I've
stressed before that the formal system in question may look nothing like
ZFC or other familiar systems.  The system is still subject to the Godel
argument.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

torkel@sics.se (Torkel Franzen) (01/20/91)

In article <1991Jan19.185303.25410@bronze.ucs.indiana.edu> chalmers@bronze.
ucs.indiana.edu (David Chalmers) writes:

  >The trouble with this refutation through inconclusiveness is that if it does
  >in fact turn out that we are able to discover enough about the human mind
  >(an empirical matter), then the Lucas argument applies as strongly as ever.

  Come now. If the Lucas argument is strengthened by (as yet purely
hypothetical) observations concerning the human mind, it is another
argument. In rejecting the Lucas argument I'm rejecting what is now
presented as a piece of reasoning from Godel's theorem. To take an
analogy: if somebody argues that there is life on Mars on the basis
of a vision, I reject his argument. If he argues that there is life on
Mars on the basis of a vision in combination with the findings of
an expedition to Mars, we are dealing with a new argument.

  >As for simulating the mind by a TM not being an empirical question; I don't
  >see why.  I'm talking about a complete simulation of human action, not just
  >some ill-specified notion of "mathematical judgment".  Then *given* such
  >a simulation ability, we can produce simulation of some stipulated aspect of
  >mathematical judgment without difficulty, as outlined in the previous
  >paragraph.

  The notion of 'simulation' at issue here is that of a Turing machine
generating (or accepting) those and only those arithmetical statements
provable by human beings. Your remarks about a 'complete simulation of
human action' don't at all touch on the questions involved. For
example, in the case of the consistency of ZFC, which 'human action'
do you want to simulate: the assertion that ZFC is evidently
consistent, or the assertion that there are no grounds for claiming
that ZFC is consistent? Both of these 'actions', and others as well,
are found among mathematicians.

torkel@sics.se (Torkel Franzen) (01/21/91)

In article <17695.9101201730@s4.sys.uea.ac.uk> jrk@information-systems.
east-anglia.ac.uk (Richard Kennaway CMP RA) writes:

 >I see no reason for supposing first-order arithmetic to be consistent, other
 >than the fact that no contradictions have yet surfaced.  This is strong
 >enough evidence that I can regard its consistency as "uncontroversial"
 >(though not necessarily uncontrovertible); I have the same amount of
 >confidence (or lack of it) in the consistency of ZF (or any of its
 >equiconsistent variants).

  Hmm..on the basis of the fact that no contradiction has yet surfaced
you regard the consistency of first order arithmetic as "uncontroversial"...
By the same token, then, since no counterexample to Fermat's last theorem
has been found, you regard it as "uncontroversially" true...And for any
conjecture that has not yet been settled in ZFC, you regard it as
"uncontroversial" that it is undecidable in ZFC...

  It is of course open to anybody to claim whatever he likes concerning
what he has or has not confidence in. However, your apparent doctrine
that any statement of the form "for all k, P(k)" that has not been 
refuted is "uncontroversially" true is merely silly from the point of view
of ordinary mathematics.

  In other words: if you claim that there is evidence that first order
arithmetic is consistent, you had better come up with something more
convincing.

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/21/91)

In article <28203@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:

>Well, I think this idealization *is* unreasonable, and perhaps we
>should leave the dispute at that.

It comes down to whether you think a competence/performance distinction makes
sense when applied to human mathematical capacities (where competence
idealizes away from mistakes, death and boredom as performance factors).  It's
obviously controversial.  I think the notion makes some sense; enough sense
for the purposes of this argument, though it's not entirely clear-cut.
Others obviously disagree.  It's a complex matter that's probably not
worth getting into here.  (Look at all the debate the linguists have had over
the notion.)

If we accepted the notion, then the Lucas argument could be construed as
claiming that the set of arithmetical statements encompassed by human
competence includes statements that are outside the capacities of any given
TM.  This, I think, would be of some interest to AI.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (01/21/91)

Any system strong enough to express arithmetic is either incomplete or
inconsistent.

Now what does "expressing arithmetic" mean?  Because (for reasons that
I suspect Torkel Franzen could explain better than I can, even if he
might not agree with them) I am wary of attempting to express this
formally.  So let's say, simply, that this refers to systems that can
talk about proofs of theorems in the same systems.  What does
arithmetic have to do with this?  Simply because arithmetic is
*sufficient* for making the required self-referential statements -- as
Godel showed with his numbering scheme.  Is arithmetic *necessary*?  I
dunno.  You can do the same things with Smullyan's lexical tricks with
strings, or with McCarthy's very "natural" methods of using
LISP-quotes.

We all know how inconsistencies arose from self-reference, in the work
of the giants upon whose heads we stand.  Theory of types, and
stratification were attempts to avoid it -- but because we really
*did* want to make those deductions anyway, we had then to find ways
to collapse those infinite hierarchies by inventing various sorts of
closure-convergence gadgets.  In my dim understanding of these things,
the same idea seems to keep recurring, at least in spirit: various
sorts of ultrafilters, directed systems, Dana Scott's gadgets, and so
forth.  But these alternatives to stratifications are not "obviously
consistent", although some have not shown flaws for many years now.

OK.  Now Drew McDermott "suspects that" we differ here:

  "I believe [Minsky] thinks that inconsistency is actually a hallmark
  of  inferential power, whereas I think it's an unavoidable bug."

Well, we actually agree, in that the bug seems unavoidable.  Where do
we differ?  If anywhere, which I doubt, it is that I didn't mean to
argue that inconsistency is a virtue in itself.  However, I am pretty
sure that *self-reference*, or rather, the permission to consider,
conjecture, speculate, -- in short, to think freely and reflectively
-- about what one (that is, the thinking system that one is) is doing
may be an indispensable tool for making good discoveries.  And this
makes it worth the risk of making mistakes.  That's what happened to
Frege, and Russell, and Quine, and all those brave pioneers.  

(Now, each of those attempts was followed by making a new
reformulation, so that we can regard the whole community of Logicians
as generating a Myhill creative set, in which each theory generates a
new one.  Of course, there is no difficulty in making a single Turing
machine that does that sort of thing iteratively -- and it is up to
the external observer to regard it as a logic circuit that generates
symbol strings (a view in which the concept of *true* is
inappropriate) or as a formal system that happens to be either
incomplete or inconsistent, depending on how the observer chooses to
interpret what it's doing.  I tend to regard Penrose in both ways,
from moment to moment; first as a somewhat incoherent generator of
(nice-sounding or well-written) strings that have nothing at all to do
with truth, and then as a logical system which, in the "standard
interpretation" is rich in patent inconsistencies and evident
falsenesses.)

To be sure, perhaps self-reference might turn out to not be
indispensable.  But consider the price.  The computational cost of
self-reference is virtually nil.  Your (thinking) machine has an input
channel and an output channel; all you need is a quotation trick to
create processes that self-refer.  You can make your Spelling Program
check the spelling in its source code.  No trouble there.  You can
make your interpreter interpret itself when applied to itself.  Lots
of trouble there.  You can try to inhibit any process that gets too
close to self-reference -- and then you'll have a machine that won't
be able to think about any Godel theorems at all.
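Here is the quotation trick at its most trivial (a toy example; the
spelling rule is a stand-in): a program that reads its own source text
as data.  Harmless self-reference, at essentially no computational cost.

def check_spelling(text):
    # Toy spelling rule: flag words containing the common
    # transposition "teh".
    return [w for w in text.split() if "teh" in w.lower()]

if __name__ == "__main__":
    # The quotation trick: treat this program's own text as input.
    with open(__file__) as f:
        print("suspect words in my own source:", check_spelling(f.read()))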


So the price -- not of consistency, but of its consequence -- of not
being permitted to be reflective, referential, or -- may I suggest it
-- of not being permitted to be conscious -- seems too high a price to
pay.  Indeed, the most obnoxious part of Penrose's book, at least to
me, is the muddle he makes of consciousness.  Perhaps his confusion
about the various senses of how machines can be seen as being
constrained by logic is why he insists on treating consciousness as a
primitive mystery, rather than feeling obliged to offer substantial
suggestions about the various phenomena associated with that term.
That may be one reason why he seems to propose that consciousness might
involve some entirely new and undiscovered principle of physics. I
might be wrong, but it seems to me that the most prominent aspect of
consciousness is indeed some capacity for being able to manipulate
descriptions of one's own recent mental (computational) activities.
Is there a hidden thread in his reasoning?  My view is that his book
is an "air sandwich" of the following form.

  "I'll show you machines can't think."
  (Two hundred pages of irrelevant mathematical autobiography)
  "See, I showed you!"

Where's the beef?  Is it possible that this is his hidden agenda:

 "Machines are logical and consistent."
 "Then they can't be sefl-referent."
 "But people are.  So they can't be machines."

-- Marvin Minsky

jrk@information-systems.east-anglia.ac.uk (Richard Kennaway CMP RA) (01/21/91)

In <1991Jan20.230731.579@sics.se> torkel@sics.se (Torkel Franzen) writes:
<  Hmm..on the basis of the fact that no contradiction has yet surfaced
<you regard the consistency of first order arithmetic as "uncontroversial"...
<By the same token, then, since no counterexample to Fermat's last theorem
<has been found, you regard it as "uncontroversially" true...And for any
<conjecture that has not yet been settled in ZFC, you regard it as
<"uncontroversial" that it is undecidable in ZFC...

First-order arithmetic and ZFC have received a great deal more usage than
any particular conjectures such as FLT.  No, I don't regard FLT as
uncontroversially true.  And for that matter, by "uncontroversially true"
- I should have said just "uncontroversial" - I don't mean any more than
that I'm willing to take arithmetic and ZFC for granted, and not worry
about the possibility that the house of cards may fall down one day.  I
wouldn't do that with FLT.

<However, your apparent doctrine
<that any statement of the form "for all k, P(k)" that has not been 
<refuted is "uncontroversially" true is merely silly from the point of view
<of ordinary mathematics.

Such a doctrine would indeed be uncontroversially silly.  It isn't mine though.

<  In other words: if you claim that there is evidence that first order
<arithmetic is consistent, you had better come up with something more
<convincing.

I am not claiming any such thing.

--
Richard Kennaway          SYS, University of East Anglia, Norwich, U.K.
Internet:  jrk@sys.uea.ac.uk		uucp:  ...mcsun!ukc!uea-sys!jrk

torkel@sics.se (Torkel Franzen) (01/21/91)

In article <703.9101211032@s4.sys.uea.ac.uk> jrk@information-systems.
east-anglia.ac.uk (Richard Kennaway CMP RA) writes:

    [quoting me:]
   ><In other words: if you claim that there is evidence that first order
   ><arithmetic is consistent, you had better come up with something more
   ><convincing.

   >I am not claiming any such thing.

   I thought you did, in saying that "This is strong enough evidence
that I can regard its consistency as 'uncontroversial'." Anyway, the
"statistical" argument for the consistency of elementary arithmetic or
ZFC is a very poor one. Take some classical open problem A in
mathematics, one on which lots of mathematicians have been working for
maybe two hundred years. The "statistical" argument would lead us to
the conclusion that A is undecidable in current mathematics. Still every
now and then such a classical open problem is settled using perfectly
ordinary mathematics. In the case of the question of the consistency
of strong set theories, the statistical argument is further
weakened by the fact that very few people try to exploit and explore
the full strength of these theories.

  On the other hand, there is nothing mysterious about the claim that
e.g. first order arithmetic is obviously consistent. There is no need
to speak poetically about intuition: the theory is obviously consistent
because its axioms are obviously true. The mere occurrence of the word
"true" in an observation is often, it seems, taken to imply that
the observation must be metaphysical or otherwise deplorable. To say
that the axioms are true, however, is to say no more than what is said
in asserting the recursion equations for addition and multiplication,
and the principle of mathematical induction. Now if somebody does in fact
find these assertions obscure or doubtful or false, he will have no reason to
believe that first order arithmetic is consistent. It is mere ritual
however, to speak of the consistency of arithmetic as dubious without
attempting to establish just what is obscure, false, or doubtful about
the axioms. If one does in fact have serious doubts concerning the
consistency of arithmetic, one must have serious doubts concerning the
validity of practically every theorem of mathematics.

  It is perhaps worth pointing out that the trivial consistency proof
for arithmetic - the axioms are true, so the theory is consistent - is
easily formalizable in ordinary mathematics (second order arithmetic).
The proof is mathematically unsatisfactory not because it is invalid,
but because it has no interesting mathematical content whatever.
If one regards the conclusion of this particular proof as so extremely
dubious, why should one have confidence in any theorem of elementary
analysis? Again the supposed mystique of consistency appears to be mostly
a matter of ritual.
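
  Schematically, the trivial proof runs like this (notation illustrative
only; Tr is a truth predicate for the language of first order arithmetic,
definable in second order arithmetic):

  (1) \forall A\,(\mathrm{Ax}_{PA}(A) \to \mathrm{Tr}(A))      [the axioms are true]
  (2) the rules of inference preserve \mathrm{Tr}              [soundness of the rules]
  (3) \forall A\,(\mathrm{Prov}_{PA}(A) \to \mathrm{Tr}(A))    [by induction on proofs]
  (4) \neg\mathrm{Tr}(\ulcorner 0=1 \urcorner), hence
      \neg\mathrm{Prov}_{PA}(\ulcorner 0=1 \urcorner)          [i.e. \mathrm{Con}(PA)]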

  Nor is it strange that many people who regard the consistency of elementary
arithmetic as unproblematic find the consistency of ZFC highly problematic.
The axioms of ZFC, when put forward as true assertions, are simply very much
more abstract and problematic and open to objections of various kinds.

  You raised another point:

    >>consider the Godel sentence of ZFC+ 'there is an uncountable
    >>measurable cardinal'. Is this sentence (in its ordinary
    >>interpretation) true or not? Nobody knows, and quite possibly nobody
    >>ever will.

   >What would count as evidence?  What is this "ordinary interpretation" of
   >ZFC+'tiaumc'?  What would it mean to find out that the sentence was true or
   >false?


  I'm not speaking of any interpretation of ZFC+UM, but of the ordinary
interpretation of a particular set-theoretical formula. This is an
arithmetical formula in the set-theoretical sense, i.e. all its
quantifiers are restricted to the hereditarily finite sets. So we need
to agree on the interpretation of such formulas. But of course we
also need to agree on the interpretation of the formula defining the
hereditarily finite sets.

  The most direct way of finding out that the Godel statement for
ZFC+UM is false is to establish that ZFC+UM is inconsistent. Evidence
for the truth of the Godel sentence is a rather more difficult matter,
and it may well be that we will never arrive at anything that anybody
regards as at all convincing evidence for the consistency of ZFC + UM
(unlike the situation with less problematic axioms of infinity).

  My adding "in its ordinary interpretation" had a point: suppose somebody
presents us with a theory, one that he associates with a particular
interpretation. We don't need to know what his interpretation is, or
introduce any interpretation of the theory, in order to establish that
Godel's theorem applies to it. What we do is to interpret arithmetic
in his theory (this a purely proof-theoretic matter), and establish that
the translation G* in his theory of a particular arithmetical
sentence G is not provable in his theory, provided it is consistent.
We know nothing about the truth-value of G* under his interpretation, and
indeed in the case of the Godel sentence, G* may well be refutable in his
theory, without this implying that anything is wrong with the theory (in
particular, it need not be inconsistent). In the case of the Rosser
sentence R, we know that R* is undecidable in the theory if it is
consistent. However, it may still well be the case that R*, on his own
perfectly good interpretation, is a false statement.
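
  For reference, the standard facts being appealed to, stated schematically
(T any consistent r.e. theory interpreting enough arithmetic; notation mine):

  Godel:   T \vdash G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G\urcorner)
           \Rightarrow\; T \nvdash G
           (and T \nvdash \neg G provided T is \omega-consistent)

  Rosser:  T \vdash R \leftrightarrow \forall p\,(\mathrm{Prf}_T(p,\ulcorner R\urcorner)
           \to \exists q \le p\;\mathrm{Prf}_T(q,\ulcorner\neg R\urcorner))
           \Rightarrow\; T \nvdash R \text{ and } T \nvdash \neg R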

cpshelley@violet.uwaterloo.ca (cameron shelley) (01/21/91)

In article <1991Jan21.022919.13895@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <28203@cs.yale.edu> mcdermott-drew@cs.yale.edu (Drew McDermott) writes:
>
>>Well, I think this idealization *is* unreasonable, and perhaps we
>>should leave the dispute at that.
>
>It comes down to whether you think a competence/performance distinction makes
>sense when applied to human mathematical capacities (where competence
>idealizes away from mistakes, death and boredom as performance factors).  It's
>obviously controversial.  I think the notion makes some sense; enough sense
>for the purposes of this argument, though it's not entirely clear-cut.
>Others obviously disagree.  It's a complex matter that's probably not
>worth getting into here.  

And number theory isn't? :>  I think this thread has been interesting and
it's worthwhile discussing competence limits, but their relationship to
performance is of importance to the "comp.ai" part of "comp.ai.philosophy".
This has already occurred in the last thread regarding Searle's Chinese
room, so perhaps a different Gedankenexperiment would be appropriate.
I would like to point out that we do have other options than the standard
TM for modeling humans: namely non-deterministic TM's.  Off the top of
my head, I'd say they'd be a better place to start.

--
      Cameron Shelley        | "Absurdity, n.  A statement of belief
cpshelley@violet.waterloo.edu|  manifestly inconsistent with one's own
    Davis Centre Rm 2136     |  opinion."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

G.Joly@cs.ucl.ac.uk (Gordon Joly) (01/22/91)

In article <1991Jan16.035058.7465@bronze.ucs.indiana.edu> David Chalmers stirs

> Dull around here.  How about everybody tries to give the decisive refutation
> of the Lucas/Penrose arguments that use Godel's theorem to "show" that human
> beings are not computational (or more precisely, to "show" that human beings
> are not computationally simulable)?
> 
> Just to refresh your memory, the argument goes like this: if I were a
> particular Turing Machine T, there would be a mathematical sentence G (the
> "Godel sentence" of T) that I could not prove.  But in fact I can see that G
> must be true.  Therefore I cannot be T.  This holds for all T, therefore I am
> not a Turing machine.

Godel only dealt with first order logic.  We presume that there is more
going on in the Universe than that.

Secondly, Godel published ``Uber formal unentscheidbare Satze der
Principia Mathematica und verwandter Systeme, Teil I'', in Monatshefte
fur Mathematik und Physik 38 (1931), 173-89. (umlauts omitted). He
then became ill and never published ``Teil II''.


Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

           Email: Les jeux sans frontiers du monde

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (01/22/91)

In article <1991Jan21.022919.13895@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>
>If we accepted the notion, then the Lucas argument could be construed as
>claiming that the set of arithmetical statements encompassed by human
>competence includes statements that are outside the capacities of any given
>TM.  This, I think, would be of some interest to AI.

I don't see why anyone should accept that statement.  But how 'bout
accepting:


  The set of arithmetical statements encompassed by human
  competence includes statements that are outside the capacities of
  any Turing Machines that are restricted to deducing the theorems of 
  certain consistent deductive systems -- and whose output strings are
  interpreted only in that sense.

And this would be of little interest to AI-ers.  At least, to those
trying either to make smart machines or constructive theories of
psychology or brain function.

bill@synapse.nsma.arizona.EDU (Bill Skaggs) (01/22/91)

In article <4963@media-lab.MEDIA.MIT.EDU> 
minsky@media-lab.media.mit.edu (Marvin Minsky) writes:
>
>  [ . . . quotation . . . ]  
>
>But how 'bout accepting:
>
>
>  The set of arithmetical statements encompassed by human
>  competence includes statements that are outside the capacities of
>  any Turing Machines that are restricted to deducing the theorems of 
>  certain consistent deductive systems -- and whose output strings are
>  interpreted only in that sense.
>
>And this would be of little interest to AI-ers.  At least, to those
>trying either to make smart machines or constructive theories of
>psychology or brain function.

	I'm not quite sure I know how to parse your sentence.  If I
understand it, it means:

	For any consistent deductive system S of a certain class, 
	there exists an	arithmetical statement A such that:  1) A is 
	encompassed by human competence; and 2) A is outside the 
	capacity of any Turing Machine TM which is restricted to
	deducing consequences of S.

	I think Roger Penrose would agree with this assertion (though
I wouldn't).  Penrose seems to believe in a Platonic notion of absolute
mathematical truth to which humans have privileged access.  He seems to
think that the collapse of quantum wave-functions, which (he says) 
cannot be modeled by differential equations, is crucial to this access.

	If I accepted Penrose's argument (I don't), and I were interested
in building a machine with human intelligence, I would be trying to 
figure out how to incorporate quantum effects into it.  So these sorts
of considerations *could* be of interest to AI people, if only they
were correct.

G.Joly@cs.ucl.ac.uk (Gordon Joly) (01/23/91)

  "I'll show you machines can't think."
  (Two hundred pages of irrelevant mathematical autobiography)
  "See, I showed you!"

An Oxbridge professor is asked "Is that correct?". He thinks, goes away
and comes back twenty minutes later, and says to the class "Yes, it is
correct and what is more, it is obvious." He then carries on the lecture.

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

           Email: Les jeux sans frontiers du monde

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (01/23/91)

In article <1377@ucl-cs.uucp> G.Joly@cs.ucl.ac.uk (Gordon Joly) writes:
>
>  "I'll show you machines can't think."
>  (Two hundred pages of irrelevant mathematical autobiography)
>  "See, I showed you!"
>
>An Oxbridge professor is asked "Is that correct?". He thinks, goes away
>and comes back twenty minutes later, and says to the class "Yes, it is
>correct and what is more, it is obvious." He then carries on the lecture.
 
This really happened.  I was a young student at Harvard taking a linear
algebra-type course given by Hassler Whitney, the great topologist.
A couple of weeks into abstract vector space theories, one student 
finally got the courage to speak up:

  "Sir, I can sorta see what's happening in 2 and 3 dimensions, but I
can't get the hang of what it's like in n-dimensional space.  Could
you suggest a practical way to visualize this?"

  Whitney stared into nothingness for a very long time.  Finally he
faced the student again and said,

 "Yes.  Just think of some particular n."

  And he continued on, as abstract as before.

sarima@tdatirv.UUCP (Stanley Friesen) (01/24/91)

In article <1991Jan18.012527.20104@news.cs.indiana.edu> chalmers@iuvax.cs.indiana.edu (David Q. Chalmers) writes:
>OK, now we're homing in on something closer to the point.  Judgments of the
>truth of Godel sentences are parasitical on judgments of the consistency
>of the given system.  And judgments of consistency may be hard.

Are not judgements (proofs) of consistency more than just hard?
Isn't consistency one of the undecidable questions (in the general case)?
[I was under the impression that it was equivalent to the halting problem.]
If so, then one of the premises of the argument is invalid: not even humans
can actually determine the truth of all Goedel statements.
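
Roughly: inconsistency is semi-decidable -- enumerate the theorems and
halt if an explicit contradiction turns up -- but consistency is not,
since for a consistent theory the search runs forever.  A toy sketch
(the string representation of sentences is a stand-in):

def find_contradiction(enumerate_theorems):
    """Semi-decide INconsistency: enumerate the theorems and halt iff
    some sentence and its negation both show up.  For a consistent
    theory this loop never terminates, which is why consistency in
    general is undecidable rather than merely hard."""
    seen = set()
    for phi in enumerate_theorems():
        neg = phi[4:] if phi.startswith("not ") else "not " + phi
        if neg in seen:
            return (phi, neg)  # explicit contradiction found
        seen.add(phi)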

>This might be responded to by saying that all the contradiction proves is
>that *one* of the assumptions is false.  We can only conclude that *either*
>(a) we aren't that TM; or (b) we can't know that we are that TM; or (c) we
>can't know that we are consistent.  Benacerraf wrote a paper in the 60's
>suggesting (b) (i.e. maybe it's impossible for us to know which TM we are),
>but I find this a bit weak as the sole grounds for refuting the argument.

But why?  Isn't denying (b) equivalent to saying that consistency is
decidable?  If so, then (b) is necessarily true unless consistency is provable.

>(c) is perhaps better but still a bit weak -- just say we are that super-good
>race that has every reason to believe that they never make a mistake 
>(remembering that we are allowed to idealize here).

I suppose idealization is OK, but it does leave us with an almost uninteresting
result! I do not believe that a generalized reasoning system can be designed
that is not subject to mistakes.  I.e. being able to reach arbitrary conclusions
without scope limitations automatically implies the possibility of error.

Of course with humans it goes even deeper than that, since we operate almost
entirely on pattern matching (i.e. guessing).  This is true in *all* fields
of human endeavor, even (or especially) mathematics.  I do not believe that
any system could match human performance in mathematics except by using
pattern matching to arrive at new hypotheses 'from scratch'.  And pattern
matching is intrinsically unreliable.

What am I saying here?  I am saying that human mathematicians arrive at
conclusions based on prior experience, intuition, training &c. and only
produce mathematical proofs as a final step.  The real work is in the
leaps of intuition.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

jbaxter@physics.adelaide.edu.au (Jon Baxter) (01/24/91)

In article <93@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:

.........
>
> I suppose idealization is OK, but it does leave us with an almost uninteresting
> result! I do not believe that a generalized reasoning system can be designed
> that is not subject to mistakes.  I.e. being able to reach arbitrary conclusions
> without scope limitations automatically implies the possibility of error.
>
> Of course with humans it goes even deeper than that, since we operate almost
> entirely on pattern matching (i.e. guessing).  This is true in *all* fields
> of human endeavor, even (or especially) mathematics.  I do not believe that
> any system could match human performance in mathematics except by using
> pattern matching to arrive at new hypotheses 'from scratch'.  And pattern
> matching is intrinsically unreliable.
>
> What am I saying here?  I am saying that human mathematicians arrive at
> conclusions based on prior experience, intuition, training &c. and only
> produce mathematical proofs as a final step.  The real work is in the
> leaps of intuition.

This line of reasoning has already been discussed at great length in this
thread. Even if a human mathematician does use heuristic methods to arrive
at mathematical theorems, the proofs are generally correct and when they
are not they are (usually) quickly overturned by the rest of the mathematical
community. So surely we can idealize away from the--perhaps slightly
inconsistent--human mathematician, to the completely consistent one.
To do this, all one has to do is consider a mathematician who feeds her
"proofs" to another very large group of mathematicians that reject
anything they decide is invalid. The system consisting of the original
mathematician plus her "vetters" can be made virtually as reliable as you
like, so simulate the lot of them on a Turing machine and you have got
yourself an error-free "mathematician".
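
A toy version of the composite system (the domain is made up; the point
is the architecture -- an unreliable generator behind a strict checker):

import random

def conjectures(rng):
    # Unreliable 'mathematician': guesses sums of small integers,
    # and is deliberately wrong part of the time.
    while True:
        a, b = rng.randrange(100), rng.randrange(100)
        yield (a, b, a + b + rng.choice([0, 0, 0, 1]))

def vet(claim):
    # Reliable 'vetters': check the claim by direct computation.
    a, b, c = claim
    return a + b == c

def error_free_mathematician(n_theorems=5, seed=0):
    # The composite publishes only vetted claims, so its output is
    # error-free even though its idea source is not -- and the whole
    # thing is one deterministic program.
    rng = random.Random(seed)
    published = []
    for claim in conjectures(rng):
        if vet(claim):
            published.append(claim)
            if len(published) == n_theorems:
                return published

print(error_free_mathematician())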
--
Jon.                            "Life's too short for death."

G.Joly@cs.ucl.ac.uk (Gordon Joly) (01/29/91)

In article <2291@sirius.ucs.adelaide.edu.au> Jon Baxter

> To do this, all one has to do is consider a mathematician who feeds her
> "proofs" to another very large group of mathematicians that reject
> anything they decide is invalid. The system consisting of the original
> mathematician plus her "vetters" can be made virtually as reliable as you
> like, so simulate the lot of them on a Turing machine and you have got
> yourself an error-free "mathematician".

One thing is missing:-

``A mathematician is a machine for turning coffee into theorems'' -
John Erd\"os.

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

           Email: Les jeux sans frontiers du monde

G.Joly@cs.ucl.ac.uk (01/29/91)

I apologise, that should read Paul Erd\H{o}s.

``A mathematician is a machine for turning coffee into theorems'' -

sarima@tdatirv.UUCP (Stanley Friesen) (01/30/91)

In article <2291@sirius.ucs.adelaide.edu.au> jbaxter@adelphi.physics.adelaide.edu.au.oz.au (Jon Baxter) writes:
|This line of reasoning has already been discussed at great length in this
|thread. Even if a human mathematician does use heuristic methods to arrive
|at mathematical theorems, the proofs are generally correct and when they
|are not they are (usually) quickly overturned by the rest of the mathematical
|community. So surely we can idealize away from the--perhaps slightly
|inconsistent--human mathematician, to the completely consistent one.

Perhaps so, but I believe that by that point we have something that is
uninteresting from an AI point of view (something with no 'intelligence').
I maintain that such a generalized, consistent mathematician has lost 'his'
ability to be original (to invent new theorems or concepts) except in a
very restricted sense.  It is the 'willingness' to make mistakes that allows
new ideas to be generated.  Thus a model that didn't include at least a
sub-component that was 'unreliable' would be incapable of doing anything
except *check* theorems proposed to it.  That is in idealizing away the
error you have idealized away the mathmetician and left nothing but a
'vetter' as you call it.

|To do this, all one has to do is consider a mathematician who feeds her
|"proofs" to another very large group of mathematicians who reject
|anything they decide is invalid.  The system consisting of the original
|mathematician plus her "vetters" can be made virtually as reliable as you
|like, so simulate the lot of them on a Turing machine and you have got
|yourself an error-free "mathematician".

I suspect you would end up with an 'error-free' mathematical 'vetter' that
needs a real mathematician to feed it new ideas.
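
To put the worry in concrete terms (Python, purely illustrative;
is_prime is a hypothetical stand-in for any mechanical proof check):

    def is_prime(n):
        # Reliable, consistent, and utterly unoriginal: it only judges
        # what it is handed.
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def pure_vetter(proposals):
        # The guessing sub-component idealized away: the system can
        # filter conjectures but never originate one.
        return [n for n in proposals if is_prime(n)]

    print(pure_vetter([]))         # []   (no guesser, no output)
    print(pure_vetter([9, 11]))    # [11] (but 9 and 11 came from outside)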
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)

james@uncecs.edu (J. Smith) (01/30/91)

In article <1991Jan18.005010.23943@bronze.ucs.indiana.edu> 
  chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

> [responding to David Thornley]
>Humans ...
>are very likely not complete and consistent when it comes to arithmetic.
>But the argument did not claim that they were.  Rather, it claimed that human
>capabilities differ from those of any given Turing Machine.

And in article <1991Jan21.022919.13895@bronze.ucs.indiana.edu> he writes:
  [responding to Drew McDermott]

>It comes down to whether you think a competence/performance distinction makes
>sense when applied to human mathematical capacities (where competence
>idealizes away from mistakes, death and boredom as performance factors).
>... If we accepted the notion, then the Lucas argument could be construed as
>claiming that the set of arithmetical statements encompassed by human
>competence includes statements that are outside the capacities of any given
>TM.  This, I think, would be of some interest to AI.

Would the argument make any claims regarding: "human capabilities
differ from those of any given human being" or "the set of arithmetical
statements encompassed by human competence includes statements that are
outside the capacities of any given human being"?

One could substitute "given army of mathematicians" just as well,
idealized or not.

Is there a notion of "TM capabilities" or "TM competence" we might
contrast with any "given TM" in the same way?

chalmers@bronze.ucs.indiana.edu (David Chalmers) (01/30/91)

In article <1991Jan29.211200.29482@uncecs.edu> james@uncecs.edu (J. Smith) writes:

>Would the argument make any claims regarding: "human capabilities
>differ from those of any given human being" or "the set of arithmetical
>statements encompassed by human competence includes statements that are
>outside the capacities of any given human being"?

No, it wouldn't.  The Godelian argument only applies to TM's, not humans,
so it would only yield such a conclusion if we beg the question and assume
that humans are TM's.

The given statements (particularly the second) might well be true, but only
because of "performance noise" in the capacities of a given human.  It's not
impossible from the standpoint of the argument that there could exist a human
without such performance limitations (except the trivial limitations like
death).  In any case, this empirical competence/performance distinction is
different in kind to the fundamental difference in capacities that the
argument purports to find between humans and TM's.

>Is there a notion of "TM capabilities" or "TM competence" we might
>contrast with any "given TM" in the same way?

That's an interesting question.  There are any number of different ways of
drawing competence/performance distinctions, depending on what we want to
idealize away from.  The question of whether the idealization preserves
the "essence" of a given capacity is a tricky one.  Presumably in the given
context, some such idealizations can sometimes be drawn, just
as they can be from humans.  But I don't think that will affect the argument,
as the argument applies to a TM (the purported TM model of human competence)
for which all the relevant idealizations have already been made.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."

christo@psych.toronto.edu (Christopher Green) (02/01/91)

>
>In article <1991Jan16.035058.7465@bronze.ucs.indiana.edu> David Chalmers stirs
>
>> Dull around here.  How about everybody tries to give the decisive refutation
>> of the Lucas/Penrose arguments that use Godel's theorem to "show" that human
>> beings are not computational (or more precisely, to "show" that human beings
>> are not computationally simulable)?
>> 
>> Just to refresh your memory, the argument goes like this: if I were a
>> particular Turing Machine T, there would be a mathematical sentence G (the
>> "Godel sentence" of T) that I could not prove.  But in fact I can see that G
>> must be true.  Therefore I cannot be T.  This holds for all T, therefore I am
>> not a Turing machine.
>
Didn't Whiteley take care of this argument back in 1962 (_Philosophy_, _37_: 61)?
Even Hofstadter seems to think so (see the comment in the Bibliography of _GEB_).

-cdg-