[sci.math] musings on Godel's theorem

news@ccncsu.ColoState.EDU (USENET News) (11/23/90)

I recently had the chance to absorb "Godel's Proof" by Nagel and Newman.  IMHO, the major
achievement in Godel's paper was the invention of the expressions for
provability and self-substitution in the formal system, the form of which was
mostly unspecified in the book, so naturally I speculated!  Now, apart from
the immediate requirement that they be constructed exclusively with the
symbols of the formal system, what are other demands on their form?
What about these:
 
* They have to be of finite size.
 
(We need to compute the Godel number of expressions with the provability and
substitution expressions embedded within them, and if either lacked a
finite size, the finite Godel number of the overall expression could not
be determined.)
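The finiteness requirement can be made concrete with a toy version of the
prime-power numbering (the symbol codes below are invented for illustration
and are not Godel's actual coding):

```python
# A toy Godel numbering: each symbol gets a code, and a finite string
# s1 s2 ... sk maps to 2^c1 * 3^c2 * 5^c3 * ...  If the expression were
# not finite, this product would never terminate and no number would
# exist for it.

def primes():
    """Generate primes 2, 3, 5, ... by trial division."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

# Invented symbol codes for a tiny fragment of arithmetic notation.
SYMBOL_CODES = {'0': 1, 's': 2, '=': 3, '(': 4, ')': 5, '+': 6}

def godel_number(expr):
    g = 1
    for sym, p in zip(expr, primes()):
        g *= p ** SYMBOL_CODES[sym]
    return g

# "0=0" maps to 2^1 * 3^3 * 5^1 = 270.
print(godel_number("0=0"))  # 270
```

Embedding one expression inside another just means its number appears as a
factor pattern inside the larger number, so both must be finite.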
 
* They have to be invariant.
 
(The proof hinges on the idea that the provability and self-substitution
expressions can be paralleled in separate formulas "H" and "G".  It seems
to me that if either of the two expressions (proving and substitution) varied
between the H and G formulas the conclusion may not be valid.)
 
Can anybody outline the method by which Godel accomplished this?
The "provability" formula must take a number that can represent a symbol
or string of symbols of arbitrary length, parse it, and check each step
for provability.  How is this feat possible within the confines of an
expression of finite length?  The "substitution" expression similarly must
be able to deal with expressions (represented by the Godel number) of
arbitrary length.  At the very least, it seems the formal system requires
an operator "the number such that..."
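Part of the answer is that a single *finite* procedure can handle inputs of
any size.  A toy decoder makes the point (same invented symbol codes as the
prime-power illustration above, not Godel's actual scheme); the provability
predicate works in the same spirit, as a finite recursion over the prime
factorization of a number encoding a whole sequence of formulas:

```python
# Decoding reverses the toy encoding by factoring out successive primes.
# The procedure below is finite and fixed, yet it recovers a string of
# any length from its number -- the arithmetical analogue of a finite
# formula ranging over proofs of arbitrary length.

SYMBOL_CODES = {'0': 1, 's': 2, '=': 3, '(': 4, ')': 5, '+': 6}

def next_prime(p):
    """Smallest prime greater than p, by trial division."""
    n = p + 1
    while any(n % d == 0 for d in range(2, int(n ** 0.5) + 1)):
        n += 1
    return n

def decode(g):
    """Recover the symbol string from a toy Godel number."""
    inverse = {v: k for k, v in SYMBOL_CODES.items()}
    out, p = [], 2
    while g > 1:
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inverse[e])
        p = next_prime(p)
    return ''.join(out)

# 270 = 2^1 * 3^3 * 5^1 encodes "0=0" under these codes.
print(decode(270))  # 0=0
```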
 
* * *
 
On a different topic, I think it is presumptuous at best and irresponsible
at worst for claims about machine intelligence to be made from a supposed
basis in Godel's proof, as Nagel and Newman do in their work (and also as
in the article on the subject in Scientific American):
 
``Godel's conclusions bear on the question whether a calculating machine
can be constructed that would match the human brain in mathematical
intelligence.  Today's calculating machines have a fixed set of directives
built into them; these directives correspond to the fixed rules of inference
of formalized axiomatic procedure.  But, as Godel showed in his incompleteness
theorem, there are innumerable problems in elementary number theory that fall
outside the scope of a fixed axiomatic method; and that such engines are
incapable of answering, however intricate and ingenious their built-in
mechanisms may be and however rapid their operations.''
 
This link between an axiomatic system and computation seems rather ill-defined
and tenuous!  (The philosopher Searle may be taking advantage of this with his
"Chinese Room" argument, claiming that computation is merely axiomatic or
"syntactic," in his words.)
 
How is it that one single self-referential statement in a system has been
construed to hold so much importance?  Nagel and Newman state above that
Godel's proof showed
 
``there are innumerable problems in elementary number theory that fall
outside the scope of a fixed axiomatic method...'' (?!)
 
How is it that one "truth" that cannot be demonstrated in the system has
been extended to conjectures in number theory?  Invoking Godel seems like a
weak evasion of their challenge.
 

ld231782@longs.LANCE.ColoState.EDU

morphy@nntp-server.caltech.edu (Jones Maxime Murphy) (11/26/90)

news@ccncsu.ColoState.EDU (USENET News) writes:

>This link between an axiomatic system and computation seems rather ill-defined
>and tenuous!  (The philosopher Searle may be taking advantage of this with his
>"Chinese Room" argument, claiming that computation is merely axiomatic or
>"syntactic," in his words.)
> 

The link is hardly tenuous. A machine is defined by its changes of state in 
response to sentences from the language it accepts. The primitives of its
language correspond to axioms, and syntax corresponds to an inferential
mechanism for extracting state changes from combinations of primitives.
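The "machine as state changes" picture can be sketched as a finite-state
machine (the states, alphabet, and example language below are invented for
illustration):

```python
# A minimal state machine: the machine is nothing but its transition
# table, and "accepting a sentence" means the induced sequence of state
# changes ends in an accepting state.

def run(transitions, start, accepting, sentence):
    state = start
    for symbol in sentence:
        state = transitions[(state, symbol)]
    return state in accepting

# Toy machine over {a, b} accepting strings with an even number of a's.
T = {('even', 'a'): 'odd',  ('even', 'b'): 'even',
     ('odd',  'a'): 'even', ('odd',  'b'): 'odd'}

print(run(T, 'even', {'even'}, 'abba'))  # True
print(run(T, 'even', {'even'}, 'ab'))    # False
```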


>How is it that one single self-referential statement in a system has been
>construed to hold so much importance?  Nagel and Newman state above that
>Godel's proof showed
> 
>``there are innumerable problems in elementary number theory that fall
>outside the scope of a fixed axiomatic method...'' (?!)
> 
>How is it that one "truth" that cannot be demonstrated in the system has
>been extended to conjectures in number theory?  Invoking Godel seems like a
>weak evasion of their challenge.
> 

I presume the one "truth" being referred to here is the impossibility of a
finitistic proof of the consistency of arithmetic that can be embedded in the
language of arithmetic and the first-order predicate calculus. Beyond the
simple culture shock at the news that "logic" as we know it is no longer
omnipotent in its domain of inquiry, the fact is that we must now be wary in
the quest for artificial intelligence.
 A great deal of human intelligence derives from self-reference, and weaknesses
in self-referential analysis in the inferential mechanisms of computing
machines, which at the moment all rely on first-order predicate calculus, must
be taken seriously.
Jones

blenko-tom@cs.yale.edu (Tom Blenko) (11/26/90)

In article <1990Nov25.204513.24169@nntp-server.caltech.edu> morphy@nntp-server.caltech.edu (Jones Maxime Murphy) writes:
|news@ccncsu.ColoState.EDU (USENET News) writes:
|
|>This link between an axiomatic system and computation seems rather ill-defined
|>and tenuous!  (The philosopher Searle may be taking advantage of this with his
|>...
|
|The link is hardly tenuous. A machine is defined by its changes of state in 
|response to sentences from the language it accepts. The primitives of its
|language correspond to axioms, and syntax corresponds to an inferential
|mechanism for extracting state changes from combinations of primitives.

You've left me mystified. A computing machine reads a sequence of
inputs, producing a sequence of outputs and halting or not halting.  A
logical theory consists of a set of axioms, rule(s) of inference, and a
collection of theorems derivable from the axioms.

Now, what is the "link"? What corresponds to the inputs, what
corresponds to the outputs, and what corresponds to the computing
machine?
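One standard correspondence, not spelled out in the thread: the theorems of a
formal system are exactly what a machine that mechanically enumerates
derivations will output.  A toy sketch (the one-axiom, one-rule system below
is invented for illustration):

```python
# Toy formal system: axiom "I"; rule of inference: from w, infer w + "I".
# The machine's inputs are derivation steps, its outputs are theorems,
# and it need never halt -- it just enumerates the theorems in order.

def enumerate_theorems(limit):
    theorems, frontier = [], ['I']
    while frontier and len(theorems) < limit:
        w = frontier.pop(0)
        theorems.append(w)       # every string reached is a theorem
        frontier.append(w + 'I') # apply the single rule of inference
    return theorems

print(enumerate_theorems(3))  # ['I', 'II', 'III']
```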


|I presume the one "truth" being referred to here is the impossibility of a
|finitistic proof of the consistency of arithmetic that can be embedded in the
|language of arithmetic and the first-order predicate calculus. Beyond the
|simple culture shock at the news that "logic" as we know it is no longer
|omnipotent in its domain of inquiry, the fact is that we must now be wary in
|the quest for artificial intelligence.
| A great deal of human intelligence derives from self-reference, and weaknesses
|in self-referential analysis in the inferential mechanisms of computing
|machines, which at the moment all rely on first-order predicate calculus, must
|be taken seriously.

Again, this argument is opaque. Intelligent entities exist, and their
evolution/development presumably has proceeded with no prior knowledge
of logical truths.

If you wish to *describe* intelligent entities, artificial or
otherwise, or to describe automobiles, toasters, or houseplants for
that matter, you may find that some or all logics are unsuitable. If
so, this is not a problem of automobiles, toasters, or houseplants, but
a limitation of the logics in question.

	Tom