[net.philosophy] the Halting problem.

pd@eisx.UUCP (P. Devanbu) (09/26/83)

There are two AI problems that I know about: the computing power
problem (combinatorial explosions, etc.) and the "nature of
thought" problem (knowledge representation, reasoning processes, etc.).
This article concerns the latter.

AI's method (call it "m") seems to be: model a human information
processing mechanism, say a legal reasoning method, and, once it is
understood clearly and a calculus exists for it, program it. This idea
can be transferred to various problem domains, and voila, we have
programs for "thinking" about various little cubbyholes of knowledge.

The next thing to tackle is: how do we model AI's method "m" that was
used to create all these cubbyhole programs? How did whoever thought
of predicate calculus, semantic networks, and block-world theories ad
nauseam come up with them? Let's understand that ("m"), formalize it,
and program it. This process (let's call it "m'") gives us a program
that creates cubbyhole programs. Yeah, it runs on a zillion acres of
CMOS, but who cares.

Since a human can do more than just "m" or "m'", we try to make
"m''", "m'''", et al. When does this stop? Evidently it cannot.
The problem is that the thought process that yields a model or
simulation of a thought process is necessarily distinct from the
latter (this is true of all scientific investigation of any kind of
phenomenon, not just thought processes). This distinction is one of
the primary paradigms of Western science.
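
To make the regress concrete, here is a minimal sketch in Python (the
names build_model and legal_reasoning are my own illustrative
inventions, not anything from the article): a modelling procedure that
can be turned on any process, including itself, with no last term to
the chain.

    # Hedged sketch of the regress: build_model stands in for AI's
    # method "m" -- study a process, return a program that simulates it.
    def build_model(process):
        def model(*args):
            return process(*args)       # the simulation of what was studied
        return model

    def legal_reasoning(case):
        return "a verdict for " + case  # one "cubbyhole" competence

    m = build_model(legal_reasoning)    # a program for one cubbyhole
    m1 = build_model(build_model)       # "m'": a model of the modelling method
    m2 = build_model(m1)                # "m''": a model of that, and so on --
                                        # the chain has no final member
    print(m("a contract dispute"))

The point is not that the chain is hard to write down, but that each
new level models the previous one from outside it, which is exactly
the distinction drawn above between the modelling thought process and
the modelled one.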

Put naively: thinking "about" the mind is also done "with" the mind.
This identity of subject and object that ensues in the scientific
(dualistic) pursuit of more intelligent machine behavior -
do you folks see it too? Since scientific thought
relies on the clear separation of a theory/model and reality, is
a mathematical/scientific/engineering discipline inadequate for said
pursuit? Is there a system of thought that is self-describing? Is
there a non-dualistic calculus?

What we are talking about here is the ability to separate oneself
from the object/concept/process under study, understand it, model
it, program it... it being anything, including the ability itself.
The ability to recognize that a model is a representation within
one's mind of a reality outside of one's mind. Trying to model
this ability leads one to infinite regress.
What is this ability? Let's call it consciousness.  What we seem
to be coming up with here is the INABILITY of math/sci etc. to
deal with this phenomenon, to codify it, and to boldly program
a computer that has consciousness. Does this mean that the statement:

"CONCIOUSNESS CAN, MUST, AND WILL ONLY COME TO EXISTENCE OF ITS 
OWN ACCORD"

is true ? "Conciousness" was used for lack of a better word. Replace
it by X, and you still have a significant statement. Conciousness already
has come  to existence;  and according to the line of reasoning above,
cannot be brought into existence by methods available.

If so, how can we "help" machines to achieve consciousness, as
benevolent if rather impotent observers?
Should we just mechanistically build larger and larger neural network
simulators until one says "ouch" when we shut a portion of it off,
and, better, tries to deliberately modify (sic) its environment so that
that doesn't happen again? And maybe even can split infinitives?
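
As for what "mechanistically build" would amount to, here is a minimal
sketch of such a simulator (a toy network of threshold units in Python;
the size, weights, and update rule are arbitrary assumptions of mine):
nothing in the mechanism mentions "ouch", pain, or an environment at all.

    # Toy neural-network simulator: each unit takes a weighted sum of
    # every unit's state and fires if the sum crosses zero.  Scaling
    # this up is purely mechanical.
    import random

    N = 16                                      # number of units (arbitrary)
    weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
    state = [random.choice([0, 1]) for _ in range(N)]

    def step(state):
        """One synchronous update of every unit."""
        return [1 if sum(w * s for w, s in zip(row, state)) > 0 else 0
                for row in weights]

    for _ in range(5):                          # run the simulator for a while
        state = step(state)
    print(state)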

As a parting shot, it's clear that such neural networks must have
tremendous power to come even close to a fraction of our level of
abstraction ability.

Baffled, but still thinking...  References, suggestions, discussions,
pointers avidly sought.

Prem Devanbu

ATTIS Labs, South Plainfield.

mat@hou5d.UUCP (M Terribile) (09/28/83)

I may be naive, but it seems to me that any attempt to produce a system that
will exhibit consciousness-like behaviour will require emotions and the
underlying base that they need and supply.  Reasoning did not evolve
independently of emotions; human reason does not, in my opinion, exist
independently of them.

Any comments?  I don't recall seeing this topic discussed.  Has it been?  If
not, is it about time to kick it around?
						Mark Terribile
						hou5d!mat

samir@drufl.UUCP (09/28/83)

I agree with Mark. An interesting book to read regarding consciousness is
"The Origin of Consciousness in the Breakdown of the Bicameral Mind" by
Julian Jaynes. Although I may not agree fully with his thesis, it did
get me thinking and questioning about the usual ideas regarding
consciousness.

An analogy regarding consciousness: "emotions are like the roots of a
plant, while consciousness is the fruit".

				Samir Shah
				AT&T Information Systems, Denver.
				drufl!samir