[net.ai] New topic for discussion

phaedrus@eneevax.UUCP (04/26/84)

<Why do you eat this line if you're so artificially intelligent?>

Hi there, I would like to start a new topic for  discussion.
Here, at the Univ. of Maryland, we have a course on the book
by Douglas R. Hofstadter, "Godel, Escher, Bach:  An  Eternal
Golden Braid."  In this book there are some really fascinating
topics on which we could get into some really good fracases
(sp?).  I want you experienced AI'ers, philosophers,
anybody that is interested in AI out there in netland to let
your  hair  down and start voicing your opinions on the fol-
lowing excerpt from the book which in  turn  is  an  excerpt
from  the  article by J.R. Lucas entitled, "Minds, Machines,
and Godel".


I have no idea what I need to do to follow  copyright  laws,
so  my  apologies to the afore-mentioned authors in advance,
if I do something to offend them.

Here goes....

At one's first and simplest attempts to philosophize, one
becomes entangled in questions of whether, when one knows
something, one knows that one knows it, and what, when one is
thinking of oneself, is being thought about, and what is doing
the thinking.  After one has been puzzled and bruised by this
problem for a long time, one learns not to press these ques-
tions: the concept of  a  conscious  being  is,  implicitly,
realized to be different from that of an unconscious object.
In saying that a conscious being knows something we are saying
not only does he know it, but he knows that he knows it, and
that he knows that he knows that he knows it, and so
on,  as  long as we care to pose the question:  there is, we
recognize, an infinity here,  but  it  is  not  an  infinite
regress in the bad sense, for it is the questions that peter
out, as being pointless, rather than the answers.  The ques-
tions  are felt to be pointless because the concept contains
within itself the idea of being able to go on answering such
questions  indefinitely.  Although conscious beings have the
power of going on, we do not wish to exhibit this simply  as
a  succession  of  tasks they are able to perform, nor do we
see the mind as an infinite sequence of  selves  and  super-
selves  and  super-super-selves.   Rather,  we insist that a
conscious being is a unity, and though we talk  about  parts
of our mind, we do so only as a metaphor, and will not allow
it to be taken literally.


     The paradoxes of consciousness arise  because  a  cons-
cious  being  can  be  aware  of itself, as well as of other
things, and yet cannot really be construed as being  divisi-
ble  into  parts.   It means that a conscious being can deal
with Godelian questions in a way in which a machine  cannot,
because  a  conscious being can consider itself and its per-
formance and yet not be other than that which did  the  per-
formance.   A machine can be made in a manner of speaking to
"consider" its performance, but it cannot  take  this  "into
account"  without  thereby  becoming  a  different  machine,
namely the old machine with a new  part  added.  But  it  is
inherent in our idea of a conscious mind that it can reflect
upon itself and criticize its own performances, and no extra
part is required to do this:  it is already complete, and
has no Achilles' heel.


     The thesis thus begins to become more of  a  matter  of
conceptual  analysis  than  mathematical discovery.  This is
borne out by considering another  argument  put  forward  by
Turing.   So far, we have constructed only fairly simple and
predictable artifacts.  When we increase the  complexity  of
our  machines, there may, perhaps, be surprises in store for
us.  He draws a parallel with a fission pile.  Below a  cer-
tain  "critical"  size, nothing much happens:  but above the
critical size, the sparks begin to  fly.   So  too,  perhaps
with brains and machines.  Most brains and all machines are,
at present, "sub-critical" -- they react to incoming stimuli in
a stodgy and uninteresting way, they have no ideas of their
own and can produce only stock responses -- but a few brains at
present,  and  possibly  some  machines  in  the future, are
super-critical, and scintillate on their own account.  Turing
is suggesting that it is only a matter of complexity before a
qualitative difference appears, so that super-critical
machines will be quite unlike the simple ones hitherto
envisaged.


     This may be so.  Complexity often does introduce
qualitative differences.  Although it sounds implausible, it might
turn out that above a certain level of complexity, a machine
ceased  to  be  predictable,  even in principle, and started
doing things on its own account, or, to use a very revealing
phrase,  it might begin to have a mind of its own.  It would
begin to have a mind of  its  own  when  it  was  no  longer
entirely predictable and entirely docile, but was capable of
doing things which we recognized  as  intelligent,  and  not
just  mistakes  or  random  shots, but which we had not pro-
grammed into it.  But then it would cease to be  a  machine,
within  the  meaning  of  the  act.  What is at stake in the
mechanist debate is not how minds are, or might be,  brought
into  being,  but how they operate.  It is essential for the
mechanist thesis that the mechanical model of the mind shall
operate  according  to  "mechanical principles," that is, we
can understand the operation of the whole in  terms  of  the
operation of its parts, and the operation of each part shall
be  either  determined  by  its  initial   state   and   the
construction  of  the  machine  or  shall be a random choice
between a determinate number of determinate operations.  If
the  mechanist  produces  a  machine which is so complicated
that this ceases to hold good of it, then it is no longer  a
machine  for the purpose of our discussion, no matter how it
was constructed.  We should say, rather, that he had created
a  mind, in the same sort of sense as we procreate people at
present.  There would then be two ways of bringing new minds into
the  world,  the traditional way, by begetting children born
of women, and a new way by constructing very,  very  complex
systems  of,  say,  valves  and relays.  When talking of the
second way we should take care to stress that although  what
was  created  looked  like a machine, it was not one really,
because it was not just the total of its parts:   one  could
not  even tell the limits of what it could do, for even when
presented with the Godel type question, it  got  the  answer
right.   In fact we should say briefly that any system which
was not floored by the Godel question was eo ipso not a Tur-
ing  machine,  i.e.  not a machine within the meaning of the
act.


Ibbidy-Ibbidy-Ibbidy that's all folks!!!
-- 


Without hallucinogens, life itself would be impossible.

ARPA:   phaedrus%eneevax%umcp-cs@CSNet-Relay
BITNET: phaedrus@UMDC
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!eneevax!phaedrus

davidson@sdcsvax.UUCP (Greg Davidson) (04/28/84)

This is a response to the submission by Phaedrus at the University of
Maryland concerning speculations about the nature of conscious beings.
I would like to take some of the points in his/her submission and treat
them very skeptically.  My personal bias is that the nature of
conscious experience is still obscure, and that current theoretical
attempts to deal with the issue are far off the mark.  I recommend
reading the book ``The Mind's I'' (Hofstadter & Dennett, eds.)  for
some marvelous thought experiments which (for me) debunk most current
theories, including the one referred to by Phaedrus.  The quoted
passages which I am criticizing are excerpted from an article by J. R.
Lucas entitled ``Minds, Machines, and Goedel'' which was excerpted in
Hofstadter's Goedel, Escher, Bach and found there by Phaedrus.

	the concept of a conscious being is, implicitly, realized to be
	different from that of an unconscious object

This statement begs the question.  No rule is given to distinguish conscious
and unconscious objects, nothing is said about the nature of either, and
nothing indicates that consciousness is or is not a property of all or no
objects.

	In saying that a conscious being knows something we are saying not
	only does he know it, but he knows that he knows it, and that he
	knows that he knows that he knows it, and so on ....

First, I don't accept the claim that people possess this meta-knowledge more
than a (small) finite number of levels deep at any time, nor do I accept
that human beings frequently engage in such meta-awareness; just because
human beings can pursue this abstraction process arbitrarily deeply (but
they get lost fairly quickly, in practice), does not mean that there is any
process or structure of infinite extent present.

Second, such a recursive process is straightforward to simulate on a
computer, or to build into an AI system.  I don't see any reason to regard such
systems as being conscious, even though they do it better than we do (they
don't have our short term memory limitations).
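Davidson's claim that this recursive meta-knowledge is straightforward to
simulate can be made concrete.  Here is a minimal sketch (the function name
and phrasing are my own invention, not anything from the thread), showing
that the "knows that he knows" hierarchy is just iterated wrapping, generable
to any finite depth on demand:

```python
def meta_knowledge(fact, depth):
    """Wrap `fact` in `depth` levels of 'I know that (...)'."""
    statement = fact
    for _ in range(depth):
        statement = "I know that (" + statement + ")"
    return statement

# The bare fact, then three meta-levels of knowing about it:
print(meta_knowledge("snow is white", 0))
print(meta_knowledge("snow is white", 3))
```

Note that nothing of infinite extent is ever present: the machine, like a
person, holds only the finitely many levels it has actually produced, which
is all the Lucas passage's "power of going on" actually requires.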

	we insist that a conscious being is a unity, and though we talk
	about parts of our mind, we do so only as a metaphor, and will not
	allow it to be taken literally.

Well, this is hardly in accord with my experience.  I often become aware of
having been pursuing parallel thought trains, but until they merge back
together again, neither was particularly aware of the other.  Marvin Minsky
once said the same thing after a talk claiming that the conscious mind is
inherently serial.  Superficially, introspection may seem to show a unitary
process, but more careful introspection dissolves this notion.

	The paradoxes of consciousness arise because a conscious being can
	be aware of itself, as well as of other things, and yet cannot
	really be construed as being divisible into parts.

The word ``aware'' is an implicit reference to the unknown mechanism of
consciousness.  This is part of the apparent paradox.  Again, there's
nothing mysterious about a system having a model of itself and being able to
do reasoning on that model the same way it does reasoning on other models.
Also again, nothing here supports the claim that the conscious mind is not
divisible.

	It means that a conscious being can deal with Godelian questions in
	a way in which a machine cannot, because a conscious being can
	consider itself and its performance and yet not be other than that
	which did the performance.

Whatever the conscious mind is, it appears to be housed in a physical
information processing system, to wit, the human brain.  If our current
understanding about the kind of information processing brains are capable of
is correct, brains fall into the class of automata and cannot ultimately do
any processing task that cannot be done with a computer.  The conscious mind
can scrutinize its internal workings to an extent, but so can computer
programs.  Presumably the Goedelian & (more to the point) Turing limitations
apply in principle to both.
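The Turing limitation alluded to here is the undecidability of the halting
problem.  A sketch of the diagonal construction (the names are mine, and
this is an illustration of the standard argument, not anything from the
thread): suppose some function halts(f, x) could decide whether f(x)
eventually halts; the self-referential program below defeats any such
candidate, so no total decider can exist.

```python
def make_diagonal(halts):
    """Given a claimed halting decider halts(f, x), build the
    program that defeats it (Turing's diagonal construction)."""
    def diagonal(f):
        if halts(f, f):
            while True:       # decider said "halts", so loop forever
                pass
        return None           # decider said "loops", so halt at once
    return diagonal

# A trivially wrong candidate decider that always answers "loops":
def never_halts(f, x):
    return False

diagonal = make_diagonal(never_halts)
result = diagonal(diagonal)   # halts immediately, refuting never_halts
```

Whichever answer a candidate decider gives about diagonal(diagonal), the
construction makes that answer wrong, and the argument applies to arbitrary
deciders, however complex.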

	no extra part is required to do this:  it is already complete, and
	has no Achilles' heel.

This is an unsupported statement.  The whole line of reasoning is rather
loose; perhaps the author simply finds it psychologically difficult to
suppose that he has any fundamental limitations.

	When we increase the complexity of our machines, there may, perhaps,
	be surprises in store for us....  Below a certain ``critical'' size,
	nothing much happens....  Turing is suggesting that it is only a
	matter of complexity [before?] a qualitative difference appears.

Well, it's very easy to build machines that are infeasible to predict.  Such
machines do not even have to be very complex in construction to be highly
complex in behavior.  Las Vegas is full of examples of such machines.
The idea that complexity in itself can result in a system able to escape
Goedelian and Turing limitations is directly contradicted by the
mathematical induction used in their proofs:  The limitations apply to
<<arbitrary>> automata, not just to automata simple enough for us to
inspect.
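As an illustration of Davidson's point that simple construction can yield
behavior infeasible to predict, here is a sketch using the logistic map (my
choice of example, not from the thread): one line of arithmetic whose orbits
are so sensitive to the starting state that long-range prediction would
require impossibly precise knowledge of it.

```python
def logistic_orbit(x, steps, r=4.0):
    """Iterate the one-line chaotic map x -> r*x*(1-x)."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Two starting states that agree to eight decimal places:
a = logistic_orbit(0.20000000, 50)
b = logistic_orbit(0.20000001, 50)
# After 50 steps the tiny initial difference has typically been
# amplified enormously, though both orbits stay inside [0, 1].
print(a, b)
```

Unpredictability of this kind is cheap; it gives no reason to think such a
system has escaped Goedelian or Turing limitations.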

Charlatans can claim any properties they want for mechanisms too complex for
direct disproofs, but one need not work hard before dismissing them with
indirect disproofs.  This is why the patent office rejects claimed perpetual
motion machines which supposedly operate merely by the complexities of their
mechanical or electromagnetic design.  It is also why journals of
mathematics reject ridiculously long proofs which claim to supply methods of
squaring the circle, etc.  No one examines such proofs to find the flaw; it
would be a thankless task, and it is not necessary.

	It is essential for the mechanist thesis that the mechanical model
	of the mind shall operate according to ``mechanical principles,''
	that is, we can understand the operation of the whole in terms of
	the operation of its parts....

Certainly one expects that the behavior of physical objects can be explained
at any level of reduction.  However, consciousness is not necessarily a
behavior, it is an ``experience'', whatever that is.  Claims of
consciousness, as in ``I assert that I am conscious'' are behavior, and can
reasonably be subjected to a reductionist analysis.  But whether this will
shed any light on the nature of consciousness is unclear.  A useful analogy
is whether attacking a computer with a voltmeter will teach you anything
about the abstractions ``program'', ``data structure'', ``operating
system'', etc., which we use to describe the nature of what is going on
there.  These abstractions, which we claim are part of the nature of the
machine at the level we usually address it, are not useful when examining
the machine below a certain level of reduction.  But that is no paradox,
because these abstractions are not physical structure or behavior, they are
our conceptualizations of its structure and behavior.  This is as mystical
as I'm willing to get in my analysis, but look at what Lucas does with it:

	if the mechanist produces a machine which is so complicated that
	this [process of reductionist analysis] ceases to hold good of it,
	then it is no longer a machine for the purpose of our discussion,
	no matter how it was constructed.  We should say, rather, that he
	had created a mind, in the same sort of sense as we procreate
	people at present.

If someone produces a machine which exhibits behavior that is
infeasible to predict through reductionist methods, there is nothing
fundamentally different about it.  It is still obeying the laws of physics
at all levels of its structure, and we can still in principle apply to it
any desired reductionist analysis.  We should certainly not claim to have
produced anything special (such as a mind) just because we can't easily
disprove the notion.

	When talking of [human beings and these specially complex machines]
	we should take care to stress that although what was created looked
	like a machine, it was not one really, because it was not just the
	total of its parts:  one could not even tell the limits of what it
	could do, for even when presented with the Goedel type question, it
	got the answer right.

There is simply no reason to believe that people can answer Goedelian
questions any better than machines can.  This bizarre notion that conscious
objects can do such things is unproven and dubious.  I assert that people
cannot do these things, and neither can machines, and that the ability to
escape from Goedel or Turing restrictions is irrelevant to questions of
consciousness, since we are (experientially) conscious but cannot do such
things.

I find that most current analyses of consciousness are either mystical like
the one I've addressed here, or simply miss the phenomenon by attacking the
system at a level of reduction beneath the level where the concept seems to
apply.  It is tempting to think we can make scientific statements about
consciousness just because we can experience consciousness ourselves.  This
idea runs aground when we find that it depends on capturing
scientifically the phenomena of ``experience'', ``consciousness'' or
``self'', which I have not yet seen adequately done.  Whether consciousness
is a phenomenon with scientific existence, or whether it is an abstract
creation of our conceptualizations with no external or reductionist
existence is still undetermined.

-Greg Davidson (davidson@sdcsvax.UUCP or davidson@nosc.ARPA)

pem1a@ihuxr.UUCP (Tom Portegys) (05/04/84)

Phaedrus' article made me think of a story in the book "The
Mind's I", by Hofstadter and Dennett, in which the relationship
between subjective experience and physical substance is explored.
Can't remember the story's name but good reading.  Some other
thoughts:

One aspect of experience and substance is how to determine when
a piece of substance is experiencing something.  This is good to
know because then you can fiddle with the substance until it stops
experiencing and thereby get an idea of what it was about the 
substance which allowed it to experience.

The first reasonable choice for the piece of substance might be
yourself, since most people presume that they can tell when they
are having a conscious experience.  Unfortunately, being both the
measuree and measurer could have its drawbacks, since some experiments
could simultaneously zap both experience and the ability to know or not
know if an experience exists.  All sorts of problems here.  Could you
just THINK you were experiencing something, but not really?

What this calls for, it seems to me, is two people.  One to measure 
and one to experience.  Of course this would all be based on the
assumption that it is even possible to measure such an elusive
thing as experience.  Some people might even object to the notion
that subjective experiences are possible at all.

The next thing is to choose an experience.
This is tricky.  If you chose self-awareness as the experience, then
you would have to decide if being self-aware in one state is the same
as being self-aware in a different state.  Can the experience be the
same even if the object of the experience is not?

Then, a measuring criterion would have to be established whereby 
someone could measure if an experience was happening or not.  This
could range from body and facial expressions to neurological readings.
Another approach would be a Turing test-like setup:  put the subject into a
box with certain I/O channels, and have protocols made up for
measuring things.  This would allow you to REALLY get in there and
fiddle with things, like replacing body parts, etc.

These are some of the thoughts that ran through my head after reading
the Phaedrus article.  I think I thought them, and if I didn't, how
did this article get here?

                            Tom Portegys, Bell Labs, ihlpg!portegys

(ihlpg currently does not have netnews, that's why this is coming from
ihuxr).