[mod.ai] Submission for mod-ai

lambert@seismo.CSS.GOV@cwi.nl (Lambert Meertens) (01/23/87)

Path: mcvax!lambert
From: lambert@mcvax.cwi.nl (Lambert Meertens)
Newsgroups: mod.ai
Subject: Re: C-2 as C-1
Summary: Long again--please skip this article
Keywords: mind, consciousness, memory
Message-ID: <7259@boring.mcvax.cwi.nl>
Date: 23 Jan 87 02:29:51 GMT
References: <424@mind.UUCP> <12272599850.11.LAWS@SRI-STRIPE.ARPA>
Reply-To: lambert@boring.UUCP (Lambert Meertens)
Organization: CWI, Amsterdam
Lines: 104

In article <12272599850.11.LAWS@SRI-STRIPE.ARPA> Laws@SRI-STRIPE.ARPA
(Ken Laws) writes:

>> From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>:
>>
>> Worse than that, C-2 already presupposes C-1. You can't
>> have awareness-of-awareness without having awareness [...].
>
> A quibble: It would be possible [...]  that my entire conscious
> perception of a current toothache is an "illusory pain" [...].

I agree.

> These views do not solve the problem, of course; the C-2 consciousness
> must be explained even if the C-1 experience was an illusion.  My conscious
> memory of the event is more than just an uninterpreted memory of a memory
> of a memory ...

Here I am not so sure.  To start with, the only evidence we have of
organisms having C-1 is if they are on the C-2 level, that is, if they
*claim* they experience something.  Even granting that they are not
lying in the sense of a conscious :-) misrepresentation, why should we (in
our capacity as scientific enquirers) take them at their word?  After
all, more than a few people truly believe they have the most incredible
psychic powers.

Now how can we know that the "awareness-of-awareness" is a conscious thing?
There seems to be a hidden assumption that if someone utters a statement
(like "It is getting late"), then that same organism is consciously aware
of the fact expressed in the statement.  Normally, I would grant you that,
because that is the normal everyday meaning of "conscious" and "aware", but
not in the current context, in which these words are isolated from their
original function and merely provide an expedient way to express certain
things.

[You will find that people in general have no problem in saying that a fly
is aware of something, or experiences pain, even though for all we know
there is no higher (coordinating) neural centre in this organism that would
provide a physiological substrate.  Many people even have no problem in
ascribing consciousness to trees.  I claim that if people (but not young
children) do have qualms about saying that an automaton experiences
something, it is because they have been *taught* that consciousness is
limited to animate, organic objects.]

So the mere speech act "It is getting late" does not by itself imply a
conscious awareness of its getting late.  Otherwise, we are forced to
ascribe consciousness of the occurrence of a syntax error to a compiler
mumbling "*** syntax error".  Likewise, someone's saying "I have a
toothache" not only fails to imply that the speaker is experiencing a
toothache; it also fails to imply that the speaker is consciously aware of
the (possibly illusory) fact of experiencing one.  The only evidence of
that would be a C-3 act: someone saying "I am aware of the fact that I am
aware of experiencing a toothache."  But again, why should we believe them?
(And so on, ad nauseam.)

This is getting so complicated mainly because of the inadequacy of words.
Allow me to try again.  You, reader, are having a toothache.  You are
really having one.  I can tell, because you are visibly in pain, and,
moreover, I am your dentist, and you are in my chair with your mouth open
into which I am prodding and probing, and boy, you should have a toothache
if anyone ever had one.  At this point, I cannot know for sure if you are
consciously experiencing that pain.  Maybe neural pathways connect your
exposed pulp with the centre innervating your grimacing and squirming
muscles while bypassing the centre of consciousness.  I retract my
instruments from your mouth, giving you a chance to say "That really hurt,
doctor.  I'll pay all my bills on time from now on if only you won't do
that again."  Firmly brushing aside the empathy that threatens to
compromise my scientific attitude, I realize that this still does not mean
that you consciously experienced that pain just a minute ago.  All I know
is that you remember it (for if you did not, you wouldn't have said that).
So some symbolic representation, "@#$%@*!" say, may have been stored in
your memory--also bypassing your centre of consciousness--which is now
retrieved and interpreted (perhaps illusorily) as "conscious experience of
pain--just now".  This interpretation act need not mean that you experience
the pain now, after the fact.  So it is entirely possible that you did not
consciously experience the pain at any time.  Now were you conscious then,
while making that silly promise, of at least the memory of the--by itself
possibly unconscious--suffering of pain?  If you are still with me, then
you will probably agree that that is not necessarily the case.  Just as
P = <neural event of pain>, even though it leaves a trace in memory, need
not imply consciousness of P, so R(P) = <neural event of remembering P>
need not imply consciousness of R(P) itself.  However, R(P) can again leave a
trace in memory--what with your Silly Promise and dentists' bills being as
they are, you are bound to trigger R(SP) and therefore, by association,
R(R(P)), many times in the future.
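
For the programmatically inclined, here is a toy sketch of that
bookkeeping in Python.  The class names and mechanics are invented for
illustration only; nothing in the argument hinges on them:

    class Trace:
        def __init__(self, content):
            self.content = content      # e.g. "@#$%@*!" for a pain event

    class Memory:
        def __init__(self):
            self.store = []

        def record(self, trace):        # storing bypasses "consciousness"
            self.store.append(trace)

        def remember(self, i):
            trace = self.store[i]
            # Remembering is itself a neural event R(P): it leaves a new
            # trace, and only *here* is an interpretation constructed.
            self.record(Trace(("remembering", trace.content)))
            return "conscious experience of %r--just now" % (trace.content,)

    m = Memory()
    m.record(Trace("@#$%@*!"))          # P: stored uninterpreted
    print(m.remember(0))                # R(P): interpreted in retrospect
    print(m.remember(1))                # R(R(P)): and so on, ad nauseam

Note that the word "conscious" occurs only in the string built at
retrieval time, never in anything that is stored.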

If we had two unconnected memory stores, and a switch connected us now to
one store, now to the other, we would become two personalities in one body
with two "consciousnesses".  If we could somehow censor either the storing
or the retrieval of pain events, we would truly, honestly believe that we
are incapable of consciously experiencing pain--notwithstanding the fact
that we would probably have the same *immediate* reaction to pain as other
people--and we wouldn't make such promises to our dentists anymore.
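
In the same toy terms (again invented names, a sketch and not a claim
about neural wiring):

    class SplitMemory:
        def __init__(self):
            self.stores = ([], [])      # two unconnected memory stores
            self.active = 0             # the switch

        def flip(self):
            # Switching stores switches which history can be remembered.
            self.active = 1 - self.active

        def record(self, trace, censored=False):
            # Censoring the storing of pain events: this organism would
            # later truly, honestly report never having experienced pain.
            if not censored:
                self.stores[self.active].append(trace)

Each store accumulates its own history, so each "personality" can only
ever ascribe conscious experience, in retrospect, to its own events.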

Wrapping it all up, I still maintain that "conscious experience" is a term
ascribed *in retrospect* to any neural event NE that has been stored in
memory, at the time R(NE) occurs.  More strongly: R(NE) is the *only*--and,
as I hope I have shown, insufficient--evidence of "consciousness" about NE
in a more metaphysical (or whatever) sense.  For all we know and can know,
all consciousness in the sense of being conscious of something *while it
happens* is an "illusion", whether C-1, C-2 or C-17.

-- 

Lambert Meertens, CWI, Amsterdam; lambert@mcvax.UUCP

brothers@TOPAZ.RUTGERS.EDU.UUCP (02/15/87)

Path: topaz!brothers
From: brothers@topaz.RUTGERS.EDU (Laurence R. Brothers)
Newsgroups: mod.ai
Subject: Re: Other Minds
Message-ID: <9245@topaz.RUTGERS.EDU>
Date: 14 Feb 87 21:57:45 GMT
References: <8702132202.AA01947@BOEING.COM>
Organization: Rutgers Univ., New Brunswick, N.J.
Lines: 49

So...? I think you've basically restated a number of properties of
intelligence which AI researchers have been exploring for some time,
with varying degrees of success. 

There are two REAL reasons why you can't build an "intelligent"
machine today: 

1) Since no one really knows how people think, we can't build machines
which accurately model ourselves.

2) Current machines do not have anything like the kind of computing
power necessary for intelligence.

Ray@Boeing says:
>Manipulation of symbols is insufficient by itself to duplicate human
>performance; it is necessary to treat the perceptions and experiences the
>symbols *symbolize*.  Put a symbol for red and a symbol for blue in a pot,
>and stir as you will, there will be no trace of magenta.

Look, manipulation of symbols by a program is analogous to
manipulation of neural impulses by a brain. When you reduce far
enough, EVERYTHING is typographical/syntactical. The neat thing about
brains is that they manipulate so MANY symbols at once. 
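
To make the pot-stirring concrete (a throwaway Python sketch; the RGB
table is just one made-up stand-in for binding symbols to what they
symbolize):

    # Stirring bare tokens gives nothing new:
    pot = ["red", "blue"]           # no magenta in here, stir as you will

    # Bind the tokens to representations, though, and "stirring" does
    # yield magenta.  The mixing happens at the level the symbols denote
    # -- which is itself just more symbol manipulation underneath.
    rgb = {"red": (255, 0, 0), "blue": (0, 0, 255)}
    mix = tuple(min(255, a + b) for a, b in zip(rgb["red"], rgb["blue"]))
    print(mix)                      # (255, 0, 255): magenta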

General arguments against standard AI techniques are all well and good
(cf. Hofstadter's position), but keep in mind that while mainstream
AI has not produced all that much wonderful stuff, the old neural-net
research was even less impressive.

My own view regarding true machine intelligence is that there is no
particular reason why it's not theoretically possible, but given
an "intelligent" machine, one should not expect it to be able to
do anything weird like passing a Turing Test. The hypothetical
intelligent machine won't be anything like a human -- different
architecture, different i/o bandwidths, different physical
manifestation, so it is philosophically deviant to expect it
to emulate a human.

Anyhow, as a putative AI researcher (so I'm only 1st year, so sue me),
it seems to me that decades of work have to be done on both hardware
and cognitive modeling before we can even set our sights on
HAL-9000.... Give me another ring when those terabyte RAM, femtosecond
CAML cycle optical computers come out -- until then the entire
discussion is numinous....
-- 
			 Laurence R. Brothers
		      brothers@topaz.rutgers.edu
    {harvard,seismo,ut-sally,sri-iu,ihnp4!packard}!topaz!brothers
	    "The future's so bright, I gotta wear shades!"