[sci.philosophy.tech] The Cybernetics of Mental Systems vs Who am I

vu0112@bingvaxu.cc.binghamton.edu (vu0112) (02/24/88)

[This is a cross-posting to sci.philosophy.tech of an interesting (to me
at least!) conversation that's going on on talk.philosophy.misc.  I hope
you techies think it's appropriate, and enjoy.  Talk.phil people have
seen it already].

In article <20510@amdcad.AMD.COM> prem@amdcad.UUCP (Prem Sobel) writes:
>In article <825@bingvaxu.cc.binghamton.edu> vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>>No doubt mind is a complex semantic system relating mental
>>representations of various degrees of abstraction to each other
>> [etc. deleted]
>
>Few would fundamentally disagree with the above, it is easily verifiable
>by experience. But that brings us back to the point I raised in my article
>of Feb. 10'th that this experiential awareness, whether sensory or rational
>(i.e. thought and reason) still remains essentially unexplained throughout
>this entire dialog.

Hmm, "unexplained".  I feel that my above quote is the *beginning* of an
explanation, that is, still a very incomplete and unsatisfying one. 
However, one tendency I see among philosopohy types (alas I am one) is
to search for an explanation in the wrong place.  Let's see.  .  . 

>We know how to put sensory I/O devices (e.g. video cameras, microphones,
>gravitometers, etc.) on machines and can program them, with suitable
>feedback mechanism with "learning" or variable goals and have them ACT
>intelligent. BUT, are the machines aware that they see or hear or feel?

No, I don't even think they act intelligently.  Again, the wrong place. 
The thing to understand about computers is that they are the machine
analog of brains.  It can be argued (see talk.origins, or what
talk.origins could/should be) that in organisms, brains are the last
developed faculty, coming into their own in mammals, and in particular
in primates (and perhaps cetaceans).  Yet when we build machines and want
them to be analogous to organisms, we build the brains first.  Why?

The true machine analog to an organism is a robot.  So you're asking the
right question (*can* we hook up I/O devices etc.), but getting the
wrong answer.  The arguments in AI now are about whether expert systems
and chess machines can think.  These are analogs of pure *brains*, not of
whole organisms.  Existing robots are *obviously* unintelligent, and more
importantly *not alive*.  *Why* do we think we can make a *thinking
machine* before we make a *living machine*?

As I said, I suspect that the *key* to intelligence is a complex of
feedback relations between representations of internal (other
representations) and external (I/O) states.  This differs from a
possible definition of life only inasmuch as the word "representations"
is absent! And we *still* haven't introduced genetics, which is *a
semantic system*! Such relations (unlike those in computers) occur through
feedback with the environment *in real time*.  Where are the machines
which approach this? Certainly not any von Neumann computers.  Perhaps
connectionist machines (I wouldn't be surprised).

>And, of course more importantly, how are human beings aware of their
>exterior and interior environment, and at the meta-level: how do they
>know that they know or perceive? My thesis is that one knows, especially
>about oneself by identity. That is you know about your self because you
>are yourself. You know about (alleged) others because of the inherent
>unity/oneness of substance/energy force that science is looking for.

You see, to me these are the *last* questions we should examine;
otherwise you're putting the keystone in an arch without building any
columns.  Much more important to me are the questions of how I see the
table in front of me, indeed, how I can represent anything *to myself*
(not to someone/thing else), let alone representing myself to myself. 

>To get back to the machines. Perhaps, then they too are aware but in a
>more limited manner because they have not been given a means to express
>it.

I doubt that any existing machine knows anything.  Current machines do
not have any autonomous existence (life), and so cannot enter into
semantic relations with their environments, but only syntactic ones.
Thus, they can only represent things *to us*, not *to themselves*.

>How this takes place is partially unknown. One analogy that I find very
>helpful is the following. If we limited things to just 3 qualitative levels
>and call the most gross matter, the middle level the emotional/desire/life
>level, and the third mind; these three can be replaced by: ice, liquid
>water, and steam.

I interpret the three levels you introduce as inanimate, animate, and
semantic.  Each level exhibits a higher degree of cybernetic control and
an increase in order (a decrease in entropy) over the level below it.
Together, the three form a monotone relation of increasing
development/complexity.  These are *necessary* relations: mind -> life ->
matter, but matter !-> life and life !-> mind (where -> is implication
and !-> is its negation).
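
In plainer symbols (this is just a restatement of the claim above; the
letters M, L, S for mind, life, and matter are my own shorthand), a
minimal sketch in LaTeX notation:

    % M = mind, L = life, S = matter -- shorthand labels for this sketch only
    (M \Rightarrow L) \land (L \Rightarrow S)              % each level presupposes the one below
    \lnot(S \Rightarrow L), \quad \lnot(L \Rightarrow M)   % but the converses fail

That is, the higher levels presuppose the lower ones, but nothing at a
lower level guarantees the emergence of a higher one.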

>All three are fundamentally the same, giving the sought after unity, and
>yet they give a qualitative difference with a spectrum of variation (in
>temperature). Clearly no one of the three is more fundamental, but there
>is something underlying them which gives all three a basis for difference
>and for sameness.

Of course between any two real things there is always *some* degree of
sameness and *some* degree of separateness, so I would disagree that
they're fundamentally equivalent (see above).  Temperature is a *very*
important consideration in your metaphor.

>	Prem

O---------------------------------------------------------------------->
| Cliff Joslyn, Mad Cybernetician
| Systems Science Department, SUNY Binghamton, Binghamton, NY
| vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .