[mod.ai] AI-discussion

H29@DHDURZ2.BITNET (09/20/86)

In the last AI-Lists there has been a discussion about the possibility
of intelligent machines.
I am going to add some arguments I missed in the discussion.
1. First, one may claim that there are a lot of cognitive functions of
man which can be simulated by the computer. One problem, however, is
that to my knowledge these different functions have not yet been
integrated into one machine or superprogram.
2. There is the phenomenon of intentionality and motivation in man
that finds no direct corresponding phenomenon in the computer.
3. Man's neuronal processing is more analogue than digital, in spite
of the fact that neurons can only have two states.
Man's organisation of memory is associative rather than categorical.

  [Neurons are not two-state devices!  Even if we ignore chemical and
  physiological memory correlates and the growth and decay of synapses,
  there are the analog or temporal effects of potential buildup and the
  fact that neurons often transmit information via firing rates rather
  than single pulses.  Neurons are nonlinear but hardly bistable. -- KIL]
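
  [As a small editorial illustration of that point, here is a hedged
  sketch (assuming a standard leaky integrate-and-fire model, which is
  not part of the original posting) of how all-or-none spikes can
  still carry graded information in their firing rate:

    # Leaky integrate-and-fire neuron: each spike is all-or-none, but
    # the firing RATE varies smoothly with the analog input current.
    def firing_rate(input_current, t_steps=10000, dt=0.001,
                    tau=0.02, threshold=1.0):
        """Simulate potential buildup and count spikes per second."""
        v = 0.0
        spikes = 0
        for _ in range(t_steps):
            # the membrane potential drifts toward the input level,
            # leaking back with time constant tau
            v += dt * (input_current - v) / tau
            if v >= threshold:      # all-or-none spike ...
                spikes += 1
                v = 0.0             # ... followed by a reset
        return spikes / (t_steps * dt)

    for current in (1.1, 1.5, 2.0, 3.0):
        print(current, firing_rate(current))  # rate rises with input

  The output pulse train is binary, yet the rate it conveys varies
  continuously with the input, which is one sense in which the
  processing is "more analogue than digital".]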

Let me elaborate upon these points:
Point 1: Konrad Lorenz assumes a phenomenon he calls "fulguration" in
systems. In the end this means nothing more than: the whole is more
than the sum of its parts. If you merge all the functions a computer
can perform to simulate human abilities, you will get higher functions
which transcend the sum of all the lower functions. You may one day
get a function like consciousness or even self-consciousness. If you
define the self as a man's knowledge of himself (his qualities, his
abilities, his existence), I see no general problem in feeding this
knowledge to a computer.
Real "understanding" of natural language, however, requires not only
linguistic competence but also sensory processing and recognition
abilities (visual, acoustical). Language normally refers to objects
which we first experience through sensory input and then name. The
constructivistic theory of human language learning by Paul Lorenzen
and O. Schwemmer (Erlanger Schule) assumes a "demonstration act"
(Zeigehandlung) as a fundamental element of how man (the child) learns
language. Without this empirical foundation of language you will never
leave the hermeneutic circle, which drove earlier philosophers to
despair.
Point 2:
One difference between man and computer is that man needs food and
computers need electricity, and furthermore the computer doesn't cry
when somebody is about to pull its plug.
Nevertheless such a thing could be built: a computer, a robot, that
attacks with a weapon everybody who tries to pull its plug. But who
has an interest in constructing such a machine? Living organisms
shaped by evolution are given the primary motivation of
self-preservation. This is the natural basis of intentionality. Only
the implementation of intentionality, motivation, goals and needs can
create a machine that deserves the name "intelligent". It is
intelligent by the way it reaches "its" goals.
Implementation of "meaning" requires the ability of sensory perception
and recognition, linguistic competence and understanding, and having
or simulating intentions. To know the meaning of an object means to
understand the function of this object for man in a means-end relation
within his living context. It means to realize for which goals or
needs the "object" can be used.
Point 3:
Analogue information processing may or may not be fully simulable by
digital processing. Man's associative organisation of memory, however,
requires storage and retrieval mechanisms other than those now
available in or used by computers.
I have heard that some scientists in the States are trying to simulate
associative memory organisation, but I have no further information
about that. (Perhaps somebody can give me information or references.
Thanks in advance!)

  [Geoffrey E. Hinton and James A. Anderson (eds.), Parallel Models
  of Associative Memory, Lawrence Erlbaum Associates, Inc., Hillsdale
  NJ.  Dr. Hinton is with the Applied Psychology Unit, Cambridge England.
  -- KIL]
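
  [To make concrete what associative, content-addressable storage and
  retrieval could mean, here is a small editorial sketch of one simple
  model in the spirit of the parallel models cited above, a
  Hopfield-style network; the code is an assumed illustration, not
  taken from that reference:

    # Hopfield-style associative memory: patterns are stored as
    # Hebbian outer products in a weight matrix; recall works by
    # content, i.e. a noisy cue settles onto the stored pattern it
    # most resembles, rather than being fetched by an address.
    import numpy as np

    def store(patterns):
        """Build the weight matrix from +1/-1 patterns (Hebbian rule)."""
        n = patterns.shape[1]
        w = np.zeros((n, n))
        for p in patterns:
            w += np.outer(p, p)
        np.fill_diagonal(w, 0)           # no self-connections
        return w / n

    def recall(w, probe, steps=10):
        """Iterate synchronous updates until the state settles."""
        s = probe.copy()
        for _ in range(steps):
            s = np.where(w @ s >= 0, 1, -1)
        return s

    # Store two patterns, then retrieve one from a corrupted cue.
    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    w = store(patterns)
    noisy = np.array([1, -1, 1, 1, 1, -1, 1, -1])  # one bit flipped
    print(recall(w, noisy))              # -> first stored pattern

  The point is that the cue itself, not an address, selects what is
  retrieved: the net settles onto the stored pattern closest to the
  probe.]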

Scientists working on AI should have an attitude I call "critical
optimism". This means being critical, seeing the problems, and not
being euphoric in assuming that all problems can be solved in the next
ten years. On the other hand it means not regarding any problem as
unsolvable, but being optimistic that the scientific community will
solve the problems step by step, one after the other, however long
that may take.

Finally let me, being a psychologist, state some provocative
hypotheses: The belief that man's cognitive or intelligent abilities,
including having intentions, will never be reached by a machine is
founded in the conscious or unconscious assumption of man's godlike or
godmade uniqueness, which is supported by the religious tradition of
our culture. It takes a lot of self-reflection, courage and awareness
of one's own existential fears to overcome the need to be unique.
I would claim that the conviction mentioned above, however
philosophically or sophisticatedly justified it may be, is only the
"RATIONALIZATION" (in the psychoanalytic sense of the word) of
understandable but irrational and normally unconscious existential
fears and needs of the human being.


       PETER PIRRON      MAIL ADDRESS: <H29@DHDURZ2.BITNET>

                         Psychologisches Institut
                         Hauptstrasse 49-53
                         D-6900 Heidelberg
                         Western Germany