[mod.ai] Other Minds

ray@BOEING.COM.UUCP (02/14/87)

Some of you may be after the fame and great wealth associated with AI
research, but MY goal all along has been to BUILD an "other mind": a
machine who thinks *at least* as well as I do.  If current "expert
systems" are good enough for you, please skip this.  Homo Sap.'s
distinguished success among inhabitants of this planet is primarily due
to our ability to think.  We will continue to exist only if we act
intelligently, and we can use all the help we can get.  I am not
convinced that Mutual Assured Destruction is the most intelligent
behavior we can come up with. It's clear the planetary population can
benefit from help in the management of complexity, and it is difficult
for me to imagine a goal more relevant than improving the chances for
survival by increasing our ability to act intelligently.

However, no machine yet thinks nearly as well as a human, let alone
better.  I wouldn't trust any computer I know to babysit my child, or
my country.  Why?  Machines don't understand!  Anything!  The reason
for this poor performance is an inadequate paradigm of human intelligence.
The Physical Symbol System Hypothesis does not in fact account for human
intelligent behavior.  

Parenthetically, there's no more excitement in symbol-processing computers;
that's what digital computers have been doing right along, taking the
symbol for two and the symbol for two, performing the defined operation
"ADD" and producing the symbol for four.  We may have lost interest in
analog systems prematurely.

Manipulation of symbols is insufficient by itself to duplicate human
performance; it is necessary to treat the perceptions and experiences the
symbols *symbolize*.  Put a symbol for red and a symbol for blue in a pot,
and stir as you will, there will be no trace of magenta.

I have developed a large suite of ideas concerning symbols and
representations, analog and digital "computing", induction and
deduction, natural language, consciousness and related concepts which
are inextricably intertwined and somewhat radical, and the following
is necessarily a too-brief introduction. But maybe it will supply
some fuel for discussion.

Definition of terms:  By intelligence, I mean intelligent behavior;
intelligent is an adjective describing behavior, and intelligence is a name
for the ability of an organism to behave in a way we can call intelligent.

Symbols and representations: There are two quite distinct notions denoted
by *symbolize* and *represent*.  Here is an illustration by example:
Voodoo dolls are intended as symbols, not necessarily as faithful images
of a person.  A photo of your family is representative, not symbolic.  A
picture of Old Glory *represents* a flag, which in turn *symbolizes* some
concepts we have concerning our nation.  An evoked potential in the visual
cortex *represents* some event or condition in the environment, but does
not *symbolize* it.  

The essence of this notion of symbolism is that humans can associate
phenomena "arbitrarily";  we are not limited to representations.  Any
phenomenon can "stand for" any other.  That which any symbol symbolizes
is a human experience.  Human, because we appear to be the only symbol
users on the planet.  Experience, because that is symbolism's ultimate
referent, not other symbols.  Sensory experience stops any recursion.
Noises and marks "symbolize" phenomenological experience, independent of
whether those noises and marks are "representative".  
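To make the distinction concrete, here is a toy sketch in Python (the
dictionary, the "flag" array, and all the names are my own illustrative
inventions): a symbol is an arbitrary token bound to an experience by
convention, while a representation shares structure with its referent.

    # A symbol is an arbitrary token: any key works, by convention alone.
    symbols = {"red": "experience-of-red", "rouge": "experience-of-red"}

    # A representation shares structure with its referent: this tiny
    # "image" of a flag resembles the flag; scrambling it changes what
    # it depicts.
    flag_image = [["red", "white"],
                  ["red", "white"]]

    # Renaming a symbol loses nothing; the association is arbitrary.
    symbols["xyzzy"] = symbols.pop("red")   # same experience, new token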

Consciousness: Consciousness is self-consciousness; you aren't conscious
of your environment, you are conscious of your perceptions of your
environment.  Sensory neurons synapse in the thalamus.  From there,
neurons project to the cortex, and from the cortex, other neurons project
back to the thalamus, so there, in associative contiguity, lie the input
lines and reflections of the results of the perceptive mechanisms.  The
brain has information as to the effects of its own actions.  Whether it is
resident in thalamic neurons or distributed throughout the brain mass, that
loop is where YOU are, and life experience builds your identity; that hand
is part of YOU, that hammer is not.  One benefit of consciousness is that
it extends an organism's time horizon into the past and the future,
improving its chance for survival.  Consciousness may be necessary for
symbol use.
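Here is a deliberately crude sketch of that loop in Python (the sizes,
weights, and damping are arbitrary inventions; this illustrates only the
architecture, not any claim about real thalamic circuitry): input lines
and fed-back "reflections" meet at one site, so the system has
information about the effects of its own activity.

    import math, random

    random.seed(0)
    N = 8
    W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

    def cortex(signal):
        # Toy "perceptive mechanism": a fixed squashing linear map.
        return [math.tanh(sum(w * s for w, s in zip(row, signal)))
                for row in W]

    sensory = [random.random() for _ in range(N)]
    reflection = [0.0] * N
    for _ in range(10):
        # Input and cortical reflection lie "in associative contiguity".
        relay = [s + 0.5 * r for s, r in zip(sensory, reflection)]
        reflection = cortex(relay)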

Natural language: Words, spoken or written, are *symbols*.  But human
natural language is not a symbol system; there are no useful interactions
among the symbols themselves.  Human language is evocative; its function
is to evoke experiences in minds, including the originating mind.  Words
do not interact with each other; their connotations, the evoked responses
in human minds interact with each other.  Responses are based on human
experience; touch, smell, vision, sound, emotional effects.  Communication
between two minds requires some "common ground"; if we humans are to
communicate with the minds we create, we and they must have some
experiential "common ground".  That's why no machine will "really
understand" human natural language until that machine can possess the
experiences the symbols evoke in humans.  

Induction and deduction: Induction, as defined here, consists in the
cumulative effect of experience on our behavior, as implemented by neural
structures and components.  Induction is the effect on an organism's
behavior; not a procedure effected by the organism.  That is to say, the
"act" of induction is only detectable through its effects.  All living
organisms' behavior is modified by experience, though only humans seem
to be self-aware of the phenomenon.  Induction treats *representations*,
rather than *symbols*; the operation is on *representation* of experience,
quite different from symbolic deduction.  
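A minimal sketch of induction in this sense (the pairing, the learning
rate, and the numbers are my inventions): nothing here "performs"
induction as a procedure; experience simply modifies the substrate, and
the change is detectable only in the altered response.

    # Hebbian-style toy: repeated pairing of two sensations strengthens
    # their connection.
    weight = 0.0
    rate = 0.2

    def respond(stimulus):
        return weight * stimulus      # behavior, before and after

    for _ in range(20):               # bell and food arrive together
        bell, food = 1.0, 1.0
        weight += rate * bell * food  # experience modifies the substrate

    print(respond(1.0))               # the bell alone now evokes a response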

Deduction treats the *relationships among symbols*, that which Hume
described as "Relations of Ideas".  There is absolute certainty concerning
all valid operations, and hence the resulting statements.  The intent is
to manipulate a specific set of symbols using a specific set of operations
in a mechanical way, having made the process sufficiently explicit that we
can believe in the results.  But deduction is an operation on the *form*
of a symbol system; a "formal" operation, and deliberately says nothing at
all concerning the content.  Deductive, symbolic reasoning may be the
highest ability of humans, but there's more to minds than that. 
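For contrast, a minimal sketch of deduction as a purely formal operation
(the rules and facts are placeholder tokens of my own): forward chaining
manipulates the form of the symbols mechanically, and never consults
what they mean.

    # Valid operations on the *form* of symbols only.
    rules = [({"rain", "outside"}, "wet"),
             ({"wet"}, "cold")]
    facts = {"rain", "outside"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # certain, and content-free
                changed = True

    print(facts)   # {'rain', 'outside', 'wet', 'cold'}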

Analogy: One definition of analogy is the belief that if two objects or
events are alike in some observed attributes they are alike in other,
unobserved, attributes.  It follows that the prime requisite for analogy
is the perception of "similarity".  It could be argued that the detection
of similarity is one of the most basic abilities an organism must have to
survive.  Similarity and analogy are relationships among *representations*,
not among *symbols*.  Significant similarities (i.e., analogy and metaphor)
are not to be found among the symbols representing mental perceptions, but
among the perceptions themselves.  Similarity is perceived among
experiences, as recorded in the central nervous system.  The mechanism is
that symbols evoke, through association, the identical effects in the
nervous system as are evoked by the environmental senses.  Associative
memory operates using sensory phenomena; that is, not symbols, but *that
which is symbolized* and evoked by the symbols.  We don't perceive
analogies between symbols, but between the experiences the symbols evoke
in our minds.  
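A sketch of the claim, with hand-made feature vectors standing in for
recorded experience (the features and their values are invented): the
symbol strings "sun" and "bulb" share nothing, but the evoked
representations are close, and that is where the analogy is perceived.

    import math

    # Toy recorded experience: features (bright, hot, round, alive).
    experience = {
        "sun":  [1.0, 1.0, 1.0, 0.0],
        "bulb": [1.0, 0.8, 0.6, 0.0],
        "cat":  [0.1, 0.5, 0.2, 1.0],
    }

    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    def similarity(a, b):
        return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

    print(similarity(experience["sun"], experience["bulb"]))  # high
    print(similarity(experience["sun"], experience["cat"]))   # low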

Analog and digital: The physical substrate supporting intelligent behavior
in humans is the central nervous system.  The model for understanding the
CNS is the analog "gadget" which "solves problems", as in A. K. Dewdney's
Scientific American articles, not von Neumann computers nor symbol
systems of any kind.  The "neural net" approaches look promising, if they
are considered to be modifiable analog devices, rather than alternative
designs for algorithmic digital computers.
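One way to read "modifiable analog device" in code (a leaky integrator
under Euler integration; the constants are arbitrary): the state is a
continuous quantity that settles in time, not a symbol being rewritten.

    # An analog "gadget" solves by settling: no symbols, just state.
    v, tau, dt = 0.0, 0.1, 0.001
    inputs = [1.0] * 500 + [0.0] * 500   # a step of input, then silence

    trace = []
    for i in inputs:
        v += dt * (i - v) / tau          # dv/dt = (input - v) / tau
        trace.append(v)

    print(max(trace))   # charges toward 1.0, then decays away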

Learning and knowledge: Learning is inductive; it is, by definition, the
addition of knowledge.  "Deductive logic is tautological"; i.e. implications of
present knowledge can be made explicit, but no new knowledge is introduced
by deductive operations.  There is no certainty with induction, though:

     "And this kind of association is not confined to men; in
     animals also it is very strong.  A horse which has been
     often driven along a certain road resists the attempt to
     drive him in a different direction.  Domestic animals
     expect food when they see the person who usually feeds them.
     We know that all these rather crude expectations of
     uniformity are liable to be misleading. The man who has
     fed the chicken every day throughout its life at last
     wrings its neck instead, showing that more refined views
     as to the uniformity of nature would have been useful to
     the chicken."

     [Bertrand Russell, "On Induction", The Problems of Philosophy, 1912.]

Thinking systems will be far too complex for us to construct in "mature"
form; artificial minds must learn.  Our most reasonable approach is to
specify the initial conditions in terms of the physical implementation
(e.g., sensory equipment and pre-wired associations) and influence the
experience to which a mind is exposed, as with our children.

What is meant by "learning"?  One operational definition is this: can you
apply your knowledge in appropriate ways?  Some behavior must be modified.
All through your childhood, all through life, your parents and teachers
are checking whether you have learned something by asking you to apply it.
As a generalization of applying, a teacher will ask if you can re-phrase
or restate your knowledge.  This demonstrates that you have internalized
it, and can "translate" from internal to external, in symbols or in modified
behavior.  Language to internalized knowledge, and back to language... if
you can do this, you "understand".

Knowledge is the state of the central nervous system, either built in or
acquired through experience.  Experience is recorded in the CNS paths which
"process" it.  Recording experience essentially in the same lines which
sense it saves space and totally eliminates access time.  There is no
retrieval problem; re-evocation, re-stimulation of the sensory path is
retrieval, and that can be done by association with other experience, or
with symbols.
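That claim has a natural sketch in a small Hopfield-style net (the
patterns and sizes are mine): the stored experience lives in the same
connections which "process" it, and re-stimulating the path with a
partial cue *is* the retrieval.

    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    n = patterns.shape[1]

    # Storage: experience is written into the connections themselves.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    # Retrieval: re-evocation by a damaged cue restores the whole.
    cue = patterns[0].copy()
    cue[:2] *= -1                        # corrupt two "sensory lines"
    state = cue
    for _ in range(5):
        state = np.sign(W @ state)
        state[state == 0] = 1

    print(np.array_equal(state, patterns[0]))   # True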

That's probably enough for one shot.  Except to say I think the time
is ripe for trying some of these ideas out on real machines.  A few years
ago there was no real possibility of building anything so complex as a
Connection Machine or a million-node "neural net", and there's still no
chance at constructing something as complex as a baby, but maybe there's
enough technology to build something pretty interesting, anyway.

Ray

kort@cad.Berkeley.EDU@hounx.UUCP (02/16/87)

Ray Allis has brought up one of my favorite subjects:  the creation
of an artificial mind.

I agree with Ray that symbol manipulation is insufficient.  In last
year's discussion of the Chinese Room, we identified one of the
shortcomings of the Room:  it was unable to learn from experience
and tell the stories of its own adventures.

The cognitive maps of an artificial mind are the maps and models of
the external world.  It is one thing to download a map created by
an external mapmaker.  It is quite another thing to explore one's
surroundings with one's senses and construct an internal representation
which is analogically similar to the external world.

An Artificial Sentient Being would be equipped with sensors (vision,
audition, olfaction, tactition), and would be given the goal of
exploring its environment, constructing an internal map or model
of that environment, and then using that map to navigate safely.

Finally, like Marco Polo, the Artificial Sentient Being would describe
to others, in symbolic language, the contents of its internal map:
it would tell its life story.
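A toy version of that program in Python (the world, the sensing, and
the narration are all invented for illustration): the map is built from
the agent's own senses rather than downloaded, and is then described in
symbols.

    # A tiny world the agent has never seen: '#' wall, '.' open floor.
    world = ["#####",
             "#..##",
             "#.#.#",
             "#...#",
             "#####"]

    internal_map = {}               # built from experience, not given
    frontier = [(1, 1)]             # the Being's starting cell

    while frontier:
        r, c = frontier.pop()
        if (r, c) in internal_map:
            continue
        internal_map[(r, c)] = world[r][c]      # "sense" this cell
        if world[r][c] == ".":
            frontier += [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]

    # Marco Polo: tell the story of the exploration, in symbols.
    opens = sum(1 for v in internal_map.values() if v == ".")
    walls = len(internal_map) - opens
    print(f"I explored {opens} open cells and met {walls} walls.")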

I personally would like to see us build an Artificial Sentient Being
who was able to do Science.  That is, it would observe reality and
construct accurate theories (mental models) of the dynamics which
governed external reality.

Suppose we had two such machines, and we set them to explore each
other.  Would each build an accurate internal representation of the
other?  (That is, could a Turing Machine construct a mathematical
model of (another) Turing Machine?)  Would the Sentient Being
recognize the similarity between itself and the Other?  And in seeing
its soul-mate, would it come to know itself for the first time?

Barry Kort
---
				-- Barry Kort
				...ihnp4!houxm!hounx!kort

	A door opens.  You are entering another dementia.
	The dementia of the mind.

ray@BOEING.COM.UUCP (02/21/87)

Hello?  Where'd everyone go?  Was it something I said?  I have
a couple of things to say, yet.  But fear not, this is part 2 of 2,
so you won't have me cluttering up your mail again in the near future.
This is a continuation of my 2/13/87 posting, in which I am proposing
a radical paradigm shift in AI.


  [The silence on the Arpanet AIList is due to my saving the
  philosophical messages for a weekly batch mailing.  This gives
  other topics a chance and reduces the annoyance of those who
  don't care for these discussions.  -- KIL]


Our common sense thought is based on and determined by those things
which are "sensible" (i.e. that we can sense).  "The fog comes on
little cat feet"  [Sandburg].  Ladies and gentlemen of the AI
community, you are not even close!  Let me relax the criteria a little
and take this phrase, "a political litmus test".  How do you expect a
machine to understand that without experience?  Nor can you ever
*specify* enough "knowledge" to allow understanding in any useful
sense.  The current computer science approach to intelligence is as
futile as the machine translation projects of the 60's, and for the
same reason; both require understanding on the part of the machine,
and of that there isn't a trace.

Obviously symbolic thinking is significant; look at the success of our
species.  There are two world-changing advantages to symbolic thought.
One advantage is the ability to think about the relationships among
things and events without the confusing details of real things and
events; "content-free" or "context-independent" "reasoning" leading to
mathematics and logic and giving us a measure of control over our
environment, and our destiny.  Symbol systems are tools which assist
and enhance human minds, not replacements for those minds.  Production
rules are an externalization of knowledge.  They are how we explain our
behavior to other people.

The other advantage lies in the fundamental difference between
"symbolize" and "represent".  Consider how natural language works.
Through training, you come to associate "words" with experiences.
The immediate motive for this accomplishment is communication; when
you can say "wawa" or "no!", the use of language becomes your best
tool for satisfying your desires and needs.  But a more subtle and
significant thing happens.  The association between any symbol and
that which it symbolizes is arbitrary, and imprecise.  Also, in any
human experience, there is *so much* context that it is practically
the case that every experience is associated with every other, even
if somewhat indirectly.  

So please imagine a brain, in some instantaneous state of excitation
due to external stimuli.  Part of the "context" (or experience) will
be (representations of) symbols previously associated.  Now imagine
the internal loop which presents internal events to the brain as if
they were external events, presenting those symbols as if you "saw"
or "heard" them.  But, since the association is imprecise, the
experience evoked by those symbols will very likely not be identical
to that which evoked the symbols.  A changed pattern of activity in
the nervous system will result, possibly with different associated
symbols, in which case the cycle repeats.
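Schematically, with made-up association tables (nothing here models a
real nervous system): a symbol evokes an experience, the experience
evokes a possibly different symbol, and the loop runs until the pattern
of activity stops changing.

    # Imprecise, arbitrary associations between symbols and experiences.
    evokes_experience = {"key": "opening", "work": "effort",
                         "success": "reward"}
    evokes_symbol = {"opening": "success", "effort": "work",
                     "reward": "success"}

    symbol = "key"
    seen = []
    while symbol not in seen:       # repeat until activity recurs
        seen.append(symbol)
        experience = evokes_experience[symbol]  # "heard" internally
        symbol = evokes_symbol[experience]      # not what evoked it

    print(" -> ".join(seen))        # key -> success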

The function of all this activity is to "converge" on the "appropriate"
behavior for the organism, which is to say to continue the organism's
existence.  There is extreme "parallelism"; immense numbers of events
are occurring simultaneously, and all associations are stimulated
"at once".  Also, none of this is "computation" in the traditional
sense; it is the operation of an analog "device", which is the central
nervous system, in its function of producing "appropriate" behavior.

Imagine an experience represented in hundreds of millions of CNS
connections.  Another experience, whatever the source (external sensors,
memory, or wholly created), will be represented
in the same (identical) neurons, in point-for-point registration, all
half-billion points at once.  Any variation in correspondence will be
immediately conspicuous.  The field (composite) is available for the
same contrast enhancement and figure/ground "processing" as in visual
(or any) input.

Multiple experiences will reinforce at points of correspondence, and
cancel elsewhere.  Tiny children are shown instances of things: dogs,
kittens, cows, fruits, and are expected to generalize and to demonstrate
their generalization, so adults can correct them if necessary.
Generalization is the shift in figure/ground percentage which comes
from "thresholding" out the weaker sensations.  The resultant is the
"intersection" of qualities of two or more experiences.  This whole
operation, comparing millions of sensation details with corresponding
sensation details in another experience can happen in parallel in a
very few cycles or steps.
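In array form (toy feature vectors standing in for the millions of
registered points; the features and values are invented): superposing
registered experiences reinforces where they agree, and thresholding
out the weaker composite leaves exactly the intersection of qualities.

    import numpy as np

    # Instances shown to the child: (furry, four-legged, barks, spotted).
    dogs = np.array([[1.0, 1.0, 1.0, 1.0],
                     [1.0, 1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0, 0.0]])

    composite = dogs.mean(axis=0)   # reinforce agreement, cancel the rest
    concept = composite > 0.8       # threshold out the weaker sensations

    print(composite)   # [1. 1. 1. 0.33...]
    print(concept)     # spotted drops out; the shared "dog" remains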

Informed by Maturana's ideas of autopoietic systems, mind can be
considered as an emergent phenomenon of the complexity which has
evolved in the central nervous systems of Terrestrial organisms
(that's us).  This view has fundamental philosophical implications
concerning whether minds are likely to exist elsewhere in the Universe
due to "natural causes", and whether we can aspire to create minds.

Much "thinking" is of the sort described by the Nobel Prize winner
in "The Search for Solutions" who thinks of DNA as a rope which, when
stretched will break at certain "weak" points.  That "tool", the
visualization, is guided by physical experience, his personal
experience of ropes and their behavior.  Einstein said he often
thought in images; certainly his thought was guided, and perhaps
the results judged, by his personal experience with the things
represented.  We also need "... the ability to generalize, the
ability to strip to the essential attributes of some actor in the
process..."  "We are not ready to write equations, for the most part,
and we still rely on mechanical and chemical or other physical models."
[Joshua Lederberg, Nobel Prize geneticist and president of Rockefeller
University, in "The Search for Solutions".]

The internal loop can use motor action (intents) to re-stimulate
associated sensory input (results) and entire sequences of sensory
input to motor output to sensory input can occur without interacting
with the external environment.  Here is the basis for imagination and
planning.  Experiences need not be original; they may be created
entirely from abstractions.  And this is called *imagination*.

The ability to construct internal imaginary events and situations is
fundamental to symbolic communication: where symbols evoke and are
derived from internal state.  Planning is the process of reviewing a
set of experiences, which may be recalled, or may be constructed
imaginary experiences.  Planning requires imagination (see above) of
actions and consequences.  The success and effectiveness of the
resulting plan depends on the quality and quantity of experiences
available to the planner.  He benefits from a rich repertoire of
experience from which to choreograph his dance of events.  The novelty
in the present theory is that most of the planning process is
essentially and necessarily analog in nature, and symbol processing
is only part of it.  Symbols are critical to make the process
explicit, but the planning process itself is not only, or even
primarily, symbol processing.
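A schematic of planning as imagined experience (the world model, the
actions, and the goal are invented placeholders): candidate action
sequences are run through an internal model, never the external
environment, and judged by the experience they would evoke.

    import itertools

    # An internal "world model": the change each action would cause.
    model = {"left": -1, "right": +1, "wait": 0}
    goal = 3

    def imagine(plan, position=0):
        # Run the plan internally; no interaction with the environment.
        for action in plan:
            position += model[action]
        return -abs(goal - position)  # how good the imagined outcome is

    plans = itertools.product(model, repeat=3)
    best = max(plans, key=imagine)
    print(best)                       # ('right', 'right', 'right')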

If we agree that our minds are an effect of our CNS, then we must
accept that the structure of our mind is determined by the structure
of our CNS.  Sure there's a "deep structure" in linguistic ability;
it's our physical implementation (embodiment).  The "meaning" of
language is that state which it evokes in us.

"A new meaning is born whenever the mind uses a word or other symbol
in a new way.  If you think of a key as something to open a lock and
then speak of hard work as the key to success, you are using the word
key in a new way.  It no longer means simply a metal implement for
opening a lock; it has acquired a much richer sense in your mind:
"necessary prerequisite for attaining a desired goal."  If the word
key were not free to shift its sense, the new concept probably could
not emerge.  All thinkers, whether artists, philosophers, scientists,
businessmen, or laborers, can create new thoughts if they use words
in new ways."  ["The Mind Builder", Richard W. Samson, 1965.]

Samson identified seven mental "faculties" which make an interesting
list of target capabilities for "intelligent machines".  These are:
1.  Words: We let words (together with numbers and other symbols)
    mean things.
2.  Thing Making: We make mental pictures of things when we
    interpret sensations.
3.  Qualification: We notice the qualities of things: how things
    are alike and how they differ.
4.  Classification: We mentally sort things into classes, types or
    families.
5.  Structure Analysis: We observe how things are made: break
    structural wholes into component parts.
6.  Operation Analysis: We notice how things happen: in what
    successive stages.
7.  Analogy: We see how seemingly unconnected situations are
    alike, forming parallel relations in different "worlds of
    thought".

When you are ready, try your system on the SAT test:  

     Which word (a, b, c, or d) best completes the sentence,
     in your opinion?  There is no "right" answer; pick the
     word which seems best to you.
          Poverty and hatred are ---------- of war.
          (a) roots  (b) leaves  (c) seeds  (d) fruits

We might be well advised to imitate a real example intelligence
(ours).  Later we can improve on the implementation, and possibly
the performance.

Certainly we will use mathematics to analyze and predict the system's
behavior; or rather subsets and abstractions, models of the system.
But we may not be able to construct any model that is less complex than
the system itself and still produces the desired behavior; the system's
behavior must be understood through simulation.

"Computational irreducibility is a phenemenon that seems to arise in
many physical and mathematical systems.  The behavior of any system
can be found by explicit simulation of the steps in its evolution.
When the system is simple enough, however, it is always possible to
find a short cut to the procedure: once the initial state of the system
is given, its state at any subsequent step can be found directly from
a mathematical formula."  "For a system such as (illus.), however, the
behavior is so complicated that in general no short-cut description of
the evolution can be given.  Such a system is computationally
irreducible, and its evolution can effectively be determined only by
the explicit simulation of each step.  It seems likely that many
physical and mathematical systems for which no simple description is
now known are in fact computationally irreducible.  Experiment, either
physical or computational, is effectively the only way to study such
systems."
[Stephen Wolfram, Computer Software in Science and Mathematics,
Scientific American, Sept., 1984]
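Wolfram's running examples were one-dimensional cellular automata; a
minimal rule-30 simulation (the sizes are mine) makes the point
concrete: the only known general way to learn row one hundred is to
compute rows one through ninety-nine.

    # Rule 30: next cell = left XOR (center OR right).  No known formula
    # predicts a distant row without simulating every step between.
    def step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    row = [0] * 31 + [1] + [0] * 31      # a single live cell
    for _ in range(20):                  # explicit simulation, step by step
        row = step(row)
        print("".join(".#"[c] for c in row))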

A mind is an effect which probably cannot be sustained at a lesser
level of complexity than in our own case; any abstraction which
simplifies will also destroy the very capabilities we wish to
understand.  There are trillions of components and connections in the
human brain.  No reasonable person can expect to model a mind in any
significant way using a few tens or hundreds of components.  Since
there is a threshold of complexity below which the behavior of
interest will not occur, and the complexity of models is generally
deliberately reduced below this level, models will not produce the
phenomena of interest.

"Yet recall John von Neumann's warning that a complete description of
how we perceive may be far more complicated than this complicated
process itself - that the only way to explain pattern recognition
may be to build a device capable of recognizing pattern, and then,
mutely, point to it.

How we think is still harder, and almost certainly we are not yet
breaking this problem down in solvable form."

Horace Freeland Judson, "The Search for Solutions", 1980.

In spite of the tone of that last quote, I believe we can and should
build, now, things which will prove or disprove these ideas, so we
can either quit wasting energy or get going on building other minds.

I'm not going to be at this mail address after March 1, but probably
someone will forward my mail.  The Boeing Advanced Technology Center
just closed down all its robotics projects, including mobility and
stereo vision, my work in induction, and all other work not "directly
supporting Boeing programs".  So twenty-plus of us are scrambling to
find other places to work.  I don't know what access to any networks
I might have next month.

Ray