[comp.ai] Cog Sci Fi

harnad@elbereth.rutgers.edu (Stevan Harnad) (12/11/89)

This is a multiple reply, to multiple errors:

cam@aipna.ed.ac.uk (Chris Malcolm) of 
Dept of AI, Edinburgh University, UK, wrote:

> my game is assembly robotics... The assembly agent is designed not only
> to succeed in its tasks, but to present a suitable virtual world to the
> planner, so there is an extra constraint on the task modularisation. That
> constraint is sometimes referred to as the symbol grounding problem.

As the mint that coined the term, I think I speak with a little
semantic authority when I say that that's decidedly *not* the symbol
grounding problem but rather a symptom of it! Virtual worlds are not
real worlds, and what goes on inside a simulation is just meaningless
symbol crunching. The only way to ground symbols is in the real world.

Let me add the prediction that whereas a "virtual world" may allow one
to ground a toy robot in the real world, it will never lead to what --
for a psychobiologist, at any rate -- is the real goal: A robot that
passes the Total Turing Test. The reason is again the symbol grounding
problem: Virtual worlds cannot embody all the contingencies of the real
world, they can only capture as many of them symbolically as we can
anticipate. In ordinary AI this was known as the "frame" problem -- but
of course that's just another manifestation of the symbol grounding
problem.

mike@cs.arizona.edu (Mike Coffin) of
U of Arizona CS Dept, Tucson, wrote:

> an artificial intelligence living... in a (sub-)simulation on a Cray-9
> would have no choice but to accept the simulated flight to Istanbul to
> the AI conference as ``reality.''

Here is the other side of the coin, what I have called the "hermeneutic
hall of mirrors" created by projecting our interpretations onto
meaningless symbols. As long as you allow yourself to interpret
ungrounded symbols you'll keep coming up with "virtual reality."
The only trouble is, what we're after is real reality (and that
distinction is being lost in the wash). There's nobody home in
a symbol cruncher, and it's not because they're on a virtual
flight to Istanbul! This is what I call "Cog Sci Fi."

yamauchi@cs.rochester.edu (Brian Yamauchi) of
University of Rochester Computer Science Department wrote:

> My complaint about most AI programs is not that the worlds are simulated,
> but that the simulated worlds often are very unlike any type of
> perceptual reality sensed by organic creatures.  It's a matter of
> semantics to argue whether this is "intelligence"...
> It seems that one interesting approach to AI would be to use the
> virtual reality systems which have recently been developed as an
> environment for artificial creatures. Then they would be living in a
> simulated world, but one that was sophisticated enough to provide a
> convincing illusion for *human* perceptions.

Illusion is indeed the right word! Simulated worlds are no more
"like" a reality than books are: They are merely *interpretable*
by *us* as being about a world. The illusion is purely a
consequence of being trapped in the hermeneutic hall of mirrors.
And that's not just "semantics," it's just syntax...
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771

yamauchi@cs.rochester.edu (Brian Yamauchi) (12/11/89)

In article <Dec.10.11.48.06.1989.2717@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>
>yamauchi@cs.rochester.edu (Brian Yamauchi) of
>University of Rochester Computer Science Department wrote:
>
>> My complaint about most AI programs is not that the worlds are simulated,
>> but that the simulated worlds often are very unlike any type of
>> perceptual reality sensed by organic creatures.  It's a matter of
>> semantics to argue whether this is "intelligence"...
>> It seems that one interesting approach to AI would be to use the
>> virtual reality systems which have recently been developed as an
>> environment for artificial creatures. Then they would be living in a
>> simulated world, but one that was sophisticated enough to provide a
>> convincing illusion for *human* perceptions.
>
>Illusion is indeed the right word! Simulated worlds are no more
>"like" a reality than books are: They are merely *interpretable*
>by *us* as being about a world.

There is a *big* difference between a book and a virtual reality (or a
movie, for that matter).  When you read a book you are interpreting
linguistic symbols; when you watch a movie you are processing raw
sensory perceptions.

Suppose virtual reality technology develops to the point where it is
impossible for a human to tell an illusion from reality (this is already
the case for still computer graphic images of some objects).  In this
case, the imaging patterns hitting the person's retina will be the
same regardless of whether he is viewing the real world or a simulated
one.  Now, suppose that we can develop a program which can react to
these images in the same way that a human can.  Does it make any
difference whether the inputs to the program come from the simulator
or a pair of cameras observing the real world?
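
To put the point in concrete terms, here is a minimal sketch (all of the
names are hypothetical, and the camera and renderer objects are
stand-ins, so treat it as an illustration rather than any real system):

from abc import ABC, abstractmethod

class ImageSource(ABC):
    """Anything that can hand an agent a 2-D array of pixel values."""
    @abstractmethod
    def next_frame(self):
        ...

class CameraSource(ImageSource):
    def __init__(self, camera):
        self.camera = camera            # a real frame grabber (stand-in)
    def next_frame(self):
        return self.camera.grab()

class SimulatorSource(ImageSource):
    def __init__(self, renderer):
        self.renderer = renderer        # a graphics engine (stand-in)
    def next_frame(self):
        return self.renderer.render()

def run_agent(agent, source, steps=1000):
    # The agent's code never changes; only the source object does.  If
    # the two sources deliver identical pixel arrays, the agent's
    # behaviour is necessarily identical as well.
    for _ in range(steps):
        yield agent.react(source.next_frame())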

Now, if you are arguing that it will be impossible in *practice* to
build a simulator which has the complexity of the real-world, in terms
of interactivity and modeling complex physical laws, then you may have
a point.

>The illusion is purely a
>consequence of being trapped in the hermeneutic hall of mirrors.

Actually, the illusion is the result of having very sophisticated
graphics software...

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

blenko-tom@CS.YALE.EDU (Tom Blenko) (12/11/89)

In article <Dec.10.11.48.06.1989.2717@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
|...
|Here is the other side of the coin, what I have called the "hermeneutic
|hall of mirrors" created by projecting our interpretations onto
|meaningless symbols. As long as you allow yourself to interpret
|ungrounded symbols you'll keep coming up with "virtual reality."
|The only trouble is, what we're after is real reality (and that
|distinction is being lost in the wash). There's nobody home in
|a symbol cruncher, and it's not because they're on a virtual
|flight to Istanbul! This is what I call "Cog Sci Fi."

You appear to be arguing both with the assumption that there simply is
no escape from this situation, and the related proposal that no escape
is necessary.  If you wish to argue that there is any such thing as a
symbol grounding problem, I think you have to address both of these
views (which I understand to be widely accepted).

The argument that there is no escape goes like this: even if all
information from the environment were in principle available to a
putative intelligent entity (it makes no difference whether it is
artificial or not), there are necessarily limitations on what
information the entity could extract. So the fact that some information
may not be available in principle is at most a different facet of an
existing fundamental limitation.  I believe that recognition of this
property was first expressed in Herb Simon's Principle of Bounded
Rationality.

Why isn't escape necessary? All sorts of entities (corporations, the
roach species, you and I, etc.) make imperfect use of incomplete
information in order to survive and reproduce.  Certainly the validity
of these entities as predictors of their own, real-world futures plays
a role in their survival -- but there are a host of other important
strategies they use that are very far divorced from anything we would
term "intelligence" (e.g., genetic recombination, reproductive
strategies, role specialization).  So some "virtual realities" are good
enough, and (in the case of roaches) may be strikingly simpler than
"real reality".

So the conclusion is that "real reality" is neither attainable nor
necessary for any phenomenon termed "intelligence" to be realized.

	Tom

jwi@cbnewsj.ATT.COM (Jim Winer @ AT&T, Middletown, NJ) (12/11/89)

> >>Brian Yamauchi wrote:
> >>
> >> My complaint about most AI programs is not that the worlds are simulated,
> >> but that the simulated worlds often are very unlike any type of
> >> perceptual reality sensed by organic creatures.  It's a matter of
> >> semantics to argue whether this is "intelligence"...
> >> It seems that one interesting approach to AI would be to use the
> >> virtual reality systems which have recently been developed as an
> >> environment for artificial creatures. Then they would be living in a
> >> simulated world, but one that was sophisticated enough to provide a
> >> convincing illusion for *human* perceptions.

> >Stevan Harnad writes:
> >
> >Illusion is indeed the right word! Simulated worlds are no more
> >"like" a reality than books are: They are merely *interpretable*
> >by *us* as being about a world.

> Brian Yamauchi writes:
> 
> There is a *big* difference between a book and a virtual reality (or a
> movie, for that matter).  When you read a book you are interpreting
> linguistic symbols; when you watch a movie you are processing raw
> sensory perceptions.


When you read a book, you are processing raw sensory perceptions and
interpreting them as literary symbols which recall associated memories
or previous perceptions and emotional states. 

When you watch a movie you are processing raw sensory perceptions and
interpreting them as visual symbols which recall associated memories
or previous perceptions and emotional states.

To an artificial creature without the necessary referents (the previous
perceptions and emotional states) of a human, the interpretation of
human reality is likely to be impossible. To a human without the
necessary referents (the previous perceptions and emotional states) of a
creature living in an artificial reality, the interpretation of
artificial reality is likely to be equally impossible. In short,
we probably couldn't communicate -- the problem isn't words, the
problem is what would we have to say to an intelligent tree?

> >The illusion is purely a
> >consequence of being trapped in the hermeneutic hall of mirrors.

> Actually, the illusion is the result of having very sophisticated
> graphics software...

Actually, the illusion is the result of falsely thinking that an
individual has the referents to communicate outside its particular
system, or the ability to even perceive outside its system.

Jim Winer -- Post, don't email, I usually can't reply.
-----------------------------------------------------------------
opinions not necessarily |  "And remember, rebooting your brain
and do not represent     |   can be tricky." -- Chris Miller
any other sane person    |
especially not employer. |

jwi@cbnewsj.ATT.COM (Jim Winer @ AT&T, Middletown, NJ) (12/11/89)

> Tom Blenko writes:
> 
> So the conclusion is that "real reality" is neither attainable nor
> necessary for any phenomenon termed "intelligence" to be realized.

Perhaps, but "real reality" or at least a "common consensual reality"
may very well be necessary for meaningful communication with any
phenomenon termed "intelligence."

Jim Winer -- Post, don't email, I usually can't reply.
-----------------------------------------------------------------
opinions not necessarily |  "And remember, rebooting your brain
and do not represent     |   can be tricky." -- Chris Miller
any other sane person    |
especially not employer. |

harnad@phoenix.Princeton.EDU (Stevan Harnad) (12/12/89)

yamauchi@cs.rochester.edu (Brian Yamauchi)
University of Rochester Computer Science Department

> Suppose virtual reality technology develops to the point where it is
> impossible for a human to tell an illusion from reality (this is
> already the case for still computer graphic images of some objects)...
> Now, suppose that we can develop a program which can react to these
> images in the same way that a human can... Now, if you are arguing that
> it will be impossible in *practice* to build a simulator which has the
> complexity of the real-world, in terms of interactivity and modeling of
> complex physical laws, then you may have a point.

First of all, the last point WAS my point: The problem of designing a
robot that will pass the Total Turing Test (TTT) is a tiny subset of
the problem of simulating the world the robot is in, not vice versa.
(Another way to put it is that an analog object or state of affairs is
infinitely more compact than any possible symbolic description of it. To
approximate it closely enough, the description quickly becomes
ludicrously large.)
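
As a rough, purely illustrative calculation (the numbers below are mine,
chosen only to show the scaling, not anything from the paper), consider
how fast even a crude discretization of a single cubic metre grows as
the grid is refined:

def description_bits(cells_per_metre, bits_per_cell=8):
    # Symbolic description of a 1 m cube on a uniform grid:
    # (cells per metre)^3 cells, a few bits per cell.
    return (cells_per_metre ** 3) * bits_per_cell

for cells_per_metre in (10, 1_000, 1_000_000):     # 10 cm, 1 mm, 1 micron
    print(cells_per_metre, "cells/m:",
          f"{description_bits(cells_per_metre):.1e}", "bits")

# Output:
#   10 cells/m: 8.0e+03 bits
#   1000 cells/m: 8.0e+09 bits
#   1000000 cells/m: 8.0e+18 bits
# The cube itself, of course, just sits there at full "resolution" for free.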

Second, the point of the TTT is for the ROBOT to pass it, in the world
and for us, not for its WORLD to pass it for us.

Finally, it's irrelevant what graphics we hook onto one symbol cruncher
in providing inputs to another symbol cruncher. (That's like having two
computers play chess against one another: it may as well all be one
computer.) Symbol crunchers don't see, not even if you hook transducers
onto them. It's just playing on the illusion from OUR point of view to
bother having a graphics interface. So it's still just the hermeneutic
circle, whether we're projecting our interpretations on symbolic text
or on symbol-governed graphics.
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@confidence.princeton.edu       srh@flash.bellcore.com
harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

jiii@visdc.UUCP (John E Van Deusen III) (12/12/89)

In article <Dec.10.11.48.06.1989.2717@elbereth.rutgers.edu>
harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
>
> ... Virtual worlds are not real worlds, and what goes on inside a
> simulation is just meaningless symbol crunching. The only way to
> ground symbols is in the real world.

Real world?  You have no way of proving that there is a "real world",
that there is only one, or that we are not living inside a simulation.
Intelligence is intelligence, no matter if it is accepting patterns in
{a,b}* or operating a fork lift on planet earth.  It is only a matter of
degree.

> ... a "virtual world" ... will never lead to what -- for a
> psychobiologist, at any rate -- is the real goal: A robot that passes
> the Total Turing Test.

Does a "Total" Turing Test differ from a Turing Test in that
psychobiologists will, when confronted with a robot that passes the test
to the satisfaction of everyone else, insist upon running the tests out
to infinity?  It has been proven that equivalence testing programs with
that level of generality can not exist.

> ... Virtual worlds cannot embody all the contingencies of the real
> world, they can only capture as many of them symbolically as we can
> anticipate.

Let's say we have creatures living in the universe of {a,b}*.  They have
gotten pretty smart, in terms of language recognition, and have started
to simulate their universe, which consists of patterns of 'a's and 'b's
coming through a channel.  It is true that they don't have a prayer,
because the number of patterns, subsets of {a,b}*, is not only infinite,
it is not even enumerable.  Standing on the other end of the channel,
the god of the ab-creatures can send any pattern it wants and always
keep the creations guessing.
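
The standard diagonal trick makes that concrete. Here is a toy sketch
(hypothetical code, and finite only for the sake of the example) of the
god's strategy against any list of creature-theories:

def gods_pattern(theories, length):
    # theories: the creatures' candidate predictors; theories[i](n) is
    # theory i's guess for symbol n.  At position n the god consults
    # whichever theory is "due" and emits the other symbol, so every
    # theory in the list is contradicted over and over.
    out = []
    for n in range(length):
        guess = theories[n % len(theories)](n)
        out.append('b' if guess == 'a' else 'a')
    return ''.join(out)

creature_theories = [
    lambda n: 'a',                          # "it is always a"
    lambda n: 'b',                          # "it is always b"
    lambda n: 'a' if n % 2 == 0 else 'b',   # "it alternates"
]
print(gods_pattern(creature_theories, 12))  # disagrees with each theory

The same flip works against any enumeration, which is all the diagonal
argument needs: the set of possible channel patterns simply outruns
anything the creatures can enumerate.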

Now consider that the god of the ab-creatures is itself an ab-creature
being fed patterns from yet a higher level.  At each level, as gods,
they have total knowledge of the universe of the creatures immediately
below them; and, as creations, find their own universe incomprehensibly
complex and unknowable.  Even when the levels of simulation are carried
out to infinity, there is no qualitative difference in the intelligence
of the occupants or their "virtual" reality between the successive
levels of simulation.  Thus the inhabitants can not prove at what level
they are or even that they themselves exist within a simulation.
--
John E Van Deusen III, PO Box 9283, Boise, ID  83707, (208) 343-1865

uunet!visdc!jiii

harnad@phoenix.Princeton.EDU (Stevan Harnad) (12/12/89)

Tom Blenko blenko-tom@CS.YALE.EDU  of
Yale University Computer Science Dept wrote:

> You appear to be arguing both with the assumption that there simply is
> no escape from this situation, and the related proposal that no escape
> is necessary.  If you wish to argue that there is any such thing as a
> symbol grounding problem, I think you have to address both of these
> views (which I understand to be widely accepted).

Before you can argue about anything connected with the symbol grounding
problem you first have to know what it is (preprint available by email):

            THE SYMBOL GROUNDING PROBLEM

             (Physica D 1990, in press)

                 Stevan Harnad
            Department of Psychology
              Princeton University

ABSTRACT: There has been much discussion recently about the scope and
limits of purely symbolic models of the mind and about the proper role
of connectionism in cognitive modeling. This paper describes the
"symbol grounding problem" for a semantically interpretable symbol
system:  How can its semantic interpretation be made intrinsic to the
symbol system, rather than just parasitic on the meanings in our heads?
How can the meanings of the meaningless symbol tokens, manipulated
solely on the basis of their (arbitrary) shapes, be grounded in
anything but other meaningless symbols? The problem is analogous to
trying to learn Chinese from a Chinese/Chinese dictionary alone.

A candidate solution is sketched: Symbolic representations must be
grounded bottom-up in nonsymbolic representations of two kinds:
(1) iconic representations, which are analogs of the proximal sensory
projections of distal objects and events, and (2) categorical
representations, which are learned and innate feature-detectors that
pick out the invariant features of object and event categories from
their sensory projections. Elementary symbols are the names of these
object and event categories, assigned on the basis of their
(nonsymbolic) categorical representations. Higher-order (3) symbolic
representations, grounded in these elementary symbols, consist of
symbol strings describing category membership relations ("An X is a Y
that is Z").

Connectionism is one natural candidate for the mechanism that learns
the invariant features underlying categorical representations, thereby
connecting names to the proximal projections of the distal objects they
stand for. In this way connectionism can be seen as a complementary
component in a hybrid nonsymbolic/symbolic model of the mind, rather
than a rival to purely symbolic modeling. Such a hybrid model would not
have an autonomous symbolic "module," however; the symbolic functions
would emerge in the form of an intrinsically "dedicated" symbol system
as a consequence of the bottom-up grounding of categories' names in
their sensory representations. Symbol manipulation would be governed
not just by the arbitrary shapes of the symbol tokens, but by the
nonarbitrary shapes of the icons and category invariants in which they
are grounded.
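
To make the proposal concrete, here is a deliberately toy sketch of the
three kinds of representation (a nearest-prototype classifier stands in
for the connectionist feature learner, and all of the names and numbers
are illustrative, not code from the paper):

import math

class GroundedLexicon:
    def __init__(self):
        self.prototypes = {}      # elementary symbol -> categorical rep
        self.definitions = {}     # higher-order symbol -> (Y, Z)

    def learn_category(self, name, icons):
        # (1) iconic representations: analogs of the sensory projections
        #     (here, plain feature vectors).
        # (2) categorical representation: an invariant extracted from
        #     them (here, simply their mean; a connectionist network
        #     would be the serious candidate for this step).
        dim = len(icons[0])
        self.prototypes[name] = [sum(v[i] for v in icons) / len(icons)
                                 for i in range(dim)]

    def name_of(self, icon):
        # Elementary symbols are assigned on the basis of the
        # nonsymbolic categorical representation, not by fiat.
        return min(self.prototypes,
                   key=lambda n: math.dist(self.prototypes[n], icon))

    def define(self, x, y, z):
        # (3) higher-order symbol: "An X is a Y that is Z", inheriting
        #     its grounding from the already-grounded names Y and Z.
        self.definitions[x] = (y, z)

lex = GroundedLexicon()
lex.learn_category("horse",   [[0.9, 0.1], [0.8, 0.2]])   # toy projections
lex.learn_category("striped", [[0.1, 0.9], [0.2, 0.8]])
lex.define("zebra", "horse", "striped")
print(lex.name_of([0.85, 0.15]))                          # -> horse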
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@confidence.princeton.edu       srh@flash.bellcore.com
harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (12/12/89)

From article <8093@cs.yale.edu>, by blenko-tom@CS.YALE.EDU (Tom Blenko):
> In article <Dec.10.11.48.06.1989.2717@elbereth.rutgers.edu> harnad@elbereth.rutgers.edu (Stevan Harnad) writes:
> |.... As long as you allow yourself to interpret
> |ungrounded symbols you'll keep coming up with "virtual reality."
> |The only trouble is, what we're after is real reality ....

The reference to real reality raised a ghost, and part of Tom Blenko's
reply illuminated it:

> ..... even if all
> information from the environment were in principle available to a
> putative intelligent entity (it makes no difference whether it is
> artificial or not), there are necessarily limitations on what
> information the entity could extract......
> ...... All sorts of entities (corporations, the
> roach species, you and I, etc.) make imperfect use of incomplete
> information in order to survive and reproduce......

In sophomore philosophy, I heard of a problem that Rene Descartes had.
He had sensory information seemingly coming into his mind, but he
wasn't sure it was really sensory in origin.  He was sure only that he
existed himself: "Cogito, ergo sum," I think, therefore I am.  In other
words, he knew of the existence of the symbols, but he didn't know
whether or not they were grounded.  He finally copped out, through some
argument based on the concept of God, and convinced himself that God
had to exist, and hence, somehow, that everything else did too.

But is there any difference between Descartes's symbol grounding
problem (which we all have to solve for ourselves, as individuals), and
the problem of symbol grounding in an automaton?

I think there is no logical way out, just as there was none for
Descartes.  And for practical purposes we don't need one, because we
have got on for billions of years without one.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.
Notice: Communication will cease 12/30/89 due to retirement.

mnr@daisy.learning.cs.cmu.edu (Marc Ringuette) (12/12/89)

Stevan Harnad writes,
> As the mint that coined the term, I think I speak with a little
> semantic authority when I say that that's decidedly *not* the symbol
> grounding problem but rather a symptom of it! Virtual worlds are not
> real worlds, and what goes on inside a simulation is just meaningless
> symbol crunching. The only way to ground symbols is in the real world.
> 
> Let me add the prediction that whereas a "virtual world" may allow one
> to ground a toy robot in the real world, it will never lead to what --
> for a psychobiologist, at any rate -- is the real goal: A robot that
> passes the Total Turing Test. The reason is again the symbol grounding
> problem: Virtual worlds cannot embody all the contingencies of the real
> world, they can only capture as many of them symbolically as we can
> anticipate. In ordinary AI this was known as the "frame" problem -- but
> of course that's just another manifestation of the symbol grounding
> problem.
 
Mr. Harnad, I think you're taking a practical argument and couching it,
misleadingly, in philosophical terms.  Here are three arguments you seem to
be making, and my commentary:

1. Existing symbolic AI systems live in simulated worlds which are a long
   distance away from reality.  The only reason some people think they are
   grounded in reality is because they're projecting.

   [ I agree that the distance to reality is large in most cases. ]

2. It's impractical to produce a simulated world which is a short distance
   from reality.  

   (My definition of "a short distance" would be that, if you were to
   replace the simulation with sensors and effectors which act in the real
   world, the AI system would be able to operate effectively in the real
   world if it does so in the simulation.  The construction of the
   sensors/effectors must not be allowed to hide complexities of the
   problem, of course; the simulation is a "short distance" from reality
   only if the sensor/effector system is a fairly direct translation.)

   [ I think this is a reasonable argument for someone to make, but I disagree
     on empirical grounds: I expect that we will soon see AI systems which
     run equally well on the real world and on fairly-detailed simulations. 

     Right here in our lab at CMU, we run a real robot with an extremely
     simple symbolic AI system.  The abstraction done by the sensor/effector
     system is considerable; I would say that when we decouple our system 
     from reality and run it under the simulation, it is a fairly "large
     distance" from reality, so it isn't a counterexample to claim (2).
     However, it's only a matter of degree, and powerful robotic performance
     systems must decrease that distance in order to be able to fully test
     their systems in simulation.  I believe they will succeed. ]
     
3. No matter what the simulation is, if it's _completely symbolic_, then 
   it's not _grounded_.

   [ This is the philosophical point.  I don't think this matters at all. ]


As a practicing roboticist, it's clear that when I consider an AI system
I should not be asking the question "Is it grounded?" but rather the question
"How interesting is it?", where _interest_ is positively correlated with
_realism_.  That's the way for me to produce good research results.  Asking
this question may lead me to ground my system in the lab, or it may not.

I think it's up to you to give reasons why the AI community should care about
the philosophical issue of _grounding_ rather than the practical issues of
_interest_ and _realism_.  And I think it behooves you to be more careful
to identify which of your arguments are practical and which are philosophical.

If you can't argue that the symbol grounding problem is worth considering 
on practical grounds, you might consider emphasizing your more practical
arguments in this forum.  For instance, if you wish to advocate connectionism,
you might wish to cite the fact that connectionist systems typically have a
"short distance" to reality, and that their _realism_ contributes to their
_interest_.


\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
 \\\ Marc Ringuette \\\ Carnegie Mellon University, Comp. Sci. Dept. \\\
  \\\ mnr@cs.cmu.edu \\\ Pittsburgh, PA 15213.  Phone 412-268-3728(w) \\\
   \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\

harnad@phoenix.Princeton.EDU (Stevan Harnad) (12/12/89)

mnr@daisy.learning.cs.cmu.edu (Marc Ringuette)
of Carnegie-Mellon University, CS/RI wrote:

> As a practicing roboticist, it's clear that when I consider an AI
> system I should not be asking the question "Is it grounded?" but rather
> the question "How interesting is it?", where _interest_ is positively
> correlated with _realism_...  I think it's up to you to give reasons
> why the AI community should care about the philosophical issue of
> _grounding_ rather than the practical issues of _interest_ and
> _realism_.

As a practicing roboticist, you can be interested in whatever you like.
But if you're doing real robotics, rather than virtual robotics, your
robots better be able to do whatever they do in the real world. To the
extent that symbol-crunching in a virtual world can actually be
translated into robotic performance in the real world, none of my
objections should worry you. To the extent it cannot, they
should. For my part, my interest is in a robot that can pass the
Total Turing Test; the symbol grounding problem for this enterprise
is empirical and methodological. It isn't and never has been
philosophical.
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@confidence.princeton.edu       srh@flash.bellcore.com
harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

yamauchi@cs.rochester.edu (Brian Yamauchi) (12/12/89)

In article <691@visdc.UUCP> jiii@visdc.UUCP (John E Van Deusen III) writes:
>
>Intelligence is intelligence, no matter if it is accepting patterns in
>{a,b}* or operating a fork lift on planet earth.  It is only a matter of
>degree.

This strikes me as the core of what is wrong with a substantial
proportion of AI research: the idea that the way to build
intelligence is to start with abstract "language" (in the math-logic
sense of the word) recognizers on {a,b}* or theorem provers using FOPC,
and then to decide that it is only "a matter of degree" to expand these
systems to human intelligence.

There is one clear example of how to develop intelligent systems
incrementally -- it's called evolution.  Wouldn't it make more sense
to develop artificially intelligent systems in an analogous manner --
starting with simple, fully autonomous creatures, and progressively
adding more advanced capabilities?

True, the fact that nature used this course does not mean that it is
the only course, but it does mean that it is a possible one.  (And
hopefully, the substitution of intelligent design for random mutations
will cut down the required time by a few billion years.)

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

mnr@daisy.learning.cs.cmu.edu (Marc Ringuette) (12/13/89)

Stevan Harnad writes,
> ... the symbol grounding problem for this enterprise is empirical and
> methodological. It isn't and never has been philosophical.

Let's look at the following statement from a methodological point of view.

You wrote,
> ... what goes on inside a simulation is just meaningless
> symbol crunching. The only way to ground symbols is in the real world.

What did you mean?  If you're saying that an AI system running in a
simulation is totally meaningless, then I'll argue that you're flat wrong.

I get a feeling that you only meant 'meaningless' in the philosophical
sense.  In that case, my question stands: why make philosophical arguments
if you really mean to discuss methodological issues?  And would you re-state
the methodological case you were trying to make?  I couldn't hear it for
the noise.

   ///////////////////////////////////////////////////////////////////////
  /// Marc Ringuette /// Carnegie Mellon University, Comp. Sci. Dept. ///
 /// mnr@cs.cmu.edu /// Pittsburgh, PA 15213.  Phone 412-268-3728(w) ///
///////////////////////////////////////////////////////////////////////

mike@cs.arizona.edu (Mike Coffin) (12/18/89)

This is a single reply to multiple errors :-)

From article <Dec.10.11.48.06.1989.2717@elbereth.rutgers.edu>,
   by harnad@elbereth.rutgers.edu (Stevan Harnad): 
> As long as you allow yourself to interpret
> ungrounded symbols you'll keep coming up with "virtual reality."
> The only trouble is, what we're after is real reality (and that
> distinction is being lost in the wash). There's nobody home in
> a symbol cruncher ...

You omitted the first half of my posting, in which I pointed out that
there is nothing (that I know of) that precludes the possibility that
*we* might be simulated.  In that case, what is the difference between
real reality and virtual reality?

On a more general issue, this "problem of symbol grounding" seems
nothing more than a mantra that some people chant when they're faced
with arguments they can't deal with otherwise.  If the only thing that
distinguishes reality and virtual reality is that symbols are grounded
in one but not the other, what is the real difference?  I mean, what
observable effect does "grounding" the symbols have?  Does a system
begin behaving differently when symbols are grounded?  Would *we*
begin behaving differently if the entity that wrote our program suddenly
said, "My God! they're ungrounded!"?
-- 
Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2}!arizona!mike
Tucson, AZ  85721			(602)621-2858

cam@aipna.ed.ac.uk (Chris Malcolm) (12/31/89)

Stevan Harnad wrote:

> cam@aipna.ed.ac.uk (Chris Malcolm) wrote:

>> my game is assembly robotics... The assembly agent is designed not only
>> to succeed in its tasks, but to present a suitable virtual world to the
>> planner, so there is an extra constraint on the task modularisation. That
>> constraint is sometimes referred to as the symbol grounding problem.

> As the mint that coined the term, I think I speak with a little
> semantic authority when I say that that's decidedly *not* the symbol
> grounding problem but rather a symptom of it! Virtual worlds are not
> real worlds, and what goes on inside a simulation is just meaningless
> symbol crunching. The only way to ground symbols is in the real world.

I think Stevan has taken me to mean the opposite of what I intended! When
I said "virtual world" I meant something which bore the same
relationship to the real world as does a virtual machine (such as a
software code intepreter) to a real machine (such as the computer
running the interpreter. In other words, this kind of "virtual world" is
just as real as the "real world", the "virtual" prefix meaning simply
that we are referring to a level within the organisation of a creature
which is hosted by the various physical and information processing
mechanisms which underly it. At the top level the virtual world of a
creature is "what it is like to be" that creature, the Umwelten of von
Uexküll. A virtual world in this sense is most emphatically not any kind
of simulation.

If, however, one presumes that the symbol grounding problem can be
solved by the kind of perceptual mechanisms Stevan has outlined in his
various papers, with the occasional qualification that a similar system
can be devised for the motor side, then it is true that the kind of
virtual world I am talking about becomes rather like a simulated world,
such as in the often cited example of the aircraft simulator being
hooked up to the controls and sensors of a real aircraft, and thus
crossing the great divide between simulation of flight and really
flying. The reason for this similarity is the separation of the sensor
and motor hierarchies until they are combined at a symbolic level. This
separation omits the powerful facilities available to a creature by
combining sensing and motor activity to create kinds of sensing and
action otherwise impossible (or much more computationally expensive).
One example of this kind of facility is the servo-mechanism, which uses
feedback to create - as far as the functioning creature is concerned - a
useful and perceptible stability from the otherwise unstable and
ephemeral. Another example is the use of an unstable motor activity
which the environmental conditions will tend to drive into one of two
appropriate limiting conditions. In this case the unstable motor
activity is used as a combination of sensing and response system.  There
are many other kinds of ways in which sensing and action can be combined
to advantage, or substituted for one another.
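
For concreteness, here is a toy sketch of the servo point (purely
illustrative code, not any real controller): viewed from outside the
loop, the creature simply possesses a stable quantity it can treat as a
feature of its world, although no separate sensory or motor module
computes it.

def servo(read_sensor, drive_motor, setpoint, gain=0.5, steps=200):
    # Sensing and acting are amalgamated inside one tight loop; the
    # "perceptible stability" only exists by virtue of that loop.
    for _ in range(steps):
        error = setpoint - read_sensor()
        drive_motor(gain * error)

# A crude stand-in plant: a quantity the motor effort adds to directly.
state = {"x": 0.0}
servo(read_sensor=lambda: state["x"],
      drive_motor=lambda u: state.update(x=state["x"] + u),
      setpoint=1.0)
print(round(state["x"], 3))   # ~1.0: held at the setpoint by the loop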

Wherever sensing and action have been locally amalgamated in this sort of
way a barrier is created to the extension of separate sensor and motor
processing hierarchies. While these processing hierarchies definitely
exist, and are most important, they can't be extended beyond such a
barrier. An independent set of such barriers can be seen as constituting
a level. There can be many such levels erected atop one another. When I
say "virtual world" in the context of a biological or artificial
creature I refer to such a level. These virtual worlds depend on the
active functioning of the real creature situated in the real world, and
in that sense they are thoroughly real and grounded.

Let me add the prediction that whereas one certainly can ground toy
robots with architectures which involve separate perceptual and motor
hierarchies erected on top of a basic bottom level of feedback control,
we will not be able to create "intelligent" but dumb (speechless)
robots, let alone a robot which could pass the Total Turing Test,
without multiple levels of such virtual worlds, each one creating new
virtual sensors and effectors, in terms of which the next level can be
constructed. In between these levels, I expect the kind of categorical
perception described by Stevan Harnad, and its motor analogue, will play
an important role. But, IMHO, it cannot, on its own, accomplish all that
is required of symbol grounding in a complex intelligent creature.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/02/90)

From article <1781@aipna.ed.ac.uk>, by cam@aipna.ed.ac.uk (Chris Malcolm):
> ...
>Wherever sensing and action have been locally amalgamated in this sort of
>way a barrier is created to the extension of separate sensor and motor
>processing hierarchies. ...

This seems a central point, but I don't understand it.  Why is
a barrier created?  (And is the barrier in the creature, or
in our analysis of it?)
				Greg, lee@uhccux.uhcc.hawaii.edu

cam@aipna.ed.ac.uk (Chris Malcolm) (01/16/90)

In article <5871@uhccux.uhcc.hawaii.edu> lee@uhccux.uhcc.hawaii.edu (Greg Lee) writes:
>From article <1781@aipna.ed.ac.uk>, by cam@aipna.ed.ac.uk (Chris Malcolm):
  ...
>>Wherever sensing and action have been locally amalgamated in this sort of
>>way a barrier is created to the extension of separate sensor and motor
>>processing hierarchies. ...

>This seems a central point, but I don't understand it.  Why is
>a barrier created?  (And is the barrier in the creature, or
>in our analysis of it?)

Good point. Now that I think it out carefully, it is not in fact a
barrier, so much as an opportunity not to be missed. One _could_ just go
on extending the separate sensor and motor processing hierarchies, but
the point is that local amalgamation, as in a feedback servo, creates
(when seen with the right timeframe and granularity) a useful new
feature of the "world" (umwelt) which cannot be seen either as the
result of pure sensory processing, or as some kind of macro-operation on
top of effector
processing. To take advantage of such a feature adulterates the purity
of the twin hierarchy.

For a criticism of the "vision-module" approach to "the vision problem"
from this kind of standpoint, see Aaron Sloman's article in the latest
(vol 1 iss 4) Journal of Experimental and Theoretical AI.

Yes, it's in the creature. While it _might_ be possible to build
creatures with separated sensory and actuator processing hierarchies, it
would at least be very computationally costly.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/18/90)

From article <1838@aipna.ed.ac.uk>, by cam@aipna.ed.ac.uk (Chris Malcolm):
"   ...
" >>Wherever sensing and action have been locally amalgamated in this sort of
" >>way a barrier is created to the extension of separate sensor and motor
" >>processing hierarchies. ...
" 
" ... it is not in fact a
" barrier, so much as an opportunity not be missed.

I believe there are two issues involving separation/modularity versus
amalgamation noticeable here.  One concerns whether sensing and
action are separable, and the other whether lower levels are
separate from higher levels -- i.e. whether there is a hierarchy.
I must admit to having only a foggy idea about the nature of
these "barriers", but I thought you were saying that there is
a relationship between these issues, to the effect that the need
to process both input and output efficiently requires the
development of a processing hierarchy.  If something like that
is so, it is pertinent to whether we can expect to find that
higher levels of human processing, "thought", are sufficiently
independent of human sensing and acting organs to be emulated
by a program that runs on non-human hardware.

On a small scale, these matters arise in the analysis of language,
where one might see words as the barriers between phonological
and syntactic processing.
				Greg, lee@uhccux.uhcc.hawaii.edu