[comp.ai] What is a Symbol System?

harnad@phoenix.Princeton.EDU (S. R. Harnad) (11/20/89)

What is a symbol system? From Newell (1980), Pylyshyn (1984), Fodor
(1987), and the classical work of Von Neumann, Turing, Goedel, Church,
etc. (see Kleene 1969) on the foundations of computation, we can
reconstruct the following definition:

A symbol system is:

(1) a set of arbitrary PHYSICAL TOKENS (scratches on paper, holes on
a tape, events in a digital computer, etc.) that are

(2) manipulated on the basis of EXPLICIT RULES that are

(3) likewise physical tokens and STRINGS of tokens. The rule-governed
symbol-token manipulation is based

(4) purely on the SHAPE of the symbol tokens (not their "meaning"),
i.e., it is purely SYNTACTIC, and consists of

(5) RULEFULLY COMBINING and recombining symbol tokens. There are

(6) primitive ATOMIC symbol tokens and

(7) COMPOSITE symbol-token strings. The entire system and all its parts
-- the atomic tokens, the composite tokens, the syntactic manipulations
(both actual and possible) and the rules -- are all

(8) SEMANTICALLY INTERPRETABLE: The syntax can be SYSTEMATICALLY
assigned a meaning (e.g., as standing for objects, as describing states
of affairs).
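
By way of a toy illustration only (everything below is invented for this
posting; none of the cited authors is committed to these particular tokens
or rules), here is a minimal system, written in Lisp, that has all eight
properties: the atomic tokens (6) are Lisp symbols, the composite tokens
(7) are lists of them, the rules (2, 3) are themselves token strings stored
as data, matching and rewriting (4, 5) consult only the shapes of the
tokens, and the whole thing is amenable to a systematic interpretation (8)
as successor arithmetic.

    ;; Toy symbol system (illustrative only).
    (defparameter *rules*
      '(((?x plus zero)       ?x)                    ; x + 0     -> x
        ((?x plus (succ ?y))  (succ (?x plus ?y))))) ; x + (y+1) -> (x+y)+1

    (defun variable-p (tok)
      "Pattern variables are just tokens whose names start with ?."
      (and (symbolp tok) (char= (char (symbol-name tok) 0) #\?)))

    (defun match (pattern form bindings)
      "Succeed iff FORM has the same shape as PATTERN; meaning never enters."
      (cond ((null bindings) nil)
            ((variable-p pattern) (acons pattern form bindings))
            ((and (consp pattern) (consp form))
             (match (cdr pattern) (cdr form)
                    (match (car pattern) (car form) bindings)))
            ((eql pattern form) bindings)
            (t nil)))

    (defun substitute-bindings (form bindings)
      "Build the replacement token string from the bindings."
      (cond ((variable-p form) (cdr (assoc form bindings)))
            ((consp form) (cons (substitute-bindings (car form) bindings)
                                (substitute-bindings (cdr form) bindings)))
            (t form)))

    (defun rewrite (form)
      "Apply the first applicable rule once, or return FORM unchanged."
      (dolist (rule *rules* form)
        (let ((b (match (first rule) form (list (cons t t)))))
          (when b (return (substitute-bindings (second rule) b))))))

    ;; (rewrite '((succ zero) plus (succ zero)))
    ;;   => (SUCC ((SUCC ZERO) PLUS ZERO))

Under the obvious interpretation -- ZERO as 0, SUCC as successor, PLUS as
addition -- every rewrite step stays true of the numbers, yet nothing in
MATCH or REWRITE depends on that reading; the syntax is merely AMENABLE to
the systematic semantic interpretation.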

According to proponents of the symbolic model of mind such as Fodor
(1980) and Pylyshyn (1980, 1984), symbol-strings of this sort capture
what mental phenomena such as thoughts and beliefs are. Symbolists
emphasize that the symbolic level (for them, the mental level) is a
natural functional level of its own, with ruleful regularities that are
independent of their specific physical realizations. For symbolists,
this implementation-independence is the critical difference between
cognitive phenomena and ordinary physical phenomena and their
respective explanations. This concept of an autonomous symbolic level
also conforms to general foundational principles in the theory of
computation and applies to all the work being done in symbolic AI, the
branch of science that has so far been the most successful in
generating (hence explaining) intelligent behavior.

All eight of the properties listed above seem to be critical to this
definition of symbolic. Many phenomena have some of the properties, but
that does not entail that they are symbolic in this explicit, technical
sense. It is not enough, for example, for a phenomenon to be
INTERPRETABLE as rule-governed, for just about anything can be
interpreted as rule-governed. A thermostat may be interpreted as
following the rule: Turn on the furnace if the temperature goes below
70 degrees and turn it off if it goes above 70 degrees, yet nowhere in
the thermostat is that rule explicitly represented.

Wittgenstein (1953) emphasized the difference between EXPLICIT and
IMPLICIT rules: It is not the same thing to "follow" a rule
(explicitly) and merely to behave "in accordance with" a rule
(implicitly). The critical difference is in the compositeness (7) and
systematicity (8) criteria. The explicitly represented symbolic rule is
part of a formal system, it is decomposable (unless primitive), its
application and manipulation is purely formal (syntactic,
shape-dependent), and the entire system must be semantically
interpretable, not just the chunk in question. An isolated ("modular")
chunk cannot be symbolic; being symbolic is a combinatory, systematic
property.
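
A crude sketch may help fix the contrast (the rule text below is invented
for the example; no claim is made about real thermostats). In the first
fragment the regularity is wired into the control flow and no token
expressing the rule exists anywhere in the device; in the second, the very
same rule is a composite, decomposable token string, stored as data and
applied by an interpreter that consults only token shapes.

    ;; Merely behaving "in accordance with" the rule: nothing here IS the rule.
    (defun thermostat-step (temperature)
      (if (< temperature 70) 'furnace-on 'furnace-off))

    ;; "Following" an explicit rule: the rule is itself a symbol string.
    (defparameter *rule*
      '(if (below temperature 70) furnace-on furnace-off))

    (defun apply-rule (rule temperature)
      "Interpret RULE by taking it apart token by token."
      (destructuring-bind (if-tok (test-tok var threshold) then-tok else-tok)
          rule
        (declare (ignore if-tok var))
        (if (and (eq test-tok 'below) (< temperature threshold))
            then-tok
            else-tok)))

    ;; (thermostat-step 65)    => FURNACE-ON
    ;; (apply-rule *rule* 65)  => FURNACE-ON

The two behave identically, but only the second contains anything that can
be decomposed, recombined with other rule tokens, and assigned a place in a
systematically interpretable formal system.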

So the mere fact that a behavior is "interpretable" as ruleful does not
mean that it is really governed by a symbolic rule. Semantic
interpretability must be coupled with explicit representation (2),
syntactic manipulability (4), and systematicity (8) in order to be
symbolic. None of these criteria is arbitrary, and, as far as I can
tell, if you weaken them, you lose the grip on what looks like a
natural category and you sever the links with the formal theory of
computation, leaving a sense of "symbolic" that is merely unexplicated
metaphor (and probably differs from speaker to speaker).

Any rival definitions, counterexamples or amplifications?

Excerpted from:
Harnad, S. (1990) The Symbol Grounding Problem. Physica D (in press)
-----------------------------------------------------
References:
Fodor, J. A. (1975) The language of thought. New York: Thomas Y. Crowell
Fodor, J. A. (1987) Psychosemantics. Cambridge MA: MIT/Bradford.
Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive
     architecture: A critical appraisal. Cognition 28: 3 - 71.
Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical
     and Experimental Artificial Intelligence 1: 5-25.
Kleene, S. C. (1969) Formalized recursive functionals and formalized
     realizability. Providence, R.I.: American Mathematical Society.
Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135-83.
Pylyshyn, Z. W. (1980) Computation and cognition: Issues in the
     foundations of cognitive science. Behavioral and Brain Sciences
     3: 111-169.
Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge MA:
     MIT/Bradford
Turing, A. M. (1964) Computing machinery and intelligence. In: Minds
     and machines, A. R. Anderson (ed.), Englewood Cliffs NJ: Prentice Hall.

-- 
Stevan Harnad  INTERNET:  harnad@confidence.princeton.edu   harnad@princeton.edu
srh@flash.bellcore.com      harnad@elbereth.rutgers.edu    harnad@princeton.uucp
CSNET:    harnad%confidence.princeton.edu@relay.cs.net
BITNET:   harnad1@umass.bitnet      harnad@pucc.bitnet            (609)-921-7771

fateman@renoir.Berkeley.EDU (Richard Fateman) (11/21/89)

In article <11640@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:
>
>
>A symbol system is:
>
>(1) a set of arbitrary PHYSICAL TOKENS (scratches on paper, holes on
>a tape, events in a digital computer, etc.) that are
......>
[manipulated based on a rule system that is]
>i.e., it is purely SYNTACTIC, ....

If you believe the syntactic rules (a^b)^c <--> a^(b*c) and a*b <--> b*a  then
-1 = (-1)^1 = (-1)^(2*(1/2)) = ((-1)^2)^(1/2) = 1^(1/2) = 1.
from which anything else can be proven.
So if you want to do mathematics correctly, this approach 
or representation is wrong.
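
(Purely as an illustration of where the soundness is lost -- two lines at
any Common Lisp listener, using the principal-value expt:)

    (expt (expt -1 2) 1/2)   ; the shape ((-1)^2)^(1/2)   => 1 (or 1.0)
    (expt -1 (* 2 1/2))      ; the shape (-1)^(2*(1/2))   => -1

The first rewrite rule preserves the shape of the expression but not its
value once negative bases are allowed.
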
....
>All eight of the properties listed above seem to be critical to this
>definition of symbolic. Many phenomena have some of the properties, but
>that does not entail that they are symbolic in this explicit, technical
>sense.

In other words, with respect to mathematical symbol-manipulation
systems, Harnad is making an observation that pertains to the
"syntactic" ones which (perhaps inevitably) incorrectly manipulate
uninterpreted trees. His observation is merely that anything else
should not be called a "symbol system".  Since usage contradicts
Harnad's contention, what's the point? Should we change the name
of our newsgroup? :)


Richard Fateman
 fateman@renoir.berkeley.edu

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (11/21/89)

From article <11640@phoenix.Princeton.EDU>, by harnad@phoenix.Princeton.EDU (S. R. Harnad):
" ...
" Wittgenstein (1953) emphasized the difference between EXPLICIT and
" IMPLICIT rules: It is not the same thing to "follow" a rule
" (explicitly) and merely to behave "in accordance with" a rule
" (implicitly). The critical difference is in the compositeness (7) and
" systematicity (8) criteria. The explicitly represented symbolic rule is
" part of a formal system, ...

`Explicit' is not the same thing as `explicitly represented'.
I'm not sure I see how all the rules could be formalized, but in
any case, they needn't be.
				Greg, lee@uhccux.uhcc.hawaii.edu

mcdermott-drew@CS.YALE.EDU (Drew McDermott) (11/21/89)

In article <11640@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:
   > What is a symbol system? From Newell (1980) Pylyshyn (1984),
   >Fodor (1987) and the classical work of Von Neumann, Turing,
   >Goedel, Church, etc.(see Kleene 1969) on the foundations of
   >computation, we can reconstruct the following definition:
   > A symbol system is:
   >
   >(1) a set of arbitrary PHYSICAL TOKENS (scratches on paper, holes
   >on a tape, events in a digital computer, etc.) that are
   >
   >(2) manipulated on the basis of EXPLICIT RULES that are
   >
   >(3) likewise physical tokens and STRINGS of tokens. The
   >rule-governed symbol-token manipulation is based
   >
   >(4) purely on the SHAPE of the symbol tokens (not their
   >"meaning"), i.e., it is purely SYNTACTIC, and consists of
   >
   >(5) RULEFULLY COMBINING and recombining symbol tokens. There are
   >
   >(6) primitive ATOMIC symbol tokens and
   >
   >(7) COMPOSITE symbol-token strings. The entire system and all its
   >parts -- the atomic tokens, the composite tokens, the syntactic
   >manipulations (both actual and possible) and the rules -- are all
   >
   >(8) SEMANTICALLY INTERPRETABLE: The syntax can be SYSTEMATICALLY
   >assigned a meaning (e.g., as standing for objects, as describing
   >states of affairs).
   >
   >According to proponents of the symbolic model of mind such as
   >Fodor (1980) and Pylyshyn (1980, 1984), symbol-strings of this
   >sort capture what mental phenomena such as thoughts and beliefs
   >are. Symbolists emphasize that the symbolic level (for them, the
   >mental level) is a natural functional level of its own, with
   >ruleful regularities that are independent of their specific
   >physical realizations. For symbolists, this
   >implementation-independence is the critical difference between
   >cognitive phenomena and ordinary physical phenomena and their
   >respective explanations. This concept of an autonomous symbolic
   >level also conforms to general foundational principles in the
   >theory of computation and applies to all the work being done in
   >symbolic AI, the branch of science that has so far been the most
   >successful in generating (hence explaining) intelligent behavior.
   >  [...]   
   >Any rival definitions, counterexamples or amplifications?
   >
   >-- Stevan Harnad INTERNET: harnad@confidence.princeton.edu
   >harnad@princeton.edu

I have two quibbles with this list:

(a) Items 2&3: I agree that the rules have to be explicit, but they are
usually written in a different notation from the one they manipulate.
Example: A theorem prover written in Lisp (see the sketch at the end of
this post).  Another example: The weights in a neural net.

(b) Item 8: Why is it necessary that a symbol system have a semantics
in order to be a symbol system?  I mean, you can define it any way
you like, but then most AI programs wouldn't be symbol systems in 
your sense.  I and others have spent some time arguing that symbol
systems *ought* to have a semantics, and it's odd to be told that I
was arguing in favor of a tautology.  (Or that, now that I've changed
my mind, I believe a contradiction.)

Perhaps you have in mind that a system couldn't really think, or
couldn't really refer to the outside world without all of its symbols
being part of some seamless Tarskian framework.  (Of course, *you*
don't think this, but you feel that charity demands you impute this
belief to your opponents.)  I think you have to buy several extra
premises about the potency of knowledge representation to believe that
formal semantics is that crucial. 
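
To make point (a) concrete, here is a throwaway toy (not anyone's actual
prover): the object-level tokens it manipulates -- P, Q, (IMPLIES P Q) --
are in one notation, while the rule that manipulates them, modus ponens,
is written in quite another, namely Lisp.

    (defun modus-ponens (facts)
      "Add Q to FACTS whenever both P and (IMPLIES P Q) are already present."
      (let ((new facts))
        (dolist (f facts new)
          (when (and (consp f)
                     (eq (first f) 'implies)
                     (member (second f) facts :test #'equal)
                     (not (member (third f) new :test #'equal)))
            (push (third f) new)))))

    ;; (modus-ponens '(p (implies p q) (implies q r)))
    ;;   => (Q P (IMPLIES P Q) (IMPLIES Q R))

The rule is explicit in the sense of being spelled out somewhere in the
system, yet it is not itself one of the symbol strings it manipulates.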
      
                                              -- Drew McDermott

geddis@Neon.Stanford.EDU (Donald F. Geddis) (11/21/89)

In article <32690@ucbvax.BERKELEY.EDU> fateman@renoir.Berkeley.EDU.UUCP (Richard Fateman) writes:
>If you believe the syntactic rules (a^b)^c <--> a^(b*c) and a*b <--> b*a  then
>-1 = (-1)^1 = (-1)^(2*(1/2)) = ((-1)^2)^(1/2) = 1^(1/2) = 1.

1^(1/2) = (-1 or 1), not just 1.  Your last step was not one of the "syntactic
rules" that I "believe".

If you reverse the order, so you start with 1=1 and then go to 1=1^(1/2), that
is false, in the strict sense.  The left-hand side is equivalent to 1, while
the right-hand side is equivalent to (-1 or 1).

	-- Don Geddis
-- 
Geddis@CS.Stanford.Edu
"There is no dark side of the moon, really.  Matter of fact, it's all dark."

hardt@linc.cis.upenn.edu (Dan Hardt) (11/21/89)

I'm not sure how you can sharply distinguish between a system
that is interpretable as rule-governed and one that is
explicitly rule governed.  Perhaps you have in mind a connectionist
network on the one hand, where what is syntactically represented might
be things like weights of connections, and the rules only emerge from the
overall behavior of the system; on the other hand, an expert system,
where the rules are all explicitly written in some logical notation.
Would you characterize the connectionist network as only interpretable
as being rule-governed, and the expert system as being explicitly 
rule governed?  If it is that sort of distinction you have in mind,
I'm not sure how the criteria given allow you to make it.  In fact, I
wonder how you can rule out any Turing machine.

harnad@phoenix.Princeton.EDU (S. R. Harnad) (11/21/89)

mcdermott-drew@CS.YALE.EDU (Drew McDermott) of
Yale University Computer Science Dept asked:

> Why is it necessary that a symbol system have a semantics in order to
> be a symbol system? I mean, you can define it any way you like, but
> then most AI programs wouldn't be symbol systems in your sense.
> 
> Perhaps you have in mind that a system couldn't really think, or
> couldn't really refer to the outside world without all of its symbols
> being part of some seamless Tarskian framework...  I think you have to
> buy several extra premises about the potency of knowledge
> representation to believe that formal semantics is that crucial.

I'd rather not define it any way I like. I'd rather pin people down on
a definition that won't keep slipping away, reducing all disagreements
about what symbol systems can and can't do to mere matters of
interpretation.

I gave semantic interpretability as a criterion, because it really
seems to be one of the properties people have in mind when they
single out symbol systems. However, semantic interpretability is
not the same as having an intrinsic semantics, in the sense that
mental processes do. But I made no reference to anything mental
("thinking," reference," "knowledge") in the definition.

So the only thing at issue is whether a symbol system is required to be
semantically interpretable. Are you really saying that most AI programs
are not? I.e., that if asked what this or that piece of code means
or does, the programmer would reply: "Beats me! It's just crunching
a bunch of meaningless and uninterpretable symbols."

No, I still think an obvious sine qua non of both the formal symbol
systems of mathematics and the computer programs of computer science
and AI is that they ARE semantically interpretable.
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@confidence.princeton.edu       srh@flash.bellcore.com
harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

harnad@phoenix.Princeton.EDU (S. R. Harnad) (11/21/89)

Dan Hardt hardt@linc.cis.upenn.edu University of Pennsylvania wrote:

> I'm not sure how you can sharply distinguish between a system
> that is interpretable as rule-governed and one that is
> explicitly rule governed. Perhaps you have in mind a connectionist
> network on the one hand, where what is syntactically represented might
> be things like weights of connections, and the rules only emerge from the
> overall behavior of the system; on the other hand, an expert system,
> where the rules are all explicitly written in some logical notation.
> Would you characterize the connectionist network as only interpretable
> as being rule-governed, and the expert system as being explicitly 
> rule governed?  If it is that sort of distinction you have in mind,
> I'm not sure how the criteria given allow you to make it.  In fact, I
> wonder how you can rule out any Turing machine.

I'm willing to let the chips fall where they may. All I'm trying
to do is settle on criteria for what does and does not count as
symbol, symbol system, symbol manipulation.

Here is an easy example. I think it contains all the essentials:
We have two Rube Goldberg devices, both beginning with a string
you pull, and both ending with a hammer that smashes a piece of
china. Whenever you pull the string, the china gets smashed by the
hammer in both systems. The question is: Given that they can both be
described as conforming to the rule "If the string is pulled, smash the
china," is this rule explicitly represented in both systems?

Let's look at them more closely: One turns out to be pure causal
throughput: The string is attached to the hammer, which is poised like
a lever. Pull the string and the hammer goes down. Bang!

In the other system the string actuates a transducer which sends a data
bit to a computer program capable of controlling a variety of devices
all over the country. Some of its input can come from strings at other
locations, some from airline reservations, some from missile control
systems. Someone has written a lot of flexible code. Among the
primitives of the system are symbol tokens such as STRING, ROPE, CABLE,
PULL, HAMMER, TICKET, BOMB, LOWER, LAUNCH, etc. In particular, one
symbol string is "IF PULL STRING(I), LOWER HAMMER(J)," and this sends
out a data bit that triggers an effector that brings the hammer down.
Bang! The system also represents "IF PULL STRING(J), LOWER HAMMER(J),"
"IF PULL STRING(J), RELEASE MISSILE(K)," etc. The elements can be
recombined as you would expect, based on a gloss of their meanings,
and the overall interpretation of what they stand for is systematically
sustained. (Not all possible symbol combinations are enabled,
necessarily, but they all make systematic sense.) The explicitness of
rules and representations is based on this combinatory semantics.
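
(Just to fix ideas, here is a toy sketch of what the symbolic part of such
a controller might look like; every identifier is invented for the
illustration, and no claim is made that real code would read this way:)

    (defparameter *control-rules*
      '(((pull (string i)) (lower (hammer j)))
        ((pull (string j)) (lower (hammer j)))
        ((pull (string j)) (release (missile k)))))

    (defun consequences (event)
      "Collect the action tokens whose antecedents match EVENT by shape alone."
      (loop for (antecedent action) in *control-rules*
            when (equal antecedent event)
              collect action))

    ;; (consequences '(pull (string i)))
    ;;   => ((LOWER (HAMMER J)))
    ;; (consequences '(pull (string j)))
    ;;   => ((LOWER (HAMMER J)) (RELEASE (MISSILE K)))

Each action token would then be handed to an effector interface. In the
first device, by contrast, there is nothing corresponding to any of these
tokens; there is only the string tied to the hammer.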

It is in the latter kind of symbol economy that the rule is said to
be explicitly represented. The criteria I listed do allow me to make this
distinction. And I'm certainly not interested in ruling out a Turing
Machine -- the symbol system par excellence. The extent to which
connectionist networks can and do represent rules explicitly is still
unsettled.
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@confidence.princeton.edu       srh@flash.bellcore.com
harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

jiii@visdc.UUCP (John E Van Deusen III) (11/22/89)

In article <11657@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU
(S. R. Harnad) writes:
>
> Here is an easy example. I think it contains all the essentials:
> We have two Rube Goldberg devices, both beginning with a string
> you pull, and both ending with a hammer that smashes a piece of
> china. Whenever you pull the string, the china gets smashed by the
> hammer in both systems. The question is: Given that they can both be
> described as conforming to the rule "If the string is pulled, smash
> the china," is this rule explicitly represented in both systems?
> -- 
> Stevan Harnad  Department of Psychology  Princeton University
> harnad@confidence.princeton.edu       srh@flash.bellcore.com
> harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

I believe that artificial intelligence is only concerned with the
problem of if and when to pull one of the strings.  Once the string is
pulled or not, the result is deterministic.  The china may break or it
may not, but the result requires no intelligence.  It seems kind of
clear that if we want to consider artificial intelligence distinct from
the chaotic determinism in which it is embedded, we have to resort to
some sort of contrived formalism.

Like others, I think of intelligence as a recognizer of a language taken
over an alphabet of symbols.  Not only is this mathematical contraption
capable of doing anything, in the sense of "knowing" precisely when to
pull the string, but it is brutishly mechanistic and free from
subjective magic (although seldom possible to build).  In such a model
it is fruitless to quibble about what is to be included in the set of
symbols, since the set of possible languages taken over an alphabet even
as simple as {a, b} is infinite.
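
(For concreteness, one of the simplest such contraptions -- a two-state
recognizer, sketched in Lisp, for one language over the alphabet {a, b}:)

    (defun even-a-p (input)
      "Accept exactly the strings over {A, B} containing an even number of As."
      (let ((state 'even))
        (dolist (sym input (eq state 'even))
          (when (eq sym 'a)
            (setf state (if (eq state 'even) 'odd 'even))))))

    ;; (even-a-p '(a b a b))  => T
    ;; (even-a-p '(a b b))    => NIL

Every different choice of accepted strings is a different language over
the same two symbols, which is why quibbling over the symbol set itself
gets you nowhere.
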
--
John E Van Deusen III, PO Box 9283, Boise, ID  83707, (208) 343-1865

uunet!visdc!jiii

cam@aipna.ed.ac.uk (Chris Malcolm) (11/23/89)

In article <11657@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:

>Here is an easy example. I think it contains all the essentials:
>We have two Rube Goldberg devices, both beginning with a string
>you pull, and both ending with a hammer that smashes a piece of
>china. Whenever you pull the string, the china gets smashed by the
>hammer in both systems. The question is: Given that they can both be
>described as conforming to the rule "If the string is pulled, smash the
>china," is this rule explicitly represented in both systems?
>
>Let's look at them more closely: One turns out to be pure causal
>throughput: The string is attached to the hammer, which is poised like
>a lever. Pull the string and the hammer goes down. Bang!
>
> In the other
>system the string actuates a transducer which sends a data bit to a
>computer program capable of controlling a variety of devices all over the
>country. [ .. and which activates an explicit representation of the
>rule, which in turn causes the hammer blow.]

In your original posting you (Stevan Harnad) said:

    So the mere fact that a behavior is "interpretable" as ruleful
    does not mean that it is really governed by a symbolic rule.
    Semantic interpretability must be coupled with explicit
    representation (2), syntactic manipulability (4), and
    systematicity (8) in order to be symbolic.
	
There is a can of worms lurking under that little word "coupled"!  What I
take it to mean is that this symbolic rule must cause the behaviour
which we interpret as being governed by the rule we interpret the
symbolic rule as meaning. Unravelled, that may seem stupendously
tautologous, but meditation on the problems of symbol grounding can
induce profound uncertainty about the status of supposedly rule-governed
AI systems. One source of difficulty is the difference between the
meaning of the symbolic rule to the system (as defined by its use of the
rule) and the meaning we are tempted to ascribe to it because we
recognise the meaning of the variable names, the logical structure, etc.

Brian Smith's Knowledge Representation Hypothesis contains a nice
expression of this problem of "coupling" interpretation and causal
effect, in clauses a) and b) below.

    Any mechanically embodied intelligent process will be
    comprised of structural ingredients that a) we as external
    observers naturally take to represent a propositional account of
    the knowledge that the overall process exhibits, and b)
    independent of such external semantical attribution, play a
    formal but causal and essential role in engendering the
    behaviour that manifests that knowledge.

[Brian C. Smith, Prologue to "Reflection and Semantics in a Procedural
Language" in "Readings in Knowledge Representation" eds Brachman &
Levesque, Morgan Kaufmann, 1985.]
	
It is not at all clear to me that finding a piece of source code in the
controlling computer which reads IF STRING_PULLED THEN DROP_HAMMER is
not just a conjuring trick where I am misled into equating the English
language meaning of the rule with its function within the computer
system [Drew McDermott, Artificial Intelligence meets Natural Stupidity,
ACM SIGART Newsletter 57, April 1976]. In simple cases with a few rules
and behaviour which can easily be exhaustively itemised we can satisfy
ourselves that our interpretation of the rule does indeed equate with
its causal role in the system. Where there are many rules, and the rule
interpreter is complex (e.g. having a bucketful of ad-hoc
conflict-resolution prioritising schemes designed to avoid "silly"
behaviour which would otherwise result from the rules) then the equation
is not so clear. The best we can say is that our interpretation is
_similar_ to the function of the rule in the system. How reliably can we
make this judgment of similarity? And how close must be the similarity
to justify our labelling an example as an instance of behaviour governed
by an explicit rule?

Why should we bother with being able to interpret the system's "rule" as
a rule meaningful to us? Perhaps we need a weaker category, where we
identify the whole caboodle as a rule-based system, but don't
necessarily need to be able to interpret the individual rules. But how
can we do this weakening, without letting in such disturbingly ambiguous
exemplars as neural nets?
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

dhw@itivax.iti.org (David H. West) (11/25/89)

In article <1656@aipna.ed.ac.uk> cam@aipna.ed.ac.uk (Chris Malcolm) writes:
]In article <11657@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:
]
]>Here is an easy example. I think it contains all the essentials:
]>We have two Rube Goldberg devices, both beginning with a string
]>you pull, and both ending with a hammer that smashes a piece of
]>china. Whenever you pull the string, the china gets smashed by the
]>hammer in both systems. The question is: Given that they can both be
]>described as conforming to the rule "If the string is pulled, smash the
]>china," is this rule explicitly represented in both systems?

This is not an empirical question, but a question about how we wish
to use the words "described", "explicitly" and "rule".

]In your original posting you (Stevan Harnad) said:
]
]    So the mere fact that a behavior is "interpretable" as ruleful
]    does not mean that it is really governed by a symbolic rule.

There IS no "really": we can interpret our "observations" as we
wish, and take the consequences of our choice.

]    Semantic interpretability must be coupled with explicit
]    representation (2), syntactic manipulability (4), and
]    systematicity (8) in order to be symbolic.
]	
]There is a can of worms lurking under that little word "coupled"!  What I
]take it to mean is that this symbolic rule must cause the behaviour
]which we interpret as being governed by the rule we interpret the
]symbolic rule as meaning. 

Rather: if WE are to call the system "symbolic", the meaning WE
ascribe to the symbols should be consistent with (OUR interpretation
of) the behavior we wish to regard as being caused by the rule.

]                          Unravelled, that may seem stupendously
]tautologous, but meditation on the problems of symbol grounding can
]induce profound uncertainty about the status of supposedly rule-governed
]AI systems. One source of difficulty is the difference between the
]meaning of the symbolic rule to the system 

We can have no epistemological access to "the meaning of the
symbolic rule to the system" except insofar as we construct for
ourselves a consistent model containing something that we interpret
as such a meaning.   Symbol-grounding happens entirely within our
mental models.  Additionally, many of us believe that the world is
not conducive to the replication of systems that react in certain
ways (e.g. an object labelled "thermostat" which turns a heater ON
when the temperature is ABOVE a threshold is unlikely to attract
repeat orders), and this could be regarded as a mechanism for
ensuring symbol-function consistency, but the latter is still all
in our interpretation.

]Brian Smith's Knowledge Representation Hypothesis contains a nice
]expression of this problem of "coupling" interpretation and causal
]effect, in clauses a) and b) below.
]
]    Any mechanically embodied intelligent process will be
]    comprised of structural ingredients that a) we as external
]    observers naturally take to represent a propositional account of
]    the knowledge that the overall process exhibits, and b)
]    independent of such external semantical attribution, play a
]    formal but causal and essential role in engendering the
]    behaviour that manifests that knowledge.
	
I agree.

]It is not at all clear to me that finding a piece of source code in the
]controlling computer which reads IF STRING_PULLED THEN DROP_HAMMER is
]not just a conjuring trick where I am misled into equating the English
]language meaning of the rule with its function within the computer
]system [Drew McDermott, Artificial Intelligence meets Natural Stupidity

As McDermott points out, the behavior of such a system is unaffected
if all the identifiers are systematically replaced by gensyms 
(IF G000237 THEN (SETQ G93753 T)), which causes the apparently
"natural" interpretation to vanish.

]Why should we bother with being able to interpret the system's "rule" as
]a rule meaningful to us? 

It may just be the way we are.

]But how
]can we do this weakening, without letting in such disturbingly ambiguous
]exemplars as neural nets?

If we are "disturbed", it's a sign of internal inconsistency in our
construction of a world-view.  People used to be disturbed by the
idea of light being both wave and particle.  Now we're not.

-David West            dhw@itivax.iti.org

harnad@phoenix.Princeton.EDU (S. R. Harnad) (11/25/89)

Chris Malcolm cam@aipna.ed.ac.uk
of Dept of AI, Edinburgh University, UK, wrote:

> What I take [you] to mean is that [the] symbolic rule must cause the
> behaviour which we interpret as being governed by the rule we interpret
> the symbolic rule as meaning...  meditation on the problems of symbol
> grounding can induce profound uncertainty about the status of
> supposedly rule-governed AI systems. One source of difficulty is the
> difference between the meaning of the symbolic rule to the system (as
> defined by its use of the rule) and the meaning we are tempted to
> ascribe to it because we recognise the meaning of the variable names,
> the logical structure, etc.

I endorse this kind of scepticism -- which amounts to recognizing
the symbol grounding problem -- but it is getting ahead of the game.
My definition was only intended to define "symbol system," not to
capture cognition or meaning.

You are also using "behaviour" equivocally: It can mean the operations
of the system on the world or the operations of the system on its
symbol tokens. My definition of symbol system draws only on the
latter (i.e., syntax); the former is the grounding problem.

It is important to note that the only thing my definition requires
is that symbols and symbol manipulations be AMENABLE to a systematic
semantic interpretation. It is premature (and as I said, another
problem altogether) to require that the interpretation be grounded in
the system and its relation to the world, rather than just mediated by
our own minds, in the way we interpret the symbols in a book. All we
are trying to do is define "symbol system" here; until we first commit
ourselves on the question of what is and is not one, we cannot start to
speak coherently about what its shortcomings might be!

(By the way, "meaning to us" is unproblematic, whereas "meaning to the
system" is highly contentious, and again a manifestation of the symbol
grounding problem, which is certainly no definitional matter!)

> It is not at all clear to me that finding a piece of source code in the
> controlling computer which reads IF STRING_PULLED THEN DROP_HAMMER is
> not just a conjuring trick... In simple cases with a few rules and
> behaviour which can easily be exhaustively itemised we can satisfy
> ourselves that our interpretation of the rule does indeed equate with
> its causal role in the system. Where there are many rules... The best
> we can say is that our interpretation is _similar_ to the function of
> the rule in the system. How reliably can we make this judgment of
> similarity? And how close must be the similarity to justify our
> labelling an example as an instance of behaviour governed by an
> explicit rule?

Again, you're letting your skepticism get ahead of you. First let's
agree on whether something's a symbol system at all, then let's worry about
whether or not its "meanings" are intrinsic. Systematic
interpretability is largely a formal matter; intrinsic meaning is not.
It is not a "conjuring trick" to claim that Peano's system can
be systematically interpreted as meaning what WE mean by, say,
numbers and addition. It's another question altogether whether the
system ITSELF "means" numbers, addition, or anything at all: Do you
see the distinction?

(No one has actually proposed the Peano system as a model of
arithmetic understanding, of course; but in claiming, with confidence,
that it is amenable to being systematically interpreted as what we mean
by arithmetic, we are not using any "conjuring tricks" either. It is
important to keep this distinction in mind. Number theorists need not
be confused with mind-modelers.)

But, as long as you ask, the criterion for "similarity" that I have
argued for in my own writings is the Total Turing Test (TTT), which,
unlike the conventional Turing Test (TT) (which is equivocal in calling
only for symbols in and symbols out) calls for our full robotic
capacity in the world. A system that can only pass the TT may have a
symbol grounding problem, but a system that passes the TTT (for a
lifetime) is grounded in the world, and although it is not GUARANTEED
to have subjective meaning (because of the other minds problem), it IS
guaranteed to have intrinsic meaning.

(The "Total" is also intended to rule out spurious extrapolations from
toy systems: These may be symbol systems, and even -- if robotic --
grounded ones, but, because they fail the TTT, there are still strong
grounds for skepticism that they are sufficiently similar to us in the
relevant respects. Here I do agree that what is involved is, if not
"conjuring," then certainly wild and unwarranted extrapolation to a
hypothetical "scaling up," one that, in reality, would never be able
to reach the TTT by simply doing "more of the same.")

> Why should we bother with being able to interpret the system's "rule" as
> a rule meaningful to us?

Because that's part of how you tell whether you're even dealing with a
formal symbol system in the first place (on my definition).

Stevan Harnad

------------------------------------------------------------------
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@confidence.princeton.edu       srh@flash.bellcore.com
harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

mcdermott-drew@CS.YALE.EDU (Drew McDermott) (11/29/89)

In article <11655@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:
>
>
>mcdermott-drew@CS.YALE.EDU (Drew McDermott) of
>Yale University Computer Science Dept asked:
>
>> Why is it necessary that a symbol system have a semantics in order to
>> be a symbol system? I mean, you can define it any way you like, but
>> then most AI programs wouldn't be symbol systems in your sense.
>> 
>I'd rather not define it any way I like. I'd rather pin people down on
>a definition that won't keep slipping away, reducing all disagreements
>about what symbol systems can and can't do to mere matters of
>interpretation.
> ...

Which "people" need to be pinned down?  Fodor, I guess, who has a strong
hypothesis about a Representational Theory of Meaning.

But suppose someone believes "It's all algorithms," and not much more?
He's willing to believe that intelligence involves an FFT here, some
inverse dynamics there, a few mental models, maybe some neural nets,
perhaps a theorem prover or two,....  His view is not completely vacuous
(Searle thinks it's even false).  It might be a trifle eclectic for some
philosophers, but so what?

I realize that there is an issue about what symbol systems "can and
can't do."  It might turn out that computation is just a ridiculous
model for what goes on in the brain.  All the AI types and cognitive
psychologists could then find something else to do.  But it's simply
not possible that it could be revealed that there was a task X such
that symbol systems cannot do X and some other computational system
can.  That's because I and all the other computationalists would just
incorporate that new sort of system in our universe of possible
models.  We wouldn't even notice that it hadn't been incorporated
already.  In spite of philosophers' ardent hopes, there simply is no
natural category of Physical Symbol Systems separate from
Computational Systems in General.

>So the only thing at issue is whether a symbol system is required to be
>semantically interpretable. Are you really saying that most AI programs
>are not? I.e., that if asked what this or that piece of code means
>or does, the programmer would reply: "Beats me! It's just crunching
>a bunch of meaningless and uninterpretable symbols."
>
> ...
>Stevan Harnad  Department of Psychology  Princeton University
>harnad@confidence.princeton.edu       srh@flash.bellcore.com
>harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771

Well, of course, no one's going to say, "My program is crunching meaningless
symbols."  The word "meaningless" has all these negative connotations;
it sounds like it's next to "worthlessness," "pointlessness."  So
everyone will cheerfully claim that their symbols are "meaningful."
But if you press them on exactly what they're committing to, you're
usually going to start hearing about "procedural semantics" or
"holistic semantics" or some such twiddle.

The fact is that most symbols are conceived of as calculational
devices rather than denotational devices; their function is to compute
rather than to mean.  Try asking a rug merchant what the meaning is of
the position of the third bead on the second wire of his abacus.  If
he thinks hard, he might come up with something like "It denotes the
third ten in the count of shekels paid for these rugs."  But chances
are it never crossed his mind that the bead position required a
denotation.  After all, it's part of a formal system.  The meanings of
the expressions of such a system can't enter into its functioning.
Why then is it so all-fired important that every expression have a
meaning?

                                             -- Drew McDermott

blenko-tom@CS.YALE.EDU (Tom Blenko) (11/29/89)

In article <6921@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
|In article <11655@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:
|>
|>
|>mcdermott-drew@CS.YALE.EDU (Drew McDermott) of
|>Yale University Computer Science Dept asked:
|>
|>> Why is it necessary that a symbol system have a semantics in order to
|>> be a symbol system? I mean, you can define it any way you like, but
|>> then most AI programs wouldn't be symbol systems in your sense.
|>> 
|>I'd rather not define it any way I like. I'd rather pin people down on
|>a definition that won't keep slipping away, reducing all disagreements
|>about what symbol systems can and can't do to mere matters of
|>interpretation.
|> ...
|
|Which "people" need to be pinned down?  Fodor, I guess, who has a strong
|hypothesis about a Representational Theory of Meaning.
|
|But suppose someone believes "It's all algorithms," and not much more?
|He's willing to believe that intelligence involves an FFT here, some
|inverse dynamics there, a few mental models, maybe some neural nets,
|perhaps a theorem prover or two,....  His view is not completely vacuous
|(Searle thinks it's even false).  It might be a trifle eclectic for some
|philosophers, but so what?

I don't share Drew's disenchantment with semantic models, but I think
there is a more direct argument among his remarks: specifically, that
it isn't a particularly strong claim to say that an object of
discussion has "a semantics".  In fact, if we can agree on what the
object of discussion is, I can almost immediately give you a semantic
model -- or lots of semantic models, some of which will be good for
particular purposes and some of which will not. And it doesn't make any
difference whether we are talking about axioms of FOPC, neural
networks, or wetware.

Richard Feynman had an entertaining anecdote in his biography about a
fellow with an abacus who challenged him to a "computing" contest.  He
quickly discovered that the fellow could compute simple arithmetic
expressions as fast as Feynman could write them down. So he chose some
problems whose underlying numerical structure he understood, but which,
it turned out, the other fellow, who simply knew a rote set of
procedures for evaluating expressions, didn't.

Who had a semantic model in this instance? Both did, but different
models that were suited to different purposes.  I suspect that Harnad
had a particular sort of semantics in mind, but he is going to have to
work a lot harder to come up with his strawman (I don't believe it
exists).

	Tom

anwst@unix.cis.pitt.edu (Anders N. Weinstein) (12/07/89)

"Explicit representation of the rules" is a big red herring. 

At least two major articulations of the "symbolist" position are quite
clear: nothing requires a symbol system to be "rule-explicit" (governed by
representations of the rules) rather than merely "rule-implicit"
(operating in accordance with the rules). This point is enunciated in
Fodor's _Psychosemantics_ and also in Fodor + Pylyshyn's _Cognition_
critique of connectionism. It is also true according to Haugeland's
characterization of "cognitivism" [Reprinted in his _Mind Design_]

The important thing is simply that a symbol system operates by 
manipulating symbolic representations, as you've characterized them.

Many people seem to get needlessly hung up on this issue.
My own suggestion is that the distinction is of merely heuristic
value anyway -- if you're clever enough, you can probably interpret 
any symbol system either way -- and that nothing of philosophical 
interest ought to hinge on it. I believe the philosopher Robert
Cummins has also published arguments to this effect, but I don't have
the citations handy.
-- 
Anders Weinstein		ARPA:	anwst@unix.cis.pitt.edu
U. Pitt. Philosophy       	UUCP:	{cadre,psuvax1}!pitt!cisunx!anwst
Pittsburgh, PA  15260		BITNET: anwst@pittvms

harnad@phoenix.Princeton.EDU (Stevan Harnad) (12/14/89)

anwst@unix.cis.pitt.edu (Anders N. Weinstein)
of Univ. of Pittsburgh, Comp & Info Services wrote:

> "Explicit representation of the rules" is a big red herring...
> nothing requires a symbol system to be "rule-explicit" (governed by
> representations of the rules) rather than merely "rule-implicit"
> (operating in accordance with the rules). The important thing is simply
> that a symbol system operates by manipulating symbolic representations,
> as you've characterized them... if you're clever enough, you can
> probably interpret any symbol system either way

I'm not convinced. Whether a rule is explicit or implicit is not just a
matter of interpretation, because only explicit rules are
systematically decomposable. And to sustain a coherent
interpretation, this decomposability must systematically mirror every
semantic distinction that can be made in interpreting the system. Now
it may be that not all features of a symbol system need to be
semantically interpretable, but that's a different matter, since
semantic interpretability (and its grounding) is what's at issue here. I
suspect that the role of such implicit, uninterpretable "rules" would
be just implementational.
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@confidence.princeton.edu       srh@flash.bellcore.com
harnad@elbereth.rutgers.edu    harnad@pucc.bitnet    (609)-921-7771