harnad@phoenix.Princeton.EDU (S. R. Harnad) (11/20/89)
What is a symbol system? From Newell (1980), Pylyshyn (1984), Fodor (1987) and the classical work of Von Neumann, Turing, Goedel, Church, etc. (see Kleene 1969) on the foundations of computation, we can reconstruct the following definition. A symbol system is:

(1) a set of arbitrary PHYSICAL TOKENS (scratches on paper, holes on a tape, events in a digital computer, etc.) that are

(2) manipulated on the basis of EXPLICIT RULES that are

(3) likewise physical tokens and STRINGS of tokens. The rule-governed symbol-token manipulation is based

(4) purely on the SHAPE of the symbol tokens (not their "meaning"), i.e., it is purely SYNTACTIC, and consists of

(5) RULEFULLY COMBINING and recombining symbol tokens. There are

(6) primitive ATOMIC symbol tokens and

(7) COMPOSITE symbol-token strings. The entire system and all its parts -- the atomic tokens, the composite tokens, the syntactic manipulations (both actual and possible) and the rules -- are all

(8) SEMANTICALLY INTERPRETABLE: the syntax can be SYSTEMATICALLY assigned a meaning (e.g., as standing for objects, as describing states of affairs).

According to proponents of the symbolic model of mind, such as Fodor (1980) and Pylyshyn (1980, 1984), symbol-strings of this sort capture what mental phenomena such as thoughts and beliefs are. Symbolists emphasize that the symbolic level (for them, the mental level) is a natural functional level of its own, with ruleful regularities that are independent of their specific physical realizations. For symbolists, this implementation-independence is the critical difference between cognitive phenomena and ordinary physical phenomena and their respective explanations. This concept of an autonomous symbolic level also conforms to general foundational principles in the theory of computation, and it applies to all the work being done in symbolic AI, the branch of science that has so far been the most successful in generating (hence explaining) intelligent behavior.
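Criteria (1)-(8) can be made concrete with a toy system. The following sketch is my own illustration (in Python; none of these names come from the literature cited): a minimal string-rewriting system for unary addition, in which the tokens are arbitrary shapes, the rule is itself represented as a string of tokens, the manipulation is purely syntactic, and the whole system admits a systematic numerical interpretation.

```python
# A toy symbol system: unary addition by string rewriting.
# Illustrative sketch only; all names here are mine, not from the text.

# (1, 6) Arbitrary primitive atomic tokens: their shapes mean nothing in themselves.
ATOMS = ("|", "+")

# (2, 3) An explicit rule, itself represented as a pair of token strings:
# "wherever the pattern occurs, substitute the replacement".
# (Contrast a thermostat, whose regularity is wired in with no rule token anywhere.)
RULES = [("+", "")]

def step(s):
    """(4, 5) One purely syntactic rewrite: match and substitute by shape alone."""
    for pattern, replacement in RULES:
        if pattern in s:
            return s.replace(pattern, replacement, 1)
    return s  # no rule applies

def normalize(s):
    """Apply rules until none applies (the string is in normal form)."""
    while True:
        t = step(s)
        if t == s:
            return s
        s = t

# (7, 8) Composite strings, plus a SYSTEMATIC interpretation of the whole
# system: a string of n "|" tokens stands for the number n, and "x+y"
# stands for the sum of what x and y stand for.
def denotes(s):
    return normalize(s).count("|")

print(normalize("||+|||"))   # -> |||||
print(denotes("||+|||"))     # -> 5
```

Note that the rewriting machinery never consults the interpretation: denotes() is an external assignment of meaning that covers every string and every rule systematically, which is what criterion (8) demands; the manipulation itself looks only at token shapes.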
All eight of the properties listed above seem to be critical to this definition of symbolic. Many phenomena have some of the properties, but that does not entail that they are symbolic in this explicit, technical sense. It is not enough, for example, for a phenomenon to be INTERPRETABLE as rule-governed, for just about anything can be interpreted as rule-governed. A thermostat may be interpreted as following the rule "Turn on the furnace if the temperature goes below 70 degrees and turn it off if it goes above 70 degrees," yet nowhere in the thermostat is that rule explicitly represented. Wittgenstein (1953) emphasized the difference between EXPLICIT and IMPLICIT rules: it is not the same thing to "follow" a rule (explicitly) and merely to behave "in accordance with" a rule (implicitly).

The critical difference lies in the compositeness (7) and systematicity (8) criteria. The explicitly represented symbolic rule is part of a formal system; it is decomposable (unless primitive); its application and manipulation are purely formal (syntactic, shape-dependent); and the entire system, not just the chunk in question, must be semantically interpretable. An isolated ("modular") chunk cannot be symbolic; being symbolic is a combinatory, systematic property. So the mere fact that a behavior is "interpretable" as ruleful does not mean that it is really governed by a symbolic rule. Semantic interpretability must be coupled with explicit representation (2), syntactic manipulability (4), and systematicity (8) in order to be symbolic. None of these criteria is arbitrary, and, as far as I can tell, if you weaken them, you lose your grip on what looks like a natural category and you sever the links with the formal theory of computation, leaving a sense of "symbolic" that is merely unexplicated metaphor (and that probably differs from speaker to speaker).

Any rival definitions, counterexamples or amplifications?

Excerpted from: Harnad, S. (1990) The Symbol Grounding Problem.
Physica D (in press)

-----------------------------------------------------
References:

Fodor, J. A. (1975) The language of thought. New York: Thomas Y. Crowell.
Fodor, J. A. (1987) Psychosemantics. Cambridge, MA: MIT/Bradford.
Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical appraisal. Cognition 28: 3-71.
Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.
Kleene, S. C. (1969) Formalized recursive functionals and formalized realizability. Providence, RI: American Mathematical Society.
Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135-183.
Pylyshyn, Z. W. (1980) Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences 3: 111-169.
Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge, MA: MIT/Bradford.
Turing, A. M. (1964) Computing machinery and intelligence. In: A. R. Anderson (ed.), Minds and machines. Englewood Cliffs, NJ: Prentice-Hall.
--
Stevan Harnad
INTERNET: harnad@confidence.princeton.edu  harnad@princeton.edu  srh@flash.bellcore.com  harnad@elbereth.rutgers.edu  harnad@princeton.uucp
CSNET:    harnad%confidence.princeton.edu@relay.cs.net
BITNET:   harnad1@umass.bitnet  harnad@pucc.bitnet
(609)-921-7771