[comp.ai] Features, Symbols, Categories

harnad@elbereth.rutgers.edu (Stevan Harnad) (04/02/89)

Andrew Palfreyman <andrew@logic.NSC.COM> of National Semiconductor
has asked me to reply to his recent posting, which elicited no
responses. He wrote:

" [about] symbols and their attributes... symbols, attributes, features
" and the central role of the recognition of isomorphism... [How do we]
" describe "the attribute of a thing"?...  The existence of attributes
" seems only possible when a feature extraction process is performed, by
" which attributes are *created* as a direct result of the interaction of
" the perceiver with the environment...  [There seem to be two kinds of]
" predicates... (1) a simple, "non-relational" predicate, like
" "whiteness" or "how many" (2) a set membership predicate, like "is a
" member of" or "has .. members".  [but] (1) appears to be subsumable
" under (2) in a recursive fashion (i.e. "is a member of the set of white
" things")... Is (2) an inclusive definition?...  from the above mentioned
" reductionist perspective, symbols evaporate!...  The feature set is all
" that is, all the way from just inside the "transducer surface" to just
" inside the "effector surface".  Analytic deduction of "symbols" from
" patterns of activation [is] just one more level of <significant feature
" extraction>, based on <recognition of isomorphisms between feature sets
" in the current context of feature sets>.

There are points here with which one can agree, but the reason I
didn't reply to this when it was originally posted was that it was
embedded in a much larger message consisting of entirely unnecessary
Zen quips and pseudophilosophy. The suggestion seems to be that:

(a) Feature extraction is important. (Yes.)

(b) "Attributes" are "created." (No, feature-detection may involve some
internal construction, approximation and even error, but features are
still features: this is not ontology we're discussing, just cognitive
modeling).

(c) Feature recognition and predication may be related through set
inclusion. (Yes, in a book on categorization I've tried to show how
set inclusion may be the operation underlying both categorization
["That is an X"] and description ["An X is a Y"].)

(d) If feature detection (and categorization) is central, then
"symbols vanish." (No, symbol tokens, according to my view, are the
names of categories that we can recognize, identify and act upon
because we have learned to detect their features. These symbol tokens
then combine into symbol strings that describe ever more abstract
objects and states of affairs as set-inclusion (categorization)
statements. Symbol tokens are objects too, so why should they
"vanish"? You probably mean that symbol MEANINGS vanish, but that's
wrong too. They're still there.
That's what the Chinese Room debate was about. My position was that
subjective meaning rides epiphenomenally on the "right stuff," and the
right stuff is NOT just internal symbol manipulation, as Searle's
opponents keep haplessly trying to argue, but hybrid nonsymbolic/symbolic
processes, including analog representations and feature-detectors,
with the symbolic representations grounded bottom-up in the nonsymbolic
representations. One candidate grounding proposal of this kind is
described in my book.)
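To make points (c) and (d) concrete, here is a purely illustrative
sketch (a caricature, not the grounding proposal of the book, and with
every name and datum invented for the example): elementary category
names are "grounded" in feature detectors applied to raw input, and a
higher-order name is introduced only by a set-inclusion statement over
names that are already grounded.

    # Hypothetical feature detectors over toy "sensory" input (a dict of
    # measurements); in a real system these would be analog/iconic processes.
    detectors = {
        "horse":   lambda x: x["legs"] == 4 and x["neighs"],
        "striped": lambda x: x["stripes"] > 0,
    }

    def members(name, world):
        """Categorization ('That is an X'): the subset of the world that
        the grounded detector for `name` picks out."""
        return {i for i, x in enumerate(world) if detectors[name](x)}

    def define(new_name, *old_names):
        """Description ('An X is a Y'): a new symbol whose extension is the
        intersection of the extensions of already-grounded names."""
        detectors[new_name] = lambda x: all(detectors[n](x) for n in old_names)

    world = [
        {"legs": 4, "neighs": True,  "stripes": 0},   # a horse
        {"legs": 4, "neighs": True,  "stripes": 30},  # a zebra
        {"legs": 2, "neighs": False, "stripes": 0},   # neither
    ]

    define("zebra", "horse", "striped")               # "A zebra is a striped horse"
    print(members("zebra", world))                    # {1}
    print(members("zebra", world) <= members("horse", world))  # True: zebras are horses

The point of the toy is only that the composite symbol "zebra" inherits
its extension bottom-up from names that already have grounded detectors;
it is not itself an account of how those detectors are learned.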

Refs:
Harnad, S. (Ed.) (1987) Categorical Perception: The Groundwork of Cognition.
                 New York: Cambridge University Press.
Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
                 and Theoretical Artificial Intelligence 1: 5-25.
-- 
Stevan Harnad INTERNET:  harnad@confidence.princeton.edu    harnad@princeton.edu
srh@flash.bellcore.com    harnad@elbereth.rutgers.edu      harnad@princeton.uucp
BITNET:   harnad@pucc.bitnet           CSNET:  harnad%princeton.edu@relay.cs.net
(609)-921-7771
