[comp.ai.digest] Fuzzy Symbolism

Laws@STRIPE.SRI.COM.UUCP (06/29/87)

  From: mind!harnad@princeton.edu  (Stevan Harnad)

  Finally, and perhaps most important: In bypassing the problem of
  categorization capacity itself -- i.e., the problem of how devices
  manage to categorize as correctly and successfully as they do, given
  the inputs they have encountered -- in favor of its fine-tuning, this
  line of research has unhelpfully blurred the distinction between the
  following: (a) the many all-or-none categories that are the real burden
  for an explanatory theory of categorization (a penguin, after all, be it
  ever so atypical a bird, and be it ever so time-consuming for us to judge
  that it is indeed a bird, is, after all, indeed a bird, and we know
  it, and can say so, with 100% accuracy every time, irrespective of
  whether we can successfully introspect what features we are using to
  say so) and (b) true "graded" categories such as "big," "intelligent,"
  etc. Let's face the all-or-none problem before we get fancy...

Is a mechanical rubber penguin a penguin?  Is a dead or dismembered
penguin a penguin?  How about a genetically damaged or altered penguin?
When does a penguin embryo become a penguin?  When does it become a
bird?  I think your example depends on circularities inherent in our
use of natural language.  I can't unambiguously define the class of
penguins, so how can I be 100% certain that every penguin is a bird?
If, on the other hand, we are dealing only in abstractions, and the
only "penguin" involved is a idealized living adult penguin bird, then
the question is a tautology.  We would then be saying that we are 100%
certain that our abstraction satisfies its own sufficient conditions --
and even that could change if scientists someday discover incontrovertible
evidence that penguins are really fish.

In short, every category is a graded one except for those that we
postulate to be exact as part of their defining characteristics.
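
A sketch may make the contrast concrete.  In fuzzy-set terms, a crisp
category is just a membership function that we have stipulated to
return 0 or 1, while a graded one returns a degree; everything below
(the extension, the thresholds) is invented purely for illustration:

    # A crisp category: membership is all-or-none because we have
    # postulated an exact extension, not because nature supplies one.
    POSTULATED_BIRDS = {"sparrow", "penguin", "ostrich"}   # stipulated

    def is_bird(x):
        return 1.0 if x in POSTULATED_BIRDS else 0.0

    # A graded category: membership is a matter of degree.
    def is_big(height_cm):
        # Thresholds chosen arbitrarily for the example.
        return min(1.0, max(0.0, (height_cm - 150.0) / 50.0))

On this view the only difference between the two functions is that we
wrote the first one's extension down ourselves.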


After writing the above, I saw the following reply:

  I am not, of course, claiming that noise does not exist and that errors
  may not occur under certain conditions. Perhaps I should have put it
  this way: Categorization performance (with all-or-none categories) is
  highly reliable (close to 100%) and MEMBERSHIP is 100%. Only
  speed/ease of categorization and typicality ratings are a matter of
  degree. The underlying representation must hence account for
  all-or-none categorization capacity itself first, then worry about its
  fine-tuning.

  This is not to deny that even all-or-none categorization may encounter
  regions of uncertainty. Since ALL category representations in my model are
  provisional and approximate (relative to the context of confusable
  alternatives that have been sampled to date), it is always possible that
  the categorizer will encounter an anomalous instance that he cannot classify
  according to his current representation. The representation must
  hence be revised and updated under these conditions, if ~100% accuracy
  is to be re-attained. This still does not imply that membership is
  fuzzy or a matter of degree, however, only that the (provisional
  "defining") features that will successfully sort the members must be revised
  or extended. The approximation must be tightened.
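
As I read that model, the representation is a provisional feature test
that gets patched whenever an anomaly defeats it.  Something like the
following sketch, in which the particular features and the revision
rule are my own invention, not Harnad's:

    # A provisional, approximate category representation.
    class ProvisionalCategory:
        def __init__(self, defining_features):
            self.features = set(defining_features)

        def classify(self, instance):
            # All-or-none verdict relative to the current features.
            return self.features <= set(instance)

        def revise(self, anomaly, is_member):
            # An anomalous instance forces the provisional "defining"
            # features to be revised so ~100% sorting is re-attained.
            if is_member:
                self.features &= set(anomaly)   # drop failed features
            # A passing non-member would instead require new features
            # that exclude it, tightening the approximation.

    bird = ProvisionalCategory({"has_feathers", "flies"})
    bird.classify({"has_feathers", "swims"})                # False: penguin!
    bird.revise({"has_feathers", "swims"}, is_member=True)
    bird.classify({"has_feathers", "swims"})                # now True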

You are entitled to such an opinion, of course, but I do not accept the
position as proven.  We do, of course, sort and categorize objects when
forced to do so.  At the point of observable behavior, then, some kind
of noninvertible or symbolic categorization has taken place.  Such
behavior, however, is distinct from any of the internal representations
that produce it.  I can carry fuzzy and even conflicting representations
until -- and often long after -- the behavior is initiated.  Even at
the instant of commitment, my representations need be unambiguous only
in the implicit sense that one interpretation is momentarily stronger
than the other -- if, indeed, the choice is not made at random.
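
The picture I have in mind is roughly this (the activation values are
arbitrary): competing interpretations coexist as graded strengths, and
only the forced act of responding collapses them into an all-or-none
verdict:

    import random

    # Fuzzy, even conflicting, internal representations.
    activations = {"bird": 0.51, "fish": 0.49}   # momentary strengths

    def commit(activations):
        # Behavior forces an all-or-none choice; the representation
        # itself stays graded.  Exact ties are broken at random.
        top = max(activations.values())
        winners = [k for k, v in activations.items() if v == top]
        return random.choice(winners)

    print(commit(activations))   # "bird", but only just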

It may also be true that I do reduce some representations to a single
neural firing or to some other unambiguous event -- e.g., when storing
a memory.  I find this unlikely as a general model.  Coarse coding,
graded or frequency encodings, and widespread activation seem better
models of what's going on.  Symbolic reasoning exists in pure form
only on the printed page; our mental manipulation even of abstract
symbols is carried out with fuzzy reasoning apparatus.
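
Coarse coding, to take one example, represents a stimulus by the
graded activation of many broadly tuned, overlapping units rather than
by a single unambiguous event.  A toy version, with the centers and
tuning width invented for the purpose:

    import math

    # Overlapping Gaussian receptive fields (parameters arbitrary).
    CENTERS = [0.0, 0.25, 0.5, 0.75, 1.0]
    WIDTH = 0.3

    def coarse_code(x):
        # Each unit responds in graded fashion; the stimulus lives
        # in the whole activation pattern, not in any one firing.
        return [math.exp(-((x - c) / WIDTH) ** 2) for c in CENTERS]

    print([round(a, 2) for a in coarse_code(0.4)])

No single unit says "0.4"; the value is recoverable only from the
pattern as a whole, which is just the sort of representation that
resists reduction to a crisp symbol.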

					-- Ken Laws