[comp.ai.digest] Natural kinds

JMC@SAIL.STANFORD.EDU (John McCarthy) (07/10/87)

Recently philosophers - Hilary Putnam, I think - introduced the concept
of natural kind which, in my opinion, is one of the few things they
have done that is useful for AI.  Most nouns designate natural kinds,
uncontroversially "bird", and in my opinion, even "chair".  (I don't
consider "natural kind" to be a linguistic term, because there may
be undiscovered natural kinds and never articulated natural kinds).

The clearest examples of natural kinds are biological species -
say penguin.  We don't have a definition of penguin; rather we
have learned to recognize penguins.  Penguins have many properties
I don't know about; some unknown even to penguin specialists.
However, I can tell penguins from seagulls without a precise definition,
because there aren't any intermediates existing in nature.
Therefore, the criteria used by people or by the programs we build
can be quite rough, and we don't all need to use the same criteria,
because we will come out with the same answer in the cases that
actually arise.

In my view the same is true of chairs.  With apologies to Don Norman,
I note that my 20 month old son Timothy recognizes chairs and tables.
So far as I know, he is always right about whether the objects
in our house are chairs.  He also recognizes toy chairs, but just
calls them "chair" and similarly treats pictures of chairs in books.
He doesn't yet say "real chair", "toy chair" and "picture of a chair",
but he doesn't try to sit on pictures of chairs.  He is entirely
prepared to be corrected about what an object is.  For example, he
called a tomato "apple" and accepted correction.

We should try to make AI systems as good as children in this respect.
When an object is named, the system should generate a
gensym, e.g. G00137.  To this symbol should be attached the name
and what the system is to remember about the instance.  (Whether it
remembers a prototype or a criterion is independent of this discussion;
my prejudice is that it should do both if it can.  The utility of
prototypes depends on how good we have made the system at handling
similarities.)
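
A minimal Lisp sketch of that bookkeeping (the plist slots NAME and
OBSERVED, and the example observations, are illustrative assumptions,
not part of the proposal):

  (defun learn-instance (name observations)
    ;; On hearing a new name, mint a fresh symbol and attach the
    ;; name and whatever is to be remembered about the instance.
    (let ((instance (gensym "G")))
      (setf (get instance 'name) name)
      (setf (get instance 'observed) observations)
      instance))

  ;; (learn-instance 'penguin '((swims t) (flies nil))) => #:G137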

The system should presume (defeasibly) that there is more to the concept
than it has learned and that some of what it has learned may be wrong.
It should also presume (although this will usually be built into the design
rather than be linguistically represented) that the new concept is
a useful way to distinguish features of the world, although some new
concepts will turn out to be mere social conventions.
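
One way to realize that defeasible presumption, continuing the sketch
above (the DEFEASIBLE marker and the correction protocol are my own
devices):

  (defun note-attribute (instance attribute)
    ;; Nothing learned is final: every attribute is stored as
    ;; revisable, and the concept is presumed open-ended.
    (push (cons attribute 'defeasible) (get instance 'observed)))

  (defun accept-correction (instance attribute)
    ;; Retract an attribute, as the child retracted "apple"
    ;; when told the object was a tomato.
    (setf (get instance 'observed)
          (remove attribute (get instance 'observed) :key #'car)))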

Attaching if-and-only-if definitions to concepts will sometimes be
possible, and mathematical concepts often are introduced by definitions.
However, this is a rare case in common sense experience.
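
In code the contrast is stark: a mathematical concept can carry an
if-and-only-if test, while a natural kind gets only a rough recognizer
(both functions below are my illustrations):

  (defun primep (n)
    ;; If and only if: N is prime exactly when it exceeds 1 and
    ;; has no divisor between 2 and N-1.
    (and (> n 1)
         (loop for d from 2 below n never (zerop (mod n d)))))

  (defun probably-penguin-p (bird)
    ;; A rough criterion, nothing like a definition; it suffices
    ;; only because nature supplies no intermediates.
    (and (member 'swims bird) (not (member 'flies bird))))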

I'm not sure that philosophers will agree with treating chairs as
natural kinds, because it is easy to invent intermediates between
chairs and other furniture.  However, I think it is psychologically
correct and advantageous for AI, because we and our robots exist
in a world in which doubtful cases are rare.

The mini-controversy about penguins can be treated from this point of
view.  That penguins are birds and whales are mammals has been discovered
by science.  Many of the properties that penguins have in common with
other birds have not even been discovered yet, but we are confident that
they exist.  It is not a matter of definition.  He who gets fanatical
about arbitrary definitions will make many mistakes - for example,
classifying penguins with seals will lead to not finding tasty penguin
eggs.

Laws@STRIPE.SRI.COM (Ken Laws) (07/12/87)

I would not be so quick to thank recent philosophers for the concept
of natural kinds.  While I am not familiar with their contributions,
the notion seems similar to "species" in biology and "cluster" in
engineering and statistics.  Cluster and discriminant analysis go
back to at least the 1930s, and have always depended on the tendency
of objects under study to group into classes.
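
That dependence is easy to see in a nearest-centroid discriminator (a
toy sketch of my own; the two-dimensional features are invented):

  (defun sq-distance (a b)
    ;; Squared Euclidean distance between two feature vectors.
    (reduce #'+ (mapcar (lambda (x y) (expt (- x y) 2)) a b)))

  (defun classify (object centroids)
    ;; Assign an object to the nearest cluster center.  When the
    ;; classes are well separated, even a criterion this rough
    ;; gives the same answer as any other reasonable one.
    (first (reduce (lambda (best next)
                     (if (< (sq-distance (second next) object)
                            (sq-distance (second best) object))
                         next
                         best))
                   centroids)))

  ;; (classify '(1.1 0.9) '((penguin (1 1)) (seagull (9 9))))
  ;; => PENGUIN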

					-- Ken
-------

TLW.MDC@OFFICE-1.ARPA (Tony Wilkie /DAC/) (07/16/87)

I may get sizzled for this, but I will suggest that the term "natural kind", 
while a fairly recent addition to the philosophical lexicon, is a conceptual
descendant of Plato's Forms, and closer in meaning to
Aristotle's discussions of 'kinds' in his Metaphysics.

Chairs would certainly be a paradigm example of a Platonic Form, and Aristotle 
in his Metaphysics used the horse Bucephalus as an example in his discussion
of kinds. Given his inclination as sort of a teleological guerilla, Aristotle 
would have (and may have) had a tough time separating his 'kinds' concept from 
'species' in the biological cases. Still, I think it safe to say that 
philosophical discussion of ontology preceded the development of a formal 
concept of species.

   Tony L. Wilkie <TLW.MDC@Office-1.ARPA>

mclean@NRL-CSS.ARPA (John McLean) (07/16/87)

Even "recent" philosophical discussions of natural kinds go back 20 years
and much further if you count Nelson Goodman's stuff on projectibility of
predicates (why do we assume emeralds are green and not grue, i.e.,
green until the year 2000 and then blue?) or much of the stuff written
in response to Hempel's problem of whether a nonblack nonraven could
count as a confirming instance of the claim that all ravens are black (since
the claim that all P's are Q's is logically equivalent to the claim that
all nonQ's are nonP's).  But I think you can also view much of what Plato
had to say about forms and what Aristotle had to say about substance as
being concerned with the problem of natural kinds as well.

However, I think the issue being raised about recognizing penguins,
chairs, etc. goes back to Wittgenstein's _Philosophical_Investigations_:

   For if you look at them you will not see something that is common to
all, but similarities, relationships, and a whole series of them at
   that...I can think of no better expression to characterize these
   similarities than "family resemblance"...

John McLean

AI.CAUSEY@R20.UTEXAS.EDU (Robert L. Causey) (07/18/87)

In a message posted 7/15, John McCarthy says that philosophers
have recently introduced the concept of natural kind, and he
suggests how this concept may be useful in AI.  I think this
deserves serious comment, both historical and substantive.  The
following is lengthy, but it may illustrate some general
characteristics about the relationships between philosophy and AI. 

                         HISTORY
In their messages, Ken Laws and others are correct -- the idea of
natural kinds is not new.  It is at least implicit in some
Pre-Socratic Greek philosophy, and Aristotle extensively
developed the idea and applied it in both philosophy and biology. 
Aristotle's conception, however, is too "essentialist" to fit what
McCarthy refers to.

In the late 1600's John Locke developed an impressive empiricist
analysis of natural kinds.  Further developments were contributed
in the 1800's in J. S. Mill's _A_System_of_Logic_.  Mill also
made important contributions to our understanding of inductive
reasoning and scientific explanation; these are related to
natural kinds. 

In our century a number of concepts of natural kinds have been
proposed, ranging from strongly empiricist "cluster" approaches
(which need NOT preclude expanding the cluster of attributes
through the discovery of new knowledge, cf.  McCarthy 7/17), to
various modal analyses, to some intermediate approaches.  Any of
these analyses may have some value depending on the intended
application, but the traditional notion of natural kinds has
almost always been connected somehow with the idea of natural
laws. 

                    SUBSTANTIVE ISSUES
1.  Whatever one's favorite analysis might be, it is important to
distinguish between a NATURAL kind (e.g., the compound silicon
dioxide, with nomologically determined physical and chemical
attributes), and a functional concept like "chair".  There is
generally not a simple one-to-one correspondence between our
functional classifications of objects and the classification
systems that are developed in the natural sciences.  This is true
in spite of the fact that we can learn to recognize sand,
penguins, and chairs.  But things are not always so simple.
Suppose that Rip van Winkle learns in 1940 to recognize at sight
a 1940-style adding machine; he then sleeps for 47 years.  Upon
waking in 1987 he probably would not recognize at sight what a
thin wallet calculator is.  Functional classifications are
useful, but we should not assume that they are generated and
processed in the same ways as natural classifications.  In
particular, since functional classifications often involve an
abstract understanding of complex behavioral dispositions, they
are particularly hard to learn once one gets beyond simple things
like chairs and tables. 

2.  Even discovering the classic examples of NATURAL kinds (like the
classification of the chemical elements) can be a long and
difficult process.  It requires numerous inductive
generalizations to confirm that the attributes in a certain Set
of attributes each apply to gold, and that the attributes in some
other Set of attributes apply to iodine, etc.  We further
recognize that our KNOWLEDGE of what are the elements of these
Sets of attributes grows with the general growth of our
scientific knowledge.  Also, we need not always use the same set
of attributes for IDENTIFICATION of instances of a natural kind. 
Most of this goes back to Locke, and philosophers have long
recognized the connection between induction and classification;
Carnap, Hempel, Goodman, and others, have sharpened some of the
issues during the last 50 years. 
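
A crude rendering of that flexibility (the attribute list and the
two-attribute threshold are invented for illustration):

  (defparameter *gold-attributes*
    ;; What we currently know about gold; this Set grows with
    ;; the general growth of scientific knowledge.
    '(yellow malleable dense (atomic-number 79)))

  (defun identifies-p (observed kind-attributes)
    ;; We need not use the same attributes every time; here any
    ;; two confirmed attributes suffice for identification.
    (>= (length (intersection observed kind-attributes
                              :test #'equal))
        2))

  ;; (identifies-p '(yellow dense) *gold-attributes*) => T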

3.  Now, getting back to McCarthy's suggestion -- in his second
message (7/17) he writes: "...for a child to presume a natural
kind on hearing a word or seeing an object is advantageous, and
it will also be advantageous to built (sic) AI systems with this
presumption." His 7/15 message says, "When an object is named,
the system should generate a gensym, e.g., G00137.  To this
symbol should be attached the name and what the system is to
remember about the instance." This is an interesting suggestion,
but it prompts some comments and questions:

i) Assuming that children do begin to presume natural kinds at
some stage of development, what inductive processes are they
using, what biologically determined constraints are affecting
these processes, and what prior acquired knowledge is directing
their inductions?  These are interesting psychological questions.
But, depending on our applications, we may not even want to build
robots that emulate young children.  We can attach a name
to a gensym, but it is not at all easy to decide "...what the
system is to remember about the instance,"  or to specify how
it is to process all of the stuff it generates in this manner.

ii) Children receive much corrective feedback from other people;
how much feedback will we be willing or able to give to the
"maturing" robots? Will the more mature robots help train the
naive ones?

iii) Given that classification does involve complex inductive
reasoning, we need to learn a lot more about how to implement
effective inductive procedures, where "induction" is understood
very broadly. 

iv) If the AI systems (robots, etc.) are to learn, and reason with,
functional concepts, then things get even more complex.  Ability
to make abstractions and perform complex analogical reasoning
will be required.  In my judgment, we (humans) still have a lot
to learn just about the representation of functional knowledge. 
If my Rip van Winkle story seems farfetched, here is a true
story.  I know a person who is familiar with the appearance and
use of 5 1/4 inch floppy diskettes.  Upon first seeing a 3.5 inch
mini-diskette, she had no idea what it was until its function was
described.  Knowledge of diskettes can extend to tracks, sectors,
etc.  The concept of natural kinds is relatively simple (though
often difficult to apply); functional concepts and their
relations with physical structures are harder subjects.
-------

JMC@SAIL.STANFORD.EDU (John McCarthy) (07/19/87)

[In reply to message from AI.CAUSEY@R20.UTEXAS.EDU sent Sat 18 Jul 87.]

I agree with Bob Causey's comments and agree that the open questions he
lists are unsolved and important.  I have one caveat.  The distinction
between nomological and functional kinds exists in sufficiently elaborate
mental structures, but I don't think children under 2 make the
distinction, i.e. have different mechanisms for learning them.  For this
reason, it is an open question whether it should be a primary distinction
for robots.  In a small child's world, chairs are distinguished from other
objects by appearance, not by function.  Evidence: a child doesn't refer
to different-looking objects on which he can also sit as chairs.
Concession:  there may be such a category "sittable" in "mentalese", and
languages with such categories might be as easily learnable as English.
What saves the child from having to make the distinction between kinds
of kinds at an early age is that so many of the kinds in his life are
distinguishable from each other in many ways.  The child might indeed
be fooled by the different generations of calculator, but usually he's
lucky.

I hope to comment later on how robots should be programmed to identify
and use kinds.

rlw@philabs.philips.COM (Richard Wexelblat) (07/21/87)

In article <8707161942.AA13065@nrl-css.ARPA> mclean@NRL-CSS.ARPA
(John McLean) writes:

>However, I think the issue being raised about recognizing penguins,
>chairs, etc. goes back to Wittgenstein's _Philosophical_Investigations_:

Actually, the particular section chosen is a bit too terse.  Here is more
context:

   Consider, for example, the proceedings that we call `games.'  I mean board-
games, card-games, ball-games, Olympic games, and so on.  What is common to
them all?--Don't say:  ``There must be something common, or they would not be
called `games' ''--but look and see whether there is anything common to all.
--For if you look at them you will not see something that is common to all,
but similarities, relationships, and a whole series of them at that ...  a
complicated network of similarities overlapping and criss-crossing; sometimes
overall similarities, sometimes similarities of detail.
   I can think of no better expression to characterize these similarities
than ``family resemblances''; for the various resemblances between the
members of a family: build, features, colour of eyes, gait, temperament,
etc.  etc. overlap and criss-cross in the same way.--And I shall say: `games'
form a family.

                                   * * *

This sort of argument came up in a project on conceptual design tools a few
years ago in attempting to answer the question:  ``What is a design and how
do you know when you have one?''  In attempting an answer, we got into
the question of subjective classifications of architecture.  What is a
``ranch'' or ``colonial'' house?  If you can get a definition that will
satisfy a homebuyer, you are in the wrong business.

                                   * * *

Gratis, here are two amusing epigrams from W's Notebooks, 1914-1916:

	There can never be surprises in logic.
		  ~~~~~

	One of the most difficult of the philosopher's tasks is to
	find out where the shoe pinches.

rlw@philabs.philips.COM (Richard Wexelblat) (07/21/87)

It is amusing and instructive to study and speculate on children's language
and conceptualization.  (Wow! That construct's almost Swiftian!)  For those
who would read further in this domain, I recommend:

Brown, Roger
A First Language -- The Early Stages
Harvard Univ. Press, 1973

MacNamara, John
Names for Things -- A Study of Human Learning
MIT Press, 1984

MINSKY@OZ.AI.MIT.EDU (07/22/87)

About natural kinds.  In "The Society of Mind", pp. 123-129, I propose a
way to deal with Wittgenstein's problem of defining terms like "game"
or "chair".  The basic idea was to probe further into what
Wittgenstein was trying to do when he talked about "family
resemblances" and tried to describe a game in terms of properties, the
way one might treat members of a human family: build, features, colour
of eyes, gait, temperament, etc.

In my view, Wittgenstein missed the point because he focussed on
"structure" only.  What we have to do is also take into account the
"function", "goal", or "intended use" of the definition.  My trick is
to catch the idea between two descriptions, structural and functional.
Consider a chair, for example.

  STRUCTURE: A chair usually has a seat, back, and legs - but
     any of them can be changed in so many ways that it is hard
     to make a definition to catch them all.

  FUNCTION: A chair is intended to be used to keep one's bottom
     about 14 inches off the floor, to support one's back
     comfortably, and to provide space to bend the knees.

If you understand BOTH of these, then you can make sense of that list
of structural features - seat, back, and legs - and engage your other
worldly knowledge to decide when a given object might serve well as a
chair.  This also helps us understand how to deal with "toy chair" and
such matters.  Is a toy chair a chair?  The answer depends on what you
want to use it for.  It is a chair, for example, for a suitable toy
person, or for reminding people of "real" chairs, etc.
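
Here is a toy rendering of the trick (the 14-inch test comes from the
functional description above; the scoring rule is an invented stand-in
for real commonsense reasoning):

  (defun structural-score (object)
    ;; How many of the usual chair parts does the object show?
    (length (intersection '(seat back legs) (getf object :parts))))

  (defun functional-fit-p (object)
    ;; Would it keep one's bottom about 14 inches off the floor
    ;; and leave space to bend the knees?
    (and (< 10 (getf object :seat-height 0) 20)
         (getf object :knee-room)))

  (defun chair-p (object)
    ;; Catch the concept between the two descriptions: a perfect
    ;; structural match will do, and so will a partial one that
    ;; also fits the intended use.
    (or (= (structural-score object) 3)
        (and (>= (structural-score object) 2)
             (functional-fit-p object))))

  ;; (chair-p '(:parts (seat legs) :seat-height 14 :knee-room t))
  ;; => T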

In other words, we should not worship Wittgenstein's final defeat, in
which he speaks about vague resemblances - and, in effect, gives up
hope of dealing with such subjects logically.  I suspect he simply
wasn't ready to deal with intentions - because nothing comparable to
Newell and Simon's GPS theory of goals, or McCarthy's meta-predicate
(Want P) was yet available.

I would appreciate comments, because I think this may be an important
theory, and no one seems to have noticed it.  I just noticed, myself,
that I didn't mention Wittgenstein himself (on page 130) when
discussing the definition of "game".  Apologies to his ghost.

hamscher@HT.AI.MIT.EDU (Walter Hamscher) (07/27/87)

Your functional description of "chair" does capture more of "what's
essential to chairs" than the structural description could.  Some
quibbles, however.  First, it includes couches since it doesn't say
that it's for exactly one person.  Second, it doesn't seem to include
"Balenz" chairs, those kind in which the person rests on his/her
shins, since the "support for one's back" is rather indirect -- what
they do is to make it easier to balance the spine by tilting the
pelvis forward.  Third, some people might say that Balenz chairs
aren't chairs at all, but stools, because the back support is indirect
-- the point being that the functional description might have to take
into account who's saying what about chairs to whom.  Probably, other
AIList readers will come up with more borderline cases, which brings
me to the speculation that functional descriptions may end up with as
many exceptions as structural descriptions do.

MINSKY@OZ.AI.MIT.EDU (07/27/87)

I agree:
   1. Yes, I think we'd all agree that a chair is for 1 person to sit on.
   2. The boundary is fuzzy, indeed, and some people might not
      consider a Balenz chair to be a chair.
   3. Yes, the "functional description" does indeed depend on whose
      "intention" is involved, and upon who is saying what to whom.

My point is not that such terms can be defined in foolproof, clear-cut
ways.  There are really two sorts of points.

1.  You can get much further in making good definitions by squeezing
in from both structural and functional directions - and surely others as well.

2.  In Society of Mind, section 30.1 I discuss how meanings must depend on
speakers, etc.

As Ken Laws remarked, we should not be too hasty to thank philosophers
for the concept of "natural kind".  McCarthy makes useful remarks about
penguins, which form a clear-cut cluster because of the speciation
mechanism of sexual reproduction.  The class is un-fuzzy even though,
as McCarthy notes, penguins have properties that scientists have not
yet discovered.

But then, I think, McCarthy defeats this clarity by proceeding to
discuss how children learn about chairs - and tries to subsume this,
too, into natural kinds.  He describes what seem clearly to be not
"natural" aspects of chairs, but the clustering and debugging
processes a child might use.

My conclusion - and, I'd bet, Ken Laws would agree - is that the
concept of "natural kind" has an illusory generality.  It seems to me
that, rather than good philosophy, it is merely low-grade science
contaminated by naive, traditional common sense concepts.  The
clusters that have good boundaries, in the world, usually have them
for good - but highly varied - reasons.  Animals form good clusters
because of Darwinian speciation of various sorts.  Certain metals,
like Gold, have "natural" boundaries because of the Pauli exclusion
principle which causes things like periodic tables of elements.
Philosophers like to speak about gold - but their arguments won't work
so well for Steel, whose boundary is fuzzy because there are so many
ways to strengthen iron.  All in all, the clusters we perceive that
have sharp boundaries are quite important, pragmatically, but exist
for such a disorderly congeries of reasons that I consider the
philosophical discussion of them to be virtually useless in this
sense: the class of clusters with boundaries sharp enough to
deserve the title "natural kinds" is itself too fuzzy a concept to
help us clarify the nature of how we think about things.

shebs@CS.UTAH.EDU (Stanley Shebs) (07/27/87)

In article <MINSKY.12320404487.BABYL@MIT-OZ> MINSKY@OZ.AI.MIT.EDU writes:

>About natural kinds.  In "The Society of Mind", pp. 123-129, I propose a
>way to deal with Wittgenstein's problem of defining terms like "game"
>or "chair".  The basic idea was to probe further into what
>Wittgenstein was trying to do when he talked about "family
>resemblances" and tried to describe a game in terms of properties, the
>way one might treat members of a human family: build, features, colour
>of eyes, gait, temperament, etc.

>[... details of Wittgenstein vs Minsky :-) ...]

>I would appreciate comments, because I think this may be an important
>theory, and no one seems to have noticed it. [...]

I recently finished reading "Society of Mind", and quite enjoyed it.
There are a lot of interesting ideas.  There are also many that are
familiar to people in the field, but with new syntheses that make the
ideas much more plausible than in the past.  I had been getting cynical
about AI, but after reading this, I wanted to go and hack out programs
to test the hypotheses about action, and memory, and language.  But there's
a serious problem: how *can* these hypotheses be tested?  The society of
mind follows human thinking so closely that any implementation is going
to be a model of human minds rather than minds in general, and will probably
be handicapped by being too small and simple to be recognizably human-like
in its behavior.  Tracing a mind society's behavior will generate lots
of data but little insight.  So my ardor has been replaced by odd moments
speculating on tricky but believable tests, and a greater appreciation for
people interested in a more formal approach to minds.

Getting down to specifics, the theory about recognition of objects by either
structure or function was one of the parts I really liked.  A robot should
be able to sit on a desk without getting neurotic, or to sit carefully on
a chair that's missing one leg...

							stan shebs

powell%mwcamis@MITRE.ARPA (07/29/87)

  Minsky's notion of natural types involving both structure and function
does seem plausible.  One could think of each natural type as a
bipartite graph, where one node class represents the structural
components and the other represents the functions of the natural
type.  Edges between the two classes would represent (in a crude
way) how particular components serve particular functions.
     
  Even more specifically, the entire design foundation, as recorded in
the data-dependency net of an ATMS that traced the design process
(function to structure), would capture still more about the natural
type.  This seems like a bizarrely specific way to define a hazy notion
like natural types, but it does appear to follow naturally from
Minsky's proposal.

eyal@wisdom.BITNET (Eyal mozes) (07/30/87)

An important theory that has so far not been mentioned in the
discussion on "natural kinds" is the Objectivist theory of concepts.
In essence, this theory regards universal concepts, such as "chair" or
"bird", as the result of a process of "measurement-omission", which
mentally integrates objects by omitting the particular measurements of
their common characteristics.  The theory takes into account the point
mentioned in Minsky's recent message about structure and function, and
completely solves Wittgenstein's problem.
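
Read computationally (my reading, not Rand's or Kelley's own
formalism), measurement-omission keeps each common dimension while
dropping its particular value:

  (defun form-concept (instances)
    ;; Integrate instances by retaining each shared characteristic
    ;; and omitting the measurements, keeping only the observed
    ;; range: some value, but any value within it.
    (loop for dim in (mapcar #'first (first instances))
          collect (list dim
                        (loop for i in instances
                              minimize (second (assoc dim i)))
                        'to
                        (loop for i in instances
                              maximize (second (assoc dim i))))))

  ;; (form-concept '(((seat-height 14) (legs 4))
  ;;                 ((seat-height 17) (legs 3))))
  ;; => ((SEAT-HEIGHT 14 TO 17) (LEGS 3 TO 4))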

The theory is presented in the book "Introduction to Objectivist
Epistemology" by Ayn Rand, and, more recently, in the paper "A theory
of abstraction" by David Kelley (Cognition and Brain Theory, vol. 7
no. 3&4, summer/fall 1984, pp. 329-357).

        Eyal Mozes

        BITNET:                 eyal@wisdom
        CSNET and ARPA:         eyal%wisdom.bitnet@wiscvm.wisc.edu
        UUCP:                   ...!ihnp4!talcott!WISDOM!eyal

aweinste@BBN.COM (Anders Weinstein) (07/30/87)

In article <MINSKY.12321721233.BABYL@MIT-OZ> MINSKY@OZ.AI.MIT.EDU writes:
>
>My conclusion - and, I'd bet, Ken Laws would agree - is that the
>concept of "natural kind" has an illusory generality.  It seems to me
>that, rather than good philosophy, it is merely low-grade science
>contaminated by naive, traditional common sense concepts.  

I think there's some confusion about what natural kinds are in this
discussion. Most of the talk has focussed on the alleged sharpness of the
kind's boundaries. But I don't think this is what's at issue, at least in the
contemporary philosophical usage.

The point is that you can't do science without imposing some taxonomy on the
objects under study. "Natural kinds" are simply the kinds that figure in
scientific generalizations (aka Laws of Nature). Thus "bird" is perhaps a
natural kind, but "thing that is either furry or made of clay" is not.

Some people like to argue about whether these classification systems are "out
there" in Nature waiting to be discovered (the "realist" view) or are
invented by the mind and imposed on some undifferentiated reality (an
"idealist" or "constructivist" picture). Happily, we can ignore this debate.

What we can't ignore is the fact that a notion of natural kinds is
*essential* for induction, as demonstrated by Nelson Goodman's classic "grue
vs green" puzzle. Without some sense of what kinds are "natural", you're
liable to go off projecting "grue", or looking for laws governing "furry or
clay things". This would be the antithesis of intelligence. 

Of course, coming up with suitable taxonomies is an empirical matter.  I once
heard Kuhn emphasize that Aristotle's concept of "motion" included things
like the growth of trees. Progress in physics had to await a more useful
concept of motion. 

But this shouldn't be taken to imply that natural kinds are only relevant to
sophisticated scientific theorizing -- the same principles apply to the
inductions that are part of common-sense understanding. And it seems that we
are blessed with pretty accurate innate intuitions about which kinds or
similarities are natural (e.g. "green") and which are ludicrously artificial
("grue"). The philosophy of induction thus suggests that you can't make an
intelligent system without somehow building into it an equivalent sense of 
the naturalness of kinds.

Anders Weinstein
BBN Labs

gilbert@hci.hw.ac.UK.UUCP (08/13/87)

In article <MINSKY.12320404487.BABYL@MIT-OZ> MINSKY@OZ.AI.MIT.EDU writes:
>
>In my view, Wittgenstein missed the point because he focussed on
>"structure" only.  What we have to do is also take into account the
>"function", "goal", or "intended use" of the definition.  My trick is
>to catch the idea between two descriptions, structural and functional.
>Consider a chair, for example.
>
>  STRUCTURE: A chair usually has a seat, back, and legs - but
>     any of them can be changed in so many ways that it is hard
>     to make a definition to catch them all.
>
>  FUNCTION: A chair is intended to be used to keep one's bottom
>     about 14 inches off the floor, to support one's back
>     comfortably, and to provide space to bend the knees.
>
>If you understand BOTH of these, then you can make sense of that list
>of structural features - seat, back, and legs - .......[ cut ]......
>........This also helps us understand how to deal with "toy chair" and
>such matters.  Is a toy chair a chair?  The answer depends on what you
>want to use it for.  It is a chair, for example, for a suitable toy
>person, or for reminding people of "real" chairs, or etc.  

     A toy chair is a chair if people say it is a chair.  I didn't vote
     for any lexicographer to go and prescribe our language.

Whilst agreement on structure is possible by an appeal to sense-data
mediated by a culture's categories, agreement on function is less
likely. How do we know that an object has a function? Whilst the prime
use of a chair is indeed for sitting on, this does not preclude its
use for other functions - now don't these go back to structure? Or are
they related to intention (i.e. when someone hits you on the head with
a chair)?

Function is a dangerous word, as it presumes a closure well-suited to
the description of a well-ordered, unchanging world. I hope that this
new focus doesn't take AI down the path of American post-war sociology,
where Talcott Parsons's functionalism recast the great American dream as
the 'natural' functions of all societies.

In short, nothing, no "das Ding an sich", has a function. People give
things functions. Give a Polaroid to someone in a part of the world
where cameras aren't understood, and the function is not going to jump
out and reveal the essence of the object. In fact, museums of
ethnography are full of examples of industrial products put to the
strangest uses. There was also once a spate of jokes about what the
Japanese did when faced with a western water-closet, and recently a
book on Japanese etiquette has warned Westerners about using their
handkerchiefs to blow their noses - we are told that this is not the
function of a handkerchief in Japan!

So, don't ignore the social. It's the only reality there is. Wittgenstein
may have missed your preferred point, but I think you're ignoring his
observations. Had he been alive in the '60s, I've a feeling that the
growth of sociology would have provided him with more substance for
thought than GPS and the Want-P predicate. BTW - what is the function
of a Want-P predicate, and what would a Japanese do with a handkerchief
afterwards? :-)

Times change, the world changes, knowledge-bases stagnate.
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert

smoliar@VAXA.ISI.EDU (Stephen Smoliar) (08/18/87)

In article <115@glenlivet.hci.hw.ac.uk> Gilbert Cockton
<mcvax!hci.hw.ac.uk!gilbert@seismo.CSS.GOV> writes:
>
>Whilst agreement on structure is possible by an appeal to sense-data
>mediated by a culture's categories, agreement on function is less
>likely. How do we know that an object has a function? Whilst the prime
>use of a chair, is indeed for sitting on, this does not preclude it's
>use for other functions - now don't these go back to structure? Or are
>they related to intention (i.e. when someone hits you on the head with
>a chair)?

There seems to be a bit of confusion between what the function of a perceived
object IS and what it CAN BE.  There are very few concepts for which
structure and/or function are unique.  The point is that both serve to
guide the classification of our perceptions.  Thus, we may recognize a
chair by its structural appearance.  Having done so, we can then identify
the surface upon which we should sit, how we should rest our back, where
we can tuck our legs, and so forth.  On the other hand, if I walk into a
kitchen and see someone sitting on a step-stool, I recognize that he is
using that step-stool as a chair.  Thus, I have made a functional recognition,
from which I conclude that he is using the top step as a seat, he is resting
his legs on a lower step, and he is managing without a back support.  Thus,
one can proceed from structural recognition to functional recognition or
vice versa.
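
In outline (the plist encoding is mine), the functional route maps
the chair roles onto whatever structure the object offers:

  (defun construe-as-chair (object)
    ;; Functional recognition first: someone is sitting on it, so
    ;; assign the chair roles to the structure actually present.
    (list :seat (first (getf object :surfaces))
          :footrest (second (getf object :surfaces))
          :back-support (getf object :back)))

  ;; A step-stool pressed into service as a chair:
  ;; (construe-as-chair '(:surfaces (top-step lower-step) :back nil))
  ;; => (:SEAT TOP-STEP :FOOTREST LOWER-STEP :BACK-SUPPORT NIL)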

This may be what Cockton means by "intention," and it is most likely highly
societal in nature.  However, we must not confuse the issue.  We do not
classify our perceptions merely for the sake of classifying them but for
the sake of interacting with them.  Depending on my needs, I may choose
to classify the chair at my dining room table as a place to sit while I
eat, a place to stand while I change a light bulb, or a weapon with which
to threaten (or attack) an intruder.

rjz@JASPER.Palladian.COM.UUCP (08/28/87)

In McCarthy's message of Jul 10, he talks of the need for AI
systems to be able to learn and use "natural kinds",
meaning something like "empirically determined categorizations
of objects and phenomena in the experience of an individual".

A response by Causey (Jul 18) describes a "natural kind"
as something with "nomologically determined attributes",
and specifically distinguishes this from a "functional concept"
such as a chair.

First: what is the correct definition of a "natural kind"
in philosophical usage? What precisely does it cover,
and why can't a "functional definition" define a natural kind?

Second: Sidestepping the terminological issue,
McCarthy's original point is the more crucial:
that people seem to be able to classify objects in the
absence of precise information.
This is important if individuals are to "make sense" of their world,
meaning they are able to induce any significant
generalizations about how the world works. It seems clear
that such generalizations must allow "functional definitions";
how else would we learn to recognize chairs, tables, and
other artifacts of civilization?
Perhaps we could call this expanded notion an "empirical kind".

Third: Such "kinds" are especially important for communicating with other
individuals, since communication cannot proceed without mutually
accepted points of reference, just as induction cannot proceed
without "natural kinds".
Being based on individual experience, no two persons' conceptions of
a given concept can be assumed to correspond
_exactly_. Yet communication is for the most part not deterred
by this. It would be a great convenience, implementation-wise,
if this meant that precise definitions of "kinds" are
unnecessary in [AI] practice.

Roland J. Zito-wolf
Palladian Software
Cambridge, Mass 02142
RJZ%JASPER@LIVE-OAK.LCS.MIT.EDU

gilbert@hci.hw.ac.UK (Gilbert Cockton) (09/14/87)

In article <"870828113435.1.rjz@JASPER"@UBIK.Palladian.COM>
rjz%JASPER@LIVE-OAK.LCS.MIT.EDU writes:

>In McCarthy's message of Jul 10, he talks of the need for AI
>systems to be able to learn and use "natural kinds",

I'd like to continue the sociological perspective on this debate.
Rule number 1 in sociology is forget about "naturalness" - only
sociobiologists are really into "nature" now, and look at the foul
images of man that they've tried to pass off as science (e.g. Dworkin).

> McCarthy's original point is the more crucial: that people seem to be able
> to classify objects in the absence of precise information.

Psychologists cram a lot under the heading of "ability". The learner
is often assumed to have an active, conscious problem solving role.
When dealing with formal problems and knowledge, such a
characterisation seems valid. With social constructs such as informal
categories, "ability" is not the result of an active learning process.
Rather the ability follows automatically from cultural immersion.

>This is important if individuals are to "make sense" of their world,
>meaning they are able to induce any significant
>generalizations about how the world works.

Artifacts of civilization are only induced once. Thereafter, if they
fulfil social needs, they remain unchanged. Rather than induce what a
chair is, children learn what it is as part of their sociolinguistic
development. They come to know what a chair is without ever actively
and consciously inducing a formal definition.

>Perhaps we could call this expanded notion an "empirical kind".

"Empirical" is about as helpful as "natural" when it comes to reasoning
about social phenomena.

>Third: Such "kinds" are especially important for communicating with other
>individuals. Being based on individual experience, no two persons'
> conceptions of a given concept can be assumed to correspond _exactly_. 

At last, some social reasoning :-)! However, surface differences in
statements about meaning do not imply deep differences over the real
concept. The problem is one of language, not thought. Note also that
where beliefs about a concept are heavily controlled within a society,
public expression about a concept can be almost identical. See under
ideology or theocracy.

Once again, the reason why so much AI research is just one big
turn-off is that much of it is a very amateur and jargon-ridden
sophomore attempt at formalising phenomena which are well understood
and much studied in other real disciplines. Anthropological studies of
the category systems of societies abound. Levi-Strauss for one has
explored the recurrence of binary oppositions in many category
systems. The difference between the humanities and AI is mainly that
the former are happy to write, as elegantly as possible, in natural
language, whereas in the latter there is a fetish for writing in a
mixture of LISP, cut-down algebra and folk-psychology without an ounce
of scholarship. There is rigour no doubt, but without scholarship it
is worthless. Artificial ignorance is an apt characterisation.

The debate on natural kinds appears to have emerged from a discussion
of where AI needs to go next. Perhaps AI folk should drop the
hill-climbing and take their valuable techniques back into the
disciplines which can make use of them in a sensible and balanced way.
Then perhaps only programmes worth writing will be implemented and
this nonsense about tidying up poorly expressed ideas on a dumb
machine can be interred once and for all.
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert

cugini@ICST-ECF.ARPA.UUCP (09/21/87)

Gilbert Cockton writes:

> I'd like to continue the sociological perspective on this debate.
> Rule number 1 in sociology is forget about "naturalness" - only
> sociobiologists are really into "nature" now, and look at the foul
> images of man that they've tried to pass off as science (e.g. Dworkin).

This seems a somewhat abrupt dismissal of natural kinds, which have
lately attracted some support from people such as Saul Kripke, who is
neither a computer scientist, dumb, nor politically unreliable
(although he IS a philosopher, and is thereby suspect, no doubt).

The (philosophically) serious question is to what extent our shared
concepts ("dog", "star", "electron", "chair", "penguin", "integer",
"prime number") are merely arbitrary social conventions, and to what
extent they reflect objective reality (the old nominalist-realist
debate).  A sharper re-phrasing of the question might be:

  To what extent would *any* recognizably rational being share our
  conceptual framework, given exposure to the same physical environment?
  (Eg, would Martians have a concept of "star"?).

I believe there have been anthropological studies, for instance,
showing that Indian classifications of animals and plants line
up reasonably well with the conventional Western taxonomy.

If there are natural kinds, their relevance to some AI work seems
obvious.

John Cugini <Cugini@icst-ecf.arpa>
------