[mod.ai] Taxonomizing in AI: neither useful nor harmless

marek@INDIANA.CSNET ("Marek W. Lugowski") (02/10/86)

From: "Marek W. Lugowski" <marek%indiana.csnet@CSNET-RELAY.ARPA>


> [Stan Shebs:] In article <3600036@iuvax.UUCP> marek@iuvax.UUCP writes:
>  ...one of the most misguided AI efforts to date is 
>  taxonomizing a la Michalski et al: setting up categories along arbitrary 
>  lines dictated by somebody or other's intuition.  If AI does not have 
>  the mechanism-cum-explanation to describe a phenomenon, what right does it 
>  have to a) taxonomize it and b) demand that its taxonomizing be recognized 
>  as an achievement?  
> 
> I assume you have something wonderful that we haven't heard about?

I assume that you are intentionally jesting, equating that which I criticize
with all that AI has to offer.  Taxonomizing is a debatable art of empirical
science, usually justified when a scientist finds themselves overwhelmed with
gobs and gobs of identifiable specimens, e.g. entomology.  But AI is not
overwhelmed by gobs and gobs of tangible singulars; it is a constructive
endeavor that puts up putative mechanisms to be replaced by others.  The
kinds of learning Michalski so effortlessly plucks out of thin air are not
as incontrovertibly real and graspable as instances of dead bugs.

One could argue, I suppose, that taxonomizing in the absence of multitudes of
real specimens is a harmless way of pursuing tenure, but I argue in
Indiana U. Computer Science Technical Report No. 176, "Why Artificial
Intelligence is Necessarily Ad Hoc: Your Thinking/Approach/Model/Solution
Rides on Your Metaphors", that it causes grave harm to the field.  E-mail
nlg@iuvax.uucp for a copy, or write to Nancy Garrett at Computer Science
Department, Lindley Hall 101, Indiana University, Bloomington, Indiana
47406.

> Or do you believe that because there are unsolved problems in physics,
> chemists and biologists have no right to study objects whose behavior is
> ultimately described in terms of physics?
> 
> 							stan shebs
> 							(shebs@utah-orion)

TR #176 also happens to touch on the issue of how ill-formed Stan Shebs's
rhetorical question is and how this sort of analogizing has gotten AI into
its current (sad) shape.

Please consider whether taxonomizing kinds of learning from the AI perspective
in 1981 is at all analogous to chemists' and biologists' "right to study the
objects whose behavior is ultimately described in terms of physics."  If so,
when was the last time you saw a biology/chemistry text titled "Cellular
Resonance" in which 3 authors offered an exhaustive table of carcinogenic
vibrations, presented as a collection of current papers in oncology?...

More constructively, I am in the process of developing an abstract machine.
I think that developing abstract machines is more in line with my work as
an AI worker than postulating arbitrary taxonomies where there is neither need
for them nor raw material.

				-- Marek Lugowski