[comp.ai.philosophy] toward a definition of AI

news@ccncsu.ColoState.EDU (USENET News) (03/11/91)

`AI' is a very difficult term to describe to the layman, perhaps even more so
than other scientific terms, because there is not even total agreement within
the field itself on its goals.  However, AI researchers might agree,
 
``The mind is a machine.''
 
* Notice I said `mind' and not `brain'.  Otherwise the statement would not be
  scientifically controversial (or, in other words, the claim would not be
  worth investigating in the same way that the statement ``atoms exist'' is
  not).
 
* Notice I said `machine'.  This should be taken in a completely general way.
  It should not evoke any visions of turning gears or intricate chips.
  Anything that can be physically isolated constitutes a machine.
 
* By `is', I mean that the mind can be `completely described' in terms of
  mechanics.  The test is whether we can use the entities (namely `minds'
  and `machines') interchangeably.
 
If the mind *is* a machine, we have every reason to believe we can duplicate
and harness its essential properties, and by doing so, perhaps even *improve*
on them.  Here the literal meaning of `AI' is very informative; to elaborate,
one might say,
 
``AI is the project of duplicating the human mind in a medium other than the
human brain.''
 
* I might have said `simulating', but that is a matter of semantics.  To the
  degree that a model not only exhibits but *comprises* the characteristics/
  properties/qualities of the modelled object, it has transcended mere
  simulation to become a *duplication* of those attributes.  Hence a suitably
  robust simulation of water becomes a duplication by actually possessing the
  physical property of `wetness'.  Presumably, once we understand AI better,
  we will be able to describe very succinctly (formally, mathematically, etc.)
  the precise difference between `simulation' and `duplication', or at least
  *define* it satisfactorily.
 
* Notice I said `human'.  Clearly, many other entities exist (namely animals,
  but perhaps even inanimate objects, if there is a true strict dichotomy)
  that seem to exhibit intelligent abilities that are not readily accessible,
  computationally speaking (`learning' and `predator avoidance', for example--
  I haven't met any computers that have learned to flee power spikes or bite
  abusive users!)  But most would agree that the human is the epitome and
  pinnacle of those capacities (at least evolutionarily), so we can confine
  the study there `without loss of generality'.
 
(To some extent this is the driving force behind AI: to give computers whatever
ideal qualities seem to exist that they currently lack.  This is the infamous
`moving goalpost' definition of the realm of AI: ``all solvable problems that
computers can't currently solve.'')
 
* Notice I said `mind'.  I don't know about anyone else, but until every aspect
  of human personality is accounted for in a materialistic way, I will not be
  satisfied.  Many might be satisfied with a small subset of the whole, such
  as vision processing, speech recognition, reasoning capacities, artistic
  aptitude, or whatever.  Clearly, though, these all have the common factor of
  being domains of the human mind.
 
All of the above is where `intelligence' comes in.
 
* Notice I said `medium'.  I am not committed to `neural networks' (in the
  formal sense), computers, or even `machines' (all in the sense of current
  connotations), nor, hopefully, is anyone else.  Of course, some approaches
  seem more promising than others, especially those that actually have been
  observed to exist in the system(s) we are modelling (hint, hint).  Anyway,
  there is unanimous agreement within the field that the computer is at least
  a superbly dynamic *tool* to explore the various avenues (if not already a
  medium, or potential one, itself).
 
* Notice I said *other* than the human brain.
 
These are where `artificial' comes in.
 
* I might have said it is the `attempt', but that casts suspicion on its
  feasibility.
 
That's where *faith* comes in!
 
In this way I have tried to generalize the aims of AI to encompass all of the
present approaches and motives along with future ones that may fit under the
umbrella.  (On the other hand, it's getting rather crowded there and many
approaches and their practitioners will surely be cast out into the rain.)
 
Like alchemy, AI is a somewhat undeveloped territory, full of many confusing
and even outright conflicting approaches, accounts, and personalities
(perhaps most vividly and popularly characterized as a `bandwagon').  The
analogy may carry even further: through further understanding, we may come to
realize that the fundamental goal of AI is unachievable in the same way that
creating gold out of lead is chemically impossible.  However, a new science
would undoubtedly emerge out of the ashes of discredited theory.
 

ld231782@longs.LANCE.ColoState.EDU

feldman@rex.cs.tulane.edu (Damon Feldman) (03/11/91)

	Perhaps the problem in defining AI lies in the fact that
strong-AI proponents believe that "intelligence" is not a black-and-white
issue; it is a question of degree.
	Many people (myself included) think that the same processes that
govern the behaviour of a slug govern ours.  Of course our brains are
much, much more complex (but not infinitely more complex).  Assuming
that slugs are not intelligent and that we are, there must be a grey
area where intelligence is not clearly present or absent.  This grey
area may be at the level of bugs, birds, or whatever.
	In short, the word "intelligence" has a colloquial meaning
that is not precise enough to be used in deciding whether artificial
intelligence has been achieved, because all we really agree upon
is that slugs are not intelligent and we are.

Or so it seems to me.

Damon
-- 

Damon Feldman                  feldman@rex.cs.tulane.edu
Computer Science Dept.         Tulane University, New Orleans LA, USA

smoliar@isi.edu (Stephen Smoliar) (03/11/91)

In article <13477@ccncsu.ColoState.EDU> news@ccncsu.ColoState.EDU (USENET News)
writes:
>`AI' is a very difficult term to describe to the layman, perhaps even more so
>than other scientific terms, because there is not even total agreement within
>the field itself on its goals.  However, AI researchers might agree,
> 
>``The mind is a machine.''
> 
>* Notice I said `mind' and not `brain'.  Otherwise the statement would not be
>  scientifically controversial (or, in other words, the claim would not be
>  worth investigating in the same way that the statement ``atoms exist'' is
>  not).
> 
>* Notice I said `machine'.  This should be taken in a completely general way.
>  It should not evoke any visions of turning gears or intricate chips.
>  Anything that can be physically isolated constitutes a machine.
> 
>* By `is', I mean that the mind can be `completely described' in terms of
>  mechanics.  The test is whether we can use the entities (namely `minds'
>  and `machines') interchangeably.
> 
I guess I want to go on record as one AI researcher who is in no rush to agree
with the above statement, even in light of the meticulous elaborations.  As a
matter of fact, I try to talk about "mind" as little as possible, because it
gives me so much trouble whenever I try to engage it as a well-formed piece of
terminology.  I am certainly willing to argue that the BODY is a machine,
complete with the elaboration of "machine" stated above.  However, my plate is
full enough just trying to address why the human body behaves the way it does
(and to do so in a manner consistent with the assumption that it is a machine)
that I am too busy to worry about where "mind" comes into the picture; I will
be perfectly happy to let philosophers argue about that word after I have
achieved a few respectable concrete results!

By the way, I think this whole matter is useful for putting the recent debate
between Searle and the Churchlands into perspective.  Consider the following
formulation of an "AI" question:

	Can we analyse the behavior of a human body as if it were a
	machine to the extent that the resulting mechanics would account
	for those aspects of its behavior which we choose to call
	"intelligent?"

This is sort of an attempt to inch forward from Turing's initial approach.
Turing was perfectly willing to leave the term "intelligence" to philosophers
and focus on the behavior required to play his Imitation Game.  This question
basically proposes to push the boundaries of behavior from the Imitation Game
out to more general matters of getting on in the world.

Ultimately, the Churchlands claim that this question can be answered in the
affirmative.  The crux of their argument is that we can build intelligent
machines because the human body is such a machine.  Therefore, whatever we
need to know about intelligent behavior should be deducible from an analysis
of the appropriate mechanics.

Searle, on the other hand, is saying that there must be more to human behavior
than any sort of mechanical analysis.  What is that "more?"  Well, that's where
all the controversy lies.  Searle seems to be part of a long line of
philosophers, beginning with Brentano, who firmly believe that such
a "more" exists but have not gotten much further than giving it a name:
intentionality.  The whole point of the Chinese Room argument is not so
much to dump on artificial intelligence as to demonstrate that machines
are fundamentally incapable of having intentionality.  Given that
"intentionality" is about as elusive a piece of terminology as "intelligence"
(or Searle's favorite, "understanding"), Searle's arguments have more to do
with intimidation than with deduction.

Personally, I cannot understand why this controversy has attracted so much
fuss.  There is so much to be done by way of implementing convincing behavior
in even the most limited set of circumstances that it hardly seems worthwhile
to dwell on whether or not such behavior is REALLY intelligent.  Of course, if
one of the intentionality experts could come up with an argument as to why a
lack of intentionality would impede ever implementing that behavior, those of
us who "do" artificial intelligence would certainly be obliged to listen.  That
would be like discovering that there is no need to waste any more effort on
developing a technique to trisect an angle with straight edge and compass.
However, such a powerful argument has not yet been presented;  and because,
in my own humble opinion, "intentionality" is too slippery a word to even be
dignified with the sobriquet of "concept," I am not going to hold my breath
waiting for it.
-- 
USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066
Internet:  smoliar@venera.isi.edu

pja@neuron.cis.ohio-state.edu (Peter J Angeline) (03/11/91)

In article <13477@ccncsu.ColoState.EDU> news@ccncsu.ColoState.EDU (USENET News)
   writes:

>   Searle, on the other hand, is saying that there must be more to human
>   behavior than any sort of mechanical analysis.  What is that "more?"  Well,
>   that's where all the controversy lies.  Searle seems to be part of a long
>   line of philosophers, beginning with Brentano, who firmly believe that such
>   a "more" exists but have not gotten much further than giving it a name:
>   intentionality.  The whole point of the Chinese Room argument is not so
>   much to dump on artificial intelligence as to demonstrate that machines
>   are fundamentally incapable of having intentionality.  Given that
>   "intentionality" is about as elusive a piece of terminology as
>   "intelligence" (or Searle's favorite, "understanding"), Searle's arguments
>   have more to do with intimidation than with deduction.

You're equating "machine" with "Turing Machine".  Searle's argument is not that
"human behavior", as you've termed it, is extra-mechanical, but that it is not
adequately represented by Turing Machine formalisms and the standard notion of
"strict" AI (i.e. pure symbol manipulation).  Equating all "machines" with
"Turing machines" and "symbol manipulation" unduly restricts what we
might call a "machine".  There are more powerful methods of computation which
can still be called "machines" but which cannot be captured by a Turing machine.
Boolean circuit families (from computational theory) are an alternative model
of computation, clearly denotable as "machines", but not, in general,
representable by Turing machines.
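
To make the point concrete, here is a rough, purely illustrative sketch (in a
Python-like notation; none of the names below belong to any actual formalism)
of why a *non-uniform* circuit family escapes any single Turing machine.  Such
a family {C_n} dedicates a separate, hardwired circuit to each input length n;
for a unary language the circuit can be a constant gate, and nothing requires
the map n -> C_n to be computable:

	# Toy "circuit family" for a unary language L over inputs "1"*n.
	# Each length n gets its own hardwired circuit C_n -- here, a constant
	# gate given by one advice bit.  The bits below are made up; in general
	# the assignment n -> C_n need not be computable at all.
	ADVICE = {0: False, 1: True, 2: False, 3: True}

	def family_accepts(x: str) -> bool:
	    """Evaluate the length-len(x) member C_n of the family on x."""
	    n = len(x)
	    if n not in ADVICE:
	        raise ValueError("sketch only wires circuits for a few lengths")
	    return ADVICE[n]    # C_n is just a constant gate in this toy family

A Turing machine, by contrast, is one finite program that must handle every
input length by the same fixed rule -- which is exactly what the
diagonalization behind the halting problem exploits.  So a family whose advice
bits encode, say, a unary version of the halting problem decides a language no
Turing machine decides; in the *uniform* case, where n -> C_n is itself
computable, the family buys nothing beyond ordinary Turing computability.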

>   USPS:	Stephen Smoliar
>	   5000 Centinela Avenue  #129
>	   Los Angeles, California  90066
>   Internet:  smoliar@venera.isi.edu


--
-------------------------------------------------------------------------------
Peter J. Angeline      ! Laboratory for AI Research (LAIR)
ARPA:		       ! THE Ohio State University, Columbus, Ohio 43210
pja@cis.ohio-state.edu ! "Nature is more ingenious than we are."

smoliar@isi.edu (Stephen Smoliar) (03/12/91)

In article <PJA.91Mar11102729@neuron.cis.ohio-state.edu> pja@cis.ohio-state.edu
writes:
>
>In article <13477@ccncsu.ColoState.EDU> news@ccncsu.ColoState.EDU (USENET News)
>   writes:
>
>>   Searle, on the other hand, is saying that there must be more to human
>>   behavior than any sort of mechanical analysis.  What is that "more?"  Well,
>>   that's where all the controversy lies.  Searle seems to be part of a long
>>   line of philosophers, beginning with Brentano, who firmly believe that such
>>   a "more" exists but have not gotten much further than giving it a name:
>>   intentionality.  The whole point of the Chinese Room argument is not so
>>   much to dump on artificial intelligence as to demonstrate that machines
>>   are fundamentally incapable of having intentionality.  Given that
>>   "intentionality" is about as elusive a piece of terminology as
>>   "intelligence" (or Searle's favorite, "understanding"), Searle's arguments
>>   have more to do with intimidation than with deduction.
>
>You're equating "machine" with "Turing Machine".  Searle's argument is not that
>"human behavior", as you've termed it, is extra-mechanical, but that it is not
>adequately represented by Turing Machine formalisms and the standard notion of
>"strict" AI (i.e. pure symbol manipulation).

First of all, I should assume the credit or blame for the original quotation,
thereby taking the heat off the contributor from Colorado State who started all
this.  Secondly, the point about Searle is well taken.  However, we are still
left with the awkward position that the inadequacy of symbol-manipulating
machines lies in this lack of intentionality.  In other words, we must now
confront the question of what qualities a machine must possess to allow it
to have intentionality.  Saying it has to be more than a symbol manipulator
is not enough.  It is still necessary to be able to look at a machine, analyze
it, and conclude from that analysis whether or not it has intentionality.  The
Chinese Room argument essentially says that we cannot base our analysis on the
observed behavior of the machine.  Very well, then, what CAN we use as a basis
for our analysis?

Another way of approaching Searle is to assume that he may be flogging the
wrong horse.  Probably the horse he REALLY wants to flog is Cartesian dualism.
When he gets all "visceral" in talking about understanding, he is really saying
that you cannot talk about the mind without taking the body into account.
Since Turing's initial paper on artificial intelligence is basically dualist,
Searle seems to have concluded that all artificial intelligence is similarly
dualist.  When I heard him at UCLA, he described Minsky as "the ultimate
dualist," a description which, I think, makes little sense in light of THE
SOCIETY OF MIND, which seems more concerned with getting the story about the
body straight (or at least adequately modeled) than with trying to deal with mind
as some THING which can be abstracted away from the body.  It is probably the
case that most of what has been done in the name of knowledge representation
can be accused of dualism, but not all of the discipline should be viewed as
following in those same dualist footsteps.

>  Equating all "machines" with
>"Turing machines" and "symbol manipulation" unduly restricts what we
>might call a "machine".  There are more powerful methods of computation which
>can still be called "machines" but which cannot be captured by a Turing machine.
>Boolean circuit families (from computational theory) are an alternative model
>of computation, clearly denotable as "machines", but not, in general,
>representable by Turing machines.
>
On the basis of my above paragraph, I do not think this is the issue.  I do not
think we wish to delve into the different flavors of computable functions.
Rather, we should be exploring what it is we want out of machine behavior
and how we can hope to get it.
-- 
USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066
Internet:  smoliar@venera.isi.edu