[comp.ai] What actually is AI?

F0O@psuvm.psu.edu (08/29/90)

     Lately, I've been reading the book, "Man-Made Minds, The Promise of
Artificial Intelligence".  The one thing going through my mind is, what is
it that makes one program AI, but not another?
     I know it has nothing (or very little) to do with the language you use,
so is it the nature of the problem itself that makes a program an AI one,
or is it in how you code the problem?  Some of both?
     Can a certain problem be written using certain coding techniques, and
be an AI program, but yet be written using other coding techniques, and not
be AI?
     Personally, I don't care what a program is called, as long as it does
what it is supposed to!

                                                           [Tim]

sticklen@cps.msu.edu (Jon Sticklen) (08/30/90)

From article <90241.112651F0O@psuvm.psu.edu>, by <F0O@psuvm.psu.edu>:
>      Lately, I've been reading the book, "Man-Made Minds, The Promise of
> Artificial Intelligence".  The one thing going through my mind is, what is
> it that makes one program AI, but not another?
>      I know it has nothing(or very little) to do with the language you use,
> so is it the nature of the problem itself that makes a program an AI one,
> or is it in how you code the problem?  Some of both?
>      Can a certain problem be written using certain coding techniques, and
> be an AI program, but yet be written using other coding techniques, and not
> be AI?
>      Personally, I don't care what a program is called, as long as it does
> what it is supposed to!
> 
>                                                            [Tim]

tim,
  i think you are looking in the wrong place to find out what
ai is. in your note you emphasize programs and programming. fundamentally,
ai is a set of methodologies for analyzing problems. there are lots of
variations on that theme (eg, the way people solve problems...), but
the main point that i want to make is that ai is not a programming
methodology at all.
	---jon---

dmark@acsu.buffalo.edu (David Mark) (08/30/90)

I gave a talk once at a University in a farming region, and found out
that in Ag. Schools, "AI" stands for "Artificial Insemination".  :-)
                                                   ^^^^^^^^^^
And "they" probably used the abbreviation before "we" did!

David Mark
dmark@acsu.buffalo.edu

tesar@boulder.Colorado.EDU (Bruce Tesar) (08/30/90)

In article <34175@eerie.acsu.Buffalo.EDU> dmark@acsu.buffalo.edu (David Mark) writes:
>I gave a talk once at a University in a farming region, and found out
>that in Ag. Schools, "AI" stands for "Artificial Insemination".  :-)
>                                                   ^^^^^^^^^^
>And "they" probably used the abbreviation before "we" did!
>
>David Mark
>dmark@acsu.buffalo.edu

AI also stands for "Amnesty International", the international human rights
organization (OK, so it's not as funny as David's).

==========================
Bruce B. Tesar
Computer Science Dept., University of Colorado at Boulder 
Internet:  tesar@boulder.colorado.edu

wood@jfred.siemens.edu (Jim Wood) (08/30/90)

After being in the field for seven years, this is MY informal
definition of Artificial Intelligence:

    Artificial Intelligence is a computer science and engineering
    discipline which attempts to model human reasoning methods
    computationally.

The key, I think, to programmers is modeling the human reasoning
process.  It's also the hard part!  An algorithm is not a human
reasoning process in the sense that it does not employ the thinking,
intuitive, or subjective aspects of human reasoning.  There's no
variability based on experience or analogy in algorithms.

Any takers?

Jim
--
Jim Wood [wood@cadillac.siemens.com]
Siemens Corporate Research, 755 College Road East, Princeton, NJ  08540
(609) 734-3643

hmueller@wfsc4.tamu.edu (Hal Mueller) (08/30/90)

In article <38294@siemens.siemens.com> wood@jfred.siemens.edu (Jim Wood) writes:
>    Artificial Intelligence is a computer science and engineering
>    discipline which attempts to model human reasoning methods
>    computationally.

I've spent the last year working with a group that tries to build
models of ANIMAL reasoning methods; we use the same techniques
that you'd apply to any other AI problem.

Everything Jim said in his posting is true in this domain as well.
Shifting from human to animal reasoning doesn't make the problem
any easier.  In fact it's rather annoying to be unable to use 
introspection as a development aid:  I can watch myself solve a
problem and try to build into a program the techniques I see myself
using, but you can't ask an elk or a mountain lion what's going through
its brain.  All we can do is watch the behavior of our models and 
compare it to experimentally observed behavior, using the experience
of ethologists to guide us.

Watching elk in the mountains is much more pleasant than watching
a gripper arm pick up blocks, however.

--
Hal Mueller            			Surf Hormuz.
hmueller@cs.tamu.edu          
n270ca@tamunix.Bitnet

yamauchi@heron.cs.rochester.edu (Brian Yamauchi) (08/31/90)

In article <38294@siemens.siemens.com>, wood@jfred.siemens.edu (Jim
Wood) writes:
> After being in the field for seven years, this is MY informal
> definition of Artificial Intelligence:
> 
>     Artificial Intelligence is a computer science and engineering
>     discipline which attempts to model human reasoning methods
>     computationally.

Actually, this sounds more like the (usual) definition of Cognitive
Science (since the emphasis is on modeling human reasoning).

No doubt if you query a dozen AI researchers, you will receive a dozen
different definitions, but my definition would be:

	Artificial Intelligence is the study of how to build intelligent
	systems.

The term "intelligent" is both fuzzy and open to debate.  The usual
definition involves symbolic reasoning, but, in my opinion, a better
definition would be the ability to generate complex, goal-oriented
behavior in a rich, dynamic environment (and perhaps also the ability to
learn from experience and extend system abilities based on this
learning).  But I'm a robotics researcher, so naturally I'm biased :-).

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (08/31/90)

In article <34175@eerie.acsu.Buffalo.EDU> dmark@acsu.buffalo.edu (David Mark) writes:
>I gave a talk once at a University in a farming region, and found out
>that in Ag. Schools, "AI" stands for "Artificial Insemination".  :-)


When I tell friends I am into AI, they say, "Oh, Amnesty International?"

-Tom

F0O@psuvm.psu.edu (09/01/90)

     In following the threads of my original posting, it seems that there
is not one definition of what AI is.  However, my original question
was: what is it that makes one program an AI one, and another one non-AI?
Again, I imagine there is not one magical answer to that, but for instance,
I'm finishing up a Prolog program that plays unbeatable tictactoe.  Of
course, this is a very simple game, but would it be considered an AI program?
If not, how about a checkers or chess program?  And if they would be AI
programs, what would make them AI, but tictactoe not-AI?


                                                        [Tim]

powers@uklirb.informatik.uni-kl.de (David Powers AG Siekmann) (09/03/90)

F0O@psuvm.psu.edu writes:


>     In following the threads of my original posting, it seems that there
>is not one definition of what AI is.  However, what my original question
>was is, what is it that makes one program an AI one, and another one non-AI?
>Again, I imagine there is not one magical answer to that, but for instance,
>I'm finishing up a prolog program that plays unbeatable tictactoe.  Of
>course, this is a very simple game, but would it be considered an AI program?
>If not, how about a checkers or chess program?  And if they would be AI
>programs, what would make them AI, but tictactoe not-AI?


We have now seen two definitions; I prefer to characterize them so:

the engineering perspective: 

	to build systems to do the things we can't build systems to do
	because they require intelligence

the psychological perspective:

	to build systems to do the things we ourselves can do to help
	us to understand our intelligence

The former was the original aim, the latter came from psychology
and is represented in Margaret Boden's book: AI and Natural Man.
But it is also a natural extension of our familiar introspection.
This has now been distinguished with its own name: Cognitive Science.

Note that a corollary of the first definition is that once we can
build something, then the task no longer lies within artificial
intelligence.  AI has lost several subfields on this basis, from
pattern recognition to chess playing programs to expert systems.

I would say the real ai definition is this:

the heuristic perspective:

	to build systems relying on heuristics (rules of thumb)
	rather than pure algorithms

This excludes noughts and crosses (tic-tac-toe) and chess if the
program is dumb and exhaustive (chess) or pre-analyzed and
exhaustive (ttt).  Unfortunately it could also include expert
systems which I see as a spin-off of ai technology and by no means
main stream, but expert systems capture conscious knowledge or at
least high level knowledge.  The capture of the knowledge is
straightforward and intrinsically no different from the
introspection involved in writing any program - we think "How
would I do it by hand?"  Of course knowledge engineering techniques
can be applied to any domain, even those hard to introspect, by
using the techniques with the experts in the field - e.g. on linguists,
for natural language.  But this won't in general reveal how we
actually use language.
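David's "pre-analyzed and exhaustive" case can be made concrete.  An
unbeatable noughts and crosses player needs no heuristics at all: the full
game tree is small enough to search to the end.  A minimal sketch in Python
(my own illustration, not code from any poster; Tim's actual program is in
Prolog):

```python
# Exhaustive minimax for noughts and crosses: no heuristics, no learning.
# The whole game tree is searched to a terminal position, so the player
# is unbeatable -- yet arguably not "AI" by the heuristic definition.
from functools import lru_cache

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)        # memoize: far fewer than 3**9 positions
def minimax(board, player):
    """Return (score, move): 'X' maximizes, 'O' minimizes; a draw is 0."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    if not moves:
        return 0, None          # board full: draw
    results = []
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        results.append((score, m))
    return max(results) if player == 'X' else min(results)

score, best_move = minimax(' ' * 9, 'X')   # perfect play from the empty board
```

By the heuristic definition this is plain search, not AI: every position is
evaluated exactly, and nothing is estimated or learned.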

This brings us back to the cognitive science definition.

The definition which guides my own work is:

	to build systems which are capable of modifying their
	behaviour dynamically by learning

This takes the responsibility for acquiring and inputting the
heuristics or knowledge away from the programmer or knowledge engineer
and gives it to the program itself.  Machine Learning is a subfield of
AI, but somehow central to its future.  Expert Systems are also
really only still AI in so far as we use AI (=heuristic+learning)
techniques in the acquisition of the knowledge base.  But there is
also a lot of work to be done in establishing the foundations
within which learning is possible.
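As a toy illustration of behaviour modified dynamically by learning (my own
example, not drawn from David's systems), a perceptron acquires its decision
rule from observed examples rather than having it supplied by a programmer or
knowledge engineer:

```python
# A minimal perceptron: the decision rule (the weights) is learned from
# examples instead of being coded by hand.

def train_perceptron(examples, epochs=20, rate=0.1):
    """examples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(examples[0][0])
    w = [0.0] * n               # weights start with no knowledge at all
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # learning: nudge weights toward the target
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learns logical AND from four observations rather than from code:
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

The "knowledge" here is extracted from data by the program itself, which is
exactly the shift of responsibility described above.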

Another definition of AI is:

	Anything written in LISP or PROLOG.  

This definition (or either half thereof) is believed by some.  It
is not so silly as it sounds.  E.g., PROLOG does have something of
the property of automatically finding a way of satisfying
specifications, and logic and induction and theorem proving
technology are the underpinnings of machine learning research.
This technology can now be guided by heuristics, and these
heuristics can be learned.  It's only beginning, but it's exciting!
And, of course, you can still misuse any language!

I hope this has stirred the pot a bit.

David
------------------------------------------------------------------------
David Powers		 +49-631-205-3449 (Uni);  +49-631-205-3200 (Fax)
FB Informatik		powers@informatik.uni-kl.de; +49-631-13786 (Prv)
Univ Kaiserslautern	 * COMPULOG - Language and Logic
6750 KAISERSLAUTERN	 * MARPIA   - Parallel Logic Programming
WEST GERMANY		 * STANLIE  - Natural Language Learning

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/05/90)

In article <38294@siemens.siemens.com> wood@jfred.siemens.edu (Jim Wood) writes:
>    Artificial Intelligence is a computer science and engineering
>    discipline which attempts to model human reasoning methods
>    computationally.
>

I think this is a pretty good definition, taken from the engineers point
of view.  A psychologist might take a different view of the definition/
purpose of AI.

One thing I'd include is that it's a cognitive psychological as well as
computer science and engineering discipline.  You have to know something
about how people think in order to model human reasoning methods.

- Jim Ruehlin

erich@eecs.cs.pdx.edu (Erich Boleyn) (09/06/90)

F0O@psuvm.psu.edu writes:


>...However, what my original question
>was is, what is it that makes one program an AI one, and another one non-AI?
>Again, I imagine there is not one magical answer to that, but for instance,
>I'm finishing up a prolog program that plays unbeatable tictactoe.  Of
>course, this is a very simple game, but would it be considered an AI program?
>If not, how about a checkers or chess program?  And if they would be AI
>programs, what would make them AI, but tictactoe not-AI?


   The problem is in the hype that the term "AI" has gotten in recent years.
Some people define "AI" as any technique that uses heuristics, rule-based
logic, etc., while others (such as myself) prefer to reserve the term for
systems capable of adaptive behavior (at least).  The general trend has been
that once an algorithm is well-understood, it is not "AI" anymore.  You'll
have to decide how you define it yourself...  but a guideline that I use
is that a system not capable of any adaptive behavior at all is definitely
*not* AI (i.e. I try to reserve it for the hallmarks of intelligent behavior).

   Erich

   ___--Erich S. Boleyn--___  CSNET/INTERNET:  erich@cs.pdx.edu
  {Portland State University}     ARPANET:     erich%cs.pdx.edu@relay.cs.net
       "A year spent in           BITNET:      a0eb@psuorvm.bitnet
      artificial intelligence is enough to make one believe in God"

weigele@fbihh.UUCP (Martin Weigele) (09/06/90)

powers@uklirb.informatik.uni-kl.de (David Powers AG Siekmann) writes:

>F0O@psuvm.psu.edu writes:


>>     In following the threads of my original posting, it seems that there
>>is not one definition of what AI is.  However, what my original question
>>was is, what is it that makes one program an AI one, and another one non-AI?

I hardly read this group, but the name on our news reader (David Siekmann)
made me curious... Alas, it turned out to be David Powers AG Siekmann!
(AG here is German for "work group").

But more serious: I think that "AI" originated from the ambitious claim of
being able to have computers accomplish "intelligent" tasks of human beings.
Nowadays, many AI researchers have become more modest (not all of them, of
course). They would keep the name to stand for the tradition of certain
"philosophies" underlying the work, such as (e.g. LISP) programming
methodologies, and often a bias toward "interpretative approaches" when
representing knowledge rather than coding it in some more classical
programming language and compiling it. In a sense, Tim is right: A commercial
banking application that models the bank's needs on a computer, possibly
even programmed in Cobol, could be considered a "knowledge based
system" if we argue that the knowledge is contained in the Cobol code.

After having worked in both AI and more classical computer/computing
science environments, it is my impression that AI people often miss
a lot by a certain ideological barrier that makes them not look into
classical fields of computing, such as software engineering (This 
barrier works both ways!). It is true that often very similar things are
done under a different label. This is very striking when you notice
how little communication there is between e.g. mathematical software
engineering approaches and similar approaches from the AI tradition
(its "logical point of view" branch!).

But maybe, after all, this is just a big game called sales rather
than science, in order to obtain funding... [but don't tell 'em].

Martin.

demers@odin.ucsd.edu (David E Demers) (09/07/90)

In article <6560@uklirb.informatik.uni-kl.de> powers@uklirb.informatik.uni-kl.de (David Powers AG Siekmann) writes:
>F0O@psuvm.psu.edu writes:


>>     In following the threads of my original posting, it seems that there
>>is not one definition of what AI is.  However, what my original question
>>was is, what is it that makes one program an AI one, and another one non-AI?
>>Again, I imagine there is not one magical answer to that, but for instance,
>>I'm finishing up a prolog program that plays unbeatable tictactoe.  Of
>>course, this is a very simple game, but would it be considered an AI program?
>>If not, how about a checkers or chess program?  And if they would be AI
>>programs, what would make them AI, but tictactoe not-AI?


>We have now seen 2 definitions, I prefer to characterize them so:

>the engineering perspective: 

>	to build systems to do the things we can't build systems to do
>	because they require intelligence

>the psychological perspective:

>	to build systems to do the things we ourselves can do to help
>	us to understand our intelligence
[...]
>I would say the real ai definition is this:

>the heuristic perspective:

>	to build systems relying on heuristics (rules of thumb)
>	rather than pure algorithms
[...]
>The definition which guides my own work is:

>	to build systems which are capable of modifying their
>	behaviour dynamically by learning

[...]

>Another definition of AI is:
>
>	Anything written in LISP or PROLOG.  
>I hope this has stirred the pot a bit.


I'm still looking for the originator of the definition:

"AI is the art of making computers act like the
ones in the movies"


Dave

cam@aipna.ed.ac.uk (Chris Malcolm) (09/07/90)

In article <6560@uklirb.informatik.uni-kl.de> powers@uklirb.informatik.uni-kl.de (David Powers AG Siekmann) writes:

>F0O@psuvm.psu.edu writes:

>>     In following the threads of my original posting, it seems that there
>>is not one definition of what AI is.  However, what my original question
>>was is, what is it that makes one program an AI one, and another one non-AI?

As Brian Yamauchi has pointed out, it may well be rather silly expecting
a *program* to display intelligence. 

But, as David Powers points out:

>the psychological perspective:
>
>	to build systems to do the things we ourselves can do to help
>	us to understand our intelligence

In other words, AI is a label properly applied at the moment to research
activity rather than artefacts (such as computer programs or robots) --
because we currently can't make any (artificially) intelligent
artefacts. So by this definition there is nothing in a *program* which
makes it AI or not; what makes a program AI or not is whether it taught
its author anything original and useful about the nature of
intelligence.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

cah@cs.bham.ac.uk (Ceri Hopkins <HopkinsCA>) (09/07/90)

If a program works in theory, but not in practice... that's AI ;-)

-- 
Ceri Hopkins                                    
Dept. of Computer Science                         cah@uk.ac.bham.cs
University of Birmingham			  Tel. +44-21-414-3708

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/08/90)

In article <6560@uklirb.informatik.uni-kl.de> powers@uklirb.informatik.uni-kl.de (David Powers AG Siekmann) writes:
>We have now seen 2 definitions, I prefer to characterize them so:
>the engineering perspective: 
>	to build systems to do the things we can't build systems to do
>	because they require intelligence
>the psychological perspective:
>	to build systems to do the things we ourselves can do to help
>	us to understand our intelligence

As I understand it, these are really the definitions of Cognitive 
Science (I admit it - I'm a Cognitive Scientist), as seen from the
computer and psychological points of view.
Cognitive Science intersects AI in many areas, but there are areas where
"AI" isn't "CS".

>I would say the real ai definition is this:
>the heuristic perspective:
>	to build systems relying on heuristics (rules of thumb)
>	rather than pure algorithms

I've heard it said by some that AI was purely the discovery of heuristic
search techniques - AI is a field of complex computational searching.
Actually, I like this definition.  It avoids that sticky but important
question of "what is intelligence?", and looks at what is actually performed
when executing an AI application.
An expert system can be seen as a complex, custom search of a database
(searching for a solution), and vision recognition is classifying and
identifying previously seen or similar patterns.  There are many more
examples.
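The "AI as heuristic search" view has a common kernel: best-first search,
where a heuristic estimate h(n) -- a rule of thumb rather than a guarantee --
decides which state to examine next.  A generic sketch in Python (the grid
world and heuristic here are invented purely for illustration):

```python
# Greedy best-first search: the frontier is ordered by a heuristic
# estimate of distance to the goal, not by exhaustive enumeration.
import heapq

def best_first(start, goal, neighbours, h):
    """Return a path from start to goal, or None if unreachable."""
    frontier = [(h(start), start, [start])]   # (estimate, state, path so far)
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)   # most promising state
        if state == goal:
            return path
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# A toy state space: navigate a 4x4 grid from (0,0) to (3,3);
# the heuristic is Manhattan distance to the goal.
def neighbours(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 4 and 0 <= y + dy < 4]

path = best_first((0, 0), (3, 3), neighbours,
                  lambda p: abs(3 - p[0]) + abs(3 - p[1]))
```

The heuristic makes the search fast but fallible: unlike the exhaustive case,
the path found is not guaranteed shortest (that would take A*, ordering by
cost-so-far plus estimate), which is just the trade-off the "rules of thumb"
definition describes.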

>This brings us back to the cognitive science definition.
>The definition which guides my own work is:
>	to build systems which are capable of modifying their
>	behaviour dynamically by learning

I'd say this was a definition of machine learning, a subset of AI and/or
Cognitive Science.  I don't believe an intelligent system has to learn
to be intelligent.  Some expert systems display intelligent behaviour
but don't have the capacity for learning.

>Another definition of AI is:
>	Anything written in LISP or PROLOG.  

Sorry, I can't accept that the implementation platform or language defines
intelligence.  I can write a LISP program that adds numbers, but this is
no display of intelligent behaviour.


- Jim Ruehlin

yamauchi@cs.rochester.edu (Brian Yamauchi) (09/08/90)

In article <2982@aipna.ed.ac.uk>, cam@aipna.ed.ac.uk (Chris Malcolm) writes:
> In article <6560@uklirb.informatik.uni-kl.de> powers@uklirb.informatik.uni-kl.de (David Powers AG Siekmann) writes:
> 
> >F0O@psuvm.psu.edu writes:
> 
> >>     In following the threads of my original posting, it seems that there
> >>is not one definition of what AI is.  However, what my original question
> >>was is, what is it that makes one program an AI one, and another one non-AI?
> 
> As Brian Yamauchi has pointed out, it may well be rather silly expecting
> a *program* to display intelligence. 

Well... that's not exactly what I meant.  Although I was defining AI as
a research field, I *do* think it is possible for systems (autonomous
robots, for example) to display intelligence -- at least at the level of
very primitive animals (e.g. insects).

> But, as David Powers points out:
> 
> >the psychological perspective:
> >
> >	to build systems to do the things we ourselves can do to help
> >	us to understand our intelligence
> 
> In other words, AI is a label properly applied at the moment to research
> activity rather than artefacts (such as computer programs or robots) --
> because we currently can't make any (artificially) intelligent
> artefacts. So by this definition there is nothing in a *program* which
> makes it AI or not; what makes a program AI or not is whether it taught
> its author anything original and useful about the nature of
> intelligence.

I think this is a valid way to judge the usefulness of a research
program, but I also think it is reasonable to say that certain
"intelligent systems" do display certain limited forms of intelligence. 
Some display low-level animal-like intelligence (behavior-based robots,
etc.) while others display very narrow and very brittle slices of human
intelligence (expert systems, chess-playing programs, etc).

To answer the original poster's question: there is no clear criterion
to determine that program X is an "AI program" and program Y is not,
primarily because there is no clear definition of intelligence.  A better
way to look at the situation is that AI researchers are writing programs
which they hope will:

	(a) possess and display certain (limited) forms of intelligence
	(b) teach them something about the nature of human intelligence
	(c) show them how to build progressively more intelligent machines
	(d) have useful applications to tasks currently requiring intelligence
	(e) serve as an intermediate step toward the creation of truly
	    intelligent systems
	(f) help them get a major research grant
	(g) help them get tenure
	(h) all of the above

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________

cam@aipna.ed.ac.uk (Chris Malcolm) (09/15/90)

In article <1990Sep7.203744.4326@cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>In article <2982@aipna.ed.ac.uk>, cam@aipna.ed.ac.uk (Chris Malcolm) writes:

>> As Brian Yamauchi has pointed out, it may well be rather silly expecting
>> a *program* to display intelligence. 

>Well... that's not exactly what I meant.  Although I was defining AI as
>a research field, I *do* think it is possible for systems (autonomous
>robots, for example) to display intelligence -- at least at the level of
>very primitive animals (e.g. insects).

I suspect we do agree: the distinction I was making was between a
*program* and an *autonomous system* such as a robot. Consider a robot
that we agree is intelligent, and has a computer for a brain. The
intelligence of the robot does not necessarily imply that there is an
intelligent program running in the computer (robot's brain).
Consequently, if you are trying to build an intelligent robot, and your
strategy is to first of all build a stupid robot, expecting later to be
able to add intelligence in the form of a driving computer running an
intelligent program, then you may well be giving yourself an
unnecessarily difficult -- or even impossible -- task. Yet this is the
strategy implied by many AI research programmes. And I thought I had
seen Brian Yamauchi making similar sorts of distinction between programs
(which run in computers) and systems (which may contain programs as
components, and which run in the physical world).

>> In other words, AI is a label properly applied at the moment to research
>> activity rather than artefacts (such as computer programs or robots) --
>> because we currently can't make any (artificially) intelligent
>> artefacts. So by this definition there is nothing in a *program* which
>> makes it AI or not; what makes a program AI or not is whether it taught
>> its author anything original and useful about the nature of
>> intelligence.

>I think this is a valid way to judge the usefulness of a research
>program, but I also think it is reasonable to say that certain
>"intelligent systems" do display certain limited forms of intelligence. 
>Some display low-level animal-like intelligence (behavior-based robots,
>etc.) while others display very narrow and very brittle slices of human
>intelligence (expert systems, chess-playing programs, etc).

Quite so, and behaviour-based robots don't necessarily owe their
"intelligence" to computer programs. In fact, I hope -- just to make the
point -- to be able to show (very!) limited "intelligence" in a
robot without any computational facilities.

"Intelligent" is an adjective that describes a kind of purposeful and
adaptable behaviour by a creature (autonomous system) going about its
business in its environment. I suspect that analysis of the meaning of
"intelligent" will inevitably involve reference to the physical
composition and structure of the creature, its environment, its
purposes, and how these all interact (sensors and effectors); and that
describing the interaction cannot be done except by a multi-level
description, in which lower levels create the terms in which higher
levels are described (like -- but more complex than -- multiple levels
of interpretation in computer languages). What the brain (or computer)
does may well be an essential component of intelligence, but IMHO it
cannot possibly be intelligent on its own, any more than an internal
combustion engine can fly or a floating-point processor see.

What misleads many people is the supposition that intelligence is what
clever well-educated people (such as professors) have more of than those
who did badly at school. What professors can do better is formal
abstract reasoning, as displayed in argument, maths, expert diagnosis,
etc.. It is easy to make computers do these things that professors are
good at, MUCH easier than what everybody can do without thinking, such
as walk, catch balls, and observe the moving world spread out before
them in full three-dimensional splendour. Although we have implemented
lots of "professorish" AI systems based on various kinds of reasoning,
we hesitate to call them "intelligent" because they have no common sense
or general knowledge, demonstrating hilarious stupidities as soon as one
steps beyond their narrow domains of competence. And implementing a
system with common-sense -- which we all have -- is going to be many
orders of magnitude harder than implementing the specialised abilities
of this or that professor.

After all, why should our educational and social scales of "cleverness"
ranking be a good guide to the functional underpinnings of intelligent
behaviour? IMHO trying to attack artificial intelligence by making
systems do what we consider to be "clever" is about as silly as trying
to make systems with good table manners.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

person@plains.NoDak.edu (Brett G. Person) (09/16/90)

In article <34175@eerie.acsu.Buffalo.EDU> dmark@acsu.buffalo.edu (David Mark) writes:
>I gave a talk once at a University in a farming region, and found out
>that in Ag. Schools, "AI" stands for "Artificial Insemination".  :-)
>                                                   ^^^^^^^^^^
>And "they" probably used the abbreviation before "we" did!
>
I'm sure they did.  I remember a couple of years ago when I told my aunts
and uncles - who are mostly farmers - that I wanted to specialize in AI.
I got some pretty strange looks. 

"You want to be an Engineer?  Gee, we didn't even know you liked trains."
                                -A quote from an old uncle of mine
-- 
Brett G. Person
North Dakota State University
uunet!plains!person | person@plains.bitnet | person@plains.nodak.edu

person@plains.NoDak.edu (Brett G. Person) (09/16/90)

I had an instructor tell me once that AI was anything that hadn't
been done yet.

He said that once an AI application had been written, no one considered
it to be AI anymore.
-- 
Brett G. Person
North Dakota State University
uunet!plains!person | person@plains.bitnet | person@plains.nodak.edu