[comp.ai] Where does one get started??

mgallagh@uhasun.hartford.edu (Michael Gallagher) (05/20/91)

	Greetings, all. Having read this newsgroup for a while, I have
grown very interested in the field of Artificial Intelligence. I realize
that it is not an area one can just start in as a straight discipline at
the undergraduate level. In fact, this university only offers one intro AI
class: an overview of the different areas of the field.

	Therefore, my question is: what are the best ways to get into
the field of AI? What are the best starting points, background(s) to have, etc.?
	Any info/advice/etc would be appreciated.


	-M. Gallagher
 

gal2@quads.uchicago.edu (Jacob Galley) (05/22/91)

In article <622@ultrix.uhasun.hartford.edu> mgallagh@uhasun.hartford.edu (Michael Gallagher) writes:
>	...Therefore, my question is: what are the best ways to get into
>the field of AI? What are the best starting points, background(s) to have?

I am in the same position as you. You should look around in the philosophy
and psychology departments, and if semantics and meaning interest you, check
out linguistics. Basically, I'm finding that the best way to prepare myself
for CogSci/AI (I'm not even that sure yet what the difference is) is to build
my own major out of a loosely structured program here called "Philosophy and
Allied Fields."

Good luck!



-- 
-- Jacob Galley
   gal2@midway.uchicago.edu
   University of Chicago.

clin@eng.umd.edu (Charles Chien-Hong Lin) (05/26/91)

In article <1991May22.154503.10564@midway.uchicago.edu>, gal2@quads.uchicago.edu (Jacob Galley) writes:
> In article <622@ultrix.uhasun.hartford.edu> mgallagh@uhasun.hartford.edu (Michael Gallagher) writes:
> >	...Therefore, my question is: what are the best ways to get into
> >the field of AI? What are the best starting points, background(s) to have?
> 
> I am in the same position as you. You should look around in the philosophy
> and psychology departments, and if semantics and meaning interest you, check
> out linguistics. Basically, I'm finding that the best way to prepare myself
> for CogSci/AI (I'm not even that sure yet what the difference is) is to build
> my own major out of a loosely structured program here called "Philosophy and
> Allied Fields."
> 
> Good luck!
> 
> 

   AI, to me, is a conglomeration of different things.  Folks in
philosophy might worry about what intelligence is.  In psychology, one
might worry about the same thing, but perhaps in a different manner.
Anyway, I'm not suited to explain AI in that fashion.  I'm better
suited to explain it from a computer science viewpoint.
   From this viewpoint, it would be better to get into computer
science in general.  AI, at least the way it's practiced among
computer scientists, is the use of computer science techniques
to simulate human intelligence.  So unless you know what a graph
or tree is, knowing AI without knowing CS in general won't be too
useful.
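
   Just to show what I mean by graphs and trees: a lot of classic AI
boils down to searching a graph of states for a path to a goal.  Here
is a toy sketch in Python (the graph and the names are made up purely
for illustration):

    from collections import deque

    # A toy state-space graph: states are nodes, legal moves are edges.
    GRAPH = {
        "start": ["a", "b"],
        "a": ["goal"],
        "b": ["a"],
        "goal": [],
    }

    def breadth_first_search(start, goal):
        """Return a path from start to goal, exploring level by level."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            state = path[-1]
            if state == goal:
                return path
            for nxt in GRAPH[state]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None

    print(breadth_first_search("start", "goal"))   # ['start', 'a', 'goal']
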
   Now, whether CS techniques will do the job of simulating intelligence,
that's a different question.  I take a more pragmatic view.  If
it appears to do things more "intelligently", well, good enough.
I don't think AI has to produce things that are intelligent exactly
as we understand it, just as a plane does not imitate a bird
except in that it flies.
   Probably you should go ahead and take the course.  It will give
you an overall feel for what AI is.  The name "AI" often suggests
ideas that are not borne out in reality.  Until you take a course and
see what it's really like, you won't get an accurate view of AI.
If you're still interested, try doing a project with a professor
who does work in AI.  If you are interested in the computing
aspects of AI, then a degree in computer science would be the
way to go.  If you prefer linguistics or semantics without so
much of the programming, then philosophy/psychology/cog. sci.
may be the way to go.
   I'm not too clear on what cognitive science is myself.  Let me
briefly mention some of the topics in AI.

    Planning  --  Start with some initial state (a bunch of
                  blocks lying on a table) and some operators
                  (how to move a robot arm).  Attempt to come
                  up with a plan that achieves a goal state
                  (have the blocks piled up).  Planning might
                  be useful for having, say, a robot try to
                  carry out a task without help.  (A toy
                  sketch of this appears after the list.)

    Expert Systems --  The most commercial of the topics.
                       Emulate the behavior of an expert with a
                       lot of if-then rules ("if the coffee cup
                       is dropped, then the ground is wet").
                       (See the sketch after the list.)

    Neural networks -- Use networks loosely modeled on the brain.
                       Basically, a network of inputs and some
                       weighted connections that produce a desired
                       output.  Networks are "trained" more than
                       "programmed" in the usual sense of the word.
                       (Again, see after the list.)

    Natural language -- Try to understand human speech.  You need to
                        know things like context-free grammars.

    Uncertain Reasoning -- Not exactly a subdiscipline.  Concerns
                        how one should reason when your knowledge
                        is uncertain.  Uses probability among
                        other methods.

    Automated Theorem Proving --  The most mathematical of the group.
                        Attempts to find efficient ways of proving
                        mathematical theorems.

    Machine vision  -- Getting a computer to "see" or identify objects.

    Robotics -- How to get a robot to move around on uncertain
                terrain.  How to get robots to figure out how to
                carry out certain tasks.

    Machine Learning -- Trying to get a machine to "learn".  One version
                        is to use a bunch of logical clauses and decide
                        which rules are meaningful to learn.

    Computational Machine Learning -- Things like identifying finite
                        automata.  Deals with efficient methods of
                        doing so, but the domains are usually
                        mathematical objects.

    Commonsense reasoning -- How do people learn commonsense ideas?

    Qualitative physics -- How do you represent the idea that if
                           an object falls, it breaks, and so on?
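
   Here is the toy sketch of the planning idea, in Python (the
operators, facts, and block names are all invented for illustration;
real planners are far more elaborate):

    # A tiny forward-search planner over STRIPS-style operators.
    # States are sets of facts; an operator applies when its
    # preconditions hold, then deletes some facts and adds others.
    from collections import deque

    OPERATORS = [
        # (name, preconditions, deletions, additions) -- all made up
        ("stack A on B", {"clear A", "clear B", "A on table"},
                         {"clear B", "A on table"},
                         {"A on B"}),
        ("unstack A from B", {"A on B", "clear A"},
                             {"A on B"},
                             {"clear B", "A on table"}),
    ]

    def plan(initial, goal):
        """Breadth-first search from the initial state to a state
        in which every goal fact holds."""
        frontier = deque([(frozenset(initial), [])])
        seen = {frozenset(initial)}
        while frontier:
            facts, steps = frontier.popleft()
            if goal <= facts:                     # goal achieved
                return steps
            for name, pre, dels, adds in OPERATORS:
                if pre <= facts:                  # operator applicable
                    nxt = frozenset((facts - dels) | adds)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [name]))
        return None

    initial = {"A on table", "B on table", "clear A", "clear B"}
    print(plan(initial, {"A on B"}))              # ['stack A on B']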
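
   And a toy sketch of the expert-system idea (again, the rules are
invented; commercial systems use much richer rule languages and
inference engines):

    # A toy forward-chaining rule interpreter: keep firing if-then
    # rules until no new conclusions appear.
    RULES = [
        ({"coffee cup dropped"}, "coffee spilled"),
        ({"coffee spilled"}, "ground is wet"),
        ({"ground is wet"}, "mop the floor"),
    ]

    def forward_chain(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # ends up containing "coffee spilled", "ground is wet",
    # and "mop the floor" as well
    print(forward_chain({"coffee cup dropped"}))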
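
   Finally, a toy sketch of the neural-network idea: a single
artificial "neuron" whose weights get nudged after each mistake until
it computes the logical AND of its two inputs.  (This is the classic
perceptron training rule, stripped to the bone; real networks have
many units and layers.)

    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias (threshold)
    rate = 0.1       # how much to adjust per mistake

    def output(x):
        s = w[0] * x[0] + w[1] * x[1] + b
        return 1 if s > 0 else 0

    for _ in range(20):                  # a few passes over the data
        for x, target in examples:
            error = target - output(x)   # 0 if right, +1/-1 if wrong
            w[0] += rate * error * x[0]
            w[1] += rate * error * x[1]
            b += rate * error

    print([output(x) for x, _ in examples])   # [0, 0, 0, 1] once trained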

   This is only a brief overview, but it covers most of the major topics.
Some I'm sure I've left out; others overlap, or do not yet constitute
a distinct subfield of AI.  Even some of the terminology is suspect.
When I say "learning", I do not mean learning as you might ordinarily
think of it.  Like I said, one way is to have a bunch of logical
clauses, etc.  Sometimes you want to prove something.  If it is proved,
then maybe some of the rules that were deduced can be kept around.
That makes for more efficient access the next time something has to
be proved.
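
   In code, one crude reading of that kind of "learning" might look
like this (the rules and goals are invented; real systems are much
more careful about what they keep):

    # Once a goal has been proved from the rules, remember it, so the
    # next proof that needs it can use it directly.
    RULES = {"mortal(socrates)": ["man(socrates)"],
             "man(socrates)": []}          # facts have no premises

    proved = set()                         # the "learned" results

    def prove(goal):
        if goal in proved:                 # already learned, no work needed
            return True
        premises = RULES.get(goal)
        if premises is None:
            return False
        if all(prove(p) for p in premises):
            proved.add(goal)               # keep the deduced result around
            return True
        return False

    print(prove("mortal(socrates)"))       # True
    print(proved)                          # now contains both conclusions
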
   You can see that there is a lot of CS flavor in this.  Basically,
CS has come up with a bunch of mathematical techniques, and since
those are pretty well understood, people try to apply them to the
task of AI.  It's tough to tell whether this is the right approach,
but with nothing more innovative around at the moment, that's the
technique being used.  To me, AI should gear itself toward making
things that are practically useful (more or less), and not toward
purely philosophical ideas, but I haven't really given the topic
much thought.

--
   ____         _
  /    |     __|_|       clin@eng.umd.edu
 |             |         
 |  harles    |  in      "University of Maryland Institute of Technology"
 |          _|
  \_____/  |_|\___/