[comp.ai] The future of AI - my opinion

saal@sfsup.UUCP (S.Saal) (03/31/88)

I think the pessimism about AI is a bit more subtle.  Whenever
something is still only vaguely understood, it is considered a
part of AI.  Once we start understanding the `what,' `how,' and
(sometimes) `why' we no longer consider it a part of AI.  For
example, all robotics used to be part of AI.  Now robotics is a
field unto itself and only the more difficult aspects (certain
manipulations, object recognition, etc.) are within AI anymore.
Similarly so for expert systems.  It used to be that ES were
entirely within the purview of AI.  That was when the AI folks
had no real idea how to do ESs and were trying all sorts of
methods.  Now they understand them and two things have happened:
expert systems are an independent branch of computer science and
people have found that they no longer need to rely on the
(advanced) AI type languages (lisp, etc) to get the job done.

Ironically, this makes AI a field that must make itself obsolete.
As more areas become understood, they will break off and become
their own field.  If not for finding new areas, AI would run out
of things for it to address.

Does this mean it isn't worthwhile to study AI?  Certainly not.
If for no other reason than AI is the think tank, problem
_finder_ of computer science.  So what if no problem in AI itself
is ever solved?  Many problems that used to be in AI have been,
or are well on their way to being, solved.  Yes, the costs are
high, and it may not look as though much is actually coming out
of AI research except for more questions, but asking the
questions and looking for the answers in the way that AI does,
is a valid and useful approach.
-- 
Sam Saal         ..!attunix!saal
Vayiphtach HaShem et Peah HaAtone

boris@hawaii.mit.edu (Boris N Goldowsky) (04/03/88)

In article <2979@sfsup.UUCP> saal@sfsup.UUCP (S.Saal) writes:

   Ironically, this makes AI a field that must make itself obsolete.
   As more areas become understood, they will break off and become
   their own field.  If not for finding new areas, AI would run out
   of things for it to address.

Isn't that true of all sciences, though?  If something is understood,
then you don't need to study it anymore.

I realize this is oversimplifying your point, so let me be more
precise.  If you are doing some research and come up with results that
are useful, people will start using those results for their own
purposes.  If the results are central to your field, you will also
keep expanding on them and so forth.  But if they are not really of
central interest, the only people who will keep them alive are these
others... and if, as in the case of robotics, they are really useful
results they will be very visibly and profitably kept alive.  But I
think this can really happen in any field, and in no way makes AI
"obsolete."

Isn't finding new areas what science is all about?

Bng


--
Boris Goldowsky     boris@athena.mit.edu or @adam.pika.mit.edu
                         %athena@eddie.UUCP
                         @69 Chestnut St.Cambridge.MA.02139
    	    	    	 @6983.492.(617)

boris@hawaii.mit.edu (Boris N Goldowsky) (04/08/88)

In article <28619@aero.ARPA> srt@aero.ARPA (Scott R. Turner) writes:

   Eventually we'll build a computer that can pass the Turing Test and
   people will still be saying "That's not intelligence, that's just a
   machine."
				   -- Scott Turner
This may be true, but at the same time the notion that a machine could
never think is slowly being eroded away.  Perhaps by the time such a
"Turing Machine"* could be built, "just a machine" will no longer
imply non-intelligence, because there'll be too many semi-intelligent
machines around.

But I think it is a good point that every time we do begin to understand
some subdomain of intelligence, it becomes clear that there is much
more left to be understood...

					->Boris G.

(*sorry.)
--
Boris Goldowsky     boris@athena.mit.edu or @adam.pika.mit.edu
                         %athena@eddie.UUCP
                         @69 Chestnut St.Cambridge.MA.02139
    	    	    	 @6983.492.(617)

srt@aero.ARPA (Scott R. Turner) (04/08/88)

I think the important point is that as soon as AI figures something out,
it is not only no longer considered to be AI, it is also no longer considered
to be intelligence.

Expert systems are a good example.  The early theory was: let's try to
build programs that behave like experts, and that will give us some idea of why
those experts are intelligent.  Nowadays, people say "expert
systems - oh, that's just rule application."  There's some truth to
that viewpoint - I don't think expert systems have a lot to say about
intelligence - but it's a bad trap to fall into.  
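
To make the "just rule application" characterization concrete, here is a
minimal sketch of the forward-chaining loop that sits at the core of many
classic expert systems.  The sketch is illustrative only (written here in
Python); the facts and rules are invented for the example and do not come
from any particular system:

# Minimal forward-chaining rule engine - illustrative only.
# Facts are strings; each rule maps a set of premise facts to one conclusion.

rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
    ({"is_bird"}, "is_animal"),
]

def forward_chain(facts, rules):
    """Fire any rule whose premises are all known, adding its conclusion,
    until no rule produces a new fact (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}, rules))
# -> adds "is_bird", "is_animal", "is_flightless_bird"

Real systems add conflict resolution, certainty factors, and explanation
facilities on top of this, but the core loop is essentially the "rule
application" being dismissed above.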

Eventually we'll build a computer that can pass the Turing Test and
people will still be saying "That's not intelligence, that's just a
machine."
						-- Scott Turner

cdfk@otter.hple.hp.com (Caroline Knight) (04/08/88)

The Turing Test is hardly adequate - I'm surprised that people
still bring it up.  Indeed, it is exactly because people's
expectations change with what they have already seen a computer do
that this is a test with continuously changing criteria.

For instance, take someone who has never heard of computers
and show them any competent game-playing program: the technically
unsophisticated may well believe the machine is playing
intelligently (I have trouble with my computer beating
me at Scrabble), but those who have become familiar with
such phenomena "know better" - it's "just programmed".

The day when we have won is the inverse of the Turing Test - someone
will say this has to be a human not a computer - a computer 
couldn't have made such a crass mistake  - but then maybe
the computer just wanted to win and looked like a human... 

I realise that this sounds a little flippant but I think that
there is a serious point in it - I rely on your abilities
as intelligent readers to read past my own crassness and 
understand my point.

Caroline Knight

mrspock@hubcap.UUCP (Steve Benz) (04/11/88)

From article <2070012@otter.hple.hp.com>, by cdfk@otter.hple.hp.com (Caroline Knight):
> The Turing Test is hardly adequate - I'm surprised that people
> still bring it up...
> 
> The day when we have won is the inverse of the Turing Test - someone
> will say this has to be a human not a computer - a computer 
> couldn't have made such a crass mistake...
>
> ...Caroline Knight

  Isn't this exactly the Turing test (rather than the inverse)?
A computer being just as human as a human?  Well, either way,
the point is taken.

  In fact, I agree with it.  I think that in order for a machine to be
convincing as a human, it would need to have the bad qualities of a human
as well as the good ones, i.e.  it would have to be occasionally stupid,
arrogant, ignorant, and so forth.

  So, who needs that?  Who is going to sit down and (intentionally)
write a program that has the capacity to be stupid, arrogant, or ignorant?

  I think the goal of AI is somewhat askew of the Turing test.
If a rational human develops an intelligent computer, it will
almost certainly have a personality quite distinct from any human.

				- Steve
				mrspock@hubcap.clemson.edu
				...!gatech!hubcap!mrspock

RLWALD@pucc.Princeton.EDU (Robert Wald) (04/12/88)

In article <1348@hubcap.UUCP>, mrspock@hubcap.UUCP (Steve Benz) writes:
 
>  Isn't this exactly the Turing test (rather than the inverse)?
>A computer being just as human as a human?  Well, either way,
>the point is taken.
>
>  In fact, I agree with it.  I think that in order for a machine to be
>convincing as a human, it would need to have the bad qualities of a human
>as well as the good ones, i.e.  it would have to be occasionally stupid,
>arrogant, ignorant, and so forth.
>
>  So, who needs that?  Who is going to sit down and (intentionally)
>write a program that has the capacity to be stupid, arrogant, or ignorant?
 
 
  I think that you are missing the point.  It's because you're using charged
words to describe humans.
 
Ignorant: Well, I would certainly expect an AI to be ignorant of things
or combinations of things it hasn't been told about.
 
Stupid: People are stupid either because they don't have proper procedures
to deal with information, or because they are ignorant of the real meaning
of the information they do possess and thus use it wrongly. I don't see
any practical computer having some method of always using the right procedure,
and I've already said that I think it would be ignorant of certain things.
People think and operate by using a lot of heuristics on an incredible
amount of information - so much that it is probably hopeless to develop
perfect algorithms, even with a very fast computer.  So I think that computers
will have to use these heuristics also.
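
As a toy illustration of that tradeoff (my own sketch, in Python, with
made-up items and numbers): picking items under a weight limit can be solved
"perfectly" by trying every subset, which blows up exponentially, or
approximately by a cheap value-per-weight heuristic that is fast and usually
good enough:

from itertools import combinations

# (name, weight, value) - invented example data
items = [("map", 9, 150), ("compass", 13, 35), ("water", 153, 200),
         ("sandwich", 50, 160), ("glucose", 15, 60)]
limit = 100

def exact_best(items, limit):
    # Exhaustive search over all 2^n subsets - the "perfect algorithm".
    best_value = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w, _ in combo) <= limit:
                best_value = max(best_value, sum(v for _, _, v in combo))
    return best_value

def greedy(items, limit):
    # Heuristic: take the best value-per-weight items first.
    # Fast, but not guaranteed to be optimal.
    total_w = total_v = 0
    for name, w, v in sorted(items, key=lambda x: x[2] / x[1], reverse=True):
        if total_w + w <= limit:
            total_w += w
            total_v += v
    return total_v

print(exact_best(items, limit), greedy(items, limit))

The exhaustive version is already impractical for a few dozen items; the
heuristic stays cheap but can be fooled - which is roughly the kind of
'stupidity' described above.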
  Eventually, we may develop methods that are more powerful and reliable
than humans. Computers are not subject to the hardware limitations of the
brain.  But meanwhile I don't think that the qualities you have mentioned are
'bad' qualities of the brain, nor are they inapplicable to computers.
 
Arrogance: It is unlikely that people will attempt to give computers
emotions for some time. On the other hand, I try not (perhaps
failing at times) to be arrogant or nasty. But as far as the Turing
test is concerned, a computer which can parse real language could
conceivably parse for emotional content and be programmed to
respond. There may even be some application for this, so it may
be done. The only application for simulating arrogance
might be if you are really trying to fool workers into thinking
their boss is a human, or at least trying to make them forget it
is a computer.
 
I'm not really that concerned with arrogance, but I think that
AIs could be very 'stupid' and 'ignorant' - not ones that deal with limited
domains, but ones that are going to operate in the real world.
-Rob Wald                Bitnet: RLWALD@PUCC.BITNET
                         Uucp: {ihnp4|allegra}!psuvax1!PUCC.BITNET!RLWALD
                         Arpa: RLWALD@PUCC.Princeton.Edu
"Why are they all trying to kill me?"
     "They don't realize that you're already dead."     -The Prisoner

channic@uiucdcsm.cs.uiuc.edu (04/13/88)

In article <1348@hubcap.UUCP>, mrspock@hubcap.UUCP (Steve Benz) writes:
 
>  In fact, I agree with it.  I think that in order for a machine to be
>convincing as a human, it would need to have the bad qualities of a human
>as well as the good ones, i.e.  it would have to be occasionally stupid,
>arrogant, ignorant, and so forth.
>
>  So, who needs that?  Who is going to sit down and (intentionally)
>write a program that has the capacity to be stupid, arrogant, or ignorant?
 
Another way of expressing the apparent necessity for bad qualities "for a
machine to be convincing as a human" is to say that free will is fundamental
to human intelligence.  I believe this is why the reaction to any "breakthrough"
in intelligent machine behavior is always "but it's not REALLY intelligent,
it was just programmed to do that."  Choosing among alternative problem
solutions is an entirely different matter than justifying or explaining
an apparently intelligent solution.  In complex problems of politics, economics,
computer science, and I would even venture to say physics, there are no right
or wrong answers, only opinions (which are choices), which are judged as such
on the basis of creativity and how much they agree with the choices of those
considered expert in the field.  I think AI by and large ignores
the issue of free will as well as other long-standing philosophical problems
(such as the mind/brain problem) which lie at the crux of developing machine
intelligence.  Of course there is not much grant money available for addressing
old philosophy.  This view is jaded, I admit, but five years of experience in
the field has led me to believe that AI is not the endeavor to make machines
that think, but rather the endeavor to make people think that machines can
think.


tom channic
uiucdcs.uiuc.dcs.edu
{ihnp4|decvax}!pur-ee!uiucdcs!channic