[comp.ai] Value of AI, was ->

erich@eecs.cs.pdx.edu (Erich Boleyn) (08/14/90)

   I know that I am going off of the subject a bit, but I feel that this
is relevant to the original question...

In <3156@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:
>In <2860@bruce.cs.monash.OZ.AU>, by frank@bruce.cs.monash.OZ.AU (Frank Breen):
>> ... To me the turing test only tests if a computer can imitate human
>> intelligence (and presumably human thought). ...
>
>"I think, therefore I am" - I still haven't found any proof that other
>people exist, I merely choose to BELIEVE they do. Imitation is a nice
>concept to pose when trying to undermine the Turing Test, but something
>clever enough to imitate a human being well enough to fool a human
>interrogator, must be of equivalent or higher intelligence itself.
>Remember, ANY question is fair game in the Turing Test.

   Philip has an excellent point here: we know other humans are
intelligent by convention, not by any innate knowledge.  We are effectively
running constant and numerous advanced versions of Turing Tests on
everything around us.  How do you decide whether someone you meet is more
or less intelligent?  (Say you heard that they were a genius or
something similar.)  You would tend to probe how flexibly they deal with
different things...  everyone does such things to some extent, especially
with someone you don't know.  We also tend to act like this toward
animals that we come into contact with (unless we are trying not to act
foolish ;-).  Personally, though, I feel that any "AI system" that passes
the Turing Test robustly (i.e. over several trials, etc.) is probably
more intelligent than most humans anyway, since it has to deal with an alien
environment and convince a NATIVE of this environment that it is one of
them, in the native's own language, yet.
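
   (To make "robustly" concrete, here is a minimal sketch, in Python, of
repeated hidden-pairing trials.  The judge, candidate, and human functions
are hypothetical stand-ins of my own invention, not any established
protocol; a candidate "passes robustly" if, over many trials, the judge
does no better than chance at picking out the machine.)

import random

def run_trials(judge, candidate, human, n_trials=20):
    """Run repeated Turing-style trials.  In each trial the judge
    interrogates two unlabeled parties and guesses which one is the
    human; we count how often the machine was the one picked."""
    fooled = 0
    for _ in range(n_trials):
        pair = [("machine", candidate), ("human", human)]
        random.shuffle(pair)                   # hide who is who
        guess = judge(pair[0][1], pair[1][1])  # judge returns index 0 or 1
        if pair[guess][0] == "machine":
            fooled += 1
    return fooled / n_trials                   # chance level is 0.5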

>> ... I'm not convinced that it is a good idea to have an AI that is so close
>> to being human.  If computers can do all the thinking that people can (and
>> presumably better) then what's the point in any humans thinking.
>We would be reduced to being amused by the AIs (presuming they're
>nice) and all useful thought would be done by the AIs. ...
>
>There are a thousand good reasons for pursuing AI...
>                     ...Deep space exploration, deep sea exploration,
>replacement of humans in life endangering jobs, etc, etc. I, for one,
>will not stop thinking just because of the advent of AI. I wouldn't
>advise arming them with nuclear weapons but that's a different issue.

   Well, Frank, at this point it is good to ask what AI would be good for.
Philip's list sounds more reasonable for the near future than your scenario...
in fact your idea would probably not come into being (if at all) for a
LONG time, and who knows, by then linear technological progression may not
be the norm any more.

   Two things that I've learned from studying technology, science, and
especially AI are:   1) technology does not advance into what our dreams of
it were (Jules Verne, and other famous examples).  I have found that,
if anything, instead of climbing to a peak, it builds up a little bit,
then spends a lot of time settling down into areas that we never thought of.
For instance, people did dream of things flying through the sky, but where
were molecular biology and microchips in those dreams?  It is too easy an
answer just to say that people were naive about such things.  Perhaps we are
naive about the advances to take place 50 or more years from now, or,
frighteningly, even 10 or 20.   2) quite a bit of what I have learned from
studying AI consists of applications that make me think "yeah, AI-type
systems would be GREAT for that", but more on that in a bit.

>
>> ... What AI should do is let us humans keep doing what we're good at
>> and let the AI's do what they are better at. ...
>
>We might well be extremely bad at it - there could be thousands of species
>of creatures throughout the Galaxy more intelligent than we are, and we're
>so smart we can't even think of a way to prove whether they exist or not.
>

   Back to what I was saying about what AI is good for.  There are potential
ramifications (of course) in an immense number of fields of study and in
useful applications for almost anything that you could think of.  Most of
them, however, don't require human-scale intelligence or consciousness
(if that eases your worries, Frank).  In fact, quite a few of the ones I
have thought of as being useful to everyday life (or even to myself) have
been extensions of (or variations on) what people call Artificial Life
studies.  For the most part these would be things like controlling the legs
on a walker intelligently (heck, imitating the legs of an insect would do;
see the sketch below), or a smarter operating-system configuration scheme
for very advanced computers (say, dynamic configurations that adjust to need
in both software and hardware), plus some more goodies like that.  I just
spent a week getting some systems configured correctly and running, and it
is just too much of a hassle.
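
   As an illustration of how little "intelligence" the leg-control example
actually needs, here is a toy sketch in Python (no real robot interface is
assumed, and the leg names are made up) of the alternating tripod gait that
insects typically use at speed:

# Six legs: L1/L2/L3 down the left side, R1/R2/R3 down the right.
LEFT_TRIPOD  = ["L1", "L3", "R2"]   # front-left, rear-left, middle-right
RIGHT_TRIPOD = ["R1", "R3", "L2"]   # front-right, rear-right, middle-left

def tripod_gait(steps):
    """Yield (swing, stance) leg sets: one tripod swings forward while
    the other three legs bear weight, then the roles swap.  Three legs
    are always on the ground, so the walker stays statically stable."""
    for i in range(steps):
        if i % 2 == 0:
            yield LEFT_TRIPOD, RIGHT_TRIPOD
        else:
            yield RIGHT_TRIPOD, LEFT_TRIPOD

for swing, stance in tripod_gait(4):
    print("swing:", swing, "| stance:", stance)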

   But even when considering an AI that has human scale (or greater, for
sake of argument) intelligence, you seem to be assuming that people will
always be the same as they are as well, but how can you tell?  As I
mentioned before, technology evolves in a strange and rich way, but then so
does culture (we have already found that the predicted linear progress
toward utopia was wrong), and probably the rest of the human race as well.
Have you considered that some may use these AIs for forced evolution of the
human race (or a subset of it)?

   There are many possibilities and ramifications involved here, and it
looks like "AIs taking over all useful human thought" suffers from the same
problem as the technological and societal projections made many years ago:
they take some features of what is happening or what exists
technologically and expand them linearly, or mostly linearly.  What
usually ends up happening instead is a small linear expansion, then the
filling of the little holes that are then exposed, which leads to a wholly
different state than what was predicted, leaving out the linear goals
entirely.  So we start with some AI ideas from years ago, when all anyone
could think of were fairly linear extensions of those ideas, many of them
probably involving more "intelligence" than would really be needed.  But
the real field just doesn't expand that way, so I don't think we need to
worry so much about AIs taking over our useful thinking ability.  When you
get down to it, linear expansion of technology is just not as efficient
anyway (we'd miss a lot of interesting stuff otherwise!).

>> ... The point is that the Turing test seems to me to be somewhat contrived
>> and meaningless. ...

   Out of its proper context, maybe, but when seen in the light of being
a formalization of natural methods of evaluating intellectual flexibility,
it is anything but contrived, and definitely not meaningless.  It is
really a test of relative usefulness, for if a system is to all intents
and purposes as smart as a human, then it is surely useful regardless of
whether you believe it is "really" intelligent or not, and that is the
POINT of such a test in its natural context.

>IQ tests are contrived and meaningless - we still do them and so do our
>kids. It is meant to be a critical test of success or failure in creating
>a machine with capabilities approaching those of a human being (at what human

   My point exactly.

>beings do best). It does not represent the definitive answer to what we
>expect AI to give us. After all, who needs a machine that can imitate being
>a human being?

   Good point also.  As I mentioned earlier, what would be efficient about
this kind of expansion, compared to some other use of the same
capability?


   Erich

   ___--Erich S. Boleyn--___  CSNET/INTERNET:  erich@cs.pdx.edu
  {Portland State University}     ARPANET:     erich%cs.pdx.edu@relay.cs.net
       "A year spent in           BITNET:      a0eb@psuorvm.bitnet
      artificial intelligence is enough to make one believe in God"

loren@tristan.llnl.gov (Loren Petrich) (08/15/90)

	That's a very good point about the Turing Test -- that our
"knowledge" of other people's minds is based on EXACTLY that
principle. I recall some months back getting into a long and involved
argument with Tom Simmonds on this very subject. He had claimed that a
computer can never truly "think" (which seems to be Searle's
position), and I challenged him to demonstrate that other people
think. I had to explain to him what the Turing Test was, and I
challenged him to find arguments for the existence of other minds that
did not reduce to the Turing Test. All he came up with was versions of
the Turing Test. If alternatives to the Turing Test exist, then they
must be hard to find.

	I think Searle's counterargument to the Turing Test is the
argument of "See? There's no mind inside!" I am not impressed by this
would-be _reductio ad absurdum_ -- how can one tell that there really
is "no mind inside" of a seemingly intelligent system? I would also
like to point out that neuroscientists have never succeeded in finding
the "mind" anywhere in our brains -- all they have ever found is that
parts of the brain do specific things that are far from being a
"mind". This suggests that "mind" is some sort of collective property
of the brain, one that cannot be localized anywhere. And the same
would hold true of AI systems.

						        ^    
Loren Petrich, the Master Blaster		     \  ^  /
	loren@sunlight.llnl.gov			      \ ^ /
One may need to route through any of:		       \^/
						<<<<<<<<+>>>>>>>>
	lll-lcc.llnl.gov			       /v\
	lll-crg.llnl.gov			      / v \
	star.stanford.edu			     /  v  \
						        v    
For example, use:
loren%sunlight.llnl.gov@star.stanford.edu

My sister is a Communist for Reagan

erich@eecs.cs.pdx.edu (Erich Boleyn) (08/15/90)

In <66412@lll-winken.LLNL.GOV> loren@tristan.UUCP (Loren Petrich) writes:
>
>	That's a very good point about the Turing Test -- that our
>"knowledge" of other people's minds is based on EXACTLY that
>principle. I recall some months back getting into a long and involved
>argument with Tom Simmonds on this very subject. He had claimed that a
>computer can never truly "think" (which seems to be Searle's
>position), and I challenged him to demonstrate that other people
>think. I had to explain to him what the Turing Test was, and I
>challenged him to find arguments for the existence of other minds that
>did not reduce to the Turing Test. All he came up with was versions of
>the Turing Test. If alternatives to the Turing Test exist, then they
>must be hard to find.

   Hmmm...  This brings an idea to mind.  Might it not be interesting to
develop Turing Test theory in much the same way as Turing Machine theory?
Such as working out which tests could in essence reduce to a Turing Test
and which couldn't (as a start)?  There are many possible paths for such
an idea, but the one that I have just thought of would almost be a
generalization of the Turing Test, IQ testing, etc.  It would of course
require a mathematically rigorous definition (I cannot think of a good
one at the moment ;-) to be very useful, but with the information that
has been accumulated over recent years on the physical intelligences
that we DO know of (plus some innovative psychology), perhaps something
interesting could be done.
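
   (To make the "reduces to a Turing Test" idea slightly less vague, here
is a rough Python sketch; the definitions are invented here purely for
illustration, not established theory.  The point is that any test whose
verdict is computable from a question/answer transcript alone can be
administered to a machine or a human alike.)

def interrogate(subject, questions):
    """Build a transcript by putting each question to the subject;
    `subject` is any callable mapping a question string to an answer."""
    return [(q, subject(q)) for q in questions]

def interrogation_test(scorer, questions):
    """A test 'reduces' (in this toy sense) if it is fully specified by
    a fixed question list plus a scorer over the resulting transcript."""
    def test(subject):
        return scorer(interrogate(subject, questions))
    return test

# Example: a toy "IQ-style" item scored purely from the transcript.
iq_like = interrogation_test(
    scorer=lambda t: sum(1 for q, a in t if a.strip() == "4"),
    questions=["What is 2 + 2?"],
)
print(iq_like(lambda q: "4"))   # -> 1, whoever (or whatever) answered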

   Part of this idea comes from a sci-fi book that I read recently called
Hyperion (apologies to the author, Dan Simmons), where they have a rating
system for AIs called the Turing-<something> index, much like a very
sophisticated IQ measuring system for AIs.

   Erich
   ___--Erich S. Boleyn--___  CSNET/INTERNET:  erich@cs.pdx.edu
  {Portland State University}     ARPANET:     erich%cs.pdx.edu@relay.cs.net
       "A year spent in           BITNET:      a0eb@psuorvm.bitnet
      artificial intelligence is enough to make one believe in God"

frank@bruce.cs.monash.OZ.AU (Frank Breen) (08/19/90)

In <3231@psueea.UUCP> erich@eecs.cs.pdx.edu (Erich Boleyn) writes:


>   I know that I am going off of the subject a bit, but I feel that this
>is relevant to the original question...

>In <3156@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:
<<In <2860@bruce.cs.monash.OZ.AU>, by frank@bruce.cs.monash.OZ.AU (Frank Breen):

<<There are a thousand good reasons for pursuing AI...
<<                     ...Deep space exploration, deep sea exploration,
<<replacement of humans in life endangering jobs, etc, etc. I, for one,
<<will not stop thinking just because of the advent of AI...

>   Well, Frank, at this point it is good to ask what AI would be good for.
>Philip's list sounds more reasonable for the near future than your scenario...
>in fact your idea would probably not come into being (if at all) for a
>LONG time, and who knows, by then linear technological progression may not
>be the norm any more.

I do think that AI is good - I just don't see the point (other than
academic) in AIs imitating humans.  And I do think that being made
obsolete by AIs is a problem we will have to deal with eventually - but
it's not a problem to be avoided, and it would be a stupid reason to
avoid studying AI.


>   Two things that I've learned from studying technology, science, and
>especially AI are:   1) technology does not advance into what our dreams of
>it were ...

But I think that humans being made obsolete by AIs is a general enough
scenario that it is inevitable (i.e. I can't see how it can be avoided).

>   Back to what I was saying about what AI is good for...
Yes, I agree there will be a great many wonderful benefits, both foreseen
and unforeseen, and I look forward to discovering what the future holds.

>...[human intelligence may keep pace with AI]
>Have you considered that some may use these AIs for forced evolution of the
>human race (or a subset of it)?

Yes, this is a fascinating idea and seems fairly likely, but it means
we are no longer human; it means the new 'super humans' have rendered
ordinary humans totally obsolete.  It sounds wonderful to me but, sadly,
I am not superhuman, so even if my children are, I have still been left
behind in the evolutionary race towards greater intelligence (in humans,
AIs, and hybrids).

<<> ... The point is that the Turing test seems to me to be somewhat contrived
<<> and meaningless. ...

>   Out of its proper context, maybe, but when seen in the light of being
>a formalization of natural methods of evaluating intellectual flexibility,
>...  It is really a test of relative usefulness ... regardless of whether
>you believe it is "really" intelligent or not, and that is the POINT of such
>a test in its natural context.

Yes, I must agree with you when looking at it like this: it is an important
measure.  I think its importance has been overrated a bit (like IQ tests).


<<beings do best). It does not represent the definitive answer to what we
<<expect AI to give us. After all, who needs a machine that can imitate being
<<a human being?

>   Good point also.  As I mentioned earlier, what would be efficient about
>this kind of expansion, compared to some other use of the same
>capability?

Yes - this was one of the points of my original posting (in a slightly
roundabout way).

Frank Breen

erich@eecs.cs.pdx.edu (Erich Boleyn) (08/20/90)

In article <2884@bruce.cs.monash.OZ.AU> frank@bruce.cs.monash.OZ.AU (Frank Breen) writes:
>
>But I think that humans being made obsolete by AIs is a general enough
>scenario that it is inevitable (i.e. I can't see how it can be avoided).
>
>>...[human intelligence may keep pace with AI]
>
>Yes, this is a fascinating idea and seems fairly likely, but it means
>we are no longer human; it means the new 'super humans' have rendered
>ordinary humans totally obsolete.  It sounds wonderful to me but, sadly,
>I am not superhuman, so even if my children are, I have still been left
>behind in the evolutionary race towards greater intelligence (in humans,
>AIs, and hybrids).

   First, OK, we are what is loosely identified as "human" right now...  but
think of how easy it is to be removed from that category.  Many people would
agree that most of "being human" is in the behavior, right?  What would
happen to someone's mind if this person were alive (with a mental
flexibility equivalent to about 15-20 years of age) for, let's say, 1000
years?  It is arguable that over this time the possibilities of mental
growth (maturation) are immense.  What would it be like to converse with
such a being?  Would we even have a really common base of understanding
anymore, or would the emotional states and intellectual capabilities have
been so changed that we "normal" humans couldn't relate at all, that the
motives of this being would be just too subtle for us to understand without
having them painfully explained to us (at best)?  Would you still classify
this person as "human"?  Would anyone who had that kind of knowledge be
"human"?  There is already a large difference in motivation between very
well (and lengthily) educated people and more-or-less uneducated people,
sometimes
nearly UNBRIDGEABLE by any means acceptable to both parties.  "Human" is a
loose term that we stamp onto people (physical humans) who fit into some
societal niche in one way or another.  Genetically, a "super-human"
probably wouldn't differ from you much more than someone of another
ethnic background does.  The mental and behavioral states would be the
biggest telling difference (besides the fact that they would probably
not look like any human you'd seen before ;-).

   Second, yeah, I've thought about what happens when the biggest organizing
AIs get to human-scale overall intelligence, and I'm similarly dissatisfied
(not that they will exist, or that they'll outrun the human race, but that
they'll leave ME behind ;-).  I have no idea if that will happen in my
lifetime (I sort of hope so, though; ironic, huh?), and even though there
are ways (or soon will be) to genetically engineer new kids so that they'll
live longer, be super-geniuses, etc., how does that help our PERSONAL wish
for survival and success (so to speak)?  Well, outside of a couple of crazy
schemes (heh heh ;-), I don't know, Frank.  I think about it too.

   But I'm working on the problem!

>Frank Breen

   Regards,  Erich Boleyn

   ___--Erich S. Boleyn--___  CSNET/INTERNET:  erich@cs.pdx.edu
  {Portland State University}     ARPANET:     erich%cs.pdx.edu@relay.cs.net
       "A year spent in           BITNET:      a0eb@psuorvm.bitnet
      artificial intelligence is enough to make one believe in God"