[comp.ai] Limits of AI

ncthangi@ndsuvax.UUCP (sam r. thangiah ) (10/27/88)

One of the students in my class raised a point that:
"Man is not capable of producing a machine that is more intelligent than
oneself".  Is this a valid statement?

I really do not know if this has been debated, but it does tell us the limits
of achievements that can be attained by AI or does it ?

Sam
--
Sam R. Thangiah,  North Dakota State University.
300 Minard Hall     UUCP:       ...!uunet!plains.nodak.edu!ncthangi
NDSU, Fargo         BITNET:     ncthangi@plains.nodak.edu.bitnet
ND 58105            ARPA,CSNET: ncthangi%plains.nodak.edu.bitnet@cunyvm.cuny.edu

dykimber@phoenix.Princeton.EDU (Daniel Yaron Kimberg) (10/29/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
>One of the students in my class raised a point that:
>"Man is not capable of producing a machine that is more intelligent than
>oneself".  Is this a valid statement?
>
>I really do not know if this has been debated, but it does tell us the limits
>of achievements that can be attained by AI or does it ?

I'm not sure if there have been any public debates on this question, but I
think there are some easy (not necessarily correct, though) answers.  My gut
reaction is that this question is identical to the question of whether or not
AI is possible.  Since we know that machines can do things people can't, and
since we know that people can use these machines to allow themselves to do
things they ordinarily wouldn't be able to do, then if we could just replace
the people with AI engines, we would have machines that could do more things
than their designers.  In my judgement, these things would constitute added
intelligence.

                                                 -Dan

dmocsny@uceng.UC.EDU (daniel mocsny) (10/29/88)

In article <1651@ndsuvax.UUCP>, ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
> One of the students in my class raised a point that:
> "Man is not capable of producing a machine that is more intelligent than
> oneself".  Is this a valid statement?

Depends on who ``oneself'' is addressed to. :-)

Man can build physical mechanisms that can outperform his own physical
work capacity by orders of magnitude. We can't even define intelligence,
much less establish limits for it. I see no reason to doubt that he
will one day build a machine that is more intelligent than himself, unless
the dualist view is correct (and physico-chemical mechanisms cannot
account for intelligence). However, if you asked me ``Can Man build a
_logic_ machine more intelligent than himself?'' I would laugh.

I can certainly program a computer to perform an algorithm faster and more
correctly than I can perform it. My programs also exhibit behavior that
I can't always predict (if I could, I wouldn't need to program). However,
logic machines require explicit programming for the most trivial tasks.
They are neither self-organizing nor adaptive. They do not learn from 
everyday experience in a generally useful way. As long as that is true they
can never possess what we could reasonably call intelligence.

The connectionist approach to AI may succeed in creating machines that
correct these glaring deficiencies of logic machines. If so, then in
combination with logic machines they may create a hybrid intelligence
that exceeds anything we have yet seen. Especially if that hybrid
includes us.

In any case, discussing whether machines will exceed human intelligence
is a bit premature, rather like arguing over how tall a redwood seedling
might eventually become. Probably none of us will live to see the
question settled, and the seedling has an enormous struggle ahead of
it. Better to pay attention to nibbling away at subproblems...

Dan Mocsny

jinli@gpu.utcs.toronto.edu (Jin Li) (10/29/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
>One of the students in my class raised a point that:
>"Man is not capable of producing a machine that is more intelligent than
>oneself".  Is this a valid statement?
>...

It depends on one's belief.  Technically, I don't think any machine can be
more intelligent than humans.  Please think of the following analogy:

	If you believe that God created humans, how could humans be
	superior to God?

BTW, I am not a Catholic!

I think we should focus our attention on how to use AI to help us, rather than
waste time trying to build a machine (e.g. HAL) that is more intelligent than we are.

Can HAL pass first year Psychology at university?
-- 
Jin Li at University of Toronto Computing Services>>
						 << Gin & Tonic mix well.
jinli@gpu.utcs.utoronto.ca   uunet!utai!utcs!jinli>>

turk@mit-amt (Matthew Turk) (10/30/88)

In article <1651@ndsuvax.UUCP>, ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
> 
> One of the students in my class raised a point that:
> "Man is not capable of producing a machine that is more intelligent than
> oneself".  Is this a valid statement?
> 
> I really do not know if this has been debated, but it does tell us the limits
> of achievements that can be attained by AI or does it ?
> 
> Sam

Yes, this has been debated ad infinitum.  It seems in the same category
of statement as:  "Man is not capable of producing a machine that is
stronger than oneself" or "...flies better than oneself" or "...multiplies
faster than oneself".  Of course it speaks of intelligence, this elusive
quality much more complex than strength, flight, or multiplication, but
the point remains the same -- the statement may possibly be proven wrong
(by some reasonable test such as a Turing test) but there seems to be no
way to prove that it is correct.  It remains an unverified statement of
faith.

The same of course applies to the statement "Man *is* capable of producing a
machine that is more intelligent than (or as intelligent as) oneself".  Much
of science
consists of people attempting to verify things they already believe
because of some kind of faith.  (Note faith = firm belief in something
for which there is no proof - Webster's)

	Matthew Turk

cme@cloud9.UUCP (Carl Ellison) (10/30/88)

In article <1651@ndsuvax.UUCP>, ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
> One of the students in my class raised a point that:
> "Man is not capable of producing a machine that is more intelligent than
> oneself".  Is this a valid statement?


To me, that depends on whether we can produce something which REALLY learns.
If so, then the statement is invalidated empirically -- by any
illiterate coal miner's son who gets a PhD.


--Carl Ellison          ...!harvard!anvil!es!cme    (normal mail address)
                        ...!ulowell!cloud9!cme      (usenet news reading)
(standard disclaimer)

smann@watdcsu.waterloo.edu (Shannon Mann - I.S.er) (10/30/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
>
>One of the students in my class raised a point that:
>"Man is not capable of producing a machine that is more intelligent than
>oneself".  Is this a valid statement?
>

>Sam R. Thangiah,  North Dakota State University.

It depends on what you call _intelligence_.  If you decide that it is
expertise in a particular field, then, from the point of view of a non-expert,
the machine is more intelligent.  But that is a poor definition.

You ask 'can [human]kind build a machine more intelligent than itself?'
Well, we can take a couple of angles to answer this question. 

First, consider that less than fifty years ago, doing surgery on the human
heart was considered impossible.  A transplant would have been completely
unthinkable.  Consider now that, not only do we do transplants, we sustain
human life far beyond what was once considered death.  I guess we have
finally put 'God in[to] the machine'.  (Please excuse my off-coloured pun)

Now consider the argument posed by Dr. Carl Sagan in ch. 2, Genes and 
Brains, of the book _The Dragons of Eden_.  He argues that, at about the 
level of a reptile, the amount of information held within the brain
equals that of the amount of information held within the genes.  After 
reptiles, the amount of information held within the brain exceeds that
of the genes.

Now, from the second argument, we can draw a parallel to the question asked.
Let's rephrase the question:

Can a system containing X amount of information, create a system containing
Y amount of information, where Y exceeds X?  

As Dr. Sagan has presented in his book, the answer is a definitive _YES_.

        -=-
-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca
        -=-

'I have no brain, and I must think...' - An Omynous
'If I don't think, AM I' - Another Omynous

sean@cadre.dsl.PITTSBURGH.EDU (Sean McLinden) (10/30/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
>One of the students in my class raised a point that:
>"Man is not capable of producing a machine that is more intelligent than
>oneself".  Is this a valid statement?

This is the kind of topic which could flood this newsgroup for weeks and
probably belongs in a philosophy group. How do you measure intelligence? Memory?
Surely it can be done. Logical operations per unit time? Again, there is
no reason why not. Number of simultaneous active processes? Why not?

We can build machines that are stronger, faster, more reliable, and more
durable than ourselves; why not more intelligent?

Sean McLinden
Decision Systems Laboratory

andy@cs.columbia.edu (Andy Lowry) (10/31/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
>One of the students in my class raised a point that:
>"Man is not capable of producing a machine that is more intelligent than
>oneself".  Is this a valid statement?

I certainly don't think so.  Humankind has created machines that are
stronger, faster, more precise, more rugged, and "better" than humans
in many other respects.  Why should "more intelligent" be magically
excluded from the list of possibilities?  The only reasons I can think
of are: (1) "Intelligence" is not well defined and is difficult to
measure, and as long as this situation continues, it will always be
easy to discount any machine behavior that somebody calls intelligent;
(2) Our intelligence is something we generally hold very sacred, and
it makes some people extremely uncomfortable to contemplate the
possibility that we might not have an exclusive claim to it.

Intelligence is clearly not something that one either does or does not
possess.  Some people are more intelligent than others; some species
of animal are more intelligent than others.  To assume that humankind
possesses the highest attainable level of intelligence, just because
we have not encountered more intelligence elsewhere in the animal
kingdom (or thus far in the machine "kingdom"), seems an extremely
arrogant attitude.  And to propose that humankind is incapable of
creating machines that exceed human levels of intelligence runs
counter to our experience in countless other domains.

Here's a more radical proposition: Suppose we manage to design and
build mechanisms for learning, perception, abstraction, etc. that are
superior to our own.  Will we then build a bunch of machines that will
be our mental superiors?  How about an alternative: we apply what we
have learned to "re-engineer" the human mind.  How?  I don't know.
Genetic engineering?  21st century magic?  22nd century magic?
Whatever...  But what would we call the results?  Human beings or
machines?  Is this scenario scary?  Is it more palatable than
coexisting with mentally superior machines?  Is all this so hard to
take that we should stop trying to push machine intelligence?  Can we
ignore these problems and stop worrying because the student quoted
above is correct and the situation is inherently inconceivable?

My own views: we can and will fabricate intelligence levels exceeding
our own.  What we learn (and what our creations learn) about the
mechanisms of intelligence will enable us to improve human
performance, though without altering human physiology we will run
into barriers.  (Analogy: what we have learned about mechanics, human
physiology, nutrition, etc. has allowed us to push the performance
level of athletes, but it is inconceivable that a human runner will
ever break the sound barrier or a strongman lift ten tons.)  The
evolutionary processes that result in physiological changes allowing
greater intelligence will come much more slowly than our ability to
build intelligent machines.

-Andy

sewilco@datapg.MN.ORG (Scot E Wilcoxon) (10/31/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
>One of the students in my class raised a point that:
>"Man is not capable of producing a machine that is more intelligent than
>oneself".  Is this a valid statement?

How about substituting for "Man" one of the following: monkey, mammal, reptile.
Life, and thus evolution, consists merely of random exceptions to entropy.  Man is
certainly capable of doing better than chance.

Or, in the case of machine intelligence, Man can set up machines which mutate
their "methods of reasoning" very quickly and thus set in motion the same
process that gave Man his intelligence.  This would produce a machine which is
"more" intelligent than oneself, even if Man cannot understand its functioning.
The answer is "Yes", although I don't wish to allocate that much hardware to
the problem.

By "methods of reasoning" I refer to the methods of processing input and
producing an output.  Each of the major fields of research in AI tends
to have differing methods, each of which can be combined with the others.
Various combinations will of course have differing levels of success with
differing problems.  But then, the problem was painted with a broad brush
on the side of a very large barn.
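
Purely as a toy sketch (nothing I have actually built; evaluate() and the
one-parameter "method" below are invented stand-ins, not a real reasoning
system), this is the shape of the mutate-and-select loop I have in mind:

    /* toy mutate-and-select loop over candidate "methods of reasoning",
     * each reduced to a single tunable parameter */
    #include <stdio.h>
    #include <stdlib.h>

    #define POP  32                 /* candidate methods per generation */
    #define GENS 1000               /* generations to run */

    static double evaluate(double param)   /* hypothetical fitness */
    {
        double err = param - 3.14159;      /* arbitrary target */
        return -(err * err);               /* higher is better */
    }

    int main(void)
    {
        double pop[POP];
        double bestparam;
        int i, g, best;

        srand(1);
        for (i = 0; i < POP; i++)
            pop[i] = 10.0 * rand() / RAND_MAX;

        for (g = 0; g < GENS; g++) {
            best = 0;                       /* keep the fittest method... */
            for (i = 1; i < POP; i++)
                if (evaluate(pop[i]) > evaluate(pop[best]))
                    best = i;
            bestparam = pop[best];
            for (i = 1; i < POP; i++)       /* ...and mutate copies of it */
                pop[i] = bestparam + 0.1 * (rand() / (double)RAND_MAX - 0.5);
            pop[0] = bestparam;
        }
        printf("best parameter after %d generations: %f\n", GENS, pop[0]);
        return 0;
    }

The interesting case, of course, is when the thing being mutated is the
machine's whole way of processing input rather than one number -- which is
where the hardware bill comes from.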
-- 
Scot E. Wilcoxon  sewilco@DataPg.MN.ORG    {amdahl|hpda}!bungia!datapg!sewilco
Data Progress 	 UNIX masts & rigging  +1 612-825-2607
	I'm just reversing entropy while waiting for the Big Crunch.

goldfain@osiris.cso.uiuc.edu (10/31/88)

Re: "Can we create a machine more intelligent than ourselves."

Some classic works dealing with this matter:

_Computers and Thought_   Feigenbaum and Feldman, Editors
    McGraw-Hill 1963   (especially Alan Turing's article)

_Computer Power and Human Reason_  Joseph Weizenbaum
    W. H. Freeman and Co. 1976

Further comments -
1) This is just the kind of topic to get the net shimmering with discussions
   that have low probability of resolution.
2) This has been debated ... in more notes than you can shake a cursor at.
3) This is NOT very analogous to whether one can build a machine that is
   STRONGER than oneself, at least in my interpretation of intelligence.

Notes on a definition of intelligence -
1) As a previous poster noted, we don't have such a definition.  I would add
   that we aren't close to one, and that without one the question is
   unanswerable. 
2) A proper characterization will need to have a "nominal" listing of kinds of
   capabilities underlying any "numeric" measurements of given capabilities.
   To me, the nominal list is far more important than the numeric levels.
3) The notion of greater intelligence *in kind* is much more of a problem to
   imagine than greater speed, or memory capacity, etc.  Imagine, if you will,
   a machine that could formulate a concept that no human could grasp.  "We"
   would never be able to verify that this had happened!  Thus, such a claim
   must forever remain beyond the realm of human science.

                                     - Mark Goldfain   (student at UIUC)

lishka@uwslh.UUCP (Fish-Guts) (10/31/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
>
>One of the students in my class raised a point that:
>"Man is not capable of producing a machine that is more intelligent than
>oneself".  Is this a valid statement?

     Unfortunately you will need to define some of the terms better.
The key to all of this is probably what your student or you mean by
someone (or something) being "intelligent."  Indeed, this seems (in my
mind) to be one of the key questions in Artificial Intelligence.
Another term you may need to define more rigidly is "machine."

     In my opinion, even if one defines the terms, the real answer to
this question lies in one's foundations of morals, beliefs, and
religions.  Scientists may come out and "prove" or "disprove" the
above statement, but many of the assumptions they will use will be
very basic ones which others (specifically non-scientists) do not
believe.  For those others, the scientific proof may be meaningless.

>I really do not know if this has been debated, but it does tell us the limits
>of achievements that can be attained by AI or does it ?

     One answer does occur to me: yes, men and women *can* produce
machines that are more intelligent than themselves.  The phenomenon is
called "birth," and in this case you would need to accept the term
"man" as meaning men and women at least through the past 2000 years,
"machine" as possibly meaning human beings (i.e. the human mechanism
as a machine), and agree that men and women have generally become more
intelligent over the past 2000 years (note that it is probably *not*
necessary to pin down the exact meaning of "intelligence" in this
case).  Also, one would need to believe that "sexual reproduction" is
a valid means of "creation" in this case.

     The above are just my opinions.  Realize that I am just an
undergraduate who has taken a fairly respectable set of AI courses at
the University of Wisconsin (more than the typical undergraduate in
AI), and who comes from a family that was into Eastern Religions.

>Sam

					.oO Chris Oo.
-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp
				     ----
"...Just because someone is shy and gets straight A's does not mean they won't
put wads of gum in your arm pits."
                         - Lynda Barry, "Ernie Pook's Commeek: Gum of Mystery"

smoliar@vaxa.isi.edu (Stephen Smoliar) (10/31/88)

In article <5221@watdcsu.waterloo.edu> smann@watdcsu.waterloo.edu (Shannon
Mann - I.S.er) writes:
>
>Now consider the argument posed by Dr. Carl Sagan in ch. 2, Genes and 
>Brains, of the book _The Dragons of Eden_.  He argues that, at about the 
>level of a reptile, the amount of information held within the brain
>equals that of the amount of information held within the genes.  After 
>reptiles, the amount of information held within the brain exceeds that
>of the genes.
>
>Now, from the second argument, we can draw a parallel to the question asked.
>Let's rephrase the question:
>
>Can a system containing X amount of information, create a system containing
>Y amount of information, where Y exceeds X?  
>
>As Dr. Sagan has presented in his book, the answer is a definitive _YES_.
>
Readers interested in a more technical substantiation of Sagan's arguments
should probably refer to the recent work of Gerald Edelman, published most
extensively in his book NEURAL DARWINISM.  The title refers to the idea that
"mind" is essentially a result of a selective process among a vast (I am
tempted to put on a Sagan accent, but it doesn't come across in print)
population of connections between neurons.  However, before even considering
the selective process, one has to worry about how that population came to be
in the first place.  I quote from a review of NEURAL DARWINISM which I
recently submitted to ARTIFICIAL INTELLIGENCE:

	This population is an EPIGENETIC result of prenatal development.
	In other words, the neural structure (and, for that matter, the
	entire morphology) of an organism is not exclusively determined
	by its genetic repertoire.  Instead, events EXTERNAL to strictly
	genetic activity contribute to the development of a diverse
	population of neural structures.  Specific molecular agents,
	known as ADHESION MOLECULES, are responsible for determining
	the course of a morphology and, consequently, the resulting
	pattern of neural cells which are formed in the course of that
	morphology;  and these molecules are responsible for the formation,
	during embryonic development, of the population from which selection
	will take place.

Those who wish to pursue this matter further and are not inclined to wade
through the almost 400 pages of NEURAL DARWINISM will find an excellent
introduction to the approach in the final chapter of Israel Rosenfield's
THE INVENTION OF MEMORY.  (This remark is also directed to Dave Peru, who
requested further information about Edelman.)

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/01/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
>
>One of the students in my class raised a point that:
>"Man is not capable of producing a machine that is more intelligent than
>oneself".  Is this a valid statement?
>

What does intelligent mean?  We can build machines that know
more than we do about certain subjects, and that do many things
better than we do.  But really, all we have to do is build a machine
that can learn or increase in intelligence, and it may then
outstrip us in time, or build other machines that will.

bwk@mitre-bedford.ARPA (Barry W. Kort) (11/01/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (Sam R. Thangiah ) writes:
> One of the students in my class raised a point that:
> "Man is not capable of producing a machine that is more intelligent than
> oneself".  Is this a valid statement?

It is a valid opinion, but I suspect your student's thesis
will be disproved.

First of all, our machines (and our own minds) are not the product
of a single individual.  A consortium of collaborating contributors
can build something that is beyond the power of any one person to
build.  I suspect intelligence falls into this pattern.  Each
generation nurtures children who are more intelligent than their
parents.  My computer can solve problems which I could not solve
on my own.  In some cases, I don't even know what method the computer
is using.  In high school, I learned how to extract square roots by
hand.  My hand calculator does it faster and more accurately by a
method that I never thought of.

Today, I believe that I can reason by analogy better than a computer.
But when it comes to symbolic processing, I cannot compete with
modern Arithmetic/Logic Units or Inference Engines.

--Barry Kort

berleant@cs.utexas.edu (Dan Berleant) (11/01/88)

What created the intelligence of humans? If your answer is God, then
press 'n' now -- otherwise your answer is "nature". Clearly nature
possesses no intelligence per se, so it is possible for a less
intelligent  system to create a more intelligent one.

If we can create a machine more intelligent than we, there is
real science fiction in pursuing the implications -- it means
we can then create a machine of infinite intelligence! Think
about it...

Dan Berleant

ok@quintus.uucp (Richard A. O'Keefe) (11/01/88)

In article <397@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
> ... you would need to ... agree that men and women have generally become
> more intelligent over the past 2000 years (note that it is probably *not*
> necessary to pin down the exact meaning of "intelligence" in this
> case).

This is such an extraordinary proposition that I would need to be given
some evidence for it.  More people educated?  Sure.  More people well
fed?  Sure.  Significant change in basic biology?  Pull the other one!

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (11/01/88)

In article <4167@phoenix.Princeton.EDU> dykimber@phoenix.Princeton.EDU (Daniel Yaron Kimberg) writes:
> Since we know that machines can do things people can't, and
>since we know that people can use these machines to allow themselves to do
>things they ordinarily wouldn't be able to do, then if we could just replace
>the people with AI engines, we would have machines that could do more things
>than their designers.  

Typical AI blinkers.  Take the second piece of knowledge, that
human-machine systems are more powerful than human systems for some
tasks. Wow, true since the Archimedean screw if not before.  Computers
now allow intellectual work to be given mechanical support.  

However, you cannot sensibly talk about the machine without talking
about the human system it interfaces to.  Hence AI folk don't talk
sense, as none of them know the first thing about successfully fielding
a human-machine system :-)

Furthermore, the first piece of knowledge is drivel.  Machines cannot
do what people cannot, only human-machine systems can.  You can
automate more of the task, but you can never fully automate anything.
There will always be human operators of some sort.  All systems
interface with supersystems.

> In my judgement, these things would constitute added intelligence.
The question of what constitutes intelligence has nothing to do with
your private judgements.  Intelligence is a social construct, as has
been unequivocally established by the miserable failure of
psychometrics in this area.  A good example of why the study of the
individual apart from a social context is liable to generate some
stupid academic activity.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

engelson-sean@CS.YALE.EDU (Sean Philip Engelson) (11/02/88)

The question that was asked: "Can man build a machine more intelligent
than himself?" is, both in the individual and in the collective sense,
the wrong question (as in the individual case it is trivially true,
e.g. birth; and in the collective sense, given evolution, it is quite
probably true).  The proper question is: "Can man understand the
workings of an intelligence more intelligent than himself?"  This is
an interesting question, which raises a number of issues about the
nature of knowledge and understanding, but on a practical level is
less interesting than the following (which is really the fundamental
question of AI): "Can man understand HIS OWN intelligence?"  The
answer assumed by many (if not most) AI theorists is, of course, yes;
the answer given by many philosopher-types (depending on their
definitions of understanding) is no.

There are arguments both ways, none of them conclusive, and all of
them resting on unproven assumptions.

	-Sean-

----------------------------------------------------------------------
Sean Philip Engelson, Gradual Student
Yale Department of Computer Science
51 Prospect St.
New Haven, CT 06511
----------------------------------------------------------------------
The frame problem and the problem of formalizing our intuitions about
inductive relevance are, in every important respect, the same thing.
It is just as well, perhaps, that people working on the frame problem
in AI are unaware that this is so.  One imagines the expression of
horror that flickers across their CRT-illuminated faces as the awful
facts sink in.  What could they do but "down-tool" and become
philosophers?  One feels for them.  Just think of the cut in pay!
		-- Jerry Fodor
		(Modules, Frames, Fridgeons, Sleeping Dogs, and the
		 Music of the Spheres)

sher@sunybcs.uucp (David Sher) (11/02/88)

How about the Manhattan Project (the organization and its structure) as
a device that was more intelligent (specifically, intelligence as applied
to nuclear physics) than any single human?  Or do you want to eliminate
devices with human components?  If so, why?

Also, is a man with an encyclopedia more intelligent than one without one?
How about a man with a book on logic?

-David Sher
ARPA: sher@cs.buffalo.edu	BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

huub@swivax.UUCP (Huub Knops) (11/02/88)

In article <3802@cs.utexas.edu> berleant@cs.utexas.edu (Dan Berleant) writes:
  > we can then create a machine of infinite intelligence! Think
  > about it...
If you think about it and understand it, you are infinitely intelligent,
so the machine could never get more intelligent than you are.

Greetings,
Huub
-- 
This line is intentionally left blank.

lammens@sunybcs.uucp (Johan Lammens) (11/03/88)

In article <3802@cs.utexas.edu> berleant@cs.utexas.edu (Dan Berleant) writes:
>If we can create a machine more intelligent than we, there is
>real science fiction in pursuing the implications -- it means
>we can then create a machine of infinite intelligence! Think
>about it...
>
>Dan Berleant

No we can't. At least, this inductive reasoning does not hold (which
does not mean the conclusion is invalid): by the same reasoning one
could "prove" that we can build machines of infinite strength,
precision, size, whatever.
JL.

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (11/04/88)

From article <1806@crete.cs.glasgow.ac.uk>, by gilbert@cs.glasgow.ac.uk (Gilbert Cockton):
" There will always be human operators of some sort.  All systems
" interface with supersystems.

The second sentence here is to justify the first, I guess.  This
probably makes sense to someone with typical humanist blinkers.

		Greg, lee@uhccux.uhcc.hawaii.edu

berleant@cs.utexas.edu (Dan Berleant) (11/05/88)

In article <2413@cs.Buffalo.EDU> lammens@sunybcs.UUCP (Johan Lammens) writes:
>In article <3802@cs.utexas.edu> berleant@cs.utexas.edu (Dan Berleant) writes:
>>If we can create a machine more intelligent than we, there is
>>real science fiction in pursuing the implications -- it means
>>we can then create a machine of infinite intelligence! Think
>>about it...
>>
>No we can't. At least, this inductive reasoning does not hold (which
>does not mean the conclusion is invalid): by the same reasoning one
>could "prove" that we can build machines of infinite strength,
>precision, size, whatever.
>JL.

Yes it does! First of all, we can build machines of maximum (or
close to maximum) strength, precision, size, etc., and to do it
requires other, lesser, machines.

Second of all, I define intelligence as "the ability to build
intelligent machines" (but see footnote 1).

Dan Berleant
berleant@cs.utexas.edu

footnote 1: A _reasonable_ definition of intelligence that _also_ 
works for the argument above is this:

Intelligence consists of 2 things, 1)the ability to convince the
average person that intelligence is being displayed (which I 
define to have the value of either true or false), and 2)the
ability to build intelligent machines. This definition makes
sense and avoids circularity.

maddoxt@novavax.UUCP (Thomas Maddox) (11/06/88)

In article <1651@ndsuvax.UUCP> ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:

>One of the students in my class raised a point that:
>"Man is not capable of producing a machine that is more intelligent than
>oneself".  Is this a valid statement?

	If one assumes that genes are machines, then the answer is
yes, we have an existence proof.

lammens@sunybcs.uucp (Johan Lammens) (11/07/88)

In article <3833@cs.utexas.edu> berleant@cs.utexas.edu (Dan Berleant) writes:
>In article <2413@cs.Buffalo.EDU> lammens@sunybcs.UUCP (Johan Lammens) writes:
>>In article <3802@cs.utexas.edu> berleant@cs.utexas.edu (Dan Berleant) writes:
>>>If we can create a machine more intelligent than we, there is
>>>real science fiction in pursuing the implications -- it means
>>>we can then create a machine of infinite intelligence! Think
>>>about it...
>>>
>>No we can't. At least, this inductive reasoning does not hold (which
>>does not mean the conclusion is invalid): by the same reasoning one
>>could "prove" that we can build machines of infinite strength,
>>precision, size, whatever.
>>JL.
>
>Yes it does! First of all, we can build machines of maximum (or
>close to maximum) strength, precision, size, etc., and to do it
>requires other, lesser, machines.
>
>Second of all, I define intelligence as "the ability to build
>intelligent machines" (but see footnote 1).
>
>Dan Berleant


Does this imply that maximum = infinite? Seems rather strange to me.
One can define the maximum performance of a machine (the maximum
weight it will lift, or the maximum number of computations it will
perform in a second), but this does not imply that this performance is
infinite, does it? By definition infinity is never reached as a maximum,
and no matter how clever or strong or precise we make our machines, a
more clever, strong, precise one will always be possible (at least in
theory).  
	But if your point is that using a machine that's more
intelligent than we are, we (or the machine) could build an even
better one, I do not disagree. The problem, of course, is to build the
first one...
	Anyway, this is kind of a non-issue I think, as we have
problems enough already trying to build one that's even moderately
intelligent (say like an ape or so), let alone more intelligent than
we are.

JL.

nick@cs.hw.ac.uk (Nick Taylor) (11/08/88)

In Article 2254 of comp.ai, Gilbert Cockton writes :
 "... intelligence is a social construct ... it is not a measure ..."

Hear, hear. I entirely agree. I suppose it was inevitable that this discussion
would boil down to the problem of defining "intelligence". Still, it was fun
watching it happen anyway.

I offer the following in an effort to clarify the framework within which we
must discuss this topic. No doubt to some people this will seem to obfuscate
the issue rather than clarify it but, either way, I am sure it will 
generate some discussion.

Like Gilbert, most people treat the idea of intelligence as an intra-species
comparator.  This is all well and good so long as we remember that it is 
just a social construct which we find convenient when comparing the
apparent intellectual abilities of two people or two dogs or two nematodes, 
etc.

However, when we move outside a single species and attempt to say things
such as "humans are more intelligent than nematodes" we are in a very
different ball game. We are now using the concept of intelligence as an
inter-species comparator. Whilst it might seem natural to use the same
concept we really have no right to. One of the most important axioms of
any scientific method is that you cannot generalise across hierarchies.
What we know to be true of humans cannot be applied to other species 
willy-nilly.

Until we generate a concept ('label') of inter-species intelligence which
cannot be confused with intra-species intelligence we will forever be
running around in circles discussing two different ideas as if they were
one and the same. Clearly, machine intelligence is also concerned with
a different 'species' to ourselves and as such could be a very useful
concept but neither 'machine intelligence' nor 'human intelligence' are
useful in a discussion of which is, or might become, the more intelligent
(in the inter-species meaning of the word).

For more information on bogus reasoning about brains and behaviour
see Steven Rose's "The Conscious Brain" (published by Penguin I think).

dsm@bucsb.UUCP (David Miller) (11/09/88)

In article <3833@cs.utexas.edu> berleant@cs.utexas.edu (Dan Berleant) writes:
>
>Second of all, I define intelligence as "the ability to build
>intelligent machines" (but see footnote 1).
>
>Dan Berleant
>berleant@cs.utexas.edu
>
>footnote 1: A _reasonable_ definition of intelligence that _also_ 
>works for the argument above is this:
>
>Intelligence consists of 2 things, 1)the ability to convince the
>average person that intelligence is being displayed (which I 
>define to have the value of either true or false), and 2)the
>ability to build intelligent machines. This definition makes
>sense and avoids circularity.

Two faults with this reasoning:
1. Humans have not yet shown their ability to create intelligent
machines, and
2. If they did create something that appeared intelligent, who would 
be the intelligent person to decide that it was intelligent, since we
would still be trying to prove it ourselves..

With this your theory on intelligence will not hold...

good try though...                              dsm



-- 
Disclaimer: If you don't like what I say... don't listen...

Comment: You can't teach a pig to sing, it annoys the pig, and 
            wastes your time...     -Robert Anson Heinlein

-=-=-=-=-=-=-=-=-+-=-=-=-=-=-=-=-=-=-=-=-=-+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 +--\ /--\ ^  ^  | David S Miller          | ARPANET: dsm@bucsf.bu.edu 
 |  | \__  |\/|  | 610 Beacon St.  Box 722 |  BITNET: engm06c@buacca
 |  |    \ |  |  | Boston, MA  02215       |    UUCP: !harvard!bu-cs!bucsf!dsm
 +--/ \--/ |  |  | (617) 375-6381          |   CSNET: dsm%bucsb@bu-cs

berleant@cs.utexas.edu (Dan Berleant) (11/09/88)

In article <2149@bucsb.UUCP> dsm@bucsb.bu.edu (david miller) writes:
>In article <3833@cs.utexas.edu> berleant@cs.utexas.edu (Dan Berleant) writes:
>>footnote 1: A _reasonable_ definition of intelligence that _also_ 
>>works for the argument ... is this:
>>
>>Intelligence consists of 2 things, 1)the ability to convince the
>>average person that intelligence is being displayed (which I 
>>define to have the value of either true or false), and 2)the
>>ability to build intelligent machines. This definition makes
>>sense and avoids circularity.
>
>Two faults with this reasoning:
>1. Humans have not yet shown their ability to create intelligent
>machines, and
>2. If they did create something that appeared intelligent, who would 
>be the intelligent person to decide that it was intelligent, since we
>would still be trying to prove it ourselves..

I was hoping to avoid getting entangled in the question of 
defining intelligence, but...

To be more explicit, the following definition should be sufficient
for my argument that "if we can build a machine smarter than
we are, we can obtain a machine of infinite -- or at least
maximum possible -- intelligence."

I = C + C*A

I... intelligence
C... commonsense
A... ability (to build intelligent machines)

More specifically,

commonsense... the ability to pass the Turing test (or something like
               that, you get the idea). Allowable values are true and 
               false, that is, 0 or 1. Either the machine possesses
               intelligence (whatever its degree) or not.

Given this definition, if we can build a machine more intelligent
than we, we can have a machine whose intelligence is the theoretical
maximum value of intelligence.
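
To make the induction step concrete, here is a toy iteration (an
illustration only -- the flat 10% gain per generation is an assumption
invented for the example, not part of the definition):

    /* toy iteration of "let each machine build the next one"; the
     * growth rule for A is an arbitrary assumption, and what you
     * conclude depends entirely on that rule */
    #include <stdio.h>

    int main(void)
    {
        double C = 1.0;        /* commonsense: it passes the test, so 1 */
        double A = 1.1;        /* builds machines 10% abler than itself */
        int gen;

        for (gen = 1; gen <= 10; gen++) {
            printf("generation %2d: I = %.3f\n", gen, C + C * A);
            A *= 1.1;          /* assumed gain made by each new builder */
        }
        return 0;
    }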

>With this your theory on intelligence will not hold...

Yes it will!

>good try though... 

Thanks!

Someone (else) actually posted a valid objection (call it a refutation
if you want) to this conclusion... but other readers seem to have
ignored it...

Dan
berleant@cs.utexas.edu

dharvey@wsccs.UUCP (David Harvey) (11/09/88)

In article <1651@ndsuvax.UUCP>, ncthangi@ndsuvax.UUCP (sam r. thangiah ) writes:
> 
> One of the students in my class raised a point that:
> "Man is not capable of producing a machine that is more intelligent than
> oneself".  Is this a valid statement?
> 
> I really do not know if this has been debated, but it does tell us the limits
> of achievements that can be attained by AI or does it ?
> 
> Sam

Well Sam, I don't want to appear to be rude, but if no one else will
debate me, I will go out of my way to debate many issues myself,
alternating between 2 or more points of view.

First, what is intelligence?  This is not as trivial as you suppose.
Having the dubious privilege of possessing a degree in Psychology, I
can candidly say that it is a very difficult thing to deal with.  Just
defining what it is is presently under debate.  Almost all people in
Psychology believe (with reservations) that the standard IQ tests
do not measure it.  Some cite models that propose different areas of
intelligence.  For example, I can have strong mathematical skills but
be weak in areas that require lots of memorization and less problem
solving.  So you can see one argument against it cropping up.  How
can we say a machine does or does not possess intelligence when we have
problems defining the term itself?

Next, supposing the model that our thoughts are nothing more than the
activations of our massively parallel neural networks, then there is a
potential for such a system.  This of course comes at you from the
viewpoint of the Empiricists, a la Locke, Berkeley, Hume, et al.  Now
if you start from the framework of the Rationalists like Descartes or
Leibniz, this of course is unacceptable.  But since both philosophical
approaches have problems, perhaps you should try starting with the
foundation of Kant or Nietzsche, which will leave you with the helpless
feeling that philosophy is of no help either.  Possessing 35 hours of
the wonderful stuff, I can safely say that it is fun to play with, but
in the end you are locked into many beliefs that you can find no safe
foundation for.

Perhaps the safest thing that could be said is that man may not be able
(notice the probabilistic way I phrased it) to purposely build such a
machine.  However, you are ruling out random chance in saying this.
I am sure you are well aware that the discovery of antibiotics was due
to the mistake of leaving windows open, thus allowing spores to come
through and 'corrupt' the developing cultures.  At present it may be
even safer to say that we are limited by our technology.  By this I
mean that we can't develop the massively parallel circuits on the
same scale as our brains.

But the only thing you can know for sure is that you can't know
anything for sure!?

smann@watdcsu.waterloo.edu (Shannon Mann - I.S.er) (11/09/88)

In article <6655@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>Readers interested in a more technical substantiation of Sagan's arguments
>should probably refer to the recent work of Gerald Edelman, published most
>extensively in his book NEURAL DARWINISM.  

A summary written by Edelman with the same title can be found in the book:

Title:   How We Know
Editor:  Michael Shafto
Length:  29 pages, including bibliography
Imprint: Harper & Row, New York, 1985

BF311.N62

ISBN 0-06-250777-X

The book also includes five other pieces of interest on research into the
brain, memory, and thinking machines.  All are from 'Nobel Conference XX'.

I hope you find this useful.

        -=-
-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca
        -=-

'I have no brain, and I must think...' - An Omynous
'If I don't think, AM I' - Another Omynous

bwk@mitre-bedford.ARPA (Barry W. Kort) (11/10/88)

In article <3876@cs.utexas.edu> berleant@cs.utexas.edu (Dan Berleant) 
argues in support of his thesis

 > "if we can build a machine smarter than
 > we are, we can obtain a machine of infinite -- or at least
 > maximum possible -- intelligence."

Dan, the ability to augment a system does not automatically
imply that it can be infinitely augmented or that it can
be augmented to a finite maximum.

Suppose that the degree of intelligence of a system could be mapped
onto the counting numbers: 0, 1, 2, 3, etc.

Suppose that you knew how to take a system of intelligence n,
and use it to build a machine of intelligence n+1.

Then you could build a machine of any finite intelligence (there
is no theoretical maximum), but you would never arrive at a machine
of infinite intelligence.  

Thinking about infinity is a little tricky.  Georg Cantor created
quite a furor when he came up with a meaningful way to think and
talk about transfinite numbers.  I think you would enjoy a course
in Abstract Algebra, where such ideas are carefully developed.

--Barry Kort

achut@unisoft.UUCP (Achut Reddy) (11/10/88)

In article <3876@cs.utexas.edu> berleant@cs.utexas.edu (Dan Berleant) writes:
>Given this definition, if we can build a machine more intelligent
>than we, we can have a machine whose intelligence is the theoretical
>maximum value of intelligence.

Not necessarily.  There is no guarantee that a more intelligent entity will
be able to build a more intelligent machine than ourselves.

The following analogy will make this clear:
Take the best optimizing C compiler for some new CPU, say the MC88100.
Compile this compiler with itself.  You *may* get a better compiler
(you also may not).  Repeat until fixed point is obtained.  The final
compiler is not necessarily the best that can be achieved.
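
To put toy numbers on the analogy (improve() below is invented; no real
compiler behaves this simply), the fixed point the iteration settles at can
sit well below the best achievable:

    /* toy model of "recompile the compiler with itself until it stops
     * improving"; improve() is an invented rule whose gains shrink and
     * stall at a plateau below some assumed global optimum */
    #include <stdio.h>

    #define GLOBAL_OPTIMUM 100.0
    #define PLATEAU         60.0   /* assumed limit of self-improvement */

    static double improve(double q)
    {
        return q + 0.5 * (PLATEAU - q);   /* gains shrink toward PLATEAU */
    }

    int main(void)
    {
        double q = 10.0;           /* quality of the hand-built compiler */
        double prev;
        int pass = 0;

        do {
            prev = q;
            q = improve(q);        /* compile the compiler with itself */
            pass++;
            printf("pass %2d: quality %.4f\n", pass, q);
        } while (q - prev > 1e-6); /* stop at the fixed point */

        printf("fixed point %.4f, but the optimum was %.1f\n",
               q, GLOBAL_OPTIMUM);
        return 0;
    }

Whether intelligence behaves like improve() here -- gains that dry up -- or
keeps compounding is exactly what nobody knows.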

Achut Reddy

ralph@laas.laas.fr (Ralph P. Sobek) (11/10/88)

In article <3833@cs.utexas.edu>, berleant@cs.utexas.edu (Dan Berleant) writes:

| Second of all, I define intelligence as "the ability to build
| intelligent machines" (but see footnote 1).
| 
| footnote 1: A _reasonable_ definition of intelligence that _also_ 
| works for the argument above is this:
| 
| Intelligence consists of 2 things, 1)the ability to convince the
| average person that intelligence is being displayed (which I 
| define to have the value of either true or false), and 2)the
| ability to build intelligent machines. This definition makes
| sense and avoids circularity.

Sense it makes, but by the second definition, "intelligence does not
exist", since we cannot build intelligent machines.  Furthermore, I
know of no machine capable of building intelligent machines, etc.
Sounds like a bad definition to me - since I believe that intelligence
exists.

	Ralph@laas.laas.fr

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (11/11/88)

From article <769@wsccs.UUCP>, by dharvey@wsccs.UUCP (David Harvey):
" How can we say a machine does or does not posess intelligence when we have
" problems defining the term itself?

No problem.  Everything does or does not possess intelligence.
		Greg, lee@uhccux.uhcc.hawaii.edu