[comp.ai] The difference between machine and human intelligence

lishka@uwslh.UUCP (Fish-Guts) (11/18/88)

In article <4216@homxc.UUCP> marty@homxc.UUCP (M.B.BRILLIANT) writes:
>
>Any definition of ``artificial intelligence'' must allow intelligence
>to be characteristically human, but not exclusively so.

     A very good point (IMHO).  I believe that artificial intelligence
is possible, but that machine intelligence will probably *NOT*
resemble human intelligence all that closely.  My main reason for this
is that unless you duplicate much of what a human is (i.e. the neural
structure, all of the senses, etc.), you will not get the same result.
I propose that a machine without human-like senses cannot "understand"
many ideas and imagery the way a human does, simply because it will
not be able to perceive its surroundings in the same way as a human.
Any comments?

					.oO Chris Oo.-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

		 "I'm not aware of too many things...
		  I know what I know if you know what I mean"
		    -- Edie Brickell & the New Bohemians

prathuri@elbereth.rutgers.edu (Chandrasekhar .P) (11/18/88)

With reference to Lishka's posting:

I agree with your statement that machines can never replicate humans.
After all, it is the human who has to give the machine this capability.
To do such a job, man has to go a long way.

** Man, as yet, has no exact knowledge of the physiology of his own
brain - i.e., the hardware of his system.

** He is not sure of its software either.  (What operating system does
he use!)

** To make a system that can simulate human intelligence, it should
have the structure of a human brain.  To make such a machine, man
must first understand his own structure, in order to give the machine
common sense.  Don't you think there is a very long way to go?

** Many researchers have said that a perfect reasoning system simulating
human intelligence should know about itself; it should reason about its
own reasoning process.  Does man do this?  And yet aren't we still
intelligent?
** Conclusion:  Should the machine know about itself to be
intelligent?

chandra s prathuri@elbereth.rutgers.edu  csp@mars      

lishka@uwslh.UUCP (Fish-Guts) (11/19/88)

In article <Nov.17.21.40.09.1988.14638@elbereth.rutgers.edu> prathuri@elbereth.rutgers.edu (Chandrasekhar .P) writes:
>
[some text deleted...C.L.]
>
>** To make a system that can simulate human intelligence, it should
>have the structure of a human brain.  To make such a machine, man
>must first understand his own structure, in order to give the machine
>common sense.  Don't you think there is a very long way to go?

     Oh yeah!  If we (as humans) ever get there!

>** Many researchers have said that a perfect reasoning system simulating
>human intelligence should know about itself; it should reason about its
>own reasoning process.  Does man do this?  And yet aren't we still
>intelligent?

     Maybe we do know about ourselves in the sense that we understand
that we *do* have the ability to reason rationally or irrationally,
and that it seems to be an ability that other, non-living things (such
as rocks, cars, and computers) do not have.  I think that humans *do*
reason about the reasoning process, as witnessed by some original
thinking in religion, philosophy, psychology, sociology, and AI.

>** Conclusion:  Should the machine know about itself to be
>intelligent?

     My feeling on this (and I do not have reasons to back it up!) is
*yes*: the machine should at the very least have a concept of its own
self in order to be intelligent.  Possibly this would serve as a
reference point from which it could reason about the rest of the
world.

     I wonder if creating a machine that had a concept of its own
self would be easier than trying to give it intelligence right away?
(Probably not...just a thought.)  However, if a machine *were* created
that understood what it was (in some sense), it might have an easier
time relating the rest of the world to itself.

    It would also seem that this is a very important part of human
(and other animals') nature.  The ability to relate to the world from
a fixed perspective (our own individual perspective) seems to me a
fundamental one.

>chandra s prathuri@elbereth.rutgers.edu  csp@mars      

    Please do not take this too seriously!  I am just wondering "out
loud," and hoping to stimulate some other interesting ideas (of which
there have been quite a few lately).

					.oO Chris Oo.-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

		 "I'm not aware of too many things...
		  I know what I know if you know what I mean"
		    -- Edie Brickell & the New Bohemians

sbigham@dukeac.UUCP (Scott Bigham) (11/19/88)

In article <401@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>I believe that artificial intelligence
>is possible, but that machine intelligence will probably *NOT*
>resemble human intelligence...

So how shall we define machine intelligence?  More importantly, how will we
recognize it when (if?) we see it?

						sbigham

-- 
Scott Bigham                         "The opinions expressed above are
Internet sbigham@dukeac.ac.duke.edu   (c) 1988 Hacker Ltd. and cannot be
USENET   sbigham@dukeac.UUCP          copied or distributed without a
...!mcnc!ecsgate!dukeac!sbigham       Darn Good Reason."

smann@watdcsu.waterloo.edu (Shannon Mann - I.S.er) (11/20/88)

In article <Nov.17.21.40.09.1988.14638@elbereth.rutgers.edu> prathuri@elbereth.rutgers.edu (Chandrasekhar .P) writes:
>With reference to Lishka's posting:
>
>I agree with your statement that machines can never replicate humans.
>After all, it is the human who has to give the machine this capability.
>To do such a job, man has to go a long way.

Earlier this century, there were people saying that humankind would
never go to the moon (a sentiment that goes back centuries).  But we
*have* gone to the moon, left artefacts on its surface, taken samples
back to earth, and so on.  Knowing this, the most we can sensibly say
is that we *may* never create a machine that... (whatever we may want
a machine to do).  To say that we can _never_ do a particular thing is
inherently incorrect.  (To say it with conviction implies you have
knowledge about the future.  If you do, tell us; we would be ecstatic
to hear from you :-)

>chandra s prathuri@elbereth.rutgers.edu  csp@mars      

        -=-
-=- Shannon Mann -=- smann@watdcsu.UWaterloo.ca
        -=-

'I have no brain, and I must think...' - An Omynous
'If I don't think, AM I' - Another Omynous

prathuri@elbereth.rutgers.edu (Chandrasekhar .P) (11/21/88)

With reference to Lishka's and Mann's postings:

Let us look at the problem of AI realistically: how far have we come,
and how far can we go?  My intention is not to prove the problem
NP-complete, i.e., to make a lengthy study of how difficult a problem
is rather than of how to solve it (as most algorithms people do!).

And just because we have done things in the past which we thought to
be impossible, it does not mean that we can solve all problems.  At
the same time, I do not intend to be pessimistic; history gives great
evidence that it is hope that drives us to achieve our goals.

But how far have we come in knowledge representation?  McCarthy's
logic, Minsky's frames, Schank's conceptual dependency - each takes a
different approach to the problem, and none of them is complete.
It is humanly impossible to represent all knowledge explicitly.  A
computer scientist cannot solve the problem on his own; the biology
and psychology of the brain have to be understood clearly first.

To this date, we do not know how knowledge is represented in our brain.
Is it represented at more than one location?
Or does each piece of knowledge have only one representation?
How do we retrieve information from our brain?  Is it by depth-first
or breadth-first search?  Backward chaining or forward chaining,
or both?
If information is stored at more than one location, what type of
information is stored in more than one place?
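
(To make the chaining question concrete, here is a minimal sketch in
Python of forward versus backward chaining over a toy rule base.  The
rules, facts, and function names are invented purely for illustration;
no real knowledge-representation system is this simple.)

    # Toy rule base: (set of premises, conclusion).  Invented example.
    RULES = [
        ({"has_fur", "gives_milk"}, "mammal"),
        ({"mammal", "eats_meat"},   "carnivore"),
    ]

    def forward_chain(facts):
        """Data-driven: apply rules until no new facts appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    def backward_chain(goal, facts):
        """Goal-driven: work backward from the goal to known facts.
        (Assumes the rule base has no cycles.)"""
        if goal in facts:
            return True
        return any(all(backward_chain(p, facts) for p in premises)
                   for premises, conclusion in RULES
                   if conclusion == goal)

    facts = {"has_fur", "gives_milk", "eats_meat"}
    print(forward_chain(facts))                # derives 'mammal', then 'carnivore'
    print(backward_chain("carnivore", facts))  # True

Forward chaining runs from the data toward conclusions; backward
chaining starts from the question and works back.  Which (if either)
the brain uses is exactly the open question above.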

I am not saying that all the approaches made toward the problem so
far are incorrect (and who am I to say!).  All those great researchers
have opened new doors to our thinking.
My feeling is that we should make more use of biology and psychology,
so that the problem becomes less difficult, and we are not left merely
trying to prove how hard AI is.

Chandra s prathuri  csp

thom@dgbt.uucp (Thom Whalen) (11/23/88)

From article <401@uwslh.UUCP>, by lishka@uwslh.UUCP (Fish-Guts):
> I propose that a machine without human-like senses cannot "understand"
> many ideas and imagery the way a human does, simply because it will
> not be able to perceive its surroundings in the same way as a human.
> Any comments?

Do you believe that Helen Keller "understood many ideas and imagery the
way a human does"?  She certainly lacked much of the sensory input that
we normally associate with intelligence.


----------------------------------

Thom Whalen

"Someday I'll make a signature"

lishka@uwslh.UUCP (Fish-Guts) (12/01/88)

In article <960@dgbt.uucp> thom@dgbt.uucp (Thom Whalen) writes:
>From article <401@uwslh.UUCP>, by lishka@uwslh.UUCP (Fish-Guts):
>> I propose that a machine without human-like senses cannot "understand"
>> many ideas and imagery the way a human does, simply because it will
>> not be able to perceive its surroundings in the same way as a human.
>> Any comments?
>
>Do you believe that Helen Keller "understood many ideas and imagery the
>way a human does"?  She certainly lacked much of the sensory input that
>we normally associate with intelligence.
>
>Thom Whalen

     I do not believe she *perceived* the world as most people with
full senses do.  I do believe she "understood many ideas and imagery"
the way humans do, because she had (1) touch, (2) taste, and (3)
olfactory senses (she was not able to hear or see, if I remember
correctly), as well as other internal sensations (i.e. sickness, pain,
etc.).  The way I remember it, she was taught to speak by "feeling"
the vibrations of her teacher's throat as words were said, while
associating the words with some sensation (i.e. the "feeling" of
water as it ran over her hands).  Also (and this is a highly personal
judgement), I think the fact that she was a human, with a human nervous
system and human reactions to other sensations (i.e. a sore stomach,
human sicknesses, etc.), also added to her "human understanding".

					.oO Chris Oo.-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Immunology Section  (608)262-1617                            lishka@uwslh.uucp

		 "I'm not aware of too many things...
		  I know what I know if you know what I mean"
		    -- Edie Brickell & the New Bohemians

jpdres10@usl-pc.usl.edu (Green Eric Lee) (12/03/88)

In article <960@dgbt.uucp> thom@dgbt.uucp (Thom Whalen) writes:
>From article <401@uwslh.UUCP>, by lishka@uwslh.UUCP (Fish-Guts):
>> I propose that a machine without human-like senses cannot "understand"
>> many ideas and imagery the way a human does, simply because it will
>> not be able to perceive its surroundings in the same way as a human.
>Do you believe that Helen Keller "understood many ideas and imagery the
>way a human does"?  She certainly lacked much of the sensory input that
>we normally associate with intelligence.

Actually, she didn't. Her world was bounded by touch, taste, and
smell. She became adept at putting together words to make it seem that
she had the same perceptions as a "normal" person, but that was more a
sad attempt to "fit in". Another interesting thing is that the
knowledge that things like "blue" existed came via her existing
sensory inputs (touch), specifically from reading books... which seems
to imply that one sensory input can, to a large extent, substitute for
others, when there is some convention of information interchange.

Note, though, that even Helen Keller's sensory input was greater than
that of today's AI systems...

--
Eric Lee Green                            P.O. Box 92191, Lafayette, LA 70509
     {ames,mit-eddie,osu-cis,...}!killer!elg, killer!usl!elg, etc.

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/05/88)

From article <131@usl-pc.usl.edu>, by jpdres10@usl-pc.usl.edu (Green Eric Lee):
" In article <960@dgbt.uucp> thom@dgbt.uucp (Thom Whalen) writes:
"...
" >Do you believe that Helen Keller "understood many ideas and imagery the
" >way a human does?  She certainly lacked much of the sensory input that
" >we normally associate with intelligence.
" 
" Actually, she didn't. Her world was bounded by touch, taste, and
" smell.

How do you know she didn't?  What does it mean to say her "world"
was bounded in this way?

" She became adept at putting together words to make it seem that
" she had the same perceptions as a "normal" person, ...

Isn't that what you have done?  I think this is what learning language
consists in.

" but that was more a
" sad attempt to "fit in". Another interesting thing is that the
" knowledge that things like "blue" existed came via her existing
" sensory inputs (touch), specifically from reading books... which seems
" to imply that one sensory input can, to large extent, substitute for
" others, when there is some convention of information interchange.
"...

I don't know about this business of "information interchange".
Another possibility is that at an appropriate level of processing,
different sensory inputs are equivalent.  Though I don't say it
helps us understand Helen Keller's case, von Bekesy's experiments
to establish commonalities between touch and hearing are interesting
in this regard.  There may be commonalities between vision and
speech perception -- there is a three-color theory for both, for
instance (for speech, I'm referring to the first three vowel
formants).
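
(A toy way to see the point: represent both a color and a vowel as a
point in three dimensions, and the machinery one level up needn't care
which sense the numbers came from.  A minimal sketch in Python; the
formant figures are rough textbook values, and the whole example is my
own invention, not von Bekesy's.)

    from math import dist  # Euclidean distance (Python 3.8+)

    # A color as (R, G, B); a vowel as its first three formants (Hz).
    color_blue = (0, 0, 255)
    color_cyan = (0, 255, 255)
    vowel_i    = (270, 2290, 3010)   # /i/ as in "beet"
    vowel_u    = (300, 870, 2240)    # /u/ as in "boot"

    # The same comparison routine serves both senses:
    print(dist(color_blue, color_cyan))  # distance in color space
    print(dist(vowel_i, vowel_u))        # distance in vowel space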

So for perception as well as cognition, it may prove possible to
port the human programs to execute on different hardware.

		Greg, lee@uhccux.uhcc.hawaii.edu

bwk@mitre-bedford.ARPA (Barry W. Kort) (12/06/88)

In article <131@usl-pc.usl.edu> Eric Lee Green (elg@killer.UUCP)
writes about Helen Keller:

 > ... which seems
 > to imply that one sensory input can, to a large extent, substitute for
 > others, when there is some convention of information interchange.

Eric's point is well illustrated by the invention of devices for
the blind.  One such device is a TV camera which drives a bank of
vibrating pins.  The blind person straps the bank of vibrating pins
to his back, and "sees" a low-resolution bit-map image from the camera.
The camera is suspended in air from a cable so that the blind person
can easily point it around the room.

When one such person aimed the camera at a candle flame, he exclaimed,
"The flame dances!"  Before that time, he presumed that flames had
no motion.  
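
(For the curious: the camera-to-pin mapping is essentially a coarse
downsampling of the image.  Here is a minimal sketch in Python; the
grid size, threshold, and function name are my own assumptions, since
the actual device's parameters are not given above.)

    def image_to_pins(image, rows=20, cols=20, threshold=128):
        """Reduce a grayscale image (a list of rows of 0-255 pixel
        values) to a rows x cols grid of booleans, one per pin:
        True means the pin vibrates.  Assumes the image is at
        least rows x cols pixels."""
        h, w = len(image), len(image[0])
        pins = []
        for r in range(rows):
            row = []
            for c in range(cols):
                # Average the block of pixels this pin covers.
                block = [image[y][x]
                         for y in range(r * h // rows, (r + 1) * h // rows)
                         for x in range(c * w // cols, (c + 1) * w // cols)]
                row.append(sum(block) / len(block) > threshold)
            pins.append(row)
        return pins

Each pin simply reports whether its patch of the scene is bright or
dark; the "seeing" is done by the wearer's brain, which learns to
interpret the tactile bit-map.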

By the same token, an artist can take an image in his mind's eye
and transform it into a painting, a sculpture, a poem, or music.
Performing information-preserving transformations between the senses
is a time-honored profession.

--Barry Kort