[talk.philosophy.misc] Artificial Intelligence, Intelligence, and Brain Cancer

peru@soleil.UUCP (Dave Peru) (12/01/88)

I would like to move this discussion to talk.philosophy.misc because
certain serious AI researchers are getting annoyed with the volume on comp.ai.

However, I've really enjoyed the excellent responses to my questions and
ideas, and I'm not being sarcastic either.  I would hate to lose this input.

Please direct any responses to this entry to talk.philosophy.misc, thanks.

In article <506@soleil.UUCP> peru@soleil.UUCP (I) write:

>>"As an example, think of a cancer researcher using molecular biology to
>> probe the interior of cell nuclei.  If a physicist tells him, quite
>> correctly, that the fundamental laws governing the atoms in a DNA molecule
>> are completely understood, he will find this information true but
>> useless in the quest to conquer cancer.  The cure for cancer involves
>> studying the laws of cell biology, which involve trillions upon trillions
>> of atoms, too large a problem for any modern computer to solve.  Quantum
>> mechanics serves only to illuminate the larger rules governing molecular
>> chemistry, but it would take a computer too long to solve the Schrodinger
>> equation to make any useful statements about DNA molecules and cancer."
>>
>>Using this as an analogy, and assuming Kaku/Trainer were not talking
>>about brain cancer, how big a computer is big enough for intelligence
>>to evolve?

In article <763@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:

>I really don't understand how that paragraph serves as an analogy for
>intelligence.  Finding a cure for cancer may be a very complex process
>involving many levels of explanation, but _causing_ it is something a
>tiny little virus can manage easily (one containing millions, rather
>than trillions, of atoms).  A flatworm can learn; it is not stretching
>language too far to say that it is "intelligent" to _some_ degree.  We
>can already model that much.  With Connection Machines and the like, we
>might be able to do reasonable simulations of insects.  How much

I meant it as an analogy for the complexity of the problem, in reference to
the size and power of the computer needed to solve it: that is, the problem
of creating artificial intelligence equal to or better than that of humans.

I like flatworms; they have neat-looking faces.  Let's talk about insects
for a moment, houseflies in particular.  When GE bought RCA, they laid off
the other division that worked.  Since then they have been clearing out
all the furniture in the other building.  I've been going over there and
sitting in a corner office that has a window overlooking the parking lot.
One day, while I was watching the workers cleaning up the three toxic waste
sites beyond the parking lot and meditating on the causes of the big bang,
I looked up and there was this fluorescent light with a clear plastic cover.
At one end, the plastic cover was slightly ajar.  There were all these dead
houseflies resting on the inside of the plastic cover.  Then I noticed there
was one housefly that was still alive!  He/she was buzzing around trying to
get out through the plastic cover.  I thought to myself that the fly should
stop, back up, go over to the other end of the fluorescent light, and go out
the same hole that he/she came in.  Unfortunately for the fly, its
intelligence was not able to solve this problem.
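(For fun, here is the fly's predicament as a toy search problem.  This is
only a sketch of mine; the one-dimensional corridor, the names, and the
choice of Python are all illustrative assumptions, not a claim about real
fly brains.  A greedy searcher that only moves toward the light dies under
the cover; one willing to back up finds the hole.)

    # The light cover as a corridor, positions 0..5.  The entry hole is
    # at position 0; the glow is brightest at position 5, which is sealed.
    EXIT, SEALED_END = 0, 5

    def greedy_fly(start):
        """Always moves toward the brightest spot; never backs up."""
        pos = start
        for _ in range(100):
            if pos == EXIT:
                return "escaped"
            pos = min(pos + 1, SEALED_END)   # keeps pressing at the glow
        return "dead on the plastic cover"

    def backtracking_fly(start):
        """Retreats after hitting the dead end and tries the other way."""
        pos, hit_dead_end = start, False
        while pos != EXIT:
            if pos == SEALED_END:
                hit_dead_end = True
            pos = pos - 1 if hit_dead_end else pos + 1
        return "escaped"

    print(greedy_fly(3))        # dead on the plastic cover
    print(backtracking_fly(3))  # escaped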

>might be able to do reasonable simulations of insects.  How much
>intelligence do you want?  Another point:  the size of computer needed
>to support a single ``intelligent'' program and the size of computer
>needed to support an ``evolving'' population in which intelligence emerges
>are two very different sizes.

I want intelligence equal to or greater than that of human beings.  I want
a man/woman-created machine/organism I can share culture with, exchange ideas
about reality with, and most of all be friends with.  :-)

I would really dislike a man/woman-created machine/organism used to control
and exploit the population; normal computers and people do a fine job as it is.

What is the difference in magnitude between "a single ``intelligent'' program"
and "an ``evolving'' population"?  What are we talking about?  Someone said
a few decades; you say within a century.  These are indeed difficult problems.

Here's another attempt, definition of human-like-intelligence:

     1. Be able to define or identify problems.
     2. Know how to solve problems.
     3. Know when to stop trying.

I mean problems in a very general sense.  Problems that we solve in our
day to day existence as well as the tough mathematical or philosophical
ones.  Houseflies die all the time.

#3 is always the nasty one.  This means not stopping for biological
reasons, but because the problem has been shown to be impossible to solve,
or because it would take an infinite amount of time to solve, whatever
that means.
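(In fact, #3 is exactly where Turing's halting problem bites.  Here is a
sketch, in Python, of the standard diagonal argument; the function halts()
is hypothetical, and that is the whole point -- no such total procedure
can exist.)

    def make_contrary(halts):
        """Build a program that defeats any claimed halting-tester.

        halts(f, x) is assumed to always answer, correctly, whether
        f(x) would eventually stop.  No such function exists; this
        construction is the reason why."""
        def contrary(f):
            if halts(f, f):       # if f(f) would stop...
                while True:       # ...then loop forever,
                    pass
            return "done"         # ...otherwise stop at once.
        return contrary

    # Feed contrary to itself and you get a contradiction either way:
    # if halts(contrary, contrary) says "halts", contrary(contrary)
    # loops; if it says "loops", contrary(contrary) halts.  So no
    # machine can fully implement "know when to stop trying" for
    # arbitrary problems.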

"know" is my favorite word.  I'll define it recursively:

Know is whatever we think we know.


What follows is an entry I made on talk.philosophy.misc.  I would greatly
appreciate your time in reading it and responding on talk.philosophy.misc.

Thanks in advance.

From peru Tue Nov 29 17:33:49 1988
Path: soleil!peru
From: peru@soleil.UUCP (Dave Peru)
Newsgroups: talk.philosophy.misc
Subject: RE: Artificial Intelligence and Intelligence
Message-ID: <508@soleil.UUCP>
Date: 29 Nov 88 22:33:49 GMT
Organization: GE Solid State, Somerville, NJ
Lines: 90

In message <6618@pucc.Princeton.EDU> E Nilges writes:

>    Wittgenstein's Tractatus Logico-Philosophicus is the most
>    thoroughgoing exposition of the syntactical notion of knowledge
>    and truth there is.  Briefly, if the syntactical notion is
>    true, AI is possible, if not, AI is impossible.

Do you have any references to Wittgenstein's work?  I'm interested.

>   true, AI is possible, if not, AI is impossible.

Do you mean that Wittgenstein later in life departed from the view that
"if the syntactical notion is true, AI is possible, if not, AI is
 impossible", or simply that AI is possible?

Or better yet, did Wittgenstein believe AI is possible or not?  And at
what times in his life?

>   in later life departed from this view.  The social notion of
>   knowledge found in his Philosophical Investigations has its
>   own special problems.  Briefly, if knowledge is social, then so
>   is ignorance.

Ignorance is my speciality.  :-)

>    My own view of machine intelligence is that machines
>    will be intelligent when they form a society with human beings.

I like your view of machine intelligence.  And I like your T.S. Eliot quote.

Lately, my view on AI has been toggling.  On odd days I think it's impossible,
contrariwise on even days.

AI proponents are like UFOlogists searching for alien intelligence.  I'm a bit
skeptical of both parties, but it may be possible.  A physicist told me
he thinks AI will just happen by accident.

Can algorithms ever transcend their own problems and solve new ones?

I feel written thoughts, including this one, cannot have meaning without
in some way being represented by reality.

I believe the meaning of meaning comes from an infinite regress.

Finite machines are simply that, finite.

At this point Hofstadter would say something like (I being the Tortoise):

      "In fact, machines get around the Tortoise's silly objections as
       easily as people do, and moreover for exactly the same reason: both
       machines and people are made of hardware which runs all by itself,
       according to the laws of physics."

                                 "Godel, Escher, Bach ..." p.685

The laws of physics are not perfect.  Show me a unified field theory that
is deterministic and does not use the concept of infinity.  People are analog,
not digital.  The universe is analog; time is something we created in order
to have science.

So when infinity becomes a problem, AI proponents use the laws of physics
as a crutch.  But earlier, in the caption of figure 108 on p. 573,
Hofstadter states:

     "Crucial to the endeavor of Artificial Intelligence research is the
      notion that the symbolic levels of the mind can be 'skimmed off' of
      their neural substrate and implemented in other media, such as the
      electronic substrate of computers.  To what depth the copying of
      brain must go is at present completely unclear."

You can't have your cake recipe and eat it too!  :-)  My point is that you
have to have the whole thing, the whole brain.

Analog brain, not digital.  Or else the meaning of meaning is meaningless.

Has anyone heard of anyone doing AI research on analog computers?
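(Here is one way to see what that would even mean.  A minimal sketch of
mine, assuming the standard leaky-integrator textbook model; the constants
and names are arbitrary illustrative choices.  An "analog" neuron obeys a
continuous-time law, dv/dt = (-v + input) / tau, and a digital machine can
only step through it in discrete ticks.)

    def simulate_leaky_neuron(inp, tau=0.02, dt=0.001, steps=100):
        """Euler-method approximation of dv/dt = (-v + inp) / tau."""
        v, trace = 0.0, []
        for _ in range(steps):
            v += dt * (-v + inp) / tau   # one discrete tick
            trace.append(v)
        return trace

    # Halve dt and every value in the trace changes: the digital copy
    # is always an approximation, never the continuous thing itself.
    print(simulate_leaky_neuron(1.0)[-1])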

Maybe we could rig up an aborted fetus's brain.  I'm being sarcastic;
please no comments.  :-|  :-)

The religious right would simply get too violent.  :-)

In the last 30 years, what has been the greatest achievement of AI research
on digital computers?

"We judge ourselves by what we think we are capable of doing,
 while others judge us by what we've actually done."

                              Henry Wadsworth Longfellow