[talk.philosophy.misc] Artificial Intelligence and Brain Cancer

peru@soleil.UUCP (Dave Peru) (11/29/88)

I know AI research is about building smart tools for smart people, but for
a moment let's get back to the idea of TRUE Artificial Intelligence.

Whatever your definition of intelligence, let's get back to the idea of
building a "Terminator", or a "Commander Data".

Assume the universe is deterministic, and consider the following paragraph
from "Beyond Einstein" by Kaku/Trainer, 1987:

"As an example, think of a cancer researcher using molecular biology to
 probe the interior of cell nuclei.  If a physicist tells him, quite
 correctly, that the fundamental laws governing the atoms in a DNA molecule
 are completely understood, he will find this information true but
 useless in the quest to conquer cancer.  The cure for cancer involves
 studying the laws of cell biology, which involve trillions upon trillions
 of atoms, too large a problem for any modern computer to solve.  Quantum
 mechanics serves only to illuminate the larger rules governing molecular
 chemistry, but it would take a computer too long to solve the Schrodinger
 equation to make any useful statements about DNA molecules and cancer."
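
To put a rough number on "too long": even a toy count of the quantum state
space shows the problem.  The sketch below is my own crude arithmetic, not
Kaku/Trainer's; it assumes just two basis states per particle, which is
already wildly generous to the computer.

# Toy illustration (my assumption: two basis states per particle).  The
# joint state space then has 2**n dimensions, so a direct numerical
# solution of the Schrodinger equation is hopeless long before you reach
# a whole DNA molecule, never mind a cell.

ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80   # commonly quoted order of magnitude

for n in (10, 50, 300, 1000):
    dim = 2 ** n
    note = ""
    if dim > ATOMS_IN_OBSERVABLE_UNIVERSE:
        note = "  (more states than atoms in the observable universe)"
    digits = len(str(dim))              # decimal digits in 2**n
    print(f"{n:5d} particles -> state-space dimension 2^{n}"
          f" (~{digits} decimal digits){note}")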

Using this as an analogy, and assuming Kaku/Trainer were not talking
about brain cancer, how big a computer is big enough for intelligence
to evolve?

Can someone give me references to any articles that make "intelligent" guesses
about how much computing power is necessary for creating artificial
intelligence?  How many tera-bytes of memory?  How many MIPS?  Knowing the
recent rates of technological development, how many years before we have
machines powerful enough?
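
To make the question concrete, here is the kind of back-of-the-envelope
arithmetic I have in mind.  Every figure in it is a guess on my part, which
is exactly why I want references:

# Rough sizing of a "brain-scale" machine.  ALL of these figures are
# assumptions made for the sake of arithmetic, not established facts.

NEURONS           = 1e11   # assumed neuron count (order of magnitude)
SYNAPSES_PER_CELL = 1e3    # assumed synapses per neuron
BYTES_PER_SYNAPSE = 4      # assume one 32-bit weight per synapse
FIRING_RATE_HZ    = 100    # assumed average firing rate
OPS_PER_EVENT     = 1      # assume one operation per synaptic event

synapses       = NEURONS * SYNAPSES_PER_CELL
memory_tbytes  = synapses * BYTES_PER_SYNAPSE / 1e12
ops_per_second = synapses * FIRING_RATE_HZ * OPS_PER_EVENT
mips           = ops_per_second / 1e6

print(f"memory  ~ {memory_tbytes:,.0f} tera-bytes")
print(f"compute ~ {mips:,.0f} MIPS")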

Am I wasting my time on weekends trying to create artificial intelligence
on my home computer?  Should I buy another 2 mega-bytes of memory?  :-)

In a previous article someone questioned what I meant by "know"
in my statement "know how to solve problems".  If you don't KNOW what
KNOW "means", then you don't KNOW anything.  I "mean", we have to start
somewhere, or we can't have a science.  Without duality, science has no
meaning.

Do you remember the scene in the movie "Terminator" where Arnold uses a
razor blade to cut out his damaged eye?  Pretty good hand-eye coordination
for a machine.

How many of you out there were rooting for the Terminator?

I love the effect the idea of "Artificial Intelligence" has on society.  With
an army of "Commander Data" androids, why would any corporation keep any human
workers at all?  Of course, after a few years, a bright, hard-working,
bottom-line, lean-and-mean android will become CEO.  Irrational, silly human
beings are so inefficient; I like working seven days a week, 24 hours a day.

Does anyone have references to studies on the myths and misconceptions the
general population may have about AI research?  I'm sure I'm not the only one
who watches sci-fi movies.  Maybe teenagers think there's no point in studying
since androids are just around the corner; maybe they're right!  With all the
money banks pump into AI research, I thought we would have had an "intelligent"
stockbroker last year.

Please send responses to "talk.philosophy.misc" or send me email.

P.S. In reference to "Assume the universe is deterministic", I think the
     universe is analog and cannot be described digitally. 

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (11/29/88)

In article <506@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>
>Can someone give me references to any articles that make "intelligent" guesses
>about how much computing power is necessary for creating artificial
>intelligence?  How many tera-bytes of memory?  How many MIPS?  Knowing the
>recent rates of technological development, how many years before we have
>machines powerful enough?
>
The human brain has on the order of a hundred billion neurons.  Higher
mammals can get by on considerably fewer.  Worms have just a few.  If we
are talking about making intelligence out of neural nets (the best
idea I know of), it will probably be some decades before we have
such beasties.  
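
At the worm end of the scale the thing is already trivial to write down.
Here is a minimal sketch, just to fix ideas: a single artificial neuron
trained with the perceptron rule to compute logical AND.  Nothing in it
corresponds to any real nervous system.

# One "neuron" learning AND by the perceptron rule.  Illustrative only.

def step(x):
    return 1 if x > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            err = target - step(w1 * x1 + w2 * x2 + b)
            w1 += lr * err * x1
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train(AND)
for (x1, x2), target in AND:
    print((x1, x2), "->", step(w1 * x1 + w2 * x2 + b), "want", target)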

ok@quintus.uucp (Richard A. O'Keefe) (11/29/88)

In article <506@soleil.UUCP> peru@soleil.UUCP (Dave Peru) writes:
>Assume the universe is deterministic
I cannot assume that the universe is deterministic and also accept
Quantum Mechanics.

>consider the following paragraph from "Beyond Einstein" by Kaku/Trainer 1987:

>"As an example, think of a cancer researcher using molecular biology to
> probe the interior of cell nuclei.  If a physicist tells him, quite
> correctly, that the fundamental laws governing the atoms in a DNA molecule
> are completely understood, he will find this information true but
> useless in the quest to conquer cancer.  The cure for cancer involves
> studying the laws of cell biology, which involve trillions upon trillions
> of atoms, too large a problem for any modern computer to solve.  Quantum
> mechanics serves only to illuminate the larger rules governing molecular
> chemistry, but it would take a computer too long to solve the Schrodinger
> equation to make any useful statements about DNA molecules and cancer."

Well, that there is such a thing as THE cure for cancer is almost certainly
false, and even if there were a unique cure, presumably studying the laws
of cell biology would be involved in *finding* the cure, rather than in the
cure itself.  As a strict matter of historical fact, Quantum Mechanics has
illuminated some of the problems rather directly (NMR is a quantum-mechanical
phenomenon; it was the study of QM that suggested that biological molecules
might be vulnerable to microwave radiation, and how to predict which
frequencies might be most relevant).  Also, judging only from this
paragraph, Kaku/Trainer are guilty of reductionism themselves.  (I'll leave
it to other readers to explain how.)

>Using this as an analogy, and assuming Kaku/Trainer were not talking
>about brain cancer, how big a computer is big enough for intelligence
>to evolve?

I really don't understand how that paragraph serves as an analogy for
intelligence.  Finding a cure for cancer may be a very complex process
involving many levels of explanation, but _causing_ it is something a
tiny little virus can manage easily (one containing millions, rather
than trillions, of atoms).  A flatworm can learn; it is not stretching
language too far to say that it is "intelligent" to _some_ degree.  We
can already model that much.  With Connection Machines and the like, we
might be able to do reasonable simulations of insects.  How much
intelligence do you want?  Another point:  the size of computer needed
to support a single ``intelligent'' program and the size of computer
needed to support an ``evolving'' population in which intelligence emerges
are two very different sizes.
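
A crude way to see how different: the population route multiplies the
per-agent cost by a population size and a generation count.  Every figure
below is invented purely to show the multiplier, not to estimate anything:

# Invented numbers, chosen only to show the multiplicative gap between
# running one "mind" and evolving a population of them.

ops_per_agent   = 1e15   # assumed cost of running ONE candidate program
population_size = 1e4    # assumed individuals per generation
generations     = 1e5    # assumed generations before intelligence emerges

single_program = ops_per_agent
evolved        = ops_per_agent * population_size * generations

print(f"single program:      {single_program:.0e} operations")
print(f"evolving population: {evolved:.0e} operations "
      f"({evolved / single_program:.0e} times as much)")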

>Can someone give me references to any articles that make "intelligent" guesses
>about how much computing power is necessary for creating artificial
>intelligence?  How many tera-bytes of memory?  How many MIPS?  Knowing the
>recent rates of technological development, how many years before we have
>machines powerful enough?

There was an article in CACM this year which included estimates of how
much memory capacity &c humans have.  The answer to the last question
is "well within a century".

>Am I wasting my time on weekends trying to create artificial intelligence
>on my home computer?

Yes, unless you enjoy that sort of thing.

>In a previous article someone questioned what I meant by "know"
>in my statement "know how to solve problems".  If you don't KNOW what
>KNOW "means", then you don't KNOW anything.  I "mean", we have to start
>somewhere, or we can't have a science.  Without duality, science has no
>meaning.

Did the slave boy "KNOW" how to solve that geometry problem before
Socrates asked him?  If I possess all the information required to solve
a puzzle, am able to perform each of the steps in the solution, but would
not live long enough to perform all of them, do I "KNOW" how to solve it?
If I BELIEVE that I have a method which will solve the puzzle in my life-
time, but my reason for believing it is wrong, although my method is in
fact correct, do I "KNOW" how to solve it?  If I have no idea how to
solve such puzzles myself, but have a friend who always helps me solve
them, so that when presented with such a puzzle I never fail to obtain
a solution, do I "KNOW" how to solve it?  How about if the "friend" is
a computer?  How about if the "computer" is a set of rules in a book which
I cannot remember all at once, but can follow?  According to one
dictionary definition, it would be sufficient if I could recognise a
solution when I saw one.

If we "KNEW" what "KNOW" "means", we wouldn't need philosophy.

If I understood what "without duality, science has no meaning", I might
agree with it.  For now, I can only wonder at it.