[comp.ai] Re^2: Building a brain

lishka@uwslh.UUCP (Hang loose...) (10/19/89)

schultz@cell.mot.COM (Rob Schultz) writes:
>I see several methods for shortening this process to a (nearly?) manageable
>level:

>    1.  Memory/retention.  Presumably, an intelligent machine will not
>        forget information it has learned. (This assumes we do not model
>        the system after ourselves :-)) Therefore, the machine would not
>        have to waste time re-learning something it should already know.

     The possibility exists that if a machine is patterned after the
human brain enough, it *will* forget information that was previously
entered into it.  After all, research into neural nets has shown that
it is possible to lose information put into a neural network (i.e.
forget) if too much information is fed to it.  I am not convinced that
models sufficiently close to the structure of the human brain will be
able to retain everything.
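
     This capacity limit can be seen even in a toy Hopfield-style
associative memory.  The following sketch (in Python, purely for
illustration; the network size and pattern counts are arbitrary) stores
patterns with Hebbian learning and shows that recall of an old pattern
degrades once too many patterns have been stored:

```python
# A toy Hopfield-style associative memory (illustrative only; sizes and
# pattern counts are arbitrary).  Hebbian training, synchronous recall.
import random

def train(patterns, n):
    # Hebbian rule: W[i][j] accumulates p[i]*p[j] over all stored patterns.
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=10):
    # Repeatedly update every unit from the weighted sum of the others.
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

def overlap(a, b):
    # Fraction of units on which two states agree (1.0 = perfect recall).
    return sum(x == y for x, y in zip(a, b)) / len(a)

random.seed(0)
n = 30
rand_pat = lambda: [random.choice([-1, 1]) for _ in range(n)]

# With few stored patterns, a stored pattern is a stable memory ...
few = [rand_pat() for _ in range(2)]
stable_few = overlap(recall(train(few, n), few[0]), few[0])

# ... but well past the roughly 0.14*n capacity limit, old memories degrade.
many = [rand_pat() for _ in range(15)]
stable_many = overlap(recall(train(many, n), many[0]), many[0])

print(stable_few, stable_many)
```

Feeding in more patterns does not just fail to store the new ones; it
corrupts recall of the old ones, which is "forgetting" in exactly the
sense above.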

>    2.  Continuous Input.  Such a machine will not require sleep, nourishment,
>        or any other such distractions. So, instead of losing 8 to 15 hours 
>        out of every 24, it should be able to receive continuous input of
>        information.

     How do we know this?  I believe there is still some puzzle as to
exactly what sleep provides for a person psychologically.  I have read
at least one general article that mentioned the possibility of sleep
having to do with "sorting out" or "processing" events that happened
during the day.  If this is the case, truly intelligent machines might
require sleep (or some equivalent) as well.

>    3.  Input Speed.  Information may be input directly in electronic form, 
>        thus reducing or even eliminating the time required to translate/digest
>        information. This leads to several interesting possibilities:

     Yes, this might be a possible speed up.

>    4.  Restricted Domain.  If we decide to create function-specific machines,
>        we can restrict the domain of information to the required function.
>        For example, if a system is to be a medical diagnosis/treatment
>        prescription system, it would have to learn little or nothing about
>        meteorology. Of course, this does not help us with a general-purpose
>        system, but we can't have everything, eh? :-)
>    

     Look how long it takes humans to learn a great deal of knowledge
in a restricted domain (kindergarten -> grade school -> middle school
-> high school -> college (undergraduate) -> graduate school).  A
restricted domain *might* save some time, but again I would think
there is still uncertainty as to what role "common sense" knowledge
plays in relation to domain-specific knowledge.  Does domain-specific
knowledge require a firm basis of common sense knowledge to be truly
effective?  Does common sense knowledge aid in drawing conclusions in
a domain for which there is little prior information?  Expert systems
have a large degree of domain-specific knowledge and little common
sense, and they seem to be restricted to *very* narrow problem
domains.  Is common sense knowledge needed to stretch beyond those
limits?
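
     To make that narrowness concrete, here is a toy forward-chaining
rule set in the style of an expert-system shell (the rules and symptoms
are invented, and Python is used purely for illustration).  It answers
well-formed queries inside its domain and has nothing whatsoever to say
one step outside it:

```python
# A toy rule-based "expert system" (rules and findings are invented).
# Each rule fires when all of its premises appear among the findings.
RULES = [
    ({"fever", "rash"}, "measles?"),
    ({"fever", "cough"}, "flu?"),
]

def diagnose(findings):
    conclusions = [c for premises, c in RULES if premises <= findings]
    return conclusions or ["no rule applies"]

print(diagnose({"fever", "cough"}))    # → ['flu?']  (inside the domain)
print(diagnose({"engine", "stalls"}))  # → ['no rule applies']  (no common sense)
```

All of the system's "knowledge" is in the rule list; there is no
common-sense floor underneath it to fall back on when the input strays
outside the domain.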

     It seems to me that recent arguments comparing the "computing
capacity" of the human brain to computers are fairly useless.  A lot
of people seem to want to say "yeah, we have the power, now all we
have to do is create the program."  But the second part is exactly the
point: we still don't know that much about how humans think, and as the
history of artificial intelligence research has demonstrated, a lot of
the stuff that we thought was easy to duplicate (e.g. common sense
reasoning, spatial and temporal reasoning, etc.) has turned out to be
much harder.  Consider this: the brain is a massively parallel system
of neurons which are connected in a definite, non-random structure.
We are *still* having problems trying to develop reasonable methods for
writing algorithms and programs that work in parallel, even though the
hardware to run parallel programs is now available.  Yet the parallel
hardware around today is very simple when compared to the human brain.
If we are having so much trouble designing parallel programs on simple
parallel structures, what makes certain people think that having more
computing power than the human brain is going to get us artificial
intelligence?  We can't even program the simple parallel systems yet,
let alone tackle a massively parallel system such as the brain.
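
     Even the most trivial parallel algorithm shows the bookkeeping
involved.  Here is a sketch (in Python, purely for illustration) of
summing a list across several threads: the partitioning, the
per-worker result slots to avoid write conflicts, and the final merge
are all coordination that a sequential loop never needs:

```python
# Summing a list in parallel: note how much machinery surrounds the one
# line of actual arithmetic, compared to a plain sequential loop.
import threading

def parallel_sum(xs, workers=4):
    # Partition the input so each worker owns a disjoint slice.
    chunks = [xs[i::workers] for i in range(workers)]
    partials = [0] * workers  # one slot per worker: no shared writes

    def run(k):
        partials[k] = sum(chunks[k])

    threads = [threading.Thread(target=run, args=(k,)) for k in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()          # wait for every worker before merging
    return sum(partials)  # the final reduction step

print(parallel_sum(list(range(100))))  # → 4950
```

And this is the *easy* case, where the subproblems are independent;
the moment workers must share state, the problems of races, deadlock,
and ordering appear.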

>     rms          Rob Schultz, Motorola General Systems Group

-- 
Christopher Lishka                 ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene                   lishka%uwslh.uucp@cs.wisc.edu
Data Processing Section  (608)262-4485                       lishka@uwslh.uucp

"What a waste it is to lose one's mind -- or not to have a mind at all.
How true that is." -- V.P. Dan Quayle, garbling the United Negro College
Fund slogan in an address to the group (from Newsweek, May 22nd, 1989)

bill@bert.Rosemount.COM (William M. Hawkins) (10/20/89)

The very idea of taking 10 to 20 years to educate a "brain" raises the
problems caused by the rate of change of technology in a big way.
Consider the team that has designed an architecture for a machine, and
a plan for educating it.  The hardware will be changed several times
before the education is complete - lack of parts, company taken over
and broken up, etc.  Not to mention the breakthroughs in AI that will
occur over that span of time.  How many of you still work with the
same computers you had 10 or even 5 years ago?  Do you still use the
same programming language, or something more powerful?

On the other hand, the brain hasn't changed much in 10,000 years.

Let's hope this anthropomorphic view of an AI brain is incorrect,
and that something more practical is revealed.  Otherwise, you will
have something like the generation ships of science fiction, setting
out on journeys of hundreds of years, only to meet the occupants of
much faster ships when they arrive.

bill@bert.rosemount.com  Minneapolis, MN

jwm@stda.jhuapl.edu (Jim Meritt) (10/23/89)

Why on earth would we want a human intelligence living in circuitry?  We
can already mass-produce human intelligence with unskilled labor almost
anywhere, fairly cheaply.

I would hope a machine intelligence would be different!


"In these matters the only certainty is that nothing is certain"
					- Pliny the Elder
These were the opinions of :
jwm@aplcen.apl.jhu.edu  - or - jwm@aplvax.uucp  - or - meritt%aplvm.BITNET

dmocsny@uceng.UC.EDU (daniel mocsny) (10/24/89)

In article <3554@aplcen.apl.jhu.edu>, jwm@stda.jhuapl.edu (Jim Meritt) writes:
> 
> Why on earth would we want a human intelligence living in circuitry?  We
> can already mass-produce human intelligence with unskilled labor almost
> anywhere, fairly cheaply.
> 
> I would hope a machine intelligence would be different!

While certain aspects of human intelligence appear to develop more or
less spontaneously, many commercially valuable human skills do require
skilled labor to impart. (Consider the problem of manufacturing a
medical doctor, or even a high-school graduate, with only unskilled
labor.) One advantage of implementing human(like) intelligence in
circuitry and/or software is that (presumably) we could copy it as
easily as we can other computer systems. 

Thus, even if some artificial learning system required as much time
and resources as a human to learn some valuable skill, once we had one
machine with that skill, we could make it universally available to
everyone able to afford a copy of the machine. The present system
requires us to start over almost completely from scratch with every
new baby that comes into the world.  While this system has its
strengths, it is also monstrously inefficient.
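
The point about copying can be made with a trivial sketch (in Python,
with an invented toy learner): the training loop is the slow
"education," but once it is done, replicating the trained machine is a
constant-time copy rather than a second education:

```python
# Train a tiny perceptron once, then replicate the learned weights cheaply.
# (The learner and task are invented purely for illustration.)
import copy

class Perceptron:
    def __init__(self, n):
        self.w = [0.0] * n
        self.b = 0.0

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) + self.b > 0 else 0

    def train(self, data, epochs=20):
        # Classic perceptron rule: nudge weights toward each mistake.
        for _ in range(epochs):
            for x, y in data:
                err = y - self.predict(x)
                self.w = [wi + err * xi for wi, xi in zip(self.w, x)]
                self.b += err

# The expensive step, done exactly once: learn the OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
teacher = Perceptron(2)
teacher.train(data)

# Replication is a cheap copy, not a second round of training.
clone = copy.deepcopy(teacher)
print([clone.predict(x) for x, _ in data])  # → [0, 1, 1, 1]
```

Every clone inherits the skill at the cost of a copy, which is exactly
what the present system of raising and educating each new human cannot
do.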

One major difference today between a rich person and a poor person is
that the rich person has easy access to a vast resource of human
expertise. With enough Information Power we can reduce the cost of
this expertise to essentially nothing, thereby extending to the
masses a benefit now available to only a privileged few. Note that
despite our higher manufacturing productivity and material
wealth today compared to 100 years ago, the average person still 
cannot afford to hire any more real people now than then, since
the real cost of labor has not declined in the least.

Even if building a human(like) intelligence in hardware/software
proves to be impossible for some reason, we still have much to gain
by implementing parts of human intelligence. As you well know,
human beings have many skills that conventional computers sorely
lack, and vice versa. One of the major drawbacks of conventional
computer systems today is that they can only interface with
highly engineered and standardized aspects of the outside world,
whereas humans are much more flexible in dealing with noisy,
nonstandard information. This greatly restricts the range of problems
we can profitably apply computers to solve, as much human effort must
first organize all the data structures, algorithms, hardware, etc.,
before the computer can do anything.

The only way to get around this problem today is to re-engineer
significant parts of society and our lives to permit computers to
function in them. Since all aspects of our culture have evolved around
the strengths and weaknesses of human intelligence, the scale of this
task is astronomical. The computer today is in a position similar to
that of the automobile in 1900 AD. The automobile offered theoretical
advantages over horses and trains, but it was not effective in the
world that had grown up around them. Re-engineering the world to
accommodate automobiles was an expensive task, and the consequences of
doing so have been far from unambiguously positive. Had someone been
able to perfect some transportation technology that delivered the
functionality of the automobile while adapting transparently to the
existing world, it would have immediately dominated.

Finally, even if AI doesn't produce a saleable product, the research
should eventually bear fruit in understanding how our own minds work
(and knowing that a certain hardware/software approach cannot give
rise to intelligence is a valuable, although disappointing, part of
this). By understanding how human minds develop and operate, we may
learn how to help them develop and operate better.

Dan Mocsny
dmocsny@uceng.uc.edu