[comp.ai] Simulation and Understanding

lammens@sunybcs.uucp (Jo Lammens) (04/14/89)

>Perhaps I missed something by jumping in the middle of this, but it
>seems to me that all the articles posted on this Simulation vs Reality
>argument are talking about two fundamentally different concepts as if
>they were the same.  Understanding and simulation are not the same
>thing.  

I would not want to imply that they are the same. But I do think that
in order to simulate, there has to be understanding. In a previous
posting I used the analogy of trying to understand how an operating
system works by modeling (and simulating) the transistors that make up
the machine on which it runs. Suppose I know nothing about operating
systems or computers, and I want to simulate an o.s. using some other
technology, say a mechanical construction with gears and pulleys etc.
If high-level understanding is not required to simulate, I would be
able to build my machine by studying the behaviour of the transistors,
and modeling them with the gears and the like. Even if I model a
transistor sufficiently precisely, and throw in a lot of them, do you
think I will ever get the simulated o.s. to work if I don't know how
it works or even what it's supposed to do? Going back to the original
theme of neurons and brain functions, do you think that throwing in a
lot of simulated neurons (I mean a whole lot) will automatically
result in brain functions, consciousness or what have you, if you
don't know what they are or even what they're supposed to do? This is
not a rhetorical question.

>I don't know very many people, in fact I don't know any, who
>could accurately simulate a car, although I do know many who understand
>how it works.  I would argue that accurate simulation DOES
>require a model from as low a level as possible in order to behave
>exactly as the real thing being simulated.  

There are people who know how atoms 'work', but can they accurately
simulate a car? I agree that a very accurate simulation would have to
include the lowest possible levels, but I think it won't work if you
leave out the higher levels. And then you
have to define what an accurate simulation is.

>Typically `high level'
>descriptions of functional groups of low level objects are mere
>generalizations of the function of the group, and thus only
>incorporate the default knowledge of that function.

I do not get this. Please explain.

Jo Lammens

BITNET: lammens@sunybcs.BITNET          Internet:  lammens@cs.Buffalo.EDU
UUCP: ...!{watmath,boulder,decvax,rutgers}!sunybcs!lammens

jwi@lzfme.att.com (Jim Winer @ AT&T, Middletown, NJ) (04/14/89)

In article <5254@cs.Buffalo.EDU>, lammens@sunybcs.uucp (Jo Lammens) writes:
> >Perhaps I missed something by jumping in the middle of this, but it
> >seems to me that all the articles posted on this Simulation vs Reality
> >argument are talking about two fundamentally different concepts as if
> >they were the same.  Understanding and simulation are not the same
> >thing.  
> 
> I would not want to imply that they are the same. But I do think that
> in order to simulate, there has to be understanding. In a previous
> posting I used the analogy of trying to understand how an operating
> system works by modeling (and simulating) the transistors that make up
> the machine on which it runs. Suppose I know nothing about operating
> systems or computers, and I want to simulate an o.s. using some other
> technology, say a mechanical construction with gears and pulleys etc.
> If high-level understanding is not required to simulate, I would be
> able to build my machine by studying the behaviour of the transistors,
> and modeling them with the gears and the like. Even if I model a
> transistor sufficiently precisely, and throw in a lot of them, do you
> think I will ever get the simulated o.s. to work if I don't know how
> it works or even what it's supposed to do? Going back to the original
> theme of neurons and brain functions, do you think that throwing in a
> lot of simulated neurons (I mean a whole lot) will automatically
> result in brain functions, consciousness or what have you, if you
> don't know what they are or even what they're supposed to do? This is
> not a rhetorical question.

Modeling the atoms of a car will get you a simulation of some metal
and plastic. This does not have any necessary relationship to
getting from here to there. A simulation of a car need only move
some person(s) and/or material(s) from one place to another while
providing relative protection from the environment, and, optionally,
some thrills for the people who find that a necessary part of a car
ride, and some privacy for the people who find that a necessary part
of a car ride, and some music for the people who find that a
necessary part of a car ride, and a mother-in-law in the back seat
for those who find that a necessary part of a car ride.

Modeling the transistors of a computer will get you a simulation of a
computer. This does not have any necessary relationship to modeling
an operating system (or an application program). Simulating an
operating system is perhaps the clearest example of the difference
between simulating function and simulating form. An operating system
is defined as a set of functions -- it doesn't make any difference
what the form is as long as the function is identical. Thus, UNIX
runs on the 80386 or on the 68020 or on the VAX and we still
recognize it as UNIX.
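
To make the function-versus-form point concrete, here is a toy sketch
in Python (the machine names are invented for illustration): two
entirely different substrates present the identical set of functions,
and a program written against those functions cannot tell them apart.

class GearsAndPulleysMachine:
    """Mechanical substrate: 'stores' values as gear positions."""
    def __init__(self):
        self.gears = {}
    def store(self, addr, value):
        self.gears[addr] = value
    def fetch(self, addr):
        return self.gears.get(addr, 0)

class TransistorMachine:
    """Electronic substrate: the same functions, a different form."""
    def __init__(self):
        self.cells = {}
    def store(self, addr, value):
        self.cells[addr] = value
    def fetch(self, addr):
        return self.cells.get(addr, 0)

def run_program(machine):
    # A 'program' written only against the store/fetch functions;
    # it has no way to tell which form implements them.
    machine.store(0, 42)
    return machine.fetch(0)

assert run_program(GearsAndPulleysMachine()) == run_program(TransistorMachine()) == 42

Swap the substrate and the program runs unchanged; in that sense the
operating system is its function, not its form.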

Similarly, modeling a neuron may get you a simulation of a
brain. This does not have any necessary relationship to modeling a
mind (or an intelligence). The question still remains, can you
simulate a mind (or an intelligence) without necessarily simulating
the brain? Until somebody provides a simulation that we can't tell
from the real thing, we won't know.

Jim Winer ..!lzfme!jwi 

I believe in absolute freedom of the press.
I believe that freedom of the press is the only protection we have
	from the abuses of power of the church, 
	from the abuses of power of the state,
	from the abuses of power of the corporate body, and 
	from the abuses of power of the press itself.
Those persons who advocate censorship offend my religion.

weltyc@cs.rpi.edu (Christopher A. Welty) (04/15/89)

In article <5254@cs.Buffalo.EDU> lammens@sunybcs.UUCP (Jo Lammens) writes:
>
>I would not want to imply that [simulation and understanding] are the
>same. But I do think that 
>in order to simulate, there has to be understanding. 

Perhaps.  I just wanted to make sure people weren't confusing the two.
I didn't say you could simulate without understanding, nor did I say
that understanding the low level is all you need for simulation.  I
also did not intend to open up the connectionism debate (looks like I
didn't do much of what I intended).

How about this as an explanation of what I'm advocating:  For a
simulation to be exactly accurate, it must operate at all levels.
If you want to simulate a computer you need to simulate each
level.  Just as you need to know how transistors go together to build
a computer chip, so you would need to know how the simulated
transistors go together to build a simulated computer chip, but you
can't just simulate the computer chip.  Of course, I'm not saying you
CAN'T get 90% of the functionality into a simulated system without
modelling the lower levels, and in fact the quite awesome computational
expense of what I claim would be a complete simulation is probably not
worth that extra 10% in most cases (and the brain may even be one of
these cases, who needs computers that can suffer from amnesia, anyway?).

>>Typically `high level'
>>descriptions of functional groups of low level objects are mere
>>generalizations of the function of the group, and thus only
>>incorporate the default knowledge of that function.

By this I mean that abstractions typically leave out specifics in
favor of more general knowledge (defaults).  While these abstractions
may capture a large portion of knowledge (or functionality) of a
system, because they are abstractions there will be missing things.
You can always add missing things when you notice they are missing, but
this is not the same: it requires someone to point out that something is
missing, and there may be a large (perhaps infinite?) number of these
special cases (exceptions), each of which occurs only rarely.
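
To make this concrete, here's a toy Python example (invented for
illustration): a high-level description of an 8-bit adder -- "it adds
two numbers" -- captures the default behaviour, and so gets you most of
the functionality, but misses an exception (overflow wrap-around) that
only the lower-level model exhibits.

def adder_abstraction(a, b):
    # The generalization: "it adds two numbers", full stop.
    return a + b

def adder_low_level(a, b):
    # The actual 8-bit hardware: the result wraps modulo 256.
    return (a + b) & 0xFF

# The abstraction gives the right (default) answer in the common case...
assert adder_abstraction(3, 4) == adder_low_level(3, 4) == 7
# ...but misses the rare exception, until someone points it out.
assert adder_abstraction(200, 100) == 300
assert adder_low_level(200, 100) == 44    # 300 mod 256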


Christopher Welty  ---  Asst. Director, RPI CS Labs | "Porsche:  Fahren in
weltyc@cs.rpi.edu             ...!njin!nyser!weltyc |  seiner schoensten Form"

dan-hankins@cup.portal.com (Daniel B Hankins) (04/15/89)

In article <5254@cs.Buffalo.EDU> lammens@sunybcs.uucp (Jo Lammens) writes:

>[...] In a previous posting I used the analogy of trying to understand how
>an operating system works by modeling (and simulating) the transistors
>that make up the machine on which it runs.  Suppose I know nothing about
>operating systems or computers, and I want to simulate an o.s. using some
>other technology, say a mechanical construction with gears and pulleys
>etc.

>[...] Even if I model a transistor sufficiently precisely, and throw
>in a lot of them, do you think I will ever get the simulated o.s. to work
>if I don't know how it works or even what it's supposed to do?
>Going back to the original theme of neurons and brain functions, do you
>think that throwing in a lot of simulated neurons (I mean a whole lot)
>will automatically result in brain functions, consciousness or what have
>you, if you don't know what they are or even what they're supposed to do?

     The trick to getting the simulated opsys to work lies not merely in
modelling the function and connectivity of the gates, _but also in
capturing their state at a particular point in time_.  That is, if you
model the transistors of a given system, and also initialize them to the
actual states of the transistors in the modelled system, as determined by
taking some kind of 'snapshot', then I really think you will preserve the
opsys as well.

     An alternative to a snapshot is to model the machine without copying
the instantaneous state (which is in any case very difficult to capture).
Then one loads up the simulated computer with the operating system
just as one would load the real machine.  This is, in fact, the approach
that is used where I work to load up a machine which simulates another
machine.  We often do this because the actual machine is not yet available,
due to hardware problems, incomplete designs of some sections, cost of the
real machine, and so on.
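
     As a rough sketch of the two loading strategies in Python
(everything here is invented for illustration, and is not our actual
simulator):

class SimulatedMachine:
    def __init__(self, memory_size=16):
        self.memory = [0] * memory_size   # gate states, abstracted to a list

    def load_snapshot(self, snapshot):
        # Approach 1: copy the instantaneous state of the real machine.
        self.memory = list(snapshot)

    def boot(self, os_image):
        # Approach 2: start blank and load the o.s. the way the real
        # machine would.
        for addr, byte in enumerate(os_image):
            self.memory[addr] = byte

snapshot = [7, 3, 0, 9] + [0] * 12    # pretend this was read from real hardware
os_image = [7, 3, 0, 9]               # the same bits, loaded the normal way

a = SimulatedMachine(); a.load_snapshot(snapshot)
b = SimulatedMachine(); b.boot(os_image)
assert a.memory == b.memory           # both routes reach the same state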

     This is quite analogous to the way one would treat a neural-network
simulation.  There are two approaches: one can build the simulated network
and try to program it via a snapshot taken of the equivalent real NN, or
one can program the NN the way the real one was originally programmed.

     The first of these approaches will be impractical for quite some time
to come.  The second seems to be practical.  Build a simulated NN which is
equivalent to that of an infant - a blank slate.  Then it will learn by
experience as humans and other animals do.
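
     In miniature, and purely as an illustration (a single perceptron
learning OR stands in for a real NN), the two approaches look like this:

def predict(w, b, x):
    # A one-neuron 'network': fire if the weighted sum crosses threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_blank_slate(examples, epochs=25, rate=0.1):
    # Approach 2: start from a blank slate and learn by experience.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            err = target - predict(w, b, x)
            w[0] += rate * err * x[0]
            w[1] += rate * err * x[1]
            b += rate * err
    return w, b

or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_blank_slate(or_examples)

# Approach 1: 'snapshot' the trained weights into a fresh simulation;
# the copy behaves identically with no training at all.
w2, b2 = list(w), b
assert all(predict(w2, b2, x) == t for x, t in or_examples)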

     Of course, for quite some time it will be more economical to produce
NNs by manual labor, with a 9-month delivery schedule.


Dan Hankins

rayt@cognos.UUCP (R.) (04/21/89)

In article <975@itivax.iti.org> David H. West writes:

>Infants seem to come somewhat prewired; one could think of them as
>having bootstrap code in ROM.

Clearly this must be true of some basic capabilities; however, as I
understand it, billions of brain cells die throughout childhood, presumably
reducing the individual's untapped potential and making them more
environment-specific.

						R.
-- 
Ray Tigg                          |  Cognos Incorporated
                                  |  P.O. Box 9707
(613) 738-1338 x5013              |  3755 Riverside Dr.
UUCP: rayt@cognos.uucp            |  Ottawa, Ontario CANADA K1G 3Z4

dan-hankins@cup.portal.com (Daniel B Hankins) (04/22/89)

In article <975@itivax.iti.org> dhw@itivax.iti.org (David H. West) writes:

>>Build a simulated NN which is equivalent to that of an infant - a blank
>>slate.  Then it will learn by experience as humans and other animals do.
>
>Infants seem to come somewhat prewired; one could think of them as having
>bootstrap code in ROM.

     Yes, I was aware of this.  However, I didn't think it was relevant to
the discussion at hand, since none of the behaviors involved are what we
would call sentient.

     Some genetically-programmed instinctive behaviors will probably be
necessary for ANNs to achieve sentience;  these could perhaps be provided
by evolving the ANN individuals in competition with others.
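
     A toy sketch of that kind of evolution in Python (the fitness
function is a made-up stand-in for competitive performance in some
environment):

import random

def fitness(genome):
    # Made-up stand-in for performance in competition: how close the
    # genome's 'instinct' parameters come to some ideal behaviour.
    ideal = [0.5, -0.2, 0.9]
    return -sum((g - i) ** 2 for g, i in zip(genome, ideal))

def evolve(pop_size=30, generations=40, mutation=0.1):
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]        # competitive selection
        children = [[g + random.gauss(0, mutation)
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best_instincts = evolve()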


Dan Hankins

dan-hankins@cup.portal.com (Daniel B Hankins) (04/22/89)

In article <8021@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar)
writes:

>Actually, the Neural Darwinism thesis of Gerald Edelman is slightly
>different; but it also assumes that infants definitely have more than a
>blank slate.  [...] an infant is actually "over-wired" with far more
>neural connections than it actually needs.  Those connections which
>actually embody the acquisition of experience arise as a result of
>competitive selection.  [...] Perhaps, rather than simulating the infant,
>we should begin by simulating that developmental process!

     Sounds good.  Perhaps we should go one step further and make the
construction of the ANN somewhat dependent on a kind of computational
DNA... then create populations of these and let them evolve in a simulated
environment.
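
     As a crude sketch of that over-wire-then-prune developmental
process (all numbers invented for illustration):

import random

n_inputs, n_excess = 4, 40
useful = {0, 2}    # pretend only inputs 0 and 2 carry real 'experience'

# Over-wire: far more random connections than needed.
connections = [(random.randrange(n_inputs), random.uniform(0.0, 1.0))
               for _ in range(n_excess)]

# Competitive selection: connections to useful inputs are reinforced,
# the rest decay; then prune everything below threshold.
for _ in range(20):
    connections = [(i, w * (1.1 if i in useful else 0.8))
                   for i, w in connections]
connections = [(i, w) for i, w in connections if w > 0.1]

# What survives is wiring shaped by experience, not the initial excess.
assert all(i in useful for i, w in connections)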


Dan Hankins