[comp.ai] simulating brains

ada612@csc.anu.oz.au (09/21/90)

Here is a basic question about digital simulations of analog computing
systems, of the kind the human brain is currently taken to be.  Namely, is
there any theorem showing that such a simulation is in general possible,
assuming that the precision of the arithmetic is fixed (or, equivalently,
I hope, that the device has a fixed upper bound on the time needed
to compute the state at t_{i+1} from that at t_i)?

More precisely, discussions of brain simulations (as in Hofstadter's
Book-of-Einstein's-Brain scenario) assume that one can simulate
a brain by breaking the passage of time into small intervals, and
using the equations governing the evolution of the system to
compute the state at t_{i+1} from that at t_{i}, furthermore using
approximations to physical & mathematical constants such as e and pi.

For this procedure to be convincing, we need to know that as our
time-interval size decreases and arithmetic precision increases, the
predicted state of the system at t (given initial conditions for t_0)
converges to a limit.  This is obviously true for the kinds of well-
behaved systems that we look at in baby Calculus, but is it provable or
plausibly conjecturable for brains?
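
For what it's worth, here is a minimal sketch in Python of the kind of
well-behaved convergence appealed to above.  The system dx/dt = -x, the
step sizes, and the target time t = 1 are all invented for illustration;
the point is only that as the time interval shrinks, the computed state
at t approaches a limit (here, exp(-1)).

    import math

    def simulate(dt, t_end=1.0, x0=1.0):
        """Step the state from t_0 to t_end in fixed increments of dt,
        computing the state at t_{i+1} from the state at t_i."""
        n = round(t_end / dt)
        x = x0
        for _ in range(n):
            x = x + dt * (-x)          # forward-Euler update for dx/dt = -x
        return x

    exact = math.exp(-1.0)
    for dt in (0.1, 0.01, 0.001):
        x = simulate(dt)
        print(f"dt = {dt:<6}  x(1) = {x:.6f}   error = {abs(x - exact):.2e}")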

Reasons for suspecting that it isn't are:

I: A semi-ignorant reading of the semi-popular literature on chaos theory
suggests that it might be possible to set up systems that passed
through critical periods t_c with the property that tiny differences
in the state at t_c would magnify into big differences at a later
time t, such that the states calculated for t_c with different
approximation methods produced non-convergingly different answers
at t.  (A toy sketch of this kind of divergence appears after this list.)

II: The conclusion that a fixed-precision simulation of a brain is possible
leads immediately to the conclusion that a finite-state machine can simulate
the brain, which leads to one of the following conclusions, which I find
implausible:

  A) a finite state machine can be a sentient being.

  B) a finite state machine can simulate the behavior of a sentient
       being without being sentient.
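
The divergence described in (I) is easy to reproduce with a toy chaotic
map.  The logistic map x -> 4x(1-x) below is a stand-in chosen purely for
illustration (nothing brain-specific about it): two trajectories that
agree to twelve decimal places at the "critical" moment become completely
uncorrelated some dozens of steps later.

    def trajectory(x, steps):
        """Iterate the logistic map x -> 4x(1-x) for the given number of steps."""
        for _ in range(steps):
            x = 4.0 * x * (1.0 - x)
        return x

    x_a = 0.4                  # state at the critical time t_c
    x_b = 0.4 + 1e-12          # same state, perturbed below any fixed precision

    for steps in (10, 30, 50):
        a, b = trajectory(x_a, steps), trajectory(x_b, steps)
        print(f"after {steps:2d} steps: {a:.6f} vs {b:.6f}   diff = {abs(a - b):.2e}")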

Note the importance of the proviso that the simulation be fixed-precision.
If we allow the precision of the simulation to grow as the calculation
proceeds, it ceases to be finite state, but also, the time needed to
calculate the next state function would increase as the simulation
proceeded, and we wouldn't really have a functional equivalent to
a brain.  Thus an algorithm might be able to provide a sort of
semi-simulation of a brain (with the time scale expanding as the
simulation proceeded) without leading us to conclude either (A) or
(B) for algorithms.

Avery Andrews   ada612@csc.anu.oz.au

miron@fornax.UUCP (Miron Cuperman) (09/23/90)

What is a sufficient condition for a simulation of a brain to be good enough?
The noise induced by the finite precision of the simulation must be on the
order of magnitude of normal noise we experience.  If that is so, the
simulation is adequate.

There is no a-priori reason to assume we cannot build a simulation
with the same amount of noise as in nature.

Chaos does not influence the possibility of simulation in any way.  The
brain may be sensitive to some perturbations.  Since the simulation will
possess the same amount of noise, it will cause the same amount of
perturbations.

Brains MUST be equivalent to finite state machines.  Any precision beyond
the energy of natural noise has no influence.

Conclusion:  Brains are finite state machines with noise.  Therefore there
is no a-priori reason why they cannot be simulated.
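
One way to read this argument is as a comparison of magnitudes.  The
following toy sketch (a noisy leaky integrator in Python, with every
constant invented for illustration; it is a linear system, so it says
nothing about chaotic amplification) runs the same simulation twice with
the same noise sequence, once at full precision and once with the state
rounded to six significant digits.  The discrepancy the rounding
introduces is orders of magnitude below the injected noise.

    import random

    SIGMA = 0.1                            # amplitude of the injected noise

    def run(rounded, seed=1, steps=1000, dt=0.001, tau=0.02):
        random.seed(seed)                  # identical noise sequence in both runs
        v = 0.0
        for _ in range(steps):
            v += dt * (-v / tau) + random.gauss(0.0, SIGMA)
            if rounded:
                v = float(f"{v:.6g}")      # discard precision beyond 6 digits
        return v

    v_full, v_rounded = run(False), run(True)
    print("discrepancy due to limited precision:", abs(v_full - v_rounded))
    print("amplitude of the injected noise:     ", SIGMA)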
-- 
	By me: Miron Cuperman <miron@cs.sfu.ca>

	"Do not go gentle into that good night,
	Rage, rage against the dying of the light" - Dylan Thomas, 1933

jacob@latcs1.oz.au (Jacob L. Cybulski) (09/26/90)

From article <1292@fornax.UUCP>, by miron@fornax.UUCP (Miron Cuperman):
> Brains MUST be equivalent to finite state machines.  Any precision beyond
> the energy of natural noise has no influence.
> 
> Conclusion:  Brains are finite state machines with noise.  Therefore there
> is no a-priori reason why they cannot be simulated.

I think your reasoning is a bit illogical. It is your assumption that
brains are equivalent to finite state machines, and I cannot see any
convincing argument that this is the case. The subsequent conclusion
is thus unacceptable.

I do agree, however, with you and others that some aspects of mental
manipulation could be simulated as if the part of the brain responsible
were a finite state machine (I am not even sure if the term "part" is
appropriate here).

Jacob

ada612@csc.anu.oz.au (09/26/90)

From <1292@fornax.UUCP> miron@fornax.UUCP (Miron Cuperman)

>What is a sufficient condition for a simulation of a brain to be good enough?
>The noise induced by the finite precision of the simulation must be on the
>order of magnitude of normal noise we experience.  If that is so, the
>simulation is adequate.

Reflecting on my original question, this seems right:  since neural
behavior is sloppy and imprecise, the roundoff errors of fixed precision
digital simulations shouldn't make any difference to the quality of
performance.

>Brains MUST be equivalent to finite state machines.  Any precision beyond
>the energy of natural noise has no influence.
>
>Conclusion:  Brains are finite state machines with noise.  Therefore there
>is no a-priori reason why they cannot be simulated.

But this seems wrong, because brains can also *grow* while they operate,
which is not something that finite state machines can do.  Turing
machines on the other hand can grow in the rather limited sense that
the amount of tape they have written on can get larger, but brains
can add new active computational agents, in the form of synapse connections.
This is clearly a more radical form of extensibility (if you're interested
in what can be done in real time).
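
A toy illustration of the contrast being drawn, with the growth rule (one
new unit recruited per step of input) invented purely for this sketch and
not meant as a model of synaptogenesis: the structure below adds units
and connections while it runs, whereas a finite state machine's state set
is fixed once and for all.

    class GrowingNetwork:
        """A network whose set of active units grows during operation."""
        def __init__(self):
            self.activations = [0.0]       # one initial unit
            self.weights = {}              # (pre, post) -> connection weight

        def step(self, inp):
            # propagate activity along the connections that exist right now
            nxt = [0.0] * len(self.activations)
            for (i, j), w in self.weights.items():
                nxt[j] += w * self.activations[i]
            nxt[0] += inp
            self.activations = nxt
            # recruit a new unit and wire it in *while the machine is running*
            if inp > 0:
                k = len(self.activations)
                self.activations.append(0.0)
                self.weights[(k - 1, k)] = 0.5

    net = GrowingNetwork()
    for _ in range(10):
        net.step(1.0)
    print("active units after 10 steps:", len(net.activations))  # 11, not fixed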


Avery Andrews  (ada612@csc.anu.oz.au)

cowan@marob.masa.com (John Cowan) (09/28/90)

In article <1990Sep26.202658.2906@csc.anu.oz.au> ada612@csc.anu.oz.au writes:
>From <1292@fornax.UUCP> miron@fornax.UUCP (Miron Cuperman)
>
>>What is a sufficient condition for a simulation of a brain to be good enough?
>>The noise induced by the finite precision of the simulation must be on the
>>order of magnitude of normal noise we experience.  If that is so, the
>>simulation is adequate.
>
>Reflecting on my original question, this seems right:  since neural
>behavior is sloppy and imprecise, the roundoff errors of fixed precision
>digital simulations shouldn't make any difference to the quality of
>performance.

Turing actually makes this very point in "Computing Machinery and
Intelligence".  He points
out that while a digital computer cannot exactly simulate the behavior of an
(analog) differential analyzer (because the digital machine has only finite
precision), it can approximate the random error in the analyzer's behavior
to an arbitrarily close degree.

>>Conclusion:  Brains are finite state machines with noise.  Therefore there
>>is no a-priori reason why they cannot be simulated.
>
>But this seems wrong, because brains can also *grow* while they operate,
>which is not something that finite state machines can do.  Turing
>machines on the other hand can grow in the rather limited sense that
>the amount of tape they have written on can get larger, but brains
>can add new active computational agents, in the form of synapse connections.
>This is clearly a more radical form of extensibility (if you're interested
>in what can be done in real time).

I don't understand the sense of your final parenthesis.  Neglecting it for
a moment, the claim that brains are superior to Turing machines because they
can add "new active computational agents" seems clearly wrong.

The universal Turing machine has a fixed finite-state repertoire and a single
tape, like any Turing machine.  However, the tape may be thought of as
logically divided into two tapes.  The H-tape contains a symbolic representation
of the finite-state part of the TM being simulated by the UTM, and the S-tape
is the simulated tape of the simulated TM.

In the standard UTM, the H-tape contains both the unchanging representation of
the finite-state machine hardware, and the changing representation of the
current state.  The machine hardware representation (MHR) is not changed during
operation of the UTM.  However, there is no problem with constructing a variant
UTM which is allowed to change the MHR.  In particular, the amount of
MHR table space can grow without bound, since the H-tape is of infinite length.
(The easiest way to simulate an H-tape/S-tape pair is to use alternate cells
of the physical tape.)  Of course, such a modified UTM cannot simulate an
oracle (an infinite-state machine) because it would take infinite time to
"grow" the representation of such a machine on the H-tape.

OTOH, a brain cannot grow to infinite size (and processing power) in less than
infinite time either.  Turing machines are notoriously slow in "arbitrary time
units": they have to work confoundedly hard to overcome the limitations of
serial access.  But I don't see that "real time" has much to do with it.
If the modified UTM hardware is made fast enough, within quantum limits,
surely it could simulate an arbitrarily fast finite-state device?
-- 
cowan@marob.masa.com			(aka ...!hombre!marob!cowan)
			e'osai ko sarji la lojban

ada612@csc.anu.oz.au (10/02/90)

Re:  Message-ID: <270367E4.160B@marob.masa.com>
     From: cowan@marob.masa.com (John Cowan)

>>But this seems wrong, because brains can also *grow* while they operate,
>>which is not something that finite state machines can do.  Turing
>>machines on the other hand can grow in the rather limited sense that
>>the amount of tape they have written on can get larger, but brains
>>can add new active computational agents, in the form of synapse connections.
>>This is clearly a more radical form of extensibility (if you're interested
>>in what can be done in real time).
>
>I don't understand the sense of your final parenthesis.  Neglecting it for
>a moment, the claim that brains are superior to Turing machines because they
>can add "new active computational agents" seems clearly wrong.
>
> <discussion of UTMs which I omit for brevity>

The sense of my final parenthesis is that I find the standard idealizations
of computability theory to be a very dubious framework for thinking
about brains.  Computability theory is about what can be done eventually,
whereas brains have to keep up with the real world, always providing some
sort of output in response to the current input.

Consider for example the following basis, which seems to be fairly widely
accepted, for believing that a computer or robot is sentient:  if it looks
like a brain (at the relevant level of structure) and acts like a brain,
it should be presumed to be/have a mind.  But  `fixed horsepower'
computing devices will neither look nor act like brains.  In Searlespeak,
one might say that their causal powers differ from those of brains in a
non-mystical and functionally relevant way.  So why suspect them of sentience?

  Avery Andrews  (ada612@csc.anu.oz.au)

weyand@csli.Stanford.EDU (Chris Weyand) (10/03/90)

In <1990Oct2.221006.3024@csc.anu.oz.au> ada612@csc.anu.oz.au writes:

>The sense of my final parenthesis is that I find the standard idealizations
>of computability theory to be a very dubious framework for thinking
>about brains.  Computability theory is about what can be done eventually,
>whereas brains have to keep up with the real world, always providing some
>sort of output in response to the current input.

So don't use computability theory.  I mean there really is a difference
between computers such as the MacIIcx I'm using right now and Turing Machines.
The main one being that TM's can't be fitted with an array of sensors and
effectors or anything else physical, since they themselves are not physical.
Also, sure, computability theory talks about what functions can be computed
and is not concerned with time or space efficiency.  But we who program real
computers are very concerned with those issues.  Just because the theory says
little about efficiency doesn't mean that computations can't be done
efficiently.

>Consider for example the following basis, which seems to be fairly widely
>accepted, for believing that a computer or robot is sentient:  if it looks
>like a brain (at the relevant level of structure) and acts like a brain,
>it should be presumed to be/have a mind.  But  `fixed horsepower'
>computing devices will neither look nor act like brains.  In Searlespeak,
>one might say that their causal powers differ from those of brains in a
>non-mystical and functionally relevant way.  So why suspect them of sentience?

You can't disprove a statement X by simply saying X is not true!

I think a better test of sentience would be this: if it acts like a mind, then
it may be presumed to be a mind.  Acting like a brain presumably involves
sending chemical/electrical signals here and there and that doesn't sound
very interesting.  After all the brain of a frog acts like a brain.

Chris Weyand
weyand@cs.uoregon.edu -=- weyand@csli.stanford.edu

smoliar@vaxa.isi.edu (Stephen Smoliar) (10/14/90)

In article <15631@csli.Stanford.EDU> weyand@csli.Stanford.EDU (Chris Weyand)
writes:
>In <1990Oct2.221006.3024@csc.anu.oz.au> ada612@csc.anu.oz.au writes:
>
>>The sense of my final parenthesis is that I find the standard idealizations
>>of computability theory to be a very dubious framework for thinking
>>about brains.  Computability theory is about what can be done eventually,
>>whereas brains have to keep up with the real world, always providing some
>>sort of output in response to the current input.
>
>So don't use computability theory.  I mean there really is a difference
>between computers such as the MacIIcx I'm using right now and Turing Machines.
>The main one being that TM's can't be fitted with an array of sensors and
>effectors or anything else physical, since they themselves are not physical.
>Also, sure, computability theory talks about what functions can be computed
>and is not concerned with time or space efficiency.  But we who program real
>computers are very concerned with those issues.  Just because the theory says
>little about efficiency doesn't mean that computations can't be done
>efficiently.
>
Efficiency is only part of the story.  More important is that computability
theory is concerned with FUNCTIONS, in the strict mathematical sense of the
word, which is to say a relation which associates with every element from some
domain space at most one element from a range space.  Within this theory
computation is a finite process which eventually halts given any domain
element for which such an association exists.  However, there are plenty
of things that computers do which cannot be reduced to such functions.
For example, operating systems do not halt in a finite amount of time
and return a function value (at least they are not designed to do so).
If we want to talk about simulating a brain, an operating system is a more
appropriate metaphor than a function which computes a polynomial.
Computability theory may then tell us about certain functions
which we may want to build as COMPONENTS of such a system, but that does not
guarantee that we shall gain any insights about building the whole system.
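
A minimal sketch of the distinction being drawn, not of any actual
operating system: the first definition below is a computation in the
computability-theory sense (one input, halts, one output), while the
second is a reactive loop designed to keep responding for as long as
input keeps arriving; the finite input list exists only so that the
example itself terminates.

    def square(x):
        # a FUNCTION in the strict sense: halts and returns a single value
        return x * x

    def reactive_loop(events):
        # operating-system style: never "returns the answer", just keeps
        # responding to each incoming event while updating internal state
        state = 0
        for event in events:
            state += event
            yield f"handled {event}, state is now {state}"

    print(square(7))
    for response in reactive_loop([3, 1, 4]):
        print(response)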

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet