[comp.theory.cell-automata] efficiency and philosophy of CA computer simulation

hiebeler@cs.rpi.edu (Dave Hiebeler) (03/03/90)

  Two things recently said here led to a few interesting thoughts.
First, the quotes:
1>From kyriazis@iear.arts.rpi.edu (George Kyriazis) Fri Mar  2 16:48:38 1990
1> Will CA's be eventually fast enough to accurately simulate such a
1> circuit...?

and

2>From karakots@apple.com (Ken Karakotsios) Fri Mar  2 16:48:44 1990
2> Wire world running on a serial computer is many orders of magnitude
2> slower than current logic simulators.


  One of the strong incentives to use CA for things like physical
modeling is that the path between the physics and the computer
simulation is much more direct.  Instead of following these steps:
 1. make a continuum approximation of the system under study
 2. write some PDE's
 3. turn the PDE's into finite-difference equations on the computer
 4. do lots of rounding off in the process of simulating these
    finite-difference eqns

We just do the following with CA:
 1. "assume" the universe (hence the system under study) is discrete
 2. write a CA rule which captures the essentials of the laws of
    physics, in particular the ones that most strongly affect what
    we're studying, and implement the CA rule exactly on a computer
    (no round-off, etc)

Because the CA approach is more direct, if you build a computer (e.g.
a CAM) using the CA philosophy (local connections, very parallel) you
will be able to do physics simulations very fast, although from a very
different perspective than PDE's.
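
  To make step 2 of the CA recipe concrete, here is a minimal sketch
(my own illustration, not taken from either of the quoted posts) of a
CA rule implemented exactly on an ordinary serial machine: the
Wireworld rule mentioned in the second quote.  The grid size and the
toroidal wrap-around are arbitrary choices for the sketch; the point
is that every update is small-integer arithmetic, so the rule is
reproduced exactly, with no round-off anywhere.

/* Wireworld on a small toroidal grid.  The grid size and wrap-around
 * boundary are arbitrary choices; the update rule itself is applied
 * exactly, using only small-integer arithmetic. */
#include <string.h>

#define W 64
#define H 64

enum { EMPTY, HEAD, TAIL, WIRE };       /* the four Wireworld states */

static unsigned char grid[H][W], next[H][W];

void step(void)
{
    int x, y, dx, dy, heads;

    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) {
            switch (grid[y][x]) {
            case HEAD:                  /* electron head -> tail      */
                next[y][x] = TAIL;
                break;
            case TAIL:                  /* electron tail -> conductor */
                next[y][x] = WIRE;
                break;
            case WIRE:
                /* conductor becomes a head iff exactly 1 or 2 of its
                 * 8 neighbors are heads; otherwise it stays a wire   */
                heads = 0;
                for (dy = -1; dy <= 1; dy++)
                    for (dx = -1; dx <= 1; dx++)
                        if ((dx || dy) &&
                            grid[(y + dy + H) % H][(x + dx + W) % W] == HEAD)
                            heads++;
                next[y][x] = (heads == 1 || heads == 2) ? HEAD : WIRE;
                break;
            default:                    /* empty cells never change   */
                next[y][x] = EMPTY;
                break;
            }
        }
    memcpy(grid, next, sizeof grid);    /* the step is exact          */
}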

  Anyways, enough background, which everyone here has heard many
times (I quickly repeated it for the benefit of new people).  What the
2 quotes above got me wondering about was the efficiency of using CAs
to simulate traditional-architecture computers.

  In some sense, it has to be efficient, because the traditional
computer will be implemented in our physical reality, which a CA
corresponds well to.  But is there some advantage to be gained by
using a standard (physically inefficient) serial computer to simulate
another such computer?

  I think not, with the continuing development of CA hardware.  The
folks working on CAM-8 describe future CA machines as being
"programmable matter".  We will be able to do such incredibly detailed
physical simulations that it's almost like having a "holodeck" from the
new Star Trek -- we'll be able to create some virtual physical system,
in some sense.  I'm not getting into the virtual reality topic here;
I'm saying that at some point we'll be able to do things like simulate
a cubic centimeter of air at the molecular level, in something not far
from real-time, using CA technology!

  So a *CA simulation* of a serial computer will approach the speed of a
*physical implementation* of that computer!  At least, it seems that
way to me.  

  This really starts to sound like a lot of philosophical hocus-pocus,
but now that I've already gone that far, I might as well ask the real
philosophical knockout question: do you think it is feasible that we
will someday be able to simulate computers, using CA techniques, so
that they will run *faster* than the physical implementation of the
machines?  That is, maybe CA hardware will develop so well that it
will get ahead of manufacturing technology in some areas?

  I really don't know anything about manufacturing technology, so maybe
it's a ludicrous question.  It does seem like an interesting one
though.  I guess it's safe to say we can't simulate the universe more
efficiently than it "runs" itself, since our CA machine will be
embedded in the universe.  But it does seem theoretically possible, I
think, that we could "beat ourselves" at building things, so to speak.

-- 
Dave Hiebeler / Computer Science Dept. / Amos Eaton Bldg. /
Rensselaer Polytechnic Institute / Troy, NY 12180-3590 USA
Internet (preferred): hiebeler@turing.cs.rpi.edu   Bitnet: userF3JL@rpitsmts
"Off we go, into the wilds you ponder..."

kyriazis@iear.arts.rpi.edu (George Kyriazis) (03/03/90)

In article <K{+#+Y+@rpi.edu> hiebeler@cs.rpi.edu (Dave Hiebeler) writes:
>
>We just do the following with CA:
> 1. "assume" the universe (hence the system under study) is discrete
> 2. write a CA rule which captures the essentials of the laws of
>    physics, in particular the ones that most strongly affect what
>    we're studying, and implement the CA rule exactly on a computer
>    (no round-off, etc)
>
Hmm..  I don't know if that will work nicely for circuit simulators.
Since the universe is not discrete (well, physicists can argue that it
is, so it's better to say that the universe is not as 'lumpy' as
today's CA machines assume), I think the results will probably be
better if the CA rules take the discreteness of the CA world into
account.  Doing that will of course increase the computation per
step.


>I'm not getting into the virtual reality topic here;
>I'm saying that at some point we'll be able to do things like simulate
>a cubic centimeter of air at the molecular level, in something not far
>from real-time, using CA technology!
>
This is interesting.  What is more interesting is the following:
There is a concept called the time quantum, or Planck time, which is
VERY small (on the order of 10^-43 secs).  You are talking about
simulating nature in close to real-time.  Definitely your time step
will be much larger than that (I am dreaming of 500 picosecs or so),
so some kind of interpolation (spatial and/or temporal) has to be
done.  ODE solving does the same thing: it is repetitive and
interpolates on the time axis.  So, the question that comes to mind
is: is there any fundamental difference between DE solving and CA?
I don't think there is too much.  Finite Element Analysis texts start
from fixed-grid techniques (hmm..  it reminds me of the CA grid) and
go on with localized grid refinement and a huge list of other stuff I
don't want to think about.

But FEM is for Mech. Engineers!  They deal with stresses and they need
refinement.  Circuit simulation (I am talking about SPICE, etc.) uses
experimentally measured parameters (capacitance per unit area,
resistance per unit length, etc.).  In VLSI design there is a minimum
feature size that can be used, so NOT refining will work.
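
  To make the comparison concrete, here is the kind of step a
finite-difference DE solver takes, sketched for the 1-D heat equation
u_t = D * u_xx (my own toy example, not SPICE or any FEM package; N,
D, dt, dx and the fixed boundaries are made-up parameters).  It
marches forward in discrete time steps on a fixed grid, just like a
CA, but each update is floating-point arithmetic on a continuum
approximation, so truncation and round-off error enter at every step.

/* One explicit finite-difference step for the 1-D heat equation
 * u_t = D * u_xx (a toy example; N and the fixed boundary values are
 * arbitrary choices).  For stability the caller must keep
 * D*dt/(dx*dx) <= 0.5. */
#define N 100

void heat_step(double u[N], double D, double dt, double dx)
{
    double unew[N];
    double r = D * dt / (dx * dx);   /* diffusion number */
    int i;

    unew[0]     = u[0];              /* hold the boundaries fixed */
    unew[N - 1] = u[N - 1];
    for (i = 1; i < N - 1; i++)
        unew[i] = u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1]);

    for (i = 0; i < N; i++)          /* copy the new values back */
        u[i] = unew[i];
}

Structurally this looks very much like a CA step (a fixed grid, a
local update, a global sweep); the difference is that the Wireworld
update earlier is exact integer arithmetic, while this one
approximates a continuum.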


>
>  So a *CA simulation* of a serial computer will approach the speed of a
>*physical implementation* of that computer!  At least, it seems that
>way to me.  
>
Assuming that lots of approximations are made during the simulation,
I agree.


>
>  This really starts to sound like a lot of philosophical hocus-pocus,
>but now that I've already gone that far, I might as well ask the real
>philosophical knockout question: do you think it is feasible that we
>will someday be able to simulate computers, using CA techniques, so
>that they will run *faster* than the physical implementation of the
>machines?  That is, maybe CA hardware will develop so well that it
>will get ahead of manufacturing technology in some areas?
>
If you are willing to stretch your question, I think it's already been
done!  There are some newly announced chips called
'Field Programmable Logic Arrays'.  They contain an array of PLA's
(Programmable Logic Arrays) connected by a programmable
interconnect, which means you can reprogram them at run-time.
In other words you can have a bunch of them emulating a SPARC at one
point in time, then reprogram them so they emulate a 68020 equally
well, and finally change them again so they emulate a small
Connection Machine.

Now where does the simulation come into play?  I think it makes the
gap between simulation and implementation VERY small.  What you can
do is design the circuit and, instead of simulating it, program it
onto an array of FPLA's, see if it works, and then (if necessary) go
back to the drawing board.  Scary, isn't it?



----------------------------------------------------------------------
  George Kyriazis                 kyriazis@turing.cs.rpi.edu
				  kyriazis@rdrc.rpi.edu
 				  kyriazis@iear.arts.rpi.edu