[rec.puzzles] How Intelligent are the Winning Ways?

bagchi@eecs.umich.edu (Ranjan Bagchi) (03/02/91)

	How well-regarded is the stuff in 'Winning Ways' by
Berkamp, Conway, and Guy?  Prior to looking at it, I was under the
impression that it was basically a book of recreational mathematics,
albeit with sound math behind it.  
	Anyway, I looked it up today because I'd heard of some work in
it screwing around with Conway's Life ;), and trying to show how to
build a computing machine with an appropriate configuration.
 
	Anyway...some of the initial proofs I have no problem with,
until the author mentions some wonderful things which a Life computer
could do, if there were some kind of closed form way to determine what
a given Life configuration would stabilize out to.   This is where I
balked... seeing that a Turing Machine is definitely constructable in Life
if you can do a computing machine, then the author just described the
Halting Problem, and treats it like it's something decidable.

	One last anyway... this is more for the cellular-automata folks, but
isn't the method of constructing a computing machine used in the book
(basically modelling the E.E. approach to computer-building, using
gates and stuff) kinda unnatural?  I've only really just started
studying automata theory, but this strikes me as weird.
 
	-rj

	
	
--
--------------------------------------------------------------------------------
Ranjan Bagchi - At Large.  Well Kinda.  |  what kind of person
bagchi@[eecs                            |  would want to count syllables
        caen,                           |  just to write haiku?
	math.lsa].umich.edu     	|  
--------------------------------------------------------------------------------

wlod@netcom.COM (Wlodzimierz Holsztynski) (03/03/91)

In article <BAGCHI.91Mar2000950@snarf.eecs.umich.edu> bagchi@eecs.umich.edu (Ranjan Bagchi) writes:
>
>	How well-regarded is the stuff in 'Winning Ways' by
>Berkamp, Conway, and Guy?

Berkamp, Conway and Guy regard it very well.

	Wlodek

dseal@armltd.uucp (David Seal) (03/04/91)

In article <BAGCHI.91Mar2000950@snarf.eecs.umich.edu> bagchi@eecs.umich.edu
(Ranjan Bagchi) writes:

>
>       How well-regarded is the stuff in 'Winning Ways' by
>Berkamp, Conway, and Guy?  Prior to looking at it, I was under the
 ^^^^^^^
 Berlekamp, I believe.

>impression that it was basically a book of recreational mathematics,
>albeit with sound math behind it.  
>       Anyway, I looked it up today because I'd heard of some work in
>it screwing around with Conway's Life ;), and trying to show how to
>build a computing machine with an appropriate configuration.
> 
>       Anyway...some of the initial proofs I have no problem with,
>until the author mentions some wonderful things which a Life computer
>could do, if there were some kind of closed form way to determine what
>a given Life configuration would stabilize out to.   This is where I
>balked... seeing that a Turing Machine is definitely constructable in Life
>if you can do a computing machine, then the author just described the
>Halting Problem, and treats it like it's something decidable.

I'm afraid you're missing the point. The overall argument is:

   If there were a "closed form way" to determine the fate of a given Life
   configuration, then we could do all sorts of "wonderful things".

   But it is known that there are such things that we cannot do. Therefore
   there cannot be such a "closed form way".

The exact "wonderful thing we cannot do" to use is a matter of personal
choice. The authors use the known fact that there are undecidable problems
in number theory. Your argument uses the known fact that the Turing machine
halting problem is undecidable. There are all sorts of other arguments, all
more or less equivalent and all leading to the same conclusion.
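
To spell out the shape of that reduction in code (a rough Python sketch, not
anything from the book; "decide_life_fate" and "encode_tm_as_life" are
hypothetical stand-ins for the "closed form way" and for the Life-computer
construction respectively):

def decide_life_fate(pattern):
    """Hypothetical "closed form way": report what a Life pattern stabilizes
    to.  No such total, always-correct procedure can exist."""
    raise NotImplementedError("this is exactly what is being proved impossible")

def encode_tm_as_life(machine, tape):
    """Hypothetical encoder: build a Life pattern simulating a given Turing
    machine, say one that dies out if and only if the machine halts."""
    raise NotImplementedError("stands in for the Life-computer construction")

def halts(machine, tape):
    # If both pieces above existed, this would decide the halting problem,
    # contradicting Turing's theorem; hence decide_life_fate cannot exist.
    return decide_life_fate(encode_tm_as_life(machine, tape)) == "dies out"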

I can see advantages to both arguments. Your argument about the Turing
machine is closer to what the Life computer actually is. However, I believe
the argument about undecidable problems in number theory is a better one for
the book: the authors can reasonably assume that anyone who has followed
them to that point in the book knows what a simple mathematical formula
means, but not that they know what a Turing machine is! So their particular
choice of argument probably makes the book some 10-20 pages shorter than
your one...

Incidentally, how would one go about building a Life Turing machine? I can
see one very long-winded technique, with a stationary tape and a moving
state machine: the state machine could move by emitting fleets of gliders to
(a) destroy itself; (b) reassemble an exact copy of itself one tape spacing
to the left or right. However, this seems like an awful lot of work: a
moving tape, stationary state machine Turing machine would seem a lot more
efficient. Any ideas how the moving tape might be constructed?

David Seal
dseal@acorn.co.uk

Standard disclaimers apply.

jgk@osc.COM (Joe Keane) (03/05/91)

In article <BAGCHI.91Mar2000950@snarf.eecs.umich.edu> bagchi@eecs.umich.edu
(Ranjan Bagchi) writes:
>	Anyway...some of the initial proofs I have no problem with,
>until the author mentions some wonderful things which a Life computer
>could do, if there were some kind of closed form way to determine what
>a given Life configuration would stabilize out to.   This is where I
>balked... seeing that a Turing Machine is definitely constructable in Life
>if you can do a computing machine, then the author just described the
>Halting Problem, and treats it like it's something decidable.

I think you're misinterpreting them.  They're just emphasizing the point that
the decision problem for Life is undecidable.

They claim that a Life computer can be programmed to spit out the first
counterexample to Fermat's Last Theorem (a solution of x^n + y^n = z^n with
n > 2), if one exists.  I wouldn't bet my life on their construction, but I
don't see any problem with it.  Then, if you had a decision procedure for
this computer, you'd know whether FLT were true or false.

Some problems are undecidable in a contrived or artificial way.  Their point
is that Life is not like that, since it can include arbitrary Turing machines,
including universal ones.  If there were a decision procedure for Life, then
you could solve any mathematical problem.
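
To make that concrete, here is a rough Python sketch (an illustration only,
not their construction) of the kind of program such a Life computer would
run: it halts exactly when it finds a counterexample to FLT, so a decision
procedure for "does this machine ever halt?" would settle the theorem.

from itertools import count

def fermat_search():
    """Search for a counterexample to Fermat's Last Theorem: x^n + y^n = z^n
    with n > 2 and x, y, z >= 1.  Halts if and only if one exists."""
    for bound in count(3):                 # enumerate by a common bound so that
        for n in range(3, bound + 1):      # every quadruple is eventually tried
            for x in range(1, bound + 1):
                for y in range(1, bound + 1):
                    for z in range(1, bound + 1):
                        if x ** n + y ** n == z ** n:
                            return (x, y, z, n)

# Deciding whether fermat_search() halts is deciding FLT.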

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (03/06/91)

In article <5539@acorn.co.uk> dseal@armltd.uucp (David Seal) writes:

> Incidentally, how would one go about building a Life Turing machine? I
> can see one very long-winded technique, with a stationary tape and a
> moving state machine: the state machine could move by emitting fleets
> of gliders to (a) destroy itself; (b) reassemble an exact copy of
> itself one tape spacing to the left or right. However, this seems like
> an awful lot of work: a moving tape, stationary state machine Turing
> machine would seem a lot more efficient. Any ideas how the moving tape
> might be constructed?

Won't work, or at least won't act like a Turing machine.

You have to be able to reverse direction on the tape.  Rebuilding the
head one space left rather than one space right is at least feasible,
but reversing the motion of a tape which is by definition infinite in
length is not.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

edp@jareth.enet.dec.com (Eric Postpischil (Always mount a scratch monkey.)) (03/06/91)

In article <1991Mar5.171804.1429@zorch.SF-Bay.ORG>, xanthian@zorch.SF-Bay.ORG
(Kent Paul Dolan) writes:

[about creating a Turing machine in the universe of Conway's Life]

>You have to be able to reverse direction on the tape.  Rebuilding the
>head one space left rather than one space right is at least feasible,
>but reversing the motion of a tape which is by definition infinite in
>length is not.

What's been written to the tape is never actually infinite; reversing the motion
of the portion that has been written and marking the current ends is sufficient
to achieve the emulation.
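
In code terms (a minimal sketch of the bookkeeping only, nothing to do with
the Life-level machinery): the "infinite" tape only ever needs storage for
the cells that have actually been visited, and reversing direction is
trivial at this level.

class Tape:
    """Turing tape, unbounded in principle but backed by finite storage:
    only cells that have been written are stored; all others read as blank."""
    def __init__(self, blank=0):
        self.cells = {}          # position -> symbol, written cells only
        self.pos = 0
        self.blank = blank

    def read(self):
        return self.cells.get(self.pos, self.blank)

    def write(self, symbol):
        self.cells[self.pos] = symbol

    def move(self, direction):   # +1 or -1; direction can be reversed freely
        self.pos += direction

# Example: write a few symbols, then back up and read.
t = Tape()
for s in (1, 1, 0, 1):
    t.write(s); t.move(+1)
t.move(-1); t.move(-1)
print(t.read())                  # prints 0 (the third symbol written)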


				-- edp (Eric Postpischil)
				"Always mount a scratch monkey."
				edp@jareth.enet.dec.com

cjhs@minster.york.ac.uk (03/07/91)

in article <1991Mar5.171804.1429@zorch.SF-Bay.ORG>, xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) says:
> 
> In article <5539@acorn.co.uk> dseal@armltd.uucp (David Seal) writes:
>> Incidentally, how would one go about building a Life Turing machine? I

The method proposed in "Winning Ways" is to construct a machine which
has the same computational power as a Turing machine, using counters
that can hold arbitrary integers. To build something corresponding
directly to a Turing machine with an infinite tape...

>> can see one very long-winded technique, with a stationary tape and a
>> moving state machine: the state machine could move by emitting fleets
>> of gliders to (a) destroy itself; (b) reassemble an exact copy of
>> itself one tape spacing to the left or right. However, this seems like
>> an awful lot of work: a moving tape, stationary state machine Turing
>> machine would seem a lot more efficient. Any ideas how the moving tape
>> might be constructed?
> 
> Won't work, or at least won't act like a Turing machine.
> 
> You have to be able to reverse direction on the tape.  Rebuilding the
> head one space left rather than one space right is at least feasible,
> but reversing the motion of a tape which is by definition infinite in
> length is not.

Actually, I think it IS possible. What we may be able to do is
represent a finite tape which can be moved to the left and the right,
and also extended as necessary. As long as we insist that the infinite
tape is such that at any given time all symbols sufficiently far from
the read/write head are OFF (or zero), then this extendible finite tape
is essentially the same thing.

We now come to a fun application of a remarkable pattern recently
posted to the net by Paul Callahan: <callahan@server.cs.jhu.edu>

This pattern is one which I would have said was impossible, until it
was explicitly provided. I have tried it out using Xlife, and it works
as advertised.

The pattern provided is an extendible delay loop. It is a pattern which
encodes some number of bits of information as an endlessly circling
ring of gliders, and by sending gliders at the pattern in appropriate
ways one can rewrite any bit, or (dig this!) INSERT an extra bit into
the stream.

We need two such delay loops to represent the tape, and a finite
control for the Turing machine itself. The initial input tape is
encoded onto the initial delay loops, and must (of course) have only a
finite number of ON (or one) symbols.

The bits on the delay loop are grouped into pairs, so that four symbols
can be represented. They are HEAD, TAIL, ON, OFF. Each delay loop
always has exactly one HEAD symbol, one TAIL symbol, and some number of
ON and OFF symbols between them.

The symbol under the read/write head of the tape is kept in the finite
control. The symbols on the tape to the "left" of the read/write head
correspond (in order) to the symbols coming after the HEAD symbol on
the left hand delay loop. The TAIL symbol represents an infinite
sequence of OFF symbols. And analogously for the right hand side.

To move the tape to the left, the finite control acts as follows.  It
waits until the HEAD symbol is about to come around on the left hand
delay loop, then overwrites it with any symbol, and overwrites the
following symbol with HEAD. At the same time, this following symbol
must also be tested, and some signal sent back to the finite control
representing ON or OFF. Tricky co-ordination required here, but far
from impossible. At some point the delay stream must test for the HEAD
symbol, and then send a timing signal back to the finite control.

At the same time, the finite control must insert into the right hand
delay loop (after the HEAD symbol) the bit which is to replace the
symbol which was under the read/write head.

Voila!
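
For anyone who wants to check the bookkeeping, here is a rough Python sketch
of the tape abstraction the two delay loops give you (the glider-level
co-ordination, which is the hard part, is not modelled at all):

class DelayLoopTape:
    """Model of the two-delay-loop tape.  Each half-tape is a list of bits
    running away from the head; running off the end of a list corresponds to
    passing the TAIL symbol, i.e. an infinite run of OFF (0) symbols."""
    def __init__(self, left, right, under_head):
        self.left = list(left)     # left[0] is the symbol just left of the head
        self.right = list(right)   # right[0] is the symbol just right of the head
        self.under_head = under_head   # held in the finite control

    def move_left(self, write):
        # The written symbol becomes the nearest symbol on the right-hand
        # loop; the nearest left-hand symbol moves under the head.
        self.right.insert(0, write)
        self.under_head = self.left.pop(0) if self.left else 0   # past TAIL

    def move_right(self, write):
        self.left.insert(0, write)
        self.under_head = self.right.pop(0) if self.right else 0

# Example: tape ...0 [1] 1 0..., head on the bracketed cell; write 0, move left.
t = DelayLoopTape(left=[0], right=[1, 0], under_head=1)
t.move_left(write=0)
print(t.under_head, t.left, t.right)   # prints: 0 [] [0, 1, 0]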

This Turing machine has some unfortunate properties. In particular,
each step of the machine will take more Life cycles than the step
before it. Even worse, the extendible delay loop grows very rapidly and
without limit, even if you are not inserting bits into it.  Thus any
implementation would bog down before the Turing machine had executed
its first couple of cycles.

Even so, I am strongly of the opinion that a Turing machine could be
implemented in Life EXPLICITLY, and the pattern actually run with the
Xlife program using a reasonable amount of memory. You just won't have
the patience to wait for it to do anything useful.

I don't want to spend the time to figure out the pattern. If anyone
else does, go for it. Don't forget to acknowledge Paul and myself when
you publish!

I have the pattern for Paul's extendible delay loop, if anyone needs it.
But I have lost his excellent commentary. So better you mail Paul first,
if you are interested.

Ciao -- Christopher Ho-Stuart
	University of York
	cjhs@uk.ac.york.minster     or possibly    cjhs@minster.york.ac.uk

dseal@armltd.uucp (David Seal) (03/11/91)

In article <1991Mar5.171804.1429@zorch.SF-Bay.ORG> xanthian@zorch.SF-Bay.ORG
(Kent Paul Dolan) writes:

>In article <5539@acorn.co.uk> dseal@armltd.uucp (David Seal) writes:
>
>> Incidentally, how would one go about building a Life Turing machine? I
>> can see one very long-winded technique, with a stationary tape and a
>> moving state machine: the state machine could move by emitting fleets
>> of gliders to (a) destroy itself; (b) reassemble an exact copy of
>> itself one tape spacing to the left or right. However, this seems like
>> an awful lot of work: a moving tape, stationary state machine Turing
>> machine would seem a lot more efficient. Any ideas how the moving tape
>> might be constructed?
>
>Won't work, or at least won't act like a Turing machine.
>
>You have to be able to reverse direction on the tape.  Rebuilding the
>head one space left rather than one space right is at least feasible,
>but reversing the motion of a tape which is by definition infinite in
>length is not.

Some vague thoughts I've had about a moving tape:

Suppose you have a pattern which can contain an arbitrarily long string of
data:

   A B C D ... Y Z

And that you can control it from its head to either shift the data outwards,
adding a new entry:

   A B C D ... Y Z
 ->nAB C D ... Y Z
 ->n ABC D ... Y Z
 ->n A BCD ... Y Z
 ->n A B CD... Y Z
 ->...
 ->n A B C ...XY Z
 ->n A B C ... XYZ
 ->n A B C ... X Y Z

Or to shift the data inwards, removing the head in the process:

   A B C D ... Y Z
 -> B  C D ... Y Z
 ->B  C  D ... Y Z
 ->B C  D  ... Y Z
 ->B C D  E... Y Z
 ->...
 ->B C D E ...Y  Z
 ->B C D E ...  Z
 ->B C D E ... Z

Then you could construct a moving tape out of two of these: you shift the
tape right by shifting its left half inwards and its right half outwards,
moving the symbol taken from the head of the left half on to the right half:

   z y ... d c b a   |   A B C D ... Y Z
 ->  z ... e d c b   |   a A B C ... X Y Z

You would of course need to take special action if the left hand side is
empty: this simply consists of pretending you read a "0" from the left hand
half-tape, extending the right hand half-tape with a "0", and not attempting
to shorten the left hand half-tape. Shifting the tape leftwards would be
similar.

An important part of this scheme is that the actual shift of the tape would
take place as a "wave" moving out along the tape at some constant speed, as
indicated in the diagrams above. Furthermore, provided both the "compression
wave" generated by shifting a tape half outwards and the "expansion wave"
generated by shifting a tape half inwards move at the same speed, you don't
have to wait for the tape movement to complete - you merely have to wait for
a suitable period to ensure that two successive waves don't interfere with
each other. This should be a constant amount of time, independent of the
length of the half-tape.
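
In ordinary data-structure terms the scheme amounts to the following rough
Python sketch (bookkeeping only; the "waves" and their constant-time overlap
are precisely the part this does not capture):

class MovingTape:
    """Moving tape built from two half-tapes.  Shifting the whole tape right
    means shifting the left half inwards (removing its head symbol) and the
    right half outwards (inserting that symbol at its head).  An empty half
    behaves as if it held blanks (0)."""
    def __init__(self, left, right):
        self.left = list(left)     # left[0] is the head end of the left half
        self.right = list(right)   # right[0] is the head end of the right half

    def shift_right(self):
        moved = self.left.pop(0) if self.left else 0   # pretend we read a "0"
        self.right.insert(0, moved)

    def shift_left(self):
        moved = self.right.pop(0) if self.right else 0
        self.left.insert(0, moved)

# Example matching the diagrams above: shift the tape right once.
t = MovingTape(left=['a', 'b', 'c'], right=['A', 'B', 'C'])
t.shift_right()
print(t.left, t.right)    # prints: ['b', 'c'] ['a', 'A', 'B', 'C']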

If you want a Turing machine implementation that runs at a reasonable
constant speed, something like this would seem to be the answer. The other
options I've seen by now are:

(a) The moving machine, stationary tape option referred to in my original
    message. Runs at a constant but VERY slow speed.

(b) The technique using extendible delay loops to simulate tape halves,
    described in a posting by Christopher Ho-Stuart. Runs at a non-constant
    speed, roughly inversely proportional to the size of the chunk of tape
    used so far.

(c) A technique described to me by email by Richard Schroeppel, which can be
    summarised as having a stationary tape and a stationary machine with a
    movable read/write head. He says it uses technology similar to that used
    for the sliding block memory to move the head and to read and write the
    contents of the tape. When reading the tape, two pulses are sent out:
    one is returned or not returned according to the contents of the tape,
    while the other is always returned and acts as a timing pulse. This
    again runs at a non-constant speed, this time roughly inversely
    proportional to the distance the tape head has moved from the state
    machine.

What I don't know is how to construct the half-tape required for my moving
tape. I suspect it's quite difficult!

David Seal
dseal@acorn.co.uk or dseal@armltd.uucp

Standard disclaimers apply.

callahan@cs.jhu.edu (Paul Callahan) (03/12/91)

In article <5717@acorn.co.uk> dseal@armltd.uucp (David Seal) writes:
[Some stuff about Life Turing Machines with constant slowdown in simulation]

It's interesting that this thread has gotten going now, since I posted
related material over two months ago that did not generate much commentary.
In a recent posting, Christopher Ho-Stuart referred to my construction of an 
extensible delay loop.  Unfortunately, as he and David Seal have pointed out, 
any TM simulation that used it would have a non-constant slowdown proportional
to the current size of the store. 

On the other hand, it *is* an explicit Life pattern (available in Xlife format),
and for this reason, I find it a little more satisfying than most of the general
schemes that have been proposed.  It would not be all that hard to add a 
finite control and turn it into a universal machine, but I have not had the 
time recently.

However, if we are speaking of general approaches instead of explicit patterns, 
then there is a straightforward general method for constructing a constant time
simulation of an arbitrary 1-D cell automaton (with finite states per cell and
transition function based on left and right neighbors), and it doesn't take 
much imagination to see that this implies a constant time simulation of an 
arbitrary Turing Machine.
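
For concreteness, here is a rough Python sketch of the standard trick behind
that last step (an illustration only, not the rake construction described
below): a Turing machine maps onto a row of cells whose next state depends
only on the left and right neighbours, so a constant-time simulation of such
a 1-D automaton gives a constant-time simulation of the machine.

# Cell states are either a bare tape symbol, or a (symbol, tm_state) pair
# marking the cell currently under the head.  DELTA is a hypothetical example
# machine: in state 'A' it writes 1 and moves right, halting when it reads 1.
DELTA = {            # (tm_state, read_symbol) -> (write_symbol, move, new_state)
    ('A', 0): (1, +1, 'A'),
}

def ca_step(cells):
    """One synchronous update of a (finite, zero-padded) row of cells."""
    def new_state(left, mid, right):
        if isinstance(mid, tuple):             # this cell holds the head
            sym, q = mid
            if (q, sym) not in DELTA:          # no transition: machine has halted
                return mid
            write, _, _ = DELTA[(q, sym)]
            return write                       # head departs, leaving the new symbol
        for nbr, direction in ((left, +1), (right, -1)):
            if isinstance(nbr, tuple):         # the head is next door...
                sym, q = nbr
                if (q, sym) in DELTA:
                    _, move, new_q = DELTA[(q, sym)]
                    if move == direction:      # ...and about to move onto us
                        return (mid, new_q)
        return mid                             # nothing interesting nearby
    padded = [0] + list(cells) + [0]
    return [new_state(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

# Example: blank tape, head in state 'A' on the middle cell.
row = [0, 0, (0, 'A'), 0, 0]
for _ in range(2):
    row = ca_step(row)
    print(row)     # the head marches right, writing 1s behind it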

We first note that it is possible to construct any finite logic circuit with
glider guns, and that glider guns can be produced by a collision of gliders.
There is a construction called a "rake" (or a "glider puffer") that produces
a glider at regular intervals (like a gun) but which itself propagates 
either horizontally or vertically.  There is a technique for constructing 
rakes with arbitrarily long periods, which I discussed in some detail in
early January.  

It turns out that this last point was not well-known (i.e. not discovered
by Gosper in the early 70s), but essentially the same approach had been
discovered a little earlier by Dean Hickerson.  To make a long story short,
it is possible to construct a class of rakes whose period grows exponentially
with the size (measured either by cell population or area of bounding box) of 
the construction.  Again, I have an Xlife pattern that illustrates the 
technique, and if some kind reader wishes to dredge up the accompanying text
of my two articles about this pattern, I can repost.

The famous breeder pattern (see any Life reference) consists of a growing 
sequence of glider guns left in the wake of a flotilla of interacting rakes.  
As long as our rakes are of sufficiently long period, we can construct a 
sequence of finite logic cells in the same manner (modulo a few details about 
how to get the logic cells in the right initial state).  The logic cells 
are themselves composed of glider guns, so the gliders-->glider-gun collision  
is all we really need.

Our 1-D cell automaton consists of an infinite sequence of cells lying along a 
horizontal line.  They are initially quiescent with the exception of a finite
interval around the origin (call these the active cells).  We allow cells to 
wake up neighbors and send symbols back and forth, with the internal state 
changing according to some transition function.  

In the Life construction, logic cells would interact by sending gliders to 
their neighbors.  Gliders, of course, propagate diagonally, not horizontally, 
but we may get around this by leaving a trail of "shuttles" (also constructable
by a glider collision), parallel to the cells, to reflect the gliders 90 
degrees back to the horizontal axis (details left as an exercise).  
Alternatively, we can use horizontally propagating "spaceships" to carry 
information, as these can be produced in the logic cell using appropriate glider
collisions.

To construct a Life pattern corresponding to such an automaton with a given
initial configuration, we construct the active cells explicitly (note there 
are finitely many) and we develop two rake flotillas to build the infinite 
sequences of quiescent cells to the left and right (note that we could 
get away with an automaton that is infinite in just one direction). 

The rake flotillas move at half the "speed of light," so the sequence of
new cells is produced very rapidly.  As long as the propagation of data
between the cells is slower (and it will be), we are guaranteed that before
any cell tries to wake up its neighbor, it will, in fact, have a neighbor to 
wake up. 

The number of steps it takes to compute the transition function in the
logic cell gives us the constant slowdown.  Otherwise, the pattern is a direct
simulation of an arbitrary 1-D cell automaton.  The catch is that we
need a rake for every glider to make the logic cell, and a cell may
require hundreds of gliders even if its transition function is quite simple.
Thus, our rake flotillas consist of very large (but finite) populations of 
cells.

However, all of the pieces (logic cells, rakes) can be easily constructed 
explicitly, so it is not as bad as a simulation that relies on universal 
constructors.  The essential difference is that rake flotillas are 
pre-programmed constructors that perform a very specific job.  Hence, they 
are much easier to design than universal constructors.

I believe that with a suitable encoding scheme in which rakes are treated
as aggregates identified by position and phase-shift (rather than cell
populations), it would be possible to explicitly construct and verify such a 
pattern using readily available computational power (e.g. a desktop 
workstation).  If we had some CAD tools along the lines of "silicon compilers" 
for Life logic, then the design process would be much easier.  Of 
course, all of this is merely a question of filling in the details of proofs
that are generally agreed upon.  Besides the simple pleasure of building  
a machine and watching it click and whir, there may be little merit to 
such activity.

--
Paul Callahan
callahan@cs.jhu.edu

xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) (03/12/91)

dseal@armltd.uucp (David Seal) writes:
> xanthian@zorch.SF-Bay.ORG (Kent Paul Dolan) writes:

>> dseal@armltd.uucp (David Seal) writes:
>>
>>> Incidentally, how would one go about building a Life Turing machine? I
>>> can see one very long-winded technique, with a stationary tape and a
>>> moving state machine: the state machine could move by emitting fleets
>>> of gliders to (a) destroy itself; (b) reassemble an exact copy of
>>> itself one tape spacing to the left or right. However, this seems like
>>> an awful lot of work: a moving tape, stationary state machine Turing
>>> machine would seem a lot more efficient. Any ideas how the moving tape
>>> might be constructed?

>>Won't work, or at least won't act like a Turing machine.

By which I should have clarified that I meant: in terms of the
number of cycles needed to complete an operation, operations that have
the same order statistics on a Turing machine can have very different
order statistics in the moving-tape Life emulation. Example: moving right
writing a symbol into each blank frame "forever", and doing the same but
pausing one "step" time at each frame, are both order N time operations
in the number N of frames traversed on a Turing machine. The first is
still order N on a moving tape Life emulation of a Turing machine, but
the second is (at least) order N*N, since the emulation has to wait for
the cessation/resumption of motion to reach the (ever more distant)
left end of the tape and report back between each pair of steps.

>>You have to be able to reverse direction on the tape.  Rebuilding the
>>head one space left rather than one space right is at least feasible,
>>but reversing the motion of a tape which is by definition infinite in
>>length is not.

And as several folks have written me, the unwritten piece of the tape
can be implicit, so only a finite tape need be considered in real use.
This doesn't remove the objection that what you end up with doesn't
have behavior subject to the same analysis as a "real" Turing machine.

> Some vague thoughts I've had about a moving tape:

> Suppose you have a pattern which can contain an arbitrarily long
> string of data:

>    A B C D ... Y Z

> And that you can control it from its head to either shift the data outwards,
> adding a new entry:

>    A B C D ... Y Z
>  ->nAB C D ... Y Z
>  ->n ABC D ... Y Z
>  ->n A BCD ... Y Z
>  ->n A B CD... Y Z
>  ->...
>  ->n A B C ...XY Z
>  ->n A B C ... XYZ
>  ->n A B C ... X Y Z

> Or to shift the data inwards, removing the head in the process:

>    A B C D ... Y Z
>  -> B  C D ... Y Z
>  ->B  C  D ... Y Z
>  ->B C  D  ... Y Z
>  ->B C D  E... Y Z
>  ->...
>  ->B C D E ...Y  Z
>  ->B C D E ...  Z
>  ->B C D E ... Z

> Then you could construct a moving tape out of two of these: you shift the
> tape right by shifting its left half inwards and its right half outwards,
> moving the symbol taken from the head of the left half on to the right half:

>    z y ... d c b a   |   A B C D ... Y Z
>  ->  z ... e d c b   |   a A B C ... X Y Z

> You would of course need to take special action if the left hand side
> is empty: this simply consists of pretending you read a "0" from the
> left hand half-tape, extending the right hand half-tape with a "0",
> and not attempting to shorten the left hand half-tape. Shifting the
> tape leftwards would be similar.

> An important part of this scheme is that the actual shift of the tape
> would take place as a "wave" moving out along the tape at some
> constant speed, as indicated in the diagrams above. Furthermore,
> provided both the "compression wave" generated by shifting a tape half
> outwards and the "expansion wave" generated by shifting a tape half
> inwards move at the same speed, you don't have to wait for the tape
> movement to complete - you merely have to wait for a suitable period
> to ensure that two successive waves don't interfere with each other.
> This should be a constant amount of time, independent of the length of
> the half-tape.

But the capability to move that wave down the tape is the capability to
record and erase an arbitrary current state, remember and write the
previous state, and move one step; i.e., pretty much the capabilities of
the moving head of an (at least) alphabet*alphabet state Turing machine.
If you can send that wave, whatever means you use to propagate it is
pretty much the functional equivalent of the thing you were trying to
use it to replace, a moving head. Being that close to solving the
problem of a moving head that rebuilds itself with saved state one step
left or right, it looks simpler to finish the job than to build an
immobile head which can also build this moving "almost a head" in two
versions.

I don't see that as solving the problem, more as making it worse, but I'm
willing to listen.

Kent, the man from xanth.
<xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>

dseal@armltd.uucp (David Seal) (03/14/91)

In article <callahan.668718947@newton.cs.jhu.edu> callahan@cs.jhu.edu (Paul
Callahan) writes:

>In article <5717@acorn.co.uk> dseal@armltd.uucp (David Seal) writes:
>[Some stuff about Life Turing Machines with constant slowdown in simulation]
>
>It's interesting that this thread has gotten going now, since I posted
>related material over two months ago that did not generate much commentary.

I've only recently found and started reading this group, because of the
crossposting of this thread to sci.math!

>However, if we are speaking of general approaches instead of explicit patterns, 
>then there is a straightforward general method for constructing a constant time
>simulation of an arbitrary 1-D cell automaton (with finite states per cell and
>transition function based on left and right neighbors), and it doesn't take 
>much imagination to see that this implies a constant time simulation of an 
>arbitrary Turing Machine.
>
>We first note that it is possible to construct any finite logic circuit with
>glider guns, and that glider guns can be produced by a collision of gliders.
>There is a construction called a "rake" (or a "glider puffer") that produces
>a glider at regular intervals (like a gun) but which itself propagates 
>either horizontally or vertically.  There is a technique for constructing 
>rakes with arbitrarily long periods, which I discussed in some detail in
>early January.

...

>Our 1-D cell automaton consists of an infinite sequence of cells lying along a 
>horizontal line.  They are initially quiescent with the exception of a finite
>interval around the origin (call these the active cells).  We allow cells to 
>wake up neighbors and send symbols back and forth, with the internal state 
>changing according to some transition function.  

...

>To construct a Life pattern corresponding to such an automaton with a given
>initial configuration, we construct the active cells explicitly (note there 
>are finitely many) and we develop two rake flotillas to build the infinite 
>sequences of quiescent cells to the left and right (note that we could 
>get away with an automaton that is infinite in just one direction). 

... 

>The number of steps it takes to compute the transition function in the
>logic cell gives us the constant slowdown.  Otherwise, the pattern is a direct
>simulation of an arbitrary 1-D cell automaton.  The catch is that we
>need a rake for every glider to make the logic cell, and a cell may
>require hundreds of gliders even if its transition function is quite simple.
>Thus, our rake flotillas consist of very large (but finite) populations of 
>cells.

Interesting idea. However, rather than using the 1-D cellular automaton
directly to simulate the Turing machine, why not just use the same idea to
generate a moving tape?

In other words, produce something like the following:


                          +-----------------+
                          |State machine for|
                          | Turing machine  |
                          +-----------------+
                                   |
                                   |
                                   |
                             +-----------+
                             |  Special  |
           +----+   +----+   | read/write|   +----+   +----+
    ... ---|Cell|---|Cell|---|    cell   |---|Cell|---|Cell|--- ...
           +----+   +----+   +-----------+   +----+   +----+


with the rows of cells being generated by rakes disappearing into the
distance, as in your suggestion.

I haven't worked out a detailed state diagram for the cells being generated,
but it should be fairly simple. Essentially, the two halves of the tape are
mirror images of each other. Each cell can contain no bits, one bit, or two
bits. No-bit cells would grab the bit from the next cell out along the tape,
converting it to a no-bit cell, while one-bit cells would be passive and
two-bit cells would force their outermost bit into the next cell out along
the tape, converting it into a two-bit cell. To ensure that everything worked
smoothly, I suspect the tape state machine would have to cycle at something
like twice the speed of the Turing state machine.
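
A rough Python sketch of that cell behaviour (a reading of the above, purely
to check the bookkeeping; "out" means away from the special read/write cell):

def half_tape_step(cells):
    """One synchronous step for one half of the tape, innermost cell first.
    Each cell holds 0, 1 or 2 bits.  A no-bit cell grabs the innermost bit of
    the next cell out; a two-bit cell pushes its outermost bit into the next
    cell out.  Decisions are made on the old state, so a gap (or an extra
    bit) ripples outwards one cell per step, like the wave described above."""
    old = [list(c) for c in cells] + [[0]]          # fresh quiescent cell at far end
    new = [list(c) for c in old]
    for i in range(len(old) - 1):
        if len(old[i]) == 0 and len(old[i + 1]) >= 1:    # pull a bit inwards
            new[i].append(old[i + 1][0])
            new[i + 1].pop(0)
        elif len(old[i]) == 2:                           # push a bit outwards
            new[i + 1].insert(0, old[i][-1])
            new[i].pop()
    return new

# Shifting the tape towards the machine: the innermost cell has just given its
# bit to the read/write cell, and the resulting gap ripples outwards.
half = [[], [1], [0], [1]]
for _ in range(3):
    half = half_tape_step(half)
    print(half)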

Overall, I'm fairly certain such a tape would require a lot less logic per
cell than a 1-D cellular automaton that is capable of simulating a Turing
machine. This would presumably make designing and setting up a suitable
pattern of rakes considerably easier!

>... Of 
>course, all of this is merely a question of filling in the details of proofs
>that are generally agreed upon.  Besides the simple pleasure of building  
>a machine and watching it click and whir, there may be little merit to 
>such activity.

Yes, indeed. I enjoy simple pleasures! :-)

In fact, I even enjoy the thought experiments involved in working out how
one might go about doing such a design, regardless of whether the details
are ever filled in.

David Seal

Preferred email address is "dseal@armltd.uucp". Please use "dseal@acorn.co.uk"
instead if this doesn't succeed - "armltd.uucp" is not known everywhere yet.