[comp.ai] random vs. ran

silber@sbphy.ucsb.edu (05/26/89)

Let 'random sequence' := a sequence whose observable characteristics
correspond to the observable characteristics of a sequence of physical
events which we identify as 'random'.  If we KNOW that the sequence
was generated by an 'algorithm', then it is PSEUDO-random.  However,
the sequence corresponding to the motivating physical phenomenon (whose
pattern cannot be generated by any algorithm known to us) is
of a different epistemological order, viz. 'RANDOM'.  We can never
transcend the realm of partial knowledge, and even if we believe that
on some ultimate scale everything is pseudo-random, we can't prove it.
So 'free will' vs. 'determinism' is just a matter of aesthetics!
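[Ed.: a pseudo-random sequence in this sense is easy to exhibit. A minimal Python sketch — the multiplier and increment are standard textbook LCG constants, used purely for illustration — of a fully deterministic algorithm whose output nonetheless shares the gross observable characteristics of a 'random' physical sequence:]

```python
# A linear congruential generator: entirely deterministic (re-running
# with the same seed reproduces the sequence exactly), yet its output
# looks statistically unremarkable at a glance.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Yield n pseudo-random values in [0, 1)."""
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        yield x / m

sample = list(lcg(seed=42, n=10000))
print(sum(sample) / len(sample))   # near 0.5, as a uniform source would be
```

[Ed.: since we know the algorithm and seed, every term is reproducible — by Silber's usage the sequence is PSEUDO-random, however 'random' it looks.]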

gpmenos@phoenix.Princeton.EDU (G. Philippe Menos) (05/27/89)

In article <1862@hub.ucsb.edu> silber@sbphy.ucsb.edu writes:
>...If we know that the sequence
>was generated by an 'algorithm', then it is PSEUDO-random.  However,

Is this a cop-out?  Why say pseudo-random when we know it's not even
close to random -- that randomness cannot even be achieved?

>the sequence corresponding to the motivating physical phenomenon (whose
>pattern cannot be generated by any algorithm known to us) is
>of a different epistemological order, viz. 'RANDOM'. 

First, this seems like a bit of a leap.  The fact of a different level
of analysis does not in itself justify the assumption of randomness,
especially if you're correct in your later statement about the realm of
partial knowledge.

Second, I'm not sure what you mean by "motivating physical phenomenon."
Are you alluding to some physical basis for consciousness, which has
roots in random behaviour?  Or is this an allusion to Neo-Darwinism?
I'm in a fog;  but that's no doubt my fault.

Still, how could we think consecutively, if the mind and its evolution
were not guided by some algorithm that we have yet to discern;  an
algorithm that might even allow for randomness and non-rational forms of
knowledge... But an algorithm nevertheless.  That is, order and law, at
the foundation.

> We can never
>transcend the realm of partial knowledge, and even if we believe that
>on some ultimate scale everything is pseudo-random, we can't prove it.

If we are doomed to partial knowledge, why take any position at all, as
is implied in the continued usage of the term "pseudo-random"?

Actually, even our partial knowledge seems to point always to the
underlying order and law that is the basis of any functioning system,
whether a machine, a human, or a universe.

Here's an interesting story... (I think)...  In 1967, a few
mathematicians and biologists were chatting over a picnic lunch
organised by Victor Weisskopf, prof. of physics at MIT.  A "weird"
discussion took place as the conversation turned to the subject of
evolution by natural selection.  The mathematicians were stunned by
the optimism of the evolutionists about what could be achieved by
chance.  The wide rift between the participants led them to organise a
conference on "Mathematical Challenges to the Neo-Darwinian Theory of
Evolution"...(skip to the conference)...  which opened with a paper by
Murray Eden, Prof. of Electrical Engineering at MIT, entitled "The
Inadequacy of Neo-Darwinian Evolution as a Scientific Theory".  Eden
showed that if it required a mere six mutations to bring about an
adaptive change, this would occur by chance only once in a billion
years --while, if two dozen genes were involved, it would require
10,000,000,000 years, which is much longer than the age of the earth.
(See Gordon R. Taylor's "The Great Evolution Mystery").  "Since
evolution does occur and has occurred, something more than chance
mutation must be involved."
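[Ed.: the arithmetic behind arguments of this kind is simple to reproduce in toy form. A hedged Python sketch — the per-locus probability and generation rate below are invented for illustration, not Eden's figures, and the model's contested assumption is that all k mutations must co-occur:]

```python
# Toy model: if each required mutation independently occurs with
# probability p per generation, all k occur together with probability
# p**k, so the expected wait is 1 / p**k generations.
p = 1e-3            # hypothetical per-locus mutation probability
gens_per_year = 1   # hypothetical generations per year

for k in (2, 6, 24):
    wait_years = (1 / p**k) / gens_per_year
    print(f"{k:2d} mutations -> ~{wait_years:.1e} years")
```

[Ed.: the waiting time grows geometrically in k, which is the force of the argument; whether the co-occurrence assumption is fair to Neo-Darwinism is precisely what the conference disputed.]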

With all best wishes,
-Phil

cs_bob@gsbacd.uchicago.edu (05/27/89)

>Here's an interesting story... (I think)...  In 1967, a few
>mathematicians and biologists were chatting over a picnic lunch
>organised by Victor Weisskopf, prof. of physics at MIT.  A "weird"
>discussion took place as the conversation turned to the subject of
>evolution by natural selection.  The mathematicians were stunned by
>the optimism of the evolutionists about what could be achieved by
>chance.  The wide rift between the participants led them to organise a
>conference on "Mathematical Challenges to the Neo-Darwinian Theory of
>Evolution"...(skip to the conference)...  which opened with a paper by
>Murray Eden, Prof. of Electrical Engineering at MIT, entitled "The
>Inadequacy of Neo-Darwinian Evolution as a Scientific Theory".  Eden
>showed that if it required a mere six mutations to bring about an
>adaptive change, this would occur by chance only once in a billion
>years --while, if two dozen genes were involved, it would require
>10,000,000,000 years, which is much longer than the age of the earth.
>(See Gordon R. Taylor's "The Great Evolution Mystery").  "Since
>evolution does occur and has occurred, something more than chance
>mutation must be involved."
> 
This IS an interesting story, and it shouldn't surprise anyone who's
ever been exposed to evolutionary biology, but you don't know what you're 
up against. You see, the counter to this argument is, "Yes, but if there
are 10,000,000,000 possible worlds (that is, planets) where life could
have evolved, then the chance it would have evolved somewhere is very great."
This from people who call themselves scientists. Essentially the argument is
"yes, the chances are slim, but they are non-zero, so in a very large
universe over a very large space of time, life was bound to emerge sometime,
purely by accident." The adherents to this version of Darwinism are often as
reluctant to give up their world view as any creationist. I don't know why,
but any suggestion that evolution is active, as opposed to passive
(known as Lamarckism, after the person credited with first proposing
the possibility), is treated as utter heresy.

Certainly Lamarck's hypothesis, in which he used giraffes as a primary
example, is naive, but why are we expected to accept this "very, very
slim possibility" argument seriously?  What is the advantage of a
"passive" view of evolution over the "active" alternative?

R.Kohout

cik@l.cc.purdue.edu (Herman Rubin) (05/27/89)

In article <1862@hub.ucsb.edu>, silber@sbphy.ucsb.edu writes:
> Let 'random sequence' := a sequence whose observable characteristics
> correspond to the observable characteristics of a sequence of physical
> events which we identify as 'random'.  If we KNOW that the sequence
> was generated by an 'algorithm', then it is PSEUDO-random.  However,
> the sequence corresponding to the motivating physical phenomenon (whose
> pattern cannot be generated by any algorithm known to us) is
> of a different epistemological order, viz. 'RANDOM'.

If there is a computable algorithm producing the sequence, it is NOT
random.  This algorithm could even have inputs of preceding physical
variables.  The test that the sequence is produced by the algorithm
is the proof that the sequence is NOT random.  A random sequence has
the property that there is no non-prescient test which the sequence
will fail with positive probability.
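[Ed.: one such non-prescient test can be made concrete. A minimal Python sketch of a monobit frequency check; the 3-sigma rejection threshold is an arbitrary illustrative choice:]

```python
import math

def monobit_ok(bits, z_max=3.0):
    """Frequency test: in n fair-coin flips the count of ones has mean
    n/2 and standard deviation sqrt(n)/2; reject sequences whose count
    lies more than z_max standard deviations from the mean."""
    n = len(bits)
    z = abs(sum(bits) - n / 2) / (math.sqrt(n) / 2)
    return z <= z_max

alternating = [i % 2 for i in range(1000)]   # 0,1,0,1,... perfectly patterned
biased = [1] * 700 + [0] * 300               # 70% ones
print(monobit_ok(alternating), monobit_ok(biased))   # True False
```

[Ed.: the biased stream fails, but the perfectly alternating one passes — which is why no single criterion suffices, and why a random sequence is characterized by surviving every such test.]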
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

mkamer@cs.columbia.edu (Matthew Kamerman) (06/01/89)

I have been following the discussion of free will vs. determinism, 
randomness vs. hidden order, etc., with some interest and I have a 
question for those involved.

Might these distinctions be a matter of degree rather than kind?

For instance, might free will be those actions for which the simplest
predictive model CAPABLE OF BEING RUN TO COMPLETION IN A TIME
PROPORTIONATE TO THE SCALE OF ACTIVITY (busy beavers aren't acceptable
models for real phenomena!) requires an amount of information of nearly
the same magnitude as that contained in the system itself?

Might a pseudo-random sequence effectively achieve true randomness
if in addition to meeting all standard distribution criteria, its
period of repetition is beyond the capacity of the fastest available 
computing device running for a period comparable to the age of the
universe?

We exist in a finite domain which constrains both the rate at which
information propagates (relativity) and the accuracy with which any
object can be modeled (quantum mechanics).  In such a domain, there
are practical limits on computability far more severe than those
usually stipulated by Complexity Theorists.  The practical limitations
of physical systems may be sufficient to transform distinctions of
degree into usefully symbolizable distinctions of kind.

cik@l.cc.purdue.edu (Herman Rubin) (06/01/89)

In article <228@cs.columbia.edu>, mkamer@cs.columbia.edu (Matthew Kamerman) writes:

			.........................

> Might a pseudo-random sequence effectively achieve true randomness
> if in addition to meeting all standard distribution criteria, its
> period of repetition is beyond the capacity of the fastest available 
> computing device running for a period comparable to the age of the
> universe?

As far as the period of repetition goes, this is no problem at all.
It is easy to construct pseudo-random sequences with arbitrary periods
at reasonable computational cost, and XORing one of these with your
favorite candidate should take care of that problem.
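[Ed.: the XOR construction is easy to sketch. A toy Python example — the tiny periods 7 and 11 are chosen only so the repetition is visible; real generators would use astronomically large coprime periods — showing that XORing two streams with coprime periods yields a stream whose period is their lcm:]

```python
from math import lcm

def seq(n, p1=7, p2=11):
    """XOR of two toy periodic integer streams with coprime periods."""
    return [(i % p1) ^ (i % p2) for i in range(n)]

def period(s):
    """Smallest lag p at which the sampled stream repeats throughout."""
    for p in range(1, len(s)):
        if all(s[i] == s[i + p] for i in range(len(s) - p)):
            return p

combined = seq(4 * 77)                # several full cycles of the stream
print(period(combined), lcm(7, 11))   # 77 77
```

[Ed.: applied at scale — XOR a long-period auxiliary generator into the candidate stream — the combined period can be pushed past any bound on available computation, which is why period alone is "no problem at all."]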

But your use of "standard" is improper.  There are so many reasonable
randomness criteria that I would not believe that enough were used.
I have done a simulation with approximately 25,000 bits per trial,
and I can easily envision 1,000,000 bits per trial.  What assurance
can you give me that the standard tests will prevent erroneous results?

One test the sequence will fail is the test that it was produced in
the way it was.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)