[net.religion] Quantum mechanics and free will...

speaker@umcp-cs.UUCP (03/06/84)

	As part of a discussion of free will, Dave Norris writes:

	  A cannon ball doesn't have the ability to change course, even
	  if it had the free will to do so.  

	According to quantum mechanical findings, a cannon ball can
	change course.

Well, I'm no expert in Quantum Mechanics... but I don't think
that you are going to find any non-Newtonian behavior anywhere
above the sub-atomic level.

	There is a sense in which automata capable of random behaviour
	can do more than strictly deterministic automata -- a fact of
	considerable value to network designers, who use routing
	programs based on such methods.  So, I pose these questions:

   1. Can we construct systems which will not work without randomness?

Many systems in existence today work without such randomness.
Systems that do allow such randomness are, by definition,
not deterministic.  Remember... even pseudo-randomness is still
deterministic (see the sketch at the end of this article).

   2. Do such things exist in nature?

Digital computers.

   3. Does what we call "free will" actually consist of
   randomness?

Some people believe that free will actually occurs below
the chemical reactions... at the sub-atomic level, where the
randomness of quantum electrodynamics can do its thing.
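
To make the earlier point about pseudo-randomness concrete: a seeded
generator is just a recurrence, so the same seed reproduces the same
"random" sequence on every run.  A minimal Python sketch (the constants
are an arbitrary textbook choice; nothing hinges on them):

    # Minimal linear congruential generator: looks random, is deterministic.
    def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
        """Return n pseudo-random integers derived from the given seed."""
        x, out = seed, []
        for _ in range(n):
            x = (a * x + c) % m
            out.append(x)
        return out

    # Same seed, same "random" sequence, every single run.
    assert lcg(42, 5) == lcg(42, 5)
    print(lcg(42, 5))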

							- Speaker

rpw3@fortune.UUCP (03/07/84)

Uhh...

Digital computers are not all that deterministic. Get a local hardware
wizard to explain about Heisenberg Uncertainty and the "metastable
problem" in synchronizers. (Synchronizers are the gadgets that have
to make decisions as to whether something happened or not.) Quantum
effects show up in many places in modern semiconductor logic. MOST
of the time they can be ignored, but...
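
As a rough Python sketch of why "MOST of the time" is the best one can
say, here is the usual back-of-the-envelope failure model for a
synchronizer.  All parameter values are assumed, not measurements of any
particular part; the point is only that the failure rate falls off
exponentially with the settling time you allow, but it never reaches zero.

    import math

    # MTBF = exp(t_r / tau) / (T0 * f_clk * f_data)
    #   t_r    time allowed for the flop to resolve before it is sampled
    #   tau    device resolution time constant
    #   T0     effective metastability window
    # All numbers below are made-up but plausible, for illustration only.
    def synchronizer_mtbf(t_r, tau=200e-12, T0=100e-12, f_clk=10e6, f_data=1e6):
        """Mean time between synchronization failures, in seconds."""
        return math.exp(t_r / tau) / (T0 * f_clk * f_data)

    for t_r in (1e-9, 5e-9, 10e-9):
        print(f"settling time {t_r * 1e9:4.0f} ns -> MTBF {synchronizer_mtbf(t_r):.3g} s")

With these invented numbers, one nanosecond of settling time gives a
failure every fraction of a second, five nanoseconds gives one every
couple of years, and ten nanoseconds gives one roughly never; "roughly"
is the operative word, since the probability becomes tiny, not zero.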

Given that neurons have to make the same sorts of decisions as computer
synchronizers (whether to fire or not), it seems that we should not
be surprised to discover nondeterminacy in human "consciousness"
(whatever that is!). Heisenberg + decisionmaking => SchroedingerCats

(And I am NOT talking about the "alpha particle" problem. That's still
another hassle.)

Rob Warnock

UUCP:	{sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:	(415)595-8444
USPS:	Fortune Systems Corp, 101 Twin Dolphin Drive, Redwood City, CA 94065

speaker@umcp-cs.UUCP (03/09/84)

	Digital computers are not all that deterministic. Get a
	local hardware wizard to explain about Heisenberg Uncertainty
	and the "metastable problem" in synchronizers. (Synchronizers
	are the gadgets that have to make decisions as to whether
	something happened or not.) Quantum effects show up in many
	places in modern semiconductor logic. MOST of the time they
	can be ignored, but...

Ah!  Good!  This is very true.  Tunnel diodes can only be explained
via quantum mechanics.  A cannon ball, however, is a very simple
device; nowhere near as abstruse as a tunnel diode.

There are times when Newtonian mechanics seems to break down
altogether.  In some cases where frictional forces are involved,
traditional theory simply cannot describe what is observed.
I view this not as the introduction of random behavior or
uncertainty, but as the inadequacy of the theory to describe fact....

Besides, hardware doesn't count!  I did specify a STABLE computing
environment.

	Given that neurons have to make the same sorts of decisions
	as computer synchronizers (whether to fire or not), it
	seems that we should not be surprised to discover nondeterminacy
	in human "consciousness" (whatever that is!). Heisenberg
	+ decisionmaking => SchroedingerCats

Yes.  Although remember, making a high-level conscious decision
is MUCH different from one or two or a thousand random neurons
firing.  That's like saying you can get a computer program to make
random decisions by pouring hot tea on the CPU.
-- 

				Debbie does Daleks
				- Speaker

dap@ihopa.UUCP (afsd) (03/10/84)

All this talk of non-determinism vs. determinism doesn't really have much
bearing on free will vs. lack of same.  So what if atoms happen to disintegrate
at random rather than due to some predetermined cause?  It still isn't due to
"free will".  It happens by pure chance (as opposed to the kind of chance which
one gets when rolling dice).  So you have your choice.  Either it happens
deterministically or it happens by pure chance.  I don't see how the lack of
determinacy "proves" free will.

Darrell Plank
ihnp4!ihopa!dap

rpw3@fortune.UUCP (03/11/84)

Let's see:

	"STABLE computing environment"?
	
Maybe you misunderstood or I wasn't clear enough. The uncertainty problem
with synchronizers is NOT a matter of bad design; it is inherent in ANY
digital system that must communicate with an "outside" (asynchronous)
world.  There is no way (even theoretically) to avoid it.

	"making random decisions...hot tea on the CPU".

The problem is not that a "random" decision gets made. A random decision
could be tolerated and is in fact expected (that's why the synchronizer
is there). It's that NO decisions (or sometimes MULTIPLE "decisions") get
made, and the logic then does any number of non-deterministic things, like
execute NO code, multiple codes, mixtures of codes, or worse. The microscopic
quantum effects can and do cause macroscopic system crashes.
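
What that looks like from the outside can be mocked up with a toy Python
model (this is not a circuit simulation; every number in it is invented):
one flop goes metastable, and two downstream gates that read it at
slightly different moments can come away with contradictory values, which
is the "MULTIPLE decisions" case.

    import random

    TAU = 1.0  # resolution time constant, arbitrary units

    def run_once(read_times=(0.8, 1.2)):
        """One metastable event; did two downstream readers disagree?"""
        settle = random.expovariate(1.0 / TAU)   # when the node resolves
        final = random.randint(0, 1)             # what it resolves to
        # A reader that looks before the node settles effectively gets
        # a coin flip instead of the eventual value.
        reads = [final if t >= settle else random.randint(0, 1)
                 for t in read_times]
        return len(set(reads)) > 1

    trials = 100_000
    print(sum(run_once() for _ in range(trials)), "disagreements in", trials, "trials")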

As far as human thought goes, again I am not talking about "random", but
"non-deterministic" (which is why I mentioned the "S. Cat"). Since neurons
are subject to the same problems as any other synchronizers, no matter
how complete our model of the brain becomes, we will not be able to
predict its behaviour completely, since the completeness of our model
is in fact limited by quantum effects. Such effects ARE significant at the
macro level wherever binary decisions (neuron firings) are made from either
asynchronous digital inputs (other neurons) or analog inputs (perceptions,
hormone levels, sugar level, etc.).

As was so nicely pointed out in an editorial in Analog magazine (April 1984),
the most one can ever hope for is a statistical description of likelihoods.
The actual behaviour of an individual can only be discovered by examination
(observation).  ("O.k., cat, time to open the box!")
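
The same point in miniature, as a Python toy with an arbitrary probability
and a hypothetical individual #12345: the fraction of a large population
doing something is predictable to several digits, while the only way to
learn what any one member did is to look.

    import random

    P, N = 0.3, 100_000                    # arbitrary probability and population size
    population = [random.random() < P for _ in range(N)]

    print("predicted fraction:", P)
    print("observed fraction :", sum(population) / N)   # very close to P
    print("individual #12345 :", population[12345])     # only observation tells you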

(This article goes into considerable detail on the appropriate and
inappropriate use of statistical techniques when dealing with humans.
While most of the article is concerned with classical mechanics [human
populations vs. "ideal gases"], the author does touch on the question of
quantum effects towards the end.)

Rob Warnock

UUCP:	{sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD:	(415)595-8444
USPS:	Fortune Systems Corp, 101 Twin Dolphin Drive, Redwood City, CA 94065

speaker@umcp-cs.UUCP (03/15/84)

		"STABLE computing environment"?
		
	Maybe you misunderstood or I wasn't clear enough. The uncertainty problem
	with synchronizers is NOT a matter of bad design, it is inherent in ANY
	digital system that must communicate with an "outside" (asynchronous)
	world.

I didn't say it WAS bad design!

My point was that the functionality of a computer is totally deterministic,
because the underlying software is deterministic.  I specified a STABLE
computing environment to exclude this hardware-oriented random-crash stuff.
Computing devices (in their cleanest sense) DO NOT rely on randomness.

	There is no way (even theoretically) to avoid it.

Turing machines are clearly deterministic and do not
rely on randomness for their operation.  You'll also have a hard
time convincing us that a DFA (a computing model) is in any way
non-deterministic.  And not only devices on paper... but devices
that function in everyday life.  You will NOT find a synchronizer
in the definition of the Turing Machine.
Nor will you find a synchronizer in a cash register.
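
For what it's worth, here is a DFA written out as a bare transition table
in Python (the example language, bit strings with an even number of 1s,
is arbitrary).  The next state is a pure function of the current state and
the input symbol, so the same input yields the same verdict on every run,
on any machine that executes it faithfully.

    # Arbitrary example language: bit strings with an even number of 1s.
    DELTA = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd",  "0"): "odd",  ("odd",  "1"): "even",
    }
    START, ACCEPT = "even", {"even"}

    def accepts(s):
        state = START
        for ch in s:
            state = DELTA[(state, ch)]   # next state fully determined
        return state in ACCEPT

    # Same input, same answer, every run.
    assert accepts("1101") == accepts("1101")
    print(accepts("1101"), accepts("1100"))   # False True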

You claim that ALL digital devices rely on randomness, but does your
hand (the first digital computer) rely on randomness or synchronizers?
Of course not... because the implementation is far above the quantum
level.

	"making random decisions...hot tea on the CPU".

	The problem is not that a "random" decision gets made. A random decision
	could be tolerated and is in fact expected (that's why the synchronizer
	is there). It's that NO decisions (or sometimes MULTIPLE "decisions") get
	made, and the logic then does any number of non-deterministic things, like
	execute NO code, multiple codes, mixtures of codes, or worse. The microscopic
	quantum effects can and do cause macroscopic system crashes.

No, no, no... my point is that functional decisions cannot be made by introducing
randomness into the implementation (i.e. hardware).

	As far as human thought goes, again I am not talking about "random", but
	"non-deterministic" (which is why I mentioned the "S. Cat"). Since neurons
	are subject to the same problems as any other synchronizers, no matter
	how complete our model of the brain becomes, we will not be able to
	predict its behaviour completely, since the completeness of our model
	is in fact limited by quantum effects. Such effects ARE significant at the
	macro level wherever binary decisions (neuron firings) are made from either
	asynchronous digital inputs (other neurons) or analog inputs (perceptions,
	hormone levels, sugar level, etc.).

This says that neurons (and other objects) will display non-deterministic
behavior because they are subject to non-deterministic quantum events.
That's like saying the cannon ball will "fall up" once every thousand
years or so.

Neurons are not similar to semiconductor devices,
since the behavior of a semiconductor depends more directly on the
atomic structure of its crystal.  You might very well expect
to see some aggregate quantum effects in that kind of crystal.

Neurons involve more complex chemical reactions... not processes related
only to the atomic structure of the material.  Small-scale
quantum effects will probably be totally overshadowed by the
larger chemical reactions.

Besides... I AGREED with you on that point (assuming that neurons
DO fire non-deterministically).
-- 

				Debbie does Daleks
				- Speaker