[comp.ai] free will

channic@uiucdcsm.cs.uiuc.edu (04/19/88)

I can't justify the proposition that scientific endeavors grouped
under the name "AI" SHOULD NOT IGNORE issues of free will, mind-brain,
other minds, etc.  If these issues are ignored, however, I would
strongly oppose the use of "intelligence" as being descriptive
of the work.  Is it fair to claim work in that direction when
fundamental issues regarding such a goal are unresolved (if not
unresolvable)?  If this is the name of the field, shouldn't the
field at least be able to define what it is working towards? 
I personally cannot talk about intelligence without concepts such
as mind, thoughts, free will, consciousness, etc.  If we, as AI
researchers make no progress whatsoever in clarifying these issues,
then we should at least be honest with ourselves and society, and find a
new title for our efforts.  Actually the slight modification,
"Not Really Intelligence" would be more than suitable.


Tom Channic
Dept. of CS
Univ. of Illinois
channic@uiucdcs.uiuc.edu
{ihnp4|decvax}!pur-ee!uiucdcs!channic

dvm@yale.UUCP (Drew Mcdermott) (05/09/88)

My contribution to the free-will discussion:

Suppose we have a robot that models the world temporally, and uses
its model to predict what will happen (and possibly for other purposes).
It uses Qualitative Physics or circumscription, or, most likely, various
undiscovered methods, to generate predictions.  Now suppose it is in a
situation that includes various objects, including an object it calls R,
which it knows denotes itself.  For concreteness, assume it believes
a situation to obtain in which R is standing next to B, a bomb with a
lit fuse.  It runs its causal model, and predicts that B will explode,
and destroy R.

Well, actually it should not make this prediction, because R will be
destroyed only if it doesn't roll away quickly.  So, what will R do?  The
robot could apply various devices for making causal prediction, but they
will all come up against the fact that some of the causal antecedents of R's
behavior *are situated in the very causal analysis box* that is trying to
analyze them.  The robot might believe that R is a robot, and hence that
a good way to predict R's behavior is to simulate it on a faster CPU, but
this strategy will be in vain, because this particular robot is itself.
No matter how fast it simulates R, at some point it will reach the point
where R looks for a faster CPU, and it won't be able to do that simulation
fast enough.  Or it might try inspecting R's listing, but eventually it
will come to the part of the listing that says "inspect R's listing."
The strongest conclusion it can reach is that "If R doesn't roll away,
it will be destroyed; if it does roll away, it won't be."  And then of
course this conclusion causes R to roll away.
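
To make the regress concrete: a minimal sketch in Python (purely
illustrative -- the class, the situation string, and the depth cutoff are
all invented for the example).  The agent's only model of R is R's own
decision procedure, so predicting R means running R predicting R, without
end:

    # Illustrative sketch: predicting R by simulating R never bottoms out,
    # because R's decision procedure begins by predicting what R will do.
    def predict(agent, situation, depth=0):
        """Predict an agent's action by running its decision procedure."""
        return agent.decide(situation, depth)

    class Robot:
        def decide(self, situation, depth):
            if depth > 20:     # demo cutoff; the real regress never ends
                return "...still simulating the start of a simulation"
            # to choose, R first consults its prediction of what R will do
            return predict(self, situation, depth + 1)

    print(Robot().decide("R standing next to bomb B with lit fuse", 0))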

Hence any system that is sophisticated enough to model situations that its own
physical realization takes part in must flag the symbol describing that
realization as a singularity with respect to causality.  There is simply
no point in trying to think about that part of the universe using causal
models.  The part so infected actually has fuzzy boundaries.  If R is
standing next to a precious art object, the art object's motion is also
subject to the singularity (since R might decide to pick it up before
fleeing).  For that matter, B might be involved (R could throw it), or
it might not be, if the reasoner can convince itself that attempts to
move B would not work.  But all this is a digression.  The basic point
is that robots with this kind of structure simply can't help but think of
themselves as immune from causality in this sense.  I don't mean that they
must understand this argument, but that evolution must make sure that their
causal-modeling system includes the "exempt" flag on the symbols denoting
themselves.  Even after a reasoner has become sophisticated about physical
causality, his model of situations involving himself continues to have this
feature.  That's why the idea of free will is so compelling.  It has nothing
to do with the sort of defense mechanism that Minsky has proposed.

I would rather not phrase the conclusion as "People don't really have
free will," but rather as "Free will has turned out to be possession of
this kind of causal modeler."  So people and some mammals really do have
free will.  It's just not as mysterious as one might think.

                       -- Drew McDermott

tw@aiva.ed.ac.uk (Toby Walsh) (05/10/88)

Drew McDermott proposes a "cute" example of a robot R next to a bomb B, 
thinking about (thinking about (thinking about ..... its thinking) ....));
to avoid this infinite regress,  he proposes "free will" = "ability to 
identify one's special status within one's model of the universe".

This example immediately suggests to me the analogy with meta-level 
reasoning; reasoning about reasoning occurs at the meta-level, and
reasoning about this meta-level reasoning at the meta-meta-level, ....
To escape this infinite regress of meta-meta-.... levels, we need to
introduce the idea of (self-)reflection, where we reason about the
meta^n-level in the meta^n-level. The notion of identifying one's 
special status within the model then becomes the analogous concept
of naming between object- and meta-levels. 

But does this example/analogy tell us more about the annoying issue of free
will?  No, I believe not.  It has much to say about consciousness but
doesn't directly address what it is to have goals, desires, what it is
to MAKE a decision when confronted with choice. Nevertheless, meta-level
reasoning is an interesting model within which to formulate these concepts.


-------------------------------------------------------------------------------
Toby Walsh                      JANET: T.Walsh@uk.ac.edinburgh
Dept of AI                      ARPA:  T.Walsh%uk.ac.edinburgh@nss.cs.ucl.ac.uk
Edinburgh University            Tel:   (=44)-31-225-7774 ext 235
80 South Bridge, Edinburgh EH1 1HN  
-------------------------------------------------------------------------------

rwojcik@bcsaic.UUCP (Rick Wojcik) (05/11/88)

In article <28705@yale-celray.yale.UUCP> dvm@yale.UUCP (Drew Mcdermott) writes:
DM> Hence any system that is sophisticated enough to model situations that its own
DM> physical realization takes part in must flag the symbol describing that
DM> realization as a singularity with respect to causality.  There is simply
DM> no point in trying to think about that part of the universe using causal
DM> models...  

I like your metaphor of 'a singularity with respect to causality'.  It
neatly captures the concept of the Agent case role in linguistic theory.
But it goes beyond modelling one's own physical realization.  Chuck
Fillmore used to teach (in the heyday of Case Grammar) that simple clause
structure admits only two overtly marked causers--the Agent and the
Instrument.  This is a fairly universal fact about language (the only
exception being languages with 'double agent' verbs, where the verb stem
can have an affix denoting indirect causation).  Agents refer to verbal
arguments that are 'ultimate causers' and Instruments refer to those that
are 'immediate causers'.  He has always been quite explicit in his belief
that the human mind imposes a kind of filter on the way we can view chains
of causally related events--at least when we try to express them in
language.   One of the practical side effects of the belief in free will
is that it provides us with a means of chunking chains of causation up
into conceptual units.
-- 
Rick Wojcik   csnet:  rwojcik@boeing.com	   
              uucp:   uw-beaver!ssc-vax!bcsaic!rwojcik 
address:  P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone:    206-865-3844

stewart@sunybcs.uucp (Norman R. Stewart) (05/15/88)

>From: paulg@iisat.UUCP (Paul Gauthier) writes:
> I'm sorry, but there is no free will. Every one of us is bound by the
>laws of physics. No one can lift a 2000 tonne block of concrete with his
>bare hands. No one can do the impossible, and in this sense none of us have
>free will.
 
     I don't believe we're concerned with what we are capable of doing,
but rather our capacity to desire to do it.  Free will is a mental, not
a physical phenomenon.  What we're concerned with is whether the brain (nervous
system, organism, aggregation of organisms and objects) is just so many
atoms (sub-atomic particles?, sub-sub-atomic particles) bouncing around
according to the laws of physics (note: in a closed system), and behavior
simply the unalterable manifestation of the movement of these particles.





Norman R. Stewart Jr.             *
C.S. Grad - SUNYAB                *  If you want peace, demand justice.
internet: stewart@cs.buffalo.edu  *                  (of unknown origin)  
bitnet:   stewart@sunybcs.bitnet  * 

dvm@yale.UUCP (Drew Mcdermott) (05/30/88)

More on the self-modeling theory of free will:

Since no one seems to have understood my position on this topic,
I will run the risk that no one cares about my position, and try
to clarify.

Sometimes parties to this discussion talk as if "free will" were
a new kind of force in nature.  (As when Biep Durieux proposed that
free will might explain probability rather than vice versa.)  I am
sure I misrepresent the position; the word "force" is surely wrong
here (as is the word "new").  The misrepresentation is unavoidable;
this kind of dualism is simply not a live option for me.  Nor can
I see why it needs to be a perennially live option on an AI discussion 
bulletin board.

So, as I suggested earlier, let's focus on the question of free will
within the framework of Artificial Intelligence.  And here it
seems to me the question is, How would we tell an agent with free
will from an agent without it?  Two major strands of the discussion
seem completely irrelevant from this standpoint:

  (1) Determinism vs. randomness.  The world is almost
certainly not deterministic, according to quantum mechanics.  Quantum
mechanics may be false, but Newtonian mechanics is certainly false,
so the evidence that the world is deterministic is negligible.
(Unless the Everett-Wheeler interpretation of quantum mechanics is true, 
in which case the world is a really bizarre place.)  So, if determinism
is all that's bothering you, you can relax.  Actually, I think what's
really bothering people is the possibility of knowledge (traditionally,
divine knowledge) of the outcomes of their future decisions, which has 
nothing to do with determinism.

  (2) My introspections about my ability to control my thoughts or 
whatnot.  There is no point in basing the discussion on such evidence,
until we have a theory of what conscious thoughts are.  Such a theory
must itself start from the outside, looking at a computational agent
in the world and explaining what it means for it to have conscious
thoughts.  That's a fascinating topic, but I think we can solve the
free will problem with less trouble.

  So, what makes a system free?  To the primitive mind, free decisions
are ubiquitous.  A tornado decides to blow my house down; it is worth
trying to influence its decision with various rewards or threats.
But nowadays we know that the concept of decision is just out of place
in reasoning about tornados.  The proper concepts are causal; if we
can identify enough relevant antecedent factors, we can predict (and
perhaps someday control) the tornado's actions.  Quantum mechanics
and chaos set limits to how finely we can predict, but that is
irrelevant.

  Now we turn to people.  Here it seems as if there is no need to do
away with the idea of decision, since people are surely the paradigmatic
deciders.  But perhaps that attitude is "unscientific."  Perhaps the
behaviorists are right, and the way we think about thunderstorms is
the right way to think about people.  If that's the actual truth, then
we should be tough-minded and acknowledge it.

  It is *not* the truth.  Freedom gets its toehold from the fact that
it is impossible for an agent to think of itself in terms of causality.  
Contrast my original bomb scenario with this one:  

   R sees C wander into the blast area, and go up to the bomb.  R knows
   that C knows all about bombs, and R knows that C has plenty of time to
   save itself, so R decides to do nothing.  (Assume that preventing the
   destruction of other robots gets big points in R's utility function.)

In this case, R is reasoning about an agent other than itself.  Its problem
is to deduce what C will actually do, and what C will actually suffer.  The
conclusion is that C will prosper, so R need do nothing.  It would
be completely inappropriate for R to reason this way about itself.  Suppose
R comes to realize that it is standing next to a bomb, and it reasons as
follows:

   R knows all about bombs, and has plenty of time to save itself, so I need
   do nothing.

Its reasoning is fallacious, because it is of the wrong kind.  R is not being
called on to deduce what R will do, but to be a part of the causal fabric that
determines what R will do, in other words: to make a decision.  It is certainly
possible for a robot to engage in a reasoning pattern of this faulty kind, but 
only by pretending to make a decision, inferring that the decision will be made 
like that, and then not carrying it out (and thus making the conclusion of the
inference false).  Of course, such a process is not that unusual; it is called
"weakness of the will" by philosophers.  But it is not the sort of thing one
would be tempted to call an actual decision.  An actual decision is a process
of comparative evaluation of alternatives, in a context
where the outcome of the comparison will actually govern behavior.  (A robot
cannot decide to stop falling off a cliff, and an alcoholic or compulsive may
not actually make decisions about whether to cease his self-destructive
behavior.)

   This scenario is one way for a robot to get causality wrong when reasoning
about itself, but there is a more fundamental way, and that is to just not 
notice that R is a decision maker at all.  With this misperception, R could 
tally its sources of knowledge about all influences on R's behavior, but it 
would miss the most important one, namely, the ongoing alternative-evaluation 
process. Of course, there are circumstances in which this process is in fact not
important.  If R is bound and gagged and floating down a river, then it might
as well meditate on hydrodynamics, and not work on a decision.  But most of the
time the decision-making process of the robot is actually one of the causal
antecedents of its future.  And hence, to repeat the central idea, *there is
no point in trying to think causally about oneself while making a decision that
is actually part of the causal chain.  Any system that realizes this has free
will.*

  This theory accounts for why an agent must think of itself as outside the
causal order of things when making a decision.  However, it need not think
of other agents this way.  An agent can perfectly well think of other agents' 
behavior as caused or uncaused to the same degree the behavior of a 
thunderstorm is caused or uncaused.  There is a difference: One of the best 
ways to cause a decision-making agent to do something is to give him a good 
reason to do it, whereas this strategy won't work with thunderstorms.  Hence, 
an agent will do well to sort other systems into two categories, those that 
make free decisions and those that don't, and deal with them differently.  

  By the way, once a decision is made there is no problem with its maker 
thinking of it purely causally, in exactly the same way it thinks about 
other decision makers.  An agent can in principle see *all* of the causal 
factors going into its own past decisions, although  in practice the events 
of the past will be too random or obscure for an exhaustive analysis.  It is 
surely not dehumanizing to be able to bemoan that if only such-and-such had 
been brought to my attention, I would have decided otherwise than I did, but, 
since it wasn't, I was led inexorably to a wrong decision.

 Now let me deal with various objections: 

   (1) Some people said I had neglected the ability of computers to do reflexive
meta-reasoning.  As usual, the mention of meta-reasoning makes my head swim, but
I shall try to respond.  Meta-reasoning can mean almost anything, but it usually
means escaping from some confining deductive system in order to reason about 
what that system ought to conclude.  If this is valuable, there is no reason 
not to use it.  But my picture is of a robot faced with the possibility of 
reasoning about itself as a physical system, which is in general a bad thing to 
do.  The purpose of causal-exemption flagging is to shut pointless reasoning 
down, meta or otherwise.

So, when O'Keefe says:

    So the mere possibility of an agent having to appear to simulate itself 
    simulating itself ... doesn't show that unbounded resources would be 
    required:  we need to know more about the nature of the model and the 
    simulation process to show that.

I am at a loss.  Any system can simulate itself with no trouble.  It could 
go over past or future decisions with a fine-tooth comb, if it wanted to.  
What's pointless is trying to simulate the present period of time.  Is an 
argument needed here?  Draw a mental picture: The robot starts to simulate, 
and finds itself simulating ...  the start of a simulation.  What on earth 
could it mean for a system to figure out what it's doing by simulating itself?

  (2) Free will seems on this theory to have little to do with consciousness 
or values.  Indeed it does not.  I think a system could be free and not be 
conscious at all; and it could certainly be free and not be moral.

  What is the minimal level of free will?  Consider a system for scheduling the
movement of goods into and out of a warehouse.  It has to synchronize its 
shipments with those of other agents, and let us suppose that it is given 
those other shipments in the form of various schedules that it must just work 
around.  From its point of view, the shipments of other agents are caused, and 
its own shipments are to be selected.  Such a system has what we might call 
*rudimentary* free will.  To get full-blown free will, we have to suppose that 
the system is able to notice the discrepancy between boxes that are scheduled 
to be moved by someone else, and boxes whose movements depend on its 
decisions.  I can imagine all sorts of levels of sophistication in 
understanding (or misunderstanding) the discrepancy, but just noticing it is 
sufficient for a system to have full-blown free will.  At that point, it will 
have to realize that it and its tools (the things it moves in the warehouse) 
are exempt from causal modeling.
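
As a toy illustration of that discrepancy (hypothetical Python, with
invented schedules -- not a description of any actual system): the other
agents' shipments enter as fixed facts to be worked around, while the
system's own shipments are variables left for it to bind.

    # Others' shipments are caused -- given data to predict around;
    # the scheduler's own shipments are decision variables it binds.
    OTHERS = {9: "agent A delivers crate 12",
              11: "agent B removes crate 7"}

    def schedule_own_moves(hours, moves):
        """Choose hours for our own moves -- selected, not predicted."""
        free = [h for h in hours if h not in OTHERS]
        # the choosing itself is a causal antecedent of the schedule;
        # no causal model inside the chooser is asked to predict it
        return dict(zip(free, moves))

    print(schedule_own_moves(range(8, 18), ["ship crate 3", "ship crate 9"]))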

   (3) Andrew Gelsey has pointed out that a system might decide what to do by
means other than simulating various alternative courses of action.  For 
instance, a robot might decide how hard to hit a billiard ball by solving an 
equation for the force required.  In this case, the asymmetry appears in what 
is counted as an independent variable (i.e., the force administered).  And if 
the robot notices and appreciates the asymmetry, it is free.

   (4) David Sher has objected

      If I understand [McDermott's theory] correctly it runs like this:
      To plan one has a world model including future events.
      Since you are an element of the world then you must be in the model.
      Since the model is a model of future events then your future actions
      are in the model.
      This renders planning unnecessary.
      Thus your own actions must be excised from the model for planning to
      avoid this "singularity."

      Taken naively, this analysis would prohibit multilevel analyses such
      as is common in game theory.  A chess player could not say things like
      if he moves a6 then I will move Nc4 or Bd5 which will lead ....

The response to this misreading should be obvious.  There are two ways to
think about my future actions.  One way is to treat them as conditional actions,
begun now, and not really future actions at all.  (Cf. the notion of strategy
in game theory.)

The more interesting insight is that an agent can reason about its future 
actions as if they were those of another agent.  There is no problem with 
doing this; the future is much like the past in this respect, except we have 
less information about it.  A robot could reason at its leisure about what 
decision it would probably make if confronted with some future situation, and 
it could use an arbitrarily detailed simulation of itself to do this reasoning, 
provided it has time to run it before the decision is to be made.  But all of 
this self-prediction is independent of actually making the decision.  When 
the time comes to actually make it, the robot will find itself free again.  
It will not be bound by the results of its simulation.  This may seem like a 
non sequitur; how could a robot not faithfully execute its program the same 
way each time it is run?  There is no need to invoke randomness; the 
difference between the two runs is that the second one is in a context where 
the results of the simulation are available.  Of course, there are lots of 
situations where the decision would be made the same way both times, but all 
we require is that the second be correctly classified as a real -- free -- 
decision.

I find Sher's "fix" to my theory more dismaying:

   However we can still make the argument that Drew was making; it's just
   more subtle than the naive analysis indicates.  The way the argument 
   runs is this:
   Our world model is by its very nature a simplification of the real
   world (the real world doesn't fit in our heads).  Thus our world model
   makes imperfect predictions about the future and about consequence.  
   Our self model inside our world model shares in this imperfection. 
   Thus our self model makes inaccurate predictions about our reactions
   to events.  We perceive ourselves as having free will when our self
   model makes a wrong prediction.  

This is not at all what I meant, and seems pretty shaky on its own merits.
This theory makes an arbitrary distinction between an agent's mistaken 
predictions about itself and its mistaken predictions about other systems.
I think it's actually a theory of why we tend to attribute free will to so
many systems, including thunderstorms.  We know our freedom makes us hard to
predict, and so we attribute freedom to any system we make a wrong prediction
about.  This kind of paranoia is probably healthy until proven false.  But
the theory doesn't explain what we think free will is in the first place, or
what its explanatory force is in explaining wrong predictions.

Free will is not due to ignorance.  Imagine that the decision maker is a robot
with a very routine environment, so that it often has complete knowledge both of
its own listing and of the external sensory data it will be receiving prior to
a decision.  So it can simulate itself to any level of detail, and it might
actually do that, thinking about decisions in advance as a way of saving time 
later when the actual decision had to be made.  None of this would allow it to 
avoid making free decisions.  
                                     -- Drew McDermott

jbn@glacier.STANFORD.EDU (John B. Nagle) (05/31/88)

       Since this discussion has lost all relevance to anything anybody
is likely to actually implement in the AI field in the next twenty years
or so, could this be moved to talk.philosophy?

					John Nagle

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (05/31/88)

Edward Lasker, in his autobiographical book about his experiences as
a chess master, describes a theory and philosophical tract
by his famous namesake, Emanuel Lasker, who was world chess
champion for many years.  It concerned a hypothetical being,
the Machäide, which is so advanced and profound in its thought
that its choices have become completely constrained.  It can
discern and reject all courses of action that are not optimal,
and therefore it must.  It is so evolved that it has lost
free will.
		Greg, lee@uhccux.uhcc.hawaii.edu

ghh@thought.Princeton.EDU (Gilbert Harman) (05/31/88)

In article <17470@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
>       Since this discussion has lost all relevance to anything anybody
>is likely to actually implement in the AI field in the next twenty years
>or so, could this be moved to talk.philosophy?
>
>					John Nagle


Drew McDermott's suggestion seems highly relevant to
implementations while offering a nice approach to at least
one problem of free will.  (It seems clear that people have
been worried about a number of different things under the
name of "free will".)  How about keeping a discussion of
McDermott's approach here and moving the rest of the
discussion to talk.philosophy?

		       Gilbert Harman
                       Princeton University Cognitive Science Laboratory
	               221 Nassau Street, Princeton, NJ 08542
			      
		       ghh@princeton.edu
		       HARMAN@PUCC.BITNET

falkg@vaxa.UCalgary.CA (Geoffrey Falk) (06/05/89)

I beg to disagree with much of what has been said here.  Although it can never
be known what free will is, it obviously exists (at least for me).  And,
since I am a strict believer in a universe governed by physical (secular)
phenomena alone, I have formed what I believe may be the only explanation
for consciousness.

Intelligence may well be exhibited by some machine of the future.  It may well
be possible to create a deterministic software/silicon thing which can pass
the Turing test with flying colours.  However, it is my FIRM belief that no
entity whose behaviour is strictly determined (i.e. by a piece of coded soft-
ware) can actually possess a consciousness.  It is therefore my conclusion
that an actual "thinking" machine lies in the exploitation of some other
physical phenomenon by which an element of nondeterminism can be injected.
Such is the nature of the human brain.

I suggest that the way to achieve this is by utilizing quantum random effects
in neural microcircuitry.

Although it will always be impossible to strictly prove that any entity has
a consciousness, it is evidently possible for consciousness to take place.
The Turing test reduces to another moronic demonstration of the Other Minds
problem.  The evidence for consciousness will be in the demonstration of
behaviour that was not envisioned by the creators of the system.

Geoffrey Falk
(falkg@vaxa.cpsc.UCalgary.CA)
Student, U. of C.

weyand@csli.Stanford.EDU (Chris Weyand) (06/11/89)

In article <1478@cs-spool.calgary.UUCP> falkg@vaxa.UCalgary.CA (Geoffrey Falk) writes:
>I beg to disagree with much of what has been said here.  Although it can never
>be known what free will is, it obviously exists (at least for me).  

How do you know it obviously exists?

>Intelligence may well be exhibited by some machine of the future.  It may well
>be possible to create a deterministic software/silicon thing which can pass
>the Turing test with flying colours.  However, it is my FIRM belief that no
>entity whose behaviour is strictly determined (i.e. by a piece of coded soft-
>ware) can actually possess a consciousness.  

If you were willing to call the machine intelligent what does it matter if
it is conscious?  You imply that consciousness is not neccessary for
intelligence.  Why then are you so concerned about consciousness?

Turing's point in proposing the Imitation game was that were a machine
able to fool human participants as often as humans themselves did, we
could conclude that the machine was exhibiting what we might call human
intelligence.  He said nothing of consciousness.  

>The evidence for consciousness will be in the demonstration of
>behaviour that was not envisioned by the creators of the system.

If you believe this then there are several systems that you would say
show evidence of consciousness.  Doug Lenat's AM for example showed 
very surprising and remarkable behavior in discovering principles of 
mathematics.  

krobt@nova.UUCP (Robert Klotz) (06/12/89)

In an article of <5 Jun 89 15:24:49 GMT>, falkg@vaxa.UCalgary.CA (Geoffrey Falk) writes:

 " Message-ID: <1478@cs-spool.calgary.UUCP>
 " 
 " 
 " Although it will always be impossible to strictly prove that any entity 
 " has a consciousness, it is evidently possible for consciousness to take 
 " place. The Turing test reduces to another moronic demonstration of the 
 " Other Minds problem.  The evidence for consciousness will be in the 
 " demonstration of behaviour that was not envisioned by the creators of the 
 " system.
 " 
hi,
 
   you were doing so well, and then your conclusion left us with something 
more vague than the turing test.  under your definition, most of my programs, 
in their early test runs, have consciousness.  i can certainly clearly 
demonstrate that their behaviour is in no way what i "expected" when i wrote 
them.

   i am beginning to wonder if attempts at artificial intelligence are a 
waste of time.  i am not trying to say that it is unattainable, i am just 
wondering why we would even want it.  computers are fast becoming tools 
which are fulfilling all purposes which i originally thought would require 
"artificial intelligence".  expert systems are becoming very well developed, 
and soon there will be a good expert expert.  since everyone has decided that 
expert technology isn't really true "intelligence" then that eliminates one 
big need for "ai".

   i have just written several pgms which allow users to query a database in 
plain and simple english.  i am sure most of you have seen such algorithms.  
everyone agrees that these manipulations are performed without consciousness, 
cognition, or intelligence, even though the user always seems to get the 
answer to his/her questions.

   what i am trying to get around to is that whenever a problem arises which 
requires "ai", soon an alternate algorithm is found which, as is obvious to 
all, is not truly "intelligent".  perhaps intelligence, artificial or not, 
can be defined as a significantly massive collection of algorithms useful in 
problem solving.

...robert
--  
------
{att!occrsh|dasys1|killer|uokmax}!mom!krobt | argue for your limitations
  or   --------------                       | and soon you will find that
    krobt%mom@uokmax.ecn.uoknor.edu         | they are yours.

atdcad@prls.UUCP (Ron Cline) (06/17/89)

In article <1478@cs-spool.calgary.UUCP> falkg@vaxa.UCalgary.CA (Geoffrey Falk) writes:
>...since I am a strict believer in a universe governed by physical (secular)
>phenomena alone, I have formed what I believe may be the only explanation
>for consciousness.
>

I don't "believe" in a purely "physical" universe, but I still agree with
everything you say in your posting.

>..(lines deleted)
> 
>...  It is therefore my conclusion
>that an actual "thinking" machine lies in the exploitation of some other
>physical phenomenon by which an element of nondeterminism can be injected.
>Such is the nature of the human brain.
>
>I suggest that the way to achieve this, is by utilizing quantum random effects
>in neural microcircuitry.
>

In 1975, L. Bass from University of Queensland, Australia, published a
paper on "A Quantum Mechanical Mind-Body Interaction", hypothesizing such
a random decision maker inside the human brain.

As far as microcircuitry is concerned, I believe it will be *necessary*
at some point in the future to include such a quantum-based decision maker
within computational hardware, based solely on system needs.  Note that
"Chaos" is not necessarily present.

I could go further, but the venue is wrong.  However, I can suggest that
free-will and indeterminate-choice are, indeed, self-consistent within
a single world-view.  And there is no reason that AI should be excluded.

Ron Cline
Signetics/Philips
Adv Tech Dev

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (06/19/89)

From article <896@orbit.UUCP>, by philo@pnet51.cts.com (Scott Burke):
>   I'm sure that QM and chaos both play a part in the behavior of the human
> brain -- but I hardly hold out any hopes of it playing the role that many
> people want to make it fill, that of savior for the doctrine of free will.    

I think that's an interesting question to raise.

> ...  The actions of a "free agent"....
> .... appear[s] to display "random" behavior.....
> ..... it is the central idea
> of chaos theory that perfectly determinate systems (such as the weather)
> display what appears to be "random" behavior, by virtue of their complexity*. 

Another example is the pre-quantum theory of ideal gases.  Molecules
were assumed to be perfectly deterministic, but for practical, and
hence good theoretical purposes, they were unpredictable.  Chaos theory
does not predict new physical phenomena so much as provide a
mathematical framework for old ones.

> ..... the individual behavior of a
> chaotic system may be unpredictable, but many chaotic systems can be
> characterized by "chaotic attractors", regions and patterns of behavior which
> the system as a whole follows.  There is no reason to believe that the
> ultimately highly* complex system of the mind is any less chaotic in that its
> behavior "appears random" but is not, exhibits stable patterns at higher
> levels (e.g. "predictable people", morality itself, the internal consistency of
> consciousness and intelligence and choosing), and is at rock bottom COMPLETELY
> DETERMINED.....

I had trouble unraveling that five-line sentence.  I think it says: The
mind is chaotic, in the technical sense: it appears random, exhibits
stable attractors, and is embodied in a completely deterministic
physical system.  Basically, I agree.  Quantum mechanics may play a
role here, but the conclusion does not depend on quantum mechanics.

Yes, people usually follow their principles or their habits, but not
always.  Under "stress" even "predictable people" do things you
wouldn't expect them to do.  What is this "stress"?  Subjectively, it
feels like being in a region where the "attractor" is not obvious.
This is the region where "free will" has to be tested.  Otherwise you
might infer that people's actions are determined by their principles
and habits, which are determined by their heredity and environment,
etc.  And this is the region where chaos theory is applicable.

So I think chaos theory can describe the mechanism of free will.  What
it does not describe is the sensation of free will (which I alluded to
above), the "feeling" of being undecided.  That would seem to be in the
realm of "consciousness."

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

ellis@chips2.sri.com (Michael Ellis) (06/28/89)

> Scott Burke

>  I'm sure that QM and chaos both play a part in the behavior of the human
>brain -- but I hardly hold out any hopes of it playing the role that many
>people want to make it fill, that of savior for the doctrine of free will.    

    I for one hardly think QM+CT "play the savior for the doctrine of free 
    will". They do however demolish determinism and certain classic
    arguments against free will.

>A case in point, the above.  The actions of a "free agent", from all
>appearances and our ability to describe them, appear to be the result of some
>"random chooser".  Put in a different light, all we are saying is that this
>incredibly complex "system" appears to display "random" behavior.  But the
>chaos theory itself is both savior and devil here, for it is the central idea
>of chaos theory that perfectly determinate systems (such as the weather)
>display what appears to be "random" behavior, by virtue of their complexity*. 

    Weather doesn't just appear to be random; it really is random, by the same
    QM+CT argument I have repeated before. 

>It is not that complex systems defy the laws of determinism... 
>..
>..    [[Lots of declarations of faith to defunct 19th century dogmas]]
>..
>and is at.. rock bottom COMPLETELY DETERMINED.  

    Your devotion to the Correct and True principle Universal Determinism
    in the face of massive scientific evidence to the contrary is inspiring.
    Credis quia absurdum, no doubt.

>Chaos is not the science of random systems -- the systems
>themselves are quite determinate -- it is the science of non-random systems
>that exhibit* random behavior (by virtue of the complex interaction of
>sub-processes in the whole of the system).

     Exactly so. But if CT is correct, then the global behavior of
     sufficiently complex systems is sensitive to infinitesimal
     fluctuations, everywhere. QM provides infinitesimal fluctuations
     everywhere. Ergo, QM+CT *together* demonstrate that sufficiently
     complex systems are genuinely unpredictable on both micro- and macro-
     scopic levels. QED.

-michael

Gordon@ucl-cs.UUCP (07/03/89)

> From: Michael Ellis <ellis@chips2.sri.com>
> Exactly so. But if CT is correct, then the global behavior of
> sufficiently complex systems is sensitive to infinitesimal
> fluctuations, everywhere. QM provides infinitesimal fluctuations
> everywhere. Ergo, QM+CT *together* demonstrate that sufficiently
> complex systems are genuinely unpredictable on both micro- and macro-
> scopic levels. QED.

Chaos does not depend on "complexity". The logistic map,
      
            x -> x*x + c 

is only chaotic above c ~= 3.7

Gordon.

demers@beowulf.ucsd.edu (David E Demers) (07/07/89)

In article <317@ucl-cs.UUCP> Gordon@ucl-cs.UUCP writes:
>> From: Michael Ellis <ellis@chips2.sri.com>
[... Ergo, QM+CT *together* demonstrate that sufficiently
					     ^^^^^^^^^^^^
>> complex systems are genuinely unpredictable on both micro- and macro-
   ^^^^^^^
>> scopic levels. QED.
>
Gordon says:
>Chaos does not depend on "complexity". The logistic map,
>      
>            x -> x*x + c 
>
>is only chaotic above c ~= 3.7

I believe Gordon "misspoke".  The logistic map is x -> c * x * (1 - x),
and is, as he said, chaotic for c slightly less than 4.

The map x -> x*x + c is interesting since for c > 1/4 there are no
attractors, while c < 1/4 has two fixed points; the rightmost being
a repellor and the leftmost generally an attractor.  Well, maybe that's
not interesting after all, but it IS a pretty simple example of 
bifurcation behavior in a map.
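
The fixed-point claim is easy to check numerically (an illustrative
sketch; the two values of c are chosen arbitrarily).  Fixed points of
x -> x*x + c solve x*x - x + c = 0, and a fixed point attracts exactly
when the map's derivative 2x has magnitude less than 1 there:

    # Fixed points of x -> x*x + c solve x^2 - x + c = 0 (real roots
    # need c <= 1/4); a point is attracting iff |2x| < 1 there.
    import math

    def fixed_points(c):
        disc = 1.0 - 4.0 * c
        if disc < 0:
            return []                  # c > 1/4: no real fixed points
        r = math.sqrt(disc)
        return [(1.0 - r) / 2.0, (1.0 + r) / 2.0]

    for c in (0.2, -0.5):              # arbitrary sample values
        for x in fixed_points(c):
            kind = "attractor" if abs(2.0 * x) < 1.0 else "repellor"
            print("c=%5.2f  fixed point %7.4f  %s" % (c, x, kind))

For both sample values the leftmost fixed point comes out attracting and
the rightmost repelling, matching the description above.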

But on to what chaos tells us.  Chaos "theory" shows that very
simple (completely deterministic!) systems with only a few 
degrees of freedom can exhibit
complex behavior that is unpredictable in the long run and which
passes many statistical tests of randomness.  These systems
exhibit powerful sensitivity to their initial conditions.  The
argument which was made, I think, is that quantum mechanics
indicates that there is some level below which we CANNOT have
certainty, thus it is not possible to measure any chaotic system 
accurately enough in order to predict the future state of the
system beyond some limiting period of time.  All predictions
will be in error, with the amount of error growing exponentially
until all significant bits are garbage...
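
A few lines of Python make the error growth concrete (an illustrative toy;
the starting point and perturbation are arbitrary): iterate the logistic
map at c = 4 from two initial conditions differing by 1e-10 and watch the
gap grow until the trajectories are unrelated.

    # Sensitive dependence in the logistic map x -> c*x*(1-x) at c = 4:
    # starts differing by 1e-10 decorrelate within roughly 40 steps.
    c = 4.0
    x, y = 0.3, 0.3 + 1e-10
    for step in range(1, 61):
        x = c * x * (1.0 - x)
        y = c * y * (1.0 - y)
        if step % 10 == 0:
            print("step %2d   |x - y| = %.2e" % (step, abs(x - y)))
    # the gap roughly doubles each step until no significant bits agree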

This is getting far away from AI... is it reasonable to take the
position that there may or may not be free will, and thus act as
if one HAS free will just in case? :-) 
 
Dave

Robert@ucl-cs.UUCP (07/13/89)

The map x -> c*x*(1-x) is the logistic map, but by a transformation of
coordinates x == y + 1/2 followed by a rescaling of y it can be put into
the form x -> x*x + c (with a different value for the c in this equation).
Thus the two maps are really equivalent.

This map, x -> x*x + c, can be used to generate the Mandelbrot set, by 
looking at the COMPLEX values of c for which the orbit of 0 stays bounded.
More than the simple bifurcation behaviour of the real case is obtained.
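
For concreteness, here is the standard escape-time membership test (an
illustrative sketch; the iteration bound and the two sample points are
arbitrary):

    # c belongs to the Mandelbrot set iff the orbit of 0 under
    # z -> z*z + c stays bounded; once |z| > 2 the orbit must escape.
    def in_mandelbrot(c, max_iter=100):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:
                return False           # escaped: c is outside the set
        return True                    # no escape seen: treat c as inside

    print(in_mandelbrot(-1 + 0j))      # True: real c, period-2 orbit
    print(in_mandelbrot(0.5 + 0.5j))   # False: the orbit escapes quickly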

Is `free will' any different from `will'? The discussion about `free will' 
and `determinism' may become clarified by developing the distinction
between 'free will' and `will'. Or then again it may not, whichever
the case may be.

Robert.

andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head ) (07/15/89)

In article <334@ucl-cs.UUCP>, Robert@ucl-cs.UUCP writes:
> Is `free will' any different from `will'? The discussion about `free will' 
> and `determinism' may become clarified by developing the distinction
> between 'free will' and `will'. Or then again it may not, whichever
> the case may be.

I coincidentally just caught a line from the group "Yes" (I think) which
goes:	"Then I will choose free will".

While I am not suggesting that pop contains any deep truisms (or not), when
I heard this I wondered 
a) if he >has< free will, then the choice is specious
b) else, how can he choose?

Just a little light relief from
-- 
...................................................................
Andrew Palfreyman	I should have been a pair of ragged claws
nsc!berlioz!andrew	Scuttling across the floors of silent seas
...................................................................

rjf@ukc.ac.uk (Robin Faichney) (07/18/89)

In article <425@berlioz.nsc.com> andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head ) writes:
>[..]
>I coincidentally just caught a line from the group "Yes" (I think) which
>goes:	"Then I will choose free will".
>
>While I am not suggesting that pop contains any deep truisms (or not), when
>I heard this I wondered ..

In my humble opinion, some Yes lyrics can be quite profound.  As in (my
interpretation of) this case.  What if the intended meaning is "Then I
will choose to believe in free will"?  Seems to me that is both the
most down-to-earth and the most positive position that can be taken on
this issue.  And the Yes version is a very nice, if slightly subtle,
way of putting it.  It is an Occam's Razor to cut a Gordian Knot.  Just
because something cannot be proven, is no reason not to believe in it.

On the other hand, maybe Yes just like recording silly truisms.  (Though
somehow I don't think Andrew believes that.)

I say, if you want to preserve your Knots, don't listen to Yes (or me :-)

Robin

prohaska%lapis@Sun.COM (J.R. Prohaska) (07/19/89)

Sorry, but we've got a *major* misunderstanding brewing here that I feel
morally compelled to clear up.  The song folks are referring to is almost
certainly "Free Will" by Rush, not Yes (both truly wonderful groups, of
course, but Yes cannot compare with Rush in the thoughtfulness/intelligence
(that word again!) of their lyrics).  Anyway, the song is about choosing
mental models by which to interpret the cosmos (yeah, that's the ticket).

	J.R. Prohaska
	Sun Microsystems, Mountain View, California  (415) 336 2502
	internet:  prohaska@sun.com
	usenet:    {backbone}!sun!prohaska
	USnail:    Box 9022, Stanford, CA  94305
J.R.
Knowledge Systems Group, MS 12-33, x6-2502

bgr@wild.Rice.EDU (Robert G. Rhode) (07/19/89)

>I coincidentally just caught a line from the group "Yes" (I think) which
>goes:	"Then I will choose free will".

The song is "Free Will", from the album "Permanent Waves" by Rush.
That same song also includes the line

'If you choose not to decide, you still obey the choice.'

As far as pop not containing deep truisms, it doesn't.  But
neither is Rush a pop band.

- Robert Rhode

"Today's champion is tomorrow's crocodile shit." - Monty Python

demers@beowulf.ucsd.edu (David E Demers) (07/20/89)

In article <1842@harrier.ukc.ac.uk> rjf@ukc.ac.uk (Robin Faichney) writes:
>In article <425@berlioz.nsc.com> andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head ) writes:
>>[..]
>>I coincidentally just caught a line from the group "Yes" (I think) which
>>goes:	"Then I will choose free will".
>In my humble opinion, some Yes lyrics can be quite profound...
>...Just because something cannot be proven, is no reason not to believe in it.
Etc...

Maybe the subject line should be changed, as it has little to do with
my previous posting, nor anything to do with me...

:-)

Dave
demers@cs.ucsd.edu

cebly@ai.toronto.edu (Craig Boutilier) (07/20/89)

In article <4286@kalliope.rice.edu> rhode@ricevm1.rice.edu writes:
>
>>I coincidentally just caught a line from the group "Yes" (I think) which
>>goes:	"Then I will choose free will".
>
>The song is "Free Will", from the album "Permanent Waves" by Rush.
>That same song also includes the line
>
>'If you choose not to decide, you still obey the choice.'

      "If you choose not to decide, you still have made a choice"

       is the correct reading.

>
>- Robert Rhode

- Craig

"I will choose a path that's clear, I will choose free will". (N.Peart)

mrcheezy@pnet51.cts.com (Steve Anderson) (07/21/89)

Not that it really matters, but the group is Rush.

Being a former fanatic of that group, I felt I had to point that out.


*----------------------------------------------------------------------------*
| UUCP: {amdahl!bungia, uunet!rosevax, chinet, killer}!orbit!pnet51!mrcheezy |
| ARPA: crash!orbit!pnet51!mrcheezy@nosc.mil                                 |
| INET: mrcheezy@pnet51.cts.com                                              |
*----------------------------------------------------------------------------*

philo@pnet51.cts.com (Scott Burke) (07/22/89)

atdcad@prls.UUCP (Ron Cline) writes:
>>...  It is therefore my conclusion
>>that an actual "thinking" machine lies in the exploitation of some other
>>physical phenomenon by which an element of nondeterminism can be injected.
>>Such is the nature of the human brain.
>
>As far as microcircuitry is concerned, I believe it will be *necessary*
>at some point in the future to include such a quantum-based decision maker
>within computational hardware, based solely on system needs.  Note that
 
  I'm sure that QM and chaos both play a part in the behavior of the human
brain -- but I hardly hold out any hopes of it playing the role that many
people want to make it fill, that of savior for the doctrine of free will.    
A case in point, the above.  The actions of a "free agent", from all
appearances and our ability to describe them, appear to be the result of some
"random chooser".  Put in a different light, all we are saying is that this
incredibly complex "system" appears to display "random" behavior.  But the
chaos theory itself is both savior and devil here, for it is the central idea
of chaos theory that perfectly determinate systems (such as the weather)
display what appears to be "random" behavior, by virtue of their complexity*. 
It is not that complex systems defy the laws of determinism -- they don't --
they defy our ability to conceptualize the deterministic chain of causes in
such complex systems.  Technically, there is no randomness whatsoever -- it
is completely "pseudo random" in the same sense that a computer random number
generator creates "pseudo random" numbers -- it is nothing more than a series
for which the pattern of numbers OVER SOME LIMITED FRAME OF REFERENCE takes on
the quality of true random numbers.  The same is true of the weather: we can't
determine for certain whether it will rain tomorrow, or what the temperature
will be at midnight in Topeka -- in this small window on the process, our
frame of reference is far too limited, and the phenomena are AT THAT LEVEL
random and indeterminate; but we all know that the jet stream will keep on
flowing, summer and winter aren't going to disappear, and the larger features
are pretty much predictable, non-random events.  There is a similar behavior
in chaotic systems of other kinds as well -- the individual behavior of a
chaotic system may be unpredictable, but many chaotic systems can be
characterized by "chaotic attractors", regions and patterns of behavior which
the system as a whole follows.  There is no reason to believe that the
ultimately highly* complex system of the mind is any less chaotic in that its
behavior "appears random" but is not, exhibits stable patterns at higher
levels (e.g. "predictable people", morality itself, the internal consistency of
consciousness and intelligence and choosing), and is at rock bottom COMPLETELY
DETERMINED.  Chaos is not the science of random systems -- the systems
themselves are quite determinate -- it is the science of non-random systems
that exhibit* random behavior (by virtue of the complex interaction of
sub-processes in the whole of the system).


UUCP: {amdahl!bungia, uunet!rosevax, chinet, killer}!orbit!pnet51!philo
ARPA: crash!orbit!pnet51!philo@nosc.mil
INET: philo@pnet51.cts.com

brianc@daedalus (Brian Colfer) (07/23/89)

In article <1842@harrier.ukc.ac.uk> rjf@ukc.ac.uk (Robin Faichney) writes:
>In article <425@berlioz.nsc.com> andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head ) writes:
>>[..]
>>I coincidentally just caught a line from the group "Yes" (I think) which
>>goes:	"Then I will choose free will".
>>
>>While I am not suggesting that pop contains any deep truisms (or not), when
>>I heard this I wondered ..
>
>In my humble opinion, some Yes lyrics can be quite profound.  
...
> It is an Occam's Razor to cut a Gordian Knot.  
...
>Just because something cannot be proven, is no reason not to believe in it.

Why believe something if you cannot prove it?   How can I prove that
there is no free will? ... it is logically impossible.  For example,
if I want to prove that there cannot ever be a black swan I must examine
every swan which has ever existed or ever will exist... impossible.

Someone else on USENET said that free will is "intending to intend".
Sounds kind of like the Yes lyric... but this just begs the question
of what is intention... or will.

>
>I say, if you want to preserve your Knots, don't listen to Yes (or me :-)
>

Put them on ice or BHT might work.

=============================================================================
Brian  | UC San Francisco        | E-mail: USENET, Internet, BITNET
Colfer | Dept. of Lab. Medicine  |...!{ucbvax,uunet}!daedalus.ucsf.edu!brianc
       | S.F. CA, 94143-0134 USA | brianc@daedalus.ucsf.edu 
       | PH. (415) 476-2325      | BRIANC@UCSFCCA.BITNET
-----------------------------------------------------------------------------
       "Leave your body and soul at the door ..." -- Oingo Boingo
=============================================================================

wlp@calmasd.Prime.COM (Walter Peterson) (07/24/89)

In article <2230@ucsfcca.ucsf.edu>, brianc@daedalus (Brian Colfer) writes:
> In article <1842@harrier.ukc.ac.uk> rjf@ukc.ac.uk (Robin Faichney) writes:

[... various parts of the discussion deleted...]

> >Just because something cannot be proven, is no reason not to believe in it.
> 
> Why believe something if you cannot prove it?  

Sometimes there is no choice in the matter. I will grant that proof is
a very desirable thing to have, but as Godel's Incompleteness Theorem
shows, any consistent system rich enough to express arithmetic will contain
true statements that cannot be proved. Axioms are also accepted without proof.

-- 
Walt Peterson.  Prime - San Diego R&D (Object and Data Management Group)
"The opinions expressed here are my own."

brianc@daedalus (Brian Colfer) (07/24/89)

In article <438@calmasd.Prime.COM> wlp@calmasd.Prime.COM (Walter Peterson) writes:
>> Brian Colfer >>> Robin Faichney
>
>[... various parts of the discussion deleted...]

>>>Just because something cannot be proven, is no reason not to believe in it.

>> Why believe something if you cannot prove it?  

>Sometimes there is no choice in the matter. I will grant that proof is
>a very desirable thing to have, but as Godel's Incompleteness Theorem
>shows, any consistent system rich enough to express arithmetic will contain
>true statements that cannot be proved. Axioms are also accepted without proof.

I always thought that Godel proved that there will always be incompleteness
in deductive systems.

Also, I was suggesting inductive rather than deductive proof.  I
probably should have said,  "Why believe in something if there is no
publicly validated evidence for it?"  

Are there inductive axioms?
=============================================================================
Brian  | UC San Francisco        | E-mail: USENET, Internet, BITNET
Colfer | Dept. of Lab. Medicine  |...!{ucbvax,uunet}!daedalus.ucsf.edu!brianc
       | S.F. CA, 94143-0134 USA | brianc@daedalus.ucsf.edu 
       | PH. (415) 476-2325      | BRIANC@UCSFCCA.BITNET
-----------------------------------------------------------------------------
       "Leave your body and soul at the door ..." -- Oingo Boingo
=============================================================================

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (08/17/89)

In article <896@orbit.UUCP> philo@pnet51.cts.com (Scott Burke) writes:

> 
>  I'm sure that QM and chaos both play a part in the behavior of the human
>brain -- but I hardly hold out any hopes of it playing the role that many
>people want to make it fill, that of savior for the doctrine of free will.    
>A case in point, the above.

Let's look at the dreaded "QM+Chaos" from a computational angle:
1)  The brain is clearly a massively parallel non-linear system,
    and we should expect it to behave in a chaotic regime.
    Several neural network learning algorithms deal with the
    net as a dynamic system, which must be trained to have its output
    appraoch a desired attractor [1].  By understanding a net as
    a dynamic system, we can figure out how to change the weights to
    achieve that output.

2)  It is possible that "random" noise and QM noise are used in
    some learning procedures, and possibly decision procedures in the
    brain.  A learning algorithm may use random sampling of the
    weight space around the current weight point to determine
    how weights should be changed to achieve the desired learning.
    A good example of "random" noise used in a learning algorithm is
    simulated annealing [2]; a small illustrative sketch follows this list.

3)  I personally doubt "random" noise or "QM" holds the seed of knowledge,
    (I think that's a metaphysical consideration)
    but just presents tools for achieving learning.  The actual
    knowledge comes from the ability of a brain circuit to achieve the
    desired output based upon current environmental and brain states
    (the "learning algorithm").  There are probably many possible "local 
    minima" which a brain circuit can arrive at during any decision process, 
    and the ultimate choice between those acceptable decision choices may be 
   "made" by noise effects.

[1]  F.J. Pineda, "Dynamics and architecture for neural computation,"
	Journal of Complexity, Vol. 4, pp. 216-245, Sept. 1988.
[2]  G.E. Hinton and T.J. Sejnowski, "Learning and Relearning in
	Boltzmann Machines," Parallel Distributed Processing, Vol. 1,
	pp. 282-317, Rumelhart et al., eds.  MIT Press, 1986.