[talk.philosophy.misc] Free Will & Self-Awareness

bill@proxftl.UUCP (T. William Wells) (05/22/88)

In article <445@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <8805092354.AA05852@ucbvax.Berkeley.EDU> eyal@COYOTE.STANFORD.EDU (Eyal Mozes) writes:
>...
> As far as you have explained so far, Rand's theory is little more
> than simply saying that free will = the ability to focus consciousness,
> which we can all observe.  Since we can all observe this without the
> aid of Rand's theory, all Rand seems to be saying is "that's all there
> is to it".
>
> -- Jeff

Actually, from what I have read so far, it seems that the two of
you are arguing different things; moreover, eyal@COYOTE.STANFORD.EDU
(Eyal Mozes) has committed, at the very least, a sin of omission:
he has not explained Rand's theory of free will adequately.

Following is the Objectivist position as I understand it.  Please
be aware that I have not included everything needed to justify
this position, nor have I been as technically correct as I might
have been; my purpose here is to trash a debate which seems to be
based on misunderstandings.

To those of you who want a justification: given enough interest,
I will eventually provide one on talk.philosophy.misc, where I
hope to be continuing my discussion of Objectivism.  Please
direct any followups to that group.

Entities are the only causes: they cause their actions.  Their
actions may be done to other entities, and this may require the
acted-upon entity to cause itself to act in some way.  In that
case, one can use `cause' in a derivative sense, saying: the
acting entities (the agents) caused the acted-upon entities (the
patients) to act in a certain way.  One can also use `cause' to
refer to a chain of such actions.  This derivative sense is the
normal use for the word `cause', and there is always an implied
action.

If, in order for an entity to act in some way, other entities
must act on it, then those agents are a necessary cause for the
patient's action.  If, given a certain set of actions performed
by some entities, a patient will act in a certain way, then those
agents are a sufficient cause for the patient's action.

The Objectivist version of free will asserts that there are (for
a normally functioning human being) no sufficient causes for what
he thinks.  There are, however, necessary causes for it.

This means that, when talking about thinking, no statement of
the form "X(s) caused me to think..." is a valid statement about
what is going on.

In terms of the actual process, what happens is this: various
entities provide the material which you base your thinking on
(and are thus necessary causes for what you think), but an
action, not necessitated by other entities, is necessary to
direct your thinking.  This action, which you cause, is
volition.

> But where does the "subject of your own choice" come from?  I wasn't
> thinking of letting one's thoughts wander, although what I said might
> be interpreted that way.  When you decide what to think about, did
> you decide to decide to think about *that thing*, and if so how did
> you decide to decide to decide, and so on?

Shades of Zeno!  One does not "decide to decide" except when one
does so in an explicit sense.  ("I was waffling all day; later
that evening I put my mental foot down and decided to decide once
and for all.") Rather, you perform an act on your thoughts to
direct them in some way; the name for that act is "decision".

Anyway, in summary, Rand's theory is not just that "free will =
the ability to focus consciousness" (actually, to direct certain
parts of one's consciousness), but that this act is not
necessitated by other entities.

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/24/88)

In article <205@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>interest) eventually be doing so on talk.philosophy.misc, 
>Please direct any followups to that group.
Can I please just ask one thing here, as it is relevant?
>
>The Objectivist version of free will asserts that there are (for
>a normally functioning human being) no sufficient causes for what
>he thinks.  There are, however, necessary causes for it.
Has this any bearing on the ability of a machine to simulate human
decision making?  It appears so, but I'd be interested in how you think it
can be extended to yes/no/don't know about the "pure" AI endeavour.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

	     The proper object of the study of humanity is humans, not machines

bill@proxftl.UUCP (T. William Wells) (06/12/88)

In article <1226@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> In article <205@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> >The Objectivist version of free will asserts that there are (for
> >a normally functioning human being) no sufficient causes for what
> >he thinks.  There are, however, necessary causes for it.
> Has this any bearing on the ability of a machine to simulate human
> decision making?  It appears so, but I'd be interested in how you think it
> can be extended to yes/no/don't know about the "pure" AI endeavour.

If you mean by "pure AI endeavour" the creation of artificial
consciousness, then definitely the question of free will &
determinism is relevant.

The canonical argument against artificial consciousness goes
something like: humans have free will, and free will is essential
to human consciousness.  Machines, being deterministic, do not
have free will; therefore, they can't have a human-like
consciousness.

Now, should free will be possible in a deterministic entity,
this argument goes poof.

bill@proxftl.UUCP (T. William Wells) (06/13/88)

I really do not want to further define Objectivist positions on
comp.ai.  I have also seen several suggestions that we move the
free will discussion elsewhere.  Anyone object to moving it to
sci.philosophy.tech?

In article <463@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> ]In terms of the actual process, what happens is this: various
> ]entities provide the material which you base your thinking on
> ](and are thus necessary causes for what you think), but an
> ]action, not necessitated by other entities, is necessary to
> ]direct your thinking.  This action, which you cause, is
> ]volition.
>
> Well, how do I cause it?  Am I caused to cause it, or does it
> just happen out of nothing?  Note that it does not amount to
> having free will just because some of the causes are inside
> my body.  (Again, I am not sure what you mean by "other entities".)

OK, let's try to eliminate some confusion.  When talking about an
action that an entity takes, there are two levels of action to
consider: the level associated with the action of the entity
itself, and the level associated with the processes that are
necessary causes for the entity-level action.

[Note: the following discussion applies only to the case where
the action under discussion can be said to be caused by the
entity.]

Let's consider a relatively uncontroversial example.  Say I have
a hot stove and a pan over it.  At the entity level, the stove
heats the pan.  At the process level, the molecules in the stove
transfer energy to the molecules in the pan.

The next question to be asked in this situation is: is heat the
same thing as the energy transferred?

If the answer is yes, then the entity level and the process level
are essentially the same thing: the entity level is "reducible"
to the process level.  If the answer is no, then we have what is
called an "emergent" phenomenon.

Another characterization of "emergence" is that, while the
process level is a necessary cause for the entity-level actions,
those actions are "emergent" if the process-level action is not a
sufficient cause.

Now, I can actually try to answer your question.  At the entity
level, the question "how do I cause it" does not really have an
answer; like the hot stove, it just does it.  However, at the
process level, one can look at the mechanisms of consciousness;
these constitute the answer to "how".

But note that answering this "how" does not answer the question
of "emergence".  If consciousness is emergent, then the only
answer is that "volition" is simply the name for a certain class
of actions that a consciousness performs.  And being emergent,
one could not reduce it to its necessary cause.

I should also mention that there is another use of "emergent"
floating around: it simply means that properties at the entity
level are not present at the process level.  The emergent
properties of neural networks are of this type.
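
To make that weaker sense concrete, here is a minimal sketch (in
Python, purely illustrative and not part of the original posts;
the weights are a standard hand-picked solution): no single
threshold unit can compute XOR, but a two-layer network of such
units can, so "computes XOR" is a property present at the network
level and absent at the unit level.

    # Illustrative sketch (assumed, not from the posts) of the weaker
    # sense of "emergent": no single threshold unit computes XOR, but
    # a small network of identical units does.

    def unit(inputs, weights, threshold):
        """One threshold neuron: 1 iff the weighted sum exceeds threshold."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

    def xor_net(x, y):
        """Two hidden units feeding one output unit; the net computes XOR."""
        h_or = unit((x, y), (1, 1), 0.5)          # fires if x OR y
        h_and = unit((x, y), (1, 1), 1.5)         # fires if x AND y
        return unit((h_or, h_and), (1, -1), 0.5)  # fires if OR but not AND

    for x in (0, 1):
        for y in (0, 1):
            print(x, y, "->", xor_net(x, y))      # prints the XOR truth table

The network-level property is nothing over and above the units
and their wiring, which is exactly what this weaker use of
"emergent" claims: the property is simply absent until the units
are put together.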

sierch@well.UUCP (Michael Sierchio) (06/14/88)

The debate about free will is funny to one who has been travelling
with mystics and sages -- who would respond by saying that freedom
and volition have nothing whatsoever to do with one another, and
that volition is conditioned by internal and external necessity
and is in no way free.

The ability to make plans, set goals, to have the range of volition
to do what one wants and to accomplish one's own aims still begs the
question about the source of what one wants.
-- 
	Michael Sierchio @ SMALL SYSTEMS SOLUTIONS
	2733 Fulton St / Berkeley / CA / 94705     (415) 845-1755

	sierch@well.UUCP	{..ucbvax, etc...}!lll-crg!well!sierch

bc@mit-amt.MEDIA.MIT.EDU (bill coderre) (06/14/88)

In article <6268@well.UUCP> sierch@well.UUCP (Michael Sierchio) writes:
>The debate about free will is funny to one who has been travelling
>with mystics and sages -- who would respond by saying that freedom
>and volition have nothing whatsoever to do with one another....

(this is gonna sound like my just previous article in comp.ai, so you
can read that too if you like)

Although what free will is and how something gets it are interesting
philosophical debates, they are not AI.

Might I submit that comp.ai is for the discussion of AI: its
programming tricks and techniques, and maybe a smattering of social
repercussions and philosophical issues.

I have no desire to argue semantics and definitions, especially about
slippery topics such as the above.

And although the occasional note is interesting (and indeed my
colleague Mr Sierchio's is sweet), endless discussions of whether
some lump of matter (either silicon- or carbon-based) CAN
POSSIBLY have "free will" (which only begs the question of where
to buy some and what to carry it in) are best confined to a group
where the readership is interested in such things.

Naturally, I shall not belabour you with endless discussions of neural
nets merely because of their interesting modelling of Real(tm)
neurons. But if you are interested in AI techniques and their rather
interesting approaches to the fundamental problems of intelligence and
learning (many of which draw on philosophy and epistemology), please
feel free to inquire.

I thank you for your kind attention.....................mr bc