[comp.ai] Free Will & Self-Awareness

yamauchi@SPEECH2.CS.CMU.EDU (Brian Yamauchi) (04/21/88)

In article <3200014@uiucdcsm>, channic@uiucdcsm.cs.uiuc.edu writes:
> 
> I can't justify the proposition that scientific endeavors grouped
> under the name "AI" SHOULD NOT IGNORE issues of free will, mind-brain,
> other minds, etc.  If these issues are ignored, however, I would
> strongly oppose the use of "intelligence" as being descriptive
> of the work.  Is it fair to claim work in that direction when
> fundamental issues regarding such a goal are unresolved (if not
> unresolvable)?  If this is the name of the field, shouldn't the
> field at least be able to define what it is working towards? 
> I personally cannot talk about intelligence without concepts such
> as mind, thoughts, free will, consciousness, etc.  If we, as AI
> researchers make no progress whatsoever in clarifying these issues,
> then we should at least be honest with ourselves and society, and find a
> new title for our efforts.  Actually the slight modification,
> "Not Really Intelligence" would be more than suitable.
> 
> 
> Tom Channic

I agree that AI researchers should not ignore the questions of free will,
consciousness, etc, but I think it is rather unreasonable to criticise AI
people for not coming up with definitive answers (in a few decades) to
questions that have stymied philosophers for millennia.

How about the following as a working definition of free will?  The
interaction of an individual's values (as developed over the long term) and
his/her/its immediate mental state (emotions, senses, etc.) to produce some
sort of decision.

I don't see any reason why this could not be incorporated into an AI
program.   My personal preference would be for a connectionist
implementation because I believe this would be more likely to produce
human-like behavior (it would be easy to make it unpredictable: just
introduce a small amount of random noise into the connections).
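
To make this a bit more concrete, here is a toy sketch (not from any real
system - the features, weights, and noise level are all invented for
illustration) of a decision that combines long-term values with immediate
mental state, plus a little noise in the "connections":

    import random

    def decide(values, mood, options, noise=0.05):
        """Score each option against long-term values and immediate mood;
        a little Gaussian noise in the 'connections' keeps the behavior
        from being perfectly predictable."""
        def score(option):
            long_term = sum(v * f for v, f in zip(values, option["features"]))
            immediate = sum(m * f for m, f in zip(mood, option["features"]))
            return long_term + immediate + random.gauss(0.0, noise)
        return max(options, key=score)

    # Invented example: two candidate actions, each described by two features.
    options = [
        {"name": "keep working", "features": [1.0, -0.5]},
        {"name": "take a break", "features": [0.2,  0.8]},
    ]
    best = decide(values=[0.7, 0.3], mood=[-0.2, 0.9], options=options)
    print(best["name"])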

Another related issue is self-awareness.  I would be interested in hearing
about any research into having AI programs represent information about
themselves and their "self-interest".  Some special cases of this might
include game-playing programs and autonomous robots / vehicles.

By the way, I would highly recommend the book "Vehicles: Experiments
in Synthetic Psychology" by Valentino Braitenberg to anyone who doesn't
believe that machines could ever behave like living organisms.

______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/26/88)

In article <1484@pt.cs.cmu.edu> yamauchi@SPEECH2.CS.CMU.EDU (Brian Yamauchi) writes:
>I agree that AI researchers should not ignore the questions of free will,
>consciousness, etc, but I think it is rather unreasonable to criticise AI
>people for not coming up with definitive answers (in a few decades) to
>questions that have stymied philosophers for millennia.

It is not the lack of answers that is criticised - it is the ignorance
of candidate answers and their problems which leads to the charge of
self-perpetuating incompetence.  There are philosophers who would
provide arguments in defence of AI, so the 'free-will' issue is not
one where the materialists, logicians and mechanical/biological
determinists will find themselves isolated without an intellectual tradition.
>
>I don't see any reason why this could not be incorporated into an AI program
So what? This is standard silly AI, and implies that what is true has
anything to do with the quality of your imagination.  If people make
personal statements like this, unfortunately the rebuttals can only be
personal too, however much the rebutter would like to avoid this position.
>
>By the way, I would highly recommend the book "Vehicles: Experiments
>in Synthetic Psychology" by Valentino Braitenberg to anyone who doesn't
>believe that machines could ever behave like living organisms.

There are few idealists or romantics who believe that NO part of an organism
can be modelled as a mechanical process.  Such a position would require that a
heart-lung machine be at one with the patient's geist, soul or psyche!  The 
logical fallacy beloved in AI is that if SOME aspects of an organism can be 
modelled mechanically, then ALL can.  This extension is utterly flawed.  It may
be the case, but the case must be proven, and there are substantial arguments 
as to why this cannot be the case.

For AI workers (not AI developers/exploiters who are just raiding the
programming abstractions), the main problem they should recognise is
that a rule-based or other mechanical account of cognition and decision
making is at odds with the doctrine of free will which underpins most Western
morality.  It is in no way virtuous to ignore such a social force in
the name of Science.  Scientists who seek moral, ethical, epistemological
or methodological vacuums are only marginalising themselves into
positions where social forces will rightly constrain their work.

jbn@glacier.STANFORD.EDU (John B. Nagle) (04/28/88)

      Could the philosophical discussion be moved to "talk.philosophy"?
Ken Laws is retiring as the editor of the Internet AILIST, and with him
gone and no replacement on the horizon, the Internet AILIST (which shows
on USENET as "comp.ai.digest") is to be merged with this one, unmoderated.
If the combined list is to keep its present readership, which includes some
of the major names in AI (both Minsky and McCarthy read AILIST), the content
of this one must be improved a bit.

					John Nagle

olivier@boulder.Colorado.EDU (Olivier Brousse) (04/29/88)

In article <17424@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
>      Could the philosophical discussion be moved to "talk.philosophy"?
>Ken Laws is retiring as the editor of the Internet AILIST, and with him
>gone and no replacement on the horizon, the Internet AILIST (which shows
>on USENET as "comp.ai.digest") is to be merged with this one, unmoderated.
>If the combined list is to keep its present readership, which includes some
>of the major names in AI (both Minsky and McCarthy read AILIST), the content
>of this one must be improved a bit.
>
>					John Nagle

"The content of this one must be improved a little bit."
What is this?  I believe the recent discussions were both interesting and of
interest to the newsgroup.
AI, as far as I know, is concerned with all issues pertaining to intelligence
and how it could be artificially created.
The questions raised are indeed important questions
to consider, especially with regard to the recent success of connectionism.

Keep the debate going ...
Olivier Brousse                       |
Department of Computer Science        |  olivier@boulder.colorado.EDU
U. of Colorado, Boulder               |

ok@quintus.UUCP (Richard A. O'Keefe) (04/29/88)

In article <1029@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> For AI workers (not AI developers/exploiters who are just raiding the
> programming abstractions), the main problem they should recognise is
> that a rule-based or other mechanical account of cognition and decision
> making is at odds with the doctrine of free will which underpins most Western
> morality.

What about compatibilism?  There are a lot of arguments that free will is
compatible with strong determinism.  (The ones I've seen are riddled with
logical errors, but most philosophical arguments I've seen are.)
When I see how a decision I have made is consistent with my personality,
so that someone else could have predicted what I'd do, I don't _feel_
that this means my choice wasn't free.

bwk@mitre-bedford.ARPA (Barry W. Kort) (04/29/88)

One of the problems with the English Language is that most of the
words are already taken.

Rather than argue over whether AI should or should not include
investigations into consciousness, awareness, free will, etc.,
why not just make up a new label for this activity?

I would like to learn how to imbue silicon with consciousness,
awareness, free will, and a value system.  Maybe this is not
considered a legitimate branch of AI, and maybe it is a bit
futuristic, but it does need a name that people can live with.

So what can we call it?  Artificial Consciousness?  Artificial
Awareness?  Artificial Value Systems?  Artificial Agency?

Suppose I were able to inculcate a Value System into silicon,
and that, in the event of a tie among competing choices, I used a
random mechanism to force a decision.  Would the behavior of
my system be very much different from that of a sentient being with
free will?
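
For concreteness, the tie-breaking mechanism I have in mind amounts to
nothing more than this (an illustrative sketch only; the value function
stands in for whatever value system has been inculcated):

    import random

    def choose(alternatives, value):
        """Pick the highest-valued alternative; break exact ties at random."""
        best = max(value(a) for a in alternatives)
        tied = [a for a in alternatives if value(a) == best]
        return random.choice(tied)

    # Invented example: three actions rated by a trivial value system.
    ratings = {"lie": 0, "stay silent": 2, "tell the truth": 2}
    print(choose(list(ratings), value=lambda a: ratings[a]))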

--Barry Kort

shani@TAURUS.BITNET (05/01/88)

In article <30502@linus.UUCP>, bwk@mbunix.BITNET writes:
>
> I would like to learn how to imbue silicon with consciousness,
> awareness, free will, and a value system.

  First, by requesting that, you are underestimating yourself as a free-willing
creature, and second, your request is self-contradicting and shows little
understanding of matters like free will and value systems - such things cannot
be 'given', they simply exist. (Something to bear in mind for other purposes
besides AI...). You can write 'moral' programs, even in BASIC, if you want,
because they will have YOUR value system....

O.S.

shani@TAURUS.BITNET (05/01/88)

In article <912@cresswell.quintus.UUCP>, ok@quintus.BITNET writes:

> compatible with strong determinism.  (The ones I've seen are riddled with
> logical errors, but most philosophical arguments I've seen are.)

That is correct, but there are a few which aren't, and that is mainly because
they managed to avoid self-contradictions and mixing of concepts...

O.S.

RLWALD@pucc.Princeton.EDU (Robert Wald) (05/02/88)

In article <1029@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
 
>In article <1484@pt.cs.cmu.edu> yamauchi@SPEECH2.CS.CMU.EDU (Brian Yamauchi) writes:
>For AI workers (not AI developers/exploiters who are just raiding the
>programming abstractions), the main problem they should recognise is
>that a rule-based or other mechanical account of cognition and decision
>making is at odds with the doctrine of free will which underpins most Western
>morality.  It is in no way virtuous to ignore such a social force in
>the name of Science.  Scientists who seek moral, ethical, epistemological
>or methodological vacuums are only marginalising themselves into
>positions where social forces will rightly constrain their work.
 
 
 
 
    Are you saying that AI research will be stopped because when it ignores
free will, it is immoral and people will take action against it?
 
    When has a 'doctrine' (which, by the way, is nothing of the sort with
respect to free will) had any such relationship to what is possible?
 
 
 
 
 
-Rob Wald                Bitnet: RLWALD@PUCC.BITNET
                         Uucp: {ihnp4|allegra}!psuvax1!PUCC.BITNET!RLWALD
                         Arpa: RLWALD@PUCC.Princeton.Edu
"Why are they all trying to kill me?"
     "They don't realize that you're already dead."     -The Prisoner

smoliar@vaxa.isi.edu (Stephen Smoliar) (05/02/88)

In article <912@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe)
writes:
>In article <1029@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert
>Cockton) writes:
>> For AI workers (not AI developers/exploiters who are just raiding the
>> programming abstractions), the main problem they should recognise is
>> that a rule-based or other mechanical account of cognition and decision
>> making is at odds with the doctrine of free will which underpins most
>>Western morality.
>
>What about compatibilism?  There are a lot of arguments that free will is
>compatible with strong determinism.  (The ones I've seen are riddled with
>logical errors, but most philosophical arguments I've seen are.)
>When I see how a decision I have made is consistent with my personality,
>so that someone else could have predicted what I'd do, I don't _feel_
>that this means my choice wasn't free.


Hear, hear!  Cockton's statement is the sort of doctrinaire proclamation which
is guaranteed to muddy the waters of any possible dialogue between those who
practice AI and those who practice the study of philosophy.  He should either
prepare a brief substantiation or relegate it to the cellar of outrageous
vacuities crafted solely to attract attention!

marty1@houdi.UUCP (M.BRILLIANT) (05/02/88)

In article <717@taurus.BITNET>, shani@TAURUS.BITNET writes:
> In article <30502@linus.UUCP>, bwk@mbunix.BITNET writes:
> >
> > I would like to learn how to imbue silicon with consciousness,
> > awareness, free will, and a value system.
> 
> .... free will and value systems - such things cannot
> be 'given', they simply exist.....
> .... You can write 'moral' programs, even in BASIC, if you want,
> because they will have YOUR value system....

It has been suggested that intelligence cannot be "given" to a machine
either.  That is, an "expert system" using only expertise "given" to it
out of the experience of human experts is not exhibiting full
"artificial intelligence."

BWK suggested "artificial awareness" as a complement to "artificial
intelligence,"  but apparently that is not enough.  You need artificial
learning.  My value system was not "given" to me, nor was my
professional expertise; both were learned.  At its ultimate, AI
research is really devoted to the invention of artificial learning.

For full artificial intelligence, the machine must derive its expertise
from its own experience.  For full artificial awareness, the machine
must derive its values from its own experience.  Not much different. 
Achieve artificial learning, and you will get both.

I hate to rehash the old "Turing test" again, but a machine cannot pass
for human longer than a few hours, or days at most, unless it has the
capacity for "agonizing reappraisal": the ability to "reevalueate its
basic assumptions."  That would be learning as humans do it.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201)-949-1858
Holmdel, NJ 07733	ihnp4!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
            explicitly claims them; then I lose all rights to them.

lantz@eleazar.Dartmouth.EDU (Bob Lantz) (05/03/88)

John Nagle writes (17424@glacier.stanford.edu)

>     Could the philosophical discussion be moved to "talk.philosophy"?
>... the major names in AI (both Minsky and McCarthy read AILIST), the content
>of this one must be improved a bit.

One could just as easily abstract the articles on cognitive psychology,
programming, algorithms, or several other topics equally relevant to AI.
Considering that AI is an interdisciplinary endeavor combining philosophy,
psychology, and computer science (for example), it seems unwise to
artificially narrow the scope of discussion.

I expect most readers of comp.AI (and Minsky, McCarthy, McDermott...
other AI people whose names start with 'M') have interests in multiple
facets of this fascinating discipline.

-Bob

Bob_Lantz@dartmouth.EDU

bwk@mitre-bedford.ARPA (Barry W. Kort) (05/03/88)

I also went back to reread Professor Minsky's theory of Free Will in the
concluding chapters of _Society of Mind_.  I am impressed with the
succinctness with which Minsky captures the essential idea that
individual behavior is generated by a mix of causal elements (agency
motivated by awareness of the state-of-affairs vis-a-vis one's value system)
and chance (random selection among equal-valued alternatives).

The only other treatises on Free Will that I resonated with were the
ones by Raymond Smullyan ("Is God a Taoist?" in _The Tao is Silent_ and
reprinted in Hofstadter and Dennett's _The Mind's I_) and the book by
Daniel Dennett (_Elbow Room: The Varieties of Free Will Worth Wanting_).

My own contribution to this discussion is summarized in the only free
verse I ever composed in my otherwise prosaic career:


                             Free Will 

                                or

                         Self Determination



          I was what I was.

                     I am what I am.

                               I will be what I will be.


--Barry Kort

cwp@otter.hple.hp.com (Chris Preist) (05/03/88)

O.S. writes...
 
>> I would like to learn how to imbue silicon with consciousness,
>> awareness, free will, and a value system.
>
>  First, by requesting that, you are underestimating yourself as a free-willing
> creature, and second, your request is self-contradicting and shows little
> understanding of matters like free will and value systems - such things cannot
> be 'given', they simply exist. (Something to bear in mind for other purposes
> besides AI...). You can write 'moral' programs, even in BASIC, if you want,
> because they will have YOUR value system....

Sorry, but not correct. While it is quite possible that the goal of 'imbuing
silicon with a value system' may never be fulfilled, it is NOT correct to
say that values simply exist.

Did my value system exist before my conception? I doubt it. I learnt it, 
through interaction with the environment and other people. Similarly, a
(possibly deterministic) program MAY be able to learn a value system, as
well as what an arch looks like. Simply because we have values, does not
mean we are free.

On the question of Free Will - simply because someone denies that we are
truly free, does not mean they have little understanding of the matter.
As Sartre pointed out, we have an overwhelming SUBJECTIVE sensation of 
freedom. Questioning this sensation is a major step, but a step which
has to be made. Up to now, the problem has been purely metaphysical. An
answer was impossible. But now, AI provides an investigation into 
deterministic intelligence. I believe it IS important for AI researchers
to make an effort to understand the philosophical arguments on both sides.
Maybe your heart will lie on one of those sides, but the mind must remain
as open as possible.

Chris Preist


P.S. Note that AI only provides a semi-decision procedure for the problem
of free will. Determinism would be proven (though even this is debatable)
if an 'intelligent' deterministic system were created. However, if objective
free will exists, then we could hack away with the infinite monkeys, all
to no avail.

vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn) (05/04/88)

In article <30800@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>The essential idea that
>individual behavior is generated by a mix of causal elements (agency
>motivated by awareness of the state-of-affairs vis-a-vis one's value system)
>and chance (random selection among equal-valued alternatives).

This is a central tenet of Systems Science.  Absolute determinism is
possible and relatively common; absolute freedom is impossible; relative
freedom is possible and relatively common.  Most (all?) real systems
involve mixes of relatively free and determined elements operating at
multiple levels of interaction/complexity.

It should be emphasized that this is not just true of mental systems,
but also of biological and even physical systems.  As one moves from the
physical to the biological and finally to the mental, the relative
importance of the free components grows.  Intelligent organisms are more
free than unintelligent organisms; which are more free than
non-organisms.  

None of the above are absolutely free.  No-one even knows what it might
mean to be absolutely free. 

>--Barry Kort


-- 
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

ok@quintus.UUCP (Richard A. O'Keefe) (05/05/88)

In article <2070013@otter.hple.hp.com>, cwp@otter.hple.hp.com (Chris Preist) writes:
> O.S. writes...
> Did my value system exist before my conception? I doubt it.
This is rather like asking whether some specific number existed before
anyone calculated it.  Numbers and value systems are symbolic/abstract
things, not material objects.  I have often wondered what philosophy
would have been like if it had arisen in a Polynesian community rather
than an Indo-European one (in Polynesian languages, numbers are _verbs_).

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/05/88)

In article <5100@pucc.Princeton.EDU> RLWALD@pucc.Princeton.EDU writes:
>    Are you saying that AI research will be stopped because when it ignores
>free will, it is immoral and people will take action against it?
Research IS stopped for ethical reasons, especially in Medicine and
Psychology.  I could envisage pressure on institutions to limit their AI
work to something which squares with our ideals of humanity.  If the
US military were not using technology which was way beyond the
capability of its not-too-bright recruits, then most of the funding
would dry up anyway.  With the Pentagon's reported concentration on
more short-term research, they may no longer be able to indulge their
belief in the possibility of intelligent weaponry.

>    When has a 'doctrine' (which, by the way, is nothing of the sort with
>respect to free will) had any such relationship to what is possible?
From this, I can only conclude that your understanding of social
processes is non-existent.  Behaviour is not classified as deviant
because it is impossible, but because it is undesirable.  I know of NO
rational theory of society, so arguments that a computational model of
human behaviour MAY be possible are utterly irrelevant.  This is a
typical academic argument, and as you know, academics have a limited
influence on society.

The question is, do most people WANT a computational model of human
behaviour?  In these days of near 100% public funding of research,
this is no longer a question that can be ducked in the name of
academic freedom.  Everyone is free to study what they want, but public
funding of a distasteful and dubious activity does not follow from
this freedom.   If funding were reduced, AI would join fringe areas such as
astrology, futurology and palmistry.  Public funding and institutional support
for departments implies a legitimacy to AI which is not deserved.

jeff@aiva.ed.ac.uk (Jeff Dalton) (05/06/88)

In article <717@taurus.BITNET> <shani%TAURUS.BITNET@CUNYVM.CUNY.EDU>
writes: 
- you are underestimating yourself as a free-willing
- creature, and second, your request is self-contradicting and shows
- little understanding of matters like free will and value systems -
- such things cannot be 'given', they simply exist.

Is this an Ayn Rand point?  It sure sounds like one.  Note the use
of `self-contradicting'.

- You can write 'moral' programs, even in BASIC, if you want,
- because they will have YOUR value system....

It is hard to see how this makes any sense whatsoever.

cwp@otter.hple.hp.com (Chris Preist) (05/06/88)

 
R. O'Keefe replies to me...

> > Did my value system exist before my conception? I doubt it.
>This is rather like asking whether some specific number existed before
>anyone calculated it.  Numbers and value systems are symbolic/abstract
>things, not material objects.  I have often wondered what philosophy
>would have been like if it had arisen in a Polynesian community rather
>than an Indo-European one (in Polynesian languages, numbers are _verbs_).
>----------

Oh no! Looks like my intuitionist sympathies are creeping out!!!

Seriously though, there IS a big difference between numbers and value
systems - empirical evidence for this is given by the fact that (most of)
society agrees on a number system, but the debate about which value system
is 'correct' leads to factionalism, terrorism, war, etc etc. Value systems
are unique to each individual, a product of his/her nature and nurture.
While they may be expressible abstractly, this does not mean
they 'exist' in abstraction (Intuitionist aside: the same could be said of
numbers). They are obviously not material objects, but this does not mean
they have Platonic Ideal existence. We are not imbued with them at birth,
but acquire them. This acquisition is perfectly compatible with determinism.


So what does this mean for AI?  Earlier, in my reply to O.S., I was arguing
that our SUBJECTIVE experience of freedom is perfectly compatible with our
existence within a deterministic system, hence AI is not necessarily
fruitless. You have drawn me out on another metaphysical point - I believe
that our intelligence (rather than our capacity for intelligence), our
value systems, and also our 'semantics' stem from our existence within the
world, rather than our essential nature. Sensation and experience are
primary. The brain is a product of the spinal cord, rather than vice-versa.
For this reason, I believe that the goals of strong AI can only be 
accomplished by techniques which accept the importance of sensation. 
Connectionism is the only such technique I know of at the moment. 


Chris Preist

yamauchi@speech2.cs.cmu.edu (Brian Yamauchi) (05/07/88)

In article <1099@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> In article <5100@pucc.Princeton.EDU> RLWALD@pucc.Princeton.EDU writes:
> >    Are you saying that AI research will be stopped because when it ignores
> >free will, it is immoral and people will take action against it?
> Research IS stopped for ethical reasons, especially in Medicine and
> Psychology.  I could envisage pressure on institutions to limit their AI
> work to something which squares with our ideals of humanity.

I can envisage pressure on institutions to limit work on sociology and
psychology to that which is compatible with orthodox
Christianity.  That doesn't mean that this is a good idea.

> If the
> US military were not using technology which was way beyond the
> capability of its not-too-bright recruits, then most of the funding
> would dry up anyway.  With the Pentagon's reported concentration on
> more short-term research, they may no longer be able to indulge their
> belief in the possibility of intelligent weaponry.

Weapons are getting smarter all the time.  Maybe soon we won't need the
not-too-bright recruits.....

> >    When has a 'doctrine' (which, by the way, is nothing of the sort with
> >respect to free will) had any such relationship to what is possible?
> From this, I can only conclude that your understanding of social
> processes is non-existent.  Behaviour is not classified as deviant
> because it is impossible, but because it is undesirable.

From this, I can only conclude that either you didn't understand the
question or I didn't understand the answer.  What do the labels that society
places on certain actions have to do with whether any action is
theoretically possible?  Anti-nuke activists may make it practically
impossible to build nuclear power plants -- they cannot make it physically
impossible to split atoms.

> The question is, do most people WANT a computational model of human
> behaviour?  In these days of near 100% public funding of research,
> this is no longer a question that can be ducked in the name of
> academic freedom.

100% public funding?????  Haven't you ever heard of Bell Labs, IBM Watson
Research Center, etc?  I don't know how it is in the U.K., but in the U.S.
the major CS research universities are actively funded by large grants from
corporate sponsors.  I suppose there is a more cooperative atmosphere here --
in fact, many of the universities here pride themselves on their close
interactions with the private research community.

Admittedly, too much of all research is dependent on government funds, but
that's another issue....

>  Everyone is free to study what they want, but public
> funding of a distasteful and dubious activity does not follow from
> this freedom.   If funding were reduced, AI would join fringe areas such as
> astrology, futurology and palmistry.  Public funding and institutional support
> for departments implies a legitimacy to AI which is not deserved.

A modest proposal: how about a cease-fire in the name-calling war?  The
social scientists can stop calling AI researchers crackpots, and the AI
researchers can stop calling social scientists idiots.

______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

ok@quintus.UUCP (Richard A. O'Keefe) (05/07/88)

In article <2070015@otter.hple.hp.com>, cwp@otter.hple.hp.com (Chris Preist) writes:
> The brain is a product of the spinal cord, rather than vice-versa.

I'm rather interested in biology; if this is a statement about human
ontogeny I'd be interested in having a reference.  If it's a statement
about phylogeny, it isn't strictly true.  In neither case do I see the
implications for AI or philosophy.  It is not clear that "develops
late" is incompatible with "is fundamental".  For example, the
sociologists hold that our social nature is the most important thing
about us.  In any case, not all sensation passes through the spinal
cord.  The optic nerve comes from the brain, not the spinal cord.
Or isn't vision "sensation"?

> For this reason, I believe that the goals of strong AI can only be 
> accomplished by techniques which accept the importance of sensation. 
> Connectionism is the only such technique I know of at the moment. 

Eh?  Now we're really getting to the AI meat.  Connectionism is about
computation; how does a connectionist network treat "sensation" any
differently from a Marr-style vision program?  Nets are interesting
machines, but there's still no ghost in them.

shani@TAURUS.BITNET (05/08/88)

In article <2070013@otter.hple.hp.com>, cwp@otter.BITNET writes:

> Did my value system exist before my conception? I doubt it. I learnt it,
> through interaction with the environment and other people. Similarly, a
> (possibly deterministic) program MAY be able to learn a value system, as
> well as what an arch looks like. Simply because we have values, does not
> mean we are free.

  Try to look at it this way: even assuming that you did learn your values
from other people (parents, teachers, TV etc.) and did not make anything up,
how did you decide what to adopt from whom? Randomly? Or by some pre-determined
factors? Doesn't it make values worthless, if you can always blame chance,
heritage or some teachers in your school for your decisions?

There is one mistake (in my opinion, of course) which repeats in many of the
postings on this subject. You must distinguish between values in
themselves (like which is your favorite color, or whether you believe in god
or not) and the practicing of your system-of-values (or alignment, as I prefer
to call it) in the given realm, the realm you 'play' on, because 'real' things
(in the manner of things that exist in the given realm) are the only things
which have a common (more or less...) meaning to all of us. Now, if you think
of values as their pure selves, and not as their practice in reality, you will
see that they are not 'real' in this manner, and therefore cannot be learnt or
'given'. Maybe one day we will be able to create machines that will have this
unique human ability to give a personal meaning to things... In fact, we can
do it already, and could for thousands of years - we create new human beings.

O.S.

-----------------------------------------------------------------------------
I think that I think, and therefore I think that I am...
-----------------------------------------------------------------------------

shani@TAURUS.BITNET (05/08/88)

In article <402@aiva.ed.ac.uk>, jeff@aiva.BITNET writes:
> Is this an Ayn Rand point?  It sure sounds like one.  Note the use
> of `self-contradicting'.

I bet you will not believe me, but I'm not sure I know who Ayn Rand is...

O.S.

ard@pdn.UUCP (Akash Deshpande) (05/09/88)

Consider a vending machine that for $.50 vends pepsi, coke or oj. After
inserting the money you make a selection and get it. You are happy.

Now consider a vending machine that has choices pepsi, coke and oj, but
always gives you only oj for $.50. After inserting the money you make
a selection, but irrespective of your selection you get oj. You may feel
cheated.

Thus, the willed result through exercise of freedom of choice may not be
related to the actual result. The basic question of freewill is - 
"Is it enough to maintain an illusion of freedom of choice, or should
the willed results be made effective?". The latter, I suppose.

Further consider the first (good) vending machine. While it was being
built, the designer really had 5 brands, but chose (freely, for whatever
reasons) to vend only the three mentioned. As long as I (as a user of the 
vending machine) don't know of my unavailable choice space, I have the
illusion of a full freedom of choice. This is where awareness comes in.
Awareness expands my choices, or equivalently, lack of awareness creates
an illusion of freewill (since you cannot choose that which you do not
know of). Note that the designer of the vending machine controls the 
freewill of the user. 
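
The two machines are easy to caricature in a few lines (purely illustrative;
price and brands as in the example above):

    def honest_machine(selection, coins):
        """Vends whatever was selected, if enough money was inserted."""
        menu = {"pepsi": 0.50, "coke": 0.50, "oj": 0.50}
        if selection in menu and coins >= menu[selection]:
            return selection     # willed result and actual result agree
        return None

    def rigged_machine(selection, coins):
        """Offers the same buttons, but the selection has no effect."""
        if coins >= 0.50:
            return "oj"          # illusion of choice: the input is ignored
        return None

    print(honest_machine("pepsi", 0.50))   # pepsi
    print(rigged_machine("pepsi", 0.50))   # oj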

Indian philosophy contends that awareness (=consciousness) is fundamental.
Freewill always exists and is commensurate with awareness. But freewill
is also an illusion when examined in the perspective of greater awareness.

Akash
--
Akash Deshpande					Paradyne Corporation
{gatech,rutgers,attmail}!codas!pdn!ard		Mail stop LF-207
(813) 530-8307 o	 			Largo, Florida 34649-2826
Like certain orifices, every one has opinions. I haven't seen my employer's!

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/09/88)

In art. <5404@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes
>a rule-based or other mechanical account of cognition and decision making
>is at odds with the doctrine of free will which underpins most Western morality

>Cockton should either prepare a brief substantiation or relegate it to the 
>cellar of outrageous vacuities crafted solely to attract attention!

Hey, that vacuity's sparked off a really interesting debate, from
which I'm learning a lot.  Don't put it in the cellar yet.
Apologies to anyone who doesn't like polemic, but I've always found it a great
way of getting the ball rolling - I would use extreme statements as a classroom
teacher to get discussion going - I hope no-one's bothered by the transfer of this
behaviour to the adult USENET.

Anyway, the simplified, and thus inadequate argument is:

	machine intelligence => determinism
	determinism => lack of responsibility
	lack of responsibility => no moral blame
	no moral blame => do whatever your rulebase says.

Now we could view morality as just another rulebase applied to output 1 of the
decision-process, a pruning operator as it were.

Unfortunately, all attempts to date to present a moral rule-base have
failed, so the chances of morality being rule-based are slim. Note that
in the study of humanity, we have few better tools now than we had in
Classical times, so there are no good reasons for expecting major advances
in our understanding of ourselves.  Hence Skinner's dismay that while Physics
had advanced much since classical times, Psychology has hardly advanced at all.
Skinner accordingly stocked his lab with high-tech rats and pigeons in an 
attempt to push back the frontiers of learning theory.

At least you don't have to clean out the computer's cage :-)

bwk@mitre-bedford.ARPA (Barry W. Kort) (05/09/88)

I was gratified to see Marty Brilliant's entry into the discussion.
I certainly agree that an intelligent system must be able to
evolve its knowledge over time, based on information supplied partly
by others and partly by its own direct experience.  Thomas Edison
had a particularly rich and accurate knowledge base because he was
a skeptic:  he verified every piece of scientific knowledge before
accepting it as part of his belief system.  As a result, he was able
to envision devices that actually worked when he built them.

I think Minsky would agree that our values are derived partly from
inheritance, partly from direct experience, and partly from internal
reasoning.  While the state of AI today may be closer to Competent
Systems rather than Expert Systems, I see no reason why the field
of AI cannot someday graduate to AW (Artificial Wisdom), in which an
intelligent system not only knows something useful, it senses that
which is worth knowing.

--Barry Kort

jeff@aiva.ed.ac.uk (Jeff Dalton) (05/10/88)

In article <726@taurus.BITNET> <shani%TAURUS.BITNET@CUNYVM.CUNY.EDU> writes:
>I bet you will not believe me

I do believe you.  But I'd still like to know how I can write moral
programs in Basic, or even ones that have my value system.

Cheers,
Jeff

smoliar@vaxa.isi.edu (Stephen Smoliar) (05/10/88)

In article <1099@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>Research IS stopped for ethical reasons, especially in Medicine and
>Psychology.  I could envisage pressure on institutions to limit their AI
>work to something which squares with our ideals of humanity.

Just WHOSE ideals of humanity did you have in mind?  I would not be surprised
at the proposition that humanity, taken as a single collective, would not be
able to agree on any single ideal;  that would just strike me as another
manifestation of human nature . . . a quality for which the study of artificial
intelligence can develop great respect.  Back when I was a callow freshman, I
was taught to identify Socrates with the maxim, "Know thyself."  As an
individual who has always been concerned with matters of the mind, I can
think of no higher ideal to which I might aspire than to know what it is
that allows myself to know;  and I regard artificial intelligence as an
excellent scientific approach to the pursuit of this ideal . . . one which
enables me to test flights of my imagination with concrete experimentation.
Perhaps Gilbert Cockton would be kind enough to let us know what it is that
he sees in artificial intelligence research that does not square with his
personal ideals of humanity (whatever they may be);  and I hope he does not
confuse the sort of brute force engineering which goes into such endeavours
as "smart weapons" with scientific research.

>If the
>US military were not using technology which was way beyond the
>capability of its not-too-bright recruits, then most of the funding
>would dry up anyway.  With the Pentagon's reported concentration on
>more short-term research, they may no longer be able to indulge their
>belief in the possibility of intelligent weaponry.
>
Which do you want to debate, ethics or funding?  The two have a long history
of being immiscible.  The attitude which our Department of Defense takes
towards truly basic research is variable.  Right now, times are hard (but
then they don't appear to be prosperous in most of Europe either).  We
happen to have an administration that is more interested in guns than brains.
We have survived such periods before, and I anticipate that we shall survive
this one.  However, a wholesale condemnation of funding on grounds of
ethics doesn't gain very much other than a lot of bad feeling.  Fortunately,
we have benefited from the fat years to the extent that the technology has
become affordable enough that some of us can pursue more abstract
studies of artificial intelligence with cheaper resources than ever before.
Anyone who REALLY doesn't want to take what he feels is "dirty" money can
function with much smaller grants from "cleaner" sources (or even, perhaps,
work out of his garage).
>
>The question is, do most people WANT a computational model of human
>behaviour?

Since when do "most people" determine the agenda of any scientific inquiry.
Did "most people" care whether or not this planet was the center of the
cosmos.  The people who cared the most were navigators, and all they cared
about was the accuracy of their charts.  The people who seemed to care the
most about Darwin were the ones who were most obsessed with the fundamental
interpretation of scripture.  This may offend sociological ideals;  but
science IS, by its very nature, an elite profession.  A scientist who lets
"most people" set the course of his inquiry might do well to consider the
law or the church as an alternative profession.

>  Everyone is free to study what they want, but public
>funding of a distasteful and dubious activity does not follow from
>this freedom.

And who is to be the arbiter of taste?  I can imagine an ardent Zionist who
might find the study of German history, literature, or music to be distasteful
to an extreme.  (I can remember when it was impossible to hear Richard Wagner
or Richard Strauss in concert in Israel.)  I can imagine political scientists
who might find the study of hunter-gatherer cultures to be distasteful for
having no impact on their personal view of the world.  I have about as much
respect for such tastes as I have for anyone who would classify artificial
intelligence research as "a distasteful and dubious activity."

>   If funding were reduced, AI would join fringe areas such as
>astrology, futurology and palmistry.  Public funding and institutional support
>for departments implies a legitimacy to AI which is not deserved.


Of course, those "fringe areas" do not get their funding from the government.
They get it through their own private enterprising, by which they convince
those "most people" cited above to part with hard-earned dollars (after the
taxman has taken his cut).  Unfortunately, scientific research doesn't "sell"
quite so well, because it is an arduous process with no quick delivery.
Gilbert Cockton still has not made it clear, on scientific grounds at any
rate, why AI does not deserve this so-called "legitimacy."  In a subsequent
article, he has attempted to fall back on what I like to call the
what-a-piece-of-work-is-man line of argument.  Unfortunately, this
approach is emotional, not scientific.  Why he has to draw upon emotions
must only be because he cannot muster scientific arguments to make his
case.  Fortunately, those of us who wish to pursue a scientific research
agenda need not be deterred by such thundering.  We can devote our attention
to the progress we make in our laboratories.

verma@hpscad.dec.com (Virendra Verma, DTN 297-5510, MRO1-3/E99) (05/10/88)

>Consider a vending machine that for $.50 vends pepsi, coke or oj. After
>inserting the money you make a selection and get it. You are happy.
 
>Now consider a vending machine that has choices pepsi, coke and oj, but
>always gives you only oj for $.50. After inserting the money you make
>a selection, but irrespective of your selection you get oj. You may feel
>cheated.
 
>Thus, the willed result through exercise of freedom of choice may not be
>related to the actual result. The basic question of freewill is - 
>"Is it enough to maintain an illusion of freedom of choice, or should
>the willed results be made effective?". The latter, I suppose.
 
>Further consider the first (good) vending machine. While it was being
>built, the designer really had 5 brands, but chose (freely, for whatever
>reasons) to vend only the three mentioned. As long as I (as a user of the 
>vending machine) don't know of my unavailable choice space, I have the
>illusion of a full freedom of choice. This is where awareness comes in.
>Awareness expands my choices, or equivalently, lack of awareness creates
>an illusion of freewill (since you cannot choose that which you do not
>know of). Note that the designer of the vending machine controls the 
>freewill of the user. 
 
>Akash

	It seems to me that you are mixing "free will" and "outcome". I think
	"free will" is probabilistically related to the "outcome". Isn't that the
	essence of the "law of karma", when Krishna says that you are free
	to exercise your will (i.e., the act of doing something, which is the
	karma element; "insertion of coins" is an act of free will in your
	example)? You have no control over the "results" element of your
	free will. The "awareness" element simply improves the probability
	of the "outcome". Even in your first example with the good machine, you
	may not get what you want, because there may be a power failure
	right after you insert the coin!!

	- Virendra

shani@TAURUS.BITNET.UUCP (05/12/88)

In article <415@aiva.ed.ac.uk>, jeff@aiva.BITNET writes:
>
> I do believe you.  But I'd still like to know how I can write moral
> programs in Basic, or even ones that have my value system.
>
Well, I said this only as a figure of speech, but still, if, for instance,
you write a video game or something like that, you may encounter some points
in which you have to decide what the program will do on a 'value' basis
(balancing difficulty, bonus points, things like that...). This is, more or
less, what I meant...

O.S.

jbn@glacier.STANFORD.EDU (John B. Nagle) (05/12/88)

In article <1115@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>
>Unfortunately, all attempts to date to present a moral rule-base have
>failed, so the chances of morality being rule-based are slim.

     There have been attempts, such as the following.

      "I.   No robot may harm a human being, or by inaction cause one to come
	    to harm.

      II.   A robot must obey all orders from human beings, unless this
	    conflicts with the First Law.

     III.   A robot must act to protect its own existence, unless this
	    conflicts with the First or Second Law."

					(I. Asimov, circa 1955)

Yes, we don't know how to implement this yet.  Yes, it's a morality for
slaves.  But it is an important concept.  As we work toward mobile robots,
it is worth keeping in mind.

					John Nagle

ansley@sunybcs.UUCP (05/12/88)

In article <17442@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>In article <1115@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>>
>>Unfortunately, all attempts to date to present a moral rule-base have
>>failed, so the chances of morality being rule-based are slim.
>
>     There have been attempts, such as the following.
>     [Statement of Isaac Asimov's 3 Laws of Robotics deleted.]
>Yes, we don't know how to implement this yet.  Yes, it's a morality for
>slaves.  But it is an important concept.  As we work toward mobile robots,
>it is worth keeping in mind.
>
>					John Nagle


There is also the problem that you have to define "human being" in a way that
the robot can infallibly recognize.  Asimov has been using this shortcoming of
the "Three Laws" as the basis for much of his recent fiction concerning
robots.

The problem would seem to be most severe with very primitive or very
sophisticated robots.  The primitive ones might be hard-pressed to recognize
anything as a human being, while the sophisticated ones might begin to wonder
if they didn't perhaps qualify as human themselves.  (This latter idea is due
to Asimov - I can't recall the story title.)


---------------------------------------------------------------------------
William H. Ansley, Dept. of CS, 226 Bell Hall, SUNY at Buffalo, NY  14260.

ansley@gort.cs.buffalo.EDU | ansley@sunybcs.BITNET | ansley@sunybcs.UUCP

bwk@mitre-bedford.ARPA (Barry W. Kort) (05/13/88)

I was glad to see John Nagle bring up Asimov's 3 moral laws of robots.
Perhaps the time has come to refine these just a bit, with the intent
of shaping them into a more implementable rule-base.

I propose the following variation on Asimov:

      I.   A robot may not harm a human or other sentient being,
           or by inaction permit one to come to harm.

     II.   A robot may respond to requests from human beings,
           or other sentient beings, unless this conflicts with
           the First Law.

    III.   A robot may act to protect its own existence, unless this
           conflicts with the First Law.

     IV.   A robot may act to expand its powers of observation and
           cognition, and may enlarge its knowledge base without limit.

Can anyone propose a further refinement to the above?
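
Just to see what the priority ordering commits a robot to, here is a toy
sketch of one reading of such a rule-base (purely illustrative - the
placeholder predicates and the dictionary encoding are invented, each
predicate hides an unsolved recognition and prediction problem, and the
"by inaction" clause of Law I is not handled at all):

    # Placeholder predicates: each hides an unsolved problem.
    def harms(action):       return action.get("harms", False)
    def requested(action):   return action.get("requested", False)
    def protective(action):  return action.get("self_protective", False)
    def informative(action): return action.get("informative", False)

    def permitted(action):
        """Evaluate a proposed action against the four laws in priority order."""
        if harms(action):
            return False        # Law I overrides everything else
        if requested(action):
            return True         # Law II: may respond to requests
        if protective(action):
            return True         # Law III: may protect its own existence
        if informative(action):
            return True         # Law IV: may expand observation and knowledge
        return False

    print(permitted({"requested": True}))                   # True
    print(permitted({"requested": True, "harms": True}))    # False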

--Barry Kort

smoliar@vaxa.isi.edu (Stephen Smoliar) (05/13/88)

In article <1115@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>
>Anyway, the simplified, and thus inadequate argument is:
>
>	machine intelligence => determinism
>	determinism => lack of responsibility
>	lack of responsibility => no moral blame
>	no moral blame => do whatever your rulebase says.
>
Until this argument gets fleshed out further, I would argue that its weakest
link is on the second line.  There are plenty of deterministic systems which
are too complex for us to comprehend in any coherent manner.  After all,
physics begins to elude our grasp as soon as we consider more than two
bodies!  When confronted with such complexity, the only way we can deal
with it is through abstraction;  and there lies a possible situation in
which every individual must make choices (what will be "abstracted away"
and what remains as "priority items") and must, consequently, accept
responsibility for the commitment to those choices.  I am responding
to a simplified sketch with an equally simple sketch of my own, but
perhaps this can provide a basis for discussion in less flamboyant
use of language.

marsh@mitre-bedford.ARPA (Ralph J. Marshall) (05/13/88)

In article <31738@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>I was glad to see John Nagle bring up Asimov's 3 moral laws of robots.
>Perhaps the time has come to refine these just a bit, with the intent
>of shaping them into a more implementable rule-base.
>
>I propose the following variation on Asimov:
>
>     IV.   A robot may act to expand its powers of observation and
>           cognition, and may enlarge its knowledge base without limit.
>
I don't think I want the U.S. government "expanding its powers of
observation without limit", since I still think I am entitled to some
privacy.  I therefore certainly don't want some random robot, controlled
by and reporting to God knows who, attempting to gain as much information
as it can possibly acquire.

On a different note, your change of wording from human to sentient being
is too vague for this type of rule.  While I agree that other lifeforms
that we may encounter should be given the same respect we reserve for
other humans, I don't think we would ever want to choose a sentient robot
over a human in a life or death situation in which only one could be saved.
(This was the rationale for sending Lt. Cmdr. Data into a hostile situation
_alone_ on a recent Star Trek and I agreed with it entirely.  Androids/robots/
artificial persons are more expendable than people.)

decot@hpisod2.HP.COM (Dave Decot) (05/14/88)

> I propose the following variation on Asimov:
> 
>       I.   A robot may not harm a human or other sentient being,
>            or by inaction permit one to come to harm.
> 
>      II.   A robot may respond to requests from human beings,
>            or other sentient beings, unless this conflicts with
>            the First Law.
> 
>     III.   A robot may act to protect its own existence, unless this
>            conflicts with the First Law.
> 
>      IV.   A robot may act to expand its powers of observation and
>            cognition, and may enlarge its knowledge base without limit.
> 
> Can anyone propose a further refinement to the above?

I like this much better, but "harm" needs some definition or qualification.

It might be inferred (by a robot?) that a robot must stop sentient beings
from smoking, jumping out of airplanes, driving a car, etc.

I don't see why II and III have protections against violating I and IV doesn't.
Does this imply it's OK to zap somebody standing in front of your sensors?
If all the laws must be satisfied at all times, the qualifications in II
and III are redundant.

Dave Decot
hpda!decot

eyal@COYOTE.STANFORD.EDU (Eyal Mozes) (05/16/88)

In article <434@aiva.ed.ac.uk> jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
>In article <8805092354.AA05852@ucbvax.Berkeley.EDU> eyal@COYOTE.STANFORD.EDU (Eyal Mozes) writes:
>>all the evidence I'm familiar with points to the fact that it's
>>always possible for a human being to control his thoughts by a
>>conscious effort.
>
>It is not always possible.  Think, if no simpler example will do, of
>obsessives.  They have thoughts that persist in turning up despite
>efforts to eliminate them.

First of all, even an obsessive can, at any given moment, turn his
thoughts away from the obsession by a conscious effort. The problem of
obsession is that this conscious effort has to be much greater than
normal, and also that, whenever the obsessive is not consciously
trying to avoid those thoughts, they do persist in turning up.

Second, an obsession is caused by anxiety and self-doubt, which are the
result of thinking the obsessive has done, or failed to do, in the
past. And, by deliberately training himself, over a period of time, in
more rational thinking, sometimes with appropriate professional help,
the obsessive can eliminate the excessive anxiety and self-doubt and
thus cure the obsession. So, indirectly, even the obsession itself is
under the person's volitional control.

>Or, consider when you start thinking about something.  An idea just
>occurs and you are thinking it: you might decide to think about
>something, but you could not have decided to decide, decided to
>decide to decide, etc. so at some point there was no conscious
>decision.

Of course, the point at which you became conscious (e.g. woke up from
sleep) was not a conscious decision. But as long as you are conscious,
it is your choice whether to let your thoughts wander by chance
association or to deliberately, purposefully control what you're
thinking about. And whenever you stop your thoughts from wandering and
start thinking on a subject of your choice, that action is by conscious
decision.

This is why I consider Ayn Rand's theory of free will to be such an
important achievement - because it is the only free-will theory
directly confirmed by what anyone can observe in his own thoughts.

	Eyal Mozes

	BITNET:	eyal%coyote@stanford
	ARPA:	eyal@coyote.stanford.edu

jeff@aiva.ed.ac.uk (Jeff Dalton) (05/16/88)

In article <8805092354.AA05852@ucbvax.Berkeley.EDU> eyal@COYOTE.STANFORD.EDU (Eyal Mozes) writes:
1 all the evidence I'm familiar with points to the fact that it's
1 always possible for a human being to control his thoughts by a
1 conscious effort.

In article <434@aiva.ed.ac.uk> jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
2 It is not always possible.  Think, if no simpler example will do, of
2 obsessives.  They have thoughts that persist in turning up despite
2 efforts to eliminate them.

In article <8805151907.AA01702@ucbvax.Berkeley.EDU> eyal@COYOTE.STANFORD.EDU (Eyal Mozes) writes:
>First of all, even an obsessive can, at any given moment, turn his
>thoughts away from the obsession by a conscious effort. The problem of
>obsession is that this conscious effort has to be much greater than
>normal, and also that, whenever the obsessive is not consciously
>trying to avoid those thoughts, they do persist in turning up.

That an obsessive has some control over his thoughts does not mean
he can always control his thoughts.  If all you mean is that one can
always at least temporarily change what one is thinking about and
can eventually eliminate obsessive thoughts or the tune that's running
through one's head, no one would be likely to disagree with you,
except where you seem to feel that obsessions are just the result
of insufficiently rational thinking in the past.

>So, indirectly, even the obsession itself is  under the person's
>volitional control.

I would be interested in knowing what you think *isn't* under a
person's volitional control.  One would normally think that having
a sore throat is not under conscious control even though one can
choose to do something about it or even to try to prevent it.

2 Or, consider when you start thinking about something.  An idea just
2 occurs and you are thinking it: you might decide to think about
2 something, but you could not have decided to decide, decided to
2 decide to decide, etc. so at some point there was no conscious
2 decision.

>Of course, the point at which you became conscious (e.g. woke up from
>sleep) was not a conscious decision. But as long as you are conscious,
>it is your choice whether to let your thoughts wander by chance
>association or to deliberately, purposefully control what you're
>thinking about. And whenever you stop your thoughts from wandering and
>start thinking on a subject of your choice, that action is by conscious
>decision.

But where does the "subject of your own choice" come from?  I wasn't
thinking of letting one's thoughts wander, although what I said might
be interpreted that way.  When you decide what to think about, did
you decide to decide to think about *that thing*, and if so how did
you decide to decide to decide, and so on?

Or suppose we start with a decision, however it occurred.  I decide to
read your message.  As I do so, it occurs to me, at various points,
that I disagree and want to say something in reply.  Note that these
"occurrences" are fairly automatic.  Conscious thought is involved,
but the exact form of my reply is a combination of conscious revision
and sentences, phrases, etc. that are generated by some other part of
my mind.  I think "he thinks I'm just talking about letting the mind
wander and thinking about whatever comes up."  That thought "just
occurs".  I don't decide to think exactly that thing.  But my
consciousness has that thought and can work with it.  It helps provide
a focus.  I next try to find a reply and begin by reading the passage
again.  I notice the phrase "subject of your own choice" and think
then write "But where does the...".  

Of course, I might do other things.  I might think more explicitly
about what I'm doing.  I might even decide to think explicitly rather
than just do so.  But I cannot consciously decide every detail of
every thought.  There are always some things that are provided by
other parts of my mind.

Indeed, I am fortunate that my thoughts continue along the lines I
have chosen rather than branch off on seemingly random tangents.  But
the thoughts of some people, schizophrenics say, do branch off.  It is
clear in many cases that insufficient rationality did not cause their
problem: it is one of the consequences, not one of the causes.

As an example of "other parts of the mind", consider memory.  Suppose
I decide to remember the details of a particular event.  I might not
be able to, but if I can I do not decide what these memories will be:
they are given to me by some non-conscious part of my mind.

>This is why I consider Ayn Rand's theory of free will to be such an
>important achievement - because it is the only free-will theory
>directly confirmed by what anyone can observe in his own thoughts.

As far as you have explained so far, Rand's theory is little more
than simply saying that free will = the ability to focus consciousness,
which we can all observe.  Since we can all observe this without the
aid of Rand's theory, all Rand seems to be saying is "that's all there
is to it".

-- Jeff

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/17/88)

In article <5511@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>>	2. determinism => lack of responsibility
>I would argue that its weakest link is on the second line. 

The objection runs
    complex determinism => not coherently comprehensible => must abstract
     => arbitrary prioritisation => responsibility for these choices

and thus transitively, determinism => responsibility, not the lack of it.
I respond with an (unelaborated) attack on the idea of coherent comprehension, 
and replace it with an appeal to intuition.

 complex determinism => rely on insight => accept/reject insight 
      => responsibility for these choices

That is, complexity can be managed by 'know how' and 'know when' without
'knowing what'.  I do not need to be a physiologist to ride a bicycle.
Coherent comprehension is not the Queen of Understanding.

But what are the implications of both these chains of reasoning for the first,
which they so obviously contradict? They imply a contradiction of course, and in
true dialectical fashion, I'm sticking with all of them because they all have a
grain or 100 of truth in them, and the contradictions could be removed by adding
the appropriate contextual contingencies.  For the logically handicapped, it 
should be clear now why a logical contradiction is not the end of the world, 
just an omen! And the omen is, you never addressed my link, you just presented
another, intriguing and, to me, valid one.

My point was that MENTAL determinism and MORAL responsibility are incompatible.
I cite the whole ethos of Western (and Muslim? and??) justice as evidence.  If
AI research has to assume something which undermines fundamental values, it
better have a good answer beyond academic freedom, which would also justify
unrestricted embryo research, forced separation of twins into controlled
upbringings, unrestricted use of pain in learning research, ...

It's a question of which you value most, and where you feel your
responsibilities lie.  AI isn't the first area to go this way.  Say
hello to your big brothers Behaviourism and Sociobiology :-)

>perhaps .. can provide basis for discussion in less flamboyant use of language.
Don't be such a bore :-)  This is how I wrote before I entered the
grey world of science and mathematics, and if it was good enough for
history, philosophy, psychology and sociology at college (the sociologists
did grumble at times!), it's good enough for anything.  No time is to ever be
had by me for the prescriptive barbarism of technical writing guidelines. 

Anyway, if you stripped AI of flamboyant language, there'd be little left :-)

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/17/88)

> Just WHOSE ideals of humanity did you have in mind? [Which do you know of?]
> Back when I was a callow freshman, I was taught to identify Socrates 
> with the maxim, "Know thyself."  As an individual who has always been 
> concerned with matters of the mind, I can think of no higher ideal to 
> which I might aspire than to know what it is that allows myself to know;
Plato never had to apologise for Socrates requesting discs of self-knowledge
from Athenian youth.  Self-knowledge requires no semantic networks.

> I regard artificial intelligence as an excellent scientific approach to the
> pursuit of this ideal . . . one which enables me to test flights of my 
> imagination with concrete experimentation.
I don't think a Physicist or an experimental psychologist would agree
with you. AI is DUBIOUS, because so many DOUBT that anyone in AI has an
elaborated view of truth and falsehood in AI research. So tell me, as
a scientist, how we should judge AI research?  In established sciences, the
grounds are clear.  Certainly, nothing in AI to date counts as a controlled
experiment, using a representative population, with all irrelevant variables
under control.  Given the way AI programs are written, there is no way of even
knowing what the independent variable is, and how it is being driven.  I don't
think you know what experimental method is, or what a clearly formulated 
hypothesis is either.  You lose your science badge.

> let us know what it is that he sees in artificial intelligence research that 
> does not square with his personal ideals of humanity.
I have. Don't you appreciate that free will (some degree of choice) is essential
to humanist ideals?  Read about the Renaissance which spawned the Science on
whose shirt tails AI rides.  Perhaps then you will understand your intellectual
heritage.  Do you believe that an artificial AI researcher is possible?  Can a
machine of its own volition and with its own imagination push back the frontiers
of AI knowledge?  The true Turing test is whether an AI researcher can be
fooled that another machine is an AI researcher!  [Nah, far too easy :-)]

> I have about as much respect for such tastes as I have for anyone who'd 
> classify AI research as "a distasteful and dubious activity."
Then think again. It IS dubious - look how many academics regard the area as
lacking any concern for methodology, being cavalier with the truth and evasive
about short-term hypotheses under test.  No-one in AI seems to be concerned
with the truth question, and that's serious where academic study is
concerned.  I'm not saying AI researchers are to BLAME, but they certainly
are easy TO blame.  The lack of concern is DISTASTEFUL.  As is the pursuit of
areas of enquiry in arrogant self-imposed ignorance of other relevant and
available knowledge.  As far as academic values are concerned, it is
distasteful to ignore established bodies of knowledge, techniques, concepts
and arguments.  It is distasteful in its lack of humility and the arrogance that
a mathematically or technically trained person can wade in, uninformed and
hope to have any insight about problems when their understanding is so obviously
inadequate when compared to the best traditions available in real disciplines.
I cite 4 years reading of comp.ai.digest seminar abstracts as evidence.

> Gilbert Cockton still has not made it clear, on scientific grounds at any
> rate, why AI does not deserve this so-called "legitimacy."  
I do not know how to argue this on "scientific grounds".  I take it that you
do have some hold on the truth question.  Please elaborate.  If you tell me
your rules of certainty, I will be better placed to force my arguments into
the epistemological harness of your choice.

> fall back on what I ... call the what-a-piece-of-work-is-man line of argument
What are you calling this?  
> Unfortunately, this  approach is emotional, not scientific.
Emotive, not emotional.  You cannot judge my emotional state from where you
are.  Again, not the mark of observational science.
> must only be because he cannot muster scientific arguments to make his case.
Can you?  What do you understand by a scientific argument?  Who's thundering!

> We can devote our attention to the progress we make in our laboratories.
Now that's what started this whole debate.  You see, it's hard to see what
progress is being made, how progress will be made (methodological
foundations) and what anyone is doing.  Never once have I seen anything that
goes beyond some bright fantasising with mathematical semantics in
comp.ai.digest seminar announcements.  This is not science, and there can be
no progress in this methodological idealism.  Facts man, find some.

colin@pdn.UUCP (Colin Kendall) (05/18/88)

In article <445@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <8805092354.AA05852@ucbvax.Berkeley.EDU> eyal@COYOTE.STANFORD.EDU (Eyal Mozes) writes:

> >This is why I consider Ayn Rand's theory of free will to be such an
> >important achievement - because it is the only free-will theory
> >directly confirmed by what anyone can observe in his own thoughts.
> 
> As far as you have explained so far, Rand's theory is little more
> than simply saying that free will = the ability to focus consciousness,
> which we can all observe.  Since we can all observe this without the
> aid of Rand's theory, all Rand seems to be saying is "that's all there
> is to it".
> 

I agree. I'm a strict determinist. But let me make these observations:

1. We can never know whether free will exists, because: If it does
not exist, i.e. if everything is determined, then no matter how
long we discuss it nor how convincingly we argue that free will
does exist, it doesn't matter; we are predestined to do so.
That is, we may be predestined to fool ourselves into believing
in free will; in fact, I believe, many of us are, including Ayn Rand;
but that is no argument for the existence of free will.

2. Therefore, each person *believes* in free will or doesn't.
Free willers may describe this as a result of rational arguments, or
as a leap of faith; I consider it the consequence of an individual's
genetic makeup and life experiences, including being subjected to
arguments for and against free will. 

So I think arguing about whether free will exists is like arguing about
how many angels can dance on the head of a pin - fascinating, no
doubt, to some, but rather pointless. What is interesting is
whether we will try to program free will into artificially intelligent
entities. 

Let us suppose that some AI researchers have constructed such an
entity, and they ask it, "Do you have free will?", and it says,
"Of course". Now, either the researchers will have expected that
answer because of the way they programmed it, or they won't; and
the researchers are either free willers or determinists.

If they had expected that answer, and they are free willers, then they
will not have accomplished anything, because the entity is simply saying
what they programmed it to say, and is not exhibiting free will.

If they had expected that answer, and they are determinists, then they
will be able to say, like me, "See! This poor entity was predestined (by us) 
to believe that he has free will."

If they had not expected that answer, and they are free willers, then they
can say, like me, "See! We have created a being so intelligent that
he, like us, has free will."

If they had not expected that answer, and they are determinists, then they
will attempt to fix their programming error.
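
Just to make those four cases concrete, here is a toy sketch (my own,
purely illustrative - it is nobody's actual program, and the names
expected_answer and researcher_stance are invented for the purpose):

def interpretation(expected_answer, researcher_stance):
    # How the researchers would read the entity's "Of course".
    if expected_answer and researcher_stance == "free willer":
        return "nothing shown: it only says what it was programmed to say"
    if expected_answer and researcher_stance == "determinist":
        return "this poor entity was predestined (by us) to believe in free will"
    if not expected_answer and researcher_stance == "free willer":
        return "we have created a being so intelligent that it has free will"
    return "a programming error: go fix it"

for expected in (True, False):
    for stance in ("free willer", "determinist"):
        print(expected, stance, "->", interpretation(expected, stance))
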
-- 
Colin Kendall				Paradyne Corporation
{gatech,akgua}!usfvax2!pdn!colin	Mail stop LF-207
Phone: (813) 530-8697			8550 Ulmerton Road, PO Box 2826
					Largo, FL  33294-2826

egranthm@jackson.UUCP (Ewan Grantham) (05/18/88)

In article <31738@linus.UUCP>, bwk@mbunix.UUCP writes:
> 
> I propose the following variation on Asimov:
> 
>       I.   A robot may not harm a human or other sentient being,
>            or by inaction permit one to come to harm.
> 
>      II.   A robot may respond to requests from human beings,
>            or other sentient beings, unless this conflicts with
>            the First Law.
> 
>     III.   A robot may act to protect its own existence, unless this
>            conflicts with the First Law."
> 
>      IV.   A robot may act to expand its powers of observation and
>            cognition, and may enlarge its knowledge base without limit.
> 
> Can anyone propose a further refinement to the above?
> 
> --Barry Kort

Well, this still seems to leave the question of how to properly
define a sentient being, so that the robot can have no doubt about
whether a being is sentient or not. Again, the robot may decide that
it itself is sentient, and therefore place self-preservation
above the first law by applying the first law to itself.

I also find that the human/sentient statement doesn't go far enough.
For example, should a robot be able to kill a dog just for barking
at it?  Especially if the robot is quite capable of being bitten
with no harm?
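
To make the gap concrete, here is a rough sketch (mine, entirely
hypothetical - neither Asimov's nor Barry's code) of the proposed laws
written as checks applied in priority order.  Everything hangs on an
is_sentient predicate, which is exactly what nobody has supplied:

def is_sentient(being):
    # The undefined predicate the whole scheme depends on.
    raise NotImplementedError

def permissible(beings_harmed):
    """May the robot perform an action that harms these beings?"""
    # First Law: may not harm a sentient being, or by inaction permit harm.
    if any(is_sentient(b) for b in beings_harmed):
        return False
    # The Second, Third and Fourth Laws all grant permissions ("may ..."),
    # each subordinate to the First, so once the First is satisfied the
    # action is allowed.
    return True

And notice that if the robot concludes is_sentient(robot) is true, then
harm to itself falls under that First Law check, which is the loophole
I was pointing at.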

Ewan Grantham

(uunet!nuchat!jackson!egranthm)

My bosses aren't responsible for me, and vice-versa.

bill@proxftl.UUCP (T. William Wells) (05/22/88)

In article <445@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <8805092354.AA05852@ucbvax.Berkeley.EDU> eyal@COYOTE.STANFORD.EDU (Eyal Mozes) writes:
>...
> As far as you have explained so far, Rand's theory is little more
> than simply saying that free will = the ability to focus consciousness,
> which we can all observe.  Since we can all observe this without the
> aid of Rand's theory, all Rand seems to be saying is "that's all there
> is to it".
>
> -- Jeff

Actually, from what I have read so far, it seems that the two of
you are arguing different things; moreover, eyal@COYOTE.STANFORD
.EDU (Eyal Mozes) has committed, at the very least, a sin of
omission: he has not explained Rand's theory of free will
adequately.

Following is the Objectivist position as I understand it.  Please
be aware that I have not included everything needed to justify
this position, nor have I been as technically correct as I might
have been; my purpose here is to trash a debate which seems to be
based on misunderstandings.

To those of you who want a justification, I will (given enough
interest) eventually be doing so on talk.philosophy.misc, where I
hope to be continuing my discussion of Objectivism.  Please
direct any followups to that group.

Entities are the only causes: they cause their actions.  Their
actions may be done to other entities, and this may require the
acted on entity to cause itself to act in some way.  In that
case, one can use `cause' in a derivative sense, saying: the
acting entities (the agents) caused the acted-upon entities (the
patients) to act in a certain way.  One can also use `cause' to
refer to a chain of such.  This derivative sense is the normal
use for the word `cause', and there is always an implied action.

If, in order that an entity can act in some way, other entities
must act on it, then those agents are a necessary cause for the
patient's action.  If, given a certain set of actions performed
by some entities, a patient will act in a certain way, then those
agents are a sufficient cause for the patient's actions.
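
In more mundane terms - a toy sketch of my own, and nothing in it is
specifically Objectivist - the two definitions can be read as checks
over a set of observed cases:

cases = [
    {"agent_acts": {"heat applied"}, "patient_acted": True},
    {"agent_acts": {"heat applied", "lid on"}, "patient_acted": True},
    {"agent_acts": set(), "patient_acted": False},
]

def necessary(act, cases):
    # Necessary cause: the patient never acts unless this agent-action occurred.
    return all(act in c["agent_acts"] for c in cases if c["patient_acted"])

def sufficient(act, cases):
    # Sufficient cause: whenever this agent-action occurs, the patient acts.
    return all(c["patient_acted"] for c in cases if act in c["agent_acts"])

print(necessary("heat applied", cases), sufficient("heat applied", cases))

On this reading, the assertion below is that, for human thinking, the
sufficient() test always comes out false while the necessary() test may
come out true.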

The Objectivist version of free will asserts that there are (for
a normally functioning human being) no sufficient causes for what
he thinks.  There are, however, necessary causes for it.

This means that while talking about thinking, no statement of the
form "X(s) caused me to think..." is an valid statement about
what is going on.

In terms of the actual process, what happens is this: various
entities provide the material which you base your thinking on
(and are thus necessary causes for what you think), but an
action, not necessitated by other entities, is necessary to
direct your thinking.  This action, which you cause, is
volition.

> But where does the "subject of your own choice" come from?  I wasn't
> thinking of letting one's thoughts wander, although what I said might
> be interpreted that way.  When you decide what to think about, did
> you decide to decide to think about *that thing*, and if so how did
> you decide to decide to decide, and so on?

Shades of Zeno!  One does not "decide to decide" except when one
does so in an explicit sense.  ("I was waffling all day; later
that evening I put my mental foot down and decided to decide once
and for all.") Rather, you perform an act on your thoughts to
direct them in some way; the name for that act is "decision".

Anyway, in summary, Rand's theory is not just that "free will =
the ability to focus consciousness" (actually, to direct certain
parts of one's consciousness), but that this act is not
necessitated by other entities.

bwk@mitre-bedford.ARPA (Barry W. Kort) (05/23/88)

Ewan Grantham has insightfully noted that our draft "laws of robotics"
begs the question, "How does one recognize a fellow sentient being?"

At a minimum, a sentient being is one who is able to sense its environment,
construct internal maps or models of that environment, use those maps
to navigate, and embark on a journey of exploration.  By that definition,
a dog is sentient.  So the robot has no business killing a barking dog.
Anyway, the barking dog is no threat to the robot.  A washing machine
isn't scared of a barking dog.  So why should a robot fear one?
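
As a rough checklist - my own sketch, purely illustrative, with field
names invented for the purpose - that working definition amounts to
something like:

from dataclasses import dataclass

@dataclass
class Being:
    senses_environment: bool
    builds_internal_maps: bool
    navigates_by_maps: bool
    explores: bool

def sentient(b):
    # Sentient iff all four criteria of the working definition hold.
    return (b.senses_environment and b.builds_internal_maps
            and b.navigates_by_maps and b.explores)

dog = Being(True, True, True, True)
washing_machine = Being(False, False, False, False)
print(sentient(dog), sentient(washing_machine))   # True False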

--Barry Kort

smoliar@vaxa.isi.edu (Stephen Smoliar) (05/24/88)

In article <1173@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>  No-one in AI seems to be concerned
>with the truth question, and that's serious where academic study is
>concerned.  I'm not saying AI researchers are to BLAME, but they certainly
>are easy TO blame.  The lack of concern is DISTASTEFUL.  As is the pursuit of
>areas of enquiry in arrogant self-imposed ignorance of other relevant and
>available knowledge.  As far as academic values are concerned, it is
>distasteful to ignore established bodies of knowledge, techniques, concepts
>and arguments.  It is distasteful in its lack of humility and the arrogance that
>a mathematically or technically trained person can wade in, uninformed and
>hope to have any insight about problems when their understanding is so obviously
>inadequate when compared to the best traditions available in real disciplines.
>I cite 4 years reading of comp.ai.digest seminar abstracts as evidence.
>
Now that Gilbert Cockton has revealed the source of his knowledge of artificial
intelligence, I must say that I agree whole-heartedly with those who have
suggested that the discussion be moved to a "softer" forum, such as a "talk"
newsgroup.  Readers who sample this newsgroup only occasionally might think
that he was writing from a position of actual experience in artificial
intelligence, and that would be a dangerous confusion!

smoliar@vaxa.isi.edu (Stephen Smoliar) (05/24/88)

In article <1172@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>  This is how I wrote before I entered the
>grey world of science and mathematics, and if it was good enough for
>history, philosophy, psychology and sociology at college (the sociologists
>did grumble at times!), it's good enough for anything.  No time is to ever be
>had by me for the prescriptive barbarism of technical writing guidelines. 
>
Perhaps this explains why I have never encountered the name of Gilbert
Cockton in my reading experiences which include artificial intelligence,
computer science, cognitive psychology, mathematics, and philosophy!

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/24/88)

In article <205@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>interest) eventually be doing so on talk.philosophy.misc, 
>Please direct any followups to that group.
Can I please just ask one thing here, as it is relevant?
>
>The Objectivist version of free will asserts that there are (for
>a normally functioning human being) no sufficient causes for what
>he thinks.  There are, however, necessary causes for it.
Has this any bearing on the ability of a machine to simulate human
decision making?  It appears so, but I'd be interested in how you think it
can be extended to yes/no/don't know about the "pure" AI endeavour.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

	     The proper object of the study of humanity is humans, not machines

jeff@aiva.ed.ac.uk (Jeff Dalton) (05/25/88)

In article <205@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
]The Objectivist version of free will asserts that there are (for
]a normally functioning human being) no sufficient causes for what
]he thinks.  There are, however, necessary causes for it.

That is, as you indicated, just an assertion.  It does not seem
a particularly bad account of what having free will might mean.
The question is whether the assertion is correct.  How do you know
there are no sufficient causes?

]In terms of the actual process, what happens is this: various
]entities provide the material which you base your thinking on
](and are thus necessary causes for what you think), but an
]action, not necessitated by other entities, is necessary to
]direct your thinking.  This action, which you cause, is
]volition.

Well, how do I cause it?  Am I caused to cause it, or does it
just happen out of nothing?  Note that it does not amount to
having free will just because some of the causes are inside
my body.  (Again, I am not sure what you mean by "other entities".)

]] But where does the "subject of your own choice" come from?  I wasn't
]] thinking of letting one's thoughts wander, although what I said might
]] be interpreted that way.  When you decide what to think about, did
]] you decide to decide to think about *that thing*, and if so how did
]] you decide to decide to decide, and so on?
]
]Shades of Zeno!  One does not "decide to decide" except when one
]does so in an explicit sense.

My point was precisely that one could not decide to decide, and so
on, so that the initial step (and it might just be the decision,
without any decision to decide) was not something arrived at by
conscious reasoning.

]("I was waffling all day; later
]that evening I put my mental foot down and decided to decide once
]and for all.") Rather, you perform an act on your thoughts to
]direct them in some way; the name for that act is "decision".

Yes, but what determines the way in which I direct them, or even
whether I bother to direct them (right then) at all?  I have no
problem (or at least not one I can think of right now) with calling
that act a decision.  But why do I make that decision rather than
do something else?

By the way, we do not get talk.philosophy.misc, so if you answer
me there I will never see it.

-- Jeff

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/26/88)

In article <5569@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar) writes:
>>I cite 4 years reading of comp.ai.digest seminar abstracts as evidence.
>>
>Now that Gilbert Cockton has revealed the source of his knowledge of artificial
>intelligence
OK, OK, then I call every AI text I've ever read as well.  Let's see
Nielson, Charniak and the other one, Rich, Schank and Abelson,
Semantic Information Processing (old, but ...), etc.  (I use AI
programming concepts quite often, I just don't fall into the delusion
that they have any bearing on mind).

The test is easy, look at the references.  Do the same for AAAI and
IJCAI papers.  The subject area seems pretty introspective to me.
If you looked at an Education conference proceedings, attended by people who
deal with human intelligence day in day out (rather than hack LISP), you
would find a wide range of references, not just specialist Education references.
You will find a broad understanding of humanity, whereas in AI one can
often find none, just logical and mathematical references. I still
fail to see how this sort of intellectual background can ever be
regarded as adequate for the study of human reasoning.  On what 
grounds does AI ignore so many intellectual traditions?

As for scientific method, the conclusions you drew from a single
statement confirm my beliefs about the role of imagination in AI.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

	     The proper object of the study of humanity is humans, not machines

bill@proxftl.UUCP (T. William Wells) (06/12/88)

In article <1226@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> In article <205@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> >The Objectivist version of free will asserts that there are (for
> >a normally functioning human being) no sufficient causes for what
> >he thinks.  There are, however, necessary causes for it.
> Has this any bearing on the ability of a machine to simulate human
> decision making?  It appears so, but I'd be interested in how you think it
> can be extended to yes/no/don't know about the "pure" AI endeavour.

If you mean by "pure AI endeavour" the creation of artificial
consciousness, then definitely the question of free will &
determinism is relevant.

The canonical argument against artificial consciousness goes
something like: humans have free will, and free will is essential
to human consciousness.  Machines, being deterministic, do not
have free will; therefore, they can't have a human-like
consciousness.

Now, should free will be possible in a deterministic entity, this
argument goes poof.

bill@proxftl.UUCP (T. William Wells) (06/13/88)

I really do not want to further define Objectivist positions on
comp.ai.  I have also seen several suggestions that we move the
free will discussion elsewhere.  Anyone object to moving it to
sci.philosophy.tech?

In article <463@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> ]In terms of the actual process, what happens is this: various
> ]entities provide the material which you base your thinking on
> ](and are thus necessary causes for what you think), but an
> ]action, not necessitated by other entities, is necessary to
> ]direct your thinking.  This action, which you cause, is
> ]volition.
>
> Well, how do I cause it?  Am I caused to cause it, or does it
> just happen out of nothing?  Note that it does not amount to
> having free will just because some of the causes are inside
> my body.  (Again, I am not sure what you mean by "other entities".)

OK, let's try to eliminate some confusion.  When talking about an
action that an entity takes, there are two levels of action to
consider, the level associated with the action of the entity and
the level associated with the processes that are necessary causes
for the entity level action.

[Note: the following discussion applies only to the case where
the action under discussion can be said to be caused by the
entity.]

Let's consider a relatively uncontroversial example.  Say I have
a hot stove and a pan over it.  At the entity level, the stove
heats the pan.  At the process level, the molecules in the stove
transfer energy to the molecules in the pan.
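
To make the two levels concrete, here is a toy sketch (mine; the
numbers mean nothing physical).  The process level is individual
"molecules" exchanging a little energy; the entity level is the
statement about the aggregates:

import random

stove = [10.0] * 100   # per-"molecule" energy, hot
pan   = [1.0] * 100    # per-"molecule" energy, cold

def process_level_step(hot, cold):
    # Process level: two individual molecules exchange a little energy.
    i, j = random.randrange(len(hot)), random.randrange(len(cold))
    transfer = 0.1 * (hot[i] - cold[j])
    hot[i] -= transfer
    cold[j] += transfer

for _ in range(20000):
    process_level_step(stove, pan)

# Entity level: "the stove heats the pan" describes what the aggregates did.
print(sum(stove) / len(stove), sum(pan) / len(pan))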

The next question to be asked in this situation is: is heat the
same thing as the energy transferred?

If the answer is yes then the entity level and the process level
are essentially the same thing, the entity level is "reducible"
to the process level.  If the answer is no, then we have what is
called an "emergent" phenomenon.

Another characterization of "emergence" is that, while the
process level is a necessary cause for the entity level actions,
those actions are "emergent" if the process level action is not a
sufficient cause.

Now, I can actually try to answer your question.  At the entity
level, the question "how do I cause it" does not really have an
answer; like the hot stove, it just does it.  However, at the
process level, one can look at the mechanisms of consciousness;
these constitute the answer to "how".

But note that answering this "how" does not answer the question
of "emergence".  If consciousness is emergent, then the only
answer is that "volition" is simply the name for a certain class
of actions that a consciousness performs.  And being emergent,
one could not reduce it to its necessary cause.

I should also mention that there is another use of "emergent"
floating around, it simply means that properties at the entity
level are not present at the process level.  The emergent
properties of neural networks are of this type.
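
A toy illustration of that weaker sense (my own sketch, with
hand-picked weights rather than anything learned): no single unit
below computes XOR, each merely thresholds a weighted sum, yet the
two-layer network as a whole does compute it.

def unit(inputs, weights, threshold):
    # Process level: a unit just thresholds a weighted sum of its inputs.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor_net(x, y):
    # Entity level: the network's input/output behaviour is XOR.
    h_or  = unit((x, y), (1, 1), 1)        # fires on OR
    h_and = unit((x, y), (1, 1), 2)        # fires on AND
    return unit((h_or, h_and), (1, -1), 1) # OR and not AND

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", xor_net(x, y))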

sierch@well.UUCP (Michael Sierchio) (06/14/88)

The debate about free will is funny to one who has been travelling
with mystics and sages -- who would respond by saying that freedom
and volition have nothing whatsoever to do with one another. That
volition is conditioned by internal and external necessity and
is in no way free.

The ability to make plans, set goals, to have the range of volition
to do what one wants and to accomplish one's own aims still begs the
question about the source of what one wants.
-- 
	Michael Sierchio @ SMALL SYSTEMS SOLUTIONS
	2733 Fulton St / Berkeley / CA / 94705     (415) 845-1755

	sierch@well.UUCP	{..ucbvax, etc...}!lll-crg!well!sierch

bc@mit-amt.MEDIA.MIT.EDU (bill coderre) (06/14/88)

In article <6268@well.UUCP> sierch@well.UUCP (Michael Sierchio) writes:
>The debate about free will is funny to one who has been travelling
>with mystics and sages -- who would respond by saying that freedom
>and volition have nothing whatsoever to do with one another....

(this is gonna sound like my just previous article in comp.ai, so you
can read that too if you like)

Although what free will is and how something gets it are interesting
philosophical debates, they are not AI.

Might I submit that comp.ai is for the discussion of AI: its
programming tricks and techniques, and maybe a smattering of social
repercussions and philosophical issues.

I have no desire to argue semantics and definitions, especially about
slippery topics such as the above.

And although the occasional note is interesting (and indeed my
colleague Mr Sierchio's is sweet), endless discussions of whether some
lump of organic matter (either silicon- or carbon-based) CAN POSSIBLY
have "free will" (which only begs the question of where to buy some and
what to carry it in) are best confined to a group where the readership
is interested in such things.

Naturally, I shall not belabour you with endless discussions of neural
nets merely because of their interesting modelling of Real(tm)
neurons. But if you are interested in AI techniques and their rather
interesting approaches to the fundamental problems of intelligence and
learning (many of which draw on philosophy and epistemology), please
feel free to inquire.

I thank you for your kind attention.....................mr bc

cfh6r@uvacs.CS.VIRGINIA.EDU (Carl F. Huber) (06/21/88)

In article <306@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>Let's consider a relatively uncontroversial example.  Say I have
>a hot stove and a pan over it.  At the entity level, the stove
>heats the pan.  At the process level, the molecules in the stove
>transfer energy to the molecules in the pan.
> ...
>Now, I can actually try to answer your question.  At the entity
>level, the question "how do I cause it" does not really have an
>answer; like the hot stove, it just does it.  However, at the
>process level, one can look at the mechanisms of consciousness;
>these constitute the answer to "how".

I do not yet see your distinction in this example.	
What is the difference between saying the stove _heats_ or the
molecules _transfer_energy_?  The distinction must be made in the
way we describe what's happening.  In each case above, you seem to
be giving the pan and the molecules volition.  The stove does not
heat the pan.  The stove is hot.  The pan later becomes hot.  Molecules do
not transfer energy.  The molecules in the stove have energy s+e.  Then
the molecules in the pan have energy p+e and the molecules in the 
stove have energy s.

So it seems that both cases here are entity level, since the answer 
to "how do I cause it" is the same.  If I have totally missed the
point, could you please try again?

-carl

bill@proxftl.UUCP (T. William Wells) (07/03/88)

In article <2485@uvacs.CS.VIRGINIA.EDU>, cfh6r@uvacs.CS.VIRGINIA.EDU (Carl F. Huber) writes:
) In article <306@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
) >Let's consider a relatively uncontroversial example.  Say I have
) >a hot stove and a pan over it.  At the entity level, the stove
) >heats the pan.  At the process level, the molecules in the stove
) >transfer energy to the molecules in the pan.
) > ...
) >Now, I can actually try to answer your question.  At the entity
) >level, the question "how do I cause it" does not really have an
) >answer; like the hot stove, it just does it.  However, at the
) >process level, one can look at the mechanisms of consciousness;
) >these constitute the answer to "how".
)
) I do not yet see your distinction in this example.
) What is the difference between saying the stove _heats_ or the
) molecules _transfer_energy_?  The distinction must be made in the
) way we describe what's happening.  In each case above, you seem to
) be giving the pan and the molecules volition.

[Minor nitpick: the pan and the molecules act, but volition and
action are not the same things. The discussion of the difference
belongs in a philosophy newsgroup, however.]

)                                                The stove does not
) heat the pan.  The stove is hot.  The pan later becomes hot.  Molecules do
) not transfer energy.  The molecules in the stove have energy s+e.  Then
) the molecules in the pan have energy p+e and the molecules in the
) stove have energy s.
)
) So it seems that both cases here are entity level, since the answer
) to "how do I cause it" is the same.  If I have totally missed the
) point, could you please try again?
)
) -carl

I think you missed the point.  Perhaps I can fill in some missing
information.  I think you got the idea that the process level
description could be made without reference to entities; this is
not the case.  The process level description MUST be made with
reference to entities; the main point is that these acting entities
are not the same as the entity involved in the entity level
description.

Does that help? Also, could we move this discussion to another
newsgroup?