[comp.ai] Behaviorism and AI

ejs@unix.sri.com (e john sebes) (05/20/89)

Lest anyone else jump on the subject of behaviorism, and turn it into
more interminable rambling about free-will-vs-determinism, let me say that
Mr. Colfer and I are approaching vigorous agreement on many of the
points we raised.

If you are not interested in being convinced of this, but are
interested in some of the other points Mr. Colfer raised,
then skip to the last three or so paragraphs.

For instance, consider his distinction between "Behaviorism as a school
of psychological thought" and "Radical Behaviorism ... [as] an
investigation of the principles of being". Since this latter term
is pretty much a definition of philosophy, it is the same distinction
I wanted to make between the philosophy of Behaviorism, and Behaviorism
as part of psychology.

As Mr. Colfer pointed out, he is aware of the distinction (and I regret
having implied and stated otherwise), as is Skinner.

Part of what I tried to say is that some people, including proponents
of Behaviorism, *don't* have a firm grasp of this distinction; 
and it is this that makes others uncomfortable with them,
as scientists or as policy-makers.
That is, *in addition to* holding what Mr. Colfer correctly identifies as
an "unattractive view of what it means to be human" (to some at least),
such proponents appear willing to assert this view as fact and use this
assertion as justification for the application of some technology
that sounds rather frightening.

Let me add the disclaimer that I don't necessarily think that such
perceptions of behaviorists are accurate, or that such reactions
are entirely reasonable. I was just trying to explain why many
don't like behaviorism as either science or technology.

After correctly restating some of my points,
In article <1979@ucsfcca.ucsf.edu> brianc@daedalus.UUCP (Brian Colfer) writes:
>My point is that if we talk about the rest of the universe as being 
>materialistically determined...<except for Pele in Hawaii :-) > then
>why change our set of assumptions when it comes to the human mind.

There isn't an a priori reason why not... just so long as they remain assumptions.

>>there is nothing to do about it but take an agnostic stance, and
>>get on with work.
>
>Just because we cannot prove that free will is not taking place is no
>reason for adopting it or even being agnostic.  
>There is a large body of data and logic describing that our universe
>is materialistically determined.
>...  I think the more reasonable position is to stay with
>my previous assumptions unless I have good reasons not to, based on sound
>logic or good evidence.

In my view, keeping such assumptions (of determinism) as assumptions
*is precisely* what I meant by remaining "agnostic", and not being dogmatic.

By stating that
>Behaviorism is not the ultimate statement on how and why humans behave
>it is an important component help to describe the interaction between
>the biology and environment of an organism (in this case a person).
Mr. Colfer is setting himself outside the dogmatic camp, and raising
the question of what good Behaviorism and AI can be to each other.

In espousing the investigation of "behaviorist technology", he admits that
>We probably are just not smart enough to understand the 
>social ecology well enough to prevent us from making mistakes
>with behavioral technologies. 
This is precisely the point that some have made: people are much too
complicated for behaviorism to be useful, so why bother and maybe even
risk doing harm?

I think, however, that this reaction misses some important points. One is
>If we are not systematic about [applying behaviorist technology] then
>we loose the opportunity to do some wonderful things and we will
>have no way of counteracting those who will abuse the technology.
>But just because we should be
>careful doesn't mean we shouldn't do it ... lets be careful, 
>have full disclosure about whats happening but let's not stop
>it just because it involves control.
This is pretty much on target, although it must be said that for
anything outside of the lab, there will be many others who will want
a say before anyone does go ahead, however carefully-- as with genetic
engineering.

But another important point missed by the "why bother" attitude has
a lot to do with AI, and the question:
>How can we use behaviorism to understand AI?
If we don't really know enough to apply behaviorism well,
then perhaps we can learn more by applying it in an AI setting.
That is, develop deterministic decision-making systems designed
to mimic humans in some limited setting, and see how well behaviorist
techniques work in controlling such systems.
Maybe then try similar experiments with humans in similar settings.
Perhaps that is one way we can use behaviorism to understand AI,
and vice versa.

I would take issue with one statement, however:
>These questions on free will have an impact because they frame the type
>of questions and problems people will address.  People are being
>effected by assumptions of free will, e.g. "for someone to change
>(learn, rehabilitate, etc.) they need to want change?" More important
>to AI assumptions on the nature of human intelligence is an issue which
>can dramatically the course of the technology.
I agree that *questions* about free will are relevant, and rightly
so, but not *assumptions*. Instead, for "free will" substitute
"the experience of free will", and I think it comes out fine.
It is perfectly acceptable to talk about a deterministic system being
in a state that it seems natural to describe as "wanting to change",
and to include this phenomenon in the scope of investigation.
However, since I missed the sense of the last sentence, we may
again be in agreement.

I hope all this lays to rest a lot of my previous blather, but leaves
open for discussion the relationship between AI and behaviorism,
and the ways in which, in that context, it is scientifically acceptable
to deal with the notion of "free will".

John Sebes