[comp.ai.philosophy] What does intentionality have that AI doesn't.....

petersja@debussy.cs.colostate.edu (james peterson) (03/14/91)

In article <17107@venera.isi.edu> smoliar@venera.isi.edu (Stephen Smoliar) writes:
>However, we are still
>left with this awkward position that the inadequacy of symbol manipulating
>machines lies in this lack of intentionality.  In other words we must now
>confront the question of what qualities a machine must possess to allow it
>to have intentionality.  Saying it has to be more than a symbol manipulator
>is not enough.  It is still necessary to be able to look at a machine, analyze
>it, and conclude from that analysis whether or not it has intentionality.  

This is the second time Stephen has recently requested clues as to what is so
special about intentionality.  The first time he asked if "one of the
intentionality experts could come up with an argument as to why a lack of
intentionality would impede ever implementing [intelligent] behavior...."

I certainly don't wish to claim being an intentionality expert, and I don't
have an "argument" to offer in the sense intended here, but I have a suggestion
as to what is so special about intentionality that I believe to be at least
consistent with Searle.  In any case, it may prompt some discussion of
something I take to be important. 

What it is about "intentionality" the lack of which would impede the 
implementation of intelligent behavior artificially is related to the
problem of "relevance."  How is it that intelligent creatures are capable
of selecting from their manifold inputs that portion which will be considered 
as important, and that which is to be ignored?  How is it, moreover, that
intelligent creatures are able to assign relative values to parts of
the environment related to importance, and readjust these relative values
as they proceed?

Frames and scripts, it seems to me, gloss over this difficulty by assigning
relevance in advance.  The hard problem is to account for how relevance
comes about in the first place, and how it develops...

What makes assignments of relevance possible on an ongoing basis is 
*motivation* --- things, parts of the environment, are relevant, important, or
interesting precisely in the context of some *purpose* (if my purposes change,
so does what is relevant); relevance is thus a function of our
reasons (or motives) for acting...  Humans act for reasons, but for 
reasons which do not compel or necessitate (reasons are not causes); being
free to act according to one's own plans, plans of one's own authorship,
and to change those plans in an ongoing and flexible manner is what
I believe intentionality has that is needed to implement intelligence.  Searle
says that intentionality and intelligence are tied to "causal powers" --
and this is what I take him to mean -- the ability to cause actions for
reasons independent of nature's causal nexus, in a word, motivation.
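
To make the point concrete, here is a toy sketch in Python (purely
illustrative; the percepts, features, and weights are all invented for
the example) of relevance as a function of purpose:

def relevance(percept, purpose):
    # Score a percept against the agent's current purpose: sum the
    # weights of the purpose's features that the percept exhibits.
    return sum(weight for feature, weight in purpose.items()
               if feature in percept["features"])

percepts = [
    {"name": "kettle whistling", "features": {"sound", "heat"}},
    {"name": "phone ringing",    "features": {"sound", "social"}},
    {"name": "rain on window",   "features": {"sound", "weather"}},
]

making_tea     = {"heat": 1.0, "sound": 0.2}
expecting_call = {"social": 1.0, "sound": 0.2}

for purpose in (making_tea, expecting_call):
    salient = max(percepts, key=lambda p: relevance(p, purpose))
    print(salient["name"])    # the kettle first, then the phone

A frame or script would fix the weights once and for all; note that the
sketch itself simply takes the purposes as given from outside, and that
is precisely the gap I am pointing at.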

Excuse me if I have been less than clear; I did not have much time to
trot this out......


-- 
james lee peterson				petersja@CS.ColoState.edu
dept. of computer science                       
colorado state university		"Some ignorance is invincible."
ft. collins, colorado  (voice:303/491-7137; fax:303/491-2293)

cpshelley@violet.uwaterloo.ca (cameron shelley) (03/14/91)

In article <13503@ccncsu.ColoState.EDU> petersja@debussy.cs.colostate.edu (james peterson) writes:
[...]
>
>What it is about "intentionality" the lack of which would impede the 
>implementation of intelligent behavior artificially is related to the
>problem of "relevance."  How is it that intelligent creatures are capable
>of selecting from their manifold inputs that portion which will be considered 
>as important, and that which is to be ignored?  How is it, moreover, that
>intelligent creatures are able to assign relative values to parts of
>the environment related to importance, and readjust these relative values
>as they proceed?
>
>Frames and scripts, it seems to me, gloss over this difficulty by assigning
>relevance in advance.  The hard problem is to account for how relevance
>comes about in the first place, and how it develops...
>
>What makes assignments of relevance possible on an ongoing basis is 
>*motivation* --- things, parts of the environment, are relevant, important, or
>interesting precisely in the context of some *purpose*  ...

If I can inject a tangential remark here, we should be aware of a traditional
division of vocabulary on this issue, namely that between *intention* and
*intension*.

Intention is related to "intent" and usually refers to a predisposition
to some action or view.  In this sense, any program has intention --- it
is created to fulfil a specific purpose, normally in a specific manner.

Intension is related to "intense" and usually means whatever the author
wants it to, but in this context it refers to "the content of a notion",
or let's say "the meaning of an intention".  In philosophy (so far as I
can tell), it also denotes "the ability to form intentions" plus some
intangible spin.  Call it "motivation" or "purpose" if you like.

The "spin" causes all the mental thrashing (poetic justice?).

--
      Cameron Shelley        | "Belladonna, n.  In Italian a beautiful lady;
cpshelley@violet.waterloo.edu|  in English a deadly poison.  A striking example
    Davis Centre Rm 2136     |  of the essential identity of the two tongues."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

evenson@cis.udel.edu (Mark Evenson) (03/15/91)

In article <1991Mar14.150044.12197@watdragon.waterloo.edu> you write:
>In article <13503@ccncsu.ColoState.EDU> petersja@debussy.cs.colostate.edu (james peterson) writes:
>[...]
>>
>>What it is about "intentionality" the lack of which would impede the 
>>implementation of intelligent behavior artificially is related to the
>>problem of "relevance."  How is it that intelligent creatures are capable
>>of selecting from their manifold inputs that portion which will be considered 
>If I can inject a tangential remark here, we should be aware of a traditional
>division of vocabulary on this issue, namely that between *intention* and
>*intension*.
>

>Intension is related to "intense" and usually means whatever the author
>wants it to, but in this context it refers to "the content of a notion",
>or let's say "the meaning of an intention".  In philosophy (so far as I
>can tell), it also denotes "the ability to form intentions" plus some
>intangible spin.  Call it "motivation" or "purpose" if you like.
>

	In semantic theories of reference from at least the Philosophy
of Science tradition, "intension" is opposed to "extension" as to how
words work in referring to qualities of the world.  I may attempt to
crudely reduce a long debate by suggesting that words work by "picking
out" members of the set of all possible objects (that all words refer
to objects is of course the primary reduction here).  The actual
"things" that a word refers to are said to be its extension.  What
unites these varied objects belongs to the realm of the word's
intension.  So, for example, "cat" has the extension of furry creatures
with tails, but also has the extension of a hipster as in a "cool
cat".  In that case, one may argue that the intension of the term "cat"
lies in a quality of aloofness, reserve, poise, and cool, or something
along these lines.

	Now, in AI debates there exists the echo of the Logical
Positivist project of ascribing one intension and one extension to
every word in a formalized language.  Usually this is attempted by
defining the proper extension of a word, and then working backwards to
the intension.  By looking at the set of a word's extension,
Positivists hoped to clarify and reduce the diaphanous web of
intension to a cut and regulated science.  Thus, it was hoped,
language would lose its ambiguity and the quality of its "shiftiness"
when you sit down to code out an AI tool.
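
A programmer's caricature of the distinction may help (the data and
predicates here are wholly invented):

things = {
    "felix":       {"furry", "tailed"},
    "tom":         {"furry", "tailed"},
    "miles_davis": {"aloof", "cool"},
    "a_brick":     set(),
}

def cat_intension(attributes):
    # One candidate intension for "cat": the literal animal, or the
    # hipster sense as in "cool cat".
    return ({"furry", "tailed"} <= attributes
            or {"aloof", "cool"} <= attributes)

cat_extension = {name for name, attrs in things.items()
                 if cat_intension(attrs)}
print(cat_extension)    # felix, tom, and miles_davis; not the brick

The Positivist move is then to fix cat_extension first and hope to
recover cat_intension from it; the trouble is that any number of
distinct predicates can pick out exactly the same set.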

	So, intension is allied with "meaning" but it smacks--for me
at least--of a very rigid structural attempt to codify its existence
from a sort of catalogue of function.  I have deeply held convictions
on why this attempt will fail, but that is not really relevant to my
small insight into "intension".

		Mark Evenson

smoliar@isi.edu (Stephen Smoliar) (03/15/91)

In article <13503@ccncsu.ColoState.EDU> petersja@debussy.cs.colostate.edu
(james peterson) writes:
>
>
>What it is about "intentionality" the lack of which would impede the 
>implementation of intelligent behavior artificially is related to the
>problem of "relevance."  How is it that intelligent creatures are capable
>of selecting from their manifold inputs that portion which will be considered 
>as important, and that which is to be ignored?  How is it, moreover, that
>intelligent creatures are able to assign relative values to parts of
>the environment related to importance, and readjust these relative values
>as they proceed?
>
>Frames and scripts, it seems to me, gloss over this difficulty by assigning
>relevance in advance.  The hard problem is to account for how relevance
>comes about in the first place, and how it develops...
>
>What makes assignments of relevance possible on an ongoing basis is 
>*motivation* --- things, parts of the environment, are relevant, important, or
>interesting precisely in the context of some *purpose* (if my purposes change,
>so does what is relevant); relevance is thus a function of our
>reasons (or motives) for acting...  Humans act for reasons, but for 
>reasons which do not compel or necessitate (reasons are not causes); being
>free to act according to one's own plans, plans of one's own authorship,
>and to change those plans in an ongoing and flexible manner is what
>I believe intentionality has that is needed to implement intelligence.  Searle
>says that intentionality and intelligence are tied to "causal powers" --
>and this is what I take him to mean -- the ability to cause actions for
>reasons independent of nature's causal nexus, in a word, motivation.
>
Maybe we had better go back to how Searle claims to define intentionality.
Unfortunately, his definition is relegated to a footnote in "Minds, Brains,
and Programs:"

	Intentionality is by definition that feature of certain
	mental states by which they are directed at or about objects
	and states of affairs in the world.  Thus, beliefs, desires,
	and intentions are intentional states;  undirected forms of
	anxiety and depression are not.

Matters such as relevance and causality are secondary.  The primary issue is
the assumption that there are these mental states and then there are world
states.  We can talk about things being IN world states at a casual,
intuitive level, so we assume we can do the same about mental states.
Intentionality accounts for the ability to translate between what is IN
a mental state and what is IN a world state.

Looked at in this light, we can then ask if we can have machine states which
admit of a similar translation.  Such machine states could be said to have
intentionality, which supposedly means that machines possessing such
states would be capable of the sort of "understanding" which Searle
argues is lacking in symbol manipulating systems.  On the other hand,
are we in a position to argue that we can build symbol manipulating
machines which LACK such powers of translation?

As I said in my last article, I do not think the real issue is one of what
machine states have or lack.  Rather, the question has to do with the
relationship between machine state and machine behavior and with the
question of whether or not it makes sense to talk about disembodied
mental states.  In other words, an agent can only HAVE mental states
in the first place by virtue of certain properties of its BODY.  If
you try to abstract away the body (as Turing assumed in his initial
paper on artificial intelligence), you lose the mental states, too.

Does this make sense?  Actually, a more appropriate question would be:  CAN
this make sense?  I think it can.  However, the issue is not so much whether
we are talking about symbol manipulation as how we choose to look at machine
states.  If we view a machine state as a configuration of bits in
memory--something we can freeze in time or even copy from one physical
machine to another--then we may run into trouble.  Such machine states
are essentially divorced from the machine itself . . . particularly any
peripheral hardware concerned with sensors and effectors.  If, on the other
hand, we imagine a more dynamic machine in which there is constant interaction
between those sensors and effectors and the state of the machine, then it may
no longer make sense to try to identify that machine state with something like
a pattern of bits in memory.  The bits are always changing, and taking away the
dynamics of the situation would amount to inducing brain-death.

Now this may be naive, but at an intuitive level there seems to be no reason
why such a machine cannot be built from components which are basically symbol
manipulating systems.  Thus, symbol manipulation does not appear to be the
critical issue.  The critical issue involves the nature of the processing
concerned with the management of sensors and effectors and the assumption
that such processing is highly dynamic.  If we allow such dynamic qualities
into the system, we should be able to play the game Minsky has laid out in
THE SOCIETY OF MIND, building a system which manages those sensors and
effectors up from relatively low-level processing components, very much
in the spirit with which Brooks builds his robots.  The question is not
whether or not our machines have intentionality but whether or not they
have bodies through which they interact with what Searle calls "states
of affairs in the world."
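
A minimal sketch may make this concrete (the components are invented
stand-ins, not a claim about any particular robot architecture):

import random

def read_sensor():
    # Stand-in for peripheral sensing hardware.
    return random.uniform(-1.0, 1.0)

def drive_effector(command):
    # Stand-in for an actuator; in a real machine, acting changes
    # what the sensor reads next.
    pass

state = 0.0
for _ in range(100):                     # in a real machine: forever
    reading = read_sensor()
    state = 0.9 * state + 0.1 * reading  # rewritten on every cycle
    drive_effector(state)

Freeze this loop long enough to copy the bits out of "state" and what
you have copied is already stale; the brain-death analogy above is just
this observation.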
-- 
USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066
Internet:  smoliar@venera.isi.edu

christo@psych.toronto.edu (Christopher Green) (03/15/91)

In article <1991Mar14.150044.12197@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>
>If I can inject a tangential remark here, we should be aware of a traditional
>division of vocabulary on this issue, namely that between *intention* and
>*intension*.
>
>Intention is related to "intent" and usually refers to a predisposition
>to some action or view.  In this sense, any program has intention --- it
>is created to fulfil a specific purpose, normally in a specific manner.
>
>Intension is related to "intense" and usually means whatever the author
>wants it to, but in this context it refers to "the content of a notion",
>or let's say "the meaning of an intention".  In philosophy (so far as I
>can tell), it also denotes "the ability to form intentions" plus some
>intangible spin.  Call it "motivation" or "purpose" if you like.
>
I really don't like the tone of what I'm about to say but you've got this
entirely wrong. IntenTionality, in philosophy, indicates the capacity of 
some entity to point to or refer to something else. Thoughts (inasmuch
as they're propositional attitudes) refer in this way. So do sentences
and, sometimes, pictures, although their intentionality seems to be 
derived from ours -- we use them that way.  Intent, as the term is used
colloquially (e.g., "I intend to go to the movies") is not at issue. It
is only a species of the technical form of IntenTionality and not privileged
in any particular way.

IntenSion is an entirely different matter. To put things crudely, the 
IntenSion of a mental act (Brentano's term) is its representation (or
'mental picture', sometimes) 'in your head'. This is to be contrasted
with the extension of the act, the thing in the world which is being
represented. There has long been a debate in philosophy over which
(the intension or the extension) is the MEANING. Hilary Putnam is
particularly notable for having argued the latter. Jerry Fodor has
severely criticized Putnam's position.

I hate to be so blunt and I hope this won't be construed as a flame but
it is important to get these terms straight before engaging in any further
debate on this topic. (Incidentally, a similar discussion is going on
over in sci.philosophy.tech. I wonder how much overlap in readership 
there is between these two. Perhaps we can link the two discussions up?)


-- 
Christopher D. Green
Psychology Department                             e-mail:
University of Toronto                   christo@psych.toronto.edu
Toronto, Ontario M5S 1A1                cgreen@lake.scar.utoronto.ca 

greenba@gambia.crd.ge.com (ben a green) (03/15/91)

I suggest that the term "intension" or "intention" is hopelessly
ambiguous and should be replaced by other terms according to what
is intended:

1) "John intentionally bumped the vase to see Marsha's reaction."

The common meaning.

2) "The intension of 'vase' comprises the attributes of 'can contain
flowers and water' and 'is relatively tall and slender'."

The meaning in logic, according to dictionaries.

3) "'The Hulk is so named because of his size' uses 'The Hulk'
intensionally because the phrase cannot be replaced with the name
of the wrestler and preserve meaning."

Failure of substitutional transparency, as used by Quine.

4) "Intentionality is by definition that feature of certain
mental states by which they are directed at or about objects
and states of affairs in the world." -- John Searle, quoted by
Stephen Smoliar. Problematical in that it is defined as an
attribute of "mental states," a concept itself in need of
definition. A "state" can be "directed at" an object, for example,
or be "about" an object. Much too vague for my taste.

In (1), intention is possessed by a person.
In (2), intension is possessed by a term.
In (3), intension is possessed by a sentence.
In (4), intension is possessed by a mental state.

TAKE ME OUT, COACH!

--
Ben A. Green, Jr.              
greenba@crd.ge.com
  Speaking only for myself, of course.

cpshelley@violet.uwaterloo.ca (cameron shelley) (03/15/91)

In article <1991Mar14.191814.26802@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
[...]
>I really don't like the tone of what I'm about to say but you've got this
>entirely wrong. IntenTionality, in philosophy, indicates the capacity of 
>some entity to point to or refer to something else. Thoughts (inasmuch
>as they're propositional attitudes) refer in this way. So do sentences
>and, sometimes, pictures, although their intentionality seems to be 
>derived from ours -- we use them that way.  Intent, as the term is used
>colloquially (e.g., "I intend to go to the movies") is not at issue. It
>is only a species of the technical form of IntenTionality and not privileged
>in any particular way.
>
>IntenSion is an entirely different matter. To put things crudely, the 
>IntenSion of a mental act (Brentano's term) is its representation (or
>'mental picture', sometimes) 'in your head'. This is to be contrasted
>with the extension of the act, the thing in the world which is being
>represented. There has long been a debate in philosophy over which
>(the intension or the extension) is the MEANING. Hilary Putnam is
>particularly notable for having argued the latter. Jerry Fodor has
>severely criticized Putnam's position.
>

I'll admit I was using entirely the "wrong" vocabulary, hoping to circumvent
the philosophical dispute you mention.  Particularly, I wished to avoid
privileging philosophy (to borrow your phrasing) since we are not
necessarily talking about "artificial philosophy" when "artificial
intelligence" is the subject.  At any rate, the philosophical notions
regarding intension/extension are notoriously static, and thus ill-suited
to talk of how useful they are for implementation (which is how I had
interpreted the thread to that point).  Also, I've just been reading
over work in text planning in which intenTion is represented as plan
goals, so perhaps I am mixing myself up ...

>I hate to be so blunt and I hope this won't be construed as a flame but
>it is important to get these terms straight before engaging in any further
>debate on this topic. 

No problem here.  I promise to take my philosophy more seriously in
future.

--
      Cameron Shelley        | "Belladonna, n.  In Italian a beautiful lady;
cpshelley@violet.waterloo.edu|  in English a deadly poison.  A striking example
    Davis Centre Rm 2136     |  of the essential identity of the two tongues."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

typ125m@monu6.cc.monash.edu.au (John Wilkins) (03/15/91)

petersja@debussy.cs.colostate.edu (james peterson) writes:

>What it is about "intentionality" the lack of which would impede the 
>implementation of intelligent behavior artificially is related to the
>problem of "relevance."  How is it that intelligent creatures are capable
>of selecting from their manifold inputs that portion which will be considered 
>as important, and that which is to be ignored?  How is it, moreover, that
>intelligent creatures are able to assign relative values to parts of
>the environment related to importance, and readjust these relative values
>as they proceed?

>Frames and scripts, it seems to me, gloss over this difficulty by assigning
>relevance in advance.  The hard problem is to account for how relevance
>comes about in the first place, and how it develops...

>What makes assignments of relevance possible on an ongoing basis is 
>*motivation* --- things, parts of the environment, are relevant, important, or
>interesting precisely in the context of some *purpose* (if my purposes change,
>so does what is relevant); relevance is thus a function of our
>reasons (or motives) for acting...  Humans act for reasons, but for 
>reasons which do not compel or necessitate (reasons are not causes); being
>free to act according to one's own plans, plans of one's own authorship,
>and to change those plans in an ongoing and flexible manner is what
>I believe intentionality has that is needed to implement intelligence.  Searle
>says that intentionality and intelligence are tied to "causal powers" --
>and this is what I take him to mean -- the ability to cause actions for
>reasons independent of nature's causal nexus, in a word, motivation.

>Excuse me if I have been less than clear; I did not have much time to
>trot this out......

I'm no AI expert, but surely humans also have frames and scripts
that determine what is relevant: sensory filters, biological "drives",
social imperatives, personal traits, etc, that are all "pre-programmed"
as it were, limiting the range of inputs and the nature of the responses?
What determines these is surely the evolution of transmitted frames/
scripts in biological and social terms. Those that are not successful
are eliminated -- biologically through the failure of the population
in which the traits reside, culturally through the failure of the
tradition in which the traits are transmitted.
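
As a toy illustration only (the numbers and the fitness criterion are
invented, and this caricatures selection rather than models it):

import random

def fitness(script, environment):
    # Reward a filter for attending to strong signals, penalize it
    # for attending to noise.
    score = 0
    for signal in environment:
        if script(signal):
            score += 1 if abs(signal) > 1.0 else -1
    return score

environment = [random.gauss(0.0, 1.0) for _ in range(1000)]

# Four competing "scripts": filters that attend only to inputs above
# some threshold.
scripts = [lambda s, t=t: abs(s) > t for t in (0.1, 0.5, 1.0, 2.0)]

# Cull the worst performer each "generation", as failing populations
# or traditions are eliminated:
while len(scripts) > 1:
    scripts.remove(min(scripts, key=lambda f: fitness(f, environment)))

Nothing here explains where new scripts come from -- that is the
variation half of the story -- but elimination alone already shapes
what gets attended to.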

Disclaimer: IMHO intelligence is a generic name given to a class
of mechanisms evinced by sufficiently complex systems that interact
with their environment.
-- 
John Wilkins, Manager, Publishing & Advertising, Monash University
Melbourne, Australia - Internet: john@publications.ccc.monash.edu.au
Nobody's views but mine own -- who'd want them?

christo@psych.toronto.edu (Christopher Green) (03/16/91)

In article <GREENBA.91Mar14160439@gambia.crd.ge.com> greenba@gambia.crd.ge.com (ben a green) writes:
>I suggest that the term "intension" or "intention" is hopelessly
>ambiguous and should be replaced by other terms according to what
>is intended:
>
>In (1), intention is possessed by a person.
>In (2), intension is possessed by a term.
>In (3), intension is possessed by a sentence.
>In (4), intension is possessed by mental state.
>
You've got this dead right (except that you haven't included the technical
meaning of intenTion) and there's no ambiguity at all. The reason three things
are associated with intenSion is that there's a long-standing debate over
whether the word or the sentence is the 'unit' of meaning and that mental
states (many of them, anyway) are widely taken to be propositional attitudes.
If so, they're relations to propositions; propositions have meaning, and
meaning just might be the intenSion of the proposition.

-- 
Christopher D. Green
Psychology Department                             e-mail:
University of Toronto                   christo@psych.toronto.edu
Toronto, Ontario M5S 1A1                cgreen@lake.scar.utoronto.ca