[comp.ai.philosophy] Split from AI/CogSci

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (11/02/90)

   I have been reading this most fascinating thread on comp.ai/
comp.ai.philosophy for quite a long time (in its various forms as it has
mutated through time), and also the one on the e-mail list <cybsys-l@
bingvmb.bitnet>.  Various notions have been forming in my mind along the way
(as I am sure they have for all of you out there).  I attempted to express
one of them while it was still fuzzy (on cybsys) and got toasted ;-), which
I should have expected, considering the way I stated it.

   There is a set of concepts that I feel it is important to put forward
here (not original to this thread, I'm sure ;-)...

     1) There is a lot of discussion about testing for machine consciousness,
        intelligence, etc.  I think we are *very* inhibited by our use of
        terms that were never meant to be used in anything but a human sense,
        and under human constraints.  I have recently been studying the
        history and philosophy of science, and what appears to me to be a
        clear pattern has emerged: as one learns about a subject, one's
        perception of it differentiates from the basics.  When I started
        learning about "AI" I had a vague idea of rule-based reasoning,
        decision trees, and images from books and movies of non-human
        (and sometimes machine) "intelligences".  After the last two years,
        I see a richness of structure and possibilities which, ironically,
        have little to do with consciousness itself and more to do with all
        of the supporting processes.  The same has happened in my studies of
        mathematics, where I no longer see the field as a single chunk but as
        a rich structure.  I did not really notice this until I started to
        work with people in a computer lab (helping introductory students
        with assignments) and saw how hollow and limited Computer Science
        looked to them from the outside, with no conceptual framework to
        latch onto.  Now, since language is not a communication process per
        se but a referential one (the "symbols" of language refer to internal
        meanings in the listener and/or reader, but not necessarily the
        intended ones), after all the reading and thought I *do* have ideas
        and concepts to work with.  But because of the loaded terms (AI
        really has no language of its own that I have seen), I cannot express
        many of these ideas unambiguously without bringing in a host of
        associations that, for many people, end up meaning something else or
        not making sense at all (whether or not they are correct).  I see a
        lot of this going on, with almost tortuous explanations being passed
        back and forth to explain the internal concepts that people have.
        Certainly some of those on this newsgroup(s) are able to use the
        standard terminology to convey their ideas, but it still inevitably
        causes trouble.  I can think of about three different uses of the
        word "conscious" (and especially "consciousness") off the top of my
        head, and we have to fight to get the intended concept across.  I say
        that the time is right for a language of our own, so to speak (as
        many others have hinted at or stated themselves).
           (Note:  "AI" has become a loaded term that has to be gotten rid
        of, if you ask me...  it's too vague and misleading...  "cognitive
        science" is a good first step, and may be the one I'm looking for,
        but I'm still not sure.)

     2) On a similar note, in using our existing terminology we have
        brought in a host of assumptions from our normal use of the language
        of human thought and psychology.  I think this is not only hindering
        but damaging to the field.  In mathematics, it has become standard
        practice to make as few assumptions as possible.  The most obvious
        assumptions are easy to spot in claims like "a machine could never
        be conscious because it has no soul", but subtle assumptions still
        seep in when we aren't looking too closely.  For example, I have
        seen many arguments (both given and taken quite seriously, I might
        add) that would fall apart completely if you removed particular
        assumptions about *human* psychology and/or intelligence.
        Admittedly, our best-documented case of intelligence and
        consciousness is ourselves, and even in animals we find *very*
        similar patterns.  But especially considering our lack of
        understanding of what intelligence and consciousness really are
        (even *with* the loaded connotations), is staying wedded to this
        going to help us much?  Sometimes the *hardest* things to see are
        what we take most for granted.  I claim that we will get the
        farthest by removing as many human (or even mammalian)
        socio/psychological elements as possible and working on invariants
        of what we consider intelligent behavior.  A recent example of an
        interesting question (comp.ai, I think) is: "Is social behavior
        required for intelligence?"  It provoked some fascinating
        thought...  but I digress.  Consider our concept of intentionality.
        It is very useful, but how accurate?  It forces us to ask "why?"
        and to ascribe purpose a lot of the time, but as experience has
        suggested, maybe "why?" is the wrong question...  and by analogy I
        suggest that maybe intentionality is the wrong answer (meaning that
        it is a useful high-level explanation of things that has little use
        at the lower level).

     3) An interesting (but hardly original) consideration is to think
        about how we organize the world internally.  It has been stated
        before that our world *is* our internal representation, or model,
        and that our success is due to the accuracy, and especially the
        usefulness, of this model.  One possibility is to look at
        intelligence as a spatial/temporal encoding scheme.  For instance,
        a simple neural net can be described as encoding its inputs
        internally and attempting to reproduce the information as
        accurately as possible on its outputs.  I think a similar process
        goes on with intelligence: it is an encoding scheme (including the
        temporal dimension, of course) that at a certain complexity becomes
        very useful.  In this picture, intentionality and categorization
        are aspects of an encoding scheme for usefully representing a
        complex world with a lot of demands.  (My own view is that, given
        certain inputs and outputs, one could think of an encoder of some
        sort between them, with the minimal case being constant outputs and
        the maximal case being a super-lookup table with all the answers
        (over time, of course).  We are somewhere in the middle, with a
        limited amount of data-space (of some sort) and a large requirement
        to learn new behaviors fast, so we encode complex information in
        ways that are useful to *us* (and/or interesting, as an offshoot)
        as behaviors or memories.  A rough toy sketch of this
        encoding-spectrum idea follows this list.)

     4) On a related note to 3, I (and some others, I think) put forward
        the idea of attempting to develop a "Turing Test" theory that would
        work at studying invariants in "intelligent" or even "conscious"
        behaviours, or, given some of the internal structure, would work
        from that...  Anyway, it would attempt to find ways of definitively
        testing certain things (this is where a language of "AI" would be a
        lot of help ;-).  One idea is to think about some kind of complexity
        rating for a hypothetical encoding scheme (both "complexity" and
        "encoding" are the wrong words, and possibly inadequate for this; I
        know that if one were to develop something like this, a "complexity"
        factor would be only one among many).  That is what I am working on
        now, although badly, as can be seen; a crude illustration of one
        such rating appears after the encoding sketch below.
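
   (The following is a minimal toy sketch, in Python, of the encoding-spectrum
idea from point 3.  Every name in it is my own invention for illustration
only; it is not meant as a model of intelligence, just as a way of pinning
down what "constant outputs", a "super-lookup table", and a limited encoder
"somewhere in the middle" might mean for a fixed stimulus/response mapping.)

# Three toy "encoders" sitting between inputs and outputs, spanning the
# spectrum described in point 3.  Stimuli and responses are plain hashable
# values; every class/function name here is hypothetical.

class ConstantEncoder:
    """Minimal end of the spectrum: ignores its input entirely."""
    def __init__(self, constant):
        self.constant = constant

    def respond(self, stimulus):
        return self.constant


class LookupTableEncoder:
    """Maximal end: a 'super-lookup table' that stores an answer for every
    stimulus it has ever seen.  Its storage grows without bound."""
    def __init__(self):
        self.table = {}

    def learn(self, stimulus, response):
        self.table[stimulus] = response

    def respond(self, stimulus):
        return self.table.get(stimulus)      # None if never seen


class LimitedEncoder:
    """Somewhere in the middle: a fixed amount of 'data-space', so it must
    throw information away -- here, crudely, by keeping only the most
    recently used associations (a toy stand-in for 'useful to us')."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}                      # insertion order = recency

    def learn(self, stimulus, response):
        if stimulus in self.table:
            del self.table[stimulus]         # re-insert as most recent
        elif len(self.table) >= self.capacity:
            oldest = next(iter(self.table))  # evict the least recent entry
            del self.table[oldest]
        self.table[stimulus] = response

    def respond(self, stimulus):
        return self.table.get(stimulus)

   (And, for point 4, an equally crude stab at a "complexity" rating for such
an encoder: the fraction of a recorded stimulus/response history that it
reproduces, per unit of storage it uses.  As the caveat in point 4 says, a
real theory would need many factors; this is one deliberately naive one,
made up purely to show what a definitive, testable rating might look like.)

def crude_complexity_rating(encoder, history):
    """Score an encoder against a recorded history of (stimulus, response)
    pairs: accuracy of reproduction divided by a rough storage cost.
    Purely illustrative -- one hypothetical factor among many."""
    if not history:
        return 0.0
    correct = sum(1 for s, r in history if encoder.respond(s) == r)
    accuracy = correct / len(history)
    storage = max(1, len(getattr(encoder, "table", {})))
    return accuracy / storage

# A quick made-up comparison: 50 time-steps of a repeating stimulus.
history = [(t % 5, (t % 5) * 2) for t in range(50)]
lut, mid = LookupTableEncoder(), LimitedEncoder(capacity=3)
for s, r in history:
    lut.learn(s, r)
    mid.learn(s, r)
for e in (ConstantEncoder(0), lut, mid):
    print(type(e).__name__, crude_complexity_rating(e, history))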

   General comments follow (I've been saving this up, heh heh ;-):

rjf@canon.co.uk (Robin Faichney) writes:

>In article <1990Oct31.023922.13795@watdragon.waterloo.edu>
>	cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>>
>>
>>  I'd like to inject a few comments regarding testing for machine
>>consciousness.
>>
>>  Firstly, why do we accept the belief that other humans are conscious?
>>(I use the word "belief" advisedly, since I think that knowledge of
>>another's subjectivity is problematic.)  I would argue that we use a
>>genetic analogy: I am human (which is now a genetic term), and I am
>>conscious; therefore since this other individual is human, he or she
>>is also conscious.  In other words, we believe ourselves to be conscious,
>>and we believe that the genetic connection between ourselves and other
>>humans is 'close' enough to preserve that property.

>I think cameron is on the right lines here, but I don't think he's
>quite got there.  For one thing, his account suggests that this is an
>intellectual phenomenon, but I don't think that can be true.  For
>another, he puts self-consciousness before belief in others'
>consciousness.  I think that we identify with, and therefore by (my)
>definition believe in the consciousness of, other people, long before
>we become self-conscious (even if we don't at that stage put it in
>quite those terms).

>Of these two points, the lack of consideration of non-intellectual
>aspects of this is probably more fundamental.  But it can be elucidated
>by looking at the development of the concept of consciousness.  When
>it's put that way, it is obvious that the concept as such is a relative
>late comer, whether viewed within the evolution of the species or the
>development of the individual.  Its function is to provide an
>intellectual handle to at least one non-intellectual phenomenon.  My
>contention is that this phenomenon is identification with others
>(other, closely related phenomena probably also being implicated).

>This would certainly explain the difficulties which we have in defining
>consciousness:  we assume that because we have a symbol, there must be
>a referent.  On reflection it becomes obvious that a concept could
>easily serve many purposes without actually 'standing for' any single,
>particular thing.  This is the same sort of mistake that Wittgenstein
>tried to explain regarding the meaning of language:  it is not the case
>that each word, phrase, whatever must represent some particular thing
>in the world, which is its meaning; in fact, the meaning of a piece of
>language is simply the way it is used.

>So how is 'consciousness' used?  In more ways than one, to be sure, but
>I think that the common usage -- simple awareness -- is the primary
>one.

>To go back a little:  what are a baby's earliest social interactions?
>I'd suggest (I have a reference for this somewhere) the exchange of
>smiles, probably with the mother.  Note that mother's smile tends to
>trigger baby's smile and vice versa.  This is modelling behaviour, and
>though at first it is undoubtedly very low level, it is in principle
>the same thing as when the little girl wants to dress up like mummy (or
>the little boy ;-), and the teenager, having switched from the parental
>model to the peer group model, wants to look/talk/etc just like all her
>friends -- or maybe, wants to be as non-conformist as her cultural
>heroes.  There again, any such social interaction as the feeling and
>expression of sympathy for someone, requires feeling for, ie
>identification with, that person.  What I'm trying to say is that
>identification is fundamental to socialisation and social interaction,
>and you obviously can't identify with anything you don't believe to be
>fundamentally like yourself.

>So what does identification have to do with consciousness?  Well, I
>don't think that it starts with our 'believing ourselves to be
>conscious'.  It is deeper than that: in fact, we simply experience
>things, and are 'programmed' to view other humans as essentially like
>us, ie as 'experiencers'.  The social phenomenon of identifying with
>others may reasonably be assumed to have arisen long before the concept
>of consciousness.  The fact is that we *naturally* identify with some
>of the things in our environment, and not with others; our intellectual
>view of this is that some things are conscious and others are not.

>That could be taken as meaning that maybe our 'programming' is wrong:
>maybe (some?) other people are not conscious, or maybe some inanimate
>things are.  But *that is meaningless*.  We either identify with a
>thing or we don't.  Period.

>The consequences for AI?  I'd suggest the field has nothing to lose by
>forgetting consciousness.  People have suggested that important things
>are associated with consciousness, like introspection and short-term
>memory, but leaving out consciousness would in no way prevent objective
>analogues of these, or any other mental phenomena, from being
>investigated.  You might even look at identification with others, but
>that might be a little one-sided!  ;-)

>BTW, what I am suggesting here might be taken as meaning that the mind
>as an individual entity is not a meaningful concept, that minds are
>"merely" the nodes in a social network.  Maybe a better way of putting
>it is that some of the software cannot, for reasons of function rather
>than implementation, be run on a standalone machine, only on a
>network.  This sort of view of the mind is actually quite common these
>days in the arts and social sciences, and if AI is ever to approach the
>higher level functions, the practitioners will have to start looking at
>postmodernism, structuralism, et al, if only to be able to say what is
>wrong with these approaches!  ;-)


   I think that the points made above are *very* relevant to mine.  Our
general concepts (and certainly our words) for mind, intelligence, thought,
consciousness, etc. (I could go on for a while) are almost completely based
on the social uses of the words.  Language was originally (and still very
much is) a social phenomenon, and language very much shapes our thought.
Many of us on this newsgroup(s) have managed to develop new and/or slightly
different (and certainly differentiated, i.e. specific sub-case) concepts
about this whole business, but we still struggle to deal with it, and to
keep those concepts alive amid a lack of words to express them to each
other and to ourselves.  He (Robin) had an excellent point when he hinted
that intelligence might be better considered as a social phenomenon than as
an individual one, as we respond to social pressure in our *ideas* as much
as in our behavior, and our very ideas are based mostly on information that
we receive through social contact of some sort (USENET being an interesting
new addition, historically, to what constitutes social contact).  And they
are almost always *about* something socially related as well.  What about
the social relations of the machines with us?  Most of the comments I have
seen about this seem to assume that they will be using human social rules,
but how do we know this?  Or, for that matter, if we design it to be so,
how do we know we made the right choice?

   Our thought is so much wrapped up in our language that when new concepts
are developed, they must really be given new words, or old meanings must give
way to the new ones.

   In the case of psychology and sociology, these old words were/are almost
sufficient, and with slight modifications the fields have gotten away with
using them (marginally, if you ask me).  But when they must be used in ways
so alien to their original uses, they become entirely inadequate and, as
mentioned before, misleading, especially when used in reference to machines
and the radically different paradigm that they could (and really do) open
up.

   Revolutions in fields happen when conceptual frameworks and terminology
differentiate to account for the splitting of categories in heretofore
unseen ways.  I think another split is starting, and we should accommodate
it as soon as possible (I couldn't resist that last comment ;-).

>If you are interested in an example of research in computing which does
>take recent work in the arts and social sciences very seriously, and in
>my view is successful in integrating these areas, where they naturally
>overlap, look out some of the stuff on computer supported cooperative
>work (CSCW) and groupware by the Awakening Technologies group.  (They
>seem to be mainly P and J Johnson-Lenz, and publish themselves.) They
>have submitted a paper to the forthcoming CSCW Special Edition of The
>Intl Jnl of Man Machine Studies.

   I shall.

   One last comment (before anyone flames me to death ;-).  It seems to me
that a lot of people have been tentatively (or not so tentatively) saying
this kind of thing before, and have either not said it strongly enough and
been ignored, or (like myself) are not quite sure how to proceed.  I not
only invite any and all comments, but would also like to encourage others
interested in continuing this discussion, on this thread or through e-mail,
as I am very enthusiastic about it.

   Thanks for your time,
	Erich

     /    Erich Stefan Boleyn     Internet E-mail: <erich@cs.pdx.edu>    \
>--={   Portland State University      Honorary Graduate Student (Math)   }=--<
     \   College of Liberal Arts & Sciences      *Mad Genius wanna-be*   /
           "I haven't lost my mind; I know exactly where I left it."

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (11/03/90)

   In my article <492@pdxgate.cs.pdx.edu> I commented on the current state
of discussion in AI/Cognitive Science, primarily about the topics of
"consciousness" and "intelligence", and an e-mail reply was sent to me
commenting on the length of my article (280 lines) and asking that I make a
shorter version of it.  (Here it is, don't flame it too badly ;-)

   I had four main points and here they are in short:

     1)    A lot of discussion in AI/CogSci about "consciousness" and
        "intelligence" (etc.) is inhibited by the lack of a language
        (metaphorically speaking) that we can use for it.  The terms we
        are using are borrowed from our normal social usage and from
        psychology/sociology (although I think they are at best marginal
        even there), and from what I have seen of the discussion going on
        in comp.ai/comp.ai.philosophy and on the e-mail group
        <cybsys-l@bingvmb.bitnet>, I would definitely say that they are
        inadequate.  There is a differentiation of concepts that happens
        when one learns about a new subject, and for myself, at least, I
        passed beyond the point where the language was adequate to be
        clear a while ago, and I have yet to see (admittedly from my
        somewhat scientifically naive point of view) the existence of
        such a language.

     2)    On a similar note, I think that the assumptions brought in
        from psychology and sociology, etc., by using *their* admittedly
        adopted language, are naive in the useful sense.  They are useful
        since they work with the best-known examples we have, but we are
        wedding ourselves to too much of the human paradigm.  In
        mathematics it is an old saying that the hardest things to prove
        are the ones we normally take for granted, and I certainly agree
        in this case.  The connotations are part of what is confusing us
        so much.  Just because the terms and usages we know are so
        eminently useful and practical in the social sense of usage (not
        the scientific, that is) does not imply in any way that they are
        useful to transfer into a scientific domain.  Maybe
        "consciousness" and "intelligence" are naive questions?  I am not
        sure, but it is looking more and more so.  A related one that
        comes to mind is our concept of intentionality, or "purpose".  I
        have come to think that it is a useful thing to have in terms of
        object-level description, but that it may also be a naive notion,
        one that we should try to look away from.  It was good for a
        start, but it is apparent to me that we are tripping over
        ourselves trying to get anywhere as a community.  So what if we
        have finely developed ideas of what is going on (I know I do)...
        can you communicate them in any kind of believable way without a
        considerable amount of persuasion?  It has been my experience
        that I have developed my notions to a fine enough extent that it
        takes a hell of a long time to communicate them to anyone.  We
        are re-inventing the wheel far too much.

     3)    An interesting (but hardly original) consideration would be to
        think about how we organize the world internally.  There seems to
        be a consensus of sorts among a good percentage of posters in
        this general thread that "higher intelligence" (this so badly
        illustrates what I am thinking) is a modeling ability of sorts.
        An idea that comes from this is to think of intelligence as an
        encoding scheme between inputs and outputs (over time) that on
        the low end would be a constant output, and on the high end a
        sort of maximal look-up table over time.  Our brains certainly
        don't handle *all* of the information coming in from the senses,
        so in a sense we are somewhere in the middle.

     4)    On a related note to 3, I (and some others, I think) put forward
        the idea of attempting to develop a "Turing Test" theory that would
        work at studying invariants in "intelligent" or even "conscious"
        behaviours, or, given some of the internal structure, would work
        from that...  Anyway, it would attempt to find ways of definitively
        testing certain things (this is where a language of "AI" would be a
        lot of help ;-).  One idea is to think about some kind of complexity
        rating for a hypothetical encoding scheme (both "complexity" and
        "encoding" are the wrong words, and possibly inadequate for this; I
        know that if one were to develop something like this, a
        "complexity" factor would be only one among many), which is what I
        am working on now, although badly, as can be seen.

   This is abbreviated (at least the justifications are), so for a better
reference look to the original article, or, of course, I'll respond to
specific inquiries.  All comments are welcome, and appreciated.  I really
think something needs to be done about this, as this "problem", as I see
it, has been around for long enough.

   I'm willing to work on this, and would like to know of others who are too.

   Erich

     /    Erich Stefan Boleyn     Internet E-mail: <erich@cs.pdx.edu>    \
>--={   Portland State University      Honorary Graduate Student (Math)   }=--<
     \   College of Liberal Arts & Sciences      *Mad Genius wanna-be*   /
           "I haven't lost my mind; I know exactly where I left it."