[comp.ai] Split from AI/CogSci

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (11/03/90)

   (I am cross-posting this to 'comp.ai' for more comments)

   In my article <492@pdxgate.cs.pdx.edu> on 'comp.ai.philosophy' I commented
on the current state of discussion in AI/Cognitive Science, primarily on the
topics of "consciousness" and "intelligence".  An e-mail reply commented on
the length of that article (280 lines) and asked that I post a shorter
version.  (Here it is, don't flame it too badly ;-)

   I had four main points; here they are in brief:

     1)    A lot of discussion in AI/CogSci about "consciousness" and
        "intelligence" (etc.) is inhibited by the lack of a language
        (metaphorically speaking) that we can use for it.  The terms
        we are using are borrowed from our normal social usages and
        from psychology/sociology (although I think that they are at
        best marginal even there), and from what I have seen of the
        discussion going on in comp.ai/comp.ai.philosophy and an e-mail
        group <cybsys-l@bingvmb.bitnet>, I would definitely say that
        they are inadequate.  There is a differentiation of concepts
        that happens as one learns about a new subject, and for myself,
        at least, I passed the point where the language was adequate to
        stay clear a while ago, and I have yet to see (admittedly from
        my somewhat scientifically naive point of view) the existence
        of such a language.

     2)    On a similar note, I think that the assumptions brought in
        from psychology and sociology, etc. by adopting *their* language
        are naive, though usefully so.  They are useful since they work
        with the best known examples we have, but we are wedding
        ourselves to too much of the human paradigm.  There is an old
        saying in mathematics that the hardest things to prove are the
        ones we normally take for granted, and I certainly agree in this
        case.  The connotations are part of what is confusing us so
        much.  Just because the terms and usages we know are so
        eminently useful and practical in the social sense (not the
        scientific one) does not imply in any way that they transfer
        usefully into a scientific domain.  Maybe "consciousness" and
        "intelligence" are naive questions?  I am not sure, but it is
        looking more and more that way.  A related one that comes to
        mind is our concept of intentionality, or "purpose".  I have
        come to think that it is useful for object-level description,
        but that it may also be a naive notion, one we should try to
        look past.  It was good for a start, but it is apparent to me
        that we are tripping over ourselves trying to get anywhere as a
        community.  So what if we have finely developed ideas of what is
        going on (I know I do)...  but can you communicate them in any
        kind of believable way without a considerable amount of
        persuasion?  My experience is that I have developed my notions
        to a fine enough extent that it takes a hell of a long time to
        communicate them to anyone.  We are re-inventing the wheel far
        too much.

     3)    An interesting (but hardly original) consideration is to
        think about how we organize the world internally.  There seems
        to be a consensus of sorts among a good percentage of posters in
        this general thread that "higher intelligence" (the phrase
        itself illustrates how badly our terms serve what I am thinking)
        is a modeling ability of sorts.  One idea that follows is to
        think of intelligence as an encoding scheme between inputs and
        outputs (over time), with a constant output at the low end and a
        sort of maximal look-up table over time at the high end.  Our
        brains certainly don't handle *all* of the information coming in
        from the senses, so in that sense we sit somewhere in the
        middle.
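
   To make the two ends of that spectrum concrete, here is a toy sketch in
Python (the class names and interfaces are my own invention for illustration,
nothing more):

    # Low end of the spectrum: ignores every input, emits one fixed output.
    class ConstantResponder:
        def __init__(self, output):
            self.output = output

        def respond(self, stimulus):
            return self.output

    # High end of the spectrum: a "maximal look-up table over time" that
    # memorizes a distinct output for every input history it has ever seen.
    class LookupTableResponder:
        def __init__(self, default=None):
            self.table = {}        # full input history -> output
            self.history = ()
            self.default = default

        def learn(self, stimulus, output):
            # Associate the history ending in `stimulus` with `output`.
            self.table[self.history + (stimulus,)] = output

        def respond(self, stimulus):
            self.history = self.history + (stimulus,)
            return self.table.get(self.history, self.default)

    rock = ConstantResponder("...")
    print(rock.respond("hello"))      # always "..."

    parrot = LookupTableResponder(default="?")
    parrot.learn("hello", "hello yourself")
    print(parrot.respond("hello"))    # "hello yourself"
    print(parrot.respond("goodbye"))  # "?"  (never seen this history)

   Anything interesting presumably sits between those two: it compresses the
input history rather than either discarding it or storing it verbatim.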

     4)    On a related note to 3, I (and some others, I think) put
        forward the idea of developing a "Turing Test" theory: one that
        would study invariants in "intelligent" or even "conscious"
        behaviours, or, given some of the internal structure, would work
        from that...  Anyway, it would attempt to find ways of
        definitively testing certain things (this is where a language of
        "AI" would be a lot of help ;-).  One idea would be some kind of
        complexity rating for a hypothetical encoding scheme (both
        "complexity" and "encoding" are the wrong words, and possibly
        inadequate for this; I know that if one were to develop
        something like this, a "complexity" factor would be only one
        among many).  That is what I am working on now, although badly,
        as can be seen.
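
   As a very crude stab at that "complexity rating", one could use compressed
description length as a stand-in: record the input/output behaviour of a
responder over some stimuli and see how small it compresses.  (Using zlib as
a proxy for description length is purely my own toy assumption; a constant
responder should compress to almost nothing, a rich look-up table much less
so.)

    import zlib

    def complexity_rating(trace):
        # `trace` is a list of (stimulus, response) pairs; its compressed
        # size, in bytes, serves as a rough "complexity" score.
        raw = repr(trace).encode("utf-8")
        return len(zlib.compress(raw, 9))

    constant_trace = [("s%d" % i, "...") for i in range(100)]
    varied_trace   = [("s%d" % i, "out%d" % ((i * 7) % 13)) for i in range(100)]

    print(complexity_rating(constant_trace))  # small: trivially regular
    print(complexity_rating(varied_trace))    # larger: more to describe

   Of course this only measures the recorded behaviour, not the mechanism
behind it, which is one reason such a "complexity" factor could only be one
among many.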

   This is abbreviated (at least the justifications are), so for a better
reference look to the original article, or, of course, I'll respond to
specific inquiries.  All comments are welcome and appreciated.  I really think
something needs to be done about this, as this "problem", as I see it, has
been around for long enough.

   I'm willing to work on this, and would like to know of others who are too.

   Erich

     /    Erich Stefan Boleyn     Internet E-mail: <erich@cs.pdx.edu>    \
>--={   Portland State University      Honorary Graduate Student (Math)   }=--<
     \   College of Liberal Arts & Sciences      *Mad Genius wanna-be*   /
           "I haven't lost my mind; I know exactly where I left it."