[net.ai] reflexive reasoning ?

marcel@uiucdcs.UUCP (marcel ) (09/28/83)

#N:uiucdcs:32300003:000:3358
uiucdcs!marcel    Sep 27 10:27:00 1983


I believe the pursuit of "consciousness" to be complicated by the difficulty
of defining what we mean by it (to state the obvious). I prefer to think in
less "spiritual" terms, say starting with the ability of the human memory to
retain impressions for varying periods of time. For example, students cramming
for an exam can remember long lists of things for a couple of hours -- just
long enough -- and forget them by the end of the same day. Some thoughts are
almost instantaneously lost, others last a lifetime.

Here's my suggestion: let's start thinking in terms of self-observation, i.e.
the construction of models to explain the traces that are left behind by things
we have already thought (and felt?). These models will be models of what goes
on in our thought processes; they can be incorrect and incomplete (like any
other model), and even reflexive (the thoughts dedicated to this analysis
leave their own traces, and are therefore subject to modelling, creating
notions of self-awareness).
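
To make the idea concrete, here is a toy sketch of my own (purely
illustrative, and nothing like a serious implementation): every thought
leaves a trace, and observing a trace is itself a thought, so the observer
can be turned on its own output.

```python
# A toy model of self-observation via traces (my own illustration).
# Every "thought" leaves a trace; analyzing a trace leaves a trace too,
# so the analyzer is subject to the same mechanism it applies.

class Mind:
    def __init__(self):
        self.traces = []          # impressions left behind by prior thoughts

    def think(self, content, about=None):
        trace = {"id": len(self.traces), "content": content, "about": about}
        self.traces.append(trace)
        return trace

    def observe(self, trace):
        # A thought about a trace is itself a thought, leaving its own trace.
        return self.think("I noticed: " + trace["content"], about=trace["id"])

mind = Mind()
t0 = mind.think("the capital of France is Paris")
t1 = mind.observe(t0)     # first-order self-observation
t2 = mind.observe(t1)     # analysis of the analysis: the same mechanism
```

The point of the sketch is that there is no separate "observer": `observe`
is just `think` pointed at a trace, including traces that `observe` itself
left behind.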

To give a concrete (if standard) example: it's quite reasonable for someone
to say to us, "I didn't know that." Or again, "Oh, I just said it, what was
his name again ... How can I be so forgetful!"

This leads us into an interesting "problem": the fading of human memory with
time. I would not be surprised if this were actually desirable, and had to be
emulated by computer. After all, if you were to retain all those traces of
where a thought process has gone, plus traces of the analysis of those traces,
and so on, then memory would fill up very quickly.
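
As a toy illustration of why forgetting might be desirable (again my own,
and deliberately crude): give each trace a strength that decays with time
unless it is refreshed, so the store stays bounded.

```python
# A toy decaying trace store (my own illustration, not a memory model).
# Traces weaken at every time step; once too weak, they are forgotten.

class TraceStore:
    def __init__(self, initial_strength=3):
        self.initial = initial_strength
        self.traces = {}          # content -> remaining strength

    def record(self, content):
        self.traces[content] = self.initial   # recording refreshes strength

    def tick(self):
        # One unit of time passes: every trace weakens; the weakest vanish.
        self.traces = {c: s - 1 for c, s in self.traces.items() if s > 1}

store = TraceStore()
store.record("long list of exam facts")      # crammed, never refreshed
store.tick()
store.tick()
store.record("a fresh impression")
store.tick()
# after three ticks the crammed facts are gone; the fresh impression remains
```

A crammed list survives just long enough, then decays away, while anything
re-recorded often enough would last indefinitely.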

I have been thinking in this direction for some time now, and am working on
a programming language which operates on several of the principles stated
above. At present the language is capable of responding dynamically to any
changes in problem state produced by other parts of the program, and rules
can even respond to changes induced by themselves. Well, that's the start;
the process of model construction seems to me to be by far the harder part
of the task.
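
I cannot reproduce the language here, but the flavor of that mechanism can
be sketched as follows (only my own toy, not METALOG itself): a rule engine
that re-matches its rules after every change to the problem state, so a rule
can respond to changes made by other rules -- or by itself.

```python
# A toy forward-chaining rule engine (my own sketch, not METALOG).
# After every firing we re-match from the top, so rules respond to
# changes induced by other rules, and even to their own changes.

def run(state, rules, limit=50):
    log = []
    for _ in range(limit):
        for name, condition, action in rules:
            if condition(state):
                action(state)      # changes the problem state ...
                log.append(name)   # ... then we re-match from the top
                break
        else:
            return log             # quiescence: no condition holds
    return log

state = {"n": 0, "facts": set()}
rules = [
    # "bump" responds to its own changes: each firing re-satisfies
    # (and finally falsifies) its own condition.
    ("bump", lambda s: s["n"] < 3,
             lambda s: s.update(n=s["n"] + 1)),
    # "note" responds to a change induced elsewhere (by "bump").
    ("note", lambda s: s["n"] == 3 and "done" not in s["facts"],
             lambda s: s["facts"].add("done")),
]
print(run(state, rules))   # ['bump', 'bump', 'bump', 'note']
```

The conditions that stop a rule from firing forever must be written into the
rule itself; that is what keeps the apparent regress finite in practice.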

It becomes especially interesting when you think about modelling what look
like "levels" of self-awareness, but could actually be manifestations of just
one mechanism: traces of some work, which are analyzed, thus leaving traces
of self-analysis; which are analyzed ... How are we to decide that the traces
being analyzed are somehow different from the traces of the analysis? Even
"self-awareness" (as opposed to full-blown "consciousness") will be difficult
to understand. However, at this point I am convinced that we are not dealing
with a potential for infinite regress, but with a fairly simple mechanism
whose results are hard to interpret. If I am right, we may have some thinking
to do about subject-object distinctions.

In case you're interested in my programming language, look for two papers due
to appear shortly: one in Software Practice and Experience (Logic-Programming
Production Systems with METALOG), and a short communication at a conference
on AI, April '83 (METALOG: a Language for Knowledge Representation and
Manipulation).
Of course, I don't say that I'm thinking about "self-awareness" as a long-term
goal (my co-author isn't)! If/when such a goal becomes acceptable to the AI
community it will probably be called something else. Doesn't "reflexive
reasoning" sound more scientific?

						Marcel Schoppers,
						Dept of Comp Sci,
						U of Illinois @ Urbana-Champaign
						uiucdcs!marcel

marcel@uiucdcs.UUCP (marcel ) (09/28/83)

#R:uiucdcs:32300003:uiucdcs:32300004:000:275
uiucdcs!marcel    Sep 27 11:22:00 1983

There is a line missing in the last paragraph. The references are:

	Logic-Programming Production Systems with METALOG.  Software Practice
	   and Experience, to appear shortly.

	METALOG: a Language for Knowledge Representation and Manipulation.
	   Conf on AI (April '83).