[mod.ai] self-recursive functions == consciousness

mckee@CORWIN.CCS.NORTHEASTERN.EDU (03/08/87)

While trying to come up with some "characteristically Lisp" code
with which to benchmark different implementations (since we didn't
have R. Gabriel's collection at the time), the following argument
occurred to me:
	Suppose one has two large, intelligent systems, both of which
can speak English, know about baseball and politics, and can accurately
report on their past experiences, yet one is conscious and the other
is not.  If we take away language, baseball, politics, and the past,
we are left with two *content-free* mental systems, i.e. pure structure.
One structure exhibits the properties of consciousness, while the other
does not.  What are the differences in structure that cause the differences
in properties?  Can we write them in Lisp?  Since we have by hypothesis
removed all content from the systems, we are left with no data, only
pure control flow, i.e. an unnamed lambda expression.
	Now the intuitive essence of consciousness seems to be self-
reference.  Without self-reference we have only unidirectional
entropy-processing, which is done by everything.  Can one write a
content-free self-referential function that gets its work done
by pure control flow and lambda-binding?  How about:

	(LAMBDA (self n)
	  (COND ((ZEROP n) 0)
	        (T (+ n (self self (1- n))))))

The function this expresses is deliberately simple, because what's
important is how it's expressed, not what it does.  In order to work,
it has to be called with itself and an integer as arguments; it then
computes the sum of the first n integers.  But it can do this without
touching the static part of its environment: no DEFINEs, DEFUNs, SETs,
or anything else.  (It does require a quote to get it started.)
It is completely dynamic, as pure consciousness seems to be.
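	In Common Lisp as it stands today, a variable in function
position isn't applied automatically, so the same trick needs FUNCALL
both to start the loop and to make the self-call; something like this
(the 10 is an arbitrary example argument):

	;; Handed to itself by pure lambda-binding: no DEFUN, no SETQ,
	;; nothing deposited in the static environment.
	(FUNCALL (LAMBDA (f) (FUNCALL f f 10))
	         (LAMBDA (self n)
	           (COND ((ZEROP n) 0)
	                 (T (+ n (FUNCALL self self (1- n)))))))
	;; => 55, the sum of the integers 1 through 10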
	The key feature of self-recursive functions like this appears
to be the applicative loop that occurs when the function is a lambda-
expression that (1) has been given itself as an argument and (2) calls
itself (i.e. its arg) recursively using the self-arg in the same
argument position.  A general pattern for this looks like:

	(LAMBDA (A1 ... Ai ... An)
	   ... (Ai B1 ... Ai ... Bn) ...)

where any corresponding Aj and Bj pair may be identical, but at least
one Bj must be different from its Aj or an infinite recursion will
result. (There are other conditions on how they have to differ which
are irrelevant as long as they guarantee termination.)  Again, this
only works if it is initiated with itself as argument Ai.
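	For concreteness, here is the summation again with the
self-argument moved to the second position, i.e. i = 2, written with
FUNCALL as above.  B2 = A2 = self, while B1 = (1- n) differs from
A1 = n, which is what lets the recursion terminate:

	(FUNCALL (LAMBDA (f) (FUNCALL f 10 f))   ; initiated with itself as A2
	         (LAMBDA (n self)
	           (COND ((ZEROP n) 0)
	                 (T (+ n (FUNCALL self (1- n) self))))))
	;; => 55; SELF occupies the same argument position in the call
	;; that it occupies in the parameter list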
	It seems to me that this pattern captures the only aspect
of consciousness that is writable in lambda calculus and essential
for consciousness while not essential for not-necessarily-conscious
activities such as speech, memory, vision or problem-solving.
The fact that it's a pattern explains a lot of the trouble people
have with consciousness, since its elements could be broken apart,
scattered, renamed, and passed through other functions before
being resurrected as one funcall among many.  (In a system as complex
as a human mind, make that "many, many, many, many"...)
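	One small illustration of that scattering: below, the
self-argument takes a detour through an anonymous intermediary that
renames it before the applicative loop finally closes, yet the pattern
is the same one as before:

	(FUNCALL (LAMBDA (f) (FUNCALL f f 5))
	         (LAMBDA (self n)
	           (COND ((ZEROP n) 0)
	                 (T (+ n (FUNCALL (LAMBDA (g m)      ; SELF arrives renamed as G
	                                    (FUNCALL g g m)) ; and the loop closes here
	                                  self (1- n)))))))
	;; => 15; a recognizer has to see through the renaming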
	A function that recognizes self-recursion in an arbitrary
function definition is not small even in a tiny language, since
it has to be able to track the components of the critical argument
through potential decomposition and reconstruction, quoting, lambda-
binding and other tortures.  The recognition function for a real AI
language like Common Lisp will be even bigger, since it will have to
deal with macros, reader modifications, and STRING/MAKE-SYMBOL pairs.
It may even turn out not to be a computable function, for all I can tell.
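	For what it's worth, a toy recognizer for just the direct,
unobfuscated form of the pattern is short: it asks whether a lambda-
expression ever applies one of its own parameters to itself, either
directly or through an explicit FUNCALL.  Every torture listed above
(quoting, macros, rebinding, decomposition and reconstruction) defeats
it, so take it as a floor on the problem rather than a solution:

	(DEFUN self-application-p (param form)
	  ;; True if FORM somewhere applies PARAM to itself,
	  ;; directly or via FUNCALL.
	  (AND (CONSP form)
	       (OR (AND (EQ (FIRST form) param)
	                (MEMBER param (REST form)))
	           (AND (EQ (FIRST form) 'FUNCALL)
	                (EQ (SECOND form) param)
	                (MEMBER param (CDDR form)))
	           (SOME (LAMBDA (sub) (self-application-p param sub))
	                 form))))

	(DEFUN directly-self-recursive-p (expr)
	  ;; True if EXPR is a LAMBDA-expression whose body applies
	  ;; one of its own parameters to itself.
	  (AND (CONSP expr)
	       (EQ (FIRST expr) 'LAMBDA)
	       (SOME (LAMBDA (p) (self-application-p p (CDDR expr)))
	             (SECOND expr))))

	;; (directly-self-recursive-p
	;;   '(LAMBDA (self n)
	;;      (COND ((ZEROP n) 0) (T (+ n (self self (1- n)))))))  => T

It answers T for both forms of the summation function, but the detour
through the renaming intermediary above already fools it.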

	It seems to me that there are four classes of reasonable
objections to this claim that self-recursion is the essence of
consciousness:
1.  "Consciousness is an ill-posed problem" in the sense that Tomaso
    Poggio has been talking about in vision.  There's no unitary,
    simple, elegant way of expressing what we're talking about.
    I'm unhappy with this because it means we'll never "understand"
    consciousness, though we may be able to construct large
    more-or-less-convincing systems that appear to act as if they
    were conscious.
2.  "Consciousness cannot be expressed in pure lisp."  A strong
    claim, since accepting it requires modifying Church's
    Thesis and claiming that there are material objects that
    perform computations that cannot be expressed in the lambda
    calculus.  I'm not entirely opposed to this, since one can
    envision massively parallel systems becoming so large that
    it might be useful to start thinking in terms of "density of
    computation" and taking the limit as the density approaches
    continuity in the same way the rational numbers approach
    the reals.  Physically, you run into quantum limitations first,
    but continuous computation may be theoretically interesting.
    (No, I don't think this is the same as analog computation,
    but I can't explain why.)
3.  "Consciousness can be expressed in lisp, but the pattern
    shown here isn't it."  Please show us the correct answer.
    But remember Occam's razor: in science, small is beautiful.
4.  "Consciousness is an illusion.  It can't be expressed in lisp
    because it doesn't exist."  This is my favorite.  Stevan Harnad's
    colleague Julian Jaynes has written a fascinating book which
    argues that consciousness first appeared on the planet less
    than 3000 years ago.  I see no reason why consciousness couldn't
    vanish once we learn how to avoid spending valuable mental
    resources on introspection.  It of course remains to be explained
    why consciousness has been such a powerful illusion.

I apologize if I'm rediscovering ground already covered in this forum;
I've only been reading the AIList for a few months. This is about
all I have to say on the subject, so if the moderator decides to
distribute this, I hope he doesn't mind if I request that responses
be sent to the net, not to me.

"...a region of sight, of sound, of mind.
    Submitted for your consideration, from"

	- George McKee
	  College of Computer Science
	  Northeastern University