[comp.ai.digest] Self-conscious code and the Chinese room

rei3@SPHINX.UCHICAGO.EDU (ted reichardt) (02/01/88)

From: Jorn Barger, using this account.
Please send any mail replies c/o rei3@sphinx.uchicago.edu



I'm usually turned off by any kind of
philosophical speculation,
so I've been ignoring the Chinese Room melodrama
from day one.
But I came across a precis of it the other day
and it struck me
that a programming trick I've been working out
might offer a partial solution to the paradox.

Searle's poser is this:
when you ask a question of a computer program,
even if it gives a reasonable answer
can it really be said to exhibit "intelligence,"
or does it only _simulate_ intelligent behavior?  
Searle argues that the current technology
for question-answering software
assumes a database of rules
that are applied by a generalized rule-applying algorithm.
If one imagines a _human_ operator
(female, if we want to be non-sexist)
in place of that algorithm,
she could still apply the rules
and answer questions
even though they are posed
in a language she doesn't _understand_--
say, Chinese.
So, Searle says,
the ability to apply rules
falls critically short of our natural sense
of the word "intelligence."

Searle's paradigm for the program
is drawn from the work of Roger Schank
on story-understanding and scripts.
Each domain of knowledge
about which questions can be asked
must be spelled out as an explicit script,
and the rule-applying mechanism
should deduce from clues (such as the vocabulary used)
which domain a question refers to.
Once it has identified the domain,
it can extract an answer from the rules of that domain.
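
Just to make the mechanism concrete,
here is a toy Lisp sketch of that clue-driven dispatch
(the domain names and clue words are invented for illustration,
not drawn from Schank's programs):

    ;; A toy table of domains keyed by vocabulary clues.
    (defparameter *domain-clues*
      '((arithmetic plus minus times equals)
        (restaurant waiter menu tip order)))

    ;; Return the first domain whose clue words appear in the question.
    (defun identify-domain (question-words)
      (first (find-if (lambda (entry)
                        (intersection (rest entry) question-words))
                      *domain-clues*)))

    ;; (identify-domain '(what is two plus two))  =>  ARITHMETIC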

Again, these rules can be applied
by the rule-applying algorithm
to the symbols in the question
without reference to the _meaning_ of the symbols,
and so, for Searle, intelligence is not present.

But suppose now that one domain we can ask about
is the domain of "question-answering behavior in machines" itself.
So among the scripts the program may access
must be a question-answering script.
We might ask the program,
"If a question includes mathematical symbols,
what domains need to be considered?"
The question-answering script will include rules
like "If MATH-SYMBOL then try DOMAIN (arithmetic)"

But the sum of all these
rules of question-answering
will be logically identical to 
the question-answering algorithm itself.
In Lisp, the script (data) and the program (code)
could even be exactly the same set of Lisp expressions.
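
A minimal sketch of that identity, with invented names:
the same rule list is both the script the interpreter runs
and the data it can be questioned about.

    ;; One rule list, used both ways.
    (defparameter *qa-rules*
      '((math-symbol arithmetic)
        (chinese-character story)))

    ;; Used as code: route a question to a domain.
    (defun route-question (clue)
      (second (assoc clue *qa-rules*)))

    ;; Used as data: answer a question _about_ question-answering.
    (defun domains-considered-for (clue)
      (mapcar #'second
              (remove-if-not (lambda (rule) (eq (first rule) clue))
                             *qa-rules*)))

    ;; (route-question 'math-symbol)          =>  ARITHMETIC
    ;; (domains-considered-for 'math-symbol)  =>  (ARITHMETIC)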

Now, Searle will say, even so,
the program is still answering these questions
without any knowledge of the meanings of the symbols used.
A human operator could similarly
answer questions about answering questions
without knowing what the topic is.
In this case, for the human operator
the script she examines will be pure data,
no executing code.
Her own internal algorithms
as they execute
will not be open to such mechanical inspection.

Yet if we ask the program to modify one of its scripts,
as we have every right to do,
and the script we ask it to modify is one that also executes,
_its_ behavior may change
while the human operator's never will.
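
Continuing the toy sketch above
(add-rule is an invented name, not any particular system's):
asking the program to extend its own script
changes how the very same interpreter behaves afterward.

    ;; Modify the script, which is also the executing rule set.
    (defun add-rule (clue domain)
      (push (list clue domain) *qa-rules*))

    ;; (route-question 'greek-letter)      =>  NIL
    ;; (add-rule 'greek-letter 'geometry)
    ;; (route-question 'greek-letter)      =>  GEOMETRY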

And in a sense we might see evidence here
that the program _does_ understand Chinese,
for if we ask a human to change her behavior
and she subsequently does
we would have little doubt that understanding took place.
To explain away such a change as blind rule-following
we would have to picture her as
changing her own brain structures
with microtomes and fiber optics.
(But the cybernetic equivalent of this ought to be
fiber optics and soldering irons...)

Self-modifying code
has long been a skeleton key in the programmer's toolbox,
and a skeleton in his closet.
It alters itself blindly, dangerously,
inattentive to context and consequences.
But if we strengthen our self-modifying code
with _self-conscious_ code,
as Lisp and Prolog easily can,
we get something very _agentlike_.
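
The difference, continuing the same toy sketch
(both function names are invented):
a blind modifier simply rewrites a rule,
while a self-conscious one first inspects its own rule set as data
and can refuse a change that conflicts with what is already there.

    ;; Blind self-modification: overwrite whatever rule is there.
    (defun blindly-set-rule (clue domain)
      (setf *qa-rules*
            (cons (list clue domain)
                  (remove clue *qa-rules* :key #'first))))

    ;; Self-conscious modification: examine the rule set first.
    (defun consciously-set-rule (clue domain)
      (let ((existing (assoc clue *qa-rules*)))
        (if (and existing (not (eq (second existing) domain)))
            (list 'refused 'conflicts-with existing)
            (blindly-set-rule clue domain))))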

Admittedly, self-consciousness about question-answering behavior
is pretty much a triviality.
But extend the self-conscious domain
to include problem-solving behavior,
goal-seeking behavior,
planning behavior,
and you have the kernel of something more profound.

Let natural selection build on such a kernel
for a few million, or hundreds of millions of years,
and you might end up with something pretty intelligent.

The self-reference of Lisp and Prolog
takes place on the surface of a high-level language.
Self-referent _machine code_ would be more interesting,
but I wonder if the real quantum leap
might not arrive when we figure out how to program
self-conscious _microcode_!

bwk@MITRE-BEDFORD.ARPA (Barry W. Kort) (02/02/88)

Jorn Barger identifies an important characteristic of an intelligent
system:  namely the ability to learn and evolve its intelligence.

In thinking about artificial intelligence, I like to draw a distinction
between a sapient system and a sentient system.  A sapient system reposes
knowledge, but does not evolve.  A sentient system adds to its abilities
as it goes along.  It learns.

If the Chinese Room not only applied the rules for manipulating the
squiggles and squoggles, but also evolved the rules themselves so
as to improve its ability to synopsize a story, then we would be more
sympathetic to the suggestion that the room was intelligent.

Here is where the skeleton key comes in.  In computer programming, there
is no inherent taboo that prevents a program from modifying its own code.
Most programmers religiously avoid such practice, because it usually leads
to suicidal outcomes.  But there are good examples of game-playing
programs that do evolve their heuristic rules based on experience.
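
As a crude sketch of what "evolving the heuristic rules" can mean in
code (sticking with the Lisp of the earlier article; the feature names
and the update rule here are invented for illustration, not taken from
any particular program), imagine a game-player that scores positions
with weighted features and nudges the weights after each game:

    ;; Heuristic weights, stored as mutable data.
    (defparameter *weights*
      (list (cons 'material 1.0) (cons 'mobility 1.0) (cons 'center 1.0)))

    ;; Score a position as a weighted sum of its features.
    (defun evaluate (position-features)
      (loop for (name . value) in position-features
            sum (* value (cdr (assoc name *weights*)))))

    ;; After a game, nudge the weight of each feature that guided play:
    ;; up after a win, down after a loss.
    (defun learn-from-game (position-features won-p)
      (dolist (f position-features)
        (let ((pair (assoc (car f) *weights*)))
          (when pair
            (incf (cdr pair) (if won-p 0.1 -0.1))))))

    ;; (learn-from-game '((mobility . 3) (center . 1)) t)
    ;; *weights*  =>  ((MATERIAL . 1.0) (MOBILITY . 1.1) (CENTER . 1.1))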

Jacob Bronowski has said that if man is any kind of machine, he is
a learning machine.  I think that Minsky would agree.

Now if we can just work out the algorithms for learning...

--Barry Kort