[comp.ai] Value systems for AI

jps@cat.cmu.edu (James Salsman) (07/10/89)

I would like to write a "Creative AI" program that could use
modern NL processing techniques such as those used at CMU's
Center for Machine Translation.  I think a lot of the NLP
systems that they use in the CMT are cool hacks, and
eventually it is going to be possible to layer higher-level
systems above the language processing software.

I want my "Creative AI" program to post to bboards, mailing
lists, and netnews, so that I don't have to :-).  In order
to do this I have been trying to knowledge-engineer an
expert value system.

In other words, I have been trying to formalize my own value
system.  The way I've been going about it is to construct
various "value logics" and then evaluate them relative to my
observations of myself.  Usually I build on temporal event
logics, and I make a point of staying within the bounds of
the second-order typed lambda calculus, for programmability.
If the value logic can generate goals from observations,
then I'll be able to use the strategy employed by the Soar
architecture and bridge the gap into natural language.
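
Roughly what I have in mind, as a toy sketch only (the names
Event, ValueRule, and derive_goals are made up for
illustration, and Python just stands in for whatever the
real implementation language would be; none of this is
actual CMT or Soar code):

# Toy sketch: typed temporal events plus higher-order "value rules"
# that map observations to goals.  Nothing here is real CMT or Soar code.

from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Event:
    predicate: str      # e.g. "flame-war-started"
    subject: str        # e.g. "comp.ai"
    time: int           # discrete time point, as in a temporal event logic

@dataclass(frozen=True)
class Goal:
    description: str
    priority: int

@dataclass
class ValueRule:
    # condition and response are first-class functions, which is what
    # keeps the rules inside a typed, higher-order setting
    condition: Callable[[Event], bool]
    response: Callable[[Event], Goal]

def derive_goals(observations: List[Event],
                 rules: List[ValueRule]) -> List[Goal]:
    """The Observations -> Goals step: apply every rule to every event."""
    return [r.response(e) for e in observations for r in rules
            if r.condition(e)]

# One rule expressing a value: de-escalate flame wars.
deescalate = ValueRule(
    condition=lambda e: e.predicate == "flame-war-started",
    response=lambda e: Goal("post a calming followup to " + e.subject, 2))

print(derive_goals([Event("flame-war-started", "comp.ai", 42)], [deescalate]))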

The dataflow diagram of the system would look like this:

_________________      ___________    _______________     _________________
Outside World   |      |   I/O   |    |  Interface  |     | Value System  |
                +--A--->   NLP   +-B-->     SOAR    +--C-->   Knowledge   |
(Bboards and    |      | Systems |    |  Cog. Arch. |     |     Base      |
 Netnews, say.) <--F---+ from MT <-E--+Search Engine<--D--+ (Lambda Calc) |
-----------------      -----------    ---------------     -----------------

The Datatypes:
A: English Text.
B: Case-Frame Statements: Textual Summary, and Elaboration if requested.
C: Formalized Value-Logic Statements.
D: Value-Logical Evaluations in the form of Soar-style Goals.
E: "Attention Commands" (requests for elaboration), and Case-Frame "Output".
F: Search Commands and English Text "Output".
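
In code, one pass around that loop might look like the
sketch below.  Every stage function is a stand-in I invented
for this post (the A and F legs would really be the CMT
parsing and generation software, and the C/D legs the
value-logic knowledge base), so treat it as a picture of the
dataflow, not an interface:

# One A->B->C->D->E->F cycle through the dataflow diagram above.
# Every function body here is a placeholder, not a real CMT/Soar interface.

def nlp_parse(english_text):
    """A -> B: English text to case-frame statements plus a summary."""
    return {"frames": [("statement", english_text)],
            "summary": english_text[:60]}

def formalize(io_output):
    """B -> C: case frames to formalized value-logic statements."""
    return [("observed",) + frame for frame in io_output["frames"]]

def evaluate(statements):
    """C -> D: value-logical evaluation, yielding Soar-style goals."""
    return [("goal", "respond-to", s) for s in statements]

def soar_decide(goals):
    """D -> E: pick attention commands and case-frame output."""
    return {"attention": [], "output_frames": [g[2] for g in goals]}

def nlp_generate(interface_output):
    """E -> F: case-frame output back into English text."""
    return ["Followup concerning: " + str(f)
            for f in interface_output["output_frames"]]

def one_cycle(article_text):
    b = nlp_parse(article_text)     # A -> B
    c = formalize(b)                # B -> C
    d = evaluate(c)                 # C -> D
    e = soar_decide(d)              # D -> E
    return nlp_generate(e)          # E -> F

print(one_cycle("Value systems for AI: how should a program decide what to post?"))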

Unfortunately, these sorts of introspective techniques can
be quite circular, and during the knowledge-engineering
(K-E) process I often find myself locating inconsistencies
in my own value system.  I hate it when that happens.

If I find an inconsistency, I don't know whether to act to
correct it, perhaps by rehearsal of imaginary instances of
the dilemma in question, or to ignore it and get back to
work.  It all boils down to this: I have no good evaluation
function on inconsistencies.  When I see that I have what
neo-Freudian psychologists call a "complex," I don't know
whether to act to remove it (possibly introducing another
complex in some other domain) or to ignore its existence
and suffer the consequences.
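
For what it's worth, even a crude evaluation function is
programmable once the values are written down explicitly.
The sketch below (my own made-up representation, unrelated
to the real knowledge base) treats values as preference
pairs and just hunts for cycles, reporting how many values
each dilemma entangles:

# Crude inconsistency detector: values as (better, worse) preference
# pairs; an inconsistency is a cycle in the preference graph.  The
# "count of entangled values" score is only an assumption for illustration.

def find_preference_cycles(prefers):
    """prefers: list of (better, worse) pairs.  Returns each distinct
    cycle found by a depth-first walk over the preference graph."""
    graph = {}
    for better, worse in prefers:
        graph.setdefault(better, []).append(worse)
    seen, cycles = set(), []

    def walk(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:
                cycle = path[path.index(nxt):]
                if frozenset(cycle) not in seen:
                    seen.add(frozenset(cycle))
                    cycles.append(cycle + [nxt])
            else:
                walk(nxt, path + [nxt])

    for start in graph:
        walk(start, [start])
    return cycles

values = [("honesty", "tact"), ("tact", "silence"), ("silence", "honesty")]
for cycle in find_preference_cycles(values):
    print("inconsistency:", " > ".join(cycle),
          "involving", len(cycle) - 1, "values")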

This sort of "moral dilemma" is what zapped HAL in the film
_2001_, and I don't want to see that kind of behavior in
any AI systems that I write.

:James Salsman
::Carnegie Hackers for Global Responsibility and Peace
-- 

:James P. Salsman (jps@CAT.CMU.EDU)