[net.ai] John Batali at the AI Revolving Seminar 30 November

AGRE@sri-unix.UUCP (11/23/83)

...able to affect itself by its decisions.

A program built on these lines cannot think about every step of its
reasoning -- because it would never stop thinking about "how to think
about" whatever it is thinking about.  On the other hand, we want it
to be possible for the program to consider any and all of its
reasoning steps.  The solution to this dilemma may be a kind of
"virtual reasoning" in which a program can exert reasoned control over
all aspects of its reasoning process even if it does not explicitly
consider each step.  This could be implemented by having the program
construct general reasoning plans which are then run like programs in
specific situations.  The program must also be able to modify
reasoning plans if they are discovered to be faulty.  A program with
this ability could then represent itself as an instance of a reasoning
plan.
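As a rough illustration of this idea (my own sketch, not part of the
proposal itself), here is a minimal Python rendering of a reasoning
plan that is run like a program in a specific situation and revised
when one of its steps turns out to be faulty; the names ReasoningPlan,
run, and revise are made up for the example:

    # Illustrative sketch only: a "reasoning plan" is a sequence of steps
    # applied in order, without deliberating over each one ("virtual
    # reasoning"), yet the plan itself remains open to reasoned revision.

    class ReasoningPlan:
        def __init__(self, name, steps):
            self.name = name
            self.steps = list(steps)      # each step: situation -> situation

        def run(self, situation):
            # Run the plan like a program on a specific situation.
            for step in self.steps:
                situation = step(situation)
            return situation

        def revise(self, bad_step, replacement):
            # Reasoned control over the plan itself: swap out a faulty step.
            self.steps = [replacement if s is bad_step else s
                          for s in self.steps]

    # A specific situation is just a dictionary of facts.
    def gather_facts(situation):
        situation["facts_gathered"] = True
        return situation

    def draw_conclusion(situation):
        situation["conclusion"] = "ok" if situation.get("facts_gathered") else "?"
        return situation

    plan = ReasoningPlan("diagnose", [gather_facts, draw_conclusion])
    print(plan.run({}))

    # If draw_conclusion is discovered to be faulty, the program can
    # modify the plan rather than rethink every step from scratch.
    def cautious_conclusion(situation):
        situation["conclusion"] = "needs more evidence"
        return situation

    plan.revise(draw_conclusion, cautious_conclusion)
    print(plan.run({}))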

Brian Smith's 3-LISP achieves what he calls "reflective" access and
causal connection: A 3-LISP program can examine and modify the state
of its interpreter as it is running.  The technical tricks needed to
make this work will also find their place in an introspective
problem-solver.
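The flavor of reflective access can be suggested with a toy interpreter
-- again a Python sketch of my own, not actual 3-LISP.  The program
being interpreted is handed the interpreter's own state (its environment
and the instructions still to be run), and whatever it hands back becomes
the new state, which is the causal-connection half of the story:

    # Toy illustration of reflective access and causal connection:
    # an interpreted program can examine and modify the interpreter's
    # own state while it is running.

    def interpret(program, env):
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "set":               # ordinary instruction
                name, value = args
                env[name] = value
            elif op == "print":
                print(args[0], "=", env.get(args[0]))
            elif op == "reflect":
                # Hand the interpreter's state to user code; what the
                # handler returns becomes the new state.
                handler = args[0]
                env, program, pc = handler(env, program, pc)
                continue
            pc += 1
        return env

    # A reflective handler: it inspects the environment, changes it, and
    # edits the remainder of the program while execution is in progress.
    def double_x_and_skip(env, program, pc):
        env["x"] = env.get("x", 0) * 2
        rest = [step for step in program[pc + 1:] if step[0] != "print"]
        return env, program[:pc + 1] + rest, pc + 1

    prog = [("set", "x", 21),
            ("reflect", double_x_and_skip),
            ("print", "x"),               # removed by the handler above
            ("set", "y", 1)]

    print(interpret(prog, {}))            # {'x': 42, 'y': 1}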

My work has involved trying to make sense of these issues, and
developing a representation of planning and acting that can deal with
real-world goals and constraints as well as with those of the planning
and plan-execution processes themselves.