[comp.ai.digest] Attn set comments from a man without any

alk@UX.ACSS.UMN.EDU (11/15/88)

The problem of constraint of the attention set by prior knowledge,
which was observed by Tony Stuart, i.e. that a known solution may inhibit
the search for an alternative even when that known solution does not
have optimal characteristics, goes far beyond the range of David Harvey's
statement that 'the only thing that can be said is that insconsistencies
[sic] of data with the rule base must allow for retraction of the rule and
assertion for [sic] new ones.'  Stuart's observation, unless I misconstrue
it [please correct me], is not focused on the deduction of hypotheses, but
extends also to realms of problem-solving wherein the suitability of a
solution is (at the least) fuzzy-valued, if not outright qualitative.
The correctness of a solution is not so
much at issue in such a case as is the *suitability* of that solution.
Of course this suggests the use of fuzzy-valued backward-chaining
reasoning as a possible solution to the problem (the problem raised by
Tony Stuart, not the "problem" faced by the AI entity), but I am unclear
as to what semantic faculties are required to implement such a system.
Perhaps the most sensible solution is to allow resolution of all
paths to continue in parallel (subconscious work on the "problem")
for some number of steps after a solution is already discovered.
(David Harvey's discussion prompts me to think in Prolog terms here.)
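A minimal sketch of that parallel-continuation idea, in Python rather
than Prolog, with all names invented for illustration: a breadth-first
search that, rather than halting at the first solution, keeps expanding
alternative paths for a fixed number of extra steps and ranks solutions
by a fuzzy suitability score in [0, 1].

```python
def search_with_lookahead(start, expand, suitability, extra_steps=8):
    """Breadth-first search that, instead of stopping at the first
    solution, continues expanding alternatives for `extra_steps`
    further states and returns the most *suitable* solution found.

    `suitability(state)` returns a fuzzy score in [0, 1] for a
    solution state, or None for a non-solution state."""
    frontier = [start]
    best = None      # (score, solution) pair, best seen so far
    budget = None    # remaining steps; countdown starts at 1st solution
    while frontier and (budget is None or budget > 0):
        state = frontier.pop(0)
        score = suitability(state)
        if score is not None:                  # found a solution
            if best is None or score > best[0]:
                best = (score, state)
            if budget is None:
                budget = extra_steps           # begin the countdown
        else:
            frontier.extend(expand(state))     # keep other paths alive
        if budget is not None:
            budget -= 1
    return best

# Toy problem: states are integers, solutions are the leaves 8..15,
# and a leaf n is more suitable the larger it is (score n/15).
best = search_with_lookahead(
    start=1,
    expand=lambda n: [2 * n, 2 * n + 1] if n < 8 else [],
    suitability=lambda n: n / 15 if n >= 8 else None,
)
# The first solution encountered is 8 (score 8/15); the continued
# search turns up 15 (score 1.0), which is what gets returned.
```

The `extra_steps` budget plays the role of "some number of steps" of
subconscious work above; with `extra_steps=0` the search degenerates
into the usual stop-at-first-solution behavior.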

Why do I quote "problem"?  Deconstruct!  In this broader context,
a problem may consist of a situation faced by the AI entity, without
the benefit of a programmatic goal in the classical sense.
What do I mean by this?  I'm not sure, but it's there, nagging me.
Of course goal-formulation must be driven, but at some point
the subgoal-goal path reaches an end.  This is where attention set
and sensation (subliminal suggestion?  or perhaps those continuing
resolution processes, reawakened by the satisfaction of current
goals--the latter being more practically useful to the human
audience of the AI entity) become of paramount importance.

Here I face the dilemma:  Are we building a practical, useful,
problem solving system, or are we pursuing the more elevated (???)
goal of writing a program that's proud of us?  Very different things!

Enough rambling.  Any comments?

--alk@ux.acss.umn.edu, BITNET: alk@UMNACUX.
U of Mn ACSS <disclaimer>
"Quidquid cognoscitur, cognoscitur per modum cognoscentis"
["Whatever is known is known in the manner of the knower"]