[comp.ai.digest] Text Critiquing and Eliza

ethan@BOEING.COM (Ethan Scarl) (05/29/87)

The "grammar checker" discussions were stirring some old memories which I
finally pinpointed: a 1973 debate (centered on Joe Weizenbaum and Ken
Colby) over whether Eliza should be used for actual therapy.

The heart of the grammar checker issue is whether a computational package of
limited or doubtful competence should be given an authoritative role for some
vulnerable part of our population (young students, or confused adults).  What
was most shocking in the Eliza situation (and may be true here as well) was
the quick and profound acceptance of a mechanical confidante by naive users.
Competent and experienced writers have no trouble discarding (or
extrapolating from) Rightwriter's sillier outputs; the problem is with
inexperienced or disadvantaged users.  Many of us were (are) enraged at this
automated abuse, condemning it as "absurd, irresponsible, and even inhuman,"
only to be stopped short by a sobering argument: "if competent human help is
scarce, then isn't this better than nothing?"

The Rightwriter discussion summarizes rather well:  Such systems are
suggestive aids for competent writers and may be useful in tutoring the
incompetent.  Such systems will be unsuitable as replacement tutors for some
time to come, but may be worthwhile (in time and effort expended for results
achieved) as aids to be used by a competent tutor or under the tutor's
supervision.

We are in deep trouble if there are no competent humans available to help
others who need it.  But the secondary question: "Is sitting in front of a
CRT better than sitting in a closet?" can at least be tested empirically.

In the Rightwriter case, I would expect that most students will quickly
understand the program's analytic limitations after they are pointed out
by a teacher.  However, the human teacher's perspective is essential.