[comp.ai.digest] AI applications

cdfk@hplb.CSNET (Caroline Knight) (07/10/87)

This is sort of growing out from the discussion on whether AI is a
science or not, although I'm more concerned with the status of AI
applications.

Ever since AI applications started to catch on there has been a
growing divide between those who build software as some form of
experiment (no comment on the degree of scientific method applied) and
those who are building software *FOR ACTUAL USE* using techniques
associated with AI.

Many people try to go about the second as though it were the
first. But the two are different: an experimental piece of software
has every right to be "toy" in all those dimensions which can be
shown to be unnecessary for testing the hypotheses, and a fancy
interface with graphics does not necessarily make it into a usable
system. However, most pieces of software built to do a job have
potential users, some of whom can be consulted right from the start.

I know I am not the first person to notice this. See, for instance,
Woods' work on human strengths and weaknesses, Alty and Coombs'
alternative paradigm for expert systems, or Kidd's work on expert
systems answering the wrong questions (sorry, I haven't the refs to
hand - if you want them let me know and I'll dig them out).

I think I have a good name for it: complementary intelligence. By this
I mean complementary to human intelligence. I am not assuming that the
programmed part of the system need be seen as intelligent at all.
However, this does not mean that it has nothing to do with AI or
cognitive psychology:

    AI can help build up the computer's strengths and define what
    will be weaknesses for some time yet.

    Cog psy can help define what human strengths and weaknesses
    are.

Somehow we then have to work out how to put this information together
to support people doing various tasks. It is currently much easier to
produce a usable system if the whole task can be given to a machine;
the real challenge for complementary intelligence is in how to share
tasks between people and computers.

All application work benefits from some form of systems analysis or
problem definition. This is quite different from describing a system
to show off a new theory. It also allows the builder to consider the
people issues:

    Job satisfaction - if the tool doesn't enrich the job, how are you
    going to persuade the users to adopt it?

    Efficient sharing of tasks - just because you can automate some
    part does not mean you should!

    Redesign of process?

I could go on for ages about this. But back to the main point about
whether AI is a science or not.

AI is a rather fuzzy area to consider as a science. Various sub-parts
might well have gained that status. For instance, vision has good
criteria against which to measure the success of a hypothesis.

I suggest that the area that I am calling complementary intelligence
consists of both a science and an engineering discipline. It is a
science in which experiments such as those of cog psy can be applied.
They are hard to make clear-cut, but so are many others (didn't you
ever have a standard classroom physics experiment fail at school?).
It is engineering because it must build a product.

And if we want to start a new debate off, how about whether it is more
profitable to apply engineering methods to software production or to
consider it an art? I recently saw a film of Picasso painting in
front of a camera, and I could see more parallels with some of the
excellent hackers I've observed than with what I've seen of engineers
at work. (This is valid AI stuff rather than just a software
engineering issue because it is about how people work, and anyone
interested in creating the next generation of programmers' assistants
must have some views on this subject!)

Caroline Knight             This is my personal view.
Hewlett-Packard Ltd
Bristol, UK