[net.ai] Maintaining High Quality in AI Products

Bundy%edxa@ucl-cs.arpa (06/20/84)

From:  BUNDY HPS (on ERCC DEC-10) <Bundy%edxa@ucl-cs.arpa>

        Credibility has always been a precious asset for AI, but never
more so than now.  We are being given the chance to prove ourselves.  If
the range of AI products now coming onto the market is shown to
provide genuine solutions to hard problems then we have a rosy future.
A few such products have been produced, but our future could still be
jeopardized by a few well-publicised failures.

        Genuine failures - where there was a determined, but ultimately
unsuccessful, effort to solve a problem - are regrettable, but not fatal.
Every technology has its limitations.  What we have to worry about are
charlatans and incompetents taking advantage of the current fashion
and selling products which are overrated or useless.  AI might then be
stigmatized as a giant con-trick, and the current tide of enthusiasm
would ebb as fast as it flowed.  (Remember Machine Translation - it
could still happen.)

        The academic field guards itself against charlatans and
incompetents by the peer review of research papers, grants, PhDs, etc.
There is no equivalent in the commercial AI field.  Faced with this
problem other fields set up professional associations and codes of
practice.  We need a similar set-up and we needed it yesterday.  The
'blue chip' AI companies should get together now to found such an
association.  Membership should depend on a continuing high standard of
AI product and in-house expertise.  Members would be able to advertise
their membership and customers would have some assurance of quality.
Charlatans and incompetents would be excluded or ejected, so that the
failure of their products would not be seen to reflect on the field as
a whole.

        A mechanism needs to be devised to prevent a few companies
annexing the association to themselves and excluding worthy
competition.  But this is not a big worry.  Firstly, in the current state
of the field AI companies have a lot to gain by encouraging quality in
other companies.  Every success increases the market for everyone,
whereas failure decreases it.  Until the size of the market has been
established and the capacity of the companies has risen to meet it, they
have more to gain than to lose by mutual support.  Secondly, excluded
companies can always set up a rival association.

        This association needs a code of practice, which members would
agree to adhere to and which would serve as a basis for refusing
membership.  What form should such a code take, i.e.  what counts as
malpractice in AI?  I suspect malpractice may be a lot harder to define
in AI than in insurance, or medicine, or travel agency.  Due to the
state of the art, AI products cannot be perfect.  No-one expects 100%
accurate diagnosis of all known diseases.  On the other hand, a program
which only works for slight variations of the standard demo is clearly
a con.  Where is the threshold to be drawn and how can it be defined?
What constitutes an extravagant claim?  Any product which claims to
understand any natural language input, to make programming redundant,
or to allow the user to volunteer any information sounds decidedly
smelly to me.  Where do we draw the line?  I would welcome
suggestions and comments.

                Alan Bundy

abc@brl-tgr.UUCP (06/27/84)

I suggest that the ACM provides an appropriate umbrella under which such
an effort can at least be planned.  It is sufficiently broad-based as to
be representative and not exclusive and its democratic procedures
provide protection from the types of abuses that could be possible.  (I
do not mean to slight the AAAI; it's just that ACM seems to have more of
the "mechanisms" that such an efort will need.)

Also, I have felt for many years that ACM should, at least in the US,
provide the kind of accreditation of Computer Science curricula that the
engineering societies provide for theirs.