[net.ai] Stolfo's call for discussion

STEINBERG@RUTGERS.ARPA (04/02/84)

From:  Louis Steinberg <STEINBERG@RUTGERS.ARPA>

One way AI programming is different from much of the programming in other
fields is that for AI it is often impossible to produce a complete set of
specifications before beginning to code.

The accepted wisdom of software engineering is that one should have a
complete, final set of specifications for a program before writing a
single line of code.  It is recognized that this is an ideal, not
typical reality, since often it is only during coding that one finds
the last bugs in the specs.  However, it is held up as a goal to
be approached as closely as possible.

In AI programming, on the other hand, it is often the case that an
initial draft of the code is an essential tool in the process of
developing the final specs.  This is certainly the case with the
current "expert system" style of programing, where one gets an expert
in some field to state an initial set of rules, implements them, and
then uses the performance of this implementation to help the expert
refine and extend rules.  I would argue it is also the case in fields
like Natural Language and other areas of AI, to the extent that we
approach these problems by writing simple programs, seeing how they
fail, and then elaborating them.
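
To make that concrete, here is a rough sketch in Lisp of the sort of
rule-and-refine loop described above.  The rule, the fact names, and the
helper function are invented for illustration only, not taken from any
actual system:

    ;; A minimal sketch of the "state rules, run them, refine them" cycle.
    ;; Rules are kept as plain s-expressions so the expert's statements
    ;; can be edited and re-run quickly.
    (defvar *rules* '())

    (defun add-rule (name condition action)
      "Record a rule as the list (NAME CONDITION ACTION)."
      (push (list name condition action) *rules*))

    ;; First-draft rule, more or less as the expert stated it:
    (add-rule 'needs-unibus-adapter
              '(and (on-order rk07-disk) (not (on-order unibus-adapter)))
              '(add-to-order unibus-adapter))

    ;; The development cycle: run the rules over sample orders, show the
    ;; expert where the output is wrong, then revise or add rules.  The
    ;; evolving rule set is, in effect, the emerging specification.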

A classic example of this seems to be the R1 system, which DEC uses to
configure orders for VAXen.  An attempt was made to write this program
using a standard programming approach, but it failed.  An attempt was
then made using an expert system approach, which succeeded.  Once the
program was in existence, written in a production system language, it
was successfully recoded into a more standard programming language.
Can anyone out there in net-land confirm that it was problems with
specification which killed the initial attempt, and that the final
attempt succeeded because the production system version acted as the
specs?

DBrown%HI-MULTICS@sri-unix.UUCP (04/05/84)

From:  Dave Brown    <DBrown @ HI-MULTICS>

  A side point about Louis Steinberg's response: the accepted wisdom
is actually that both AI and plain commercial programming have shown
that specification in complete detail is really just mindless hacking,
done by a designer rather than a hack.
  *However* the salesmen of "software engineering methodologies" are
just getting up to about 1968 (the first software engineering conference), and
are flogging the idea that perfect specifications are possible
and desirable.
  Therefore the state of practice lags an unconscionable distance
behind the state of the art....
  AI leads the way, as usual.

  --dave (software engineering ::= brilliance | utter stupidity) brown

rcd@opus.UUCP (Dick Dunn) (04/10/84)

>One way AI programming is different from much of the programming in other
>fields is that for AI it is often impossible to produce a complete set of
>specifications before beginning to code.
>
>The accepted wisdom of software engineering is that one should have a
>complete, final set of specifications for a program before writing a
>single line of code.  It is recognized that this is an ideal, not
>typical reality, since often it is only during coding that one finds
>the last bugs in the specs.  However, it is held up as a goal to
>be approached as closely as possible.

I submit that these statements are NOT correct in general for non-AI
programs.  Systems whose implementations are not preceded by complete
specifications include those which
	- involve new hardware whose actual capability (e.g., speed) is
	  uncertain.
	- are designed with sufficiently new hardware and/or are to be
	  manufactured in sufficient quantity that hardware price/performance
	  tradeoffs will change significantly in the course of the
	  development.
	- require user-interface decisions for which no existing body of
	  knowledge exists (or is adequate) - thus the user interface is
	  strongly prototype (read: trial/error) oriented.
as well as the generally-understood characteristics of AI programs.  In
some sense, my criteria are equivalent to "systems which don't represent
problems already solved in some way" - and there are a lot of such
problems.
-- 
"A friend of the devil is a friend of mine."		Dick Dunn
{hao,ucbvax,allegra}!nbires!rcd				(303) 444-5710 x3086

warren@teltone.UUCP (04/12/84)

	Unexpectedness and lack of pre-specification occur in many professional
	programming environments.  In AI particularly it occurs because
	experimentation reveals unexpected results, as in all science. 

	In hardware (device-driver) code it occurs because the specs lie or
	omit important details, or because you make an "alternative
	interpretation".

	In over-organized environments, where all the details are spelled out
	to the nth degree in a stack of documents 8 feet high, unexpectedness
	comes when you read the spec and discover the author was a complete idiot
	having a very bad day.  I have seen alleged specs that were signed off
	by all kinds of high mucky-mucks that are completely, totally zonkers.
	Not just in error, but complete gibberish, having no visible association
	with either reality or thought, not to mention the project at hand.
	At the very least, they are simply out of date.  Something crucial has
	changed since the specs were written.

	In business environments, it occurs when the president of the company
	says he just changed the way records are to be kept, and besides, doesn't
	like the looks of the reports he agreed to several months ago.  What's a
	programmer to do?  Tell the boss to shove it?  The single most difficult
	kind of programming occurs when  1) The user is your boss (or "has power").
	2) The user is fairly stupid.  3) The user/boss is a good enough con
	artist to prevent the programmer from leaving.  It is admitted, however,
	that the difficulty is not technical, per se, but political.

	All the above examples are from my professional experience, which spans
	over ten years.  None of the situations are very unusual.  Unexpectedness
	is part of our job.  In any case, 90 to 99% of the code in the AI systems
	I've seen is much like any other program.  There are parsers, allocators,
	symbol tables, error messages, and so on.  I'll let others testify to
	the remainder of the code; it's been a while.
					
					warren

WYLAND@SRI-KL.ARPA (04/18/84)

        Your question - "What are the fundamental characteristics
of AI computation that distinguish it from more conventional
computation." - is a good one.  It is answered, consciously or
unconsciously, by each of us as we organize our understanding of
the field.  My own answer is as follows:

        The fundamental difference between conventional programs
and AI programs is that conventional programs are static in
concept and AI programs are adaptive in concept.  Conventional
programs, once installed, have fixed functions: they do not
change with time.  AI programs are adaptive: their functions and
performance improve with time.

        A conventional program - such as a payroll program, word
processor, etc. - is conceived of as a static machine with a
fixed set of functions, like a washing machine.  A payroll
program is a kind of "cam" that converts the computer into a
specific accounting machine.  The punched cards containing the
week's payroll data are fed into one side of the machine, and
checks and reports come out the other side, week after week.  In
this concept, the program is designed in the same manner as any
other machine: it is specified, designed, built, tested, and
installed.  Periodic engineering changes may be made, but in the
same manner as any other machine: primarily to correct problems.

        AI programs are adaptive: the program is not a machine
with a fixed set of functions, but an adaptive system that grows
in performance and functionality.  This focus of AI can be seen
by examining the topics covered in a typical AI text, such as
"Artificial Intellegence" by Elaine Rich, McGraw-Hill, 1983.
The topics include:

  o  Problem solving: programs that solve problems.
  o  Game playing
  o  Knowledge representation and manipulation
  o  Natural language understanding
  o  Perception
  o  Learning

        These topics are concerned with adaptation, learning, or
any of several names for the same general concept.  This seems to
be the consistent characteristic of AI programs.  The interesting
AI program is one that can improve its performance - at solving
problems, playing games, absorbing and responding to questions
about knowledge, etc. - or one that addresses issues associated
with problem solving, learning, etc.

        The adaptive aspect of AI programs implies some
difference in methods used in the programs.  AI programs are
designed for change, both by themselves while running, and by the
original programmer.  As the program runs, knowledge structures
may expand and change in a number of dimensions, and the
algorithms that manipulate them may also expand - and change
THEIR structures.  The program must be designed to accommodate
this change.  This is one of the reasons that LISP is popular in
AI work: EVERYTHING is dynamically allocated and modifiable -
data structures, data types, algorithms, etc.
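
As a hypothetical sketch of that kind of run-time flexibility (all the
names below are invented), the following forms could all be typed at a
running Lisp without restarting anything:

    ;; Grow a knowledge structure while the program runs:
    (defvar *knowledge* (make-hash-table))
    (setf (gethash 'block-a *knowledge*) '((color red) (on table)))

    ;; Later, extend the same entry in place:
    (push '(supports block-b) (gethash 'block-a *knowledge*))

    ;; Define an "algorithm", then redefine it in the same session;
    ;; the new definition takes effect for all subsequent callers:
    (defun describe-object (name)
      (gethash name *knowledge*))

    (defun describe-object (name)               ; revised version
      (or (gethash name *knowledge*) 'unknown-object))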

        Good luck in your endeavors!  It is a great field!

Dave Wyland
WYLAND@SRI