[comp.ai.digest] In Defense of FORTRAN

LAWS@IU.AI.SRI.COM (Ken Laws) (11/03/87)

Eiji Hirai asks whether FORTRAN is seriously considered an AI language.
I'm certain that Alan Bundy was joking about it.  That leaves an opening
for a serious defender, and I am willing to take the job.  Other languages
have already been much touted and debated in AIList, so FORTRAN deserves
equal time.

Many expert system companies have found that they must provide their
end-user programs in C (or occasionally PASCAL or some other traditional
language).  A few such companies actually prefer to do their development
work in C.  There are reasons why this is not insane.  The same reasons
can be made to apply to FORTRAN, providing that one is willing to consider
a few language extensions.  They apply with even more force to ADA, which
may succeed in giving us the sharable subroutine libraries that have been
promised ever since the birth of FORTRAN.  I will concentrate on C because
I know it best.

The problem with traditional languages is neither their capability nor
their efficiency, but the way that they limit thought.  C, after all,
can be used to implement LISP.  A C programmer may be more comfortable
growing the tail end of a dynamic array than CONSing to the head of
a list, but that is simply an implementation option that should be
hidden within a package of list-manipulation routines.  (Indeed, the
head/tail distinction for a 1-D array is arbitrary.)  Languages that
permit pointer manipulation and recursive calls can do just about
anything that LISP or PROLOG can.  (Dynamic code modification is
possible in C, although exceedingly difficult.  It could be made more
palatable if appropriate parsing and compilation subroutines were made
available.)
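
For concreteness, here is a minimal sketch of such a list package in C
(the "cell", "cons", and "free_list" names are my own illustration, not
part of any standard library):

    #include <stdlib.h>

    /* A LISP-style cons cell: a head (CAR) and a tail (CDR). */
    struct cell {
        void        *head;      /* any datum             */
        struct cell *tail;      /* the rest of the list  */
    };

    /* CONS a new datum onto the front of an existing list. */
    struct cell *cons(void *datum, struct cell *list)
    {
        struct cell *c = malloc(sizeof(struct cell));
        if (c != NULL) {
            c->head = datum;
            c->tail = list;
        }
        return c;
    }

    /* No garbage collector: the caller frees a list explicitly. */
    void free_list(struct cell *list)
    {
        while (list != NULL) {
            struct cell *next = list->tail;
            free(list);
            list = next;
        }
    }

Whether the cells come from malloc() one at a time or are carved out of
a growing array is precisely the implementation option that stays hidden
inside the package.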

My own definition of an "AI" program is any program that would never
have been thought of by a FORTRAN/COBOL programmer.  (The past tense
is significant, as I will discuss below.)  FORTRAN-like languages
are thus unlikely candidates for AI development.  Why should this
be so?  It is because they are designed for low-level manipulations (by
modern standards) and are clumsy for expressing high-level concepts.
C, for instance, is so well suited to manipulating character strings
that it is unusual to find a UNIX system with an augmented library of
string-parsing routines.  It is just so much easier to hack an
efficient ad hoc loop than to document and maintain a less-efficient
general-purpose string library that the library never gets written.
String-manipulation programs do exist (editors, AWK, GREP, etc.), but
the intermediate representations are not available to anyone other than
system hackers.
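
As a tiny illustration of why: an ad hoc word counter is only a dozen
lines of C, so the general tokenizing routine that a shared string
library would want never seems worth writing.

    #include <ctype.h>

    /* Ad hoc: count the whitespace-separated words in a string. */
    int count_words(const char *s)
    {
        int n = 0, in_word = 0;

        for (; *s != '\0'; s++) {
            if (isspace((unsigned char) *s))
                in_word = 0;
            else if (!in_word) {
                in_word = 1;
                n++;
            }
        }
        return n;
    }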

FORTRAN, with its numeric orientation, is even more limiting.  One can
write string-parsing code, but it is difficult.  I suspect that string
libraries are therefore more available in FORTRAN, a step in the right
direction.  People interested in string manipulation, though, are more
likely to use SNOBOL or some other language -- almost any other language.
FORTRAN makes numerical analysis easy and everything else difficult.

Suppose, though, that FORTRAN and C offered extensive "object oriented"
libraries for all the data types you were likely to need: lists, trees,
queues, heaps, strings, files, buffers, windows, points, line segments,
robot arms, etc.  Suppose that they also included high-level objects
such as hypotheses, goals, and constraints.  (These might not be just
what you needed, but you could use the standard data types as templates
for making your own.)  These libraries would then be the language in
which you conduct your research, with the base language used only to
glue the subroutines together.  A good macro capability could make the
base+subroutine language more palatable for specific applications,
although there are drawbacks to concealing code with syntactic sugar.
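
A crude example of what such sugar might look like in C, reusing the
illustrative cons-cell structure sketched earlier (the FOR_EACH macro
is my own invention, not a standard facility):

    #include <stdio.h>

    struct cell { void *head; struct cell *tail; };

    /* Syntactic sugar: iterate over a list without exposing */
    /* the pointer chasing in the application code.          */
    #define FOR_EACH(p, list) \
        for ((p) = (list); (p) != NULL; (p) = (p)->tail)

    int main(void)
    {
        struct cell c3 = { "constraint", NULL };
        struct cell c2 = { "goal",       &c3  };
        struct cell c1 = { "hypothesis", &c2  };
        struct cell *p;

        FOR_EACH(p, &c1)
            printf("%s\n", (char *) p->head);
        return 0;
    }

The application code reads at the level of the loop's intent; the
drawback, as noted, is that the macro conceals what the base language
is actually doing.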

Given the appropriate subroutine libraries, there is no longer a mental
block to AI thought.  A FORTRAN programmer could whip together a
backtrack search almost as fast as a PROLOG programmer.  Indeed,
PROLOG would be a part of the FORTRAN environment.  Current debugging
tools for FORTRAN and C are not as good as those for LISP machines,
but they are adequate if used by an experienced programmer.  (Actually,
there are about a hundred types of FORTRAN/COBOL development tools
that are not commonly available to LISP programmers.  Their cost and
learning time limit their use.)  The need for garbage collection can
generally be avoided by explicit deallocation of obsolete objects
(although there are times when this is tricky).  Programming in a
traditional language is not the same as programming in LISP or PROLOG,
but it is not necessarily inferior.
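
To make the backtracking point concrete, here is a small search written
in plain C (the eight-queens puzzle, chosen only as a familiar example);
everything lives on the stack or in a fixed array, so there is nothing
to garbage-collect:

    #include <stdio.h>

    #define N 8

    static int col[N];    /* col[r] = column of the queen in row r */

    /* Can a queen go at (row, c) without attacking an earlier one? */
    static int safe(int row, int c)
    {
        int r;
        for (r = 0; r < row; r++)
            if (col[r] == c ||
                col[r] - c == row - r ||
                c - col[r] == row - r)
                return 0;
        return 1;
    }

    /* Try each column in this row; recurse or backtrack. */
    static int place(int row)
    {
        int c;
        if (row == N)
            return 1;                  /* all rows filled: a solution */
        for (c = 0; c < N; c++)
            if (safe(row, c)) {
                col[row] = c;
                if (place(row + 1))
                    return 1;
                /* otherwise fall through and try the next column */
            }
        return 0;                      /* no column works: backtrack  */
    }

    int main(void)
    {
        int r;
        if (place(0))
            for (r = 0; r < N; r++)
                printf("row %d: column %d\n", r, col[r]);
        return 0;
    }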

The problem with AI languages is neither their capability nor
their efficiency, but the way that they limit thought.  Each makes
certain types of manipulations easy while obscuring others.
LISP is a great language for manipulating lists, and lists are an
exceptionally powerful representation, but even higher level constructs
are needed for image understanding, discourse analysis, and other
areas of modern AI research.  No language is perfectly suited for
navigating databases of such representations, so you have to choose
which strengths and weaknesses are suited to your application.
If your concern is with automating intelligent >>behavior<<,
a traditional algorithmic language may be just right for you.

					-- Ken Laws

-------

hamscher@HT.AI.MIT.EDU (Walter Hamscher) (11/05/87)

Whenever C and Fortran are defended as languages for doing AI on
the grounds that they would serve if only they provided the
constructs that Lisp and Prolog already provide, I am reminded of
the old Yiddish saying (here poorly transliterated) ``Wenn mein
Bubba zul huben Bietzem, vol tzi gevain mein Zayda'' -- roughly,
``If my grandmother had had them, she would have been my
grandfather.''  Or, more loosely, ``IF is a big word.''

   Date: Mon 2 Nov 87 14:29:09-PST
   From: Ken Laws <LAWS@IU.AI.SRI.COM>

	* * *

   The problem with AI languages is neither their capability nor
   their efficiency, but the way that they limit thought. * * *

Exactly so.  Using Fortran or any language where you have to
spend mental energy thinking about the issues that Lisp and
Prolog already handle ``cuts your chances of fame and fortune
from the discovery of the one true path,'' to quote an earlier
contributor.  Fortran's a fine language for writing programs
where the problem is well understood, but it's just a lousy
language for tackling new problems in.  This doesn't just go for
academic research, either; the same goes for applications that
have never been tackled before.

LAWS@IU.AI.SRI.COM (Ken Laws) (11/05/87)

Good points.

I happen to program in C and have built a software environment that
does provide many of the capabilities of LISP.  It has taken me many
years, and I would not recommend that others follow this path.

My real point, though, was that LISP and PROLOG are also at too low
a level.  The Lisp Machine environment, with its 10,000 predefined
functions, is a big factor in the productivity of LISP hackers.  If
similar (or much better!) libraries were available to FORTRAN hackers,
similar productivity would be observed.  LISP does permit many clever
programming techniques, as documented in Abelson and Sussman's book,
but a great deal can be done with the simple conditionals, loops,
and other control structures of a language like FORTRAN.

The AI community is spending too much time reprogramming graph search
algorithms, connected-component extraction, cluster analysis, and
hundreds of other solved problems.  Automatic programming isn't coming
to our rescue.  As Fred Brooks has pointed out, algorithm development
is one of the most intricate, convoluted activities ever devised;
software development tools are not going to make the complexities
vanish.  New parallel architectures will tempt us toward brute-force
solutions, ultimately leaving us without solutions.  It's time we
recognize that sharable, documented subroutine libraries are essential
if AI programs are ever to be developed for real-world problems.

Such subroutines, which I envision in an object-oriented style, should
be the language of AI.  Learned papers would discuss improvements to the
primitive routines or sophisticated ways of coordinating them, seldom
both together -- just as an earlier generation separated A* and
garbage collection.  This would make it easier for others to repeat
important work on other computer systems, aiding scientific verification
and tech transfer as well as facilitating creativity.

					-- Ken Laws


[This applies particularly in my own field of computer vision, where many
graduate students and engineers spend years reinventing I/O code, display
drivers, and simple image transformations.  Trivial tasks such as mapping
buffers into display windows cease to be trivial if attempted with any
pretense to generality.  Code is not transportable and even images are
seldom shared.  The situation may not be so bad in mainstream AI research,
although I see evidence that it is.]

siklossy@cs.vu.nl (Laurent Siklossy) (11/10/87)

FORTRAN and other "standard" programming languages have
been used for years for advanced AI. One of the French AI
pioneers (if not THE pioneer, Ph.D. around 1961(?)),
Dr. Jacques Pitrat, has programmed for years in FORTRAN
with his own extensions. His programs have discovered
interesting logical theorems, learned in the domain of
games (chess), and tackled many other areas.

Prof. Jean-Louis Lauriere wrote his Ph.D. thesis
(Universite de Paris VI, 1976; see his 100+ page
article about that in the AI Journal, 1977 I think) in
PL/1. Lauriere's system was, in my opinion, the first
real (powerful) general problem solver, and remains a top
performing system in the field. (Lauriere may have been
pushed into using PL/1 for lack of more appealing
choices; I cannot remember for sure.)

So it has been done; therefore you can do it too. I would
not recommend it, but that may be a matter of taste or
of limitations.

Laurent Siklossy
Free University, Amsterdam
siklossy@cs.vu.nl

---------------------------------------------------

Ken:

You are welcome to send the above via the net if you find
it useful. 

Cheers,    LS
-------