[comp.lang.prolog] To memo, or not to memo?

debray@cs.arizona.edu (Saumya K. Debray) (06/15/90)

Peter Van Roy recently posted an article showing the results of some
benchmarks on which his Prolog implementation outperformed some C
implementations of the same programs.  A number of people have suggested
that this might be because the Prolog system performs "memoization",
i.e. saves the results of procedure calls, and on later calls simply
does a table lookup.  This article tries to explain why it's very
unlikely that a compiler could insert such memoization automatically.
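
For concreteness, here is roughly what the suggested memoization looks
like when written by hand in Prolog, using assert to record results.
(This is just a sketch with made-up names -- fib_memo and so on are
invented for illustration -- and it assumes a system with dynamic
declarations and if-then-else, which most have.)

    :- dynamic fib_memo/2.               % table of already-computed results

    fib(0, 0).
    fib(1, 1).
    fib(N, F) :-
            N > 1,
            (   fib_memo(N, F) ->        % seen before: just look it up
                true
            ;   N1 is N - 1, N2 is N - 2,
                fib(N1, F1), fib(N2, F2),
                F is F1 + F2,
                assertz(fib_memo(N, F))  % record the result for later calls
            ).

The question is whether a compiler can decide, on its own, when
inserting that lookup/assert pair is safe and actually profitable.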

In order to replace a procedure call by a table lookup, the Prolog
compiler has to be reasonably sure that the "optimized" program won't
be slower than the original.  That means doing a cost analysis of the
two versions, weighing the overhead of table maintenance and lookup
against the savings from avoided recomputation.  Now, cost analysis of
Prolog programs is a pretty hairy problem, due to backtracking and
partial data structures: it depends on factors such as the number of
solutions generated, the size of variable bindings, and so on.  (To get
a feel for the complexity of the problem, consider the fact that
proving program termination is the special case of showing that the
cost of the program is finite.)  There's a paper at SIGPLAN-90 next
week that tries to address the problem of automatic cost analysis, but
this only looks at the deterministic case.
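
To see why nondeterminism matters, consider memoizing a predicate that
has several answers per call: the table has to hold *all* of them, so
the cost of building and storing it depends directly on how many
solutions there are and how big the bindings get.  A hand-rolled
version (again a sketch, with invented names; member/2 is the usual
list-library predicate) might look like:

    % some nondeterministic predicate with several answers per call
    edge(a, b).  edge(a, c).  edge(b, d).
    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).

    :- dynamic path_memo/2.              % node, list of all reachable nodes

    memo_path(From, To) :-
            (   path_memo(From, Sols) ->            % answer set already stored
                true
            ;   findall(T, path(From, T), Sols),    % compute *all* solutions
                assertz(path_memo(From, Sols))
            ),
            member(To, Sols).                        % enumerate on backtracking

The findall and the space for Sols are exactly the kind of cost the
compiler's analysis would have to weigh against the traversals avoided.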

The textbook examples of memoization show how fibonacci gets big
performance improvements from it.  They don't mention that Ackermann's
function can actually slow down when memoized, because of the overhead
of table management.  One can't simply stick memoization into any old
program and expect it to run faster.  I'd be very surprised if a
compiler could figure out the right cases to memoize on its own.
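
For what it's worth, here is a sketch of Ackermann memoized in the
same hand-written style (names invented).  Each non-memoized call does
little more than an addition, while every memoized call pays for an
assert and a lookup keyed on both arguments, and the second argument
grows so fast that the table can become enormous -- which is where the
slowdown can come from.

    :- dynamic ack_memo/3.               % keyed on both arguments

    ack(0, N, R) :- R is N + 1.
    ack(M, 0, R) :- M > 0, M1 is M - 1, ack(M1, 1, R).
    ack(M, N, R) :-
            M > 0, N > 0,
            (   ack_memo(M, N, R) -> true
            ;   N1 is N - 1, ack(M, N1, R1),
                M1 is M - 1, ack(M1, R1, R),
                assertz(ack_memo(M, N, R))   % one entry per (M, N) pair seen
            ).

Whether this pays for itself depends entirely on how often keys recur
relative to the cost of assert and lookup -- which is exactly the kind
of judgment I wouldn't expect a compiler to get right automatically.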
-- 
Saumya Debray		CS Department, University of Arizona, Tucson

     internet:   debray@cs.arizona.edu
     uucp:       uunet!arizona!debray