[comp.lang.c] POP-11 is 8 times faster than C?

pop@cs.umass.edu (Robin Popplestone) (06/15/90)

There has been some comparison of running times of the  tak  function in 
various news groups. I thought I would run tests for POP-11. The best I could
do by various optimising tricks was 2.5 times slower than C, until my memory
was jogged by R.L.Schwartz into trying memoisation. The results are shown
below. All tests are run on a discless Sun 3/60, running POPLOG.

The following POP-11, called from Prolog, runs in the time given:

define tak(x,y,z);
  lvars x y z;
  if x <= y then z else
    lvars a1 = tak(x-1,y,z),
          a2 = tak(y-1,z,x),
          a3 = tak(z-1,x,y);
    tak(a1,a2,a3);
  endif
enddefine;

vars tak = memo_n(tak,3);

:- time((N is tak(24,16,8),print(N))).

9Time for ,
    1.67 secs (CPU)
    0 secs (GC)


If the C code given on the net is loaded into POPLOG as follows, we
obtain the times given:

external declare tttt in c;

    int tak(x, y,z)
     int x,y,z;
    {}

endexternal;


external load tttt;
'~/prolog/tak.o'
endexternal

:- time((N is tak(24,16,8),print(N))).

9Time for ,
    14.5 secs (CPU)
    0 secs (GC)

Now, of course I am cheating - or am I? The point is that POP-11 trivially
supports the memoisation (being derived from POP-2, the language which Michie
used to demonstrate the concept). The same could be done in C, but with quite
a bit of labour, especially if generality were sought (you cannot always use
arrays for your memo-tables). The memoisation could also be carried through
to, e.g., POPLOG ML, with suitable pragmatic declarations, and it supports
functions over any data-type that can be hashed.
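
To give an idea of what that labour amounts to, here is a minimal sketch of a
hand-memoised  tak  in C, using a chained hash table keyed on the argument
triple. The names (memo_lookup, memo_store, TABLE_SIZE), the hashing and the
table size are all invented for the sketch; this is not the C code that was
posted on the net.

#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE 9973                  /* prime number of buckets */

struct entry {                           /* one memoised (x,y,z) -> value */
    int x, y, z, value;
    struct entry *next;
};

static struct entry *table[TABLE_SIZE];  /* chained hash table */

static unsigned hash3(int x, int y, int z)
{
    unsigned h = (unsigned)x;
    h = h * 31u + (unsigned)y;
    h = h * 31u + (unsigned)z;
    return h % TABLE_SIZE;
}

/* Put a cached result in *value and return 1, or return 0 if unseen. */
static int memo_lookup(int x, int y, int z, int *value)
{
    struct entry *e;
    for (e = table[hash3(x, y, z)]; e != NULL; e = e->next)
        if (e->x == x && e->y == y && e->z == z) {
            *value = e->value;
            return 1;
        }
    return 0;
}

static void memo_store(int x, int y, int z, int value)
{
    unsigned h = hash3(x, y, z);
    struct entry *e = malloc(sizeof *e); /* no error handling: sketch only */
    e->x = x; e->y = y; e->z = z; e->value = value;
    e->next = table[h];
    table[h] = e;
}

int tak(int x, int y, int z)
{
    int result;
    if (x <= y)
        return z;
    if (memo_lookup(x, y, z, &result))   /* already computed? */
        return result;
    result = tak(tak(x - 1, y, z), tak(y - 1, z, x), tak(z - 1, x, y));
    memo_store(x, y, z, result);
    return result;
}

int main(void)
{
    printf("%d\n", tak(24, 16, 8));      /* prints 9, as in the runs above */
    return 0;
}

The change to  tak  itself is three lines; the rest is the scaffolding that
memo_n provides for free, and it would grow further if the keys were anything
more awkward than three machine integers.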

To get down to basics: while your favourite form of sugared lambda abstraction
(e.g. the ML  let d = sqrt(a^2+b^2+c^2) in vec2(a/d,b/d,c/d) ) can avoid
local re-application of a function to the same arguments, you need memoisation
to handle the non-local cases. While  tak  is not a function that will interest
many of us, there are plenty of examples in which repeated applications
will be non-local, e.g. applications of a simplification function. Sometimes
memoisation can be accomplished conventionally by introducing extra slots in a
data-structure. Sometimes this is inappropriate: the new Pantechnicon
mathematical editor/environment, for example, is written in a paradigm that is
essentially functional, so extra slots cannot simply be added to its
data-structures.


Compared with Dr. Schwartz's Perl example, the perturbation of the POP-11
definition is minimal.

I have a private library file for the memo_n function, which uses the
system _newanyproperty_ function to do the donkey work.  Multiple arguments
are memoised as heavyweight tuples. An alternative arrangement would be to
memoise by systematically Currying through multiple arguments. This would
permit address hashing to be used (at the cost of n property look-ups).
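
As an illustration of the difference, the curried arrangement might look
roughly like this in C terms: one small table per argument position, so that
every key is a single word (the analogue of address hashing), at the cost of
one look-up per argument. The types and names below are invented for the
sketch, and plain linked lists stand in for the per-argument property tables.

#include <stdio.h>
#include <stdlib.h>

struct ztab { int z, value; struct ztab *next; };           /* z -> result */
struct ytab { int y; struct ztab *zs; struct ytab *next; }; /* y -> ztab   */
struct xtab { int x; struct ytab *ys; struct xtab *next; }; /* x -> ytab   */

static struct xtab *memo;        /* memo : x -> (y -> (z -> result)) */

int tak(int x, int y, int z)
{
    struct xtab *xt;
    struct ytab *yt;
    struct ztab *zt;
    int result;

    if (x <= y)
        return z;

    /* One look-up per argument, extending the tables where necessary. */
    for (xt = memo; xt != NULL && xt->x != x; xt = xt->next)
        ;
    if (xt == NULL) {
        xt = malloc(sizeof *xt);
        xt->x = x; xt->ys = NULL; xt->next = memo; memo = xt;
    }
    for (yt = xt->ys; yt != NULL && yt->y != y; yt = yt->next)
        ;
    if (yt == NULL) {
        yt = malloc(sizeof *yt);
        yt->y = y; yt->zs = NULL; yt->next = xt->ys; xt->ys = yt;
    }
    for (zt = yt->zs; zt != NULL && zt->z != z; zt = zt->next)
        ;
    if (zt != NULL)
        return zt->value;        /* cached result */

    result = tak(tak(x - 1, y, z), tak(y - 1, z, x), tak(z - 1, x, y));
    zt = malloc(sizeof *zt);
    zt->z = z; zt->value = result;
    zt->next = yt->zs; yt->zs = zt;
    return result;
}

int main(void)
{
    printf("%d\n", tak(24, 16, 8));
    return 0;
}

Either arrangement computes the same results; the trade-off is one hashed
look-up on a three-element key against three cheaper look-ups on single-word
keys.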

          BETTER ALGORITHMS WIN OVER CHEESE-PARING CODE OPTIMISATIONS           

Michie, D. (1968) "Memo functions and Machine Learning", Nature, 218, 19-22.

Popplestone, R.J. (1967) "Memo functions and the POP-2 language", Research
Memorandum MIP-R-30, Edinburgh: Department of Machine Intelligence and
Perception.

Popplestone, R.J. (1989) "Pantechnicon, or `Let's Suppose the Inmost Secrets
of Emacs, TeX and Hypercard lie open to Programmers'", Tech. Report 90-13,
Computer and Information Science Dept., University of Massachusetts at
Amherst.