[comp.lang.c] Cachable functions

wald-david@CS.YALE.EDU (david wald) (12/15/88)

In article <377@aber-cs.UUCP> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>I would like to add a note (hope it does not start another stream of
>misunderstanding, misrepresentation and abuse towards me :->)
>
>    I would like to be able (maybe in C++ ?) to declare a procedure as
>    "register", to signify that the values it returns only depend on the
>    arguments and not on any globals, and thus it will always return the same
>    values given the same arguments.
>
>    Apart from giving a very clever (that, as I said, I do not like) compiler
>    a good opportunity to do some caching of procedure's results, it would be
>    extremely useful to a mechanical (lint?) or human reader/verifier of the
>    code, as it would clearly document which procedures are functions and
>    which are generators/closures, an extremely important distinction.

A nice idea.  However, except for the argument of conserving keywords,
I'm not sure that "register" is the right word (surely you don't intend
to cache function variables in anything analogous to a PDP-11 register,
do you?).  Perhaps "cache" or "cachable"?  It would certainly allow a
useful optimization.
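
To make the idea concrete, here is a rough sketch.  The "cachable"
qualifier below is hypothetical -- no compiler I know of accepts it --
so the memoization is written out by hand; a compiler honoring such a
declaration could generate something like it for you:

    #include <stdio.h>

    /* Hypothetical declaration:  cachable long fib(int n);
     * Meaning: fib() depends only on its argument and touches no
     * globals, so repeated calls with the same argument may be
     * satisfied from a cache of earlier results.
     */

    #define CACHE_SIZE 64

    long fib(int n)
    {
        static long cache[CACHE_SIZE];  /* 0 means "not yet computed" */
        long result;

        if (n < 2)
            return (long)n;
        if (n < CACHE_SIZE && cache[n] != 0)
            return cache[n];            /* cache hit: no recomputation */

        result = fib(n - 1) + fib(n - 2);
        if (n < CACHE_SIZE)
            cache[n] = result;
        return result;
    }

    int main(void)
    {
        printf("fib(40) = %ld\n", fib(40));
        return 0;
    }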


============================================================================
David Wald                                              wald-david@yale.UUCP
                                                       waldave@yalevm.bitnet
============================================================================

boyne@hplvli.HP.COM (Art Boyne) (01/04/89)

pcg@aber-cs.UUCP (Piercarlo Grandi) writes:

> Let me state for the nth time that I am not against *optimizers*, I don't
> advocate sloppy code generators. I am against *aggressive* optimizers. I
> don't think they are worth the effort, the cost, the reliability problems,
> the results. I have said that an *optimizer* is something that does a
> competent job of code generation, by exploiting *language* and *machine*
> dependent opportunities (I even ventured to suggest some classification of
> good/bad optimizations).

> Aggressive optimization to me is what attempts to exploit aspects of the
> *program* or *algorithm*. This I do not like because it requires the
> optimizer to "understand" in some sense the program, and I reckon that a
> programmer should do that, and an optimizer can only "understand" static
> aspects of a program and the algorithm embodied in it, and experience
> suggests that KISS applies also to compilers, i.e. that the more intelligent
> an optimizer is the buggier it is likely to be and the harder the bugs.

While I agree that the more "aggressive" the optimizer, to use your term, the
more buggy it *tends* to be (but not necessarily), I have to disagree with
your conclusion that "aggressive" optimization is not worth the effort.
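
For a concrete picture of the kind of transformation at issue (the code
and function names here are made up purely for illustration): an
"aggressive" optimizer might prove -- or be told, via something like the
proposed qualifier -- that a call depends only on its arguments, and
hoist it out of a loop on its own:

    #include <stdio.h>
    #include <math.h>

    /* As the programmer wrote it: sin(deg) is recomputed n times. */
    void scale_naive(double *v, int n, double deg)
    {
        int i;
        for (i = 0; i < n; i++)
            v[i] *= sin(deg);
    }

    /* What an aggressive optimizer might quietly turn it into, once it
     * knows sin() has no side effects: the invariant call is hoisted.
     */
    void scale_hoisted(double *v, int n, double deg)
    {
        int i;
        double s = sin(deg);
        for (i = 0; i < n; i++)
            v[i] *= s;
    }

    int main(void)
    {
        double v[4] = { 1.0, 2.0, 3.0, 4.0 };
        scale_hoisted(v, 4, 0.5);
        printf("%f %f %f %f\n", v[0], v[1], v[2], v[3]);
        return 0;
    }

Piercarlo's point is that the programmer should have written the second
version in the first place; mine is that when you are fighting for ROM
bytes and throughput, I will gladly take the compiler's help.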

After working for the last 5 years with a brain-dead (not just damaged) C
compiler (*) that has been proven to generate 2-3 *times* the code I would write
in assembler, and is bug-ridden besides, I would *love* to have *any* reasonable
optimizing compiler.  The code I write goes into microprocessor-based instruments,
translated: ROM.  The last three instruments introduced from our lab have 256K,
384K, and 512K of ROM, respectively.  An aggressive optimizing compiler could
have cut that probably in half, or better.  Let's face it folks: ROM isn't cheap,
and twice the code size means, in general, half the performance of the instrument.
While the difference of 2x in a program on your PC or mainframe may be annoying,
a difference of 2x in the throughput of a test system can amount to millions of $$
to a high-volume manufacturer (and the loss of a sale to us!).  Working around any
bugs in the optimizer would have been insignificant compared to the pains of rewriting
in assembler and/or doing algorithmic handstands in order to get performance up to snuff.
(As a reference point, the product I have been associated with now has 100K of C, 100K
of assembler, and 35K of machine-generated parser tables).

(*) In case you're wondering why we used it - it was the only one supported by
    the emulation hardware we are using.  The newer hardware, which we don't have,
    has a much more reliable compiler that still generates 1.5 to 2 times more code.

These are my opinions and do not necessarily reflect the views of my employer.

Art Boyne, boyne@hplvdz.HP.COM