wilson@uicbert.eecs.uic.edu (10/24/88)
I'm looking for any literature on dynamic/adaptive/optimistic optimization,
though clearly I'm not sure what it is or what it should be called.
The idea is this -- optimization is based on certain assumptions that
may turn out to be false at runtime (in more than the usual sense of
heuristic optimization).
Two examples:
1) the compiler may assume that no values are ever altered in
   non-lexically-apparent ways. It saves dependency information,
   however, so that if something is altered dynamically in violation
   of this assumption, the affected code is automatically recompiled
   on the fly. (I see this as useful in Scheme, to allow automatic
   inlining of procedures and variable values that might be redefined
   dynamically, but usually aren't. There's a sketch of this after
   the examples.)
2) the compiled code may check argument types and compile specialized
   versions of the code for particular argument types. (Some
   Smalltalks do a primitive version of this.) If a new argument-type
   combination is encountered, a new version may be compiled, or a
   generic version may be run. (This should be particularly handy in
   a language with dynamic typing but homogeneous collections. An
   argument might be dynamically typed, but if you can determine that
   it's an array-of-integers or array-of-floats, you can unroll loops
   and omit most type checks. A sketch of this, too, follows the
   examples.)
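Here's a rough sketch of the first idea in Scheme, since that's where
I'd want it. Everything in it is invented for illustration -- the
dependency table, DEPEND!, REDEFINE!, and COMPILE-SCALED are all
hypothetical names -- and "compiling" is just simulated by building a
closure with the inlined value frozen in:

    (define *globals*    '())   ; name -> value
    (define *dependents* '())   ; name -> list of recompile thunks

    (define (global-ref name)
      (cdr (assq name *globals*)))

    (define (depend! name recompile)
      ;; Record that RECOMPILE must rerun if NAME is ever redefined.
      (let ((entry (assq name *dependents*)))
        (if entry
            (set-cdr! entry (cons recompile (cdr entry)))
            (set! *dependents*
                  (cons (list name recompile) *dependents*)))))

    (define (redefine! name value)
      ;; Install the new value, then recompile everything that
      ;; inlined the old one.
      (let ((entry (assq name *globals*)))
        (if entry
            (set-cdr! entry value)
            (set! *globals* (cons (cons name value) *globals*))))
      (let ((entry (assq name *dependents*)))
        (if entry
            (for-each (lambda (recompile) (recompile))
                      (cdr entry)))))

    ;; "Compile" SCALED with the current value of SCALE frozen in.
    (define scaled #f)
    (define (compile-scaled)
      (let ((k (global-ref 'scale)))
        (set! scaled (lambda (x) (* k x)))))

    (redefine! 'scale 10)
    (compile-scaled)
    (depend! 'scale compile-scaled)
    (scaled 4)              ; => 40
    (redefine! 'scale 7)    ; triggers recompilation on the fly
    (scaled 4)              ; => 28

Redefining SCALE reruns every compiler that inlined it, so callers of
SCALED see the new value without paying for an indirection on the
fast path.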
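And an equally toy sketch of the second idea. TYPE-TAG, COMPILE-FOR,
and SUM-ELEMENTS are invented names, the type tag is crude (it judges
a vector by its first element), and the "specialized" version only
omits the per-element NUMBER? check -- but it shows the caching
machinery:

    (define (type-tag v)
      ;; Crude tag: judge the whole vector by its first element.
      (if (and (vector? v) (> (vector-length v) 0))
          (let ((x (vector-ref v 0)))
            (cond ((and (number? x) (exact? x))   'int-vector)
                  ((and (number? x) (inexact? x)) 'float-vector)
                  (else                           'other)))
          'other))

    (define *versions* '())   ; tag -> compiled specialization

    (define (compile-for tag)
      ;; A real compiler would emit type-specific code here (unrolled
      ;; loops, no per-element checks); the sketch just builds a
      ;; closure that skips the NUMBER? test the generic code does.
      (if (memq tag '(int-vector float-vector))
          (lambda (v)
            (let loop ((i 0) (acc 0))
              (if (= i (vector-length v))
                  acc
                  (loop (+ i 1) (+ acc (vector-ref v i))))))
          #f))

    (define (generic-sum v)
      ;; Fully checked fallback for unknown element types.
      (let loop ((i 0) (acc 0))
        (if (= i (vector-length v))
            acc
            (let ((x (vector-ref v i)))
              (if (number? x)
                  (loop (+ i 1) (+ acc x))
                  (error "sum-elements: not a number" x))))))

    (define (sum-elements v)
      (let ((tag (type-tag v)))
        (cond ((assq tag *versions*)
               => (lambda (hit) ((cdr hit) v)))     ; cache hit
              ((compile-for tag)
               => (lambda (compiled)                ; compile new one
                    (set! *versions*
                          (cons (cons tag compiled) *versions*))
                    (compiled v)))
              (else (generic-sum v)))))             ; run generic code

    (sum-elements (vector 1 2 3))    ; miss: compiles int-vector => 6
    (sum-elements (vector 4 5 6))    ; hit: reuses it => 15
    (sum-elements (vector 1.5 2.5))  ; miss: float-vector => 4.0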
Many variations on the basic idea are possible, such as initially
compiling procedures so that they increment a counter each time they
are run. If a procedure is run enough times, it recompiles itself at
a higher level of optimization.
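A toy version of that counter idea, again with invented names
(*CALLS*, *THRESHOLD*, FIB-COUNTING): the instrumented version counts
its calls and, once past the threshold, rebinds FIB to a version
without the counting overhead, standing in for real recompilation at
a higher optimization level:

    (define *threshold* 1000)
    (define *calls* 0)

    (define (fib-counting n)
      ;; Instrumented version: count calls, and once past the
      ;; threshold rebind FIB to the "optimized" version.
      (set! *calls* (+ *calls* 1))
      (if (> *calls* *threshold*)
          (set! fib fib-optimized))
      (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

    (define (fib-optimized n)
      ;; Stand-in for code recompiled at a higher optimization
      ;; level; here it just drops the counting overhead.
      (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

    (define fib fib-counting)

    (fib 20)    ; makes well over 1000 calls, so partway through
                ; the run FIB is rebound and the rest of the
                ; recursion goes through the optimized version

Since the recursive calls go through the global binding, the swap
takes effect even in the middle of a computation.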