wilson@cs.utexas.edu (Paul Wilson) (06/13/91)
In article <KERS.91Jun13120751@cdollin.hpl.hp.com> kers@hplb.hpl.hp.com (Chris Dollin) writes:
>
>I forgot to add another approach:
>
>[5] Don't re-use stack frames: allocate fresh ones from the heap. Cost: more GC
>(because of the stack-frame turnover), may need fancier GC to cope, and must
>take care about not-yet-initialised variables anyway. (I can't rely on virtual
>memory hardware to help; not all my targets *have* virtual memory, or even
>multiple address spaces.)

My advice is to use a stack. You'd be amazed how much extraneous junk you
get on the heap if you don't have one. For some programs I ran, using a
stack cache reduced heap allocation in a Scheme system by MORE THAN AN
ORDER OF MAGNITUDE on average. It might be different in a system that did
more inlining and therefore less procedure calling, but I think you'll
still get several times as much stuff on the heap.

With a simple gc, this is going to increase your space or time costs by a
large factor, and gc costs are likely to be the dominant cost for many
programs. With a generational GC, you still don't want to heap-allocate
activation records -- you want *both* a stack (or stack cache) and a
generational gc. (If you *must* get by with only a gc, make sure it has
three or more generations -- the youngest one will be used mostly for
killing those pesky activation records.)

One of the side effects of rampant heap allocation is to make cache
memories much less effective -- all of the short-lived data romp through
the cache, causing misses and displacing things that are missed later.
Unless you have a cache big enough to hold your youngest generation, you
get cache miss traffic equal to twice the rate of allocation.

This effect is particularly bad in a direct-mapped cache, because it
undermines some of the locality assumptions that direct-mapped caches are
based on. You not only get (capacity) miss traffic greater than the rate
of allocation, you get even more misses due to mapping conflicts.
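To make the heap-traffic point concrete, here is a toy tally (a sketch of my
own, not from the measurements quoted above): the frame size and the
function names are made up, and a Python dict merely stands in for a
heap-allocated activation record. It just counts how many heap words
approach [5] allocates for a chain of calls where a stack-based
implementation would allocate none.

```python
FRAME_WORDS = 4          # assumed size of one small activation record
heap_words = 0           # running tally of heap allocation

def sum_heap_frames(n):
    """Approach [5]: allocate a fresh 'frame' from the heap on every call."""
    global heap_words
    frame = {"arg": n, "result": None}   # stands in for a heap-allocated record
    heap_words += FRAME_WORDS
    frame["result"] = 0 if n == 0 else n + sum_heap_frames(n - 1)
    return frame["result"]

def sum_stack_frames(n):
    """Conventional approach: frames live on the (machine) stack; no heap traffic."""
    return 0 if n == 0 else n + sum_stack_frames(n - 1)

n = 500
# Both strategies compute the same answer...
assert sum_heap_frames(n) == sum_stack_frames(n) == n * (n + 1) // 2
# ...but only one of them produced heap garbage: (n + 1) * FRAME_WORDS words.
print("heap words with fresh frames:", heap_words)
print("heap words with a stack:     ", 0)
```

Every one of those (n + 1) frames dies almost immediately, which is exactly
the short-lived data that romps through the cache and keeps the youngest
generation busy.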
Very bad. In my simulations I see miss rates for direct-mapped caches
anywhere from about 50% higher (for small caches) to 200-600% higher (for
large ones that can hold the youngest generation in cache). (These are for
unified I/D caches. Split caches help with small cache sizes -- which is
good -- but not so much for large ones.)

If you're interested in this sort of thing, you might want to check out my
paper "Heap Management and Hierarchical Memories," in SIGPLAN Notices a few
months back (Feb? March?). I also have a tech report about GC'd systems'
cache performance.

  -- PRW

| Paul R. Wilson, Interactive Computing Envts. Lab.   lab ph.: (312) 996-9216 |
| U. of Illinois at Chicago EECS Dept. (M/C 154)     wilson@bert.eecs.uic.edu* |
| P.O. Box 4348 Chicago, IL 60680                     fax ph.: (312) 413-0024 |
| *Next yr, Asst Prof, UT Austin CS Dept -- after 8/1 use wilson@cs.utexas.edu |