will@uoregon.uoregon.edu (William Clinger) (09/24/88)
In article <327@scaup.cl.cam.ac.uk> adg@cl.cam.ac.uk (Andy Gordon) writes:

>I found on small benchmarks that native code
>was between four and six times bigger than interpreted byte codes, and
>ran between one and seven times faster.

Another data point: In MacScheme, native code is also four to six times
as large as interpreted byte code, but is two to ten times as fast, with
a factor of four or five being typical.

>There appear to be two reasons for hybrid systems: (1) to give a variable
>time/space tradeoff, i.e., between fast/bulky native code and slow/lean
>interpreted code; (2) to allow fancy interpretive debuggers and tracers
>in the presence of native code.
>
>I don't think reason (1) is very compelling these days, because the size of
>compiled code is not an issue with today's computers...

Here I have to disagree.  On a Macintosh, a factor of five in program
size can easily be the difference between fitting or not fitting on a
floppy disk.  The space required by a program is also a big issue under
MultiFinder, since it determines how many simultaneous applications you
can run.  RAM accounts for about a quarter of the typical Macintosh II
system cost, so five times as much RAM would double the cost.  Similarly
for disk space.

I understand why some people don't count Macintoshes and IBM PCs and
PS/2s and their ilk as "today's computers", but I don't think that's
realistic.

Peace, Will
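[A quick check of the cost arithmetic above, as a sketch: the one-quarter
RAM fraction is the figure Will quotes, not an independently verified number.]

```python
# Back-of-envelope check: if RAM is about a quarter of total system
# cost, then needing five times as much RAM roughly doubles the cost.

def relative_cost(ram_fraction, ram_multiplier):
    """Total system cost, relative to the original, after scaling
    only the RAM portion of the cost by ram_multiplier."""
    return (1.0 - ram_fraction) + ram_fraction * ram_multiplier

# 0.75 (everything else) + 0.25 * 5 (RAM) = 2.0, i.e. double the cost.
print(relative_cost(0.25, 5))  # -> 2.0
```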
bard@THEORY.LCS.MIT.EDU (09/24/88)
>I don't think reason (1) is very compelling these days, because the size of
>compiled code is not an issue with today's computers...

I disagree, in some contexts.  It certainly matters a lot on my home
computer; some programs won't fit on a floppy.

Loading time also matters.  If a program like `more' takes a few seconds
to load, it is very annoying.

-- Bard