prs@N.SP.CS.CMU.EDU (peter steenkiste) (04/06/88)
I am looking for the most recent Gabriel benchmark results. I am most interested in the results for the Symbolics and TI Explorer 2, but results for other architectures are also welcome. Does anybody have such numbers, or can anybody tell me where I can get them?

Thank you,

Peter Steenkiste
Carnegie Mellon University
prs@sam.cs.cmu.edu
mjs@alice.UUCP (04/07/88)
In article <1327@PT.CS.CMU.EDU> prs@N.SP.CS.CMU.EDU.UUCP writes:
>I am looking for the most recent Gabriel benchmark results.
>Does anybody have such numbers or can anybody tell me where I can get them?
>
>Thank you, Peter Steenkiste

Please post this information! I'm also interested in seeing the benchmarks. (Also interesting would be "correct" translations of the benchmarks to Scheme; I'm rolling my own, but I'm not sufficiently fluent to believe I've done them correctly.)

Marty Shannon
AT&T Bell Labs
Liberty Corner, NJ
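[Editor's note: to give a flavor of the translation work being discussed, here is the well-known TAK benchmark from the Gabriel suite in Common Lisp, followed by one plausible Scheme rendering. This is a sketch for illustration, not a verified port of the whole suite.]

    ;; TAK as it appears in the Gabriel suite (Common Lisp):
    (defun tak (x y z)
      (if (not (< y x))
          z
          (tak (tak (1- x) y z)
               (tak (1- y) z x)
               (tak (1- z) x y))))

    ;; One possible Scheme translation -- Scheme has no 1-, so
    ;; (- x 1) is used; the control structure carries over directly:
    (define (tak x y z)
      (if (not (< y x))
          z
          (tak (tak (- x 1) y z)
               (tak (- y 1) z x)
               (tak (- z 1) x y))))

    ;; The suite times the call (tak 18 12 6), which returns 7.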
esh@otter.hple.hp.com (Sean Hayes) (04/08/88)
>I am looking for the most recent Gabriel benchmark results.

This is the second time this week I have heard these mentioned, so at the risk of sounding ignorant (which of course I am :-) could someone tell me what the Gabriel benchmarks are and how to obtain them? (Send sources if you can.) I would like to use them to test some functional language compilers. Is this appropriate?

Sean Hayes, Hewlett Packard Laboratories, Bristol, England
net: esh@hplb.uucp   esh%shayes@hplabs.HP.COM   ..!mcvax!ukc!hplb!esh
riedesel@eliot.cs.uiuc.edu (04/10/88)
Gabriel Benchmark Suite tested on: IBM RT PC, Explorer 2, Explorer 1, SUN 3/160
(all times are in seconds)

                           RT       Exp1     Exp2      SUN
                         ------    ------   ------   ------
Boyer:                    41.77     43.57    16.72    25.2
Browse:                   33.33     45.98    23.8     25.86
CTAK:                      3.1       3.83      .82     1.02
Dderiv:                   12.2      12.32     5.73    10.32
Deriv:                    10.4      12.23     3.62     5.76
Destructive:               3.9       4.08      .95     4.8
Div-iter:                  4.3       4.08      .95      .68
Div-rec:                   4.7       7.52     2.08     1.24
FFT:                     288.37     23.26     6.91    78.82
Fprint:                    2.0      17.53     1.47      .72
Fread:                     5.0      11.81     2.22     2.66
Frpoly 2:r=x+y+z+1:        0.0       0.01     0.0      0.0
Frpoly 2:r2=1000r:         0.1       0.014    0.003    0.02
Frpoly 2:r3=r:             0.0       0.014    0.003    0.02
Frpoly 5:r=x+y+z+1:        0.1       0.096    0.022    0.02
Frpoly 5:r2=1000r:         0.4       0.148    0.039    0.2
Frpoly 5:r3=r:             0.7       0.155    0.036    0.06
Frpoly 10:r=x+y+z+1:       0.8       1.264    0.215    0.28
Frpoly 10:r2=1000r:        6.2       1.658    0.437    2.04
Frpoly 10:r3=r:            5.9       1.424    0.337    0.58
Frpoly 15:r=x+y+z+1:       6.7       6.279    1.433    2.44
Frpoly 15:r2=1000r:       75.07     12.996    3.704   28.48
Frpoly 15:r3=r:           37.7       8.339    2.63     6.8
Puzzle:                  213.93     26.46     5.92    82.46
STAK:                      3.7       5.21     1.7      1.18
TAK:                       1.1       1.73     0.296    0.3
TAKL:                      3.5      16.99     3.60     1.26
TAKR:                      1.3       1.72     0.304    0.46
Tprint:                    5.0      18.46     3.97     1.14
Traverse-init:            13.87     20.66     4.39     4.16
Traverse:                 66.5     125.02    32.1     30.5
Triangle:                   --        --     75.298   598.78

Disclaimer: All times are guaranteed to be approximate.
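[Editor's note: for anyone reproducing numbers like these, each test in the suite is an ordinary Lisp function that is compiled and then timed. A minimal sketch, assuming the TAK definition shown earlier in this thread:]

    ;; Compile first -- interpreted times are not comparable.
    (compile 'tak)

    ;; Time the standard call; most implementations report elapsed,
    ;; CPU, and GC time.  (tak 18 12 6) should return 7.
    (time (tak 18 12 6))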
riedesel@eliot.cs.uiuc.edu (04/11/88)
Minor (?) mistake in the previous note: the SUN is really a 3/280. Sorry.

Joel
riedesel@eliot.cs.uiuc.edu (04/12/88)
OK, OK. For the previous benchmark results I was running Lucid Lisp on both the RT and the SUN: version 1.0.1 on the RT and version 2.0.2 on the SUN. The SUN has a floating-point chip. As for compiler options, safety levels, type declarations, etc., I simply ran the benchmark suite. I have yet to take a thorough look at the programs to see what they are doing (I have a basic idea). I'm assuming there are a lot of people out there who know more about this than I do.

Joel
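[Editor's note: the knobs Joel mentions are set with standard Common Lisp declarations. A minimal sketch of how a benchmarking setup might pin them down; the particular values here are illustrative only, not Lucid's defaults:]

    ;; Favor speed over safety globally before compiling the suite.
    ;; (These settings are an example, not what was actually used.)
    (proclaim '(optimize (speed 3) (safety 0) (space 0)))

    ;; Per-function type declarations can matter as much as the
    ;; global settings, e.g. for fixnum-only code:
    (defun add3 (x)
      (declare (fixnum x))
      (the fixnum (+ x 3)))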
sims@stsci.EDU (Jim Sims) (04/15/88)
I've seen the Gabriel benchmarks and I've seen the results on several machines. I've also seen tests of applications run on multiple machines (see below). The ONLY conclusion you can draw from ANY - repeat ANY - benchmark of this sort is a ballpark figure. Gabriel would tell you this, I believe. The problem I see with the Gabriel LISP benchmarks is that they don't garbage collect or page significantly (does your code?? - see the sketch after this post).

Application - Scheduling Hubble Space Telescope

machine              notes            relative rating
-----------------------------------------------------
TI Exploder          plain jane             1.0
TI Exploder II       new box                3.5
Slimebolics 36xx?                           3.0
MAC II               Allegro LISP            .75
MAC II + LISP CHIP   hah! fooled yah - it crashed and burned when we
                     tried to do a directory on the LISP side of the
                     fence

(following are unsubstantiated "rumors")

Application - Expert Systems Tools

machine              notes            relative rating
-----------------------------------------------------
vaxstation xxxx                             3
Slimebolics 36xx                            3
TI Exploder II                              3.5
SUN 3/260                                   3

Basically, all this stuff says is that the APPLICATION makes or breaks the system. If you want to do benchmarks, run the code you plan to use - real applications - not what someone else thinks is a good "general purpose" mixture of example code.

happy LISPing
--
Jim Sims  Space Telescope Science Institute  Baltimore, MD 21218
UUCP: {arizona,decvax,hao,ihnp4}!noao!stsci!sims
SPAN: {SCIVAX,KEPLER}::SIMS
ARPA: sims@stsci.edu
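[Editor's note: a quick way to see Jim's point about garbage collection is to time something that conses heavily and compare the GC component most implementations report. A minimal sketch; the function name and iteration count are arbitrary choices for illustration:]

    ;; Most Gabriel tests cons little; this one conses a lot, so the
    ;; reported time will include substantial GC overhead.
    (defun cons-storm (n)
      (let ((acc '()))
        (dotimes (i n)
          (setq acc (cons i acc)))
        (length acc)))

    (compile 'cons-storm)
    (time (cons-storm 1000000))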