[comp.archives] [benchmarks] Re: benchmark evaluations

ejk@uxh.cso.uiuc.edu (Ed Kubaitis) (12/15/90)

Archive-name: benchmark/x11/xfroot/1990-12-14
Archive: uxc.cso.uiuc.edu:/pub/xfroot/timings [128.174.5.50]
Original-posting-by: ejk@uxh.cso.uiuc.edu (Ed Kubaitis)
Original-subject: Re: benchmark evaluations
Reposted-by: emv@ox.com (Edward Vielmetti)

Having spent a year of my life devising a rigorous, reproducible, predictive 
benchmark suite for a multi-million-dollar competitive procurement at a DOE 
laboratory, I think I fully grasp the shortcomings of the bc benchmark. But
I find the results interesting and some of the criticism patronizing. *Of 
course* it's rough and marred by anomalies. That doesn't make it silly or 
useless. Grep isn't useless just because it finds things one didn't intend or 
expect. The value of the bc benchmark lies in the growing (because it's easy) 
list of reported results that bear some relation to rigorous (and difficult) 
benchmarks, and in the thought provoked by the anomalies.

The same was true of the xfroot timings -- a rough, simple (not as simple as 
bc!), imperfect benchmark I collated a while back. (Still available in 
pub/xfroot/timings on uxc.cso.uiuc.edu.)

So I'd like to thank Dave Sill for taking the time to collate and report the
results. Let many simple, rough, imperfect benchmarks flourish. We learn 
something from each. 
----------------------------------
Ed Kubaitis (ejk@uxh.cso.uiuc.edu)
Computing Services Office - University of Illinois, Urbana