saj@chinet.chi.il.us (Stephen Jacobs) (05/30/91)
I seem to be involved in what is fast becoming a shouting match about the relative price and performance of Atari computers vs. those based on Intel chips. One of the main problems is knowing what to compare with which, so how about we pick some fair yardsticks?

Atari isn't 'Joe's Garage Computer Manufactorie' and shouldn't have its prices compared with one. I'll suggest ZEOS as a company whose prices are widely advertised and whose quality is comparable to Atari's (I rate both of them very good for service, by the way; start your own thread if you want to talk about service). As often as not, ZEOS machines are the least expensive ones included in those comparative reviews magazines have so much fun with.

Cross-platform benchmarks are damn near impossible (Byte magazine has had several articles on the subject), but I'll modestly propose as a benchmark the time for GNU C to compile itself, with all temporary files on hard disk. Any other suggestions?

Ok, now: I personally have none of these things: the GNU C source distribution, a TT, or an 80486 box, and the people I know who have them I already trouble enough over more serious issues. Anyone care to post how long it takes GNU C to compile itself on the TT?

Steve
saj@chinet.chi.il.us
vsnyder@jato.jpl.nasa.gov (Van Snyder) (05/31/91)
In article <1991May30.145023.1684@chinet.chi.il.us> saj@chinet.chi.il.us (Stephen Jacobs) writes:
>Cross-platform benchmarks are damn near impossible (Byte magazine
>has had several articles on the subject), but I'll modestly propose as a
>benchmark the time for GNU C to compile itself, with all temporary files on
>hard disk. Any other suggestions?

Some of the benchmarks people commonly use are Dhrystone (integer performance), Whetstone (floating point performance), Linpack (floating point performance), the Livermore loops (floating point performance), and SPECmark (overall system performance, but it belongs to SPEC Inc). The advantage of using these is that you don't have to re-do the benchmarks on the other machines, which you might not own. I've seen Dhrystone for the ST, maybe at atari.archive?

--
vsnyder@jato.Jpl.Nasa.Gov
ames!elroy!jato!vsnyder
vsnyder@jato.uucp
rosenkra@convex.com (William Rosencranz) (05/31/91)
In article <1991May30.145023.1684@chinet.chi.il.us> saj@chinet.chi.il.us (Stephen Jacobs) writes:
>... but I'll modestly propose as a
>benchmark the time for GNU C to compile itself, with all temporary files on
>hard disk. Any other suggestions?

benchmarking is often serious stuff, at least at the level i deal with. so you must also post guidelines like: 1) all tmp files to a single empty partition of stated size, 2) hd seek rate should be included in the results, 3) any non-standard clock speed, 4) the version of GNU C (which can't be enhanced), 5) compiler switches, etc, etc, etc... as u can see, to truly make objective comparisons you have to be thorough.

in general, benchmarks involving compiles are less desirable (yes, even SPECmark) since you more often than not compare compilers, not systems. dhrystone is notorious for this (i have seen *widely* varying dhrystone numbers for exactly the same hardware, just different compilers). i think you are better off with real or near-real applications, using the fastest compiler for that system, and reporting the compiler as well. at least with real applications, users have a better feel more often than not.

i would also try to exercise different aspects of an architecture: raw cpu, data/instruction cache, i/o, screen writes, etc. fortunately (???) the TOS ST is not multitasking (really), so at least there are no issues like throughput vs single-job times :-)

in general, the more information the better. try to avoid single-number performance indices. let the user make up his own mind. his/her application may be totally different than yours...

-bill
rosenkra@convex.com

--
Bill Rosenkranz |UUCP: {uunet,texsun}!convex!c1yankee!rosenkra
Convex Computer Corp. |ARPA: rosenkra%c1yankee@convex.com
saj@chinet.chi.il.us (Stephen Jacobs) (05/31/91)
In article <1991May30.223629.13096@jato.jpl.nasa.gov> vsnyder@jato.Jpl.Nasa.Gov (Van Snyder) writes:
>Some of the benchmarks people commonly use are Dhrystone (integer
>performance), Whetstone (Floating point performance), Linpack (Floating
>point performance), Livermore loops (Floating point performance), SpecMark
>(Overall system performance, but belongs to SPEC Inc). The advantage of
>using these is that you don't have to re-do the benchmarks on the other
>machines, which you might not own. I've seen dhrystone for the ST, maybe
>at atari.archive?

I know that Linpack (which used to be considered the great 'real-world' benchmark) is no longer considered a fair comparison of machines, because it's too architecture-sensitive (especially to cache and super-scalar aspects). Whetstone never seems to have caught on for micros; perhaps someone knows why and will say. I gather Byte eventually picked a suite of benchmarks, which they make available in source, but they hedged pretty heavily about counting on them across architectures. I'm assuming that the people who participate in this discussion have some applications that exercise a processor pretty thoroughly and might be considered as successors to Linpack.
For what it's worth, I talked to a friend who rides herd on some big iron, and he says that simply stating that beyond a certain machine speed my biggest application (chromatographic data processing) is disk-bound sounds like a benchmark to him. I disagree, but I'll toss it into the discussion.

Steve
saj@chinet.chi.il.us
erlingh@idt.unit.no (Erling Henanger) (06/01/91)
--
Erling Henanger
Norwegian Institute of Technology (NTH)
MS-dos should be dying!