[comp.sys.acorn] ARM3 versus SPARC

rknight@gec-rl-hrc.co.uk (Roger Knight (B21)) (03/09/91)

In article <klamer.668261007@mi.eltn.utwente.nl> klamer@mi.eltn.utwente.nl (Klamer Schutte -- Universiteit Twente) writes:
>PS And a question for the better-informed: When running on a R160 + ARM3
>   under RISCiX a sample program without very much floating point did
>   perform only at 10% of a Sun Sparcstation 1+ (rated 15 MIPS).
>   The ARM3 should be better than 1.5 (sun) MIPS, isn't he?
>   Unix overhead is not the answer as the SS1+ did run SunOs 4.1 against
>   berkely 4.3 for the ARM. Where does the difference come from?

Probably two reasons: (1) any floating point seems to have a disastrous effect
on an ARM processor, and (2) the X Windows display grabs a lot of memory
bandwidth - it is like running in mode 21 under RISC OS. Try switching to one
of the VT220 emulation screens and it will go a lot faster (similarly,
RISC OS goes like the wind if you switch the desktop to mode 0).

We recently had a demo of the R260, and the benchmarks we ran put it at
about equal to a SPARC IPC and faster than an Apollo DN4500 on integer-only
code (written in C of course!!).

The benchmark was a quick hack at a souped up PCW integer test. Approx.
timings were:
R260 : 15.5 secs;  SPARC IPC : 16.1 secs; Apollo DN4500 : 23.2 secs
The R260 times went down by about 3 seconds by switching to a VT220
emulator display (all the machines were fairly idle, but still multi-
user and running TCP/IP, NFS etc.).

Putting a floating-point divide into the loop didn't affect the Sun or
Apollo but slowed the Acorn down by a factor of about five. All I can do is
repeat a quote from an article in comp.misc a few months back:
"- Real programmers scorn floating point arith.  The decimal point was
invented for pansy bed-wetters who can't think big"

-Rog.   ( rknight@gec-rl-hrc.co.uk )
DISCLAIMER: I said it, not my company.