roy@uiucdcs.UUCP (04/10/84)
#N:uiucdcs:23900015:000:5936
uiucdcs!roy    Apr  9 17:27:00 1984

Testing CS9000 XENIX
University of Illinois, Dept. of Computer Science
Testing team: uiucdcs! roy kubitz dehghan eich freeman hohulin johnston murthy sum

I have noticed some interest on the network in the CS9000. Our testing team has been using the system for a while, and I thought there might be some interest in what we are doing.

We have run some simple performance measurements and a rather large test suite. The performance tests and the SUN/VAX results were obtained from Columbia; we have added the results for the CS9000. Note that these results were obtained on an early version of IBM XENIX and do not reflect any subsequent performance improvements that have been made.

We have been testing the IBM XENIX system using a UNIX verification suite that we produced as part of an IBM grant. The suite contains over 700 separate programs. We will be writing up our test results and hope to publish them for all to read. The IBM XENIX system should be a lot more reliable as a result of the testing: we found both implementation and XENIX bugs. The current release is much improved, and IBM seems intent on delivering solid systems.

We intend to make the verification suite available as soon as we have finished documenting our results and the tests themselves. The suite includes system I/O tests, device driver tests, stress tests, shell tests, interactive tests, and lots more. We haven't decided how to make the suite available yet, as we would like to maintain and extend it to cover other UNIX systems, and we do not know how much demand there might be. Anyone interested in the suite should mail or write to me. I cannot guarantee when I will be able to deal with requests, however, as we are very busy.

Incidentally, IBM XENIX includes vi and csh.

Roy Campbell, Dept. of Comp. Sci., 1304 W. Springfield Av., Urbana, IL 61801.
The performance results are shown below, with figures produced by Columbia:

                Sun 1.5        Sun 2.0       VAX 11/750       CS9000

cc eros.c
  real      16.8 / 34.0     7.1 / 12.0     7.0 / 10.0     28.0 / 35.5
  user       3.0 / 3.0      2.7 / 2.6      1.9 / 2.0       5.4 / 5.6
  sys        8.8 / 10.5     2.2 / 2.3      1.8 / 1.5       5.3 / 4.5
execution
  real       7.4 / 16.0     5.0 / 9.9      4.0 / 9.5      11.0 / 22.5
  user       6.2 / 6.2      4.6 / 4.6      4.2 / 4.2      10.4 / 10.4
  sys        0.9 / 1.2      0.1 / 0.1      0.1 / 0.1       0.2 / 0.1

cc fibo.c
  real      13.8 / 26.5     7.0 / 10.9     6.0 / 10.0     28.0 / 35.0
  user       2.6 / 2.7      2.2 / 2.3      1.7 / 1.6       4.5 / 4.5
  sys        8.7 / 9.5      2.4 / 2.3      1.7 / 1.6       5.9 / 5.2
execution
  real       4.1 / 7.9      2.6 / 5.0      3.0 / 7.5       5.0 / 10.0
  user       2.9 / 3.0      2.2 / 2.2      3.2 / 3.1       4.2 / 4.2
  sys        0.8 / 0.7      0.1 / 0.1      0.1 / 0.1       0.2 / 0.1

cc floatpt.c
  real      15.4 / 25.5     7.7 / 12.8     7.0 / 9.5      32.0 / 46.0
  user       3.8 / 3.8      3.1 / 3.3      1.9 / 2.1       7.7 / 8.1
  sys        9.0 / 8.0      2.4 / 2.2      1.7 / 1.6       6.7 / 5.3
execution
  real      58.5 / 119.5   11.1 / 22.5     9.0 / 17.5    195.0 / 390.0
  user      53.6 / 53.5    10.7 / 10.7     8.7 / 8.7     192.5 / 193.0
  sys        4.4 / 5.0      0.1 / 0.2      0.1 / 0.1       0.5 / 0.4

cc iotest.c
  real      23.6 / 38.0     9.2 / 16.1     8.0 / 13.5     36.0 / 52.5
  user       5.5 / 5.7      4.6 / 4.4      3.8 / 3.6      11.6 / 11.1
  sys       10.9 / 11.7     2.4 / 2.4      1.8 / 1.7       6.3 / 5.7
execution
  real     279.6 / 561.5  178.1 / 337.2  120.0 / 230.0   326.0 / 602.5
  user       5.1 / 4.7      4.4 / 3.7      4.4 / 5.1       8.3 / 8.9
  sys      268.5 / 270.8  171.6 / 163.9  104.1 / 105.0   267.3 / 266.8

cc sort.c
  real      25.7 / 45.4    12.8 / 22.0    10.0 / 11.0     39.0 / 52.5
  user       9.3 / 9.5      7.5 / 7.9      3.3 / 3.3       9.8 / 10.3
  sys       10.7 / 11.5     2.4 / 2.1      2.0 / 1.9       7.0 / 5.2
execution
  real      61.8 / 124.8   43.7 / 87.1    29.0 / 60.0     97.0 / 194.5
  user      56.2 / 56.3    43.0 / 43.1    28.2 / 28.5     95.2 / 95.4
  sys        5.1 / 5.7      0.3 / 0.2      0.2 / 0.9       0.4 / 0.4

Most of the following remarks were made by Columbia about the performance test; we have inserted a few extra comments about the CS9000 figures. The benchmark above was run on machines that were up in multi-user mode but with only one user logged in.
"Real" indicates elapsed time for the function, "user" represents CPU seconds spent executing user program instructions, and "sys" represents CPU seconds executing system code. The first number of each pair is the time in seconds to perform the function alone; the second is the average of the times when two copies of the benchmark were run concurrently.

NOTES: The vast decrease in time for the floating point test between the two Sun models is due primarily to the SKY floating point board in the 2.0 version Sun. The CS9000 has only an 8 MHz clock on its 68000, and all floating point is done in software.

A brief description of the benchmark programs:

eros    - Eratosthenes sieve prime number generation, 1 to 8190.
fibo    - computes the Fibonacci series; tests the efficiency of a compiler's recursion by calculating a 16-bit Fibonacci number.
floatpt - does repeated multiplications and divisions in a loop large enough to make the looping time insignificant.
iotest  - file writing and reading benchmark; sequentially writes a 65000-byte file on disk, then generates random long integers and uses them mod 65000 to read and write strings of ODDNUM bytes.
sort    - creates an array of random long integers, then does repeated quicksorts on the array.

[Disclaimer] This is a fairly rough comparison of the different systems. It is by no means a quantitative analysis. There are other elements, e.g. amount of main memory and type of disk and disk controller used, that we were not careful to keep constant when running the benchmark.