steiny@scc.UUCP (Don Steiny) (11/28/85)
***

I am interested in finding out what people
think of the AIM benchmarks.  How useful are they
to you?  How would you improve them?
-- 
scc!steiny
Don Steiny @ Don Steiny Software
109 Torrey Pine Terrace
Santa Cruz, Calif. 95060
(408) 425-0382
glenn@uel (Glenn Wright ) (12/03/85)
> ***
>
> I am interested in finding out what people
> think of the AIM benchmarks.  How useful are they
> to you?  How would you improve them?

I have used the AIM benchmarks for testing numerous systems.  They
take some time to set up, as they are mainly based on shell scripts
which have to be tailored to produce banners and printouts to your
requirements.

As far as the tests go, they are comprehensive within each individual
test, but the problem (as with many benchmarks) is bundling the
results together and judging the overall outcome.  As I remember, the
tests cover the following:

	terminal i/o
	disk block movements
	multi-user (many different processes)
	matrix multiply (different sizes)
	timing of the cc command
	a compute-bound program test
	...

The AIM benchmarks (I feel) compare favourably with the "Byte"
magazine tests, but remember they are only useful when comparing two
systems with EXACTLY the same test!
-- 
Glenn Wright
============
UNIX Europe Ltd, London UK.
{mcvax!ukc!}uel!glenn
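[Editor's note: the matrix multiply test mentioned in the list above is
the classic timing kernel.  The sketch below shows the idea in modern
Python for readability — AIM itself was shell scripts and C, and the
matrix sizes here are illustrative, not AIM's actual ones.]

```python
import time


def matmul(a, b):
    """Naive triple-loop matrix multiply (the classic benchmark kernel)."""
    n, m, p = len(a), len(b[0]), len(b)
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for k in range(p):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c


def time_matmul(n):
    """Time one n-by-n multiply; return (result, elapsed seconds)."""
    a = [[float(i + j) for j in range(n)] for i in range(n)]
    b = [[float(i - j) for j in range(n)] for i in range(n)]
    start = time.perf_counter()
    c = matmul(a, b)
    return c, time.perf_counter() - start


if __name__ == "__main__":
    # Run the kernel at several sizes, as the article says AIM did.
    for n in (50, 100, 150):
        _, t = time_matmul(n)
        print("%dx%d multiply: %.3f s" % (n, n, t))
```

Running the kernel at several sizes, rather than one, is what makes it
useful: it shows how performance scales as the working set grows past
cache and memory boundaries.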
campbell@sauron.UUCP (Mark Campbell) (12/09/85)
> I am interested in finding out what people
> think of the AIM benchmarks.  How useful are they
> to you?  How would you improve them?

[Preliminary Disclaimer: This article in no way reflects the judgement
or policy of NCR Corporation.  I alone am responsible for its
contents.]

There are two AIM benchmarks currently in use: AIM 1.5 and AIM 2.0.
AIM Technology has a pretty restrictive licensing policy with both;
publishing the results of these benchmarks is subject to that policy.

AIM 1.5 consists of eight tests:

	- A C Compiler Test
	- A Disk Write Throughput Test
	- An FP Test
	- A Multi-User Edit Test
	- A Multi-User Sort Test
	- A Million Operation Test
	- A Memory Throughput Test
	- An Interprocess Communication Test

AIM 2.0 consists of a large collection of generally smaller tests and
uses a linear modeling technique, along with a user-specified system
mix ratio, to approximate system performance.  Its disk throughput
benchmark tests both disk reads and writes with a large fixed buffer
size.  There are no explicit multi-user tests.

Both are decent benchmarks for measuring uniprocessor system
performance; however, neither should be used to judge multiprocessing
systems.  The linear modeling scheme used to determine system
performance in AIM 2.0 is highly suspect.  Only the individual test
results of either should be used.

AIM 1.5's disk write throughput test is a pretty interesting
concept... it can expose some true weaknesses in a given
implementation of a file system (hint: let the buffer size range up
to 16K for fairness, and then test a BSD file system.  The results
are pretty interesting.).

In order to improve these benchmarks, I'd probably try to merge the
two together.  I would throw out the AIM 2.0 disk throughput tests in
favor of those in AIM 1.5.  I'd also make the multi-user benchmarks
of AIM 1.5 a bit more realistic.  Otherwise, I'd use the results of
both.  I've seen better benchmarks, but they all take two hours or
more to execute.
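[Editor's note: the disk write throughput idea discussed above — write
a fixed amount of data through a range of buffer sizes and watch where
the file system falls over — can be sketched as below.  This is a
hypothetical modern Python illustration, not AIM's actual code; the
1 MB total and the buffer sizes are arbitrary choices.]

```python
import os
import tempfile
import time


def write_throughput(total_bytes, buf_size):
    """Write total_bytes to a scratch file in buf_size chunks.

    Returns throughput in bytes per second.  fsync() is called so the
    measurement includes getting the data to disk, not just to the
    buffer cache.
    """
    buf = b"\0" * buf_size
    count = total_bytes // buf_size
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(count):
            os.write(fd, buf)
        os.fsync(fd)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return (count * buf_size) / elapsed


if __name__ == "__main__":
    # Let the buffer size range up to 16K, as the article suggests.
    for bs in (512, 1024, 4096, 16384):
        rate = write_throughput(1 << 20, bs)
        print("buffer %5d: %.0f KB/s" % (bs, rate / 1024.0))
```

Sweeping the buffer size is the point of the hint in the article: a
file system tuned around one block size can show a sharp throughput
step when writes cross that size.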
If you want a simple benchmark that executes pretty quickly, neither
is a bad choice.

P.S.  "A benchmark proves which system executes the benchmark the
fastest... that's it." --- Anonymous
-- 
Mark Campbell    Phone: (803)-791-6697
E-Mail: {decvax!mcnc, ihnp4!msdc}!ncsu!ncrcae!sauron!campbell