[comp.sys.amiga.hardware] Controller Speed tests

hawk@pnet01.cts.com (John Anderson) (12/03/90)

  When hard drive speed-test programs are run, they usually show four speeds:
with 512 buffers, 1024, and so on.  My question is, why do they need to show
results for different buffer sizes, and why do people always pick the one
with the largest buffer when comparing speeds?  If the different speeds are
for different-sized files, etc., then shouldn't people average them together
to give an average speed, rather than quoting the fastest speed from the
rarest case?  Is this correct?  I don't understand.

lphillips@lpami.wimsey.bc.ca (Larry Phillips) (12/04/90)

In <6033@crash.cts.com>, hawk@pnet01.cts.com (John Anderson) writes:
>  When hard drive speed-test programs are run, they usually show four speeds:
>with 512 buffers, 1024, and so on.  My question is, why do they need to show
>results for different buffer sizes, and why do people always pick the one
>with the largest buffer when comparing speeds?  If the different speeds are
>for different-sized files, etc., then shouldn't people average them together
>to give an average speed, rather than quoting the fastest speed from the
>rarest case?  Is this correct?  I don't understand.

Though none of the figures given can be relied upon for a really good
comparison between drives (due to differing amounts of fragmentation on any
given drive), the reads using the largest buffer are the most indicative of
drive/controller performance.  The reason is that the smaller the buffer
size, the larger the share of the measured time that is AmigaDOS overhead,
and perhaps other overhead as well.
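
To make the overhead argument concrete, here is a minimal sketch of the kind
of test being discussed.  It is a portable C illustration only, not the code
a real Amiga tester would run (those call dos.library directly and time with
timer.device); the use of POSIX read()/gettimeofday(), the buffer sizes, and
the test-file name are assumptions for the sketch.

/*
 * Read a test file in chunks of several buffer sizes and report the
 * apparent transfer rate for each.  Portable sketch only -- not the
 * AmigaDOS calls a native speed tester would use.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

static double now(void)                 /* wall-clock seconds */
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

/* Read the whole file in 'bufsize' chunks; return bytes per second. */
static double throughput(const char *path, size_t bufsize)
{
    char *buf = malloc(bufsize);
    int fd = open(path, O_RDONLY);
    long total = 0;
    long got;
    double start, secs;

    if (buf == NULL || fd < 0) {
        free(buf);
        if (fd >= 0)
            close(fd);
        return -1.0;
    }

    start = now();
    while ((got = (long)read(fd, buf, bufsize)) > 0)
        total += got;
    secs = now() - start;

    close(fd);
    free(buf);
    return secs > 0.0 ? (double)total / secs : -1.0;
}

int main(int argc, char **argv)
{
    /* Buffer sizes mirror the 512, 1024, ... steps the test programs show. */
    size_t sizes[] = { 512, 1024, 8192, 65536 };
    int i;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <large test file>\n", argv[0]);
        return 1;
    }
    for (i = 0; i < (int)(sizeof(sizes) / sizeof(sizes[0])); i++) {
        double bps = throughput(argv[1], sizes[i]);
        if (bps > 0.0)
            printf("%6lu-byte buffer: %.0f bytes/sec\n",
                   (unsigned long)sizes[i], bps);
    }
    /* The small-buffer figures come out lower on the same drive simply
     * because a larger share of each pass is per-call filesystem overhead.
     * Rereading the same file may also hit OS caches, so repeated runs
     * should be interpreted with care. */
    return 0;
}

Run against a large file, the same drive reports a lower rate for the small
buffers purely because of that per-call overhead, which is why the
largest-buffer number is the closest thing to a drive/controller figure.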

-larry

--
The only things to survive a nuclear war will be cockroaches and IBM PCs.
+-----------------------------------------------------------------------+ 
|   //   Larry Phillips                                                 |
| \X/    lphillips@lpami.wimsey.bc.ca -or- uunet!van-bc!lpami!lphillips |
|        COMPUSERVE: 76703,4322  -or-  76703.4322@compuserve.com        |
+-----------------------------------------------------------------------+

p554mve@mpirbn.mpifr-bonn.mpg.de (Michael van Elst) (12/05/90)

In article <6033@crash.cts.com> hawk@pnet01.cts.com (John Anderson) writes:
>with 512 buffers, 1024, and so on.  My question is, why do they need to show
>results for different buffer sizes, and why do people always pick the one
>with the largest buffer when comparing speeds?

Obviously the larger numbers are better suited to advertising (larger,
higher, faster).  There is also one good reason for the large numbers: they
come closer to the raw speed of the hardware.  When measuring real
application speed there are many more factors to evaluate.  You have to
deal with different access patterns, CPU and bus load, the pros and cons of
caching algorithms, etc., which might be too complicated a story with which
to convince (persuade?) consumers to buy your product.
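
As a rough illustration of the access-pattern point, the sketch below times
the same number of block reads done sequentially and then with a seek to a
random offset before each read.  Again a portable C sketch, not an Amiga
benchmark; the block size, read count, and file name are assumptions, and
cache effects are deliberately left uncontrolled.

/*
 * Compare sequential block reads with reads preceded by random seeks.
 * Portable illustration only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

#define BLOCK 4096                      /* assumed block size */
#define READS 256                       /* assumed number of reads */

static double now(void)                 /* wall-clock seconds */
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

/* Time READS reads of BLOCK bytes; with 'random_seeks' set, jump to a
 * random block-aligned offset before each read. */
static double pattern_secs(int fd, off_t filesize, int random_seeks)
{
    char buf[BLOCK];
    double start = now();
    int i;

    lseek(fd, 0, SEEK_SET);
    for (i = 0; i < READS; i++) {
        if (random_seeks)
            lseek(fd, (off_t)(rand() % (long)(filesize / BLOCK)) * BLOCK,
                  SEEK_SET);
        if (read(fd, buf, BLOCK) <= 0)
            break;
    }
    return now() - start;
}

int main(int argc, char **argv)
{
    int fd;
    off_t size;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <large test file>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        return 1;
    size = lseek(fd, 0, SEEK_END);      /* file size via seek-to-end */
    if (size < BLOCK) {
        close(fd);
        return 1;                       /* file too small for the test */
    }

    printf("sequential: %.2f s\n", pattern_secs(fd, size, 0));
    printf("random:     %.2f s\n", pattern_secs(fd, size, 1));
    /* Raw-transfer benchmarks report something close to the first figure;
     * many real applications behave more like the second, with caching
     * and CPU load shifting the balance further. */

    close(fd);
    return 0;
}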


-- 
Michael van Elst
UUCP:     universe!local-cluster!milky-way!sol!earth!uunet!unido!mpirbn!p554mve
Internet: p554mve@mpirbn.mpifr-bonn.mpg.de
                                "A potential Snark may lurk in every tree."