debra@wsinfo11.info.win.tue.nl (Paul De Bra) (03/27/91)
In article <4209@ryn.mro4.dec.com> norcott@databs.enet.dec.com (Bill Norcott) writes:
>Attached is a benchmark called IOzone which I have written. It was inspired
>by Alvin Park's IOstone benchmark (which tests random access I/O). IOzone
>tests sequential file I/O using the C language. It writes, then reads, a
>sequential file of fixed length records, and measures the read & write
>rates in bytes per second. The default is to create a 1 megabyte file
>consisting of 2048, 512-byte records -- however, these parameters can be
>changed from the command line.

Writing 512-byte records is not a good way to measure sequential file I/O
on most systems. To maximize throughput one should use the same block size
as the file system (these days that's 4k or 8k on most systems, except some
old System V systems, pre release 4, that use 1k or 2k file systems).

One important remark about the results: if the measured throughput is on
the order of 80 kbytes/sec or less, it means the file-system gap size (or
the disk interleaving, which is worse) is set too low, so that you can
write only one block per disk revolution.

Paul.
(debra@win.tue.nl)
Chuck.Phillips@FtCollins.NCR.COM (Chuck.Phillips) (04/04/91)
>>>>> On 28 Mar 91 15:08:38 GMT, norcott@databs.enet.dec.com (Bill Norcott) said:
Bill> It is true that you can get the highest number by matching IOzone's
Bill> record size to the particular file system's block size. You can
Bill> REALLY get great measured transfer rates by running on a quiet system
Bill> which has a buffer cache much bigger than the file size.
Under SVr4 and SunOS 4.x, the effective size of the buffer cache is _all
available RAM_. The _only_ reliable way to flush all file buffers is to
umount the file system. I've been told this is also true of several other
UNIX variants.
--
Chuck Phillips MS440
NCR Microelectronics chuck.phillips%ftcollins.ncr.com
2001 Danfield Ct.
Ft. Collins, CO. 80525 ...uunet!ncrlnk!ncr-mpd!chuck.phillips