tbray@watsol.waterloo.edu (Tim Bray) (04/02/88)
In a recent meeting we were analyzing the performance of this application that
is rather I/O bound - in particular, it performs a lot of very random accesses
here and there in large (> 100 Mb) files.  Somebody said "Now, we'll assume
that Unix can do a maximum of 30 disk I/O's a second".  Somebody else remarked
that that figure had been remarkably constant for quite some time.  Somebody
else proposed that it was a fundamental law of Computer Science.  (Of course,
we are poor peons restricted to the use of Vaxes and Suns.)

Anyhow - presumably there are other people out there limited by this
particular bottleneck.  Are there reasonably priced Unix systems out there
that do better?  Is there a set of benchmarks which reliably characterizes
system performance in this area?

To address this problem, I half-seriously propose a new metric: Application
Disk I/Os per Second, named, obviously, ADIOS.

Adios, amigos.
Tim Bray, New Oxford English Dictionary Project, U of Waterloo, Ontario
sl@van-bc.UUCP (pri=-10 Stuart Lynne) (04/02/88)
In article <3842@watcgl.waterloo.edu> tbray@watsol.waterloo.edu (Tim Bray) writes:
>that Unix can do a maximum of 30 disk I/O's a second". Somebody else remarked
>that that figure had been remarkably constant for quite some time. Somebody
>else proposed that it was a fundamental law of Computer Science. (Of course,
>we are poor peons restricted to the use of Vaxes and Suns).

Probably related to your average seek time plus rotational delay plus data
transfer time.

On most popular, extant Unix systems 20 - 30 ms is a reasonable figure for the
average seek.  Average rotational latency is 8.5 ms.  Transfer time for a
single sector is, say, about 1 ms.

Given a fast 20 ms drive, you probably should approach 30 disk I/O's per
second.  Given a slow 30 ms drive, probably closer to 25; with a 40 ms drive,
about 20.

Other factors which will help are controllers which can overlap seeks;
multiple disks to localize file accesses (allowing average seek times to
decline); and larger block sizes (actually getting the information in is only
a small part of the battle - getting there is the largest component for small
random reads).

-- 
{ihnp4!alberta!ubc-vision,uunet}!van-bc!Stuart.Lynne Vancouver,BC,604-937-7532
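For concreteness, here is a minimal sketch of the arithmetic behind that
estimate.  The drive parameters below are illustrative assumptions (roughly
the figures quoted above), not measurements of any particular drive:

    /* Rough estimate of random disk I/Os per second from the per-request
     * service time: average seek + rotational latency + transfer.
     * All three figures below are assumptions, for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        double seek_ms     = 25.0;   /* assumed average seek           */
        double latency_ms  = 8.5;    /* half a rotation at 3600 rpm    */
        double transfer_ms = 1.0;    /* roughly one sector's transfer  */

        double per_io_ms = seek_ms + latency_ms + transfer_ms;
        printf("~%.0f ms per request, ~%.1f random I/Os per second\n",
               per_io_ms, 1000.0 / per_io_ms);
        return 0;
    }

With a 20 ms average seek this works out to roughly 30-odd requests per
second; at 30 ms about 25, and at 40 ms about 20, in line with the figures
above.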
ron@topaz.rutgers.edu (Ron Natalie) (04/03/88)
Not only is it not a constant, it's not even true.  The sad fact is most disk
controllers for minis/micros are pretty horrendous.  Sun's unfortunate use of
the Xylogics 450/451 is a prime example.  Anyway, with decent controllers (or
multiple controllers) there is no reason why the figure 30 can't be exceeded,
and it is on decent Unix systems.

-Ron
aland@infmx.UUCP (Dr. Scump) (04/03/88)
In article <3842@watcgl.waterloo.edu>, tbray@watsol.waterloo.edu (Tim Bray) writes:
> (misc. comments about UNIX disk i/o performance, etc.)
>
> To address this problem, I half-seriously propose a new metric: Application
> Disk I/Os per Second, named, obviously, ADIOS.
>
> Adios, amigos.
> Tim Bray, New Oxford English Dictionary Project, U of Waterloo, Ontario

Sorry, ADIOS has already been used (and, I think, copyrighted).  [Company Name
deleted] developed an access method (ISAM) for IBM mainframes running OS/MVT,
OS/MVS, etc. called ADIOS (an acronym for "Another Disk I/O System").  It was
coded in assembler, was accessed from COBOL or assembler as callable
functions, and outperformed the "standard" stuff by a mile.  Plus, the
terminal I/O control portion of the in-house ADIOS-based realtime system was
named TACOS ("Terminal and Communications Operating System", I think).

Please, no anti-IBM, anti-COBOL, mainframe-bashing, etc. flames here.
Mainframes are not necessarily evil (or is that "necessary evil"? :-]).  And
no "too late for April Fool's Day" comments - this is a true story.  Only the
names were changed to protect the innocent.

-- 
Alan S. Denney            | {pyramid|uunet}!infmx!aland
Informix Software, Inc.   | CAUTION: This terminal makes wide right turns!
Disclaimer: These opinions are mine alone.  If I am caught or killed,
            the secretary will disavow any knowledge of my actions.
wcs@ho95e.ATT.COM (Bill.Stewart.<ho95c>) (04/04/88)
In article <1703@van-bc.UUCP> sl@van-bc.UUCP (Stuart Lynne) writes:
:In article <3842@watcgl.waterloo.edu> tbray@watsol.waterloo.edu (Tim Bray) writes:
:>that Unix can do a maximum of 30 disk I/O's a second". Somebody else remarked
:On most popular, extant Unix systems 20 - 30 ms is a reasonable figure
:for average seek. Average rotational latency is 8.5 ms. Transfer.. 1ms

[Note: 3600 rpm = 16.6 ms per revolution; * 50% = 8.3 ms]

Optimal scheduling can of course reduce this a lot; for relatively large
transfers (even with small blocks), you should get a lot of blocks per seek,
and latency will be lower than 50% of a rotation.

Unfortunately, stdio BUFSIZ is still typically 512-1024 (i.e. 1 block), so
stdio-based input (and probably output) tends to break this up.  Systems with
4K blocks may do a bit better.
-- 
# Thanks;
# Bill Stewart, AT&T Bell Labs 2G218, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs
# So we got out our parsers and debuggers and lexical analyzers and various
# implements of destruction and went off to clean up the tty driver...
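One way an application can sidestep the small default BUFSIZ, at least for its
own sequential I/O, is to hand stdio a larger buffer with setvbuf().  A
minimal sketch follows; the file name and the 64K buffer size are arbitrary
examples, not recommendations:

    /* Sketch: give a stdio stream a buffer much larger than the default
     * BUFSIZ so sequential reads are issued in bigger chunks.  The file
     * name and buffer size below are arbitrary examples. */
    #include <stdio.h>

    int main(void)
    {
        static char bigbuf[64 * 1024];          /* 64K, for illustration */
        long n = 0;
        int c;
        FILE *fp = fopen("/etc/termcap", "r");  /* any largish file */

        if (fp == NULL) {
            perror("fopen");
            return 1;
        }
        /* must be called before the first read or write on the stream */
        setvbuf(fp, bigbuf, _IOFBF, sizeof bigbuf);

        while ((c = getc(fp)) != EOF)           /* reads now come in 64K gulps */
            n++;
        printf("%ld bytes read\n", n);
        fclose(fp);
        return 0;
    }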
slk@clutx.clarkson.edu (Steve Knodle) (04/05/88)
Concerning the following discussion:

>From: Tim Bray <tbray@watsol.waterloo.EDU>
>Subject: How fast are your disks?
>Date: 1 Apr 88 19:09:18 GMT
...
>In a recent meeting we were analyzing the performance of this application that
>is rather I/O bound - in particular, it performs a lot of very random accesses
>here and there in large (> 100 Mb) files. Somebody said "Now, we'll assume
>that Unix can do a maximum of 30 disk I/O's a second". Somebody else remarked
>that that figure had been remarkably constant for quite some time. Somebody
>else proposed that it was a fundamental law of Computer Science. (Of course,
>we are poor peons restricted to the use of Vaxes and Suns).
>
>Anyhow - Presumably there are other people out there limited by this
>particular bottleneck. Are there reasonably-priced unix systems out there
>that do better? Are there a set of benchmarks which reliably characterize
>system performance in this area?

>From: Ron Natalie <ron@topaz.rutgers.EDU>
>Subject: Re: How fast are your disks?
>Date: 3 Apr 88 00:04:52 GMT
>Keywords: Disk I/O throughput
>To: unix-wizards@brl-sem.arpa
>
>Not only is it not a constant, it's not even true. The sad fact
>is most disk controllers for minis/micros are pretty horrendous.
>Sun's unfortunate use of the Xylogics 450/451 is a prime example.
>Anyway, with decent controllers (or multiple controllers) there is
>no reason why the figure 30 can't be exceeded and is on decent Unix
>systems.
>
>-Ron

Let me offer, as a point of reference, extracts from simultaneous (vm/io)stat
performance logs for our Gould Powernode 9080, which performs very gracefully
under severe I/O load, I feel.  The logs were taken during the End-of-Semester
Crunch, and are being used to substantiate my request that the machine be
upgraded from 8 Meg of memory to 16 Meg.  (The relatively small amount of
memory explains the severe paging rate below.)

Our 9080 is a campus timesharing host for engineering and scientific faculty
and student computing.  The job mix is a combination of large background jobs
(finite element jobs and fluid-flow simulations) with editing, compiling, and
electronic mail as foreground tasks.  There were about 25 - 35 users logged on
during the period below.  Response time was decently good.  Few users noticed
that the load average had drifted up until we reached the threshold at which
sendmail stops delivering and only queues.  They started wondering why e-mail
wasn't being delivered!  So I think that significant I/O capacity still
remained.

The main contribution to its I/O performance is the combination of Gould's
good High Speed Disk Processor and the CDC 858 MB disk (our device "dk0").
This pair is reportedly capable of reading and buffering an entire cylinder in
a single I/O operation.  For large sequential files, this and the Berkeley
Fast File System's cylinder grouping give a big win.  I suspect this explains
the very last extract below, where transfers/sec hit 60.

The complete machine configuration is: 2 CPUs, 3 disks on 3 controllers (one
HSDP with the CDC 858 (18 ms), and two older CDC 650's (25 ms) on UDP
controllers).  Swapping is distributed across all three disks.  One third of
the users come in via ethernet, the rest via ttys.

The logs were taken with a script that alternated "date" with
"vmstat(iostat) 180 20".  The second CPU isn't included in CPU usage under
this display format, and spends its time almost exclusively in user mode.
---------------------- vmstat log -------------------------------
 procs     memory              page             dk           faults       cpu
 r  b  w   avm   fre  re at  pi  po  fr  de  sr d0 d1 d2 d3  in   sy  cs us sy id
 .......
 20 15  0 17992  152   8  0 296  80 224   0 101 21  6 16  0  90 2216 231 67 32  1
 24  0  0 17552  360   6  0 104  72 104   0  69 35  7 16  0  77 2015 121 61 37  1
 17 18  2 19272   48   9  0 664 120 280 192 133 33  8 13  0 120  758 376 58 41  1
 24  1  1 17784  104  11  0 400 176 280   0 116 22  6 11  0  88 1293 147 58 41  1
 25  2  0 19144  312  10  0 320 136 200   0  98 19  6 14  0 117  492 196 62 38  1
 21  6  0 17080  208   5  0 392 160 248   0  87 27  8 15  0  75 1885 128 65 34  1
 20  5  0 16464   64   7  0 432 168 264   0 130 17  5 12  0  88 1113 178 61 38  1
 25  1  0 15728  416  12  0 136  80 144   0  70 23  6 13  0  39 2390  55 61 38  1
 21  5  0 16808  128   9  0 504 128 280   0 131 17  6 13  0  70  505 139 70 29  1
 22  5  0 16000  152   5  0 232  80 160   0 124 25  7 16  0  76  484 119 76 23  1
 18  5  6 19192   80   7  0 320  88 184 416  91 25  8 17  0  84  252 143 74 25  1
 17  5  6 19376   88   6  0 464 208 280 256 133 36 12 18  0 116  512 176 66 33  1
 22  5  0 15448  160   9  0 280  88 160   0 111 30  7 13  0  84  395 123 74 25  1
 22  4  5 18000   56   8  0 448 136 216 216  99 33 11 21  0  98  499 170 64 35  1

---------------------------- iostat log -------------------------------
      tty          dk0            dk1            dk2            dk3          cpu
 tin tout   bps tps msps   bps tps msps   bps tps msps   bps tps msps  us ni sy id
 ...
  16  467   200  18 16.2    77   5 15.4   182  12 16.4     0   0  0.0  38 25 36  1
  15  379   230  20 16.8    93   6 15.6   217  13 16.2     0   0  0.0  37 38 24  1
  19  473   279  26 17.7   111   7 15.6   276  17 17.6     0   0  0.0  23 51 25  1
  18  606   318  30 17.3   141  10 18.7   265  17 17.3     0   0  0.0  34 38 28  1
  16  353   333  33 18.5   163  11 17.5   248  16 17.4     0   0  0.0  43 25 30  1
  19  493   302  30 17.3   117   8 17.2   234  16 16.7     0   0  0.0  27 42 30  1
  19  458   344  35 17.8   170  11 17.4   294  21 17.8     0   0  0.0  39 26 33  1
  19  555   340  35 18.2   187  13 17.9   282  20 17.5     0   0  0.0  44 20 34  2
  13  552   388  39 17.5   160  11 17.1   221  15 17.2     0   0  0.0  41 25 32  1
  12  443   340  30 17.5   131   9 18.6   228  15 17.1     0   0  0.0  41 33 24  1
  15  301   348  30 17.7   133   9 18.6   245  17 17.4     0   0  0.0  32 40 26  2
  13  355   426  41 17.2   138   9 18.0   231  17 16.9     0   0  0.0  44 22 33  1
  24  915   394  35 17.8   151  11 19.4   238  16 17.5     0   0  0.0  40 27 32  1

-------------------------- iostat extract ---------------------------
Sat Dec 12 07:29:18 EST 1987
      tty          dk0            dk1            dk2            dk3          cpu
 tin tout   bps tps msps   bps tps msps   bps tps msps   bps tps msps  us ni sy id
 ...
   0    3   201  38 13.6    14   1 15.9    10   1 13.5     0   0  0.0  19 64 16  1
   0    3   326  61 10.1     6   0 15.6     3   0 13.1     0   0  0.0  18 66 15  1
   0    2   138  24 13.4    14   1 13.8    16   1 13.9     0   0  0.0  55 29 15  0
scb@juniper.UUCP (Steve Blair) (04/05/88)
1) Change some kernel-related parameters for the swapping algorithms,
2) Manage window control better,
3) Get faster disks & controllers.

An interesting talk given at the USENIX conference by some folks from
Convex(tm) spoke of the rather large block sizes on their disks (I think it
was 16k/block).  This was one of the ways they were dealing with speed issues.
I can't do this since I don't have source for the SUN O/S.

I can only speak for some of the customers I've done consulting for; I yanked
the 451's and installed Interphase controllers and some of the newer, much
faster drives.  Their performance rose much more than I could have
envisioned; load times for some Lips transactions went from 25+ minutes to
7-10 minutes.

It's all relative to the speed of DARK......

Steve Blair
$CBlairnix(tm) Software Inc.
Cedar Park, Texas
uucp{backbone}!sun!austsun!ascway!blair
dkc@hotlr.ATT (Dave Cornutt) (04/06/88)
In article <12800@brl-adm.ARPA> slk@clutx.clarkson.edu (Steve Knodle) writes:
> Concerning the following discussion:
>
> >From: Tim Bray <tbray@watsol.waterloo.EDU>
> >Subject: How fast are your disks?
> >Date: 1 Apr 88 19:09:18 GMT
> ...
> >In a recent meeting we were analyzing the performance of this application that
> >is rather I/O bound - in particular, it performs a lot of very random accesses
> >here and there in large (> 100 Mb) files. Somebody said "Now, we'll assume
> >that Unix can do a maximum of 30 disk I/O's a second". Somebody else remarked
> >that that figure had been remarkably constant for quite some time. Somebody
> >else proposed that it was a fundamental law of Computer Science. (Of course,
> >we are poor peons restricted to the use of Vaxes and Suns).
> >
> >Anyhow - Presumably there are other people out there limited by this
> >particular bottleneck. Are there reasonably-priced unix systems out there
> >that do better? Are there a set of benchmarks which reliably characterize
> >system performance in this area?
>
> >Not only is it not a constant, it's not even true. The sad fact
> >is most disk controllers for minis/micros are pretty horrendous.
> >Sun's unfortunate use of the Xylogics 450/451 is a prime example.
> >Anyway, with decent controllers (or multiple controllers) there is
> >no reason why the figure 30 can't be exceeded and is on decent Unix
> >systems.
>
> Let me offer, as a point of reference, extracts from simultaneous (vm/io)stat
> performance logs for our Gould Powernode 9080, which performs very gracefully
> under severe I/O load, I feel. The logs were taken during the End-of-Semester
> Crunch, and are being used to substantiate my request that the machine
> be upgraded from 8 Meg of memory to 16 Meg. (The relatively small
> amount of memory explains the severe paging rate below.)

A few months ago, I got curious to find out how much difference there was
between a Xylogics 450 and some other systems.  I wrote a little program to
write a large buffer out to a raw disk partition a set number of times, and by
timing the program I was able to figure out an approximate transfer rate for
the disk subsystem.

To make the contest as fair as possible, I picked out two systems that were
both BSD-based and relatively unloaded at the time I ran the benchmark.  I
tried to get the kernel out of the way as much as possible by using an unused
raw partition for the test (I hope this also minimized the effects of seek
time, since the writes should have been to adjacent cylinders).  I had the
program do the write a fairly large number of times to make the execution time
long enough (about 2-5 minutes) to avoid quantization effects and to make the
time needed to load the executable as insignificant as possible.  I ran this
on both a Sun-3 and a Gould PN9080, starting with a 100k buffer and increasing
the size until no further improvement was seen in the benchmark time.

The exact configurations of the systems were:

  Sun-3/160 with 16M memory, Xylogics 450 connected to a 2351 Eagle
  (the older 400M ones, still pretty fast), SunOS 3.2

  Gould PN9080, dual processor (this should have had no effect since the
  benchmark was just doing I/O), 12M memory, HSDP disk controller connected
  to a CDC XMD850 (the official name of the 858M disk), UTX 2.0

Both systems were running multi-user but were quiescent at the time.  Neither
disk had an active swap partition on it.  I ran vmstat to insure that no
paging or other significant activity occurred during the benchmark.
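A minimal sketch of the kind of timed raw-partition write test described above
might look something like the following.  The device name, write size, and
iteration count are placeholders (and the partition must be unused and large
enough, since writing to one that holds data will destroy it):

    /* Sketch of a raw-partition write-rate test: write a large buffer to
     * an UNUSED raw partition a fixed number of times and derive an
     * approximate transfer rate from the elapsed wall-clock time.
     * Device name, buffer size, and count are placeholder assumptions;
     * the partition must hold at least NWRITES * BUFSZ bytes. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    #define BUFSZ   (2L * 1024 * 1024)   /* write size (illustrative)    */
    #define NWRITES 100                  /* enough to run a few minutes  */

    static char buf[BUFSZ];

    int main(void)
    {
        int fd, i;
        time_t start, elapsed;
        double mb;

        fd = open("/dev/rxy0d", O_WRONLY);   /* an UNUSED raw partition */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        start = time(NULL);
        for (i = 0; i < NWRITES; i++) {      /* sequential: adjacent cylinders */
            if (write(fd, buf, BUFSZ) != BUFSZ) {
                perror("write");
                return 1;
            }
        }
        elapsed = time(NULL) - start;

        mb = (double)BUFSZ * NWRITES / (1024.0 * 1024.0);
        printf("%.0f MB in %ld sec: %.2f MB/sec\n",
               mb, (long)elapsed, elapsed ? mb / elapsed : 0.0);
        close(fd);
        return 0;
    }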
The Sun topped out at a transfer rate of ~700 kB/sec.  This occurred at a
write size of 2M, I believe (I don't remember for sure and I don't have my
notes).  The 9080 hit 2.0 MB/sec and was still getting faster when I ran out
of memory at a write size of 8M.  (I tried a couple of runs at a 10M write
size, and got one result of 2.4 MB/sec, but the system started paging because
I was using all the available physical memory, and so I was unable to
duplicate the result.)

Now, I realize that this was not terribly scientific.  There were any number
of things that could have interfered with the result, such as differences in
the disk driver between SunOS and Gould's UTX.  Also, Gould systems with HSDPs
have their disks set up for a sector size of 1024 bytes instead of the usual
512.  However, this does reinforce the impression that many people have that
the Xylogics 450 is a slow controller.

In defense of the poor, downtrodden 450, I should say that it is not a bad
little board; in fact, it's one of the nicer Multibus disk controllers on the
market, and if I were putting together a Multibus system, I wouldn't hesitate
to buy one.  The problem is that it was never designed to do what Sun is
asking it to do.  It was fine on the Sun-2 line, and it got them over the hump
with the 3/100 machines (there's something to be said for reusing hardware
that you're already familiar with), but it is totally out of place on a Sun-4
(the VME adaptor probably doesn't help any either).  Sun needs to come up with
a new, native VME controller to match the faster Sun-4 and 3/200 lines.
-- 
Dave Cornutt, AT&T Bell Labs (rm 4A406, x1088), Holmdel, NJ
UUCP: {ihnp4,allegra,cbosgd}!hotly!dkc
"The opinions expressed herein are not necessarily my employer's, not
necessarily mine, and probably not necessary"
david@daisy.UUCP (David Schachter) (04/07/88)
Be careful in measuring disk performance.  I did some simple analysis of an
Interphase SMD controller for the Multibus two years ago.  The manufacturer's
claim is that it can handle two megabytes per second.

I wrote a program which talked directly to the controller, bypassing the
(non-Unix) operating system, and twiddled various parameters to see the
effect.  I made sure the data I was requesting was in the track buffer of the
controller and checked that assertion by verifying that the disk drive (a
475 MB Fujitsu "Eagle") was quiescent (the "drive busy" light stayed off).
The best transfer rate I could get was one megabyte per second, on a fast
Multibus system, doing sixteen-bit transfers.

I put the controller on an extender card and used an oscilloscope to check out
the hardware; the controller simply wasn't using the available bus cycles.
(There are some parameters one can set in the controller to control bus usage;
the one MB/second rate was achieved by telling the controller to hog the bus.)
A call to Interphase got an answer of "well, of course, it doesn't really get
two megabytes per second...."  Thanks, guys.  Real helpful.

Moral: don't trust the controller manufacturer.  Or the documentation.  You
may have to measure the raw hardware, with a 'scope, to get believable
answers.
-- 
David Schachter

The opinions expressed above are mine, as are the facts, and most everything
else.
ron@topaz.rutgers.edu (Ron Natalie) (04/07/88)
Sorry, I can't agree with you.  The Xylogics controller might have been
reasonable as far as Multibus goes, but for UNIX systems it's pretty grungy
technology when UNIX kernels have been able to do overlapped seeks for nearly
ten years now.

=Ron
clewis@spectrix.UUCP (Chris Lewis) (04/09/88)
In article <3842@watcgl.waterloo.edu> tbray@watsol.waterloo.edu (Tim Bray) writes:
>In a recent meeting we were analyzing the performance of this application that
>is rather I/O bound - in particular, it performs a lot of very random accesses
>here and there in large (> 100 Mb) files. Somebody said "Now, we'll assume
>that Unix can do a maximum of 30 disk I/O's a second". Somebody else remarked
>that that figure had been remarkably constant for quite some time. Somebody
>else proposed that it was a fundamental law of Computer Science. (Of course,
>we are poor peons restricted to the use of Vaxes and Suns).
>
>Anyhow - Presumably there are other people out there limited by this
>particular bottleneck. Are there reasonably-priced unix systems out there
>that do better? Are there a set of benchmarks which reliably characterize
>system performance in this area?

Yes.  Depending on the scenario, even a Tower 32/400 can beat 30 I/O's per
second.  Yes to the second question, too, and I'll post it when it's totally
cleaned up.

How fast do our disks go?  Well, since I'm doing some performance analysis I
thought I'd show some numbers extracted from our database.

Environment: standard NCR Tower 32/400 (16 MHz 68020) without CPU caching and
with some relatively slow memory.  Disk: Hitachi 85 Mb with 28 ms average seek
(moderately fast).  The "Standard Tower" figures below use the standard NCR
disk controller (ST506).  The "New Controller" figures are for a new
controller we're working with (same type of disk) that uses a SCSI interface.

Explanation of tests: "Random" is simply a series of
"lseek(... 512*random() ...); read(..., bsize)".  "Linear" is simply
continuous "read(..., bsize)", and "Reread" is continuous
"lseek(... 0 ...); read(..., bsize)".  (The weird testing is so we can intuit
some absolute maximum bandwidths.)

In the tables below, "bsize" is the request size in bytes, "req/sec cooked" is
the number of requests of bsize bytes per second through the buffer cache, and
"bw cooked" is bytes per second through the buffer cache.  Similarly, the
remaining two columns are req/sec and bandwidth for the raw interface.

Obviously, we should be doing this to specific files rather than directly
through the blocked or unblocked special devices.  Given the amount of
resources we can commit to this evaluation, and the behaviour of the caches,
we figure that only running the real application on top will give the true
application figures.

A lot of these numbers need to be taken with a fair grain of salt - UNIX
buffer cache hits (and controller cache hits) are occurring, so they don't
necessarily reflect *true* physical disk speed, just UNIX I/O throughput.  For
the standard Tower, the raw req/sec and bandwidth are true disk speed.  On the
second environment, it's difficult to say - the controller caches blocks too.

Remember, the "req/sec" figures are blocks of bsize bytes in size.  So the raw
Linear test with a bsize of half a megabyte on the standard Tower is actually
transferring about 800 (512-byte) blocks per second; even buffered it's
approximately 50 blocks per second.
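To make the test descriptions concrete, here is a stripped-down sketch of what
the "Random" test loop amounts to.  The device path, bsize, block count, and
request count are illustrative placeholders, not the actual test harness:

    /* Sketch of the "Random" test: seek to a random 512-byte-aligned
     * offset and read one request of bsize bytes, then report requests
     * per second and bytes per second.  Path and sizes are placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    int main(void)
    {
        const char *path = "/dev/rdsk1";  /* raw or block device, or a big file */
        long nblocks     = 160000L;       /* 512-byte blocks to seek over */
        int  bsize       = 4096;          /* request size under test      */
        int  nreqs       = 2000;          /* requests per run             */
        char *buf        = malloc((size_t)bsize);
        int  fd          = open(path, O_RDONLY);
        int  i;
        time_t start, elapsed;

        if (fd < 0 || buf == NULL) {
            perror(path);
            return 1;
        }

        start = time(NULL);
        for (i = 0; i < nreqs; i++) {
            lseek(fd, 512L * (random() % nblocks), SEEK_SET);
            if (read(fd, buf, (size_t)bsize) < 0) {
                perror("read");
                return 1;
            }
        }
        elapsed = time(NULL) - start;

        printf("bsize %d: %.2f req/sec, %.0f bytes/sec\n", bsize,
               elapsed ? (double)nreqs / elapsed : 0.0,
               elapsed ? (double)nreqs * bsize / elapsed : 0.0);
        free(buf);
        close(fd);
        return 0;
    }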
Standard Tower:

Random
  bsize     req/sec         bw     req/sec         bw
             cooked     cooked         raw        raw
    512     35.3103      18078     35.9298      18396
   1024     16.5161      16912     35.3103      36157
   2048     10.449       21399     32           65536
   4096     6.09524      24966     28.4444     116508
   8192     3.2          26214     25.6        209715
  16384     1.72973      28339     16          262144
  32768     0.864865     28339     10.6667     349525
  65536     0.435374     28532     5.56522     364722
 131072     0.217687     28532     2.90909     381300
 262144     0.109589     28728     1.45455     381300
 524288     0.0547945    28728     0.727273    381300

Reread
  bsize     req/sec         bw     req/sec         bw
             cooked     cooked         raw        raw
    512     862.316     441505     59.9049      30671
   1024     546.133     559240     60.0147      61455
   2048     327.68      671088     59.7956     122461
   4096     170.667     699050     60.2353     246723
   8192     89.0435     729444     30.1176     246723
  16384     44.5217     729444     20.0784     328965
  32768     10.6667     349525     10.6667     349525
  65536     6.4         419430     6.4         419430
        (UNIX buffer cache filled up)
 131072     0.214765     28149     2.90909     381300
 262144     0.108475     28435     1.45455     381300
 524288     0.0547009    28679     0.727273    381300

Linear
  bsize     req/sec         bw     req/sec         bw
             cooked     cooked         raw        raw
    512     55.3097      28318     55.8036      28571
   1024     27.9018      28571     52.521       53781
   2048     13.9509      28571     48.0769      98461
   4096     7.06787      28950     39.05       159948
   8192     3.48661      28562     28.9259     236961
  16384     1.74888      28653     16.9565     277815
  32768     0.870536     28525     11.4706     375868
  65536     0.431111     28253     5.70588     373940
 131072     0.216216     28339     2.82353     370085
 262144     0.104803     27473     1.41176     370085
 524288     0.0547945    28728     0.705882    370085

New Controller:

Random
  bsize     req/sec         bw     req/sec         bw
             cooked     cooked         raw        raw
    512     170.667      87381     157.538      80659
   1024     170.667     174762     170.667     174762
   2048     73.1429     149796     170.667     349525
   4096     51.2        209715     128         524288
   8192     25.6        209715     64          524288
  16384     16          262144     64         1048576
  32768     6.4         209715     32         1048576
  65536     3.55556     233016     16         1048576
 131072     1.82857     239674     7.11111     932067
 262144     0.914286    239674     3.55556     932067
 524288     0.444444    233016     1.77778     932067

Reread
  bsize     req/sec         bw     req/sec         bw
             cooked     cooked         raw        raw
    512     840.205     430185     158.3        81049
   1024     780.19      798915     167.184     171196
   2048     481.882     986895     146.286     299593
   4096     273.067    1118481     113.778     466033
   8192     146.286    1198372     81.92       671088
  16384     78.7692    1290555     48.7619     798915
  32768     32         1048576     32         1048576
  65536     16         1048576     10.6667     699050
 131072     8          1048576     8          1048576
        (UNIX buffer cache filled up)
 262144     0.888889    233016     3.55556     932067
 524288     0.450704    236298     1.77778     932067

Linear
  bsize     req/sec         bw     req/sec         bw
             cooked     cooked         raw        raw
    512     231.481     118518     162.338      83116
   1024     231.481     237037     173.611     177777
   2048     115.741     237037     148.81      304761
   4096     57.8519     236961     120.154     492150
   8192     28.9259     236961     78.1        639795
  16384     13.9286     228205     48.75       798720
  32768     7.22222     236657     27.8571     912822
  65536     1.83019     119943     4.04167     264874
 131072     0.872727    114390     1.84615     241979
 262144     0.413793    108473     0.923077    241979
 524288     0.26087     136770     0.666667    349525

Sorry for the format of the tables, but this is something I hacked out of one
of my statistics-gathering awk scripts in a few minutes.

ps: people were making comments about "2Mb/sec" controllers only transferring
1Mb per second on Multibus.  Well, when the manufacturers quote bandwidths
they're usually quoting instantaneous max transfer rate thru the disk
interface.  Eg: "Standard SCSI" is actually about 1Mbyte/sec rated that way.
Then, you have to consider:
        - disk driver overhead
        - UNIX system overhead
        - missed rotations/interleave
        - actual max disk output

A standard 512-byte-per-sector 5.25" disk that rotates at 3600 RPM has the
bytes going by the head at only 522K or so bytes/second (disregarding seeks
and any controller overhead).  You can't go faster than that no matter what
you do.  Besides, Multibus is slow....
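For what it's worth, the 522K figure falls out of the geometry if you assume a
typical 17-sector-per-track ST506-style format (the sector count is an
assumption; the posting doesn't give one):

    /* Where ~522K bytes/sec comes from: bytes per track times rotations
     * per second.  The 17 sectors/track is an assumed MFM geometry. */
    #include <stdio.h>

    int main(void)
    {
        int sectors_per_track = 17;          /* assumption  */
        int bytes_per_sector  = 512;
        int revs_per_second   = 3600 / 60;   /* 3600 rpm    */

        printf("%d bytes/second past the head\n",
               sectors_per_track * bytes_per_sector * revs_per_second);
        return 0;
    }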
-- 
Chris Lewis, Spectrix Microsystems Inc.
UUCP: {uunet!mnetor, utcsri!utzoo, lsuc, yunexus}!spectrix!clewis
Phone: (416)-474-1955
rbj@icst-cmr.arpa (Root Boy Jim) (04/15/88)
	A call to Interphase got an answer of "well, of course, it doesn't
	really get two megabytes per second...."  Thanks, guys.  Real helpful.
		-- David Schachter

Yeah, but it goes twice as fast as the ones who claim one megabyte/second :-)

	(Root Boy) Jim Cottrell	<rbj@icst-cmr.arpa>
	National Bureau of Standards
	Flamer's Hotline: (301) 975-5688
	The opinions expressed are solely my own
	and do not reflect NBS policy or agreement
	Those aren't WINOS--that's my JUGGLER, my AERIALIST, my SWORD
	SWALLOWER, and my LATEX NOVELTY SUPPLIER!!