rkimble@nswc-wo.navy.mil (Robert Kimble) (02/02/91)
Has anybody out there measured throughput performance for file transfers over FDDI? If so, please post a reply with the following information:

1. The manufacturer and model of the two computers involved in the file transfer, the network interface cards used, and the networking software used.

2. The size of the file and the average throughput (in bytes/sec) for the file transfer.

Any information will be appreciated.

Robert Kimble
rkimble@nswc-wo.navy.mil
mohta@necom830.cc.titech.ac.jp (Masataka Ohta) (02/04/91)
In article <1991Feb1.175751.13639@relay.nswc.navy.mil> rkimble@nswc-wo.navy.mil.UUCP (Robert Kimble) writes:

>Has anybody out there measured throughput performance for file transfers
>over FDDI? If so, please post a reply with the following information:

We are connecting two RC6280s (60MHz R6000) with FDDI. The FDDI system is from CMC; the software version is dated 6-Nov-90.

>2. The size of the file and the average throughput (in bytes/sec) for the
>   file transfer.

ttcp (default settings) showed 10Mbps TCP and 40Mbps UDP speed.

NFS speed is about 1MB/sec for writing and 2MB/sec for reading.

Masataka Ohta
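[For readers unfamiliar with ttcp: it derives numbers like the above by timing how long it takes to push a fixed amount of data through a socket. Below is a minimal sketch of a ttcp-style UDP transmitter, not the actual ttcp source; the receiver address 10.0.0.2 is a placeholder, and the 8 KB buffer, 2048-buffer count, and port 5001 are meant to mirror ttcp's usual defaults. Note that the number it prints is purely the transmitter's view of the transfer.]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define BUFLEN 8192          /* ttcp's default buffer length          */
#define NBUF   2048          /* buffers to send (16 MB total)         */

int main(void)
{
    char buf[BUFLEN];
    struct sockaddr_in sin;
    struct timeval t0, t1;
    double secs, nbytes = 0;
    int i, s;

    memset(buf, 0, sizeof(buf));
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(5001);                  /* ttcp's usual port     */
    sin.sin_addr.s_addr = inet_addr("10.0.0.2"); /* hypothetical receiver */

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }

    gettimeofday(&t0, NULL);
    for (i = 0; i < NBUF; i++) {
        /* a successful return only means the kernel accepted the data;
           it says nothing about what actually reached the fiber */
        if (sendto(s, buf, sizeof(buf), 0,
                   (struct sockaddr *)&sin, sizeof(sin)) == sizeof(buf))
            nbytes += sizeof(buf);
    }
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.0f bytes in %.3f sec = %.2f Mbit/sec (transmitter's view)\n",
           nbytes, secs, nbytes * 8.0 / secs / 1e6);
    return 0;
}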
vjs@rhyolite.wpd.sgi.com (Vernon Schryver) (02/05/91)
In article <7178@titcce.cc.titech.ac.jp>, mohta@necom830.cc.titech.ac.jp (Masataka Ohta) writes:
> >2. The size of the file and the average throughput (in bytes/sec) for the
> >   file transfer.
>
> ttcp (default settings) showed 10Mbps TCP and 40Mbps UDP speed.
>
> NFS speed is about 1MB/sec for writing and 2MB/sec for reading.

How was NFS speed measured? Nfsstone or one of the other benchmarks? What, if any, compensation for client and server cache policies and mechanisms was there?

Is the UDP speed quoted above the value reported by the transmitter or by the receiver? The natures of ttcp and UDP are such that it is easy to have the transmitter produce very high values and the receiver as close to 0 as you wish.

The value reported by the receiving UDP ttcp is often much lower than the value reported by the transmitter. Standard UNIX BSD-style drivers (which I suspect are what is being measured above) discard output packets when the output queue gets too full. In other words, the ttcp user process may think and report that its data has been transmitted, but in reality much of the data never made it as far as the fiber. This point is often missed or ignored; I continue to repeat the story of the Ethernet board vendor who tried to sell me his boards by saying his Sun "blast" driver could transmit 12Mbits/sec UDP/IP/Ethernet.

To make the receive side of ttcp as close to zero as you want, use a very fast transmitter (say, something that transmits >30Mbit/sec) and a receiver that is very much slower. For example, use a very old, slow Sun 3 receiver and a new, fast Sun 4 transmitter. If the speed difference is high enough, you may be able to keep the receiver so busy handling input interrupts and DMA that the receiving ttcp process never has an opportunity to run until after the transmitter stops. The result can be that the receiver reports receiving one socket-buffer-size worth of data out of many MBytes transmitted.

Vernon Schryver, vjs@sgi.com
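[The output-queue behavior described above is easy to model. The toy program below imitates the pattern of the 4.3BSD IF_QFULL/IF_DROP macros on an interface's send queue; the queue limit, drain rate, and packet count are made up purely for illustration, and this is not any vendor's actual driver. The point is that the sender's byte count grows whether or not the packet survives the queue.]

#include <stdio.h>

#define IFQ_MAXLEN 50            /* classic BSD default queue limit */

struct ifqueue {
    int len;                     /* packets currently queued */
    int drops;                   /* packets discarded        */
};

/* Returns 1 if the packet was queued, 0 if it was dropped. */
static int if_enqueue(struct ifqueue *q)
{
    if (q->len >= IFQ_MAXLEN) {  /* IF_QFULL()                          */
        q->drops++;              /* IF_DROP(): just count it ...        */
        return 0;                /* ... packet freed, never hits fiber  */
    }
    q->len++;
    return 1;
}

int main(void)
{
    struct ifqueue ifq = { 0, 0 };
    int sent_by_process = 0, queued_for_wire = 0, i;

    for (i = 0; i < 2048; i++) {
        sent_by_process++;               /* what a ttcp transmitter counts */
        if (if_enqueue(&ifq))
            queued_for_wire++;           /* what could actually be sent    */
        if (i % 100 == 0 && ifq.len > 0) /* a slow interface drains the    */
            ifq.len--;                   /* queue only occasionally        */
    }
    printf("process thinks it sent %d packets; %d queued for the wire, %d dropped\n",
           sent_by_process, queued_for_wire, ifq.drops);
    return 0;
}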
mohta@necom830.cc.titech.ac.jp (Masataka Ohta) (02/06/91)
In article <83979@sgi.sgi.com> vjs@rhyolite.wpd.sgi.com (Vernon Schryver) writes:

>> NFS speed is about 1MB/sec for writing and 2MB/sec for reading.

>How was NFS speed measured? Nfsstone or one of the other benchmarks?

Just read/write large files.

>What, if any, compensation for client and server cache policies and mechanisms
>was there?

For reading, the read file is completely in the buffer cache at the server, but the client can have no cache because the read file was created on the server and is new to the client. For writing, according to the NFS specification, writes must be somewhat synchronous, so forget about caching.

>Is the UDP speed quoted above the value reported by the transmitter or by
>the receiver?

Of course, 40Mbps is observed on both sides. I know what UDP is.

I can understand that the figure has astonished you. But please, don't assume I am an idiot and make a rather lengthy post; you could have used mail. Also, you should have considered it more and noticed that a 2MB/sec (equivalent to 16Mbps) NFS read speed supports my observation of the UDP speed.

By the way, I also ran nhfsstone with the load varying from 10 to 210:

nhfsstone -l  10:  490 sec  5022 calls   10.24 calls/sec   7.49 msec/call
nhfsstone -l  20:  245 sec  5019 calls   20.48 calls/sec   7.13 msec/call
nhfsstone -l  30:  163 sec  5043 calls   30.93 calls/sec   8.03 msec/call
nhfsstone -l  40:  123 sec  5025 calls   40.85 calls/sec   8.72 msec/call
nhfsstone -l  50:   99 sec  5017 calls   50.67 calls/sec   9.03 msec/call
nhfsstone -l  60:   81 sec  5033 calls   62.13 calls/sec  11.64 msec/call
nhfsstone -l  70:   70 sec  5022 calls   71.74 calls/sec  11.55 msec/call
nhfsstone -l  80:   61 sec  5011 calls   82.14 calls/sec  11.14 msec/call
nhfsstone -l  90:   54 sec  5011 calls   92.79 calls/sec  11.98 msec/call
nhfsstone -l 100:   49 sec  5016 calls  102.36 calls/sec  13.52 msec/call
nhfsstone -l 110:   44 sec  5032 calls  114.36 calls/sec  14.48 msec/call
nhfsstone -l 120:   41 sec  5016 calls  122.34 calls/sec  17.43 msec/call
nhfsstone -l 130:   37 sec  5012 calls  135.45 calls/sec  18.22 msec/call
nhfsstone -l 140:   35 sec  5020 calls  143.42 calls/sec  25.37 msec/call
nhfsstone -l 150:   34 sec  5019 calls  147.61 calls/sec  27.23 msec/call
nhfsstone -l 160:   30 sec  5021 calls  167.36 calls/sec  20.57 msec/call
nhfsstone -l 170:   29 sec  5019 calls  173.06 calls/sec  22.51 msec/call
nhfsstone -l 180:   27 sec  5005 calls  185.37 calls/sec  24.42 msec/call
nhfsstone -l 190:   26 sec  5001 calls  192.34 calls/sec  29.98 msec/call
nhfsstone -l 200:   26 sec  5001 calls  192.34 calls/sec  32.91 msec/call
nhfsstone -l 210:   26 sec  5008 calls  192.61 calls/sec  34.59 msec/call

Masataka Ohta
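[The "just read/write large files" measurement above amounts to something like the following sketch: time sequential read()s of a file on an NFS mount and report bytes/sec. The path name is hypothetical. As noted in the post, the file has to be one the client has never seen, e.g. created on the server, or the run mostly measures the client's own buffer cache rather than NFS.]

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>

#define BUFSIZE 8192

int main(void)
{
    char buf[BUFSIZE];
    struct timeval t0, t1;
    double secs, nbytes = 0;
    ssize_t n;
    int fd;

    /* hypothetical NFS-mounted file, created on the server */
    if ((fd = open("/nfs/server/bigfile", O_RDONLY)) < 0) {
        perror("open");
        exit(1);
    }

    gettimeofday(&t0, NULL);
    while ((n = read(fd, buf, sizeof(buf))) > 0)   /* sequential reads */
        nbytes += n;
    gettimeofday(&t1, NULL);
    close(fd);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("read %.0f bytes in %.2f sec = %.2f MB/sec (%.1f Mbit/sec)\n",
           nbytes, secs, nbytes / secs / 1e6, nbytes * 8.0 / secs / 1e6);
    return 0;
}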
vjs@rhyolite.wpd.sgi.com (Vernon Schryver) (02/08/91)
In article <7185@titcce.cc.titech.ac.jp>, mohta@necom830.cc.titech.ac.jp (Masataka Ohta) writes:
> ...
> For writing, according to the NFS specification, writes must be somewhat
> synchronous, so forget about caching.

There is controversy on this subject. At least two system vendors and at least one add-in-hardware vendor offer optional server cache mechanisms.

> >Is the UDP speed quoted above the value reported by the transmitter or by
> >the receiver?
>
> Of course, 40Mbps is observed on both sides. I know what UDP is.
>
> I can understand that the figure has astonished you. But please,
> don't assume I am an idiot and make a rather lengthy post; you could
> have used mail.

The reported numbers are respectable, but far from "astonishing." I know of more than one independent implementation that gets several times the ttcp TCP value. The NFS value of 1-2 MByte or 8-16 Mbit sounds like the systems are disk limited. The 40Mbit ttcp UDP value is excellent, but not unheard of. I'll guess that the low 10Mbit TCP value, esp. compared to the 40Mbit UDP value, is caused by using a small window or MSS. The latter can occur with routers (not the fault of the routers, of course).

Observing the same value on transmitter and receiver is surprising. It suggests that the tested implementation is transmitter limited. That is surprising given MIPS-style caching. (Measurements in such a realm are a popular pastime around here.)

Please accept my apologies if my previous message implied I consider you an "idiot." I do not know you well enough to have any opinion on your competence. It is a fact that many people paid to work on this stuff do not understand UDP, not to mention TCP. The development managers and VP from the Ethernet board maker with the 12Mbit/sec UDP/ether performance that I wrote about previously were not joking and not idiots. They were trying for a long-term contract with a major workstation maker.

Vernon Schryver, vjs@sgi.com
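[Checking whether a UDP path is transmitter limited is straightforward: count and time the bytes that actually reach the receiving socket, as in the sketch below, and compare against the transmitter's number. The port and the idle-timeout used to end the run are simplifying assumptions; a real benchmark would delimit the run more carefully.]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    char buf[8192];
    struct sockaddr_in sin;
    struct timeval t0, t1, idle = { 2, 0 };    /* stop after 2 idle seconds  */
    double secs, nbytes = 0;
    ssize_t n;
    int s, got_first = 0;

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(5001);                /* must match the transmitter */
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("bind");
        exit(1);
    }
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &idle, sizeof(idle));

    /* count only what is actually delivered to this socket */
    while ((n = recv(s, buf, sizeof(buf), 0)) > 0) {
        if (!got_first) {
            gettimeofday(&t0, NULL);           /* clock starts at first datagram */
            got_first = 1;
        }
        nbytes += n;
        gettimeofday(&t1, NULL);               /* remember time of last datagram */
    }
    if (!got_first) {
        printf("no data received\n");
        return 1;
    }

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.0f bytes in %.3f sec = %.2f Mbit/sec (receiver's view)\n",
           nbytes, secs, secs > 0 ? nbytes * 8.0 / secs / 1e6 : 0.0);
    return 0;
}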
mohta@necom830.cc.titech.ac.jp (Masataka Ohta) (02/08/91)
In article <84440@sgi.sgi.com> vjs@rhyolite.wpd.sgi.com (Vernon Schryver) writes:

>At least two system vendors and at
>least one add-in-hardware vendor offer optional server cache mechanisms.

Sorry, I forgot to mention them, obviously because we are not using them. :-)

>> Of course, 40Mbps is observed on both sides.

>The reported numbers are respectable, but far from "astonishing."

Maybe. But FDDI is only 100Mbps. With early versions of software, it is difficult to fully extract the interface speed, so 40Mbps today may mean that FDDI will soon be saturated once the device driver is a little more tuned. We have just begun to move toward FDDI, so the figure is at least menacing, if not astonishing.

>I know of more than one independent implementation that gets several times
>the ttcp TCP value.

Would you post more detailed information about them?

>The NFS value of 1-2 MByte or 8-16 Mbit sounds like the
>systems are disk limited.

As for the read performance of 2MB/sec, the read file is fully buffer cached (I assume you know what a buffer cache is), so there can be no disk limit. With 40Mbps UDP speed, 16Mbps NFS read seems a fairly reasonable figure, doesn't it?

>I'll guess that the low 10Mbit TCP value, esp. compared to the
>40Mbit UDP value,

Surely.

>is caused by using a small window or MSS.

Maybe, or maybe not. The only thing I know is that the default buffer size of ttcp is 8KB.

Masataka Ohta
vjs@rhyolite.wpd.sgi.com (Vernon Schryver) (02/12/91)
In article <7200@titcce.cc.titech.ac.jp>, mohta@necom830.cc.titech.ac.jp (Masataka Ohta) writes:
> ...
> We have just begun to move toward FDDI, so the figure is at least menacing,
> if not astonishing.

Agreed. FDDI is only about 10x Ethernet. There are commercially available multi-homed UNIX computers and file servers whose aggregate Ethernet load would menace a single FDDI ring. I've long been whining about that.

More recently, I've begun to realize that 100Mb is not so bad. 100Mb is too slow for backbones connecting many 10Mb or 100Mb LANs. However, there seem to be few applications for Gb networks, except to interconnect LANs. At a recent meeting of high speed network developers, the gov. offered funding for non-backbone applications, but had no takers. Just as there is an upper bound on the useful speed of graphics, there may be an upper bound on the useful speed of a LAN, at least for the next few years.

> >I know of more than one independent implementation that gets several times
> >the ttcp TCP value.
>
> Would you post more detailed information about them?

I try to overcome the temptation to brag about my numbers in public, non-commercial forums like this; the SGI sales organization has them. I've been told first hand of simple ttcp TCP values >=30Mb. To my knowledge, those other numbers have not been published, so I can't provide attributions. A long time ago I heard rumors of >=40Mb from a respected developer, but the hardware he was probably using has been conspicuously absent from the market.

A workstation company and an FDDI board maker are advertising 25Mb, but each gets that by somehow running two simultaneous ttcp's on one machine to two other machines. Those values are hard to evaluate without knowing how the multiple ttcp's were synchronized, but neither vendor says. A mainframe maker and its customers have a version of ttcp that does several simultaneous TCP transfers. The numbers from that benchmark are believable, but give different information than the familiar ttcp number.

> >The NFS value of 1-2 MByte or 8-16 Mbit sounds like the
> >systems are disk limited.
>
> As for the read performance of 2MB/sec, the read file is fully buffer cached
> (I assume you know what a buffer cache is), so there can be no disk limit.
> With 40Mbps UDP speed, 16Mbps NFS read seems a fairly reasonable
> figure, doesn't it?

It seems respectable. How does the same benchmark run locally on the server? That would help determine where the 40Mb/16Mb = 2.5x difference is.

> >I'll guess that the low 10Mbit TCP value, esp. compared to the
> >40Mbit UDP value,
> >is caused by using a small window or MSS.
>
> Maybe, or maybe not. The only thing I know is that the default buffer
> size of ttcp is 8KB.

Current versions of ttcp allow changing the window with -b. If the TCP implementation has the 4.3+ fixes, then a window that is a multiple of 4096 and >= 48K might do much better.

Vernon Schryver, vjs@sgi.com
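[For concreteness, enlarging the window from a program amounts to asking for bigger socket buffers before the connection is set up, which is roughly what ttcp's -b option does under the hood. A minimal sketch follows, using the 48 KB figure suggested above; whether the kernel grants the full size depends on the TCP implementation.]

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int make_tcp_socket_with_window(int bufsize)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);

    if (s < 0)
        return -1;
    /* set both directions; do this before connect()/listen() so the
       larger window is in place when the connection is negotiated */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0 ||
        setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
        perror("setsockopt");
    return s;
}

int main(void)
{
    int s = make_tcp_socket_with_window(12 * 4096);  /* 48 KB, a multiple of 4096 */

    printf("socket %d created, 48 KB send/receive buffers requested\n", s);
    return 0;
}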
mohta@necom830.cc.titech.ac.jp (Masataka Ohta) (02/13/91)
In article <84864@sgi.sgi.com> vjs@rhyolite.wpd.sgi.com (Vernon Schryver) writes:

>Just as there is
>an upper bound on the useful speed of graphics, there may be an upper bound
>on the useful speed of a LAN, at least for the next few years.

Surely there is an upper bound on the useful speed of graphics. The data transfer rate of a typical color workstation is about 200Mbytes/sec, so a 1Gbyte/sec network is perhaps enough for near-realtime image transfer.

>At a recent meeting of high speed network developers, the gov. offered
>funding for non-backbone applications, but had no takers.

Perhaps a "high speed network" is not fast enough for realtime image transfer. As for file transfer, there is no theoretical upper bound, because of buffer caching and disk striping.

Masataka Ohta
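[A back-of-the-envelope check on the ~200 Mbytes/sec figure; the display resolution, pixel depth, and refresh rate below are assumed purely for illustration:]

\[
1280 \times 1024 \;\text{pixels} \times 3 \;\text{bytes/pixel} \times 60 \;\text{frames/sec}
\approx 2.4 \times 10^{8} \;\text{bytes/sec} \approx 236 \;\text{Mbytes/sec}
\]

[That is roughly 1.9 Gbit/sec, so a 1 Gbyte/sec (8 Gbit/sec) network would comfortably carry near-realtime image transfer under these assumptions.]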