retrac@titan.rice.edu (John Carter) (11/13/87)
Hello, I'm studying high-throughput bulk data transfer protocols and am interested in finding out what the best-performing current protocols can achieve. I'd like to know some best/average-case times for various protocols to transmit 10 megabytes over a 10 Mbit/sec ethernet.

To be more specific: what are the best or average times for your favorite bulk data transfer protocol (tcp-ip, etc.) to transfer 10 Mbytes from the main memory of one machine to the main memory of another machine (as measured from the time the sender invokes the protocol until the receiver receives the entire chunk of data and the sender is informed of this)? I'm primarily interested in performance over a 10 Mbit/sec ethernet (which is pretty standard) but wouldn't mind hearing about other systems. An alternative performance measure is the best-case throughput (megabits/sec) that your favorite protocol achieves - in this case mention how much data you transferred to get the result.

Please include a short description of your configuration (e.g. a pair of Sun 3/180's on a 10 Mbit/sec ethernet running Sun Unix 3.2). Also mention any performance tuning hacks you may have done to the original protocol (if any). This should make it obvious what the bottleneck in the system is (usually the CPUs), among other things. If it's an uncommon or special-purpose protocol, a reference would be helpful.

Mail your responses to me directly via e-mail. After the responses begin to taper off, I'll post the results to comp.protocols.misc. Thanks in advance for any time and effort you make on my behalf!

John Carter
Dept. of Computer Science
Rice University

P.S. I've posted the results of a similar query restricted to tcp-ip in comp.protocols.tcp-ip for anyone who's interested but doesn't normally read that newsgroup.

=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
UUCP:  {Backbone or Internet site}!rice!retrac
ARPA:  retrac@rice.edu
CSNET: retrac@rice.edu
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
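[The measurement John defines - clock starts when the sender invokes the protocol, stops when the receiver has the whole chunk and the sender has been told so - can be sketched with a present-day sockets API. This is a hypothetical loopback harness, not any of the protocols under discussion; the port number and chunk size are arbitrary choices for illustration.]

```python
import socket
import threading
import time

SIZE = 10 * 1024 * 1024  # 10 Mbytes, the amount named in the query
PORT = 47000             # arbitrary port for this loopback sketch

def receiver(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    got = 0
    while got < SIZE:
        chunk = conn.recv(65536)
        if not chunk:
            break
        got += len(chunk)
    conn.sendall(b"A")   # inform the sender the entire chunk arrived
    conn.close()
    srv.close()

ready = threading.Event()
t = threading.Thread(target=receiver, args=(ready,))
t.start()
ready.wait()

data = bytes(SIZE)                 # from main memory, not disk
s = socket.create_connection(("127.0.0.1", PORT))
start = time.perf_counter()        # clock starts: sender invokes the protocol
s.sendall(data)
assert s.recv(1) == b"A"           # clock stops: sender is informed
elapsed = time.perf_counter() - start
s.close()
t.join()

mbits = SIZE * 8 / elapsed / 1e6
print("%.1f Mbits/sec" % mbits)
```

[Over loopback this of course measures the host, not the wire; on a real 10 Mbit/sec ethernet the same harness would report the protocol's usable fraction of the medium.]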
mike@BRL.ARPA (Mike Muuss) (11/21/87)
A pair of Sun-3/50 machines running SunOS 3.3 with tcp_sndspace and tcp_rcvspace (or whatever they are called) increased to 16K (i.e., increased offered windows). The test is typically 1 Mbyte memory-to-memory using the TTCP program (copies on request). Typical data rate is 3 Mbits/sec. For two pairs, I typically see 6 Mbits/sec total for both connections; never bothered to do three pairs. Trailers were off. 6 Mbits/sec is fairly close to the maximum usable bandwidth of an Ethernet.

On an NSC Hyperchannel, between a Gould PN9080 running UTX 2.0 (using a PI32 to access an A400, with an otherwise idle trunk to an A130 adaptor) connected to a Cray XMP48 running UNICOS 2.0 (at the time), I was able to achieve 11 Mbits/sec aggregate, using an MTU of 4144 and Cray-IP encapsulation. This was not using TCP at all, but merely IP/ICMP Echo request/response packets, in a "flood ping" test.

	Best,
	 -Mike
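[The tuning Mike describes - raising the tcp_sndspace/tcp_rcvspace kernel variables to enlarge the offered windows - maps, on a present-day sockets API, to setting the per-socket buffer sizes. A minimal sketch, assuming nothing about the SunOS internals; the 16K figure is taken from his test, and the kernel is free to round the request:]

```python
import socket

WANT = 16 * 1024  # 16K, the value used in the Sun-3/50 tests

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# SO_SNDBUF / SO_RCVBUF play the role of tcp_sndspace / tcp_rcvspace:
# they bound the window a TCP can offer its peer, and hence how much
# data can be in flight before the sender must stall for ACKs.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, WANT)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WANT)

# Kernels may adjust the request (Linux, for one, doubles it to cover
# bookkeeping overhead), so read back what was actually granted before
# drawing throughput conclusions.
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("sndbuf granted:", sndbuf, "rcvbuf granted:", rcvbuf)
s.close()
```

[Whether a larger window helps depends on the bandwidth-delay product of the path; on a LAN with two Sun-3/50s the CPUs, not the window, were evidently the next bottleneck.]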