[fa.tcp-ip] tcp/ip on hyperchannel

tcp-ip@ucbvax.ARPA (06/17/85)

From: T. Michael Louden (MS W422) <louden@mitre-gateway>

Can anyone give me some information on TCP/IP over a local Hyperchannel?
I would like to know what data rates a user could reasonably expect
to see on file transfers.
Additional information on interfaces and configurations would also be
useful.

Thanks for any help!
Mike Louden
Louden@MITRE

tcp-ip@ucbvax.ARPA (06/18/85)

From: Mike Muuss <mike@BRL.ARPA>

Creon Levit and Eugene Miya at NASA-AMES did a fairly complete set of
tests on the Hyperchannel;  they will probably send you a copy of the
results if you ask.

Locally, between a 780 and a 750, we see data rates on the order
of 80 Kbytes/sec of user->user data, which is similar to our other
interfaces (ethernet, etc).

Of course, for us the choice of Hyperchannel for that particular room
was necessitated by having to talk to a Cyber 750 running NOS2.
The fact that the VAXen can talk amongst themselves over the Hyperchannel
is incidental.

If you are looking for something REALLY FAST to interconnect just
minis and super-minis, try the 80 Mbit PRONET;  much cheaper than
Hyperchannel.

	Best,
	 -Mike

tcp-ip@ucbvax.ARPA (06/18/85)

From: Ron Natalie <ron@BRL.ARPA>

I should point out that it is NASA's conviction that the speeds in the
numbers they came up with were limited by the PI-13 interface to the
PDP-11.  I don't know if this is true, but it wouldn't surprise me.  The
interface is a pain to deal with, and the whole hyperchannel system is
amazingly temperamental considering the small size of the system here and
the high price we paid for it.

-Ron

tcp-ip@ucbvax.ARPA (06/18/85)

From: fouts@AMES-NAS.ARPA (Marty)

     Actually, we have some more experience at NASA now, and aren't
completely convinced that the PI13(14)/VAX interface is the biggest
bottleneck.

     I'm seeing some pretty horrible numbers when I make a Cray 2 pump
data onto the floor, and I'm pretty sure it's not the Cray, but I still
don't know what it is.

Marty


tcp-ip@ucbvax.ARPA (06/18/85)

From: ihnp4!houxm!hrpd3!burns@BERKELEY

Could you please send me a copy of the replies?

Derrick Burns

tcp-ip@ucbvax.ARPA (06/19/85)

From: "J. Spencer Love" <JSLove@MIT-MULTICS.ARPA>

There is an implementation of TCP/IP via the Hyperchannel for Multics,
which is used as an in-machine-room local area network between 4 Multics
systems in the Pentagon.  By setting the window size to 50000 and the
packet size to 5000, we were able to get FTP rates on a single
connection as high as 275,000 bits per second.  These large buffers and
packet sizes are not a problem for Multics, but we had to special-case
the window size for our multi-homed test site, since many implementations
on the ARPAnet do strange and bizarre things when given huge windows.
(Reply to me if you want a somewhat more detailed description).
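
For readers on Berkeley UNIX rather than Multics, here is a minimal
sketch of asking for a comparably large window by enlarging the socket
buffers.  This assumes the 4.2BSD sockets API, not the Multics
interface used in the test above:

    /*
     * Minimal sketch: request a large TCP window by enlarging the
     * socket buffers.  On sockets-based stacks the socket buffer
     * size bounds the window TCP can offer.
     */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int bufsize = 50000;    /* ~50 Kbyte window, as in the test above */

        if (s < 0) {
            perror("socket");
            return 1;
        }
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                       (char *)&bufsize, sizeof(bufsize)) < 0)
            perror("setsockopt SO_RCVBUF");
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       (char *)&bufsize, sizeof(bufsize)) < 0)
            perror("setsockopt SO_SNDBUF");
        /* ... connect() and transfer as usual ... */
        return 0;
    }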

Spitting data straight through the network we were only able to unload
800,000 bits per second (with no protocol).  That is under 2% of the
50 Mbit/s raw bandwidth of the network.  We blame the problem on the hardware
interface design, which is amazingly brain damaged.  Given this
performance, I would recommend practically any other vendor who has an
appropriate interface card for your machine; you'll spend a whole lot
less money and get much better service.

tcp-ip@ucbvax.ARPA (06/19/85)

From: fouts@AMES-NAS.ARPA (Marty)

     I have also seen a maximum of 800,000 bits per second, in this case
transferring data from a Cray 2 onto the floor.


tcp-ip@ucbvax.ARPA (06/21/85)

From: CERF@USC-ISI.ARPA


My glancing exposure to Hyperchannel some years ago left me
with the impression that the 50 Mbit channel had some built-in
bus contention and handshaking logic which made its maximum
data rate a function of the physical length of the channel
(handshaking delays limit access frequency, etc.).

This style of operation can, indeed, leave one with much
less effective bandwidth from any one source than one would
be led to expect from the burst rate of the channel.

Vint Cerf
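
A back-of-the-envelope sketch of the effect described above: if every
transmission pays a handshake whose duration grows with the round-trip
propagation time of the cable, the effective rate a single source sees
falls short of the burst rate.  Every constant below is an assumed
figure for illustration, not a Hyperchannel specification.

    /*
     * Effective rate = frame bits / (handshake time + transmit time),
     * where the handshake time has a per-meter round-trip component.
     * All constants are illustrative assumptions.
     */
    #include <stdio.h>

    int main(void)
    {
        double burst_bps  = 50e6;      /* nominal 50 Mbit/s burst rate       */
        double frame_bits = 4096 * 8;  /* one 4 Kbyte frame                  */
        double fixed_us   = 200.0;     /* assumed fixed handshake overhead   */
        double ns_per_m   = 10.0;      /* assumed round-trip delay per meter */
        int m;

        for (m = 100; m <= 1600; m *= 2) {
            double overhead_us = fixed_us + ns_per_m * m / 1000.0;
            double xmit_us     = frame_bits / burst_bps * 1e6;
            double eff_bps     = frame_bits / ((overhead_us + xmit_us) * 1e-6);
            printf("%5d m of trunk: ~%4.1f Mbit/s effective for one sender\n",
                   m, eff_bps / 1e6);
        }
        return 0;
    }

Note that the fixed per-transmission overhead dominates for large
frames; the length-dependent term matters most when many short
messages compete for channel access.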

tcp-ip@ucbvax.ARPA (06/21/85)

From: Jerry Morence <morence@Almsa-2>

Mike:

We have had the hyperchannel in production since 1983, connecting four IBM
4341/4381 S/370 MVS systems in a private local network.  This configuration
is installed at a total of six sites using software we (ALMSA, St. Louis, Mo.)
developed ourselves.  We do not use TCP/IP.

The speed of data interchange among all these hosts has been limited
only by the channel speed of the slowest host.  We are averaging at each
location approximately 1.5 megabytes of data per second across multiple hosts.
We have had two pairs of hosts carrying on data interchange concurrently, with
each pair averaging the same 1.5 megabytes per second, equating to 3.0
megabytes (24 megabits) per second across the hyperchannel.

We are so satisfied with the performance of the hyperchannel and our software,
that we are investigating expansion of the local network and linking our six
sites together (possibly using Hyperlink).

Regards,
Jerry

tcp-ip@ucbvax.ARPA (06/21/85)

From: Ron Natalie <ron@BRL.ARPA>

I heard a rumor that TCP/IP runs faster over Hyperchannel than NETEX does.
Does someone else who has Hyperchannel know how to deal with the adapters
floating away, other than resetting them by hand?

-Ron

tcp-ip@ucbvax.ARPA (06/22/85)

From: CERF@USC-ISI.ARPA

Jerry,

thanks for the report on hyperchannel - have things changed in the
last couple of years?  Do you have a short bus (literally, how many
feet of backplane or whatever is used to implement the channel)?

Is my perception of the handshaking delay being a function of
distance incorrect? I would like to clear up any misconception
I have or may have propagated.

thanks,

Vint

tcp-ip@ucbvax.ARPA (06/29/85)

From: ihnp4!ihu1e!jee@BERKELEY

The protocol on the Hyperchannel is best described as CSMA/CP, where
CP is collision prevention.  This roughly equates to p-persistent CSMA,
except that it is prioritized.

What all this means is that it behaves as CSMA when it is planning to
transmit.  All adapters recognize when a transmission is happening.  If
they have to transmit, each waits a different time (the backoff
algorithm) which is preselected (the priority).  It is true that prior
to transmission of the actual data there is a control-information
exchange with the destination adapter.  It is a simple way of making
sure the channel is clear prior to transmission (i.e. preventing a
collision from occurring after transmission begins), and the cost is
only equal to the round-trip time.  This control exchange allows them
to transmit very large packets (much more than 4 Kbytes).
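
A toy sketch of the prioritized collision prevention just described,
assuming a small adapter count and one delay slot per priority level:

    /*
     * After the channel goes idle, each adapter waits a delay fixed by
     * its preassigned priority before trying to seize the channel, so
     * two ready adapters never start transmitting at the same moment.
     * The adapter count and slot assignment are illustrative.
     */
    #include <stdio.h>

    #define N_ADAPTERS 4

    int main(void)
    {
        int ready[N_ADAPTERS] = { 0, 1, 1, 0 };  /* adapters with queued data */
        int slot;

        /* Adapter i listens for i idle delay slots before transmitting;
         * the lowest-numbered ready adapter wins, and the others see
         * its carrier and keep deferring.
         */
        for (slot = 0; slot < N_ADAPTERS; slot++) {
            if (ready[slot]) {
                printf("adapter %d seizes the channel in slot %d\n",
                       slot, slot);
                break;
            }
        }
        return 0;
    }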

In fact, their protocol is similar to the proposed ANSI X3T9.5 standard
for high-speed local networks.

I would suggest you contact Network
Systems Corporation in Minneapolis, Minnesota directly for some
introductory information which goes into much more detail.

tcp-ip@ucbvax.ARPA (07/01/85)

From: "Richard Kovalcik, Jr." <Kovalcik@MIT-MULTICS.ARPA>

Unfortunately, the hyperchannel collision protection is worthless.  The
adapters are protected against transmitting over each other, but for
all real messages (> about 32 bytes) each adapter has only one receive
buffer.  If you transmit a second packet to a node before it has read
the first one out, or worse, if two different nodes each transmit a
packet to a third node within a small interval, all hell breaks loose.
All the adapters forget all their messages (including the one already in
the buffer) and you have to issue reset commands to them all.
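
If that description is accurate, the host software must enforce its own
flow control.  A sketch of one such discipline follows; every function
name in it is a hypothetical stand-in for whatever a real driver
provides.

    /*
     * Host-side discipline for a one-receive-buffer adapter: allow only
     * one unacknowledged message in flight per destination, and automate
     * the reset that otherwise has to be done by hand.  All functions
     * below are hypothetical stubs, not a real driver interface.
     */
    #include <stdio.h>

    static int  send_message(int node)  { (void)node; return 0;  }
    static int  wait_for_ack(int node)  { (void)node; return -1; } /* timed out */
    static void reset_adapter(int node) { printf("resetting adapter %d\n", node); }

    static void send_one(int node)
    {
        /* Never queue a second message before the first is drained:
         * the remote adapter holds only one real message at a time.
         */
        if (send_message(node) < 0 || wait_for_ack(node) < 0)
            reset_adapter(node);
    }

    int main(void)
    {
        send_one(3);    /* node number is illustrative */
        return 0;
    }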