[comp.protocols.tcp-ip] on-/off-board protocol processing

dcrocker@TWG.COM (Dave Crocker) (03/24/88)

As with most technical choices, each alternative has its appropriate uses.

The concern about reliability of processing, expressed by Vint, seems not
to apply to many of the current "intelligent-board" solutions.  Their
interface to the host is essentially -- often exactly -- like accessing
standard primary memory.  So, if you don't trust your primary memory,
you probably have more serious architectural fish to fry than where you
put your protocol processing.  When the interface is more channel-oriented,
then reliability becomes a factor.  That is, if you need a serious protocol
to access your protocols, there is a reasonable chance that the access
protocol has bugs, so that you have a step in the networking chain
truly exposed to problems, and not detectable by the end-to-end
safety mechanism of the transport protocol.
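
To make the distinction concrete, here is a sketch of the memory-style
interface.  The names and layout are invented, not any real vendor's
card; the point is that handing the board a buffer is just a memory
write, with no access protocol that could itself harbor bugs:

    /* Illustrative only -- a memory-mapped board typically exposes
     * a descriptor ring in shared memory; host and board pass
     * buffers back and forth with ordinary loads and stores.
     */
    struct rx_desc {
        unsigned short status;   /* ownership flag: who may touch it */
        unsigned short length;   /* bytes in buffer when board done  */
        char *buffer;            /* host-visible buffer address      */
    };

    #define DESC_OWN 0x8000      /* hypothetical "board owns it" bit */

    void give_to_board(struct rx_desc *d, char *buf)
    {
        d->buffer = buf;
        d->length = 0;
        d->status = DESC_OWN;    /* plain store; no access protocol  */
    }

A channel-style interface, by contrast, wraps every such handoff in a
command/response exchange of its own -- and that exchange sits outside
the transport checksum.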

On the other hand,

There is some interesting mythology about the benefits of moving your
protocols to an intelligent board.  Having a slow board and a fast
host has become a very significant issue, as earlier messages have
already discussed.  My only comment is that when making a choice, you
should pay close attention to this issue.

Less well-understood are some beliefs that the intelligent board
makes the host less complex and relieves the host of substantial
overhead.  This is true only sometimes.  The only factor that is
always nicer about intelligent boards is that they do the checksum
calculation.
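
For reference, the calculation being offloaded is the Internet
one's-complement checksum used by IP, TCP, and UDP.  A minimal
host-side version looks about like this (byte-order subtleties
glossed over):

    /* One's-complement sum of 16-bit words, carries folded back in.
     * This is the per-packet loop the intelligent board spares the
     * host from running.
     */
    unsigned short in_cksum(unsigned short *addr, int len)
    {
        unsigned long sum = 0;

        while (len > 1) {
            sum += *addr++;
            len -= 2;
        }
        if (len == 1)                       /* odd trailing byte */
            sum += *(unsigned char *)addr;

        sum = (sum >> 16) + (sum & 0xffff); /* fold the carries */
        sum += (sum >> 16);
        return (unsigned short)~sum;
    }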

On the other hand,

They significantly increase the cost of networking hardware.  They
tend to limit the number of virtual circuits that you can have, due to
memory limitations on the board -- a factor that is becoming less of
a problem with 512K and 1M boards.  They tend to make multiple-interface
configurations a problem, since you then have to add complexity to the
host anyhow, to coordinate the cards.
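
To put illustrative numbers on the virtual-circuit limit (the figures
are invented, but the shape of the arithmetic is not): if code and
tables eat 128K of a 512K card, and each TCP connection needs roughly
8K of buffering -- 4K in each direction -- plus some control state,
the card tops out somewhere under 48 circuits.  Fine for a
workstation; cramped for a timesharing host.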

That is, when you move the protocols to the board, multiple
boards become multiple protocol implementations.  Coordinating IP
routing, UDP and TCP port number allocation, etc., becomes a real
hassle.  Worse, customers seem inclined to try to do this with
implementations from different vendors(!)
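
Here is a sketch of the sort of host-side glue this forces on you
(all names invented).  Even with TCP on the cards, something on the
host has to own the shared port space, or two boards can hand the
same port to different applications:

    /* Hypothetical host-side shim for a multi-board configuration.
     * The host keeps the master map of allocated TCP ports; boards
     * are told which port to use, never asked to choose one.
     */
    #define MINPORT 1024
    #define MAXPORT 5000

    static char inuse[MAXPORT + 1];    /* master allocation map */

    unsigned short alloc_port(void)    /* call before asking a card */
    {
        unsigned short p;

        for (p = MINPORT; p <= MAXPORT; p++)
            if (!inuse[p]) {
                inuse[p] = 1;
                return p;
            }
        return 0;                      /* none free */
    }

    void free_port(unsigned short p)
    {
        inuse[p] = 0;
    }

And that is with cards from one vendor; mixing vendors means teaching
this shim each card's own notion of port ownership.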

The idea that intelligent boards reduce interrupt overhead sounds
appealing, but often does not prove out.  Most incoming packets
have their data handed immediately to the receiving application, so
the interrupt raised when a packet arrives still runs the o/s device
driver -- yup, you still need such a beast when you have intelligent
boards -- and you still pay for the kernel/user context switch.  In
the case of heavy traffic with small packets, the intelligent board
does have an edge.  Otherwise, the host-based solution seems to be a
much more efficient use of resources.
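
Some made-up but plausibly shaped arithmetic: if taking the interrupt
and crossing the kernel/user boundary cost on the order of 500
microseconds per packet, and TCP/IP header processing costs 100, then
offloading eliminates only the 100 -- the expensive part of the
per-packet path never left the host.  The board wins only when it can
batch many small packets behind a single interrupt.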

Now, suppose that you have a host that is tending towards saturation and
you believe that the extra processing on the board will relieve the
problem.  At one level of analysis, you would be correct. 

On the other hand,

It is a very short-sighted way to solve a system congestion problem.  If
your host has a problem at that level, you probably have only bought yourself
relief for a short time.  In all likelihood, you need to get yourself
another host.  

Given that most machines are in the micro-computer and small-minicomputer
range, this expense is not necessarily onerous.

Now, about the idea of having a non-networking-oriented access
model for the software, such as mapping the network into a portion
of a process's address space, so that the software need not be
aware of networking; it simply thinks that it is doing memory
transfers, and the underlying hardware/software handle the rest...

For general, "distributed processing" types of activities, this
will, no doubt, prove very, very useful.  In essence, it is the
natural evolution down the path that includes the remote procedure
call model, since you are integrating networking into increasingly
"natural" computing models.

This, also, is its problem.  While it often is very nice to hide
networking issues, it often is necessary to manipulate them directly.
Allowing a program to ignore the issues is great.  Preventing it from
paying attention is terrible.
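
To see the appeal and the problem in one place, here is a stub of the
model.  netmap() is a made-up name, and the "remote" object is faked
with local memory so the sketch is self-contained:

    #include <string.h>

    static char remote_object[4096];   /* stand-in for remote data */

    /* A real version would arrange for references to fault across
     * the network; the program below never knows the difference.
     */
    char *netmap(const char *host, const char *object)
    {
        (void)host; (void)object;      /* unused in this stub */
        return remote_object;
    }

    int main(void)
    {
        char *r = netmap("east-coast-host", "shared-table");

        /* Pure memory operations: no connections, no packets --
         * but also no timeouts, windows, or retransmission knobs
         * for the program that needs to get at them.
         */
        strcpy(r, "an update");
        return 0;
    }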

wayne@petruchio.UUCP (Wayne Hathaway) (03/25/88)

One problem with offloading protocols that has not yet been mentioned
is the "locked-up" nature of commercial on-board protocol
implementations.  A specific example from a place I used to be
associated with:  They used intelligent Ethernet cards (brand name
irrelevant) quite successfully -- until they extended their Ethernet
to the East Coast with a satellite link.  Needless to say, the
on-board Ethernet software was hardly tuned to the extra delay, and
the throughput was abysmal.  But since the on-board software was
proprietary there was nothing the host administrator could do, and
while the card manufacturer was sympathetic, tuning his software for
such a "strange" environment was very low priority.  The solution?
The satellite link was replaced with a slower (bandwidth) but faster
(throughput) land line!
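
For those who want the arithmetic behind the abysmal throughput (the
figures are illustrative, not measurements from that site): a
geostationary satellite hop puts the round trip near 540 milliseconds,
and a TCP sender can have at most one window of data outstanding per
round trip.  With a 4K-byte window frozen into the card's firmware,
throughput tops out around 4096/0.54, or roughly 7.5K bytes per
second, no matter how fat the satellite channel is.  The same window
over a 60-millisecond land line allows nearly 70K bytes per second --
hence the slower link that went faster.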

Of course, this is not unique to network protocols; any
"intelligent controller" could have the same problem.  In the case of
something like a disk controller, however, one is considerably less
likely to suddenly add some 40,000 miles to the length of the cable!

      Wayne Hathaway                  ultra!wayne@Ames.ARPA
      Ultra Network Technologies
      2140 Bering Drive               with a domain server:
      San Jose, CA 95131                 wayne@Ultra.COM
      408-922-0100