[comp.protocols.misc] Realtime Protocols

wardc@banana.cse.eng.auburn.edu (Christopher Ward) (09/26/90)

A friend just asked me an interesting question regarding network
protocols.  His company requires a "realtime" protocol.  By realtime they
are interested in ensuring that data arrives uncorrupted at the
destination within a certain period.  None of the protocols that I'm
aware of (TCP/IP, MAP etc.) provide this as an option.  About the only
suggestion I could make is to use MAP with token-ring.  Does anyone
know of any other protocols that might be appropriate?

Chris Ward

wardc@eng.auburn.edu

--
INTERNET:   wardc@eng.auburn.edu
US-MAIL:    CSE Dept. Auburn University, Auburn, AL 36847
PHONE:      (205) 844-6320

haas%basset.utah.edu@cs.utah.edu (Walt Haas) (10/01/90)

In article <24594@uflorida.cis.ufl.EDU> wardc@banana () writes:
>A friend just asked me an interesting question regarding network
>protocols.  His company requires a "realtime" protocol.  By realtime they
>are interested in ensuring that data arrives uncorrupted at the
>destination within a certain period.  None of the protocols that I'm
>aware of (TCP/IP, MAP etc.) provide this as an option.  About the only
>suggestion I could make is to use MAP with token-ring.  Does anyone
>know of any other protocols that might be appropriate?

Actually, I don't believe either MAP or token ring will qualify.
What you need is something based on time division.  A company called
Applitech sells a line of Ethernet bridges using a proprietary protocol
over a broadband network.  This protocol has time divisions, or "slots",
which can be hard allocated or contended for according to the parameters
set by the system administrator.  When you have a slot reserved for your
exclusive use, you know that you will always get an opportunity to
transmit in that slot.
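
As a toy illustration of why a hard-allocated slot bounds latency (this is
not Applitech's actual protocol; the frame layout and numbers below are
made up): no matter what the other stations do, your data waits at most
one full cycle before your slot comes around again.

    /* Toy TDM sketch -- hypothetical slot layout, not Applitech's protocol. */
    #include <stdio.h>

    #define SLOTS_PER_CYCLE  64     /* assumed slots in one cycle            */
    #define SLOT_USEC        100    /* assumed slot length, in microseconds  */

    int main(void)
    {
        int  my_slot    = 17;       /* slot hard-allocated to this station */
        long cycle_usec = (long)SLOTS_PER_CYCLE * SLOT_USEC;

        /* Worst case: data becomes ready just after slot 17 passes, so it
         * waits one full cycle.  That bound holds regardless of load. */
        printf("worst-case access delay for slot %d: %ld usec\n",
               my_slot, cycle_usec);
        return 0;
    }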

I think the 802.6 committee may be taking a similar approach
(if I'm wrong someone will doubtless say so :-).

-- Walt Haas    haas@ski.utah.edu

levin@sparkyfs.istc.sri.com (Larry Levin) (10/02/90)

In article <1990Sep30.194033.16776@hellgate.utah.edu> haas%basset.utah.edu@cs.utah.edu (Walt Haas) writes:
>In article <24594@uflorida.cis.ufl.EDU> wardc@banana () writes:
>>A friend just asked me an interesting question regarding network
>>protocols.  His company requires a "realtime" protocol.  By realtime they
>>are interested in ensuring that data arrives uncorrupted at the
>>destination within a certain period.  None of the protocols that I'm
>>aware of (TCP/IP, MAP etc.) provide this as an option.  About the only
>>suggestion I could make is to use MAP with token-ring.  Does anyone
>>know of any other protocols that might be appropriate?
>
>Actually, I don't believe either MAP or token ring will qualify.
>What you need is something based on time division.  A company called
>Applitech sells a line of Ethernet bridges using a proprietary protocol

There are alternatives to TDMA.  A timed token protocol such as that
used with 802.4 or FDDI will put a bound on response time and provide
prioritization as well.  Another possible approach that I have used is
a polled/response scheme.  This actually works quite well over any type
of LAN.  I've used this on an Ethernet to get response times down in the
10-100 millisecond range without requiring custom hardware.
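
For what it's worth, here is a minimal sketch of the master side of such a
poll/response scheme over UDP.  This is not Larry's actual system; the
addresses, port, and timeout below are made up.  The point is that the
worst-case access delay is bounded by (number of slaves) * (per-poll
timeout), whatever the underlying LAN does.

    /* Poll/response master sketch -- hypothetical addresses and timeouts. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define NSLAVES      4
    #define POLL_TMO_MS  20                  /* per-slave response timeout */

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in slave;
        char buf[512];
        int i;

        memset(&slave, 0, sizeof(slave));
        slave.sin_family = AF_INET;
        slave.sin_port   = htons(7777);      /* hypothetical port */

        for (;;) {
            for (i = 0; i < NSLAVES; i++) {
                fd_set rfds;
                struct timeval tmo;

                /* hypothetical addressing: slaves at 10.0.0.1 .. 10.0.0.4 */
                slave.sin_addr.s_addr = htonl(0x0a000001 + i);
                sendto(s, "POLL", 4, 0,
                       (struct sockaddr *)&slave, sizeof(slave));

                FD_ZERO(&rfds);
                FD_SET(s, &rfds);
                tmo.tv_sec  = 0;
                tmo.tv_usec = POLL_TMO_MS * 1000;

                /* Wait at most POLL_TMO_MS for this slave, then move on;
                 * a dead or slow slave cannot stall the rest of the cycle. */
                if (select(s + 1, &rfds, NULL, NULL, &tmo) > 0)
                    recv(s, buf, sizeof(buf), 0);
            }
        }
    }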

By the way, I believe that MAP is still based on the 802.4 timed token
bus standard.

Larry Levin
Information Systems-Engineering Center
SRI International
Menlo Park, CA
levin@itstd.sri.com

marc@uni-paderborn.de (Marc Gumbold) (10/02/90)

haas%basset.utah.edu@cs.utah.edu (Walt Haas) writes:

>In article <24594@uflorida.cis.ufl.EDU> wardc@banana () writes:
>>A friend just asked me an interesting question regarding network
>>protocols.  His company requires a "realtime" protocol.  By realtime they
>>are interested in ensuring that data arrives uncorrupted at the
>>destination within a certain period.  None of the protocols that I'm
>>aware of (TCP/IP, MAP etc.) provide this as an option.  About the only
>>suggestion I could make is to use MAP with token-ring.  Does anyone
>>know of any other protocols that might be appropriate?

>Actually, I don't believe either MAP or token ring will qualify.
>What you need is something based on time division.  A company called
>Applitech sells a line of Ethernet bridges using a proprietary protocol
>over a broadband network.  This protocol has time divisions, or "slots",
>which can be hard allocated or contended for according to the parameters
>set by the system administrator.  When you have a slot reserved for your
>exclusive use, you know that you will always get an opportunity to
>transmit in that slot.

>I think the 802.6 committee may be taking a similar approach
>(if I'm wrong someone will doubtless say so :-).

You're right.  DQDB (802.6) is also supposed to eventually provide an
isochronous service.  An isochronous service user will be guaranteed the
opportunity to transmit one byte every 125 microseconds (8 bits per
125 microseconds, i.e. 64 kbit/s, the standard telephony channel rate).
In the current version of the standard this is not yet specified very
clearly.

Cheers, 
Marc




-- 

   Marc Gumbold    EMail: marc@uni-paderborn.de       Phone: +49 5251 60 3803
                   Snail: Uni-GH Paderborn, FB17i, 4790 Paderborn, W. Germany

vjs@rhyolite.wpd.sgi.com (Vernon Schryver) (10/03/90)

In article <32683@sparkyfs.istc.sri.com>, levin@sparkyfs.istc.sri.com (Larry Levin) writes:
> 
> There are alternatives to TDMA.  A timed token protocol such as that
> used with 802.4 or FDDI will put a bound on response time and provide
> prioritization as well....


I don't know about 802.4, but I do know about FDDI.

It is true that the FDDI token ring does put an upper bound on the time
it takes the token to come around.  However, no one using the stuff cares.

First, the bound is valid only when the ring is working.  Any of a zillion
things can break the FDDI ring.   For example, if one station "sticks in
BEACON" and it does not do the RMT state machinery stuff, the entire ring
will be dead forever.  Since any network can be trashed by sufficiently
sneaky hardware or software failures, this complaint is not interesting.

Second, the latency between opportunities to transmit on a saturated FDDI
ring with the maximal number of stations and default parameters is a small
matter of hundreds of seconds.  You can improve the latency on a big,
saturated ring to about 6 seconds by reducing its bandwidth to arbitrarily
close to zero by decreasing TRT toward D_Max.  Of course, reducing the
bandwidth does wonders for increasing the likelihood of saturation.


The bound exists, but is useless to an honest person.



Vernon Schryver,    vjs@sgi.com

vjs@rhyolite.wpd.sgi.com (Vernon Schryver) (10/04/90)

In article <59769@bbn.BBN.COM>, craig@bbn.com (Craig Partridge) writes:
> 
> Raj Jain did a nice analysis of TTRT and access time bounds for FDDI
> in a paper he gave at SIGCOMM '90.  The gist of his talk is that if
> you set TTRT to 8 milliseconds, you get good response even with large
> numbers of stations and high load, and don't give up too much of the
> bandwidth.


Without disagreeing or agreeing with his paper (I've read it), I must
disagree with the conclusions you draw from it.

It is true that if you set TRT=8, then the worst case latency goes down
by about a factor of 165/8.  This means that with the maximal 500 stations,
you have around 10 seconds of worst case latency.  I work for a graphics
workstation company and hear from customers who build simulators, and many
of those customers worry about keeping screens up to date and synchronized.
Latency guarantees measured in seconds are funny to them, not just useless.
There might be applications that could use guaranteed latencies of a large
fraction of a second, and I would like to hear about them.  Only they would
care about the FDDI token ring guarantees.

Setting TRT=8 has a large cost, in my opinion.  If you have a network
consisting of a zillion PC's and remote terminals, each of which offers a
load of a few KBytes/second, then you do not care about this cost.  If you
are shipping workstations today that can send or receive more than
1,000 KBytes/second on a single TCP connection over Ethernet, and expect,
hope, and are required to do much better soon, you might think FDDI is not
very fast, and that reducing it from around 10 MBytes/sec to about
8 MBytes/sec is too much to pay.

(Please forgive me.  I did not wish to advertise, although I'm not ashamed
of our numbers.  There are others who are doing as well, but I can't use
them to illustrate my point.)

In practice, the latencies of a correctly sized and operating FDDI ring
will be like the latencies of a correctly sized and operating Ethernet.
They will be a small number of milliseconds.  The blarney is only in the
"guarantee."

 ---

Speaking of TRT=8, is anyone bothered by the reaction to Raj Jain's paper?
I know of two vendors who have decided to set their default T_Neg to 8msec
and 7.xxx msec, respectively.  The latter builds bridges and concentrators.
Consider the effect on the large rings you are building with these boxes.
If you decide you prefer TRT=10msec, you will be out of luck.  Yes, in
principle, you could convince each of those boxes on your ring to use 10.
In practice, with power failures and equipment changes, it is impossible.
If TRT=8 is the right answer, it need be, and should be, set on only one
station.
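
For anyone who hasn't chased through the claim process: the ring ends up
running at (roughly) the smallest TTRT any station requests, which is why a
single box with a small default preempts everyone else's choice.  A sketch
of that selection, with made-up requested values:

    /* Why one small T_Req wins: the negotiated TTRT is the minimum of all
     * requested values.  The numbers below are hypothetical. */
    #include <stdio.h>

    int main(void)
    {
        double t_req[] = { 165.0, 165.0, 8.0, 165.0, 10.0 };  /* ms, per station */
        int    n       = sizeof(t_req) / sizeof(t_req[0]);
        double t_neg   = t_req[0];
        int    i;

        for (i = 1; i < n; i++)
            if (t_req[i] < t_neg)
                t_neg = t_req[i];

        /* The one station asking for 8 ms wins; everyone else's preference
         * for a larger TTRT is simply ignored. */
        printf("operational TTRT = %.1f ms\n", t_neg);
        return 0;
    }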

No, MAC level SMT management, PMF's, and so forth are not now and never will
be the answer to this problem, because:
  -there is no broadcast PMF
  -there are no authorization facilities defined in SMT (Contrary to the
      statements of one SMT software vendor, the holes labeled
      "authorization" in the standard are still "TBD.")
  -there is no authentication even mentioned in the standard.
  -you don't want Joe College Student reaching out from a lab workstation,
    in the absence of A&A, and telling your Kerberos server to shut down,
    so he can take over its MAC address.
  -if there were a broadcast PMF, then Joe could send an orphan broadcast
    frame telling all stations to disconnect, just to brighten those dreary
    days before finals.
  -in a bridged bunch of rings, you might not have an Official SMT Network
    Management station on every ring that needs to be managed.


Vernon Schryver
Silicon Graphics
vjs@sgi.com

craig@bbn.com (Craig Partridge) (10/04/90)

In article <71126@sgi.sgi.com> vjs@rhyolite.wpd.sgi.com (Vernon Schryver) writes:
>It is true that if you set TRT=8, then the worst case latency goes down
>by about 165/8.  This means that with the maximal 500 stations, you have
>around 10 seconds of worst case latency.  I work for a graphics workstation
>company and hear from  customers who build simulators, and many of those
>customers worry about keeping screens up to date and synchronized.  Latency
>guarantees for them of seconds are funny, not just useless.  There might be
>applications that coud use guaranteed latencies of large parts of seconds,
>and I would like to hear about them.  Only they would care about the FDDI
>token ring guanrantees.

Right -- but Raj also argues eloquently that using 500 stations is a bad
idea.  More stations implies higher bit error rates, and bridges work
quite well.

He computes the max delay and average efficiency for 500 stations with
TTRT at 8ms as 8 secs and 75%.  But if, as he suggests, you keep
yourself to a max of about 100 stations, max access delay is about
0.8 seconds and 86%.  (If you are willing to get smaller, down to
around 10 stations, it's 0.15 seconds and 99.5%).

Now I admit (as you point out) 0.8 seconds still ain't great for
synchronization -- but it is close.  We're going to have to worry about
synchronization delays of several hundred milliseconds in gigabit wide
area networks anyway.  And making nets smaller and using bridges can
help.

I also agree with your point that we're worrying about extremes.  A
distribution of how likely we are to hit, say, 0.8 seconds would be
of interest, but that requires reasonable traffic models (which are
always hard to come by...).

Craig Partridge

levin@sparkyfs.istc.sri.com (Larry Levin) (10/08/90)

In article <59793@bbn.BBN.COM> craig@ws6.nnsc.nsf.net.BBN.COM (Craig Partridge) writes:
>
>He computes the max delay and average efficiency for 500 stations with
>TTRT at 8ms as 8 secs and 75%.  But if, as he suggests, you keep
>yourself to a max of about 100 stations, max access delay is about
>0.8 seconds and 86%.  (If you are willing to get smaller, down to
>around 10 stations, it's 0.15 seconds and 99.5%).
>
If you are a vendor like Silicon Graphics, you have to take the worst case
scenario (i.e. 500 stations) into account.  If you are an architect and
systems integrator of turnkey real-time control systems like myself, you
focus on your clients' actual requirements.  These are typically 15 to
60 stations, of which 2 to 10 exchange real-time data (i.e. access
delay must be less than some Tmax, where Tmax is application dependent but
typically under 0.5 sec), and the rest require interactive response times
with a max delay under 1 sec.  In this scenario timed-token-rotation (TTR)
protocols appear to be a viable alternative.

Larry Levin
Information Systems-Engineering Center
SRI International,  Menlo Park CA.
levin@istd.sri.com

rpw3@rigden.wpd.sgi.com (Rob Warnock) (10/19/90)

A number of people said a lot of fine stuff about FDDI & TTRT, etc.  I'd just
like to point out another thing I dislike about small TTRTs: they make the
ring "brittle".  That is, instead of seeing a pretty good approximation
of an M/D/1 server (where D = 100 Mb/s), in which you experience a smooth
queuing delay versus load curve, as is shown in all the venerable old texts,
the ring presents a very sharp load limit to each station.  That is, as a
given station's load builds up, for a while you track the default delay/load
curve, then *suddenly* the ring bandwidth will appear to limit, and the
perceived delays will skyrocket.
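
For reference, the "smooth curve" is the textbook M/D/1 mean queueing delay,
Wq = rho * S / (2 * (1 - rho)), where S is the (deterministic) service time
and rho the offered load.  A quick sketch, treating the ring as a single
100 Mb/s server with an assumed 4 KByte frame size (an idealization, as
noted above):

    /* M/D/1 mean queueing delay vs. load -- the frame size is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        double bits_per_frame = 4096.0 * 8.0;       /* assumed 4 KByte frames */
        double service = bits_per_frame / 100.0e6;  /* seconds at 100 Mb/s    */
        double rho;

        /* Mean delay grows gently until rho approaches 1.0 -- the "soft edge"
         * described above. */
        for (rho = 0.1; rho < 0.95; rho += 0.1)
            printf("load %.1f  mean wait %8.2f usec\n",
                   rho, rho * service / (2.0 * (1.0 - rho)) * 1e6);
        return 0;
    }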

This may be "good" --in *some* environments-- if you have a mix of light-load
stations and heavy-load stations.  When the heavy-load stations "hit the wall"
of the small TTRT, the light-load stations will get better perceived response.
(Even here, the downside is that if you experience a temporary ring-wide
overload, it's *worse* with a small TTRT than with a large one, since the
ring's overall carrying capacity is limited by the small TTRT.)

But in the general-purpose computing environment, where we all just want the
data to move, like it does (eventually) on Ethernet, small TTRTs are *bad*!
They take away the "soft edge" of the natural 100 Mb/s M/D/1 queuing curve.

And if your "small" users are trying to share a few big common file servers,
you're going to get worse-than-expected performance out of your FDDI, since
the big fast file servers will saturate at the TTRT limit long before the ring
is really loaded.

And *that's* what's wrong with vendors putting small default TTRTs in their
gear (as noted by Vernon Schryver).  It *prevents* the user (local net admin)
from tuning their ring for best local performance, by *preempting* the most
significant FDDI tuning parameter!  ONLY ONE (OBSCURE) STATION need set a small
TTRT to preempt the net manager's options.  *ALL* vendors should [dare I say
*must*?] leave the TTRT at the default (169ms?) in shipped product, or risk
being labeled a "polluter" in some markets... [E.g., general-purpose computing!]

Let me be explicit: If I have bought FDDI and a mongo fast NFS file server
that can pump out 85 or 90 Mb/s to 200-300 diskless workstations, I am going
to be *very* unhappy if some random workstation or router or monitor box that
I add to the ring [or one of my users adds without telling me!] suddenly causes
my expensive file server to clamp at 10 or 20 Mb/s (or less) of throughput!


-Rob

-----
Rob Warnock, MS-9U/510		rpw3@sgi.com		rpw3@pei.com
Silicon Graphics, Inc.		(415)335-1673		Protocol Engines, Inc.
2011 N. Shoreline Blvd.
Mountain View, CA  94039-7311