[comp.protocols.tcp-ip] OOB problems, wisdom anyone?

daveb@rtech.UUCP (Dave Brower) (12/19/87)

Some people here have been trying to use the OOB mechanism to send
"expedited data" between processes.  No one here is all that familiar
with using it, and they have run into some problems that make them want
to give up and turn to something else.

I'd appreciate hearing general thoughts about using the BSD oob
mechanism for IPC, and specific comments on the problems they report
below.  I will forward responses to the interested parties.

Thanks!
-dB

------- forwarded message describing OOB headaches.

1.  "Leapfrogging:" The fatal aspect of OOB data is that when two
socket sends for OOB data are issued in sequence before the first has
been read, the second send passes the first and is read out of
sequence by the recipient.  This is a gross violation of the stream
socket abstraction and makes the mechanism fundamentally unusable
except in the simplest and most restricted cases, not the situation
here.

2.  OOB receive loop: When an application has been notified of the
availability of OOB data and issues a receive for it, a subsequent
receive for normal data must be issued to "clear" the OOB
notification mechanism.  If instead a select requesting
notification of OOB data is issued without an intervening receive
for normal data, the caller is immediately notified of the
availability of OOB data: the same data just received.  This is at
best bizarre, undocumented behavior; at worst a serious bug.

3.  The integer-character problem: The ioctl call used to determine
whether one has reached the OOB data in the stream (SIOCATMARK) is
documented to return its result in a character, and several coding
examples support this.  In fact, it returns an integer; it took
considerable grief, and finally a trip to the kernel source, to
discover this.  It is not known how many similar major
documentation errors exist; each could cause major delays in
fruitless debugging.
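
For concreteness, here is a minimal sketch of the pattern in
question (4.3BSD-style sockets assumed; the names and the lack of
error handling are illustrative, not their actual code).  The
comments mark where each of the three problems bites:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/ioctl.h>

    sender(s)
    int s;
    {
        /* Problem 1, "leapfrogging": if the peer has not yet
         * consumed the first OOB byte, the second send can pass
         * it and be read out of sequence. */
        send(s, "1", 1, MSG_OOB);
        send(s, "2", 1, MSG_OOB);
    }

    receiver(s)
    int s;
    {
        int atmark;     /* Problem 3: an int, NOT the char the
                         * documentation and examples suggest. */
        char buf[512], oob;

        for (;;) {
            if (ioctl(s, SIOCATMARK, (char *)&atmark) < 0)
                return;
            if (atmark)
                break;
            /* consume normal data up to the OOB mark */
            if (read(s, buf, sizeof buf) <= 0)
                return;
        }
        (void) recv(s, &oob, 1, MSG_OOB);
        /* Problem 2: until a receive for normal data follows, a
         * select() for exceptional conditions on s reports this
         * same OOB byte as available again. */
    }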


-- 
"If it was easy, we'd hire someone cheaper than you to do it."
{amdahl, cbosgd, mtxinu, ptsfa, sun}!rtech!daveb daveb@rtech.uucp

JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen") (12/22/87)

There are at least two interpretations of the TCP "URG" bit and its
associated pointer.

The Berkeley Unix interpretation is as you describe in your
posting: out-of-band reads return the data that the Urgent pointer
points at (a 16-bit value, I guess; the byte and the byte before,
maybe?).
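
In code, the Berkeley usage looks roughly like this (a sketch;
exactly which byte or bytes come back is the uncertainty above):

    #include <sys/types.h>
    #include <sys/socket.h>

    get_urgent(s)           /* returns the OOB byte, or -1 */
    int s;
    {
        char oob;

        /* typically after select() flags an exceptional
         * condition on s */
        if (recv(s, &oob, 1, MSG_OOB) != 1)
            return (-1);
        return (oob & 0377);
    }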

Another interpretation has the TCP pass the caller the number of bytes
that are to be treated as Urgent (from where the caller has read so far),
and it is up to the caller to read/process so as to consume the urgent
information.  This doesn't imply that the urgent information is any
particular size, or even that it is all in one place in the pending
data (the caller is assumed to be able to figure it out).
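
To make the contrast concrete, a hypothetical interface in the
second style might be driven like this (tcp_urgent_count is
invented for illustration; it is not any real system's call):

    consume_urgent(s)
    int s;
    {
        char buf[512];
        int nurg, len, n;

        /* hypothetical call: how many of the pending bytes are
         * to be treated as urgent? */
        nurg = tcp_urgent_count(s);
        while (nurg > 0) {
            len = nurg;
            if (len > sizeof buf)
                len = sizeof buf;
            if ((n = read(s, buf, len)) <= 0)
                return (-1);
            /* the caller parses buf itself to find and act on
             * the urgent items */
            nurg -= n;
        }
        return (0);
    }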

The only widely-implemented spec that uses Urgent (that I know of)
is Telnet, where a number of IAC-x sequences are sent as Urgent
data (e.g. the Synch, in which a Data Mark is sent with the stream
flagged Urgent and the receiver skips ahead to it).  In the cases
I've seen, Telnet uses the 2nd interpretation.

Given the disparate interpretations, I've advised people to stay
away from it, and we haven't been particularly eager to expand (or
document to the user) our implementation thereof.  Reading RFC-793,
the 2nd (non-BSD) interpretation seems the more reasonable to me,
but I wasn't there when it was written.

James B. VanBokkelen
FTP Software Inc.

CERF@A.ISI.EDU (12/23/87)

The intent of the URGENT indicator was to say where (at what byte)
in the datastream the URGENT data ended: the TCP level provided an
absolute pointer (a sequence number reference) to the last urgent
byte.  If two instances of urgent data were injected into the data
stream, the urgent indicator would flag the latest of them,
requiring the next level up to scan the data stream from wherever
the "next" input byte was to the end of urgent data.  No semantics
were associated with urgency.  The translation into "how many
bytes to read" was not part of the TCP spec, as I recall it.
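
A worked example (sequence numbers invented for illustration):
suppose the receiver has consumed the stream through sequence
number 101, the sender marked urgent data ending at 104, and then
marked more urgent data ending at 109 before the first was acted
on.  The connection carries a single urgent pointer, which now
references 109; the level above TCP must itself scan everything
from 102 through 109 to find whatever it considers urgent.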

Vint

JBVB@AI.AI.MIT.EDU ("James B. VanBokkelen") (12/23/87)

    ..... The translation into "how many bytes to read"
    was not part of the TCP spec, as I recall it.

    Vint

Sorry, I was projecting how I'd implement it onto the bare RFC there,
and I didn't issue a warning.

jbvb