[comp.protocols.tcp-ip] timing-out a socket read

hwajin@daahoud.UUCP (06/26/90)

> I am trying to use tcp-ip for a real-time application.  The application
> consists of a client in the field that collects real-time data every
> two seconds and sends it to the server, which stores it in the database.
> (A host of application programs use this data for different purposes.)
> The specification requires that if no data is received within 15 seconds,
> I declare the real-time data channel dead.

actually, if this is the only requirement, a more precise way of
handling this situation would be to have your application perform
non-blocking (or asynchronous) i/o on the connected data channel and
keep your own application-level timer.  when your timer expires (say,
SIGALRM goes off and your signal handler is called) you can close and
shut down the connection, or just mark it dead and move on to something
else.  in certain implementations of tcp (like 4.3 bsd unix), users can
specify a keep_alive option on the connected socket which is used to
detect a dead connection, and this works for most applications.  however,
the time it takes to detect a dead connection is not easily settable by
the user, and certainly not settable on a per-connection basis (e.g. you
need to modify the values of global variables in your unix kernel --
tcp_keepidle, tcp_keepintvl, and tcp_maxidle in the 4.3 bsd tahoe
release of tcp/ip -- which will affect all of the tcp-based applications
running on that version of the kernel).  this keep_alive is *not* a
standard specified in the rfcs; it's just an implementation-specific
feature.
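
a minimal sketch of the application-level timer approach (assuming a
4.3bsd-style interface; siginterrupt() is needed there because signal()
restarts slow system calls such as read() by default):

    #include <signal.h>
    #include <unistd.h>

    static void on_alarm(int sig) { }  /* exists only to interrupt read() */

    /* read with a 15-second deadline; returns -1, errno == EINTR, on timeout */
    int timed_read(int fd, char *buf, int len)
    {
        int n;

        signal(SIGALRM, on_alarm);
        siginterrupt(SIGALRM, 1);   /* don't restart read() on SIGALRM */
        alarm(15);
        n = read(fd, buf, len);
        alarm(0);                   /* cancel the timer if data arrived */
        return n;
    }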

gjack@datlog.co.uk ( Graham Jack) (06/27/90)

ihsan@ficc.ferranti.com (jaleel ihsan) writes:

>I do not know of any other means of timing-out the read except to use the
>keep socket "warm" option, ...

If the supplier's offering doesn't give the facilities you want, you
can provide them in your own 'session layer'; it ain't pretty, but it's
pragmatic.
The rationale used against an effective 'keepalive' at the transport
layer seems to take a rather narrow view of the sorts of applications
and environments in which IPS is used these days.

> ... but the vendor says (and quotes from the last
>few pages of Internetworking by Comer) that the standard does not require
>him to implement it, and even if he implements it the standard does not
>require him to make the timers in the option user-selectable.  What does
>the standard have to say about this?

All credit to Comer for introducing me and many others to the Internet
protocols, but your supplier should be referring to the Host Requirements
RFCs, and in particular to Section 4.2.3.6 of RFC-1122.
This says keepalives MAY be implemented (i.e. they are OPTIONAL); if
provided, the interval MUST be configurable.
Practically, if the products are based on the 4.3BSD networking code, as
many (most?) UNIX implementations are, then keepalive should be
provided and should work; it is unlikely, however, to be configurable.
That's life ...
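
For what it's worth, on a 4.3BSD-derived stack switching keepalive on is
a one-line setsockopt(); only the on/off switch is exposed, not the
interval (a sketch, assuming the BSD sockets interface):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stdio.h>

    /* Enable keepalive probes on a connected socket.  The probe timing
       is a kernel-wide setting (tcp_keepidle and friends), and cannot
       be set here per connection. */
    int enable_keepalive(int sock)
    {
        int on = 1;

        if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
                       (char *)&on, sizeof(on)) < 0) {
            perror("setsockopt SO_KEEPALIVE");
            return -1;
        }
        return 0;
    }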

>Did I choose the wrong vendor, or did I make a mistake in choosing tcp-ip
>for a real-time application? (8=:|)

I'm not sure I can comment on this.
-- 
Regards,
	    Graham Jack, Data Logic.
	    <gjack@datlog.co.uk>

donp@na.excelan.com (don provan) (06/28/90)

In article <VZ74GZC@ccs.ferranti.com> ihsan@ficc.ferranti.com (jaleel ihsan) writes:
>
>The specification requires that if no data is received within 15 seconds,
>I declare the real-time data channel dead.
>
>I do not know of any other means of timing-out the read except to use the
>keep socket "warm" option...

Using keep-alives to keep the socket "warm" only ensures that the
remote device is still handling TCP traffic.  It does not ensure that
the remote device is actually sending data.  If it's possible for the
device to continue to maintain the TCP connection while not sending
any data, keep-alives and the associated timers will do you no good at
all.  This is yet another reason to avoid depending on TCP to provide
this timeout and, instead, to use your own timer to regain control and
handle the error condition outside of TCP.

Generally I find it's a bad idea to try to use TCP as a timing
mechanism.  TCP, being the good general-purpose transport protocol it
is, really can't anticipate how long an application is willing to wait
under any given condition.  It's better for the application to handle
such things itself, particularly in a case like this where the desired
timeout condition is so specific.
						don provan
						donp@novell.com

pcg@cs.aber.ac.uk (Piercarlo Grandi) (06/28/90)

In article <VZ74GZC@ccs.ferranti.com> ihsan@ficc.ferranti.com (jaleel
ihsan) writes:

   I am trying to use tcp-ip for a real-time application.  The application
   consists of a client in the field that collects real-time data every
   two seconds and sends it to the server, which stores it in the database.
	[ ... ]
   The specification requires that if no data is received within 15 seconds,
   I declare the real-time data channel dead.

This is a fuzzy specification. What do you mean by "received"? If all
you mean is "data present at the socket", you can just time out the
select(2) call. If you mean "connection still alive", things are
different, but I cannot believe that is what you mean...
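
A sketch of that first, simple reading (assuming BSD sockets): put the
15-second deadline on the wait for data, not inside TCP:

    #include <sys/types.h>
    #include <sys/time.h>

    /* Wait up to 15 seconds for data on fd.  Returns 1 if readable,
       0 on timeout (declare the channel dead), -1 on error. */
    int wait_for_data(int fd)
    {
        fd_set rfds;
        struct timeval tv;

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        tv.tv_sec = 15;
        tv.tv_usec = 0;
        return select(fd + 1, &rfds, (fd_set *)0, (fd_set *)0, &tv);
    }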

   I do not know of any other means of timing-out the read except to use the
   keep socket "warm" option, [ ... ]

But this is something that has to do with the lower-level protocols. You
want to check for the presence of *application* data, I understand,
not whether the other side of the connection is still keeping the
virtual circuit open. The keepalive timeout has to do with whether the
TCP circuit is still there, not whether there is any data on it. Of
course, if the circuit is dropped by the other side you can assume that
data will not be forthcoming, but the converse is not true.

Knowing that the circuit is still there (a kind of carrier detect) will
not help you determine whether it is still being used...

   Did I choose the wrong vendor, or did I make a mistake in choosing
   tcp-ip for a real-time application? (8=:|)

I believe that you chose the wrong protocol. I reckon that for real-time
work the default should be UDP, not TCP. I do not really think that you
need reliable, ordered virtual circuits, especially if the data you deal
with is, as in most real-time monitoring applications, perishable and
time-stamped. I think that for sampling/monitoring, unreliable and
connectionless is better, and lower overhead. Hey, even NFS uses it, and
UDP is even used for real-time speech transmission...
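
A sketch of the connectionless alternative (assuming each two-second
sample arrives as one time-stamped datagram; the 15-second deadline then
becomes a select() timeout on this socket, as above):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <string.h>

    /* Create a UDP socket bound to the given port.  Each sample is one
       datagram, so a lost or late sample never stalls the ones behind it. */
    int sample_socket(unsigned short port)
    {
        int s;
        struct sockaddr_in sin;

        if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
            return -1;
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(port);
        if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
            return -1;
        return s;   /* read samples with recvfrom() */
    }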
--
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

ihsan@ficc.ferranti.com (jaleel ihsan) (06/30/90)

In article <12600781585.32.LYNCH@A.ISI.EDU>, LYNCH@A.ISI.EDU (Dan Lynch) writes:
> 1)  As noted by Comer, the implementation (meaning the vendor) is not
> obliged to expose the timer values to the application, but also, the
> vendor is certainly not prevented from doing so.  Most implementations
> are, in fact, not too good about letting the individual application
> instance set values such as timer values.  This is simply a matter of
> choice.  The protocol supports the concept, but if an implementation
> had to keep track of this on a "per instance" basis, it would add to
> the implementation's storage space requirements.

I agree that asking TCP to keep timers on a "per instance" basis would
be unreasonable, and I don't want to be unreasonable!  However, asking
the OS to keep a timer on a "per read" basis would not be an
unreasonable request.
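
Some stacks do provide exactly that in the form of a per-socket receive
timeout, SO_RCVTIMEO; it is defined but not implemented on most
4.3BSD-era systems, so the following is a sketch of the facility rather
than something to count on:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    /* Ask the stack itself to bound every read on fd to 15 seconds;
       a read then fails with EWOULDBLOCK when the timer expires. */
    int set_read_timeout(int fd)
    {
        struct timeval tv;

        tv.tv_sec = 15;
        tv.tv_usec = 0;
        return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO,
                          (char *)&tv, sizeof(tv));
    }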

Jaleel