[comp.protocols.tcp-ip] Out-of-band, 0 length msgs

davec@rock.concert.net (David Cohen -- DBLinx) (05/08/91)

I'm trying to set up a gateway between a Sybase front-end on a
Sun and a Sybase server on a VAX. Without going into details, 
I decided to test some stuff locally on the Sun:

    S   front-end---------->
    U                     gateway
    N   server<-------------+                                                  

This is just for simulation purposes - the front-end sends its Sybase
client traffic to the gateway's socket, and the gateway simply pumps it
out to the Sybase server. Likewise, server messages come into the
gateway's socket and get pumped out to the front-end. In this test, the
gateway is a surrogate for the Sybase server as far as the front-end is
concerned. Without getting into the DECnet side of things, I'm having
some problems:

    The gateway is cranked up by inetd. It uses a select() call:

        select(32, &mask, NIL, &oobmask, NIL);

with mask and oobmask set to the client and server fds. Everything's
fine until the client sends exception (out-of-band) data. Since I just
want the gateway to be a dumb pass-thru, I don't do anything special -
I just pass it on to the server. But after this happens, the select()
keeps waking up claiming the client has sent data to the gateway's
socket, yet ioctl says that 0 bytes are available. The select() woke
up on the exception mask for the client.
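
In case it helps, here's roughly the shape of the loop I'm describing.
This is a simplified sketch, not my actual gateway code: the descriptor
names, the FIONREAD check, and the MSG_OOB pass-thru are just how I'm
picturing it.

    /*
     * Sketch of the pass-thru loop: relay ordinary data both ways and
     * forward any urgent (out-of-band) byte from the client to the
     * server.  client_fd and server_fd are assumed to be connected
     * TCP sockets; error handling is pared down for the post.
     */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <stdio.h>

    void relay(int client_fd, int server_fd)
    {
        char buf[4096];

        for (;;) {
            fd_set mask, oobmask;
            int nready, nbytes, navail;

            FD_ZERO(&mask);
            FD_ZERO(&oobmask);
            FD_SET(client_fd, &mask);
            FD_SET(server_fd, &mask);
            FD_SET(client_fd, &oobmask);
            FD_SET(server_fd, &oobmask);

            /* same shape as the call quoted above: read set, no
               write set, exception set, no timeout */
            nready = select(32, &mask, (fd_set *)0, &oobmask,
                            (struct timeval *)0);
            if (nready < 0) {
                perror("select");
                return;
            }

            /* exception (out-of-band) condition on the client side */
            if (FD_ISSET(client_fd, &oobmask)) {
                /* this is the spot where FIONREAD reports 0 bytes
                   pending even though select() keeps firing */
                if (ioctl(client_fd, FIONREAD, &navail) == 0)
                    fprintf(stderr, "oob wakeup, %d bytes pending\n",
                            navail);

                /* dumb pass-thru: pull the urgent byte and push it
                   on to the server, also as urgent data */
                nbytes = recv(client_fd, buf, sizeof buf, MSG_OOB);
                if (nbytes > 0)
                    send(server_fd, buf, nbytes, MSG_OOB);
            }

            /* ordinary data: client -> server */
            if (FD_ISSET(client_fd, &mask)) {
                nbytes = read(client_fd, buf, sizeof buf);
                if (nbytes <= 0)
                    return;         /* client closed or errored */
                write(server_fd, buf, nbytes);
            }

            /* ordinary data: server -> client */
            if (FD_ISSET(server_fd, &mask)) {
                nbytes = read(server_fd, buf, sizeof buf);
                if (nbytes <= 0)
                    return;         /* server closed or errored */
                write(client_fd, buf, nbytes);
            }
        }
    }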

Anyway, does anyone have a clue as to why the select() keeps waking
up with 0 client bytes available to forward after the out-of-band
condition has been handled?

thanks
-dave