dcj@AUSTIN.LOCKHEED.COM (David Jacobson rimux) (03/16/91)
Hi, I'm rather new to programming BSD sockets.  I was wondering how to
clear all buffered data for a socket (say, if a certain interrupt were
received) without closing the socket.  Any help would be appreciated,

David C. Jacobson
Lockheed Austin Division (LAD)
(512) 386-4267
INTERNET: dcj@shrike.austin.lockheed.com
jik@athena.mit.edu (Jonathan I. Kamens) (03/18/91)
In article <511@shrike.AUSTIN.LOCKHEED.COM>, dcj@AUSTIN.LOCKHEED.COM
(David Jacobson rimux) writes:

|> I was wondering how to clear all buffered data for a socket
|> (say, if a certain interrupt were received) without closing
|> the socket.

  Could you explain a bit more clearly what you're trying to do?  Are you
talking about a socket that you're reading from, or a socket that you're
writing to?  Are you saying that you're using stdio to access the socket,
and want to flush everything in the stdio buffers, or that you want to tell
the kernel to discard any text waiting to be read on the socket?

  If you want to do the latter, my suspicion is that you're just going to
have to read the data from the socket and throw it away.  The only call I
know of (and I may just be forgetting something here, but I don't think so
{of course, if I am forgetting something, I wouldn't know about it, so I
still wouldn't think so :-)}) that does anything about flushing socket data
is shutdown(), and you don't want to do that, because you won't be able to
do any more reading after you shut down a socket.

Jonathan Kamens			              USnail:
MIT Project Athena			      11 Ashford Terrace
jik@Athena.MIT.EDU			      Allston, MA  02134
Office: 617-253-8085			      Home: 617-782-0710
dcj@AUSTIN.LOCKHEED.COM (David Jacobson rimux) (03/22/91)
ut-emx!rutgers.edu!phri!marob!nsi1!mms (Michael Sykora) of The Nikko
Securities Co., International, Inc. advised me on clearing a socket:

> Assuming you are using stream sockets:
>
>	- find the maximum TCP segment size by issuing a getsockopt() call
>	  for TCP_MAXSEG
>	- set the socket for non-blocking I/O (if it isn't already set)
>	- call read() for TCP_MAXSEG bytes repeatedly until it returns
>	  < TCP_MAXSEG
>	- (if read() returned < 0, it's an error, of course)
>	- (if read() returned > 0 but < TCP_MAXSEG, you've cleaned
>	  out the buffer - as of that moment)
>	- if read() returned 0, that means that either there was no
>	  data available or that the connection is broken, so you
>	  may need to test for the latter in this case
>	- reset the socket for blocking I/O, if necessary

I have a little different situation than the above... maybe someone could
shed some light on it:

	 __________                 ____________
	| "sender" | ---SOCKET---> | "receiver" |
	|__________|               |____________|

My sender is writing data to the receiver via a socket.  The sender is
told by another process that all the data it is currently writing to the
receiver is BAD.  Questions:

	Can I clear the socket from the sender's end?

	Can I read data from a socket I've been writing to and discard it
	(bidirectional)?

	How would you keep the receiver from processing the remaining
	BAD data it's reading from the socket?

> Hope this helps,
> Mike Sykora

Thanx much!  Every little bit helps,

David C. Jacobson
Lockheed Austin Division (LAD)
(512) 386-4267
INTERNET: dcj@shrike.austin.lockheed.com
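Mike's drain recipe above can be sketched in C.  This is a modern sketch,
not code from the thread; note one correction to the recipe: with
non-blocking I/O, "no data available" shows up as read() == -1 with errno
set to EWOULDBLOCK/EAGAIN, while read() == 0 on a stream socket means the
peer has closed the connection.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Discard everything currently queued for reading on a connected stream
 * socket.  Returns 0 on success, 1 if the peer closed the connection,
 * -1 on error.  The socket is restored to its original blocking mode. */
int drain_socket(int fd)
{
    int mss = 536;                      /* fallback if TCP_MAXSEG fails */
    socklen_t len = sizeof(mss);
    char buf[65536];
    int oldflags, result = 0;

    /* TCP_MAXSEG only works on TCP sockets; ignore failure and use the
     * fallback read size. */
    (void)getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, &len);
    if (mss <= 0 || (size_t)mss > sizeof(buf))
        mss = (int)sizeof(buf);

    oldflags = fcntl(fd, F_GETFL, 0);
    if (oldflags < 0 || fcntl(fd, F_SETFL, oldflags | O_NONBLOCK) < 0)
        return -1;

    for (;;) {
        ssize_t n = read(fd, buf, (size_t)mss);
        if (n < 0) {
            if (errno == EWOULDBLOCK || errno == EAGAIN)
                break;                  /* buffer empty -- as of this moment */
            result = -1;                /* a real error */
            break;
        }
        if (n == 0) {                   /* EOF: the connection is closed */
            result = 1;
            break;
        }
        /* n > 0: discard and keep reading */
    }

    (void)fcntl(fd, F_SETFL, oldflags); /* restore blocking mode */
    return result;
}
```

As the recipe says, this only clears what has arrived "as of that moment";
data still in flight from the sender will show up later.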
jik@athena.mit.edu (Jonathan I. Kamens) (03/25/91)
In article <519@shrike.AUSTIN.LOCKHEED.COM>, dcj@AUSTIN.LOCKHEED.COM
(David Jacobson rimux) writes:

|> I have a little different situation than above...maybe someone could
|> shed some light on it:
|>
|>	 __________                 ____________
|>	| "sender" | ---SOCKET---> | "receiver" |
|>	|__________|               |____________|
|>
|> My sender is writing data to the receiver via a socket.  The
|> sender is told by another process that all the data it is currently
|> writing to the receiver is BAD.  Questions:
|>
|> Can I clear the socket from the sender's end?

  No.  The data is in the kernel buffers for the receiver process, and
there's no way your program can get access to it in order to clear it.  The
best you can do is close the socket completely, but even in that case, the
receiver will still get to read the data you've sent before it sees EOF.

|> Can I read data from a socket I've been writing to and discard it
|> (bidirectional)?

  No.

|> How would you keep the receiver from processing the remaining
|> BAD data it's reading from the socket?

  If the receiver is cooperating with you, and you are writing the code for
both the sender and the receiver, then I see two options:

1) As someone else has already suggested in response to a slightly
different question (or, at least, I seem to recall it being suggested), the
sender and receiver can connect to each other on another socket that is
used for nothing but exceptional indications.  For example, in this case
the sender would send a message to the receiver telling it, "Hey, my data
is bad, flush what's waiting for you from me and let me know when you're
finished!"  The receiver then reads from the data socket until there's
nothing left to read, and sends an ACK back to the sender on the exception
socket.
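One workable shape for option (1) is sketched below.  This is a guess at an
implementation, not code from the thread: the receiver watches both the
data socket and a separate "exception" socket with select(), and the
one-byte 'F' (flush) / 'A' (ack) protocol, the function name, and the use
of the modern MSG_DONTWAIT flag are all inventions for this example.

```c
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* One iteration of the receiver's loop.  Returns the number of good data
 * bytes stored in out, 0 if a control message was handled, or -1 on
 * error/EOF.  The control channel is checked first, so a pending flush
 * request wins over pending (bad) data. */
ssize_t receiver_step(int data_fd, int ctl_fd, char *out, size_t outsz)
{
    fd_set rfds;
    int maxfd = (data_fd > ctl_fd ? data_fd : ctl_fd) + 1;

    FD_ZERO(&rfds);
    FD_SET(data_fd, &rfds);
    FD_SET(ctl_fd, &rfds);
    if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
        return -1;

    if (FD_ISSET(ctl_fd, &rfds)) {
        char cmd, junk[4096];
        if (read(ctl_fd, &cmd, 1) != 1)
            return -1;
        if (cmd == 'F') {
            /* Flush request: discard everything currently queued on the
             * data socket, then acknowledge on the exception socket. */
            while (recv(data_fd, junk, sizeof junk, MSG_DONTWAIT) > 0)
                ;
            return write(ctl_fd, "A", 1) == 1 ? 0 : -1;
        }
        return 0;                       /* unknown command: ignore */
    }

    /* Only good data is waiting. */
    ssize_t n = read(data_fd, out, outsz);
    return n > 0 ? n : -1;
}
```

The ACK matters: the sender must not resume writing good data until it sees
the 'A', otherwise good bytes still in flight could be swept up by the
receiver's flush.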
2) Use out-of-band data on the data socket -- the sender sends the receiver
an out-of-band message when the data has gone bad, and the receiver sends
an out-of-band message back to the sender when it has cleared the bad data
and is ready to accept good data again.

  Alas, I have never had to use out-of-band data, and as far as I can tell,
the documentation on it is horrendously sparse, so I can't tell you how to
do that (although I suspect that glancing through the sources to
rlogin/rlogind and/or telnet/telnetd will give you hints).

Jonathan Kamens			              USnail:
MIT Project Athena			      11 Ashford Terrace
jik@Athena.MIT.EDU			      Allston, MA  02134
Office: 617-253-8085			      Home: 617-782-0710
torek@elf.ee.lbl.gov (Chris Torek) (03/27/91)
(I stayed out of this because the original problem statement was not
accurate enough either to give a solution or to declare it unsolvable.
Data buffering between communicating network peers occurs at several
levels, including the applications themselves, any kernel buffers, and
also data `in flight' on the wires.  The latter can be significant: on a
transcontinental 100 Mb/s optical link in which the speed of `light' is
.9c, you can easily have 300 kilobytes of data stored in the optic fiber
alone.  This is a moderately significant amount, and it only goes up as
transmission speeds increase and/or latency, typically in the form of
routers, is added.)

In article <1991Mar25.113452.6370@athena.mit.edu> jik@athena.mit.edu
(Jonathan I. Kamens) writes:
>  Alas, I have never had to use out-of-band data, and as far as I can tell,
>the documentation on it is horrendously sparse, so I can't tell you how to do
>that (although I suspect that glancing through the sources to rlogin/rlogind
>and/or telnet/telnetd will give you hints).

The documentation is deliberately vague, for two reasons:  First, the
existing out-of-band mechanisms in Berkeley sockets are considered
`experimental', have changed several times, and may change again.  Perhaps
more important in this case, TCP does not actually *have* out-of-band data
at all.  TCP has a concept called `urgent data' (which differs from ISO/OSI
`expedited data' in that TCP's is `urgent' while ISO's is `expedited' :-) ),
and the BSD `pull one byte out' trick is nowhere near standard, unless you
count de facto standards.

All in all, it is impossible to say what the right answer is, because, as
I noted above, the problem has not been pinned down.  In all likelihood,
an attempt to state the problem exactly will lead to the discovery that
the problem is best solved by avoidance (choose a different communications
scheme).

In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA		Domain:	torek@ee.lbl.gov
dcj@AUSTIN.LOCKHEED.COM (David Jacobson rimux) (04/02/91)
> I have a little different situation than above...maybe someone could
> shed some light on it:
>
>	 __________                 ____________
>	| "sender" | ---SOCKET---> | "receiver" |
>	|__________|               |____________|
>
> My sender is writing data to the receiver via a socket.  The
> sender is told by another process that all the data it is currently
> writing to the receiver is BAD.  Questions:
>
> Can I clear the socket from the sender's (writer's) end?
>
> Can I read data from a socket I've been writing to and discard it
> (bidirectional)?
>
> How would YOU keep the receiver from processing the remaining
> BAD data it's reading from the socket?

I have since solved this problem by RTFMing on out-of-band data (which,
as far as I can tell, was specifically invented for clearing sockets):

	SunOS Network Programming Guide, pp. 302-304.

	Unix Network Programming, W. Richard Stevens, pp. 332-333
	...and see the excellent rlogin example, pp. 625-665.

Thanx to Jonathan Kamens and Mike Sykora for putting me on the right track.

David C. Jacobson
Lockheed Austin Division (LAD)
(512) 386-4267
INTERNET: dcj@shrike.austin.lockheed.com