kdp@hplabs.UUCP (Ken Poulton) (07/23/83)
I am considering putting up a Ratfor implementation of MMDF on several machines, including various flavors of Unix. MMDF requires the ability to time out on a read request, i.e., to have the read return (with or without data) after some time limit has elapsed. Some perusal of the manuals (4.1) has not shed any light on this for me. Does anyone know of a way?

The ideal method would be a call that simply sets a time-out interval on an i/o channel, to be observed until changed, but I'm open to any working solution. Since this needs to run on both Berkeley and Bell Unices, the less fancy stuff needed, the better.

Much thanks,
Ken Poulton
...!hplabs!kdp
drockwel@bbn-vax@sri-unix.UUCP (07/23/83)
From: Dennis Rockwell <drockwel@bbn-vax>

The simplest way to time out a read is to use the alarm(II) system call. This sends your process a signal (SIGALRM) after n seconds. The read terminates with an error (EINTR), and the routine that catches the signal can set a flag to be tested when the read returns.
lwa@mit-csr@sri-unix.UUCP (07/23/83)
I use this scheme also, but there are a couple of potential problems that people should be aware of:

1) If you're using the Berkeley job-control stuff, the signal won't interrupt the read. This is a "feature" of the sigsys() system call. Note that any time you link against /usr/lib/libjobs.a, you get a version of the signal() routine which actually does a sigsys system call, and hence you get this feature whether you want it or not. (See the documentation in Jobs(III) in the 4.1 UPM.)

2) There's a race condition between the alarm and the read call. If the alarm goes off between the time you set it and the time you do the read, you may sleep forever. In practice this is only a problem with very short alarms (one or two seconds), but the problem does exist.

-Larry Allen
-------
JPAYNE@BBNG.ARPA (07/25/83)
But with the new signal mechanism (the sigset and sigsys stuff), if no I/O has taken place at the time of the interrupt, the read/write is restarted. I can't decide whether this is a feature or a misfeature. Suppose a program, like an interpreter, is sitting in a read-eval-print loop and the user types the interrupt character: the program does what it does with the interrupt and then goes back to that same read!

If you don't want the read to be restarted, you use the old signal mechanism. But the old signal mechanism allows recursive traps and all that bad stuff... (I don't know how new or old any of this is on the VAX; we recently got 2.81 up on our PDP-11/70.) Shouldn't there be a way to specify what you want?
nrf@whuxlb.UUCP (Neal Fildes) (07/27/83)
Are the routines 'setjmp' and 'longjmp' available in this version of Unix? If so, the interrupt handler could do a longjmp to some higher level in the program rather than doing a 'return'.

N. R. Fildes, Bell Telephone Laboratories, Whippany, NJ
Tappan@BBNG.ARPA@sri-unix.UUCP (07/27/83)
From: Dan Tappan <Tappan@BBNG.ARPA>
It seems to me that setjmp/longjmp won't work for timing out a read
if you don't want to lose data. There's a potential race condition.
E.g., if you have code like

	nread = 0;
	if (!setjmp(env)) {
		alarm(XXX);
		nread = read(..);	/* (1) */
		alarm(0);
	}
	if (nread > 0) { /* process data */ }

if the alarm goes off in the middle of statement (1), after the read
completes but before 'nread' gets set, then that buffer of data
will be lost.

However, I'm not familiar with the UNIX kernel signal handling code. Is it
guaranteed that that statement won't be interrupted?
Dan
-------