chris@umcp-cs.UUCP (Chris Torek) (09/14/86)
In article <11997@watnot.UUCP> cagordon@watnot.UUCP (Chris Gordon) writes:
>If I have a function [func() of] (no args) ... And I wish to catch
>SIGINT and SIGQUIT to execute this function just before an exit(0),
>what would be the EXACT signal() statement (and do I need any others)?

This is a Unix question, not a C question, so I have directed followups
to net.unix.

SIGINT and SIGQUIT are, like all signals, set with the `signal' system
call:

	int (*old_disposition)();
	int new_catcher();

	old_disposition = signal(SIGINT, new_catcher);

Writing

	int userquit();

	signal(SIGINT, userquit);
	signal(SIGQUIT, userquit);

is a common mistake that shows no problems under `modern' systems using
`modern' shells and job control.  However, on any 4BSD or V7 system
using `sh', this has the wrong effect on background jobs.  The Bourne
shell starts background jobs with keyboard signals ignored, so that
interrupting a foreground job does not also interrupt the background
jobs.  It is therefore necessary to first determine that signals are
not being ignored:

	if (signal(SIGINT, SIG_IGN) != SIG_IGN)
		(void) signal(SIGINT, userquit);

This, however, opens a window during which interrupts are ignored, and
thus lost forever.  An equally unsatisfactory alternative is

	if (signal(SIGINT, userquit) == SIG_IGN)
		(void) signal(SIGINT, SIG_IGN);

which opens a window during which an interrupt runs userquit even
though the signal should have been ignored.  Prior to the job control
mechanism in 4.1BSD, there was no way to atomically test the state of
a signal.  In 4.1, one could write

	sigblock(SIGINT);
	if (signal(SIGINT, userquit) == SIG_IGN)
		(void) signal(SIGINT, SIG_IGN);
	sigrelse(SIGINT);

(requiring compilation with `-ljobs').  This holds off any keyboard
interrupts until after the signal disposition has been properly set.
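For readers on newer systems: POSIX sigprocmask() plays the role of
sigblock()/sigrelse() here.  A minimal sketch of the same atomic
test-and-set in ANSI C (the name `safe_catch' is my own invention, not
anything standard):

	#include <signal.h>

	/*
	 * Install `act' as the handler for `sig', but only if the
	 * signal is not currently being ignored.  The signal is
	 * blocked around the test, so none can slip through (or be
	 * lost) in the window between the two signal() calls.
	 */
	void
	safe_catch(int sig, void (*act)(int))
	{
		sigset_t new, old;

		sigemptyset(&new);
		sigaddset(&new, sig);
		sigprocmask(SIG_BLOCK, &new, &old);	/* hold off sig */
		if (signal(sig, act) == SIG_IGN)
			(void) signal(sig, SIG_IGN);	/* was ignored; restore */
		sigprocmask(SIG_SETMASK, &old, NULL);	/* release */
	}

This is only a sketch; serious code today would use sigaction() rather
than signal() so the handler disposition is not reset on delivery.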
In 4.2 and 4.3BSD, the code becomes

	int omask = sigblock(sigmask(SIGINT));

	if (signal(SIGINT, userquit) == SIG_IGN)
		(void) signal(SIGINT, SIG_IGN);
	(void) sigsetmask(omask);

In 4.2BSD it is necessary to first define sigmask:

	#define sigmask(s) (1 << ((s) - 1))

In 4.3BSD the sigmask macro is defined in <signal.h>.

To return to the original problem, assuming that func() does not
itself exit, you must first write a version that does:

	quitfunc()
	{

		func();
		exit(0);	/*
				 * exit(0) implies success, which seems
				 * rather odd in a quitting routine.
				 * But on with the code....
				 */
	}

	set(sig, act)
		int sig, (*act)();
	{
		int omask = sigblock(sigmask(sig));

		if (signal(sig, act) == SIG_IGN)
			(void) signal(sig, SIG_IGN);
		(void) sigsetmask(omask);
	}

	...
	set(SIGINT, quitfunc);
	set(SIGQUIT, quitfunc);

This is still not entirely reliable, as exit() flushes stdio buffers,
but a signal may arrive while stdio's internals are not in a flushable
state.  In practice, this is often not a problem.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 1516)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@mimsy.umd.edu
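[Editor's note: for completeness, the whole skeleton above translated
into ANSI C with the POSIX mask calls, so it compiles on a current
system.  The names func, quitfunc, and set follow the article; the
bodies are a sketch, not Torek's code.]

	#include <signal.h>
	#include <stdio.h>
	#include <stdlib.h>

	static void
	func(void)			/* the user's cleanup work */
	{
		printf("cleaning up\n");
	}

	static void
	quitfunc(int sig)		/* ANSI handlers take an int */
	{
		(void) sig;
		func();
		exit(0);		/* still "success", as in the article */
	}

	static void
	set(int sig, void (*act)(int))
	{
		sigset_t new, old;

		sigemptyset(&new);
		sigaddset(&new, sig);
		sigprocmask(SIG_BLOCK, &new, &old);
		if (signal(sig, act) == SIG_IGN)
			(void) signal(sig, SIG_IGN);
		sigprocmask(SIG_SETMASK, &old, NULL);
	}

A main() would then simply call

	set(SIGINT, quitfunc);
	set(SIGQUIT, quitfunc);

and go about its business; the caveat about stdio buffers applies to
this version just as much as to the original.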