jik@athena.mit.edu (Jonathan I. Kamens) (07/02/89)
Thus far, two people have suggested that one way to prevent zombie
processes from building up is to fork twice instead of once when starting a
child -- the child of the first fork forks again and then immediately exits.
The "grandchild" which results is inherited by init, which takes care of
cleanup, and the parent therefore doesn't have to worry about cleaning up.

While this will accomplish the stated purpose, I am not convinced at all
that it is the correct solution to the problem.

First of all, it is quite possible at some point in the future development
of the program (if not now) that the parent will wish to know when children
exit.  If you implement this grandchild system throughout the code, it'll
probably be a bitch to modify things to keep track of how things happen.
This is not a major problem if you are careful to isolate functionality (we
wouldn't want the whole program to be infected with functionality, after all
:-), but how many programmers really are careful with something as mundane
as starting a child :-)?

Second, let's say that my program has a 512k binary image.  I fork the
first time, and I'm taking up a meg of memory.  I fork again, and I'm taking
up a meg and a half.  Granted, this memory usage only stays around for a
second (if that); however, on my workstation I am quite often within a meg
of my memory limits, and this could very well push me over.  Also granted,
on some architectures the forked processes will share the text segment with
the parent.  So what -- you shouldn't count on this, because it is far from
portable.

It is bad practice to implement a solution which will take one and a half
times the necessary amount of memory just because it eliminates the need for
a call to signal() and a three-line (if that) signal handler.

Just my two cents....

Jonathan Kamens                           USnail:
MIT Project Athena                        432 S. Rose Blvd.
jik@Athena.MIT.EDU                        Akron, OH 44320
Office: 617-253-4261                      Home: 216-869-6432
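For readers following along, a minimal sketch of the double-fork idiom under
discussion might look like the following; the function name spawn_detached
and the use of waitpid() are illustrative choices, not taken from any of the
postings:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Start "argv" as a grandchild that init will eventually reap. */
int spawn_detached(char *const argv[])
{
    pid_t pid = fork();

    if (pid < 0)
        return -1;                      /* fork failed */

    if (pid == 0) {                     /* first child */
        pid_t gchild = fork();

        if (gchild != 0)
            _exit(gchild < 0 ? 1 : 0);  /* first child exits at once */

        /* grandchild: its parent is gone, so init adopts and reaps it */
        execvp(argv[0], argv);
        _exit(127);                     /* exec failed */
    }

    /* The first child exits almost immediately, so this wait() returns
       quickly and leaves no zombie behind. */
    return waitpid(pid, NULL, 0) < 0 ? -1 : 0;
}

Called with something like char *cmd[] = { "sleep", "10", (char *)0 };
spawn_detached(cmd); -- the short wait() on the first child is what keeps it
from lingering as a zombie, while init takes responsibility for the
grandchild.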
peter@ficc.uu.net (Peter da Silva) (07/04/89)
In article <12376@bloom-beacon.MIT.EDU>, jik@athena.mit.edu (Jonathan I. Kamens) writes:
> Second, let's say that my program has a 512k binary image.  I fork
> the first time, and I'm taking up a meg of memory.  I fork again, and
> I'm taking up a meg and a half.

Not if you have a modern fork() that does copy-on-write.  And even if you
have a stupid fork you probably have vfork().  The cost of forking children
is really pretty minor these days.

(yow, an innovation that actually REDUCES memory use!  Must be a mistake!)
-- 
Peter da Silva, Xenix Support, Ferranti International Controls Corporation.
Business: peter@ficc.uu.net, +1 713 274 5180.   | "X3J11 is not in the business
Personal: peter@sugar.hackercorp.com.           |  of legislating morality ..."
Quote: Have you hugged your wolf today?  `-_-'  |        -- Henry Spencer
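A minimal sketch of the vfork()-then-exec pattern Peter alludes to, assuming
a system that provides vfork(); the command being run is purely illustrative:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = vfork();

    if (pid < 0) {
        perror("vfork");
        return 1;
    }
    if (pid == 0) {
        /* the child borrows the parent's address space, so it should do
           nothing here except exec or _exit */
        execlp("ls", "ls", "-l", (char *)0);
        _exit(127);                 /* reached only if the exec failed */
    }

    waitpid(pid, NULL, 0);          /* reap the child so no zombie is left */
    return 0;
}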
jeffl@berick.uucp (Jeff Lawhorn) (07/04/89)
] Thus far, two people have suggested that one way to prevent zombie
]processes from building up is to fork twice instead of once when
]starting a child -- the child of the first fork forks again and then
]immediately exits.  The "grandchild" which results is inherited by
]init, which takes care of cleanup, and the parent therefore doesn't
]have to worry about cleaning up.

There is a better way to do this.  Have the parent ignore the signal
SIGCLD.  My man page for signal states:

     The SIGCLD affects two other system calls (wait(2), and exit(2))
     in the following ways:

     wait   If the func value of SIGCLD is set to SIG_IGN and a wait
            is executed, the wait will block until all of the calling
            process's child processes terminate; it will then return
            a value of -1 with errno set to ECHILD.

     exit   If in the exiting process's parent process the func value
            of SIGCLD is set to SIG_IGN, the exiting process will not
            create a zombie process.

It seems to me that this is exactly what you want.  If at some future
time you want to know when the child goes away, all you have to do is
change your handling of SIGCLD.
--
Jeff Lawhorn                I know I had a pithy quote sitting
jeffl@berick.uucp           around here somewhere...
ucsd!sdsu!berick!jeffl
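A minimal sketch of the approach Jeff describes, assuming a system whose
signal.h defines SIGCLD with the System V semantics quoted above:

#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* System V semantics: with SIGCLD ignored, exiting children never
       become zombies and the parent need not call wait() at all */
    signal(SIGCLD, SIG_IGN);

    if (fork() == 0)
        _exit(0);                   /* child does its work and exits */

    sleep(2);                       /* stand-in for the parent's real work */
    return 0;
}

As the next posting points out, ignoring SIGCHLD on most BSD systems does
not have this effect.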
brian@apt.UUCP (Brian Litzinger) (07/05/89)
From article <JEFFL.89Jul4084155@berick.uucp>, by jeffl@berick.uucp (Jeff Lawhorn):
> ] Thus far, two people have suggested that one way to prevent zombie
> ]processes from building up is to fork twice instead of once when
> ]starting a child -- the child of the first fork forks again and then
> ]immediately exits.  The "grandchild" which results is inherited by
> ]init, which takes care of cleanup, and the parent therefore doesn't
> ]have to worry about cleaning up.
>
> There is a better way to do this.  Have the parent ignore the
> signal SIGCLD.  My man page for signal states:

It is interesting how these discussions can become circular.  8-)

I believe this discussion originated from my posting regarding a problem I
was having with the creation of zombie processes.

Ignoring SIGCLD is a wonderful solution for people running System V.
Unfortunately, ignoring SIGCHLD on most BSD systems will not produce the
same result.  Note: BSD SIGCHLD generally equals System V SIGCLD.

The reasoning behind the double forking is that it is a more portable
solution than ignoring SIGCLD.

The discussion then seemed to move to the efficiency of double forking and
maybe using vfork().

<>  Brian Litzinger @ APT Technology Inc., San Jose, CA
<>  UUCP: {apple,sun,pyramid}!daver!apt!brian   brian@apt.UUCP
<>  VOICE: 408 370 9077     FAX: 408 370 9291
vlcek@athena.mit.edu (Jim C Vlcek) (07/06/89)
Double-forking to avoid creating zombies or having to wait on children just
doesn't sit right with me.  You fork() because you need another process, not
because you want to hide your trail from something.  It's like bringing two
cars on a trip just in case one gets a flat: why go to all that trouble just
to avoid bringing a spare?

Brian Litzinger, in <1668@apt.UUCP>, sez:

``Ignoring SIGCLD is a wonderful solution for people running System V.
Unfortunately, ignoring SIGCHLD in most BSD systems will not produce the
same result.''

``The reasoning behind the double forking is that it is a more portable
solution than ignoring SIGCLD.''

Steve Summit (I think) recommended trapping SIGC(H)LD (at least our version
of 4.3BSD #defines SIGCHLD SIGCLD), and then waiting on the expiring child,
as a portable solution.  This seems to me far better than double-forking,
and doubtless faster.  One has the added bonus of having the exit status of
the children easily available -- just in case you want them later.

Jim Vlcek  (vlcek@caf.mit.edu  uunet!mit-caf!vlcek)
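A minimal sketch of the trap-and-wait approach Jim describes; the #define
mirrors the one he mentions, and waitpid() with WNOHANG stands in here for
whatever non-blocking wait (e.g. wait3() on 4.3BSD) the local system
provides:

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#ifndef SIGCHLD
#define SIGCHLD SIGCLD      /* System V spells the signal without the H */
#endif

/* Reap every child that has exited; WNOHANG keeps the loop from blocking. */
void reap_children(int sig)
{
    int status;

    (void) sig;
    while (waitpid(-1, &status, WNOHANG) > 0)
        ;                   /* status carries each child's exit status */
}

int main(void)
{
    signal(SIGCHLD, reap_children);

    if (fork() == 0)
        _exit(0);           /* child exits; the handler reaps it */

    sleep(2);               /* stand-in for the parent's real work */
    return 0;
}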