[net.general] speeding up forking in UNIX

jim (04/26/83)

        I do a lot of process forking from within programs & was wondering
if anybody out there in Netland had ideas about optimizing process
forking in UNIX. Process forking tends to be so slow that it usually makes
more sense to write what you'd like to fork as a subroutine than to
try to run it concurrently, unless the child process is forked at the beginning
of the program with a pipe going to it and killed at the end of the program.
Forking on the fly, so to speak, tends to be too expensive.
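        Something like the following sketch is what I mean by forking one
child at startup (the request format and what the worker actually does
are just placeholders for illustration):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pid_t pid;
        char buf[256];

        if (pipe(fd) < 0 || (pid = fork()) < 0) {
            perror("pipe/fork");
            exit(1);
        }
        if (pid == 0) {                 /* child: long-lived worker */
            close(fd[1]);               /* child only reads the pipe */
            while (read(fd[0], buf, sizeof buf) > 0)
                ;                       /* ... act on each request ... */
            _exit(0);
        }
        close(fd[0]);                   /* parent only writes the pipe */

        /* ... main program sends requests down the pipe as it runs ... */
        write(fd[1], "request\n", 8);

        close(fd[1]);                   /* at the end: close the pipe,  */
        kill(pid, SIGTERM);             /* kill the child, and reap it  */
        wait(NULL);
        return 0;
    }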
        Talking with some of the locals and thinking this over has led to
the thought that the problem is the overhead of copying the parent process's
data and code segments, scheduling them for memory, and then overlaying all
of that with the new data and code segments. Since 99% of the
time you're not interested in having a copy of the parent anyway, it would
seem more efficient to simply schedule the new executable file for
memory allocation, rather than go through the rigmarole of copying the
parent. Of course, one would want to copy the open file table from the
parent, so pipes could be shared.
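        For what it's worth, Berkeley's vfork(2) seems aimed at exactly
this: the child borrows the parent's address space instead of getting a
copy, on the understanding that it will do nothing but exec or exit, and
open descriptors (pipes included) carry across the exec. A rough sketch,
assuming a BSD-style system, with /bin/date standing in for whatever you
would really run:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid;

        pid = vfork();              /* child borrows the parent's space; no copy */
        if (pid < 0) {
            perror("vfork");
            exit(1);
        }
        if (pid == 0) {
            /* child: must only exec or _exit; the open file table,
             * and hence any pipes, is carried across the exec */
            execl("/bin/date", "date", (char *)0);
            _exit(1);               /* reached only if the exec failed */
        }
        wait(NULL);                 /* reap the child once it finishes */
        return 0;
    }

The open-file-table sharing comes for free here, since exec only closes
descriptors that have been explicitly marked close-on-exec.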
        Please forgive me if this repeats discussion on some other group
or from some point in the past. I only subscribe to general, followup,
and misc to prevent information overload & have only been on the net
for ~6 months or so.                   !arizona:jim