mason@utcsrgv.UUCP (Dave Mason) (09/22/83)
I've been reading all these wise words, but have decided I should add my
two cents' worth.  (This is the subject of my M.Sc. thesis, in progress.)

First, to clear up one misconception: someone asked why Unix couldn't
ignore uncaught signals rather than abort.  The obvious answer (which is
often the RIGHT answer, though not always) is that there would then be
no way to stop a runaway (dare I say "rogue") program.

There have been several suggestions that the proper approach is one
using signals or ASTs, or in other words *interrupts*.  Much work has
been done in the recent past to design high-level languages that include
the concept of concurrency, primarily in order to be able to write
operating systems and other stand-alone programs in a way that avoids
the concept of asynchronous interrupts.  Note that this was done so that
the "system programmers", presumably good programmers already familiar
with these concepts, could write clearer, more correct, more robust
systems.  (For refs: Lampson & Redell, CACM (23,2); Cheriton et al.,
CACM (22,1); Holt: Concurrent Euclid, Unix & Tunis, Addison-Wesley,
1982; Wirth's Lilith stuff (sorry, no refs).)  It seems reasonable to
assume that many of the people who wish to do IPC lack much of the
feeling for interrupts that has been a mark of the system programmer.

Once we agree that interrupts aren't the way to go, we are left with two
choices: "message based" or "procedure oriented" (monitors etc.).  (See
Lauer & Needham, ACM O/S Review (13,2); or an EXCELLENT paper by Cashin
on IPC, Bell-Northern Research #8005014.)  Named pipes can give
effectively the message-based model, although, as was pointed out, their
byte orientation is a problem: a pipe delivers an undifferentiated byte
stream, so message boundaries must be imposed by convention (as sketched
below).  The 4.2 stuff appears to give a reasonable implementation (when
it works) of message-based concurrency, although some of it seems a
little kludgy (all this from preliminary documentation).  Pipes are a
great idea, but for I/O redirection, NOT interprocess communication.

At UofToronto we have a procedure-oriented concurrency language called
Concurrent Euclid (CE), which was used to write the Unix work-alike
Tunis.  (See Holt82; Cordy & Holt: Specification of CE, U of Toronto
report CSRG-133, 1981.)  The major part of my thesis will be to migrate
the concepts of monitors into the operating system, so that one can
write CE programs that take full advantage of the concurrency available
to the operating system (multiprocessors etc.).  (At least that is the
plan; my supervisor and I have to decide on implementation in the next
couple of weeks.  Note that the opinions expressed here are mine and do
not necessarily reflect those of my supervisor or anyone else.)

I could say more, but this is already longer than most news articles
I'll read.  I'm open to comments, questions, and (shudder) flames.
--
Dave Mason, U. Toronto CSRG,
  {cornell,watmath,ihnp4,floyd,allegra,utzoo,uw-beaver}!utcsrgv!mason
  or {decvax,linus,research}!utzoo!utcsrgv!mason  (UUCP)
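A minimal sketch of the byte-orientation point above, assuming modern
POSIX calls (mkfifo(3) rather than the mknod(2) of the era); the FIFO
path and message format are made up for illustration.  A pipe delivers
bytes, not messages, so the sender prefixes each payload with a 4-byte
length and the receiver reads back exactly that much:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define FIFO_PATH "/tmp/msgpipe"   /* hypothetical rendezvous point */

    /* Read exactly n bytes; a pipe is a byte stream, so a single
     * read() may return less than one full "message". */
    static int readn(int fd, void *buf, size_t n)
    {
        char *p = buf;
        while (n > 0) {
            ssize_t r = read(fd, p, n);
            if (r <= 0)
                return -1;             /* EOF or error mid-message */
            p += r;
            n -= (size_t)r;
        }
        return 0;
    }

    int main(void)
    {
        mkfifo(FIFO_PATH, 0666);       /* ignore EEXIST for this sketch */

        if (fork() == 0) {             /* child: the "sender" process */
            int fd = open(FIFO_PATH, O_WRONLY);
            const char *msg = "hello from the sender";
            uint32_t len = (uint32_t)strlen(msg);
            write(fd, &len, sizeof len);  /* length header marks boundary */
            write(fd, msg, len);          /* then the payload itself */
            close(fd);
            exit(0);
        }

        int fd = open(FIFO_PATH, O_RDONLY);  /* parent: the "receiver" */
        uint32_t len;
        if (readn(fd, &len, sizeof len) == 0) {
            char buf[256];
            if (len < sizeof buf && readn(fd, buf, len) == 0) {
                buf[len] = '\0';
                printf("received message: %s\n", buf);
            }
        }
        close(fd);
        wait(NULL);
        unlink(FIFO_PATH);
        return 0;
    }

This framing convention is exactly what a datagram-style facility (such
as the 4.2 socket interface mentioned above) provides for free, which is
much of its appeal for message-based IPC.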
tom@rlgvax.UUCP (Tom Beres) (09/26/83)
The question was "Why not reset signals to IGNORE rather than DEFAULT?".
The "obvious" answer given ("because there would be no way to stop
runaway programs") is incorrect.  Signal 9 ("kill", or "DIE, you *##*@,
DIE!") can never be caught or ignored, so there is no way it could ever
be reset to IGNORE, and any runaway process can always be terminated
with a "kill -9".

Perhaps the real answer has to do with catching your own memory faults,
or something along those lines; I haven't thought it out.  Perhaps the
real answer was shortsightedness?

- Tom Beres
  {seismo, allegra, mcnc, brl-bmd, we13}!rlgvax!tom
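A minimal sketch of that point, assuming the modern signal(2)/errno
interface (the 1983 calls differ in detail, but SIGKILL has been
uncatchable since early Unix): the kernel rejects any attempt to ignore
or catch signal 9, so "kill -9" always remains available as a backstop.

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    static void handler(int sig) { (void)sig; }

    int main(void)
    {
        /* Both requests fail outright (EINVAL on current systems). */
        if (signal(SIGKILL, SIG_IGN) == SIG_ERR)
            printf("ignore SIGKILL: %s\n", strerror(errno));
        if (signal(SIGKILL, handler) == SIG_ERR)
            printf("catch SIGKILL: %s\n", strerror(errno));

        /* For contrast, an ordinary signal may be ignored freely. */
        if (signal(SIGINT, SIG_IGN) != SIG_ERR)
            printf("SIGINT ignored; kill -9 is still the backstop\n");

        return 0;
    }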