[net.lang.c] When does void make code less re

jim@ISM780B.UUCP (02/27/85)

>Actually, note that the asynchronous nature of Unix disk i/o can cause
>a real, live i/o error to be reported to a close() rather than to the
>write() that caused it.  Programs which really want to be paranoid *will*
>check the returned value from close() and fclose().

Not to challenge your point about checking for errors, but I
would like to clear up a misconception.  The UNIX systems I am aware
of do no such thing.  They simply discard asynchronous write
errors (these *will* be recorded on the console and in the error
log, but will not be passed to user code).  The claim in write(2)
in some manuals that an error may be reported in a later write is
simply a lie.  iodone() for an ASYNC buffer calls brelse(), which clears
B_ERROR.  Even if I/O errors were reported to later users of the buffer,
close would not get them because it doesn't use I/O buffers.
However, close() on some devices may indeed encounter some sort of synchronous
non-transfer error.
And fclose() definitely can produce an error, since it generates a write()
call; the error that *is* reported, and should be watched for, is ENOSPC.
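
For concreteness, a small sketch of the checking being advocated; save_buffer()
is an invented name and this is only an illustration, but the point is that the
fclose() return value gets tested, since that is where a deferred ENOSPC is
likeliest to surface:

    #include <stdio.h>

    /*
     * Write a buffer and check both the writes and the final flush.
     * fclose() flushes buffered data with write(), so an out-of-space
     * condition may not show up until then.
     */
    int save_buffer(const char *file, const char *buf, size_t len)
    {
        FILE *fp = fopen(file, "w");

        if (fp == NULL)
            return -1;
        if (fwrite(buf, 1, len, fp) != len) {
            (void) fclose(fp);
            return -1;
        }
        if (fclose(fp) == EOF)      /* flush may fail with, e.g., ENOSPC */
            return -1;
        return 0;
    }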

>I am deeply suspicious of event-handling primitives; I don't think I
>have ever seen a good way of doing them.  Lots of bad ways, though.

There are very good systems that use a stack-discipline exception mechanism.
Upon an error, the most recently set error handler is given a chance to
interpret the error and either provide a corrective action at the point
of error, provide a failure alternative at a point of call (a la setjmp), or
pass the error (or a modified form of the error) to the next most recent
handler.  A system-established default handler at the bottom of the stack
will print a message and exit if no user-defined action is taken.
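
A rough sketch of the setjmp flavor of such a mechanism, just to make the idea
concrete; struct handler, raise_error(), and example() are invented names, and
a real system keeps more state per frame (error code, message, whether the
error can be corrected in place), but the stack discipline is the same:

    #include <setjmp.h>
    #include <stdio.h>
    #include <stdlib.h>

    /*
     * One frame per established handler; the most recently pushed
     * frame sees the error first.
     */
    struct handler {
        jmp_buf         env;
        struct handler *next;
    };

    static struct handler *top;     /* handler stack */

    void raise_error(int code)
    {
        struct handler *h = top;

        if (h == NULL) {            /* bottom of the stack: default action */
            fprintf(stderr, "unhandled error %d\n", code);
            exit(1);
        }
        top = h->next;              /* pop before transferring control */
        longjmp(h->env, code);
    }

    int example(void)
    {
        struct handler h;

        h.next = top;
        top = &h;                   /* push a handler */
        if (setjmp(h.env) != 0) {
            /* failure alternative: an error came back here */
            return -1;
        }
        /* ... work that may call raise_error() ... */
        top = h.next;               /* normal return: pop */
        return 0;
    }

A handler that cannot cope can simply call raise_error() again, passing the
error (or a modified form of it) to the next most recent frame.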

-- Jim Balter, INTERACTIVE Systems (ima!jim)

jim@ISM780B.UUCP (03/07/85)

>You've missed my whole point, Doug.  The low-level routines are not
>pre-empting the decision on how to handle errors, they are aiding in
>the implementation of the most common decision:  "on error, print a
>message and die".  By calling (say) emalloc rather than malloc, the
>higher levels are signifying their decision to adopt this strategy,
>and are asking the lower levels to handle the implementation.  There
>is no difference in power or flexibility, only in ease of use.

Fine; so why aren't there e* versions of every routine that might possibly
produce an error or call a routine that might produce an error?
And why not have versions that write their messages in different languages
and to different file descriptors?  (These are rhetorical questions.)
A special case like emalloc is just a wart, making clear the absence of a
decent global strategy.  I think catch/throw is best, but even
the PWB fatal package or the USG matherr approach makes for far, far better
software engineering than emalloc.  The existence of emalloc just encourages
you to write a subroutine that calls it, but I can't call your routine because
you have preempted the error policy decision.

>Note my earlier comment about the usefulness of a global s/malloc/emalloc/
>in Berkeley code.  By requiring the caller to do the work of checking
>for success, even when there is nothing meaningful to be done about
>failure, the bare malloc interface encourages sloppy programmers to
>ignore the whole issue.  It also makes conscientious programmers do
>repetitive and annoying extra work.

You have made a common error:  the existence of a problem is in no way
a justification for any specific solution (this applies well to
"initialize first member" too).  The key problem is that the current default
action when malloc fails is a core dump or other random behavior.
I would argue that the right solution is to make the default be an error
message and exit, but to allow that behavior to be modified, which emalloc
does not allow.
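
To make that concrete, here is a sketch of an allocator along those lines;
xmalloc() and set_alloc_handler() are invented names, not any existing package:

    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*alloc_handler_fn)(size_t);

    /* Default failure action: print a message and exit. */
    static void default_handler(size_t size)
    {
        fprintf(stderr, "out of memory (%lu bytes)\n", (unsigned long) size);
        exit(1);
    }

    static alloc_handler_fn alloc_handler = default_handler;

    /* Install a different failure action; returns the old one. */
    alloc_handler_fn set_alloc_handler(alloc_handler_fn fn)
    {
        alloc_handler_fn old = alloc_handler;

        alloc_handler = fn;
        return old;
    }

    void *xmalloc(size_t size)
    {
        void *p = malloc(size);

        if (p == NULL)
            (*alloc_handler)(size); /* may exit, longjmp, or just return */
        return p;                   /* NULL if the handler chose to return */
    }

A library routine can then call xmalloc() freely, and a caller that really does
have something meaningful to do about failure installs its own handler instead
of inheriting the library author's policy.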

-- Jim Balter, INTERACTIVE Systems (ima!jim)