[comp.std.unix] <limits.h> - copy of item posted to comp.unix.wizards

std-unix@ut-sally.UUCP (06/10/87)

From: domo@sphinx.co.uk (Dominic Dunlop)

Agreed that finding out how many files you have to close in order to be
sure you've closed them all is a pain on most implementations of UN*X.
IEEE 1003.1 is attempting to address this and related issues (after they
were highlighted by IBM and others).

The current draft (#10) of 1003.1, which recently dropped
stone-like through my (physical) mailbox, incorporates something that I and
others cooked up at the April Toronto meeting.  The affected section is
2.9, "Numerical Limits".  As the changes have yet to be reviewed by the
working group, they may well be thrown out or heavily modified later this
month.

Basically what the draft says is that things like OPEN_MAX, the maximum
number of files that a process can have open at any given time, are defined
in a header file, <limits.h>, as "bounded ranges".  OPEN_MAX defines at
compile time the "minimum maximum" number of files that a program can
expect to be able to have open when it runs on a particular
POSIX-conforming implementation (provided that the host system is not
overloaded; here, that means it hasn't run out of space in its system
open file or inode tables), while OPEN_MAX_CEIL defines the maximum
number of files that any instance of this implementation could ever allow
the program to have open.

What this means to the programmer is that applications may be written so
that they rely on being able to open OPEN_MAX files; so that they run
better if they succeed in opening more files than that (although there's no
point in trying if OPEN_MAX_CEIL files are already open); and so that they
can be
sure that everything is closed (when spawning children, for example) if they

	for (i = 0; i < OPEN_MAX_CEIL; i++)
		(void) close(i);
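
The "run better with more" part can be sketched too.  A program can probe
for the descriptors it can actually get, secure in the knowledge that
OPEN_MAX of them will be there and that there is no point looking past
OPEN_MAX_CEIL.  This sketch assumes the draft-10 names survive review, and
invents a dup()-based probe purely for illustration:

	#include <limits.h>

	extern int dup(int), close(int);

	/*
	 * Count the descriptors this run can actually get, by dup()ing
	 * descriptor 0 (assumed open) until failure or the ceiling.
	 * We are guaranteed at least OPEN_MAX in total, minus whatever
	 * is already open.
	 */
	int
	countfds(void)
	{
		int fds[OPEN_MAX_CEIL];
		int n, i;

		for (n = 0; n < OPEN_MAX_CEIL; n++)
			if ((fds[n] = dup(0)) < 0)
				break;
		for (i = 0; i < n; i++)
			(void) close(fds[i]);
		return n;
	}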

Thanks for the idea of the bounded range go to Terence Dowling of Rolm,
who's not on the net.

There's much more of this sort of thing in the draft standard.  The
alternative to this compile-time approach is a run-time library function
which delivers either all possible information at once (in which case you
get a fixed-size structure, and lose binary application compatibility if a
subsequent release of a particular POSIX-conforming implementation
increases the amount of information returned), or requested items one at a
time in some sort of union.  If anybody would care to submit a proposal
along these lines to the working group, it would likely be gladly
received.
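
For what it's worth, the item-at-a-time flavour might look something like
the following.  The function name, request codes, and union here are all
invented for illustration and appear in no draft; only OPEN_MAX and
OPEN_MAX_CEIL come from the draft's <limits.h>:

	#include <limits.h>

	#define LQ_OPEN_MAX		1	/* invented request codes */
	#define LQ_OPEN_MAX_CEIL	2

	union limitval {
		long num;	/* every limit so far is numeric... */
		char *str;	/* ...but there is room to grow */
	};

	/*
	 * Return 0 and fill in *res, or -1 for an unknown request.
	 * Because items are fetched one at a time, later releases can
	 * add request codes without breaking old binaries, which is
	 * exactly the problem the fixed-size structure runs into.
	 */
	int
	querylimit(int request, union limitval *res)
	{
		switch (request) {
		case LQ_OPEN_MAX:
			res->num = OPEN_MAX;
			return 0;
		case LQ_OPEN_MAX_CEIL:
			res->num = OPEN_MAX_CEIL;
			return 0;
		default:
			return -1;
		}
	}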

Copy relevant correspondence to John S Quarterman, moderator for
comp.std.unix.  He can be reached at ut-sally!std-unix.
[ Or std-unix@sally.utexas.edu.  -mod ]

I am
Dominic Dunlop	Phone +44 628 75343
Sphinx Ltd.	UKnet domo@sphinx.co.uk

POSIX is a trademark of the IEEE

Volume-Number: Volume 11, Number 61

henry@utzoo.UUCP (Henry Spencer) (06/17/87)

From: henry@utzoo.UUCP (Henry Spencer)

> Agreed that finding out how many files you have to close in order to be
> sure you've closed them all is a pain on most implementations of UN*X.
> IEEE 1003.1 is attempting to address this...

Actually, one way to solve this *particular* problem without the nastiness
of the general case would be to have close() yield EBADF (as it does now)
for a legal-but-not-open descriptor number and EDOM for a descriptor number
beyond the implementation's limit.  (1003 doesn't currently include EDOM in
its error-number list, since in existing implementations it's not returned
by the kernel, but adding it shouldn't hurt anything.)
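
With those semantics the close-everything loop needs no limit constant at
all; it can walk upward until the kernel says it has run off the end.  A
sketch, assuming the proposed (and at present purely hypothetical) EDOM
behaviour:

	#include <errno.h>

	extern int close(int);

	void
	closeall(void)
	{
		int fd;

		for (fd = 0; ; fd++) {
			errno = 0;
			if (close(fd) < 0 && errno == EDOM)
				break;	/* past the implementation's limit */
			/* closed it, or EBADF (just not open): keep going */
		}
	}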

Another alternative would be to vary the error code returned based on whether
there is a higher-numbered descriptor still open or not.  It's no problem for
the kernel
to keep track of the highest open descriptor, so that the decision can be
made quickly even on a system that allows lots of descriptors.
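
The bookkeeping involved is small.  A sketch of what the kernel side might
keep (the names and structure are invented, not taken from any real
kernel):

	#define NOFILE	20	/* per-process descriptor limit, as in V7 */

	struct file;		/* system open-file table entry */

	struct uofile {
		struct file *ofile[NOFILE];	/* NULL means "not open" */
		int highfd;			/* highest open slot; -1 if none */
	};

	/*
	 * Called when descriptor fd is closed: if it was the highest,
	 * scan down for the new highest.  The total scanning over a
	 * process's lifetime is bounded by the number of opens, so the
	 * test "is anything open above fd?" stays cheap.
	 */
	void
	droppedfd(struct uofile *u, int fd)
	{
		if (fd == u->highfd)
			while (u->highfd >= 0 && u->ofile[u->highfd] == NULL)
				u->highfd--;
	}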

				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,decvax,pyramid}!utzoo!henry

Volume-Number: Volume 11, Number 70