[comp.unix.wizards] shell file descriptor programming

scs@adam.pika.mit.edu (Steve Summit) (04/29/89)

In article <1015@philmds.UUCP> leo@philmds.UUCP (Leo de Wit) writes:
>The shell has no means to rewind that I know of; however, a simple
>one-liner will do the trick (add error checking of your fancy):
>----- rewind.c:
>main(argc,argv) int argc; char **argv; { lseek(atoi(argv[1]),0L,0); }

Ah, yes, a great old trick, impresses the hell out of your
friends, guaranteed to break the ice at parties, but you should
be more general and write lseek(1) instead of rewind(1S):

	extern long int atol();
	main(argc, argv) int argc; char *argv[];
	{lseek(atoi(argv[1]), atol(argv[2]), atoi(argv[3]));}

or, further eschewing compiled code,

	ci <<%
	lseek($1, $2L, $3);
	%

(assuming ci is your friendly neighborhood C interpreter, not
RCS checkin).

Of course, the other great benefit of playing with file
descriptors with 4> and 5< and the like is that it lets us
stick it to the ever-increasing ranks of csh users :-).

                                            Steve Summit
                                            scs@adam.pika.mit.edu

P.S. Don't try lseek(0, -1L, 0) from your login shell
     (but you will...).

stever@tree.UUCP (Steve Rudek) (05/02/89)

In article <10944@bloom-beacon.MIT.EDU>, scs@adam.pika.mit.edu (Steve Summit) writes:
> In article <1015@philmds.UUCP> leo@philmds.UUCP (Leo de Wit) writes:
> >main(argc,argv) int argc; char **argv; { lseek(atoi(argv[1]),0L,0); }

> be more general and write lseek(1) instead of rewind(1S):
> 	extern long int atol();
> 	main(argc, argv) int argc; char *argv[];
> 	{lseek(atoi(argv[1]), atol(argv[2]), atoi(argv[3]));}

Neither works under the ksh (Microport System V/AT 2.4) though both work
under the bourne shell (tested with the shell script).  The ksh failure is
absolutely silent.

Obviously the ksh isn't 100% compatible (I've also noticed that function
recursion which works under sh fails under ksh--but at least it has the
decency to complain).  Is this sort of file rewind impossible under ksh?
Any guesses as to why the ksh falls down?  Other significant upward
incompatibilities?
-- 
----------
Steve Rudek  {ucbvax!ucdavis!csusac OR ames!pacbell!sactoh0} !tree!stever

ka@june.cs.washington.edu (Kenneth Almquist) (05/03/89)

stever@tree.UUCP (Steve Rudek) writes:
> Neither works under the ksh (Microport System V/AT 2.4) though both work
> under the bourne shell (tested with the shell script).  The ksh failure is
> absolutely silent.
>
> Obviously the ksh isn't 100% compatible (I've also noticed that function
> recursion which works under sh fails under ksh--but at least it has the
> decency to complain).  Is this sort of file rewind impossible under ksh?
> Any guesses as to why the ksh falls down?  Other significant upward
> incompatibilities?

It's supposed to be a feature.  The idea is that if you save file
descriptors by moving them, you don't necessarily want to pass them
to the programs you run.  Let me explain that last sentence with an
example:

	if test $flag; then exec 3<&0 <file; fi
	program
	if test $flag; then exec <&3 3<&- ; fi

This uses file descriptor 3 to hold the original value of file
descriptor zero if the flag is set.  Under the Bourne shell, "program"
will be invoked with file descriptor 3 open, and (in principle) the
program could run out of file descriptors because file descriptor 3 is
unavailable.  The Korn shell avoids this problem by closing file
descriptor 3 before exec-ing the program.

This Korn shell feature is being considered for inclusion in P1003.2.
The rewind/lseek program is the first example I have seen that is
broken by the feature.  It is possible to work around it by saying

	lseek 0 0 0 <&3

because the feature only applies to file descriptors greater than two.
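
For the curious, the fd-closing presumably amounts to something like
this in the child, between the fork and the exec (an untested sketch,
not the actual ksh source; MAXFD is just an assumed per-process limit):

	/*
	 * Untested sketch of the behavior described above, not the real
	 * ksh code: before exec-ing the program, close every descriptor
	 * above 2.  MAXFD is an assumed per-process descriptor limit.
	 */
	#include <sys/wait.h>
	#include <unistd.h>

	#define MAXFD	20

	void run_without_extra_fds(char **argv)
	{
	    int fd, pid = fork();

	    if (pid == 0) {			/* child */
		for (fd = 3; fd < MAXFD; fd++)
		    close(fd);			/* drop saved descriptors */
		execvp(argv[0], argv);
		_exit(127);			/* exec failed */
	    } else if (pid > 0)
		wait((int *) 0);		/* parent waits */
	}
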
				Kenneth Almquist

schwartz@shire.cs.psu.edu (Scott Schwartz) (05/03/89)

In article <8087@june.cs.washington.edu>, ka@june (Kenneth Almquist) writes:
>It's supposed to be a feature.  The idea is that if you save file
>descriptors by moving them, you don't necessarily want to pass them
>to the programs you run.
...
>because the feature only applies to file descriptors greater than two.

Yuck!  What an unpleasant 'feature'.  It's even got magic special cases
to know about.  Who do I write to to vote against this?
-- 
Scott Schwartz		<schwartz@shire.cs.psu.edu>

opus@ihlpe.ATT.COM (452is-Kim) (05/03/89)

In article <296@tree.UUCP>, stever@tree.UUCP (Steve Rudek) writes:
: In article <10944@bloom-beacon.MIT.EDU>, scs@adam.pika.mit.edu (Steve Summit) writes:
: : 	extern long int atol();
: : 	main(argc, argv) int argc; char *argv[];
: : 	{lseek(atoi(argv[1]), atol(argv[2]), atoi(argv[3]));}
: 
: Neither works under the ksh (Microport System V/AT 2.4) though both work
: under the bourne shell (tested with the shell script).  The ksh failure is
: absolutely silent.
: 
: ----------
: Steve Rudek  {ucbvax!ucdavis!csusac OR ames!pacbell!sactoh0} !tree!stever

I had the same problem when I tried it, but I figured out a solution:

	extern long	lseek();
	main()
	{
		return(lseek(0, 0L, 0));
	}

To use this program to rewind file descriptor 4, for example, you say:

	rewind <&4

I know it's not clean, but it works.  I suppose you could have a shell script
front end that takes an actual argument instead of a redirection.
-- 
						RoBiN G. KiM
						...att!ihlpe!opus

bink@aplcen.apl.jhu.edu (Ubben Greg) (05/06/89)

In article <4763@ihlpe.ATT.COM> 452is-Kim writes:
>In article <10944@bloom-beacon.MIT.EDU> Steve Summit writes:
>: 	extern long int atol();
>: 	main(argc, argv) int argc; char *argv[];
>: 	{lseek(atoi(argv[1]), atol(argv[2]), atoi(argv[3]));}
>[...]
>To use this program to rewind file descriptor 4, for example, you say:
>
>	rewind <&4
>
>I know it's not clean, but it works.  I suppose you could have a shell script
>front end that takes an actual argument instead of a redirection.

On the contrary, I feel the UNIXy way of doing it IS to operate on stdin,
and let the shell redirect if necessary.  Here is the generalized seek
program I put on our system after reading the previous postings:

	/*
	 *  seek.c
	 *  Performs an lseek on the standard input.
	 *  Greg Ubben, XXX, 2May89
	 */
	
	static char what[]  = "@(#) 2May89 seek.c gsubben";
	static char usage[] = "Usage:  seek [offset [whence]]\n";
	
	extern long strtol(), lseek();
	
	main (argc,argv)
	    int  argc;
	    char *argv[];
	{
	    long offset = (argc>1 ? strtol(argv[1],&argv[1],0) : 0L);
	    int  whence = (argc>2 ? strtol(argv[2],&argv[2],10) : 0);
	
	    if (argc>1 && *argv[1] || argc>2 && *argv[2] || argc>3) {
		write (2, usage, sizeof(usage)-1);
		exit (2);
	    }
	    if (lseek(0,offset,whence) < 0) {
		perror (argv[0]);
		exit (1);
	    }
	    exit (0);
	}

It's already been used in a shell script that needed to write to a dir-like
data file.  I'm sure there are problems with this, but I leave it as an
exercise to find them.  :-)  It fails on bad whence values, as this isn't
worth checking -- 0, 1, or 2 will always be hardcoded in the shell scripts
that use it.  Here's a tentative man entry that gives some nifty examples:

	                                                          SEEK(1)
	
	
	NAME
	     seek - exercise lseek system call
	
	SYNOPSIS
	     seek [offset [whence]]
	
	DESCRIPTION
	     Seek performs an lseek(2) to adjust the file pointer of
	     standard input.  Offset must be a decimal, octal (if it
	     begins with 0), or hexadecimal (if it begins with 0x or 0X)
	     integer.  Whence must be 0, 1, or 2 to set the offset base
	     at the beginning of the file, current location, or end of
	     the file.  Both arguments default to 0, causing no arguments
	     to "rewind" the file.
	
	EXAMPLES
	     (seek 25 && echo "OFFSET25\c") >>datafile <&1
	
	     exec 3<inputfile
	     read <&3 a b c
	     read <&3 a b c
	     ...
	     seek <&3        # rewind to start over
	     read <&3 a b c
	     exec 3<&-
	
	SEE ALSO
	     lseek(2), sh(1).

Both examples work on System V.2 Bourne shell, but only the first does on
this here Ultrix system under ksh.  Is this a good idea?  Problems?
Portability?

					-- Greg Ubben
					   bink@aplcen.apl.jhu.edu

jc@minya.UUCP (John Chambers) (05/10/89)

> P.S. Don't try lseek(0, -1L, 0) from your login shell
>      (but you will...).

You're right; I did, and as near as I can tell, it was a total no-op.  
Was it supposed to do something interesting and/or amusing?  Does it
do something on some systems?  I'd expect it to just return -1, which
is what it did.

This is a reasonably generic Sys/V.2 system, BTW.

-- 
John Chambers <{adelie,ima,mit-eddie}!minya!{jc,root}> (617/484-6393)

[Any errors in the above are due to failures in the logic of the keyboard,
not in the fingers that did the typing.]

lvc@cbnews.ATT.COM (Lawrence V. Cipriani) (05/10/89)

In article <1015@philmds.UUCP> leo@philmds.UUCP (Leo de Wit) writes:
>The shell has no means to rewind that I know of;

For tapes the following works:

	$ < tape_device_pathname

Just type it and watch the tape rewind!  It might even work for files,
but I never used it that way.
-- 
Larry Cipriani, att!cbnews!lvc or lvc@cbnews.att.com
"Life is not a seminar." -- Thomas Sowell

jc@minya.UUCP (John Chambers) (05/12/89)

In article <4542@psuvax1.cs.psu.edu>, schwartz@shire.cs.psu.edu (Scott Schwartz) writes:
> In article <8087@june.cs.washington.edu>, ka@june (Kenneth Almquist) writes:
> >It's supposed to be a feature.  The idea is that if you save file
> >descriptors by moving them, you don't necessarily want to pass them
> >to the programs you run.
> ...
> >because the feature only applies to file descriptors greater than two.
> 
> Yuck!  What an unpleasant 'feature'.  It's even got magic special cases
> to know about.  Who do I write to to vote against this?

It's definitely a mis-feature, and now I know a good argument against ksh,
at least for some applications.  I've worked on several projects for which
I produced an augmented I/O library that made conventional use of files
3 and higher.  With /bin/sh, it's easy enough to include in /etc/profile
(or wherever) a set of lines that initialize these files to a default (an
audit trail, for example), and then just assume that the files are always 
available.  If ksh shoots them down, it'd definitely interfere with such 
an approach.
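
To give a concrete (and much simplified) sketch of the sort of thing I
mean -- the real library does more, and descriptor 3 is only the
convention I happened to pick:

	/*
	 * Simplified sketch: an audit routine that assumes /etc/profile
	 * (or wherever) has already done something like "exec 3>>auditfile",
	 * so descriptor 3 is open for writing.  AUDIT_FD and audit() are
	 * illustrative names, not anything standard.
	 */
	#include <string.h>
	#include <unistd.h>

	#define AUDIT_FD	3

	int audit(const char *msg)
	{
	    if (write(AUDIT_FD, msg, strlen(msg)) < 0)
		return -1;			/* fd 3 wasn't open after all */
	    return write(AUDIT_FD, "\n", 1) < 0 ? -1 : 0;
	}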

One of the things that is generally done well on Unix is to set things up
so that they are generic and easily extendible.  This sounds like someone
has decided to take the opposite approach.  Instead of saying that files
0,1,2 are open by default, and anyone can extend the list, the approach
seems to be that exactly files 0,1,2 are open by default, and you aren't
allowed to modify the list.  This is the opposite of extendibility.  Who
needs such silly restrictions?  

-- 
John Chambers <{adelie,ima,mit-eddie}!minya!{jc,root}> (617/484-6393)

[Any errors in the above are due to failures in the logic of the keyboard,
not in the fingers that did the typing.]

ekrell@hector.UUCP (Eduardo Krell) (05/16/89)

In article <134@minya.UUCP> jc@minya.UUCP (John Chambers) writes:

>It's definitely a mis-feature, and now I know a good argument against ksh,
>at least for some applications.

But at the same time, it's a good argument for ksh in some other
applications.  One of the reasons this was put into ksh was the
existence of broken window managers that leave several file descriptors >2
open and cause programs to fail when they run out of file descriptors.

>With /bin/sh, it's easy enough to include in /etc/profile
>(or wherever) a set of lines that initialize these files to a default (an
>audit trail, for example), and then just assume that the files are always 
>available.				  ^^^^^^

That's the problem. What makes you think this is a safe assumption?
There are already too many ways of breaking this. Say you want to
run this program you have from within an editor or a shell script
or some other program.
What makes you think the file descriptors you need (>2) are still
open? Many programs start by closing all file descriptors >2.
The only standard file descriptors that all programs expect to be
opened are 0, 1, and 2.

>Instead of saying that files
>0,1,2 are open by default, and anyone can extend the list, the approach
>seems to be that exactly files 0,1,2 are open by default, and you aren't
>allowed to modify the list.

You aren't allowed to modify the list because there is no notation
for it. Someone suggested overloading the export builtin to allow
specifying which file descriptors would be "exported" to exec'ed
processes (default being 0, 1, and 2), but the current POSIX draft
doesn't support that.
    
Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com

les@chinet.chi.il.us (Leslie Mikesell) (05/18/89)

In article <11529@ulysses.homer.nj.att.com> ekrell@hector.UUCP (Eduardo Krell) writes:

>>With /bin/sh, it's easy enough to include in /etc/profile
>>(or wherever) a set of lines that initialize these files to a default (an
>>audit trail, for example), and then just assume that the files are always 
>>available.				  ^^^^^^

>That's the problem. What makes you think this is a safe assumption?

How about the man page for exec(2) where it says that file descriptors
open in the calling process remain open in the new process?

>There are already too many ways of breaking this. Say you want to
>run this program you have from within an editor or a shell script
>or some other program.

Then you should expect an error just like you would get with a program
that wants to open a file that is not accessible.

>What makes you think the file descriptors you need (>2) are still
>open? Many programs start with closing all file descriptors >2.
>The only standard file descriptors that all programs expect to be
>opened are 0, 1, and 2.

Is this documented somewhere?  I think the only program that has any
business closing open files if it expects to start general-purpose
children is "getty", and it shouldn't have to.

>You aren't allowed to modify the list because there is no notation
>for it. Someone suggested to overload the export builtin to allow
>to specify which file descriptors would be "exported" to exec'ed
>processes (default being 0, 1, and 2), but the current POSIX draft
>doesn't support that.

If I:
exec 3<foo 4>bar
under what conditions should I not expect to be able to access these
files?

Les Mikesell

jc@minya.UUCP (John Chambers) (05/18/89)

In article <11529@ulysses.homer.nj.att.com>, ekrell@hector.UUCP (Eduardo Krell) writes:
> In article <134@minya.UUCP> jc@minya.UUCP (John Chambers) writes:
> >With /bin/sh, it's easy enough to include in /etc/profile
> >(or wherever) a set of lines that initialize these files to a default (an
> >audit trail, for example), and then just assume that the files are always 
> >available.				  ^^^^^^
> 
> That's the problem. What makes you think this is a safe assumption?
> There are already too many ways of breaking this. Say you want to
> run this program you have from within an editor or a shell script
> or some other program.
> What makes you think the file descriptors you need (>2) are still
> open? Many programs start with closing all file descriptors >2.
> The only standard file descriptors that all programs expect to be
> opened are 0, 1, and 2.

True, but then, it is also easy enough to write code that closes files 
0, 1, and 2, and then does an exec of another unsuspecting program.  
Strictly speaking, no program should ever rely on any files being open,
and even if open, you can't rely on file 0 being readable, etc.  Does
all your code behave correctly in such a case?

This isn't entirely facetious.  There are several Unices around whose
cron starts things up without all the standard files open, and things
started by init are also highly likely to have this problem.

There is, however, a scenario in which alternative assumptions are
quite reasonable.  Suppose you are building a "turnkey" system that
has Unix inside it, but you supply your own shell to your users, with
a standard setup that only a few wizards even know exists.  Included
would be a set of libraries full of your applications.  It is not at
all unreasonable for programs in these libraries to just assume that
they are being called from the "environment" that you are selling.
The ability to develop such an environment is, after all, one of the
great strengths of Unix.

-- 
John Chambers <{adelie,ima,mit-eddie}!minya!{jc,root}> (617/484-6393)

[Any errors in the above are due to failures in the logic of the keyboard,
not in the fingers that did the typing.]

ekrell@hector.UUCP (Eduardo Krell) (05/18/89)

In article <8473@chinet.chi.il.us> les@chinet.chi.il.us (Leslie Mikesell) writes:

>How about the man page for exec(2) where it says that file descriptors
>open in the calling process remain open in the new process?

That man page says nothing about individual programs.

>>The only standard file descriptors that all programs expect to be
>>opened are 0, 1, and 2.
>
>Is this documented somewhere?

It's tradition. Why do you think stdio only defines stdin, stdout and
stderr?

>If I:
>exec 3<foo 4>bar
>under what conditions should I not expect to be able to access these
>files?

Those files are opened to the shell. The shell has every right to
exec a process with the close-on-exec flags on all file descriptors
other than 0, 1, 2 and any file descriptors indicated in the command line
(as in "program 3<foo 4>bar").

The real problem is that the close-on-exec flag was a late arrival
into Unix and so the default exec() behavior remained not to use
the close-on-exec flag (because there was no such flag when it
all started). I think this is a mistake. File descriptors are part
of the environment of a process, and like environment variables,
shouldn't be exported by the shell unless it's done explicitly
(by something like the overloading of the "export" builtin I
described).
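
Setting the flag from C is trivial, which makes its late arrival all
the more unfortunate -- something like this (a sketch; older systems
spell FD_CLOEXEC simply as 1):

	/*
	 * Sketch: mark a descriptor close-on-exec with fcntl(2), so it
	 * stays open in this process but disappears in exec'ed children.
	 */
	#include <fcntl.h>

	int mark_cloexec(int fd)
	{
	    int flags = fcntl(fd, F_GETFD, 0);

	    if (flags < 0)
		return -1;
	    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
	}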
    
Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com

les@chinet.chi.il.us (Leslie Mikesell) (05/20/89)

In article <11540@ulysses.homer.nj.att.com> ekrell@hector.UUCP (Eduardo Krell) writes:

>>>The only standard file descriptors that all programs expect to be
>>>opened are 0, 1, and 2.

>>Is this documented somewhere?

>It's tradition. Why do you think stdio only defines stdin, stdout and
>stderr?

You mean my 3B2's aren't old enough to have an associated tradition?
Here is a tidbit from the "sysadm" command:
exec 4<&0
echo $* | 3<&0 0<&4 4<&- /bin/su - ${cmd}

Should this sort of thing not work?

>Those files are opened to the shell. The shell has every right to
>exec a process with the close-on-exec flags on all file descriptors
>other than 0, 1, 2 and any file descriptors indicated in the command line
>(as in "program 3<foo 4>bar").

Why?  It seems non-intuitive and somewhat antisocial in spite of the fact
that it might fix some other program's problems.

>The real problem is that the close-on-exec flag was a late arrival
>into Unix and so the default exec() behavior remained not to use
>the close-on-exec flag (because there was no such a flag when it
>all started). I think this is a mistake. File descriptors are part
>of the environment of a process, and like environment variables,
>shouldn't be exported by the shell unless it's done explicitly
>(by something like the overloading of the "export" builtin I
>described).

But it would be more intuitive to continue the previous behaviour
unless explicitly told *not* to.  That is, add a notation to the
shell to set the close-on-exec flag when you want it.  Since there
is already an explicit "close file" notation, it seems almost
unnecessary anyhow.  BTW, how does ksh know how far to go with
its file-closing?  I don't recall seeing a handy way to find the
highest allowable fd other than trying them all until you get an
error.  Is that a reasonable thing to do?

Les Mikesell

ekrell@hector.UUCP (Eduardo Krell) (05/21/89)

In article <8494@chinet.chi.il.us> les@chinet.chi.il.us (Leslie Mikesell) writes:

>That is, add a notation to the
>shell to set the close-on-exec flag when you want it.

Yes, this would be a good compromise, but the POSIX draft doesn't
have such notation.

>BTW, how does ksh know how far to go with
>its file-closing?  I don't recall seeing a handy way to find the
>highest allowable fd other than trying them all until you get an
>error.

The ksh configuration scripts determine how many file descriptors
your system supports (by running a test program which does dup()'s until
it fails) and creates a configuration header file which is used to
compile ksh.
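
The probe is nothing fancier than something like this (a guess at its
shape, not the actual program the scripts run):

	/*
	 * Guess at the shape of the probe, not the actual program:
	 * dup() descriptor 0 until it fails, then print how many
	 * descriptors one process was able to hold open.
	 */
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
	    int n = 3;			/* 0, 1 and 2 are already open */

	    while (dup(0) >= 0)
		n++;
	    printf("%d\n", n);
	    return 0;
	}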
    
Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com

les@chinet.chi.il.us (Leslie Mikesell) (05/21/89)

>[Eduardo Krell]

>The ksh configuration scripts determine how many file descriptors
>your system supports (by running a test program which does dup()'s until
>it fails) and creates a configuration header file which is used to
>compile ksh.

Does this mean something nasty will happen if NOFILES is tuned lower
after the compile - or on a system that receives a binary distribution?


Les Mikesell

kre@cs.mu.oz.au (Robert Elz) (05/21/89)

In article <11566@ulysses.homer.nj.att.com>, ekrell@hector.UUCP (Eduardo Krell) writes:
> The ksh configuration scripts determine how many file descriptors
> your system supports (by running a test program which does dup()'s until
> it fails) and creates a configuration header file which is used to
> compile ksh.

Are you deliberately trying to make us all ill?  The number of file
descriptors is (or should be) a kernel configuration parameter (and
I expect to see config parameters like this turn into boot time options
on many kernels soon).

Now recompiling system-dependent stuff like ps when the kernel changes
(though not normally for a minor configuration) is grudgingly acceptable,
but recompiling the shell isn't (I mean, without the shell, how do you
get the new system up to recompile it?)

Not that I really suppose that anything really badly breaks if the shell
is misconfigured this way, but making this kind of information be a
compiled in constant in any program is just asking for trouble.

kre

ekrell@hector.UUCP (Eduardo Krell) (05/21/89)

In article <8501@chinet.chi.il.us> les@chinet.chi.il.us (Leslie Mikesell) writes:

>Does this mean something nasty will happen if NOFILES is tuned lower
>after the compile - or on a system that receives a binary distribution?

You'll be doing extra close()'s, which are harmless. You could patch
the ksh binary if you want. No big deal.
    
Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com

ekrell@hector.UUCP (Eduardo Krell) (05/22/89)

In article <1508@murtoa.cs.mu.oz.au> kre@cs.mu.oz.au (Robert Elz) writes:

>Not that I really suppose that anything really badly breaks if the shell
>is misconfigured this way, but making this kind of information be a
>compiled in constant in any program is just asking for trouble.

The problem is that not all systems ksh runs on have a way of asking for
the maximum number of file descriptors at run time (like getdtablesize() ).

If you know of a portable way to do this (ie, one that works on BSD 4.x,
System V Release 1 through 4, POSIX, etc.), please let me know.
    
Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com

guy@auspex.auspex.com (Guy Harris) (05/23/89)

>If you know of a portable way to do this (ie, one that works on BSD 4.x,
>System V Release 1 through 4, POSIX, etc.), please let me know.

You don't need a portable one.  All you need is a way to tell what kind
of system you have (something that, as far as I know, the "ksh"
configuration script already *has*), so you know whether to:

	1) compile the number in;

	2) fetch it with "getdtablesize" (BSD4.2 and later);

	3) fetch it with "ulimit(4, 0L)" (S5R3.0 and later).
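
In other words, something along these lines (a sketch; the HAVE_*
macros are made-up names a configuration script might define, not
anything standard):

	/*
	 * Sketch of the dispatch.  The HAVE_* macros are hypothetical
	 * names a configuration script might define; they are not standard.
	 */
	long maxfd()
	{
	#ifdef HAVE_GETDTABLESIZE		/* BSD 4.2 and later */
	    extern int getdtablesize();

	    return (long) getdtablesize();
	#else
	#ifdef HAVE_ULIMIT			/* S5R3.0 and later */
	    extern long ulimit();

	    return ulimit(4, 0L);
	#else
	    return 20L;				/* compiled-in guess */
	#endif
	#endif
	}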

greywolf@unisoft.UUCP (The Grey Wolf) (05/23/89)

In article <11566@ulysses.homer.nj.att.com> ekrell@hector.UUCP (Eduardo Krell) writes:
>In article <8494@chinet.chi.il.us> les@chinet.chi.il.us (Leslie Mikesell) writes:
>
>>BTW, how does ksh know how far to go with
>>its file-closing?  I don't recall seeing a handy way to find the
>>highest allowable fd other than trying them all until you get an
>>error.
>
>The ksh configuration scripts determine how many file descriptors
>your system supports (by running a test program which does dup()'s until
>it fails) and creates a configuration header file which is used to
>compile ksh.

Oh, this is just Brilliant.  Does this mean that if I decide to reconfigure
my kernel, I have to recompile my ksh as well?  How many other programs
out there are so braindead that they become obsolete upon reconfiguration
or tuning of one's kernel?  This is stupid.

There exists a system call (in some (good) versions of UNIX) called
getdtablesize(), which returns the size of the per-process descriptor
table.  On systems without it, the only way of determining the size of
the user file descriptor table is by marking your starting point, opening
ad nauseam (marking any open descriptors so they don't get closed
later...) and hoping you hit an error, closing all the descriptors that
weren't open, and returning the value of the last valid descriptor.
It's expensive, but it IS accurate.

If you depend upon a header file (like <sys/param.h>), you are getting
the same "approximation of reality" that ps(1) delivers, since you will
have to recompile your program if your local system administrator decides
to re-tune the kernel for different values of NOFILES (or whatever the
constant is these days).


-- 
"Insane I may be.  I am not stupid."	Antryg Windrose <the mad wizard>

rml@hpfcdc.HP.COM (Bob Lenk) (05/25/89)

>	1) compile the number in;
>
>	2) fetch it with "getdtablesize" (BSD4.2 and later);
>
>	3) fetch it with "ulimit(4, 0L)" (S5R3.0 and later).

	4) fetch it with "sysconf(_SC_OPEN_MAX)" (POSIX)
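
For completeness, a sketch of how that one might be used (sysconf() can
return -1 to say there is no fixed limit, in which case falling back on
the POSIX minimum is one possible choice):

	/*
	 * Sketch of the POSIX flavor.  sysconf() may return -1 to mean
	 * "no fixed limit"; falling back on _POSIX_OPEN_MAX is just one
	 * possible choice in that case.
	 */
	#include <limits.h>
	#include <unistd.h>

	long maxfd(void)
	{
	    long n = sysconf(_SC_OPEN_MAX);

	    return n > 0 ? n : _POSIX_OPEN_MAX;
	}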

		Bob Lenk
		hplabs!hpfcla!rml
		rml@hpfcla.hp.com