[comp.unix.questions] Is there a limit to create sockets on UNIX??

jian@kuhub.cc.ukans.edu (07/25/90)

I am designing a system on UNIX that frequently creates and closes sockets.
I got the error message "socket: Too many open files" on the last run.  It
seems to me that there is a limit on how many sockets can be created on UNIX.
Is that true?  What is the maximum number of sockets that can be created on
UNIX?  Any help would be appreciated.

Jian Q. Li
jian@kuhub.cc.ukans.edu

kent@opus.austin.ibm.com (Kent Malave') (07/26/90)

	Sounds like a filesystem limitation.  If you are using UNIX
	family sockets, they actually write to file space, and this causes
	files to be opened.  Thus you may have too many open files.
	You might try INET family sockets!
	Just a suggestion.  Also, you can check to see if a process
	can increase the number of files it can have open on your
	system.
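
	(A minimal sketch of such a check, assuming a system that provides
	getrlimit()/setrlimit() with an RLIMIT_NOFILE resource limit; not
	every UNIX does, so treat this as illustrative only:)

	    #include <stdio.h>
	    #include <sys/time.h>
	    #include <sys/resource.h>

	    int main(void)
	    {
	        struct rlimit rl;

	        /* Ask how many file descriptors this process may have open. */
	        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
	            perror("getrlimit");
	            return 1;
	        }
	        printf("soft limit %lu, hard limit %lu\n",
	               (unsigned long) rl.rlim_cur, (unsigned long) rl.rlim_max);

	        /* The soft limit can usually be raised as far as the hard
	         * limit without any special privilege. */
	        rl.rlim_cur = rl.rlim_max;
	        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
	            perror("setrlimit");
	        return 0;
	    }
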
				I hope this helps,
===============================================================================
							Kent Malave'
	...uunet!cs.utexas.edu!ibmchs!auschs!opus.austin.ibm.com!kent
Disclaimer: This is no one's opinion. (Not even mine!)
===============================================================================

jhc@m2jhc.uucp (James H. Coombs) (07/27/90)

In article <2913@awdprime.UUCP> kent@opus.austin.ibm.com (Kent Malave') writes:
>
>	Sounds like a filesystem limitation.

I agree, although I would say that it is not a filesystem limitation
but a limit on the number of files that may be concurrently open for a
single process.

>	You might try INET family sockets!

Internet domain sockets still take a file descriptor in the process
even though no file is created on the filesystem.

--Jim

guy@auspex.auspex.com (Guy Harris) (07/27/90)

>	Sounds like a filesystem limitation.  If you are using UNIX
>	family sockets, they actually write to file space, and this causes
>	files to be opened.  Thus you may have too many open files.

Nope.  "Too many open files" really means "too many open file
descriptors in this process"; a socket in any family uses a file
descriptor, so switching to another family won't help.
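
To make that concrete, here is a minimal sketch (purely illustrative, not
the original poster's code) that opens Internet-family sockets in a loop
until the call fails; on a system with a static per-process limit it stops
with EMFILE, exactly as a loop over UNIX-family sockets or plain open()s
would:

    #include <stdio.h>
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int main(void)
    {
        int n = 0;

        /* Deliberately never close anything: each successful socket()
         * call consumes one file descriptor, whatever the family. */
        while (socket(AF_INET, SOCK_STREAM, 0) >= 0)
            n++;

        if (errno == EMFILE)
            printf("per-process descriptor limit hit after %d sockets\n", n);
        else
            perror("socket");
        return 0;
    }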

>	Just a suggestion.  Also, you can check to see if a process
>	can increase the number of files it can have open on your
>	system.

Yup, the per-process file descriptor limit is the problem.  This limit
varies from 20 to 64 to 256 in some systems.
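
A program can also ask what the limit is at run time instead of guessing;
a small sketch, assuming a system that has the 4.2BSD getdtablesize() call
and the POSIX.1 sysconf() interface (older systems may lack one or both):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* 4.2BSD and derivatives */
        printf("getdtablesize() = %d\n", getdtablesize());

        /* POSIX.1 systems */
        printf("sysconf(_SC_OPEN_MAX) = %ld\n", sysconf(_SC_OPEN_MAX));

        return 0;
    }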

jonathan@speedy.cs.pitt.edu (Jonathan Eunice) (08/03/90)

Guy Harris (guy@auspex.auspex.com) writes:
	"Too many open files" really means "too many open file
	descriptors in this process"; a socket in any family uses a file
	descriptor...

	Yup, the per-process file descriptor limit is the problem.  This limit
	varies from 20 to 64 to 256 in some systems.

While true that most UNIX systems are limited by static resource limits, not 
all are.  

Known counterexample: AIX 3.1 for the RS/6000 (limit = 2000).

Possible counterexamples: System V Release 4 and Apollo DomainOS.

Future counterexample: OSF/1.  

While dynamic allocation is not widely delivered today, the clear trend is
toward making UNIX resource allocation dynamically scalable.  A widespread
interim solution is increasing the static maximum limits.  (E.g., Sun's
recent increase of the per-process open file table from 64 to 256 entries.)

shore@mtxinu.COM (Melinda Shore) (08/03/90)

In article <8290@pitt.UUCP> jonathan@speedy.cs.pitt.edu (Jonathan Eunice) writes:
>While true that most UNIX systems are limited by static resource limits, not 
>all are.  
>
>Known counterexample: AIX 3.1 for the RS/6000 (limit = 2000).

Did you mean to say this?  2000 sure sounds like a [big] limit to me.

Work is being done on dynamically allocating system tables.  Some folks
from DEC gave a presentation on their work in this area at the Summer
'88 Usenix meeting.
-- 
Melinda Shore                             shore@mtxinu.com
mt Xinu                          ..!uunet!mtxinu.com!shore

jonathan@speedy.cs.pitt.edu (Jonathan Eunice) (08/03/90)

Oops! 

In my recent comments on UNIX resource allocation, I said:

   While dynamic allocation is not widely delivered today, the clear trend is
   toward making UNIX resource allocation dynamically scalable.  A widespread
   interim solution is increasing the static maximum limits.

This much is true.  

Unfortunately, whilst upon my soapbox, I failed to see that I had
chosen precisely the wrong time/example to make the point.  The
per-process open file table is one that must be static, given the
necessary constraint of maintaining a fixed-size u area (the area of
user memory the kernel keeps to manage per-process data, such as open
files and sockets).  Other system resources (processes, shared memory
segments, per-system open files, etc.) are controlled by tables that
live in memory that can be much more flexibly controlled.

The correct approach to per-process open files is indeed increasing the
maximum, static number.  SunOS, with its 256 limit, and especially AIX
3.1, with its 2,000 limit, have begun to move the figure above most any
potential problem.

Thanks to observant readers Carl Witty (cwitty@cs.stanford.edu) and Eduardo 
Krell (ekrell@ulysses.att.com) for their corrections.

Btw, I believe, but do not know, that Apollo's DomainOS has scalability
in the number of per-process open files.  DomainOS is built on an
internally-developed, production-oriented operating system base.  It
avoids many scalability problems common in traditional UNIX designs.

guy@auspex.auspex.com (Guy Harris) (08/04/90)

>While true that most UNIX systems are limited by static resource limits, not 
>all are.  

No shit, that's why I said "some systems".  Since he was seeing EMFILE,
he was obviously working on a system limited by a static limit on the
number of file descriptors that any one process can open.

>Known counterexample: AIX 3.1 for the RS/6000 (limit = 2000).

That sounds like a static limit to me, albeit a large one.

>Possible counterexamples: System V Release 4

No, S5R4 works the same way SunOS 4.1 does.

guy@auspex.auspex.com (Guy Harris) (08/05/90)

>The per-process open file table is one that must be static, given the
>necessary constraint of maintaining a fixed-size u area (the area of
>user memory the kernel keeps to manage per-process data, such as open
>files and sockets).

And given the assumption that the open file table resides entirely in the
U area.  This assumption is not true in SunOS 4.1, for example; the
first 64 entries live there, the rest, if there are any, are, well,
"controlled by tables that live in memory that can be much more flexibly
controlled."

That memory isn't paged or swapped, but I don't know that u areas are
paged or swapped in all UNIX systems either; and even if they are, if
few enough processes use more than the number of descriptors that can
fit in the U area, the memory cost of wired-down open file tables for
those processes may not be too bad.

>Btw, I believe, but do not know, that Apollo's DomainOS has scalability
>in the number of per-process open files.  DomainOS is built on an
>internally-developed, production-oriented operating system base.  It
>avoids many scalability problems common in traditional UNIX designs.

Domain/OS may implement the open file table in userland; that means it
may "live in memory that can be much more flexibly controlled."

The problem with traditional UNIX systems here is that they haven't been
as hospitable to sharing data structures between processes as other
systems, perhaps due, in part, to less-ambitious original design goals,
and perhaps due, in part, to the PDP-11's small number of large memory
management chunks; in general, shared stuff has lived in the kernel.

UNIX systems these days live on hardware a bit more hospitable to
sharing multiple independent pieces of a process's address space, and
mechanisms to allow that sort of sharing are appearing in more UNIX
systems.  Mach's user-mode implementation of the UNIX programmer's
interface may already use Mach's mechanisms of that sort to manage file
descriptors (the USENIX paper leads me to infer that, but it doesn't
come out and say it, at least not from my quick 2-minute scan).

Aegis/DomainOS also lives on hardware of that sort, and happens to be
designed to use it in that fashion.  Whether this makes it more
"production-oriented" is another matter, and one I'd leave up to people
who can indicate how reliable, etc., the Domain/OS systems they used were
as compared to whatever "conventional UNIX" systems they were using; you'll
probably find people in both camps....

boyd@necisa.ho.necisa.oz (Boyd Roberts) (08/08/90)

In article <8307@pitt.UUCP> jonathan@speedy.cs.pitt.edu (Jonathan Eunice) writes:
>
>The correct approach to per-process open files is indeed increasing the
>maximum, static number.  SunOS, with its 256 limit, and especially AIX
>3.1, with its 2,000 limit, have begun to move the figure above most any
>potential problem.

Sorry?  Increasing the maximum to an absurdly large value is not a
`correct approach'.  Do it properly and make it truly dynamic.

The last thing we need is a u-area with a u_ofile[2000] declaration.
How much RAM will be blown away by those 2000 (struct file *)'s?
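
(Back-of-the-envelope, assuming 4-byte pointers as on most 32-bit machines:
roughly 8 KB of wired u-area per process, whether or not the process ever
opens more than a handful of descriptors.  A trivial sketch of the
arithmetic, not taken from any real kernel:)

    #include <stdio.h>

    struct file;    /* opaque here; only the pointer size matters */

    int main(void)
    {
        unsigned long nofile = 2000;    /* hypothetical table size */
        unsigned long bytes  = nofile * sizeof(struct file *);

        printf("u_ofile[%lu] = %lu bytes of u area per process\n",
               nofile, bytes);
        return 0;
    }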

And how many programs actually require 2000 open files?  Very few.


Boyd Roberts			boyd@necisa.ho.necisa.oz.au

``When the going gets weird, the weird turn pro...''