[net.unix] How can I get bigger processes on 4.1BSD?

wcs@ho95b.UUCP (01/05/84)

The users on my system would like to have 12 MB processes for some
large simulation programs.  (I have a VAX 11/780 with 4MB of memory.)
I followed the "Operating/Installing
4.1BSD" instructions, which say to increase MAXTSIZ, MAXDSIZ, and
MAXSSIZ in sys/h/vmparam.h.  Whenever a process gets bigger than
about 8 Meg, the system crashes.  I'm using interleaved swap across
two hp?b partitions, and I don't get an "out of swap space"
message.
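
For reference, the edits amount to something like this (only a sketch;
the exact form and units in the distributed header may differ, so
follow what's already there):

	/*
	 * sys/h/vmparam.h -- per-process segment limits (illustrative
	 * only; the distributed header may express these in clicks
	 * rather than bytes, so keep to the form already in the file).
	 */
	#define	MAXTSIZ	(12*1024*1024)		/* max text size */
	#define	MAXDSIZ	(12*1024*1024)		/* max data size */
	#define	MAXSSIZ	(12*1024*1024)		/* max stack size */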

Has anybody successfully done this?  What should I check?  Any help
will be much appreciated.

			Bill Stewart at BTL-Holmdel
			ucbvax!ihnp4!ho95b!wcs
			decvax!harpo!ho95b!wcs
			......!{BTL}!ho95b!wcs

Mike.Accetta%cmu-cs-ius@sri-unix.UUCP (01/12/84)

Bill,

What panic message are you getting?  You probably also have to change
the definition of NDMAP in h/dmap.h.  The setup document mentions this
file but neglects to describe what constants to change.  We were able
to use 12Mb data segment sizes after doubling this constant from 16 to
32.
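
For concreteness, the change is something like this (the struct
declaration below is a sketch from memory, so check it against your
own h/dmap.h before editing):

	/* h/dmap.h -- sketch, not an exact copy of the distributed header */
	#define	NDMAP	32	/* was 16; entries in each per-segment swap map */

	struct dmap {
		swblk_t	dm_size;	/* current size of the segment */
		swblk_t	dm_alloc;	/* swap space actually allocated to it */
		swblk_t	dm_map[NDMAP];	/* first block of each swap chunk */
	};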

To explain, the paging system allocates chunks of paging space for the
data segment in geometrically increasing sizes, starting at DMMIN and
doubling up to a maximum of DMMAX.  It stores the pointer to the
beginning of each of these chunks in the dm_map array.  Paging area
memory for a large process would thus get allocated something like this:

	Chunk	Size (sectors)
	  0		32	\
	  1		64	|
	  2		128	|	< .5 Mb
	  3		256	|
	  4		512	/
	  5		1024	\
	  6		1024	|
	  7		1024	|
	  8		1024	|
	  9		1024	|
	 10		1024	|	5.5 Mb
	 11		1024	|
	 12		1024	|
	 13		1024	|
	 14		1024	|
	 15		1024	/

As you can see, this causes the dm_map array to run out of room
slightly before the process size can reach 6Mb.  By adding another 16
elements to the end of the array, you gain 16*.5Mb = 8Mb more
process address space, for a maximum of slightly under 14Mb (actually,
the minimum amount by which you need to increase NDMAP is more like 12
or 13 for 12Mb data segments, depending on how you define MAXDSIZ).
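
If you want to work the number out for your own MAXDSIZ, the
arithmetic is easy to script.  Here is a rough sketch (it assumes
512-byte sectors and the distributed DMMIN of 32 and DMMAX of 1024
sectors; plug in your own values):

	/*
	 * Rough sketch: how many dm_map entries does a data segment of
	 * a given size need?  Assumes 512-byte sectors and the
	 * distributed DMMIN=32, DMMAX=1024 (both in sectors).
	 */
	#include <stdio.h>

	#define	DMMIN	32		/* smallest swap chunk, in sectors */
	#define	DMMAX	1024		/* largest swap chunk, in sectors */

	int
	main()
	{
		long left = 12L * 1024 * 1024 / 512;	/* 12Mb, in sectors */
		long chunk = DMMIN;
		int entries = 0;

		while (left > 0) {
			left -= chunk;		/* hand out one more chunk */
			entries++;
			if (chunk < DMMAX)	/* sizes double up to DMMAX */
				chunk *= 2;
		}
		printf("%d dm_map entries needed\n", entries);
		return 0;
	}

For a 12Mb data segment it prints 29 entries, i.e. 13 more than the
distributed 16, which is where the 12-or-13 figure above comes from.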

When you change this constant, it is also advisable to recompile the
various user programs that include this file (w, ps, and pstat are the
ones that come to mind immediately); they examine the user area of a
process and grab the command-line arguments from the paging area when
the process is not resident.

			- Mike Accetta

thomas@utah-gr.UUCP (Spencer W. Thomas) (01/16/84)

Following up on Mike Accetta's message, I can maybe shed a little more
light on the subject of increasing process size.  We just did this here,
and I operated from two criteria:
	1. No user programs should need to be recompiled.
	2. System resource usage should not be drastically increased.

There are a couple of parameters you can adjust to get more data space:
NDMAP and DMMAX.  These control the amount of swap space that can be
allocated, which is the limiting factor on process growth.
NDMAP says how many entries are in the swap map; there are two of these
maps in the user struct, one for the data segment and one for the stack
segment.  DMMAX controls how big the swap chunks get.  You can increase
data size by doubling (and redoubling) DMMAX without changing the size
of the U struct (and thus avoiding the necessity of recompiling N user
programs).  The side effect is that (internal) fragmentation of the
swap area increases.  I have seen a recommendation that you have at
least 96Mb of swap (3 * 32Mb) for a DMMAX of 4096.  Now, obviously this
depends on your job mix.  Anyway, if you only beat on DMMAX (which is in
autoconfig.c, by the way, at least in 4.2), you can get:
	DMMAX=1024 (default)	MAXDSIZ=6Mb (approx., i.e., 12*1024 ...)
	DMMAX=2048		MAXDSIZ=11Mb (-32-slop, again)
	DMMAX=4096		MAXDSIZ=20Mb

The effect of DMMAX is that any process larger than DMMAX-32 sectors
gets the rest of its swap space allocated in chunks of DMMAX sectors
(so 512 Kbytes per chunk in the distributed system, up to 2Mb with
DMMAX=4096).
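
The arithmetic behind that table is easy to check; here is a rough
sketch (assuming 512-byte sectors, DMMIN=32, and the distributed NDMAP
of 16):

	/*
	 * Rough sketch: maximum data segment size reachable with the
	 * distributed NDMAP of 16 entries, for a few values of DMMAX.
	 * Assumes 512-byte sectors and DMMIN=32.
	 */
	#include <stdio.h>

	#define	NDMAP	16		/* dm_map entries per segment */
	#define	DMMIN	32		/* smallest swap chunk, in sectors */

	int
	main()
	{
		long dmmax, chunk, total;
		int i;

		for (dmmax = 1024; dmmax <= 4096; dmmax *= 2) {
			chunk = DMMIN;
			total = 0;
			for (i = 0; i < NDMAP; i++) {
				total += chunk;		/* one chunk per map entry */
				if (chunk < dmmax)	/* sizes double up to DMMAX */
					chunk *= 2;
			}
			printf("DMMAX=%ld	max data size about %.1f Mb\n",
			    dmmax, total * 512.0 / (1024 * 1024));
		}
		return 0;
	}

That reproduces the 6/11/20Mb figures, and it also shows where the
fragmentation comes from: the last chunk is handed out whole, DMMAX
sectors at a time, even if the process only touches a few pages of it.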

Luckily for us, 11Mb is large enough (for now).

=Spencer