[comp.unix.xenix] malloc problems in SVr2/286

dave@micropen (David F. Carlson) (09/29/87)

I read the paper on the 286 port of SVr2 recently and have been "playing"
with malloc() recently to see how I might be able to use it with any
predictability.  (See UNIX Papers, Waite Group, Howard Sams, 1987.)

First, the manuals are incorrect in claiming that one can allocate 64K - 1
bytes (the per-segment limit of the 286 architecture): malloc(3) reserves
about 1K of each segment for internal use.  Empirically, I was unable to
allocate more than 64476 bytes using the Microport malloc(3), which is a
mysterious 34 bytes shy of a full load even allowing for the 1K overhead.

Once memory is allocated, the segment is never released by a corresponding
free(3).  Moreover, only the most recently allocated segment is used to fill
new requests, and since a malloc that requires a new segment asks for
exactly the requested amount of memory (plus the 1K overhead), every large
malloc without an intervening free causes a new segment allocation *per
call*.  (This is only true for relatively large requests; the minimum
segment malloc creates seems to be 4K, including the 1K overhead.)

My personal workaround (especially where many malloc/free pairs are
expected) is to allocate one large block (at the maximum size above) and
then immediately free it.  All subsequent requests then fill out of that
preallocated space rather than forcing the creation of many segments in the
LDT.  (One of my programs had over 75 segments created by small malloc
requests.)  It is "unfortunate" that the '286 malloc(3) can never recover
memory once it has been allocated.  An extra level of indirection in the
malloc allocation routine and a working sbrk(2) would allow a full-featured
malloc(3).  My workaround does not actually help free(3) release the
segment, but most small requests can be satisfied out of the 63K segment,
whose space becomes reusable after each subsequent free(3).  By contrast,
if several small segments are created by small malloc requests before any
free(3), those blocks are never reallocated, since only the most recent
segment is used to fill requests.  Thus, the buried segments may add up
quickly if many small (1K) malloc/free sequences are performed.

Although these results were obtained under Microport, I believe some of
these problems are endemic to the '286 segmentation arrangement, and most
of the rest to an unwillingness to spend the effort to "fix" malloc(3) and
sbrk(2).  Results are untested under any flavor of Xenix.

-- 
David F. Carlson, Micropen, Inc.
...!{seismo}!rochester!ur-valhalla!micropen!dave

"The faster I go, the behinder I get." --Lewis Carroll

greg@gryphon.CTS.COM (Greg Laskin) (10/05/87)

The comments in this posting refer to SCO Xenix SV (2.1.3).

David Carlson writes of some problems with Microport's memory allocation
and speculates that the problems are 80286 architecture related and may
also be present in Xenix implementations.

In article <381@micropen> dave@micropen (David F. Carlson) writes:
>First, the manuals are incorrect in claiming that one can allocate 64K - 1
>bytes (the per-segment limit of the 286 architecture): malloc(3) reserves
>about 1K of each segment for internal use.  Empirically, I was unable to
>allocate more than 64476 bytes using the Microport malloc(3), which is a
>mysterious 34 bytes shy of a full load even allowing for the 1K overhead.

With Xenix you can allocate 65536 - 12 bytes.  The 12 bytes are the
block header for the allocated block.  Some older versions of Xenix
would only permit allocation of 32768 byte segments.

>Once memory is allocated, the segment is never released by a corresponding
>free(3).  Moreover, only the most recently allocated segment is used to fill
>new requests, and since a malloc that requires a new segment asks for
>exactly the requested amount of memory (plus the 1K overhead), every large
>malloc without an intervening free causes a new segment allocation *per
>call*.  (This is only true for relatively large requests; the minimum
>segment malloc creates seems to be 4K, including the 1K overhead.)

Xenix clusters allocations in the same segment until there is no space, then
additional 64K segments are allocated.  Segments are reused when freed.
Allocating, freeing and reallocating a large number of large segments has
been observed to crash the kernel in 2.1.3.

I made these observations by allocating multiple blocks of various sizes,
freeing them, and allocating more blocks repetitively.  This test does
not determine whether, in cases where there is more than one 64k allocation
segment and where Xenix is currently allocating space in the second segment,
Xenix will reuse space freed in the first segment.

>My personal workaround (especially where many malloc/free pairs are
>expected) is to allocate one large block (at the maximum size above) and
>then immediately free it.  All subsequent requests then fill out of that
>preallocated space rather than forcing the creation of many segments in the
>LDT.  (One of my programs had over 75 segments created by small malloc
>requests.)

As previously noted, Xenix makes maximal use of an LDT segment before 
starting another.
>
>Although these results were obtained under Microport, I believe some of
>these problems are endemic to the '286 segmentation arrangement, and most
>of the rest to an unwillingness to spend the effort to "fix" malloc(3) and
>sbrk(2).  Results are untested under any flavor of Xenix.

There are some bugs in Xenix's malloc.  There used to be more than there
are now.  However, most of the problems you've noted are not present
in Xenix and are, thus, not endemic to the '286.
-- 
Greg Laskin   
"When everybody's talking and nobody's listening, how can we decide?"
INTERNET:     greg@gryphon.CTS.COM
UUCP:         {hplabs!hp-sdd, sdcsvax, ihnp4}!crash!gryphon!greg
UUCP:         {philabs, scgvaxd}!cadovax!gryphon!greg