[gnu.emacs.bug] Malloc policy/bug

grossman@polya.Stanford.EDU (Stu Grossman) (04/04/89)

I was just playing around with some things in emacs that require lots of
memory, when I noticed something that seemed rather strange.  Whenever I
allocate a large hunk of memory, emacs grows -- so far this is normal.  When I
deallocate that memory, and then allocate a smaller (but still large) hunk of
memory, emacs still grows!

A little investigation reveals that internally, malloc() is only dealing with
several discrete sizes of memory (powers of 2 in this case).  It maintains a
vector called nextf which points to the heads of linked lists of free memory.
All memory on each linked list is the same size.  When you want to find memory
of a particular size, you basically index into nextf by log2 of that size.
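For concreteness, here is a rough sketch (in C) of that size-to-bucket mapping.  The names and the "8 << bucket" sizing are my own guesses for illustration; the real malloc.c keeps extra per-chunk overhead and differs in details:

	/* Sketch only -- not the actual malloc.c source.  nextf[i] heads a
	   free list of equal-sized chunks; a request is rounded up to the
	   next power of two and mapped to a bucket index.  */

	struct freehdr {
	    struct freehdr *next;          /* next free chunk of this size */
	};

	#define NBUCKETS 30
	static struct freehdr *nextf[NBUCKETS];

	static int
	bucket_for (unsigned long size)
	{
	    int b = 0;
	    unsigned long chunk = 8;       /* assumed smallest chunk size */

	    while (chunk < size) {         /* round up to a power of two */
	        chunk <<= 1;
	        b++;
	    }
	    return b;
	}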

Now, if there is no free memory at the size that you requested (as indicated
by nextf[log2(size)] == null), then malloc() calls morecore() which will
acquire memory from the operating system.  Here's the rub!  It turns out that
if there was a hunk of free memory LARGER than what you wanted (further on in
nextf), it was never looked at!
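The allocation path then looks roughly like this (again only a sketch, continuing the declarations above; morecore() here stands in for whatever actually obtains fresh memory from the kernel for that bucket):

	extern void morecore ();           /* stand-in: get more memory from the OS */

	char *
	malloc_sketch (unsigned long size)
	{
	    int b = bucket_for (size);
	    struct freehdr *p;

	    if (nextf[b] == 0)
	        morecore (b);              /* bucket empty: go straight to the OS,
	                                      never looking at nextf[b+1], ...  */
	    p = nextf[b];
	    if (p == 0)
	        return 0;                  /* morecore could not get anything */
	    nextf[b] = p->next;            /* pop the chunk off its free list */
	    return (char *) p;
	}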

So, in my case, I had originally allocated some memory that would have lived
at nextf[17].  Then, that was freed back to nextf[17].  Later on, I allocated
some memory that would have lived at nextf[16].  Although nextf[17] contained
more than enough memory to satisfy my request, it was never looked at, and
memory was created by morecore() for nextf[16].

Therefore malloc() will not hand out a free chunk that is larger than the
requested size class.  It will just ask the kernel for more memory instead.

Is this a policy, or a bug?  I could see this sort of behavior by malloc() as
a policy to prevent fragmentation.  If that were the case, then why not just
hand out the entirety of the first large-enough chunk that you can find?  This
sort of policy seems to be indicated by the following comment from malloc.c:

> Blocks that don't exactly fit are passed up to the next larger size.

Or perhaps you don't care that much about fragmentation.  In that case, why
not split the larger chunk into several pieces?
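A splitting version might look something like the sketch below (purely hypothetical, continuing the same assumed declarations; it ignores the per-chunk overhead and alignment that the real allocator has to handle):

	/* Hypothetical: if bucket b is empty, take a chunk from the first
	   non-empty larger bucket and split it in half repeatedly until a
	   chunk of the wanted size falls out.  */
	static struct freehdr *
	take_and_split (int b)
	{
	    int big;
	    struct freehdr *p;

	    for (big = b + 1; big < NBUCKETS; big++)   /* find a larger free chunk */
	        if (nextf[big] != 0)
	            break;
	    if (big == NBUCKETS)
	        return 0;                              /* nothing larger is free either */

	    while (big > b) {
	        struct freehdr *chunk = nextf[big];
	        unsigned long half = 8UL << (big - 1); /* size of a bucket big-1 chunk,
	                                                  assuming the 8 << bucket sizing */
	        struct freehdr *upper = (struct freehdr *) ((char *) chunk + half);

	        nextf[big] = chunk->next;              /* remove the big chunk ...   */
	        chunk->next = upper;                   /* ... and push both halves   */
	        upper->next = nextf[big - 1];          /* onto the next smaller list */
	        nextf[big - 1] = chunk;
	        big--;
	    }

	    p = nextf[b];                              /* hand back one chunk of the */
	    nextf[b] = p->next;                        /* requested size             */
	    return p;
	}

Something along those lines would have let the request in my example above be
satisfied out of the old nextf[17] chunk instead of going back to the kernel.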

		Stu Grossman