[comp.sys.mac.programmer] THINK C malloc

rosen@cs.utexas.edu (Eric Carl Rosen) (02/27/91)

I've been following the thread ">Novice question about malloc() in Think "
with much interest, hoping to see some hint about the strange problem I am
now having with malloc() under THINK C. Before I detail my problem, let me
state that I have never found a problem "with the compiler" that did not
eventually turn out to be a problem with MY code. I hesitate to post this,
but I am out of ideas ...

I am writing several small programs to do DSP on small sets of data.
Data size is typically N = 1024 samples; at 10 bytes per floating-point value,
I allocate several blocks of 10240 bytes. I am able to do this about five or
six times (it varies) when all of a sudden, and for no reason that I can figure
out, malloc() [or calloc() when I use it instead] does not return. After
several seconds (or minutes), I drop into Macsbug and discover that malloc()
is several calls deep and seems to be caught in a tight infinite loop.
If I force the loop to end, malloc() immediately returns and everything seems
to work, until the next call to malloc().
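The allocation pattern looks roughly like the following portable-C sketch (the sizes are taken from the description above; the function name and loop structure are mine, not the actual code):

```c
#include <stdlib.h>

#define N 1024              /* samples per block */
#define SAMPLE_BYTES 10     /* THINK C floating-point values are 10 bytes */
#define NBLOCKS 6           /* roughly where malloc() starts to hang */

/* Allocate several 10240-byte blocks, as in the failing program;
   returns the number of blocks successfully obtained. */
int grab_blocks(void *blocks[])
{
    int i;
    for (i = 0; i < NBLOCKS; i++) {
        blocks[i] = malloc((size_t)N * SAMPLE_BYTES);   /* 10240 bytes */
        if (blocks[i] == NULL)
            return i;       /* out of memory after i blocks */
    }
    return NBLOCKS;
}
```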

Initially, I thought I was running out of memory in my project, so I increased
the default partition from 384 KB to 1024 KB. I know "About the Finder" isn't
a terribly reliable way to monitor memory usage, but my running program is
nowhere near (under 25% of) the 1024 KB limit. The problem persists.

I am a fairly experienced C programmer, and I am quite certain that I am not
inadvertently writing where I am not supposed to, or corrupting memory in some
manner. If I use a much smaller datasize such as N = 64, there are no problems.

I looked at the source to malloc() and discovered that it treats requests for
blocks smaller than 15000 bytes differently than larger requests. For larger
requests, it merely calls NewPtr(). For smaller requests, it seems to use a
local pool, and possibly performs compaction, but I can't tell. I got around
my problem by declaring blocks of twice the needed size (20480 bytes) and
only using the first 10240 bytes in each block. malloc() handles these requests
with no problems.
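The workaround amounts to padding every request past the pool threshold so malloc() falls through to NewPtr() instead. A minimal sketch, assuming the 15000-byte cutoff described above (the wrapper name is hypothetical; plain doubling, as I did, works just as well):

```c
#include <stdlib.h>

#define POOL_THRESHOLD 15000L  /* requests at or below this size go through
                                  THINK C's local pool, per the source */

/* Hypothetical wrapper: inflate small requests past the threshold so the
   small-block pool is bypassed entirely. Wasteful, but it sidesteps the
   hang. Only the first `want` bytes of the result are actually used. */
void *malloc_bypass_pool(size_t want)
{
    size_t ask = want;
    if (ask <= POOL_THRESHOLD)
        ask = POOL_THRESHOLD + 1;   /* or simply 2 * want */
    return malloc(ask);
}
```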

Another thing to note is that not just _my_ requests to malloc() hang. If I
call fopen(), fopen() will hang and MacsBug reveals that it is looping in the
same place (presumably fopen() calls malloc() to get space for the FILE struct
that it returns a pointer to, among other things). Of course, this only starts
to happen after I've allocated about 100 KB and free()'d about 40 KB. If I
don't free the 40 KB, the problem still occurs.

Now, what I think is happening is, somehow, something in malloc()'s local pool
storage table is getting corrupted. Maybe I am doing this somehow. Maybe there
is a problem with malloc()? I am posting this to see if anybody has experienced
this behavior before.

For those that are interested, I do #include <stdlib.h>, I typecast the
arguments to malloc() and calloc() to (size_t) as appropriate, I use prototypes,
and I am using the correct version of the ANSI library. I have also 
reinstalled my entire THINK C 4.0 package from the locked master diskettes and
rebuilt the project. The problem persists.

Your comments, posted or e-mailed, are appreciated. If I have provided too few
details, I can elaborate. Thanks.

--Eric

rosen@cs.utexas.edu (Eric Carl Rosen) (02/27/91)

With the help of two suggestions, I found my problem. It was indeed with MY
code; I had overstepped the bounds of my allocated block by 20 bytes, zeroing
those positions. This corrupted malloc()'s internal pool storage mechanism.
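For the curious, 20 bytes is exactly two 10-byte samples, so the bug was probably a clearing loop that ran two elements too far. This is a guess at its shape (I'm not reproducing my actual code here), along with the fix:

```c
#include <stdlib.h>
#include <string.h>

#define N 1024              /* samples per block */
#define SAMPLE_BYTES 10     /* THINK C floating-point values are 10 bytes */

/* Clearing two elements too many writes 20 bytes past the end of the
   10240-byte block, smashing whatever pool bookkeeping malloc() keeps
   just beyond it. Buggy shape (do NOT do this):
       for (i = 0; i < N + 2; i++)
           memset(buf + i * SAMPLE_BYTES, 0, SAMPLE_BYTES);
   Corrected version: */
void clear_samples(char *buf)
{
    memset(buf, 0, (size_t)N * SAMPLE_BYTES);   /* exactly 10240 bytes */
}
```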

Although many of the replies I received expressed skepticism about malloc(),
I'm glad to report that, in my experience, malloc() remains reliable.

--Eric