[comp.lang.c] Heap in Turbo C 2.0

schikore@mentor.cc.purdue.edu (Dan Schikore) (04/15/91)

Is there any way to measure the actual space left on the heap in Turbo C?

The way I understand it, coreleft() is not an accurate measure if I have
free'd some of the memory.

Similarly, when I malloc and an address is returned, as far as I know the
actual size of the block allocated may be more than I ask for.  Can I find
out how large a specific block returned by malloc is??

Thanks for any help..

-Dan Schikore
schikore@mentor.cc.purdue.edu

jeenglis@alcor.usc.edu (Joe English) (04/15/91)

schikore@mentor.cc.purdue.edu (Dan Schikore) writes:

>Is there any way to measure the actual space left on the heap in Turbo C?
>The way I understand it, coreleft() is not an accurate measure if I have
>free'd some of the memory.

Not really.  Even if you were able to figure out how much 
space is left in the free list, that information wouldn't 
do you any good since the heap may be fragmented and not
all of it would be available as a contiguous chunk.
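
As I understand it, coreleft() only reports the unused region
between the top of the heap and the stack, so blocks you free()
in the middle of the heap go back on malloc's free list without
changing the figure.  A quick illustration (my own sketch, small
data model assumed -- the large data models use farcoreleft()
and an unsigned long instead):

    #include <stdio.h>
    #include <stdlib.h>
    #include <alloc.h>          /* Turbo C: coreleft() */

    int main(void)
    {
        char *a, *b, *c;

        printf("coreleft at start:          %u\n", coreleft());

        a = malloc(1000);       /* error checks omitted for brevity */
        b = malloc(1000);
        c = malloc(1000);
        printf("after three mallocs:        %u\n", coreleft());

        /* The middle block goes back on malloc's free list, but it
           sits below the top of the heap, so coreleft() typically
           reports the same number as before. */
        free(b);
        printf("after freeing middle block: %u\n", coreleft());

        free(a);
        free(c);
        return 0;
    }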

>Similarly, when I malloc and an address is returned, as far as I know the
>actual size of the block allocated may be more than I ask for.  Can I find
>out how large a specific block returned by malloc is??

It's best to assume that malloc returns exactly as much as
you asked for (if it succeeds, that is).  Even if the allocator
rounds the request up internally, there is no portable way to
find out the real block size, so treat it as exactly the size
you requested.

It sounds like you're having a problem with really tight memory
requirements.  You may find it useful to do your own memory
management instead of relying on malloc() for every request.
Some techniques I've found useful:

* If you have a tree- or graph-based structure and you're
  mallocking lots of nodes, write a special-purpose node allocator
  that grabs a big chunk of memory and gives out node-sized pieces
  one at a time.  You can also maintain a linked list of freed
  nodes so they get handed out again before a new chunk is carved
  up.  (There's a rough sketch of this after the list.)

* For data structures that grow monotonically, but by varying
  amounts (for example, if you're processing a text file one line
  at a time), a structure that I call a "pile" is useful.  (It's
  midway between a stack and a heap.)  Again, you malloc off
  a big chunk of memory and add data to the end of the pile
  only.  When you hit the end of the chunk, malloc another
  chunk and start using that one instead.  When you're
  through with the entire structure, you free() all of the
  previously allocated chunks at once.  (Also sketched below.)
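
Here's roughly what the node allocator looks like.  This is only
a sketch -- the chunk size, names, and the trick of reusing the
'left' pointer as the free-list link are mine, just for
illustration:

    #include <stdlib.h>

    #define NODES_PER_CHUNK 128         /* arbitrary: tune to taste */

    struct node {                       /* example fixed-size node */
        int key;
        struct node *left, *right;
    };

    /* Freed nodes are threaded onto this list and handed out again
       before a new chunk is carved up. */
    static struct node *free_list = NULL;

    /* Current chunk; slots next_slot..NODES_PER_CHUNK-1 are unused. */
    static struct node *chunk = NULL;
    static int next_slot = NODES_PER_CHUNK;

    struct node *node_alloc(void)
    {
        struct node *n;

        if (free_list != NULL) {            /* recycle a freed node */
            n = free_list;
            free_list = free_list->left;    /* 'left' doubles as the link */
            return n;
        }
        if (next_slot == NODES_PER_CHUNK) { /* current chunk used up */
            chunk = malloc(NODES_PER_CHUNK * sizeof(struct node));
            if (chunk == NULL)
                return NULL;
            next_slot = 0;
        }
        return &chunk[next_slot++];
    }

    void node_free(struct node *n)
    {
        n->left = free_list;                /* push onto the free list */
        free_list = n;
    }

One malloc() per 128 nodes instead of one per node means far fewer
blocks for malloc to keep track of and far less per-block overhead.
(A fuller version would also remember the chunks themselves so they
could be released at the end.)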
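
And a sketch of the pile, again with made-up names and chunk size.
The storage returned here is only char-aligned, which is fine for
text lines; you'd want to round 'used' up for anything fussier:

    #include <stdlib.h>

    #define PILE_CHUNK_SIZE 4096u       /* arbitrary chunk size */

    struct pile_chunk {
        struct pile_chunk *prev;        /* chain of all chunks in the pile */
        size_t used;                    /* bytes handed out from data[] */
        char data[PILE_CHUNK_SIZE];
    };

    /* A pile starts out empty:  struct pile p = { NULL }; */
    struct pile {
        struct pile_chunk *top;         /* chunk currently being filled */
    };

    /* Hand out 'size' bytes from the end of the pile.  Data is only
       ever added; nothing is freed until the whole pile goes away. */
    void *pile_alloc(struct pile *p, size_t size)
    {
        struct pile_chunk *c = p->top;

        if (size > PILE_CHUNK_SIZE)
            return NULL;                /* too big for a single chunk */

        if (c == NULL || c->used + size > PILE_CHUNK_SIZE) {
            c = malloc(sizeof(struct pile_chunk));
            if (c == NULL)
                return NULL;
            c->prev = p->top;
            c->used = 0;
            p->top = c;
        }
        c->used += size;
        return c->data + c->used - size;
    }

    /* Release every chunk in the pile at once. */
    void pile_free(struct pile *p)
    {
        struct pile_chunk *c = p->top;

        while (c != NULL) {
            struct pile_chunk *prev = c->prev;
            free(c);
            c = prev;
        }
        p->top = NULL;
    }

Reading a file line by line, you'd pile_alloc(strlen(line) + 1)
bytes, copy the line in, and never touch the allocator again until
one pile_free() at the end hands everything back.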

The basic idea is that special-purpose allocators can be more
efficient and carry less overhead than general-purpose allocators.
If memory is tight, you should look into better allocation schemes
instead of trying to take advantage of various quirks in the
'malloc' implementation.


--Joe English

  jeenglis@alcor.usc.edu