[comp.lang.c] Does TC's farrealloc have a bug?

gram@uctcs.uucp (Graham Wheeler) (06/19/91)

I am reposting this article, as I never saw the original appear in nn and have
had no responses, so I assume that it got lost.

I have an application in which I am allocating a number of variable-sized
records, of average size about 6 bytes. Whenever the size changes, I use
farrealloc. I have over 300kb available at the time I start allocating these
nodes. I also keep track of the number of allocated bytes. My problem is that
I get a memory allocation failure at about 120kb of allocated memory. This
must mean one of two things:

i) either there is a bug in TC (actually TC++ v1.0) so that when realloc
	fails to resize a block and allocates a new one it doesn't free the
	old one; or
ii) the allocated blocks are being held on a linked list with an overhead of
	(I would guess) 12 bytes per block (two far pointers and one long size).
	This would mean that my nodes are taking up three times the memory
	actually needed for record storage.

Personally, I think it is better to keep the free blocks on a linked list, as
you get the maximum use from your memory that way. I don't know how TC does
it.

Does anyone know which of these two theories is correct? Or is there a 
different explanation?

Graham Wheeler <gram@cs.uct.ac.za> | "That which is weak conquers the strong,
Data Network Architectures Lab     | that which is soft conquers the hard.
Dept. of Computer Science          | All men know this; none practise it"
University of Cape Town            |		Lao Tzu - Tao Te Ching Ch.78

msmith%peruvian.utah.edu@cs.utah.edu (Matthew Smith) (06/20/91)

In article <1991Jun19.083945.8921@ucthpx.uct.ac.za> gram@uctcs.uucp (Graham Wheeler) writes:
>I am reposting this article as I never saw the original appear when using nn
>and have had no responses, so I assume that it got lost.
>
>I have an application in which I am allocating a number of variable sized
>records, of average size about 6 bytes. Whenever the size changes, I use
>farrealloc. I have over 300kb available at the time I start allocating these
>nodes. I also keep track of the number of allocated bytes. My problem is that
>I get a memory allocation failure at about 120kb of allocated memory. This
>must mean one of two things:
>
>i) either there is a bug in TC (actually TC++ v1.0) so that when realloc
>	fails to resize a block and allocates a new one it doesn't free the
>	old one; or
>ii) the allocated blocks are being held on a linked list with an overhead of
>Graham Wheeler <gram@cs.uct.ac.za> | "That which is weak conquers the strong,
>Data Network Architectures Lab     | that which is soft conquers the hard.
>Dept. of Computer Science          | All men know this; none practise it"
>University of Cape Town            |		Lao Tzu - Tao Te Ching Ch.78

I've run across a similar situation.  I was using realloc as opposed to
farrealloc, and the same thing happened there.  My conclusion, after looking
at several examples in the manuals, is that they DON'T free the old block.

I reached this conclusion because in all the examples they keep a dummy
pointer to the current block, call realloc with their usual pointer, and then
call free() with the dummy pointer (which still points to the old block).
That's the way they want you to do it.  I know, it really sucks.
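
Here's a minimal sketch of the idiom I mean (my own reconstruction, not
copied from the Borland manual):

    #include <stdlib.h>

    int main(void)
    {
        char *p = malloc(6);
        char *old;

        if (p == NULL)
            return 1;

        old = p;                /* dummy pointer to the current block */
        p = realloc(p, 12);     /* may move the block, or return NULL */
        if (p == NULL) {
            free(old);          /* on failure the old block is still
                                   allocated, so release it ourselves */
            return 1;
        }
        /* on success, only 'p' should be used (or freed) from here on */

        free(p);
        return 0;
    }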


Matt Smith
msmith@peruvian.utah.edu

alexande@borland.com (Mark Alexander) (06/20/91)

(Email bounced yet again, sorry.)

The far heap in BC++ has 16-byte granularity, and 4 bytes of overhead
per block.  What this means is that each block is allocated on a
16-byte boundary, and the first four bytes of each block are reserved.
If you look at the pointers returned by farmalloc() and farrealloc(),
you'll see that they're all in the form xxxx:0004.

So if your program allocates a 6-byte block, you'll actually use up 16
bytes.  A 12-byte allocation would also use 16 bytes, but a 13-byte
allocation would use 32 bytes.
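
You can see this for yourself with a quick check along these lines (an
untested sketch, using the FP_SEG/FP_OFF macros from <dos.h>):

    #include <stdio.h>
    #include <dos.h>
    #include <alloc.h>

    int main(void)
    {
        void far *a = farmalloc(6);
        void far *b = farmalloc(6);

        /* Both offsets should print as 0004, and if the two blocks
           happen to be adjacent the segments will differ by one
           paragraph, i.e. each 6-byte request consumed 16 bytes. */
        printf("a = %04X:%04X\n", FP_SEG(a), FP_OFF(a));
        printf("b = %04X:%04X\n", FP_SEG(b), FP_OFF(b));

        farfree(b);
        farfree(a);
        return 0;
    }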

One solution is to allocate two of your 6-byte structures in one
call to farmalloc(), but this may make your program ugly.
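
Something along these lines, with a hypothetical 6-byte record type:

    #include <alloc.h>

    struct rec { short a, b, c; };   /* hypothetical 6-byte record */

    /* Two records per call: 12 bytes of payload (plus the 4 bytes of
       overhead) still fit in a single 16-byte heap block. */
    struct rec far *alloc_pair(void)
    {
        return (struct rec far *) farmalloc(2 * sizeof(struct rec));
    }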

kooijman@duteca.et.tudelft.nl (Richard Kooijman) (06/20/91)

msmith%peruvian.utah.edu@cs.utah.edu (Matthew Smith) writes:
>>
>>i) either there is a bug in TC (actually TC++ v1.0) so that when realloc
>>	fails to resize a block and allocates a new one it doesn't free the
>>	old one; or
>>ii) the allocated blocks are being held on a linked list with an overhead of

>I've run across a similar situation.  I was using realloc as opposed to
>farrealloc, and the same thing happened there.  My conclusion after looking
>at several examples in the manuals is that they DON'T delete the old block.

>This conclusion was achieved by the fact that in all examples, they had a  
>dummy pointer pointing to the current block, called realloc with their usual
>pointer, and then called free() with their dummy pointer (which really 
>pointed to their old block).  That's the way they want you to do it.  I know,
>it really sucks.

I do not understand what you mean by this. In my opinion the problem is
caused by overhead and memory fragmentation.

The overhead isn't 12 bytes, I think, but 8 bytes, as somebody mentioned here
a while ago.

Richard.

colburn@tessa (alex colburn) (06/20/91)

In article <1991Jun19.083945.8921@ucthpx.uct.ac.za> gram@uctcs.uucp (Graham Wheeler) writes:
>I am reposting this article as I never saw the original appear when using nn
>records, of average size about 6 bytes. Whenever the size changes, I use
>farrealloc. I have over 300kb available at the time I start allocating these
>nodes. I also keep track of the number of allocated bytes. My problem is that
>I get a memory allocation failure at about 120kb of allocated memory. This
	I don't think your problem lies in Borland's internal memory
management systems; it would be kind of silly for them to write something
that uses twice as much memory as it is managing.  Is this error message
something you get, and then it's time for a reboot?  If so, I bet you're
referencing invalid addresses.
   	How are you keeping track of your reallocated memory?  If you have
some sort of linked list data structure, when you farrealloc you just might
be scrambling pointers.  For debugging purposes, try doing a farheapcheck()
after each memory function so you can determine the exact point where things
go buggy.  If you have memory blocks greater than 64K, the pointer must be
explicitly declared as huge (this one threw me for the last week and a half);
otherwise address calculations are only done on the offset, and you get a
neat little effect that usually locks up the system.
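
For instance, wrap your allocations something like this (my sketch; check
your <alloc.h> for the exact return constants):

    #include <stdio.h>
    #include <stdlib.h>
    #include <alloc.h>

    /* Catch corruption at the call that exposed it, rather than
       thousands of operations later. */
    void far *checked_farmalloc(unsigned long n)
    {
        void far *p = farmalloc(n);

        if (farheapcheck() == _HEAPCORRUPT) {
            fprintf(stderr, "far heap corrupt after farmalloc(%lu)\n", n);
            abort();
        }
        return p;
    }
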
Good Luck!

Alex.

pj@doc.ic.ac.uk (Paul Jarvis) (06/21/91)

I have recently had a report of problems using huge far pointers and allocating
and freeing memory. The problem appeared to be that the free process did not
work, hence memory got lost. The compiler was Turbo C (sorry, version not known).
Changing to the Borland C++ compiler with identical code - i.e. the same
program - showed no problem. My conclusion is that there may be some problems
with allocating large memory chunks, but I sure don't want to try to prove it.
                           Paul
                        (pj@doc.ic.ac.uk)

msmith%peruvian.utah.edu@cs.utah.edu (Matthew Smith) (06/22/91)

In article <1991Jun21.152700.25820@doc.ic.ac.uk> pj@doc.ic.ac.uk (Paul Jarvis) writes:
>
>I have recently had a report of problems using huge far pointers and allocating
>and freeing memory. The problem appeared to be that the free process did not
>work hence memory got lost. The compiler was Turbo C (sorry version not known).
>Changing to Borland C++ compiler with identical code - i.e. the same program
>no problem. My conclusion is that there may be some problems with allocating
>large memory chuncks but I sure don't want to try to prove it.
>                           Paul
>                        (pj@doc.ic.ac.uk)

Do a farcoreleft() call before doing your farmalloc(), then do another
farcoreleft(), then call farfree() and do a farcoreleft().  Watch how the
values change.  The final farcoreleft() number should match the first
farcoreleft() number.
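
Something like this (a rough sketch; run it before anything else touches the
heap, since farcoreleft() only measures the unallocated space at the top):

    #include <stdio.h>
    #include <alloc.h>

    /* If farfree() really returns the block, 'before' and 'after'
       should come out equal. */
    int main(void)
    {
        unsigned long before, during, after;
        void far *p;

        before = farcoreleft();
        p      = farmalloc(1000);
        during = farcoreleft();
        farfree(p);
        after  = farcoreleft();

        printf("before: %lu, during: %lu, after: %lu\n",
               before, during, after);
        return 0;
    }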

Matt Smith
msmith@peruvian.utah.edu
University of Utah
Department of Computer Science

frank@cavebbs.gen.nz (Frank van der Hulst) (06/23/91)

>In article <1991Jun21.152700.25820@doc.ic.ac.uk> pj@doc.ic.ac.uk (Paul Jarvis) writes:
>>
>>I have recently had a report of problems using huge far pointers and allocating
>>and freeing memory. The problem appeared to be that the free process did not
>>work hence memory got lost. The compiler was Turbo C (sorry version not known).

One bug I've managed to prove relates to (at least) TC v2.01 and huge pointers.

If you have an array of structures which takes more than 64K, then the pointer
arithmetic will be wrong for the array element (structure) which overlaps
a 64K boundary.

This may cause subsequent problems with farfree(), farrealloc(), and (I guess)
anything else using the far heap.
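
For concreteness, here's the sort of layout that trips it (an illustrative
sketch, not my actual test code):

    #include <alloc.h>

    struct rec { char data[10]; };   /* 10 does not divide 65536 */

    int main(void)
    {
        /* 7000 * 10 = 70000 bytes, so the array crosses one 64K
           boundary; element 6553 occupies bytes 65530..65539 of the
           block and straddles it. */
        struct rec huge *arr =
            (struct rec huge *) farmalloc(7000L * sizeof(struct rec));

        if (arr == 0)
            return 1;
        arr[6553].data[0] = 'x';     /* the straddling element */
        farfree((void far *) arr);
        return 0;
    }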

Frank.

-- 

Take a walk on the wild side, and I don't mean the Milford Track.
Kayaking: The art of appearing to want to go where your boat is taking you.

gram@uctcs.cs.uct.ac.za (Graham Wheeler) (06/24/91)

In <6574@ns-mx.uiowa.edu> colburn@tessa (alex colburn) writes:

>In article <1991Jun19.083945.8921@ucthpx.uct.ac.za> gram@uctcs.uucp (Graham Wheeler) writes:
>>I am reposting this article as I never saw the original appear when using nn
>>records, of average size about 6 bytes. Whenever the size changes, I use
>>farrealloc. I have over 300kb available at the time I start allocating these
>>nodes. I also keep track of the number of allocated bytes. My problem is that
>>I get a memory allocation failure at about 120kb of allocated memory. This

>	I don't think your problem lies in Borland's internal memory
>management systems, it would kind of silly for them to write something
>that uses twice as much memory as it is managing.  Is this error message
>something you get, and then its time for a reboot?  If so, I bet you're
>referencing invalid addresses.

No, I print the error message when farrealloc returns NULL, and I do a clean
exit.

>   	How are you keeping track of your reallocated memory?  If you have
>some sort of linked list data structure, when you farrealloc you just might
>be scrambling pointers.  For debugging purposes try doing a farheapcheck()

I actually have a tree structure (in fact, a graph), but instead of using
pointers I use indexes into an array of pointers. This means I don't have to
go and patch all references to a node whose address might get changed by
realloc.
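
In outline it looks something like this (names invented for the sketch):

    #include <alloc.h>

    #define MAXNODES 4096

    void far *node_tab[MAXNODES];    /* node index -> current address */

    /* Resize node n; every index referring to it stays valid, and
       only this one table entry needs patching if the block moves. */
    int grow_node(int n, unsigned long newsize)
    {
        void far *p = farrealloc(node_tab[n], newsize);

        if (p == 0)
            return 0;                /* allocation failure */
        node_tab[n] = p;
        return 1;
    }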

The problem is not a bug in the structure (it works fine); it is just that I
am running out of memory too soon. And from the responses I have seen, this
seems to be due to the 16-byte granularity of allocations, bearing in mind
that my average allocation is for six bytes.
--
Graham Wheeler <gram@cs.uct.ac.za> | "That which is weak conquers the strong,
Data Network Architectures Lab     | that which is soft conquers the hard.
Dept. of Computer Science          | All men know this; none practise it"
University of Cape Town            |		Lao Tzu - Tao Te Ching Ch.78

smiller@wet.UUCP (Gregory Shane Miller) (06/28/91)

I once wrote a program which did around 580 to 600K of farmallocs, and I too
found that I ran out of memory long before I anticipated.  I believe this
is the answer to your question.

First, when any of TC's malloc functions is called, it merely passes the
call on to the DOS memory allocation functions.  You will notice that DOS
allocates INTEGER UNITS of PARAGRAPHS (16 bytes).  So when you do malloc(6),
DOS does malloc(16), and when you do malloc(12) it does malloc(32).  Also, my
program was very recursive, and at deep levels of recursion the stack gets so
big that it reduces the far heap space.

That is the problem you're running into.  I should also point out that the
function farcoreleft() is useless after the first malloc.  It only reports
the space between the highest allocated byte and the bottom of the stack.
When the heap becomes fragmented, this function returns useless numbers.  The
only thing it is good for is to find the maximum amount of heap available -
remember, this goes down as the stack pushes down into memory.

In order to solve my problem, I calculated the entire amount of memory I
needed and did one big malloc, and then rewrote my recursive routine to
be iterative (using, as it turned out, the hard disk to shuffle some data).
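
A sub-allocator for that one big block needs nothing fancy; in outline
(names invented here, with the arena pointer declared huge so the arithmetic
survives 64K boundaries):

    #include <alloc.h>

    static char huge *arena;         /* one big block, grabbed up front */
    static unsigned long arena_used, arena_size;

    int arena_init(unsigned long size)
    {
        arena = (char huge *) farmalloc(size);
        arena_size = size;
        arena_used = 0;
        return arena != 0;
    }

    /* Hand out space with no per-record heap overhead; nothing is
       freed individually - the whole arena goes back in one farfree()
       at exit. */
    void far *arena_alloc(unsigned long n)
    {
        void far *p;

        if (arena_used + n > arena_size)
            return 0;                /* arena exhausted */
        p = (void far *) (arena + arena_used);
        arena_used += n;
        return p;
    }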

Hope that helps.  BTW, if you want the info from farcoreleft(), use it as the
first line of your main(), for even calls to scanf have an embedded malloc
call that you don't see!

-- 
G. Shane Miller [ smiller@wet.UUCP (415) 453-4926 ]