dmg@ssc-vax.UUCP (David Geary) (04/06/89)
I'm writing some code where I use some home-grown functions to create,
manipulate, and destroy a tree of "generic" structures. (Each object in
the tree is dynamically allocated using malloc().)

The last thing I do in the program is to free every object in the tree.

After running prof on the executable, I found that almost half my time
was spent in _free! If I don't bother to free all of the memory I've
dynamically allocated in the tree structure, my program runs
considerably faster.

Anyway, I'm wondering if it's ok for me to just leave the freeing out
altogether. Unix will free all of my resources for me anyway, right?
Is there any problem with doing this?

Of course, I realize this would cause serious problems if ever ported
to a lowly PC, but I don't care ;-)

Thanks for the help,

David
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ David Geary, Boeing Aerospace, Seattle                 ~
~ "I wish I lived where it *only* rains 364 days a year" ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
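[A minimal sketch of the kind of teardown being described -- the node
layout here is an assumption, since the post doesn't show one. Each
node costs a free() call (two, counting the payload), which is exactly
where prof would show the time going:]

```c
#include <stdlib.h>

/* Hypothetical node layout -- the original post doesn't show one. */
struct node {
    void *data;                 /* malloc()'d payload */
    struct node *left, *right;
};

/* Post-order teardown: children first, then the node itself.
   One free() per payload plus one per node. */
void free_tree(struct node *t)
{
    if (t == NULL)
        return;
    free_tree(t->left);
    free_tree(t->right);
    free(t->data);
    free(t);
}

/* Small helper for checking a tree was built as expected. */
size_t count_nodes(const struct node *t)
{
    if (t == NULL)
        return 0;
    return 1 + count_nodes(t->left) + count_nodes(t->right);
}
```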
gwyn@smoke.BRL.MIL (Doug Gwyn ) (04/07/89)
In article <2580@ssc-vax.UUCP> dmg@ssc-vax.UUCP (David Geary) writes:
- Anyway, I'm wondering if it's ok for me to just leave the freeing
- out altogether. Unix will free all of my resources for me anyway
- right? Is there any problem with doing this?
No problem.
matthew@sunpix.UUCP ( Sun NCAA) (04/07/89)
In article <2580@ssc-vax.UUCP>, dmg@ssc-vax.UUCP (David Geary) writes:
[complaints about _free() speed deleted.]
I doubt that you would have any problem, unless your program was used
so heavily that it ran out of allocatable memory.
I did have an idea, though: if you are constantly malloc()ing and
free()ing memory, why not create your own free routine {myfree()} that
would add the now-unused struct to a list of reusable structs, rather
than free()ing it up for the process's use. Then when you need space
for another struct, your personal malloc() {mymalloc()} would try
getting a myfree()'d struct, and failing that, malloc() a new struct.
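[A sketch of that scheme -- `struct obj` is an invented stand-in for
the original poster's tree node:]

```c
#include <stdlib.h>

/* One-size free list that recycles structs instead of returning
   them to malloc's heap.  "struct obj" stands in for whatever
   structure the program actually allocates. */
struct obj {
    struct obj *next;   /* links free objects; unused while allocated */
    /* ... payload fields ... */
};

static struct obj *free_list = NULL;

struct obj *my_alloc(void)
{
    struct obj *p = free_list;
    if (p != NULL) {
        free_list = p->next;    /* reuse a previously freed struct */
        return p;
    }
    return malloc(sizeof *p);   /* list empty: fall back to malloc */
}

void my_free(struct obj *p)
{
    p->next = free_list;        /* push onto the free list: O(1), */
    free_list = p;              /* no heap bookkeeping at all     */
}
```

Since myfree() is just a pointer push, it avoids the coalescing work
that makes _free show up in the profile.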
--
Matthew Lee Stier |
Sun Microsystems --- RTP, NC 27709-3447 | "Wisconsin Escapee"
uucp: { sun, mcnc!rti }!sunpix!matthew |
phone: (919) 469-8300 fax: (919) 460-8355 |
cs3b3aj@maccs.McMaster.CA (Stephen M. Dunn) (04/08/89)
In article <2580@ssc-vax.UUCP> dmg@ssc-vax.UUCP (David Geary) writes:
> Anyway, I'm wondering if it's ok for me to just leave the freeing
> out altogether. Unix will free all of my resources for me anyway
> right? Is there any problem with doing this?
>
> Of course, I realize this would cause serious problems if ever
> ported to a lowly PC, but I don't care ;-)

   Actually, this may be ok on a PC, too. I don't know how the typical
C compiler on a PC handles memory allocation, but I would suspect it's
similar to Turbo Pascal: when the program starts, it grabs all
available memory and then dishes it out to the program using its own
memory manager. Why? Because MS-DOS will only allow you to allocate
memory in 16-byte chunks, while your program may try to allocate (say)
three bytes at a time, leading to a colossal waste of memory. By
grabbing all memory and handling allocation itself, an executable can
use memory much more space-efficiently (although this approach is
incompatible with multi-tasking operating systems).

   When the program terminates, the memory-management routines built
into your executable by the compiler simply return all the memory to
DOS, so your program needs to free its allocated memory only if it
would otherwise run out of memory while running. Neat, huh?

   Oh, and to overcome the problem with multi-tasking, the compiler
will most likely allow you to set an option limiting the maximum
amount of memory it grabs. For more complicated programs, however, the
analysis required to estimate a realistic figure for this becomes
astronomical.

   As I said, I've never looked to see whether compilers other than
Turbo Pascal (3 and 4, and probably 5, too) do this, but I would think
it likely that they do. Anyway, just thought you might like to know
the "lowly" PC might even be able to handle your code.

Regards
--
======================================================================
! Stephen M. Dunn, cs3b3aj@maccs.McMaster.CA  ! DISCLAIMER:          !
! This space left unintentionally blank - vi  ! I'm only an undergrad!
======================================================================
kremer@cs.odu.edu (Lloyd Kremer) (04/10/89)
In article <2580@ssc-vax.UUCP> dmg@ssc-vax.UUCP (David Geary) writes:
>Anyway, I'm wondering if it's ok for me to just leave the freeing
>out altogether. Unix will free all of my resources for me anyway
>right? Is there any problem with doing this?
>
>Of course, I realize this would cause serious problems if ever
>ported to a lowly PC, but I don't care ;-)

In article <2367@maccs.McMaster.CA> cs3b3aj@maccs.McMaster.CA (Stephen M. Dunn) writes:
> Actually, this may be ok on a PC, too. I don't know how the typical
>C compiler on a PC handles memory allocation, but I would suspect it's
>similar to Turbo Pascal:
>
> Anyway, just thought you might like to know the "lowly" PC might even
>be able to handle your code.

I can't speak for Turbo Pascal, but I am familiar with memory
allocation in MSC 5.1, and there is at least one danger area for
programmers.

An application program (like a C program) resides in a "memory block"
obtained from the operating system when the program is invoked
(usually by the shell, COMMAND.COM). (Actually, the program is
initially given all available memory, and the compiler run-time
start-up code then modifies the size of its memory block to return to
the operating system that which it does not anticipate needing.) The
operating system recognizes this memory block as belonging to your
program, as opposed to the interrupt vector table, system areas,
device drivers, bit-mapped screen image, etc.

The compiler considers your memory block as divided into various areas
such as code space, initialized data, uninitialized data, etc. One of
these areas is what can be thought of as "mallocable memory." But this
mallocable memory is still part of your program as far as the
operating system is concerned.
There are several functions available for obtaining memory at
run-time:

	malloc()
	calloc()
	realloc()
	_nmalloc()	/* malloc maps to this in small-data models */
	_fmalloc()	/* malloc maps to this in large-data models */
	halloc()

Of these, all but halloc() obtain memory from the mallocable memory
area, which lies within your memory block. The operating system is
petitioned for additional memory only in the event that the mallocable
memory becomes exhausted. Library code within your program then
increases the size of your program's memory block by an amount
determined by 'extern unsigned _amblksiz'. All allocation and freeing
occur within your operating system memory block, and when your program
exits, the standard exit sequence returns your entire memory block to
the operating system, and all is well.

halloc() (allocate a huge block) is the great exception to this scheme
of things. halloc() gets a new memory block *directly* from the
operating system, using the same system call the shell originally used
to get the memory block for your program! When you use halloc(), you
are not under anyone's auspices except the operating system's (no
one's looking out for you). If you halloc() a lot of memory and fail
to hfree() it, at program exit you may very well be presented with the
disheartening message:

	Memory allocation error
	Cannot load COMMAND, system halted

They should have added, "Have a nice day."

I have had the pleasure of all-night debugging sessions tracing this
type of error, and have come away with the firm belief that the use of
halloc() should be licensed. Not that halloc() shouldn't exist; on a
system as small as a PC, you sometimes need "all the memory" to get
any useful work done. If it weren't there I'd no doubt write my own in
assembly. But to use it is to play god with the system memory maps,
and you'd better be up to it!
Functions like atexit(), onexit(), and signal() can help to ensure
that everything is hfree()'d, but it's ultimately up to the programmer
to have a flawless conception of the control flow of his program in
all cases. (Yeah, right.)

To sum up: is it OK not to free memory on a PC? Yes, as long as you
don't use halloc(); otherwise watch out!

As I recall, the original questioner was asking about UNIX(tm)
systems. Oh, there's no problem on UNIX...UNIX is smart! :-) Unless
the "sticky bit" of a program is set, UNIX frees all memory directly
or indirectly associated with a program on program exit. MSDOSN'T.

#ifdef OPINION
The more I learn about other operating systems, the more I like UNIX(tm) :-)
#endif

					Lloyd Kremer
					Brooks Financial Systems
					{uunet,sun,...}!xanth!brooks!lloyd
afscian@violet.waterloo.edu (Anthony Scian) (04/10/89)
In article <8395@xanth.cs.odu.edu> kremer@cs.odu.edu (Lloyd Kremer) writes:
>>In article <2367@maccs.McMaster.CA> cs3b3aj@maccs.McMaster.CA (Stephen M. Dunn) writes:
>>>In article <2580@ssc-vax.UUCP> dmg@ssc-vax.UUCP (David Geary) writes:
>>>[summary: can a program leave the freeing of memory to the OS (on the PC)]
>>YES
>If you halloc() a lot of memory and fail to hfree() it, at program exit you
>may very well be presented with the disheartening message:
>
>Memory allocation error
>Cannot load COMMAND, system halted
>
>[misconceptions about halloc/hfree]
>To sum up: is it OK not to free memory on a PC? Yes, as long as you don't
>use halloc(), otherwise watch out!

The errors you experienced were from modifying memory outside the
block allocated to you. This IS a difficult thing to debug. Maybe this
explanation will clear things up.

When a .EXE program starts execution, it is given the largest
contiguous block of memory available. Most start-up code will
calculate how much the program minimally requires and resize the block
to this size. Through resizing of the first block and allocation of
new blocks through fmalloc/ffree and halloc/hfree, MS-DOS can keep
track of who owns what. MS-DOS will release all the memory owned by a
'process' (really the owner of the PSP). There are no problems with
not freeing your memory unless you are a TSR or you spawn other
programs which need memory.

However, if you managed to stomp on any of the important memory
control blocks owned by MS-DOS, termination of the program can result
in Memory Allocation Errors. (Always be thankful when it happens right
after the program terminates; if things bubble along awhile and then
it happens, you have an even tougher debugging problem!)

//// Anthony Scian afscian@violet.uwaterloo.ca afscian@violet.waterloo.edu
//// "I can't believe the news today,
I can't close my eyes and make it go away" -U2
lfoard@wpi.wpi.edu (Lawrence C Foard) (04/11/89)
Does anyone know if DOS/Turbo C is able to put small blocks of memory
back together again? I have a program that allocates memory in
512-byte chunks and also has to call other programs. The memory can
all be farfree()'d before a call to another program; will Turbo C and
DOS make the freed memory available to the spawned process? The two
problems I can see are:

1) Turbo C may not return the freed memory to DOS.
2) DOS may not put the 512-byte chunks back together to make big (64K)
   chunks.

Does anyone know if either of these problems actually happens, and if
so, can they be fixed (or otherwise gotten around)?
--
Disclaimer: My school does not share my views about FORTRAN.
            FORTRAN does not share my views about my school.
root@siva.UUCP (Super user) (04/11/89)
: #ifdef OPINION
: The more I learn about other operating systems, the more I like UNIX(tm) :-)
: #endif

"Ain't that the truth!"

: Lloyd Kremer
--
----------------
"I only did what you didn't tell me not to do..."
"Physics is the law. All else is convention."
				Mark Marsh
				...!ames!pacbell!sactoh0!siva!uumgr
----------------
matt@nbires.nbi.com (Matthew Meighan) (04/27/89)
In article <2580@ssc-vax.UUCP| dmg@ssc-vax.UUCP (David Geary) writes:
|
| I'm writing some code where I use some home-grown functions to
| create, manipulate and destroy a tree of "generic" structures.
| (Each object in the tree is dynamically allocated using malloc())
|
| The last thing I do in the program is to free every object in
| the tree.
|
| After running prof on the executable, I found that almost half
| my time was spent in _free!
|
| If I don't bother to free all of the memory I've dynamically
| allocated in the tree structure, my program runs considerably
| faster.
|
| Anyway, I'm wondering if it's ok for me to just leave the freeing
| out altogether. Unix will free all of my resources for me anyway
| right? Is there any problem with doing this?

One solution would be to malloc() blocks of memory large enough to
contain n structures apiece, where n is some reasonable number for
your allocation. You can use each block as an array of structures and
only call malloc() again when you have filled that block and need more
structures. This will reduce not only the number of calls to free(),
but the number of calls to malloc(). The larger n is, the more this
will speed up your program. And you can still free everything yourself
instead of relying on the OS to do it.

The trade-off, of course, is that you will usually allocate more
memory than you need. But unless the structures are really huge or
memory is exceptionally tight, this is usually not a big problem.

--
Matt Meighan
matt@nbires.nbi.com (nbires\!matt)
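[A sketch of this block scheme -- names and the chunk size are
invented for illustration. Structs are carved out of malloc()'d slabs
of CHUNK items each, so malloc() and free() each run once per CHUNK
objects rather than once per object:]

```c
#include <stdlib.h>

#define CHUNK 256

struct item { int value; };        /* stand-in for the tree structure */

struct slab {
    struct slab *next;             /* chain of slabs, for freeing */
    struct item items[CHUNK];
};

static struct slab *slabs = NULL;  /* all slabs allocated so far */
static size_t used = CHUNK;        /* slots used in current slab */

/* Hand out the next slot, malloc()ing a fresh slab only when the
   current one is full. */
struct item *slab_alloc(void)
{
    if (used == CHUNK) {
        struct slab *s = malloc(sizeof *s);
        if (s == NULL)
            return NULL;
        s->next = slabs;
        slabs = s;
        used = 0;
    }
    return &slabs->items[used++];
}

/* Teardown: one free() per slab instead of one per item. */
void slab_free_all(void)
{
    while (slabs != NULL) {
        struct slab *s = slabs;
        slabs = s->next;
        free(s);
    }
    used = CHUNK;
}
```

Note the trade-off mentioned above: the last slab is usually partly
empty, so this allocates up to CHUNK - 1 structures more than needed.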
bet@dukeac.UUCP (Bennett Todd) (05/03/89)
In article <2580@ssc-vax.UUCP| dmg@ssc-vax.UUCP (David Geary) writes:
> I'm writing some code where I use some home-grown functions to
> create, manipulate and destroy a tree of "generic" structures.
> (Each object in the tree is dynamically allocated using malloc())
>
> The last thing I do in the program is to free every object in
> the tree.
>
> After running prof on the executable, I found that almost half
> my time was spent in _free!

malloc() and free() are a heap management system, providing a
convenient and portable interface between yourself and whatever memory
allocation scheme the native OS offers (under UNIX it is done with
brk() and sbrk() and the like, I believe; I've never bothered looking
under the hood of malloc()). I think most OS memory management schemes
look more like a stack than like a heap, but in any case, regardless
of what malloc() is running on top of, the resources it allocates
should be freed just fine upon exiting the program. If they weren't,
many common programming errors would crash the system very quickly.

I never bother calling free(), unless I have a system that could be
creating and tearing down variable-sized objects all day long. If I am
working with only a limited number of possible sizes, I make my own
allocators that work with a linked free list for each type (grabbing
more memory from malloc() when necessary, in big blocks, and returning
freed memory to a linked list rather than the malloc/free heap). This
runs extremely fast. If your program never needs to reuse memory, then
you don't need to worry about freeing at all.

-Bennett
bet@orion.mc.duke.edu
paul@athertn.Atherton.COM (Paul Sander) (05/05/89)
In article <1384@dukeac.UUCP>, bet@dukeac.UUCP (Bennett Todd) writes:
> In article <2580@ssc-vax.UUCP| dmg@ssc-vax.UUCP (David Geary) writes:
>> [Complains that nearly half his time is spent executing free() when
>> destroying trees of "generic" structures]
>
> [discusses guts of malloc()]
> I think most
> OS memory management schemes look more like a stack than like a heap, but in
> any case, regardless of what malloc() is running on top of, the resources it
> allocates should be freed just fine upon exiting the program. If they weren't,
> many common programming errors would crash the system very quickly.
>
> I never bother calling free, unless I have a system that could be creating and
> tearing down variable sized objects all day long. If I am working with only a
> limited number of possible sizes, I make my own allocators that work with a
> linked free list for each type (grabbing more memory from malloc when
> necessary, in big blocks, and returning free memory to a linked list rather
> than the malloc/free heap). This runs extremely fast. If your program never
> needs to reuse memory, then you don't need to worry about freeing at all.

I once programmed in an environment that did NOT, repeat NOT,
automatically free allocated heap memory when a program terminated.
Specifically, it was CICS running under VSE/Advanced Functions on an
IBM 4341. One could allocate memory just fine, and could free memory
just fine. There was a special function one could call that would free
all heap memory allocated up to that time, but it was broken.

Deciding goodness or badness is left as an exercise for the reader; my
point is that this environment exists, and if you want your code to
port to it, free() your stuff. If free() doesn't perform, build your
own heap manager on top of it which is better suited to your
structures.

Sorry about the terseness of this message, but this has been an
"exciting" week.
-- Paul Sander (408) 734-9822 | Do YOU get nervous when a paul@Atherton.COM | sys{op,adm,prg,engr} says {decwrl,sun,hplabs!hpda}!athertn!paul | "oops..." ?
bet@dukeac.UUCP (Bennett Todd) (05/06/89)
In article <2239@athertn.Atherton.COM> paul@athertn.Atherton.COM (Paul Sander) writes:
>I once programmed in an environment that did NOT, repeat NOT, automatically
>free allocated heap memory when a program terminated.
> [...]
>Deciding goodness or badness is left as an exercise for the reader; my point
>is that this environment exists, and if you want your code to port to it,
>free() your stuff.

I certainly would *not* want any code of mine ported to an environment
like that! I decided a good bit ago that I wasn't going to worry about
6-character monocase unique names, no lines longer than 80 characters,
no text strings longer than 80 characters, never using bitfields, no
subroutines longer than a couple of hundred lines, no arrays larger
than 64K, no using '_' in external identifiers, or any other of the
myriad deficiencies that show up occasionally in obscure environments.
Instead, I'd try to write code to be as simple and clear as I could
manage (which often means avoiding convoluted names, long lines,
multiline text strings, bitfields, etc. -- but for reasons of clarity,
not portability to defective environments). I am interested in porting
to what is nice to use now, and what will be better in the future. I
am not interested in porting to what I am glad we've outgrown.

-Bennett
bet@orion.mc.duke.edu
darin@nova.laic.uucp (Darin Johnson) (05/09/89)
>>I once programmed in an environment that did NOT, repeat NOT, automatically
>>free allocated heap memory when a program terminated.
>
>I certainly would *not* want any code of mine ported to an environment like
>that! I am interested in
>porting to what is nice to use now, and what will be better in the future. I
>am not interested in porting to what I am glad we've outgrown.

The only thing that makes most OS's clear allocated memory on exit is
virtual memory. Without it, the OS has to go to a lot of work to keep
track of the memory you allocated. If you allocate everything off of a
stack, then it is no big deal, but what if you want to allocate
something that won't fit in your stack space? Believe it or not, there
are a lot of machines that don't have virtual memory, don't keep track
of the memory you allocate, or don't have a near-infinite upper bound
on the stack. I believe most small computers fall into this category,
especially those that multitask and/or support desktop thingies and
stay-resident thingies. Not to mention embedded systems, etc. I sure
hope whoever programs our missiles remembers to free all his memory.

Of course, you may have plenty of virtual space, but the real physical
space exists on whatever swap device you are using. When that runs
out, the computer has no choice but to say "I can't figure out what
memory you are using and which is trash, so I'll just terminate your
program altogether". Also, don't expect the customers of your programs
to agree with your recommended minimums of physical/swap space needed.
Not freeing memory will also tend to increase paging activity, since
the OS will be unable to re-use part of a page that you aren't using,
and will have to allocate another for you.

Note that there is a big difference between freeing memory on EXIT and
freeing memory when you are done with it. If you have a small program,
then it isn't as big a deal, and it is convenient not to worry about
it, but it's not very portable.
Darin Johnson (leadsv!laic!darin@pyramid.pyramid.com) We now return you to your regularly scheduled program.
gwyn@smoke.BRL.MIL (Doug Gwyn) (05/09/89)
In article <543@laic.UUCP> darin@nova.UUCP (Darin Johnson) writes: >Without it, the OS will have to go to a lot of work to >keep track of the memory you allocated. If the Apple IIGS can do this, there's no excuse for other systems not taking care of it.
ddb@ns.network.com (David Dyer-Bennet) (05/10/89)
In article <543@laic.UUCP> darin@nova.UUCP (Darin Johnson) writes:
:The only thing that makes most OS's clear allocated memory on exit, is
:virtual memory. Without it, the OS will have to go to a lot of work to
:keep track of the memory you allocated. If you allocate averything off
:of a stack, then it is no big deal, but what if you want to allocate something
:that won't fit in your stack space? Believe it or not, there are a lot
:of machines that don't have virtual memory, or keep track of the memory
:you allocate, or have a near infinite upper bound on the stack? I believe
:most small computers fall into this category, especially those that
:multitask and/or support desktop thingies and stay resident thingies.
TSS/8 on a PDP-8/I, RSTS on a pdp-11, TOPS-10 and TOPS-20 on KL-10
hardware, MS-DOS, and all versions of Unix I have seen do not display
the behavior you describe as common above. Only Tops-10, Tops-20, and
a few of the unixes are virtual-memory systems. Most of these probably
count as "small computers" by today's standards :-)
In my experience it's the presence of a "process" abstraction in the
operating system that allows clean handling of un-freed memory at
process termination. Even the 8088-based Unix was able to handle that.
--
David Dyer-Bennet, ddb@terrabit.fidonet.org, or ddb@ns.network.com
or ddb@Lynx.MN.Org, ...{amdahl,hpda}!bungia!viper!ddb
or ...!{rutgers!dayton | amdahl!ems | uunet!rosevax}!umn-cs!ns!ddb
or Fidonet 1:282/341.0, (612) 721-8967 9600hst/2400/1200/300
peter@ficc.uu.net (Peter da Silva) (05/10/89)
The underlying operating system may not do it, but your C library's
implementation of malloc() should include code that automatically
cleans up memory when you exit(). I don't think this is in the
standard, but it's surely a quality-of-implementation issue.

As for operating systems that don't clean up when a program exits,
there are operating systems, particularly message-passing ones, that
do not have the notion of an owner of a block of memory. Memory is
allocated and passed to other programs continually.
--
Peter da Silva, Xenix Support, Ferranti International Controls Corporation.
Business: uunet.uu.net!ficc!peter, peter@ficc.uu.net, +1 713 274 5180.
Personal: ...!texbell!sugar!peter, peter@sugar.hackercorp.com.