[comp.sys.amiga] Tidying fragmented memory

MROBINSON@wash-vax.bbn.com (04/12/89)

A week or so ago I saw a message fly by about not being able to coalesce
free memory into bigger hunks, and that prompted some questions in my
mind.

1.  Is there anything wrong with allocating two blocks of memory,
    discovering they are adjacent, and returning them as one bigger
    block of memory?  From what I have heard, the Amiga "only manages
    free memory, not allocated memory", so this should be possible.

2.  Does the Amiga make any attempt to keep the free memory list simple,
    say by searching through the free memory list every time a block is
    freed and joining the blocks that can be joined?  If so, without
    virtual memory management, it looks like we can't improve very much
    on what the Amiga does already, except...

3.  Does the Amiga have a non-trivial memory allocation scheme, so that
    (say) little blocks tend to be allocated near each other, and big
    blocks tend to be allocated somewhere else?  Like, little blocks
    grow up from low memory, and big blocks grow down from high memory?
    Or system-requested memory grows up, while user-requested memory
    grows down (I would find this highly unlikely, but it might help,
    on the theory that system-allocated memory tends to stay in memory
    longer than user-allocated memory--then again, maybe it doesn't, I
    am not sure)?
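
The two-ended scheme speculated about in point 3 could be sketched in C
like so (purely an illustration of the idea, NOT anything exec actually
does; the arena size and the 64-byte "little" threshold are made up):

```c
/* Two-ended bump allocator sketch: little blocks grow up from low
   memory, big blocks grow down from high memory.  Toy code only. */
#include <assert.h>
#include <stddef.h>

#define ARENA 4096
#define SMALL_LIMIT 64          /* arbitrary little/big threshold */

static unsigned char arena[ARENA];
static size_t low = 0, high = ARENA;

void *two_ended_alloc(size_t n)
{
    n = (n + 7) & ~(size_t)7;       /* keep 8-byte alignment */
    if (low + n > high)
        return NULL;                /* the two ends have met */
    if (n <= SMALL_LIMIT) {
        void *p = arena + low;      /* little blocks grow up */
        low += n;
        return p;
    }
    high -= n;                      /* big blocks grow down */
    return arena + high;
}
```

Freeing is the hard part this sketch dodges, which may be one reason
such schemes lose out to a simple free list.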

My assumptions are 1. yes, 2. no, and 3. no.  If these assumptions are
correct, what about

4.  Wouldn't it be possible to simplify the free memory list with a
    program that does the following:  grabs up all the blocks of memory
    it can of a given size (like, 256K), then all the ones half that size,
    then all the ones half THAT size, and so on to some minimum size (maybe
    even one longword, although I would expect that to be VERY dangerous),
    then joining all adjacent blocks of our allocated memory, and returning
    all our allocated memory?
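
For what it's worth, the grab-and-release pass in point 4 might look
something like this, using malloc/free as a stand-in for AllocMem/FreeMem
(an assumption for the sketch; real Amiga code would call the
exec.library routines, and the per-size cap here is only so the sketch
terminates on a machine with lots of memory):

```c
/* Sketch of the "tidying" pass: grab blocks at each size, halving
   from 256K down to 4K, then hand everything back at once. */
#include <stdlib.h>

#define MAX_GRABS 1024
#define PER_SIZE_CAP 8          /* arbitrary cap per size class */

int tidy_pass(void)
{
    void *grabbed[MAX_GRABS];
    int count = 0, total;
    size_t size;

    for (size = 256 * 1024; size >= 4096; size /= 2) {
        int per_size = 0;
        void *p;
        /* take blocks of this size until the allocator runs dry
           (or we hit the sketch's cap) */
        while (per_size < PER_SIZE_CAP && count < MAX_GRABS &&
               (p = malloc(size)) != NULL) {
            grabbed[count++] = p;
            per_size++;
        }
    }
    total = count;
    /* give it all back; an allocator that coalesces on free will
       merge the adjacent pieces as they return to the free list */
    while (count > 0)
        free(grabbed[--count]);
    return total;               /* how many blocks we cycled */
}
```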

My observations on 4 are:  If the answer to 2 is yes, there's no reason to
do this; if the answer to 1 is no, there is no way to do this; and 3 doesn't
have any bearing on 4 at all.

Back to question 3... if the answer to 3 is no, why not?  I realize that it
is difficult to predict how programs are going to use memory, but I think
there is probably some allocation routine that improves on "fastest is best"
allocation without taking too much time.  I seem to remember studying this
topic briefly in Operating Systems...maybe I should drag out the textbooks.
In the meantime, any comments from the net?  BTW, if there is some tome
I can consult in the future for questions like this, someone please direct
me to it via Email.

-- Max Robinson, mrobinson@wash-vax.bbn.com
P.S. V*X m*il s**ks!!!

deven@pawl.rpi.edu (Deven Corzine) (04/13/89)

In article <12913@louie.udel.EDU> MROBINSON@wash-vax.bbn.com writes:
>A week or so ago I saw a message fly by about not being able to coalesce
>free memory into bigger hunks, and that prompted some questions in my
>mind.

That problem was actually due to small chunks of ALLOCATED memory
separating free blocks.

[1. Ok to free adjacent allocated blocks as one?]

Legit, but unnecessary, and you would have to depend on the memory
allocator's granularity.

[2. Does Exec combine adjacent free blocks?]

Yes.

[3. Does Exec try to allocate large and small blocks separately, or
system- and user-allocated blocks separately?]

No.  It traces through the free memory list (which CAN be reordered to
give preference to a different memory block) looking for the first
block large enough to hold the block to be allocated, I believe.
It MIGHT search for the smallest free block large enough to hold the
block to be allocated, but I doubt it; that would be inefficient and
would defeat the purpose of reordering the free list.  Hence, it is
most likely first fit.
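
For the curious, first fit over a linked free list looks roughly like
this (a toy sketch over a static pool, not exec's actual code; the
chunk layout and pool size are made up):

```c
/* First-fit allocation sketch: walk the free list and carve the
   request out of the FIRST chunk big enough to hold it. */
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 4096

typedef struct Chunk {
    struct Chunk *next;   /* next free chunk, in address order */
    size_t size;          /* bytes in this free chunk */
} Chunk;

static union { unsigned char bytes[POOL_SIZE]; Chunk c; } pool;
static Chunk *free_list;

void pool_init(void)
{
    free_list = &pool.c;          /* one big free chunk to start */
    free_list->next = NULL;
    free_list->size = POOL_SIZE;
}

void *pool_alloc(size_t n)
{
    Chunk *c, **prev = &free_list;

    /* round up so any block can later hold a free-chunk header */
    n = (n + sizeof(Chunk) - 1) / sizeof(Chunk) * sizeof(Chunk);
    for (c = free_list; c != NULL; prev = &c->next, c = c->next) {
        if (c->size < n)
            continue;             /* too small, keep walking */
        if (c->size - n >= sizeof(Chunk)) {
            /* split: the unused tail stays on the free list */
            Chunk *rest = (Chunk *)((unsigned char *)c + n);
            rest->next = c->next;
            rest->size = c->size - n;
            *prev = rest;
        } else {
            *prev = c->next;      /* take the whole chunk */
        }
        return c;
    }
    return NULL;                  /* no chunk was big enough */
}
```

Reordering the memory-block list, as mentioned above, just changes
which pool this walk tries first.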

>My assumptions are 1. yes, 2. no, and 3. no.  If these assumptions are
>correct, what about

1. yes, 2. yes, 3. no.

[4. attempting to combine free blocks]

Useless, since free blocks are combined.
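
To be concrete about HOW they are combined: the freed block is spliced
into the address-ordered free list and merged with either neighbor
whenever the addresses are contiguous.  Roughly like this (again a toy
sketch, not exec's actual Deallocate code):

```c
/* Coalesce-on-free sketch for an address-ordered free list. */
#include <assert.h>
#include <stddef.h>

typedef struct FreeChunk {
    struct FreeChunk *next;   /* next free chunk, in address order */
    size_t size;              /* bytes in this free chunk */
} FreeChunk;

static FreeChunk *free_list;

void pool_free(void *p, size_t n)
{
    FreeChunk *blk = (FreeChunk *)p;
    FreeChunk *before = NULL, *after = free_list;

    blk->size = n;
    while (after != NULL && after < blk) {  /* find the splice point */
        before = after;
        after = after->next;
    }
    blk->next = after;
    if (before != NULL)
        before->next = blk;
    else
        free_list = blk;

    if (after != NULL &&
        (unsigned char *)blk + blk->size == (unsigned char *)after) {
        blk->size += after->size;    /* merge with the chunk above */
        blk->next = after->next;
    }
    if (before != NULL &&
        (unsigned char *)before + before->size == (unsigned char *)blk) {
        before->size += blk->size;   /* merge with the chunk below */
        before->next = blk->next;
    }
}

int free_count(void)                 /* chunks on the list, for demo */
{
    int n = 0;
    FreeChunk *c;
    for (c = free_list; c != NULL; c = c->next)
        n++;
    return n;
}

/* demo: free three adjacent 64-byte pieces out of order; the middle
   one bridges the gap and everything collapses into one chunk */
int demo(void)
{
    static FreeChunk store[24];      /* scratch space, aligned */
    unsigned char *base = (unsigned char *)store;

    free_list = NULL;
    pool_free(base, 64);
    pool_free(base + 128, 64);       /* two separate free chunks now */
    pool_free(base + 64, 64);        /* bridges them: one chunk left */
    return free_count();
}
```

This is exactly why the grab-and-release pass buys you nothing: the
merging already happens every time something is freed.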

>Back to question 3... if the answer to 3 is no, why not?  I realize that it
>is difficult to predict how programs are going to use memory, but I think
>there is probably some allocation routine that improves on "fastest is best"
>allocation without taking too much time.

Because of efficiency considerations, and because the free memory is
kept as a free list, making it difficult to allocate from the top or
bottom of memory.  True, you COULD alternatively search from the TAIL
of the free memory list, but then you have the problem of CHIP memory
being given preference over FAST memory when searching the list in
reverse.  Most people would not like that idea.

Deven
--
------- shadow@pawl.rpi.edu ------- Deven Thomas Corzine ---------------------
Cogito  shadow@acm.rpi.edu          2346 15th Street            Pi-Rho America
ergo    userfxb6@rpitsmts.bitnet    Troy, NY 12180-2306         (518) 272-5847
sum...     In the immortal words of Socrates:  "I drank what?"     ...I think.