[comp.sys.mac.hypercard] Memory Problems

jstern@garnet.berkeley.edu (05/08/91)

A while back I posted to this group asking for help with a memory problem.  It 
turned out that my problem was one bad xcmd. However, since I had found it 
frustrating that there was so little information out there about memory, and 
since I got help from Kevin Calhoun (jkc@apple) and George Harrington 
(gort@cup.portal.com), and since I got a few requests to pass on whatever I 
learned, I'll summarize here.

TIPS
----
First, there are simple, generally agreed-upon things you can do if 
you're having memory problems:
1) RAM Cache: make sure it's off.
2) Use the Finder rather than MultiFinder, if you can.
3) If running under MultiFinder, allocate more memory to HC.
4) If you're having problems using the painting tools in a particular handler, 
try rewriting your code so that the handler is small (there's a sketch of this 
right after the list). Though this seems to be common knowledge, I appreciated 
an explanation: "As HyperCard runs, when it begins to execute a handler, it 
checks to see if the script has been compiled. If it has not, it then attempts 
to compile it. If there is not enough memory left to compile the script, 
HyperCard then flushes some of the compiled handlers from memory (provided 
they are not currently running) and then compiles. If the handler that calls 
the painting tools is large, it is not purgeable (since it is still running) 
and could prevent HyperCard from grabbing the 300k or so extra from the heap 
that it needs for the painting tools." (George)
5) Buy additional memory. (If you're like me--working in a public school--this 
is not really a solution, but it might be for other folks.)
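
Here's a sketch of what tip 4 means in practice (the handler names are 
made up). Everything on the call chain stays compiled in memory while it 
runs, so keep the painting--and whatever calls it--in small handlers, and 
let the big work finish before the painting starts:

on mouseUp
  setUpData      -- the big, memory-hungry work finishes first...
  doThePainting  -- ...so only small handlers are running now
end mouseUp

on setUpData
  -- ...the long, complicated part of the script goes here...
end setUpData

on doThePainting
  choose brush tool
  drag from 100,100 to 200,200
  choose browse tool
end doThePainting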

George had a couple more that I hadn't heard before:
1) If running under MultiFinder, in addition to allocating more memory to HC, 
allocate less to the Finder (but don't go under the minimum recommended 
amount).
2) Use System 6.0.5, because: "If you are using 6.0.7 or later, this could be 
a problem, as there seems to be a memory management problem with them. This 
includes the not-publicly-released versions of System 7. Even if it's doing 
nothing, the heap continues to grow, and eventually will cause the system to 
bomb (this takes several hours if idle)."

And Kevin mentioned a HC bug:
"Does the problem ever occur on your machine, when you have read/write
access to the stacks in use?  If it only occurs when HyperCard has
read-only access to the stacks in use, it's possible that you've been
bit by an obscure 2.0 bug we've recently found and fixed.The bug occurs 
when you put a read-only stack in use and then go to and from the stack 
many times."


GLOBALS, STACKSINUSE, XCMDS
---------------------------
In my original post, I asked which of three things (lots of globals, lots of 
stacks in use, or xcmds) was likely to be causing a problem. Though in my case 
it was definitely an xcmd, here's what I found out about each:

GLOBALS: George wrote: "40 [globals] is a lot, and they do take up memory; 
even filling them with empty, they will still take up space in the name 
table."

STACKSINUSE: No real response about this, except suggestions to combine stacks 
if possible (wasn't possible in my case--each was already full).
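
One thing you can try (my own idea, untested): keep a library stack in 
use only while you actually need it, rather than leaving it on the 
stacksInUse list all session. A sketch--the stack and handler names are 
made up:

on doFancyReport
  start using stack "Report Library"  -- hypothetical library stack
  makeReport                          -- a handler living in that stack
  stop using stack "Report Library"   -- drop it when you're done
end doFancyReport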

XCMDs (and XFCNs): Yes, these can be very dangerous. An XCMD can forget to 
deallocate space when it's done. One thing to do is to test any suspect XCMDs:

on testxcmd xcmd,count
  if count is empty then put 100 into count
  put the heapspace into startspace -- heap free before the runs
  repeat count times
    do xcmd -- run the suspect external
  end repeat
  -- a big number here means the runs ate heap and never gave it back
  put (startspace - the heapspace) div 1024 & "K change in heap space"
end testxcmd
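
You can call it from the message box--the XCMD name here is made up:

  testxcmd "MySuspectXCMD", 50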

(I experimented and actually looked at the byte change in heapspace--i.e., 
didn't "div 1024"--but I think that's not necessary: I did get small 
changes--around 40 bytes--even for X's that are considered very reliable, 
e.g. the Dartmouth XCMDs. In other words, don't worry about little changes; 
they're probably just caused by the script itself running.)

If you get significant changes, you may have a memory leak, which "occurs 
when a piece of code that executes repeatedly allocates memory but doesn't 
deallocate it.  Every time it runs, you lose another block of memory.  
Eventually, you run so low on memory that the rest of the program can't 
function properly."
(Kevin)
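
A variation on the tester above (my own sketch): report the heapspace 
after each run, so the steady downward march of a leak is easy to spot:

on watchLeak xcmd,count
  if count is empty then put 20 into count
  repeat with i = 1 to count
    do xcmd
    put "run" && i & ":" && the heapspace && "bytes free"
  end repeat
end watchLeak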

However, there seems to be another type of problem, the kind that I had: you 
run an XCMD once, the heapspace is still plenty big, and yet you don't have 
enough memory to use the painting tools. It's obviously a "bad" XCMD, but it 
doesn't seem to be causing a true memory leak as Kevin describes it.


HEAPSPACE
---------
Finally, if you'd like to know what the heapspace really is, you get to choose 
from two different answers:

1) Actual Contiguous Space 
"the heapspace is the size of the largest contiguous block of memory 
currently available to HyperCard.  Not the total amount of memory, but 
the size of the largest unused block of it." (Kevin)

"the HeapSpace function typed into the Message Box can tell you approximately 
how much contiguous space there is in the heap zone of memory." (p. 644,Danny 
Goodman's Complete HyperCard 2.0 Handbook)

or

2) Potential Contiguous Space
"The heapspace function returns the maximum contiguous amount of space in the 
application heap in bytes that would be available if all purgeable handles 
were purged and all relocatable handles were relocated." (p. 417, HyperTalk 
2.0, The Book, Winkler & Kamin)

"...while you may have plenty of heapspace, there may not be enough 
contiguous space available to allow the paint tools to run. If you've 
got 1.4 meg heapspace and still can't paint, the heap sounds like it's
badly fragmented." (George)
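
Either way, you can at least check before painting. A sketch--the handler 
is mine, and the 300k threshold is just George's rough figure from tip 4 
(note that under definition 2, passing the check still doesn't guarantee 
the space is contiguous):

on safePaint
  if the heapspace < 300 * 1024 then
    answer "Probably not enough memory for the paint tools."
  else
    choose pencil tool
    -- ...paint away...
    choose browse tool
  end if
end safePaint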


You choose. The second one seems to make slightly more sense in light of my 
problem (lots of heapspace, not enough memory); but, then again, the 
heapspace isn't of much value if it doesn't tell you what's really 
available...

Judy Stern