[comp.sys.amiga] Does RAM: ever retry?

peter@sugar.UUCP (Peter da Silva) (08/16/87)

Occasionally I get Out of Disk Space requestors on RAM:. When this happens
I usually try unloading a few programs to make space for RAM:, but it never
seems to help. I can see large chunks of contiguous space showing up on my
fuel gauge, but when I request "RETRY" it always comes back immediately. Does
RAM: ever try to allocate more memory after a failure of this type?
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter (I said, NO PHOTOS!)

gary@eddie.MIT.EDU (Gary Samad) (08/20/87)

In article <510@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
}Occasionally I get Out of Disk Space requestors on RAM:. When this happens
}I usually try unloading a few programs to make space for RAM:, but it never
}seems to help. I can see large chunks of contiguous space showing up on my
}fuel gauge, but when I request "RETRY" it always comes back immediately. Does
}RAM: ever try to allocate more memory after a failure of this type?
}-- 
}-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter (I said, NO PHOTOS!)

Try unloading one big program rather than a lot of little programs.  Then
"RETRY" will work.  It appears to require a 30K contiguous chunk to continue
correctly.  I've actually had over 100K available and gotten that RAM: 
requester.  By deleting something like ln (the Manx linker) or sometimes
emacs I can get it to continue.

	Gary

peter@sugar.UUCP (Peter da Silva) (08/23/87)

In article <6619@eddie.MIT.EDU>, gary@eddie.MIT.EDU (Gary Samad) writes:
> In article <510@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
> }seems to help. I can see large chunks of contiguous space showing up on my
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Try unloading one big program rather than a lot of little programs.  Then
> "RETRY" will work.  It appears to require a 30K contiguous chunk to continue
                                              ^^^^^^^^^^^^^^^^^^^^
> correctly.  I've actually had over 100K available and gotten that RAM: 
> requester.  By deleting something like ln (the Manx linker) or sometimes
> emacs I can get it to continue.

30 K at a time? That's gross and disgusting. Is there anything that allocates
smaller memory chunks without requiring a lot of RAM? I've tried VD0:, but it's
too big: takes so much off the top that I don't have enough room left for
Manx. It shouldn't be impossible. I can load NewClock and Gauge both and still
have enough RAM: to work in.

Also, I'm sure I'm freeing up more than 30K at a time. I can watch RAM:
allocate memory as it requires, and it's allocating chunks way smaller
than the ones I'm freeing. Something odd is happening here.
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter (I said, NO PHOTOS!)
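[A toy sketch, not from the thread, of the fragmentation effect being discussed: a memory gauge can report well over 100K free while no single hole is big enough for a 30K contiguous allocation. The chunk sizes below are invented for illustration.]

```python
# Toy model of memory fragmentation: plenty of total free space,
# but no single contiguous hole reaches 30K.
# Chunk sizes are invented for illustration.

free_chunks = [24_000, 16_000, 28_000, 12_000, 20_000, 8_000]  # bytes

total_free = sum(free_chunks)      # what a "fuel gauge" reports
largest_hole = max(free_chunks)    # the most a contiguous allocation can get

def can_allocate(size, chunks):
    """First-fit: succeeds only if some single hole is >= size."""
    return any(c >= size for c in chunks)

print(total_free)                          # over 100K free in total...
print(largest_hole)                        # ...yet the biggest hole is under 30K
print(can_allocate(30_000, free_chunks))   # so a 30K contiguous request fails
```

This is why unloading many small programs may not help: each freed region becomes its own hole, and only freeing one large program produces a single hole big enough.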

jesup@steinmetz.steinmetz.UUCP (Randell Jesup) (08/24/87)

In article <535@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>In article <6619@eddie.MIT.EDU>, gary@eddie.MIT.EDU (Gary Samad) writes:
>> "RETRY" will work.  It appears to require a 30K contiguous chunk to continue
>                                              ^^^^^^^^^^^^^^^^^^^^
>> correctly.  I've actually had over 100K available and gotten that RAM: 
>> requester.  By deleting something like ln (the Manx linker) or sometimes
>> emacs I can get it to continue.
>
>30 K at a time? That's gross and disgusting. 
...
>Also, I'm sure I'm freeing up more than 30K at a time. I can watch RAM:
>allocate memory as it requires, and it's allocating chunks way smaller
>than the ones I'm freeing. Something odd is happening here.

>-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter

The 1.2 Ramdisk has a 'ripcord' of 30K of ram.  If it finds it can't get
enough memory, it releases its 'ripcord' and puts up the disk full
requestor.   It won't agree to be un-full until it can get a ripcord
back.  This was to solve problems of copying too much to the ram disk
and having the system crash.  Without the 30K, it's hard to let the user
know the ramdisk is full and let the user correct the problem.

	Randell Jesup
	jesup@steinmetz.uucp
	jesup@ge-crd.arpa
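[Not the actual 1.2 RAM: handler code, but the 'ripcord' protocol Randell describes can be sketched as a toy model; all the names, and every number except the 30K figure, are invented:]

```python
RIPCORD = 30_000  # reserve held back so the system can still show a requestor

class RamDisk:
    """Toy model of the 1.2 RAM: 'ripcord' behavior (invented names)."""

    def __init__(self, heap):
        self.heap = heap                    # simulated shared free memory, bytes
        self.ripcord = self._take(RIPCORD)  # grab the reserve up front
        self.full = False

    def _take(self, size):
        if self.heap >= size:
            self.heap -= size
            return size
        return 0

    def write(self, size):
        if self.full or self.heap < size:
            # Can't satisfy the write: release the reserve so the
            # "disk full" requestor can appear, and latch "full".
            self.heap += self.ripcord
            self.ripcord = 0
            self.full = True
            return False
        self.heap -= size
        return True

    def retry(self):
        # Refuses to be un-full until a whole new ripcord can be taken.
        if self.full:
            self.ripcord = self._take(RIPCORD)
            self.full = self.ripcord == 0
        return not self.full

    def system_alloc(self, size):
        """Another task allocating from the same shared pool."""
        self.heap -= size

    def system_free(self, size):
        """Another task exiting (e.g. the user unloading a program)."""
        self.heap += size

# Usage: a 40K pool leaves 10K usable behind the 30K reserve.
rd = RamDisk(heap=40_000)
rd.write(8_000)            # succeeds
rd.write(5_000)            # fails: ripcord released, volume latched full
rd.system_alloc(25_000)    # another task grabs the freed memory
rd.retry()                 # still full: can't rebuild the 30K reserve
rd.system_free(30_000)     # user unloads a big program
rd.retry()                 # now un-full again
```

The key point of the design is the latch: once full, RETRY only succeeds after the handler can re-acquire its entire reserve, which explains why freeing memory that immediately gets grabbed by something else doesn't clear the requestor.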

peter@sugar.UUCP (Peter da Silva) (08/26/87)

I still think it's gross...

> The 1.2 Ramdisk has a 'ripcord' of 30K of ram.  If it finds it can't get
> enough memory, it releases its 'ripcord' and puts up the disk full
> requestor.   It won't agree to be un-full until it can get a ripcord
> back.  This was to solve problems of copying too much to the ram disk
> and having the system crash.  Without the 30K, it's hard to let the user
> know the ramdisk is full and let the user correct the problem.

But does that 30K have to be contiguous? And what's wrong with just saving
enough for the requestor? Surely a system requestor can fit in less than 30K.
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                  U   <--- not a copyrighted cartoon :->

jesup@steinmetz.steinmetz.UUCP (Randell Jesup) (08/28/87)

In article <566@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
>> The 1.2 Ramdisk has a 'ripcord' of 30K of ram.  If it finds it can't get
>> enough memory, it releases its 'ripcord' and puts up the disk full
>> requestor.   It won't agree to be un-full until it can get a ripcord
>> back.  This was to solve problems of copying too much to the ram disk
>> and having the system crash.  Without the 30K, it's hard to let the user
>> know the ramdisk is full and let the user correct the problem.
>
>But does that 30K have to be contiguous? And what's wrong with just saving
>enough for the requestor? Surely a system requestor can fit in less than 30K.
>-- 
>-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter

I'm afraid that it does have to be contiguous, though I don't believe that
it has to be chip ram.  It does have to save more than enough for a requestor,
because if you are to reduce memory usage, it has to be able to load things
such as Delete, etc., and you may have to re-arrange windows (or open them)
to fix the problem.

	Randell Jesup
	jesup@steinmetz.UUCP
	jesup@ge-crd.arpa

peter@sugar.UUCP (08/30/87)

I think RAM: has a bug in it. I got "RAM: is full". I quit ARC. Now, ARC is a
big program, so after I quit it RAM: should have plenty of room. Well, this
time it wouldn't ever let me put anything in. According to FRAGS I had gobs
of space. What gives?
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                  U   <--- not a copyrighted cartoon :->

fgd3@jc3b21.UUCP (09/03/87)

In article <595@sugar.UUCP>, peter@sugar.UUCP (Peter da Silva) writes:
> I think RAM: has a bug in it. I got "RAM: is full". I quit ARC. Now, ARC is a
> big program, so after I quit it RAM: should have plenty of room. Well, this

     I've never had a problem like that.  When I get "RAM: is full" I can
always delete some files from RAM: and try again.  Could it be that ARC didn't 
free all its memory?  

--Fabbian Dufoe
  350 Ling-A-Mor Terrace South
  St. Petersburg, Florida  33705
  813-823-2350

UUCP: ...gatech!codas!usfvax2!jc3b21!fgd3 

gary@eddie.MIT.EDU (Gary Samad) (09/03/87)

In article <566@sugar.UUCP> peter@sugar.UUCP (Peter da Silva) writes:
}I still think it's gross...
}
}> The 1.2 Ramdisk has a 'ripcord' of 30K of ram.  If it finds it can't get
}> enough memory, it releases its 'ripcord' and puts up the disk full
}> requestor.   It won't agree to be un-full until it can get a ripcord
}> back.  This was to solve problems of copying too much to the ram disk
}> and having the system crash.  Without the 30K, it's hard to let the user
}> know the ramdisk is full and let the user correct the problem.
}
}But does that 30K have to be contiguous? And what's wrong with just saving
}enough for the requestor? Surely a system requestor can fit in less than 30K.

'Fraid not.  During development of Microfiche Filer I spent several full
days writing routines to handle low memory conditions.  The major task
was to throw up a requestor alerting the user whenever this happened.  So,
I computed that I should need about 10K for my requestor (size of window
* 2 bitplanes * 2 again to cover the smart bitmaps that are allocated
when you open a window on top of a SMART_REFRESH window).  "No problem,
just set up a small ripcord," thought I.  "10K should do it."  "Whoops,
well how about 20K?" "30K?" "Ah, there's the requestor." "But not every time.
How about 50K?" "Nope, 60K?"

It took 60K to be able to RELIABLY open my requestor!!!!!  Even though the
computed space needs should have been around 10K.

My solution?  Give the user a "memory paranoid" option which opens that
window and just keeps it around, but at the bottom of the window pile.
Yes, this could be a tacky solution but it works fine for my particular
product because I always have a bigger window around, behind which I keep
this "paranoid" window.  It works beautifully.

But why does it actually require up to 60K to open a requestor that only
takes 10K after it is open?  (I have verified that it only takes about
10K once it is opened.)

	Gary
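[Gary's back-of-the-envelope figure can be reproduced with a small calculation. The window dimensions below are assumptions, since the post doesn't give them; the row-rounding follows the Amiga's word-aligned bitplane layout:]

```python
def bitmap_bytes(width, height, depth):
    """Bytes for a set of bitplanes: 8 pixels per byte, with each
    row rounded up to a 16-pixel word boundary, times plane depth."""
    words_per_row = (width + 15) // 16
    return words_per_row * 2 * height * depth

# Assumed requester window: 320x100 pixels, 2 bitplanes.
window = bitmap_bytes(320, 100, 2)

# Doubled to cover the smart-refresh backing bitmaps Gary mentions:
estimate = window * 2

print(window, estimate)   # a figure in the ~10-20K ballpark he computed
```

With those assumed dimensions the doubled estimate lands around 16K, the same order of magnitude as Gary's 10K figure, which makes the observed 60K requirement look all the more like allocator overhead or fragmentation rather than raw bitmap size.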

peter@sugar.UUCP (Peter da Silva) (09/06/87)

In article <151@jc3b21.UUCP>, fgd3@jc3b21.UUCP (Fabbian G. Dufoe) writes:
> In article <595@sugar.UUCP>, peter@sugar.UUCP (Peter da Silva) writes:
> > I think RAM: has a bug in it. I got "RAM: is full". I quit ARC. Now, ARC is a
> > big program, so after I quit it RAM: should have plenty of room. Well, this
> 
>      I've never had a problem like that.  When I get "RAM: is full" I can
> always delete some files from RAM: and try again.  Could it be that ARC didn't 
> free all its memory?  

I can get space in RAM: by deleting files from RAM: too. Unfortunately, that 
doesn't help in this case. You can't always do that.

Assume, for a minute, that you're compiling to RAM:. Now then, how can you
delete files from RAM: when the only file in RAM: is the one you're trying
to build? What you need to do is free up more memory and add it to RAM:.

And the "free space count" in the workbench window tells me that ARC is
freeing all its memory.
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                 'U`  <-- Public domain wolf.

peter@sugar.UUCP (Peter da Silva) (09/06/87)

> [ it takes 60K free to reliably open a requestor ]

Why use a requestor?

I mean... what is the whole purpose of requestors? What do they do for you
that just rendering into a smart refresh window doesn't? They're a little
easier to code (not much, though, since you have to open a window for them
in the first place), but they can't be handled by the user as a first-class
object (you can open a window that has that capability, true, and most
everyone does... but then you get back to "you're opening the window anyway"),
and they take up a bunch of contiguous memory (a smart refresh window can be
diced into teeny chunks). And they're not very versatile.

I can see that they save some hassles, but in this case why don't you just
open up a window and render into it? You can even make it simple-refresh
and save even more memory!

Even better... open a 1 bit-plane lores screen that looks like an alert but let
the user click into the workbench window, pull it up, work in it, and then
pull it down and click in the right or left half of the "alert"? I'll bet
you'll save gobs of memory!

Smart alerts... what a concept!
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                 'U`  <-- Public domain wolf.

bryce@hoser.berkeley.edu (Bryce Nesbitt) (09/10/87)

> [Long discussion about RAM: and out-of-memory]

I get around RAM:'s low memory handling by using VD0: instead. 
VD0: is great (yes... I paid, how about you?)
RAM: seems to have a problem or two with memory.  For an example, try this:


Bug Creation Procedure:  [ I thought bugs evolved? :-) ]

Kick with V1.2 Kickstart (33.180, release).
Work with Workbench 33.56 (Comes with the A500/A2000)
Open the Workbench disk.
Drag the Clock into the RAM disk.
Close the Workbench drawer.
Open the RAM disk.
Note how much memory you have free.
Select the Clock icon.
Select "Duplicate" from the menu.
Wait.
You get a "Volume RAM: is full" requester.  Huh??

Doing an "avail" from a CLI reveals that memory is indeed all full...
something ate all the free ram; on my A1000 that's about 1.5 Megabytes!!
Clicking cancel brings it all back.  Strange, eh?

Related bugs:

You can rename two files in RAM: to have the same name.
If the first copy into RAM: runs out of memory, the block count
will be screwed.  Type "info" to crash your machine with a divide
by zero error.


|\ /|  . Ack! (NAK, EOT, SOH)
{O o} . 
 (")	bryce@hoser.berkeley.EDU -or- ucbvax!hoser!bryce
  U	
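[A guess at the shape of the divide-by-zero Bryce describes; this is not the actual "info" source, just an illustration of how a corrupted (zero) block count could crash a percent-full calculation:]

```python
def percent_full(used_blocks, total_blocks):
    # If a failed copy leaves the volume's total block count at zero,
    # this division blows up -- the kind of crash "info" would hit.
    return used_blocks * 100 // total_blocks

print(percent_full(44, 88))        # a healthy volume: 50% full
try:
    percent_full(0, 0)             # a screwed block count...
except ZeroDivisionError:
    print("divide by zero, as with 'info' on the corrupted volume")
```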

ralph@mit-atrp.UUCP (Amiga-Man) (09/10/87)

About this RAM: getting full stuff, has anyone else noticed that
if you do a workbench-style icon-duplicate for something on the
ram disk (or the ASDG ram disk too) the system allocates ALL its fast ram,
dups, and then frees it?  Why?  It's a dangerous thing to do!
I saw this by running the "performance monitor" by Dale Luck
and watching the lines for fast ram.

cheeser@dasys1.UUCP (Les Kay) (09/13/87)

In article <652@sugar.UUCP>, peter@sugar.UUCP (Peter da Silva) writes:
> In article <151@jc3b21.UUCP>, fgd3@jc3b21.UUCP (Fabbian G. Dufoe) writes:
> > In article <595@sugar.UUCP>, peter@sugar.UUCP (Peter da Silva) writes:
> > > I think RAM: has a bug in it. I got "RAM: is full". I quit ARC. Now, ARC is a
> > > big program, so after I quit it RAM: should have plenty of room. Well, this
> >      I've never had a problem like that.  When I get "RAM: is full" I can
> > always delete some files from RAM: and try again.  Could it be that ARC didn't 
> > free all its memory?  
> 
> I can get space in RAM: by deleting files from RAM: too. Unfortunately, that 
> doesn't help in this case. You can't always do that.


I've had a similar problem happen to me when I compile programs in RAM:.
I have taken to doing all <make that ALL> of the compilation in ram; the
compiler, linker, libs AND INCLUDES as well as my source is all up there.

Well, while I do have a lot of ram... sometimes I end up with both dme and
emacs in ram.  I use several dme windows as well as multiple buffers in
microemacs... when I do a BIG compilation, I've also run into the RAM: full
nasty.  Normally, if I go kill either dme or emacs or both, the problem goes
away - BUT - not always - occasionally, the system just stays locked with
RAM: full.  Sigh.  It's the reason my make program does a temp save of my
work BEFORE invoking LC!

=============================================================================
Jonathan Bing, Master (cheeser)  		ihnp4!hoptoad!dasys1!cheeser
			A penny saved is  absurd!!!
		Sorry if my Karma ran over your Dogma ...
=============================================================================