[comp.sys.atari.st] Ram Disks

ljdickey@water.waterloo.edu (Lee Dickey) (02/21/88)

In article <439@dukempd.UUCP> gpm@dukempd.UUCP (Guy Metcalfe) asks:

>         P.S.	I'd still like to hear from someone about my desire
>		to find a resizeable and reset survivable ram disk.
>		At least tell me why it's hard to do.  I got no response
>		to an earlier inquiry except for requests to pass any
>		information along.
>

I would like to know too.
Could a ramdisk author ( Moshe? ) answer this?

--
 L. J. Dickey, Faculty of Mathematics, University of Waterloo.
	ljdickey@water.waterloo.edu
	ljdickey@watdcs.BITNET
	ljdickey@water.UUCP	...!uunet!water!ljdickey

achowe@trillium.waterloo.edu (CrackerJack) (02/21/88)

The good news...

I don't know if this will help, but "Compute!'s Atari ST" magazine
Issue 5, Vol 2, No.3 provided a recoverable RAM disk which recovered
from system crashes 90% of the time. Mag/disk provided both .TOS and
.S files. It was totally configurable for size, drive name, and files
to copy to the RAM disk. I used it from the time it came out on my
1040 ST till I switched to a Mega ST2... 

...the bad news...

Except this gem of a program, which was to be the cornerstone of my
development environment (1M recoverable RAM disk, wow!), does NOT
work on the MEGA ST2 (gasp!!). What seems to happen is this: the disk
sets up just fine from a cold boot, but when you reset the
system after a crash, the RAM disk program says it recovered
(which appears to be true, because a tool I have says the
system is short 1M of memory after the reset) BUT fails to connect
the RAM disk with the GEM icon for that disk. Whether or not the
RAM disk's data is intact I have not been able to verify, since I
can't access it.
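
If anyone does get a chance to dig in, one thing worth checking (purely a
guess on my part, not a diagnosis) is whether the recovery path re-marks the
drive in the _drvbits system variable at $4C2 after the warm start; if that
bit never gets set again, GEMDOS thinks the drive doesn't exist and GEM has
no drive to hang an icon on.  Something as small as this would do it (the
drive letter is just an example):

#include <osbind.h>

#define DRVBITS   ((long *) 0x4C2L)	/* system variable: bitmap of live drives */
#define RD_DRIVE  3			/* example only: drive D (A=0, B=1, ...) */

static long mount_drive()
{
	*DRVBITS |= (1L << RD_DRIVE);	/* tell GEMDOS (and hence GEM) the drive exists */
	return 0L;
}

main()
{
	Supexec(mount_drive);		/* system variables want supervisor mode */
	return 0;
}

If the bit is already set after a reset on the Mega, then my guess is wrong
and the problem lies somewhere else.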

I've been meaning to write the magazine and ask them to notify the
author, and/or to have a look at the code myself, but I haven't yet
had time to play with it.

Maybe *hint, nudge, poke, beat with large stick* if someone else
could take a look at this fine program that doesn't seem to work
to spec on MEGAs (grrrr), it might be possible to fix the bug or
create a MEGA version. The magazine's copyright does not allow me to
pass the source around :-(

- Ant
-------------------------------------------------------------
achowe@trillium.waterloo.edu

note: my nickname refers to the fact that I eat a lot of "CrackerJack" candy
and is not meant to suggest that I walk around wearing a patch and a wooden
leg while waving a sabre, raping large buxom women and sucking back kegs of
rum faster than engineers can demolish 40 beers :)

"The definition of flying: throwing yourself at the ground and missing."
		- Douglas Adams'  "Life, the Universe and Everything"

jafischer@lily.waterloo.edu (Jonathan A. Fischer) (02/23/88)

In article <439@dukempd.UUCP> gpm@dukempd.UUCP (Guy Metcalfe) asks:

>         P.S.	I'd still like to hear from someone about my desire
>		to find a resizeable and reset survivable ram disk.
>		At least tell me why it's hard to do.  I got no response
>		to an earlier inquiry except for requests to pass any
>		information along.

	Actually, this is one of those things I've had on the "what to do
if I ever have enough free time for programming" list.  But I was just
pondering the question last week, and here are a couple of reasons why it
is hard and quite likely impossible (and of course I'm open to corrections):

	o  If you wanted a reset-proof ramdisk, somehow you'd have to find
	   a way to make TOS think that its physical memory size was
	   changing on the fly -- at least, that's the way "eternal" does
	   it.  The "eternal" ram-disk changes the top-of-memory pointer
	   and then performs a warm boot (there's a rough sketch of that
	   half of the trick after this list).  I don't know if you can
	   avoid the need for a reset when you want to set a new top-of-ram
	   value.  (In fact, I'm not entirely sure why it's necessary.  Why
	   doesn't "eternal" just do a big Malloc() once when it is run
	   from the auto folder?)

	o  The main problem is with the Malloc() system routine.  Its bugs
	   are notorious, but in short, there is a fixed limit on the number
	   of Malloc's any one application can perform.  In fact, I once
	   caught a message on Compuserve claiming that even if you're just
	   doing something like:
	   	while (1) {
	   		buf = Malloc(size);
	   		Mfree(buf);
	   	}
	   then it _still_ stops giving you memory after a while.  (I'd like
	   someone to confirm this).
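
	As far as I can tell, the first half of the "eternal" trick amounts
to something like the sketch below: reserve the memory by pulling phystop
down, then warm boot so GEMDOS rebuilds its pool underneath it.  Treat this
as my guess at the mechanism rather than a quote from "eternal"'s source;
the 1 Mb size and the auto-folder assumption are mine.

#include <osbind.h>

#define PHYSTOP  ((long *) 0x42EL)	/* system variable: top of physical RAM */
#define RD_SIZE  (1024L * 1024L)	/* assumed 1 Mb ramdisk */

static long reserve()
{
	/* Make TOS believe RAM ends below the ramdisk area, so the warm
	   boot rebuilds the GEMDOS pool underneath it and the reserved
	   region is never handed out (or cleared) again.  */
	*PHYSTOP -= RD_SIZE;
	return 0L;
}

main()
{
	Supexec(reserve);	/* system variables need supervisor mode */
	/* ...stamp a marker into the reserved area, then force the warm
	   boot; on the next pass through the auto folder the driver
	   spots the marker and reinstalls its vectors.  */
	return 0;
}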

	At any rate, those are the two obstacles that come to mind.  Maybe
with the new TOS, the Malloc problem will be fixed.  But if you were to
just dynamically Malloc sectors as they were needed, I don't think you could
make it resettable, since TOS will zero out all of its RAM when you reboot.
--
			- Jonathan A. Fischer,    jafischer@lily.waterloo.edu
...{ihnp4,allegra,decvax,utzoo,utcsri}!watmath!lily!jafischer
"Is your religion real when it costs you nothing and carries no risk?"
			- The Preacher at Arrakeen

Thomas_E_Zerucha@cup.portal.com (02/24/88)

Reset-proofing is straightforward, but it requires a warmstart: the standard
technique is to alter phystop and leave something at phystop+n to indicate
that a ramdisk is actually there, then reinstall the vectors the second time
through the auto folder.  Resizing, though, becomes difficult without yet
another warmstart.  Given one, it would simply involve ensuring that the
latter region of the ramdisk is not in use, adjusting the free-space byte in
the boot sector, moving the ramdisk (bottom up, with phystop changed to
reflect the move), and then warmstarting again.
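
Roughly, the second-time-through check could look like the sketch below.  The
marker value, the little header, and its placement right at phystop are
choices made purely for illustration, not anything a particular ramdisk is
known to use.

#include <osbind.h>

#define PHYSTOP   ((long *) 0x42EL)	/* system variable: top of physical RAM */
#define RD_MAGIC  0x52414D44L		/* arbitrary marker ('RAMD') */

struct rd_header {			/* assumed to sit at phystop+0 */
	long magic;			/* RD_MAGIC if the ramdisk survived */
	long size;			/* size of the ramdisk area in bytes */
};

static long check_ramdisk()
{
	/* phystop was pulled down below the ramdisk on the first pass,
	   so the header is the first thing above the apparent end of RAM. */
	struct rd_header *hdr = (struct rd_header *) *PHYSTOP;

	return (hdr->magic == RD_MAGIC) ? hdr->size : 0L;
}

main()
{
	long size = Supexec(check_ramdisk);	/* supervisor mode for system variables */

	if (size != 0L) {
		/* survived the reset: reinstall the disk vectors and mark
		   the drive in _drvbits so GEMDOS sees it again */
	} else {
		/* cold boot or first run: set up the ramdisk from scratch */
	}
	return 0;
}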

sid@brambo.UUCP (Sid Van den Heede) (03/04/88)

In article <3335@watcgl.waterloo.edu> jafischer@lily.waterloo.edu (Jonathan A. Fischer) writes:
>	o  The main problem is with the Malloc() system routine.  Its bugs
>	   are notorious, but in short, there is a fixed limit on the number
>	   of Malloc's any one application can perform.  In fact, I once
>	   caught a message on Compuserve claiming that even if you're just
>	   doing something like:
>	   	while (1) {
>	   		buf = Malloc(size);
>	   		Mfree(buf);
>	   	}
>	   then it _still_ stops giving you memory after a while.  (I'd like
>	   someone to confirm this).

The following program succeeds on my system.  I'm using MWC version 2.0
and the Nov 20 1985 version of the TOS ROMs.  The program stops after
10,000 repetitions.  When I displayed the address returned by Malloc,
it was always the same.

#include <stdio.h>
#include <osbind.h>

main()
{
	long buf;
	long count = 0;

	/* Malloc(-1L) reports the size of the largest free block. */
	printf("%ld bytes memory available.\n", Malloc(-1L));

	while (1) {
		/* GEMDOS hands back 0L when it can't satisfy the request. */
		if ((buf = Malloc(1000L)) <= 0L) {
			printf("Malloc stopped giving memory after %ld calls.\n", count);
			break;
		}
		if (++count > 10000L) {
			printf("Malloc ok after 10000 tries.\n");
			break;
		}
		if (Mfree(buf) < 0L) {
			printf("Mfree failed after %ld tries.\n", count);
			break;
		}
	}
	return 0;
}
-- 
Sid Van den Heede		Voice: 416-792-1137
sid@brambo.UUCP			FAX:   416-792-1536
...!utgpu!telly!brambo!sid
Bramalea Software, Suite 406, 44 Peel Centre Dr, Brampton, Ontario  L6T 4B5

dstr012@ucscg.UCSC.EDU (10003012) (11/20/89)

Is there already a RAM disk program out there, or would it be possible
to write one, that could increase its size as more and more files are put
into it?  It would really be great, because I keep running out of space
when de-ARCing things.

Roman Baker

glk01126@uxa.cso.uiuc.edu (11/21/89)

	Not very likely: if the memory has already been allocated by Dcopy or ARC.TTP, how would the RAM disk grab more?
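
	The most a driver could do, I suppose, is ask GEMDOS whether there is
anything left before it tries to grow; whatever another program has already
Malloc'd is out of reach.  A trivial sketch of that probe (the 100K growth
step is just an example):

#include <stdio.h>
#include <osbind.h>

#define GROW_BY (100L * 1024L)	/* example growth step of 100K */

main()
{
	long extra;

	/* Try to claim another chunk for the ramdisk.  If Dcopy or
	   ARC.TTP is sitting on the memory, GEMDOS has nothing to give;
	   and even a success is a separate block, not an extension of
	   the block the ramdisk already owns.  */
	extra = Malloc(GROW_BY);
	if (extra <= 0L)
		printf("no room to grow\n");
	else
		Mfree(extra);	/* give it back; this was only a probe */
	return 0;
}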