[comp.sys.atari.st] using memory

apratt@atari.UUCP (Allan Pratt) (05/03/89)

In article <3694@nunki.usc.edu> rjung@sal61.usc.edu (Robert  allen Jung) writes:
> You're
> uncompressing a package, and send all of the bits into the RAMdisk. 

Solutions like this are necessary because of problems like ARC.  ARC is
poorly implemented in that it reads & writes 512 or 1024 bytes at a time
(as far as I know).  You get HUGELY improved throughput if you read a
large buffer-full of your source, decode into another buffer, and write
large chunks at a time.  The chunks don't have to be very big: you get
dramatic improvements at 16K, and usually not much increased benefit by
going up to 1M or more. 
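
For what it's worth, here is a minimal sketch of that chunked approach in
plain C.  The 16K chunk size, the file names, and the decode_block()
routine are placeholders, not anything taken from ARC itself:

	#include <stdio.h>
	#include <stdlib.h>

	#define CHUNK 16384L    /* 16K already buys most of the speedup */

	/* Placeholder for whatever per-block decoding the program does. */
	static void decode_block(unsigned char *buf, size_t len)
	{
	    (void)buf; (void)len;   /* ... decompress/translate in place ... */
	}

	int main(void)
	{
	    FILE *in  = fopen("source.arc", "rb");   /* hypothetical input  */
	    FILE *out = fopen("output.dat", "wb");   /* hypothetical output */
	    unsigned char *buf = malloc(CHUNK);
	    size_t n;

	    if (in == NULL || out == NULL || buf == NULL)
	        return 1;

	    /* One read/write pair per 16K instead of per 512 bytes:
	       far fewer system calls and far fewer disk accesses. */
	    while ((n = fread(buf, 1, CHUNK, in)) > 0) {
	        decode_block(buf, n);
	        fwrite(buf, 1, n, out);
	    }

	    free(buf);
	    fclose(in);
	    fclose(out);
	    return 0;
	}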

Look how much faster UNARC.TTP is than the equivalent extraction using
ARC.  The entire difference is that UNARC reads the file in from the
archive, uncompresses it, then writes the plaintext.  It may not even
read & write the whole thing in one go, and I don't care; whatever it
handles at a time is a hell of a lot more than 1024 bytes!
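
For illustration, here is a rough sketch of that read-it-all, write-it-all
style.  This is not UNARC's actual code; the names are made up and the
error handling is trimmed:

	#include <stdio.h>
	#include <stdlib.h>

	/* Sketch: slurp one file into memory, process it, then write it
	 * back out in a single call instead of dribbling it through a
	 * 1K buffer. */
	static int extract_whole(const char *src, const char *dst)
	{
	    FILE *in = fopen(src, "rb"), *out;
	    long size;
	    unsigned char *buf;

	    if (in == NULL)
	        return -1;
	    fseek(in, 0L, SEEK_END);
	    size = ftell(in);        /* total size of the input */
	    rewind(in);

	    buf = malloc((size_t)size);
	    if (buf == NULL) {
	        fclose(in);
	        return -1;
	    }
	    fread(buf, 1, (size_t)size, in);
	    fclose(in);

	    /* ... uncompress buf here, in place or into a second buffer ... */

	    if ((out = fopen(dst, "wb")) == NULL) {
	        free(buf);
	        return -1;
	    }
	    fwrite(buf, 1, (size_t)size, out);
	    fclose(out);
	    free(buf);
	    return 0;
	}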

The extreme example of this problem is GFA BASIC, which reads and writes
text files ONE BYTE AT A TIME.  That is, it uses

	Fread(fd,1L,&c);

to read a character from a file.  This is ABSURDLY slow, and you end up
with whole subcultures coming up with workarounds to make text file I/O 
reasonably fast.
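
The usual workaround boils down to a small buffered reader: one Fread()
call pulls in several K, and a getc-style routine hands bytes out of that
buffer.  A sketch in C -- the buffer size and names are just illustrative,
and I'm assuming the usual <osbind.h> GEMDOS bindings; your compiler's
header may differ:

	#include <osbind.h>     /* GEMDOS bindings: Fread() et al. */

	#define RBUFSIZE 8192L

	static char rbuf[RBUFSIZE];
	static long rcount = 0;  /* bytes currently in rbuf */
	static long rpos   = 0;  /* next byte to hand out   */

	/* Buffered stand-in for Fread(fd,1L,&c): one GEMDOS call per 8K
	 * of text instead of one per character. */
	int bgetc(int fd)
	{
	    if (rpos >= rcount) {
	        rcount = Fread(fd, RBUFSIZE, rbuf);
	        rpos = 0;
	        if (rcount <= 0)
	            return -1;   /* EOF or error */
	    }
	    return (unsigned char)rbuf[rpos++];
	}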

I know this doesn't directly address the RAMDISK problem; that's been
put to bed, I think (can't resize meaningfully under TOS).  My point is
this: you have lots of memory; use it!

============================================
Opinions expressed above do not necessarily	-- Allan Pratt, Atari Corp.
reflect those of Atari Corp. or anyone else.	  ...ames!atari!apratt

hyc@math.lsa.umich.edu (Howard Chu) (05/04/89)

In article <1477@atari.UUCP> apratt@atari.UUCP (Allan Pratt) writes:
>Solutions like this are necessary because of problems like ARC.  ARC is
>poorly implemented in that it reads & writes 512 or 1024 bytes at a time
>(as far as I know).  You get HUGELY improved throughput if you read a
>large buffer-full of your source, decode into another buffer, and write
>large chunks at a time.  The chunks don't have to be very big: you get
>dramatic improvements at 16K, and usually not much increased benefit by
>going up to 1M or more. 

Yep... Particularly for floppies, using a buffer at least the size of one
track seems to do pretty well. I set up ARC to use 30K for all its stdio
buffers, and that has worked out nicely. In particular, it now reads and
writes most files in only a couple of disk accesses. (When you're ARC'ing
up a source tree, how often do your individual source files exceed 60K?)
I used the same approach with KA9Q Net - FTPs are much faster now, with
fewer line turnaround delays...

Of course, using Mark Williams C, this required a change to the stdio
library. As distributed, it uses fixed-size buffers (of 1024 bytes). If
they were *really* serious about ANSI compliance, they'd have included
setvbuf or setbuffer in the library from the beginning...
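
For reference, the ANSI call looks roughly like this; the 30K figure
matches the ARC change above, the file name is made up, and of course it
only works once your library actually has setvbuf():

	#include <stdio.h>
	#include <stdlib.h>

	#define IOBUF 30720     /* 30K, as in the ARC change described above */

	int main(void)
	{
	    FILE *fp = fopen("foo.c", "r");   /* hypothetical file */
	    char *buf = malloc(IOBUF);

	    if (fp == NULL || buf == NULL)
	        return 1;

	    /* Give stdio one 30K fully-buffered block for this stream;
	       must be done before the first read or write on fp. */
	    setvbuf(fp, buf, _IOFBF, IOBUF);

	    /* ... getc()/fgets() now hit the disk far less often ... */

	    fclose(fp);
	    free(buf);
	    return 0;
	}
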
--
 -=- PrayerMail: Send 100Mbits to holyghost@father.son[127.0.0.1]
 and You Too can have a Personal Electronic Relationship with God!