[fa.info-vax] compressing disk space

info-vax@ucbvax.ARPA (07/03/85)

From: Ronald A. Jarrell  <JARRELLRA%VPIVAX3.BITNET@WISCVM.ARPA>


VMS has always had problems with fragmented disk packs.  Most systems
do, if only because fragmentation destroys your performance slowly but
surely.  The only real way to compress a pack under VMS is to do a
BACKUP/IMAGE to your favorite medium (another pack or tape) and then
reload it.  If you have removable packs, you can just BACKUP/IMAGE to
the other pack and start using it instead.  A BACKUP/IMAGE restore
reloads each file in its entirety, because when it dumped them it dumped
each completely, so your file system starts out contiguous.  We schedule
fairly regular periods of compressing at least our system pack.
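A compression cycle along those lines might look something like this
(the device and save-set names here are made up -- substitute your own):

    $ ! Pack-to-pack: image copy straight onto the spare pack
    $ BACKUP/IMAGE/VERIFY DUA0: DUA1:
    $
    $ ! Or dump to tape, then restore onto a freshly initialized pack
    $ BACKUP/IMAGE/VERIFY DUA0: MTA0:SYSDISK.BCK/SAVE_SET
    $ BACKUP/IMAGE/VERIFY MTA0:SYSDISK.BCK/SAVE_SET DUA0:

The restore lays each file down contiguously, which is where the
defragmentation comes from.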


-Ron

info-vax@ucbvax.ARPA (07/03/85)

From: Kevin Carosso <engvax!KVC@cit-vax>

> VMS has always had problems with fragmented disk packs.

This isn't necessarily true.  As with everything else in life,
this depends on the situation...

I have a fairly large configuration, and never perform file-system
backup/restore cycles.  I keep an eye on my disk fragmentation with
the REPORT=DISK function of the SPM utility, so I know I'm not
really fragmenting.  I do, however, take some precautions that may
not be feasible for all sites.
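Keeping an eye on things that way is just a periodic report run;
a sketch of the sort of invocation involved (the exact qualifiers
depend on your SPM version, so check your own documentation):

    $ SPM REPORT=DISK

The disk report shows, per volume, how much of the allocated space
holds real data and how badly the free space is broken up.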

I guess I should stress that I get off so easy because I am not
crunched for disk space.  Our system seems to hover right around
75% full, without too much bouncing up and down.  There is a steady
upward trend, but we have been adding disks slowly over the last
few years to keep things in check.  Because I have the disk space,
I have been rather liberal with cluster factors on my disks.  My
system disk, an RP07, is clustered at 10.  According to SPM, I'm
using about 91% of the allocated space with real data.  The rest is
wasted on the cluster size.  This is not unreasonable, and seems
to keep the fragmentation down and performance up.
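The "91% of allocated space" figure is just rounding arithmetic: with a
cluster factor C, every file's allocation is rounded up to a multiple of
C blocks, so each file wastes roughly C/2 blocks on average.  A quick
sketch of that arithmetic (the file sizes below are invented, not taken
from any real disk):

```python
def allocated_blocks(file_blocks: int, cluster: int) -> int:
    """Blocks actually allocated for a file under a given cluster factor."""
    return -(-file_blocks // cluster) * cluster  # ceiling division

def utilization(file_sizes, cluster):
    """Fraction of allocated space holding real data."""
    used = sum(file_sizes)
    alloc = sum(allocated_blocks(s, cluster) for s in file_sizes)
    return used / alloc

# A handful of made-up file sizes on a disk clustered at 10:
sizes = [23, 57, 110, 8, 42, 300, 65]
print(round(utilization(sizes, 10), 2))  # -> 0.96
```

The moral is that utilization drops as average file size shrinks
relative to the cluster factor -- which is why user disks full of small
files show worse numbers than a system disk at the same cluster size.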

My user disks are Eagles, and I have them clustered at 5.  Due to
the nature of user-type files, however, I generally only see something
like 71% of allocated space going to actual data.  This could be
unacceptable in many situations but, again, since I have the space
to spare I'd rather use a little of it to make my job easier and keep
file-system performance consistently good.

Before I get too many flames from disk-poor system managers, I gotta
say that I've been there too (at school, of course, where else?)
and certainly understand the trials and tribulations of weekly disk
r&r's when the cluster-size is 1 and you're lucky if ya got
1000 free blocks out there....

Also, there was a time when I had several 100000-block database files
out there that were fragmented as hell...  This made for much
excitement during the 4.0 upgrade.... (gggggrrrrr!!!!!)  That, more
than anything I've experienced, points out the simple fact that
fragmentation is also a function of the file sizes you've got out
there.  While I wouldn't consider my disks fragmented under normal
use, they sure behaved differently from the point of view of the
10000 to 100000+ block file...

(Oracle now lives on a different disk, with its databases allocated
 contiguously before anything else went on the disk!)

	/Kevin Carosso              engvax!kvc @ CIT-VAX.ARPA
	 Hughes Aircraft Co.