[comp.unix.xenix] Xenix disk defragmenter

ohrnb@edstip.EDS.COM (Erik Ohrnberger) (02/08/90)

Hi!  I'm just starting out on this Unix adventure, coming from
significant DOS experience.  I know that fragmentation of data files
causes a significant loss in performance in the DOS environment.  Is
this also the case for Unix environments?

When I was running DOS, I had a number of tools that I could call upon
to make the files on the disk contiguous again.  Are there such tools
available for Unix?  Where could I obtain such tools?  Do they work
with Xenix and other *nixes?

If these questions or their answers are trivial (flames too) and just
wasting bandwidth, you can e-mail me instead to save more bandwidth.

Any leads would be greatly appreciated.


-- 
Erik Ohrnberger			Work:	ohrnb@edstip
2620 Woodchase Court		Home:	!<internet>!sharkey!nucleus!echocen
Sterling Heights, MI 48310

-- 
-->Erik Ch. Ohrnberger			UUCP:!uunet!edsews!edstip!ohrnb
-->Permanently Refraining from un-informed opinions

chip@chinacat.Lonestar.ORG (Chip Rosenthal) (02/09/90)

In article <845@edstip.EDS.COM> ohrnb@edstip.EDS.COM (Erik Ohrnberger) writes:
>I know that fragmentation of data files causes a significant loss in
>performance in the DOS environment.  Is this also the case for Unix
>environments?

Absolutely.  One of the problems with the XENIX filesystem (and the older
V7-ish and Sys5 filesystems) is that the kernel maintains its list of free
disk blocks in LIFO fashion.  That is, the most recently freed disk block is going
to be the next one used when a file is created or grown.  Not only does
this contribute to the scattering of files across the surface of the disk,
but it relates to why doing a file "undelete" sort of thing is so difficult
in unix.

This becomes even more important with higher performance disk controllers
which can do track buffering.  If the blocks of a file are all contiguous
on a disk track, then you can pull in a whole bunch of the file at once.
When the blocks are scattered about the disk, those extra buffered blocks
which were read in (almost) for free aren't usable.

There are several things you can do to reduce disk fragmentation.

First, you can bring the system down to single user mode, dismount the
file systems, and run "fsck -S" to rebuild the free list.  Files created
after this will tend to be grouped better.  This won't do anything to
help existing files, but it does slow down the entropy of things a bit.
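
For what it's worth, the whole dance is only a few commands.  A minimal
sketch, assuming your user filesystem is /dev/u mounted on /u (substitute
your own device names, and check fsck(C) on your release for the exact
flag behavior):

    # drop to single-user mode first (e.g. via shutdown)
    umount /dev/u       # the free list can't be rebuilt while mounted
    fsck -S /dev/u      # rebuild the free-block list
    mount /dev/u /u     # back in business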

Second, if you are really ambitious, from time to time you can "dump" the
filesystem, rerun "divvy" to re-create the filesystem (and therefore a
well-ordered free-block list), and then "restore" the filesystem.  The
problems with doing this are that (1) it is somewhat time consuming,
and (2) the "dump" had better be a good backup, because "divvy" is
going to wipe the disk.
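
In outline it looks something like the following.  Treat this strictly
as a sketch: the device names are guesses, the dump/restore flags follow
the common conventions and may well differ on your XENIX (read dump(C),
divvy(C), and restore(C) first), and divvy itself is interactive.

    dump 0f /dev/rct0 /dev/u    # level-0 dump of /dev/u to tape --
                                # verify this tape before going on!
    divvy                       # interactively re-create the filesystem;
                                # THIS WIPES THE DISK
    restore rf /dev/rct0        # reload everything onto the fresh
                                # filesystem and its well-ordered free list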

Third, there is a utility called "fsanalyze" written by Michael Young
which has been discussed in this group recently.  This scans filesystems
and generates a report on disk fragmentation.  You can instruct it to
tell you which files are most fragmented.  You can then do a selective
backup of these files, run "fsck -S", and then restore them.  This approach
lies midway between the first two in both difficulty and effectiveness.
Fsanalyze is available in the comp.sources.misc archive.  There have been
three patches to it that you will want to pick up as well.
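
A sketch of that middle road follows.  The fsanalyze invocation is
schematic (I'm not quoting its real flags here; see the documentation
that comes with it), and /u/bigfile stands in for whichever files the
report fingers:

    # fsanalyze /dev/u > /tmp/frag.rpt   -- invocation is a guess;
    # pick the worst offenders out of the report, then:
    tar cvf /tmp/worst.tar /u/bigfile   # save the fragmented files
    umount /dev/u
    fsck -S /dev/u                      # rebuild the free list
    mount /dev/u /u
    rm /u/bigfile                       # drop the fragmented copies...
    cd / ; tar xvf /tmp/worst.tar       # ...and rewrite them contiguously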

Finally, there was a utility posted a few months back which messes around
with the filesystem at a low level and reorganizes things to reduce
fragmentation.  There were some patches posted to get it running on XENIX.
I haven't used it myself.  Scribbling on the disk at this level is a
dangerous proposition if not done correctly, and I just haven't had the
time to bring up the program and convince myself that it isn't going to
trash stuff.  (Some folks out there whose opinions I trust have run this
and posted positive reports, but just the same, I need to convince myself
it works.  I don't think they are willing to take the blame if my system
blows up.)  Another issue with this utility is that I understand it is
verrry slow on severely fragmented filesystems, so prepare to put your
machine out of commission for many hours the first time you run it.
However, it is supposed to become more bearable on subsequent passes if
run regularly.

>Are there such tools available for Unix?  Where could I obtain such
>tools?  Do they work with Xenix and other *nixes?

Above are pretty much the tools and strategies available to you.
Filesystem organization differs enough from one un*x to another that
moving tools around usually involves more than just a recompile,
especially when you get to filesystems like Berkeley's fast filesystem,
which look very different.
-- 
Chip Rosenthal                            |  Yes, you're a happy man and you're
chip@chinacat.Lonestar.ORG                |  a lucky man, but are you a smart
Unicom Systems Development, 512-482-8260  |  man?  -David Bromberg