[comp.unix.questions] File Fragmentation

slouder@note.nsf.gov (Steve Loudermilk) (01/11/89)

Hi,

I am involved in a local discussion about the benefits of "compacting" the
information on our disks regularly.  By compacting I mean dumping to a
different device, running "newfs" and then restoring a file system.
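
Concretely, the sequence we have in mind is roughly the following
(device names here are just examples):

    dump 0f /dev/rmt0h /dev/rra1a    # dump the file system to tape
    umount /dev/ra1a
    newfs /dev/rra1a ra81            # build a fresh, empty file system
    mount /dev/ra1a /usr
    cd /usr; restore rf /dev/rmt0h   # reload everything from the tape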

One school of thought says this is necessary and should be done fairly
frequently to avoid excessive fragmentation and inefficient disk I/O.

The other school of thought says it isn't necessary because of the way 
the Berkeley "fast file system" (BSD 4.2) handles assignment of
blocks and fragments when a file is stored.  

Our system is a VAX 11/785 with six RA81 disk drives, running Ultrix 2.3.
We are currently using a block size of 8192 and a frag size of 1024
for all file systems except one, where we have 4096/1024.  All file
systems are running with at least 12% free space.  
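
For what it's worth, those are the sizes we gave newfs when the file
systems were built, i.e. something like (device name made up):

    newfs -b 8192 -f 1024 /dev/rra1a ra81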

Should I worry about fragmentation?  If so, is dumping and restoring
the best solution?

Thanks in advance,

----------------------------------------------------------------------
Steve Loudermilk			Internet:  slouder@note.nsf.gov
Integrated Microcomputer Systems Inc.	Phonenet:  (202) 357-9648
----------------------------------------------------------------------

debra@alice.UUCP (Paul De Bra) (01/11/89)

In article <18068@adm.BRL.MIL> slouder@note.nsf.gov (Steve Loudermilk) writes:
}Hi,
}
}I am involved in a local discussion about the benefits of "compacting" the
}information on our disks regularly.  By compacting I mean dumping to a
}different device, running "newfs" and then restoring a file system.
}
}One school of thought says this is necessary and should be done fairly
}frequently to avoid excessive fragmentation and inefficient disk I/O.
}
}The other school of thought says it isn't necessary because of the way 
}the Berkeley "fast file system" (BSD 4.2) handles assignment of
}blocks and fragments when a file is stored.  
}

Disk fragmentation (or file fragmentation, as you call it) still occurs
in most versions of Unix, but the Berkeley "fast file system" keeps it
to a minimum.

On a BSD system I would think that a dump/newfs/restore should be done
every year or so.  On other systems the file system can get messed up
in a matter of hours.  One (painful) workaround is to unmount and
fsck -S all file systems once a day; that keeps the fragmentation down
for a long time, because the salvage pass throws away the free list
and rebuilds it in a sensible order.  The old file system makes no
attempt to allocate disk blocks sensibly: it keeps a queue of blocks
being freed and reuses them in that order, so after some use the free
list is effectively in random order.  The V9 "bitmap" file system
keeps fragmentation more local, although I believe it doesn't keep it
down quite as much as BSD does.
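
As a sketch, the nightly cleanup could look something like this (the
device name and mount point are invented; a real script would walk
the mount table):

    umount /dev/dsk/0s2     # file system must be idle
    fsck -S /dev/dsk/0s2    # salvage: rebuild the free list in order
    mount /dev/dsk/0s2 /usr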

Paul.
-- 
------------------------------------------------------
|debra@research.att.com   | uunet!research!debra     |
------------------------------------------------------

chris@mimsy.UUCP (Chris Torek) (01/11/89)

In article <18068@adm.BRL.MIL> slouder@note.nsf.gov (Steve Loudermilk) writes:
>I am involved in a local discussion about the benefits of "compacting" the
>information on our disks regularly.  By compacting I mean dumping to a
>different device, running "newfs" and then restoring a file system.

>One school of thought says this is necessary and should be done fairly
>frequently to avoid excessive fragmentation and inefficient disk I/O.
>The other school of thought says it isn't necessary because of the way 
>the Berkeley "fast file system" (BSD 4.2) handles assignment of
>blocks and fragments when a file is stored.  

The second school is usually correct.

>... Ultrix 2.3. ... running with at least 12% free space.  

If that `12% free' means that `df' shows 12% free, then you have plenty
of room.  If it means that df shows 2% free, then you have room.  Only
if df shows 110% full are you truly out of space.  This 10% `reserve'
is there to prevent fragmentation from becoming excessive.  It can be
adjusted if desired (see `man 8 tunefs').
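
For example, to shrink the reserve on a file system from 10% to 5%
(the device name here is made up):

    tunefs -m 5 /dev/rra1a

The manual page warns that throughput suffers as the reserve
approaches zero, so I would not go much below 5% on a busy file
system.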

Recent Berkeley releases have `fsck' report the amount of fragmentation.
On our machines it is typically under 1%.
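
The number shows up in the summary line fsck prints for each file
system; the figures below are only an illustration:

    7012 files, 215430 used, 98763 free (1371 frags, 12174 blocks, 0.4% fragmentation)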

If you do have too much fragmentation, you may find that you
cannot write files even though there is some free space left.  Dumping
and restoring should reduce the fragmentation.  We have never found
it necessary to do this, and I have never heard of anyone who did.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris