[comp.unix.i386] why separate filesystems?

jimmy@icjapan.info.com (Jim Gottlieb) (08/20/90)

I have discovered that in my absence, several of our systems back in 
the good ol' US of A were set up with just one filesystem (under /).
These are systems with 135 or 300 meg disks.

I told them that it is usually a good idea to use a separate /usr file
system but couldn't answer the inevitable "why" question.  Something
about it being easier to repair a damaged file system when it's not
mounted?  They did it, they say, to avoid the problem of having space
available on the disk, but not in the right place.

Could anyone share reasons why or why not to have a separate /usr
(/usr2, ...)?

Thank you...

--
Jim Gottlieb 					Info Connections, Tokyo, Japan
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
    <jimmy@pic.ucla.edu> or <jimmy@denwa.info.com> or <attmail!denwa!jimmy>
Fax: +81 3 237 5867				    Voice Mail: +81 3 222 8429

geoff@ism780c.isc.com (Geoffrey Kimbrough) (08/22/90)

In article <377@icjapan.uucp> Jim Gottlieb <jimmy@denwa.info.com> writes:
>Could anyone share reasons why or why not to have a separate /usr
>(/usr2, ...)?

	The reasons for / and /usr being separate are largely historical,
and have more to do with the size of pdp11 diskpacks than anything else.
The best reason to keep filesystems small is that it makes restoring single 
files from tape easier.  (Of course, if you run Norton Utilities...8^))

Backups are the real reason now.  My root filesystem (actually, I don't use
a separate /usr anyway, so / holds the usr stuff too) seldom changes much,
and usually contains only packaged, installed software.  I don't need to
back that stuff up very often.  (How many backups of /bin/sh do you need?
Yes, I know about incremental backups, but if a filesystem doesn't change
much, you end up with a lot of nearly empty disks or tapes.)
My other filesystems are much more active, and get backed up as appropriate.

The other main reason is that filesystem damage can only affect one
filesystem at a time.

-- 
Geoffrey Kimbrough -- Senior System Therapist
INTERACTIVE Systems Corporation -- A Kodak Company
I think machines and clocks have secret motives, but then again...
Maybe they're made that way.

bin@primate.wisc.edu (Brain in Neutral) (08/23/90)

You can have an alternate root fs, at least on some Unixes.  If one
gets trashed you can boot off the alternate.  Otherwise you have to rebuild
from scratch.  Not fun.

You don't have to dump/restore as much at one time.  This matters
particularly if you have to dump/restore an entire fs, not just an
incremental.  You can sometimes get away with dumping less-active fs's
less frequently than more-active fs's.  If you have only one fs, you
can't do this.

Sometimes a disk will start to go bad on just one section.  If that's
localized to one fs, you can move the critical stuff off to still-working
fs's while you decide what to do.  With a single fs, you may be crippled
to where you can no longer work.

Summary: having one file system is putting all your eggs in one basket.
There's no way I would do it.

Paul DuBois
dubois@primate.wisc.edu

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (08/23/90)

  Some reasons for a separate f/s.

  Backup, and all stuff like that.

  Inodes: if you have 16-bit inode numbers and run news, a separate
filesystem keeps news from exhausting them.

  Moving user stuff to another machine, pick up /u and go

  Using extra devices. I mount some stuff on the oldest and slowest
drives. If there's a problem only the things using it are hurt.
-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

carl@p4tustin.UUCP (Carl W. Bergerson) (08/24/90)

Performance:

	"Smaller filesystems are faster" - Xenix Installation Guide

	This is generally true for all versions of *ix.

pgd@bbt.se (P.Garbha) (08/24/90)

In article <1053@p4tustin.UUCP> carl@p4tustin.UUCP (Carl W. Bergerson) writes:
>Performance:
>
>	"Smaller filesystems are faster" - Xenix Installation Guide
>
>	This is generally true for all versions of *ix.

Can you explain why?  Because I cannot see why it should be like that.
The only reason I can think of is reduced head movement, but if you
divide one disk into two parts, that effectively defeats the gain, by
having to move the head back and forth between the parts.

I tend to believe that dividing a file system makes it slower, because
you get less free space on each part, and a UNIX file system with little
free space is slower.  You also get a greater chance that one partition
will run out of space, or i-nodes.  You will also use up more
disk space, by having to duplicate files on the filesystems -- unless
you have soft links.

rcd@ico.isc.com (Dick Dunn) (08/25/90)

carl@p4tustin.UUCP (Carl W. Bergerson) writes:
> 	"Smaller filesystems are faster" - Xenix Installation Guide
> 
> 	This is generally true for all versions of *ix.

This is sort of true in a not-very-useful way.  Smaller file systems can be
faster because the disk arm doesn't have to move as far...but if you've got
80 Mb of data to store, you can't put it in a 50 Mb file system.  What's
more, suppose you've got 80 Mb of data and 120 Mb of space to carve up...
which is better:  divide the data between two 60-Mb file systems or put it
all on one 120-Mb file system?  Answer: use the 120.  Otherwise you'll
spend your time seeking back and forth across the unallocated wasteland at
the end of the first file system to get to the second.
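
[A toy back-of-the-envelope model of this point -- not from any poster,
with made-up sizes and a crude uniform-access assumption -- comparing the
average seek distance for 80 Mb packed into one 120 Mb file system against
the same 80 Mb split across two 60 Mb file systems, where the head must
cross the unused tail of the first partition:]

```python
# Toy model: expected head travel between two random accesses to live data.
# Single fs: 80 Mb contiguous at the front of a 120 Mb partition.
# Split fs:  40 Mb at the front of each 60 Mb half, with a 20 Mb gap between.
import random

random.seed(1)

def avg_seek(regions, trials=100_000):
    """regions: list of (start, end) ranges (in Mb offsets) holding live data."""
    total = sum(e - s for s, e in regions)
    def pick():
        # choose a uniformly random live block, then map it to its disk offset
        x = random.uniform(0, total)
        for s, e in regions:
            if x < e - s:
                return s + x
            x -= e - s
        return regions[-1][1]
    return sum(abs(pick() - pick()) for _ in range(trials)) / trials

single = avg_seek([(0, 80)])             # one fs, data packed together
split  = avg_seek([(0, 40), (60, 100)])  # two fs's, wasteland in between

print(f"single fs avg seek: {single:.1f} Mb   split fs avg seek: {split:.1f} Mb")
```

Under this (admittedly simplistic) model the split layout averages noticeably
longer seeks, which is exactly the "wasteland" penalty described above.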

In old free-list style file systems, the scrambling of the free list could
have more of an effect on a larger file system, particularly if you once
filled it pretty full, then dropped way back down below the high-water
mark.

A file system structure which places the most-used data closest together is
going to be faster.

There are various historical reasons for splitting up file systems,
including as has been mentioned, recovery (repairing a damaged file system
was *much* harder before fsck!), size of available disks, etc.
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...I'm not cynical - just experienced.

vjs@calcite.UUCP (Vernon Schryver) (08/27/90)

In article <1990Aug24.215127.766@ico.isc.com>, rcd@ico.isc.com (Dick Dunn) writes:
>      ...   Otherwise you'll
> spend your time seeking back and forth across the unallocated wasteland at
> the end of the first file system to get to the second....


This is more true in under-designed file systems like that in System V.
Many file systems which started with bit-map allocation mechanisms spread
the unallocated wasteland throughout the allocated rubble.  Consider BSD FFS
cylinder groups or the file system of the 1960's Project Genie.

This is just a nit.  I agree with Dick.  Consider the popularity of logical
volumes, where several physical extents are pasted together into the
illusion of a single large file system.  Such games were vital for UNIX
files larger than 2GB (or 4GB if your u_offset is unsigned) before 2GB
drives became cheap.


Vernon Schryver
vjs@calcite.uucp

david@twg.com (David S. Herron) (08/27/90)

In article <1990Aug24.091111.508@bbt.se> pgd@bbt.se (P.Garbha) writes:
>In article <1053@p4tustin.UUCP> carl@p4tustin.UUCP (Carl W. Bergerson) writes:
>>Performance:
>>
>>	"Smaller filesystems are faster" - Xenix Installation Guide
>>
>>	This is generally true for all versions of *ix.
>
>Can you explain why? Becuase I cannot see why it should be like that.

Depends on activity in your system.  (Your mileage will vary depending
on road conditions and the like)

Like you surmise, head-motion is the reason.  And in this case more
isn't merrier.

Within a file system the head will be going back and forth between
the group of inodes and the data blocks.  For large files, especially,
it will be popping around a lot as you get into the indirect blocks
of the file.

With a smaller file system the distance between the inodes and
data blocks becomes less important.  This still depends on a lot
of "other factors".

Like, it works really well if most of your file activity centers
on one partition at a time.

For instance, during a compile the head'll roam between /tmp, the
swap device, and / (for the libraries) or /usr (for /usr/lib
libraries and /usr/include files).  If you put these partitions
close together you get some benefits from the heads not moving
around a lot.  I expect that putting <swap> and /tmp right next to
each other is a Big Win.  Especially since they both tend to be
fairly small.
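
[A toy illustration of this placement argument -- not from any poster, with
arbitrary cylinder offsets -- totaling head travel for a compile-like access
pattern when /tmp sits next to swap versus at the far end of the disk:]

```python
# Toy model: total head travel for a sequence of accesses that bounces
# between swap, /tmp, and /usr, under two partition layouts.
def head_travel(layout, pattern):
    # sum of absolute moves between consecutive accesses
    pos = [layout[p] for p in pattern]
    return sum(abs(b - a) for a, b in zip(pos, pos[1:]))

# a compile-ish access pattern: lots of swap <-> /tmp traffic
pattern = ["swap", "tmp", "swap", "tmp", "usr", "tmp", "swap", "tmp"]

adjacent = {"swap": 100, "tmp": 110, "usr": 300}  # swap and /tmp side by side
spread   = {"swap": 100, "tmp": 500, "usr": 300}  # /tmp at the far end

print("adjacent:", head_travel(adjacent, pattern))  # 430
print("spread:  ", head_travel(spread, pattern))    # 2400
```

The exact numbers mean nothing, but the ratio shows why putting the two
busiest small partitions next to each other can be a Big Win.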

Somewhere I remember reading (of) a study which claimed that the
old file system's performance maxed out around a 70 meg partition.
This should mean that Usenet on any SysV machine is gonna be a
whooooole lot slower than it should be.  (Nowadays serious news
partitions are well above 200 megs.)



One of the good features of the BSD Fast File System is that they
scattered inodes around the disk and used some heuristics to induce
data blocks for a file to be in the same "cluster" which holds the
inode.  Thus if you're heavily using a particular file then the
disk heads will tend to stay in this one fairly small area.



Note that on my development system I have only one partition because
it's "only" a 100 meg disk and I needed to fit an X11R3 on it along
with a whooooole lot of networking software.  (OSI isn't small..  ;-)
at least at this stage..)
-- 
<- David Herron, an MMDF & WIN/MHS guy, <david@twg.com>
<- Formerly: David Herron -- NonResident E-Mail Hack <david@ms.uky.edu>
<-
<- Sign me up for one "I survived Jaka's Story" T-shirt!

als@bohra.cpg.oz (Anthony Shipman) (08/27/90)

In article <1990Aug24.091111.508@bbt.se>, pgd@bbt.se (P.Garbha) writes:
> In article <1053@p4tustin.UUCP> carl@p4tustin.UUCP (Carl W. Bergerson) writes:
> >Performance:
> >
> >	"Smaller filesystems are faster" - Xenix Installation Guide
> >
> >	This is generally true for all versions of *ix.
> 
> Can you explain why? Becuase I cannot see why it should be like that.
> The only reason I can think of is reduced head-movement, but if you
> divide one disk into to parts, that effectively defeats that, by
> having to move the head back and forth between the parts.
> 
> I tend to believe that dividing a file system makes it slower, because
> you get less free space on each part, and UNIX file-system with little

If you can put the swap space in the middle between two file systems then I
would expect this to improve overall system performance once swapping/paging
starts. The average head movement between swap I/O and file I/O should be less.
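
[The arithmetic behind this -- an editorial illustration, not from the
poster, with an arbitrary disk size -- is that for accesses spread uniformly
over the disk, mean travel to a point at the middle is half the mean travel
to a point at either end:]

```python
# Toy arithmetic: mean head travel from a random file block (uniform on
# [0, D]) to the swap area, for swap at one end versus in the middle.
D = 600.0  # disk size in arbitrary cylinders

def mean_travel(s, steps=10_000):
    # numerically average |x - s| over x uniform on [0, D] (midpoint rule)
    return sum(abs((i + 0.5) * D / steps - s) for i in range(steps)) / steps

print(f"swap at end:    {mean_travel(0):.1f}")      # ~ D/2 = 300
print(f"swap in middle: {mean_travel(D / 2):.1f}")  # ~ D/4 = 150
```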

-- 
Anthony Shipman                               ACSnet: als@bohra.cpg.oz.au
Computer Power Group
9th Flr, 616 St. Kilda Rd.,
St. Kilda, Melbourne, Australia

chris@ctk1.UUCP (Chris Old) (08/29/90)

In addition to the other reasons mentioned, I like to have a separate fs
for /usr/spool/news.  It means that I don't need to worry about tweaking
explist according to the current political climate when I leave the
office for a few days. I may lose a few news articles, but at least the
other users can continue without running out of disk.

--------------------
Chris Old  (C.t.K.)               : olsa99!ctk1!chris@ddsw1.mcs.com
Tran Systems Ltd                  : ddsw1!olsa99!ctk1!chris