[comp.sys.sun] Need recommendations on how to select partitions

cdr@pepsi.amd.com (Carl Rigney) (06/30/90)

With two 900MB disks I'd like to put root on an 8-16MB partition on 0a,
/tmp on a 16-32MB partition on 1a, /usr on 0d and /usr/local on 1d, and
split swap evenly between 0b and 1b, 2x memory size on each.  The
advantage of this is that in an emergency if you lose your first disk you
can overwrite /tmp and /usr/local with root & /usr and boot from the
second disk.  It can also be very handy when doing upgrades to be able to
"test" the upgrade on the "spare" partitions before overwriting your old
OS.  I also like to put /var on its own partition, and then put the
remainder of the two disks into /home/foo/1 and /home/foo/2, where foo is
the name of the machine.  Leave lots of room on /usr for it to expand in
future OS releases.
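
As a rough sketch, that scheme might end up looking something like this
in /etc/fstab on a SunOS 4.x box (the device names, extra partition
letters and pass numbers here are illustrative assumptions, not a
prescription):

    /dev/sd0a  /            4.2   rw  1 1
    /dev/sd0d  /usr         4.2   rw  1 2
    /dev/sd0e  /var         4.2   rw  1 3
    /dev/sd0f  /home/foo/1  4.2   rw  1 4
    /dev/sd1a  /tmp         4.2   rw  1 2
    /dev/sd1d  /usr/local   4.2   rw  1 3
    /dev/sd1f  /home/foo/2  4.2   rw  1 4
    /dev/sd0b  swap         swap  rw  0 0
    /dev/sd1b  swap         swap  rw  0 0

In a disaster you newfs sd1a and sd1d, restore root and /usr onto them,
and boot from the second disk.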

With this scheme we only need to back up /usr once, since we never change
it after our initial installation; all local programs go into /usr/local,
and nothing goes into the root filesystem unless it absolutely has to; we
keep a list of all changed files in /CHANGES so upgrades are very quick -
no playing "hunt the non-standard file."
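
One cheap way to keep that /CHANGES list honest (just a sketch; the
/INSTALLED marker file is made up, and check that your find supports
-xdev) is to drop a timestamp file right after the install and compare
against it later:

    # right after the initial install
    touch /INSTALLED

    # later, whenever you want to refresh the list of changed files
    find / /usr -xdev -newer /INSTALLED -print > /CHANGES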

A possible disadvantage to just making it two big disks without partitions
is that files may get scattered further across the disk, reducing
performance.  Also, for drives that use Zone Bit Recording the outer
cylinders can be read faster than the inner ones, and that's where root,
swap, and tmp go.  Another consideration is the time for fsck'ing the disks
on a reboot; SunOS fsck's root and /usr before giving you the single-user
shell; it's much faster if these are < 100MB instead of a full GB.
(Everything gets fsck'ed when you go multi-user, but frequently when
troubleshooting or installing you do a lot of single-user rebooting.)
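
If you then need to re-check them by hand after mucking around, the small
partitions pay off again; a sketch, with raw device names matching the
example layout above:

    fsck /dev/rsd0a    # root: small, so it checks quickly
    fsck /dev/rsd0d    # /usr
    exit               # carry on to multi-user; the rc scripts fsck the rest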

Carl Rigney
cdr@amd.com
{ames decwrl pyramid sun uunet}!amdcad!cdr

pcg@compsci.aberystwyth.ac.uk (Piercarlo Grandi) (07/01/90)

In article <9255@brazos.Rice.edu> hughes@src.honeywell.com (Matthew
Hughes) writes:

   >Peter Steele asks about considerations when partitioning disks.  I've
   >never really understood just why Sun recommends partitioning a disk, but

Partitioning a disk used to be useful because:

1) You could dump separately only whole partitions: this is no longer
   relevant. Instead of using dump(8), use GNU tar to dump, and dump subtrees,
   not partitions, each at whatever frequency is appropriate to it (see the
   sketch after this list).

2) It was a way to have multiple ilists: hopefully this would put inodes
   nearer the files. Cylinder groups do that now.

3) Separate high-turnover subtrees from static ones: this was useful not
   just for dumping them separately (see above), but also for reloading them
   separately to unfragment them. No longer necessary.

4) To limit a subtree to a specific amount of space: this has always
   been the most inane reason. Now we have quotas, but even without quotas it
   was stupid to use a limit as hard as partition size as a substitute.
   Lots of small, rigid partitions create administrative nightmares.
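
The sketch promised under point 1 above (the tape device and the
particular paths are just assumptions about a typical setup):

    tar cf /dev/rst0 /usr/local     # quiet subtree: save it, say, weekly
    tar cf /dev/rst0 /home/foo/1    # busy subtree: save it nightly from cron

Each subtree gets its own schedule instead of inheriting whatever schedule
the partition it happens to live on is dumped at.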

   For one, partitioning a disk keeps you from complete and utter
   fragmentation, which saves time when loading programs and files.

This used to be true under V7 and System V, which have a time-ordered free
list. Under 4.2BSD-derived systems the problem almost no longer exists and
can be ignored.

   Sun recommends putting all purchased software packages on partitions
   separate from files that change.  This way when you install a new software
   package all the info is in the same place (no fragmentation), so when you
   run the program, the info is read straight off the disk with no jumps to
   other parts of the disk (kinda).  

This is true under any filesystem organization. If you load files in some
optimal fashion, they will not be moved and will stay optimally laid out
as long as they are not modified.

   My question is, why, when you have two or more hard drives, does Sun
   recommend you put the OS and /usr on sd0?  I have no clue.

The obvious rule is to put filesystems between which heavy copying occurs
on different drives. Typically root, usr, sources, tmp and spool go on one
spindle and user files and swap on the other. The idea is to be able to

1) Avoid arm motions between two regions of the same drive
2) Keep each arm doing something different at different times.

Now, if only we had sadp(1M)....
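
Lacking sadp, something like iostat at least shows per-drive activity,
which tells you whether both arms are really sharing the load (a sketch;
the interval is arbitrary):

    iostat 5    # per-drive activity, printed every 5 seconds

If one spindle sits idle while the other saturates, the split is wrong.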

Piercarlo "Peter" Grandi          | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth       | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK| INET: pcg@cs.aber.ac.uk

dave@imax.com (Dave Martindale) (07/18/90)

In article <9559@brazos.Rice.edu> pcg@compsci.aberystwyth.ac.uk (Piercarlo Grandi) writes:
>
>Partitioning a disk used to be useful because:
>
>1) You could dump separately only whole partitions: this is no longer
>   relevant. Instead of using dump(8) use GNU tar to dump, and dump subtrees,
>   not partitions, with the frequency more appropriate to each subtree.

GNU tar does not have any way to do incremental dumps.  I want to save
files that have changed in my daily dumps, but don't want to save the
whole subtree they are in that often.  Nor do I wish to manually keep
track of what files or subtrees need backing up tonight.  Dump does a good
job of handling this; nothing else does.  And if you decide to do backups
with dump, then partitioning is still relevant.
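
For what it's worth, this is roughly what the levels buy you, per
partition (the tape device and filesystem names are just an example):

    dump 0uf /dev/nrst0 /dev/rsd0g   # monthly: full dump (level 0)
    dump 5uf /dev/nrst0 /dev/rsd0g   # weekly: changed since the last level 0
    dump 9uf /dev/nrst0 /dev/rsd0g   # nightly: changed since the last lower level

The u flag records each dump in /etc/dumpdates, which is how dump knows
'changed since when' for each filesystem; that bookkeeping is exactly the
part a plain tar of a subtree leaves to you.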