[comp.unix.wizards] Why Partition a Hard Disk

jeff@wdl1.UUCP (Jonathan J Jefferies) (08/26/88)

Is there any definitive reason to partition a hard disk?
I am adding a new 70 meg hard disk to my system and am
debating whether or not it would be desirable to partition
it.  My machine is a System V implementation by Unisoft and
as far as I know there are no parameters such as block
number which would require use of a 16-bit word and hence
impose an artificial limit on the file size.

Jonathan Jefferies  jeff@FORD-WDL1.ARPA

hulsebos@philmds.UUCP (Rob Hulsebos) (08/29/88)

In article <4360004@wdl1.UUCP> jeff@wdl1.UUCP (Jonathan J Jefferies) writes:
>Is there any definitive reason to partition a hard disk?

That depends on the size of the disk and what you do with it. My secondary
disk (140M) is not partitioned. My primary disk is, for the following 
reasons:

- the bootstrap program doesn't fit in 512 bytes, so the first partition
  is meant to store the bootstrap in
- the root-filesystem must not be too large, otherwise 'fsck' requires a
  temporary file to store its data in. Therefore, a root-partition is created.
  As an added bonus, I can create a RAMdisk with the root-filesystem in it
  and then mount the other filesystems. This of course only works when the
  root-filesystem is not very large and you have enough RAM.
- because the root-filesystem is now separate, another partition is needed
  for the /usr-filesystem
- users' home directories are created in the /usr1-filesystem on its own
  partition. This allows me to upgrade the root- and usr-filesystems without
  needing to save all user-directories to tape first and read them
  back later.

I also have a separate partition for swapping purposes, but this is not 
really necessary.

>My machine is a System V implementation by Unisoft 
Mine is System V.2 by UniSoft.

------------------------------------------------------------------------------
R.A. Hulsebos                                       ...!mcvax!philmds!hulsebos
Philips I&E Automation Modules                            phone: +31-40-785723
Building TQ-III-1, room 11
Eindhoven, The Netherlands                                # cc -O disclaimer.c
------------------------------------------------------------------------------

henry@utzoo.uucp (Henry Spencer) (09/01/88)

In article <4360004@wdl1.UUCP> jeff@wdl1.UUCP (Jonathan J Jefferies) writes:
>Is there any definitive reason to partition a hard disk?

The original reason for partitioning was 16-bit limits on the number of
blocks in a file system.  This has long since ceased to be an issue.
The one significant resource limitation that remains in some systems is
16-bit inode numbers, limiting the number of files that can exist in a
single filesystem.  4BSD has fixed this, but at least the early System V
releases didn't (I'm not sure about the current ones).

On multi-user systems, sometimes it can be useful to have more than one
partition so that a group can be told "you've got all of partition 3,
fight it out among yourselves if space gets short there".

These things aside, the major reason for partitioning is damage limitation.
On some systems, it is absolutely necessary that the root filesystem be
relatively small, because fsck must be able to do a consistency check on
the root with all its tables fitting into main memory.  Having a separate
root filesystem is not a bad idea anyway, because it's very vital and the
less it changes, the less chance of it getting fouled up.  Taking this to
the opposite extreme, giving /tmp its own filesystem can be useful, because
it puts a very active area off by itself where it can't mess up anything
else.  (This can also be useful to spread disk traffic over multiple drives.)

A number of security holes also become harder to exploit if users are not
allowed to write anywhere in the "system" file systems.  The benefits of
this have been somewhat eroded by things like symbolic links, however.
A related issue is that splitting system and users means that runaway
user programs can't fill up the system filesystems, and vice-versa.

Many Unix systems do backups on a partition basis, so if you want to apply
different backup policies to different sets of files, there has to be a
partition boundary between them.

Supporting diskless workstations off a central file server can involve a
major song-and-dance with partitions, depending on who wrote the software.

If none of these considerations apply, the fewer the partitions the better.
It is better to have one big free-space pool than a lot of little ones that
can't help each other out when one gets low.

(Us?  Utzoo as currently operational has two Eagles.  Each has two or three
small partitions and one enormous one that occupies most of the disk.  The
two enormous ones are /usr and /zoo (our users).  We use little ones for
root, swap, /tmp, and a small work area for news archiving.)
-- 
Intel CPUs are not defective,  |     Henry Spencer at U of Toronto Zoology
they just act that way.        | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

chris@mimsy.UUCP (Chris Torek) (09/01/88)

In article <1988Aug31.174144.1694@utzoo.uucp> henry@utzoo.uucp
(Henry Spencer) writes:
[various reasons for particular partitions]
>If none of these considerations apply, the fewer the partitions the better.
>It is better to have one big free-space pool than a lot of little ones that
>can't help each other out when one gets low.

Agreed.  Lo:

mimsy% df
Filesystem    kbytes    used   avail capacity  Mounted on
/dev/ra0a      32795   12962   16553    44%    /
/dev/ra0d     368576  248822   82896    75%    /g
/dev/ra1d     369648  292440   40243    88%    /usr
/dev/ra2a      32795   12929   16586    44%    /tmp
/dev/ra2d     369648  315727   16956    95%    /ful
/dev/ra3h     434910  317324   74095    81%    /u
/dev/hp2h     234292  151032   59828    72%    /news
# ra1a is a backup copy of ra0a, and in a pinch can be used as a /tmp as well
# ra0b, ra1b, and ra2b are swap, ~30MB each

brillig% df
Filesystem    kbytes    used   avail capacity  Mounted on
/dev/hp0a      30823    8948   18793    32%    /
/dev/hp0d     332383     312  298832     0%    /bfd
/dev/hp1d     333463  267661   32455    89%    /usr
/dev/hp2a      30443    1698   25700     6%    /tmp
/dev/hp2d     333643  289715   10563    96%    /g
/dev/hp3h     395607  333135   22911    94%    /u
/dev/hp4h     395607   84214  271832    24%    /y
# hp1a is a backup of hp0a, as on mimsy
# likewise, hp0b, hp1b, and hp2b are swap, all ~30MB each
# `bfd' stands for Backup File Disk... really! :-)
# (and /g, /u, and /y have nothing to do with Mr. Harris)
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

guy@gorodish.Sun.COM (Guy Harris) (09/03/88)

> The one significant resource limitation that remains in some systems is
> 16-bit inode numbers, limiting the number of files that can exist in a
> single filesystem.  4BSD has fixed this, but at least the early System V
> releases didn't (I'm not sure about the current ones).

Only the ones that have 1) picked up the 4.2BSD file system and 2) changed
inumbers throughout the system to be 32 bits :-).

S5 still uses the V7 file system, which still has 16-bit inumbers in directory
entries.  Even if some vendor adds the 4.2BSD file system to their System V
kernel, there's still a potential problem if they don't, for example, change
the "st_ino" field in the "stat" structure from an "unsigned short" to an
"unsigned long"; if you have inumbers > 65535 on that file system, "st_ino" is
no longer unique on that file system - this probably won't break *most*
programs, but I have no idea which ones it *will* break, or how badly they'll
break.

Furthermore, note that adding NFS to S5 - even if you *haven't* added the
4.2BSD file system - causes the exact same problem, if for example you mount a
4.2BSD file system with inumbers > 65535 from some *other* machine.

Yes, changing the "stat" structure may be painful.  Not changing it may be
painful as well....

gwyn@smoke.ARPA (Doug Gwyn ) (09/03/88)

In article <66800@sun.uucp> guy@gorodish.Sun.COM (Guy Harris) writes:
>Yes, changing the "stat" structure may be painful.  Not changing it may be
>painful as well....

A reasonable way to make the change is to introduce a new system call,
which acts the way one wants (e.g. long st_ino), change the C library
stat() interface to use the new system call (and change the stat.h
header at the same time!), then recompile and test all the system
software at one's leisure.  Old binaries keep working until one is
finished checking everything out and removes the old system call (if
ever; usually it is left enabled so customers don't have to recompile
anything).  Old binaries can fail on long inode numbers, but this is
no worse than not making the change, and at least the official system
software has been upgraded to work right.

levy@ttrdc.UUCP (Daniel R. Levy) (09/05/88)

In article <8424@smoke.ARPA>, gwyn@smoke.ARPA (Doug Gwyn ) writes:
< In article <66800@sun.uucp> guy@gorodish.Sun.COM (Guy Harris) writes:
< >Yes, changing the "stat" structure may be painful.  Not changing it may be
< >painful as well....
< A reasonable way to make the change is to introduce a new system call,
< which acts the way one wants (e.g. long st_ino), change the C library
< stat() interface to use the new system call (and change the stat.h
< header at the same time!), then recompile and test all the system
< software at one's leisure.  Old binaries keep working until one is
< finished checking everything out and removes the old system call (if
< ever; usually it is left enabled so customers don't have to recompile
< anything).  Old binaries can fail on long inode numbers, but this is
< no worse than not making the change, and at least the official system
< software has been upgraded to work right.

I see another problem.  Presumably, the changeover to a long inode was
accompanied by a change from a V7-type file system to a different file system
where directory entries support the long inodes (whether this be BSD 4.3 or
some new kluge).  This means that old binaries which depend on scanning
directories and assume the V7 format (where the open-directory and
read-directory-entry functions are not system calls but rather library
functions implemented with open() and read()) will now fail miserably.

Presuming a BSD 4.3 file system, even if (joy of joys) directory scanning was
implemented with a special system call, old binaries might not have allocated
enough space for new, longer file names.
-- 
|------------Dan Levy------------|  THE OPINIONS EXPRESSED HEREIN ARE MINE ONLY
| Bell Labs Area 61 (R.I.P., TTY)|  AND ARE NOT TO BE IMPUTED TO AT&T.
|        Skokie, Illinois        | 
|-----Path:  att!ttbcad!levy-----|

gwyn@smoke.ARPA (Doug Gwyn ) (09/05/88)

In article <2916@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes:
>I see another problem.  Presumably, the change over to a long inode was
>accompanied by a change from a V7-type file system to a different file system
>where directory entries support the long inodes (whether this be BSD 4.3 or
>some new kluge).

Not necessarily; that's a separate issue.  Guy noted that remote
filesystems could use inumbers > 2^16 even if the local filesystems
didn't.  As to directory scanning breaking, it would have to have
broken on the remote directories anyway.  I would hope that by now
everyone has acquired the POSIX-style directory access library and
is using it.

levy@ttrdc.UUCP (Daniel R. Levy) (09/06/88)

In article <8435@smoke.ARPA>, gwyn@smoke.ARPA (Doug Gwyn ) writes:
< In article <2916@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes:
< >I see another problem.  Presumably, the change over to a long inode was
< >accompanied by a change from a V7-type file system to a different file system
< >where directory entries support the long inodes (whether this be BSD 4.3 or
< >some new kluge).
< 
< Not necessarily; that's a separate issue.  Guy noted that remote
< filesystems could use inumbers > 2^16 even if the local filesystems
< didn't.  As to directory scanning breaking, it would have to have
< broken on the remote directories anyway.  I would hope that by now
                                                              ^^ ^^^ Dream on
< everyone has acquired the POSIX-style directory access library and
< is using it.

Why don't you post source for "the" POSIX-style directory access library then?
No, I don't have ARPA access to anonymously ftp it, nor do lots of other
USENET-only folks.
-- 
|------------Dan Levy------------|  THE OPINIONS EXPRESSED HEREIN ARE MINE ONLY
| Bell Labs Area 61 (R.I.P., TTY)|  AND ARE NOT TO BE IMPUTED TO AT&T.
|        Skokie, Illinois        | 
|-----Path:  att!ttbcad!levy-----|

gwyn@smoke.ARPA (Doug Gwyn ) (09/06/88)

In article <2917@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes:
>Why don't you post source for "the" POSIX-style directory access library
>then?

I did that already, quite some time ago.  In fact I keep seeing it
turning up in other people's products.

The old version I posted had two known problems:
	(a)  On pre-4.2BSD UNIX filesystems, directory entries with
	names precisely 14 characters long weren't being properly
	null-terminated.  This is fairly easy to fix and of course
	it has been fixed in my current version.
	(b)  Systems with small stacks had problems allocating
	directory block buffers off the auto stack.  This too is
	easy to fix, by moving the buffers to static storage.  I
	forget whether I did that in my current version, but if I
	repost it I'll make sure that's been done first.

The final thing I would have to check before a reposting would be
that the code does properly implement the final IEEE 1003.1 notion
of what these routines should do.  They kept messing with the specs
long after they were adequate...

Send MAIL (don't post) if you think a posting of the updated
package to comp.sources would be helpful.  Also, if you have any
revisions or additions, such as an MS-DOS getdents(), please send
them for inclusion in the next release.  (I already have a ProDOS
version.)

	Gwyn@BRL.MIL (Internet)
	...!uunet!brl!gwyn (UUCP, I think)