[comp.unix.admin] NFS Mount Point Strategy?

system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) (11/10/90)

We have just got NFS working between our Apollo systems and SGI/IBM boxes,
and I have a few questions about the "right" way to set things up:

1) What options should I use on various systems for the mount command
   (e.g. soft vs. hard, use bg or not, use 'hard,bg', retry counts,
   timeouts)?

2) What directory structure is best for the actual mount points:
   a) mount "system:/dir" on /system_dir and let the users refer to 
      /system_dir/..... ?
   b) mount "system:/dir" on /nfs/system_dir and let the users refer to 
      /system_dir/..... where /system_dir is a link to /nfs/system_dir
      (so that the user reference point is 1 link removed from the
      actual mount point)?
   c) mount "system:/dir" on /mnt/system_dir and let the users refer to 
      /system_dir/..... where /system_dir is a link to /nfs/system_dir
      and /nfs/system_dir is a link to /mnt/system_dir
      (so that the user reference point is 2 links removed from the
      actual mount point)?
   The purpose behind b) and c) is to avoid having users directly
   accessing the mount point in case the foreign file system becomes
   unavailable (so they can escape from the attempted access, or so
   the mount point is clear for remounting?).

3) Does the answer to 2) depend on the answer to 1), and/or the
   reliability of the systems involved?

4) What naming schemes are used to handle the large number of potential
   NFS mounts (for example, Physics/Astronomy/CITA here give each
   disk/partition a name (of a tree from the forest), and Apollo
   suggests systemname_dir; I can see advantages of both schemes since
   the former makes disk names consistent everywhere and users don't
   need to know what physical systems files really reside on, whereas 
   the latter brings some order, especially for the sysadmin)?
-- 
Mike Peterson, System Administrator, U/Toronto Department of Chemistry
E-mail: system@alchemy.chem.utoronto.ca
Tel: (416) 978-7094                  Fax: (416) 978-8775

thurlow@convex.com (Robert Thurlow) (11/11/90)

In <1990Nov10.144551.809@alchemy.chem.utoronto.ca> system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:

>1) What options should I use on various systems for the mount command
>   (e.g. soft vs. hard, use bg or not, use 'hard,bg', retry counts,
>   timeouts)?

ALWAYS use "bg"; it just means your clients won't hang as badly on bootup
when a server is down.  I use "hard,intr" mounts for filesystems that are
writable so that I get the best data integrity while giving people a
chance to kill a hung process.  "soft,ro" is a nice combination for stuff
like man pages and reference sources that programs don't depend on.
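As a rough sketch, that advice might translate into /etc/fstab lines like
the ones below (the server names and paths here are invented examples, and
exact option spelling varies between vendors' mount implementations):

```shell
# Hypothetical fstab fragment; server names and paths are examples only.
cat > /tmp/fstab.nfs.sample <<'EOF'
# writable data: hard,intr for integrity, but interruptible; bg so boot won't hang
server1:/export/home   /home/server1      nfs  rw,hard,intr,bg  0 0
# read-only reference material: soft,ro so a dead server just returns errors
server2:/export/man    /rmt/server2/man   nfs  soft,ro,bg       0 0
EOF
cat /tmp/fstab.nfs.sample
```

The point of the split is that a soft mount can silently drop writes on
timeout, so it is only safe where nothing writes through it.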

>2) What directory structure is best for the actual mount points:
>   b) mount "system:/dir" on /nfs/system_dir and let the users refer to 
>      /system_dir/..... where /system_dir is a link to /nfs/system_dir
>      (so that the user reference point is 1 link removed from the
>      actual mount point)?

We actually use /rmt/<servername>/<fsname> so it's easier to figure
out where everything is, and we have a symlink from /<fsname> to
the actual mount point.  This is really important, since getwd()
and 'pwd' can hang if they stumble across a remote mount point
while walking up the tree looking for the right components.  If you
can't do symbolic links, you're kind of stuck, though.
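A minimal sketch of that layout, built in a scratch directory rather than
/ so it can be run harmlessly ("fserver" and "u1" are invented names; the
actual mount command is shown only as a comment):

```shell
# Build the /rmt/<servername>/<fsname> layout in a scratch area.
ROOT=/tmp/nfsdemo.$$
mkdir -p "$ROOT/rmt/fserver/u1"        # the actual mount point
ln -s rmt/fserver/u1 "$ROOT/u1"        # user-visible name, one link removed
# The real mount would then be something like:
#   mount fserver:/u1 /rmt/fserver/u1
ls -l "$ROOT/u1"
```

Users only ever type /u1, so getwd() walking up from an unrelated
directory never has to stat the remote mount point itself.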

>3) Does the answer to 2) depend on the answer to 1), and/or the
>   reliability of the systems involved?

Not really; do 'em all so they can screw you up as little as possible.
The most reliable systems still have to come down for PM every once
in a while.

>4) What naming schemes are used to handle the large number of potential
>   NFS mounts (for example, Physics/Astronomy/CITA here give each
>   disk/partition a name (of a tree from the forest), and Apollo
>   suggests systemname_dir; I can see advantages of both schemes since
>   the former makes disk names consistent everywhere and users don't
>   need to know what physical systems files really reside on, whereas 
>   the latter brings some order, especially for the sysadmin)?

I already described what we do for naming, and it seems to work well.
The single biggest thing you can do is use the Automounter if you have
one on any of your clients.  It lets you lay out rules for how to
find filesystems and will mount and unmount them on demand.  It can
really simplify your life if you set it up correctly, as your rules
can be centralized and distributed via YP or rdist so server changes
don't kill you.
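For what it's worth, a sketch of what such centralized automounter rules
can look like: a master map pointing a directory at an indirect map, and
an indirect map whose '&' wildcard repeats the looked-up key.  The server
name and paths are made-up examples, and exact map syntax varies by vendor:

```shell
# Hypothetical automounter maps; server name and paths are examples only.
# Master map: mount directory -> indirect map (plus default mount options)
cat > /tmp/auto.master.sample <<'EOF'
/home   /etc/auto.home  -rw,hard,intr
EOF
# Indirect map: any key looked up under /home maps to server1:/export/home/<key>
cat > /tmp/auto.home.sample <<'EOF'
*       server1:/export/home/&
EOF
cat /tmp/auto.master.sample /tmp/auto.home.sample
```

With maps like these pushed out via YP, moving a user's directory to a new
server means editing one map entry, not touching every client's fstab.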

Rob T
--
Rob Thurlow, thurlow@convex.com or thurlow%convex.com@uxc.cso.uiuc.edu
----------------------------------------------------------------------
"This opinion was the only one available; I got here kind of late."

richard@aiai.ed.ac.uk (Richard Tobin) (11/13/90)

In article <1990Nov10.144551.809@alchemy.chem.utoronto.ca> system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:
>2) What directory structure is best for the actual mount points:

We mount system:dir on /nfs/system/dir and have a symbolic link to this.
This has the advantage that when getwd() searches a directory, it never
looks at unnecessary remote mount points.

-- Richard
-- 
Richard Tobin,                       JANET: R.Tobin@uk.ac.ed             
AI Applications Institute,           ARPA:  R.Tobin%uk.ac.ed@nsfnet-relay.ac.uk
Edinburgh University.                UUCP:  ...!ukc!ed.ac.uk!R.Tobin

larsen@prism.cs.orst.edu (Scott Larsen) (11/14/90)

In article <1990Nov10.144551.809@alchemy.chem.utoronto.ca> system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:
>We have just got NFS working between our Apollo systems and SGI/IBM boxes,
>and I have a few questions about the "right" way to set things up:
.
.
>2) What directory structure is best for the actual mount points:

Well, here at OSU, all of our machines run NFS and we ran into a BIG
problem with inconsistent naming, so we thought a bit and came up with
this:

mount all partitions under /nfs/machinename/partition.

This boils down to this:

a machine called prism with a /local would be /nfs/prism/local
a machine called mist with a /src would be /nfs/mist/src
a machine called jasper with a /usr/local/src would be 
	/nfs/jasper/usr-local-src with links put in for /nfs/jasper/src
	and /nfs/jasper/usr/local/src

So far we have had no problems with this naming scheme.  We have been
using it for about 9 months now.
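The jasper example above can be sketched like this, built in a scratch
area instead of the real /nfs so it is safe to run (relative symlink
targets are one way to do it; absolute targets would work too):

```shell
# Sketch of the /nfs/<machine>/<partition> scheme for jasper's /usr/local/src.
ROOT=/tmp/osudemo.$$
mkdir -p "$ROOT/nfs/jasper/usr-local-src"   # the actual mount point
mkdir -p "$ROOT/nfs/jasper/usr/local"
# convenience links, as in the post:
ln -s usr-local-src       "$ROOT/nfs/jasper/src"            # /nfs/jasper/src
ln -s ../../usr-local-src "$ROOT/nfs/jasper/usr/local/src"  # /nfs/jasper/usr/local/src
ls -l "$ROOT/nfs/jasper"
```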

Scott Larsen
larsen@prism.cs.orst.edu

de5@ornl.gov (Dave Sill) (11/15/90)

[note: followups redirected to comp.unix.admin]

In article <21753@orstcs.CS.ORST.EDU>, larsen@prism.cs.orst.edu (Scott Larsen) writes:
>
>mount all partitions under /nfs/machinename/partition.
> :
>So far we have had no problems with this naming scheme.  We have been
>using it for about 9 months now.

Is there some advantage I'm missing to having everything mounted under /nfs?
E.g., why not just /machinename/partition?

-- 
Dave Sill (de5@ornl.gov)
Martin Marietta Energy Systems
Workstation Support

rusty@belch.Berkeley.EDU (rusty wright) (11/15/90)

In article <1990Nov14.203658.23848@cs.utk.edu> de5@ornl.gov (Dave Sill) writes:

   From: de5@ornl.gov (Dave Sill)
   Subject: Re: NFS Mount Point Strategy?
   Date: 14 Nov 90 20:36:58 GMT

   [note: followups redirected to comp.unix.admin]

   Is there some advantage I'm missing to having everything mounted
   under /nfs?  E.g., why not just /machinename/partition?

You'll regret it if you don't have your nfs mounts 3 levels down.  The
method used by pwd (and by the C library getwd() routine, which works
the same way) to determine the current directory necessitates walking
up the directory tree and doing a stat() on each directory in .. to
find out where it came from (save the inode number of . before you
move up to .., then compare it against the inode number of each
directory in the new current directory).  When it does a stat() on an
nfs-mounted directory whose nfs server is down, you'll hang.  csh
uses getwd() to initialize the csh variable $cwd, so users will hang
when logging in if one of the nfs servers is down.  Likewise, every
time you do cd, csh uses getwd() to set $cwd.

So each nfs mount has to be mounted on a directory that must be the
only directory in its parent; i.e., it must not have any "sisters" or
"brothers".

I can't remember why they have to be 3 levels down instead of only 2;
someone else can probably explain why.

faustus@ygdrasil.Berkeley.EDU (Wayne A. Christopher) (11/15/90)

Another thing you want to think about is whether your pathnames look
the same no matter where you are (i.e., on a machine for which the fs
is local or one for which it is nfs).  If this is the case, you can
have one home directory, run yp, and everything is nice and
transparent.  On our cluster, all user filesystems, both local and nfs,
are mounted underneath /home -- I wish this were the convention for all
machines I have accounts on...

	Wayne

martien@westc.uucp (Martien F. van Steenbergen) (11/15/90)

system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:

>We have just got NFS working between our Apollo systems and SGI/IBM boxes,
>and I have a few questions about the "right" way to set things up:

[all the other stuff deleted]

In general, make data available through:

	/<class>/<instance>

Where <class> could be "home" for home directories, "vol" for volumes
(i.e. applications and other mostly read-only data), "source" for
public domain or free source code that you have collected, "project"
for project directories, "distrib" for software distributions, etc.
Some examples:

	/home/john	John's home dir
	/home/mary	Mary's home dir

	/vol/frame	default FrameMaker application directory
	/vol/frame-2.1	explicit FrameMaker 2.1 application directory
	/vol/emacs	GNU Emacs

	/project/xyz	Project xyz
	/project/abc	Project abc

	/source/emacs-18.55 GNU Emacs 18.55 sources
	/source/nn-6.3	nn 6.3 sources

	/distrib/frame-2.1 FrameMaker 2.1 SunView distribution
	/distrib/framex-2.1 FrameMaker 2.1 X-Windows distribution

	etc.

Get it?! I consider having the server name (or partition for that
matter) somewhere in a path a bad thing. It means that you have to tell
all and everyone when you move, say, an application to another
partition or server.
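One nice property of the /vol examples above is that the unversioned name
can be a symlink to the current versioned instance, so an upgrade is one
link change rather than an announcement to all users.  A sketch, built in
a scratch area (version numbers follow the examples above):

```shell
# Sketch of /<class>/<instance> with a default-version symlink.
ROOT=/tmp/voldemo.$$
mkdir -p "$ROOT/vol/frame-2.1"        # explicit FrameMaker 2.1 instance
ln -s frame-2.1 "$ROOT/vol/frame"     # default name points at current version
# Upgrading later would just repoint the one link, e.g.:
#   ln -sfn frame-3.0 $ROOT/vol/frame
readlink "$ROOT/vol/frame"
```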

You may wonder how to do this.  By exploiting NFS and NIS and using
automount(8) or amd(8) as glue, you can really get things going.

More info in the Sun386i CookBook.

BTW I am planning to write an article on this and am preparing a
presentation on the what and how. Be patient, I'll try to get it posted
in this news group.

	Martien F. van Steenbergen

-- 
Groeten/Regards,

	Martien van Steenbergen.

mike@vlsivie.tuwien.ac.at (Michael K. Gschwind) (11/15/90)

In article <1990Nov14.203658.23848@cs.utk.edu> Dave Sill <de5@ornl.gov> writes:
>[note: followups redirected to comp.unix.admin]
>
>In article <21753@orstcs.CS.ORST.EDU>, larsen@prism.cs.orst.edu (Scott Larsen) writes:
>>
>>mount all partitions under /nfs/machinename/partition.
>> :
>>So far we have had no problems with this naming scheme.  We have been
>>using it for about 9 months now.
>
>Is there some advantage I'm missing to having everything mounted under /nfs?
>E.g., why not just /machinename/partition?

So what happens if you happen to have a machine called tmp or etc ;-]

One problem I can see with mounting everything in the / directory is
the performance of the getwd() function.  getwd() backtracks up the
tree through .., stat'ing all the directory entries at each level, i.e.

until root do
{
	save i-node of current dir

	cd ..

	for every dir entry
	{
		stat it
		compare with saved i-node
		if they match
			we have discovered part of path, add it to name
			&& break;
	}
}

(* hope you get the idea *) 

Now if you have all the NFS mounted stuff in /, EVERY getwd means
stat'ing NFS mounted volumes.  This of course works, EXCEPT if one NFS
server is down, because then the stat will hang waiting for time-outs
etc., meaning that
# pwd
(and any other program which uses getwd() :-() will more or less hang.

Mounting everything in /nfs means that you will only hang if you are
below /nfs - a major improvement, but still: if you use pwd in an NFS 
file system, you'll have the same problem as described before, so 
if one NFS server is down, getwd() is down for _ALL_ NFS file systems
X-(

There was an interesting paper at the last EUUG/EurOpen conference
(Autumn '90) on how they solved the problem at Chalmers University.

You may want to get hold of a copy from:
EUUG Secretariat 
Owles Hall
Buntingford
Herts SG9 9PL
UK

			hope this helps,
					mike



Michael K. Gschwind, Institute for VLSI-Design, Vienna University of Technology
mike@vlsivie.tuwien.ac.at	1-2-3-4 kick the lawsuits out the door 
mike@vlsivie.uucp		5-6-7-8 innovate don't litigate         
e182202@awituw01.bitnet		9-A-B-C interfaces should be free
Voice: (++43).1.58801 8144	D-E-F-O look and feel has got to go!
Fax:   (++43).1.569697       

rowe@cme.nist.gov (Walter Rowe) (11/16/90)

You may want to read a paper that some colleagues and I wrote
recently.  It's called "The Depot: A Framework for Sharing Software
Installation Across Organizational and Unix Platform Boundaries".
It details a mechanism we came up with here in our center for sharing
large application distributions such as X windows, GNU software, and
FrameMaker.

You can get the paper from durer.cme.nist.gov (129.6.32.4) in a file
called ~ftp/pub/depot.lisa.ps.Z.  This paper was presented at the LISA
conference in Colorado Springs last month.  I welcome all to read it.
We use the automounter here extensively for this type of thing, and
the depot paper outlines some naming conventions we decided on here.

wpr
---
Walter Rowe           rowe@cme.nist.gov         ...!uunet!cme-durer!rowe

karl_kleinpaste@cis.ohio-state.edu (11/16/90)

de5@ornl.gov writes:
   Is there some advantage I'm missing to having everything mounted under /nfs?
   E.g., why not just /machinename/partition?

Aside from the problems of getwd(), there is also the simple fact that
/ gets _awfully_ cluttered if you have a lot of servers.  I have 31
servers in my fstabs; I don't want an extra 31 directories in /.

--karl

dansmith@well.sf.ca.us (Daniel Smith) (11/17/90)

[much ado about mount points]

	A couple of others and I came up with the following scheme, which
has been in use at Island Graphics for a couple of years:

	Every machine with a disk has a /usr/machinename/people and
/usr/machinename/projects.  For instance, my home is
/usr/bermuda/people/daniel, and I can log into any other Island machine
and access it as /usr/bermuda/people/daniel.  This simplifies things
a lot, since we can refer to a project directory (/usr/java/projects/whatever)
or something in someone's account by the same absolute name wherever
we're logged in.   It was fun throwing the /usr2 name out the window.
Another benefit is that all of us have just one big account, rather than
10 or 15 little ones (one on each machine).

	We're also starting to drift towards a lot of automounting, since
all the crossmounting on our machines is getting to be a bit much.

				Daniel
-- 
              Daniel Smith, Island Graphics, Marin County, CA
   dansmith@well.sf.ca.us   daniel@island.com   unicom!daniel@pacbell.com
     phone: (415) 491 1000 (w) disclaimer: Island's coffee was laced :-)
             "Salesmen are the Ferengi of the software world"

kseshadr@quasar.intel.com (Kishore Seshadri) (11/20/90)

In article <3737@skye.ed.ac.uk>, richard@aiai (Richard Tobin) writes:
>In article <1990Nov10.144551.809@alchemy.chem.utoronto.ca> system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:
>>2) What directory structure is best for the actual mount points:
>
>We mount system:dir on /nfs/system/dir and have a symbolic link to this.
>This has the advantage that when getwd() searches a directory, it never
>looks at unnecessary remote mount points.
>
This does not necessarily fix the hanging problem for SunOS 4.0.x systems.
The getwd() algorithm was changed so that, every time a mount point is
crossed, getwd() checks /etc/mtab and tries to find a mount point with
the same device id.  If it does find one, it prepends the path for this
mount point to the current path (derived so far..).  While this means that
getwd() doesn't walk all the way up the tree to /, it may stat most of the
entries in /etc/mtab, which of course could make things worse...  Sun uses
a getwd cache to get around this problem, which in turn leads to other
problems...

So while the above may help, it doesn't solve everything.

Kishore
kseshadr@mipos3.intel.com
----------------------------------------------------------------------------
Kishore Seshadri,(speaking for myself)       <kseshadr@mipos3.intel.com>
Intel Corporation                            <..!intelca!mipos3!kseshadr>
"For a successful technology, reality must take precedence over public