[comp.sys.apollo] NFS Mount Point Strategy?

system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) (11/10/90)

We have just got NFS working between our Apollo systems and SGI/IBM boxes,
and I have a few questions about the "right" way to set things up:

1) What options should I use on various systems for the mount command
   (e.g. soft vs. hard, use bg or not, use 'hard,bg', retry counts,
   timeouts)?

2) What directory structure is best for the actual mount points:
   a) mount "system:/dir" on /system_dir and let the users refer to 
      /system_dir/..... ?
   b) mount "system:/dir" on /nfs/system_dir and let the users refer to 
      /system_dir/..... where /system_dir is a link to /nfs/system_dir
      (so that the user reference point is 1 link removed from the
      actual mount point)?
   c) mount "system:/dir" on /mnt/system_dir and let the users refer to 
      /system_dir/..... where /system_dir is a link to /nfs/system_dir
      and /nfs/system_dir is a link to /mnt/system_dir
      (so that the user reference point is 2 links removed from the
      actual mount point)?
   The purpose behind b) and c) is to keep users from directly
   accessing the mount point in case the foreign file system becomes
   unavailable (so they can escape from the attempted access? or so
   the mount point is clear for remounting?).

3) Does the answer to 2) depend on the answer to 1), and/or the
   reliability of the systems involved?

4) What naming schemes are used to handle the large number of potential
   NFS mounts (for example, Physics/Astronomy/CITA here give each
   disk/partition a name (of a tree from the forest), and Apollo
   suggests systemname_dir; I can see advantages of both schemes since
   the former makes disk names consistent everywhere and users don't
   need to know what physical systems files really reside on, whereas 
   the latter brings some order, especially for the sysadmin)?
-- 
Mike Peterson, System Administrator, U/Toronto Department of Chemistry
E-mail: system@alchemy.chem.utoronto.ca
Tel: (416) 978-7094                  Fax: (416) 978-8775

thurlow@convex.com (Robert Thurlow) (11/11/90)

In <1990Nov10.144551.809@alchemy.chem.utoronto.ca> system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:

>1) What options should I use on various systems for the mount command
>   (e.g. soft vs. hard, use bg or not, use 'hard,bg', retry counts,
>   timeouts)?

ALWAYS use "bg"; it just means your clients won't hang as badly on bootup
when a server is down.  I use "hard,intr" mounts for filesystems that are
writable so that I get the best data integrity while still leaving people
a chance to kill a hung process.  "soft,ro" is a nice combination for stuff
like man pages and reference sources that programs don't depend on.
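
Those option sets would look something like this in a client's /etc/fstab
(server and path names here are invented for illustration, and the exact
option syntax varies a bit between NFS implementations):

```
server:/export/home  /home           nfs  rw,hard,intr,bg  0 0
server:/export/man   /usr/local/man  nfs  ro,soft,bg       0 0
```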

>2) What directory structure is best for the actual mount points:
>   b) mount "system:/dir" on /nfs/system_dir and let the users refer to 
>      /system_dir/..... where /system_dir is a link to /nfs/system_dir
>      (so that the user reference point is 1 link removed from the
>      actual mount point)?

We actually use /rmt/<servername>/<fsname> so it's easier to figure
out where everything is, and we have a symlink from /<fsname> to
the actual mount point.  This is really important, since getwd()
and 'pwd' can hang if they stumble across a remote mount point
while walking up the tree looking for the right components.  If you can't
do symbolic links, you're kind of stuck, though.
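
That layout can be sketched in a few shell commands (server and
filesystem names are invented; DESTDIR stages everything under a scratch
directory so the sketch can be tried without root, and the actual mount
is left commented out):

```shell
# Stage the /rmt/<server>/<fs> layout under $DESTDIR for a dry run;
# for the real thing, run as root with DESTDIR set to the empty string.
DESTDIR=${DESTDIR:-/tmp/rmt-demo.$$}

mkdir -p "$DESTDIR/rmt/fileserv/home"
# mount fileserv:/home "$DESTDIR/rmt/fileserv/home"   # the real mount; needs root

# Users refer to the short name via the symlink, one step removed
# from the actual mount point.
ln -s /rmt/fileserv/home "$DESTDIR/home"
```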

>3) Does the answer to 2) depend on the answer to 1), and/or the
>   reliability of the systems involved?

Not really; do 'em all so they can screw you up as little as possible.
The most reliable systems still have to come down for PM every once 
in a while.

>4) What naming schemes are used to handle the large number of potential
>   NFS mounts (for example, Physics/Astronomy/CITA here give each
>   disk/partition a name (of a tree from the forest), and Apollo
>   suggests systemname_dir; I can see advantages of both schemes since
>   the former makes disk names consistent everywhere and users don't
>   need to know what physical systems files really reside on, whereas 
>   the latter brings some order, especially for the sysadmin)?

I already described what we do for naming, and it seems to work well.
The single biggest thing you can do is use the Automounter if you have
one on any of your clients.  It lets you lay out rules for how to
find filesystems and will mount and unmount them on demand.  It can
really simplify your life if you set it up correctly, as your rules
can be centralized and distributed via YP or rdist so server changes
don't kill you.
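
Automounter maps along those lines might look like the following (map
names, servers, and exports are invented, the two-level /rmt/<server>/<fs>
layout is collapsed to one level for brevity, and amd's map syntax
differs from Sun's):

```
# Master map: requests under /rmt are served by the indirect map auto.rmt
/rmt    auto.rmt

# auto.rmt: each key becomes /rmt/<key>, mounted on first reference
# and unmounted again after a period of inactivity.
home    -rw,hard,intr   fileserv:/export/home
src     -ro,soft        archive:/export/src
```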

Rob T
--
Rob Thurlow, thurlow@convex.com or thurlow%convex.com@uxc.cso.uiuc.edu
----------------------------------------------------------------------
"This opinion was the only one available; I got here kind of late."

richard@aiai.ed.ac.uk (Richard Tobin) (11/13/90)

In article <1990Nov10.144551.809@alchemy.chem.utoronto.ca> system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:
>2) What directory structure is best for the actual mount points:

We mount system:dir on /nfs/system/dir and have a symbolic link to this.
This has the advantage that when getwd() searches a directory, it never
looks at unnecessary remote mount points.

-- Richard
-- 
Richard Tobin,                       JANET: R.Tobin@uk.ac.ed             
AI Applications Institute,           ARPA:  R.Tobin%uk.ac.ed@nsfnet-relay.ac.uk
Edinburgh University.                UUCP:  ...!ukc!ed.ac.uk!R.Tobin

larsen@prism.cs.orst.edu (Scott Larsen) (11/14/90)

In article <1990Nov10.144551.809@alchemy.chem.utoronto.ca> system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:
>We have just got NFS working between our Apollo systems and SGI/IBM boxes,
>and I have a few questions about the "right" way to set things up:
.
.
>2) What directory structure is best for the actual mount points:

Well, here at OSU, all of our machines run NFS and we ran into a BIG
problem with inconsistent naming, so we thought a bit and came up with
this:

mount all partitions under /nfs/machinename/partition.

This boils down to this:

a machine called prism with a /local would be /nfs/prism/local
a machine called mist with a /src would be /nfs/mist/src
a machine called jasper with a /usr/local/src would be 
	/nfs/jasper/usr-local-src with links put in for /nfs/jasper/src
	and /nfs/jasper/usr/local/src

So far we have had no problems with this naming scheme.  We have been
using it for about 9 months now.
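
The slash-to-dash convention in the jasper example can be captured in a
small shell helper (my sketch, not OSU's actual tooling):

```shell
# Map a machine name and remote path to its /nfs mount point,
# flattening internal slashes to dashes as in /nfs/jasper/usr-local-src.
nfsname() {
    machine=$1
    # drop the leading slash, then turn remaining slashes into dashes
    fs=`echo "$2" | sed -e 's;^/;;' -e 's;/;-;g'`
    echo "/nfs/$machine/$fs"
}

nfsname prism /local           # prints /nfs/prism/local
nfsname jasper /usr/local/src  # prints /nfs/jasper/usr-local-src
```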

Scott Larsen
larsen@prism.cs.orst.edu

de5@ornl.gov (Dave Sill) (11/15/90)

[note: followups redirected to comp.unix.admin]

In article <21753@orstcs.CS.ORST.EDU>, larsen@prism.cs.orst.edu (Scott Larsen) writes:
>
>mount all partitions under /nfs/machinename/partition.
> :
>So far we have had no problems with this naming scheme.  We have been
>using it for about 9 months now.

Is there some advantage I'm missing to having everything mounted under /nfs?
E.g., why not just /machinename/partition?

-- 
Dave Sill (de5@ornl.gov)
Martin Marietta Energy Systems
Workstation Support

martien@westc.uucp (Martien F. van Steenbergen) (11/15/90)

system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:

>We have just got NFS working between our Apollo systems and SGI/IBM boxes,
>and I have a few questions about the "right" way to set things up:

[all the other stuff deleted]

In general, make data available through:

	/<class>/<instance>

Where <class> could be "home" for home directories, "vol" for volumes
(i.e. applications and other mostly read-only data), "source" for public
domain or free source code that you have collected, "project" for project
directories, "distrib" for software distributions, etc. Some examples:

	/home/john	John's home dir
	/home/mary	Mary's home dir

	/vol/frame	default FrameMaker application directory
	/vol/frame-2.1	explicit FrameMaker 2.1 application directory
	/vol/emacs	GNU Emacs

	/project/xyz	Project xyz
	/project/abc	Project abc

	/source/emacs-18.55 GNU Emacs 18.55 sources
	/source/nn-6.3	nn 6.3 sources

	/distrib/frame-2.1 FrameMaker 2.1 SunView distribution
	/distrib/framex-2.1 FrameMaker 2.1 X-Windows distribution

	etc.

Get it?! I consider having the server name (or partition for that
matter) somewhere in a path a bad thing. It means that you have to tell
absolutely everyone when you move, say, an application to another
partition or server.

You may wonder how to do this. By exploiting NFS and NIS, with
automount(8) or amd(8) as the glue, you can really get things going.

More info in the Sun386i CookBook.
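
As a sketch, the /<class>/<instance> scheme maps naturally onto one
automounter map per class (server names and export paths below are
invented; only the map, not the users, knows where anything physically
lives):

```
# auto.home: one key per user; moving mary to another server is a
# one-line map change, invisible to anyone who types /home/mary.
john    serverA:/export/home/john
mary    serverB:/export/home/mary

# auto.vol: the "frame" key stays stable even when the version or
# server behind it changes.
frame   serverC:/export/frame-2.1
emacs   serverA:/export/emacs
```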

BTW, I am planning to write an article on this and am preparing a
presentation on the what and how. Be patient; I'll try to get it posted
in this newsgroup.

	Martien F. van Steenbergen

-- 
Groeten/Regards,

	Martien van Steenbergen.

rowe@cme.nist.gov (Walter Rowe) (11/16/90)

You may want to read a paper that some colleagues and I wrote
recently.  It's called "The Depot: A Framework for Sharing Software
Installation Across Organizational and Unix Platform Boundaries".
It details a mechanism we came up with here in our center for sharing
large application distributions such as X windows, GNU software, and
FrameMaker.

You can get the paper from durer.cme.nist.gov (129.6.32.4) in a file
called ~ftp/pub/depot.lisa.ps.Z.  This paper was presented at the LISA
conference in Colorado Springs last month.  I welcome all to read it.
We use the automounter here extensively for this type of thing, and
the depot paper outlines some naming conventions we decided on here.

wpr
---
Walter Rowe           rowe@cme.nist.gov         ...!uunet!cme-durer!rowe

aaronf@hpfcbig.SDE.HP.COM (Aaron Friesen) (11/17/90)

From: de5@ornl.gov (Dave Sill)
> Is there some advantage I'm missing to having everything mounted under /nfs?
> E.g., why not just /machinename/partition?

Many times while performing backups, you would not want to include NFS
file systems.  By placing them under /nfs/machinename instead of just
/machinename, it is easier to exclude them from the backup without
unmounting them.
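
That exclusion is a one-liner when everything remote hangs under a
single directory; here is a sketch of the idea (mine, not from the
post) using find's -prune:

```shell
# List local files for backup, skipping any directory named "nfs";
# with all remote mounts under /nfs, one prune clause excludes them all.
backup_list() {
    find "$1" -type d -name nfs -prune -o -type f -print
}

# Typical use would be something like:
#   backup_list / | cpio -o > /dev/rmt0
```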

Just a thought... 

Aaron Friesen aaronf@hpfcla.hp.com

lau@kings.wharton.upenn.edu (Yan K. Lau) (11/20/90)

In article <10270002@hpfcbig.SDE.HP.COM> aaronf@hpfcbig.SDE.HP.COM (Aaron Friesen) writes:
>Many times while performing backups, you would not want to include NFS
>file systems.  By placing them under /nfs/machinename instead of just
>/machinename it is easier to exclude them from the backup without unmounting
>them.
We have the opposite problem.  Has anyone been able to backup a NFS file
system using the Apollo wbak command?  The wbak command doesn't seem to
be able to recognize the NFS file system directories.


Yan.
   )~  Yan K. Lau    lau@kings.wharton.upenn.edu      The Wharton School
 ~/~   -Sheenaphile-          128.91.11.233       University of Pennsylvania
 /\    God/Goddess/All that is -- the source of love, light and inspiration!

kseshadr@quasar.intel.com (Kishore Seshadri) (11/20/90)

In article <3737@skye.ed.ac.uk>, richard@aiai (Richard Tobin) writes:
>In article <1990Nov10.144551.809@alchemy.chem.utoronto.ca> system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:
>>2) What directory structure is best for the actual mount points:
>
>We mount system:dir on /nfs/system/dir and have a symbolic link to this.
>This has the advantage that when getwd() searches a directory, it never
>looks at unnecessary remote mount points.
>
This does not necessarily fix the hanging problem for SunOS 4.0.x systems.
The getwd() algorithm was changed so that, every time a mount point is
crossed, getwd checks /etc/mtab and tries to find a mount point with
the same device id.  If it does find one, it prepends the path for this
mount point to the current path (derived so far).  While this means that
getwd doesn't walk all the way up the tree to /, it may stat most of the
entries in /etc/mtab, which of course could make things worse... Sun uses
a getwd cache to get around this problem, which in turn leads to other
problems...

So while the above may help, it doesn't solve everything.

Kishore
kseshadr@mipos3.intel.com
----------------------------------------------------------------------------
Kishore Seshadri,(speaking for myself)       <kseshadr@mipos3.intel.com>
Intel Corporation                            <..!intelca!mipos3!kseshadr>
"For a successful technology, reality must take precedence over public
relations, for Nature cannot be fooled."
system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) (11/22/90)

In article <33128@netnews.upenn.edu> lau@kings.wharton.upenn.edu (Yan K. Lau) writes:
>We have the opposite problem.  Has anyone been able to backup a NFS file
>system using the Apollo wbak command?  The wbak command doesn't seem to
>be able to recognize the NFS file system directories.

You cannot use rbak/wbak on NFS file systems, but UNIX commands
like 'tar' work (also 'rwmt', I think) - I am about to try some
of these things, so I'll post a followup if this is not correct.
Rbak/wbak is very Aegis-oriented, and saves ACLs, object type info,
etc., which most non-Apollo systems don't have. I suspect this is also
why 'df' and 'du' don't "see" NFS mount points as directories (they
see them as NFS mount point objects, but don't know what to do with them?).
I suppose rbak/wbak could be hacked to process NFS files and directories,
but I doubt it will happen.
-- 
Mike Peterson, System Administrator, U/Toronto Department of Chemistry
E-mail: system@alchemy.chem.utoronto.ca
Tel: (416) 978-7094                  Fax: (416) 978-8775

basti@orthogo.UUCP (Sebastian Wangnick) (11/23/90)

lau@kings.wharton.upenn.edu (Yan K. Lau) writes:

>We have the opposite problem.  Has anyone been able to backup a NFS file
>system using the Apollo wbak command?  The wbak command doesn't seem to
>be able to recognize the NFS file system directories.

This doesn't seem like a good idea. You don't get the Apollo file types 
through NFS anyway, and you may not be able to access every file
because of file protections.

What we do to back up our Unix PCs to an Exabyte on our Apollo
is to run a suid-root pax (i.e., a PD tar) on the PC and rsh its stdout
into a dd running on the Apollo, with 'of' set to the device special
file of the Exabyte.
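
The shape of that pipeline, as I read it (hostname and device name below
are examples, not Sebastian's actual ones), with a local stand-in so the
plumbing can be tried anywhere:

```shell
# The real invocation on the PC would be roughly:
#   pax -w / | rsh apollo dd of=/dev/exabyte bs=64k
# Demonstrated here with tar in place of pax and a plain file in place
# of the remote tape drive.
demo=/tmp/pipe-demo.$$
mkdir -p "$demo/src"
echo hello > "$demo/src/file"

# Archive to stdout on one end, dd writes it out on the other.
( cd "$demo" && tar cf - src ) | dd of="$demo/backup.tar" 2>/dev/null
```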

Sebastian Wangnick (basti@orthogo.uucp)

heiser@tdw201.ed.ray.com (12/01/90)

In article <1042@inews.intel.com> kseshadr@quasar.intel.com (Kishore Seshadri) writes:
>In article <3737@skye.ed.ac.uk>, richard@aiai (Richard Tobin) writes:
>>In article <1990Nov10.144551.809@alchemy.chem.utoronto.ca> system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) writes:
>>>2) What directory structure is best for the actual mount points:
>>
>>We mount system:dir on /nfs/system/dir and have a symbolic link to this.
>>This has the advantage that when getwd() searches a directory, it never
>>looks at unnecessary remote mount points.
>>
>This does not necessarily fix the hanging problem for SunOS 4.0.x systems.
>The getwd() algorithm was changed to where, every time a mount point is
>crossed, getwd checks the /etc/mtab and tries to find a mount moint with

Does this mean that there is no way to stop the "hanging problem" 
in SunOS 4.0.x systems?  In this environment, does the /nfs/system/dir
mount-point strategy have any advantage over the /mountpoint strategy?

Thanks in advance ...


-- 
Work:	heiser@tdw201.ed.ray.com
	{decuac,necntc,uunet}!rayssd!tdw201.ed.ray.com!heiser
Home:	bill@unixland.uucp		| Public Access Unix  508-655-3848
	uunet!world!unixland!bill	| 1200/2400/9600/19200 PEP/V32/V42