[comp.unix.admin] comp.unix.admin.large

arnold@mango.synopsys.com (Arnold de Leon) (12/18/90)

I am in the process of redesigning the way we provide /usr/local/{bin,etc,lib,man}.

Here is what I have in mind:

        o automount /usr/local
                
        o any 'package' containing more than one binary would be
          installed in its own directory.
                Example:  perl
                        /usr/local/perl/{bin,lib,man} would be created.
                        The contents of /usr/local/perl/{bin,lib,man}
                        would be symlinked to /usr/local/{bin,lib,man}
                        as needed.

        o a side-effect is that /usr/local/bin will be mostly symbolic links
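As a sketch, the per-package layout described above might be set up like this (paths are illustrative, and LOCAL defaults to a scratch directory so the sketch is harmless to run; a real install would of course target /usr/local):

```shell
#!/bin/sh
# Per-package install sketch: the package lives in its own subtree,
# and its public commands are symlinked into the shared bin.
LOCAL=${LOCAL:-`mktemp -d`}

# 1. The package gets its own {bin,lib,man} subtree.
mkdir -p $LOCAL/perl/bin $LOCAL/perl/lib $LOCAL/perl/man
mkdir -p $LOCAL/bin $LOCAL/lib $LOCAL/man

# 2. Its public commands are symlinked into the shared bin.
#    A relative link, so the whole tree can be mounted anywhere.
ln -s ../perl/bin/perl $LOCAL/bin/perl
```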

Motivation:

        + allow packages to be distributed on different file servers
          while allowing a consistent name space.

        + automounting will allow 'critical' packages to be replicated,
          also it would be easy to move packages

        + easier to update and deinstall software.  All the binaries,
          libraries are in one place.  Useful with large installations
          with multiple sys admins.

        + responsibility for packages can be more easily delegated to
          others.

        + easier to make a binary distribution for others (in my case,
          remote sales offices).

        - LOTS of symbolic links, possible performance hit

        - initial command generates an automount mount request, another
          possible performance hit

	? possible automount mount storm on hashing of paths?

Questions:

        Has anyone done anything similar?  If so, how is it working out?
Will you do it again?

        What do you think of the possible performance problem?  Is there
one?  Is it significant enough to worry about, or is it small enough
to ignore?  Is the performance problem heavily tied to usage
patterns?  If so, what are the really bad cases?

-- 
Arnold de Leon  			arnold@synopsys.com
Synopsys Inc.				(415) 962-5051
1098 Alta Ave.
Mt. View, CA 94043

vancleef@nas.nasa.gov (Robert E. Van Cleef) (12/19/90)

The way we do it is:

On your file server, you build a custom file tree for each 
architecture of workstation that you support, with different
file collections for different OS versions:

	/pub/sun4.01 - Sun OS 4.0.1
	     sun4.1  - Sun OS 4.1
             iris3   - SGI IRIS 3 series
	     iris4d  - SGI IRIS 4D series
	etc....

This file tree contains all of the directories that you want to
maintain on the file server because of size, lack of access demand,
difficulty to maintain, etc...

This is then mounted on the appropriate machine as /u/.links and
on the client you set up the needed symbolic links:

	/usr/demo 	 @-> /u/.links/demo
	/usr/games 	 @-> /u/.links/games
	/usr/man 	 @-> /u/.links/man
	/usr/local 	 @-> /u/.links/local
	/usr/unsupported @-> /u/.links/unsupported

This configuration allows us to pick and choose the parts of the system
that we hold on the file server, yet we need only one mount point for
all of them. 
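A sketch of that client-side setup (ROOT stands in for / so the sketch can run against a scratch directory; the link names are the ones listed above):

```shell
#!/bin/sh
# Client-side sketch of the /u/.links scheme: one mount point,
# a symbolic link per shared directory.
ROOT=${ROOT:-`mktemp -d`}
mkdir -p $ROOT/u/.links $ROOT/usr

for d in demo games man local unsupported; do
    # Each /usr name points into the single mounted tree.
    ln -s /u/.links/$d $ROOT/usr/$d
done
```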

Because NFS is very efficient for reads, file collections that are 
"read-only" can be widely shared. For example, /TeX only exists on 
one file server. Therefore, it must be mounted separately, with the 
corresponding links being:

	mount fs01:/TeX /u/.links/tex

  For things which vary per system architecture:
	/usr/local/bin/tex @-> /u/.links/tex/sun3/bin/tex
	/usr/local/lib/tex @-> /u/.links/tex/sun3/lib

  For things that are the same across system architectures:
	/usr/local/lib/tex/macros @-> /u/.links/tex/macros
	/usr/local/lib/tex/tfm @-> /u/.links/tex/tfm
	
-- 
Bob Van Cleef 			vancleef@nas.nasa.gov
NASA Ames Research Center	(415) 604-4366
---
Perception is reality...

lidl@eng.umd.edu (Kurt J. Lidl) (12/19/90)

In article <609@synopsys.COM> arnold@mango.synopsys.com (Arnold de Leon) writes:
>I am in the process of redesigning the way we provide /usr/local/{bin,etc,lib,man}
>
>Here is what I have in mind
>
>        o automount /usr/local

Difficult to do.  But not impossible.  If you are talking about a system
that understands dynamically linked libraries, and any of those
libraries live in /usr/local/lib, then you have a real headache in
store.  (I've been thinking about this a lot.)

You need (at least with Sun's dynamically linked libraries) to issue
a 'ldconfig' command to get things to work properly.  This is done
at system startup.  If you are continually mounting and unmounting
the /usr/local/lib filesystem, then you need to be aware of this
little snafu.
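In other words (server name and commands illustrative; SunOS-style ldconfig, whose exact behavior varies by system), something along these lines has to happen every time the filesystem appears:

```shell
# Sketch: whatever (re)mounts /usr/local also has to refresh the
# dynamic-linker cache, or dynamically linked programs break.
mount fileserver:/export/local /usr/local
ldconfig /usr/local/lib
```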

>        o any 'package' containing more than one binary would be
>          installed in its own directory.
>                Example:  perl
>                        /usr/local/perl/{bin,lib,man} would be created.
>                        The contents of /usr/local/perl/{bin,lib,man}
>                        would be symlinked to /usr/local/{bin,lib,man}
>                        as needed.

Sounds pretty good.  We went a step further and made /usr/local/bin
*ALL* sym-links.  We made a /usr/local/misc for the one- and two-program
packages.  This has the nice effect of telling us where the binaries
in /usr/local/bin really come from -- otherwise it is extremely
difficult to pinpoint the source of commands... This is a bigger
benefit than you might think at first when it comes time to
upgrade an OS...

>        o a side-effect is /usr/local/bin will be mostly symbolic links

Make sure that you make the sym-links relative to /usr/local/bin
itself.  This way, you can mount your current /usr/local as /old/local
and still have things going to the correct place.  Very handy as a
crutch when upgrading operating systems.
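A small sketch of why the relative links help (run against a scratch tree; names are illustrative):

```shell
#!/bin/sh
# Sketch: a relative symlink keeps working when the tree is
# remounted under another name.
ROOT=`mktemp -d`
mkdir -p $ROOT/local/perl/bin $ROOT/local/bin
touch $ROOT/local/perl/bin/perl

# Relative link: resolves inside whatever the tree is mounted as.
ln -s ../perl/bin/perl $ROOT/local/bin/perl

# "Remount" the tree as old.local; the link still resolves.
mv $ROOT/local $ROOT/old.local
```

An absolute link (/usr/local/perl/bin/perl) would instead keep pointing at the live tree after such a move.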

>Motivation:
>        + allow packages to be distributed on different file servers
>          while allowing a consistent name space.
                          ^^^^^^^^^^^^^^^^^^^^^^^

Amen to this effort.  We are attempting a similar thing on 5 different
hardware/OS platforms.

>        + automounting will allow 'critical' packages to be replicated,
>          also it would be easy to move packages

See cautions above regarding dynamically linked libraries.  Also,
if you have any daemons that you run, move them off the automounted
filesystems.  Otherwise your automounting will screw the daemons
on a regular basis.

>        + easier to update and deinstall software.  All the binaries,
>          libraries are in one place.  Useful with large installations
>          with multiple sys admins.

Right.  I would advocate the use of relative sym-links as above.

>        + responsibility for packages can be more easily delegated to
>          others.

Right.  However, some sprawling packages rely on too many things.
A good example of this is TeX.  You have got {la,Sli,AmS}TeX, their
font families, screen previewers for sunview, screen previewers for X11,
etc, etc...

>        + easier to make a binary distribution for others (in my case,
>          remote sales offices).

Very much so.  One of the reasons that we are doing this is for
supporting external groups.

>        - LOTS of symbolic links, possible performance hit

Tradeoffs are everywhere.  I think that this is not too bad.

>        - initial command generates an automount mount request, another
>          possible performance hit
>
>	? possible automount mount storm on hashing of paths?

If /usr/local/bin is automounted, then there will be one automount
request for the initial /usr/local/bin mount.  If you have, say,
/usr/local/perl and /usr/local/bin in your path, then you will
start to encounter an automount mount request storm.  Keep in mind
that many rpc.mountd/mountd implementations do the *wrong* thing when
processing a large queue of incoming mount requests.  SunOS versions
prior to 4.1 process the queue in a strange manner, and performance
suffers for it.

>        What do you think of the possible performance problem?  Is there
>one?  Is it significant enough to worry about, or is it small enough
>to ignore?  Is the performance problem heavily tied to usage
>patterns?  If so, what are the really bad cases?

For this version of our /usr/local project, we are simply mounting /usr,
/local, and /usr/local on startup.  Home directories, mail spools,
and bought application software automount on demand.  Automounting of
/usr/local and so forth will come in release 2.0, next summer. (hopefully)

--
/* Kurt J. Lidl (lidl@eng.umd.edu) | Unix is the answer, but only if you */
/* UUCP: uunet!eng.umd.edu!lidl    | phrase the question very carefully. */

pcg@cs.aber.ac.uk (Piercarlo Grandi) (12/28/90)

On 18 Dec 90 02:06:56 GMT, arnold@mango.synopsys.com (Arnold de Leon) said:

arnold> I am in the process of redesigning the way we provide
arnold> /usr/local/{bin,etc,lib,man}.  Here is what I have in mind:

arnold>         o automount /usr/local

Surely you jest... You always want it mounted :-).

arnold>         o any 'package' containing more than one binary would be
arnold>           installed in its own directory.
arnold>                 Example:  perl
arnold>                         /usr/local/perl/{bin,lib,man} would be created.
arnold>                         The contents of /usr/local/perl/{bin,lib,man}
arnold>                         would be symlinked to /usr/local/{bin,lib,man}
arnold>                         as needed.

arnold>         o a side-effect is /usr/local/bin will be mostly symbolic links

This is a popular choice which is particularly ill advised.  Don't do it:
too many mount points, too many little directories and symlinks.  It is
a bad idea for both efficiency and operability.

In general, symlinks are a poor and silly idea that creates
maintenance nightmares.  It is better to think harder (something that
manufacturers of Unix workstations seem bad at) and shape a more
appropriate tree.  Moreover, hard links should be used wherever possible
in place of symbolic links.

What you want is to start with a /usr/local/src directory, which
contains a directory called 'commands' for small one-source, one-man-page
commands, and then a separate source directory for each larger package.
You want to do a traditional install of the resulting binaries, libraries,
manual pages, ...  You also want to have remove scripts.  Unfortunately
most packages come with poor makefiles that do not have them.

You want to create *exact* duplicates of the installed tree if you need
redundancy without admin headaches. Don't be clever -- don't customize
for each and every workstation group or similar. I would create a
/usr/local tree like this:

    /usr/local
    /usr/local/bin			commands
    /usr/local/lib			libraries
    /usr/local/lib/perl			Perl library sources
    /usr/local/lib/emacs.lisp		GNU Emacs standard .elc
    ...
    /usr/local/etc			system commands
    /usr/local/man
    /usr/local/man/info			info files (not just GNU Emacs')
    /usr/local/man/man[1-9]		manual pages

How to make this configurable by machine and operating system type? I
suggest having, propagated from /etc/init, two environment variables,
CPU (e.g. SPARC, i386, MIPS, 68010, 68040, ..) and OS (e.g. SVR3.2,
Xenix, SunOS 4, Ultrix 4, Next 2), and creating directories with the
values of those two environment variables in them.

This could be done as either "/usr/local.$CPU.$OS", and then you have
the whole tree keyed on those two, or you key the subdirectories, e.g.
as "/usr/local/bin.$CPU.$OS".  It's a difficult choice really. For small
installations I'd go for the latter, specialized subtrees of /usr/local,
for largish ones I'd go for specialized /usr/local trees.

You also want to have architecture and (or) operating system independent
directories, e.g. for shell scripts. If you need to have a shell script
that works on multiple architectures, it is much better for it to check
the values of CPU and OS internally than to have different versions for
each combination.
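A sketch of such an internal check (the uname-to-name mapping is illustrative; the scheme above assumes CPU and OS are instead exported from /etc/init):

```shell
#!/bin/sh
# Sketch: one portable script deriving CPU and OS at run time
# instead of shipping a version per architecture.
case `uname -m` in
    sun4*)  CPU=SPARC ;;
    i*86)   CPU=i386 ;;
    *)      CPU=`uname -m` ;;   # fall back to the raw machine name
esac
OS=`uname -s`
export CPU OS
```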

Unfortunately many packages come with installation procedures that do
not distinguish between architecture- and operating-system-dependent
things and those that are not.  For example, in GNU Emacs's etc directory
you will find mixed together things like loadst, which is an executable,
and DOC, which is machine independent.

Using some powerful make, like dmake or cake, is best, as you can do
conditional make'ing and installation based on the values of CPU and OS
and keep objects and libraries for different combinations in different
directories.

Finally I would put in people's environment a LOCAL variable to point at
the right /usr/local, and in their PATHs and similar both the machine
independent and the machine dependent subtrees keyed on CPU and OS; for
example

	PATH="$LOCAL/bin.$CPU.$OS:$LOCAL/bin:/bin:/usr/bin:."
or
	PATH="$LOCAL.$CPU.$OS/bin:$LOCAL/bin:/bin:/usr/bin:."

A clever scheme, that allows you to switch between the two, is to have a
third environment variable, say WHERE, which is used like this:

	WHERE="/$CPU.$OS."
or
	WHERE=".$CPU.$OS/"

with

	PATH="${LOCAL}${WHERE}bin:${LOCAL}/bin:/bin:/usr/bin:."

If you go for specialized /usr/local trees then you need to mount *two*
for each machine, the unspecialized and the specialized one. If you go
for specialized subtrees, you can just mount the entire /usr/local tree,
the right subtrees will be used by looking at OS and CPU.

And so on. I wish that manufacturers in their recent reshaping of the
traditional UNIX tree had been cleverer and packed together files
according to criteria like:

	read-only-ness			(e.g. /bin, /lib, ...)

	version dependency		(e.g. some of /lib, some of /usr/man)

	host dependency			(e.g. some of /usr/etc)

	site dependency			(e.g. some of /etc, some of /usr/lib)

	expendability			(e.g. /tmp, many /usr/spool or /usr/adm)

	CPU and OS independence		(e.g. a lot of /usr/man)

	need of periodic attention	(e.g. a lot of /usr/adm)

For example, a lot of configuration files most definitely should not
be in /usr/lib, and a lot of sysadmin executables should not be in /etc
or /usr/etc, and what goes in /usr/adm is hopelessly confused with what
goes in /usr/spool (actually /usr/adm should be /usr/spool/logs), and so
on.  System V.3 is a particularly bad offender: it puts lp configuration
files and executables and logs all under /usr/spool, puts mail and
preserve under /usr instead of /usr/spool, and puts executables in
/usr/lib, ...

In practice the recent reorganization has been done along the
read-only/variable line. Hum. A lot of packages that scatter their
things all around the tree have been left alone. Bah.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

barnett@grymoire.crd.ge.com (Bruce Barnett) (12/28/90)

With everyone talking about /usr/local, I thought I would throw in my
2 cents.

We support several different architectures, and having a unique
/usr/local isn't always the most efficient way to organize disk space.
Some files can be shared across architectures, others can't. The ones
that can be shared are easier to maintain when they are only in one location.

We use /usr/local for workgroup-specific additions and changes, in
addition to a large /common area used throughout the center.

Some applications put multiple architectures in one place, others use
two separate places.  Our organization:

/common/all - architecture-independent or multiple-architecture
	installations, includes:
/common/all/bin - for shell/perl scripts
/common/all/lib - ASCII data used by all architectures.
/common/all/emacs - for common emacs lisp and info files
/common/all/frame - for sun3 and sun4 binaries of framemaker 

We also have a /common/`arch` directory for each popular architecture:
	/common/sun3, /common/sun4, /common/vax, etc.

This way people can put /common/`arch`/bin and /common/all/bin
in their search path, in addition to /usr/local - if they want to.
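In sh syntax that search path would look something like this (a sketch; `arch` is the Sun command that prints sun3, sun4, and so on):

```shell
# Sketch: per-user search path under the /common scheme.
PATH=/common/`arch`/bin:/common/all/bin:/usr/local/bin:$PATH
export PATH
```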

As someone who has installed hundreds of programs, I find this
organization works out fine.  In some cases, when it is not clear where
a file should be installed (e.g. in /common/all/lib or /common/sun4/lib),
I just use the architecture-specific directory, with the knowledge that
I might have a few redundant files in /common/sun3/lib and
/common/sun4/lib.

Having both directories mounted makes it easier to see if there are
common files in both architectures.  A link - either hard or symbolic -
can help eliminate duplicated files.

--
Bruce G. Barnett	barnett@crd.ge.com	uunet!crdgw1!barnett

davy@intrepid.erg.sri.com (David Curry) (12/29/90)

In article <PCG.90Dec27191000@teachk.cs.aber.ac.uk>, pcg@cs.aber.ac.uk
(Piercarlo Grandi) writes:
|>On 18 Dec 90 02:06:56 GMT, arnold@mango.synopsys.com (Arnold de Leon) said:
|>
|>arnold>         o automount /usr/local
|>
|>Surely you jest... You always want it mounted :-).
|>

Feh.  We automount /usr/local on all our workstations.  Works like a charm.
It's not so inefficient... one symbolic link, and that's what a namei cache
is all about anyway.

Our organization is as follows:  we have four Sun 4/390 heterogeneous servers.
Each one has /usr/local (sun4) and /usr/local.sun3 (sun3) exported to all our
systems.  Dataless workstations (which have their own /usr) have a symbolic
link to /net/usr.local3-4.x or /net/usr.local4-4.x as appropriate for their
architecture (sun3 or sun4) and operating system (SunOS 4.x).  There's also
a /net/usr.local3-3.x for the half a dozen SunOS 3.5 clients, but that's
another story.  Diskless clients just have a symlink in /export/exec/sun3
pointing at /net/usr.local3-4.x.

Automounting /usr/local is actually pretty useful - when a server goes down,
the systems mounting /usr/local will just get it from somewhere else.  There
is one problem for people like me who run X (from /usr/local) and leave it
up all the time, since the NFS unmount will fail, but even so, if I'm desperate
I just reboot and pick up /usr/local from some other server.

As far as organization within /usr/local goes, we just have /usr/local/bin,
/usr/local/etc, /usr/local/lib, and /usr/local/include.  For a few packages,
we do stuff like /usr/local/bin/mh and /usr/local/bin/X.V11R4.  Manual pages
go in /usr/man/manl (that's what it's for, folks - why bother with
/usr/local/man?).

Source tree is /usr/src/local/whatever, and we use dirlink (a program to make
symbolic link trees) to build programs for the different architectures.  We
do not maintain the objects on-line, since we can regenerate them.
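The dirlink program itself is not shown here, but its idea can be imitated with stock tools, roughly like this (a sketch; SRC/DST default to scratch directories holding stand-in files):

```shell
#!/bin/sh
# Sketch of a dirlink-style symbolic link tree: mirror the source
# directory structure, then symlink each file into the mirror, so
# per-architecture objects can be built without copying the sources.
SRC=${SRC:-`mktemp -d`}
DST=${DST:-`mktemp -d`}
touch $SRC/Makefile $SRC/main.c    # stand-in source files

# Recreate the directory skeleton.
(cd $SRC && find . -type d) | while read d; do
    mkdir -p $DST/$d
done
# Link every file into it.
(cd $SRC && find . -type f) | while read f; do
    ln -s $SRC/$f $DST/$f
done
```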

Why complicate matters?  I saw the paper presented at LISA.  Sure, it's an
interesting idea, and I don't want to disparage others' work.  But the whole
time I was listening, the thought kept running through my mind:  "Why on
earth would you ever want to confuse your life like this?"

Dave Curry
SRI International

kseshadr@quasar.intel.com (Kishore Seshadri) (01/04/91)

In article <1990Dec28.173354.1738@erg.sri.com> davy@erg.sri.com writes:
>
>Why complicate matters?  I saw the paper presented at LISA.  Sure, it's an
>interesting idea, and I don't want to disparage others' work.  But the whole
>time I was listening, the thought kept running through my mind:  "Why on
>earth would you ever want to confuse your life like this?"
>
I wholeheartedly second this. As someone who manages public domain tools
for a network of 500 workstations, I would strongly recommend keeping things
simple.

A few general guidelines:

Keep your directory trees shallow (if that's the right word..).  Complexity
in the naming scheme can turn into a nightmare, especially in groups that
see a high turnover of system managers, or groups that have a large number
of superusers (yes, strangely enough, this does happen).

Use a consistent naming scheme for the various architectures and OSes.
Being logical in this helps..

Identify systems (for each os/architecture combination) to be used as
master systems for testing and distributing new software. Then make sure
everyone sticks to these. 

Spending a little more for disk space is worth it if managing all the different
/usr/locals is greatly simplified. This is especially true for large networks 
as the cost can be amortized over many clients.

Avoid having mount points on NFS filesystems. This can be a real pain..

The hysteresis principle- resist change! Your user community should have to
handle as few changes as possible. Ease of transition to a new structure should
be a dominant factor in any changes recommended. For example, requiring changes
to everybody's .login or .cshrc, when you have 700 users can prove to be
somewhat problematic.

Use something like rdist or coda for keeping /usr/locals consistent across
servers.
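A sketch of the rdist approach (hostnames and the Distfile contents are illustrative):

```shell
# Sketch: push one master /usr/local to replica servers with rdist.
cat > Distfile <<'EOF'
HOSTS = ( server2 server3 )
FILES = ( /usr/local )
${FILES} -> ${HOSTS}
	install -R ;
EOF
rdist -f Distfile
```

The `-R` on the install command removes files on the replicas that no longer exist on the master, which is what keeps deinstalls consistent too.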

Ideally each machine should only be able to see the /usr/local stuff that is 
specific to its os and architecture. This may not be possible if many of
your users are developers...


At all costs avoid a complicated setup that looks elegant on paper.

----------------------------------------------------------------------------
Kishore Seshadri,(speaking for myself)       <kseshadr@mipos3.intel.com>
Intel Corporation                            <..!intelca!mipos3!kseshadr>
"For a successful technology, reality must take precedence over public
 relations, for Nature cannot be fooled." -Richard Feynman
----------------------------------------------------------------------------

rusty@belch.Berkeley.EDU (Rusty Wright) (01/04/91)

I manage a fileserver (tuna) that exports directories to our campus.
Anybody on campus can nfs mount its disks and use the stuff on it;
public domain software and site-licensed stuff.  One of the primary
constraints I have to work within is that people may have their own
copy of what I have on tuna so it can't conflict or require special
installation gyrations.  Also, people need to be able to "test drive"
what's on tuna easily.  These are goals that aren't always met; for
example, as someone pointed out, stuff that uses shared libraries
throws a monkey wrench into the works.

Tuna is soon to be upgraded to a faster and more capacious server and
I've been thinking about reorganizing how things are laid out.
Currently the arrangement is fairly convenient for users but causes me
problems.  Each exported filesystem is named /tuna_a, /tuna_b,
/tuna_c, etc.  tuna_a contains only Sun 3 binaries and tuna_e contains
only Sun 4.  I also want to support DECstations on the new tuna.  I'm
currently using the "separate bin, man, lib, and so on directories for
each package" scheme; e.g.  /tuna_a/tex82/bin, /tuna_a/x11r4/bin, etc.
The main hassle I have with this setup is that it doesn't give me any
flexibility when there are space problems; i.e., I'd like to be able
to move the tex stuff to a different partition but tuna_e is only Sun
4 and tuna_a is only Sun 3.  It also doesn't let me update stuff
that only works with a new version of the OS; for example, tuna_e
started out being compiled under SunOS 4.0.3, and when I needed to
start compiling stuff under SunOS 4.1 I had to give everyone several
weeks' advance notice.

The scheme I'm planning on implementing is somewhat similar to the
Depot scheme, but the first paper where I read about a similar scheme
was a paper by Shing and Ni from Michigan State University given at
the Sun User Group conference this winter in San Jose.  I greatly
prefer the MSU scheme over the Depot scheme.  Depot is overly complex
for my tastes and one of my major complaints with the Depot is that
you can't test drive any of the software without mounting everything
just so (I found the thing where you mount the arch.arch-os directory
on the empty arch directory particularly irksome and unnecessarily
arcane).  I currently have the restriction that tuna_a must be mounted
as /tuna_a, and likewise for /tuna_e.  The other problem is that for
any shared libraries the clients must make links pointing to them in
/usr/local/lib.  So there are restrictions, but I think I've reduced
them pretty well.

In the MSU scheme you have directories on the server like

	/tuna_a/x11r4/bin/sun3
	/tuna_a/x11r4/lib/sun3
	/tuna_a/x11r4/bin/sun4
	/tuna_a/x11r4/lib/sun4
	/tuna_b/x11r4/bin/pmax
	/tuna_b/x11r4/lib/pmax
	/tuna_a/x11r4/include
	/tuna_a/x11r4/man

I prefer the Depot method of putting the architecture directory first;

	/tuna_a/x11r4/arch.sun3-os3/bin
	/tuna_a/x11r4/arch.sun3-os3/lib
	/tuna_a/x11r4/arch.sun4-os4/bin
	/tuna_a/x11r4/arch.sun4-os4/lib
	/tuna_b/x11r4/arch.pmax-os4/bin
	/tuna_b/x11r4/arch.pmax-os4/lib
	/tuna_a/x11r4/include
	/tuna_a/x11r4/man

But I'm going to leave the "arch." out of the directory name; e.g.,
just sun3, not arch.sun3.  But I will have the os as part of the name
so that I can support different/incompatible os versions;

	/tuna_a/x11r4/sun3_os3.5/bin
	/tuna_a/x11r4/sun3_os3.5/lib
	/tuna_a/x11r4/sun3_os4.1/bin
	/tuna_a/x11r4/sun3_os4.1/lib
	/tuna_a/x11r4/sun4_os4.1/bin
	/tuna_a/x11r4/sun4_os4.1/lib
	/tuna_b/x11r4/pmax_os4.1/bin
	/tuna_b/x11r4/pmax_os4.1/lib

I'm probably also going to push things down one level and have all of
the tuna mounts in a /tuna directory;

	/tuna/a/x11r4/sun3_os3.5/bin
	/tuna/a/x11r4/sun3_os3.5/lib
	/tuna/a/x11r4/sun3_os4.1/bin
	/tuna/a/x11r4/sun3_os4.1/lib
	/tuna/a/x11r4/sun4_os4.1/bin
	/tuna/a/x11r4/sun4_os4.1/lib
	/tuna/b/x11r4/pmax_os4.1/bin
	/tuna/b/x11r4/pmax_os4.1/lib

Having things mounted in the root creates some problems when you use
the automounter.

So, with the above scheme, as before, the clients will have to mount
the tuna directories in the right place and make the symbolic links in
/usr/local/lib for any shared libraries and then they can test drive
and use the software.

Another idea that the MSU and Depot schemes bring up is making it
fairly transparent to the user.  One of the problems with my current
setup is that you have /tuna_e/<whatever>/bin in your path when you're
on a Sun 4 and /tuna_a/<whatever>/bin when you're on a Sun 3; I like
to be able to use the same .cshrc file on all of my accounts.  What I
will do is, similar to the MSU scheme, provide a shell script that
makes a bunch of symbolic links.  It will attempt to determine the
client's architecture and then link the appropriate directories.  The
links will be made in /tuna_0:

	/tuna_0/x11r4/bin	-> /tuna/a/x11r4/sun4_os4.1/bin
	/tuna_0/x11r4/lib	-> /tuna/a/x11r4/sun4_os4.1/lib
	/tuna_0/x11r4/man	-> /tuna/a/x11r4/man
	/tuna_0/x11r4/include	-> /tuna/a/x11r4/include
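A sketch of such a link-making script (the architecture/OS naming and the package list are illustrative, and TUNA/DEST default to scratch directories so the sketch is side-effect free):

```shell
#!/bin/sh
# Sketch: work out the client's architecture/OS name, then point
# fixed names under DEST at the matching per-architecture directories.
TUNA=${TUNA:-`mktemp -d`}
DEST=${DEST:-`mktemp -d`}

# e.g. sun4_os4.1 on a SunOS 4.1 Sun 4 (naming scheme as above).
ARCH=`uname -m`_os`uname -r | cut -d. -f1,2`

for pkg in x11r4; do
    mkdir -p $DEST/$pkg
    ln -s $TUNA/a/$pkg/$ARCH/bin $DEST/$pkg/bin
    ln -s $TUNA/a/$pkg/$ARCH/lib $DEST/$pkg/lib
    ln -s $TUNA/a/$pkg/man      $DEST/$pkg/man
    ln -s $TUNA/a/$pkg/include  $DEST/$pkg/include
done
```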

One of the main places I diverge from the MSU scheme is in not putting
the symbolic links in /usr/local.  Lots of departments and groups here
have their own /usr/local, and I have no control over how they've set
up their systems, and I don't want my stuff interfering with their
stuff.  The biggest wart is the symbolic links for the shared
libraries.  At least you can use ldconfig to specify additional
directories, but I'd really like to be able to embed in the programs
(applications) the location where to try first for the shared
libraries.

I have to say that I found David Curry's scheme pretty wacky;
subdirectories in /usr/local/bin?  And manual pages in /usr/man/manl?!
Is that copies of the man pages or symbolic links?  And how in the
world do you keep track of manual pages that need to be removed when a
command has been decommissioned or renamed due to a new release of the
software package?  And what about software packages that have
libraries (i.e., man pages that would go in man3)?  Having separate
man trees for each package works nicely with the Sun man command and
the MANPATH environment variable.  Too bad DEC doesn't support it.