[comp.unix.admin] Software installation opinions needed

ddh@hare.cdc.com (dd horsfall x-4622) (09/18/90)

Wisdom and/or insight needed.  Disclaimer: although I've worked in the
software development field for 15+ years, I'm (relatively) new (~2 yr)
to the Unix variants.

If this is in one of The Fine Manuals, reference thereto would be
appreciated, but I haven't found it yet.

Is there a "convention" (or even a "standard", who knows) which defines
the difference in content between  /bin, /usr/bin, /usr/local/bin,
/usr/new, /usr/etc, /usr/5bin, /usr/sbin ... and so forth, all the 
combinations that start with / and end with bin or lib?

Context: we are about to release a software product which will include
the usual (for us) stuff: program binary, man pages, example problems, 
installation verification data; for each of these, do we
a) recommend a particular directory for its installation?
b) leave it up to each site/purchaser to figure out for themselves
   what's best for their configuration?
c) Some combination -- recommended location for those who don't want to
   think too hard about it, guidelines for the rest?

Software installation: should we
a) _Move_ the program binary to a place where people expect to find such
   things (i.e., something that's probably already in their $path) ?
b) Recommend adding a new directory to the $path?
c) _Leave_ the binary in a product/version catalog, but build a link to
   it from the "preferred" place in the path?  Hard or soft link?

How many of your third-party (i.e., not vendor-supplied) products fall
into the above categories?  Which do you prefer?  Did someone provide
an installation script (or even document) that would be an
exemplary model for us to follow?  If so, would you send me a copy?

Are there any specific "things" that an install script did that 
particularly annoyed you?  In other words, complete this sentence:
"Whatever you do, DON'T DO THIS..."

Lastly, what else in this area should I know that I don't even know
that I don't know (as compared to the things that I know I don't know)?

( Sidebar: How many of the above directories are local to my site and I 
don't know any better?  Are any of them specific to certain vendors?  Does
the list of "standard" or "conventional" directories vary between
SysV and BSD based systems? )
 
Readers with an opinion in the above areas are invited to reply to the
address in .sig; I can't imagine that a large number of general
net.people have any interest in this...
   The Horse
                                       +    Control Data Corporation      
   Dan Horsfall     +1-612-482-4622    +    4201 Lexington Ave North      
   Internet   ddh@dash.udev.cdc.com    +    Arden Hills MN 55126 USA      

gt0178a@prism.gatech.EDU (BURNS,JIM) (09/19/90)

in article <25908@shamash.cdc.com>, ddh@hare.cdc.com (dd horsfall x-4622) says:

[request for guidelines on writing software installation procedures]

> Readers with an opinion in the above areas are invited to reply to the
> address in .sig; I can't imagine that a large number of general
> net.people have any interest in this...

I for one would disagree - this is right up the alley of one of the groups
you posted to - comp.unix.admin. If others disagree, pls send me a
summary.
-- 
BURNS,JIM
Georgia Institute of Technology, Box 30178, Atlanta Georgia, 30332
uucp:	  ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!gt0178a
Internet: gt0178a@prism.gatech.edu

de5@de5.ctd.ornl.gov (Dave Sill) (09/19/90)

[Followup redirected to comp.unix.admin.]

In article <25908@shamash.cdc.com>, ddh@hare.cdc.com (dd horsfall x-4622) writes:
>
>Is there a "convention" (or even a "standard", who knows) which defines
>the difference in content between  /bin, /usr/bin, /usr/local/bin,
>/usr/new, /usr/etc, /usr/5bin, /usr/sbin ... and so forth, all the 
>combinations that start with / and end with bin or lib?

Nothing formal, but the UNIX System Administration Handbook (Nemeth,
Snyder, & Seebass) has a chart on page 41 that lists a bunch of these.
For example:

    /bin
        commands needed for minimum system operability
    /usr/bin
        executable files
    /usr/local/bin
        local software (BSD) executables
    /usr/new
        new software that will soon be supported (BSD)
    /usr/etc
        where Sun puts things that everyone else puts in /etc

>the usual (for us) stuff: program binary, man pages, example problems, 
>installation verification data; for each of these, do we
>a) recommend a particular directory for its installation?
>b) leave it up to each site/purchaser to figure out for themselves
>   what's best for their configuration?
>c) Some combination -- recommended location for those who don't want to
>   think too hard about it, guidelines for the rest?

Suggest a default, but allow the installer to either specify an
alternate when running your installation script, or tell them how to
edit the script itself.
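Dave's suggestion can be sketched in a few lines of Bourne shell; the
package name (foobar) and the default directories are hypothetical, not
from any real product:

```shell
#!/bin/sh
# Sketch of an install script that suggests defaults but lets the
# installer override them on the command line:
#     install.sh [bindir] [mandir]
# All names here are illustrative.
BINDIR=${1:-/usr/local/bin}     # default home for the executable
MANDIR=${2:-/usr/local/man}     # default home for the man pages

echo "foobar executable will go in: $BINDIR"
echo "foobar man pages will go in:  $MANDIR"
# ...copy the files here, or let the installer bail out first...
```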

>Software installation: should we
>a) _Move_ the program binary to a place where people expect to find such
>   things (i.e., something that's probably already in their $path) ?

Probably a good idea.

>b) Recommend adding a new directory to the $path?

Nah, too much of a hassle, and PATHs are getting too long.

>c) _Leave_ the binary in a product/version catalog, but build a link to
>   it from the "preferred" place in the path?  Hard or soft link?

Why?  What good does the redundant link do?

>How many of your third-party (i.e., not vendor-supplied) products fall
>into the above categories.  Which do you prefer?

Most seem to take path b), but I find it annoying; not as an
administrator, but as a user.  I'm sick of having to fiddle with
.logins, .profiles, and .bashrcs on my various systems every time I
install a new commercial product.  I don't mind, for example, having
to create /usr/frame to install FrameMaker, but why don't they install
the `maker' script in /usr/bin rather than force people to cd to
/usr/frame and run bin/maker or add /usr/frame/bin to their PATH and
create an FMHOME environment variable?

>Did someone provide
>an installation script (or even document) that would be an
>exemplary model for us to follow?  If so, would you send me a copy?

I generally like the way DEC installs go, using the utility `setld'.
It allows installations from tape/cdrom or disk files, allows
installations to be reversed (a *very* nice feature), can produce a
list of installed software, etc.  Unfortunately, only DEC has it.

>Are there any specific "things" that an install script did that 
>particularly annoyed you?  In other words, complete this sentence:
>"Whatever you do, DON'T DO THIS..."

There are zillions of "Don't do's", but in general, don't create or
modify anything without notifying the installer.

>Lastly, what else in this area should I know that I don't even know
>that I don't know (as compared to the things that I know I don't know)?

I don't know.

>( Sidebar: How many of the above directories are local to my site and I 
>don't know any better?

I've seen them all before, one place or another, so they aren't
site-specific.

>Are any of them specific to certain vendors?

Even /usr/etc, which Nemeth indicates is Sun-specific, is found on
most UNIX systems; it's just that Sun seems to be particularly fond of
it.

>Does
>the list of "standard" or "conventional" directories vary between
>SysV and BSD based systems? )

Yes.  For example, /usr/new and /usr/old are BSD-only and /usr/lbin is
ATT-only.

>Readers with an opinion in the above areas are invited to reply to the
>address in .sig; I can't imagine that a large number of general
>net.people have any interest in this...

I think this is relevant for comp.unix.admin folks.

-- 
Dave Sill (de5@ornl.gov)		These are my opinions.
Martin Marietta Energy Systems
Workstation Support

hackwort@dg-rtp.dg.com (Brian Hackworth) (09/19/90)

In article <25908@shamash.cdc.com>, ddh@hare.cdc.com (dd horsfall
x-4622) writes:
> Wisdom and/or insight needed.  Disclaimer: although I've worked in the
> software development field for 15+ years, I'm (relatively) new (~2 yr)
> to the Unix variants.
> 
> If this is in one of The Fine Manuals, reference thereto would be
> appreciated, but I haven't found it yet.
> 
> Is there a "convention" (or even a "standard", who knows) which defines
> the difference in content between  /bin, /usr/bin, /usr/local/bin,
> /usr/new, /usr/etc, /usr/5bin, /usr/sbin ... and so forth, all the 
> combinations that start with / and end with bin or lib?
> 

Ok, I'll bite.  You asked for mail, but I think this is
a topic of interest for most administrator types, so I'm
following up here.

The background for much of this discussion is the fact
that the industry is moving towards a client-server model
of computing, where software on a server is (or may be)
shared with zero or more operating system clients.

We can divide third party packages into two parts:
the part which is the same for all hosts, and is therefore
shared (an example is the /usr/bin/cat executable); and 
the part which is unique to each host (an example is 
/etc/passwd).

This breakdown maps to / (root) and /usr for vendor-supplied
software and /opt and /usr/opt for third-party-supplied
optional software.

Under the root directory, packages may have these directories
(as well as others, maybe):
    
    etc		- for host-specific data and configuration files
    
Under the usr directory, packages may have these directories:

    bin		- for user-visible executable commands
    etc		- for host-independent data and configuration files
    lib		- for library data and routines
    catman	- for manual pages
    sbin	- for system (administrative) commands not
		    visible to ordinary users

Note that this convention puts all executables in "bin" and "sbin",
even though most unixes today have executables in etc and lib
as well.

> Context: we are about to release a software product which will include
> the usual (for us) stuff: program binary, man pages, example problems,
> installation verification data; for each of these, do we
> a) recommend a particular directory for its installation?
> b) leave it up to each site/purchaser to figure out for themselves
>    what's best for their configuration?
> c) Some combination -- recommended location for those who don't want to
>    think too hard about it, guidelines for the rest?
> 
> Software installation: should we
> a) _Move_ the program binary to a place where people expect to find such
>    things (i.e., something that's probably already in their $path) ?
> b) Recommend adding a new directory to the $path?
> c) _Leave_ the binary in a product/version catalog, but build a link to
>    it from the "preferred" place in the path?  Hard or soft link?

(a) (/usr/opt/<package> should be the recommendation) and (c).

I believe that vendors should have complete control of the / (root)
and /usr directories.  Third parties should load all software
into /opt and /usr/opt trees.  The installation process should
make links from the "usual" places (usually /usr/bin or /usr/catman
or whatever) to the package place.  For example, if your package's
main executable is called foobar, create a link from

    /usr/bin/foobar to /usr/opt/foobar/bin/foobar

The link should be a symbolic link because you have no idea
how the customer has partitioned file systems.  Symbolic
links will work across file systems; hard links will not.

As long as you load your software into your own tree, 
sophisticated administrators can "install" the software
anywhere they want by creating the appropriate symbolic
links.

Then, when the customer upgrades his vendor-supplied software,
your third-party software is not deleted, just the links
are deleted.  After upgrading, the customer re-installs
your software (without re-loading).

> 
> How many of your third-party (i.e., not vendor-supplied) products fall
> into the above categories?  Which do you prefer?  Did someone provide
> an installation script (or even document) that would be an
> exemplary model for us to follow?  If so, would you send me a copy?
> 

#! /bin/sh

#  Installation (setup) script for foobar package

ROOT=${1:-/}		#  Allow setting up a client's root
USR=${2:-/usr}		#  Allow setting up another usr tree

PKG=foobar

MAN=F_man

cd ${USR}/bin
ln -s ../opt/${PKG}/bin/foobar foobar

cd ${USR}/catman
ln -s ../opt/${PKG}/catman/${MAN} ${MAN}


And so on -- you get the idea.

> Are there any specific "things" that an install script did that 
> particularly annoyed you?  In other words, complete this sentence:
> "Whatever you do, DON'T DO THIS..."

Don't
    - assume anything about file system layout
    - put host-specific information in usr
    - put executables in etc or lib
    - modify anything outside of your directory tree unless
	absolutely necessary.  If it is necessary, 
	CLEARLY document what is changed.
    - hard-code pathnames into executables

> 
> Lastly, what else in this area should I know that I don't even know
> that I don't know (as compared to the things that I know I don't know)?
> 

It sounds to me like you are asking the right questions.

>  
> Readers with an opinion in the above areas are invited to reply to the
> address in .sig; I can't imagine that a large number of general
> net.people have any interest in this...
>    The Horse
>                                        +    Control Data Corporation  
>    Dan Horsfall     +1-612-482-4622    +    4201 Lexington Ave North  
>    Internet   ddh@dash.udev.cdc.com    +    Arden Hills MN 55126 USA  

I hope others ARE interested!

--
Brian Hackworth
Data General Corporation            hackworth@dg-rtp.dg.com
62 T. W. Alexander Drive            ...!mcnc!rti!dg-rtp!hackworth
Research Triangle Park, NC 27709    (919) 248-6143

emery@linus.mitre.org (David Emery) (09/20/90)

Here's another do/don't do:

     I get VERY UPSET by 3rd party installations that must be done as
'root'.  An installation script should NOT require that it be run by the
superuser to do mundane things such as get the stuff off the tape,
build its directory structure, etc.  It should be MY decision what
userid owns the software, and to run the installation using a userid
other than root.  Besides, in this era of viruses, etc., who knows what
an installation script is doing to your system?

     Generally there is a little bit of installation that must be run
as root, to install things in places like /usr/bin, and /usr/lib.  The
right approach (in my opinion) is to provide these scripts separately,
and also provide an alternate approach.  

     For instance, suppose you have a tool called "munger", and it
needs a file called "munge_lib".  The default installation procedures
would be to install "munger" in /usr/bin and "munge_lib" in /usr/lib.
However, if I want, it should be OK for me to install "munger" in its
own directory /usr/munge, and put the "munge_lib" file in
/usr/munge/lib.  The tool should provide a way to determine the
location of "munge_lib".  Two approaches are
	1.  environment variable
	2.  enclosing the tool in a shellscript and modifying the
shellscript at installation time.
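A sketch of the second approach, using the hypothetical munger/munge_lib
names from the example above: the product ships a wrapper with a
placeholder, and the install script patches the real location in with
sed.

```shell
#!/bin/sh
# Sketch of "enclose the tool in a shellscript and modify the
# shellscript at installation time".  All names are illustrative.
PREFIX=${1:-/usr/munge}

# The wrapper as shipped, with a placeholder for the install location:
cat > munger <<'EOF'
#!/bin/sh
MUNGE_DIR=@PREFIX@
MUNGE_LIB=$MUNGE_DIR/lib/munge_lib
export MUNGE_LIB
exec "$MUNGE_DIR/bin/munger.real" "$@"
EOF

# The install script fills in where the admin actually put the package:
sed "s|@PREFIX@|$PREFIX|" munger > munger.$$ && mv munger.$$ munger
chmod +x munger
```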

     So, what I think should happen is that the installation script
should look something like the following (indented lines result from
answering 'n'):
	Install executable into /usr/bin (y/n)? 
		Enter the path for the executable: 
	Install library into /usr/lib (y/n)?
		Enter the path for the library:
	Perform installation now (y/n)?
		OK, not performing installation.  Writing
		installation commands to the file "./install.me".
		You may need to be the superuser to execute "./install.me".
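One way to implement the "write it to ./install.me" fallback, again with
the illustrative munger names and default paths (the y/n answer is taken
as an argument here rather than read interactively):

```shell
#!/bin/sh
# Sketch: collect the steps that need root in one place, then either
# run them or save them to ./install.me for the superuser to run later.
# 'munger', 'munge_lib', and the default paths are illustrative.
BIN=${BIN:-/usr/bin}
LIB=${LIB:-/usr/lib}
answer=${1:-n}              # 'y' = install now; anything else defers

CMDS="cp munger $BIN/munger
cp munge_lib $LIB/munge_lib"

if [ "$answer" = y ]; then
    echo "$CMDS" | sh
else
    echo "$CMDS" > install.me
    chmod +x install.me
    echo "OK, not performing installation.  Writing"
    echo "installation commands to the file \"./install.me\"."
    echo "You may need to be the superuser to execute \"./install.me\"."
fi
```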

     I've installed several Ada compilers on a Sun.  The best
installation script is Verdix, which requires root access ONLY to edit
a file in /etc.  However, you do have to add the path to
<compiler-directory>/bin to your path.  The script edits its internal
files to load path information.  Alsys requires you to log in as root,
and does a lot of messing around in /usr/bin and /usr/lib.  I tried to
run the installation using our 3rd party software account (which is
NOT root), and had lots of problems.  Finally I had to tear apart the
script and do it by hand.  I've had to do similar things with other
3rd party tools, but not with Verdix.

				dave emery
				emery@aries.mitre.org

libes@cme.nist.gov (Don Libes) (09/20/90)

There's a paper in the upcoming LISA conference, called "depot: A
Framework for Sharing Software Installation Across Organizational and
UNIX Platform Boundaries".  It describes how we (NIST) addressed many
of the problems discussed in this thread.

If you want a copy of the paper (and can't wait for LISA), you can
ping the author (klm@cme.nist.gov), although I believe it still has
not completed its internal review prior to public release.  If you are
real nice or promise to critique it or something like that, he'll
probably give out a "beta" copy.

Don Libes          libes@cme.nist.gov      ...!uunet!cme-durer!libes

bill@bilver.UUCP (Bill Vermillion) (09/20/90)

In article <EMERY.90Sep19131715@aries.linus.mitre.org> emery@linus.mitre.org (David Emery) writes:
>Here's another do/don't do:

>     So, what I think should happen is that the installation script
>should look something like the following (indented lines result from
>answering 'n'):
>	Install executable into /usr/bin (y/n)? 
>		Enter the path for the executable: 
>	Install library into /usr/lib (y/n)?
>		Enter the path for the library:
>	Perform installation now (y/n)?
>		OK, not performing installation.  Writing
>		installation commands to the file "./install.me".
>		You may need to be the superuser to execute "./install.me".

And I just installed a package today that did the above (except building an
install file).

And then it proceeded to install itself, with a note on the screen that it
may take from 1 to 20 minutes to install, depending on the machine.

This STOOOOPID! program thinks it knows how to handle terminals better
than termcap or terminfo.  So it built ITS OWN TERMINFO library in its
own /usr/lib/(application name withheld to protect the guilty) directory.

All umpteen blue jillion terminals that are in the standard terminfo are
compiled, with NO OPTION to withhold compiling, or compile only
selectively.  Now I have to go and remove all but the 3 terminal types we
standardized on, because the program wants to give me all its known
terminals and variations, which now amounts to several hundred files.

So if a package needs its own special terminal database, give us an option
to only choose what we need.  A better way would be to write the program to
use the system terminal files - but that's wishful thinking.

On the machines at this particular site the word processor, the spread
sheet, and the data base all use their OWN terminal files. Yech!


-- 
Bill Vermillion - UUCP: uunet!tarpit!bilver!bill
                      : bill@bilver.UUCP

als@bohra.cpg.oz (Anthony Shipman) (09/20/90)

In article <1990Sep19.125944.6489@cs.utk.edu>, de5@de5.ctd.ornl.gov (Dave Sill) writes:
> [Followup redirected to comp.unix.admin.]
> 
> In article <25908@shamash.cdc.com>, ddh@hare.cdc.com (dd horsfall x-4622) writes:
> >
> >Is there a "convention" (or even a "standard", who knows) which defines
> >the difference in content between  /bin, /usr/bin, /usr/local/bin,
> >/usr/new, /usr/etc, /usr/5bin, /usr/sbin ... and so forth, all the 
> >combinations that start with / and end with bin or lib?
.................
> Most seem to take path b), but I find it annoying; not as an
> administrator, but as a user.  I'm sick of having to fiddle with
> .logins, .profiles, and .bashrcs on my various systems every time I
> install a new commercial product.  I don't mind, for example, having
> to create /usr/frame to install FrameMaker, but why don't they install
> the `maker' script in /usr/bin rather than force people to cd to
> /usr/frame and run bin/maker or add /usr/frame/bin to their PATH and
> create an FMHOME environment variable?

A problem that I always worry about is name collisions. When program names are
abbreviated (often to ridiculous extremes) there is a real chance that somebody
else's program will have the same abbreviation. If they are all installed in
/usr/bin then installing one package will wipe out another.

Installing a package with its own bin directory gets around the problem but
introduces the hassle of Yet Another Path Directory. Even 'maker' is too short
for my liking. What indicates that it is framemaker and not somebody else's
*maker or "makerules" etc?

Call it framemaker or even FrameMaker if that is what it is. If the customer 
thinks the name is too long to type then he/she can use a shell alias or 
equivalent. 

This has the added advantage that when I look through the bin directories I
know what the programs are.  Too many times I have seen a file in a bin
directory with a meaningless string of characters for a name and not known
what package it belongs to.  There's another tip for software vendors:
supply a list of installed files with each product in *machine readable*
form so that I can search it to find whether a file belongs to the product.
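Generating such a list costs the vendor one line in the install script.
A sketch, using a made-up foobar-1.0 package tree so it is
self-contained:

```shell
#!/bin/sh
# Sketch: record every installed file in a per-package MANIFEST so an
# admin can ask "which package owns this file?".  The foobar-1.0 tree
# and its contents are fabricated for illustration.
PKGDIR=./foobar-1.0
mkdir -p "$PKGDIR/bin" "$PKGDIR/lib"
: > "$PKGDIR/bin/foobar"
: > "$PKGDIR/lib/foobar.dat"

# Last step of the install: write the manifest.
find "$PKGDIR" -type f -print > MANIFEST.$$ &&
    mv MANIFEST.$$ "$PKGDIR/MANIFEST"

# Later, the admin can search all manifests for a mystery file:
grep -l 'bin/foobar$' */MANIFEST
```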

This can be summarised as "Minimise the constraints on the customer".

-	Don't constrain me wrt what directory to install into. I might not have 
	the room on that file system.

-	Don't choose my program name abbreviations for me; I probably won't
	like them.

-	Don't constrain me by leaving me ignorant about what the programs I find in
	my bin directories are.

etc. etc.
-- 
Anthony Shipman                               ACSnet: als@bohra.cpg.oz.au
Computer Power Group
9th Flr, 616 St. Kilda Rd.,
St. Kilda, Melbourne, Australia

kandler@lan.informatik.tu-muenchen.dbp.de (Kandler) (09/20/90)

In article <EMERY.90Sep19131715@aries.linus.mitre.org> emery@linus.mitre.org (David Emery) writes:
>     I get VERY UPSET by 3rd party installations that must be done as
>'root'.  An installation script should NOT require that it be run by the
>superuser to do mundane things such as get the stuff off the tape,
>...
>other than root.  Besides, in this era of viruses, etc, who knows what
>an installation script is doing to your system?
>
Yes, that's the reason why I always read a script before I execute it as root.
And that's the reason why I get EVEN MORE UPSET if an installation 
is not a script but a binary. Therefore I decided to leave SunOS
suninstall and friends alone and write my own installation scripts.
(Another reason is that suninstall is so highly interactive.  It's
very boring to install 40+ Suns and answer the same questions
every time.)  I admit that Sun is not 3rd party but 1st party, but this
makes no difference in my eyes.  I want to know _exactly_ what an
installation does to my system.  Someone in this thread mentioned DEC's
'setld'.  I too think that's the way to go.  (I implemented a kind of
'setld' on our Suns.  Sorry :-)
--
Matthias Kandler                                 Institut f. Informatik 
                                                 TU Muenchen
kandler@informatik.tu-muenchen.dbp.de            Postfach 20 24 20 
Telefon: +49 89 2105 2025                        D-8000 Muenchen 2 

rowe@cme.nist.gov (Walter Rowe) (09/20/90)

>>>>> On 19 Sep 90 12:59:44 GMT, de5@de5.ctd.ornl.gov (Dave Sill) said:

Dave> Suggest a default, but allow the installer to either specify an
Dave> alternate when running your installation script, or tell them
Dave> how to edit the script itself.

I agree.  As a system administrator, I get frustrated when I install a
third party package that won't allow me to dictate where it resides.
Frequently, I prefer to install it in /usr/local or in our depot
(which Don Libes mentions in a followup post).  In addition, I don't
care to allocate huge amounts of disk space on "/usr" since that is
not generally a network-shared file system but rather a file system
local to each machine.  Imagine doing 100+ X11R4 "make install"s.

FrameMaker, X windows, all the GNU software, and the Verdix Ada
compiler are great about letting me dictate their final resting
place.  They modify their internal startup scripts or header files
accordingly when you run the installation.  You do have to modify your
path to access most of them, though you can provide symlinks in, for
instance, /usr/local/bin, which usually is a network-shared file system.

>Software installation: should we a) _Move_ the program binary to a
>place where people expect to find such things (i.e., something that's
>probably already in their $path) ?

Dave> Probably a good idea.

I don't necessarily agree for a couple reasons.

[1] We have nearly 100 machines on our net served by five central
    servers.  When we upgrade machines, it's nice not to have to
    re-install all the third-party software we have.  Ever have to
    install something like X11R4?  It's quite time-consuming.

[2] Using symlinks, we can keep all the various third-party packages
    separate, and they're self-documenting.  For instance, a symlink in
    /usr/local/lib like "libX11.a -> /depot/X11/lib/libX11.a" lets me
    know right away that this is part of the X11R4 distribution and
    not part of the OpenWindows 2.0 distribution which also contains a
    file by the same name.

>b) Recommend adding a new directory to $path?

Dave> Nah, too much of a hassle, and PATH's are getting too long.

Yeah, I don't go for having to add things to my path.  Users
shouldn't have to modify their path each time you get a new package,
and admins should be able to decide where the application really
resides.

As an aside, one option we are looking at here at NIST that would help
solve this exact problem is the SunOS TFS (Translucent File System),
which allows you to mount directories in a stack and still see all the
different files underneath.  You can mount bin directories from
several places onto one common place and users would only ever have to
add the one common bin directory to their path.  If I add a new
application, I add its bin dir to the TFS list and BANG! users have
access to it.  No symlinks, no relocating applications, nothing!

For instance, the X11R4, GNU, and Frame bin dirs can be mounted onto
say /depot/share/bin.  If you add /depot/share/bin to your path, then
you have access to all these applications.  I don't have to move them
there, you don't have to have a long path, and I can add/change mounts
any time and you automagically see the effects without changing your
path ever again.  When X11R5 comes out, I simply change the TFS list
and you have automatically migrated to X11R5.
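For sites without TFS, a symlink farm gives a rough approximation of the
same trick: one shared bin directory full of links into each package's
own bin directory.  A sketch, using a scratch ./depot tree and a
fabricated package so it is self-contained; the real /depot layout is as
described above:

```shell
#!/bin/sh
# Portable stand-in for TFS: merge each package's bin directory into
# one shared directory of symlinks.  The ./depot tree and the X11
# package contents below are fabricated for illustration.
DEPOT=./depot
SHARE=$DEPOT/share/bin

# fabricate one package so the loop has something to link
mkdir -p "$DEPOT/X11/bin"
: > "$DEPOT/X11/bin/xterm"

mkdir -p "$SHARE"
for pkg in X11; do                  # in real life: X11 gnu frame ...
    for f in "$DEPOT/$pkg"/bin/*; do
        b=`basename "$f"`
        # relative link: ../../<pkg>/bin/<file> resolves from $SHARE
        ln -s "../../$pkg/bin/$b" "$SHARE/$b"
    done
done
```

Swapping in X11R5 then means re-pointing the links, much as changing the
TFS list would.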

>Are there any specific "things" that an install script did that
>particularly annoyed you?  In other words, complete this sentence:
>"Whatever you do, DON'T DO THIS..."

Dave> There are zillions of "Don't do's", but in general, don't create
Dave> or modify anything without notifying the installer.

Definitely!  I don't like to find out too late that your script
overwrote a file that just so happens to have the same name (for
whatever reason) as one I already had.

>Readers with an opinion in the above areas are invited to reply to
>the address in .sig; I can't imagine that a large number of general
>net.people have any interest in this...

Dave> I think this is relevant for comp.unix.admin folks.

This is quite relevant to Unix administration.  I wish more vendors
would ask customers these types of questions before they sent out
products.  I apologize for rambling on as I have, but hopefully other
admins will find this stuff interesting and useful, and perhaps it
will spur innovative ideas of their own.  Are other people using TFS?


wpr
---
Walter P. Rowe                                    ARPA: rowe@cme.nist.gov
System Administrator, Robot Systems Division      UUCP: uunet!cme-durer!rowe
National Institute of Standards and Technology    LIVE: (301) 975-3694

barnett@grymoire.crd.ge.com (Bruce Barnett) (09/20/90)

In article <EMERY.90Sep19131715@aries.linus.mitre.org> emery@linus.mitre.org (David Emery) writes:

	I get VERY UPSET by 3rd party installations that must be done as
   'root'. 

Hear, hear!  Considering that I often attempt to install software on an
NFS server of one architecture, and the tape drive is on another
machine of another architecture, I would also like software developers
to consider the following:

	The installer is not root
	The file system is not local, but NFS mounted (i.e. being root
		wouldn't help).
	The tape drive is not on the local system.
	The system doing the installation might be diskless.

	/usr is mounted read only

	The location of the software might move after the
installation is complete.

	Large architecture-independent data files should not be
duplicated if different machine types are supported.

For instance, we have a large common file server (actually we have two -
one is a clone). We install software in either

	/common/`arch`
or
	/common/all

The `arch` directory is for machine-dependent files.  The /common/all
is for all systems, or software that supports multiple architectures
(i.e. /common/all/frame/bin/sun3, /common/all/frame/bin/sun4, etc.).

The emacs lisp files are in /common/all/emacs/lisp

We also have /usr/local - for server and clients. Some machines have
/local - for machines with local disks.

It is handy to be able to create one link (either a symbolic link or
a mount directory) and replace one package with another, i.e. a more
recent version.

Bottom line - assume nothing is standard in any system.
--
Bruce G. Barnett	barnett@crd.ge.com	uunet!crdgw1!barnett

mark@DRD.Com (Mark Lawrence) (09/20/90)

As far as installations go, anticipate the desire to share the sharable
and keep separate what can't be shared.

We have a system set up where executable binaries go to /usr/local/bin
and executable scripts go to /usr/local/script (because it can be used
across platforms).  Stuff that goes into /usr/lib, likewise, *should* be
architecture specific binaries.  We should be given the option of
putting potentially sharable stuff somewhere else.

In the simple case, executables will all go to one place (maybe even
/usr/bin) and library stuff another (/usr/lib), but the heterogeneous
platform case shouldn't be forgotten.

Frame is nice, as far as installation goes, and does most, if not all,
the work for you.  But it doesn't let me make choices about where things
should go in order to share template files and so forth.  I don't like
this about it.  We get a whole separate hierarchy for everything (one
for sun3 X, sun3 sunview, sun4 X, sun4 sunview), and additional
hierarchies (per arch) for the Version 2.0 (which only comes for
sunview, at present).
-- 
mark@DRD.Com uunet!apctrc!drd!mark

karl@naitc.naitc.com (Karl Denninger) (09/20/90)

In article <EMERY.90Sep19131715@aries.linus.mitre.org> emery@linus.mitre.org (David Emery) writes:
>Here's another do/don't do:
>
>     I get VERY UPSET by 3rd party installations that must be done as
>'root'.  An installation script should NOT require that it be run by the
>superuser to do mundane things such as get the stuff off the tape,
>build its directory structure, etc.  It should be MY decision what
>userid owns the software, and to run the installation using an userid
>other than root.  Besides, in this era of viruses, etc, who knows what
>an installation script is doing to your system?
>
>     Generally there is a little bit of installation that must be run
>as root, to install things in places like /usr/bin, and /usr/lib.  The
>right approach (in my opinion) is to provide these scripts separately,
>and also provide an alternate approach.  

Well, you're being not only unrealistic in some cases, but paranoid as well.

For commercial software (I publish a package under my own name, not AC
Nielsen) there is a good reason to run as root.  Namely, you have to do a
LOT of things as root to get the package installed.

For example, our package requires:

o)	Installation of two user id's in /etc/passwd under some
	circumstances (ie: if you select one of the options).

o)	SUID of the package if you select one of the options; it has
	to be able to write /etc/utmp if that option is chosen.  The package
	DOES relinquish SUID privs immediately after it does that operation
	on the /etc/utmp file, which is done prior to allowing the user any
	input beyond his/her password and login id.

o)	Installation of a group in /etc/group if it's not already there.

o)	Creation of a parameter file in /etc (so the rest of the package
	can "find itself" when it runs).

On the plus side, it does tell you exactly what it's doing during
installation, and DOES ask you where you want things installed -- including
the libraries it uses and the binaries.  There's an option for "fast
install" which chooses all defaults, but those are also displayed before
they're executed.

So yes, there are reasons to install as root.  For packages which don't do
system things, I don't like the idea, but for those which do (and lots do
either that kind of thing or install drivers, etc) you've got precious
little chance of getting people to break the scripts up.

Why?  Because I'll give you one guess at how many people will forget to run
the second script (to do the "root required" things) and then call tech
support asking why the product doesn't work.

--
Karl Denninger	AC Nielsen
kdenning@ksun.naitc.com
(708) 317-3285
Disclaimer:  Contents represent opinions of the author; I do not speak for
	     AC Nielsen on Usenet.

de5@de5.ctd.ornl.gov (Dave Sill) (09/20/90)

In article <ROWE.90Sep20061307@doc.cme.nist.gov>, rowe@cme.nist.gov (Walter Rowe) writes:
>
>>Software installation: should we a) _Move_ the program binary to a
>>place where people expect to find such things (i.e., something that's
>>probably already in their $path) ?
>
>Dave> Probably a good idea.
>
>I don't necessarily agree for a couple reasons.

I agree with your reasons, but my answer was in response to Dan
Horsfall's question about his particular application.  I didn't get
the impression that he was talking about a very large or multi-filed
package.  I surely wouldn't want X or FrameMaker dumped in toto into
/usr/bin.

>[1] We have nearly 100 machines on our net served by five central
>    servers.  When we upgrade machines, it's nice not to have to
>    re-install all the third-party software we have.  Ever have to
>    install something like X11R4?  It's quite time-consuming.

Tell me about it.  But X is a special case, I think.  At least, I
couldn't handle a bunch of similarly large and complex systems.  

>[2] Using symlinks, we can keep all the various third-party packages
>    separate, and they're self-documenting.  For instance, a symlink in
>    /usr/local/lib like "libX11.a -> /depot/X11/lib/libX11.a" lets me
>    know right away that this is part of the X11R4 distribution and
>    not part of the OpenWindows 2.0 distribution which also contains a
>    file by the same name.

Good point.  Unfortunately, vendors can't rely upon the availability
of symlinks.

>As an aside, one option we are looking at here at NIST that would help
>solve this exact problem is the SunOS TFS (Translucent File System),
>which allows you to mount directories in a stack and still see all the
>different files underneath.

Sounds neat, but it won't be a real-world solution for a long time, if
ever.

-- 
Dave Sill (de5@ornl.gov)		These are my opinions.
Martin Marietta Energy Systems
Workstation Support

rfinch@caldwr.water.ca.gov (Ralph Finch) (09/21/90)

In article <4547@tuminfo1.lan.informatik.tu-muenchen.dbp.de> matthias.kandler@informatik.tu-muenchen.dbp.de (Kandler) writes:
. . .
>Therefore I decided to leave SunOS
>suninstall and friends alone and write my own installation scripts.
>(Another reason is, that suninstall is so highly interactive. It's
>very boring to install 40+ Suns and to answer the same questions
>every time.)

Another reason is that suninstall checks partition space and
doesn't recognize NFS-mounted directories, thus falsely aborting when
in fact there is enough space.
-- 
Ralph Finch			916-445-0088
rfinch@water.ca.gov		...ucbvax!ucdavis!caldwr!rfinch
Any opinions expressed are my own; they do not represent the DWR

chris@mimsy.umd.edu (Chris Torek) (09/21/90)

In article <1990Sep19.144819.12179@dg-rtp.dg.com>
hackwort@dg-rtp.dg.com (Brian Hackworth) writes:
>We can divide third party packages into two parts:

(Actually, this applies to all the software, not just third party stuff.)

>the part which is the same for all hosts, and is therefore
>shared (an example is the /usr/bin/cat executable); and 
>the part which is unique to each host (an example is /etc/passwd).

(Some might quibble with this particular /etc file; a better example
might be /etc/fstab or /etc/exports or /etc/ttys.)

I would break this even further: the part that is the same for all
hosts *regardless of architecture*; the part that is the same for
all hosts of the same architecture; and the part that is always unique.

This matches /etc vs /usr/bin vs /usr/share in 4.3BSD-reno.

(Even this is still not always right, as the sharable machine independent
files in /usr/share may vary depending on the vendor's release version
even though they are the same across architectures.  This is rare enough
not to worry too much about it.)

>    /etc		- for host-specific data and configuration files
>    /usr/bin		- for user-visible executable commands
>    /usr/etc		- for host-independent data and configuration files

(4.3BSD-reno puts the lattermost in /usr/share.)

>    /usr/lib		- for library data and routines

4.3BSD-reno has /usr/lib, /usr/libdata, and /usr/libexec: /usr/lib holds
only libraries, /usr/libdata holds only data files, and /usr/libexec holds
only executables.  All of these are independent of the particular host
but not of the architecture.

>    /usr/catman	- for manual pages

These are in /usr/share/man/cat*/* in 4.3BSD-reno.

>    /usr/sbin	- for system (administrative) commands not
>		    visible to ordinary users

(One needs a /sbin as well, for fsck, etc.)

>Note that this convention puts all executables in "bin" and "sbin",
>even though most unixes today have executables in etc and lib
>as well.

In 4.3BSD-reno, /etc binaries move to /sbin and /usr/lib binaries move
to /usr/libexec.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 405 2750)
Domain:	chris@cs.umd.edu	Path:	uunet!mimsy!chris

bob@pta.oz.au (Bob Vernon) (09/21/90)

In article <25908@shamash.cdc.com> ddh@dash.udev.cdc.com writes:
> Readers with an opinion in the above areas are invited to reply to the
> address in .sig; I can't imagine that a large number of general
> net.people have any interest in this...

I beg to differ.  This query is perfect for comp.unix.admin and it
gives me chance to get some things off my chest.

Follow-ups to comp.unix.admin

> Are there any specific "things" that an install script did that 
> particularly annoyed you?  In other words, complete this sentence:
> "Whatever you do, DON'T DO THIS..."

Whatever you do, DON'T ever, ever, pretty please, do it; don't
even think about doing either of the following:

(I work as a Pre-sales support engineer for my company, and I am
frequently demo-ing third-party packages.  This means installing and
REMOVING the package and there is nothing that pisses me off more than
either of the following two problems).

DON'T #1:  Do not install any program in a standard directory without
giving the installer an option of changing the location.  I hate
install programs that shove programs in /usr/bin without telling me.
On our in-house system third-party programs are always installed in
/usr/local/bin and we recommend this for customers as well.  It makes
life a lot easier when upgrade time comes around.  But this is our
preference.  Many sites would have a different opinion on the best
location for 3rd party sw.  So the install script should ask the
installer "Where do you want to install the program?".  It can
recommend somewhere if it wants.
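A sketch of the kind of prompt I mean (the package name, function name,
and default path below are made-up illustrations, not any real vendor's
installer):

```shell
#!/bin/sh
# Hypothetical installer prompt: offer a sensible default location,
# but never force it on the installer.
choose_dir() {
    printf "Where do you want to install the program? [%s] " "$1" >&2
    read answer
    # An empty answer (or EOF) falls back to the suggested default.
    echo "${answer:-$1}"
}

BINDIR=$(choose_dir /usr/local/bin)
echo "will install into $BINDIR" >&2
```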

The same goes for any data files, help files, other programs, etc.,
that your program needs.  These should never be installed in /usr/bin
or /usr/lib unless the installer approves.  The best solution I find
is to create various local sub-directories (e.g. package/bin,
package/lib, package/help) and set up environment variables to access
them.  I know some people hate environment variables, but that's OK.
Create a wrapper script during installation that sets up all the
environment and adjusts the PATH.  This wrapper script is then the only
thing that needs to be in the user's default environment.
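The wrapper approach could look roughly like this; everything here (the
package name "frob", the install directory, the variable names) is a
made-up illustration, with a stub standing in for the real binary:

```shell
#!/bin/sh
# Sketch of install-time wrapper generation.  $PKGHOME is the directory
# the installer chose; /tmp is used here only so the demo is harmless.
PKGHOME=/tmp/frobpkg-demo
mkdir -p "$PKGHOME/bin" "$PKGHOME/lib" "$PKGHOME/help"

# Stand-in for the real program binary:
printf '#!/bin/sh\necho "help files live in $PKG_HELP"\n' \
    > "$PKGHOME/bin/frob.real"
chmod 755 "$PKGHOME/bin/frob.real"

# The generated wrapper -- the only thing that needs to be on the
# user's PATH.  It sets up the environment, then execs the real binary.
cat > "$PKGHOME/bin/frob" <<EOF
#!/bin/sh
PKGHOME=$PKGHOME
PKG_LIB=\$PKGHOME/lib
PKG_HELP=\$PKGHOME/help
PATH=\$PKGHOME/bin:\$PATH
export PKGHOME PKG_LIB PKG_HELP PATH
exec "\$PKGHOME/bin/frob.real" "\$@"
EOF
chmod 755 "$PKGHOME/bin/frob"
```

Users then run `frob` without knowing or caring where the package lives.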


DON'T #2:  Don't ever mess with my system files, e.g. /etc/rc,
/etc/rc.local or god forbid /etc/passwd.  If you need a daemon started
or some cleanup done at boot time, then create a boot time script, tell
me where it is and ask me to add it to rc.local.  Or at least tell me
what you are about to do and give me a chance to change the filename
you are about to munge.  And if you must have an entry in /etc/passwd,
then tell me the details and ask me to add it.  Do NOT assume that your
passwd entry will have userid 63 and groupid 37.  Don't laugh, I've
installed a package like this.  When the install failed, I naturally
tried it again.  By the end of the day the package had 6 passwd
entries all clashing with some quite innocent user.


RECOMMENDATION:  Make your install procedure nice and administrator-
friendly.  (This is not the same as user-friendly).  Give me the
chance to change any install location.  Tell me what you are about to
do before you do it.  Above all, don't stuff up my system without my
approval.


Bob V!

      -m-------   Robert Vernon			  DOMAIN: bob@pta.oz.au
    ---mmm-----   Pyramid Technology (Australia)  UUCP: pyramid!pta!bob
  -----mmmmm---   328 High Street		  PHONE: +61 2 415 0515
-------mmmmmmm-   Chatswood 2067 Australia	  FAX: +61 2 417 8232

moore@srl.mew.mei.co.jp (W. Phillip Moore) (09/21/90)

The above discussion (not included here for sanity's sake) has made some
very important points.  Most notably, the installer should have the
maximum degree of flexibility when installing a new software package.
I agree completely.  I can't stand it when I am forced to alter my system
to fit the software package, and this is something I am often forced to do
by many of the vendors of the LSI CAD software we use here.
	
However, I see one minor point overlooked.  The reason many of these
installations are simple and over-automated is that not all sites have
UNIX gurus who have customized their network to the point where it is so
unique that it doesn't remotely resemble anyone else's.  A lot of default
assumptions are necessary because there are actually a lot of
machines/systems in use out there which pretty much were taken out of the
box, turned on and left as is.

To speak up for the poor scientist/engineer who is not a UNIX guru, but
wants to install some new toy on their machine and use it without reading
1000 pages of system manuals (these people do exist and have nothing to
be ashamed of), I suggest that both a highly customizable installation
and a push-here-dummy type of approach should be offered.

Mr. Kandler complains that suninstall is too automatic and hides what is
happening from the installer.  True, this is annoying for some, but it
makes the installation possible for many who are not complete UNIX
gurus.  I think that both approaches are *necessary* if a software
distributor wants to keep everyone happy.  If the installer is a
non-expert UNIX user (like the scientist types mentioned above) and they
have to answer 100 customization questions about system intricacies they
may or may not be completely familiar with, the installation may take
forever, and one question/parameter not done correctly may mean the
software won't work.

One software package which my life depends on, but which is a real
challenge to build (and I don't want to start a jihad over this -- I
love GNU software), is GNU Emacs.  I challenge a novice UNIX user to get
GNU Emacs up and running on a weird machine without a lot of grief.

W. Phillip Moore					   Phone: 06-908-1431
LSI Research Group					     FAX: 06-906-7251
Semiconductor Research Laboratory		  E-mail: moore@mew.mei.co.jp
       Matsushita Electric Works, Ltd.	1048 Kadoma, Osaka 571, Japan

fpb@ittc.wec.com (Frank P. Bresz) (09/21/90)

In article <1990Sep20.160212.241@naitc.naitc.com> karl@naitc.naitc.com (Karl Denninger) writes:

>In article <EMERY.90Sep19131715@aries.linus.mitre.org> emery@linus.mitre.org (David Emery) writes:

>Well, you're being not only unrealistic in some cases, but paranoid as well.
>
>For commercial software (I publish a package under my own name, not AC
>Nielsen) there is a good reason to run as root.  Namely, you have to do a
>LOT of things as root to get the package installed.

[remainder of Karl's article deleted]

	Just to add my $.02.  I have now installed several packages from
the net: NNTP, TRN, BNEWS...  All of those had nice configuration files
which did 98% of the job; then, as a final step, I su'ed, said 'make
install', and a few symlinks were made or files copied into properly
root-protected areas.

	I too find FRAME obtrusive.  I hate having that stupid FMHOME
environment variable lying around, but it just won't work without it,
and I can't create a symlink to it because it checks how it was called
(what a crock).  I had to take the extra step of informing every single
user on the system that if they want FRAME they must run 'fmusersetup'
first (something left untaught in FRAME class).  New accounts I handle
by throwing this into their .cshrc.  Anyway, all of the packages I have
received from the net have better installation procedures than the
3rd-party stuff.  UNIFY is another one that requires environment
variables.  It also requires a login name (or used to, anyway).  Come
on, I bet you could get by without a login name; this is really obscure.
Sure, I use login names for a few of my big packages, but that's just
because it is convenient: it allows me to reinstall easily.  I don't
have to use root; I can just log in as the designated user, already in
the right directory, and tar off or rebuild the new version.

	All in all, I'd say the freeware does a better job than the 3rd
party stuff, plus you get source code.

	I guess you get what you pay for -- no wait, that's backwards
here, isn't it. :-)

--
| ()  ()  () | Frank P. Bresz   | Westinghouse Electric Corporation
|  \  /\  /  | fpb@ittc.wec.com | ITTC Simulators Department
|   \/  \/   | uunet!ittc!fpb   | Those who can, do. Those who can't, simulate.
| ---------- | (412)733-6749    | My opinions are mine, WEC don't want 'em.

klm@cme.nist.gov (Ken Manheimer) (09/21/90)

I and some other factory automation and robotics systems development
staff here at NIST have just completed a paper about something very
relevant to this software-installation thread.  It's called the depot,
and it's a fairly cohesive approach we developed for sharing software
packages across host platform and organizational boundaries.  We're
going to be presenting the paper at the Usenix LISA (Large
Installation Systems Administration) conference in the middle of
October, but since it seems so germane to this discussion thread i've
obtained permission to make it available to anyone on the net that
might be interested.

It's available via anonymous ftp and mail archive services.  Here's
how to get it.

You can ftp it as 'pub/depot.lisa.ps.Z' from 'durer.cme.nist.gov'.  Be
sure to use ftp's binary mode to get the compressed file.  Then
uncompress your copy and 'lpr' it to a postscript printer or
browser.

If you cannot ftp you can request an email copy by mailing to
"library@cme.nist.gov".  The contents of the message (with no subject
line) should be "send pub/depot.lisa.ps.Z".  The library server will
send a reply as promptly as intervening connections will allow.

As well as delineating an entire approach, i think the paper expresses
some fundamental views i share on many of the issues under discussion
in this thread.  There's a lot i'd like to add but i'm too sleepy and
would probably do best to let the paper speak for us.  I hope it's of
some use to you.

BTW, i want to mention that the depot development is not part of nor
does it particularly reflect any of the operating-systems standards
efforts going on here at the NIST.  A few prior postings allude to
"what we do here at NIST" but in fact this is only how some of us do
it here at NIST - the NIST is a large place and the depot framework is
so far used only by a small number of us.  We haven't corresponded
with, e.g., the POSIX development folks here about how they might
approach this kind of thing.  The point is that this is *not* an
official standard we're proposing, just an approach we've adopted to
help make our jobs easier...

Cheers,

Ken Manheimer 	(301) 975-3539	National Institute of Standards and Technology
INTERNET: klm@cme.nist.gov	(Formerly National Bureau of Standards)
UUCP: ..!uunet!cme-durer!klm	Rm. A127, Bldg. 220
				Gaithersburg, MD  20899
Factory Automation Systems Division
/Integrated Systems Group, Unix Support Manager

Do you remember life as a child?
When you woke up in the morning,
and the morning smiled?		~~ Taj Mahal

jay@silence.princeton.nj.us (Jay Plett) (09/21/90)

In article <1990Sep20.160212.241@naitc.naitc.com>, karl@naitc.naitc.com (Karl Denninger) writes:
> For commercial software (I publish a package under my own name, not AC
> Nielsen) there is a good reason to run as root.  Namely, you have to do a
> LOT of things as root to get the package installed.
> 
> For example, our package requires:
> 
> o)	Installation of two user id's in /etc/passwd under some
> 	circumstances (ie: if you select one of the options).
 . . .
> o)	Installation of a group in /etc/group if it's not already there.
> 
> o)	Creation of a parameter file in /etc (so the rest of the package
> 	can "find itself" when it runs).

I can't think of three better reasons why an install script shouldn't
run as root.  If you think you have to do these things, you don't
understand how people are using computers.  Each of these things should
be done independently and manually by the installer, who should be
given clear and concise step-by-step instructions together with an
explanation of why they are necessary, how they will be used by your
software, and what alternatives are available if your preferred modus
operandi won't work at the installer's site.

	...jay

prc@erbe.se (Robert Claeson) (09/21/90)

In a recent article chris@mimsy.umd.edu (Chris Torek) writes:

>I would break this even further: the part that is the same for all
>hosts *regardless of architecture*; the part that is the same for
>all hosts of the same architecture; and the part that is always unique.

I would break this down even further: the part that is the same for
all hosts regardless of architecture, o/s version and UNIX vendor;
the part that is the same for all hosts running the same o/s version
from the same o/s vendor regardless of architecture (I believe that
this is what Sun intends with their /usr/share directory); the part
that is the same for all hosts of the same architecture running the
same o/s version; and the part that is always unique for a particular
host. So there.

-- 
Robert Claeson                  |Reasonable mailers: rclaeson@erbe.se
ERBE DATA AB                    |      Dumb mailers: rclaeson%erbe.se@sunet.se
                                |  Perverse mailers: rclaeson%erbe.se@encore.com
These opinions reflect my personal views and not those of my employer.

prc@erbe.se (Robert Claeson) (09/22/90)

In a recent article bob@pta.oz.au (Bob Vernon) writes:

>The best solution I find, is to create various local
>sub-directories (eg package/bin package/lib package/help)
>and set up environment variables to access. I know some
>people hate environment variables but thats OK. Create a wrapper
>script during installation that sets up all the environment and
>adjusts the PATH.  This wrapper script is then the only thing that
>needs to be in the Users default environment.

I tend to find environment variables the best way to relocate
software.  They are also much more portable than symbolic links.
All applications that need environment variables have them set up
in the /etc/profile file that all users get when they log in,
so no user ever needs to change their own .profile or .kshrc
files unless they have a specific need to.
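For instance, a stanza like the following could be appended to
/etc/profile at install time (the package name "frob" and its directory
are hypothetical, not from any real package):

```shell
# Hypothetical fragment an installer might append to /etc/profile,
# so individual users never have to touch their own dotfiles.
FROBHOME=/usr/local/frobpkg
if [ -d "$FROBHOME/bin" ]; then
        PATH=$PATH:$FROBHOME/bin
        export FROBHOME PATH
fi
```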

>RECOMMODATION :  Make your install procedure nice and administrator
>friendly.  (This is not the same as user-friendly).  Give me the
>chance to change any install location.  Tell me what you are about to
>do before you do it.  Above all, don't stuff up my system without my
>approval.

That, and use the system spooler (I've come across several packages
that implement their own print spooler that refuses to cooperate
with the system spooler) and, if possible, the system terminal database.
The worst package I've come across is one that uses its own print
spooler.  It also requires some extra entries to be added to each
terminal description in the terminfo database.  One has to use the
package-supplied 'tic' (terminfo compiler) to add those entries.
And, having done that, other applications using terminfo bomb...

-- 
Robert Claeson                  |Reasonable mailers: rclaeson@erbe.se
ERBE DATA AB                    |      Dumb mailers: rclaeson%erbe.se@sunet.se
                                |  Perverse mailers: rclaeson%erbe.se@encore.com
These opinions reflect my personal views and not those of my employer.

martin@mwtech.UUCP (Martin Weitzel) (09/23/90)

In article <649@silence.princeton.nj.us> jay@silence.princeton.nj.us (Jay Plett) writes:
[about (not) requiring an installation script running as root]
:I can't think of three better reasons why an install script shouldn't
:run as root.  If you think you have to do these things, you don't
:understand how people are using computers.  Each of these things should
:be done independently and manually by the installer, who should be
:given clear and concise step-by-step instructions together with an
:explanation of why they are necessary, how they will be used by your
:software, and what alternatives are available if your preferred modus
:operandi won't work at the installer's site.

Perfectly right.  As an intermediate solution, I would like to see the
installation gather all the commands that must be run by root into a
special file, then tell the system administrator to run that script.

In this way the not-so-experienced system administrator has a chance
to get things right without fully understanding them (at least as right
as if all the steps were done immediately during installation by a
script running as root).

The more knowledgeable administrator can have a look at the script
before he or she runs it.
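A minimal sketch of this two-phase idea, with all file names and the
package name invented for illustration:

```shell
#!/bin/sh
# The unprivileged part of an install writes every root-only command
# into one reviewable script instead of executing it itself.
ROOTSCRIPT=/tmp/frob-root-steps.sh

cat > "$ROOTSCRIPT" <<'EOF'
#!/bin/sh
# Root-only steps for the frob package -- review before running as root.
cp frob /usr/local/bin/frob
chown root /usr/local/bin/frob
chmod 755 /usr/local/bin/frob
EOF
chmod 755 "$ROOTSCRIPT"

echo "Unprivileged installation finished." >&2
echo "Please review $ROOTSCRIPT, then run it as root." >&2
```

The administrator gets a single short script to audit, instead of having
to trust (or reverse-engineer) the whole installer.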
-- 
Martin Weitzel, email: martin@mwtech.UUCP, voice: 49-(0)6151-6 56 83

seeger@thedon.cis.ufl.edu (F. L. Charles Seeger III) (09/24/90)

In article <MOORE.90Sep21091732@terra.srl.mew.mei.co.jp> moore@srl.mew.mei.co.jp (W. Phillip Moore) writes:
| 
| However, I see one minor point overlooked.  The reason many of these
| installations are simple and over-automated is because not all sites have
| UNIX gurus ...

Which is a fine justification for providing an automated installation
script.  It is in no way a justification for assuming that this idiotic
installation script should be the only way to install the package.  Most
binary distributions that I have had the misfortune to wrestle with
provide little or no indication of how to install the package other than
the script itself.  Of course, the script will often fail because of the
assumptions that it makes, even if one were to try executing it.

In article <929@mwtech.UUCP> martin@mwtech.UUCP (Martin Weitzel) writes:
|
| The more knowledgable administrator can have a look at the script
| before he or she runs it.

This is feasible when the script is, in fact, a script.  There are
instances where the installation "scripts" are executable binaries;
SunOS 4.1 is a non-unique example of this.  Even when the scripts are
readable, my feeble mind sometimes has a little trouble remembering all
the shell variables along the way.  I (gasp) sometimes resort to a pencil
to keep track of them.  Why should I have to waste time reverse-engineering
such a script in the first place?

Please, if you are doing "release engineering" (or whatever it is called),
send out a *specification* of what needs to be done during the installation
process and how the installed system is supposed to operate.  Then make
sure that the scripts (or example command lines) are kept in sync with the
specification (or vice versa).  This should also make it easier to debug
the installation if there are problems.  I think this would reduce the
load on the support phone lines as well.

--
  Charles Seeger    E301 CSE Building             Office: +1 904 392 1508
  CIS Department    University of Florida         Fax:    +1 904 392 1220
  seeger@ufl.edu    Gainesville, FL 32611-2024

src@scuzzy.in-berlin.de (Heiko Blume) (09/24/90)

first off: it would be a VERY good idea to allow a shell escape in the
installation script, so one can look for something or create a directory
etc. without having to start the whole thing again and without having to
go to another terminal.

another thing: when i want to install a package with sources i want
completely separate steps (i.e. scripts, makefiles) for config, compile
and install (i.e. copying to the config'ed places etc).

when i want to install a binary package i want separate steps for copying
the stuff someplace from media, config (perhaps another step here for
relinking, as with the kernel) and performing the config'ed actions.

karl@naitc.naitc.com (Karl Denninger) writes:

>In article <EMERY.90Sep19131715@aries.linus.mitre.org> emery@linus.mitre.org (David Emery) writes:
>> I get VERY UPSET by 3rd party installations that must be done as 'root'.

>For commercial software (I publish a package under my own name, not AC
>Nielsen) there is a good reason to run as root.  Namely, you have to do a
>LOT of things as root to get the package installed.

perhaps for a *part* of the installation (see above) !

>For example, our package requires:

>o)	Installation of two user id's in /etc/passwd under some
>	circumstances (ie: if you select one of the options).

not too good if the platform doesn't allow you to create a user that
way, eh?! (i.e. if you have to use some program for that when the *real*
data is in a dbm file etc.).  i'd prefer to mess up my password (database)
myself, so *i* will get the credit for doing it :-)

>o)	Installation of a group in /etc/group if it's not already there.

do you expect that it's impossible someone already chose that group name
for something else, perhaps to limit the access to some sensitive
stuff?  may i choose the group name myself?

>o)	Creation of a parameter file in /etc (so the rest of the package
>	can "find itself" when it runs).

yet another file in /etc ? it's already cluttered with so many files...

>Why?  Because I'll give you one guess at how many people will forget to run
>the second script (to do the "root required" things) and then call tech
>support asking why the product doesn't work.

i think it wouldn't be too difficult to put a check in the program
that tells the user "you forgot to configure me with the bla program".
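Sketched as a shell fragment (the program name, message, and config
path are invented for illustration; a temp file stands in for the real
/etc entry so the demo is harmless):

```shell
#!/bin/sh
# Hypothetical startup check: refuse to run until the root-installed
# configuration exists, and name the step that was skipped.
check_configured() {
    if [ ! -f "$1" ]; then
        echo "frob: not configured -- run the root install script first" >&2
        return 1
    fi
}

# The real program would test something like /etc/frob.conf here.
conf=/tmp/frob-demo.conf
touch "$conf"
check_configured "$conf" && echo "frob: configuration found" >&2
```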

sure, some programs need root privs to configure them, but these steps
should be separated.  that way a non-privileged user can also do
the more time-consuming work, while the admin with the necessary privs
can do the more dangerous stuff.
-- 
Heiko Blume c/o Diakite   blume@scuzzy.in-berlin.de   FAX   (+49 30) 882 50 65
Kottbusser Damm 28        blume@scuzzy.mbx.sub.org    VOICE (+49 30) 691 88 93
D-1000 Berlin 61          blume@netmbx.de             TELEX 184174 intro d
scuzzy Any ACU,e 19200 6919520 ogin:--ogin: nuucp ssword: nuucp

karl@naitc.naitc.com (Karl Denninger) (09/25/90)

In article <649@silence.princeton.nj.us> jay@silence.princeton.nj.us (Jay Plett) writes:
>In article <1990Sep20.160212.241@naitc.naitc.com>, karl@naitc.naitc.com (Karl Denninger) writes:
>> For commercial software (I publish a package under my own name, not AC
>> Nielsen) there is a good reason to run as root.  Namely, you have to do a
>> LOT of things as root to get the package installed.
>> 
>> For example, our package requires:
>> 
>> o)	Installation of two user id's in /etc/passwd under some
>> 	circumstances (ie: if you select one of the options).
> . . .
>> o)	Installation of a group in /etc/group if it's not already there.
>> 
>> o)	Creation of a parameter file in /etc (so the rest of the package
>> 	can "find itself" when it runs).
>
>I can't think of three better reasons why an install script shouldn't
>run as root.  If you think you have to do these things, you don't
>understand how people are using computers.  Each of these things should
>be done independently and manually by the installer, who should be
>given clear and concise step-by-step instructions together with an
>explanation of why they are necessary, how they will be used by your
>software, and what alternatives are available if your preferred modus
>operandi won't work at the installer's site.

I don't understand how people are using computers, eh?  Add some
ad hominem in there for good measure too, no?

I disagree with the "manually and by the installer" bit.  First off, there 
are a LOT of people who aren't good enough with Unix to do it manually and 
by themselves.  That's a fact, and isn't going to get better (in fact it's
getting worse as Unix becomes more and more a commodity operating system).

Software providers MUST provide products that can be installed and used by
anyone who has a system, not just GURUS.  It is unrealistic and foolish to
provide a product that requires a person with internals knowledge on staff
to successfully install it!

Secondly, ALL THE DOCUMENTATION IS IN THE MANUAL >IF< A PERSON CARES TO READ
IT, in the (wonder of wonders) installation section.  And finally, the 
script does not have to do those three things, it ASKS if you want them 
done (all except for the parameter file, which is a TEXT file, not 
executable, and has GOT to be there).

There is exactly one exception to the "must have" in the above -- you don't
need to have the captive user id installed.  The installation script asks
you if you want to do a number of things, and if the answer is "yes" it says
"Someone has to do <x> in order for the package to do <y> -- can I do it 
for you or would you care to tell me what the value for <x> is supposed 
to be?"

Now the wizard can look in the /tmp/install file we drop on the machine 
during the media load (which doesn't have to be done as root) or read the
manual, move the files by hand (easy enough), set up the group and user 
files appropriately, and create the parameter file.  All the information 
is right there in the printed documentation, with COMPLETE descriptions of 
each variable and system requirement.  OR, if they're not inclined to do 
that, or unable to (lack of knowledge perhaps?) they can run /tmp/install 
and have the installation do it for them.  The install program tells them 
what it's about to do before it does it, so the installer can always say 
"no thanks, I'll do that".

However, I bet the chance of your making a mistake doing it "by hand" and 
hosing your machine far exceeds the chance of the script making that same 
mistake and leaving you with a system that can't be logged into.

Fancy the fact that out of all the copies we have out there, there isn't 
a single installation I'm aware of that hasn't opted to use the install
script.  You see, it is the difference between taking 1 minute and 30
minutes to do the installation, and the installation never seems to 
forget to do something like set the permissions on the files it 
copies over.

--
Karl Denninger	
kdenning@genesis.naitc.com

Of course these are my opinions.  Do you think anyone else would have them,
say much less pay for them?

3003jalp@ucsbuxa.ucsb.edu (Applied Magnetics) (09/25/90)

If your package must be installed by root, do this:
 1) Install via a script with lots of comments.  This way, the system
    administrator can examine the script and change it if he wants to.
    A script is better than a makefile, say, because it can be read
    from top to bottom.
 2) If things are just too complicated, the script can be generated
    by a separate configuration script/makefile/binary.  Or some set
    of actions may be performed by a preliminary
    script/makefile/binary.  The rule here should be:  everything is
    completely innocuous until the final script is run by root.
 3) As a part of final installation, generate a manifest of all files,
    links etc. and their owner, group and final resting place.  Let the
    administrator decide where to keep the manifest.
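A minimal sketch of that manifest step in Bourne shell (the directory,
file names, and placeholder "binaries" below are all invented for
illustration; a real installer would copy off the distribution medium):

```shell
#!/bin/sh
# Sketch only: copy files into place and record a manifest of what went
# where.  DESTDIR and the file names are illustrative assumptions.
DESTDIR=${DESTDIR:-/tmp/pkgdemo}
MANIFEST="$DESTDIR/MANIFEST"

mkdir -p "$DESTDIR/bin"
: > "$MANIFEST"

for f in tool1 tool2; do
    # stand-in for copying the real binary off the tape
    echo '#!/bin/sh' > "$DESTDIR/bin/$f"
    chmod 755 "$DESTDIR/bin/$f"
    # one manifest line per file: mode, owner, group, final resting place
    ls -l "$DESTDIR/bin/$f" >> "$MANIFEST"
done

n=`wc -l < "$MANIFEST"`
echo manifest entries: $n
```

The administrator can then diff the manifest against the live system
later, or use it to remove the package cleanly.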

--Pierre Asselin, Applied Magnetics Corp.

jay@silence.princeton.nj.us (Jay Plett) (09/25/90)

In article <1990Sep24.171752.13221@naitc.naitc.com>, karl@naitc.naitc.com (Karl Denninger) writes:
> In article <649@silence.princeton.nj.us> jay@silence.princeton.nj.us (Jay Plett) writes:
> >In article <1990Sep20.160212.241@naitc.naitc.com>, karl@naitc.naitc.com (Karl Denninger) writes:
[ whether an install program should touch /etc/passwd, /etc/group, and /etc ]

> I don't understand how people are using computers, eh?  Add some ad hominem
> in there for good measure too, no?
I apologize for the ad hominem inference that can be drawn due to my
careless use of pronouns in my remark "if you ... then you ..."  This
inference was not intended.  Please substitute "one" for "you".

Providing for installation by a naive user is indeed frustrating and
difficult.  But that doesn't negate the fact that each of /etc/passwd,
/etc/group, and /etc--on the machine where a software package is being
installed--might be inaccessible to the software or otherwise have no
relevance to it when it is executed.  I believe that a software
provider must be entitled to assume that each site has at least one
user who is capable of adding users and groups to their system, whether
by hand-editing the appropriate files or by using software that was
provided with or for the OS.  Meanwhile, it's presumptuous and risky
for a software provider to assume that (s)he can predict what steps are
required to successfully add a user or group to a particular system (or
that the system the software is being installed on is the system
it will be executed on).  How the software is to "find itself" is a
stickier problem.  Looking in /etc for a config file is not the
solution.

In some cases it might be reasonable to include a program to automate
certain superuser tasks which are likely to succeed on most common
systems.  But no such program should be embedded in a more complicated
installation program, or leave the installer wondering about the
consequence of departing from the vendor's recommendations, or leave no
choices for the installer, or require that the installer spend more
time reverse-engineering the install script than (s)he should need to
spend installing the entire package.  No third-party software should
depend for its execution on discovering its essence in any particular
pathname.

It should be both possible and easy for any user to install a third-
party software package even if that user lacks either the authority or
the ability to add users, groups, or files in system directories.
Ideally, software shouldn't depend on any of these things.  If such
dependencies are truly unavoidable, then it is acceptable that a
software package require its installer to seek help for a few critical
tasks that require superuser privilege.

	...jay

john@pcad.UUCP (John Grow) (09/26/90)

Here are a few things we do with our installation scripts for the installation
of our electronic CAD package for PCB layout.  This is a list of steps
required for us:

	Before running the installation script, determine where you want
	  to place our software.  This can be anywhere, but it is recommended
 	  that it be placed where it can be accessed via NFS.  Next extract
	  the first tar file on the tape into that directory.  This contains
	  the installation scripts.  cd into the directory and run installme.

	Select platform.  Select architectures for which you want to
	  load architecture dependent software.

	Load the software.  There are four parts: one required and
	  three optional.  User selects what to load in a menu format.
          The software gets loaded into <wherever>/bin.<arch>

	Install the "command."  This step installs a script which invokes
	  the rest of the application when the application is run.  It sets
          up the environment variables needed by the application.  We
          recommend that it be placed somewhere in the users' path, but
          invoking it with an absolute path works fine.  Information needed
          from the installer for this step is the location of the directory
          where the software is installed in a form accessible over the
          network which is put into the script.  The user is also asked
	  where to place the script.
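	  The shape of such a wrapper "command" can be sketched like this
	  (every path and name below is invented for illustration; the
	  real installer writes the real install directory into the
	  script):

```shell
#!/bin/sh
# Sketch of the wrapper-script idea: the installer writes the install
# directory into one small script, which sets up the environment and
# hands off to the real binary.  All names here are assumptions.
PCADHOME=${PCADHOME:-/tmp/caddemo}

# stand-in for the installed application itself
mkdir -p "$PCADHOME/bin"
cat > "$PCADHOME/bin/realapp" <<'EOF'
#!/bin/sh
echo "running from $PCADHOME"
EOF
chmod 755 "$PCADHOME/bin/realapp"

# the wrapper: the only file that needs to be on the user's $PATH
cat > "$PCADHOME/pcad" <<EOF
#!/bin/sh
PCADHOME=$PCADHOME
export PCADHOME
exec \$PCADHOME/bin/realapp "\$@"
EOF
chmod 755 "$PCADHOME/pcad"

"$PCADHOME/pcad"
```

	  Since the wrapper exports everything the application needs,
	  invoking it by absolute path works just as well as having it
	  on the path.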

	Manage Network Licenses.  Our application makes use of a third
	  party floating network license application which requires
	  us to create a directory in /usr/adm, a symbolic link in /etc
          and modify the rc.local file.  However, each step is explained
          before it is done, and the installer is told how to do
          this himself if he does not want the script to take care of it
          for him.

	Load optional libraries.  In addition to our executables, there
	  are several (50 - 100) optional parts libraries.  These are
	  selected from a menu and loaded after the installer selects
	  what he wants.  These can be located anywhere, but the default
	  is underneath the directory where everything else is loaded.


Here are a few things I found getting the installation scripts for our
software written:

While the rc.local file is modified (either by the installation scripts
or by hand), only a line calling another file, rc.<vendor> (ours is rc.pcad),
is inserted.  The rc.<vendor> file does all the necessary startup work.
This allows us to plug in a new release more easily if the startup sequence
changes for some reason.  Only the rc.<vendor> file needs to be replaced.
There is no need to try editing the rc.local file again for later releases.
Another advantage is that the rc.<vendor> file can be run independently if,
for some reason, the call to it is removed from rc.local, or the installer
decides not to modify rc.local.
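The hook insertion itself can be a couple of guarded lines; a sketch
(file names are assumed, and it operates on a scratch copy rather than
the real /etc/rc.local so nothing on the system is touched):

```shell
#!/bin/sh
# Sketch: insert the rc.<vendor> hook into rc.local exactly once, so
# re-running the installer for a later release never duplicates it.
# RCLOCAL points at a scratch file here, not the real /etc/rc.local.
RCLOCAL=${RCLOCAL:-/tmp/rc.local.demo}
HOOK='[ -f /etc/rc.pcad ] && sh /etc/rc.pcad'

touch "$RCLOCAL"
if grep 'rc\.pcad' "$RCLOCAL" > /dev/null 2>&1; then
    echo hook already present
else
    echo "$HOOK" >> "$RCLOCAL"
    echo hook added
fi
```

The guard test is what makes later releases painless: the installer can
run this unconditionally and rc.local still ends up with one hook line.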

Installation directories should be specified by the user.  The only things
which we cannot change are those required by the third-party licensing
package we use.

If the disk space required by all the software is large, let customers
load only what they need.

The only part of the installation which requires root access concerns
the third-party license package over which we have no control.

Allow loading from a remote tape drive.

The last file in a group of files loaded from tape should be a checkpoint file
which the installation script can check.  If it doesn't exist, the
load is incomplete.

Before loading, the script should check available disk space and inform
installer if not enough is available.  While this is not foolproof (someone
can use up more disk space after a load has started), it works fairly well.
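A sketch of such a pre-load check (NEEDED_KB is an assumed figure, and
the awk column below matches common `df -k' layouts only; check your
system's output before relying on it):

```shell
#!/bin/sh
# Sketch: refuse to start the load if the target filesystem is short of
# space.  As noted above this is not foolproof -- space can vanish after
# the check -- but it catches the common case.
DESTDIR=${DESTDIR:-/tmp}
NEEDED_KB=${NEEDED_KB:-64}

# column 4 of the second line is "available" on many systems (assumption)
avail=`df -k "$DESTDIR" | awk 'NR == 2 { print $4 }'`
if [ "$avail" -lt "$NEEDED_KB" ]; then
    echo "only ${avail}K free in $DESTDIR; need ${NEEDED_KB}K" >&2
    exit 1
fi
echo "disk space OK: ${avail}K free in $DESTDIR"
```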

Executing all the software from a single script allows the following:
  
  Environment variables do not need to be set before running the software.
  Any needed are set up by the script.  The one set by ours is a path
  variable specifying the location(s) of files needed by our application.

  The executables and configuration files needed by the application should
  be kept in one spot.  There is no need to mix these in with other files
  in a /usr/bin (or wherever) directory.  This script is the only
  file which should need to be installed in the users' path, and it should
  be callable by absolute path.

  The PATH variable does not need to be changed.

Document in the installation guide the files which may/will be
installed/changed.  Also, the installation guide should explain
step by step what the installation program is doing, and give examples
where needed.

As with many other vendors, a large number of our customers are UNIX
novices.  It is necessary to use a script which provides reasonable defaults
to get our product installed.  I have reservations about someone
without system administration experience doing root tasks, but with
the trend of more people buying workstations, it is tough to avoid.

As far as other software packages go, Saber C was pretty good.  It had
a menu based installation script and put all its executables in a single
directory of your choice.  The only executable necessary in the users'
path was a script which got it going.  An rc file change was necessary 
(for their network license manager), but the script told you what to append
to the file instead of modifying it itself.

-- 
John Grow                   |   uucp:    (uunet!pcad!john)
Personal CAD Systems/CADAM  |
1 Technology Park Drive     |
Westford, MA 01886          |

stevedc@syma.sussex.ac.uk (Stephen Carter) (09/26/90)

I will now put on my flameproof suit....

I come from a background of 20+ years Data Processing (deliberate words)
experience.  In the last 18 months I have had exposure to UN*X, and have
now had installed a **very** effective hardware platform to support the
next wave of growth of DP in my current employment.

Past operating systems I have dealt with (no names, no pack drill) have
been what I think of as mature commercial operating systems.  They have
been able to cope with the rigours and realities of being used and
installed and maintained by ordinary operations staff.  (Not by Nobel
candidates).

UN*X is not like that, and it is about time it was.

I shall give three examples of what I mean.  Two close to the real
thread of this discussion, one a bit to one side.

1.  When the system was delivered I insisted that, because it had been
with a third party, we wipe the discs and reinstall from
manufacturer distribution tapes.  So far, so good.  The basic process
went OK, but when the system build was going on the manufacturer's
engineer had to edit (using vi) the command file.  The file was
basically a parameter list of how many of each type of disc you had -
compiling in the drivers.  The vi edit leaves no backup (.BAK), and,
you've guessed it, he got it wrong.  Other operating systems mask this
process with a user-friendly front end that asks (eg) how many Wizzo
discs, how many Whacko drives, and then 'edits' the parameter file and
gets it right.  

2.  We are having trouble at present with a printer that does not do a
CR after a LF (or vice versa!).  Now we know that the control on this is
buried in getty and termcap, but it is all in terms of bit flags that
then have to be entered as fs:octalnumbers.  (Don't flame me if the
utter detail is wrong, think about the point).  This is insane.  Why not
have parameters like (eg) addcr which would (if needed) do the
calculation.  A few weeks ago the manufacturer's (very good) software
support engineer was on site doing a like job on an already installed line.  He
used his decimal, octal, hex calculator.  HE got it wrong!

3.  (Slight aside).  REAL installations (ie ones where commercial style
work is done - who prints your payslips dear readers) don't just print
onto plain listing paper.  We change paper types, and we need to align
the stationery with a test print.  If lp or lpr do it, I've not found
it.  This is not a marginal need - it is central!

Disclaimer:  These comments do not apply to the machine from which this
is being posted.  It is run by all round good guys.  


Stephen Carter, Systems Manager, The Administration,
The University of Sussex, Falmer, Brighton BN1 9RH, UK
Tel: +44 273 678203  Fax: +44 273 678335     JANET: stevedc@uk.ac.sussex.syma
EARN/BITNET  : stevedc@syma.sussex.ac.uk      UUCP: stevedc@syma.uucp
ARPA/INTERNET: stevedc%syma.sussex.ac.uk@nsfnet-relay.ac.uk 

rowe@cme.nist.gov (Walter Rowe) (09/26/90)

>>>>> On 25 Sep 90 21:02:29 GMT, john@pcad.UUCP (John Grow) said:

>> While the rc.local file is modified (either by the installation
>> scripts or by hand), only a line calling another file, rc.<vendor>
>> (ours is rc.pcad), is inserted.  The rc.<vendor> file does all the
>> necessary startup work.

Some colleagues and I implemented an extension of this idea
some time ago so that we wouldn't have to continually edit any of the
distributed bootup (rc, rc.boot, rc.local) scripts.

One of the local administrators (Ken Manheimer - klm@cme.nist.gov)
came up with the idea of creating a script called `rc.site' that is
run as the last thing in /etc/rc.  Since all our machines mount the
same /usr/local partition from a central server, that is a good place
to store it.

    # Call my local rc.site script
    if [ -x /usr/local/sys/rc.site ]; then
	/usr/local/sys/rc.site		> /dev/console		2>&1
    fi

Every machine on the net runs `rc.site' at bootup, so all machine
specific bootup commands can go in this file.  And since all NFS
mounts take place in rc.local, adding it to the end of /etc/rc makes
sure that /usr/local has already been mounted and that we can find the
script.

Using an `rc.site' has the advantages that:

[1] all site-specific bootup procedures are in one place,

[2] we don't have to customize the rc scripts on a machine-by-machine
    basis, and

[3] all special bootup procedures are preserved across OS upgrades.

Enclosed is a sample rc.site (modified for obvious reasons).  Just
press `n' to skip it, though I think it makes some points clear.

wpr
---
Walter P. Rowe                                    ARPA: rowe@cme.nist.gov
System Administrator, Robot Systems Division      UUCP: uunet!cme-durer!rowe
National Institute of Standards and Technology    LIVE: (301) 975-3694



====8<======8<======8<======[ rc.site ]======>8======>8======>8====
#! /bin/csh -fb

# Start local stuff
#
echo -n 'local stuff:'

# Add path for mount_xxx binaries
#
set path=($path /usr/kvm)

# get the current hostname
#
set hostname = `hostname`

# aliases to check if hostname is(not) in a list of names
#
alias IfHostIn    echo "\!* | grep -s $hostname ;" 'if ($status == 0) then'
alias IfHostNotIn echo "\!* | grep -s $hostname ;" 'if ($status == 1) then'


#  Start the UPS monitor on stella
#
IfHostIn foo
    /etc/upswatch
    echo -n ' upswatch'
endif


#  Start the automounter on all hosts
#
if (-f /etc/auto.master) then
    /usr/etc/automount -v -M /auto -m -f /etc/auto.master
    echo -n ' automounter'
endif


#  File servers use the atomic clock
#  Other machines use file servers
#
IfHostIn foo bar
    rdate india.colorado.edu	>& /dev/null
else
    rdate bar		>& /dev/null
endif
echo -n ' rdate'


#  Run screenblank (bar doesn't have a bitmap console)
#
IfHostNotIn bar
    if (-x /usr/bin/screenblank) then
	screenblank
	echo -n ' screenblank'
    endif
endif


#  Start up license servers on bar
#  THIS COULD SIMPLY CALL RC.<VENDOR>
#
IfHostIn bar
    setenv FMHOME /depot/.primary/frame
    if (-x $FMHOME/bin/rpc.frameusersd) then
	$FMHOME/bin/rpc.frameusersd $FMHOME/frameusers >& /dev/null
	echo -n ' maker'
    endif
endif


#  Exit normally (successful)
#
echo '.'
exit 0

peter@ficc.ferranti.com (Peter da Silva) (09/26/90)

In article <1788@hulda.erbe.se> prc@erbe.se (Robert Claeson) writes:
> the part that is the same for all hosts running the same o/s version
> from the same o/s vendor regardless of architecture

Plus, in the 386 world, the part that is the same for all hosts running
the same architecture regardless of OS version or vendor. The Xenix-286
binaries for our intel development tools run fine on our System V boxes
no matter what vendor.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

peter@ficc.ferranti.com (Peter da Silva) (09/26/90)

In article <929@mwtech.UUCP> martin@mwtech.UUCP (Martin Weitzel) writes:
> In this way the not-so-experienced system administrator has the chance
> to make things right without understanding (at least as right as if
> all the steps were immediately done during installation with a script
> running as root).

And to protect against total users have the last step in the script be:

echo 'Enter root password here to complete installation, or hit return'
echo 'to complete the installation manually.'
su root -c root-install
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

peter@ficc.ferranti.com (Peter da Silva) (09/26/90)

Ah, you haven't *lived* until you've installed Oracle on System V/386
with Ethernet.

It requires you set up an Oracle user and so on manually, then dives into
a highly complex under-documented script... that aborts part way through
because you needed to edit a Makefile!

Worst of both worlds: not only is it expert-hostile, it's novice-hostile
as well.

And to top it off, this on a system that already has one of the best
automatic software installation systems I've ever seen.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

klm@cme.nist.gov (Ken Manheimer) (09/27/90)

We've actually implemented a refinement on the 'rc.site' scheme that
walter mentioned in a prior article.  The refinement eliminates the
problem of additional points of failure that the unadorned scheme
introduces.

As walter mentioned, a hook is inserted at the end of /etc/rc to call
rc.site, which in turn is where any site-specific boot activities are
placed.  (We put it in /etc/rc instead of /etc/rc.local because
rc.local is run in the middle of rc (on Suns, at least) and we want
the rc.site stuff to be done *last* in the boot process.)  A couple of
nifty aliases (rc.site was developed as a cshell script, 'cause that's
what i was more comfortable with at the time) allow you to
conditionalize what hosts are to run or to be excluded from running
specific inits:

alias IfHostIn    echo "\!* | grep -s $hostname ;" 'if ($status == 0) then'
alias IfHostNotIn echo "\!* | grep -s $hostname ;" 'if ($status == 1) then'

The rc.site approach has two general benefits - the standard boot rc
files only need to be changed once (to introduce the hook for rc.site
and maybe eliminate any features or bugs the vendor introduced) and
the custom boot activities are centralized in a single place, which has
proven invaluable for keeping track of what does what as our
installation has grown.

The drawback is that all the clients become dependent on some central
server(s) for the rc.site customizations.  The refinement to fix this
consists of changing the hook to the global rc.site to the following
sort of thing:

	if [ -f /usr/local/site/rc.site ]; then
		cp /usr/local/site/rc.site /etc/rc.site
		chmod 555 /etc/rc.site
	fi
	if [ -f /etc/rc.site ]; then
	  echo -n Site-specific business:
	  /etc/rc.site
	  echo "."
	fi

This way, if the global rc.site file is accessible a local copy is
made.  In any case a local copy, if any exists, is executed.  The
trick is that if the current global copy of the rc.site is not
accessible the last copied version is still run anyway.

In the rare cases where the global copy turns out not to be accessible
the last copied version is almost always going to be at least adequate
if not identical.  For almost any situation this setup is more than
adequate to get the right custom activities done...

If it's not too obnoxious of me to do, i recommend the general rc.site
approach highly.  It's helped enormously to consolidate our handle on
our site's operations and to ease the job of administering
customizations in general and across OS upgrades.

Yay.

Ken Manheimer			 	Nat'l Inst of Standards and Technology
klm@cme.nist.gov (301)975-3539		(Formerly Nat'l Bureau of Standards)
      Factory Automation Systems Division Unix Systems Support Manager

					I like time.  It's one of my favorites.

sarima@tdatirv.UUCP (Stanley Friesen) (09/27/90)

In article <650@silence.princeton.nj.us> jay@silence.princeton.nj.us (Jay Plett) writes:
>Providing for installation by a naive user is indeed frustrating and
>difficult.  But that doesn't negate the fact that each of /etc/passwd,
>/etc/group, and /etc--on the machine where a software package is being
>installed--might be inaccessible to the software or otherwise have no
>relevance to it when it is executed. ...
>  Meanwhile, it's presumptuous and risky
>for a software provider to assume that (s)he can predict what steps are
>required to successfully add a user or group to a particular system (or
>that the system the software is being installed on is the system
>it will be executed on).

For a concrete example of this type of situation, consider our set-up here.
We have a (large) network of Sun workstations, with extensive NFS file
sharing.  We have set our system up so that the login & password information
is kept in a centralized database (called the Yellow Pages).  The result is
that any user with an account on one machine has one on all machines.  To
further support this illusion, all home directories are auto-mounted NFS
file systems.

OK, now an installation script will incorrectly update the local /etc/passwd
file instead of the Yellow Pages (which are probably mastered on a different
machine anyway).  Furthermore, root privileges are not network transparent.
That is, root on machine A does *not* have root privileges on machine B.
This means that any attempt to update a system file that is actually on a
different machine will fail.  (And this is quite common, since we are
sharing things like /usr across all machines with the same architecture.)

I do not see how any install 'script' that assumed standard, stand-alone
unix capabilities could possibly succeed in this environment.

---------------
uunet!tdatirv!sarima				(Stanley Friesen)

karl@naitc.naitc.com (Karl Denninger) (09/27/90)

In article <650@silence.princeton.nj.us> jay@silence.princeton.nj.us (Jay Plett) writes:
>In article <1990Sep24.171752.13221@naitc.naitc.com>, karl@naitc.naitc.com (Karl Denninger) writes:
>> In article <649@silence.princeton.nj.us> jay@silence.princeton.nj.us (Jay Plett) writes:
>> >In article <1990Sep20.160212.241@naitc.naitc.com>, karl@naitc.naitc.com (Karl Denninger) writes:
>[ whether an install program should touch /etc/passwd, /etc/group, and /etc ]
>
>> I don't understand how people are using computers, eh?  Add some ad hominem
>> in there for good measure too, no?
>I apologize for the ad hominem inference that can be drawn due to my
>careless use of pronouns in my remark "if you ... then you ..."  This
>inference was not intended.  Please substitute "one" for "you".

Ok.  Accepted.

>Providing for installation by a naive user is indeed frustrating and
>difficult.  But that doesn't negate the fact that each of /etc/passwd,
>/etc/group, and /etc--on the machine where a software package is being
>installed--might be inaccessible to the software or otherwise have no
>relevance to it when it is executed.  

/etc/passwd inaccessible?  That's a good trick, if it's not readable!  And
for nearly any package (mine included) read-only is what is required - AFTER
installation.

I guess you could do that, if you wanted to SGID (or something similar) all
the programs which read /etc/passwd (like /bin/ls, for example :-)

>I believe that a software
>provider must be entitled to assume that each site has at least one
>user who is capable of adding users and groups to their system, whether
>by hand-editing the appropriate files or by using software that was
>provided with or for the OS.  Meanwhile, it's presumptuous and risky
>for a software provider to assume that (s)he can predict what steps are
>required to successfully add a user or group to a particular system (or
>that the system the software is being installed on is the system
>it will be executed on).  

This is not true if the provider knows what the binary runs on, and what it
was built for.  If I am sending out software for ISC 2.2, for example, I
know darn well what the format of the password file is, and that it uses a
shadow file, and what I have to do to make that install work.  If I'm not
sure I can always go looking for a copyright notice in /unix (with strings,
or something similar) and only do the "dangerous" parts if the install
script finds the proper "signature".

Now if you're talking about source distributions, you're absolutely correct.
I'm talking binary distributions.

>How the software is to "find itself" is a
>stickier problem.  Looking in /etc for a config file is not the
>solution.

Ok, so what's preferred?  Environment variables?  Those are an even worse
hack; check the mess that Framemaker makes you go through to get it to
work, or SYBASE.  I prefer a single file in /etc, thank you very much!

For a network environment, a service provided by TCP/IP is ok, if the
clients are going to mount the original (it can save the IP and port address
of the "find me" server this way).  However, if clients are loaded on
the user's workstation and can't directly get to the original "server" or
"backend" code area, this doesn't work automatically either!

The only >other< possibility I can come up with in 30 seconds or less would 
be to link the binary on the customer's machine, and have the loaded 
directory burned into the binary.  This stinks too; now I can't move the
product from one place to another!

AKCS' use of a file in /etc is not perfect, but it does give you one place
to look for the parameters, and moving the directory the package uses takes
about 10 seconds.  And since the file is called "/etc/V7Akcsparams", at
least it's obvious.
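For what it's worth, the whole mechanism is a few lines of shell; a
sketch (the file name and variable here are invented, and it writes
under /tmp rather than /etc so you can try it as any user):

```shell
#!/bin/sh
# Sketch of the single-parameter-file idea.  The file name and the
# PKGHOME variable are illustrative assumptions; a real package would
# keep its file in /etc, written once at install time.
PARAMS=${PARAMS:-/tmp/pkgparams.demo}

# the installer writes this once:
cat > "$PARAMS" <<'EOF'
# where the package lives; edit this one line to relocate the package
PKGHOME=/usr/local/pkg
EOF

# every program in the package then starts by sourcing it:
. "$PARAMS"
echo "package home: $PKGHOME"
```

Relocating the package is then one edit to one file, and a wizard can
read the file with cat -- no environment variables, no burned-in paths.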

>In some cases it might be reasonable to include a program to automate
>certain superuser tasks which are likely to succeed on most common
>systems.  But no such program should be embedded in a more complicated
>installation program, or leave the installer wondering about the
>consequence of departing from the vendor's recommendations, or leave no
>choices for the installer, or require that the installer spend more
>time reverse-engineering the install script than (s)he should need to
>spend installing the entire package.  

If it's documented, you don't have to wonder -- if the installer chooses to
read the manual.  Since you have to read the manual to figure out how to run
the install script in the first place, I don't see that as being a major
problem -- as long as the information is in there!

>No third-party software should
>depend for its execution on discovering its essence in any particular
>pathname.

Ok, so how does a package discover where the libraries are loaded, or the
databases it needs to use to run?  Any ideas?  Environment variables have
already been decried as inane (with good reason) -- so what's the
alternative that people out there would prefer?

>It should be both possible and easy for any user to install a third-
>party software package even if that user lacks either the authority or
>the ability to add users, groups, or files in system directories.

Kinda difficult when the package does things which require that a user be
added.  An example?  A "bbs" package which uses a public login (with no
password) and does its own user authentication.  Now, since that's central
to the idea of the software, how much sense does it make to install it if
you can't do these things?  Not a lot I'd venture....

>Ideally, software shouldn't depend on any of these things.  If such
>dependencies are truly unavoidable, then it is acceptable that a
>software package require its installer to seek help for a few critical
>tasks that require superuser privilege.

Is it?  

The average purchaser of the AKCS bbs package doesn't know how to do much
more than power up, down, back up, and follow explicit directions such as
"type: tar xvf /dev/rdsk/f0q15dt; cd /tmp; sh install".

How does this user manage to get a package installed if he/she doesn't
understand the underlying concepts of the operating system they're using?
And no, this is not a rarity.  We have a network of Sun machines here at
NAITC, and I'd wager that the majority couldn't do a successful
installation of the AKCS package here on their own workstations without the
install script, WITH OR WITHOUT explicit documentation.

As Unix becomes more commodity-oriented (which, by the way, I believe is
GOOD for the amount of software available commercially) this is going to be
an increasing problem.  I believe that providers need to make available the
information to do the installs manually, should the installer desire, but
need also to provide a "stupid mode" for those who are unable or unwilling
to take the steps to fully understand what they need to do.

-- Karl Denninger	
kdenning@ksun.naitc.com

zwicky@sparkyfs.istc.sri.com (Elizabeth Zwicky) (09/27/90)

I recently discovered a whole new class of installation headaches. We
now run Sun 3 clients from Sun 4 servers, exclusively.  *Every*
facility machine that is a boot server is a Sun 4; conversely, every
diskless client (and we still have about a hundred of them) is a Sun
3. We don't export file systems with root... The number of installation
scripts that this smashes to pieces is amazing (start with every
package that uses a binary to install with). In most cases, I can get
things off tape on a server, temporarily export a relevant file system
with root to one of the staff Sun 3s, do the install, undo the export
and go on. So far, nothing has demanded to access the tape drive from
the install script, which is a good thing, since the tape drives are
on the servers and they don't trust the clients...

Packages that demand to be installed into their final location,
compiling in path names, are a worse headache; we mount /usr/local
read only, so in this case I have to redo not only the exports file
but also the auto-mount map. Since we do automount /usr/local from
redundant locations, there's the added challenge of getting the
exports changed on the /usr/local that the machine I'm using mounts -
short-circuiting automounter here is not impossible, but isn't easy
either, especially since my shell happens to be in /usr/local.

I'm not crazy about this, but I can stand it. What I can't stand is
programs that expect me to be able to modify them with their
binary-only programs at more or less random times, like Interleaf,
which wants me to run a program to read its printer definition files
every time I change the printcaps. This happens more often than you
might think, as we add, subtract, and move printers to keep up with
project needs. (Currently I simply work around the question by having
my printcap building programs also do the work Interleaf's program
does, but that's hardly my idea of a good time - just the least
horrible time I could have in the circumstances.)

(For more fun, try recompiling everything in /usr/local into a test
location with another name - you can either compile things so that
they will probably work when you move them into /usr/local, or so that
you can actually test them in the test location, in most cases.)

	Elizabeth Zwicky

prc@erbe.se (Robert Claeson) (09/28/90)

In a recent article stevedc@syma.sussex.ac.uk (Stephen Carter) writes:

...
>Past operating systems I have dealt with (no names, no pack drill) have
>been what I think of as mature commercial operating systems.  They have
>been able to cope with the rigours and realities of being used and
>installed and maintained by ordinary operations staff.  (Not by Nobel
>candidates).

>2.  We are having trouble at present with a printer that does not do a
>CR after a LF (or vice versa!).  Now we know that the control on this is
>buried in getty and termcap, but it is all in terms of bit flags that
>then have to be entered as fs:octalnumbers.

Sounds like you're using some kind of Berklix UNIX-alike. Well, I'll bite...

We recently installed a system at a site where the DP manager had previously
only been using "commercial" systems like VMS, Primos, AOS/VS, MVS and
the like. He first ordered BSD 4.3. From a University, he'd heard that
it was the only reasonable choice. He never felt comfortable with it,
muttering about "... damn UNIX...unfriendly...stupid...". So we then
suggested that he try a heavily enhanced version of System V Release 3.2
(well, who sells a pure System V these days? Not even AT&T themselves)
instead. He agreed and has been using it since then. He says that it
is more orthogonal, and that he can build on past experience from other
commands in the system. So if you're not satisfied with your Berklix
system, try System V for a change. I'm not saying that it is better
over-all, just that it might suit your needs better.

As for me, I use both, and Mach on top of that.

-- 
Robert Claeson                  |Reasonable mailers: rclaeson@erbe.se
ERBE DATA AB                    |      Dumb mailers: rclaeson%erbe.se@sunet.se
                                |  Perverse mailers: rclaeson%erbe.se@encore.com
These opinions reflect my personal views and not those of my employer.

fpb@ittc.wec.com (Frank P. Bresz) (09/28/90)

In article <0706TQG@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:

>And to top it off, this on a system that already has one of the best
>automatic software installation systems I've ever seen.

	Well, it isn't necessarily one of the best I have ever seen.
However, SUN uses something called extract_unbundled (usually run as
root) to load their add-on packages.  It does a fairly nice job.  The
question I would like to have answered is: does SUN encourage/support
use of this script by 3rd-party vendors?  If not, why not?  If so, why
don't more people start using it?  From what I can tell, all it requires
is write access into /usr/tmp, a fairly innocuous place.  Can anyone
offer any details?
--
| ()  ()  () | Frank P. Bresz   | Westinghouse Electric Corporation
|  \  /\  /  | fpb@ittc.wec.com | ITTC Simulators Department
|   \/  \/   | uunet!ittc!fpb   | Those who can, do. Those who can't, simulate.
| ---------- | (412)733-6749    | My opinions are mine, WEC don't want 'em.

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (09/30/90)

In article <ROWE.90Sep26052322@doc.cme.nist.gov> rowe@cme.nist.gov (Walter Rowe) writes:

|     # Call my local rc.site script
|     if [ -x /usr/local/sys/rc.site ]; then
| 	/usr/local/sys/rc.site		> /dev/console		2>&1
|     fi

  Arcane. I would expect:

 	. /usr/local/sys/rc.site	> /dev/console		2>&1

so that you could define the shell variables to be used in the rest of
the script. I have a similar problem and found this to be the only way I
could keep it modular and small. I run the local customize early in the
script, and that may define RUNMELAST for things which run at the end of
the startup.

i.e., [ -n "$RUNMELAST" ] && [ -x "$RUNMELAST" ] && . "$RUNMELAST"

Note that again . is used to pass any symbols defined during startup.
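The difference bill is leaning on is easy to demonstrate; in this sketch the file paths are scratch stand-ins for /usr/local/sys/rc.site and the RUNMELAST hook, not the real things:

```shell
#!/bin/sh
# Show why ". file" (sourcing) matters here: variables assigned in a
# sourced file survive in the calling script, while a file run in a
# child shell keeps them to itself.
site=/tmp/rc.site.demo.$$
hook=/tmp/runmelast.demo.$$
echo 'echo "late hook ran"' > "$hook"
chmod +x "$hook"
echo "RUNMELAST=$hook" > "$site"

sh "$site"            # child shell: the assignment dies with it
echo "after sh:     RUNMELAST='$RUNMELAST'"

. "$site"             # sourced: RUNMELAST is defined in this shell now
echo "after source: RUNMELAST='$RUNMELAST'"

# the guarded hook call described above (quoted defensively)
[ -n "$RUNMELAST" ] && [ -x "$RUNMELAST" ] && . "$RUNMELAST"

rm -f "$site" "$hook"
```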
-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (09/30/90)

In article <3512@syma.sussex.ac.uk> stevedc@syma.sussex.ac.uk (Stephen Carter) writes:

| Past operating systems I have dealt with (no names, no pack drill) have
| been what I think of as mature commercial operating systems.  They have
| been able to cope with the rigours and realities of being used and
| installed and maintained by ordinary operations staff.  (Not by Nobel
| candidates).

  UNIX is a (registered) trademark, not a monolithic product. Lots of
people run UNIX systems in production business environments who are not
remotely gurus on anything.

| UN*X is not like that, and it is about time it was.

  Sounds like you have one of the less user friendly versions. 

| I shall give three examples of what I mean.  Two close to the real
| thread of this discussion, one a bit to one side.
| 
| 1.  When the system was delivered I insisted that because it had been
| with a third party, that we wipe the discs and reinstall from
| manufacturer distribution tapes.  SO far, so good.  The basic process
| went OK, but when the system build was going on the manufacturer
| engineer had to edit (using vi) the command file.  

  Stop there. It's not all that bad. Some vendors do it with menus,
others with questions with well-chosen defaults and help as a valid
answer. Having to do an edit isn't a great way to do it, but it's not a
failing in all versions of UNIX.

| 2.  We are having trouble at present with a printer that does not do a
| CR after a LF (or vice versa!).  Now we know that the control on this is
| buried in getty and termcap, but it is all in terms of bit flags that
| then have to be entered as fs:octalnumbers.

  Again, it depends on which vendor you patronize. SysV introduced some
nice lp stuff which has been fitted into some BSD-derived systems as
well. You get a plain-text shell script which is executed when sending a
file to the printer, and you can put a stty command or whatever you need
there. Yes, you need to read the manuals, but you don't need any guruship.
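A hedged sketch of what that fix might look like inside an lp interface script (the function name and the use of awk are my invention; only the general shape, a filter between the queue and the device, is what's being described):

```shell
#!/bin/sh
# Sketch of a CR/LF fix for an lp interface script.  On a serial
# printer you could instead set the line discipline once:
#     stty opost onlcr < /dev/ttyXX
crlf() {
    # insert a carriage return before every linefeed
    awk '{ printf "%s\r\n", $0 }'
}

# demonstration on a two-line "job"; a real interface script would
# send this to the printer device instead of od
printf 'line one\nline two\n' | crlf | od -c | sed -n 1p
```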

| 3.  (Slight aside).  REAL installations (ie ones where commercial style
| work is done - who prints your payslips dear readers) don't just print
| onto plain listing paper.  We change paper types, and we need to align
| the stationery with a test print.  If lp or lpr do it, I've not found
| it.  This is not a marginal need - it is central!

  This isn't and shouldn't be in lp or lpr; they put stuff *on* the
queue. It should be the program which takes stuff *off* the queue that
handles this. It can't be built in, because no two people want the same
thing. All the vendor can provide is the ability to interface with the
operator, printer, and filesystem. After that you write a few lines of
script or code to use the interfaces.

  Again, it depends on the vendor. Given the nice lp queueing I
mentioned before, you should have no trouble writing a few lines in the
control script to either have the operator load the paper and enable the
jobs for that paper, or have the system prompt the operator to change
the paper type (that gets old if you have a lot of jobs using various
forms).
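Those "few lines in the control script" might look like the sketch below. Everything in it is illustrative: a real script would write the test pattern to the printer device and read the operator's reply from /dev/tty rather than standard input.

```shell
#!/bin/sh
# Sketch of a forms-alignment loop for an lp control script: print a
# test pattern, ask the operator, repeat until the forms line up.
while :
do
    echo "XXXXXXXX  forms alignment test  XXXXXXXX"   # to the printer
    printf "Forms aligned? (y/n) "                    # to the operator
    read answer || break    # EOF: give up rather than loop forever
    [ "$answer" = "y" ] && break
done
```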

                                 * * *

  I make no claim to have seen it all for operating systems, but I have
seen the admin for VM, MVS, GCOS, VMS, and a number of UNIX variants.
The better UNIX install procedures are the best I've seen; they are far
better than the install procedures for some popular MS-DOS software, and
definitely not intended to require years of training.

  As far as installs go, if you don't do one often (I hope you don't
have to), then consider having a consultant come in and do it. Who
cares if he works hard? Day-to-day operation should and can be easy. If
you don't have the in-house expertise to make it so, and it may not be
cost-effective to develop it, then hire a consultant, once, to set up
the system and put what you want in place.
-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (09/30/90)

In article <0706TQG@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:

| And to top it off, this on a system that already has one of the best
| automatic software installation systems I've ever seen.

  The PC versions of UNIX have very good installations, and I just
installed a beta V.4 which asked all the questions up front, then told
me to put the tape in the drive and go away. Good stuff. Of course, with
all the installs and builds I selected, it ran six hours and 36 minutes
to install, but I didn't have to stand there while it happened.
-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

peter@ficc.ferranti.com (Peter da Silva) (10/01/90)

In article <FPB.90Sep27221159@ittc.ittc.wec.com> fpb@ittc.wec.com (Frank P. Bresz) writes:
> However SUN uses something called extract_unbundled (usually run as root).
> Does SUN encourage/support use of this script by 3rd party vendors?

That's another nice thing about the System V stuff... it's documented in
detail in the Integrated Software Development Guide.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

shani@TAURUS.BITNET (10/09/90)

  We have a SUN 4/260 running 4.0.3.  We encountered the question of
software installation and decided to install all local software under
/usr/local and all local libraries under /usr/local/lib/something, or
/usr/local/share/lib/something if the library may be shared with
other computers (our computer is also a file server for some other machines).

  X11 is of course an exception... But the obvious solution is to put X11
executables in /usr/local/X11 (the libraries are in /usr/local/share/lib/X11).

  After a bad experience with a case in which we had to reinstall the
generic system, /usr/local is now on a partition of its own, and we also
moved manl and mann to it (with symbolic links from /usr/man). This will make
things much easier if we have to install the system again (and we probably
will when we move to 4.1), since our / and /usr are almost completely
generic, except for /etc and /var and some other minor and known changes
(symbolic links, mainly).
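The manl/mann move described above can be rehearsed safely before touching the real tree; this sketch replays it in a scratch directory under /tmp (substitute / for $ROOT, as root, to do it for real):

```shell
#!/bin/sh
# Replay of the local man-page move in a scratch tree, safe to run.
ROOT=/tmp/layout.$$
mkdir -p $ROOT/usr/man/manl $ROOT/usr/local/man
echo ".TH FOO l" > $ROOT/usr/man/manl/foo.l   # a stand-in local page

# move the local man section onto the /usr/local partition...
mv $ROOT/usr/man/manl $ROOT/usr/local/man/manl
# ...and leave a symbolic link behind so man(1) still finds it
ln -s ../local/man/manl $ROOT/usr/man/manl

ls -lL $ROOT/usr/man/manl/   # the page is still reachable via the link
rm -rf $ROOT
```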

O.S.