[comp.unix.large] Survey

richard@locus.com (Richard M. Mathews) (09/10/90)

huntting@boulder.Colorado.EDU (Brad Huntting) writes:

>>Halt! Who goes there? 
>>Is anyone out here? 

>I do...  What I want to know is what is this newsgroup for?  And who created
>it?

It is my understanding that this group is for discussions of Unix on large
machines (how large is large?).  I'd be interested in starting things off
by finding out what large machines and what versions of Unix you have out
there.  For what do you use these systems?

We have AIX/370 running on 370s including a 3090.  We have been working
with IBM to develop AIX on these machines.  Since we use VM to run lots
of small virtual machines in different configurations, perhaps we don't
count as users of "large Unix" (except when helping a customer who IS
using a large system).  As a supplier of such systems, we are, however,
very interested in the opinions and needs of such users.

What other large systems are out there?  What sorts of problems do you
have that you feel are unique to users of large systems?

Richard M. Mathews
Locus Computing Corporation
richard@locus.com
lcc!richard@seas.ucla.edu
...!{uunet|ucla-se|turnkey}!lcc!richard

wojcik@crl.dec.com (Theodore Wojcik) (09/10/90)

In article <richard.652960541@fafnir.la.locus.com>, richard@locus.com
(Richard M. Mathews) writes:
|> huntting@boulder.Colorado.EDU (Brad Huntting) writes:
|> 
|>.....
|> It is my understanding that this group is for discussions of Unix on large
|> machines (how large is large?).  I'd be interested in starting things off
|> by finding out what large machines and what versions of Unix you have out
|> there.  For what do you use these systems?
|> 
|>.....
|> 
|> What other large systems are out there?  What sorts of problems do you
|> have that you feel are unique to users of large systems?
|> 

My feeling is that the problems associated with large clusters of
workstations are as difficult as, and perhaps more complex than, those
associated with large single machines.  The subjects discussed here are
likely to be interesting to both.  I'd like to suggest that the key is
"large installation" and keep away from defining exactly what constitutes
one.  When you have one, you know it.  As the original proposal stated:

	comp.unix.large		Unix on mainframes and in large networks
                                                   ^^^^^^^^^^^^^^^^^^^^^
I guess the point I'm trying to make is that there isn't any single attribute
that makes an installation "large".  By their very nature, mainframes
usually have big user communities and a big farm of disks.  By the same token,
large clusters of workstations generally have big user communities and a disk
server with a big farm of disks.  It's my feeling that the two styles of
computing have more things in common than not.  FWIW.

|> Richard M. Mathews
|> Locus Computing Corporation
|> richard@locus.com
|> lcc!richard@seas.ucla.edu
|> ...!{uunet|ucla-se|turnkey}!lcc!richard

--
Ted Wojcik, (wojcik@crl.dec.com), Systems Manager
Digital Equipment Corporation
Cambridge Research Lab
1 Kendall Sq. Bldg. 700 Flr. 2
Cambridge, MA 02139, USA
(617)621-6652

fwp1@CC.MsState.Edu (Frank Peters) (09/11/90)

In article <richard.652960541@fafnir.la.locus.com> richard@locus.com (Richard M. Mathews) writes:

   It is my understanding that this group is for discussions of Unix on large
   machines (how large is large?).  I'd be interested in starting things off
   by finding out what large machines and what versions of Unix you have out
   there.  For what do you use these systems?

   We have AIX/370 running on 370s including a 3090.  We have been working
   with IBM to develop AIX on these machines.  Since we use VM to run lots
   of small virtual machines in different configurations, perhaps we don't
   count as users of "large Unix" (except when helping a customer who IS
   using a large system).  As a supplier of such systems, we are, however,
   very interested in the opinions and needs of such users.

   What other large systems are out there?  What sorts of problems do you
   have that you feel are unique to users of large systems?

Well, we are developing a different kind of 'large system' that has
its own unique complexities.

We are in the process of migrating from a mainframe environment to a
distributed environment of UNIX servers and workstations.  Though none
of these systems taken individually are what I would call large
(though I wouldn't call a Sun 4/490 small either), taken together they
add up to more power and complexity than any mainframe I'm familiar
with.

There are a great many issues that are addressed in typical mainframe
operating systems (and perhaps by big iron unices like UTS and
AIX/370) that are all but ignored in the typical mini and workstation
implementations of UNIX.

I would hope that discussion of how various users have addressed the
following issues would be appropriate to this group.

(1)  Operator driven UNIX.  The concept of a machine room operator is
     ingrained in most mainframe operating systems.  The concept of a
     large installation in which many of the day-to-day administration
     tasks are performed by operators in a machine room rather than by
     the user or the system administrator with root privileges is
     relatively foreign to most UNIX implementations.  There are
     really two sub-issues involved here.

     a) We generally have two operators on duty in our machine room
        during peak hours.  Either of these operators must perform
        tasks such as answering requests, killing runaway logins and
        the like.  Requiring each of these operators to log in under a
        separate userid leads to an unacceptable amount of wasted
        time.  On the other hand, a single operator userid is a
        definite security problem.  We are currently attempting to
        establish an operator userid that can only be logged in on the
        system console in our machine room.

     b) We would like to delegate many tasks such as tape control,
        backup, printer control and such to our operators.  At the
        same time we don't want to share the root password.  There are
        a few systems out there that allow the delegation of tasks to
        certain users.  All of these, of course, have security issues
        that must be considered.

(2)  Tape device management.  Another capability supported in
     mainframe operating systems is the ability to gain exclusive
     control of a tape device, request that a tape be mounted, and
     release the device when the user is finished with it.  We would
     like to have this capability in our UNIX environment as well.

(3)  Load balancing.  In a single box, balancing the load among several
     CPUs is relatively straightforward (at least in concept).  When
     your CPUs are spread across a dozen or more machines, how do you
     avoid the situation of one machine being brought to its knees while
     another is nearly idle?  When you add multiple classes of
     processor (is a 4/490 at 50% more loaded than a SPARCstation at
     30%?) or multiple types (how do the above two compare to a
     DECstation 3100 at 40%?) this issue can become a nightmare.

(4)  Userid management.  Most UNIX boxes come with instructions about
     which several files should be edited to add a user to the system.
     We are developing programs to manage the addition of userids in a
     relatively bulletproof way so that non-technical personnel can
     add new users.  While there are programs around to do that, very
     few address the large system issues such as password file locking
     and batch additions of large groups of users like a class roll
     (see the sketch after this list).

(5)  Accounting.  We have many projects of one sort and another which
     require accounting of resources.  On our mainframe we have an
     account number which can be used for this purpose.  The same
     userid can be under many accounts.  The UNIX accounting system
     (at least the one on a Sun running SunOS 4.1) doesn't seem to
     have an equivalent concept.  As far as we can determine no
     accounting information by group is available so that the newgrp
     command gives us no help.  It looks as if we will have to have
     multiple userids per user in this case.  The thought of that is
     repulsive both from a personal and administrative viewpoint.
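
Returning to (4) for a moment, here is roughly the core we have in mind
for the batch-add case.  It leans on the BSD convention that vipw(8)
treats the existence of /etc/ptmp as a lock on the password file.  This
is an untested sketch, not our actual code; a real version would also
need uid-uniqueness checks and better error recovery.

/* batchadd -- sketch of a locked batch addition to /etc/passwd.
 * Relies on the BSD convention that creating /etc/ptmp with O_EXCL
 * locks the password file against vipw(8) and other editors.
 * Untested; paths and conventions vary by vendor.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

#define PASSWD "/etc/passwd"
#define PTMP   "/etc/ptmp"      /* lock file and scratch copy */

int
main(void)
{
    FILE *in, *out;
    char line[512];
    int fd, c;

    fd = open(PTMP, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0) {
        fprintf(stderr, "password file busy; try again later\n");
        exit(1);
    }
    out = fdopen(fd, "w");

    if ((in = fopen(PASSWD, "r")) == NULL) {
        unlink(PTMP);
        exit(1);
    }
    while ((c = getc(in)) != EOF)       /* copy the existing entries */
        putc(c, out);
    fclose(in);

    /* stdin carries one ready-made passwd entry per line, e.g. a
     * class roll already expanded by a front-end program that has
     * checked the uids and login names for uniqueness */
    while (fgets(line, sizeof line, stdin) != NULL)
        fputs(line, out);

    if (fclose(out) == EOF || rename(PTMP, PASSWD) < 0) {
        unlink(PTMP);                   /* leave the old file intact */
        exit(1);
    }
    exit(0);
}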

I'm sure there are many other issues out there waiting for us that we
haven't thought of yet.  Any discussion or suggestions on the above
would be most welcome.

Regards,
FWP
--
--
Frank Peters   Internet:  fwp1@CC.MsState.Edu         Bitnet:  FWP1@MsState
               Phone:     (601)325-2942               FAX:     (601)325-8921

ikluft@uts.amdahl.com (Ian Kluft) (09/11/90)

In article <1990Sep10.153946.7269@crl.dec.com> wojcik@crl.dec.com writes:
>I guess the point I'm trying to make is that there isn't any single attribute
>that makes an installation "large".  By their very nature, mainframes
>usually have big user communities and a big farm of disks.  By the same token,
>large clusters of workstations generally have big user communities and a disk
>server with a big farm of disks.  It's my feeling that the two styles of
>computing have more things in common than not.  FWIW.

Point well taken.  I think we have established enough direction to start
discussions in the topic area.
-- 
Ian Kluft             -----------------------------  # Another flying fanatic
UTS Systems Software           \ |--*--| /           # PP-ASEL
Amdahl Corporation      C - 172  /\___/\  Skyhawk    # Member AOPA, ACM, UPE
Santa Clara, CA                 o   o   o            #include <std-disclaimer>

richard@locus.com (Richard M. Mathews) (09/11/90)

fwp1@CC.MsState.Edu (Frank Peters) writes:

>Well, we are developing a different kind of 'large system' that has
>its own unique complexities.

Good point -- large networks are also supposed to be part of this
newsgroup.  Actually, we have both kinds of "large" systems.  As I
said before, we have many AIX guests running on our 3090.  We also have
a handful of other 370s, each with a number of AIX guests.  Finally we
have many PS/2s running AIX.  Groups of these are connected via TCF,
and we use good old fashioned telnet, rlogin, NFS, etc. to connect the
clusters.

>(3)  Load balancing.  In a single box, balancing the load among several
>     CPUs is relatively straightforward (at least in concept).  When
>     your CPUs are spread across a dozen or more machines, how do you
>     avoid the situation of one machine being brought to its knees while
>     another is nearly idle?  When you add multiple classes of
>     processor (is a 4/490 at 50% more loaded than a SPARCstation at
>     30%?) or multiple types (how do the above two compare to a
>     DECstation 3100 at 40%?) this issue can become a nightmare.

TCF allows processes to migrate between machines, and I know there are
others developing similar capabilities.  I can send a signal to a process
to request that it move to a new site (by default, to the site from which
the signal was sent).  A load leveling daemon could be written (but one
does not come with TCF) which automatically moves processes around in
response to varying load.  A difficulty is deciding which processes to
move -- it would be a shame to waste time moving an I/O bound process
which is currently accessing local data.  In our environment I have
found it quite sufficient to be able to manually move things when the
load goes up.
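
For anyone tempted to try the automatic version, the main loop of such
a daemon might look something like the sketch below.  I should stress
this is a strawman only: SIGMIGRATE is a placeholder name rather than
the real interface, and the three extern routines (especially
pick_victim(), which must avoid I/O bound processes using local data)
are exactly the hard part.

/* levelerd -- sketch of a "pull" style load leveling daemon.  It
 * runs on each site; when the local site is nearly idle and a peer
 * site is overloaded, it asks one of the peer's CPU bound processes
 * to migrate here.  SIGMIGRATE is a placeholder, not the real TCF
 * interface.
 */
#include <sys/types.h>
#include <signal.h>
#include <unistd.h>

#define SIGMIGRATE SIGUSR1      /* placeholder for the real signal */

extern double local_load(void); /* load average of this site */
extern double peer_load(int *); /* worst peer load; sets its site no. */
extern pid_t  pick_victim(int); /* a CPU bound process on that site;
                                 * pids are cluster-wide under TCF */

int
main(void)
{
    int site;
    pid_t victim;

    for (;;) {
        if (local_load() < 1.0 && peer_load(&site) > 3.0) {
            victim = pick_victim(site);
            if (victim > 0)
                /* by default the process migrates to the site the
                 * signal was sent from, i.e. to us */
                kill(victim, SIGMIGRATE);
        }
        sleep(30);      /* migrating is expensive; don't thrash */
    }
    /* NOTREACHED */
}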

Richard M. Mathews
Locus Computing Corporation
richard@locus.com
lcc!richard@seas.ucla.edu
...!{uunet|ucla-se|turnkey}!lcc!richard

russell@ccu1.aukuni.ac.nz (Russell J Fulton;ccc032u) (09/11/90)

What is a large Unix system? --- Probably anything with over 100 regular users.

This eliminates individual workstations but should include clusters of
workstations run from a server as well as systems serving lots of ASCII
terminals.

We operate an SGI 4D/240S which is accessed by terminals on the network, either
PCs or terminals connected to terminal servers.  We usually have about 50
users on the system, out of about 1000 registered users.  We also have an SGI
Personal Iris (diskless) and a Sun 4/330, both of which mount most of the
240S's disk via NFS.

I think this group could provide a useful forum for the discussion of issues
relating to maintaining and using large systems.  This is not an area in which
UNIX excels.
For instance, we are currently having problems with the 240S crashing without
leaving any evidence as to the reason.  It is very frustrating for us and our
local support agents because we can't even decide whether the problem is
hardware or software.  A workstation crashing every now and again is a
nuisance; for us it's disastrous.

-- 

diamond@tkou02.enet.dec.com (diamond@tkovoa) (09/11/90)

Well, if this group is as large as the world, you could try saying "hello"
to it.  ;-)

> Who goes there?

Monsters.  ;-)
Large ones.
-- 
Norman Diamond, Nihon DEC    diamond@tkovoa.enet.dec.com
                                    (tkou02 is scheduled for demolition)
We steer like a sports car:  I use opinions; the company uses the rack.

ggs@ulysses.att.com (Griff Smith) (09/11/90)

In article <FWP1.90Sep10140422@ra.CC.MsState.Edu>, fwp1@CC.MsState.Edu (Frank Peters) writes:
> (2)  Tape device management.  Another capability supported in
>      mainframe operating systems is the ability to gain exclusive
>      control of a tape device, request that a tape be mounted, and
>      release the device when the user is finished with it.  We would
>      like to have this capability in our UNIX environment as well.

> Frank Peters   Internet:  fwp1@CC.MsState.Edu         Bitnet:  FWP1@MsState
>                Phone:     (601)325-2942               FAX:     (601)325-8921

Part of this is available now, at least for academic sites.  I have a
resource manager that we use to control access to tape drives on our
systems.  The operator request part will probably come soon.  The down
side: academic only, SunOS and BSD only, source not available yet
because of licensing problems (sigh), not ported to System V yet.  The
up side: it's free if you qualify.  See the proceedings of the 1989
Baltimore Summer USENIX convention for details.
-- 
Griff Smith	AT&T (Bell Laboratories), Murray Hill
Phone:		1-201-582-7736
UUCP:		{most AT&T sites}!ulysses!ggs
Internet:	ggs@ulysses.att.com

anselmo-ed@cs.yale.edu (Ed Anselmo) (09/12/90)

>>>>> On 10 Sep 90 19:04:22 GMT, fwp1@CC.MsState.Edu (Frank Peters) said:

Frank>      b) We would like to delegate many tasks such as tape control,
Frank>         backup, printer control and such to our operators.  At the
Frank>         same time we don't want to share the root password.  There are
Frank>         a few systems out there to allow the delegation of tasks to
Frank>         certain users.  All of these, of course, have security issues
Frank>         involved that must be considered.

We have a setuid-root program that allows most of the above to be done
without having to be logged in as a super-user.  Users in group wizard
can kill runaway processes (among other things....):

Menu for wizard.

 0.     Exit this Menu.
 1.     Control Printer Queues.
 2.     Remove Job(s) From Printer Queues.
 3.     Reboot System.
 4.     Halt System.
 5.     Terminate A Process.
 6.     Write To All Users Logged On This Machine.
 7.     Set Date & Time.
 8.     Alter Priority of Process.
 9.     Rebuild UserDataBase Alias Files.
10.     Remove IPC Resources.

(The last option was added to remove IPC resources that ill-mannered
Linda programs started leaving around.)

The similar "operator" program allows members of group "operator" to
do backups from a regular account.

Both programs log every action performed by the user.
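
The skeleton is simple enough to show.  This is just the shape of the
thing, not the actual program: check the real uid against the group,
log, then act, never handing unchecked strings to a shell.

/* Sketch of the setuid-root menu wrapper's skeleton (the shape of
 * the thing, not the actual code).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <pwd.h>
#include <grp.h>
#include <syslog.h>

static int
in_group(char *user, char *group)
{
    struct group *gr = getgrnam(group);
    char **m;

    if (gr == NULL)
        return 0;
    for (m = gr->gr_mem; *m != NULL; m++)
        if (strcmp(*m, user) == 0)
            return 1;
    return 0;
}

int
main(void)
{
    struct passwd *pw = getpwuid(getuid());  /* real uid, not euid */
    int choice;

    if (pw == NULL || !in_group(pw->pw_name, "wizard")) {
        fprintf(stderr, "sorry.\n");
        exit(1);
    }
    openlog("wizard", LOG_PID, LOG_AUTH);

    choice = 0;
    /* ... print the menu, read `choice', validate any arguments ... */

    syslog(LOG_NOTICE, "%s selected menu item %d", pw->pw_name, choice);
    /* ... then perform the privileged action directly (no shell
     * escapes, no system() on user-supplied strings) ... */

    exit(0);
}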

Frank> (4)  Userid management.  Most UNIX boxes come with instructions about
Frank>      which several files should be edited to add a user to the system.
Frank>      We are developing programs to manage the addition of userids in a
Frank>      relatively bullet proof way so that non-technical personnel can
Frank>      add new users.  While there are programs to do that around very
Frank>      few address the large system issues such as password file locking
Frank>      and batch additions of large groups of users like a class roll.

Yale CS uses the all-singing, all-dancing "User Database Program"
(udb) which tracks users, uids, mailboxes, mailing lists, machines,
serial numbers (among other things).  Through a series of programs and
Shell Scripts from Hell, it's used to build and delete accounts
(assigning unique uids, and keeping them consistent across machines),
and rebuild the sendmail aliases files.

It has also managed to keep several generations of Yale undergraduate
summer programmers entertained for months on end.

anselmo[371] % xdb
Yale Data Base access program (xdb).
Version 1.4 (Exp) of 89/10/02 15:42:52 by long.
Type '?' at any prompt for help.

Trying eli.cs.yale.edu...[Connected]...[OK]
Establishing identity...[OK]
The Database Daemon welcomes anselmo-ed@bigbird
Figuring out who you are...[OK]
Checking for wizardhood...[Wizard]
I welcome Wizard anselmo-ed
Loading entities: distribution...entity...field...machine...mailing-list...person...program...pseudo-user...[Done]
wizard> sh anselmo
** person anselmo-ed
Fullname:     Ed Anselmo
Status:       staff
Expiration:   1999
Birthday:     4/25/59
Work-address: 51 Prospect St. (AKW) Room 012
Work-phone:   432-6428
Room-number:  012
Home-phone:   469-2562
Capability:   arpanet, database
Workstation:  bigbird
Group:        facility
ID Number:    118
Mailbox:      'anselmo@ra, 'anselmo@yale-rt-alaska

wizard> sh machine bigbird
** machine bigbird
Fullname:                     bigbird.cf.cs.yale.edu
Description:                  alaska client
Operating-system:             sun os
Host-id:                      51005683
Component/Make/Model/Ser-Num: cpu sun 4/60fgx-12-p4 935f2634
yaleid:                       066099
Install-date:                 9/89
Location:                     012
Primary-user:                 anselmo-ed
principle-investigator:       facility
Owner:                        facility
Grant:                        overhead

wizard>
-- 
Ed Anselmo   anselmo-ed@cs.yale.edu   {harvard,cmcl2}!yale!anselmo-ed

russell@ccu1.aukuni.ac.nz (Russell J Fulton;ccc032u) (09/12/90)

I agree that the issues Frank Peters raised all need discussion and that this
group seems a good place to start.  I would like to comment on a couple of
points he raises.

fwp1@CC.MsState.Edu (Frank Peters) writes:

>(2)  Tape device management.  Another capability supported in
>     mainframe operating systems is the ability to gain exclusive
>     control of a tape device, request that a tape be mounted, and
>     release the device when the user is finished with it.  We would
>     like to have this capability in our UNIX environment as well.

This is very important to us, particularly as it affects our backup operations.
At the moment the operators leave backups running when they leave at midnight,
and the tapes are vulnerable to overwriting until they are unmounted in
the morning.  We change the protection on all the device drivers, but it's a
messy solution.  I have seen references to systems that control access to
devices but have never had time to follow them up.  I would be interested in
hearing from people who have used such systems.
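
Something like the little allocator sketched below is roughly what I
have in mind: run the backup (or any tape job) under it, and nobody
else can grab the drive until it exits.  This is only a sketch; the
lock directory name is invented, flock(2) is BSD/SunOS (System V would
want lockf()), and since the lock is advisory, everything that touches
tapes has to go through it.

/* tapelock -- sketch: run a command with exclusive use of a tape
 * drive, so a finished backup can't be overwritten before the tape
 * is unmounted.  Lock directory is invented; flock(2) is advisory
 * and BSD/SunOS specific.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>

int
main(int argc, char **argv)
{
    char lockfile[256];
    int fd;

    if (argc < 3) {
        fprintf(stderr, "usage: tapelock drive command [args...]\n");
        exit(2);
    }
    sprintf(lockfile, "/usr/spool/locks/tape.%s", argv[1]);

    fd = open(lockfile, O_RDWR | O_CREAT, 0600);
    if (fd < 0 || flock(fd, LOCK_EX | LOCK_NB) < 0) {
        fprintf(stderr, "tapelock: drive %s is in use\n", argv[1]);
        exit(1);
    }
    /* the lock dies with us, even if the command crashes, so there
     * are no stale locks to clean up in the morning */
    execvp(argv[2], &argv[2]);
    perror(argv[2]);
    exit(1);
}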

>(4)  Userid management.  Most UNIX boxes come with instructions about
>     which several files should be edited to add a user to the system.
>     We are developing programs to manage the addition of userids in a
>     relatively bulletproof way so that non-technical personnel can
>     add new users.  While there are programs around to do that, very
>     few address the large system issues such as password file locking
>     and batch additions of large groups of users like a class roll.

This is one area where we may have something to offer.  We are currently
converting our user database management program from our VAX cluster to
our new UNIX system.  It is called ZUMP (Zeno Users Management Program;
Zeno was a student computing system we implemented on a DEC 10 about 10
years ago, and the name has stuck).

Firstly, a little background: we are chronically short of staff and hence we
try to distribute as much of the work of administering our systems as we can
out to the departments that use them.  When we got the DEC 10 it was our
first interactive machine for student users, which meant that each of about
5000 students would need an individual usercode, diskspace allocation, etc.
Also, with that many students, at least 10 will forget their password on any
given day :-)  Anyway, it all adds up to a lot of administration.

What we did was to write a program that managed a hierarchical database of
users.  At the top we had the computer centre, under that we had departments,
and under each department we had classes.  Each class had a supervisor.
The program allowed resources (process time, connect time, etc.) to be passed
down the tree.  It also had the concept of capabilities that could likewise be
passed on to users below.  The most important capability was that of
supervision.  Thus we created the departments and gave them supervisor
capabilities and resources, and it was then up to the departments to create
the classes and share out the resources as they saw fit.  A departmental
supervisor could create a class and delegate the supervision of that class to
somebody else.

Somebody with supervisor status could move resources around for people under 
them and perform tasks such as setting passwords.
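
For the curious, the heart of it is just a tree of records of roughly
this shape (a simplified sketch in C, not the actual ZUMP declarations):

/* Simplified sketch of a ZUMP-style node (not the actual
 * declarations).  Resources and capabilities flow down the tree: a
 * node can grant to those below it only what it holds itself.
 */
#include <stddef.h>

#define CAP_SUPERVISE 0x01      /* may administer the subtree below      */
#define CAP_SETPASS   0x02      /* may set passwords in that subtree     */
#define CAP_ACTIVE    0x04      /* turns a skeleton user into a real one */

struct node {
    char         name[16];      /* centre, dept, class, or usercode */
    struct node *parent;        /* centre -> dept -> class -> user  */
    struct node *children;      /* first child                      */
    struct node *sibling;       /* next entry at the same level     */
    unsigned     caps;          /* capability bits held             */
    long         cpu_quota;     /* process time, in seconds         */
    long         connect_quota; /* connect time, in minutes         */
    long         disk_quota;    /* diskspace, in blocks             */
};

/* A supervisor may pass resources only downward, and only out of
 * what it currently holds. */
int
grant_cpu(struct node *from, struct node *to, long secs)
{
    struct node *p;

    for (p = to->parent; p != NULL && p != from; p = p->parent)
        ;                       /* is `to' beneath `from'? */
    if (p == NULL || !(from->caps & CAP_SUPERVISE) || from->cpu_quota < secs)
        return -1;
    from->cpu_quota -= secs;
    to->cpu_quota += secs;
    return 0;
}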

When the DEC 10 was replaced by a VAX cluster we moved ZUMP to the VAXes and
extended it to take files from the administration computer and automatically
create userids for whole classes.  It was also translated into C at this time.

We recently replaced our IBM VM system (which was used for research) with an
SGI 4D/240S UNIX system, and we decided to start using ZUMP for controlling
administration on it.  We hope to have all user administration handled by ZUMP
next year.  We now have a basic version going that tracks usage and will
allow supervisors to change passwords.

There are issues that still need resolving; in particular, we require users of
our research systems to sign a statement saying that they have read the
'Computer Centre Regs' and agree to abide by them.  Thus we cannot completely
distribute the creation of usercodes to the departments.  What we are thinking
of doing is having ZUMP create a skeleton user and add another capability
which turns it into a real user.  Thus the department does the donkey work of
entering the details and forwards the signed form to us.  We then activate the
userid with a single command.

We have been using ZUMP in various guises for nearly 10 years and have found
it very satisfactory.

We would be happy to distribute the source to anybody who wants it, but I
suggest that we wait until the UNIX version has stabilised.

Has anybody else tried anything like this? 
-- 

matt@locus.com (matt) (09/12/90)

In response to Frank Peters' article:

Frank asks about the following system management problems on large
Unix systems.
>  (1a) Operator Ids.
>  (1b) Operator root privs.
>  (2)  Tape Management
>  (3)  Load Balancing
>  (4)  User Management
>  (5)  Accounting

The OSF has recently sent out a request for designs for a system
management tool package that addresses some of the above concerns.

SCO's ODT on smaller unix machines has a menu driven system operations
facility that helps a beginning user access the various unix utilities
to do some of the above functions.

I am developing a system management tool for AIX on large IBM iron
that will allow full accountability and root privileges with restricted
access to certain functions.  It will cover the following areas of
system administration:
	Inventory Control
	Change Management
	Problem Management
	System Management
		Network accounting and management
		Performance accounting and management
		System maintenance
		User account management over a distributed network
		System accounting and change logging

The interface is through a menu, with command line access to any
function or sub-function, along with complete customizability and
portability.  If anyone is interested or has any comments or
observations, I would be delighted to hear them.

I am also interested in anything along these lines that is already in
use and could be integrated with our software to increase its
usability or functionality.  If there are other system management
functions that you would like to see added to the above list, or
if there is a unix operating system that you would like to see
this available for, please respond directly to me at the following
address.

									Matt Campbell
(213)337-5900
matt@locus.com
lcc!matt@seas.ucla.edu
{randvax,ucbvax}!ucla-se!lcc!matt

wnp@iiasa.AT (wolf paul) (09/12/90)

In article <26107@cs.yale.edu> anselmo-ed@cs.yale.edu (Ed Anselmo) writes:
)We have a setuid-root program that allows most of the above to be done
)without having to be logged in as a super-user.  Users in group wizard
)can kill runaway processes (among other things....):

Is this available??

)The similar "operator" program allows members of group "operator" to
)do backups from a regular account.

And this?

)(udb) which tracks users, uids, mailboxs, mailing lists, machines,
)serial numbers (among other things).  Through a series of programs and
)Shell Scripts from Hell, it's used to build and delete accounts
)(assigning unique uids, and keeping them consistent across machines),
)and rebuild the sendmail aliases files.
)It has also managed to keep several generations of Yale undergraduate
)summer programmers entertained for months on end.

And this?  The entertainment value alone would warrant making it
available, quite apart from its usefulness.

-- 
Wolf N. Paul, IIASA, A - 2361 Laxenburg, Austria, Europe
PHONE: +43-2236-71521-465     FAX: +43-2236-71313      UUCP: uunet!iiasa.at!wnp
INTERNET: wnp%iiasa.at@uunet.uu.net      BITNET: tuvie!iiasa!wnp@awiuni01.BITNET
       * * * * Kurt Waldheim for President (of Mars, of course!) * * * *

dricejb@drilex.UUCP (Craig Jackson drilex1) (09/12/90)

In article <7brK02ZFc7RI01@amdahl.uts.amdahl.com> ikluft@uts.amdahl.com (Ian Kluft) writes:
>In article <1990Sep10.153946.7269@crl.dec.com> wojcik@crl.dec.com writes:
>>I guess the point I'm trying to make is that there isn't any single attribute
>>that makes an installation "large".  By their very nature, mainframes
>>usually have big user communities and a big farm of disks.  By the same token,
>>large clusters of workstations generally have big user communities and a disk
>>server with a big farm of disks.  It's my feeling that the two styles of
>>computing have more things in common than not.  FWIW.
>
>Point well taken.  I think we have established enough direction to start
>discussions in the topic area.

wojcik does have a good point, but I think in the area of disks there still
is a boundary.  Except for things like the Auspex, few file servers really
support connecting more than about 10 Gigs of disk.  (Also excepting
Amdahls, 3090s, etc. running Unix.)

This is a real issue for us--we have a user who would like to keep about
20 GB online at any one time, with convenient access to another 100 GB
of archives.  Right now, even on our mainframe he can only keep about
10 GB up at a time--he pages to tape.  (It's data about international
trade.  I don't know what the current paging scheme is, but at one time
import data was available on even-numbered days, and export data on
odd-numbered days.)

Note that these data don't split real well across multiple boxes--any
given update job will probably go through the whole thing, and any given
access job may access throughout the database.  Given his druthers, the
customer would like to think of it as a single file...  Sure, it's
necessary to break it up for several reasons, but none of them are
natural from the applications standpoint.
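
To put it another way, what he wants is for a mapping like the one
sketched below to live in the system rather than in his application.
The record size, chunk size, and base.NNN naming are all invented for
illustration; one reason the pieces exist at all is that a 32-bit file
offset runs out at 2 GB.

/* Sketch: present many chunk files as one logical stream of
 * fixed-size records -- the bookkeeping the application would rather
 * not know about.  Record size, chunk size, and naming are invented.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

#define RECSIZE  512            /* fixed-size records (assumed)     */
#define PERCHUNK 1000000L       /* records per piece, about 0.5 GB  */

/* Read record `recno' of a database whose pieces are named
 * base.000, base.001, ...  Record addressing keeps every offset
 * within a piece comfortably inside a 32-bit off_t. */
int
db_read(char *base, long recno, char *buf)
{
    char path[256];
    int fd, n;

    sprintf(path, "%s.%03ld", base, recno / PERCHUNK);
    if ((fd = open(path, O_RDONLY)) < 0)
        return -1;
    lseek(fd, (recno % PERCHUNK) * RECSIZE, SEEK_SET);
    n = read(fd, buf, RECSIZE);
    close(fd);
    return n;
}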
-- 
Craig Jackson
dricejb@drilex.dri.mgh.com
{bbn,axiom,redsox,atexnet,ka3ovk}!drilex!{dricej,dricejb}

ces@milton.u.washington.edu (Christopher e Stefan) (09/16/90)

Our site (*.u.washington.edu) runs several locally developed utilities
that address some of the problems voiced here, specifically:

1. an account creation system tied into unique usernames and student
   numbers

2. an accounting system tied into the student number database providing
   load leveling when users exceed their cpu and connect time quotas

3. TMS, a tape management system, providing exclusive access to tape
   drives and check-in/check-out from the tape library

4. a mail forwarding system

and, undoubtedly, several more utilities useful to sysadmins at large
sites that I either: a) can't remember, or b) have never heard of
because I'm a user, not an administrator.

If you are interested in obtaining more information about the programs
used at the University of Washington, or just in exchanging information, I
would strongly suggest getting in contact with either:

	Ken Lowe (ken@cac.washington.edu) or
	Ken Case (kc@cac.washington.edu)

who are the people responsible for administering the systems in the
U.Washington.EDU domain, and who are also responsible for
writing/maintaining most of the locally developed UNIX systems
software.

I hope this is of some help.  :-)

--
Christopher e Stefan.  | INTERNET: ces@u.washington.EDU  | (206) 526-9817
"Natural causes,       |     UUCP: ... !beaver!blake!ces | 4746 21st Ave NE
 people just explode." |   BITNET: ces%milton@UWAVM      | Seattle WA, 98105