[comp.unix.ultrix] Managing a network of UNIX workstations

barrett@jhunix.HCF.JHU.EDU (Dan Barrett) (01/13/90)

	I may be managing a network of DECstation 3100's running Ultrix in
the near future.  I have been managing VAXen for a long time, but never a
network of workstations.  So, I have some questions:

(1)	How do you handle inter-machine superuser privileges?

	I do NOT want to put "root" in /.rhosts -- this is a big security
	risk, right?

(2)	How do you do transparent backups?  I want to pop a tape in ONE
	tape drive and say "Back up ALL files from ALL workstations onto
	this tape."

	Suppose I dedicate one workstation as the "main node", mount all
	other workstation disks on the main node using NFS, and then back it
	up.  This should work...?  But don't I have to worry about
	inter-machine superuser privileges?  After all, we want to back up
	EVERY file from EVERY machine.

(3)	We'd like all users to have accounts on all workstations.  What's
	the best way to maintain an inter-machine password file?  I've
	heard vaguely of "yellow pages" but have never used it.

(4)	We'd like a system where the entire network appears to each user as
	if it were one huge "machine".  A user would log onto this "machine"
	and not care which workstation s/he were actually using.  (Maybe the
	"machine" would automatically log the user onto the workstation with
	the lightest system load.  I've seen this done with VMS systems at
	other schools.)  Can this entire scheme be done?  Transparently?

(5)	Should we put disks on every workstation, or have one fileserver and
	many diskless workstations?  Which is better?  Easier to maintain?

	My idea is to have one or two fileservers, make the other
	workstations use NFS, but put a small disk on each workstation for
	swapping only.  Good?  Bad?  What's better?

(6)	Does anybody make a removable media drive, like the SyQuest
	44-megabyte cartridge drive, for the DS3100?

	Thanks very much for your advice!

                                                        Dan

 //////////////////////////////////////\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
| Dan Barrett     -      Systems Administrator, Computer Science Department |
| The Johns Hopkins University, 34th and Charles Sts., Baltimore, MD  21218 |
| INTERNET:   barrett@cs.jhu.edu           |                                |
| COMPUSERVE: >internet:barrett@cs.jhu.edu | UUCP:   barrett@jhunix.UUCP    |
 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\/////////////////////////////////////

grr@cbmvax.commodore.com (George Robbins) (01/13/90)

In article <3949@jhunix.HCF.JHU.EDU> barrett@jhunix.HCF.JHU.EDU (Dan Barrett) writes:
> 
> 	I may be managing a network of DECstation 3100's running Ultrix in
> the near future.  I have been managing VAXen for a long time, but never a
> network of workstations.  So, I have some questions:
> 
> (1)	How do you handle inter-machine superuser privileges?
> 
> 	I do NOT want to put "root" in /.rhosts -- this is a big security
> 	risk, right?

Don't, unless you can control physical access to the hardware or are operating
in an intentionally un-secure mode.  It may be an acceptable risk /
convenience tradeoff if you have a couple of servers in a secure area.

> (2)	How do you do transparent backups?  I want to pop a tape in ONE
> 	tape drive and say "Back up ALL files from ALL workstations onto
> 	this tape."

One traditional means is to have an "operator" account on all machines,
have all the "raw" disks readable by "operator", and use a shell
script that remotely executes rdump on each of the systems.
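A sketch of such a driver script, assuming the workstations trust an "operator" account and Ultrix's dump would actually run for it (it won't, as noted below); the hostnames, tape device, and raw-disk names here are invented, and the echo keeps it a dry run:

```shell
#!/bin/sh
# Dry-run sketch of a central backup driver.  For each workstation,
# remotely run rdump as "operator", writing back to the tape host's
# drive.  Hostnames, tape device, and filesystem names hypothetical.
TAPEHOST=mainnode
TAPEDEV=/dev/nrmt0h
HOSTS="ws1 ws2 ws3"
FILESYSTEMS="/dev/rra0a /dev/rra0g"

for host in $HOSTS; do
    for fs in $FILESYSTEMS; do
        # "echo" makes this a dry run; remove it to really dump.
        echo rsh $host -l operator \
             rdump 0uf $TAPEHOST:$TAPEDEV $fs
    done
done
```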

Unfortunately the Ultrix dump program is broken and thinks only "root"
is allowed to run dump.  I don't know of any convenient and secure
automated way to handle this.

The operator can still do the dumps from a central site/machine/tape, but
he has to know the root password and log into each of the machines and
manually run the dump program.
 
> 	Suppose I dedicate one workstation as the "main node", mount all
> 	other workstation disks on the main node using NFS, and then back it
> 	up.  This should work...?  But don't I have to worry about
> 	inter-machine superuser privileges?  After all, we want to back up
> 	EVERY file from EVERY machine.

Yep...  Plus you can only use cpio/tar across NFS.  Dump/restore are
generally speaking better tools.

> (3)	We'd like all users to have accounts on all workstations.  What's
> 	the best way to maintain an inter-machine password file?  I've
> 	heard vaguely of "yellow pages" but have never used it.

Yellow pages is probably a good way to do this, especially for a cluster
of workstations under one management being used in a homogeneous manner.
Start with the DEC YP manuals and also get hold of a set of Sun manuals
if you can...

> (4)	We'd like a system where the entire network appears to each user as
> 	if it were one huge "machine".  A user would log onto this "machine"
> 	and not care which workstation s/he were actually using.  (Maybe the
> 	"machine" would automatically log the user onto the workstation with
> 	the lightest system load.  I've seen this done with VMS systems at
> 	other schools.)  Can this entire scheme be done?  Transparently?

All the file systems can appear as one big filesystem if you set up an
appropriate cross mounting scheme.  YP can help with this.  Automatic load
sharing is not so simple and would be hard to make transparent in most cases.

> (5)	Should we put disks on every workstation, or have one fileserver and
> 	many diskless workstations?  Which is better?  Easier to maintain?

This is a religious question.  Central fileservers definitely make the
backup problem *much* easier to manage.  Backing up across a network is
slow and painful; having a decent-performance tape drive on the same system(s)
as the disk drives is much faster.  The fewer filesystems you have to dump,
the easier media management and recovery are.

> 	My idea is to have one or two fileservers, make the other
> 	workstations use NFS, but put a small disk on each workstation for
> 	swapping only.  Good?  Bad?  What's better?

Another one.  If you can afford to, put at least a swap disk on each system
and/or root/swap/var disk(s) on each one, and let the fileserver serve
files and not handle swapping or booting.  Some people will tell you that
network access is faster than low-performance built-in SCSI drives.  This
may be true, especially on a lightly loaded net - if so, just run the
systems diskless.

> (6)	Does anybody make a removable media drive, like the Syquist
> 	44-megabyte cartridge drive, for the DS3100?

Anything SCSI may work, but you'll probably have to try it to test for
compatibility before buying.  Small removable-media hard drives are
still of questionable reliability.
-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

kjj@varese.UUCP (Kevin Johnson) (01/14/90)

In article <3949@jhunix.HCF.JHU.EDU> barrett@jhunix.HCF.JHU.EDU (Dan Barrett) writes:
>
>	I may be managing a network of DECstation 3100's running Ultrix in
>the near future.  I have been managing VAXen for a long time, but never a
>network of workstations.  So, I have some questions:
>
>(1)	How do you handle inter-machine superuser privileges?
>
>	I do NOT want to put "root" in /.rhosts -- this is a big security
>	risk, right?

Risk?  It depends on the permissions you have on / and /.rhosts

>(2)	How do you do transparent backups?  I want to pop a tape in ONE
>	tape drive and say "Back up ALL files from ALL workstations onto
>	this tape."
>
>	Suppose I dedicate one workstation as the "main node", mount all
>	other workstation disks on the main node using NFS, and then back it
>	up.  This should work...?  But don't I have to worry about
>	inter-machine superuser privileges?  After all, we want to back up
>	EVERY file from EVERY machine.

I'll leave this one for someone else to answer.  I have a system operator
that goes around and swaps tapes every morning - so I don't worry about it :-)

>(3)	We'd like all users to have accounts on all workstations.  What's
>	the best way to maintain an inter-machine password file?  I've
>	heard vaguely of "yellow pages" but have never used it.

Yellow Pages is a berkism.  I do SysV, so I'll leave that one to someone else.

>(4)	We'd like a system where the entire network appears to each user as
>	if it were one huge "machine".  A user would log onto this "machine"
>	and not care which workstation s/he were actually using.  (Maybe the
>	"machine" would automatically log the user onto the workstation with
>	the lightest system load.  I've seen this done with VMS systems at
>	other schools.)  Can this entire scheme be done?  Transparently?

I don't know about 'one big machine'...
How about setting up a mount point for each filesystem that contains info
'needed' by the users, e.g. /mach1 /mach2 /mach3?
This would allow the users to access this information while still retaining
the concept that those files actually reside on another machine.
I mention this because I've had several aggravating experiences where
'users' got so insulated from the physical configuration underlying the
logical configuration that they literally had no idea that they were
going over the network to retrieve files.  I'm sure there are folks who
disagree with me (send all flames to /dev/null).  Maybe you won't have a
problem with this...  Maybe you will...  I like identifying resources that
have that kind of availability so that at least there is something there
to tell users that they are tickling the wire.  Once set up in this manner,
a simple memo describing the directory-name nomenclature will suffice for
informing users that: a) they are living in a logical architecture that has
physical-architecture ramifications, and b) they can determine when they
might be crossing that boundary between using the net and abusing it.

>(5)	Should we put disks on every workstation, or have one fileserver and
>	many diskless workstations?  Which is better?  Easier to maintain?
>
>	My idea is to have one or two fileservers, make the other
>	workstations use NFS, but put a small disk on each workstation for
>	swapping only.  Good?  Bad?  What's better?

It depends on your usage profile...

AS ALWAYS:
#include <standard_disclaimer.h>

scs@iti.org (Steve Simmons) (01/14/90)

barrett@jhunix.HCF.JHU.EDU (Dan Barrett) writes:

>	I may be managing a network of DECstation 3100's running Ultrix in
>the near future.  I have been managing VAXen for a long time, but never a
>network of workstations.  So, I have some questions:

[[and goes on to ask a host of questions indicating he's really thought
  about the problems.]]

As a pre-emptive strike, you should immediately order a copy of the
back proceedings of the USENIX Large Installation and Systems Admin
conference proceedings.  There have been three, order them from the
Usenix Association, 2650 Ninth St, Suite 215, Berkeley, CA, 94710.
Lisa I is $4.00, Lisa II is $8.00, Lisa III is $13.00.  I and II are
photocopy only, III is bound.  I believe there will be a Lisa IV
in Monterey in 1990, but I've seen no formal announcement.  The LISA
proceedings aren't perfect by any means, but you can get a wide
variety of ideas and contacts.

Also of good use is the newly-formed and still incomplete UNIX admin
archives.  It's still under construction, but if you can anon ftp
to terminator.cc.umich.edu, look under ~ftp/unix/sysadm.  We will
be keeping lots of odds and ends relating to sysadm stuff there,
see the readme file for details.  Anon uucp will be allowed fairly
soon.  Full details when it's all set up -- maybe another week.

>(2)	How do you do transparent backups?  I want to pop a tape in ONE
>	tape drive and say "Back up ALL files from ALL workstations onto
>	this tape."

Aside from security issues, where the heck are you going to get a
tape that big?

>(3)	We'd like all users to have accounts on all workstations.  What's
>	the best way to maintain an inter-machine password file?  I've
>	heard vaguely of "yellow pages" but have never used it.

YP will work but isn't real wonderful.  You might also look into
Hesiod from Project Athena.

>(4)	We'd like a system where the entire network appears to each user as
>	if it were one huge "machine".  A user would log onto this "machine"
>	and not care which workstation s/he were actually using.  (Maybe the
>	"machine" would automatically log the user onto the workstation with
>	the lightest system load.  I've seen this done with VMS systems at
>	other schools.)  Can this entire scheme be done?  Transparently?

Yes, it can be done -- I did it with an all-Sun network at Schlumberger
(170 nodes) and will have it done in another month or so at ITI.  I will
use YP for login IDs, BIND for all other services.  The file systems will
be set up such that all users' homes are in '/home/<hostname>[1-n]/user'.
This gives a consistent network view from everywhere.  Since you're all
3100s, architectural and operating system differences aren't a problem.
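As a sketch, the cross-mounts behind a '/home/<hostname>/user' scheme might look like this on each client (generic BSD-style fstab syntax for illustration -- Ultrix's own fstab format differs, see fstab(5); server and path names invented):

```
# Each client mounts every server's home partition at the same
# place, so /home/<hostname>/user paths are valid network-wide.
server1:/home/server1   /home/server1   nfs   rw   0  0
server2:/home/server2   /home/server2   nfs   rw   0  0
```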

>(5)	Should we put disks on every workstation, or have one fileserver and
>	many diskless workstations?  Which is better?  Easier to maintain?

I like 'dataless' nodes -- enough disk so that the system can boot
itself and swap (the / partition + swap, basically).  This cuts 90% of
the writes the file servers must do, optimising their performance.
Unfortunately DEC doesn't yet support booting from the 100MB internals.
So get swap disks, and when DEC supports booting from them, convert to
dataless.

scs@iti.org (Steve Simmons) (01/14/90)

grr@cbmvax.commodore.com (George Robbins) writes:

>In article <3949@jhunix.HCF.JHU.EDU> barrett@jhunix.HCF.JHU.EDU (Dan Barrett) writes:

>> (1)	How do you handle inter-machine superuser privileges?
>> 	I do NOT want to put "root" in /.rhosts -- this is a big security
>> 	risk, right?

>Don't unless you can control physical access to the hardware or are operating
>in an intentionally un-secure mode.  It is may be an acceptable risk /
>convenience if you have a coule of servers in a secure area.

We do something similar: all 'secured' machines (file servers and time-
shared systems in the machine room) have mutual .rhost entries.  All
other systems have the secured machines in their entries, but no others.
In a similar manner, we have 'extremely untrusted' machines on our
net.  We deal with those by not putting them in hosts.equiv, forcing
people to use passwords when accessing central systems.  Prevents rcp
and rsh too.
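A sketch of that layered-trust setup, with all hostnames invented:

```
# /.rhosts on an ordinary workstation: trust only the secured
# machines (file servers in the machine room), nothing else.
server1.example.edu   root
server2.example.edu   root

# /etc/hosts.equiv on the central systems: list the trusted
# machines, and simply leave the 'extremely untrusted' hosts
# out, so their users must give passwords (and rcp/rsh from
# them fail without one).
server1.example.edu
server2.example.edu
ws1.example.edu
```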

>> 	My idea is to have one or two fileservers, make the other
>> 	workstations use NFS, but put a small disk on each workstation for
>> 	swapping only.  Good?  Bad?  What's better?

>Another [religious issue].  If you can afford to, put at least a swap disk
>on each system
>and/or a root/swap/var disk(s) on each one and let the fileserver serve
>files and not handle swapping or booting.  Some people will tell you that
>network stuff is faster that low performance built-in SCSI drives.  This
>may be true, especially on a lightly loaded net - if so, just run the
>systems diskless.

I have settled this issue to my satisfaction by experiment, and can
definitively say "it depends"  :-).  A remote swap area (on a file server)
is usually faster than an internal *for one workstation*.  That's
because the file server disk (sync SCSI, SMD, RA, whathaveyou) is
faster than the internal disk even with the 'loss' of network access.
As the number of workstations and/or the amount of swapping on each
increases, eventually the server becomes overloaded.  We empirically
determined that 3 Sun 3/50s in heavy swap state could swamp a Sun 3/{1,2}60
file server using Fujitsu 2361 disks and Xylogics 451 controllers.  The
limiting factors are the disk and the controllers, not the CPU.  So the
answer will depend on your local configurations and usage pattern.

bjaspan@athena.mit.edu (Barr3y Jaspan) (01/14/90)

Warning!  This is a long-ish article.

In article <3949@jhunix.HCF.JHU.EDU>, barrett@jhunix.HCF.JHU.EDU (Dan
Barrett) writes:
> 
> 	I may be managing a network of DECstation 3100's running Ultrix in
> the near future.  I have been managing VAXen for a long time, but never a
> network of workstations.  So, I have some questions:

Reading your message, it sounds like you are trying to set up *EXACTLY*
what MIT's Project Athena has already set up.  I'll see if I can address
each of your points and explain Project Athena's solution.

> 
> (1)	How do you handle inter-machine superuser privileges?
> 
> 	I do NOT want to put "root" in /.rhosts -- this is a big security
> 	risk, right?

Athena uses the Kerberos Authentication System for authentication and
authorization (which, by the way, is a "bug free" authentication system,
at least in its abstract form (i.e. the implementation may have bugs)).
Each host can have a file called ".klogin" that says which users can log
in to the host as root.  So, if I have already proved to Kerberos that I
really am "bjaspan@athena.mit.edu" and the machine FOOBAR.MIT.EDU has
me in its .klogin file, then I can log in there as root.
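For illustration, the .klogin file on FOOBAR.MIT.EDU granting that access might contain a principal entry like the one below (Kerberos V4 name.instance@REALM syntax, from memory -- check the Kerberos documentation before relying on it):

```
# /.klogin on FOOBAR.MIT.EDU: principals allowed to log in as root.
bjaspan.root@ATHENA.MIT.EDU
```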

> 
> (2)	How do you do transparent backups?  I want to pop a tape in ONE
> 	tape drive and say "Back up ALL files from ALL workstations onto
> 	this tape."
> 
> 	Suppose I dedicate one workstation as the "main node", mount all
> 	other workstation disks on the main node using NFS, and then back it
> 	up.  This should work...?  But don't I have to worry about
> 	inter-machine superuser privileges?  After all, we want to back up
> 	EVERY file from EVERY machine.
>
> (5)	Should we put disks on every workstation, or have one fileserver and
> 	many diskless workstations?  Which is better?  Easier to maintain?
> 
> 	My idea is to have one or two fileservers, make the other
> 	workstations use NFS, but put a small disk on each workstation for
> 	swapping only.  Good?  Bad?  What's better?
> 

These two are related, so I'll group them.  Athena has set up a number
of fileservers (which use Kerberos authentication to make sure they only
give files  to the right people) to store files.  Each "public
workstation" (of which there are approximately 1000 at the moment) has a
small hard disk which contains enough software for the machine to
function (many of the standard "system files" are also stored on
fileservers) and some local scratch space on a partition called "/site".
Filesystems from the fileservers are then mounted on the local disk
using NFS (and we are also currently experimenting with AFS, the Andrew
File System).

The backup problem then becomes easier.  You don't have to backup the
workstations at all, because no working files are stored there.  You
only have to backup the fileservers, and there is some limited number of them.

> 
> (3)	We'd like all users to have accounts on all workstations.  What's
> 	the best way to maintain an inter-machine password file?  I've
> 	heard vaguely of "yellow pages" but have never used it.
> 

Project Athena's solution to this problem is called "Hesiod", which is a
simple network database containing information about each user.  At
Athena, for example, Hesiod stores the following information (among others):

    PASSWD: bjaspan:*:9123:101:Barr3y Jaspan,,E40-342,34261,59604:/mit/bjaspan:/bin/csh
    FILSYS: AFS /afs/athena.mit.edu/user/b/bjaspan w /mit/bjaspan
     POBOX: POP ATHENA-PO-2.MIT.EDU bjaspan

PASSWD is a standard password entry, except that the password field itself
is the special character "*", meaning "ask the user for the password and
use Kerberos to do the authentication."  FILSYS says where my personal
filesystem is (which server, what path on that server, and what the name of
the mountpoint on the local machine should be).  (My homedir is on AFS. 
An NFS entry looks like this:
"NFS /u1/lockers/testuser cyrus w /mit/testuser" -- it contains
essentially the same info.)  POBOX is used by the mailhub to determine
which machine stores my mail until I pick it up (a mail server is
something you didn't mention.  A post office server holds my mail for me
until I call it up and ask for it (using Kerberos to authenticate, of
course).  This way I can read my mail from any workstation.)

> (4)	We'd like a system where the entire network appears to each user as
> 	if it were one huge "machine".  A user would log onto this "machine"
> 	and not care which workstation s/he were actually using.  (Maybe the
> 	"machine" would automatically log the user onto the workstation with
> 	the lightest system load.  I've seen this done with VMS systems at
> 	other schools.)  Can this entire scheme be done?  Transparently?
> 

Well, at Athena, all workstations look essentially the same.  We have
VAXen, IBM RTs, and DECstation 3100s (running BSD4.3, BSD4.3, and
Ultrix, respectively), all running the X window system, and they look
exactly the same to users.  (Well, almost.. :-)  Hesiod allows the
network services to be independent of which workstation a user is
actually logged on to.

One thing Athena doesn't do is the "using the machine with the lightest
load" trick.  The premise of Athena is that a workstation should be used
by a single user at a time (although of course they all support multiple
users) so any empty machine is the same as any other.


> 
> (6)	Does anybody make a removable media drive, like the Syquist
> 	44-megabyte cartridge drive, for the DS3100?
> 

I don't actually know what Project Athena uses to back up its
fileservers (I work for sysdev, not operations).  The Student
Information Processing Board (SIPB), however, has a tape drive made by
Exabyte that stores 2.2 GIGAbytes on a tape (we buy the tapes at our
local Tower Records store, and I think they're about $8 apiece.. most
people around here buy Sony.)  We are currently running the Exabyte off a
VSII, but I think that is because we are using it to back up the SIPB
AFS cell, and the AFS software doesn't yet work on the DS3100 (we
actually have DS3100's in our office as well), but I seem to recall
hearing someone say that the drive does actually work with that machine.
Incidentally, Exabyte has announced that they will be releasing a drive
that can store 5 gigabytes (instead of 2.2) on the same tape by the end
of this year.

I have no connection to Exabyte Corp.


> 	Thanks very much for your advice!
> 
>                                                         Dan

You're very welcome.  The best part about all this information that I've
given you is that the software to run it is FREE.  You can get Kerberos
and Hesiod by anonymous FTP from athena-dist.mit.edu.  There are also
things you didn't mention, like "How do users communicate with each
other?"  The answer is the Zephyr Notification Service, which you can
also get from athena-dist.  With all these services, you need a
service-management system; we have that too -- it's called Moira, and
you can FTP it from.. you get the idea.  (Have you ever heard of a
networked conference system called "discuss"?  Well...)

This is a rough sketch of how we do things here.  There are people here
more knowledgeable about running a network who could be far more
useful to you.. 


Barry Jaspan, MIT-Project Athena
bjaspan@athena.mit.edu

bph@buengc.BU.EDU (Blair P. Houghton) (01/14/90)

In article <3949@jhunix.HCF.JHU.EDU> barrett@jhunix.HCF.JHU.EDU (Dan Barrett) writes:
>
>	I may be managing a network of DECstation 3100's running Ultrix in
>the near future.  I have been managing VAXen for a long time, but never a
>network of workstations.  So, I have some questions:

Welcome to it.  I have 10 GPXes and 6 vs2000's (waiting for some
3100's on the horizon...) all clustered together, and it's easier
than it looks but harder than it should be...

>(1)	How do you handle inter-machine superuser privileges?
>	I do NOT want to put "root" in /.rhosts -- this is a big security
>	risk, right?

Huge.  Dangerous.  So call uid 0 something bizarre (passwordlike),
remove the word "root" from as many places as you can find, and be happy.

>(2)	How do you do transparent backups?  I want to pop a tape in ONE
>	tape drive and say "Back up ALL files from ALL workstations onto
>	this tape."

rdump never worked for us, either.  We've got things NFS'ed
all over the place (see below) and do dumps on all machines
separately;  this has the con that it's not convenient to
be running all over the building to swap tapes, especially
TK50's :-), but has the pros that dumps happen in parallel
and restores are much quicker.  It also encourages a mixed
full/incremental dump schedule, where a set of filesystems
that have had heavy alterations can be dumped at level 0
while all the others are getting a level 1 or 2.
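A sketch of how such a mixed schedule might be driven from cron; the policy (level 0 Sundays, level 1 Wednesdays, level 2 otherwise) and the disk name are invented for illustration:

```shell
#!/bin/sh
# Pick a dump level from the day of the week.  The schedule and the
# filesystem name are hypothetical; adjust to your own rotation,
# e.g. forcing level 0 for heavily altered filesystems.
level_for_day() {
    case "$1" in
        Sun) echo 0 ;;   # weekly full dump
        Wed) echo 1 ;;   # midweek incremental against the full
        *)   echo 2 ;;   # small daily incrementals
    esac
}

day=`date +%a`
level=`level_for_day $day`
echo "would run: dump ${level}u /dev/rra0g"
```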

>	Suppose I dedicate one workstation as the "main node", mount all
>	other workstation disks on the main node using NFS, and then back it
>	up.  This should work...?  But don't I have to worry about
>	inter-machine superuser privileges?  After all, we want to back up
>	EVERY file from EVERY machine.

It would be hideously slow to do it over NFS, but there's
a keyword to put in the /etc/exports file for NFS that
allows uid 0 to have access to NFS'ed filesystems.  This
allows any uid 0 to have access, not just "root", so it's also
rotten security.
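For illustration, an exports fragment granting that remote-root access might look like this (SunOS-style option syntax shown; Ultrix spells the options differently, so check your exports(5) -- hostnames here are invented):

```
# /etc/exports: "root=" maps uid 0 from the named host to uid 0
# here instead of to an anonymous uid.  Convenient for central
# backups, weak for security, as described above.
/home   -access=mainnode,root=mainnode
/usr    -access=mainnode,root=mainnode
```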

>(3)	We'd like all users to have accounts on all workstations.  What's
>	the best way to maintain an inter-machine password file?  I've
>	heard vaguely of "yellow pages" but have never used it.

Get it; use it.  It's not hard to start.  Then you maintain
one big /etc/passwd file on the server, a few lines in
/etc/passwd on each of the clients, and save yourself
several minutes of work per change to the password file.
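Concretely, those "few lines" on a YP client are the local system accounts plus the YP escape entry at the end of /etc/passwd (the root password field is elided here):

```
# Tail of /etc/passwd on a YP client.  Local accounts come first;
# the final "+" line tells lookups to consult the YP passwd map
# on the server for everyone else.
root:*:0:1:Operator:/:/bin/csh
daemon:*:1:1::/:
+::0:0:::
```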

>(4)	We'd like a system where the entire network appears to each user as
>	if it were one huge "machine".  A user would log onto this "machine"
>	and not care which workstation s/he were actually using.  (Maybe the
>	"machine" would automatically log the user onto the workstation with
>	the lightest system load.  I've seen this done with VMS systems at
>	other schools.)  Can this entire scheme be done?  Transparently?

Absolutely.  NFS and YP give you this.  Simply have
user-partitions exported to all machines in the cluster.
Then the user is logged onto the workstation he's sitting
at, and occasionally accessing a file on the server.  If
you do it the other way, then _all_ computation is done on
a remote host.  It still appears transparent.  I still get
users at the _end_ of a semester asking me "where's the big
computer that runs all these graphics terminals," usually
just as I'm asking them to take their books off it...:-)

>(5)	Should we put disks on every workstation, or have one fileserver and
>	many diskless workstations?  Which is better?  Easier to maintain?

Having one fileserver means that that one machine takes a lot of load,
so it should be significantly more powerful than the rest of the
machines.

We have fully-exported partitions scattered all over the
cluster, so that, for instance, when programs rummage
through the CAD libraries on one machine they aren't
causing collisions with the main user partition, which is
on another machine.

This also avoids duplication and frees up some disk space,
since each (non-diskless-system) station requires / and
/usr locally, but /usr/local and /usr/spool/mail and
whatnot can be mounted remotely.

>	My idea is to have one or two fileservers, make the other
>	workstations use NFS, but put a small disk on each workstation for
>	swapping only.  Good?  Bad?  What's better?

Depends on what space you need for the stuff you're putting on those
servers; and don't expect any sort of usable performance if you
want to log into one of the serving stations, if it has more than
half your stuff on it...

Looking again, I've got perhaps four stations that have any
major serving to do, and at least 6 (the vs2000's) that
serve nothing, having only the necessities and some swap
space on them.  This works rather well.

>(6)	Does anybody make a removable media drive, like the Syquist
>	44-megabyte cartridge drive, for the DS3100?

I have no idea.

All in all, I'd say that small VAXen work well as local
clusters without any dedicated server to support them,
though we have been unable to do one or two things that
would be much better suited to having a large VAX as a
central server; that's probably more a factor of
the sort of work we do, which occasionally requires
installing 100-200Mb CAD tool packages.

				--Blair
				  "Good luck.  You don't need as
				   much of it as you might think."

idallen@watcgl.waterloo.edu (Ian! D. Allen [CGL]) (01/14/90)

We've had no success trying to do software development over NFS.  You
can run applications over NFS, since all they have to do is load over
the net once; but, the compile/edit/test cycle involves too many file
accesses.  A compile or "make" can reference dozens of files, and the
network overhead getting at each file was too much.  We gave it up and
don't use NFS for much more than moving things from far away to local
disk where we can work on it.
-- 
-IAN! (Ian! D. Allen) idallen@watcgl.uwaterloo.ca idallen@watcgl.waterloo.edu
 [129.97.128.64]  Computer Graphics Lab/University of Waterloo/Ontario/Canada

envbvs@epb2.lbl.gov (Brian V. Smith) (01/15/90)

In article <12938@watcgl.waterloo.edu> idallen@watcgl.waterloo.edu (Ian! D. Allen [CGL]) writes:
< We've had no success trying to do software development over NFS.  You
< can run applications over NFS, since all they have to do is load over
< the net once; but, the compile/edit/test cycle involves too many file
< accesses.  A compile or "make" can reference dozens of files, and the
< network overhead getting at each file was too much.  We gave it up and
< don't use NFS for much more than moving things from far away to local
< disk where we can work on it.

Well, I don't know what kind of machine(s) you have, but on our
modest little Vaxstation II, we often compile the whole X11R4 tree
over NFS from a machine that has enough disk space to store the whole source.

Both the source and object files are stored on the remote NFS server and
there never seems to be too much of a network load because of this.
--
_____________________________________
Brian V. Smith    (bvsmith@lbl.gov)
Lawrence Berkeley Laboratory
I don't speak for LBL, these non-opinions are all mine.

idallen@watcgl.waterloo.edu (Ian! D. Allen [CGL]) (01/15/90)

In article <4624@helios.ee.lbl.gov> envbvs@epb2.lbl.gov (Brian V. Smith) writes:
>Both the source and object files are stored on the remote NFS server and
>there never seems to be too much of a network load because of this.

We don't have a problem with network load either.  The compiles and such
just take twice as long over NFS as doing them right on the local disk.

    Local machine: VS3200/16Mb/RD54/Ultrix 3.0
    Server: VAX8600/32Mb/RA8[12]/4.3 BSD with McGill NFS server
-- 
-IAN! (Ian! D. Allen) idallen@watcgl.uwaterloo.ca idallen@watcgl.waterloo.edu
 [129.97.128.64]  Computer Graphics Lab/University of Waterloo/Ontario/Canada

p576spz@mpirbn.UUCP (S.Petra Zeidler) (01/15/90)

We've got 3 "systemless" DS3100s being served by a VS3100.
We are using Yellow Pages, home is /home/$VAXname/$USERname,
swap and /tmp are on local disc (/tmp to speed up compiling);
I called the RISCs systemless because they have data-discs connected
to them (640 Mb = one data-set) which get moved according to need.
(Would that answer your removable data-carrier problem ? :^} )

System security and look-alike-ness :
      only root is allowed to log in on the VS3100;
      changes to the DS3100's systems are done on the server.

Backup:
      data are to be saved by the users,because only they know when
      it's sensible;
      home and the like is all on the server.

Our net is running fine enough for 3 machines, but our next DS3100s
will be standalone, because the net would go down too much if 6 users
compiled simultaneously (3 users already sync themselves fine, thank you 8^} );
also we might want to run a machine single-user with a really huge job.

I hope this helps you on
			   spz

uunet!unido!mpirbn!spz  or spz@specklec.mpifr-bonn.mpg.de
I represent the opinion of a small blue extraterrestrial I happen to know;
noone elses, not even my own.
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
<>                                                                            <>
<>                        THIS IS THE END ...                                 <>
<>                                                      of this message       <>
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>

faustus@yew.Berkeley.EDU (01/16/90)

> Unfortunately DEC doesn't yet support booting from the 100MB internals.

We're booting from them, and it works fine.  We have /, /var, and swap on
the internal rz22, and /usr on the external drive.

	Wayne

bph@buengc.BU.EDU (Blair P. Houghton) (01/16/90)

In article <12943@watcgl.waterloo.edu> idallen@watcgl.waterloo.edu (Ian! D. Allen [CGL]) writes:
>In article <4624@helios.ee.lbl.gov> envbvs@epb2.lbl.gov (Brian V. Smith) writes:
>>Both the source and object files are stored on the remote NFS server and
>>there never seems to be too much of a network load because of this.
>
>We don't have a problem with network load either.  The compiles and such
>just take twice as long over NFS as doing them right on the local disk.

But that was the original complainant's point; that he was in a
software development environment, where you might want to do
fifty fix-and-compile iterations each day, and having to wait
twice as long would, of course, halve your productivity.

				--Blair
				  "Or worse, if you have a
				   habit of going into the
				   Usenet window while cc
				   is running..."

grr@cbmvax.commodore.com (George Robbins) (01/16/90)

In article <21171@pasteur.Berkeley.EDU> faustus@yew.Berkeley.EDU () writes:
> > Unfortunately DEC doesn't yet support booting from the 100MB internals.
> 
> We're booting from them, and it works fine.  We have /, /var, and swap on
> the internal rz22, and /usr on the external drive.

The deal being that since you can't (easily 8-) build a system on an rz22,
DEC doesn't "support" it and you probably can't do it through the normal
installation scripts...

As long as they don't break booting from generic SCSI, fine...

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)