[comp.unix.admin] Backups

jc@raven.bu.edu (James Cameron) (06/24/91)

System:  4/300 SparcServer (4/380) running 4.1.1

We currently are using 5.5+ GB of data with total disk space
of nearly 10GB.  6+ MB of this is on a remote system.  We
have one 8mm Exabyte tape drive and a 1/4" cartridge tape
drive.  I would like to limit the backups to only the
8mm drive.

My question is: what is the best way to automate the backups?
At this point, I am running shell scripts which dump until end
of tape, at which point I end up switching tapes and starting
the next script, which HAS to begin with the partition that
reached end of tape.  This seems like a great waste of time
and tape, etc.
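(For what it's worth, the resume-after-end-of-tape bookkeeping can be
factored out of the scripts themselves.  The sketch below is purely
illustrative — the filesystem list and the idea of recording the last
completed dump are assumptions, not a description of my actual scripts.)

```python
# Sketch of resumable backup sequencing (illustrative only; the
# filesystem list and "last completed" state are assumptions, not
# the poster's actual setup).

def remaining(fslist, last_done):
    """Return the filesystems still to dump, given the last one
    that completed before the tape filled.  A rerun of the driver
    after a tape swap then starts exactly where the last run left off."""
    if last_done is None:
        return list(fslist)          # fresh pass: dump everything
    i = fslist.index(last_done)
    return list(fslist[i + 1:])      # skip everything already on tape

# After /usr was the last dump to finish before end-of-tape,
# a rerun picks up at /home:
fs = ["/", "/usr", "/home", "/var"]
print(remaining(fs, "/usr"))   # -> ['/home', '/var']
```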

Please respond to jc@raven.bu.edu and feel free to mention
both hardware and software (and prices please!)

Thanks!

jc

--
					-- James Cameron  (jc@raven.bu.edu)

Signal Processing and Interpretation Lab.  Boston, Mass  (617) 353-2879
------------------------------------------------------------------------------
"But to risk we must, for the greatest hazard in life is to risk nothing.  For
the man or woman who risks nothing, has nothing, does nothing, is nothing."
	(Quote from the eulogy for the late Christa McAuliffe.)

mjr@hussar.dco.dec.com (Marcus J. Ranum) (06/24/91)

jc@raven.bu.edu (James Cameron) writes:

>My question is what would be automate the backups?  At this
>point, I am running shell scripts which dump until end of
>tape at which point I end up switching tapes [...]

	I have a system that I put together which does a couple of things
at once. A process (which talks the same protocol as rdump/rrestore) can
be started up by the inetd from a client (no need for root on the tapserver!)
and enforces permissions as to who can talk to it. If the client is valid,
it then reads the client's data and writes it to disk someplace (or to a
device, if you don't want to queue stuff). When the write is done, the
client's data is moved into a queue where another process deals with
writing it to tape, and prompting a tape change, etc, if needed. The
tape writer process keeps a bunch of logs about what came from where and
so forth. This is essentially a two-step process, but the advantage is
that multiple clients can all be "dumping" (they can use rdump) at
once if they like, even though there is only one real tape drive. The
system can easily support more than one real tape drive (several
queues) but has no support for switching queues. Since more than one
client can be writing arbitrarily large images to the queueing file
system, another process (optionally) can be configured that just keeps
an eye on the high-water for that filesystem, and tells the input
readers to pause, if the filesystem overflows past a certain point,
until the tape queuer catches up.
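	The flow of that two-stage design can be modeled in a few lines
(this is a simplified sketch of the spool-then-queue idea, not mjr's
actual code; the directory layout and high-water threshold here are
assumptions):

```python
# Simplified model of the two-stage spool/queue scheme described
# above: readers land complete images in a spool, hand them to a
# queue, and a watcher pauses readers when the queue fills.
import os
import tempfile

SPOOL = tempfile.mkdtemp()    # where incoming client dumps are written
QUEUE = tempfile.mkdtemp()    # where the tape-writer process looks
HIGH_WATER = 3                # pause readers past this many queued images

def accept_dump(name, data):
    """Reader side: write the client's image to the spool, then move
    it into the queue only once the write has fully completed."""
    path = os.path.join(SPOOL, name)
    with open(path, "wb") as f:
        f.write(data)
    os.rename(path, os.path.join(QUEUE, name))   # atomic hand-off

def readers_should_pause():
    """Watcher side: signal input readers to wait for the tape
    writer to catch up when the queue passes high water."""
    return len(os.listdir(QUEUE)) >= HIGH_WATER

def write_oldest_to_tape():
    """Tape-writer side: drain the queue oldest-first.  Deleting the
    file stands in for writing it to the real tape device."""
    entries = sorted(os.listdir(QUEUE),
                     key=lambda n: os.path.getmtime(os.path.join(QUEUE, n)))
    if entries:
        os.unlink(os.path.join(QUEUE, entries[0]))
```

The point of the spool/queue split is that many clients can be in the
`accept_dump` phase at once while a single writer drains the queue.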

	I plan to put this code out on the net once it's a little
more hammered on and documented - right now the only documentation
is in the code comments. :) I also *plan* to write some trivial
client that will talk to the remote tape writer and give a gauge
of how full it is, etc. It's all fairly simple code, very configurable,
and a lot of the "site policy" (like how to notify that the tape drive
is full) is done by making callouts to shell scripts to permit ease
of modification. The rmtaped can be used as a replacement for /etc/rmt
and can be used to provide a simple permissions/redirection layer for
rdump requests. There's a trivial client "rmtwrite" which just dumps arbitrary
stuff to the tape server, so that a user could back up stuff with a
tar image or whatever whenever they wanted. Having a bunch of workstations
all pumping images to a single workstation over the net will load it
down pretty heavily, but it gets the pain over a lot faster than doing
each client in sequence. The assumption is that you have a fair sized
chunk of disk you can dedicate (600M?) depending on how many clients
you have - there's absolutely no support for "backup policy" as far
as what types of dumps get done, when.
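	The shell-script callout idea mentioned above might look roughly
like this (a sketch only — the hook directory and event names here are
hypothetical, not the names mjr's code actually uses):

```python
# Sketch of "site policy via shell script callouts": the daemon runs
# a site-supplied script for each event, and silently does nothing if
# the site hasn't installed one.  Paths and event names are made up
# for illustration.
import subprocess

def notify(policy_dir, event, *args):
    """Run the site's hook script for an event, e.g.
    notify("/usr/local/lib/rmtaped", "tape-full", "drive0").
    A missing hook means the site accepts the default (no action)."""
    script = "%s/%s" % (policy_dir, event)
    try:
        subprocess.run([script] + list(args), check=True)
    except FileNotFoundError:
        pass    # no hook installed for this event; carry on
```

Keeping policy in external scripts means a site can change how it is
notified (mail, console message, whatever) without touching the daemon.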

	Would there be enough interest in this code to warrant writing
some documentation? If anyone wants it "as is" to peer at, let me know.
I'll document it some day, after I get the 200 other things I should be
documenting documented. :)

mjr.