[comp.unix.ultrix] dump on old 1.2 Ultrix...

tjs@wuee1.wustl.edu (tom sullivan) (07/30/89)

we're still running old Ultrix 1.2 on our uVAX-II/GPX. the windowing
system (X11/Decwindows) with 3.0 just messed us up more than anything
else. my biggest complaint with 1.2 is the speed (or lack thereof) of
dumps. is there an optimal setting (density, length, blocking factor, etc.)
to do dumps onto a TK50 drive? as it is, it takes over 6 hours to perform
a level 0 dump of an 80 Meg partition.

thanks,

tom sullivan
washington university in st. louis
department of electrical engineering
tjs@wuee1.wustl.edu  or uunet!wucs1!wuee1!tjs

grr@cbmvax.UUCP (George Robbins) (07/30/89)

In article <412@wuee1.wustl.edu> tjs@wuee1.wustl.edu.UUCP (tom sullivan) writes:
> we're still running old Ultrix 1.2 on our uVAX-II/GPX. the windowing
> system (X11/Decwindows) with 3.0 just messed us up more than anything
> else. my biggest complaint with 1.2 is the speed (or lack thereof) of
> dumps. is there an optimal setting (density, length, blocking factor, etc.)
> to do dumps onto a TK50 drive? as it is, it takes over 6 hours to perform
> a level 0 dump of an 80 Meg partition.

Ultrix dump(1) supports an undocumented -b switch that lets you specify
the blocking factor for the dump.  Setting this so that it generates the
largest blocksize that the controller/driver supports should minimize
wasted tape and time.  You need to verify *most* carefully that the
blocksize you use isn't too big and silently generating bogus tapes!!!

There is a problem with this - the Ultrix restore(1) utility doesn't support
a corresponding -b option, so you need to use dd(1) to deblock the tape on
input, which *won't* work if it's a multi-volume dump!!!  The alternative
is to use the 4.3 BSD restore utility which supports the -b switch.
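
Something along these lines (just a sketch - the device, blocksize, and
filesystem are assumptions, and it presumes dump's -b counts 1024 byte
blocks and a restore that reads stdin via `f -', which the 4.3 BSD one
does; test on scratch tapes first):

/etc/dump 0ubf 32 /dev/nrmt0h /usr	# write with 32K records

# read back: dd deblocks into the default 10K records restore expects
mt -f /dev/nrmt0h rew
dd if=/dev/nrmt0h ibs=32k obs=10k | restore tf -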

Another alternative, if you have access to a 4.3 system, is to snarf the
4.3 BSD dump/restore binaries and use them.  They work well, are somewhat
faster, and -b is documented for both dump and restore.

If you have enough disk space, you might find it quicker to do the various
dumps to a temporary output file and dd the output file to the TK50.  Dump
and streaming tape drives are poorly matched, since dump may have to go
jerking all over the disk drive to collect a file, while the tape drive is
merrily spinning away or playing '1 step forward, 2 steps back'.  Doing the
dd from a sequential file to tape should be as close to optimum as you're
going to get.
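
For example (a sketch - the partition, scratch file, and device names
are assumptions, and you need scratch space for the whole dump image):

/etc/dump 0uf /usr/tmp/dumpimage /dev/rra0g	# dump to disk first
dd if=/usr/tmp/dumpimage of=/dev/nrmt0h bs=10k	# then stream it to tape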

A final note: you don't really have to do level 0 dumps very often at
all, the only real need being when the incremental dumps start taking
painfully long, won't fit on one reel/cartridge, or you muck with the
filesystems.

The dump(1) documentation is baroque and confusing, but the bottom line is
that the simple mode is to do level 0 dumps occasionally and only one flavor
of level x incremental dumps daily.  Since each incremental represents all
the changes since the last level 0, there is no particular requirement to
keep all the incrementals back to the last level 0.  As insurance, you
should do (and keep) an incremental immediately before each level 0, so
that if the level 0 turns out to be defective, the previous level 0 and the
"extra" incremental give you the equivalent file state as a life-saver.

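In command terms the simple scheme might look like this (a sketch - the
level number, filesystem, and device names are assumptions):

/etc/dump 5uf /dev/nrmt0h /usr	# daily: one flavor of incremental, each
				# covering everything since the last level 0
/etc/dump 5uf /dev/nrmt0h /usr	# the "extra" incremental - keep this one
/etc/dump 0uf /dev/nrmt0h /usr	# then the occasional new level 0
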
If your activity level is high enough to warrant intermediate level
incremental backups, the same concept of completing the set of "extra"
incrementals gives you the needed redundancy, though your life becomes
more complex.  Theoretically, in either case, barring multiple bad tapes,
you can lose at most the changes from the prior incremental to the time
of the disaster, even if you get hit with one bad tape.

You *are* doing those dumps in single user mode and/or with the file
system dismounted aren't you?  If not, the "completeness" of the dump
becomes a statistical issue and there's a chance you'll have to do
massive fiddling if a (level 0 especially) dump is inconsistent due
to things changing while the dump is running.  I generally do the
level 0 dumps in single user mode or by dismounting filesystems when
I can and do the incrementals in multi-user mode.  This isn't "right",
but the risk seems acceptable in comparison to the pain of shutting
down every day.

BIG DISCLAIMER:

The above represents my understanding of the "way dump(1) really works" and
it is possible I am either (1) seriously confused or (2) f**ked in the head.
Consult your local man pages and gurus before implementing backup schemes
different from the (hopefully) correct, but (perhaps) suboptimal scheme you
may presently be using.  Questions and corrections appreciated.

PS:  Gnu-tar can also do incremental backups, and the tapes created from
     a gnu-tar dump of an active file system are inherently more restorable
     than those from a dump(1) of an active file system.  This doesn't
     necessarily mean that they have better data integrity under this
     circumstance, just that the restore will be less prone to blow off.

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

davew@gvgpsa.GVG.TEK.COM (David C. White) (07/30/89)

In article <412@wuee1.wustl.edu> tjs@wuee1.wustl.edu.UUCP (tom sullivan) writes:
>my biggest complaint with 1.2 is the speed (or lack thereof) of
>dumps. is there an optimal setting (density, length, blocking factor, etc.)
>to do dumps onto a TK50 drive? as it is, it takes over 6 hours to perform
>a level 0 dump of an 80 Meg partition.

I think you are out of luck.  If my memory serves me correctly, the
driver was rewritten to drive the TK50 in streaming mode in 2.0.
The driver in 1.X couldn't get data to the drive fast enough to
keep it going in streaming mode, so it runs in start/stop mode,
which explains why it takes so long to do even a small dump.

It may be possible to grab the driver out of the 3.0 you seem to
imply that you have, but I haven't really looked into whether
this is possible or not.  Another possibility is to see if you
can find a way to get a 2.3 version if you really don't want the
3.0 features.
-- 
Dave White	Grass Valley Group, Inc.   VOICE: +1 916.478.3052
P.O. Box 1114  	Grass Valley, CA  95945    FAX: +1 916.478.3887
Internet: davew@gvgpsa.gvg.tek.com     UUCP:  ...!tektronix!gvgpsa!davew

karish@forel.stanford.edu (Chuck Karish) (07/30/89)

In article <412@wuee1.wustl.edu> tjs@wuee1.wustl.edu.UUCP (tom sullivan) wrote:
>we're still running old Ultrix 1.2 on our uVAX-II/GPX.

>is there an optimal setting (density, length, blocking factor, etc.)
>to do dumps onto a TK50 drive? as it is, it takes over 6 hours to perform
>a level 0 dump of an 80 Meg partition.

As George Robbins points out, dump accepts the `-b' flag.  The maximum
value for this flag is 126, as in 126 blocks or 63K.  This works for
tar, too, and speeds up the TK50 a lot.  I don't know whether values
between 20 (the documented maximum for tar) and 126 work.  Don't forget
to specify `-b 126' when you restore an archive.
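
For instance (a sketch - the device name is an assumption; verify a
read-back on a scratch tape before trusting real backups to it):

tar cvbf 126 /dev/rmt0h /usr/src	# 126 * 512 bytes = 63K records
tar tvbf 126 /dev/rmt0h			# read back with the same factor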

	Chuck Karish		{decwrl,hpda}!mindcrf!karish
	(415) 493-9000		karish@forel.stanford.edu

grr@cbmvax.UUCP (George Robbins) (07/31/89)

In article <4082@portia.Stanford.EDU> karish@forel.stanford.edu (Chuck Karish) writes:
> In article <412@wuee1.wustl.edu> tjs@wuee1.wustl.edu.UUCP (tom sullivan) wrote:
> >we're still running old Ultrix 1.2 on our uVAX-II/GPX.
> 
> >is there an optimal setting (density, length, blocking factor, etc.)
> >to do dumps onto a TK50 drive? as it is, it takes over 6 hours to perform
> >a level 0 dump of an 80 Meg partition.
> 
> As George Robbins points out, dump accepts the `-b' flag.  The maximum
> value for this flag is 126, as in 126 blocks or 63K.  This works for
> tar, too, and speeds up the TK50 a lot.  I don't know whether values
> between 20 (the documented maximum for tar) and 126 work.  Don't forget
> to specify `-b 126' when you restore an archive.

Does -b 126 really work with TK50's?  I get the feeling you're generalizing
from Suns, rather than DEC stuff.  My recollection is that -b 63 is the
biggest that will work with traditional (massbus TU78) drives, and there
are blocksize limitations described for the PMAX SCSI TK50 that would
limit you to -b 32.  I advise careful testing and verification before
getting carried away with numbers...
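
For example (a sketch - the device name and blocksize are assumptions):

# read a test dump back through the same blocksize; short record
# counts or I/O errors here mean a bogus tape
mt -f /dev/nrmt0h rew
dd if=/dev/nrmt0h ibs=64k of=/dev/null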

Note that as of 3.1, Ultrix restore still does not support the -b switch.
I really wish they would update their porting base for both dump/restore
and UUCP to the current 4.3 or Tahoe versions instead of sticking with
the antique and feature-lacking versions.  Even HoneyDanBer would be
better than the current uucp...

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

grr@cbmvax.UUCP (George Robbins) (08/01/89)

In article <1226@gvgpsa.GVG.TEK.COM> davew@gvgpsa.gvg.tek.com (David C. White) writes:
> In article <412@wuee1.wustl.edu> tjs@wuee1.wustl.edu.UUCP (tom sullivan) writes:
> >my biggest complaint with 1.2 is the speed (or lack thereof) of
> >dumps. is there an optimal setting (density, length, blocking factor, etc.)
> >to do dumps onto a TK50 drive? as it is, it takes over 6 hours to perform
> >a level 0 dump of an 80 Meg partition.
> 
> I think you are out of luck.  If my memory serves me correctly, the
> driver was rewritten to drive the TK50 in streaming mode in 2.0.
> The driver in 1.X couldn't get data to the drive fast enough to
> keep it going in streaming mode, so it runs in start/stop mode,
> which explains why it takes so long to do even a small dump.

Maybe one of the DEC folk could clarify this? 

> It may be possible to grab the driver out of the 3.0 you seem to
> imply that you have, but I haven't really looked into whether
> this is possible or not.  Another possibility is to see if you
> can find a way to get a 2.3 version if you really don't want the
> 3.0 features.

Drivers are generally not binary transportable across major releases, and
the 3.0 stuff especially, since kernel-level memory allocation stuff changed.

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

alan@shodha.dec.com ( Alan's Home for Wayward Notes File.) (08/01/89)

	In Ultrix V2.0 a feature was added to the drivers
	of many character special devices (disk and tape).
	To quote the man page title (nbuf(4)):

		nbuf - select multiple-buffer operation to a 
			raw device 

	Dump(8), tar(1) and dd(1) were modified to take advantage
	of this feature.  I'm not sure about cpio and ltf.  This
	is the driver change that was mentioned in a previous
	posting.  By itself it didn't do anything, but it made it
	possible for selected utilities to get better performance
	from streaming tape drives.  It's also possible to use it
	with disk devices.

	Before this feature was added, one very tacky trick I used
	to make a TK50 stream was to write a filter to sit between
	dump(8) and the tape drive.  The filter had a VERY large
	buffer (4+ MB) that it would fill from stdin.  When the
	buffer was full it would start writing it to stdout (usually
	pointed at the tape drive).  Writing from memory was fast
	enough to make the TK50 stream.  The program didn't deal with
	dumps that would require more than one tape, but with a little
	work (SMOP) that could probably be added.
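
	You can fake part of this with plain dd(1) and a big output
	blocksize (a sketch - the names are assumptions, and it
	presumes a dump that writes to stdout via `f -'; dd only
	coalesces records into bigger writes, it can't do the real
	filter's fill-4-MB-then-write trick):

	/etc/dump 0uf - /dev/rra0g | dd obs=64k of=/dev/nrmt0h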

	Another tape feature added to V2.0 was end-of-tape detection.
	Rather than just getting an I/O error and not knowing what to
	do with it, programs can now find out more information from the
	driver.  If you know the tape is at EOT it's relatively easy
	to ask for another tape.  Dump(8), tar(1) and dd(1) were
	changed to take advantage of this feature.  If you give a
	long tape length to dump it will use all the tape and do the
	right thing when it gets to the end.  If you leave it to its
	estimates it can be wrong and waste some tape.
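
	For example (a sketch - the length, device, and filesystem
	names are assumptions):

	# deliberately overstate the length and let the V2.0+ driver's
	# EOT detection find the real end of the cartridge
	/etc/dump 0usf 15000 /dev/nrmt0h /usr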

karish@forel.stanford.edu (Chuck Karish) (08/01/89)

In article <7484@cbmvax.UUCP> grr@cbmvax.UUCP (George Robbins) wrote:
>In article <4082@portia.Stanford.EDU> karish@forel.stanford.edu
>(Chuck Karish) writes:
>> In article <412@wuee1.wustl.edu> tjs@wuee1.wustl.edu.UUCP
>> (tom sullivan) wrote:
>> >we're still running old Ultrix 1.2 on our uVAX-II/GPX.

>> >is there an optimal setting (density, length, blocking factor, etc.)
>> >to do dumps onto a TK50 drive.

>> As George Robbins points out, dump accepts the `-b' flag.  The maximum
>> value for this flag is 126, as in 126 blocks or 63K.  This works for
>> tar, too, and speeds up the TK50 a lot.

>Does -b 126 really work with TK50's?  I get the feeling you're generalizing
>from Suns, rather than DEC stuff.  My recollection is that -b 63 is the
>biggest that will work with traditional (massbus TU78) drives, and there
>are blocksize limitations described for the PMAX SCSI TK50 that would
>limit you to -b 32.


	A uVAX-II/GPX has neither a Massbus nor a SCSI bus, and
	`-b 126' works.  I used it for a tar backup to TK50 on a
	uVAX-II/GPX running 3.0, last week.  It's been a while since I
	used the flag on a 1.2 system, but it worked then, too.  I'm
	not absolutely sure that it worked on dump/restore, but I think
	it did.



	Chuck Karish		{decwrl,hpda}!mindcrf!karish
	(415) 493-9000		karish@forel.stanford.edu

grr@cbmvax.UUCP (George Robbins) (08/01/89)

In article <4113@portia.Stanford.EDU> karish@forel.stanford.edu (Chuck Karish) writes:
> In article <7484@cbmvax.UUCP> grr@cbmvax.UUCP (George Robbins) wrote:
> >In article <4082@portia.Stanford.EDU> karish@forel.stanford.edu
> 
> >Does -b 126 really work with TK50's?  ...
> 
> 	A uVAX-II/GPX has neither a Massbus nor a SCSI bus, and
> 	`-b 126' works.  I used it for a tar backup to TK50 on a
> 	uVAX-II/GPX running 3.0, last week.  It's been a while since I
> 	used the flag on a 1.2 system, but it worked then, too.  I'm
> 	not absolutely sure that it worked on dump/restore, but I think
> 	it did.

I don't really want to fight about this - my purpose was simply to issue
sufficiently dire warnings that people wouldn't start spewing out big
blocksize dumps, and then be embarrassed at one of those critical moments
when you find that none of your recent dump tapes seem to read back in...

The current Ultrix dump/restore and tar programs seem pretty good about
blocksize; other versions have been known to *silently* generate unreadable
tapes.  The 1.2 versions can be assumed to be "closer" in an evolutionary
sense to those bad guys than the 3.x versions.

Anyway, I threw a tape up on a drive here and played around a bit.  It wasn't
a TK50, so the results don't apply to TK50's, though the ideas do.

tar:

tar -b switch specifies blocksize in terms of 512 byte blocks.
values of up to 127 work, blocksize=65024.  program claims that 128
or above is "invalid block size".  blocksize validated with dd.

dump/restore:

dump -b switch specifies blocksize in terms of *1024* byte blocks,
restore has no corresponding switch.  values of up to 64 work,
blocksize=65536.  program gets "write error" on first block if value
greater than 64.  blocksize validated with dd.  dd can be used to
reblock the tape onto restore's stdin for a single volume dump.

notions:

The actual limits here are probably based on Massbus 16-bit byte
count registers, which enforce transfer sizes of 1-65536 bytes.
tar seems to make a software decision to avoid 65536 byte blocks.

Now it's entirely possible that other I/O architectures / controllers
may allow larger transfers.  I don't have access to any of these.
It's also possible for software to be broken when the numbers used are
larger than the programmer considered.   I seem to recall that dd
used to have problems in this regard, though it seems to work fine
now.

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

ggs@ulysses.homer.nj.att.com (Griff Smith) (08/01/89)

In article <7492@cbmvax.UUCP>, grr@cbmvax.UUCP (George Robbins) writes:
...
> The actual limits here are probably based on Massbus 16-bit byte
> count registers, which enforce transfer sizes of 1-65536 bytes.
> tar seems to make a software decision to avoid 65536 byte blocks.
> 
> Now it's entirely possible that other I/O architectures / controllers
> may allow larger transfers.

Large blocks may also cause a problem when doing error recovery.  For instance,
if using 9-track tape at 6250 bpi a 64K block takes about ten inches of tape
(65536 bytes / 6250 bytes per inch = 10.5 inches).  If the device driver
tries to skip over a bad spot on the tape by rewinding over a bad record
and writing a three inch gap, the bad spot may be on the other seven inches.
If the driver gives up after three write attempts, it will never skip over
an error in the last inch of the record.  Larger blocks make the problem
worse.  With a 0.3 inch inter-record gap, 64K blocks are already at 97% of
maximum density for 6250 (10.5 / 10.8), so larger blocks are silly.

Adjust these arguments for newer media, but 64k should usually be a good size;
it's already 8 times larger than the disk block size used by most BSD-based systems.
-- 
Griff Smith	AT&T (Bell Laboratories), Murray Hill
Phone:		1-201-582-7736
UUCP:		{most AT&T sites}!ulysses!ggs
Internet:	ggs@ulysses.att.com

tjs@wuee1.wustl.edu (tom sullivan) (08/01/89)

In article <7487@cbmvax.UUCP> grr@cbmvax.UUCP (George Robbins) writes:
>In article <1226@gvgpsa.GVG.TEK.COM> davew@gvgpsa.gvg.tek.com (David C. White) writes:

>> It may be possible to grab the driver out of the 3.0 you seem to
>> imply that you have, but I haven't really looked into whether

>Drivers are generally not binary transportable across major releases and the
>3.0 stuff especially, since kernel level memory allocation stuff changed.

This is all due to my original posting. I have tried the 3.0 dump, and
George Robbins is correct, the drivers don't work. I have tried a 4.3
dump and it appears to be working well. Dump times for 80 Meg are
down from 6+ hours to about 30 minutes.

thanks for all the suggestions.

tom

grr@cbmvax.UUCP (George Robbins) (08/02/89)

In article <11953@ulysses.homer.nj.att.com> ggs@ulysses.homer.nj.att.com (Griff Smith) writes:
> In article <7492@cbmvax.UUCP>, grr@cbmvax.UUCP (George Robbins) writes:
> ...
> > The actual limits here are probably based on Massbus 16-bit byte
> > count registers, which enforce transfer sizes of 1-65536 bytes.
> > tar seems to make a software decision to avoid 65536 byte blocks.
> > 
> > Now it's entirely possible that other I/O architectures / controllers
> > may allow larger transfers.
> 
> Large blocks may also cause a problem when doing error recovery.  For instance,
> if using 9-track tape at 6250 bpi a 64K block takes about ten inches of tape.
> If the device driver tries to skip over a bad spot on the tape by rewinding
> over a bad record and writing a three inch gap, the bad spot may be on the
> other seven inches.  If the driver gives up after three write attempts, it
> will never skip over an error in the last inch of the record.  Larger blocks
> make the problem worse.  64k blocks are already at 97% of maximum density for
> 6250, so larger blocks are silly.
> 
> Adjust these arguments for newer media, but 64k should usually be a good size;
> it's already 8 times larger than the disk block size used by most BSD-based
> systems.

This is basically true.  The ability to use large blocks reliably depends
on the "quality" or error density of the drive/media combination.  In a
"DP" environment, tapes are heavily cycled and do wear out.  In the
unix backup environment, where the "big" backup tapes might be cycled
weekly or monthly, there doesn't seem to be any problem with 64K block
sizes (at least at 6250 BPI; at 1600 BPI you surely want smaller blocks).

The issue is more confused by devices that don't necessarily share the
"variable length block" nature of the traditional tape drives.  Sun
compatible cartridges, for instance, write in 512 byte blocks regardless
of the nominal transfer size, so you can write at bs=126b and read back
at bs=13b if you want to.  Here the urge for large block sizes is to
avoid streaming vs. start/stop tradeoffs via the I/O clustering implicit
in requesting large "block" sizes.  Since error control/recovery is done
on a per hardware "block" basis there's no physical downside to this.
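
You can see this with dd (a sketch - the device is an assumption, the
file names are hypothetical stand-ins, and it only holds for a
fixed-block drive like the Sun cartridges):

dd if=somefile of=/dev/rst0 bs=126b	# write at one nominal size...
dd if=/dev/rst0 of=check bs=13b		# ...read back at another
cmp somefile check			# same bytes either way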

I'm not sure where the TK50 & TK70 fit into this spectrum.  They are
"preformatted", but I don't know the details of the error handling.

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

grr@cbmvax.UUCP (George Robbins) (08/02/89)

In article <413@wuee1.wustl.edu> tjs@wuee1.wustl.edu (tom sullivan) writes:
> In article <7487@cbmvax.UUCP> grr@cbmvax.UUCP (George Robbins) writes:
> >In article <1226@gvgpsa.GVG.TEK.COM> davew@gvgpsa.gvg.tek.com (David C. White) writes:
> 
> >> It may be possible to grab the driver out of the 3.0 you seem to
> >> imply that you have, but I haven't really looked into whether
> 
> This is all due to my original posting. I have tried the 3.0 dump, and
> George Robbins is correct, the drivers don't work. I have tried a 4.3
> dump and it appears to be working well. Dump times for 80 Meg are
> down from 6+ hours to about 30 minutes.

Hey, I'm glad it worked out!  Now if I could get that kind of improvement out
of restore doing a 500 M-byte restore on a new spool, I'd be one happy puppy.
Hopefully all the net fuss has also clarified a few other issues and others
can benefit from tuning their backup procedures.

> >Drivers are generally not binary transportable across major releases and the
> >3.0 stuff especially, since kernel level memory allocation stuff changed.

There are actually several issues here.

I think davew@gvgpsa.gvg.tek.com (David C. White) wanted you to steal the
kernel-level drivers from 3.0 and use them to build your 1.2 kernel.  This
isn't going to work, for the reasons I mentioned.

The other problem is that there is no real guarantee of backwards binary
compatibility between Ultrix X.Y and either earlier Ultrix/BSD releases
or post-4.2 BSD executables.

As part (probably) of the nbuf stuff, dump executes a new "generic"
system call to find out device characteristics.  It looks like if this
fails, it's supposed to act stupid and work anyway, but actually it
just hangs.  This system call doesn't exist in older releases of Ultrix,
and the system call number (and data format?) changed between 2.x and
3.x, making those versions of dump binary incompatible, though the tapes
can certainly be interchanged.

The rdump protocol has also changed through different releases, which can
cause some headaches in a mixed-release Ultrix and/or mixed Ultrix/Sun
environment.  The 3.x rdump seems to talk to everybody else at least.

4.3 Tahoe binaries also diverge from Ultrix, apparently in the general
area of signal handling.  Simple stuff will run, but more sophisticated
programs will blow off.  Some Ultrix 3.0 binaries also get pretty spooky
if you try to run them on either plain 4.3 or 4.3 Tahoe.  Play these
games at your own risk.

I'd like to thank alan@shodha.dec.com for explaining the implications of
the nbuf features.  A while back I was really puzzled as to why the vaunted
4.3 BSD dump wasn't as much faster as it was reputed to be.  It appears
that by adding the nbuf stuff DEC was able to get as much of an asynchronous
I/O effect/improvement as the Berkeley multiple-process 4.3 kludges.

The rational end-of-tape fixes to dump/tar/dd are also very nice.  I had
noticed this with dump but kind of forgotten it, since I use the 4.3 dump
for all my level 0 / multi-volume dumps.  It is a nice win to not have to
specify -s 3450 for a 3600 foot reel of tape and then watch the silly thing
want to put a couple hundred blocks on a third reel.

Maybe we can summarize and put this one to bed:

a) There is nothing particularly wrong with the Ultrix 2.x / 3.x dump
   program (unless you're running 1.2).  It is as fast/efficient as the
   current BSD dump program and has the -b switch to allow mucking with
   block sizes, though it's undocumented and may be unnecessary.  It has
   some improvements in the area of end-of-tape handling.

b) The Ultrix restore program works ok, though it doesn't support the -b
   switch.  Restore times tend to be more dependent on file creation
   speed than tape speed, so I don't see any real performance issue
   here, though someone with a streamer drive might.

I did run into some heavy duty problems when trying to restore a large
news spool not too long ago.  Since the news spool is overly endowed
with links, the restore had sucked up something over 8 MB of memory and
was getting into some paging.  It took the system out a couple of times
by corrupting the swap space and I eventually had to run it single
user.  In retrospect I think it was a problem with having swapon re-add
the root area (which it is supposed to detect and refuse to do), leading
to a corrupted swap map, but I never did get back to analyzing the problem.

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

iglesias@orion.cf.uci.edu (Mike Iglesias) (08/02/89)

I saved a message from info-unix about 2 years ago that said how to
speed up dumps to tk-50s under Ultrix 1.2.  I've used it to reduce the
time to save the /usr partition on an RD53 (~42mb, ~40mb used) from 
3 hours to 30 minutes.  The message is enclosed below.


Mike Iglesias
University of California, Irvine

Date:    12 Jun 87 21:04:29 GMT
To:      info-unix@brl-sem.arpa
From:    Robin Cutshaw <robin@itcatl.uucp>
Subject: Re: TK-50 tape backups -- could they go faster?

In article <241@kosman.UUCP>, kevin@kosman.UUCP (Kevin O'Gorman) writes:
> Running dump(8) on a MicroVAX, Ultrix 1.2A -- and I'm getting REAL
> tired of waiting for the tape.  The tape unit is a streamer, and the
> software is not fast enough to keep it in streaming mode, so I wait
> and wait and wait....
> 


Use the b and s flags (bs 126 7400) and it will stream.  Restore will also
work under Ultrix using the b flag (b 126).  Our backup script follows...


#!/bin/csh

if ($#argv != 1) then
	echo "Usage:  quickdump level"
	exit
endif

echo -n "Dump starting at " >&/usr/adm/lastdump.$1
/bin/date >>&/usr/adm/lastdump.$1

/etc/dump $1ucbsf 126 7400 /dev/nrmt8 /        >>&/usr/adm/lastdump.$1
echo "" >>&/usr/adm/lastdump.$1
/etc/dump $1ucbsf 126 7400 /dev/nrmt8 /usr     >>&/usr/adm/lastdump.$1
echo "" >>&/usr/adm/lastdump.$1
/etc/dump $1ucbsf 126 7400 /dev/nrmt8 /u       >>&/usr/adm/lastdump.$1
echo "" >>&/usr/adm/lastdump.$1
/etc/dump $1ucbsf 126 7400 /dev/nrmt8 /usr/ita >>&/usr/adm/lastdump.$1
echo "" >>&/usr/adm/lastdump.$1
if (X$1 == "X0") then
	mt -f /dev/rmt8 rew
	echo "Change tapes for level 0 dump.  Hit <CR> to continue." >>&/usr/ad
m/lastdump.$1
	$<
endif
/etc/dump $1ucbsf 126 7400 /dev/rmt8 /u/usr   >>&/usr/adm/lastdump.$1
echo "" >>&/usr/adm/lastdump.$1

echo -n "Dump finished at " >>&/usr/adm/lastdump.$1
/bin/date >>&/usr/adm/lastdump.$1


And here is the restore script (note that we place several partitions on
one tape)...


#!/bin/csh

if ($#argv != 1) then
	echo "Usage:  quickrestore [0-4]"
	exit
endif

mt -f /dev/rmt8 rew
echo -n "Restore starting at "
/bin/date
if (X$1 != "X0") mt -f /dev/nrmt8 fsf $1

/etc/restore ibf 126 /dev/nrmt8

echo -n "Restore finished at "
/bin/date


We use restore by itself for the last partition restoration.

robin

grr@cbmvax.UUCP (George Robbins) (08/08/89)

In article <2425@orion.cf.uci.edu> iglesias@orion.cf.uci.edu (Mike Iglesias) writes:
> I saved a message from info-unix about 2 years ago that said how to
> speed up dumps to tk-50s under Ultrix 1.2.  I've used it to reduce the
> time to save the /usr partition on an RD53 (~42mb, ~40mb used) from 
> 3 hours to 30 minutes.  The message is enclosed below.
... 
> In article <241@kosman.UUCP>, kevin@kosman.UUCP (Kevin O'Gorman) writes:
> > Running dump(8) on a MicroVAX, Ultrix 1.2A -- and I'm getting REAL
> > tired of waiting for the tape...
... 
> Use the b and s flags (bs 126 7400) and it will stream.  Restore will also
> work under Ultrix using the b flag (b 126).  Our backup script follows...

Sigh...  (I'll try to say nothing more on this subject 8-| )

*** he mounts the 1.2 distribution tape ***

' Script started on Tue Aug  8 02:23:48 1989
' csh> mt fsf 2
' csh> restore xvf /dev/nrmt0h etc/restore
' Verify tape and initialize maps
' Dump   date: Thu Feb 20 09:56:07 1986
' Dumped from: Wed Dec 31 19:00:00 1969
' Extract directories from tape
' Initialize symbol table.
' Make node ./etc
' Extract requested files
' You have not read any tapes yet.
' Unless you know which volume your file(s) are on you should start
' with the last volume and work forward towards the first.
' Specify next volume #: 1
' extract file ./etc/restore
' Add links
' Set directory mode, owner, and times.
' csh> mt rew
' csh> mt fsf 2
' csh> ./etc/restore xvbf 10 /dev/nrmt0h etc/dump  <<<---<<< Ultrix 1.2 restore
' Bad key character b
' Usage:
'         restore tfBhsvy [file file ...]
'         restore xfBhmsvy [file file ...]
'         restore ifBhmsvy
'         restore rfBsvy
'         restore RfBsvy
' csh> 
' script done on Tue Aug  8 02:27:13 1989

Have you ever tried to restore one of those suckers?

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)