[comp.unix.i386] Tape backup performance on 386 ISA/EISA systems

cpcahil@virtech.uucp (Conor P. Cahill) (05/25/90)

I am trying to collect data on the performance of the different tape
backup systems available for 386-based Unix systems.  What I am
trying to obtain is the speed in MB/minute of backing up a file system
to tape.  In order to be meaningful, the file system must be at least
30MB and be backed up using the following command (so that everybody
uses the same mechanism):

	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"

Note that you may adjust the block size (10240) as you feel is appropriate
for your system as long as you tell me what you used.  Obviously you might
also need to change the tape device name.

I would like results for any tape drive you got out there including 1/4",
9-track, DAT, 8mm, etc.

If you do run the test please send me the following info:


	CPU:
	Tape drive:
	Disk Drive & controller:
	OS:
	Command:
	Time:
	Size of file system:

For example on my system it would be filled in as follows:

CPU:		33MHz 80386
Tape drive:	Archive 60MB 1/4" Cartridge
Disk info:	CDC Wren VI (676MB) w/DPT ESDI Controller with 2.5MB cache
OS:		386/ix 2.0.2
Command:	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
Time:		Real: 10:50.7 	User: 10.2	System: 1:17.9
FS size:	87472 blocks (reported by du -s .)

Which turns out to be a rate of around 4.03 MB/Minute (not much of a screamer).
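
(As a rough check, assuming du -s here reports 512-byte blocks:

	87472 blocks * 512 bytes  ~  44.8 MB
	real time 10:50.7         ~  10.85 minutes
	44.8 MB / 10.85 min       ~  4.1 MB/minute

so the figure above is in the right ballpark.)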

Please run the test when there is essentially no load on the system so that
the tests will be run under the same conditions (e.g. running the test while
you are un-batching news will have a significant detrimental effect on the
performance and thereby give bad figures for your tape drive).

Email or post your results as you feel fit.  I would recommend emailing them
to me and I will post the results after a couple of weeks.

Thanks in advance.	
-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

ron@rdk386.uucp (Ron Kuris) (05/30/90)

In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
>I am trying to collect data on the performance of the different tape
>backup systems available for 386-based Unix systems.  What I am
>trying to obtain is the speed in MB/minute of backing up a file system
>to tape.  In order to be meaningful, the file system must be at least
>30MB and be backed up using the following command (so that everybody
>uses the same mechanism):
>
>	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
>
>Note that you may adjust the block size (10240) as you feel is appropriate
>for your system as long as you tell me what you used.  Obviously you might
>also need to change the tape device name.
>
> [ stuff deleted ]
Seems to me like you're not taking into account filesystem fragmentation
or a bunch of other factors.  How about running a disk optimizer (e.g.
shuffle) before you start the test?  I've noticed a dramatic increase due
to less head activity (I don't have numbers handy).
-- 
--
...!pyramid!unify!rdk386!ron -or- ...!ames!pacbell!sactoh0!siva!rdk386!ron
It's not how many mistakes you make, its how quickly you recover from them.

cpcahil@virtech.uucp (Conor P. Cahill) (05/30/90)

In article <1990May26..841@rdk386.uucp> ron@rdk386.UUCP (Ron Kuris) writes:
>In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
>> [ stuff deleted ]
>Seems to me like you're not taking into account filesystem fragmentation
>or a bunch of other factors.  How about running a disk optimizer (e.g.
>shuffle) before you start the test?  I've noticed a dramatic increase due
>to less head activity (I don't have numbers handy).

For several reasons:

1. There are no commercial disk optimizers for UNIX (at least that I know of)
and most people, myself included, cringe at the thought of letting someone's
program hunt around my raw disk patching things together.  I'm not saying
that the programs are bad. I'm just saying that it will take a lot more
than a simple post to alt.sources to get me to run one of those programs
on my production systems.  Anyway, I can't ask people to run one when they
may not even have it.

2. The performance of the disk due to optimizations will probably have
little effect on the overall performance of the tape write, since
the tape write is the limiting factor.

-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (05/31/90)

In article <1990May30.132457.6117@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:

| 1. There are no commercial disk optimizers for UNIX (at least that I know of)
| and most people, myself included, cringe at the thought of letting someone's
| program hunt around my raw disk patching things together.  I'm not saying
| that the programs are bad. I'm just saying that it will take a lot more
| than a simple post to alt.sources to get me to run one of those programs
| on my production systems.  Anyway, I can't ask people to run one when they
| may not even have it.

  True enough, but they are worth getting. Yes, I cringe when I run it,
but I take a backup first.
| 
| 2. The performance of the disk due to optimizations will probably have
| little effect on the overall performance of the tape write, since
| the tape write is the limiting factor.

  I'm sorry, this is just totally wrong. You must never have had a
fragmented disk. I have seen transfer rates as low as 300kBytes/sec with
a fragmented disk and streaming tape which ran in fits and starts. I see
about 4MB overall (from the time I hit return to the time the tape is
rewound) on a non-fragmented f/s. At least with standard Xenix and UNIX
f/s there is a huge gain for backup.

  I have not been able to show degradation in performance due to
fragmentation of the ufs type filesystem on V.4, so perhaps this will
all go away in a year or so.

-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

cpcahil@virtech.uucp (Conor P. Cahill) (05/31/90)

In article <1060@sixhub.UUCP> davidsen@sixhub.UUCP (bill davidsen) writes:
>In article <1990May30.132457.6117@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
>
>| 2. The performance of the disk due to optimizations will probably have
>| little effect on the overall performance of the tape write, since
>| the tape write is the limiting factor.
>
>  I'm sorry, this is just totally wrong. You must never have had a
>fragmented disk. I have seen transfer rates as low as 300kBytes/sec with
>a fragmented disk and streaming tape which ran in fits and starts. I see
>about 4MB overall (from the time I hit return to the time the tape is
>rewound) on a non-fragmented f/s. At least with standard Xenix and UNIX
>f/s there is a huge gain for backup.

300kBytes/sec = 18MB/min which is much faster than any tape backup that
I have seen/heard was available for a 386, so there still wouldn't be enough
gain in the backup to make that much of a difference.

If you still disagree, run the test I mentioned on a non-optimized disk
and then run the same test after the disk has been optimized and
report the results.  I had one other person do that and the difference
was less than 10% which would be expected anyway due to the differences
in file system layout (like zillions of 1 byte files vs 1 zillion byte file).

Note that I am not saying that it has no effect.  I am just saying that
it will not change the results for the test that I specified from 4MB/min
to 6 or 8 MB/min.

>  I have not been able to show degradation in performance due to
>fragmentation of the ufs type filesystem on V.4, so perhaps this will
>all go away in a year or so.

You probably won't see that much difference under one of the FFS's available
for 386 unix boxes (like 386/ix, SCO Unix, ESIX).


-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

rcd@ico.isc.com (Dick Dunn) (05/31/90)

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) writes:
[cpcahil@virtech.UUCP (Conor P. Cahill) cringes...]

> | ...at the thought of letting someone's
> | program hunt around my raw disk patching things together...

>   True enough, but they are worth getting. Yes, I cringe when I run it,
> but I take a backup first.

But now we've come full circle...if disk fragmentation makes the backup go
slower, so you want to run an optimizer that rearranges things, but you
want to be careful, so you do a backup first...

(Yeah, I know, the de-fragmenting does good for a lot more than just the
backup.:-)

A better approach is to use a file system that doesn't have as much
tendency to fragment...sorry for the obvious plug.
-- 
Dick Dunn     rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
   ...Simpler is better.

walter@mecky.UUCP (Walter Mecky) (06/01/90)

In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
+ I am trying to collect data on the performance of the different tape
+ backup systems available for 386-based Unix systems.  What I am
+ trying to obtain is the speed in MB/minute of backing up a file system
+ to tape.  In order to be meaningful, the file system must be at least
+ 30MB and be backed up using the following command (so that everybody
+ uses the same mechanism):
+ []
+ 	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
+ []
+ Time:		Real: 10:50.7 	User: 10.2	System: 1:17.9

Note that only the "Real" portion of time(1) is significant here, because
"User" and "System" problably are those of find(1) only.
-- 
Walter Mecky

debra@alice.UUCP (Paul De Bra) (06/01/90)

In article <1990May31.131341.15453@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
>In article <1060@sixhub.UUCP> davidsen@sixhub.UUCP (bill davidsen) writes:
>>In article <1990May30.132457.6117@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
>>...
>>  I'm sorry, this is just totally wrong. You must never have had a
>>fragmented disk. I have seen transfer rates as low as 300kBytes/sec with
>>a fragmented disk and streaming tape which ran in fits and starts.
>>...
>300kBytes/sec = 18MB/min which is much faster than any tape backup that
>I have seen/heard was available for a 386, so there still wouldn't be enough
>gain in the backup to make that much of a difference.

I think we have to distinguish backup programs here. If you do something
like 'find . -print | cpio -ocB -C131072 > /dev/rmt/c0s0' the effect of
a fragmented disk is not substantial. It will take cpio longer to accumulate
the 1.3 megabytes (remember to add a 0 due to cpio bug) but the tape will
stream while writing the buffer.

If you use something like tar or any other program that reads and writes
small blocks you need a very fast and unfragmented disk to keep the blocks
coming at the same rate the tape drive needs them. Tape controllers have
only a small buffer usually so they really need a continuous flow of
small blocks. In general I would say that using such a backup program is
a bad idea. If you have been using tar I would suggest either adding a
pipe to dd or switching to bar.
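
For the tar case, a minimal sketch of the dd pipe suggested above (the
block size and device name are only examples; check your dd(1) for the
size suffixes it accepts):

	tar cf - . | dd obs=256k of=/dev/rmt0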

Paul.
(debra@research.att.com)

-- 
------------------------------------------------------
|debra@research.att.com   | uunet!research!debra     |
------------------------------------------------------

martin@mwtech.UUCP (Martin Weitzel) (06/02/90)

In article <1060@sixhub.UUCP> davidsen@sixhub.UUCP (bill davidsen) writes:
>In article <1990May30.132457.6117@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:

[about keeping a streamer streaming and the
influence of fragmented disk  files systems]

>| 2. The performance of the disk due to optimizations will probably have
>| little effect on the overall performance of the tape write, since
>| the tape write is the limiting factor.
>
>  I'm sorry, this is just totally wrong. You must never have had a
>fragmented disk. I have seen transfer rates as low as 300kBytes/sec with

Isn't the best streaming transfer rate (for QIC-02) something around
85 KB/sec? A test program which keeps my streamer streaming at least
seems to show this as "best rate". If the above `300kByte/sec' refers
to the disk transfer rate (as I understand the poster), it is more than
three times faster than the best tape transfer rate and with decent
buffering it should be no problem to keep the tape streaming.
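
In the units used elsewhere in this thread, that is roughly

	85 KB/sec * 60  ~  5 MB/min

as the best case with the tape kept streaming.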

IMHO the problem is really not the disk, but within the drivers or
interrupt priority schemes or something similar. I did some
experimenting with a small program that did not read the disk,
and had no problem keeping the tape streaming ... until I switched
between my virtual screens, which sometimes caused a stop of the
tape. It might be that switching screens required paging something
in and that some disk accesses caused the tape to stop. But screen
output seems to have the same effect (there are more stops of the
tape if I use cpio with "-v").

If I can achieve an average data rate of several hundred KB/sec when
reading the disk (which IMHO uses *no* DMA on a typical PC), why can't
I sustain even the less-than-100 KB/sec needed to write the tape (which
uses DMA, at least on my system)?  Could interrupt reaction time be the
problem?  Let's do some quick calculations: as I read my "space.c" for
the tape driver, it uses four 32 KB buffers (my experiments supported
this assumption).  This should allow around one full second to refill
a buffer (in the worst case), and that should never be much of a problem
for an 80386, even if the writing process is preempted for a moment.
Of course, optimal usage of the buffers would require an early return
from the write, some time before the buffer is completely written, and
that may cause problems with "End-Of-Tape" detection.
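
(Roughly, taking the ~85 KB/sec streaming figure from above:

	one 32 KB buffer lasts about  32 / 85   ~  0.38 sec of tape motion
	three buffers still queued    3 * 0.38  ~  1.1 sec

so the filling process has about a second in hand before the tape starves.)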

I don't know enough about QIC-02 drives to decide which requests
must be issued within a short time window to keep the tape streaming.
(Also "End-Of-Tape"-problems may dictate a non-optimal strategy.)
But the only explanation for the stops of an otherwise fully
streaming tape when switching between virtual screens is that
there is a relatively long part of kernel code executed with high
priority in this situation.

Well, I would happily accept sluggish character echo or some delay
in switching my virtual screens during streamer operation, but I
know that finding out whether this trade-off is feasible at all would
require the driver source.  Could it be that the disk has too high a
priority to keep the tape streaming?  That would not be so wise, as the
disk can generally catch up with the tape quickly - unless, of course,
there is some reason that bytes must be transferred out of the controller
quickly under some circumstances, but if the disk controller buffers at
least one full sector, I can see no reason for that.

Do we need something similar for tapes to what the FAS driver is for
serial lines, so that some kind individual like Uwe Doering can produce
an optimized "Final Tape Solution"?
-- 
Martin Weitzel, email: martin@mwtech.UUCP, voice: 49-(0)6151-6 56 83

keithe@tekgvs.LABS.TEK.COM (Keith Ericson) (06/02/90)

In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
<30MB and be backed up using the following command (so that everybody
<uses the same mechanism):
<
<	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
<
<Note that you may adjust the block size (10240) as you feel is appropriate
<for your system as long as you tell me what you used.  

Doesn't cpio bitch about the inclusion of both the "B" and "C 10240" command
tail?  They're redundant/competing flags to cpio...  ("B" == "C 5120").

kEITHe

keithe@tekgvs.LABS.TEK.COM (Keith Ericson) (06/02/90)

In article <1060@sixhub.UUCP> davidsen@sixhub.UUCP (bill davidsen) writes:
<In article <1990May30.132457.6117@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
<| 
<| 2. The performance of the disk due to optimizations will probably have
<| little effect on the overall performance of the tape write, since
<| the tape write is the limiting factor.
<
<  I'm sorry, this is just totally wrong. You must never have had a
<fragmented disk. I have seen transfer rates as low as 300kBytes/sec with
<a fragmented disk and streaming tape which ran in fits and starts. I see
<about 4MB overall (from the time I hit return to the time the tape is
<rewound) on a non-fragmented f/s.

Far and away the biggest difference I've ever seen in disk<->tape
transfers is the size of the buffer used in the cpio command: I generally
use cpio -[i,o] [???] -C 1048576 -[I,O] /dev/tape  for screaming, if not
streaming, tape I/O.

kEITHe

cpcahil@virtech.uucp (Conor P. Cahill) (06/02/90)

In article <7596@tekgvs.LABS.TEK.COM> keithe@tekgvs.LABS.TEK.COM (Keith Ericson) writes:
>In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
><
><	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
><
>
>Doesn't cpio bitch about the inclusion of both the "B" and "C 10240" command
>tail?  They're redundant/competing flags to cpio...  ("B" == "C 512").

No, it doesn't bitch.  However, it is redundant.  I just can't type "cpio -o"
without including 'B' and 'c'.



-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

ron@rdk386.uucp (Ron Kuris) (06/04/90)

In article <1990May30.132457.6117@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
>In article <1990May26..841@rdk386.uucp> ron@rdk386.UUCP (Ron Kuris) writes:
>>In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
>>> [ stuff deleted ]
>>Seems to me like you're not taking into account filesystem fragmentation
>>or a bunch of other factors.  How about running a disk optimizer (e.g.
>>shuffle) before you start the test?  I've noticed a dramatic increase due
>>to less head activity (I don't have numbers handy).
>
>For several reasons:
>
>1. There are no commercial disk optimizers for UNIX (at least that I know of)
>and most people, myself included, cringe at the thought of letting someone's
>program hunt around my raw disk patching things together.  I'm not saying
>that the programs are bad. I'm just saying that it will take a lot more
>than a simple post to alt.sources to get me to run one of those programs
>on my production systems.  Anyway, I can't ask people to run one when they
>may not even have it.
You don't have to run one -- how about a backup then a mkfs, then a restore,
then the REAL backup?

>2. The performance of the disk due to optimizations will probably have
>little effect on the overall performance of the tape write, since
>the tape write is the limiting factor.

I get double the performance on an optimized backup as compared to an
unoptimized backup.  Reason:  My tape streams when everything is optimal,
and does NOT when it is not optimal.  I know this because originally my
disks had never been backed up and restored at all.  When I finally did
this, my backup time was halved!
-- 
--
...!pyramid!unify!rdk386!ron -or- ...!ames!pacbell!sactoh0!siva!rdk386!ron
It's not how many mistakes you make, its how quickly you recover from them.

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (06/04/90)

In article <1990May31.131341.15453@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
| In article <1060@sixhub.UUCP> davidsen@sixhub.UUCP (bill davidsen) writes:
| >
| >  I'm sorry, this is just totally wrong. You must never have had a
| >fragmented disk. I have seen transfer rates as low as 300kBytes/sec with
| >a fragmented disk and streaming tape which ran in fits and starts. I see
| >about 4MB overall (from the time I hit return to the time the tape is
| >rewound) on a non-fragmented f/s. At least with standard Xenix and UNIX
| >f/s there is a huge gain for backup.
| 
| 300kBytes/sec = 18MB/min which is much faster than any tape backup that
| I have seen/heard was available for a 386, so there still wouldn't be enough
| gain in the backup to make that much of a difference.

  Sorry, foot in keyboard time. That's 300k/min and 4MB/min for the
fragmented and unfragmented filesystem. Once the tape stops streaming
you are in deep trouble for throughput. Even if the tape is slowing things
down, the disk is the heart of the problem when it can't keep the tape
streaming. My usual solution is to take an incremental or physical
backup (raw disk to tape) and then cleanup.
| 
| If you still disagree, run the test I mentioned on a non-optimized disk
| and then run the same test after the disk has been optimized and
| report the results.  I had one other person do that and the difference
| was less than 10% which would be expected anyway due to the differences
| in file system layout (like zillions of 1 byte files vs 1 zillion byte file).

  Yeah, see above. There really is an order of magnitude penalty when
the tape stops streaming, and that happens when the disk is slow.
| 
| >  I have not been able to show degradation in performance due to
| >fragmentation of the ufs type filesystem on V.4, so perhaps this will
| >all go away in a year or so.
| 
| You probably won't see that much difference under one of the FFS's available
| for 386 unix boxes (like 386/ix, SCO Unix, ESIX).

  I haven't tried the recent versions of ESIX. The early version I tried
was quite slow, but that could have been filesystem, tuning, or
something else. I hear good things about it, so I assume that if there
was a problem in an early version it is gone now.
-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (06/04/90)

In article <1990May31.155113.8383@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:

| But now we've come full circle...if disk fragmentation makes the backup go
| slower, so you want to run an optimizer that rearranges things, but you
| want to be careful, so you do a backup first...
| 
| (Yeah, I know, the de-fragmenting does good for a lot more than just the
| backup.:-)

  See other post... I do an incremental or physical backup of the raw
partition first.
| 
| A better approach is to use a file system that doesn't have as much
| tendency to fragment...sorry for the obvious plug.

  Why is someone from ISC plugging BSD? You're right of course.
-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

cpcahil@virtech.uucp (Conor P. Cahill) (06/07/90)

A while back I posted a request for performance data on miscellaneous 
tape drives available for 386 systems.  That request got turned into 
a battle over the effectiveness of disk defragmenters, and I didn't get
many responses from people who had actually run the test.

So, again I am asking you to run the following test.  I am not asking you
to de-fragment your disk, nor to do anything special other than time a backup of
at least 30MB to tape.  I would especially like info from those of you that
have DAT, 9-track, 8mm, and cartridge tapes with >150MB capacity.

So, here is the request: 

I am trying to collect data on the performance of the different tape
backup systems available for 386-based Unix systems.  What I am
trying to obtain is the speed in MB/minute of backing up a file system
to tape.  In order to be meaningful, the file system must be at least
30MB and be backed up using the following command (so that everybody
uses the same mechanism):

	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"

Note that you may adjust the block size (10240) as you feel is appropriate
for your system as long as you tell me what you used.  Obviously you might
also need to change the tape device name.

I would like results for any tape drive you got out there including 1/4",
9-track, DAT, 8mm, etc.

If you do run the test please send me the following info:


	CPU:
	Tape drive:
	Disk Drive & controller:
	OS:
	Command:
	Time:
	Size of file system:

For example on my system it would be filled in as follows:

CPU:		33MHz 80386
Tape drive:	Archive 60MB 1/4" Cartridge
Disk info:	CDC Wren VI (676MB) w/DPT ESDI Controller with 2.5MB cache
OS:		386/ix 2.0.2
Command:	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
Time:		Real: 10:50.7 	User: 10.2	System: 1:17.9
FS size:	87472 blocks (reported by du -s .)

Which turns out to be a rate of around 4.03 MB/Minute (not much of a screamer).

Please run the test when there is essentially no load on the system so that
the tests will be run under the same conditions (e.g. running the test while
you are un-batching news will have a significant detrimental effect on the
performance and thereby give bad figures for your tape drive).

Email or post your results as you feel fit.  I would recommend emailing them
to me and I will post the results after a couple of weeks.

Thanks in advance.	


-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

martin@mwtech.UUCP (Martin Weitzel) (06/08/90)

In article <1990Jun6.205939.26972@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
>A while back I posted a request for performance data on miscellaneous 
>I am trying to collect data on the performance of the different tape
>backup systems.
[....]
>If you do run the test please send me the following info:
>
>
	CPU: 20 MHz 80386 (ACER 32/20 w 8 MB RAM)
	Tape drive: Archive 60MB 1/4" Cartridge (FT 60 & SC 422R)
	Disk Drive & controller: CDC WREN V (383H) & WD 1007V-SE2
	OS: 386/ix Release 2.0.2
	Command: /bin/time sh -c "find /usr -print | cpio -ocO /dev/tape"
	Time: Real: 9:08.5	User: 21.6	System: 2:38.3
	Size of file system: 68338 blocks (reported by du -s)
			     80066 blocks written by cpio

I have played around with my tape in the last few days and have
written some test programs to see where the bottlenecks are.
If you are interested in this topic read my other article.
-- 
Martin Weitzel, email: martin@mwtech.UUCP, voice: 49-(0)6151-6 56 83

peterg@murphy.com (Peter Gutmann) (06/09/90)

In article <1990May31.155113.8383@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>davidsen@sixhub.UUCP (Wm E. Davidsen Jr) writes:
>[cpcahil@virtech.UUCP (Conor P. Cahill) cringes...]
>
>> | ...at the thought of letting someone's
>> | program hunt around my raw disk patching things together...
>
>>   True enough, but they are worth getting. Yes, I cringe when I run it,
>> but I take a backup first.
>
>But now we've come full circle...if disk fragmentation makes the backup go
>slower, so you want to run an optimizer that rearranges things, but you
>want to be careful, so you do a backup first...
>
>(Yeah, I know, the de-fragmenting does good for a lot more than just the
>backup.:-)
>
>A better approach is to use a file system that doesn't have as much
>tendency to fragment...sorry for the obvious plug.
>-- 
>Dick Dunn     rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
>   ...Simpler is better.

Well, now we have come full circle. However, one important thing
has been overlooked :-}

All of the tools required for disk de-fragmenting exist in the base
distribution of UNIX. All it requires is three simple steps:

1) create a backup of the device using your favorite backup utility
   (such as tar or cpio). Don't use dump or any other utility which
   creates an "image" of the file system.

2) erase the contents of the device that was backed up
   in step one (above).

3) restore the device from the backup made above.

What this accomplishes is that all of the blocks in the file system are
moved to the free list, then reallocated sequentially as the backup is
restored.
As Dick Dunn said above, "...Simpler is better."

----
peter gutmann	peterg@murphy.com	

	Murphy & Durieu
		Home of Murphy's Law....

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (06/10/90)

In article <770@mwtech.UUCP> martin@mwtech.UUCP (Martin Weitzel) writes:
| In article <1060@sixhub.UUCP> davidsen@sixhub.UUCP (bill davidsen) writes:

| >  I'm sorry, this is just totally wrong. You must never have had a
| >fragmented disk. I have seen transfer rates as low as 300kBytes/sec with
| 
| Isn't the best streaming transfer rate (for QIC-02) something around
| 85 KB/sec? A test programm which keeps my streamer streaming at least
| seems to show this as "best rate". 

  As I noted before, that was a typo; the low (fragmented) rate was
300KB/min (not /sec).  When the disk is not fragmented and the tape
streams, the rate is 4MB/min (there I typed it right twice).

  That's fine for my 180MB now, but when I add this 320MB drive on the
table here, it is going to be a pain. The rates are fast enough, but the
tapes are too small...
-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (06/10/90)

In article <1990Jun8.223358.27138@murphy.com> peterg@murphy.com (Peter Gutmann) writes:

| All of the tools required for disk de-fragmenting exist in the base
| distribution of UNIX. All it requires are three simple steps,
| 
| 1) create a backup of the device using your favorite backup utility 
|    (such as tar or cpio). Don't use dump or any other utility which
|    creates a "image" of the file system.
| 
| 2) erase the contents of the devices that has been backed up
|    in step one (above).
| 
| 3) restore the device from the backup made above.
| 
| what this accomplishes is that the device has all of the blocks
| in the file system moved to the free list. then reallocated sequentially
| from the backup.

  If you do step 2 via mkfs, or if you have a filesystem which uses a
bitmap, this is true. However, you said freelist, and in that case you
had better do an fsck with the -s option to order the freelist, or you
don't gain much.
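
For example (device name made up; the filesystem should be unmounted,
or at least quiet, while the free list is rebuilt):

	umount /dev/dsk/0s3
	fsck -s /dev/rdsk/0s3    # reconstruct the free list in order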

  Since dump can be used between different size filesystems (given enough
room on the 2nd f/s) I believe that the information is logical rather
than physical, and that a restore defragments the disk. At least on this
(Xenix) system. A physical dump of the (dd /dev/rawdisk /dev/tape) type doesn't.
-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

cpcahil@virtech.uucp (Conor P. Cahill) (06/10/90)

In article <1123@sixhub.UUCP> davidsen@sixhub.UUCP (bill davidsen) writes:
>  As I noted before, that was a typo, the low (fragmented) rate was
>300KB/min (not /sec), when the disk is not fragmented and the tape
>streams the rate is 4MB/min (there I typed it right twice).

The only reason I could see for this kind of difference is that you are
using backup software that does not have a large tape buffer (e.g. tar).

Using a tape backup archiver with large buffers (like cpio -C 102400)
will negate most of the effect of a fragmented disk on tape streaming.


-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

bill@ssbn.WLK.COM (Bill Kennedy) (06/10/90)

In article <1990Jun10.114934.17744@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
>In article <1123@sixhub.UUCP> davidsen@sixhub.UUCP (bill davidsen) writes:
>>  [ slow tape performance ... ]
>
>The only reason I could see for this kind of difference is that you are
>using backup software that does not have a large tape buffer (i.e. tar).
>
>Using a tape backup archiver with large buffers (like cpio -C 102400)
>will negate most of the effect of a fragmented disk on tape streaming.
>-- 
>Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
>uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160

Agreed, and a small addition.  One of my neighbors uses gnu tar and gets
very good performance out of his tape.  I had used cpio with the -C option
and I got decent performance but I would get five to nine stops per cpio
block.  Bill's figures, as he points out, and just not acceptable.  I ran into
another problem with cpio that caused me to switch to pax.

I originally thought the problem was with the CompuAdd hard cache controller
but it appeared again with a WD1007-SE2.  When I switched to the second
tape cartridge the residue from the first would write out OK but the system
would just freeze as it was getting ready to write the first block of the
second volume.  I decided to try pax and I got two bonuses.

The first bonus was obvious: the system doesn't freeze (by "freeze" I mean
that it's off in a loop somewhere, and only a reset/power cycle will
recover it).  The second is that a block is written with one movement of the
tape.  The speed improvement isn't dramatic but it's noticeable.  Pax is
also a little more flexible about defining block sizes.  With cpio you have
to specify in bytes and ASSume the trailing zero.  Pax lets you specify in
bytes, blocks, Kbytes, or Mbytes.  I mention this because if you specify the
block size too large you'll end up paging to swap space and that's a double
disk I/O penalty.
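
For instance, a hedged example of a pax write with a 256 KB block size
(the device name is illustrative, and the exact size suffixes accepted
depend on your pax(1)):

	cd /u && pax -w -b 256k -f /dev/rmt0 .
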
-- 
Bill Kennedy  usenet      {texbell,att,cs.utexas.edu,sun!daver}!ssbn!bill
              internet    bill@ssbn.WLK.COM   or attmail!ssbn!bill