[comp.unix.questions] Tape backup performance on 386 ISA/EISA systems

cpcahil@virtech.uucp (Conor P. Cahill) (05/25/90)

I am trying to collect data on the performance of the different tape
backup systems available for 386-based Unix systems.  What I am
trying to obtain is the speed in MB/minute of backing up a file system
to tape.  In order to be meaningful, the file system must be at least
30MB and be backed up using the following command (so that everybody
uses the same mechanism):

	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"

Note that you may adjust the block size (10240) as you feel is appropriate
for your system as long as you tell me what you used.  Obviously you might
also need to change the tape device name.

I would like results for any tape drive you've got out there, including 1/4",
9-track, DAT, 8mm, etc.

If you do run the test, please send me the following info:


	CPU:
	Tape drive:
	Disk Drive & controller:
	OS:
	Command:
	Time:
	Size of file system:

For example on my system it would be filled in as follows:

CPU:		33Mhz 80386
Tape drive:	Archive 60MB 1/4" Cartridge
Disk info:	CDC Wren VI (676MB) w/DPT ESDI Controller with 2.5MB cache
OS:		386/ix 2.0.2
Command:	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
Time:		Real: 10:50.7 	User: 10.2	System: 1:17.9
FS size:	87472 blocks (reported by du -s .)

That works out to a rate of around 4.03 MB/minute (not much of a screamer).
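For anyone checking their own numbers, the arithmetic is straightforward: du -s
reports 512-byte blocks on these systems, so (blocks * 512) divided by the real
elapsed time gives the rate.  A sketch, using the example figures from above
(the exact MB/minute figure you get depends on rounding and on whether you take
MB to mean 2^20 or 10^6 bytes):

```shell
# Convert a du(1) block count and a "Real" elapsed time to MB/minute.
# Assumes du -s reports 512-byte blocks (true on these systems).
blocks=87472            # from: du -s .
real_min=10             # "Real" time, minutes part
real_sec=50.7           # "Real" time, seconds part

awk -v b="$blocks" -v m="$real_min" -v s="$real_sec" 'BEGIN {
    mb   = b * 512 / (1024 * 1024)      # file system size in MB
    mins = m + s / 60                   # elapsed time in minutes
    printf "%.2f MB in %.2f min = %.2f MB/min\n", mb, mins, mb / mins
}'
```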

Please run the test when there is essentially no load on the system, so that
all the tests are run under the same conditions (i.e. running the test while
you are un-batching news will have a significant detrimental effect on the
performance and thereby give misleadingly bad figures for your tape drive).

Email or post your results as you see fit.  I would recommend emailing them
to me and I will post the results after a couple of weeks.

Thanks in advance.	
-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

ron@rdk386.uucp (Ron Kuris) (05/30/90)

In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
>I am trying to collect data on the performance of the different tape
>backup systems available for 386-based Unix systems.  What I am
>trying to obtain is the speed in MB/minute of backing up a file system
>to tape.  In order to be meaningful, the file system must be at least
>30MB and be backed up using the following command (so that everybody
>uses the same mechanism):
>
>	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
>
>Note that you may adjust the block size (10240) as you feel is appropriate
>for your system as long as you tell me what you used.  Obviously you might
>also need to change the tape device name.
>
> [ stuff deleted ]
Seems to me like you're not taking into account filesystem fragmentation
or a bunch of other factors.  How about running a disk optimizer (e.g.
shuffle) before you start the test?  I've noticed a dramatic increase due
to less head activity (I don't have numbers handy).
-- 
--
...!pyramid!unify!rdk386!ron -or- ...!ames!pacbell!sactoh0!siva!rdk386!ron
It's not how many mistakes you make, it's how quickly you recover from them.

cpcahil@virtech.uucp (Conor P. Cahill) (05/30/90)

In article <1990May26..841@rdk386.uucp> ron@rdk386.UUCP (Ron Kuris) writes:
>In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
>> [ stuff deleted ]
>Seems to me like you're not taking into account filesystem fragmentation
>or a bunch of other factors.  How about running a disk optimizer (e.g.
>shuffle) before you start the test?  I've noticed a dramatic increase due
>to less head activity (I don't have numbers handy).

For several reasons:

1. There are no commercial disk optimizers for UNIX (at least that I know of)
and most people, myself included, cringe at the thought of letting someone's
program hunt around my raw disk patching things together.  I'm not saying
that the programs are bad. I'm just saying that it will take a lot more
than a simple post to alt.sources to get me to run one of those programs
on my production systems.  Anyway, I can't ask people to run one when they
may not even have it.

2. Any disk performance gained from optimization will probably have
little effect on the overall performance of the tape write, since
the tape write is the limiting factor.

-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

davidsen@sixhub.UUCP (Wm E. Davidsen Jr) (05/31/90)

In article <1990May30.132457.6117@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:

| 1. There are no commercial disk optimizers for UNIX (at least that I know of)
| and most people, myself included, cringe at the thought of letting someone's
| program hunt around my raw disk patching things together.  I'm not saying
| that the programs are bad. I'm just saying that it will take a lot more
| than a simple post to alt.sources to get me to run one of those programs
| on my production systems.  Anyway, I can't ask people to run one when they
| may not even have it.

  True enough, but they are worth getting. Yes, I cringe when I run it,
but I take a backup first.
| 
| 2. The performance of the disk due to optimizations will probably have
| little performance effect on the overall performance of the tape write, since
| the tape write is the limiting factor.

  I'm sorry, this is just totally wrong. You must never have had a
fragmented disk. I have seen transfer rates as low as 300kBytes/sec with
a fragmented disk and a streaming tape which ran in fits and starts. I see
about 4MB/minute overall (from the time I hit return to the time the tape is
rewound) on a non-fragmented f/s. At least with the standard Xenix and UNIX
f/s there is a huge gain for backup.

  I have not been able to show any performance degradation due to
fragmentation of the ufs-type file system on V.4, so perhaps this will
all go away in a year or so.

-- 
bill davidsen - davidsen@sixhub.uucp (uunet!crdgw1!sixhub!davidsen)
    sysop *IX BBS and Public Access UNIX
    moderator of comp.binaries.ibm.pc and 80386 mailing list
"Stupidity, like virtue, is its own reward" -me

walter@mecky.UUCP (Walter Mecky) (06/01/90)

In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
+ I am trying to collect data on the performance of the different tape
+ backup systems available for 386-based Unix systems.  What I am
+ trying to obtain is the speed in MB/minute of backing up a file system
+ to tape.  In order to be meaningful, the file system must be at least
+ 30MB and be backed up using the following command (so that everybody
+ uses the same mechanism):
+ []
+ 	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
+ []
+ Time:		Real: 10:50.7 	User: 10.2	System: 1:17.9

Note that only the "Real" portion of the time(1) output is significant here,
because "User" and "System" probably are those of find(1) only.
-- 
Walter Mecky

keithe@tekgvs.LABS.TEK.COM (Keith Ericson) (06/02/90)

In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
<30MB and be backed up using the following command (so that everybody
<uses the same mechanism):
<
<	/bin/time sh -c "find . -print | cpio -oBcC 10240 > /dev/rmt0"
<
<Note that you may adjust the block size (10240) as you feel is appropriate
<for your system as long as you tell me what you used.  

Doesn't cpio bitch about the inclusion of both the "B" and "C 10240" command
tail?  They're redundant/competing flags to cpio...  ("B" == "C 5120").

kEITHe

ron@rdk386.uucp (Ron Kuris) (06/04/90)

In article <1990May30.132457.6117@virtech.uucp> cpcahil@virtech.UUCP (Conor P. Cahill) writes:
>In article <1990May26..841@rdk386.uucp> ron@rdk386.UUCP (Ron Kuris) writes:
>>In article <1990May25.123302.26061@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
>>> [ stuff deleted ]
>>Seems to me like you're not taking into account filesystem fragmentation
>>or a bunch of other factors.  How about running a disk optimizer (e.g.
>>shuffle) before you start the test?  I've noticed a dramatic increase due
>>to less head activity (I don't have numbers handy).
>
>For several reasons:
>
>1. There are no commercial disk optimizers for UNIX (at least that I know of)
>and most people, myself included, cringe at the thought of letting someone's
>program hunt around my raw disk patching things together.  I'm not saying
>that the programs are bad. I'm just saying that it will take a lot more
>than a simple post to alt.sources to get me to run one of those programs
>on my production systems.  Anyway, I can't ask people to run one when they
>may not even have it.
You don't have to run one -- how about a backup, then a mkfs, then a restore,
then the REAL backup?
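Spelled out, that dump/remake/restore cycle might look like the sketch below.
The device names, mount point, and mkfs arguments are hypothetical
placeholders, not taken from anyone's posting; substitute your own, and verify
the backup before running mkfs, which destroys the file system.

```shell
# Rebuild a file system from backup so files come back contiguous.
# ALL names below are placeholders -- substitute your own!
cd /u                                    # the file system to rebuild
find . -print | cpio -oc > /dev/rmt0     # 1. back it up to tape
cd /
umount /dev/dsk/0s2                      # 2. unmount it
mkfs /dev/dsk/0s2                        # 3. remake it (add your mkfs args)
mount /dev/dsk/0s2 /u                    # 4. remount
cd /u
cpio -idm < /dev/rmt0                    # 5. restore; the REAL backup can follow
```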

>2. The performance of the disk due to optimizations will probably have
>little performance effect on the overall performance of the tape write, since
>the tape write is the limiting factor.

I get double the performance on an optimized backup compared to an
unoptimized one.  Reason: my tape streams when everything is optimal,
and does NOT when it isn't.  I know this because originally my
disks had never been backed up and restored at all.  When I finally did this,
my backup time was halved!
-- 
--
...!pyramid!unify!rdk386!ron -or- ...!ames!pacbell!sactoh0!siva!rdk386!ron
It's not how many mistakes you make, it's how quickly you recover from them.