[comp.unix.i386] backups on 386/ix - some problems using GNUtar; any hints?

greyham@hades.OZ (Greyham Stoney) (12/05/89)

I'm having a couple of problems trying to do automatic (as much as possible)
backups on our 386/ix machine, which I need a few hints on.

The machine has a 300MB hard disk, partitioned into 3 filesystems as
follows:

> /         :	Disk space:  31.98 MB of  51.26 MB available (62.39%).
> /usr      :	Disk space:  26.95 MB of 115.09 MB available (23.42%).
> /usr2     :	Disk space:   5.03 MB of 114.84 MB available ( 4.38%).
> 
> Total Disk Space:  63.97 MB of 281.21 MB available (22.75%).

(Ok, I lied; there's also a DOS partition in there for VPIX, but that's
irrelevant). The idea of this is that we can do full backups of the
system by using one tape for each filesystem. Our tape drive is 120Mbyte;
so we should be able to do a full dump and end up with a / tape, a
/usr tape, and a /usr2 tape.

I don't want to do that every day though, so I want to do an overnight
incremental dump of everything that changed since the last full dump.
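The nightly scheme could be sketched something like this (a hypothetical
sketch: it uses the --newer option as spelled in later GNU tar releases,
which may not match 1.07's syntax, and the dump-date file is made up; a
real script would write to the tape device rather than a file):

```shell
#!/bin/sh
# Sketch of a date-based incremental dump (all paths illustrative).
# A full dump records its date; the nightly incremental archives
# only files modified after that recorded date.
work=$(mktemp -d)
mkdir "$work/data"
echo one > "$work/data/old.txt"
sleep 1

# Full dump: archive everything, then record the dump time.
tar -cf "$work/full.tar" -C "$work" data
date '+%Y-%m-%d %H:%M:%S' > "$work/last-full"
sleep 1

echo two > "$work/data/new.txt"

# Incremental dump: only files newer than the recorded date.
tar -cf "$work/incr.tar" -C "$work" --newer "$(cat "$work/last-full")" data
listing=$(tar -tf "$work/incr.tar")
echo "$listing"
```

The incremental archive should list new.txt but not old.txt, since only
the former was modified after the recorded date.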

Anyway, so it looked to me like GNUtar could do this, since it's tar
compatible, and can do date-based incrementals. [any other suggestions
are welcome]. There are a few troubles though:
(this is using GNU tar version 1.07; I believe this is the latest?)

1) When the tape is completely full, GNUtar reports "error opening
directory ...." for every directory not yet covered. At least, this seems
to be what's happening. If the tape drive is 120Mb, and the filesystem is
115Mb, how come the tape fills up at all anyway when doing a full dump???.

2) When doing a compressed TAR, gnutar's return code incorrectly indicates
that the tar went OK even if the tape is write protected. Our dump script
relies on this return code to know whether or not to update the automatic
tape-contents register we've got.
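Part of this may be the shell's doing rather than gnutar's: in a
tar | compress pipeline, $? reports only the last command's status. (If
gnutar runs compress itself, the fix has to go inside tar.) A minimal
workaround sketch for the pipeline case, with gzip standing in for
compress and a plain file standing in for the tape device:

```shell
#!/bin/sh
# Sketch: capture the exit status of *both* sides of a tar | compress
# pipeline, since $? only reflects the last command in the pipe.
# (gzip stands in for compress; paths are illustrative, not a tape.)
work=$(mktemp -d)
echo hello > "$work/f"
rcfile="$work/tar_rc"

# Run tar in a subshell and smuggle its status out through a file.
( tar -cf - -C "$work" f; echo $? > "$rcfile" ) | gzip > "$work/f.tar.gz"
zip_rc=$?
tar_rc=$(cat "$rcfile")
echo "tar=$tar_rc compress=$zip_rc"
```

A dump script can then refuse to update the tape-contents register
unless both statuses are zero.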

Has anyone got any hints, or even just general ideas on the best strategy
I should adopt here? I don't want to hack GNUtar (much);

		any suggestions at all are welcome.

					Greyham.

-- 
/*  Greyham Stoney:                            Australia: (02) 428 6476  *
 *     greyham@hades.oz  - Ausonics Pty Ltd, Lane Cove, Sydney, Oz.      *
 *                ISDN: Interface Subscribers Don't Need                 */

cpcahil@virtech.uucp (Conor P. Cahill) (12/05/89)

In article <481@hades.OZ>, greyham@hades.OZ (Greyham Stoney) writes:
> 1) When the tape is completely full, GNUtar reports "error opening
> directory ...." for every directory not yet covered. At least, this seems
> to be what's happening. If the tape drive is 120Mb, and the filesystem is
> 115Mb, how come the tape fills up at all anyway when doing a full dump???.

This is probably because you have some sparse files on the file system.  When
they get backed up, they take up the logical amount of space, not the physical
amount of space taken up on the file system.
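The effect is easy to demonstrate: a file that is mostly "hole" occupies
few disk blocks but archives at its full logical size. (A sketch with
illustrative paths; later GNU tar grew a --sparse option to avoid this,
though 1.07 may not have it.)

```shell
#!/bin/sh
# Sketch: a sparse file's archive size follows its logical length,
# not its physical disk usage.
work=$(mktemp -d)

# Create a 1 MB file that is almost entirely a hole: seek out to the
# last byte and write a single byte there.
dd if=/dev/zero of="$work/sparse" bs=1 count=1 seek=1048575 2>/dev/null
logical=$(wc -c < "$work/sparse")

tar -cf "$work/s.tar" -C "$work" sparse
archive=$(wc -c < "$work/s.tar")
echo "logical=$logical archive=$archive"
```

The archive comes out at least as large as the 1 MB logical size, no
matter how few blocks the file occupies on disk.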



-- 
+-----------------------------------------------------------------------+
| Conor P. Cahill     uunet!virtech!cpcahil      	703-430-9247	|
| Virtual Technologies Inc.,    P. O. Box 876,   Sterling, VA 22170     |
+-----------------------------------------------------------------------+

les@chinet.chi.il.us (Leslie Mikesell) (12/06/89)

In article <481@hades.OZ> greyham@hades.OZ (Greyham Stoney) writes:

>1) When the tape is completely full, GNUtar reports "error opening
>directory ...." for every directory not yet covered. At least, this seems
>to be what's happening. If the tape drive is 120Mb, and the filesystem is
>115Mb, how come the tape fills up at all anyway when doing a full dump???.

Keep in mind that the minimum header block for a tar file is 512 bytes
even for zero length files or linked files that take no actual disk
space.  Try tarring /usr/lib/terminfo to see the problem.  There is
also a possibility that you have sparse files that have their "holes"
filled when you copy them.
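The header overhead is easy to measure: even a zero-length file costs a
512-byte header, and the archive is padded out to the blocking factor
besides. (A sketch; the exact padded size depends on the local tar's
default record size.)

```shell
#!/bin/sh
# Sketch: measure tar's fixed per-file overhead for an empty file.
work=$(mktemp -d)
: > "$work/empty"
tar -cf "$work/a.tar" -C "$work" empty
size=$(wc -c < "$work/a.tar")
echo "archive is $size bytes for a zero-byte file"
```

The result is never smaller than 1536 bytes (a 512-byte header plus two
512-byte end-of-archive blocks), and usually larger due to blocking.
Multiply that by the thousands of small files in something like
/usr/lib/terminfo and the overhead adds up.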

>2) When doing a compressed TAR, gnutar's return code incorrectly indicates
>that the tar went OK even if the tape is write protected. Our dump script
>relies on this return code to know whether or not to update the automatic
>tape-contents register we've got.

That probably could be fixed, although I'd rather see per-file compression
so you have some hope of recovery if you have a media error during a
restore.  Anyone working on this (or cpio format output)?

>Has anyone got any hints, or even just general ideas on the best strategy
>I should adopt here?. I don't want to hack GNUtar (much);

You might want to try cpio to see how much space the tar headers are 
actually wasting.  You could still use GNUtar for the incrementals,
since it has a mode that deletes any files which were not present
when the incremental was taken; that is handy if the filesystem is
nearly full or you want to restore that exact state.
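The delete-on-restore behavior belongs to GNU tar's incremental mode; in
later releases it is spelled --listed-incremental (or -g), where a
snapshot file records what the full dump saw. A sketch of the dump side,
with the caveat that 1.07's flag names may differ:

```shell
#!/bin/sh
# Sketch of GNU tar's listed-incremental mode: the snapshot file
# carries state between runs, so the second run archives only what
# changed.  (Flag spelling is from later GNU tar releases.)
work=$(mktemp -d)
mkdir "$work/data"
echo a > "$work/data/a"

# Full dump: creates the snapshot file as a side effect.
tar -cf "$work/full.tar" -g "$work/snap" -C "$work" data
sleep 1
echo b > "$work/data/b"

# Incremental dump: consults and updates the same snapshot file.
tar -cf "$work/incr.tar" -g "$work/snap" -C "$work" data
listing=$(tar -tf "$work/incr.tar")
echo "$listing"
```

Restoring the incremental with the corresponding extract-side
incremental option is what removes files that had been deleted between
the two dumps.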


Les Mikesell
  les@chinet.chi.il.us

herder@myab.se (Jan Herder) (12/07/89)

In article <481@hades.OZ> greyham@hades.OZ (Greyham Stoney) writes:
>1) When the tape is completely full, GNUtar reports "error opening
>directory ...." for every directory not yet covered. At least, this seems
>to be what's happening. If the tape drive is 120Mb, and the filesystem is
>115Mb, how come the tape fills up at all anyway when doing a full dump???.

If the tape is not streaming you don't get 120Mb. Every time it has to
stop you lose some bytes. GNUtar also has some overhead, so dumping
a 115Mb filesystem can take more than 115Mb of tape.

    When the tape is full, the driver returns a different error code
than GNUtar expects. Find out which code it is and fix GNUtar.

-- 
Jan Herder, MYAB Sweden                    |  Phone: +46 31 18 75 12
Internet: herder@myab.se                   |  Fax:   +46 31 18 28 42
UUCP: 	  uunet!sunic!chalmers!myab!herder |  Address: Dr. Forseliusg 21
ARPA:	  herder%myab.se@uunet.uu.net      |           413 26 Gothenburg

pcg@aber-cs.UUCP (Piercarlo Grandi) (12/08/89)

In article <1989Dec5.012032.2877@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
    In article <481@hades.OZ>, greyham@hades.OZ (Greyham Stoney) writes:
    > 1) When the tape is completely full, GNUtar reports "error opening
    > directory ...." for every directory not yet covered. At least, this seems
    > to be what's happening. If the tape drive is 120Mb, and the filesystem is
    > 115Mb, how come the tape fills up at all anyway when doing a full dump???.
    
    This is probably because you have some sparse files on the file system.  When
    they get backed up, they take up the logical amount of space, not the physical
    amount of space taken up on the file system.

A likely reason would be that GNU tar, while walking the filesystem tree,
forgets to close directories it has opened when unstacking them, and
thus runs out of file descriptor slots. Since a typical 386 Unix has 60 fd
slots per process, this may take a while, and make you believe that you have
dumped too much.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

buck@siswat.UUCP (A. Lester Buck) (12/08/89)

In article <478@myab.se>, herder@myab.se (Jan Herder) writes:
> If the tape is not streaming you don't get 120Mb. Every time it has to
> stop you lose some bytes.

This is incorrect.  You do not lose any bytes from start/stop mode.  If the
tape is streaming but the next block is not quite ready, the drive can write
an extended inter-record gap and then the data when it arrives.  If the data
does not arrive within a certain time window, and the drive has to stop and
reposition, the extended gap is re-recorded as a normal inter-record gap.
The extended gaps can really chew up tape if your system is on the brink of
stopping streaming.

-- 
A. Lester Buck		...!texbell!moray!siswat!buck