[comp.unix.aux] TAR and CPIO

jim@jagubox.gsfc.nasa.gov (Jim Jagielski) (11/15/90)

In article <1990Nov14.104544.14142@panix.uucp> alexis@panix.uucp (Alexis Rosen) writes:
>
>The problem is, "working" doesn't mean "releasable". Tar and cpio will fail
>with any large job (trivial ones work). Dump and restore work, but I've seen
>problems there too: restore not wanting to restore, and suchlike.
>


I've heard a number of times that tar and cpio "fail" with large jobs... what
is meant by "fail"? During the backup? During the restore? Too many files?
Files too big? Too many links (symbolic or otherwise)?

My system is relatively medium-sized:

/             53398 blocks used of  102508 total. (52.09%):   /dev/dsk/c0d0s0
/usr2         59426 blocks used of  154418 total. (38.48%):   /dev/dsk/c5d0s3
/usr          71524 blocks used of  190782 total. (37.49%):   /dev/dsk/c0d0s2

and I back up each partition using cpio and the st driver. I've not hit a
snag yet when backing up, and I haven't needed any extensive restoring yet,
although I did do a cpio -pdmuv from /usr2 to /usr/usr2 when I reformatted
/usr2 to the BSD Fast File System (after I updated to 2.0) and then back
again to /usr2, without problems (/ and /usr are on a Wren 170, /usr2 is a
Quantum 80).
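
For concreteness, the routine is roughly the following. This is just a
sketch: /dev/rmt/0 stands in for whatever device node your st driver
provides, and the -o flags shown are typical usage, not gospel.

    # Back up a whole partition to tape with cpio
    # (ASCII headers, verbose, 5120-byte blocking):
    cd /usr2
    find . -print | cpio -ocvB > /dev/rmt/0

    # Copy a tree between filesystems with cpio in pass mode,
    # creating directories and preserving modification times:
    cd /usr2
    find . -print | cpio -pdmuv /usr/usr2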

Have I been lucky, or have I just not reached the size limit yet?

By the way, isn't A/UX distributed in cpio format? Therefore, cpio must be
able to restore A/UX entirely from the distribution medium... true, it's
less than 50 megs, but I wouldn't call it "trivial".

Anyone know if any of the 3rd-party tar/cpio replacements ("CTar", "LoneTar"(?))
have been ported to A/UX?

Finally, what about pax?
--
=======================================================================
#include <std/disclaimer.h>
                                 =:^)
           Jim Jagielski                    NASA/GSFC, Code 711.1
     jim@jagubox.gsfc.nasa.gov               Greenbelt, MD 20771

"Kilimanjaro is a pretty tricky climb. Most of it's up, until you reach
 the very, very top, and then it tends to slope away rather sharply."

coolidge@cs.uiuc.edu (John Coolidge) (11/15/90)

jim@jagubox.gsfc.nasa.gov (Jim Jagielski) writes:
>I've heard a number of times that tar and cpio "fail" with large jobs... what
>is meant by "fail"? During the backup? During the restore? Too many files?
>Files too big? Too many links (symbolic or otherwise)?

I just (two days ago) had tar fail on me when getting the release of
g++ out the door. For some reason, it was not at all happy with the
g++ directory (which, now that I think about it, was completely messed
up and has been deleted from the archive; I'll try to put a replacement
out soon. This refers to the sources; the binaries and patches are fine).
Anyway, tar failed on that directory.

>Anyone know if any of the 3rd-party tar/cpio replacements ("CTar", "LoneTar"(?))
>have been ported to A/UX?

GNU tar 1.09 has been ported. It went very easily, and I've been using
it as my primary tar for the past two days :-). GNU cpio also ports
pretty easily.

--John

--------------------------------------------------------------------------
John L. Coolidge     Internet:coolidge@cs.uiuc.edu   UUCP:uiucdcs!coolidge
Of course I don't speak for the U of I (or anyone else except myself)
Copyright 1990 John L. Coolidge. Copying allowed if (and only if) attributed.
You may redistribute this article if and only if your recipients may as well.

alexis@panix.uucp (Alexis Rosen) (11/18/90)

jim@jagubox.gsfc.nasa.gov (Jim Jagielski) writes:
>alexis@panix.uucp (Alexis Rosen) writes:
>>The problem is, "working" doesn't mean "releasable". Tar and cpio will fail
>>with any large job (trivial ones work). Dump and restore work, but I've seen
>>problems there too: restore not wanting to restore, and suchlike.
>My system is relatively medium-sized:
>/             53398 blocks used of  102508 total. (52.09%):   /dev/dsk/c0d0s0
>/usr2         59426 blocks used of  154418 total. (38.48%):   /dev/dsk/c5d0s3
>/usr          71524 blocks used of  190782 total. (37.49%):   /dev/dsk/c0d0s2
>
>and I back up each partition using cpio and the st driver. I've not hit a
>snag yet when backing up, and I haven't needed any extensive restoring yet,
>although I did do a cpio -pdmuv from /usr2 to /usr/usr2 when I reformatted
>/usr2 to the BSD Fast File System (after I updated to 2.0) and then back
>again to /usr2, without problems (/ and /usr are on a Wren 170, /usr2 is a
>Quantum 80).
>
>Have I been lucky, or have I just not reached the size limit yet?

First of all, this problem only seems to happen with tapes. cpio -p has always
worked for me, with considerably larger things than your /usr2.
As for backing up something that size to tape, I haven't tried it. The author
says st _does_ fail, and I'm not interested in finding out that I'm the
exception _after_ I need to restore. To be honest, though, it's supposed to
crash during backup, and I don't know why yours hasn't. Perhaps a chat with
MicroNet will clear things up. I'll call them Monday...

>By the way, isn't A/UX distributed in cpio format? Therefore, cpio must be
>able to restore A/UX entirely from the distribution medium... true, it's
>less than 50 megs, but I wouldn't call it "trivial".

As I think has been made clear, it works with the Apple tape drive, and _only_
for backups that fit on a single tape (as you're well aware, I recall...).
Nothing else that I know of has been shown to work.

>Anyone know if any of the 3rd-party tar/cpio replacements ("CTar", "LoneTar"(?))
>have been ported to A/UX?
>
>Finally, what about pax?

Well, I just tried using it to read a tar archive I had made (a small one).
No dice. It might not have been pax's fault, though, so I can't say anything.
Except that I wish the folks at Apple would fix Major Screwup #2 so that we
can get on with our lives (serial stuff being #1...). A/UX is a wonderful
product. When my disk crashes, I'll be able to say "A/UX _was_ a wonderful
product." And that's all.
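
For anyone who wants to repeat the experiment, the POSIX-style pax usage
for reading a tar archive should be something like the sketch below; whether
A/UX's pax actually honors it is exactly what's in question here.

    # List the contents of an existing tar-format archive:
    pax -f archive.tar

    # Extract it, verbosely, into the current directory:
    pax -rv -f archive.tar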

---
Alexis Rosen
Owner/Sysadmin, PANIX Public Access Unix, NY
{cmcl2,apple}!panix!alexis

alexis@panix.uucp (Alexis Rosen) (11/18/90)

coolidge@cs.uiuc.edu writes:
>jim@jagubox.gsfc.nasa.gov (Jim Jagielski) writes:
>>I've heard a number of times that tar and cpio "fail" with large jobs... what
>>is meant by "fail"? During the backup? During the restore? Too many files?
>>Files too big? Too many links (symbolic or otherwise)?
>
>I just (two days ago) had tar fail on me when getting the release of
>g++ out the door. For some reason, it was not at all happy with the [...]

Welcome to the club.

>>Anyone know if any of the 3rd-party tar/cpio replacements ("CTar", "LoneTar"(?))
>>have been ported to A/UX?
>
>GNU tar 1.09 has been ported. It went very easily, and I've been using
>it as my primary tar for the past two days :-). GNU cpio also ports
>pretty easily.

Have you tested this with tapes of any sort? Multiple tapes of any sort?
If not, I'd love to try it out... (That's the restrained version of "GIMME!".)
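
For reference, the thing that would need testing is GNU tar's multi-volume
mode. The invocation should be roughly as below ("gtar" meaning the GNU tar
binary under whatever name it's installed, and the device node being a
placeholder):

    # Create a backup spanning several tapes; -M prompts for
    # the next volume when the current tape fills up:
    gtar -cvMf /dev/rmt/0 /usr2

    # Read it back across the same sequence of tapes:
    gtar -xvMf /dev/rmt/0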

Thanks,
---
Alexis Rosen
Owner/Sysadmin, PANIX Public Access Unix, NY
{cmcl2,apple}!panix!alexis

jim@jagubox.gsfc.nasa.gov (Jim Jagielski) (11/19/90)

In article <1990Nov18.095917.1746@panix.uucp> alexis@panix.uucp (Alexis Rosen) writes:
>jim@jagubox.gsfc.nasa.gov (Jim Jagielski) writes:
>>alexis@panix.uucp (Alexis Rosen) writes:
>>>The problem is, "working" doesn't mean "releasable". Tar and cpio will fail
>>>with any large job (trivial ones work). Dump and restore work, but I've seen
>>>problems there too: restore not wanting to restore, and suchlike.
>>My system is relatively medium-sized:
>>/             53398 blocks used of  102508 total. (52.09%):   /dev/dsk/c0d0s0
>>/usr2         59426 blocks used of  154418 total. (38.48%):   /dev/dsk/c5d0s3
>>/usr          71524 blocks used of  190782 total. (37.49%):   /dev/dsk/c0d0s2
>>
>>Have I been lucky, or have I just not reached the size limit yet?
>
>First of all, this problem only seems to happen with tapes. cpio -p has always
>worked for me, with considerably larger things than your /usr2.
>As for backing up something that size to tape, I haven't tried it. The author
>says st _does_ fail, and I'm not interested in finding out that I'm the
>exception _after_ I need to restore. To be honest, though, it's supposed to
>crash during backup, and I don't know why yours hasn't. Perhaps a chat with
>MicroNet will clear things up. I'll call them Monday...
>

Now, I have backed up the above partitions using st and cpio and have not
had any problems. And the times that I've needed to snag an old copy of
something, I have, once again, not hit a problem.

If cpio -p and cpio -o to DISK work, but cpio -o to TAPE doesn't, then I would
suspect the tape driver. As you said, the author says "st _does_ fail", but
does it fail because of the driver itself, or because of something cpio/tar
does that st doesn't like?
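
One way to split the difference, I suppose, is to write the identical
archive both ways and see which copy reads back cleanly. A sketch (the tape
device node is a placeholder):

    # Same file list, same flags, two destinations:
    find /usr2 -print | cpio -ocB > /tmp/usr2.cpio
    find /usr2 -print | cpio -ocB > /dev/rmt/0

    # Verify each by listing its table of contents; if the disk
    # copy reads back clean and the tape copy doesn't, suspect
    # st rather than cpio:
    cpio -ict < /tmp/usr2.cpio
    cpio -ict < /dev/rmt/0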

In other words, I'm kinda confused. If it's tar/cpio that's bad, then I can
use the GNU stuff, which I hope works. But if it's _st_, then maybe it doesn't
matter WHAT backup utility I use... what's the word??
--
=======================================================================
#include <std/disclaimer.h>
                                 =:^)
           Jim Jagielski                    NASA/GSFC, Code 711.1
     jim@jagubox.gsfc.nasa.gov               Greenbelt, MD 20771

"Kilimanjaro is a pretty tricky climb. Most of it's up, until you reach
 the very, very top, and then it tends to slope away rather sharply."