[comp.unix.questions] tar -r on a diskfile/tar arg too long

ssi@ziggy.EDU (Ssi) (06/08/90)

Does anyone know of any drawbacks or problems with using the -r option
of tar(1) on disk files, as in:

cat /dev/null > TARFILE
tar -cvf TARFILE long_list       # table of contents as first file
for i in `cat long_list`
do
    tar -rvf TARFILE $i          # append each file to TARFILE
done
tar -cvf /dev/rmt0 TARFILE       # put the whole thing on tape
rm TARFILE


note:

tar -cvf /dev/rmt0 `cat long_list` 

fails with an "argument list too long" error, because the expanded list
exceeds the kernel's limit on total argument size.



                     Greg Ripp (813)628-6100 x5123
                     greg@system1.usfvax2.edu
                     ...!uunet!ateng!usfvax2!system1!greg

tr@samadams.princeton.edu (Tom Reingold) (06/08/90)

In article <1339@ziggy.EDU> ssi@ziggy.EDU (Ssi) writes:
$ 
$ 
$ Does anyone know any draw backs/problems with using the -r option of
$ tar(1) on disk files, as in:
$ 
$ cat /dev/null > TARFILE
$ tar -cvf TARFILE long_list       <<-- table of contents as first file
$ for i in `cat long_list`
$ do
$ tar -rvf TARFILE $i             <<-- append each file to TARFILE
$ done
$ tar -cvf /dev/rmt0 TARFILE      <<-- put the whole thing to tape 
$ rm TARFILE
$ 
$ 
$ note:
$ 
$ tar -cvf /dev/rmt0 `cat long_list` 
$ 
$ Would give tar argument too long error.

One disadvantage is that it's going to be very slow.  Each append must
read through the existing archive to find its end, so each file you add
takes longer than the previous one, and the total time grows roughly
quadratically with the number of files.

This is why "cpio" takes its input names from its standard input.  It's
an unusual way of doing things, but very appropriate for a file
archiver.

Also, not all media can be appended to; on Exabyte tapes, for example,
it simply won't work.
--
                                        Tom Reingold
                                        tr@samadams.princeton.edu
                                        rutgers!princeton!samadams!tr
                                        201-560-6082

bill@twg.UUCP (Bill Irwin) (06/09/90)

In article <1339@ziggy.EDU> ssi@ziggy.EDU (Ssi) writes:
<Does anyone know any draw backs/problems with using the -r option of
<tar(1) on disk files, as in:
<
<cat /dev/null > TARFILE
<tar -cvf TARFILE long_list       <<-- table of contents as first file
<for i in `cat long_list`
<do
<tar -rvf TARFILE $i             <<-- append each file to TARFILE
<done
<tar -cvf /dev/rmt0 TARFILE      <<-- put the whole thing to tape
<rm TARFILE

I have always used "tar cvf TARFILE `cat long_list`" to do this.

There is also the -F option to specify the file:

tar cvfF TARFILE long_list

Using one of these methods should eliminate your error message.
-- 
Bill Irwin - TWG The Westrheim Group - Vancouver, BC, Canada
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
uunet!van-bc!twg!bill     (604) 431-9600 (voice) |     UNIX Systems
Bill.Irwin@twg.UUCP       (604) 431-4629 (fax)   |     Integration

guy@auspex.auspex.com (Guy Harris) (06/12/90)

>I have always used "tar cvf TARFILE `cat long_list`" to do this.

Well, to quote the very article to which you're following up:

> note:
> 
> tar -cvf /dev/rmt0 `cat long_list` 
> 
> Would give tar argument too long error.

so, apparently, he *can't* use that.  Maybe you did that under SunOS 4.x
or some other system that permits huge argument lists, or maybe your
"long_list" wasn't as long as his.

>There is also the -F option to specify the file:

There is also the "-F" option on *some* systems.  It is *not* present in
all versions of "tar".

ssi@ziggy.EDU (Ssi) (06/13/90)

In article <3448@auspex.auspex.com> guy@auspex.auspex.com (Guy Harris) writes:
>>I have always used "tar cvf TARFILE `cat long_list`" to do this.
>
>Well, to quote the very article to which you're following up:
>
>> note:
>> 
>> tar -cvf /dev/rmt0 `cat long_list` 
>> 
>> Would give tar argument too long error.
...
>>There is also the -F option to specify the file:
>
>There is also the "-F" option on *some* systems.  It is *not* present in
>all versions of "tar".


 My Fine Manual (SunOS 3.x) says the F option excludes SCCS
directories; FF also excludes .o, errs, core, and a.out files.

 Has this changed on SunOS 4.x, or in some other tar ???

For the record, my `long_list' is approx 100 lines of about 50 characters
each (doesn't sound too long to me :-) ).

                    Greg Ripp (SSI) (813)628-6100 x5123
                        greg@system1.usfvax2.edu
                  ...!uunet!ateng!usfvax2!system1!greg

guy@auspex.auspex.com (Guy Harris) (06/15/90)

 > My Fine Manual (SunOS 3.x) says the F option will exclude SCCS 
 >directories, FF will also exclude .o, errs, core and a.out files.
 >
 > Has this changed on SunOS 4.x, or in some other tar ???

Apparently so, since the other poster said that in his "tar" the "F" option
specified a file giving a list of file names.  The original V7 "tar"
didn't have *any* "F" flag, and the S5 "tar" still doesn't, as of the
S5R3.1 3B2 source distribution....  Berkeley added the "F" flag you find
in SunOS, and Sun picked it up from there.

>For the record, my `long_list' is approx 100 lines of about 50 characters
>each ( does'nt sound too long to me :-) ).

100 lines of about 50 characters is about 5000 characters.  Earlier
UNIXes had a limit of about 4096 or 5120 characters worth of arguments;
you're close to the limit for those systems.  (Even *earlier* systems
had limits of somewhere around 512 characters, as I remember, but few of
those remain....)

4BSD bumped that to 10240, and later to 20480; I forget what SunOS 3.x
had as its limit.  (Grep for NCARGS in "/usr/include/sys/param.h".) 
SunOS 4.0 boosted it to 0x100000, although the 4.x C shell can't cope with
an argument list that big (the Bourne shell doesn't have quite the same
problem).
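A sketch of how one might probe and work around that limit (getconf and
xargs are assumptions here, not tools mentioned in the thread): getconf
reports the kernel's argument-space limit at run time, and xargs splits
a long name list into runs that fit under it, appending each run with
tar -r.

```shell
#!/bin/sh
# Sketch using tools not discussed in the thread: getconf (POSIX) shows
# the argument-space limit, and xargs batches a long list of names so
# that no single tar invocation exceeds it.
getconf ARG_MAX                        # bytes available for argv + environ

mkdir -p xdemo && cd xdemo
touch f1 f2 f3
printf 'f1\nf2\nf3\n' > long_list
rm -f TARFILE
xargs tar -rvf TARFILE < long_list     # -r appends, so batches accumulate
tar -tvf TARFILE                       # verify all the names made it in
cd ..
```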

dik@cwi.nl (Dik T. Winter) (06/15/90)

In article <3474@auspex.auspex.com> guy@auspex.auspex.com (Guy Harris) writes:
 >                                   although the 4.x C shell can't cope with
 > an argument list that big (the Bourne shell doesn't have quite the same
 > problem).

Similar observations apply to the Silicon Graphics version of Unix.
A humble guess:  the C shell has a hardwired limit somewhere deep in the
sources (a common feature of Unix utilities).
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl

jmm@eci386.uucp (John Macdonald) (06/15/90)

In article <3474@auspex.auspex.com> guy@auspex.auspex.com (Guy Harris) writes:
|
| > My Fine Manual (SunOS 3.x) says the F option will exclude SCCS 
| >directories, FF will also exclude .o, errs, core and a.out files.
| >
| > Has this changed on SunOS 4.x, or in some other tar ???
|
|Apparently, since the other guy said that in his "tar" the "F" option
|specified a file giving a list of file names.  The original V7 "tar"
|didn't have *any* "F" flag, and the S5 "tar" still doesn't, as of the
|S5R3.1 3B2 source distribution....  Berkeley added the "F" flag you find
|in SunOS, and Sun picked it up from there.

The "F" as "next argument is a file containing file names" was in XENIX
release 7 and System III versions.  I don't know if it is still there in
current XENIX variants.  Until this discussion, I assumed that this useful
option was part of the standard (AT&T) tar code rather than being something
added by Microsoft/SCO - obviously I have not tried to use it in the last
couple of years.
-- 
Algol 60 was an improvement on most          | John Macdonald
of its successors - C.A.R. Hoare             |   jmm@eci386

guy@auspex.auspex.com (Guy Harris) (06/16/90)

 > >                                   although the 4.x C shell can't cope with
 > > an argument list that big (the Bourne shell doesn't have quite the same
 > > problem).
 >
 >Similar observations apply to Silicon Graphics version of Unix.
 >A humble guess:  the C shell has a hardwired limit somewhere deep in the
 >sources (a common feature of Unix utilities).

I don't know what happened when (as I infer from your message) SGI
boosted the max arg list size, but the problem in SunOS is that the C
shell has an array of pointers into the arglist, sized proportionally to
NCARGS, that it sticks on the stack.

Unfortunately, if NCARGS is very large, this creates a humongous array
on the stack, which caused some problems.  The *correct* fix is to
allocate this array dynamically; the *expedient* fix, given the C
shell's internal messiness, was to build in a smaller fixed size for
the array, based on the old value of NCARGS.