[comp.unix.i386] tar & ulimit are pissing me off.

steve@hacker.UUCP (Stephen M. Youndt) (03/13/90)

The title says it all.  I've been trying to get the gcc archive on and off
for the past 3 months, without any luck.  No matter what I do, either tar
or ulimit seems to bite me.  There is a tunable parameter ULIMIT as well as
what seems to be an undocumented command 'ulimit'.  Using 'ulimit 10000'
allows me to create files of up to 5 Meg (approx), while changing the
ULIMIT parameter doesn't seem to do anything at all.  The problem is that
the 'ulimit' command is not inherited by uucp (even when I put the command in
an /etc/rc2.d/S* file and reboot the system).  So, the problem remains that
I can't receive files of over 2 Meg via uucp.  You might suggest at this point
that I get the archive broken down into more manageable chunks.  Great idea!
I tried this, and I received the archive fine.

Problem #2 is that even though I can run

 ulimit 20000
 cat gcc-1.36.tar.Z.*[0,1,2,3,4,5,6,7,8,9,10] > gcc-1.36.tar.Z
 uncompress gcc-1.36.tar
 tar xf gcc-1.36.tar

At this point I get, "tar: directory checksum error" or something thereabouts
which has the effect of depriving me of what seem to be some fairly vital
files. 

I'm running Bell Tech Unix 3.2u, and I'd like to know what to do about either,
or both of these problems.  Is there a way to permanently set ULIMIT?  Is this
a known bug with 'tar', or is there something less apparent screwed up? Thanks
in advance.

			Stephen M. Youndt (uunet!hacker!steve)

kaleb@mars.jpl.nasa.gov (Kaleb Keithley) (03/13/90)

In article <183@hacker.UUCP> steve@hacker.UUCP (Stephen M. Youndt) writes:
> ulimit 20000
> cat gcc-1.36.tar.Z.*[0,1,2,3,4,5,6,7,8,9,10] > gcc-1.36.tar.Z
> uncompress gcc-1.36.tar
> tar xf gcc-1.36.tar
>
>At this point I get, "tar: directory checksum error" 
>which has the effect of depriving me of some fairly vital files. 
>
>or both of these problems.  Is there a way to permanently set ULIMIT?  Is this

You can set ULIMIT higher in your configuration files and rebuild the
kernel, but even better would be to link "zcat" to "compress"
(uncompress is also a link to compress; it just checks the name it was
invoked by to see what to do) and then "zcat gcc-1.36.tar | tar xvf -"

But this won't cure the fact that you have a checksum error in your
tar file.

kaleb@mars.jpl.nasa.gov            Jet Propeller Labs
Kaleb Keithley

spelling and grammar flames > /dev/null

cpcahil@virtech.uucp (Conor P. Cahill) (03/13/90)

In article <183@hacker.UUCP> steve@hacker.UUCP (Stephen M. Youndt) writes:
>The title says it all.  I've been trying to get the gcc archive on and off
>for the past 3 months, without any luck.  No matter what I do, either tar
>or ulimit seems to bite me.  There is a tunable parameter ULIMIT as well as
>what seems to be an undocumented command 'ulimit'.  Using 'ulimit 10000'

ulimit is not undocumented.  It is a "built-in" command of the shell,
documented in sh(1).

>allows me to create files of up to 5 Meg (approx), while changing the
>ULIMIT parameter doesn't seem to do anything at all.  The problem is that

You can change the ULIMIT parameter until you are blue in the face; it
has no effect until you recompile the kernel and reboot.  If you did do
this, then the problem is your /etc/default/login file, which has a
ULIMIT parameter as well.

>the 'ulimit' command is not inherited by uucp (even when I put the command in
>an /etc/rc2.d/S* file and reboot the system).  So, the problem remains that

Setting a ulimit only affects children of the current process.  So when you 
placed a ulimit call in an /etc/rc2.d/S* file, it only took effect while that
file was being processed (and for any children of that process).
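The inheritance rule is easy to see in any Bourne-style shell; this is
just an illustrative sketch, not anything uucp-specific:

```shell
# A ulimit set in a child shell applies to that shell and its own
# children only; it never propagates back up to the parent process.
sh -c 'ulimit 1024; ulimit'   # the child reports 1024 (512-byte blocks)
ulimit                        # the parent still reports its old limit
```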

The problem with uucp is that even if you had changed the S75cron file (which
starts cron, which starts uucp via a call to uudemon.hr in uucp's crontab),
it wouldn't have any effect on uucp sessions that were initiated from anywhere
but cron (i.e. uucp logins, users forcing a uucp via "Uutry -r system", etc.).

>I can't receive files of over 2 Meg via uucp.  You might suggest at this point
>that I get the archive broken down into more manageable chunks.  Great idea!
>I tried this, and I received the archive fine.

I would suggest this anyway, because if you are transferring a 20 MB file
and there is a problem at byte 19999999, the whole file must be retransmitted.
By splitting the file into 1 MB portions, only the last portion would
have to be retransmitted.
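The splitting itself can be done mechanically with split(1); the block
size and the output names below are only illustrative:

```shell
# Cut the compressed archive into 1 MB pieces named
# gcc-1.36.tar.Z.part.aa, .ab, .ac, ... so a failed transfer only
# costs you the piece that was in flight.
split -b 1048576 gcc-1.36.tar.Z gcc-1.36.tar.Z.part.

# With split's default alphabetic suffixes, lexical order is also
# creation order, so a plain glob reassembles the pieces correctly.
cat gcc-1.36.tar.Z.part.* > gcc-1.36.tar.Z
```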

>Problem #2 is that even though I can 
>
> ulimit 20000
> cat gcc-1.36.tar.Z.*[0,1,2,3,4,5,6,7,8,9,10] > gcc-1.36.tar.Z
> uncompress gcc-1.36.tar
> tar xf gcc-1.36.tar

Your problem is probably due to the naming convention you chose.  If the 
files are named as you indicate, the order in which the files are placed
into gcc-1.36.tar.Z is as follows:

	gcc-1.36.tar.Z.0
	gcc-1.36.tar.Z.1
	gcc-1.36.tar.Z.10
	gcc-1.36.tar.Z.2 

BINGO - file.10 got placed before file.2.  However, since uncompress worked
correctly on the file, I would tend to doubt this.  Why don't you try unpacking
the data using the following pipeline:

	cat [gcc files] | uncompress | tar -xovf -

This way you won't run into any ulimit problems.
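If the glob ordering is the worry, the parts can also be concatenated in
explicit numeric order; the filenames here are the ones from the
original post:

```shell
# Concatenate the eleven pieces in numeric order (so ".10" cannot
# sort ahead of ".2"), then uncompress and untar through a pipe,
# which creates no intermediate file and so never hits ulimit.
for n in 0 1 2 3 4 5 6 7 8 9 10; do
    cat "gcc-1.36.tar.Z.$n"
done | uncompress | tar xvf -
```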

>Is there is way to permanently set ULIMIT? 

Yes.  

	1. remove the ULIMIT line from /etc/default/login
	2. change the ULIMIT configuration parameter 
	3. rebuild the kernel
	4. reboot
	5. make sure there are no ulimit calls in /etc/profile and /etc/rc*

This works if you want your ulimit to be <= 12288 (~6MB).  If you want it
to be larger, you must modify the /etc/conf/cf.d/mtune file and change
the ULIMIT line to be something like the following:

	ULIMIT	3072	2048	whatever_max_you_want


-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

jackv@turnkey.TCC.COM (Jack F. Vogel) (03/14/90)

In article <183@hacker.UUCP> steve@hacker.UUCP (Stephen M. Youndt) writes:
>The title says it all.  I've been trying to get the gcc archive on and off
>for the past 3 months, without any luck.  No matter what I do, either tar
>or ulimit seems to bite me.  There is a tunable parameter ULIMIT as well as
>what seems to be an undocumented command 'ulimit'.  Using 'ulimit 10000'
                    ^^^^^^^^^^^^^^^^^^^^^^
ulimit is not undocumented; it is not an independent command, it is a
Bourne shell builtin, so see that man page for info. The csh has a
corresponding builtin command, 'limit'. BTW, ISC has an interesting bug
in the csh limit function: if you set it to unlimited, instead of getting
what you would expect, everything exceeds your limit!! Sounds like a
variable that needs to be unsigned is not declared as such somewhere.

>allows me to create files of up to 5 Meg (approx), while changing the
>ULIMIT parameter doesn't seem to do anything at all.  The problem is that

As another poster noted, you need to rebuild and run a new kernel with this
parameter change to get any effect. As root you can set the ulimit and then
issue the uucp command and you should have no problem.

>Problem #2 is that even though I can 
>
> ulimit 20000
> cat gcc-1.36.tar.Z.*[0,1,2,3,4,5,6,7,8,9,10] > gcc-1.36.tar.Z
                                             ^^^^^
>At this point I get, "tar: directory checksum error" or something thereabouts
 
This is great!! I am not surprised you get a checksum error with a command
like this, what you have when you are done is just part 10!! Try replacing
the '>' with '>>' and everything should work fine :-}!!

Ain't Unix Grand :-} :-}!!

Disclaimer: I speak for myself, not for LCC.


-- 
Jack F. Vogel			jackv@seas.ucla.edu
AIX Technical Support	              - or -
Locus Computing Corp.		jackv@ifs.umich.edu

cpcahil@virtech.uucp (Conor P. Cahill) (03/14/90)

In article <6731@turnkey.TCC.COM> jackv@turnkey.TCC.COM writes:
>>Problem #2 is that even though I can 
>>
>> ulimit 20000
>> cat gcc-1.36.tar.Z.*[0,1,2,3,4,5,6,7,8,9,10] > gcc-1.36.tar.Z
>                                             ^^^^^
>>At this point I get, "tar: directory checksum error" or something thereabouts
> 
>This is great!! I am not surprised you get a checksum error with a command
>like this, what you have when you are done is just part 10!! Try replacing
>the '>' with '>>' and everything should work fine :-}!!

That is not the case (if he typed in what he said he typed in).  And he
should definitely use the single '>' to ensure that only those files are 
in the concatenation.

For example:

		> test.1
		> test.2
		> test.10

		echo *
		test.1 test.10 test.2
	
		echo test.*[0,1,2,3,4,5,6,7,8,9,10]
		test.1 test.10 test.2

		echo test.*[1,2,10]
		test.1 test.10 test.2
	
		echo test.*[1,2]
		test.1 test.2

So, the '10' will be matched (as will a '01'): the '*' absorbs the first
digit and the character class matches the second.


-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 

jde@everex.UUCP (-Jeff Ellis) (03/15/90)

To keep a system-wide ulimit, edit /etc/default/login
and change the ULIMIT value in that file.  This should help with your
problems.

-- 
Jeff Ellis		ESIX SYSTEM/V  UUCP:uunet!zardoz!everex!jde
			US Mail: 1923 St. Andrew Place, Santa Ana, CA 92705

marc@dumbcat.UUCP (Marco S Hyman) (03/16/90)

    >> cat gcc-1.36.tar.Z.*[0,1,2,3,4,5,6,7,8,9,10] > gcc-1.36.tar.Z

In all the discussion about the above command, no one has pointed out that
the use of brackets here is incorrect.  Assuming C-shell, I believe the
original poster wanted (and perchance meant) ...Z*{0,1,2,3,4,5,6,7,8,9,10}.
Note the curly braces, not the brackets.  What was entered was a character
class that just happens to contain the comma 10 times and the 0 and 1
twice.  An equivalent expression would be ...Z*[0-9,].  Also, when using
curly braces the shell builds the arguments in the order given.
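The difference is easy to demonstrate; this sketch assumes a shell with
brace expansion (csh, or a modern Bourne-derived shell) and uses
throwaway file names:

```shell
touch test.1 test.2 test.10

# Character class: '*' plus one class character matches all three
# names, and the shell hands them over in sorted order, .10 before .2.
echo test.*[0,1,2,3,4,5,6,7,8,9,10]

# Brace expansion: the words are generated in exactly the order
# written, so the pieces stay in numeric order.
echo test.{1,2,10}
```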

Besides that Conor's analysis was correct.

// marc
-- 
// {ames,decwrl,sun}!pacbell!dumbcat!marc
// pacbell!dumbcat!marc@lll-winken.llnl.gov

steve@hacker.UUCP (Stephen M. Youndt) (03/22/90)

My thanks go to all of you who were kind enough to send answers to my 
questions.  My ULIMIT problem was, in fact, solved by modifying
/etc/default/login.  I suspect the answer to the other question is that
I got a bad copy of the compress set.  I believe I will soon be on my way
to compiling gcc.  Thanks again. -- SMY (uunet!hacker!steve)
-- 
This is a poor attempt at a .sig