mat@zeus.opt-sci.arizona.edu (Mat Watson) (12/28/89)
My professor has a 6386 running Unix System V that he wants to back up
using the tape drive on my Sun 3/160.  There is no dump command on his
system, so we tried to use tar.  However, the tar command on his system
quits after it has written roughly 2MB to the archive file (this is not
very nice if you're trying to back up about 30MB of files).  Tar always
quits at the same number of bytes no matter what kind of device or file
we try to write to (a pipe or a disk file).  BTW, these files are on the
6386; they are not remote.

Is tar under System V broken?  Any ideas on a workaround?

Thanks in advance.

Mat Watson    mat@zeus.opt-sci.arizona.edu    [128.196.128.219]
..{allegra,cmcl2,hao!noao}!arizona!zeus.opt-sci.arizona.edu!mat
Optical Sciences Center, Univ. of Arizona, Tucson, AZ 85721, USA
fst@gtenmc.UUCP (Fariborz "Skip" Tavakkolian) (12/30/89)
In article <1295@amethyst.math.arizona.edu> mat@zeus.opt-sci.arizona.edu (Mat Watson) writes:
 [delete]
>so we tried to use tar. However, the tar command on his system quits
>after it has written roughly 2MB to the archive file ...
>Tar always quits at the same number of bytes no matter what kind of device
>or file we try to write to (a pipe or a disk file) ...
>Is tar under System V broken? Any ideas on a workaround?
>Thanks in advance.
>Mat Watson
>..{allegra,cmcl2,hao!noao}!arizona!zeus.opt-sci.arizona.edu!mat

This looks like the ULIMIT problem; by default, most 3B2s, 6386s, etc. are
set up with a ULIMIT of 2048 (1K) blocks.  There is information in the AT&T
manuals on how to change the system limits.

P.S. I am working on a VAX 8810 with 128 MB of main memory (RAM) running
SVR3.1 and close to a GAZILION MB of disk; the ULIMIT is set to 2048, which
makes no sense whatsoever ...

Skip
-- 
----------------------------------------------------------------------------
Fariborz "Skip" Tavakkolian  -of-  Automated Cellular Engineering
Currently consulting  -at-  GTE Telecom, Inc.  Bothell, Wa
Mail: tiny1!fst@mcgp1  -or-  fst@gtenmc
wtr@moss.ATT.COM (3673,ATTT) (01/03/90)
In article <368@gtenmc.UUCP> fst@gtenmc.UUCP (Fariborz Skip Tavakkolian) writes:
>In article <1295@amethyst.math.arizona.edu> mat@zeus.opt-sci.arizona.edu (Mat Watson) writes:
>>so we tried to use tar. However, the tar command on his system quits
>>after it has written roughly 2MB to the archive file ...
>>Tar always quits at the same number of bytes no matter what kind of device
>>or file we try to write to (a pipe or a disk file) ...
>>Is tar under System V broken? Any ideas on a workaround?
>This looks like the ULIMIT problem; by default most 3B2, 6386 etc. are
>setup with a ULIMIT of 2048 (1K) blocks. There is information in the AT&T
>manuals on how to change the system limits.

well, it makes sense if you are running in a development environment and
want to keep runaway processes from eating up all of the disk space.  no
matter how much disk space (total) you have, a full partition is a real
pain in the butt.

also, at least for the vaxen at work running SysV rel 3.1, the limit is in
512-byte blocks.  did this change for the 3.2 release?

>P.S., I am working on a VAX 8810 with 128 MB of main memory (RAM) running
>SVR3.1 and close to a GAZILION MB of disk; the ULIMIT is set to 2048, makes
>no sense whatsoever ...

so get your SA to raise it.  my 3b1 at home has a ulimit of 2**31 - 1
(and i only have 67MB of disk space. :-( )

BTW: gazillion is spelled with two L's ;-)

>Skip
-- 
=====================================================================
Bill Rankin                   email address:  att!moss!wtr
was: Bell Labs, Whippany NJ                   att!bromo!wtr
now: AT&T Federal Systems, Burlington NC
(919) 228 3673 (cornet 291)
fst@gtenmc.UUCP (Fariborz "Skip" Tavakkolian) (01/04/90)
In article <3456@cbnewsl.ATT.COM> wtr@moss.ATT.COM (Bill Rankin) writes:
>In article <368@gtenmc.UUCP> fst@gtenmc.UUCP (Fariborz Skip Tavakkolian) writes:
>>In article <1295@amethyst.math.arizona.edu> mat@zeus.opt-sci.arizona.edu (Mat Watson) writes:
>
> [A lot of stuff about tar problem and ULIMIT deleted]
>
>also, at least for the vaxen at work running SysV rel3.1 the block size
>is for 512 byte blocks. did this change for the 3.2 release?
>
>>P.S., I am working on a VAX 8810 with 128 MB of main memory (RAM) running
>>SVR3.1 and close to a GAZILION MB of disk; the ULIMIT is set to 2048, makes
>>no sense whatsoever ...
>
>so get your SA to raise it. my 3b1 at home has a ulimit of 2**31 (-1)
>(and i only have 67Meg of disk space. :-(

About raising the ulimit on SVR3.1 on the VAXen: well, I am glad you
brought it up.  Our SVR3.1 comes from DEC.  Our SA can't do what you
recommend, since the ULIMIT is apparently hard-coded to 2048, even when
you set the tunable ULIMIT to what you want (I don't have all the gory
details).

(BTW: blocks on this system are 1024 bytes.  Sdb crashes the system,
using STREAMS crashes the system, and lately I noticed that sneezing
crashes the system :-) )

>BTW: gazillion is spelled with two L's ;-)

I knew that :-)

>Bill Rankin                   email address:  att!moss!wtr

Skip
-- 
----------------------------------------------------------------------------
Fariborz "Skip" Tavakkolian  -of-  Automated Cellular Engineering
Currently consulting  -at-  GTE Telecom, Inc.  Bothell, Wa
Mail: tiny1!fst@mcgp1  -or-  fst@gtenmc
thad@cup.portal.com (Thad P Floryan) (01/05/90)
In <369@gtenmc.UUCP> fst@gtenmc.UUCP (Fariborz "Skip" Tavakkolian) writes:

	About raising the ulimit on the SVR3.1 on the VAXEN, well I am glad
	you brought it up.  Our SVR3.1 comes from DEC.  Our SA can't do what
	you recommend, since the ULIMIT is apparently hard coded to 2048,
	even when you set the tunable ULIMIT to what you want (I don't have
	all the gory details).

	(BTW: Blocks on this system are 1024 bytes.  Sdb crashes the system,
	using STREAMS crashes the system; and lately I noticed that sneezing
	crashes the system :-) )

Omigosh.  Now DEC is doing to UNIX what they did to VMS?!?!

VMS, with its eleventy-seven thousand fences, quotas, restrictions, etc.,
is definitely NOT a development system.  Consider the case where, at 3 A.M.,
being the only logged-in online or batch user on a 64 MB RAM system, one
would be unable to use more than 1 MB of RAM due to all those quotas,
restrictions, etc.  Guess DEC has never heard of dynamic resource sharing.

(And note the absence of a ":-)" after that last line above.)

Thad Floryan [ thad@cup.portal.com (OR) ..!sun!portal!cup.portal.com!thad ]