compata@cup.portal.com (David H Close) (08/23/90)
Recently Matt Landau (mlandau@bbn.com) wrote:

> Increasing paging space from 60 MB to 120 MB made that problem disappear.
> Yes, that's 120 MB of paging space, to create a 7 MB binary image.  Why
> it takes so much, I'll never understand, but needless to say we won't be
> trying it on a 320 with a 120 MB disk :-)

I have been trying to make perl 3.28 on AIX 3.1, using a 320 with about
400 MB of disk total and 34 MB of paging space with 16 MB RAM.  The build
dies compiling eval.c, and on several other files if I bypass that one.  I
noticed that when it dies, the console reports that paging space is low.
So I investigated.

Immediately after booting, with only root logged in on the hft, smit
reports that paging space is about 50% used!  How can this be?  There is
no obvious reason for any paging to have occurred at that time.  During
normal operation, we have up to six X terminals running, each with 2 or 3
windows; smit will usually show paging space over 80%.  Not much of an
increase over the fairly quiescent state right after boot.

When originally installing, I took the default paging space of 32 MB, not
knowing better.  Later I found only an extra 2 MB available to extend it.
The space is divided between two disks, in logical volumes hdisk6 and
hdisk61.  AIX doesn't seem to use the two areas efficiently, reporting
problems while hdisk61 is still below 50% used.  Is this normal?

This problem is getting serious enough that I would back up and
re-allocate the logical volumes, except that the 320 is supposed to be
replaced with a 520 "any day", and I need the user space.  Should I really
plan to allocate 120 MB of paging space on the 520?!!  That seems totally
unreasonable.

Dave Close, Compata, Arlington, Texas
compata@cup.portal.com    compata@mcimail.com
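The percentages Dave is reading out of smit also come from the AIX `lsps -a` command, and the check can be scripted.  A minimal sketch, parsing illustrative `lsps -a`-style output (the column layout is as on AIX, but the sample text and the 50% threshold are made up for this example, not taken from a real machine):

```shell
#!/bin/sh
# Sketch: flag paging spaces above a usage threshold by parsing
# `lsps -a`-style output.  "sample" below is illustrative only; on a
# real AIX box you would pipe `lsps -a` into the awk filter instead.
sample='Page Space  Physical Volume  Volume Group  Size %Used Active  Auto  Type
hd6         hdisk0           rootvg        32MB    52    yes   yes    lv
paging00    hdisk1           rootvg         2MB    48    yes   yes    lv'

# Skip the header line, then report any paging space over 50% used.
flagged=$(printf '%s\n' "$sample" | awk 'NR > 1 && $5 + 0 > 50 {
    printf "%s on %s is %s%% used\n", $1, $2, $5
}')
echo "$flagged"
```

Running the same filter periodically (or from cron) would show whether the 50%-used-at-boot figure is real allocation or just accounting.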
bonnett@umd5.umd.edu (hdb) (08/24/90)
* Discussion of xlc's insatiable appetite for swap space *

I have run into another page space quirk, this one with tar.  When I
attempt to untar a LARGE (86 MB) file on my 530, I get lots of disk
action and, after a few minutes, a last gasp from tar: "Low on Paging
Space!!!" and a crash.  I have 48 MB RAM and 64 MB of swap.  From what
I can tell, tar should fail if the link table is too large, but it
should not read the entire file into memory.  Also, if you are running
X (Motif) with a redirected console, you get no error message, just a
locked machine.  Does anyone have an idea what gives?

-dave bonnett; Academic Software Dev Grp, Univ of MD
markus@cernvax.UUCP (markus baertschi) (08/24/90)
In <7170@umd5.umd.edu> bonnett@umd5.umd.edu (hdb) writes:

>I have run into another page space quirk, this one with tar.
>When I attempt to untar a LARGE (86megs) file on my 530 I
>get lots of disk action and after a few minutes a last gasp
>from tar: "Low on Paging Space!!!" and a crash.
>-dave bonnett; Academic Software Dev Grp. Univ of MD

Dave,

This is a feechur of tar on AIX 3.1.  Tar just allocates one block of
memory, which is as big as the file.  There is a switch for tar (-b n,
where n is the block size) to set the size of the chunks tar allocates
at any one time.

Markus
--
Markus Baertschi | markus@cernvm.cern.ch
CERN (European Particle Research Center), Geneva, Switzerland
au0005@dundee.austin.ibm.com (08/25/90)
In article <7170@umd5.umd.edu>, bonnett@umd5.umd.edu (hdb) writes:
> From: bonnett@umd5.umd.edu (hdb)
> Subject: Re: AIX 3.1 paging space
> Date: 24 Aug 90 12:22:48 GMT
>
> * Discussion of xlc's insatiable appetite for swap space *
>
> I have run into another page space quirk, this one with tar.
> When I attempt to untar a LARGE (86megs) file on my 530 I
> get lots of disk action and after a few minutes a last gasp
> from tar: "Low on Paging Space!!!" and a crash.  I have 48mb
> RAM and 64 Mb of Swap.  From what I can tell, tar should
> fail if the link table is too large, but it should not read
> the entire file into memory.  Also, if you are running X(Motif)
> with redirected console, you get no error message, just a locked
> machine.  Does anyone have an idea what gives?
>
> -dave bonnett; Academic Software Dev Grp. Univ of MD

Tar has been fixed.  The problem was that it was trying to read as much
of the input file into memory as it could, and with a virtual address
space the size of the RISC System/6000's, that can accommodate quite
large files!  The fix is in the next PTF available from the IBM support
centers.

Regards, Peter May.

Peter May, Advisory Program Services Representative, IBM Australia.
Sydney Support Center, 1-55 Rothschild Avenue, Rosebery, NSW 2018, Australia.
*****************************************************************************
AWDNet: au0005@dundee.austin.ibm.com, peter@price.austin.ibm.com
Vnet  : AU0005 at AUSVMQ, PETERMAY at SYDVM1.
uucp  : ...!cs.utexas.edu!ibmaus!auschs!price.austin.ibm.com!peter
        ...!cs.utexas.edu!ibmaus!auschs!dundee.austin.ibm.com!croc
... An Aussie lost in Austin ...
#include <standard.disclaimer>
/* My comments are my own: I do not represent IBM here in any way. */
drake@drake.almaden.ibm.com (08/26/90)
In article <7170@umd5.umd.edu> bonnett@umd5.umd.edu (hdb) writes:

>I have run into another page space quirk, this one with tar.
>When I attempt to untar a LARGE (86megs) file on my 530 I
>get lots of disk action and after a few minutes a last gasp
>from tar: "Low on Paging Space!!!" and a crash.  I have 48mb
>RAM and 64 Mb of Swap.

Yes, this is a known bug.  Specify "-b 20" on the tar command line as a
temporary circumvention.  Due to a trivial bug, tar is indeed reading the
entire file into memory...I believe this will be fixed in the near future.

Sam Drake / IBM Almaden Research Center
Internet: drake@ibm.com    BITNET: DRAKE at ALMADEN
Usenet: ...!uunet!ibmarc!drake    Phone: (408) 927-1861
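Sam's circumvention is easy to try.  A minimal sketch with a present-day tar (the broken AIX 3.1 binary is not around to reproduce the failure, so this only demonstrates the flag): `-b` takes a blocking factor in 512-byte records, so `-b 20` means tar works in 10 KB chunks rather than choosing its own buffer size.

```shell
#!/bin/sh
# Demonstrates the "-b 20" circumvention from the thread: an explicit
# blocking factor of 20 x 512-byte records.  On AIX 3.1's broken tar
# this kept it from buffering the whole archive; a modern tar accepts
# the flag and the extract behaves the same either way.
set -e
workdir=$(mktemp -d)
cd "$workdir"

mkdir src
echo "hello" > src/a.txt
echo "world" > src/b.txt
tar -cf archive.tar src           # create a small archive

mkdir out
tar -xf archive.tar -C out -b 20  # extract with blocking factor 20
cat out/src/a.txt out/src/b.txt
```

With a file the size of the one in the bug report, the difference on the broken tar was between a 10 KB working buffer and an 86 MB allocation that exhausted paging space.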
bonnett@umd5.umd.edu (hdb) (08/29/90)
I just wanted to say thanks for the responses on the tar bugs.  From the
online info, it appeared to me that the -b switch only affected tapes,
not files.  Oh well, live and learn...

Again, thanks to all who responded (Swiss and Aussie!!).

-dave bonnett-