[comp.unix.aix] AIX 3.1 C compiler needs a tty?

pim@cti-software.nl (Pim Zandbergen) (08/21/90)

During a port of one of our applications to a model 320,
I discovered a strange bug. I had this really large make job
which I wanted to do overnight.

So I typed 
	nohup make > make.out 2>&1 &
and logged out, switched the terminal off and went home.
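
(Aside: as I understand it, nohup simply sets the hangup signal to be
ignored before exec'ing the command.  The sketch below is my own
illustration of that, not the real nohup source, and it skips nohup's
output-redirection step:)

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Rough sketch of nohup(1): ignore hangups, then run the command.
   (The real nohup also redirects output to nohup.out; omitted here.) */
int main(int argc, char *argv[])
{
	if (argc < 2) {
		fprintf(stderr, "usage: nohup command [arg ...]\n");
		return 1;
	}
	signal(SIGHUP, SIG_IGN);	/* survive the terminal hanging up */
	execvp(argv[1], &argv[1]);	/* replace this process with the command */
	perror(argv[1]);		/* reached only if exec failed */
	return 1;
}

So a plain hangup at logout should not have touched the make.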

When I returned to the porting center, I found out that
after actually logging off, all cc invocations exited
with status code 9 (killed), without any error messages
other than the ones "make" reported.
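
(By "status code 9" I mean that make saw each cc die on signal 9,
SIGKILL, the one signal nohup cannot shield a process against.  For
anyone wondering how make can tell "exited" apart from "killed", here
is a small demonstration program of my own, not anything taken from
make itself:)

#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Show how a parent (like make) tells "exited" apart from "killed". */
int main(void)
{
	int status = 0;

	if (fork() == 0) {		/* child: die by SIGKILL on purpose */
		kill(getpid(), SIGKILL);
		_exit(1);		/* not reached */
	}
	wait(&status);
	if (WIFSIGNALED(status))
		printf("child killed by signal %d\n", WTERMSIG(status));
	else
		printf("child exited with status %d\n", WEXITSTATUS(status));
	return 0;
}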

What gives? cc bug? OS bug? Or did I goof?
-- 
Pim Zandbergen                            domain : pim@cti-software.nl
CTI Software BV                           uucp   : uunet!mcsun!hp4nl!ctisbv!pim
Laan Copes van Cattenburch 70             phone  : +31 70 3542302
2585 GD The Hague, The Netherlands        fax    : +31 70 3512837

mlandau@bbn.com (Matthew Landau) (08/21/90)

pim@cti-software.nl (Pim Zandbergen) writes:
[Recounts starting a "large make" before going home for the evening,
 and continues:]

>When I returned to the porting center, I found out that
>after actually logging off, all cc invocations exited
>with status code 9 (killed), without any error messages
>other than the ones "make" reported.

Just how large WAS this "large" make?  And how much swap space does your
machine have?  We found that under AIX 3.1, if a job runs out of swap 
space (as our makes used to do), it's sent a signal 9 by the kernel.  
No warning, no error messages, just "Killed by signal 9."
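
You can watch this happen with a toy program (on a machine nobody
else needs, please); the sketch below just allocates and touches
pages until the kernel steps in, at which point it dies exactly the
way our compiles did:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (1024L * 1024L)	/* one megabyte at a time */

/* Allocate and touch pages until something gives.  When paging space
   runs out, AIX 3.1 simply sends the process signal 9. */
int main(void)
{
	long mb = 0;
	char *p;

	for (;;) {
		p = malloc((size_t)CHUNK);
		if (p == NULL) {
			printf("malloc failed after %ld MB\n", mb);
			return 1;
		}
		memset(p, 1, (size_t)CHUNK);	/* force real pages to exist */
		printf("%ld MB\n", ++mb);
		fflush(stdout);
	}
}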

Increasing paging space from 60 MB to 120 MB made that problem disappear.
Yes, that's 120 MB of paging space, to create a 7 MB binary image.  Why
it takes so much, I'll never understand, but needless to say we won't be
trying it on a 320 with a 120 MB disk :-)
--
 Matt Landau			Waiting for a flash of enlightenment
 mlandau@bbn.com			  in all this blood and thunder

rcd@ico.isc.com (Dick Dunn) (08/22/90)

mlandau@bbn.com (Matthew Landau) writes, in response to a make-dying
problem:

> Just how large WAS this "large" make?  And how much swap space does your
> machine have?  We found that under AIX 3.1, if a job runs out of swap 
> space (as our makes used to do), it's sent a signal 9 by the kernel.  

That much I can believe, OK...it's harsh, but ya gotta do something and there
aren't a lot of options.  What I don't get is...

> Increasing paging space from 60 MB to 120 MB made that problem disappear.
> Yes, that's 120 MB of paging space, to create a 7 MB binary image.  Why
> it takes so much, I'll never understand,...

No, wait, please try to understand.  Inquiring minds want to know--how on
earth can you eat that much swap space???  Either you've misconstrued
something that's going on (though I don't think so), or there is some
dreadful problem (i.e., a bug, not just performance).  60 MB of swap space,
even to create a large executable, is just *not* realistic.  (Disk may be
cheap, but it ain't *that* cheap! :-)

Could somebody who's down in the internals please explain to someone
sitting on the sidelines how you could possibly need > 60 MB of swap???
This is just unreal.  What causes this problem?
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...Are you making this up as you go along?

pim@cti-software.nl (Pim Zandbergen) (08/22/90)

mlandau@bbn.com (Matthew Landau) writes:

>Just how large WAS this "large" make?  And how much swap space does your
>machine have?

It consisted of a couple of hundred relatively small, standalone programs.
I don't think this has anything to do with swap space.

Just repeat after me:

1) Extract the hello.c program and the loop-cc shell script from
   the shar archive appended to this article.

2) On terminal #1, type:
   nohup loop-cc > loop-cc.out 2>&1 &

3) On terminal #2, type:
   tail -f loop-cc.out

4) Keep watching terminal #2 for a few seconds.

5) Now log terminal #1 out.

6) Watch terminal #2 again.

7) Answer the question: What gives?  :-)

#--------------------------------CUT HERE-------------------------------------
#! /bin/sh
echo 'x - hello.c'
if test -f hello.c; then echo 'shar: not overwriting hello.c'; else
sed 's/^X//' << '________This_Is_The_END________' > hello.c
Xmain()
X{
X	printf("Hello World!\n");
X}
________This_Is_The_END________
if test `wc -c < hello.c` -ne 38; then
	echo 'shar: hello.c was damaged during transit (should have been 38 bytes)'
fi
fi		; : end of overwriting check
echo 'x - loop-cc'
if test -f loop-cc; then echo 'shar: not overwriting loop-cc'; else
sed 's/^X//' << '________This_Is_The_END________' > loop-cc
Xwhile :
Xdo
X	cc -o hello hello.c
X	echo $?
X	sleep 1
Xdone
________This_Is_The_END________
if test `wc -c < loop-cc` -ne 55; then
	echo 'shar: loop-cc was damaged during transit (should have been 55 bytes)'
fi
chmod 755 loop-cc
fi		; : end of overwriting check
exit 0
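
(And if the guess in the subject line is right and cc somehow wants a
terminal, here is a trivial probe to see whether a process still has
a controlling tty after logout.  It is purely a test program of mine,
not anything cc is known to do:)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Report whether this process still has a controlling terminal. */
int main(void)
{
	int fd = open("/dev/tty", O_RDWR);

	if (fd < 0) {
		perror("/dev/tty");	/* fails once the controlling tty is gone */
		return 1;
	}
	printf("controlling tty present\n");
	close(fd);
	return 0;
}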
-- 
Pim Zandbergen                            domain : pim@cti-software.nl
CTI Software BV                           uucp   : uunet!mcsun!hp4nl!ctisbv!pim
Laan Copes van Cattenburch 70             phone  : +31 70 3542302
2585 GD The Hague, The Netherlands        fax    : +31 70 3512837

marc@ibmpa.awdpa.ibm.com (Marc Pawliger) (08/23/90)

In article <1990Aug22.002727.13323@ico.isc.com>,
rcd@ico.isc.com (Dick Dunn) writes:
|> > Increasing paging space from 60 MB to 120 MB made that problem disappear.
|> > Yes, that's 120 MB of paging space, to create a 7 MB binary image.  Why
|> > it takes so much, I'll never understand,...
|> 
|> No, wait, please try to understand.  Inquiring minds want to know--how on
|> earth can you eat that much swap space???  Either you've misconstrued
|> something that's going on (though I don't think so), or there is some
|> dreadful problem (i.e., a bug, not just performance).  60 MB of swap space,
|> even to create a large executable, is just *not* realistic.  (Disk may be
|> cheap, but it ain't *that* cheap! :-)
|> 
|> Could somebody who's down in the internals please explain to someone
|> sitting on the sidelines how you could possibly need > 60 MB of swap???
|> This is just unreal.  What causes this problem?

Ahem.  The '6000 compiler (read: linker) in the release code is not very
shrewd with memory management, and it is entirely possible to get a
70 MB in-core image when compiling a "small" source file with
optimization turned on.  This seems to be because some of the techniques
used in the linker are inherited from VM compiler technology, where using
huge areas of (virtual) memory was much less of a performance hit than it
is on the '6000.  I know this has been an often-reported problem with
the linker, and it should be improved in future releases.

+---Marc Pawliger---IBM Advanced Workstations Division---Palo Alto, CA---+
|    Internet: marc%ibmsupt@uunet.uu.net     VNET:    MARCP at AUSVM6    |
|     UUCP:    uunet!ibmsupt!ibmpa!marc      Phone:   (415) 855-3493     |
+-----IBMnet:  marc@ibmpa.awdpa.ibm.com------IBM T/L:  465-3493----------+

I don't speak for IBM, and they don't make me wear a suit.