[comp.unix.aix] Linker heartburn in 9021

peter@msinc.msi.com (Peter Blemel) (08/17/90)

System: 520, AIX V3.1 9021, 64 MB RAM, 1.2 GB disk, page/swap = 64 MB

   Our programs have been increasing rapidly in size as our development effort
moves along. This problem first manifested itself during Motif development, but
soon it became obvious that most sizeable programs are affected.

Scenario:
1) A C program compiles and links fine, producing a roughly 2 MB executable. It
is a mindless X-Windows front end to a FORTRAN program, currently compiled
stand-alone with C stubs for the FORTRAN modules. Minor additions to the
program (in the most recent case, a single assignment statement was added)
cause the linker to complain about unresolved externals in unrelated modules
that were not changed or recompiled. The symbols it declares undefined are all
callbacks (never called directly), but all have been declared "extern void
foo()" before they are referenced. I tried removing the "extern" from the
declaration, but this had no effect. (The callback arrangement is sketched
after scenario 2.)

2) A FORTRAN program compiles and links fine, producing a roughly 3.5 MB
executable. I am trying to put a C X-Windows front end on it (namely the
program in #1). When I link it with the C program above (having corrected for
the main()s), none of the C routines come up unresolved (not even the ones
that are unresolved when the front end is built stand-alone), but the linker
complains about several of the FORTRAN functions being unresolved. (The C
stubs are sketched below as well.)
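
For reference, the callback arrangement in #1 looks roughly like this (the
names are invented; this is a sketch of the structure, not the actual source):

    /* callbacks.c -- the callback itself; it is only ever invoked by Xt,
     * never called directly from our own code.
     */
    #include <stdlib.h>
    #include <Xm/Xm.h>

    void quit_cb(Widget w, XtPointer client_data, XtPointer call_data)
    {
        exit(0);
    }

    /* frontend.c -- declares the callback and registers it; the linker
     * still has to resolve the symbol quit_cb.
     */
    #include <Xm/Xm.h>

    extern void quit_cb();

    void hook_up(Widget button)
    {
        XtAddCallback(button, XmNactivateCallback,
                      (XtCallbackProc) quit_cb, NULL);
    }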
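
The C stubs themselves are nothing fancy; roughly (again, the names are
invented):

    /* fstubs.c -- dummy C stand-ins for the FORTRAN routines so the front
     * end can be linked and tested by itself.  In the combined build these
     * objects are left out and the real routines from the .f files are
     * supposed to satisfy the references instead.
     */
    void solve()
    {
        /* empty stub; the real SUBROUTINE SOLVE lives in the FORTRAN code */
    }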

Things I've tried to date:
Rearranging the order of the objects in the link command. Some modules become
resolved, others become unresolved. Certain modules never become resolved at
all, although I cannot see a connection among them.

I thought perhaps the system limits on file size and memory use were choking
the linker (i.e., its temp files were getting too large), so I raised the
limits via smit's user menus. See the side note below about this fiasco. No
effect on the link problems.

I put the objects in question into an archive and ranlib'd them. No effect
(I tried both "cc -o prog libfoo.a" and "cc -o prog -L. -lfoo").

I tried putting all of the objects into one BIG archive; different modules
come up unresolved, but reproducibly so (I get the same ones twice in a row).

Any ideas???

The limits fiasco:
I reset the limits on the uids having the problems, but I decided that I
really don't like limits, especially not the default ones (I HATE having a
huge job run for hours and then crap out because its temp files got too
large; tar, for example). So, in the smit menu, I put 0's into the fields
(I tried leaving them blank, but then the values in question were set to the
defaults, not unlimited). Typing "limit" at the csh prompt now reports
"unlimited" for everything. The side effects of this are nasty. I cannot ftp
into my account because ftp says "can't get resource limits" after prompting
me for a password. rsh has problems too, but gives no messages (it just dies,
having done nothing).
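
My guess is that ftp is tripping over a getrlimit() call or something like
it; a quick test along these lines (just a sketch) shows what the system
hands back for one of the limits:

    #include <stdio.h>
    #include <sys/resource.h>

    /* Print the soft file-size limit; RLIM_INFINITY is how "unlimited"
     * shows up at this level.
     */
    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
            perror("can't get resource limits");
            return 1;
        }
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("file size limit: unlimited\n");
        else
            printf("file size limit: %ld bytes\n", (long) rl.rlim_cur);
        return 0;
    }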

How does one go about getting unlimited resources?

---------------
Peter Blemel
Management Sciences, Inc.

This uucp path is known to be extremely flaky (we're changing software at the
present time), so any email replies should be sent to msinc@jupiter.nmt.edu.

RAH@IBM.COM ("Russell A. Heise") (10/10/90)

 peter@msinc.msi.com (Peter Blemel) writes:

 > System: 520, AIX V3.1 9021, 64 MB RAM, 1.2 GB disk, page/swap = 64 MB
 >
 >    Our programs have been increasing rapidly in size as our development
 > effort moves along. This problem first manifested itself during Motif
 > development, but soon it became obvious that most sizeable programs are
 > affected.
 >
 > Scenario:
 > 1) A C program compiles and links fine, producing a roughly 2 MB
 > executable. It is a mindless X-Windows front end to a FORTRAN program,
 > currently compiled stand-alone with C stubs for the FORTRAN modules.
 > Minor additions to the program (in the most recent case, a single
 > assignment statement was added) cause the linker to complain about
 > unresolved externals in unrelated modules that were not changed or
 > recompiled. The symbols it declares undefined are all callbacks (never
 > called directly), but all have been declared "extern void foo()" before
 > they are referenced. I tried removing the "extern" from the declaration,
 > but this had no effect.

 It sounds like you have a case of the infamous "underscore-at-end-of-
 function-name" problem.  This shows up when FORTRAN code tries to call
 functions written in C.  Most older FORTRAN compilers append an underscore
 to the end of each external name they try to resolve; for example:

    CALL FOO(args)            <-- as written in *.f

    FOO_                      <-- what older FORTRAN compilers look for

    FOO_(args)                <-- what had to be written in *.c

 By default, 'xlf' does not do this; it looks for function names as
 written.  However, you can make it behave like the older compilers by
 using the flag "-qextname".  The real conflict arises when you write the
 C code one way but compile the FORTRAN the other way (with or without
 -qextname).  The solution is to be consistent: either
 (1) put underscores in your C code and use the -qextname flag, or
 (2) don't use underscores and don't use the flag.
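
 As a sketch (the routine name and argument are invented, and I am assuming
 xlf's usual lower-case folding of external names), the two self-consistent
 setups look like this on the C side:

    /* FORTRAN compiled with -qextname: "CALL FOO(N)" references the
     * external symbol foo_, so the C side must provide it with the
     * trailing underscore:
     */
    void foo_(int *n)          /* FORTRAN passes arguments by reference */
    {
        *n = *n + 1;
    }

    /* FORTRAN compiled without -qextname (the default): the same CALL
     * references foo, with no underscore, so the C side is simply:
     */
    void foo(int *n)
    {
        *n = *n + 1;
    }
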
 >
 > 2) A FORTRAN program compiles and links fine, producing a roughly 3.5 MB
 > executable. I am trying to put a C X-Windows front end on it (namely the
 > program in #1). When I link it with the C program above (having corrected
 > for the main()s), none of the C routines come up unresolved (not even the
 > ones that are unresolved when the front end is built stand-alone), but
 > the linker complains about several of the FORTRAN functions being
 > unresolved.

 Here, I suspect that you (or your makefile) are not including the xlf
 library on your link step.  You might want to confirm that you have a
 "-lxlf" flag on your compile/link statement(s).
 >
 >  ... list of attempts to work around problems ...
 >
 > The limits fiasco:
 > I reset the limits on the uids having the problems, but I decided that I
 > really don't like limits, especially not the default ones (I HATE having
 > a huge job run for hours and then crap out because its temp files got too
 > large; tar, for example). So, in the smit menu, I put 0's into the fields
 > (I tried leaving them blank, but then the values in question were set to
 > the defaults, not unlimited). Typing "limit" at the csh prompt now
 > reports "unlimited" for everything. The side effects of this are nasty.
 > I cannot ftp into my account because ftp says "can't get resource limits"
 > after prompting me for a password. rsh has problems too, but gives no
 > messages (it just dies, having done nothing).
 >
 > How does one go about getting unlimited resources?

 When I tried this, I didn't have anywhere near the number of problems you
 report.  Putting 0 in the fields just made the system use the defaults
 (and since the default file size is some 2 million blocks, that isn't so
 bad).  I certainly had no problems with rsh/ftp/csh, although csh 'limit'
 reports the default limits instead of "unlimited".

Russ Heise, AIX Technical Support, IBM