[comp.lang.c] C function prototyping and large projects

russ@motto.UUCP (Russell Crook) (09/14/88)

(line eater fodder)

(Posted for a development group here that doesn't have direct net
access. Please mail replies to russ@motto).

We are just starting to use function prototypes, and are looking for
suggestions on how to use them.

We know that the Microsoft C compiler can automatically generate
function prototypes.  Where do you go from there?  When developing
a large program, composed of many source files, how do you 
make sure each file picks up the right prototypes for the functions
it uses?  Are there conventions about where prototypes are stored -
do you put them in '.h' files, or right in the source file, or
somewhere else?  Do you put all the prototypes for an entire program
in one file, or do you use some means of only picking up the ones
which are needed?  If you put them all in, does it affect compile
time significantly?

Do you regenerate the prototypes automatically, every time you rebuild
or whenever the source file changes, or what?

We also do cross-development under VMS - does anyone have a portable
program to generate function prototypes?

-- 
Russell Crook (UUCP: ...!uunet!mnetor!motto!russ)
Disclaimer: "...we're all mad here. I'm mad. You're mad."
            "How do you know I'm mad?" said Alice.
            "You must be", said the Cat, "or you wouldn't have come here."

gwyn@smoke.ARPA (Doug Gwyn ) (09/16/88)

In article <24@motto.UUCP> russ@motto.UUCP (Russell Crook) writes:
>Are there conventions about where prototypes are stored -
>do you put them in '.h' files, or right in the source file, or
>somewhere else?

Prototypes don't change the recommended C practice, namely use
header files to define/declare all interface information.  The
best approach is to use separate headers (and separate source
files) for each group of functionally related capabilities.
For example, "parse.h" would declare the parsing function(s)
that are of use to other parts of the application, and it would
also define any manifest constants, data types, etc. that are
specific to parsing.  Other headers would cover other functional
areas.  And of course, a Makefile or equivalent compilation
specification could be used to limit recompilation to just the
affected sources when a change has been made.
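As a concrete sketch of such a header (every name here is invented for illustration), "parse.h" might carry nothing but the parsing module's public interface, wrapped in the usual include guard:

```c
/* parse.h -- interface to the parsing module (all names hypothetical) */
#ifndef PARSE_H
#define PARSE_H

#define MAX_TOKEN 128               /* manifest constant specific to parsing */

typedef struct token {              /* data type specific to parsing */
    char text[MAX_TOKEN];
    int  kind;
} token;

/* the parsing function(s) of use to other parts of the application */
int parse_next(const char **cursor, token *out);

#endif /* PARSE_H */
```

Other modules #include "parse.h" and see only this interface; the implementation details stay in the parse*.c sources.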

swarbric@tramp.Colorado.EDU (Frank Swarbrick) (09/17/88)

In article <24@motto.UUCP> russ@motto.UUCP (Russell Crook) writes:
>We are just starting to use function prototypes, and are looking for
>suggestions on how to use them.
>
>We know that the Microsoft C compiler can automatically generate
>function prototypes.  Where do you go from there?  When developing
>a large program, composed of many source files, how do you 
>make sure each file picks up the right prototypes for the functions
>it uses?  Are there conventions about where prototypes are stored -
>do you put them in '.h' files, or right in the source file, or
>somewhere else?  Do you put all the prototypes for an entire program
>in one file, or do you use some means of only picking up the ones
>which are needed?  If you put them all in, does it affect compile
>time significantly?

Well, I've only been using C for a little over a year, so I've always
had a compiler that uses prototypes.  This is probably why it's so hard
for me to understand why people find them so hard to use.

ANYWAY...  I put all of the prototypes for non-static functions in a
separate header file and then include it.  The prototypes for the static
(local to the file) functions go in the source file itself.  Here is an
example.

-------------------------
/* myfile.h */
void lalaland(void);
void zzz(void);
-------------------
/* myfile.c */

#include "myfile.h"

static void foofoo(void);

int main(void)
{
   lalaland();
   zzz();
   return 0;
}

void lalaland()
{
   /* stuff */
}

void zzz()
{
   /* more stuff */
}

static void foofoo()
{
   /* local stuff */
}

>Do you regenerate the prototypes automatically, every time you rebuild
>or whenever the source file changes, or what?

I'm not quite sure what you mean.  The header files are reprocessed every
time they are #included, yes.  That may waste a few seconds, depending
on how large the header file(s) are.

Frank Swarbrick (and, yes, the net.cat)              | "1001001 -- S.O.S.
University of Colorado, Boulder                      |  1001001 -- in distress
swarbric@tramp.Colorado.EDU                          |  100100"
:...!{ncar|nbires}!boulder!tramp!swarbric            |                    -Rush 

dalegass@dalcsug.UUCP (Dale Gass) (09/18/88)

My personal preference when using prototypes is to put all the prototypes
into one header file (proto.h, or whatever), which is included by all modules
in the project; however, I don't declare the dependency of the source
files upon proto.h in the makefile.

This avoids having to recompile *all* the modules whenever you simply add
the prototype of a new function to proto.h.  It could prove dangerous if
you *change* the signature of a prototype in proto.h carelessly; whenever
you change a signature in proto.h, you should 'touch *.c' to remake all
modules.  But I find that in development, the ability to quickly add to
proto.h without having to recompile everything is a major time-saver.

For maintenance work on a large project, I would probably declare the
make dependency of the source files on proto.h, since modification of proto.h
(when you change the signature of a prototyped function) could be dangerous
to existing modules (which won't be recompiled).  However, when one changes
the signature of a function, one should go through all the source modules
and change all the references to it anyway (in which case, those modules
will be recompiled and prototype-checked against proto.h)...

--------------

-dalegass@dalcsug
{watmath|uunet}!dalcs!dalcsug!dalegass

will.summers@p6.f18.n114.z1.fidonet.org (will summers) (09/18/88)

 In article <24@motto.UUCP> russ@motto.UUCP (Russell Crook) writes:
 > >Are there conventions about where prototypes are stored -
 > >do you put them in '.h' files, or right in the source file, or
 > >somewhere else?
 
Doug Gwyn writes:
 > Prototypes don't change the recommended C practice, namely use
 > header files to define/declare all interface information.  The
 > best approach is to use separate headers (and separate source
 > files) for each group of functionally related capabilities.
 > For example, "parse.h" would declare the parsing function(s)
 > that are of use to other parts of the application, and it would
 
Agreed.  In an environment that has an automatic prototype generator I find 
the following a convenient way to implement that during development:
 
   parse.h is the file #included when using the parsing functions.
   If the parsing functions are implemented in files parse1.c,
   parse2.c, and parse3.c, then parse.h contains:
 
   #include "parse1.hp"
   #include "parse2.hp"
   #include "parse3.hp"
 
The compile script causes  foo.hp  to be automatically generated
whenever  foo.c  is compiled.  .hp generation *always* happens.  This slows
compiles but prevents "accidents" when .hp updating is left to an ageing
programmer's memory...
 
I guess it's possible to write a SED script to update the prototypes embedded
in parse.h, but the script would need to "know" which .c files go with which 
.h and so be package dependent.
 
To keep the source conventional, .c files never #include .hp's directly. Only 
.h's #include .hp's. 
 
I find this more convenient than updating parse.h whenever adding/changing
a function in parse?.c.
 
    \/\/ill <tm>
 


--  
St. Joseph's Hospital/Medical Center - Usenet <=> FidoNet Gateway
Uucp: ...{gatech,ames,rutgers}!ncar!noao!asuvax!stjhmc!18.6!will.summers

dwp@k.gp.cs.cmu.edu (Doug Philips) (09/19/88)

LineEaterHappiness++

I've used the following method on a medium-sized project that ran on
PC clones under Microsoft Windows.  The development environment started
out as plain DOS (3.1) using Microsoft C version 4.0.  About 3 months
into the project we switched to using the Mortice Kern Systems
package (ksh, diff, awk, sed, etc.).  The Microsoft compiler's ability to
generate prototypes was the only mechanism used to generate prototypes.
The underlying assumption was that the more that could be done automatically,
the more likely it was to be correct.

For each source file F.c, there is a corresponding "internal
interface" file F.i, which is the prototype output of the compiler having
been run on F.c.  This file is included only by F.c.  Since the
compiler was good enough to augment the prototypes with comments that
indicate "global" functions, a simple script was used to extract the
global routine prototypes into a file called "global.rtn", which was
included by all source files.  The rationale here is that since C
would allow the function to be called anyway, it was best to make sure
the prototypes were available.  This meant adding new functions to a
module's "external interface" was as easy as adding the routine and
using it.

Since it was too agonizingly slow to always compute these files, the
makefile was doctored to break the dependency between F.c and F.i.
Every time I knew I had made an interface change, I would remove the F.i
file.  The rule for generating "global.rtn" would run a script.  The
script would generate a potential new file, but install it only if it
was a change from the old version.  About once a day, sometimes twice,
I would remove all the generated files (.i, .o, etc) and let the world
rebuild just to make sure.  A good time to go get a coke, or whatever.

Since then I've read a book called "Portable C and Unix System
Programming" which had ideas similar to this.  I don't remember the
author (it's not in front of me now), but I THINK it was "Lapin".

-Doug Philips
ARPA:  Doug.Philips@k.cs.cmu.edu
-- 
Doug Philips                  "All that is gold does not glitter,
Carnegie-Mellon University       Not all those who wander are lost..."
dwp@cs.cmu.edu                     -J.R.R. Tolkien

ray@micomvax.UUCP (Ray Dunn) (09/20/88)

In article <24@motto.UUCP> russ@motto.UUCP (Russell Crook) writes:
>We are just starting to use function prototypes, and are looking for
>suggestions on how to use them.
>....
>Do you regenerate the prototypes automatically, every time you rebuild
>or whenever the source file changes, or what?

As others have said in various ways to the first part of the question - use
function prototypes in the same way other global data is handled - include
them in *appropriate* .h files - do *not* just use one big .h file for
everything.

Get into the habit of creating a prototype in an appropriate header file
when you create a global function, in the same way you currently create any
other extern statement.  This .h file should be included in the "parent"
source file, as well as in referencing files.

If the procedure is static, put the prototype somewhere at the top of the
source file.

The answer to the automatic *generation* question is simple - do it once if
your code has not used prototypes before (this is why MS provided the
feature), disseminate the output into header files, then maintain the
information manually as you would for any other global information.

Continually re-generating prototypes can only lead to trouble, does not
adequately separate "static" and "extern" declarations, and doesn't
automatically get the information into the correct places.

Define things as global only on a "Need to know" basis - i.e. put static in
front of everything in sight!  (:-)/2

Hopefully prototyping will increase the diligence of using that little word
"static" when defining procedures which do not *need* to be global
(desperately trying to avoid flaming the inappropriate overloading of the
keyword "static" and the fact that the static/extern default is reversed
from what it should be - something not worth arguing about because it is
cast in 'C'oncrete)!

-- 
Ray Dunn.                      |   UUCP: ..!philabs!micomvax!ray
Philips Electronics Ltd.       |   TEL : (514) 744-8200   Ext: 2347
600 Dr Frederik Philips Blvd   |   FAX : (514) 744-6455
St Laurent. Quebec.  H4M 2S9   |   TLX : 05-824090

mike@pmafire.UUCP (mike caldwell) (09/21/88)

In article <659.2333D240@stjhmc.fidonet.org> 
will.summers@p6.f18.n114.z1.fidonet.org (will summers) writes:
>
> In article <24@motto.UUCP> russ@motto.UUCP (Russell Crook) writes:
> 
>I guess it's possible to write a SED script to update the prototypes imbedded
>in parse.h, but the script would need to "know" which .c files go with which 
>.h and so be package dependant.
> 
>To keep the source conventional, .c files never #include .hp's directly. Only 
>.h's #include .hp's. 
> 

If you encased your SED script in a shell script that is passed the
source .c file and the target .hp file, then only the makefile would
have to know the dependencies.  As a quick, dirty example:

In the makefile:

parse1.hp: parse1.c
	shell_script parse1.c parse1.hp

In shell_script:

rm -f "$2"
sed -f SEDscript "$1" >"$2"

or whatever.

peter@thirdi.UUCP (Peter Rowell) (09/23/88)

In article <1281@micomvax.UUCP> ray@micomvax.UUCP (Ray Dunn) writes:
>In article <24@motto.UUCP> russ@motto.UUCP (Russell Crook) writes:
>>We are just starting to use function prototypes, and are looking for
>>suggestions on how to use them.
>>....
>>Do you regenerate the prototypes automatically, every time you rebuild
>>or whenever the source file changes, or what?
>
> ...
>
>The answer to the automatic *generation* question is simple - do it once if
>your code has not used prototypes before (this is why MS provided the
>feature), disseminate the output into header files, then maintain the
>information manually as you would for any other global information.
             ^^^^^^^^
My experience shows that manual maintenance of this type of information,
if done incorrectly, can lead to bugs that are incredibly difficult to
find - particularly when the parameter types are used for doing
automatic coercion, as specified for ANSI C.

Our solution is (and has been for many years) to completely automate
the generation of prototypes and general external information.  Some
languages/environments may support some or all of this process, but
we have found portability requires we "roll our own".

Using two new "keywords" (export and exportdefine) and a variety
of shell and sed scripts, we regularly "re-edit" .h files so that
they contain the latest definitions of constants, macro routines,
global variables, functions and type information.  Appropriate
dependencies are maintained via our version of "make depend" -
both for when the .h's need to be rebuilt and for when .c's need
to be recompiled.

During intense development cycles there can be quite a few unnecessary
rebuilds/recompiles, so we kill the dependencies and
depend on programmer knowledge of probable side effects.  Every once in
a while this fails and you get a weird bug, but then we do a "forced"
rebuild of the universe and that normally fixes everything.  Code
shipped to customers always has full dependency information.

We have discussed building a more knowledgeable dependency system where
.c files would be made explicitly dependent on each (implicitly or
explicitly) imported value/variable/procedure/type.  Although I know
we will do this eventually, we are trying to get alpha 1.2 out the door,
and we just don't have the time (I'm sure everyone else has oodles of time
for stuff like this :-).

Anyone wishing the actual sed/shell/.h files can email me at
...!pyramid!thirdi!peter.

----------------------------------------------------------------------
Peter Rowell		"Gee, Dr. Science, can we depend on that prototype?"
Third Eye Software, Inc.		(415) 321-0967
Menlo Park, CA  94025			...!pyramid!thirdi!peter

will.summers@p6.f18.n114.z1.fidonet.org (will summers) (09/23/88)

In article <1281@micomvax.UUCP> ray@micomvax.UUCP (Ray Dunn) writes:

RD> Continually re-generating prototypes can only lead to trouble, does not
RD> adequately separate "static" and "extern" declarations, and doesn't
RD> automatically get the information into the correct places.

Segregating statics is not a problem as long as the generator makes
a distinction between them (either has an option to generate prototypes
only for "extern"s or carries the distinction through to the output).
What other "trouble" can it lead to?

RD> Define things as global only on a "Need to know" basis - i.e. put static
RD> in front of everything in sight!  (:-)/2

Hear Hear!!

RD> Hopefully prototyping will increase the diligence of using that little
RD> word "static" when defining procedures which do not *need* to be global

Many will sneer, but my local "universal" include contains:

#define public /* nothing */
#define private static

'static' only gets used for storage duration, 'private' for linkage.

Consistently using 'public' serves 3 purposes:
   1>  It makes it easier for automatic header generators to find the publics.
   2>  It removes the temptation to take the shortcut of leaving the linkage
       unspecified when the function really should be 'private'.  In other
       words it forces a decision to be made and puts the public and
       private choice on even footing...
   3>  Closely related to <2>, it informs the reader that I -choose- to make
       the function/variable public and it is in fact used in other files 
       (i.e. check other files before mucking with it).

    \/\/ill <tm>


--  
St. Joseph's Hospital/Medical Center - Usenet <=> FidoNet Gateway
Uucp: ...{gatech,ames,rutgers}!ncar!noao!asuvax!stjhmc!18.6!will.summers

karl@haddock.ima.isc.com (Karl Heuer) (09/24/88)

In article <432@thirdi.UUCP> peter@thirdi.UUCP (Peter Rowell) writes:
>In article <1281@micomvax.UUCP> ray@micomvax.UUCP (Ray Dunn) writes:
>>[Generate the prototypes automatically the first time,] then maintain the
>>information manually as you would for any other global information.
>             ^^^^^^^^
>My experience shows that manual maintenance of this type of information,
>if done incorrectly, can lead to [hard-to-find bugs].

The compiler should catch them, if the header which contains the prototype is
also included in the module that defines the function itself.

>Our solution ...  During intense development cycles there can be quite a bit
>of unnecessary reebuilds/recompiles, so we kill the dependencies and depend
>on programmer knowledge of probable side effects.

My experience shows that manual maintenance of this type of information,
if done incorrectly, can lead to hard-to-find bugs.  :-)

Karl W. Z. Heuer (ima!haddock!karl or karl@haddock.isc.com), The Walking Lint

peter@thirdi.UUCP (Peter Rowell) (09/28/88)

In article <8032@haddock.ima.isc.com> karl@haddock.ima.isc.com (Karl Heuer) writes:
>The compiler should catch them, if the header which contains the prototype is
>also included in the module that defines the function itself.

As I responded via e-mail to another person, you are *assuming* that:

1. the prototype is in a header which is included by the file which defines
   the procedure.
AND
2. that the dependency in the makefile is such as to cause the compiler to
   recompile the defining module.
   
Although I agree that this is *normally* the case, there are times when
it is not.  I can easily think of some instances of case 1 that would
not have the compiler catch the error.

Minor Flame:  I do *not* understand why people seem to feel that there
is some moral benefit to manual maintenance of information that is
trivially kept correct by automatic means.  If it is OK for the compiler
to "automatically" check these things, what is wrong with creating them
automatically?

What would be *very* useful, productive, etc., etc.  would be an
>environment< that kept track (automatically, of course) of exact
variable/type/procedure dependencies on a per-procedure basis and only
recompiled what was needed.  We have discussed doing this for our own
local consumption but we don't have the time to do it right (at least
not at this time).

Re: your smiley-faced comment at the end.  People who unintentionally
ignore compile dependencies often get nailed.  We *consciously* make
this decision, know what the bugs often look like, and know what to do
if we suspect that it is the case.  We get the benefits of being able
to build a *known correct* set of prototypes + we don't have to pay the
recompile penalty when we know that that is all that changed.

----------------------------------------------------------------------
Peter Rowell
Third Eye Software, Inc.		(415) 321-0967
Menlo Park, CA  94025			...!pyramid!thirdi!peter

gwyn@smoke.ARPA (Doug Gwyn ) (10/03/88)

In article <435@thirdi.UUCP> peter@thirdi.UUCP (Peter Rowell) writes:
>Minor Flame:  I do *not* understand why people seem to feel that there
>is some moral benefit to manual maintenance of information that is
>trivially kept correct by automatic means.  If it is OK for the compiler
>to "automatically" check these things, what is wrong with creating them
>automatically?

In the specific case of function prototypes, if you are following
recommended software engineering procedure, your specification for
function interfaces precedes writing the code to implement (or use)
them.  Automatic generation of prototypes goes in exactly the wrong
direction.

cramer@optilink.UUCP (Clayton Cramer) (10/04/88)

In article <8597@smoke.ARPA>, gwyn@smoke.ARPA (Doug Gwyn ) writes:
> In article <435@thirdi.UUCP> peter@thirdi.UUCP (Peter Rowell) writes:
> >Minor Flame:  I do *not* understand why people seem to feel that there
> >is some moral benefit to manual maintenance of information that is
> >trivially kept correct by automatic means.  If it is OK for the compiler
> >to "automatically" check these things, what is wrong with creating them
> >automatically?
> 
> In the specific case of function prototypes, if you are following
> recommended software engineering procedure, your specification for
> function interfaces precedes writing the code to implement (or use)
> them.  Automatic generation of prototypes goes in exactly the wrong
> direction.

Amazing, though, how often the function interfaces change after you
start debugging -- and how easy it is to forget to add new function
prototypes when you add new functions.

The compiler will catch the mismatches of prototypes when you change
an existing function specification, but it's still darn annoying to
realize you need to make the change after 20 modules have recompiled,
and you have to recompile them again.

It's a bit of a nuisance setting up the make files to do it (at least
with Microsoft C), but automatic generation of function prototypes
is the only way to go.
-- 
Clayton E. Cramer
..!ames!pyramid!kontron!optilin!cramer

jim.nutt@p11.f15.n114.z1.fidonet.org (jim nutt) (10/04/88)

 > From: cramer@optilink.UUCP (Clayton Cramer)
 > Message-ID: <534@optilink.UUCP>
 > In article <8597@smoke.ARPA>, gwyn@smoke.ARPA (Doug Gwyn ) writes:
 > > In article <435@thirdi.UUCP> peter@thirdi.UUCP (Peter Rowell) writes:
 > > >Minor Flame:  I do *not* understand why people seem to feel that there
 > > >is some moral benefit to manual maintenance of information that is
 > > >trivially kept correct by automatic means.  If it is OK for the 
 > compiler
 > > >to "automatically" check these things, what is wrong with creating 
 > them
 > > >automatically?
 > > 
 > > In the specific case of function prototypes, if you are following
 > > recommended software engineering procedure, your specification for
 > > function interfaces precedes writing the code to implement (or use)
 > > them.  Automatic generation of prototypes goes in exactly the wrong
 > > direction.
 > 
 > Amazing, though, how often the function interfaces change after you
 > start debugging -- and how easy it is to forget to add new function
 > prototypes when you add new functions.
 > 
 > The compiler will catch the mismatches of prototypes when you change
 > an existing function specification, but it's still darn annoying to
 > realize you need to make the change after 20 modules have recompiled,
 > and you have to recompile them again.
 > 
 > It's a bit of a nuisance setting up the make files to do it (at least
 > with Microsoft C), but automatic generation of function prototypes
 > is the only way to go.

there is a better way to do it if you have a decent editor...  simply write
a 'prototype macro'.  then use the new-style function declarations (if you
can).  all the prototype macro has to do is copy the function declaration
to either a header or to the top of the file and add a semicolon to the
end.  instant prototype.  better yet, just get into the habit of doing it
yourself automatically.

jim nutt
'the computer handyman'


--  
St. Joseph's Hospital/Medical Center - Usenet <=> FidoNet Gateway
Uucp: ...{gatech,ames,rutgers}!ncar!noao!asuvax!stjhmc!15.11!jim.nutt

gwyn@smoke.ARPA (Doug Gwyn ) (10/05/88)

In article <534@optilink.UUCP> cramer@optilink.UUCP (Clayton Cramer) writes:
>Amazing, though, how often the function interfaces change after you
>start debugging -- and how easy it is to forget to add new function
>prototypes when you add new functions.

I suggest you check your methodology.
We don't have any problem with that on our current large project.

throopw@xyzzy.UUCP (Wayne A. Throop) (10/07/88)

> cramer@optilink.UUCP (Clayton Cramer)
>> gwyn@smoke.ARPA (Doug Gwyn )
>>> peter@thirdi.UUCP (Peter Rowell)
>>>I do *not* understand why people seem to feel that there
>>>is some moral benefit to manual maintenance of information that is
>>>trivially kept correct by automatic means.  
>> In the specific case of function prototypes, if you are following
>> recommended software engineering procedure, your specification for
>> function interfaces precedes writing the code to implement (or use)
>> them.

Automatically generating one or many declaring instances of a function
from a single defining instance does not imply that the bodies of
functions are written before elaborating all the interfaces.

>Amazing, though, how often the function interfaces change after you
>start debugging -- and how easy it is to forget to add new function
>prototypes when you add new functions.
>> I suggest you check your methodology.
>> We don't have any problem with that on our current large project.
	
I presume that Doug means that one ought to habitually introduce
interfaces before implementations.  I quite agree.  Nevertheless, it
is still much more efficient, foolproof, and convenient to have one
contiguous patch of text to edit to change any and all things about a
function.  The fact that a compiler will catch you later if you let
multiple copies get out of sync is no consolation if you are used to
its being physically impossible for them to get out of sync.

Having done it both ways, I agree with Clayton.  Automatic generation
of declaring instances of things from defining instances of things is
"the only way to go" (in the sense that it is much more convenient).

--
There are two ways to write error-free programs.
Only the third one works.
                                        --- Alan J. Perlis
-- 
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw