coggins@coggins.cs.unc.edu (Dr. James Coggins) (11/04/88)
Managing C++ Libraries: Dependencies and Headers
James Coggins and Greg Bollella
Computer Science
UNC-Chapel Hill
The use of libraries in C++ is complicated by dependencies among the
classes in the library. An application program must #include the
header files for all classes on which the application depends,
directly or transitively. Direct dependencies are clear from the
code of the application program itself. Finding transitive
dependencies requires knowledge of the internal structure of the
library. We might expect a program author to know and to declare what
resources he is using directly, but it is unreasonable to require him
to know internal structures of the libraries he is using.
For this discussion, a dependency between classes A and B exists if
the header for class A or any member function of class A refers to an
object of class B as a member, an argument, or a local variable.
(This definition is more conservative but much simpler than the
optimal definition for our purpose.)
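As a minimal sketch (the class names here are invented for illustration, not
taken from any library), each of the three uses of class B below creates a
dependency of A on B under this definition:

class B { /* ... */ };

class A {
    B    member_b;              // B as a data member of A
public:
    void combine(B arg);        // B as an argument type
    void rescale();
};

void A::rescale()
{
    B temp;                     // B as a local variable in a member function
    // ...
}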
Due to the complex dependencies between classes in a library, many
header files may be required even for applications that use objects of
just one class. Without management techniques such as we will
describe, ensuring that all of the necessary headers are included
requires analysis of the entire dependency hierarchy of the library by
the library user. We consider knowledge of the internal structure of
the library to be an unacceptable burden on the application developer
(or on the library developer!). We seek to minimize the interference
of such incidental concerns in the development of code that uses the
library. Fortunately, we have developed a scheme that ensures that
the necessary headers are included while requiring minimal effort from
the application or member function writer.
An ideal solution to the problems of header file and dependency
management would possess the following characteristics:
1. Whatever is needed gets included.
2. You do not pay for what you do not need.
3. You do not need to know the entire dependency hierarchy when
writing main() or member functions.
4. The system should be easy to use. To make this concrete, we want
only one '#include' directive to be required in main() or member
functions.
5. The solution should support good software engineering practice.
6. The solution should be compatible with multiple inheritance and other
anticipated evolutionary changes in C++.
7. A program written using our management system should read only the
header files that are necessary and should read them only once.
The scheme we have developed conforms to these objectives and allows
enough flexibility to handle unforeseen situations with minimal hassle.
Dependencies
Consider the small inheritance hierarchy below. Class foobar is a
base class with derived classes foo and bar. Class baz is not part of
the inheritance hierarchy, but since class foobar uses objects of
class baz, any compilation of foobar requires inclusion of the header
for class baz. The dependency structure of these classes is shown in the
figure at right below. Notice that we include direct dependencies
only; transitive dependencies (foo requires foobar which requires baz)
are not noted. (If there were a direct dependency between bar and baz,
for example, we would include that link.)
        Inheritance hierarchy              Dependency hierarchy

              foobar                         foo     bar
              /    \            baz            \     /
            foo    bar                          foobar
                                                   |
                                                  baz
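A minimal sketch of class skeletons consistent with these two diagrams
(the member names are invented for illustration) might read:

class baz    { /* ... */ };

class foobar {                              // base class
    baz b;                                  // foobar uses a baz object
    // ...
};

class foo : public foobar { /* ... */ };    // derived from foobar
class bar : public foobar { /* ... */ };    // derived from foobar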
The determination of which header files to include when compiling an
application or a member function depends on the dependency hierarchy,
which depends on internal details of the design of the whole system of
classes, most of which is embodied in the header files. The creation
of such dependencies is essential to the library's usefulness. If
objects are to work together at all, and if code is to be reused at
all, then dependencies must exist. We need a method for declaring
direct dependencies and reliably tracing those dependencies throughout
the dependency hierarchies when needed.
Rejected Approaches
1. Include what you need
In this approach, each member function and each application program
must contain #include directives to obtain whatever is needed. This
requires the application developer to understand the entire dependency
structure of the library, which we find unacceptable. Furthermore,
this approach leads to a long list of #include directives, whose
creation interferes with the task of software development.
2. Include EVERYTHING
We considered #include-ing everything, but this violates the
objective of not paying for what you don't need. In a large library,
the time required to process all of the .h files is not negligible, so
we reject this option.
3. Use #ifndef SYM ... #endif chains
A common solution in practice requires surrounding each .h file
with compiler directives to test whether a symbol unique to that class
is defined and if not to process the header. If the symbol is
defined, the translator must still scan the file until reaching the
#endif at the end of the file. This is a reasonable solution, which
we rejected for several reasons. First, this system allows a header
file to be #included many times - it will be processed only once, but
we prefer that it not be touched at all if it is not required.
Second, this approach requires that the user know the path names to
many header files - details that are incidental to the coding task and
should be eliminated from his concern. With our hierarchical
directory scheme described in a previous article, specifying path
names requires that users know the whole directory hierarchy, which we
find unacceptable. Third, we find the intrusion of the
#ifndef...#endif directives in our header files aesthetically
displeasing. We prefer a less invasive approach.
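For concreteness, the guard being rejected here looks roughly like this
(the guard symbol name is just a convention):

// baz.h
#ifndef BAZ_H
#define BAZ_H

class baz {
    // ... class definition ...
};

#endif // BAZ_H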
The solution we have developed is noninvasive: it requires no
knowledge of the dependency structure or the directory structure of
the library, and it causes header files to be touched only when
required, and then only once. The following sections explain our
scheme in a basic form. If you are a UNIX makefile hacker you can
probably improve on the scheme in several ways. Feel free. If you
work on PCs you should be able to understand and implement this approach
without becoming a UNIX wizard.
Solution Part 1: Dependency files
In the subdirectory for each class, we define a dependency file (with
a .d extension) that declares direct dependencies by defining symbols
of the form D_<CLASS_NAME>. After the appropriate symbols are defined,
we check to see whether the "prelude" for the whole library has been
defined. If not, we #include the library's prelude file.
For the example above, the dependency files for classes bar and foobar
are as follows:
.............................      .............................
file bar.d                         file foobar.d
.............................      .............................
#define D_BAR                      #define D_FOOBAR
#define D_FOOBAR                   #define D_BAZ
#ifndef D_PRELUDE                  #ifndef D_PRELUDE
#include "../../libprelude.h"      #include "../../libprelude.h"
#endif                             #endif
The structure of the dependency file is determined entirely by the
dependency structure of the library. Typically, a class will declare
a dependency on itself, its base class if any, and the classes
referenced by the class as arguments to messages or as local variables
in member functions.
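Following that pattern, the remaining dependency files for the example
(not shown above; reconstructed here for completeness) would read:

.............................      .............................
file foo.d                         file baz.d
.............................      .............................
#define D_FOO                      #define D_BAZ
#define D_FOOBAR                   #ifndef D_PRELUDE
#ifndef D_PRELUDE                  #include "../../libprelude.h"
#include "../../libprelude.h"      #endif
#endif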
Other additions to the .d file can handle special situations. If there
are classes with mutual dependencies, the forward declaration of the
sibling classes can be inserted in the .d files of each class. The
#includes for header files of specialized libraries may be inserted in
the dependency file. In Dr. Coggins' library, for example, header
files for the suntools libraries are #included in the .d file of the
"imagetool" class which handles image display on Sun workstations.
(Note for wizard readers: these special cases limit the desirability
of automatic generation of dependency files!)
The ability to place #include directives for special .h files in the
.d file (thereby placing the #include in every compilation involving
the .h file of the class) does not preclude the option of placing
#include directives for some system libraries in the specific member
functions that require them, or even placing special #include
directives in the .h file itself.
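As a sketch of the first option (the class name and the external header
below are invented for illustration, not taken from Dr. Coggins' library),
such a dependency file might read:

.............................
file imgtool.d   (hypothetical)
.............................
#define D_IMGTOOL
#define D_FOOBAR
#include <window.h>                external library header, pulled into
                                   every compilation involving imgtool
#ifndef D_PRELUDE
#include "../../libprelude.h"
#endif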
Solution Part 2: The Prelude File
In the main directory for the library, we define a "prelude" file
which has three parts. The prelude file for the above example is given
below. The first part of the prelude file #includes system header
files that we want always to be included. The second part is a
level-by-level traversal of the dependency graph from the top down in
which the .d files of all classes that have been declared as being
required are #included.
The top-down traversal is critical to allow all of the transitive
dependencies to be correctly noted. For example, if the application
program references only class foo, we know to include foo.d, which
contains the definition of D_FOOBAR. Since we are going top-down
through the dependency hierarchy, we will *later* check D_FOOBAR and
include foobar.d which defines D_BAZ and so on.
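To trace that concretely for the example hierarchy: if the application
defines only D_FOO, the second part of the prelude effectively reduces to
the following (a sketch of what the translator ends up processing):

#include "/.../foo/foo.d"           D_FOO was defined; foo.d defines D_FOOBAR
                                    D_BAR is not defined, so bar.d is skipped
#include "/.../foobar/foobar.d"     D_FOOBAR is now defined; foobar.d defines D_BAZ
#include "/.../baz/baz.d"           D_BAZ is now defined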
In the third part of the prelude file, the header files of all classes
that have been declared to be needed are #included, once and once
only. The classes are checked in bottom-up order according to the
dependency hierarchy so that every .h file that is required is included
before it is needed by another class definition. Thus, all of the .h
files that are needed are included, and they are included only once.
.......................................
file libprelude.h
.......................................
#define D_PRELUDE
#include <stream.h>
#include <string.h>
#include <math.h>
#ifdef D_FOO
#include "/.../foo/foo.d"           Include .d files in a top-down
#endif                              traversal of the dependency
#ifdef D_BAR                        hierarchy.  This determines
#include "/.../bar/bar.d"           which .h files will be needed
#endif                              using just compiler symbols.
#ifdef D_FOOBAR
#include "/.../foobar/foobar.d"
#endif
#ifdef D_BAZ
#include "/.../baz/baz.d"
#endif

#ifdef D_BAZ
#include "/.../baz.h"               Include .h files in a bottom-up
#endif                              traversal of the dependency
#ifdef D_FOOBAR                     hierarchy.
#include "/.../foobar.h"
#endif
#ifdef D_BAR
#include "/.../bar.h"
#endif
#ifdef D_FOO
#include "/.../foo.h"
#endif
Note to wizards: This file could be automatically produced by an awk
program from input resembling the input to make. We show the basic
method here so that users without awk or make can implement the approach.
We'll probably post wizard-level implementations later ourselves.
The prelude file looks more complex than it is. Maintaining the
prelude file is also easier than it looks. Classes at the same level
in the dependency hierarchy can be listed in any order, so the
ordering of the sections is not as critical as it might appear. Also,
the only situation requiring modification of the prelude file is the
implementation of a new class, which happens relatively infrequently
compared to changes in member functions. We have found that
development of new classes outside the library is a safe and effective
strategy. The classes can be incorporated into the directory structure
and the prelude file as they reach maturity.
Using the System
To use our strategy for managing dependencies, the application
programmer must declare the classes used in the program and include
the library prelude file. The writer of member functions must simply
include the class's dependency file. Examples are given below:
.....................                    ........................
prog.c                                   foo::reset.c
.....................                    ........................
#define D_FOO                            #include "foo.d"
#define D_BAZ                            .
#include "/.../mainlib/libprelude.h"     .
.                                        .
.
.
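As a slightly fuller sketch (the body of main() is invented for
illustration), an application that uses foo and baz objects needs only
this preamble:

#define D_FOO
#define D_BAZ
#include "/.../mainlib/libprelude.h"

int main()
{
    foo f;      // foo.h (and, transitively, foobar.h and baz.h) were read
    baz b;      // baz.h was read because D_BAZ was defined
    // ...
    return 0;
}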
We have found this to be a minimal level of invasion in the process of
preparing a .c file, and while our scheme is rather costly to set up,
it appears to be easy to understand and maintain.
Robustness of the Scheme
If the programmer omits a symbol definition for a class that turns out
to be required by another class he does list, then everything works
normally and there is no error. Thus, if the programmer does know
something about the structure of the library, he can take advantage of
that knowledge and minimize the administrivia in his .c file.
If the programmer omits a symbol definition for a class that is indeed
required, the C++ translator will flag syntax errors claiming that
"foo is not a class name" when you know very well that it is.
If the programmer omits the +e1 flag on his program compilation and
links with a library compiled with +e0, the linker will give error
messages similar to " __foobar_vtbl__ is not defined".
If the checks for a new class are placed in the first (second)
traversal in the prelude file anywhere above (below) the highest
(lowest) existing class used by the new class, then everything will
work correctly, at least until the next class is entered.
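To make that concrete, consider a hypothetical new class grue derived from
foo (neither the class nor its files exist in the example library). Its
grue.d would contain #define D_GRUE and #define D_FOO, and its checks could
go at the very top of the first traversal and the very bottom of the
second, since no existing .d check lies above foo's and no existing .h
check lies below foo's:

#ifdef D_GRUE                       (new: placed before foo's .d check)
#include "/.../grue/grue.d"
#endif
#ifdef D_FOO
#include "/.../foo/foo.d"
#endif
    ... remaining .d checks, then the .h checks for baz, foobar, bar ...
#ifdef D_FOO
#include "/.../foo.h"
#endif
#ifdef D_GRUE                       (new: placed after foo's .h check)
#include "/.../grue.h"
#endif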
rfg@nsc.nsc.com (Ron Guilmette) (11/07/88)
In article <5078@thorin.cs.unc.edu> coggins@coggins.cs.unc.edu (Dr. James Coggins) writes:
...
>An ideal solution to the problems of header file and dependency
>management would possess the following characteristics:
...
>7. A program written using our management system should read only the
>   header files that are necessary and should read them only once.
...
>Rejected Approaches
...
>3. Use #ifndef SYM ... #endif [to surround header files]
>   A common solution in practice...
...
>... This is a reasonable solution, which
>we rejected for several reasons.  First, this system allows a header
>file to be #included many times - it will be processed only once, but
>we prefer that it not be touched at all if it is not required.
...
>... Third, we find the intrusion of the
>#ifndef...#endif directives in our header files aesthetically
>displeasing.  We prefer a less invasive approach.

OK.  So how about a slightly more intelligent pre-processor which would
(a) keep a list of all files included so far, and (b) avoid including
files which have already been included once (even if they are called for
again via further #include's)?  This would have the same effect as the
encapsulation of header files within #ifdef's (which many people are
using now).

Such a scheme could be implemented very easily (say in the GNU
pre-processor) and it could either be made the default action, or it
could be invoked (only when desired) via a new pre-processor option.  It
would meet most of the stated requirements (with the possible exception
of the somewhat vague "good software engineering" requirement).

The only problem is that this approach is *not* compatible with certain
tricky uses of header files (i.e. those cases when a given header file
*must* actually be included more than once for some reason).  Maybe this
is a good thing.  Such "tricky" uses of header files are probably not
compatible with "good software engineering practices".

>We have found this to be a minimal level of invasion in the process of
>preparing a .c file, and while our scheme is rather costly to set up,
>it appears to be easy to understand and maintain.

I like mine better :-)  I think that it more fully satisfies the KISS
principle.
--
Ron Guilmette
National SemiConductor, 1135 Kern Ave. M/S 7C-266; Sunnyvale, CA 94086
Internet: rfg@nsc.nsc.com   or   amdahl!nsc!rfg@ames.arc.nasa.gov
Uucp: ...{pyramid,sun,amdahl,apple}!nsc!rfg
mjr@vax2.nlm.nih.gov.nlm.nih.gov (Marcus J. Ranum) (11/07/88)
In article <7573@nsc.nsc.com> rfg@nsc.nsc.com.UUCP (Ron Guilmette) writes:
>OK.  So how about a slightly more intelligent pre-processor which would
>(a) keep a list of all files included so far, and (b) avoid including
>files which have already been included once (even if they are called for
>again via further #include's)?  This would have the same effect as the
>encapsulation of header files within #ifdef's (which many people are using
>now).

How about a pre-processor designed for C++ from the start?  Not that I
propose to go off tomorrow and write one, but if we assume that, by using
consts and inlines, MOST of the functionality of '#define' is provided
(except for conditional compilation), and that the job of the preprocessor
is to handle file inclusion and conditional compilation, then I'd suggest
that it could be replaced with something that acted more like a linker
than a pre-processor.

Suppose we had some "pre-processor" that was (better) able to cope with
conditional directives (a must), and otherwise maintained some small
database of what object was defined where, and what other objects it
depended on.  At that point, the "pre-processor" would be expected to only
include exactly what was needed.  Another advantage of a system like that
is that it would allow pretty quick checks to see if something already
existed in the programmer's name space.  I'd expect a big compilation
speed gain could be had, as well, if the objects to be included were
arranged for faster access through some form of index, or even stored in
a 'semi-compiled' state.  If such a system were designed right, it would
possibly support the eventual thrust towards being able to compile only
those functions in a source module that were modified, and so on.

I realize I am talking pretty much through my hat, and that there would
be a lot of problems with such an approach (managing multiple include
trees, nested includes, etc, etc.).  I suspect, however, that some kind
of similar approach will be needed sooner or later.

"Strange women lying in ponds, distributing swords is no basis for a
system of government.  Supreme executive power derives from a mandate
from the masses, not from some farcical aquatic ceremony."
bright@Data-IO.COM (Walter Bright) (11/08/88)
While the scheme presented is interesting, I find it unnecessarily
complex.  Here's the scheme I used which satisfies the basic requirements:

1. In each header file, put a 'wrapper' around it of the form:

	// This is foo.hpp
	#ifndef FOO_HPP
	#define FOO_HPP
	...			// text of header file
	#endif // FOO_HPP

2. In the text of each header file, for each header that it is dependent
   on, add the following:

	#ifndef DEPEND_HPP
	#include "depend.hpp"
	#endif

3. In the application code, just insert #includes for the classes or
   definitions that are *used*; the dependent ones will be automatically
   included.

Using this scheme requires minimal discipline; it doesn't require
unusual .d files or awk scripts.  Its only disadvantage is that it
sometimes causes a header file to be scanned twice (but never more than
twice).

Here's an example: Class abc depends on class def.  Class def depends on
ghi and jkl.  Class mno depends on ghi.  The application code has
instances of abc, mno and def.  The files look like:

	// abc.hpp
	#ifndef ABC_HPP
	#define ABC_HPP
	#ifndef DEF_HPP
	#include "def.hpp"
	#endif
	....			// text of definition
	#endif // ABC_HPP

	// def.hpp
	#ifndef DEF_HPP
	#define DEF_HPP
	#ifndef GHI_HPP
	#include "ghi.hpp"
	#endif
	#ifndef JKL_HPP
	#include "jkl.hpp"
	#endif
	....			// text of definition
	#endif // DEF_HPP

	// ghi.hpp
	#ifndef GHI_HPP
	#define GHI_HPP
	....			// text of definition
	#endif // GHI_HPP

	// jkl.hpp
	#ifndef JKL_HPP
	#define JKL_HPP
	....			// text of definition
	#endif // JKL_HPP

	// mno.hpp
	#ifndef MNO_HPP
	#define MNO_HPP
	#ifndef GHI_HPP
	#include "ghi.hpp"
	#endif
	....			// text of definition
	#endif // MNO_HPP

The application code source file looks like (recall that abc, def and mno
were used):

	#include "abc.hpp"
	#include "mno.hpp"
	#include "def.hpp"

Note that these #includes can be done in any order.  The application code
is not cluttered with #includes or #ifndef..#endif pairs.  The correct
dependent .hpp files are automatically #included in the correct order.
Note also that the above will #include def.hpp twice.  This is the worst
case; its only effect is to slow the compilation by the time it takes the
preprocessor to scan it (because of the 'wrapper').  I don't believe this
is a serious limitation.
coggins@retina.cs.unc.edu (Dr. James Coggins) (11/08/88)
Responses to some followups to our Managing C++ Libraries series...
----------------------------------------------------------------------
In article <7573@nsc.nsc.com> rfg@nsc.nsc.com.UUCP (Ron Guilmette) writes:
>OK.  So how about a slightly more intelligent pre-processor which would
> ... nice capabilities deleted ...

In article <8370@nlm-mcs.arpa> mjr@vax2.nlm.nih.gov (Marcus J. Ranum) writes:
>How about a pre-processor designed for C++ from the start ?  Not that I
>propose to go off tomorrow and write one...
> ... more interesting ideas deleted ...

Our objective in "Managing C++ Libraries: Dependencies and Headers" was
to present a practical scheme that would work under the current c++
implementation, which we did.  But if you want to discuss possible
changes...

Several lines of argument can be marshalled to support the proposition
"C++ should get rid of cpp!"  Less fanatical versions of the proposition
would permit file inclusion, and also perhaps limited forms of symbol
definition and conditional compilation.  I remember that complexities in
cpp make precompilation of header files impractical, and I heard at the
C++ conference some other features or language simplifications that
would be nice to have but are impractical due to cpp.
(Edit the subject line and jump right in, folks!)
------------------------------------------------------------------------
>From: bright@Data-IO.COM (Walter Bright)
>While the scheme presented is interesting, I find it unnecessarily complex.
>Here's the scheme I used which satisfies the basic requirements:

I'm glad you have a scheme that you can use.  However, it does not
satisfy the objectives we stated as well as the scheme we presented.
I'll show you below where I found problems when we examined this method
(most of which is described as a rejected option in our article):

>1. In each header file, put a 'wrapper' around it of the form:
>// This is foo.hpp
>#ifndef FOO_HPP
>#define FOO_HPP
> ... // text of header file
>#endif // FOO_HPP

Call me a fanatic, but I don't like this wrapper business.  Get this
administrative stuff out of my code!  (Our approach involves less
invasion into files containing C++ code and isolates administrative
stuff in the dependency (.d) files while maintaining lots of
flexibility.  Concerns of coding and library administration are
separated into different files.)

>2. In the text of each header file, for each header that it is dependent
>on, add the following:
>#ifndef DEPEND_HPP
>#include "depend.hpp"
>#endif

Strike one: more administrivia in my header file!

Strike two: I have to specify path names?!?  I don't want to have to
memorize the entire library directory structure!  (Our approach uses
symbols to specify dependencies; only in the library's prelude file are
associations between symbols and paths established, again minimizing
intrusion in the coding process and isolating details of the library's
storage structure away from the programmer - who has enough on his or
her mind already.)

Strike three: This looks worse if, as happens in my library, there are
special situations requiring special administrative actions.  For
example, consider classes with mutual dependencies (forward class
declarations required).  Or other external libraries required for
particular classes (yet more administrivia #includes).

Strike four: (so I'm not a baseball player) Without a prelude.h file for
the library, you have to duplicate #includes in every class that could
be handled neatly in the prelude file.

>3. In the application code, just insert #includes for the classes or
>definitions that are *used*, the dependent ones will be automatically
>included.

So I can't write a program without knowing the directory structure of
the library so I can #include the right header files?  No, thanks.
Soft links?  OK, if you have them, you could store links to all header
files in a single directory.  Now you still have to enter the #includes
with path names.  (In our system the only #include in application code
is for the library's prelude file, and classes used are specified by
symbol definitions - no path names to remember or to type.)

Fanatic?  Extremism in the defense of simplicity is no vice!
(Not bad.  Hey - write that down!)

>Using this scheme requires minimal discipline, it doesn't require
>unusual .d files, or awk scripts.

.d files won't be unusual when our scheme catches on.  We don't use awk
scripts - that was a suggestion for how to produce the prelude file
automatically if you wanted to.  C++ environment builders might want to
make that option available, but since I'm building only one library, I
don't need it.

So I guess I stand by our original posting (so far).
Keep those cards and letters coming, folks!
---------------------------------------------------------------------
Dr. James M. Coggins                coggins@cs.unc.edu
Computer Science Department         "Make it in Massachusetts" - ad slogan
UNC-Chapel Hill                     "I made it OUT of Massachusetts"
Chapel Hill, NC 27514-3175                 - my slogan
---------------------------------------------------------------------
robert@pvab.UUCP (Robert Claeson) (11/08/88)
In article <8370@nlm-mcs.arpa>, mjr@vax2.nlm.nih.gov.nlm.nih.gov (Marcus J. Ranum) writes:
> If we assume that the job of the
> preprocessor is to handle file inclusion and conditional compilation,
> I'd suggest that it could be replaced with something that acted more
> like a linker than a pre-processor.

If such a pre-linker/processor is implemented, maybe we wouldn't need
to explicitly #include files anymore?  The preprocessor could look for
declarations of external objects and identifiers in a database or library
much in the same way as 'ld' does today.
--
Robert Claeson, ERBE DATA AB, P.O. Box 77, S-175 22 Jarfalla, Sweden
Tel: +46 758-202 50      Fax: +46 758-197 20
EUnet: rclaeson@erbe.se  ARPAnet: rclaeson%erbe.se@uunet.uu.net
hansen@pegasus.ATT.COM (Tony L. Hansen) (11/09/88)
< >7. A program written using our management system should read only the
< >   header files that are necessary and should read them only once.
<
< OK.  So how about a slightly more intelligent pre-processor which would
< (a) keep a list of all files included so far, and (b) avoid including
< files which have already been included once (even if they are called for
< again via further #include's)?  This would have the same effect as the
< encapsulation of header files within #ifdef's (which many people are using
< now).
< ...
< The only problem is that this approach is *not* compatible with certain
< tricky uses of header files (i.e. those cases when a given header file
< *must* actually be included more than once for some reason).  Maybe this
< is a good thing.  Such "tricky" uses of header files are probably not
< compatible with "good software engineering practices".

The 4th generation make system, nmake, comes with an enhanced version of
the C preprocessor which does what you suggest with a major modification:
a header file will not be #included twice unless it is re-#included from
the same level as previously.  In other words, if both a.c and b.h
#include <stdio.h>, and a.c #includes b.h as well, <stdio.h> will be
#included only once.  However, if a.c should #include <regex.h> twice, it
will actually get #included twice.

This takes care of almost all cases where you don't want files #included
more than once, plus it still allows header files which do tricky things
to be #included more than once if need be.

					Tony Hansen
				att!pegasus!hansen, attmail!tony
rw@beatnix.UUCP (Russell Williams) (11/10/88)
In article <7573@nsc.nsc.com> rfg@nsc.nsc.com.UUCP (Ron Guilmette) writes:
>OK.  So how about a slightly more intelligent pre-processor which would
>(a) keep a list of all files included so far, and (b) avoid including
>files which have already been included once (even if they are called for
>again via further #include's)?  This would have the same effect as the
>encapsulation of header files within #ifdef's (which many people are using
>now).

We did this in our Pascal compiler to support construction of the Embos
operating system (several hundred thousand lines of source code) and
found it to work quite well.  We adopted the convention that all header
files include all files on which they depend, so programs include exactly
those header files they use and don't worry about dependencies.

>The only problem is that this approach is *not* compatible with certain
>tricky uses of header files (i.e. those cases when a given header file
>*must* actually be included more than once for some reason).  Maybe this
>is a good thing.

We added a "%hardinclude" directive which forces inclusion for these
applications.  If you're willing to slay the ageing dragon of cpp, many
elegant solutions are possible.

Russell Williams
..uunet!elxsi!rw
..ucbvax!sun!elxsi!rw
bright@Data-IO.COM (Walter Bright) (11/10/88)
In article <5151@thorin.cs.unc.edu> coggins@cs.unc.edu (Dr. James Coggins) writes:
<<From: bright@Data-IO.COM (Walter Bright)
<<While the scheme presented is interesting, I find it unnecessarily complex.
<<Here's the scheme I used which satisfies the basic requirements:
<<1. In each header file, put a 'wrapper' around it of the form:
<<// This is foo.hpp
<<#ifndef FOO_HPP
<<#define FOO_HPP
<< ... // text of header file
<<#endif // FOO_HPP
<
<Call me a fanatic, but I don't like this wrapper business. Get this
<administrative stuff out of my code!
It's not in the application code or the application header files. It's
in the LIBRARY header files.
<<2. In the text of each header file, for each header that it is dependent
<<on, add the following:
<<#ifndef DEPEND_HPP
<<#include "depend.hpp"
<<#endif
<Strike one: more administrivia in my header file!
Again, it's not in the application code or the application header files. It's
in the LIBRARY header files.
<Strike two: I have to specify path names?!?
I *never* specify path names in #include's. A pox on anyone who does! People
who do this have never ported between VMS, MSDOS and UNIX.
I use -Ipath on the command line or set the INCLUDE environment variable.
(As a matter of personal taste, I also dislike the "1 subdirectory per
class" style, I find I spend more time cd'ing than doing useful work.)
<Strike three: This looks worse if, as happens in my library, there are
<special situations requiring special administrative actions. For
<example, consider classes with mutual dependencies (forward class
<declarations required).
Sorry, but I don't see the difficulty here.
<Or other external libraries required for
<particular classes (yet more administrivia #includes).
In the header for that class, put a #include for the external .h file.
What's the problem?
<Strike four: you have to duplicate #includes in every class that
<could be handled neatly in the prelude file.
Not for every class, for every library *header* file. The #includes form
the documentation of the dependencies of the class.
<Fanatic? Extremism in the defense of simplicity is no vice!
Hey, I claim that my system is simpler than yours! (Obviously a matter of
opinion!)
<So I guess I stand by our original posting (so far).
So do I by mine!
bam@hplsla.HP.COM (Ben Mejia) (11/10/88)
I use the scheme Walter Bright recommends and it works very well.  This
covers a lot of bases.  The next thing to worry about is how to write a
Makefile for a program.  I use the nmake program (from the ATT toolchest),
which automatically examines the #include's to build its own dependency
graph.  So when one changes a particular header file, all the dependents
get updated.  I highly recommend nmake.  Most Makefiles collapse to:

	program :: <source file list>

--
bam     bam%hplsla@hplabs.hp.com     (206) 335-2203
coggins@retina.cs.unc.edu (Dr. James Coggins) (11/11/88)
In article <1749@dataio.Data-IO.COM> bright@dataio.Data-IO.COM (Walter Bright) writes:
>In article <5151@thorin.cs.unc.edu> coggins@cs.unc.edu (Dr. James Coggins) writes:
><<From: bright@Data-IO.COM (Walter Bright)
><<While the scheme presented is interesting, I find it unnecessarily complex.
><<Here's the scheme I used which satisfies the basic requirements:
><<1. In each header file, put a 'wrapper' around it of the form:
><
><Call me a fanatic, but I don't like this wrapper business.  Get this
><administrative stuff out of my code!
>
>It's not in the application code or the application header files.  It's
>in the LIBRARY header files.

But I (and many others out there!) are primarily LIBRARY DEVELOPERS.
Library header files ARE my code.  And I still don't want this
administrative stuff cluttering it up.

>I use -Ipath on the command line or set the INCLUDE environment variable.
>(As a matter of personal taste, I also dislike the "1 subdirectory per
>class" style, I find I spend more time cd'ing than doing useful work.)

If you are working on small enough projects to keep your code in one
directory, and you aren't bothered by administrivia in your (library)
code, and you are unable or unwilling to modularize in such a way as to
minimize the directory switching that is required, then you may disagree
with our approach with my best wishes.

><Fanatic? Extremism in the defense of simplicity is no vice!
>Hey, I claim that my system is simpler than yours! (Obviously a matter of
>opinion!)

Not for library developers.  Anything works on small projects.  On larger
projects both of our header management schemes address the principal
issues.  My approach is marginally superior for library developers
because it eliminates clutter from the code and decreases memory load on
the library developer.  Not an earth-shaking improvement over your
management scheme, but it's there.

James Coggins
coggins@cs.unc.edu
hardin@hpindda.HP.COM (John Hardin) (11/11/88)
>If such a pre-linker/processor is implemented, maybe we wouldn't need
>to explicitly #include files anymore?  The preprocessor could look for
>declarations of external objects and identifiers in a database or library

I believe that Eiffel does just that.

John Hardin
hardin%hpindda@hplabs.hp.com
rfg@nsc.nsc.com (Ron Guilmette) (11/14/88)
In article <5151@thorin.cs.unc.edu> coggins@cs.unc.edu (Dr. James Coggins) writes:
>Responses to some followups to our Managing C++ Libraries series...
>----------------------------------------------------------------------
>In article <7573@nsc.nsc.com> rfg@nsc.nsc.com.UUCP (Ron Guilmette) writes:
>>OK.  So how about a slightly more intelligent pre-processor which would...

Please, if you are going to quote me then I think it would be nice to
include the idea I proposed, which can be summarized in about 1 line,
thus:

	Add a flag to cpp which would cause it *not* to re-include any
	file already included.

>Our objective in "Managing C++ Libraries: Dependencies and Headers"
>was to present a practical scheme that would work under the current
>c++ implementation, which we did.

At this point in time, I think that it would be a mistake for anyone to
start up a new enterprise to manufacture vinyl-record cleaning kits.
Let's face it, the world is going CD.  Likewise, the situation with C++
is fluid and evolving.  The approach I suggested requires a minor
evolution of cpp, but is nonetheless largely compatible with current
tools and practices.

>Several lines of argument can be marshalled to support the proposition
>"C++ should get rid of cpp!"  Less fanatical versions of the proposition...

Is this belief now considered fanatic?  Count me in anyway.

>Call me a fanatic, but I don't like this wrapper business.  Get this
>administrative stuff out of my code!  (Our approach involves less
>invasion into files containing C++ code and isolates administrative
>stuff in the dependency (.d) files while maintaining lots of
>flexibility.  Concerns of coding and library administration are
>separated into different files.)

Ah, ha!  Another fanatic.  Well, I don't like that wrapper nonsense
either, but my approach involves *NO* invasion into files containing C++
code (if you're already using #ifdef wrappers in your .H files you may
want to delete that junk, but you don't have to).  Further, my approach
requires *NO* administrivia anywhere and has just as much "flexibility"
as yours does.  So there! :-)

One potential problem with my solution (noted in a message to me from M.
Tiemann) is that some preexisting code may be written which REQUIRES
multiple inclusions of the same single header file.  I have two counter
arguments: First, my idea is to have cpp work just like it always has
(read "backward compatibility") *unless* you give it the new option
(perhaps indirectly via CC or g++).  Second, even if there were no
option, and if cpp *always* prevented multiple inclusions of the same
single file, you could easily set up your makefiles so that they made
multiple links (in your source directory) to each such rogue header
file.  Thus, you could easily fool such a modified cpp into believing
that it is including different files when it is in fact including the
same file multiple times.

>>Using this scheme requires minimal discipline, it doesn't require
>>unusual .d files, or awk scripts.

This is true for my scheme also.

>So I guess I stand by our original posting (so far).

I admire anyone who stands by his convictions even in the face of good
evidence to the contrary. :-)
--
Ron Guilmette
National SemiConductor, 1135 Kern Ave. M/S 7C-266; Sunnyvale, CA 94086
Internet: rfg@nsc.nsc.com   or   amdahl!nsc!rfg@ames.arc.nasa.gov
Uucp: ...{pyramid,sun,amdahl,apple}!nsc!rfg
rick@pcrat.UUCP (Rick Richardson) (11/14/88)
In article <7775@nsc.nsc.com> rfg@nsc.nsc.com.UUCP (Ron Guilmette) writes:
>
>	Add a flag to cpp which would cause it *not* to re-include any file
>	already included.

While you're in there, add a check for each file included.  If nothing
from that file ever gets used, print a message:

	cpp: Warning: no reason to #include "gorp.h".

--
Rick Richardson  | JetRoff "di"-troff to LaserJet Postprocessor | uunet!pcrat!dry2
PC Research,Inc. | Mail: uunet!pcrat!jetroff; For anon uucp do:  | for Dhrystone 2
uunet!pcrat!rick |       uucp jetroff!~jetuucp/file_list ~nuucp/.| submission forms.
jetroff Wk2200-0300,Sa,Su ACU {2400,PEP19200} 12013898963 "" \r ogin: jetuucp
fox@marlow.uucp (Paul Fox) (11/19/88)
Isn't this what the ADA APSE is all about?  Why don't we just use their
principles?  Seems to be the same problem domain.

=====================
All opinions are my own.
Tel: +44 628 891313 x. 212
UUCP: fox@marlow.uucp