[comp.lang.c] C + Make

bevan@cs.man.ac.uk (Stephen J Bevan) (09/11/90)

If you have .h files that include other .h files, such as the
following :-

/* foo.c */
#include "foo.h"

/* foo.h */
#include "a.h"
#include "b.h"

/* a.h */
lots of #includes

/* b.h */
lots of #includes

then in order to get `make' to recompile foo.c if anything in the headers
changes, you have to do something like :-

foo.o:	foo.c foo.h a.h b.h ... any file that a.h/b.h includes

i.e. you have to flatten the hierarchy you have built up.

What I'd like to know is whether there is any way of writing this as :-

foo.o:	foo.c foo.h
foo.h:	a.h b.h

a.h:	a.h's includes
b.h:	b.h's includes

i.e. maintaining the hierarchy, but still forcing foo.c to be
recompiled if one of a.h/b.h (or their includes) changes.

I have a kludgy solution, but as I store stuff under RCS, it can take
2 or 3 `makes' to get all the dependencies to update correctly (a bit
like using latex + bibtex :-)

What I'd like to know is :-

1. Should I even be trying to do this or should I go back to the ONE
   TRUE WAY of flattening the hierarchy?

2. If it's an OK idea, has somebody got a solution that will update all
   the dependencies with just one run of `make'?

Ta

Stephen J. Bevan	bevan@cs.man.ac.uk

flaps@dgp.toronto.edu (Alan J Rosenthal) (09/11/90)

bevan@cs.man.ac.uk (Stephen J Bevan) writes:
>If you have .h files that include other .h files ...

>then in order to get `make' to recompile foo.c if anything in the headers
>changes, you have to do something like :-
>
>foo.o:	foo.c foo.h a.h b.h ... any file that a.h/b.h includes
...
>2. If it's an OK idea, has somebody got a solution that will update all
>   the dependencies with just one run of `make'?

How about something like:

    /lib/cpp -I{stuff} {.c file} | grep '#' | awk '{ print $3 }' | sort -u

For greater excitement, also pipe through:  tr -s '"\012 ' ' '
The tr is probably bsd-specific; other things may be as well but I don't think
so.  They're certainly unix-specific.
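
Wrapped up as a little `mkdep'-style script, that idea might look something
like this rough sketch (CPPFLAGS and the file names are placeholders, and it
assumes cpp emits its usual `# <line> "<file>"' line markers):

	#!/bin/sh
	# mkdep.sh -- rough sketch: print one "foo.o: <files>" line per .c file
	# given on the command line.  Assumes markers look like:  # 1 "foo.h"
	for src in "$@"
	do
		obj=`basename "$src" .c`.o
		files=`/lib/cpp $CPPFLAGS "$src" | grep '^#' | awk '{ print $3 }' |
			sort -u | tr -s '"\012 ' ' '`
		echo "$obj: $files"
	done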

Various compilers have dependency-listing features...
I also wrote my own dependency-listing program so we could use it on multiple
platforms.  It was easy.  (Although I didn't put in the last 10% to get the
sibling-include stuff quite right.)

mullen@sj.ate.slb.com (Lew Mullen) (09/11/90)

In article <BEVAN.90Sep10220041@rhino.cs.man.ac.uk> bevan@cs.man.ac.uk (Stephen J Bevan) writes:
>If you have .h files that include other .h files ...
>
>foo.o:	foo.c foo.h a.h b.h ... any file that a.h/b.h includes
>i.e. you have to flatten the hierarchy you have built up.
>
>2. If it's an OK idea, has somebody got a solution that will update all
>   the dependencies with just one run of `make'?
>
>Stephen J. Bevan	bevan@cs.man.ac.uk


There are several "dependency makers" on the net.  They are
based on a feature of make, that a target may have more than one
dependency line, but only one may have commands with it .. i.e.:

	foo.o:	foo.c
	foo.o:	foo.h
	foo.o:	a.h
	foo.o:	b.h

	foo.o:
		cc -c foo.c -o foo.o


Since this is *exactly* what /usr/lib/cpp -M outputs,
it makes it possible to create a self-editing Makefile,
which updates its own dependency "section".

Here is an example: (this is mine, not from the net)

SRC=foo.c

########################################################################
####################  automatic dependency making  #####################
########################################################################

MAKEFILE=Makefile

dependencies:	$(SRC) $(MAKEFILE)
	@echo " ";echo make $@ - newer files are: $? ;echo " "
	@rm -rf /tmp/a /tmp/$(MAKEFILE)
	echo "###automatic dependencies only below this line###" > /tmp/a
	(for i in $(SRC);do echo " ";/usr/lib/cpp -M $(CPPFLAGS) $$i;done)>>/tmp/a
	sed -e '/^###automatic/,$$d' < $(MAKEFILE)  > /tmp/$(MAKEFILE)
	sed -e 's/:/ dependencies:/' < /tmp/a >> /tmp/$(MAKEFILE)
	chmod -f 644 $(MAKEFILE).old ; cp $(MAKEFILE) $(MAKEFILE).old
	@if ( cmp /tmp/$(MAKEFILE) $(MAKEFILE) 1>/dev/null 2>&1 ) ; then \
	  echo "no changes to $(MAKEFILE)" ; \
	else \
	  set -x ; mv /tmp/$(MAKEFILE) $(MAKEFILE) ; \
	fi
	@if ( cmp /tmp/a ./dependencies 1>/dev/null 2>&1 ) ; then \
	  echo "no changes to $@" ; \
	  touch dependencies ; \
	else \
	  set -x ; \
	  if [ -f dependencies ] ; then diff dependencies /tmp/a ; fi ;\
	  mv /tmp/a ./dependencies ; \
	fi
	@cmp $(MAKEFILE) $(MAKEFILE).old 1>/dev/null 2>&1 || exit 4
#
# these prerequisites were generated by using the
# -M option to the C preprocessor (cpp), which is
# designed to do this.
# Example dependencies: '/usr/lib/cpp -M foo.c'
#
# To recreate it, delete the file 'dependencies'
# and type 'make dependencies' ... this will delete
# the remainder of this file and replace it.
#
###automatic dependencies only below this line###
 
foo.o:	foo.c
foo.o:	/usr/include/stdio.h
foo.o:	foo.h

browns@iccgcc.decnet.ab.com (Stan Brown, Oak Road Systems) (09/12/90)

In article <BEVAN.90Sep10220041@rhino.cs.man.ac.uk>, bevan@cs.man.ac.uk (Stephen J Bevan) writes:
> If you have .h files that include other .h files, such as the
> following :-
> 
> /* foo.c */
> #include "foo.h"
> 
> /* foo.h */
> #include "a.h"
> #include "b.h"
> 
> /* a.h */
> lots of #includes
> 
> /* b.h */
> lots of #includes
> 
> then in order to get `make' to recompile foo.c if anything in the headers
> changes, you have to do something like :-
> 
> foo.o:	foo.c foo.h a.h b.h ... any file that a.h/b.h includes
> 
> i.e. you have to flatten the hierarchy you have built up.
> 
> What I'd like to know is whether there is any way of writing this as :-
> 
> foo.o:	foo.c foo.h
> foo.h:	a.h b.h
> 
> a.h:	a.h's includes
> b.h:	b.h's includes
> 
> i.e. maintaining the hierarchy, but still forcing foo.c to be
> recompiled if one of a.h/b.h (or their includes) changes.

[sound of throat clearing]  This might go better in one of the
comp...programmers or comp.os... groups, because it depends on _which_
kind of MAKE you're using.  MAKE is definitely not part of C, and there
is no standard that I'm aware of.  So the following answer might
possibly not work with your version of MAKE.

Now that I've explained why I shouldn't attempt to answer your question,
here's my answer.  :-)  Here's how I've attacked it in the past.  This
works for IBM's MAKE with C/2 (therefore presumably for Microsoft C 6.x)
and for MMS in VAX/VMS.

In the description file, define a macro for each of the bottom-level
header files:

PRIM_H   = prim.h
PROPER_H = proper.h

Then for the "composite" header files, the macro would include the file
itself and the _macro_ for its first-level dependents:

MIDLEVEL_H = midlevel.h $(PRIM_H) $(PROPER_H)
A_H = a.h $(MIDLEVEL_H)

foo.obj : foo.c $(A_H)

The trick is _never_ to use an explicit header file name on the right
side of a dependency definition; always use the corresponding macro.

The technique is also useful when the header files are scattered through
different directories.
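
Put together with the fragments above, the description file looks something
like the following sketch; the comment shows the flattened list MAKE actually
sees once the macros are expanded, even though you only ever maintain the
first-level relationships:

PRIM_H     = prim.h
PROPER_H   = proper.h
MIDLEVEL_H = midlevel.h $(PRIM_H) $(PROPER_H)
A_H        = a.h $(MIDLEVEL_H)

# equivalent, after macro expansion, to:
#   foo.obj : foo.c a.h midlevel.h prim.h proper.h
foo.obj : foo.c $(A_H)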

BTW, why are the components called "dependents"?  Seems to me that the
target is the dependent because it depends on the components.


Stan Brown, Oak Road Systems, Cleveland, Ohio, U.S.A.         (216) 371-0043
The opinions expressed are mine. Mine alone!  Nobody else is responsible for
them or even endorses them--except my cat Dexter, and he signed the power of
attorney only under my threat to cut off his Cat Chow!

grimlok@hubcap.clemson.edu (Mike Percy) (09/12/90)

[This doesn't really belong in this group but...]

If you want to make your makefile reflect the include hierarchy try
what I do.  Given some files like:
[NOTE: I use TurboC++ and the make and touch provided with it.
 Your mileage may be different.]

File test.c
  #include "a.h"
  #include "b.h"

  main()
  {
    printf("testing");
  }

File a.h
  #include "c.h"

File b.h
  #include "d.h"
  #include <stdio.h>

File c.h
  #define test1 1

File d.h
  #define test2 2


I use a makefile like:

test.exe : test.obj
    tcc test.obj

test.obj : test.c a.h b.h
    tcc -c test.c

a.h : c.h
    touch a.h

b.h : d.h \tc\include\stdio.h
    touch b.h

This does have the side effect of perhaps changing the time stamps on
a.h and b.h when their contents haven't changed, but I don't mind that.
You might leave off the touch commands...I'm not sure how that would
work.

"I don't know about your brain, but mine is really...bossy."
Mike Percy                    grimlok@hubcap.clemson.edu
ISD, Clemson University       mspercy@clemson.BITNET
(803)656-3780                 mspercy@clemson.clemson.edu

userAKDU@mts.ucs.UAlberta.CA (Al Dunbar) (09/13/90)

In article <852.26ed037c@iccgcc.decnet.ab.com>, browns@iccgcc.decnet.ab.com (Stan Brown, Oak Road Systems) writes:
>In article <BEVAN.90Sep10220041@rhino.cs.man.ac.uk>, bevan@cs.man.ac.uk (Stephen J Bevan) writes:
>  <<stuff deleted>>
>BTW, why are the components called "dependents"?  Seems to me that the
>target is the dependent because it depends on the components.
>
I find the terminology confusing too. Maybe they use "dependent"
because there is no such word as "dependee". Or maybe because
"dependable" would mislead programmers into thinking the code in
their "dependables" must be OK.
 
-------------------+-------------------------------------------
Al Dunbar          |
Edmonton, Alberta  |   this space for rent
CANADA             |
-------------------+-------------------------------------------

chris@mimsy.umd.edu (Chris Torek) (09/14/90)

[Although this is not strictly a `C topic', a sufficient number of C
systems come with a `make' program that I decided to leave it here.]

In article <1990Sep11.165709.24875@sj.ate.slb.com>
mullen@sj.ate.slb.com (Lew Mullen) writes:
>There are several "dependency makers" on the net.  They are
>based on a feature of make, that a target may have more than one
>dependency line, but only one may have commands with it ...

Actually, strictly speaking this is not quite right.  Unix make (and
any clones that follow it sufficiently well) allow more than one `recipe'
if and only if double-colon rules are used:

	foo::
		@echo foo
	foo::
		@echo bar

`make foo' prints `foo\nbar\n'.

>[This] makes it possible to create a self-editing Makefile,
>which updates its own dependency "section".

I have two recommendations, both of which were learned through experience.
These are:

 1. Put the dependency-making in a separate program.  The method by which
    dependency extraction is done varies from system to system.  This way
    all the details are in one place (the `mkdep' script or whatever).

 2. Do not make `mkdep' edit the makefile.  Put the generated dependency
    lists in a separate file.  4.3BSD-tahoe and later versions of make
    read a file called `.depend', if it exists and no `-f' options are
    given.  Other makes require a subterfuge: instead of running the
    regular `make' program directly, run a front-end that checks for
    .depend.  If the file exists (and no -f arguments are given), run

	make -f Makefile -f .depend <original args>

    or

	make -f makefile -f .depend <original args>

    (use the same rules your `make' uses to locate makefiles to decide
    which is the `main' makefile).

In Bourne shell, the latter can be written as

	if [ -f .depend ]; then
		if [ -f makefile ]; then
			f="-f makefile -f .depend"
		else
			f="-f Makefile -f .depend"
		fi
		for i do
			case "$i" in -f*) f=;; esac
		done
	else
		f=
	fi
	
	exec /bin/make $f ${1+"$@"} MACHINE=${MACHINE-`machine`}

(the MACHINE= is for trees that hide machine-specific sources in
machine-specific directories: a useful trick).  This is what we
used to use on our non-BSD machines, though now we use pmake.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 405 2750)
Domain:	chris@cs.umd.edu	Path:	uunet!mimsy!chris

throopw@sheol.UUCP (Wayne Throop) (09/16/90)

> From: grimlok@hubcap.clemson.edu (Mike Percy)
> If you want to make your makefile reflect the include hierarchy try
>    [.. having "make" steps that touch ".h" files ..]
> This does have the side effect of perhaps changing the time stamps on
> a.h and b.h when their contents haven't changed, but I don't mind that.

There are problems with this method.  Mike may not mind them, but

  1) You lose last-edited time information.  (This may well be moot if 
     you keep audits in comments or separately.)
  2) Other tools that keep track of timestamps or use timestamp+filename
     as a UIDlet can get confused.  Eg, archiving or "file motel" systems.
  3) It is not quite an accurate description of the situation to give make.
     For example, if processes other than compiles read the .h files,
     especially processes that don't follow #includes (Eg: automatic
     documentation generators) this would provoke much extra work, since 
     THIS .h file didn't really change, just THAT one over there.

> You might leave off the touch commands...I'm not sure how that would
> work.

It wouldn't work.

But having bitched, I can propose an alternative.  The problem is that there
is a pseudo-object which is all-the-include-files-recursively-reached from
any given node.  So, what I do (have done) is represent the time currency
of this state of affairs with an empty file (to store the timestamp).

For a situation where b.h includes c.h, and a.c includes b.h and c.h,
the rules go something like

     a: a.o
     a.o: a.c b.h+i c.h+i
     b.h+i: b.h c.h+i
             touch b.h+i
     c.h+i: c.h
             touch c.h+i

The general notion is that for each .h file, you say that the .o depends
on the corresponding .h+i file.  The timestamp on this file represents
the latest time of all the .h files reachable from the corresponding .h
file.  Now make will do the recursion effects for you, and you only have
to keep straight which files are locally included, not the transitive
closure. 

(The touch rules could be made general "suffix" rules, of course.)
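
For instance, a minimal sketch of such a suffix rule (assuming a make that
accepts `.h+i' as a declared suffix, and that supplies the commands from an
inference rule when an explicit line lists only extra prerequisites, the way
the usual `foo.o: defs.h' idiom relies on the .c.o rule):

.SUFFIXES: .h+i .h

# build x.h+i from x.h by touching the timestamp file
.h.h+i:
	touch $@

# only the local include relationships need explicit lines now
a.o:   a.c b.h+i c.h+i
b.h+i: c.h+i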

BTW, you might think that touching foo.h+i files is as bad as touching
the .h files as far as archives and file motels and other UIDish tools
are concerned, but in this case the file is empty and we WANT it to
be treated as a new UIDed file as far as archives and such go.

Anyhow, wayne-bob says: checkitout.
--
Wayne Throop <backbone>!mcnc!rti!sheol!throopw or sheol!throopw@rti.rti.org

throopw@sheol.UUCP (Wayne Throop) (09/16/90)

> From: chris@mimsy.umd.edu (Chris Torek)
> I have two recommendations, both of which were learned through experience.
> 1. Put the dependency-making in a separate program.

Much as I am aware of the dangers of disagreeing with somebody as
usually-correct as Chris, I disagree slightly here.  My experience
says that packing all the dependency-making into a single step 
seperated from the construction step is very bad for information
hiding, and makes for monolithic, hard-to-maintain, hard-to-enhance
dependency "expert systems".  All very well if you are doing things
the way the expert system expects, but deviate or innovate, and
BOOM... explosion in a damp spaghetti factory.

In fact, Chris' second rule:

> 2. Do not make `mkdep' edit the makefile.  [.. instead ..] [...]
>        make -f Makefile -f .depend <original args>
> [...]

... obliquely illustrates part of the problem with the first rule.
The dependency occurs in a single massive step before the "real"
work of construction begins.  Thus, any source that is generated
"on the fly" (like yacc or lex, but more complicated... for example
computing a perfect hash literal array, or whatever) must be
treated as a stylized special case in the dependency check.  The
natural way of computing the dependencies of the output files
using the method of an existing subcase won't work because the
files don't exist yet.

Not that this problem is insoluble.  I've seen it solved.  It's just
that the solution, while effective, seems not to have an audience. 

( The basic notion of the solution is that the construction engine must
  construct the dependency graph on the fly...  quite doable, trust me.  )

--
Wayne Throop <backbone>!mcnc!rti!sheol!throopw or sheol!throopw@rti.rti.org

chris@mimsy.umd.edu (Chris Torek) (09/16/90)

I wrote:
>>I have two recommendations ... 1. Put the dependency-making in a
>>separate program.

In article <0949@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>Much as I am aware of the dangers of disagreeing with somebody as
>usually-correct as Chris, I disagree slightly here.  My experience
>says that packing all the dependency-making into a single step 
>separated from the construction step is very bad for information
>hiding, and makes for monolithic, hard-to-maintain, hard-to-enhance
>dependency "expert systems".

Hmm, well, would you prefer `in separate programs'?  I.e., `mkdep-C'
extracts information on C source files, `mkdep-EP' extracts Extended
Pascal information (cannot do ANSI Standard Pascal since it has no
inclusion mechanism!), `mkdep-KCL' extracts Kyoto Common Lisp information,
and so forth.  In the case of the Berkeley source tree, the C-only
mkdep handles at least 95% of the job, i.e., it is `good enough'.

>In fact, Chris' second rule:
>> Do not make `mkdep' edit the makefile.  [.. instead ..]
>>       make -f Makefile -f .depend <original args>
>... obliquely illustrates part of the problem with the first rule.
>The dependency occurs in a single massive step before the "real"
>work of construction begins.  Thus, any source that is generated
>"on the fly" (like yacc or lex, but more complicated... for example
>computing a perfect hash literal array, or whatever) must be
>treated as a stylized special case in the dependency check.  The
>natural way of computing the dependencies of the output files
>using the method of an existing subcase won't work because the
>files don't exist yet.

True.  In the case of lex and yacc files, one cheats: since the output
from both these programs is a C file (which has not expanded any
`#include's---that is, if foo.y yaccs to foo.c and foo.y `#include's
foo.h, then foo.o depends on foo.c and foo.h, but foo.c does not depend
on foo.h) one adds the C files to the list of `things for mkdep' and
makes the `depend' rule depend on the .c files themselves.  I.e.:

	# Makefile for foo, built from foo.y->foo.c->foo.o and aux.c->aux.o
	CSRCS=	foo.c aux.c
	OBJS=	foo.o aux.o
	all: foo
	foo: ${OBJS}
		${CC} ${LDFLAGS} -o $@ ${OBJS}
	clean:
		rm -f ${OBJS} foo a.out core
	depend: ${CSRCS}
		mkdep ${CSRCS}

Note that foo.y need not be mentioned at all: `make' keys off its
existence, plus make's implicit rule for `.y' -> `.c'.
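
For what it's worth, the .depend file that `mkdep ${CSRCS}' leaves behind for
this example might look something like the following (the exact header names
of course depend on what the sources actually include; these are illustrative):

	foo.o: foo.c foo.h /usr/include/stdio.h
	aux.o: aux.c foo.h /usr/include/stdio.h

Since foo.c is the yacc output, its dependency on foo.h is picked up even
though foo.y itself is never handed to mkdep.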

>Not that this problem is insoluble.  I've seen it solved.  It's just
>that the solution, while effective, seems not to have an audience. 
>
>( The basic notion of the solution is that the construction engine must
>  construct the dependency graph on the fly...  quite doable, trust me.  )

Indeed.  Unfortunately, this requires the construction engine (`make')
to have access to the actual dependency information, which means either
pushing all those rules from mkdep-* into `make' itself, or else an
incestuous relationship between the compiler(s) and `make'.  The latter
is certainly more maintainable (what else but the program that creates
an output file knows for certain what inputs were used?), and can be
quite efficient---the dependency information for any one source is correct
if and only if the corresponding output file exists, and if not that
output needs rebuilding anyway---but, unfortunately, this is MUCH more
work to add to a system that does not already have it.

(Too, it has the drawback of requiring that there be room in every
output file for the information needed by `make', or else that output
files come in pairs: foo.o and .depend.foo.o, or some such.  Which is
`better' is to some extent a matter of taste.  One can argue that every
`object' file should in fact be a directory: foo.o/text, foo.o/data,
foo.o/symbols, foo.o/debug, foo.o/depend....  Let the file system work
for you.)

In any case, it is important that a makefile---even one as minimal as a
source list (`bill of materials', as it were) for a system like the one
above---not be altered by the dependency-update step, because such a
file *is* a source file, just as much as any C program.  If you are
using a revision control system, you will not want revisions made
merely to reflect automatically-derived changes.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 405 2750)
Domain:	chris@cs.umd.edu	Path:	uunet!mimsy!chris

throopw@sheol.UUCP (Wayne Throop) (09/17/90)

>,>>> chris@mimsy.umd.edu (Chris Torek)
>> throopw@sheol.UUCP (Wayne Throop)

>>>I have two recommendations ... 1. Put the dependency-making in a
>>>separate program.
>>Much as I am aware of the dangers of disagreeing with somebody as
>>usually-correct as Chris, I disagree slightly here.  
> would you prefer `in separate programs'?

Ah.  The clarification shows me where I was misinterpreting.
I fully agree that different types of dependency should be packaged
apart, for the usual modularity and information-hiding reasons.

I misinterpreted Chris to mean that it *should* be packaged
separately from the construction step.  This and other comments in
the referenced posting show me that he is accepting this latter
separation (to which I was objecting) only as a matter of current
necessity.  (I've probably distorted what Chris means a little
here again, but I'm probably close enough this time... I hope.)

>> the construction engine must [.. should ..]
>> construct the dependency graph on the fly
> Indeed.  Unfortunately, this requires the construction engine (`make')
> to have access to the actual dependency information, which means either
> pushing all those rules from mkdep-* into `make' itself, or else an
> incestuous relationship between the compiler(s) and `make'.

Yes make must have a way of accessing dependency information other than
the static data it starts up with, but the two solutions mentioned
aren't (quite) exhaustive.  There is middle ground, in that make could
have an "incestuous" relationship with mkdep-like subprograms.

And note that it is no less "incestuous" or shocking that make should
delegate construction tasks than that it should delegate dependency
derivation tasks.  In fact, people often do both now in the
two-step process Chris detailed in an earlier posting.  I think it
is a simple step up to have make invoke dependency rules as needed
rather than all at once and "up front".  It is this segregation that
leads to problems of deriving dependencies of intermediate files, or
files in archive or library systems, and so on and on.

Now, "ideally" the construction steps (eg: compilers) and make should
talk to each other about dependencies.  For example, a compiler shouldn't
just open a .h file willy-nilly... it should make sure the file is
up-to-date by invoking make as a coprocess, then open it.  

( BTW, a way of tackling this is via virtual file systems.  Then, the
  "open" system call uttered by the compiler would invoke the make
  co-process "automagically", and existing compilers and tools could be
  cleanly integrated into a "smart" build system...  note that Sun's NSE
  is just a small bit removed from such a possibility (but they don't seem
  to pursue it; on the other hand, I'm not privy).  )

But the poor man's version of having the compiler co-operate with make
is to have the much more modest mkdep-like tools co-operate with make. 
This leads to mis-predictions as to which files the compiler will open,
but the predictions can be made very good, and are no worse than the
predictions such tools make today.  And while no worse, they have the
added advantage that they can be applied in a much more modular way.

> (Too, it has the drawback of requiring that there be room in every
> output file for the information needed by `make', or else that output
> files come in pairs: foo.o and .depend.foo.o, or some such.

Right.  On the other hand, the way "shape" extends the filesystem
to produce an "attributed file system" isn't terribly unattractive.  Or,
another way of looking at it is that make shouldn't work directly with
the file system at all, but with an OODB some of the objects in which
have data stored in the file system.  This has the advantage of solving
the where-to-put-the-timestamp problems for make-able objects which have
no natural file system component, like "release-3.00" or "the .h files
for foo are ready" and other such non-file-y-but-make-able states of
affairs.  It also has the advantage that you can uniformly deal with
buildable entities that have other data, but store it in places other
than one file per object in the file system, such as database
schemata, smalltalk classes, C declarations, and so on and on.  (As an
example application of this last, you could recompile only when the .h
you include changed something that you actually use.)

But to come down from the blue sky, a simple lookaside database
a-la shape or Atherton Backplane or whatever could do very much to
improve make's capabilities quickly and naturally, and then the more
blue-sky-ish notions could be brought to reality a bit at a time. 
( Sun NSE's make has one as well, but it is not really adequate or
  general purpose enough (IMHO, ie: its dependency info is one compile out of sync). )

And as to integrating make and source code libraries, ideas like "boxes"
from the ptools (or was that ktools... anyway, from the Usenet meeting
on software management) would go (and have gone) a long way towards
making things smooth for the developer working as part of a group.

> In any case, it is important that a makefile---even one as minimal as a
> source list (`bill of materials', as it were) for a system like the one
> above---not be altered by the dependency-update step, because such a
> file *is* a source file, just as much as any C program.

Absolutely.  It is (I think strongly) bad practice to mix the notions
of source and object in a single entity (though sometimes one is "forced"
to do so, eg: patching object code during testing and analogous stuff...
but the point remains that this is something to avoid.)

( BTW, inserting dynamically interpreted code during a debugging session
       to work around bugs isn't what I'd call patching, and is entirely
       acceptable.  But I digress.  And there are dratted few debuggers
       that allow you to do it in full generality :-(     )
--
Wayne Throop <backbone>!mcnc!rti!sheol!throopw or sheol!throopw@rti.rti.org

jh4o+@andrew.cmu.edu (Jeffrey T. Hutzelman) (09/17/90)

Chris Torek@mimsy.umd.edu writes:

> (Too, it has the drawback of requiring that there be room in every
> output file for the information needed by `make', or else that output
> files come in pairs: foo.o and .depend.foo.o, or some such.  Which is
> `better' is to some extent a matter of taste.  One can argue that every
> `object' file should in fact be a directory: foo.o/text, foo.o/data,
> foo.o/symbols, foo.o/debug, foo.o/depend....  Let the file system work
> for you.)

On a non-unix system, I use a derivation of this mechanism regularly.  I
create a directory OBJECTS (not a case-sensitive filesystem) in which
all my objects are placed.  So my makefile looks something like this:

objects/foo1.o : foo1.c foo1.h
	cc foo1.c keep=objects/foo1

objects/foo2.o : foo2.c foo2.h
	cc foo2.c keep=objects/foo2

objects/foo3.o : foo3.c foo3.h
	cc foo3.c keep=objects/foo3

foo : objects
	link objects/* keep=foo

and the last line does all the work for me.
-----------------
Jeffrey Hutzelman
America Online: JeffreyH11
Internet/BITNET:jh4o+@andrew.cmu.edu, jhutz@drycas.club.cc.cmu.edu

>> Apple // Forever!!! <<

grimlok@hubcap.clemson.edu (Mike Percy) (09/18/90)

throopw@sheol.UUCP (Wayne Throop) writes:

>> From: grimlok@hubcap.clemson.edu (Mike Percy)
>> If you want to make your makefile reflect the include hierarchy try
>>    [.. having "make" steps that touch ".h" files ..]
>> This does have the side effect of perhaps changing the time stamps on
>> a.h and b.h when their contents haven't changed, but I don't mind that.

>There are problems with this method.  Mike may not mind them, but
 
I've received some comments on this.  When I posted this, I tried to
make it clear that "this works for me," to emphasize that I may be
counting on some strangeness of TC's make.  If it's wrong or "ugly"
then you have my apologies.
 
It works for me, but I'd like to point out that I've only had to do it
for a very limited number of things that I've ported (i.e. someone
else's code!).  I personally don't have includes in a .h file, so I
never have to deal with it.  I also am not running in any sort of
multi-programmer situations, so I don't have to worry about mangling the
timestamps.
 
Obviously, folks, there are better ways...

"I don't know about your brain, but mine is really...bossy."
Mike Percy                    grimlok@hubcap.clemson.edu
ISD, Clemson University       mspercy@clemson.BITNET
(803)656-3780                 mspercy@clemson.clemson.edu

versto@cs.vu.nl (Verstoep C) (09/18/90)

chris@mimsy.umd.edu (Chris Torek) writes:

>Indeed.  Unfortunately, this requires the construction engine (`make')
>to have access to the actual dependency information, which means either
>pushing all those rules from mkdep-* into `make' itself, or else an
>incestuous relationship between the compiler(s) and `make'.

In fact, this is done in Amake, the configuration management tool for
the Amoeba OS.  Besides offering parallelism, it allows one to define
that a tool publishes the inputs it has read during an invocation.
This information is automatically merged into a `statefile', which also
maintains information about the compilation flags used.  As the C preprocessor
is usually available as a separate program, it is easy
to `plug in' a slightly enhanced version, which also reports the header
files encountered.  (As a last resort, the "cc -c" tool can be defined
to use a "mkdep"-like program, but on average we have measured a 20%
overhead in that case.)

>(Too, it has the drawback of requiring that there be room in every
>output file for the information needed by `make', or else that output
>files come in pairs: foo.o and .depend.foo.o, or some such.  Which is
>`better' is to some extent a matter of taste.  One can argue that every
>`object' file should in fact be a directory: foo.o/text, foo.o/data,
>foo.o/symbols, foo.o/debug, foo.o/depend....  Let the file system work
>for you.)

In Amake, these files are kept separately, in a subdirectory.  They are
completely hidden, and temporarily moved back when they are needed
(i.e., when the loader or archiver are to be run).  The statefile contains
the mapping of tool invocations to the objects produced, so it knows which
ones to pick, during a rebuild.  Afterwards, only the target objects are
directly visible. The sources and source descriptions are usually kept
elsewhere, so that several independent (mc68000/sparc/vax; Amoeba/Unix)
configurations can be conveniently maintained.

Kees Verstoep (versto@cs.vu.nl)

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (09/18/90)

In article <0955@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
> Now, "ideally" the construction steps (eg: compilers) and make should
> talk to each other about dependencies.

Exactly. This can be done rather cleanly: cc has a -F flag to output a
list of all files it'll open. Every output file has a dependency list,
consisting of just the commands needed to create that file. make zooms
through the commands, giving each one a -F to find out what files it'll
use. It continues recursively, then invokes real compiles as necessary.
A file is always re-made if its dependency list has changed.

For efficiency, make can cache the -F output in another spot, knowing
that it must reinvoke cc -F whenever any of those files are touched.

If all constructors (including make itself) have a -F flag, this scheme
becomes quite reliable and modular.
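
As a rough Bourne-shell sketch of the caching part (purely illustrative: it
assumes the construction command, given that -F flag, prints one input file
per line, which is not something every cc can actually do):

	#!/bin/sh
	# depcheck.sh -- sketch: depcheck.sh target construction-command args...
	target=$1; shift
	cache=.flist.$target

	# ask the constructor for its input list the first time through
	[ -f "$cache" ] || "$@" -F > "$cache"
	inputs=`cat "$cache"`

	# if any listed input changed, the cached list itself may be stale:
	# ask the constructor again
	if [ -n "`find $inputs -newer $cache -print`" ]; then
		"$@" -F > "$cache"
		inputs=`cat "$cache"`
	fi

	# re-make the target if it is missing or older than any input
	if [ ! -f "$target" ] || [ -n "`find $inputs $cache -newer $target -print`" ]; then
		"$@"
	fi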

---Dan

andrew@alice.UUCP (Andrew Hume) (09/21/90)

In article <1326@mts.ucs.UAlberta.CA>, userAKDU@mts.ucs.UAlberta.CA (Al Dunbar) writes:
~ >  <<stuff deleted>>
~ >BTW, why are the components called "dependents"?  Seems to me that the
~ >target is the dependent because it depends on the components.
~ >
~ I find the terminology confusing too. Maybe they use "dependent"
~ because there is no such word as "dependee". Or maybe because
~ "dependable" would mislead programmers into thinking the code in
~ their "dependables" must be OK.
~  


in mk, the terminology used is prerequisites, mostly to avoid
these problems.
	andrew