[comp.unix.wizards] sources in a heterogeneous environment

wallen@sdics.ucsd.EDU (Mark Wallen) (08/20/87)

With all the discussion about how to handle binaries
for different processors in a heterogeneous NFS environment,
I have a related question.  How do you folks handle
your source files?  For instance, we've got a fair
amount of home-grown software and keep much of it
on an NFS partition.  So far so good; I've won on
both disk space and having to worry about multiple
versions of source (there's only one now).  Here's
the dinger--if I had just done a "make install" on
my Vax system and turn around and do a "make install"
on my Sun, there are probably a lot of Vaxish .o
files hanging around.  Sure confuses make and the
Sun loader.  The only safe thing to do is "make clean"
and start from scratch.  This is a drag.  One
solution that seemed feasible to me was to hack
the makefile and add SUFFIXES like .vo for Vax
.o's and maybe .10o, .20o, and .4o for the various
sun flavors.  Then have parallel OBJS type definitions
and targets for the different machines.  Is this more
trouble than it's worth?  It does let me do development
on both/all machines at the same time without having
to recompile the WORLD each time I make a change to
a single .c.  If it's been done, is there a consistent
naming scheme (i.e., what do you call the vax objects,
etc.)?
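
For what it's worth, the suffix scheme described above might look
something like this make fragment (the .vo suffix, the VOBJS name,
and the file names are inventions for the sketch; since cc -c always
writes $*.o, the rule renames the result):

```make
# Hypothetical parallel-object fragment for the scheme above.
# .vo, VOBJS, and the file names are made up for illustration.
.SUFFIXES: .vo .o .c
.c.vo:
	cc -c $(CFLAGS) $*.c
	mv $*.o $*.vo

OBJS  = main.o sub.o     # Sun objects (native suffix)
VOBJS = main.vo sub.vo   # VAX objects

foobar: $(OBJS)
	cc -o foobar $(OBJS)
foobar.vax: $(VOBJS)
	cc -o foobar.vax $(VOBJS)
```

The same trick extends to .10o, .20o, and .4o with one suffix rule
apiece.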

Another different solution also occurred to me and
kinda helps the multiple binary problem.  How about
having different manufacturers use different A_MAGIC
numbers for their a.outs?  Someone (Sun, AT&T, me :-)
could referee the assignment of those numbers similar
to the way Xerox doles out Ethernet numbers.  The advantage
is that you won't be stumbling over the wrong kind
of binary--it's not an a.out on this machine.  "file"
could know that it's a Pyramid, Gould, or Vax binary
just by looking at the magic number.  It would break
some things, but I could imagine a transition period
where the kernel and a few concerned utilities recognized
both the old 0407 numbers and the new ones (and always
produced the new, of course).  (Perhaps the old 0407, 0410,
etc are just a bit worn out--they were different versions
of the PDP-11 branch instruction to jump over the rest
of the a.out header if you tried to directly load and
execute an a.out).
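
In the meantime, the existing magic numbers can already be sniffed
the way "file" does it; a per-vendor registry would just add more
cases to the same check.  A minimal sketch in shell (the labels are
the standard 0407/0410/0413 meanings; it assumes the host reads the
word in the same byte order the producing machine did, and any
vendor-specific values would be hypothetical extras):

```shell
# Classify a file by its leading 16-bit word, read in host byte
# order as the kernel would.  Only the traditional magic numbers
# are checked here; vendor-assigned values would just be more cases.
aout_type() {
    magic=`od -A n -o -N 2 "$1" | tr -d ' \n'`
    case "$magic" in
        000407) echo "OMAGIC (old impure a.out)" ;;
        000410) echo "NMAGIC (shared, read-only text)" ;;
        000413) echo "ZMAGIC (demand paged)" ;;
        *)      echo "not an a.out here (magic $magic)" ;;
    esac
}
```

So "aout_type /usr/bin/whatever" tells you which flavor you have, or
that it isn't an a.out on this machine at all.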

Thanks for listening, er, reading.

Mark R. Wallen

Cognitive Science
UC San Diego

{ucbvax,decvax,ihnp4}!sdcsvax!sdics!wallen.uucp
wallen%sdics@sdcsvax.ucsd.edu
wallen@nprdc.mil

dce@mips.UUCP (David Elliott) (08/20/87)

In article <390@sdics.ucsd.EDU> wallen@sdics.ucsd.EDU (Mark Wallen) writes:
>With all the discussion about how to handle binaries
>for different processors in a heterogeneous NFS environment,
>I have a related question.  How do you folks handle
>your source files?  For instance, we've got a fair
>amount of home-grown software and keep much of it
>on an NFS partition.  So far so good; I've won on
>both disk space and having to worry about multiple
>versions of source (there's only one now).  Here's
>the dinger--if I had just done a "make install" on
>my Vax system and turn around and do a "make install"
>on my Sun, there are probably a lot of Vaxish .o
>files hanging around.  Sure confuses make and the

I've seen 3 solutions to this type of problem, which not only
plagues NFS users, but also people with cross-compiler environments,
which are quite common in the early stages of most new systems
development projects.

1. Set up all makefiles so that "all" depends on "$(CLEAN)", which
   defaults to "clean". You can force it not to clean by saying

	make CLEAN= ...

   so it doesn't always clean. This solution isn't really great,
   but it works if you are fairly reasonable (like not aliasing
   make to "make CLEAN=").

2. Set up makefiles so that object files are stored in subdirectories.
   This either requires a mod to make to look in subdirectories, or
   requires more complicated makefiles. This also eats disk space
   pretty quickly.

3. Write a make front-end or front-ends that will create "semaphore"
   files, and have it do a "make clean" if the current state doesn't
   match what you are going to build.

   I've used two versions of this method. The first involves having
   a separate filename for each type of object you are building,
   such as "standard", "debug", "vax", "vaxdebug", and so forth.
   This works well if you have a limited set of things you are building.
   The other involves placing the values of the macros given on the
   command line and environmental parameters in a file. For example,
   if you are on a Sun and say

	femake CFLAGS=-g

   it should build a file containing "CFLAGS:-g" and "HOST-TYPE: sun",
   compare that against the file built for the previous build, and if
   it is different, do a "make clean", move the "new" data file to
   the "previous build" file, and execute the real make. If the files
   are the same, then no "make clean" is needed.

   You can also write a command called "make_same" that will run
   make with the same information as was previously used.

   If you want to keep multiple objects around, you can have the
   front-end move the objects to a subdirectory (using tar or
   cpio to preserve dates, and only if there is enough disk space),
   and have it be smart enough to look around for object files that
   match the build you are trying to do.
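
As a rough illustration of method 3, here is one way the semaphore
front-end might look as a shell function (the .buildstate file name
and the use of uname are this sketch's choices, not a description of
any particular real femake):

```shell
# femake: hypothetical front-end that records the host type and the
# macro arguments in a "semaphore" file, and forces "make clean"
# whenever they differ from the previous build's.
femake() {
    state=.buildstate
    {
        echo "HOST-TYPE: `uname -m`"
        for arg in "$@"; do
            echo "ARG: $arg"
        done
    } > "$state.new"
    # State differs from (or is missing for) the last build: clean.
    if [ ! -f "$state" ] || ! cmp -s "$state" "$state.new"; then
        make clean
    fi
    mv "$state.new" "$state"
    make "$@"
}
```

A second "femake CFLAGS=-g" in a row skips the clean; switching to
"femake CFLAGS=-O" (or moving to another machine type) cleans again.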

I could go on with this forever, but you get the idea.

The main thing I advise is that you try hard not to put all of
the work into the makefiles. I've seen some awfully complicated
makefiles in the systems I've worked with, and I'm convinced
that you can do everything you ever need with 4 targets: all
(make the commands locally), install (make the commands and
install them), clean (obvious), and generic (interface to shell
scripts that can do the tough stuff).
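
A skeleton of that four-target layout might be as simple as this
(dostuff.sh, DESTDIR, and TARGET are placeholders for whatever the
"tough stuff" turns out to be):

```make
# Hypothetical four-target skeleton; everything hard lives in a
# shell script reached through "generic".
all: foobar
foobar: $(OBJS)
	cc -o foobar $(OBJS)
install: all
	cp foobar $(DESTDIR)
clean:
	rm -f foobar *.o
generic:
	sh dostuff.sh $(TARGET)
```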

-- 
David Elliott		{decvax,ucbvax,ihnp4}!decwrl!mips!dce

jc@minya.UUCP (John Chambers) (08/21/87)

In article <390@sdics.ucsd.EDU>, wallen@sdics.ucsd.EDU (Mark Wallen) writes:
> With all the discussion about how to handle binaries
> for different processors in a heterogeneous NFS environment,
> I have a related question.  How do you folks handle
> your source files?  

This comes up all the time in places where you have to do cross-compiling
for multiple target machines.  There are two solutions.  Both of them
involve putting your source in one directory, say ".../src", and having
other parallel directories for compilation.  Say you are compiling for
machines "foo" and "bar".  You have the directories:
	.../src
	.../foo
	.../bar
To compile for foo, you cd to .../foo, where there is a makefile with
entries like:
	blech.o: blech.c; cc $(CFLAGS) -c blech.c
	blech.c: ../src/blech.c; ln ../src/blech.c .
Alternatively, you can take advantage of the fact that many C compilers
put their .o files into the current directory by default, even if the
source is off somewhere else.  If your compiler for foo acts this way, 
you can simply say:
	blech.o: ../src/blech.c; cc $(CFLAGS) -c ../src/blech.c
and all will work just fine.  In both cases, of course, CFLAGS contains
the stuff to customize the software for the foo processor.

There's no way to avoid multiple .o files, but by doing links or remote
compiles, you at least avoid having multiple source files.  (Unless the
src directory is on another file system, in which case you lose! :-)

To get really clever, you can put a makefile in the ... directory that
knows how to cd to each subdirectory and make all.
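
That top-level makefile can be a couple of lines (foo and bar as in
the example above):

```make
# Makefile in the ... directory: rebuild each target machine's tree.
MACHINES = foo bar
all:
	for m in $(MACHINES); do (cd $$m; make all); done
```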

-- 
	John Chambers <{adelie,ima,maynard}!minya!{jc,root}> (617/484-6393)

mouse@mcgill-vision.UUCP (09/05/87)

In article <390@sdics.ucsd.EDU>, wallen@sdics.ucsd.EDU (Mark Wallen) writes:
> With all the discussion about how to handle binaries for different
> processors in a heterogeneous NFS environment, I have a related
> question.  How do you folks handle your source files?  For instance,
> we've got a fair amount of home-grown software [...].  [This wins
> and I don't have to] worry about multiple versions of source (there's
> only one now).  Here's the dinger--if I had just done a "make
> install" on my Vax system and turn around and do a "make install" on
> my Sun, there are probably a lot of Vaxish .o files hanging around.

We handle things rather differently.  For concreteness, let us consider
two machines: "larry", a VAX, and "apollo", a Sun.  We have a directory
containing source to a program, say "foobar".  We wish to have only one
copy of this source and build both binaries from it.

Then one machine, say larry, has all the source in a directory
somewhere.  In the example, this would most likely be
/local/src/bin/foobar (that being our convention).  Then apollo will
have a distinct /local/src/bin/foobar containing one symbolic link per
source file, these links pointing to the corresponding files on larry.
Then the source files are perforce in sync, but the two machines have
distinct directories so that make, cc, etc. don't get confused.  Some
programs will share their Makefile as well; some will need slightly
different Makefiles.  Details.
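
The link farm on the apollo side can be set up mechanically; a sketch
(the function name is mine, and it assumes larry's tree is visible
through an NFS mount):

```shell
# mirror_src: populate a local build directory with symbolic links
# to every regular file in a (presumably NFS-mounted) source
# directory on the other machine.
mirror_src() {
    src=$1
    dst=$2
    mkdir -p "$dst"
    for f in "$src"/*; do
        if [ -f "$f" ]; then
            ln -s "$f" "$dst/`basename "$f"`"
        fi
    done
}
```

On apollo you would point it at larry's /local/src/bin/foobar (under
whatever mount point it appears) and at the local directory of the
same name.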

Works fine for us.  (Don't know how well it would work if the machines
were in different timezones.  Might have to wait N hours before running
the make on the other machine.  But how often do you nfs such machines
together?)

					der Mouse

				(mouse@mcgill-vision.uucp)