[comp.unix.wizards] 'nmake'

sparker@unisoft.UUCP (Steve Parker) (05/23/89)

In article <11570@ulysses.homer.nj.att.com>, ekrell@hector.UUCP (Eduardo Krell) writes:
> 
> As I said before, I don't know whether the new nmake will be offered
> through the Toolchest or with SVR4, but the bottom line is that it
> will.

I am truly sorry to hear that.  I'm convinced it's the wrong tool for the job,
encourages complicated solutions to problems, and has extremely poor compatibility
with its predecessor.

Back in college, I was told a number of times how many students leave school wanting
to write a hot new operating system, and a hot new compiler.  Nmake strikes me as
an example of exactly this disease.  Nmake has an odd programming language, with
a complicated, ill-defined paradigm to describe the class of problems it intends to
solve.  It seems to me to have been bred to "solve all the world's problems" -- a
very old human folly.

The whole reason 'make' was a win in the first place was that it found a simple
paradigm for describing how to regenerate software:  Files depend upon one another,
and a simple set of steps record how to bring a file up to date.  Can anyone state
in one or two sentences what nmake's paradigm is?  I sure can't.  There are file
dependencies, state variable dependencies, attributes, ...  Ugh.
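
To make that concrete: the entire make paradigm fits in a few lines of
makefile.  (A generic sketch of my own, not from any real project:)

	# prog must be relinked if any .o is newer than it; each .o
	# must be recompiled if its .c or a header it uses is newer.
	prog: main.o util.o
		cc -o prog main.o util.o

	main.o: main.c defs.h
		cc -c main.c

	util.o: util.c defs.h
		cc -c util.c

A dependency graph, time stamps, and commands.  That's the whole model.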

To rattle on a bit more, the following is from "The Fourth Generation Make", by
Glenn Fowler, from the summer USENIX in Portland:

	"As a testimony to the strength of this metalanguage, most new make
	features and ideas have resulted in changes to the standard builtin rules
	(written as a makefile) rather than in changes to the [nmake source]."

This statement bothers me.  It reminds me of saying "Ada is the perfect programming
language, because you can do _anything_ with it."  (Substitute for Ada your favorite
over-burdened, complex language.)  It makes me suspicious of the authors:  Do they
know when to quit?  I doubt it.

One particular nmake gripe I have has to do with its scan for implicit dependencies.
(I don't want 'make' knowing how to scan C files for dependencies, anyway.  That should
belong to a separate tool, but let's forget that for a moment.)  In an attempt to make
nmake fast enough to be a satisfactory tool, one optimization was to build the scanning
of C files for '#include' dependencies into the nmake code itself.  This saves the
fork/exec of /lib/cpp, don't you see?  Of course, all it does is essentially:

	egrep '^#[ \t]*include'

This does the wrong thing for some #ifdef'd source.  It still makes nmake slower
than it need be, and wrong to boot!?!?
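
To illustrate, consider a fragment like this (contrived, but the pattern
shows up in any portable source):

	/* a grep-style scan sees BOTH includes, regardless of
	   which branch cpp would actually take */
	#ifdef SYSV
	#include <string.h>
	#else
	#include <strings.h>
	#endif

A scanner that ignores the #ifdef records dependencies on both headers;
running the file through /lib/cpp would have reported only the one
actually used.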

> 
> Remember, Toolchest programs are totally unsupported. You're expecting
> too much from something that didn't promise anything.
>     
> Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

Been hanging out with the lawyers lately?  "No claim is made about fitness
for any purpose whatsoever."

I know nmake doesn't come with any claims.  I guess I just have the idealistic hope
that a responsible engineer doesn't let a buggy, untested piece of software loose
on the world.  And I look at AT&T, who charges money for nmake, and expect
that they would be concerned about their reputation -- that they would send
it out in a form that pretty much works.  Silly me.

And even if it _were_ completely free, unsupported software, there is nothing wrong
with calling a spade a spade.  People are doing exactly this.  Or are there hordes
of people who are happy and pleased with nmake out there?

To decide if a tool is successful, one criterion is how many people seek it out to
help them.  Is it "part of the furniture"?  Patch and tons of GNU software are great
examples of good software tools.  They work, and they do useful things.  Some vendors
ship GNU sources on their systems because so many of their customers use them.

In my opinion, nmake is not such a success.  I have had to eliminate it from my
software development.  I was unable to justify the amount of overhead it involved,
versus the benefit over make.  (The only benefit I derived from nmake was the "-j"
parallel job option.  All the rest of the 'features' did not do anything constructive,
or at least not constructive enough to justify themselves.)

No nmake!

Steve Parker
sparker@unisoft.com
{ucbvax,sun,uunet}!unisoft!sparker
----------------------------------
"Start your own revolution, and cut out the middle man"   --Billy Bragg

ekrell@hector.UUCP (Eduardo Krell) (05/23/89)

In article <2060@unisoft.UUCP> sparker@unisoft.UUCP (Steve Parker) writes:

>> As I said before, I don't know whether the new nmake will be offered
>> through the Toolchest or with SVR4, but the bottom line is that it
>> will.
>
>I am truly sorry to hear that.  I'm convinced it's the wrong tool for the job,
>encourages complicated solutions to problems, and has extremely poor compatibility
>with its predecessor.

I don't understand.  What is it that bothers you: that nmake will be offered
and people will have a chance of using it?  You have a right not to use it
if you choose not to, but why shouldn't others get to decide whether they like it
or not?

>The whole reason 'make' was a win in the first place was that it found a simple
>paradigm for describing how to regenerate software:  Files depend upon one another,
>and a simple set of steps record how to bring a file up to date.  Can anyone state
>in one or two sentences what nmake's paradigm is?  I sure can't.  There are file
>dependencies, state variable dependencies, attributes, ...  Ugh.

The need for these is that the make model is too simplistic: As I said
before, time stamps are not good enough to determine when a file should
be recompiled. When projects get too big, the makefiles get too complicated
and one is never 100% sure that make is recompiling all the files it REALLY
needs to and that it's not compiling too much.

On a small system, this might not make a big difference, but on a big
project where building the system takes a day of CPU time or more,
it's critical that this process be as efficient and as reliable as
possible.  Many of the projects using nmake now have cut their building
times because back when they used make, they had to rebuild everything
from scratch as they didn't trust make to do the right thing.

>	"As a testimony to the strength of this metalanguage, most new make
>	features and ideas have resulted in changes to the standard builtin rules
>	(written as a makefile) rather than in changes to the [nmake source]."

But what bothers you: that he did succeed in doing that? Are you annoyed
that someone can write a general purpose make engine which can be tailored
with higher level rules?

>Of course, all it does is essentially:
>
>	egrep '^#[ \t]*include'
>
>This does the wrong thing for some #ifdef'd source.  It still makes nmake slower
>than it need be, and wrong to boot!?!?

First of all, you can turn off source scanning if you want. And, anyway,
source files are rescanned only if they've changed since the last time
they were compiled, so you're clearly exaggerating the overhead.

And it doesn't do the wrong thing for #include's within #ifdef's:
it knows about them and it doesn't require the included file to
be there.

>And even if it _were_ completely free, unsupported software, there is nothing wrong
>with calling a spade a spade.  People are doing exactly this.  Or are there hordes
>of people who are happy and pleased with nmake out there?

Depends on the version of nmake: you're talking about a 4 year old piece
of code which we haven't been allowed to update. nmake 2.0 is a complete
rewrite and it's being used by lots of internal AT&T projects, all sizes.

I've rewritten the Unix kernel makefiles to a single nmake makefile.
It went down from about 40 pages of makefile to 1.5 and the system
can be recompiled much faster now.

>(The only benefit I derived from nmake was the "-j"
>parallel job option.  All the rest of the 'features' did not do anything constructive,
>or at least not constructive enough to justify themselves.)

How about the use of a co-shell so that individual actions don't need
to spawn new shells all the time?
How about having compiled makefiles so that they don't need to be reparsed
every time you run make? How about binding of source and header files so
that if you add a different include or source directory which changes the
binding of some source/header file names, nmake will recompile whatever
needs to? How about using a state variable so that if you change its
value in the Makefile, all the files which use that symbol will be
recompiled (with the new -D flag)? How about using .SOURCE rules
to specify lists of directories where different kinds of source and
header files are and let nmake generate the right -I flags and compile
source from those directories?
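
To give the flavor, here is roughly what a small nmake makefile looks
like (I'm quoting syntax from memory, so treat the details as
approximate):

	# .SOURCE rules list the directories to be searched for each
	# kind of file; nmake derives the -I flags from where each
	# header actually binds.
	.SOURCE.c : src
	.SOURCE.h : include /project2/include

	# the :: operator says: build prog from these sources using
	# the builtin metarules.
	prog :: main.c util.c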

The list could go on, but I've already made the point.
    
Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com

gwyn@smoke.BRL.MIL (Doug Gwyn) (05/24/89)

In article <11581@ulysses.homer.nj.att.com> ekrell@hector.UUCP (Eduardo Krell) writes:
>Depends on the version of nmake: you're talking about a 4 year old piece
>of code which we haven't been allowed to update. nmake 2.0 is a complete
>rewrite and it's being used by lots of internal AT&T projects, all sizes.

Why aren't you allowed to update it?  Several of the ToolChest packages
have been updated.  Also, why should we care about the properties of
nmake 2.0 if we can't get our hands on it?

robert@sysint.UUCP (Robert Nelson) (05/24/89)

In article <11581@ulysses.homer.nj.att.com> ekrell@hector.UUCP (Eduardo Krell) writes:
>
>Depends on the version of nmake: you're talking about a 4 year old piece
>of code which we haven't been allowed to update. nmake 2.0 is a complete
>rewrite and it's being used by lots of internal AT&T projects, all sizes.
>

I think this is the key.  We are talking about two different products:
	Us:
		- A buggy, poorly written, unsupported piece of alpha code.
	You:
		- A well written supported product.

When we have access to the product you are using, the tone of the discussion
will likely be considerably different.

The issue of the Toolchest code being unsupported wouldn't be so hard to take
if it weren't for the fact that AT&T already has fixes to most of the
complaints and just chooses not to make them available.  Yet it still has
the gall to charge for the _old_, _buggy_ version.


-- 
Robert B. Nelson                                Systems Interface Inc.
                                                560 Rochester St, 2nd Floor
UUCP: uunet!mitel!sce!cognos!sysint!robert      Ontario, CANADA  K1S 5K2

hamish@unisoft.UUCP (Hamish Reid) (05/24/89)

In article <11581@ulysses.homer.nj.att.com> ekrell@hector.UUCP (Eduardo Krell) writes:
>
>The need for these is that the make model is too simplistic: As I said
>before, time stamps are not good enough to determine when a file should
>be recompiled. When projects get too big, the makefiles get too complicated
>and one is never 100% sure that make is recompiling all the files it REALLY
>needs to and that it's not compiling too much.

Fine. If it actually worked that way. With nmake's poor documentation
and inscrutable syntax, there was a long time during a recent project
when we were almost always 100% sure that nmake was *not* recompiling
the correct things. Nmake was simply too hard to understand and use in
almost all situations - a tool like nmake should, above all, be
*trustable*.  Trust comes from understanding, and understanding tends to
come from simplicity, clarity, and compatibility with past paradigms.
The versions available were (and are) none of these, mostly due to bugs
(presumably fixable), an (IMO) tendency to try to solve every problem
with the one program, and the terse and difficult syntax.

The issue here is *overall* effectiveness and efficiency. I have no
trouble with co-shells, the idea of a generalised make engine, etc.
It's just that our experiences with the actual implementation were
uniformly bad.  We noticed that nmake bugs and problems - deriving from
the combination of paradigm confusion and complexity, opaque syntax,
and poor documentation - accounted for a significant proportion of the
development and testing time on one of our larger recent projects.  My
personal estimate is that we spent perhaps 10-15% of our engineering
effort towards final release on nmake-related bug fixing: suddenly the
libraries wouldn't compile, or nmake decided that something was
out-of-date when it wasn't (or vice versa), or the rules were subtly wrong
but almost incomprehensible, so we had to appoint an nmake guru who
became a bottleneck for the project, etc etc.  At least with make,
dependencies were simple, well-documented, and (almost) always explicit.
I know the problems with make - but they were always well-understood
and usually easy to work around.

Nmake is (perhaps) a good idea. For software engineering reasons,
though, I would currently prefer something simpler, more focused on
doing-one-thing-right, and more understandable - especially in a tool
that is crucial to project success. Then again, I would prefer an
entire IPSE-like system, but ....

I will never willingly use nmake again on any project unless we can be
convinced that it has been fixed, and that there are no better tools
available. We will probably go back to make or MAKE or mk or whatever -
at least yer average injuneer can both understand and trust them.

Please, if there is a better nmake available, let us all have it,
evaluate it, and maybe use it the way it was intended. If it isn't
available, there isn't much use talking about it here - we are
unfortunately limited to talking about what's available.

	Hamish
-----------------------------------------------------------------------------
Hamish Reid             UniSoft Corp, 6121 Hollis St, Emeryville CA 94608 USA
+1-415-420-6400         hamish@unisoft.com,         ...!uunet!unisoft!hamish

sparker@unisoft.UUCP (Steve Parker) (05/25/89)

In article <11581@ulysses.homer.nj.att.com>, ekrell@hector.UUCP (Eduardo Krell) writes:
> The need for these is that the make model is too simplistic: As I said
> before, time stamps are not good enough to determine when a file should
> be recompiled. When projects get too big, the makefiles get too complicated
> and one is never 100% sure that make is recompiling all the files it REALLY
> needs to and that it's not compiling too much.

However, writing a program that knows everything there is to know about
software regeneration is bound to be folly.  For the same reason that no
one programming language fits everyone, no one method for software
regeneration fits all.  Instead, I prefer to choose an understandable,
predictable, and easily adaptable tool.  I especially prefer that to
a complex, difficult to understand tool, with a difficult to read syntax.
Thus, I view nmake as an example of the de-evolution of make.

> On a small system, this might not make a big difference, but on a big
> project where building the system takes a day of CPU time or more,
> it's critical that this process be as efficient and as reliable as
> possible.  Many of the projects using nmake now have cut their building
> times because back when they used make, they had to rebuild everything
> from scratch as they didn't trust make to do the right thing.

So nmake maximizes both efficiency and reliability?  Let me see:  The
most reliable way to regenerate software is to recompile the whole world
every time.  (No errors or variability are possible.)  Efficiency is
recompiling only the absolute minimum number of files.  Sounds like
these are trade-offs to me.  And again, I assert nmake is based on a
complicated and confused paradigm of software regeneration, that falls
somewhere in the middle.

> >	"As a testimony to the strength of this metalanguage, most new make
> >	features and ideas have resulted in changes to the standard builtin rules
> >	(written as a makefile) rather than in changes to the [nmake source]."
> 
> But what bothers you: that he did succeed in doing that? Are you annoyed
> that someone can write a general purpose make engine which can be tailored
> with higher level rules?

No.  I have always been able to get make to do pretty damn close to anything
I want.  (Granted sometimes it has not been as pretty as I would have
liked.)  And in fact, I would probably be much happier if he had succeeded
at making a general purpose make engine.  Based on my experiences, it was
not a success.

> >This does the wrong thing for some #ifdef'd source.  It still makes nmake
> >slower than it need be, and wrong to boot!?!?
> 
> First of all, you can turn off source scanning if you want. And, anyway,
> source files are rescanned only if they've changed since the last time
> they were compiled, so you're clearly exaggerating the overhead.

Sorry, it makes it slightly slower.  And wrong.

> And it doesn't do the wrong thing for #include's within #ifdef's:
> it knows about them and it doesn't require the included file to
> be there.

Maybe _your_ version doesn't.  Mine does.  It can be turned off, or
cpp can be used.  (The latter is clearly the right way.)  But the default
behavior I see is to generate false dependencies.

> I've rewritten the Unix kernel makefiles to a single nmake makefile.
> It went down from about 40 pages of makefile to 1.5 and the system
> can be recompiled much faster now.

Interesting.  After I'd reached the point of having burned three weeks
of my time on nmake bugs with a kernel nmake file, I converted it to
a make file.  It was less than 10% larger than the nmake file.  (I work
in a fairly complicated cross-development environment.)  While I haven't
made measurements, my feel is that make is about 30% faster at figuring
out if anything needs to be made.  And it recompiles the right things
every time now!  Wow!  Make sure is a great tool....

> How about the use of a co-shell so that individual actions don't need
> to spawn new shells all the time?

Fine idea.  Right now I go to build a library, and a bug in either ksh
or nmake causes processes to be left hanging around, spinning forever.
(It does build the library okay though.  I'm thankful for that :-()

> How about having compiled makefiles so that they don't need to be reparsed
> every time you run make?

Bad idea.  Besides, you were just telling me about how short nmake files
are.  Why, the amount of time spent parsing them must be so insignificant
now.  I don't like things I can't see!  I guess I'm a moss-covered old-timer.
I remember when UNIX meant all files are streams of bytes, and the only
ones I couldn't 'cat' were machine executables.  Something about
flexibility and combining tools at will comes to mind....

> How about using a state variable so that if you change its
> value in the Makefile, all the files which use that symbol will be
> recompiled (with the new -D flag)?

How about a shell script that takes the name of a variable I've changed,
uses grep to go through source files looking for actual dependencies on
that variable, and touching only _those_ files?  That eliminates the
need for that other nasty binary file that holds all the saved state.
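
Such a script is short (an untested sketch; run it as 'retouch SYMBOL'
before running make):

	#!/bin/sh
	# retouch: touch every .c file that actually mentions $1, so
	# that plain make will recompile just those files.
	for f in `grep -l "$1" *.c`
	do
		touch $f
	done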

> How about using .SOURCE rules
> to specify lists of directories where different kinds of source and
> header files are and let nmake generate the right -I flags and compile
> source from those directories?

Path searching for source files is available in numerous makes.  Many
of them are predictable and reliable.

> The list could go on, but I've already made the point.
>     
> Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ
> UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com

Pleeez.

In my opinion nmake is a bad idea, poorly done.  Whether it is fixed in
mumble-fatz is a secondary point at best.  My experiences with it lead
me to believe the quality level is lower than I require of myself before
I release software.

My main feelings, however, are that it is a bad idea.  It violates the
rules of good software design.  The following items are quotes from
Rob Pike's paper on designing windowing interfaces in USENIX's Computing
Systems, V1#4:

	o Simple systems are more expressive.
	o Balance the benefit of a feature against its overall cost.
	o Design with a specific customer (or task) in mind.

Nmake isn't simple.  Building features like scanning C source files for
dependencies into the tool is a poor cost/benefit decision.  Nmake's design
reflects an attempt to design with _all_ possible customers in mind.

In my opinion, nmake is a poorer tool even than make.  And other new
makes have a less confusing, more sensible view of the world.  (e.g.,
mk and GNUmake)

Steve Parker
sparker@unisoft.com
{sun,uunet,ucbvax}!unisoft!sparker

ggs@ulysses.homer.nj.att.com (Griff Smith) (05/25/89)

In article <2066@unisoft.UUCP>, sparker@unisoft.UUCP (Steve Parker) writes:
...
> > First of all, you can turn off source scanning if you want. And, anyway,
> > source files are rescanned only if they've changed since the last time
> > they were compiled, so you're clearly exaggerating the overhead.
> 
> Sorry, it makes it slightly slower.  And wrong.

I haven't noticed any slowdown caused by this.  But that's not the
point; in practice, this feature saves me a lot of effort.  I tried to
use `make' to direct compilation of about 80 C++ source files, plus
associated function prototype headers.  I could not reliably keep the
header references up to date.  I realize that I could have stolen a
`make depend' script from some of the BSD source, but my headers were
changing so frequently that I would have had to `make depend' before
most compilations.  The default nmake rules did the right thing.  I
will add any flames that `you should have done it this way, you dolt'
to my bag of tricks.  I haven't seen any better ones than what I get
from nmake.

> Nmake isn't simple.  Building features like scanning C source files for
> dependencies into the tool is a poor cost/benefit decision.  Nmake's design
> reflects an attempt to design with _all_ possible customers in mind.
> 
> In my opinion, nmake is a poorer tool even than make.  And other new
> makes have a less confusing, more sensible view of the world.  (e.g.,
> mk and GNUmake)

I share many of your concerns; I don't care for the `slices, dices,
chops and grates' syndrome either.  My dependence on nmake is similar
to that of an addict for drugs.  I can do without it for most C
compilations, but when I try to use `make' for building C++ systems I
get withdrawal symptoms.  The automatic maintenance of source
dependencies eliminates a major headache.  Perhaps this means that I
should also discard C++ because it also attempts to address too many
problems without delegation.

I'm sorry that the Toolchest version of nmake has caused you so much
grief.  I have been using the new version for about a year now, and it
has usually served me well (it also helps to know Glenn's phone
number).  I doubt that it will ever overcome your philosophical
objections, but it is considerably more polished than it was three
years ago.
-- 
Griff Smith	AT&T (Bell Laboratories), Murray Hill
Phone:		1-201-582-7736
UUCP:		{most AT&T sites}!ulysses!ggs
Internet:	ggs@ulysses.att.com

ekrell@hector.UUCP (Eduardo Krell) (05/26/89)

In article <2066@unisoft.UUCP> sparker@unisoft.UUCP (Steve Parker) writes:

>Let me see:  The
>most reliable way to regenerate software is to recompile the whole world
>every time.  (No errors or variability are possible.)

With old make (and any other version of make which relies only on time
stamps), yes.

> Efficiency is 
>recompiling only the absolute minimum number of files.  Sounds like
>these are trade-offs to me.

With old make, yes.  This is one of the advantages of nmake: that
you get both reliability (by having a state file and not depending on
just comparing time stamps) and efficiency (by not having to recompile
more than is needed).

>No.  I have always been able to get make to do pretty damn close to anything
>I want.

Please tell me how you get make to do the right thing when you replace
either your source or header files with different versions whose time
stamps are still older than those of the corresponding .o's?
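
Spelled out as a transcript (contrived, but it's exactly what happens
when you back out a change or restore a file from an older release area):

	$ cc -c foo.c                   # foo.o is now newer than foo.c
	$ mv /old/foo.c foo.c           # mv preserves the old time stamp
	$ make foo.o
	`foo.o' is up to date.          # make is fooled; foo.o is stale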

>Based on my experiences, it was not a success.

And based on my experience (and the hundreds of nmake installations
within AT&T), it was.

>Interesting.  After I'd reached the point of having burned three weeks
>of my time on nmake bugs with a kernel nmake file, I converted it to
>a make file.  It was less than 10% larger than the nmake file.

This shows you didn't know how to take advantage of nmake. All the
makefiles I've converted to nmake have been reduced by about an order
of magnitude.

> While I haven't
>made measurements, my feel is that make is about 30% faster at figuring
>out if anything needs to be made.  And it recompiles the right things
>every time now!

Does it? See the example above.

>How about a shell script that takes the name of a variable I've changed,
>uses grep to go through source files looking for actual dependencies on
>that variable, and touching only _those_ files?

Then you'll have to tell that shell script which variables have changed.
Why? I just want to change the symbol in my makefile and let nmake
figure out what needs to be recompiled.

>Path searching for source files is available in numerous makes.  Many
>of them are predictable and reliable.

I don't know of any other version of make which generates -I flags
based on what each source file actually includes (instead of issuing all
the -I flags to all the source files).

>In my opinion nmake is a bad idea, poorly done.

And in my opinion nmake is a great program. It takes guessing out
of the software building process.

>Nmake isn't simple.

Who said it is? It's not solving an easy problem. It can't be
simple because the rules for building systems in a Unix/C environment
are complex.

> Building features like scanning C source files for
>dependencies into the tool is a poor cost/benefit decision.

I'd like to know how you can do the same thing more efficiently.
I don't want to manually list every header file I include (the
transitive closure, actually) in my makefile, yet I want my
file to be recompiled whenever one of those header files is changed.

> Nmake's design reflects
>an attempt to design with _all_ possible customers in mind.

Nonsense. It was designed to do the right thing in a Unix/C environment
(which is the standard development environment at AT&T).

>In my opinion, nmake is a poorer tool even than make.  And other new
>makes have a less confusing, more sensible view of the world.  (e.g.,
>mk and GNUmake)

I don't know how big a project you've been working on, but for big projects
with thousands of files and hundreds of makefiles, make's simple model
isn't good enough.

I suggest we take this discussion off-line since it's getting too
boring.
    
Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com

peter@ficc.uu.net (Peter da Silva) (05/26/89)

I don't have nmake.  Until this discussion I had never heard of it, but I don't
understand one thing... how do you maintain the state file in the face of
an arbitrary number of editors, etc, capable of being used to munge a
file without telling the state file? Or do you have to run some program to
update the state file whenever you edit a file?

At least time stamps take a deliberate effort to subvert.
-- 
Peter da Silva, Xenix Support, Ferranti International Controls Corporation.

Business: uunet.uu.net!ficc!peter, peter@ficc.uu.net, +1 713 274 5180.
Personal: ...!texbell!sugar!peter, peter@sugar.hackercorp.com.

ekrell@hector.UUCP (Eduardo Krell) (05/27/89)

In article <4321@ficc.uu.net> peter@ficc.uu.net (Peter da Silva) writes:
>how do you maintain the state file in the face of
>an arbitrary number of editors, etc, capable of being used to munge a
>file without telling the state file? Or do you have to run some program to
>update the state file whenever you edit a file?

nmake stores in the state file the info it needs to determine what would
have to be recompiled next time. Things like time stamps of files, binding
of source and header files (i.e., foo.h came from /usr/myinclude and bar.h
came from /project1/release2/include), command line options to cc,
etc.

The state file doesn't need to be kept up to date between nmake runs.
Next time you run nmake, it will recompile whatever needs to be recompiled
and update the state file.
    
Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com