[alt.sources.d] different tasks, different tools

tchrist@convex.uucp (Tom Christiansen) (11/26/89)

If you want to affix one piece of wood to another, you can do so in any of
several different ways.  If you have nails handy, you might just nail them
together.  You reach into your toolbox and pull out a hammer and off you
go.  Now if you don't happen to have a hammer, you can probably use a
screwdriver: turn it upside down and use that and the bottom of your shoe
to hammer it in there.  If you have only a screw and a hammer, it might
also work, although not as well.

Like physical construction, the building of software is more easily
accomplished if you employ different tools for different tasks.  If you
find something tedious or difficult to do with one tool such that you find
yourself bending over backwards in painful contortions, then probably you
should reassess the situation.  Are you trying to put up a steel bridge
with only a hammer and nails?  Are you trying to nail together a small
kitchen chair using a jackhammer?

One of the most remarkable things about UNIX is the way it so often lends
itself to a pluralistic approach: any of several solutions are feasible.
Different factors will influence which approach you select: the
availability of the tool, its portability, security concerns, the time it
takes you to get the software written and debugged, and the time it takes
to actually execute the task.  Depending on which of these factors is
foremost in importance, you will likely elect to use a different tool.

System administrators are supposed to do everything; they are the jacks of
all trades of the UNIX world.  Sometimes they double as dumpers and tape
operators; sometimes they act as hardware logistic planners; sometimes they must be
sometimes they act as hardware logistic planners; sometimes they must be
systems programmers (which I define to be programmers who write programs
for other programmers); sometimes they must even act as electronic
policemen.  At USENIX's LISA II and LISA III conferences, polls were taken
to see how many end-users sysadmins had to support, and the results were
telling: very few had ratios of 1:50 or better, and many were in the
1:200-or-worse range.  For these reasons, the overriding factor is often
the time it takes to get the software written and debugged, followed by
portability, since a good many sites have to support different
architectures and O/S versions.

I use a variety of tools to get my job done, which happens to be writing
software tools primarily to streamline software development and
secondarily to automate systems administration.  These include sh, sed,
awk, perl, C, and yacc.  Of late I've been doing less and less in the
first three.  If I had ksh and nawk for all the machines I run on, I might
use those: both are significant improvements over their ancestors.  But I
don't, so I must restrict myself to using versions of those tools without
subroutines, just to mention one significant difference.
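Subroutines are the sort of thing you run into right away: nawk accepts
user-defined functions, while the old awk rejects the very same program
with a syntax error.  A tiny sketch of what I mean (the function name is
just for illustration; on a system with only old awk this is the line that
dies):

```shell
# nawk (and gawk) accept user-defined functions; old awk chokes
# on the 'function' keyword outright.
awk 'function square(x) { return x * x }
     BEGIN { print square(7) }' < /dev/null
```

With only a BEGIN block the program never reads its input, so the
redirection is just belt-and-suspenders.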

I don't even have the cut and paste utilities (buggy PD versions aside) on
all the machines I use; look on the Tahoe tape -- they're not there.  I would use
the GNU versions, gawk and bash, except that they were still in beta test
last I looked, and I've seen too many patches come by for them in the gnu
groups.  I'll wait until they're stable.  I'm afraid to use bison for
fear of infecting my software with the GPL if I don't want it there.  I
don't mind (and actually support) the use of the GPL for perl.  I sure
don't think anyone but its author (who isn't) should be making money on perl.
And since it's an interpreter, I'm not concerned about the bison-style
potential infection of the GPL into my code.  To give someone the program,
I must give the source, not so much due to licensing as due to the nature
of an interpreter.  And you know, I kind of like it that way.

I have seen some pretty amazing sh scripts, some several thousand lines
long, though I'm a little reticent to attempt anything that size myself.
The csh I do not consider suitable for programming tasks.  As the Sun man
page states in its BUGS section:
"Although robust enough for general use, adventures into the esoteric
periphery of the C shell may reveal unexpected quirks."  I've been too
often frustrated by these quirks to tolerate the csh for any
non-interactive work (for interactive use there's tcsh anyway).  Also, I'm a bit leery
of trying to make a command interpreter be a programming language or vice
versa.  I expect very sophisticated interactive capabilities in 
a command interpreter, like visual command line editing with my preferred 
editor, history expressions, dynamic spelling correction, and rebinding of 
command keys just to name a few.   I would laugh if I found those
in a programming language.

I really don't do much sh anymore except in makefiles, and I use little
sed or awk, though not none whatsoever.  For example, I still have
old aliases or shell scripts like:

     alias ush 'sed "1,/^\#/d" < `mhpath cur` | sh'
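For anyone puzzling over that one: the sed expression deletes from line 1
through the first line beginning with a `#', which strips the header of
the current MH message (that's the file `mhpath cur` names) before feeding
the body to sh.  Here is the same delete-through-a-marker idiom run on a
throwaway sample file of my own invention:

```shell
# Delete from line 1 through the first line starting with '#';
# only what follows the marker survives.
printf 'From: someone\nSubject: demo\n#\necho hello\n' > /tmp/msg.$$
sed '1,/^#/d' /tmp/msg.$$        # prints: echo hello
rm -f /tmp/msg.$$
```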

I particularly don't like the execution time hit I take in shell scripts
that make repeated calls to each of sed, grep, sort, tr, expand, and awk
to munge their I/O.  I used to rewrite these in C.  There are still people
I know who write in csh and, to extract each line from a file, run 'sed -n
-e ${i}p', iterating through thousands of lines of the file.  Needless to
say, I avoid those machines when those scripts are running.  :-)  Worse
than the execution time problem is that it is often more natural (to me)
to express the problem in perl making a single pass through the input
stream.  
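To make the complaint concrete, here is the pattern I mean next to its
single-pass equivalent, on a hypothetical three-line sample file (a real
victim would have thousands of lines, with a fork and a full rescan of the
file for every single one of them):

```shell
printf 'alpha\nbeta\ngamma\n' > /tmp/sample.$$

# The slow way: one sed process per line, each one rescanning
# the file from the top just to print a single line.
i=1
while [ $i -le 3 ]; do
    sed -n -e "${i}p" /tmp/sample.$$
    i=`expr $i + 1`
done

# The single-pass way: one process, one read through the file.
sed -n -e '1,3p' /tmp/sample.$$

rm -f /tmp/sample.$$
```

Both loops print the same three lines; the difference is the number of
processes forked and passes made, which is exactly what you feel when the
file is long.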

I certainly still write in C, although more often than not perl works nicely,
reducing my development and debugging time to perhaps 10% of what it would
have taken in C.  If I really need speed, I can manually translate the perl
prototype, but this has only been necessary twice in the last six months.
For truly complex parsing tasks, I certainly use yacc.  For example, I've
been wanting a ctags that knows about struct|union|enum declarations and
where variables are defined like cxref, and after an experience in parsing
C in perl (see pstruct, soon to be posted to alt.sources) I've decided to
take one of the available yacc grammars for C and just stick in appropriate 
actions at the right places.

Maybe some of us, myself included, have in recent weeks verged upon
"overzealous" in trying to show the world that perl is a neat thing.  I
think it was Paul O'Neill who said that perl makes programming fun again.
I confess to experiencing this same feeling -- as a programmer, I feel
much like a kid with a nifty new toy.  I don't think it's the tool to end
all tools though.   You don't want to design million-line programs in perl;
for that you don't even want C, you want C++.  Neither do you really want to
write a symbolic debugger for compiled C in perl, although it was quite
natural and appropriate for Larry Wall to write perl's own debugger in
perl.  I just find that for a substantial portion of the high-level
systems programming tasks that I routinely encounter, perl makes a great
alternative to a shell-script-from-hell or weeks of potentially painful C
debugging.  It's not THE solution, but it's ANOTHER solution, and IMHO
often a better one.  As I said at the top of this note, one of the
nice things about UNIX is you actually have the possibility to employ
different solutions: you're not forced to use DCL and Bliss or JCL and
PL-8, you can use whatever best suits your needs at the time.  It's
a pluralistic society and a pluralistic programming environment, and I 
feel fortunate to exist in both.  Imagine being confined to CP/M in 
Rumania as an alternative. :-)

--tom

    Tom Christiansen                       {uunet,uiucdcs,sun}!convex!tchrist 
    Convex Computer Corporation                            tchrist@convex.COM
		 "EMACS belongs in <sys/errno.h>: Editor too big!"