[net.lang.c] more about programming style

DHowell.ES@Xerox.ARPA (07/10/85)

Ok, let me clarify a few things here.  By "idioms", I mean a piece of
code in a language which can be directly translated into another piece
of code in that same language and looks similar in syntax to the same
piece of code in many other languages.  This would include i++ for i = i
+ 1 and if((a=b)==c) for a=b; if (a==c).  Idioms are language-specific.
These are the idioms that I would avoid to make programs more readable.
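
For concreteness, here are the two pairs side by side (a sketch; the
function wrappers and variable names are mine):

	int	i, a, b, c;

	spelled_out()			/* the style I would rather read */
	{
		i = i + 1;		/* increment, written out */
		a = b;
		if (a == c)
			;		/* ... */
	}

	idiomatic()			/* the C-specific shorthand */
	{
		i++;			/* the same increment */
		if ((a = b) == c)	/* assign and test in one expression */
			;		/* ... */
	}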

Pointers are a programming feature, not necessarily specific to a
language.  While BASIC, FORTRAN, and APL don't have them per se, they
could be simulated using procedures/subroutines.  When confronted with
two different ways of doing something in two different languages, the
more understandable one is the better choice.  A pointer is generally
easier to handle than a PEEK or POKE function.

Structured programming is basic to the understandability of a program.
If a language does not support structured programming, either the
language should not be used, or it should be used very carefully, with
comments explaining the structure of the program.

Of course comments are important to any program, but they're not always
all that is important to a good program.  Let me give a possible
scenario.

Let's say I am a person whose programming experience consisted of a few
high school and college courses in Basic & Pascal.  Ok, now let's say I
have a problem that I want solved by a programmer.  The programmer comes
back to me with a program which doesn't do quite what I wanted.  Now
from the comments it looks like the program should work right.  The
problem is in the code.  Now the programmer goes off and tries to fix
it, thinking he knows exactly what I want.  But when he comes back with
the revised program, it still doesn't do what I wanted.  Now the
comments were not enough to understand why the program doesn't do what I
wanted.  Therefore I must look at the code ("Foul", I hear you say.
"What are you doing looking at his code; you're not an experienced C
programmer!"  Well who else can look at it, if the programmer can't fix
it himself?  At least I know what I want done).  The program turns out
to be full of these strange little idioms, which I've never seen
before.  Luckily, some of the code is readable, so I hope that what is
wrong is in that.  Let's say the problem with the program is that a
variable is being incremented when it shouldn't be.  However I don't
know that the variable is being incremented because I see this cryptic
"stuff++" which I pretty much ignore because the code is full of things
like that which gets lost in the shuffle.  I'm lost, the programmer
doesn't know what's wrong, and we're stuck.

However if the program said "stuff = stuff + 1" or even
"increment(stuff)", I could say "Aha! I think I know why it's not
working.  That variable 'stuff', why is it being incremented?  I don't
think it should be incremented".  The programmer removes that line, the
program is fixed, and we all live happily ever after.

I know this was a rather long story, but I had to get my point across
somehow.  Remember that the "I" in that story could be someone you will
be working for.

Dan

AI.Mayank@MCC.ARPA (Mayank Prakash) (07/11/85)

>From: DHowell.ES@XEROX.ARPA
>Subject: more about programming style
>
>Ok, let me clarify a few things here.  By "idioms", I mean a piece of
>code in a language which can be directly translated into another piece
>of code in that same language and looks similar in syntax to the same
>piece of code in many other languages.  This would include i++ for i = i
>+ 1 and if((a=b)==c) for a=b; if (a==c).  Idioms are language-specific.
>These are the idioms that I would avoid to make programs more readable.
>
.
.
.
>program is fixed, and we all live happily ever after.
>
>I know this was a rather long story, but I had to get my point across
>somehow.  Remember that the "I" in that story could be someone you will
>be working for.
>
>Dan

Aren't you really saying that all languages should follow the syntax of
Pascal?  Since Pascal uses i := i+1, that is how all languages ought to be.
The next thing you would say is that I don't know any programming, but if I
can't debug this large complex program (written in any language), then it is
a poorly written program and uses obscure idioms etc. etc.  The point is,
each language has the right to have its own syntax, conventions, and
whatever.  While you may like one syntax over another, you certainly cannot
blame programmers for using those elements of a language you don't like.
Remember, to a large extent the whole subject is very subjective, and is
really a matter of what you are used to more than anything else.  If you
had been brought up in the C tradition, you probably would have taken the
opposite side.

Note: I am not an expert C programmer, nor am I trying to defend idiosyncrasies
of C (such as i = i++), but I would certainly agree with the C experts on this
issue.

- mayank.

-------

jpm@BNL44.ARPA (John McNamee) (07/11/85)

C programming is something that should be attempted only by professionals.
There are other languages (such as BASIC) which the average luser might be
able to learn if they put their mind to it.

I think your example of why programmers shouldn't use C idioms was absurd.
Just apply it to another profession, and you will see how absurd it is.
For example....

    I went to the doctor because I had a pain in my chest. He gave me some
pills, but that didn't make the pain go away. After doing more tests, it
was decided that I need an operation. After the operation, I still had pains
in my chest. Since the doctor couldn't make the pains go away, I decided
to cut myself open and see if I couldn't find the problem on my own. If
the human body wasn't so complicated, and if medical textbooks were only
written so anybody could understand them, maybe I wouldn't have died trying.

Some things, like medicine and C programming, are best left to professionals.
--

			 John McNamee
		..!decvax!philabs!sbcs!bnl44!jpm
			jpm@BNL44.ARPA

		  "MS-DOS is a communist plot"

michael@python.UUCP (M. Cain) (07/11/85)

>                                                     Ok, now let's say I
> have a problem that I want solved by a programmer.  The programmer comes
> back to me with a program which doesn't do quite what I wanted.  Now
> from the comments it looks like the program should work right.  The
> problem is in the code.  Now the programmer goes off and tries to fix
> it, thinking he knows exactly what I want.  But when he comes back with
> the revised program, it still doesn't do what I wanted.

   On a slight tangent (the above was written in regard to the use of
language idioms and readability), why doesn't the program do quite what
was wanted?  Were the requirements wrong?  Incomplete?  Does the
programmer just do sloppy work?  In a way, I am reminded of the old
answer to the question "When will the programmers deliver code on
time and within budget?" -- when the requirements are complete and
don't change midway through the project!

Michael Cain
Bell Communications Research
..!bellcore!python!michael

craig@loki.ARPA (Craig Partridge) (07/11/85)

    I'm afraid I don't find your story convincing for a couple of reasons.

    First, and maybe I'm rare, but when I learned to program several years
ago, my university took the view that as good programmers, we were expected
to be able to learn a new computer language in a few hours or days.  Indeed, 
after the first couple of courses, professors would simply announce they
expected a programming assignment to be in X programming language.  If you
didn't already know it (and frequently people didn't) you had to learn it,
fast.  So I have little sympathy for your poor person who doesn't
know the language (take a book home and read for a couple of hours tonight).

    Second, I think all languages have special idioms that people who
program in them typically use.  In most cases, I think new programmers
have some obligation to use those idioms.  Why? Because (contrary to your
example) in most cases someone reading your code is more likely than
not to be well versed in the language, and will be more confused by code which
doesn't use the established idioms.  Witness the letters already submitted
in which people say strongly they find i++ more intuitive than i = i + 1.


Craig Partridge
craig@bbn-loki (ARPA)
craig%loki.arpa@csnet-relay (CSNET)
{decvax,ihnp4,wjh12}!bbncca!craig (USENET)

moss@BRL.ARPA (Gary S. Moss (AMXBR-VLD-V)) (07/11/85)

I object to people saying "stuff = stuff + 1" when what they really mean
more specifically is "stuff++" or "stuff += 1".  Way before I learned 'C'
I felt that this was an awkward way of incrementing a variable.  It's like
repeating the subject of a sentence, e.g. "Mark is a good boy because Mark
ate his peas".

I also object to people who don't take the time to learn the basics of
a language and pick at the syntax of someone's program because they don't
understand it.

Writing maintainable programs goes several levels of organization above
idiomatic expression, and communication of this to supervisors is far
more difficult than someone with your troubles probably realizes.

-moss

tomk@ur-laser.uucp (Tom Kessler) (07/11/85)

>Luckily, some of the code is readable, so I hope that what is
>wrong is in that.  Let's say the problem with the program is that a
>variable is being incremented when it shouldn't be.  However I don't
>know that the variable is being incremented because I see this cryptic
>"stuff++" which I pretty much ignore because the code is full of things
>like that which gets lost in the shuffle.  I'm lost, the programmer
>doesn't know what's wrong, and we're stuck.

If you're too lazy to leaf through a book on the programming language
and try to understand the fundamental idioms of a language (even if
you're not too bright it shouldn't take more than a few hours to become
familiar enough with a new language to be able to read it) you have
no business trying to read the code.
-- 
--------------------------
		   Tom Kessler {allegra |seismo }!rochester!ur-laser!tomk
Laboratory for Laser Energetics               Phone: (716)- 275 - 5101
250 East River Road
Rochester, New York 14623

shaddock@rti-sel.UUCP (Mike Shaddock) (07/11/85)

In article <11457@brl-tgr.ARPA> DHowell.ES@Xerox.ARPA writes:
>Ok, let me clarify a few things here.  By "idioms", I mean a piece of
>code in a language which can be directly translated into another piece
>scenario.
> ... (Some stuff here)
>wanted.  Therefore I must look at the code ("Foul", I hear you say.
>"What are you doing looking at his code; you're not an experienced C
>programmer!"  Well who else can look at it, if the programmer can't fix
>it himself?  At least I know what I want done).  The program turns out
> ... (Some more stuff).

I don't know about you, but I don't jump into programs written in a
language that I don't know without first either learning a little about
the language or having someone help me through the program.  Anyone who
took ten minutes to learn something about C would know what "stuff++"
did, and wouldn't ignore it.  As for the comments (not made by DHowell,
I'm just concatenating followups) about experienced C programmers, how
many people put non-experienced C programmers on a big project?  If
they don't know a language, they shouldn't be mucking with the code!
This applies to any language, not just C.  Most of the common "idioms"
(((fp = fopen()) == ERR), i++) are not that hard to understand.  This
doesn't mean that I advocate using every trick in the book, I'm just
saying that using the "idioms" of a language is not so bad.
-- 
Mike Shaddock			{decvax,seismo}!mcnc!rti-sel!shaddock

mjs@eagle.UUCP (M.J.Shannon) (07/12/85)

> In most cases, I think new programmers
> have some obligation to use those idioms.  Why? Because (contrary to your
> example) in most cases someone reading your code is more likely than
> not to be well versed in the language, and will be more confused by code which
> doesn't use the established idioms.  Witness the letters already submitted
> in which people say strongly they find i++ more intuitive than i = i + 1.
> 
> Craig Partridge

Sorry, but I don't buy this.  As a long term user of various UNIX Systems, I
have used many of the `language' tools once, or perhaps as many as a dozen
times.  Mostly I use the C compiler, occasionally assisted by yacc and lex, but
every once in a while a tool such as awk is more appropriate.  Now, I am aware
that some folks whose code I have access to, and have done extensive work in
awk, have rather large sets of idioms (idia?), but how much time should I spend
examining their code to ascertain the semantics of their idioms?  What I have
always done in the past (and, I dare say, shall continue to do) is write the
application in such a way that someone fluent in the language in question can
easily comprehend both the algorithm and the implementation of it.  I will be
the first to agree that "++i" is more readable (to experienced C programmers)
than "i = i + 1", but if I were mostly a fortrash programmer writing some C
interface programs to f77 (for example), I'd probably use the latter
expression, because it more closely resembles the syntax and semantics of the
language with which I am most familiar (hypothetically).

One way of stating my opinion on this (admittedly religious) matter is:

	Sometimes it pays to be eloquent.  It always pays to be clear,
	precise, and accurate.  When I can achieve eloquence at little
	cost while maintaining clarity, precision, and accuracy, I will
	do so -- but only then.

Can anyone claim that this is an unreasonable stance?  If so, please do so
via private mail; there is no need to further clutter this discussion with
religious debates.
-- 
	Marty Shannon
UUCP:	ihnp4!eagle!mjs
Phone:	+1 201 522 6063

greenber@timeinc.UUCP (Ross M. Greenberg) (07/12/85)

Dan tells a nice story about not understanding "i++" and the horrendous
implications thereof.

Funny. I looked at a COBOL program the other day and couldn't figure
out what it did.  I guess one could say that either I didn't understand
the language or that the damned language is full of idioms.

Since I don't know COBOL, I guess I should say it was full of idioms,
according to Dan.

I don't think so..


-- 
------------------------------------------------------------------
Ross M. Greenberg  @ Time Inc, New York 
              --------->{ihnp4 | vax135}!timeinc!greenber<---------

I highly doubt that Time Inc. would make me their spokesperson.

tan@ihlpg.UUCP (Bill Tanenbaum - AT&T Bell Labs - Naperville IL) (07/12/85)

> [A long story of how i++ is incomprehensible]
> I know this was a rather long story, but I had to get my point across
> somehow.  Remember that the "I" in that story could be someone you will
> be working for.
> 
> Dan

I doubt it.  If I couldn't find the bug myself, you should fire me.
If you would insist on my using i = i + 1 instead of i++, I should quit.
This is not a joke.
-- 
Bill Tanenbaum - AT&T Bell Labs - Naperville IL  ihnp4!ihlpg!tan

vijay@topaz.ARPA (P. Vijay) (07/12/85)

> [DHowell.ES@Xerox writes about not using idioms...]
>.....
> variable is being incremented when it shouldn't be.  However I don't
> know that the variable is being incremented because I see this cryptic
> "stuff++" which I pretty much ignore because the code is full of things
> like that which gets lost in the shuffle.  I'm lost, the programmer
> doesn't know what's wrong, and we're stuck.
> 
> However if the program said "stuff = stuff + 1" or even
> "increment(stuff)", I could say "Aha! I think I know why it's not

	It all comes down to personal taste. You quote "stuff = stuff
+ 1" as a very readable statement. But is it not another of those
'idioms' that you seem to have a distaste for?

	While I do think usage of obscure idioms does tend to make the
program text difficult to understand, certain common idioms are in
fact quite useful in getting the point across both to the compiler and
to the human reader.  If you are into the business of fixing code in
language X, I am afraid sooner or later you're going to have to learn the
idiomatic usages (at least the common ones) of X, not only so that you
can understand others' code, but also so that others may understand
your code.

					--Vijay--

mauney@ncsu.UUCP (Jon Mauney) (07/12/85)

> Some things, like medicine and C programming, are best left to professionals.

Here is another example from medicine: doctors that scribble prescriptions
using arcane abbreviations.  Of course, since the prescription will be filled
by a professional pharmacist or nurse, there is no problem;  they all know
what the Latin phrases mean and are experienced in interpreting ink blots.

Nevertheless, Joe Graedon -- author of "The People's Pharmacy" and host
of a local radio call-in program -- has collected many examples of 
misunderstood prescriptions.  Despite explicit advice in the most widely
respected drug textbook, many doctors insist on using "time-saving" 
"standard" medical idioms instead of writing out the prescription in
plain English; they ignore the difficulties they cause to fellow health
professionals (not to mention the patients).  Similarly, many programmers 
insist on using well-known language idioms instead of arranging their code
to maximize readability.

A true professional increases efficiency by making things as easy as possible.
-- 

Jon Mauney,    mcnc!ncsu!mauney
North Carolina State University

"The door's not shut on my genius, but...  I just don't have the time."

faustus@ucbcad.UUCP (Wayne A. Christopher) (07/13/85)

> Let's say I am a person whose programming experience consisted of a few
> high school and college courses in Basic & Pascal.  Ok, now let's say I
> have a problem that I want solved by a programmer.  The programmer comes
> back to me with a program which doesn't do quite what I wanted.  Now
> from the comments it looks like the program should work right.  The
> problem is in the code.  Now the programmer goes off and tries to fix
> it, thinking he knows exactly what I want.  But when he comes back with
> the revised program, it still doesn't do what I wanted.  Now the
> comments were not enough to understand why the program doesn't do what I
> wanted.  Therefore I must look at the code ("Foul", I hear you say.
> "What are you doing looking at his code; you're not an experienced C
> programmer!"  Well who else can look at it, if the programmer can't fix
> it himself?  At least I know what I want done).  The program turns out
> to be full of these strange little idioms, which I've never seen
> before.  Luckily, some of the code is readable, so I hope that what is
> wrong is in that.  Let's say the problem with the program is that a
> variable is being incremented when it shouldn't be.  However I don't
> know that the variable is being incremented because I see this cryptic
> "stuff++" which I pretty much ignore because the code is full of things
> like that which gets lost in the shuffle.  I'm lost, the programmer
> doesn't know what's wrong, and we're stuck.
> 
> However if the program said "stuff = stuff + 1" or even
> "increment(stuff)", I could say "Aha! I think I know why it's not
> working. That variable 'stuff', why is it being incremented.  I don't
> think it should be incremented".  The programmer removes that line, the
> program is fixed, and we all live happily ever after.
> 
> I know this was a rather long story, but I had to get my point across
> somehow.  Remember that the "I" in that story could be someone you will
> be working for.

You should either: (1) Hire programmers who do the right thing, or (2)
Learn C. If there are nice features in a language like "i++", it is
silly to expect programmers not to use them because non-programmers
won't understand them. If you don't understand "i++", you sure won't
understand "p->foo". Besides, many people have pointed out that you
sometimes have to be a bit obscure to get efficient code -- if you write

	while (p->duck)
		p++;

instead of

	while (p++->duck)
		;

you are likely not to get the loop compiled with auto-increment instructions...

Anyway, you should admit that you are fighting a losing battle -- C
programmers write programs that should be understood by and only by other
C programmers, and if you don't know C you have no business trying to
understand it.

	Wayne

mff@wuphys.UUCP (Swamp Thing) (07/13/85)

> C programming is something that should be attempted only by professionals.
> There are other languages (such as BASIC) which the average luser might be
> able to learn if they put their mind to it.

Of all the arrogant, swellheaded piles of bullshit, this is the biggest I've
run across in quite a while.  What this person is implying is "You can't
criticize my programming style because I'm a professional programmer and you're
not, so there!"  Real class.  BTW, is "luser" a typo, or another barb cast upon
the masses from on high?




						Mark F. Flynn
						Department of Physics
						Washington University
						St. Louis, MO  63130
						ihnp4!wuphys!mff

------------------------------------------------------------------------------

"There is no dark side of the moon, really.
 Matter of fact, it's all dark."

				P. Floyd

root@bu-cs.UUCP (Barry Shein) (07/14/85)

ARGGH, forget it, C is a tiny little easily learned programming
language. If you can't be bothered to learn it then you shouldn't
be bothered to either program in it or manage people who do...period.

Now, consider a *real* case: PL/I, this language has thousands of
features, many of them redundant, many of them obscure.  Several
linear shelf feet of *basic* documentation, you could spend years
holding PL/I trivia contests (what does a leading '$' sign in a
decimal constant type do?)

My suggestion for *that* language was to enforce, by a compiler
option, that only a locally approved subset of the language be
allowed (say, a table that could be locally modified.)

My point is: Yes, this has come up in other contexts before. C
was designed by and large as a reaction to it, look at the C
reference manual, about 25 pages long (the back of K&R.)

And what about subroutines you 'don't understand', like scanf(),
which no one understands (:-).

Move over and let a pro do the work, your amateur status is showing.

	-Barry Shein, Boston University

root@bu-cs.UUCP (Barry Shein) (07/14/85)

Ok, ok, everyone calm down one minute.

There seems to be this neverending battle between people who call
themselves 'pros' and people who sneer at them as effete snobs.

First, I consider myself a 'pro', so maybe if you're really hot-headed
about this save your system some adrenalin and hit the 'n' key now.

The difference in attitude is largely this: Do you program for yourself
(or maybe a small group of compatriots) or strangers and the public
at large (who, of course, might include some friends, but not necessarily.)
There's one heck of a difference...try it sometimes.

If you have a bug in your code, do you make excuses/apologies and fix it,
or do you get sued....or at least lose lots of $$ and/or reputation?

Trust me, it changes your attitudes about these things a lot!  It
changes, quickly, the 'everyone oughta program' types into
'it takes a pro to do it right' types.

Look, think of it like medicine: pros are enormously responsible to get
it right, or else all hell breaks loose.  Your Aunt Edna tells you to take
aspirins for your brain tumor, whaddya gonna do?  Sue her?  Programming is
the same way.

To follow what I believe is a reasonable analogy, the doctor, sometimes
overcautiously, orders tests and hospital stays, expensive specialists.
Aunt Edna or your local health food guru listens for five minutes (10 if
you're upset) and prescribes something. Maybe correctly, maybe not, but
in either case, not with much accountability.

Look, I've done it, hacking a little (even hard) code in your science
lab or office gives you a sense of pride, but it doesn't usually make you
a pro.  It's too easy for you to back out and say 'hey, I'm not really a
programmer, you just couldn't afford one so I filled in', like Aunt Edna
(who herself may be very good, no disparagement here, and she raised
three healthy boys! :-)

Pros are under too much pressure usually to rely on amateurs except in a
training relationship. It's not snobbishness (tho I agree, it sounds
like it often), it's accountability. Try to put yourself in a pair of
very tight fitting shoes and understand (block that metaphor.)

enough sermonizing.

	-Barry Shein, Boston University

steiny@scc.UUCP (Don Steiny) (07/15/85)

>
> and if you don't know C you have no business trying to
> understand it.
> 	Wayne

	Really, do you want me to be out of a job?  

	I think that standards of correctness are determined
by the massive amount of existing C.   The ++ convention is
easy for me to understand.   An example is variable names.
People who come from other backgrounds than C programming
often use long variable names, for instance:  

	char	*name_array[];
	int	name_array_index;

A C programmer might declare

	char	*names[];
	int	i;

Supposedly this is harder to understand.  In his book "Learning to
Program in C", Thomas Plum mentions that they looked at a large body
of C code and found that 90% of C programmers use i and j as index
variables.  If people write programs that are unconventional C, they
get hard to understand (the V7 Bourne Shell, for instance).  One can
use C macros to make one's code look like other languages; the worst
I have seen is programs by people whose first language was Fortran.
I have written simple programs that will compile with either the C
compiler or the F77 compiler.  It is possible to use lots of goto's
in C, but why?
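
For example, macros along these lines (a sketch -- the V7 Bourne
shell's actual macro set differs in detail) turn C into ersatz ALGOL:

	#define	IF	if (
	#define	THEN	) {
	#define	ELSE	} else {
	#define	FI	; }
	#define	BEGIN	{
	#define	END	}

	sign(x)
	int x;
	BEGIN
		IF x < 0 THEN
			return(-1);
		ELSE
			return(1);
		FI
	END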

Instead of trying to make C like COBOL, Pascal, or Fortran,
why not learn all of the languages and use whichever one is
appropriate to the task?   

franka@mmintl.UUCP (Frank Adams) (07/15/85)

In article <68@ucbcad.UUCP> faustus@ucbcad.UUCP (Wayne A. Christopher) writes:

> if you write
>
>	while (p->duck)
>		p++;
>
>instead of
>
>	while (p++->duck)
>		;
>

This will probably not be the only posting pointing out that your examples
are different.  The first will leave p pointing to the first structure for
which duck is zero; the second leaves it pointing one past it.

The ease with which this kind of mistake can be made in C by an experienced
C programmer, and the difficulty of finding such mistakes, is to me
the main weakness of C.

To me, serious programming does not mean playing with a piece of code to
get every last instruction cycle out of it (nor writing as many lines
of code as possible, the opposite fallacy), but getting as much function
implemented as quickly as possible.  (Note that code that runs too slowly
is not functional.  Usually this happens when excess I/O is being done.
There are relatively few places where saving an instruction or two is
worthwhile -- and those should be extracted in small subroutines (e.g.,
strcpy) which can be written in assembler if necessary.)

jpm@BNL44.ARPA (John McNamee) (07/16/85)

>> = John McNamee <jpm@bnl44.arpa> (me)
>> C programming is something that should be attempted only by professionals.
>> There are other languages (such as BASIC) which the average luser might be
>> able to learn if they put their mind to it.

> = Mark Flynn <mff@wuphys.uucp>
> Of all the arrogant, swellheaded piles of bullsh*t, this is the biggest I've
> run across in quite a while.  What this person is implying is "You can't
> criticize my programming style because I'm a professional programmer and
> you're not, so there!"  Real class.  BTW, is "luser" a typo, or another barb
> cast upon the masses from on high?

I am saying that I am a professional programmer. I write programs that can
be understood and modified by other professional programmers. It is not my
job to write programs so any Joe Blow off the street can read them. C is
not easy to learn or use, at least when compared to things like BASIC (which
is often used by non-professional-programmers). As a full-time computer
professional, I am expected to know how to make the most out of C, and I am
hired because of that skill. I write programs that are compact, execute
quickly, have a good user interface, and I get them done on time. People
pay me good money because I can do all that. If I also had to write code
that ANYBODY with no C training could read, I would not be able to do any of
the above. I will continue to maintain that languages (such as BASIC) exist
for non-professional-programmers to implement solutions to their particular
problems, and that such people have no business using C unless they want to
invest the time to learn it. If they want a professional solution to their
problems, they should hire somebody like me or spend the time themselves
to learn what they are doing. You can't have it both ways; ease of use is
always in a tradeoff with power. C is total power for the professional,
and I wouldn't want it any other way.

BTW, "luser" was not a typo. It is hacker slang for somebody who learns
only enough about a computer system to get his/her particular job done,
and does not explore the limits of what the machine can do (that sort of
person is called a hacker).

weltyrp@rpi.UUCP (Richard Welty) (07/17/85)

> did, and wouldn't ignore it.  As for the comments (not made by DHowell,
> I'm just concatenating followups) about experienced C programmers, how
> many people put non-experienced C programmers on a big project?  If
> they don't know a language, they shouldn't be mucking with the code!

The world would be an extremely nice place if you could always get
experienced C programmers when you need them.  But consider the following:

My company has three large projects right now - the two older ones are
written in an old Fortran preprocessor called Pixel, and the newer one is
written in VMS C.  Suppose that we have an excess of manpower on a Pixel
project, and a critical need on the C project.  Should we lay off a Pixel
programmer who is not necessary and hire an experienced C programmer?
That is lousy labor relations.

Suppose we start converting one of the Pixel projects to C (something that
is being considered).  Should we fire all the Pixel programmers who
understand the application, the algorithms, and the concepts, even though
they have been with the company for years, and hire experienced C
programmers to replace them, even though the C programmers know
nothing about the application, etc?

I am not arguing against the use of "i++", and am not too bothered by
assignment to a file pointer inside an if.  On the other hand, I used to
write assignments in if statements a bit.  I stopped about 6 months ago, and
a month ago I went through and ripped all the old ones out of my code.  Why?
Because I felt that it took too much effort to read my own code.  It isn't
that I can't understand the construct, hell, I wrote it.  The problem is
that I have better uses for my time than decoding stuff that could be
made easier to read.
-- 
				Rich Welty

	"Here young Walter is remembering his early days on the
	 planet Krypton, with his father Jor-El Cronkite"
					- David Brinkley, on SCTV

	CSNet:   weltyrp@rpi
	ArpaNet: weltyrp.rpi@csnet-relay
	UUCP:  seismo!rpics!weltyrp

alexis@reed.UUCP (Alexis Dimitriadis) (07/17/85)

> [...] Besides, many people have pointed out that you
> sometimes have to be a bit obscure to get efficient code -- if you write
> 
> 	while (p->duck)
> 		p++;
> 
> instead of
> 
> 	while (p++->duck)
> 		;
> 
> you are likely not to get the loop compiled with auto-increment instructions.

  (giggle) 
  As any "pro" must have noticed, the two versions given will
not do the same thing.  Whose side are you on, anyway?  :-)

  Seriously, C is always being derided for having terse operators whose
functions are not immediately obvious.  "{" and "}" are comment delimiters
in Pascal.  Should I "#define begin {" for better readability?  
Using the _basic_ features of C as intended by the implementers should
be perfectly acceptable to everyone.

  Expressions like "while (var++) ;" introduce problems that take
(bitter) experience before they can be handily detected, even if the
reader is familiar with the semantics of the expression.  That is the
real reason it takes a seasoned C programmer to debug seasoned C code.

  On the other hand, C programmers have enough to worry about, what
with portability considerations and readability to other _experienced_ 
C programmers, not to mention efficiency.  If you can't please
everyone, you might as well cater to those who are most likely to
benefit from being able to read your program.  (i.e., those who could
do something useful to it after they understand it).

  And finally, about "if ((fp=fopen("file", "r")) == NULL)", et al.
I find the idiom is an excellent aid to ensuring that the value returned
by the call WILL be tested.  Not applicable to "professional" work, you say?
Haven't you ever been tempted to skip tests that "never" return true?  It
is easier to do if the test is a separate statement that could just as well
be skipped (or forgotten!).  As it is, I think fopen() looks funny when 
it is called WITHOUT an enclosing if() test.  Idioms like that are a 
good way of always doing things the same way.  It may look confusing the
first time, but it's reassuring after the twentieth...

I sympathize with those trying to comprehend C idioms, but at least they
are language features, if used in a fairly consistent manner, and there 
are real benefits to be gained from their consistent use.  

Alexis Dimitriadis
	alexis @ reed
----------------------
  Disclaimer:  I routinely use, (but with no misconceptions about its 
clarity), code like
		...
		switch (*++*argv) {
		...
to read the next character in an option argument.  (yes, I did finally 
get a public-domain getopt, I will be using it).  I am generally proud
of the readability of my code.
-- 
_______________________________________________
  As soon as I get a full time job, the opinions expressed above
will attach themselves to my employer, who will never be rid of
them again.

             alexis @ reed

	         ...teneron! \
...seismo!ihnp4! - tektronix! - reed.UUCP
     ...decvax! /

jack@boring.UUCP (07/19/85)

In article <505@scc.UUCP> steiny@scc.UUCP (Don Steiny) writes:
>
>	I think that standards of correctness are determined
>by the massive amount of existing C.   The ++ convention is
>easy for me to understand.   An example is variable names.
>People who come from other backgrounds than C programming
>often use long variable names, for instance:  
>
>	char	*name_array[];
>	int	name_array_index
>
>A C programmer might declare
>
>	char	*names[];
>	int	i;
This is clearly untrue. With the exception of COBOL programmers (who
aren't real programmers anyway:-) there is probably no difference
in the way people choose variable names. If there is, probably the
C programmers choose clearer names. Upper/lower case was in C right
from the start, remember?

Also, I think that after some years of experience, most people come
to a naming scheme where they use names like i, j, p, and q
for index variables and the like, and more
elaborate names for other things. It depends on taste whether you use
"names", "name_array", "UserNames", or whatever.
-- 
	Jack Jansen, jack@mcvax.UUCP
	The shell is my oyster.

faustus@ucbcad.UUCP (Wayne A. Christopher) (07/20/85)

> > if you write
> >
> >	while (p->duck)
> >		p++;
> >
> >instead of
> >
> >	while (p++->duck)
> >		;
> >
> 
> This will probably not be the only posting pointing out that your examples
> are different.  The first will leave p pointing to the first structure for
> which duck is zero; the second leaves it pointing one past it.

I didn't say they were the same, I was just trying to make the point that
the compiler might use a better strategy for compiling one than the
other. So I was careless -- I would never do such a thing in an actual
program, and if I did it wouldn't take long to discover and fix the
mistake...

	Wayne

cg@myriasb.UUCP (Chris Gray) (07/22/85)

Perhaps I'm being a bit sarcastic, but could it be that the reason experienced
C programmers often use

	if ((fp = fopen("file", "r")) == NULL) ...

is that the compiler will complain if you write

	if ((fp = fopen("file", "r")) = NULL) ...

whereas it won't complain about

	fp = fopen("file", "r");
	if (fp = NULL) ...

???

bob@pedsgd.UUCP (Robert A. Weiler) (07/25/85)

I'll try to keep this short; this has gone on long enough.  People use ++ and
other funny assignments because they are part of C and occur all over
the place in existing C code and C programming books.  It should take
the average individual 5-15 minutes to figure out what these do, and then
he/she/it should remember it for life.  This is a non-issue.  A real problem
in C is type declaration syntax, as should be obvious from the number of
requests Chris Torek apparently got for cdecl source (thanks, Chris).
I find a partial solution to the def problem is to use the following
style of declarations religiously.

#define MYSTRUCT_T	struct mystruct
struct mystruct {
	int		whatever;
	MYSTRUCT_T	*next;
};
typedef MYSTRUCT_T	mystruct_t, *mystruct_p;
#define	MYSTRUCTSZ	(sizeof(mystruct_t))
#define	NULLMYSTRUCT	((mystruct_p)0)
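
For instance, a node allocator written against these definitions might
read (a sketch; the function name and the malloc declaration are mine):

	extern char *malloc();

	mystruct_p
	new_mystruct()
	{
		register mystruct_p p = (mystruct_p)malloc(MYSTRUCTSZ);

		if (p != NULLMYSTRUCT)
			p->next = NULLMYSTRUCT;
		return(p);
	}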

I would appreciate comments, suggestions, etc. on additional ways to
make type declarations more readable. But enough already about ++.

Bob Weiler.

bright@dataio.UUCP (Walter Bright) (07/26/85)

In article <218@pedsgd.UUCP> bob@pedsgd.UUCP (Robert A. Weiler) writes:
>I would appreciate comments, suggestions, etc. on additional ways to
>make type declarations more readable. But enough already about ++.

Howze about:

char (*f[6])();	/* <array of><pointer to><function returning><char> */

I don't think that burying the declaration in macros and typedefs
necessarily makes it more readable.  Using comments to describe the
more obscure declarations makes them more understandable.

ark@alice.UUCP (Andrew Koenig) (07/29/85)

>I would appreciate comments, suggestions, etc. on additional ways to
>make type declarations more readable.

The easiest way to make them more readable is to learn how to read them.

gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (07/29/85)

> char (*f[6])();	/* <array of><pointer to><function returning><char> */

The comment provides no useful information.  Any experienced C
programmer should be able to read the declaration as fast as he
can read the comment, maybe faster.  And the declaration provides
a PICTURE of the context in which `f' has meaning, unlike the
comment.

lcc.niket@LOCUS.UCLA.EDU (Niket K. Patwardhan) (07/30/85)

Typedef is my bugaboo too.  When whoever invented it invented it, he broke
what was until then a cardinal rule of C: you could look at an identifier
and what was immediately around it and tell exactly what it was --
variable
array[]
function()
"null terminated string"
'character'
along with the non-enforced rule that defined constants are in capitals:
CONSTANT
Maybe what is needed is some such rule (as for defined constants) that lets
you know immediately that you are looking at a type name rather than a
variable.  Also, at Intel we used to use a capitalized first letter to
indicate a macro function.
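
A hypothetical two-liner shows what is lost:

	typedef int tp;

	f()
	{
		tp (x);		/* with the typedef in scope this DECLARES
				 * x as an int; had tp been a function, the
				 * identical line would be a CALL.  Nothing
				 * at the site tells you which. */
	}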

chris@umcp-cs.UUCP (Chris Torek) (07/30/85)

(Entering personal religious belief flammable mode)

Typedefs are very nice when applied sparingly.  Before you use
typedefs, figure out why some intrinsic type won't do---perhaps
the type will have to change on some machines, but once changed
will be fine (e.g., if you need 24 bit integers, perhaps int is
good enough, but maybe you need longs; perhaps you should use a
typedef).  Typedefs can also help if a complex type is used very
often.  Other than that they just make it harder to figure out what's
really happening.
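
For instance (my sketch of the 24-bit case he describes):

	typedef long int24;	/* need at least 24 bits; int would do on
				 * some machines, long is safe everywhere --
				 * and there is one place to change it */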

I think I overdid it when I wrote "win.h" (part of Maryland Windows):
it has a Win and a Buf (both aliases for structures, and maybe used
often enough), but also a Pos, which should never have got out---it's
not even used outside the structure definitions themselves....
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@maryland

david@ecrhub.UUCP (David M. Haynes) (07/31/85)

> (Entering personal religious belief flammable mode)
> 
> Typedefs are very nice when applied sparingly.  Before you use
> typedefs, figure out why some intrinsic type won't do---perhaps
> the type will have to change on some machines, but once changed
> will be fine (e.g., if you need 24 bit integers, perhaps int is
> good enough, but maybe you need longs; perhaps you should use a
> typedef).  Typedefs can also help if a complex type is used very
> often.  Other than that they just make it harder to figure out what's
> really happening.
> 
> -- 
> In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
> UUCP:	seismo!umcp-cs!chris
> CSNet:	chris@umcp-cs		ARPA:	chris@maryland
> 
Around these parts, it's a standard thing to use typedefs with structs,
i.e.:

	struct foo { ... };		/* body omitted for brevity */
	typedef struct foo FOO;

Now we can use a declaration like:

	FOO record;

and know that since it's capitalized it's declared locally.

It's also fairly common to use a stream of typedefs (or numerous
#defines) to declare portable type names, i.e.:

	typedef unsigned int uint16;	or	#define uint16 unsigned int
	typedef char uint8;		or	#define uint8 char

This helps greatly when moving stuff from machine to machine.
-- 
--------------------------------------------------------------------------
						David M. Haynes
						Exegetics Inc.
						..!utzoo!ecrhub!david

"I am my own employer, so I guess my opinions are my own and that of
my company."

bjorn@dataio.UUCP (Bjorn Benson) (07/31/85)

In article <261@brl-tgr.ARPA> gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) writes:
>> char (*f[6])();	/* <array of><pointer to><function returning><char> */
>
>The comment provides no useful information.  Any experienced C
>programmer should be able to read the declaration as fast as he
>can read the comment, maybe faster.  

Right, any PROGRAMMER can read it faster.  However, some of us who spend
more time designing, testing and documenting than actually writing
code find terse C a bit cryptic.  As an ENGINEER, I like explicit,
correct comments, as they really help when I re-read code.

Don't get me wrong, I like C's terseness when I am writing and actively
editing code, but later in the development cycle it can be annoying...

					Bjorn Benson

weltyrp@rpics.UUCP (Richard Welty) (08/01/85)

> Perhaps I'm being a bit sarcastic, but could it be that the reason experienced
> C programmers often use
> 
> 	if ((fp = fopen("file", "r")) == NULL) ...
> 
> is that the compiler will complain if you write
> 
> 	if ((fp = fopen("file", "r")) = NULL) ...
> 
> whereas it won't complain about
> 
> 	fp = fopen("file", "r");
> 	if (fp = NULL) ...
> 
> ???

Actually, the = vs. == mistake has caused me to take the following
steps ...

1. #define EQ ==

2. NEVER use == directly

3. NEVER make an assignment in an if statement

The result is that any occurrence of = in an if is
an error, or a reversion to an older style that should
be corrected.  It does, perhaps, make the code look a little
like fortran, but I've spent one hell of a lot less time on the
= vs. == bug since I started doing this ...
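
Spelled out, the convention looks something like this (a sketch; the
function and its error handling are hypothetical):

	#include <stdio.h>
	#define	EQ	==

	FILE *
	open_or_die(name)
	char *name;
	{
		FILE *fp = fopen(name, "r");

		if (fp EQ NULL) {	/* comparisons always go through EQ, */
			perror(name);	/* so any bare = inside an if stands */
			exit(1);	/* out as a bug or as old style      */
		}
		return(fp);
	}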

-- 
				Rich Welty

	(I am both a part-time grad student at RPI and a full-time
	 employee of a local CAE firm, and opinions expressed herein
	 have nothing to do with anything at all)

	CSNet:   weltyrp@rpi
	ArpaNet: weltyrp.rpi@csnet-relay
	UUCP:  seismo!rpics!weltyrp

alex@ucla-cs.UUCP (08/05/85)

In article <4067@alice.UUCP> ark@alice.UUCP (Andrew Koenig) writes:
>>I would appreciate comments, suggestions, etc. on additional ways to
>>make type declarations more readable.
>
>The easiest way to make them more readable is to learn how to read them.

WRONG!  The easiest way to make programs more readable is to keep
readability in mind when you write them.  It has nothing to do with
learning how to read very obscure type declarations or having memorized
the C operator precedence table.  So you can understand:

    int (*foo[10])();

That doesn't help someone who is reading your program who doesn't.
But writing the declaration as done below does.

    typedef int (*PFI)();          /* pointer to function returning int */
    PFI foo[10];

While knowing how to read complicated declarations may help you meet
girls at cocktail parties, it doesn't help your program's reader!

Alex

savage@ssc-vax.UUCP (Lowell Savage) (08/05/85)

From Bob Weiler there was this hint:
> I find a partial solution to the def problem is to use the following
> style of declarations religously.
> 
> #define MYSTRUCT_T	struct mystruct
> struct mystruct {
> 	int		whatever;
> 	MYSTRUCT_T	*next;
> };
> typedef MYSTRUCT_T	mystruct_t, *mystruct_p;
> #define	MYSTRUCTSZ	(sizeof(mystruct_t))
> #define	NULLMYSTRUCT	((mystruct_p)0)
> 
> I would appreciate comments, suggestions, etc. on additional ways to
> make type declarations more readable. But enough already about ++.

Hear! Hear!  Another suggestion that I will add is to use typedefs.  In this
case:

	typedef struct mystruct {
		int		whatever;
		struct mystruct	*next;
	} mystruct;
	typedef mystruct	*mystruct_p;
	#define	MYSTRUCTSZ	(sizeof(mystruct))

I have always HAD to do this when I was declaring arrays of pointers to
functions, as in:

	typedef struct mystruct {
		int		x;
		struct mystruct	*nextrv;
	} mystruct;
	typedef mystruct	*ret_val;	/* pointer to struct mystruct */
	typedef ret_val		func_type();	/* function returning one */
	typedef func_type	*func_pt;	/* pointer to such a function */
	func_pt	func_array[10];	/* An array of pointers to functions returning
				   a pointer to a structure mystruct. */

Perhaps some of the steps could be run together in the same declaration, but
it certainly helps when a nested declaration turns into a nested mess of *'s,
()'s, []'s, etc.

				There's more than one way to be savage,

				Lowell Savage

P.S.  I have not checked my declarations very carefully, so they could be
wrong, I just tried to bang this out to get the idea across.

philipl@azure.UUCP (Philip Lantz) (08/07/85)

> Perhaps I'm being a bit sarcastic, but could it be that the reason experienced
> C programmers often use
> 
> 	if ((fp = fopen("file", "r")) == NULL) ...
> 
> is that the compiler will complain if you write
> 
> 	if ((fp = fopen("file", "r")) = NULL) ...
> 
> whereas it won't complain about
> 
> 	fp = fopen("file", "r");
> 	if (fp = NULL) ...
> 
> ???

Sure it will; at least PCC-based compilers will say "statement not reached".

chris@umcp-cs.UUCP (Chris Torek) (08/09/85)

(Perhaps I should just let this one slide by, but I'm feeling
particularly ornery this morning :-) ....)

>So you can understand:
>    int (*foo[10])();
>That doesn't help someone who is reading your program who doesn't.
>But writing the declaration as done below does.
>   typedef int (*PFI)();          /* pointer to function returning int */
>   PFI foo[10];

Much better than either of those is, instead, doing the following:

	/*
	 * Each input character in the range '0'..'9' invokes the
	 * corresponding translation function from the table below.
	 * The translation function returns the new state.
	 */
	int (*trans_function[10])() = {
		...

(or	PFI trans_function[10] = { ... ).

Describe the *purpose* of the data structures!
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@maryland

jerry@uwmcsd1.UUCP (Jerry Lieberthal) (08/09/85)

> In article <4067@alice.UUCP> ark@alice.UUCP (Andrew Koenig) writes:
> >>I would appreciate comments, suggestions, etc. on additional ways to
> >>make type declarations more readable.
> 
> WRONG!  The easiest way to make programs more readable is to keep
> readability in mind when you write them.  It has nothing to do with
> learning how to read very obscure type declarations or having memorized
> 
> While knowing how to read complicated declarations may help you meet
> girls at cocktail parties, it doesn't help your program's reader!
> 
> Alex

So that's what I have to do!  And all these years I thought it was my
ability to wow the girls with APL! :-)  I certainly will change from now
on!!
	- jerry

olson@lasspvax.UUCP (Todd Olson) (08/11/85)

What chance is there of having assignment denoted by := in the standard?
(None, right?  Too much code is written with = already.  And if I really
insist I could use the preprocessor.  None the less ...)
The reason I would like this is that then ALL assignments (+=, |=, etc.) would
be two characters long (consistency!!).  Among other things this would minimize
the = vs == type errors.

Todd Olson        
-----------

mjs@eagle.UUCP (M.J.Shannon) (08/12/85)

> What chance is there of having assignment denoted by := in the standard?
> (None, right?  Too much code is written with = already.  And if I really
> insist I could use the preprocessor.  None the less ...)
> The reason I would like this is that then ALL assignments (+=, |=, etc.) would
> be two characters long (consistency!!).  Among other things this would minimize
> the = vs == type errors.
> 
> Todd Olson        

Um, what about <<= & >>=?
-- 
	Marty Shannon
UUCP:	ihnp4!eagle!mjs
Phone:	+1 201 522 6063

Warped people are throwbacks from the days of the United Federation of Planets.

tim@callan.UUCP (Tim Smith) (08/13/85)

> > What chance is there of having assignment denoted by := in the standard?
> > (None, right?  Too much code is written with = already.  And if I really
> > insist I could use the preprocessor.  None the less ...)
> > The reason I would like this is that then ALL assignments (+=, |=, etc.) would
> > be two characters long (consistency!!).  Among other things this would minimize
> > the = vs == type errors.
> > 
> > Todd Olson        
> 
> Um, what about <<= & >>=?

Simple.  Replace <= with .le. and >= with .ge., < with .lt., > with .gt..
Then Shift can become < or >, and shift assignment can be <= or >=.
-- 
					Tim Smith
				ihnp4!{cithep,wlbr!callan}!tim

norman@lasspvax.UUCP (Norman Ramsey) (08/13/85)

> What chance is there of having assignment denoted by := in the standard?
> (None, right?  Too much code is written with = already.  And if I really
> insist I could use the preprocessor.  None the less ...)
> The reason I would like this is that then ALL assignments (+=, |=, etc.) would
> be two characters long (consistency!!).  Among other things this would minimize
> the = vs == type errors.
> 
> Todd Olson        

I think there is about as much chance of this as of the UNIX community
switching to MODULA-2. It just makes too much sense.
-- 
Norman Ramsey

ARPA: norman@lasspvax  -- or --  norman%lasspvax@cu-arpa.cs.cornell.edu
UUCP: {ihnp4,allegra,...}!cornell!lasspvax!norman
BITNET: (in desperation only) ZSYJARTJ at CORNELLA
US Mail: Dept Physics, Clark Hall, Cornell University, Ithaca, New York 14853
Telephone: (607)-256-3944 (work)    (607)-272-7750 (home)

rbutterworth@watmath.UUCP (Ray Butterworth) (08/19/85)

> > > What chance is there of having assignment denoted by := in the standard?
> > > The reason I would like this is that then ALL assignments (+=, |=, etc.) would
> > > be two characters long (consistency!!).  Among other things this would minimize
> > > the = vs == type errors.
> > > Todd Olson        
> > Um, what about <<= & >>=?
> Simple.  Replace <= with .le. and >= with .ge., < with .lt., > with .gt..
> Then Shift can become < or >, and shift assignment can be <= or >=.
>                     Tim Smith     ihnp4!{cithep,wlbr!callan}!tim

Even simpler.  Replace "<=" with "!>", and ">=" with "!<".  This gives
even more consistency since you can now have "!<", "!>", and "!=".

(On my first job, programming in COBOL, we were taught to always use
"is not greater than" instead of "is less than or equal to", since the
particular IBM compiler we were using actually generated two tests
because of the "or":  one for the "less than" and another for the
"equal to".)

freeman@spar.UUCP (Jay Freeman) (08/21/85)

[libation to line-eater]

In article <16220@watmath.UUCP> rbutterworth@watmath.UUCP (Ray Butterworth) writes:

>Even simpler.  Replace "<=" with "!>", and ">=" with "!<".  This gives
>even more consistency since you can now have "!<", "!>", and "!=".

I suspect that Ray Butterworth may have had a :-) in mind when he wrote
these lines, but I kind of _like_ "!<" and "!>".  I guess I always think of
"<=" -- for example -- as two separate tests and have to combine both of
them mentally when I am figuring out what something does.  (Or maybe it's
brain-damage from writing too many lines of assembler.)  Anyway, those might
be a reasonable enhancement, and surely would be all but free in terms of
additional compiler complexity.


-- 
Jay Reynolds Freeman (Schlumberger Palo Alto Research)(canonical disclaimer)