gb@cs.purdue.EDU (Gerald Baumgartner) (02/21/90)
For a research project I am collecting information about the risks of choosing the wrong programming language. In particular, I am looking for problems that could have been avoided if another (better) programming language had been used. I know of three such stories:

1. There is the famous story that a Mariner probe got lost because of the Fortran statement `DO 3 I = 1.3' (1.3 instead of 1,3) (see Peter Neumann: A Few Old War Stories Reappear. ACM SIGSOFT 11(5), Oct. 1986, pp. 16-18). It is a nice story but, as far as I know, NASA used Jovial at that time, not Fortran.

2. One of the security holes the Internet Worm took advantage of was in fingerd (the finger daemon). The daemon uses the gets routine for input. This routine, written in C, reads input without checking the bounds of the buffer involved. By overrunning the buffer, the worm rewrote the stack frame (see Eugene H. Spafford: Crisis and Aftermath. Communications of the ACM 32(6), June 1989). There would have been no security hole in the finger daemon if the I/O routines had been written in a programming language whose compiler takes care of bounds checks on arrays. Pascal doesn't work, since variable-length strings are needed, but Ada would be fine. A language a la ML, where these checks are done at compile time, would be even better.

3. The AT&T breakdown a month ago was caused by a break statement in C. See the following mail (multiple forwarding headers deleted):

Subject: AT&T software problem
Subject: Cautionary note on C programming...AT&T learns from experience
>From: kent@wsl.dec.com
Subj: I've always thought C looked like line noise.
Subj: the bug
Subj: AT&T's bug, for you C users out there...
Subj: I C what they mean!
Subj: "c" considered dangerous to telephones
Subj: Be careful from where you break! (else no long distance calls will make it thru...)
Subj: C switch breaks AT&T switches!
Subj: your "c users" list might appreciate this....
I received the following on AT&T's famous bug (and have deleted multiple forwarding headers):

| | Subject: AT&T Bug
| | Date: Fri Jan 19 12:18:33 1990
| |
| | This is the bug that caused the AT&T breakdown
| | the other day (no, it wasn't an MCI virus):
| |
| | In the switching software (written in C), there was a long
| | "do . . . while" construct, which contained
| | a "switch" statement, which contained
| | an "if" clause, which contained a
| | "break," which was intended for
| | the "if" clause, but instead broke from
| | the "switch" statement.

Again, it looks like this bug wouldn't have occurred in another programming language. You C what I mean?

Do you know other stories like these, if possible with references? I don't want to praise Ada or pick at C and Fortran; I am looking for any story where a provably inappropriate/insecure programming language has been used.

Gerald Baumgartner
gb@cs.purdue.edu  ...!{decwrl,gatech,ucbvax}!purdue!gb
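The pattern described in the quoted mail is easy to reproduce. The following is a minimal, hypothetical C sketch (not AT&T's actual switching code; all names are invented) showing that a "break" inside an "if" exits the innermost enclosing "switch" or loop, never the "if" itself:

```c
#include <assert.h>

/* Hypothetical illustration of the reported bug pattern.  The break
 * in the if-clause was meant to leave the if, but C defines break as
 * exiting the nearest enclosing switch, while, do, or for. */
static int process(int signal)
{
    int handled = 0;
    int i = 0;
    do {
        switch (signal) {
        case 1:
            if (i == 0)
                break;      /* exits the SWITCH, not the if */
            handled = 1;    /* intended code; never reached here */
            break;
        default:
            handled = -1;
            break;
        }
        i++;
    } while (i < 1);
    return handled;         /* still 0 for signal 1: the work was skipped */
}
```

Calling process(1) returns 0 rather than 1, because the break silently skipped the intended code.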
kjeld@iesd.auc.dk (Kjeld Flarup) (02/23/90)
In article <9790@medusa.cs.purdue.edu> gb@cs.purdue.EDU (Gerald Baumgartner) writes:
>| | This is the bug that caused the AT&T breakdown
>| | the other day (no, it wasn't an MCI virus):
>| |
>| | In the switching software (written in C), there was a long
>| | "do . . . while" construct, which contained
>| | a "switch" statement, which contained
>| | an "if" clause, which contained a
>| | "break," which was intended for
>| | the "if" clause, but instead broke from
>| | the "switch" statement.
> Again it looks like this bug wouldn't have occurred in another
> programming language.

Is the intent of using better programming languages to avoid bad programmers? A normally thinking C programmer would not use a break to get out of an if statement. Sure, you get out of the if statement, but according to my C compiler the break statement searches back for a switch, while, or for statement. Besides, breaking out of an if statement doesn't make sense.
--
* I am several thousand pages behind my reading schedule. *
Kjeld Flarup Christensen
kjeld@iesd.auc.dk
msb@sq.sq.com (Mark Brader) (02/24/90)
Gerald Baumgartner (gb@cs.purdue.EDU) writes in many groups:
> There is the famous story that a Mariner probe got lost
> because of the Fortran statement `DO 3 I = 1.3' (1.3 instead
> of 1,3) ... It is a nice story but, as far as I know, NASA used
> Jovial at that time and not Fortran.

Just for the record, the above was definitively shown to be fictional according to authoritative references given in comp.risks (= Risks Digest), issue 9.75 (I hear), not too many months ago. There is at least one textbook that states it as truth; this is wrong.

The actual reason for the loss of Mariner I was an error in code used in recovering from a hardware failure; the code had been based on handwritten equations, and in transcribing one of these, an overbar was deleted from one letter.

A story which may have been the true origin of the "DO statement myth" was posted fairly recently in alt.folklore.computers; the article cited a program at NASA that did enter production use with a dot-for-comma bug in a DO statement, but it wasn't a spacecraft flight control program. (I didn't save the details and would be happy to see them again.)

Followups directed to alt.folklore.computers.
--
Mark Brader                  "I'm not going to post a revision: even USENET
utzoo!sq!msb, msb@sq.com      readers can divide by 100."  -- Brian Reid
This article is in the public domain.
bill@ssd.harris.com (Bill Leonard) (02/28/90)
In article <9790@medusa.cs.purdue.edu> gb@cs.purdue.EDU (Gerald Baumgartner) writes:

   I received the following on AT&T's famous bug (and have deleted
   multiple forwarding headers):

   | | Subject: AT&T Bug
   | | Date: Fri Jan 19 12:18:33 1990
   | |
   | | This is the bug that caused the AT&T breakdown
   | | the other day (no, it wasn't an MCI virus):
   | |
   | | In the switching software (written in C), there was a long
   | | "do . . . while" construct, which contained
   | | a "switch" statement, which contained
   | | an "if" clause, which contained a
   | | "break," which was intended for
   | | the "if" clause, but instead broke from
   | | the "switch" statement.

   Again it looks like this bug wouldn't have occurred in another
   programming language.

I can't resist saying that this last statement seems to me to be utter nonsense. What programming language (read, compiler) can read the programmer's mind and tell what he meant? The use of the "break" statement was a logic error (actually, it sounds like it was a lack of knowledge of the language, since "break" does not apply to "if"). I can't imagine a programming language that could discern this type of error. [If I use WHILE instead of IF, for instance, I can expect some things to work and some not. Yet I seriously doubt any compiler could possibly detect this error.]

I certainly think programmers often choose an inappropriate language, but I shy away from anecdotal stories like these because they seem (to me) to spread a lot of misinformation. Unless you implement a project in multiple languages, it is nothing more than a guess to say what would have happened if the project had been implemented in some other language. Perhaps you would have discovered an even more serious flaw in that language, or you might simply find it was no better or worse than the one you chose, just different.

Most of the stories I have heard along these lines all struck me as missing the point: how well was the program tested? Were there code reviews? Design reviews?
All of these techniques are proven to reduce errors. Most of the errors in these stories (e.g., the infamous dot-versus-comma one) should have been found with even rudimentary testing. Use of an inappropriate language is no excuse for abandoning other techniques of good software engineering.
--
Bill Leonard
Harris Computer Systems Division
2101 W. Cypress Creek Road
Fort Lauderdale, FL 33309
bill@ssd.csd.harris.com or hcx1!bill@uunet.uu.net
chewy@apple.com (Paul Snively) (03/01/90)
In article <BILL.90Feb27143004@hcx2.ssd.harris.com> bill@ssd.harris.com (Bill Leonard) writes:
> In article <9790@medusa.cs.purdue.edu> gb@cs.purdue.EDU (Gerald Baumgartner) writes:
>    Again it looks like this bug wouldn't have occurred in another
>    programming language.
>
> I can't resist saying that this last statement seems to me to be utter
> nonsense.  What programming language (read, compiler) can read the
> programmer's mind and tell what he meant?  The use of the "break" statement
> was a logic error (actually, it sounds like it was a lack of knowledge of
> the language, since "break" does not apply to "if").  I can't imagine a
> programming language that could discern this type of error.  [If I use
> WHILE instead of IF, for instance, I can expect some things to work and
> some not.  Yet I seriously doubt any compiler could possibly detect this
> error.]
>
> I certainly think programmers often choose an inappropriate language, but I
> shy away from anecdotal stories like these because they seem (to me) to
> spread a lot of misinformation.  Unless you implement a project in multiple
> languages, it is nothing more than a guess to say what would have happened
> if the project had been implemented in some other language.  Perhaps you
> would have discovered an even more serious flaw in that language, or you
> might simply find it was no better or worse than the one you chose, just
> different.
>
> Most of the stories I have heard along these lines all struck me as missing
> the point: how well was the program tested?  Were there code reviews?
> Design reviews?  All of these techniques are proven to reduce errors.  Most
> of the errors in these stories (e.g., the infamous dot-versus-comma one)
> should have been found with even rudimentary testing.
>
> Use of an inappropriate language is no excuse for abandoning other techniques
> of good software engineering.
I don't think that anyone's claiming that it is an excuse; I believe the point was that some languages applied to some tasks lend themselves to error more than another language applied to the same task. If you wish to interpret the above story as a rather pointed jab at the C programming language, and object to C being treated that way, that's fine, but please just say so.

For what it's worth, my personal opinion is that C lends itself to precisely the kinds of errors noted above--when does break work and when doesn't it, and why in God's name do you need it in switch statements in the first place, etc. I believe that C's historical looseness is simultaneously its greatest weakness, when it leads to errors like this, and its greatest strength--C doesn't restrict you; C is mean and lean; C is close to the hardware; real programmers use C; even, God help us, C is the only language you need! We all know C programmers whose machismo is thus huffed and puffed up (another of my personal opinions is that the per capita arrogance of C programmers far outweighs the per capita arrogance of any other language-aficionado group).

Now to get back to the important point: what language would have been better for the task in question? Well, I hate to say it, but it's extremely unlikely that such an error would have been made in Pascal, since Pascal doesn't require you to explicitly break out of case...of constructs.

Before the flames start, let me just add: no, I don't necessarily prefer Pascal over C for all tasks. I generally attempt to choose the right tool for the job, rather than falling into the "when all you have is a hammer, everything looks like a nail" trap.

Standard Disclaimer.
jk0@image.soe.clarkson.edu (Jason Coughlin) (03/01/90)
From article <6960@internal.Apple.COM>, by chewy@apple.com (Paul Snively):
> For what it's worth, my personal opinion is that C lends itself to
> precisely the kinds of errors noted above--when does break work and when
> doesn't it, and why in God's name do you need it in switch statements in
> the first place, etc.

Gee, if you read the language defn you'd know exactly when break applies and when break doesn't. It seems to me that it is the programmer's responsibility to know the language in which he is going to implement said project -- it's not necessarily the language's responsibility to know the programmer didn't read the defn.

> Well, I hate to say it, but it's extremely unlikely that such an error
> would have been made in Pascal, since Pascal doesn't require you to
> explicitly break from case...of constructs.

And without knowing the project, you have no business making the assertion that Pascal was better than C [especially on a Unix box] or that C was better than Pascal [especially on a VMS box].
--
Jason Coughlin ( jk0@sun.soe.clarkson.edu , jk0@clutx )
"Every jumbled pile of person has a thinking part that wonders what
 the part that isn't thinking isn't thinking of." - They Might Be Giants
sue@murdu.oz (Sue McPherson) (03/01/90)
>From article <6960@internal.Apple.COM>, by chewy@apple.com (Paul Snively):
>> For what it's worth, my personal opinion is that C lends itself to
>> precisely the kinds of errors noted above--when does break work and when
>> doesn't it, and why in God's name do you need it in switch statements in
>> the first place, etc.

I think it's a mistake to think that any language can prevent programming errors, especially by programmers who do not have a full understanding of the language. For instance, I recently helped someone who couldn't work out why a minor change to a program caused it to crash, and when he removed the change, it still crashed.

      ...
      CHARACTER ARRAY(12)*3
      CHARACTER PATT*3
      INTEGER I
      ...
      I = 12
10    IF ((ARRAY(I).NE.PATT).AND.(I.GT.0)) THEN
         I = I - 1
         GOTO 10
      ENDIF
      ...

Of course, if PATT isn't in ARRAY, the check on ARRAY(0) caused an access violation -- which only caused the program to crash when the "/check=all" option was used to compile. It's a dumb mistake and it's easy to see how it happened, but it would be wrong to say that it was caused by the lack of good control structures in FORTRAN, just as it's a poor excuse to blame C for the mistakes programmers make when using it. As the saying goes...

A BAD WORKMAN BLAMES HIS TOOLS

Sue McPherson
sue@murdu.unimelb.edu.au
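The Fortran loop above fails because .AND. is not guaranteed to short-circuit, so ARRAY(0) can be evaluated. In C the same search is safe if the bounds test is written first, since && is guaranteed to evaluate left to right and stop early. A small illustrative sketch (names invented for this example):

```c
#include <assert.h>
#include <string.h>

/* C guarantees that && short-circuits left to right, so placing the
 * bounds test FIRST keeps array[i] from ever being evaluated with i
 * out of range.  (Fortran's .AND. makes no such promise, which is
 * what tripped up the program above.)  Returns the highest matching
 * index, or -1 if patt is not present. */
static int find_last(const char array[][4], int n, const char *patt)
{
    int i = n - 1;
    while (i >= 0 && strcmp(array[i], patt) != 0)   /* i tested before use */
        i--;
    return i;
}

static const char sample[3][4] = { "abc", "def", "abc" };
```

With the operands reversed -- the comparison before the bounds test -- a C program would be just as wrong as the Fortran one.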
phg@cs.brown.edu (Peter H. Golde) (03/01/90)
In article <1990Feb28.213543.21748@sun.soe.clarkson.edu> jk0@image.soe.clarkson.edu (Jason Coughlin) writes:
>From article <6960@internal.Apple.COM>, by chewy@apple.com (Paul Snively):
>> For what it's worth, my personal opinion is that C lends itself to
>> precisely the kinds of errors noted above--when does break work and when
>> doesn't it, and why in God's name do you need it in switch statements in
>> the first place, etc.
>
> Gee, if you read the language defn you'd know exactly when break
>applies and when break doesn't. It seems to me that it is the
>programmer's responsibility to know the language in which he is going to
>implement said project -- it's not necessarily the language's responsibility
>to know the programmer didn't read the defn.

However, every programmer, no matter how good, makes stupid mistakes -- ones where s/he knows better, but for some reason did it anyway. These might be simple syntax errors, or left-out statements, etc. The higher the percentage of these errors which the compiler catches, the more reliable the program will be and the less time it will take to debug. This is why redundancy in a language can be a good thing.

The C language might be made "simpler" if all undeclared variables were automatically declared as auto int, thus saving the need for "useless" declarations. I would not like to program in such a language; would you? To take a more "real-life" example, I have, at times, mistyped a C program as follows:

	c = foo(d);	/* update count of flibbets *
	bam_whiz(c, d);	/* and propagate change to zip module */
	return;

If I had used another language, this error would have been caught by the compiler. Clearly this is a small point, but it illustrates my point: some languages and compilers permit a larger percentage of minor errors to pass than others.
--Peter Golde
melling@cs.psu.edu (Michael D Mellinger) (03/01/90)
In article <31039@brunix.UUCP> phg@cs.brown.edu (Peter H. Golde) writes:

   The C language might be made "simpler" if all undeclared variables
   were automatically declared as auto int; thus saving the need for
   "useless" declarations.  I would not like to program in such a
   language, would you?  To take a more "real-life" example, I have, at
   times, mis-typed a C program as follows:

	c = foo(d);	/* update count of flibbets *
	bam_whiz(c, d);	/* and propagate change to zip module */
	return;

   If I had used another language, this error would have been caught by
   the compiler.  Clearly this is a small point, but it illustrates my
   point: some languages and compilers permit a larger percentage of
   minor errors to pass than others.
   --Peter Golde

Some compilers will warn you if you have nested comments. gcc, for example, will warn you about this if you use the -Wcomment option. But your point is well taken. Personally, it's little things like this that make me believe that everyone should ABANDON C and move on to C++ (two others being function prototyping and strong type checking). Waddya think? Wither C?

void count_flibbets(int d)
// int d;			Stop doing this!!
{
    c = foo(d);		// update count of flibbets
    bam_whiz(c, d);	// and propagate change to zip module
    return;
}

-Mike
jeff@aiai.ed.ac.uk (Jeff Dalton) (03/01/90)
In article <6960@internal.Apple.COM> chewy@apple.com (Paul Snively) writes:
>machismo is thus huffed and puffed up (another of my personal opinions is
>that the per capita arrogance of C programmers far outweighs the per
>capita arrogance of any other language-aficionado group).

Except for Pascal programmers. Even Wirth has moved on by now.
barmar@think.com (Barry Margolin) (03/02/90)
In article <1990Feb28.213543.21748@sun.soe.clarkson.edu> jk0@image.soe.clarkson.edu (Jason Coughlin) writes:
> Gee, if you read the language defn you'd know exactly when break
>applies and when break doesn't. It seems to me that it is the
>programmer's responsibility to know the language in which he is going to
>implement said project -- it's not necessarily the language's responsibility
>to know the programmer didn't read the defn.

What would you say if a car designer used a similar excuse: Gee, if you'd read the owner's manual for the 6000SUX you'd know that you have to turn the radio off before stepping on the brake pedal. It seems to me that it is the driver's responsibility to know the car he's driving -- it's not necessarily the manufacturer's responsibility to know that the driver didn't read the manual.

Yes, it's the responsibility of the programmer to know the language. But it's the responsibility of language designers to design languages reasonably. If programmer-friendliness weren't an issue, we'd still be programming in machine language.
--
Barry Margolin, Thinking Machines Corp.
barmar@think.com
{uunet,harvard}!think!barmar
dalamb@qucis.queensu.CA (David Lamb) (03/02/90)
In article <BILL.90Feb27143004@hcx2.ssd.harris.com> bill@ssd.harris.com (Bill Leonard) writes:
>In article <9790@medusa.cs.purdue.edu> gb@cs.purdue.EDU (Gerald Baumgartner) writes:
>   Again it looks like this bug wouldn't have occurred in another
>   programming language.
>
>I can't resist saying that this last statement seems to me to be utter
>nonsense.  What programming language (read, compiler) can read the
>programmer's mind and tell what he meant?  The use of the "break" statement
>was a logic error ...  I can't imagine a
>programming language that could discern this type of error.  [If I use
>WHILE instead of IF, for instance, I can expect some things to work and
>some not.  Yet I seriously doubt any compiler could possibly detect this
>error.]

I think Baumgartner's point was that languages have "characteristic errors" that seem to be properties of the language rather than the programmer. For example, in C people use = instead of ==; in Pascal the corresponding error is caught by the compiler. In many languages, beginners get confused by where to put semicolons; in Turing that doesn't happen. In Bliss there were no types, and variable names and assignment didn't mean what you thought they meant; instead of the required x := .x+1, people left out the dot; the erroneous result means "store the address of the word after variable x into x". Characteristic errors of this sort can legitimately be considered "human engineering" flaws in the language, albeit sometimes minor ones.
David Alex Lamb                 ARPA Internet: David.Lamb@cs.cmu.edu
Department of Computing                        dalamb@qucis.queensu.ca
  and Information Science       uucp: ...!utzoo!utcsri!qucis!dalamb
Queen's University              phone: (613) 545-6067
Kingston, Ontario, Canada K7L 3N6
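Lamb's first characteristic C error, = where == was intended, is worth a concrete look. The following hypothetical sketch (invented names) compiles silently in C even though one version is a test and the other an assignment; a Pascal compiler would reject the buggy form outright:

```c
#include <assert.h>

/* The classic C characteristic error: = (assignment) where ==
 * (comparison) was intended.  Both forms compile in C. */
static int is_zero_buggy(int x)
{
    if (x = 0)          /* BUG: assigns 0 to x; condition is always false */
        return 1;
    return 0;
}

static int is_zero(int x)
{
    if (x == 0)         /* the intended comparison */
        return 1;
    return 0;
}
```

The buggy version answers "no" even when asked about zero itself, which is exactly the kind of silent misbehavior Lamb is describing.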
jk0@image.soe.clarkson.edu (Jason Coughlin) (03/02/90)
From article <34416@news.Think.COM>, by barmar@think.com (Barry Margolin):
> In article <1990Feb28.213543.21748@sun.soe.clarkson.edu> jk0@image.soe.clarkson.edu (Jason Coughlin) writes:
>> Gee, if you read the language defn you'd know exactly when break
>>applies and when break doesn't. It seems to me that it is the
>>programmer's responsibility to know the language in which he is going to
>>implement said project -- it's not necessarily the language's responsibility
>>to know the programmer didn't read the defn.
>
> What would you say if a car designer used a similar excuse: Gee, if you'd
> read the owner's manual for the 6000SUX you'd know that you have to turn
> the radio off before stepping on the brake pedal.  It seems to me that it
> is the driver's responsibility to know the car he's driving -- it's not
> necessarily the manufacturer's responsibility to know that the driver
> didn't read the manual.

Oh come on -- cars != programming languages, and your analogy is useless. People know how to work the radio, and the semantics of driving a Ford as opposed to a Chevy are usually pretty clear. The differences between C, Pascal, LISP, and APL are more than syntactic *AND* semantic. This is why some people are good programmers and some people write crap. [Besides, who turns the radio off before stepping on the brake pedal?????]

> Yes, it's the responsibility of the programmer to know the language.  But
> it's the responsibility of language designers to design languages
> reasonably.  If programmer-friendliness weren't an issue we'd still be
> programming in machine language.

True, language designers should design languages reasonably. But the language *IS* clear, and it's clearly defined in K&R and in K&R with the ANSI extensions. The problem is that programmers have to be aware of language design issues -- and they aren't.
--
Jason Coughlin ( jk0@sun.soe.clarkson.edu , jk0@clutx )
"Every jumbled pile of person has a thinking part that wonders what
 the part that isn't thinking isn't thinking of." - They Might Be Giants
lins@Apple.COM (Chuck Lins) (03/02/90)
In article <1883@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>In article <6960@internal.Apple.COM> chewy@apple.com (Paul Snively) writes:
> >machismo is thus huffed and puffed up (another of my personal opinions is
> >that the per capita arrogance of C programmers far outweighs the per
> >capita arrogance of any other language-aficionado group).
>
>Except for Pascal programmers.  Even Wirth has moved on by now.
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Yup. Even beyond Modula-2 to Oberon. And several colleagues at ETHZ have enhanced Oberon with object-oriented extensions.
--
Chuck Lins               | "Exit left to funway."
Apple Computer, Inc.     | Internet: lins@apple.com
20525 Mariani Avenue     | AppleLink: LINS
Mail Stop 41-K           |
Cupertino, CA 95014      | "Self-proclaimed Object Oberon Evangelist"
The intersection of Apple's ideas and my ideas yields the empty set.
perry@apollo.HP.COM (Jim Perry) (03/02/90)
Peter H. Golde writes:
>However, every programmer, no matter how good, makes stupid mistakes -- ones
>in which s/he knows better, but for some reason, s/he did anyway.  These
>might be simple syntax errors, or left out statements, etc.  The higher the
>percentage of these error which the compiler catches, the more reliable
>the program will be and the less time it will take to be debugged.  This
>is why redundancy in a language can be a good thing.

A good place to jump in with a tale from life -- on the subject of language and errors, it so happens that I just completed initial development of a smallish (6000 line) program, of some complexity, on an accelerated schedule. Since it's fresh in my mind and I have ready access to the source history, I thought I'd look over the bugs I found and see how they broke down.

I tried to be honest and count all bugs. I don't include changes that were the result of interface semantics misunderstandings (correctly implementing the wrong thing), or bugs/deficiencies that I happened to turn up in existing code. These are my bugs: mistakes, some stupid, that I made. I don't include compiler-detected errors/warnings; I used an ANSI compiler (with full prototypes; argument mismatch is a huge bug source in pre-ANSI C, and the compiler detected quite a few such errors), and one that does a reasonable job of catching "warning" situations (including nested comments, for instance -- it found several of those).

I program in C for a living, and haven't written in another language in 3 years. C is not my native programming language, so I speak C with a PL/I-ish accent (several of the bugs I turned up involved pointer arithmetic/arrays). I have done systems programming in XPL, PL/I, Pascal, BASIC, various assemblers, and a smattering of others. I have never written in Ada.

In no particular order:

1. A function had an output parameter which was a numeric count, i.e. a pointer to an integer.
I wrote the code to increment the count as

	*count++;

which of course does entirely the wrong thing (it should be "(*count)++;" or, as I rewrote it, "*count += 1;"). Clearly this particular mistake is strictly limited to C: in another language this parameter would be a reference/out/var, not a pointer; the ++ and thus the ambiguity of what's incremented is obviously unique to C; and of course the stupid notion of unused-expression-as-statement is also uncommon. However, a better C compiler could have flagged the fact of the unused expression, i.e. that while "count++" was presumably an intended side effect, "*count" was unused.

2. I wanted to fill in a record whose structure was something like:

	struct {
	    struct a fixed_length_stuff;
	    struct b variable_length_array[fixed_length_stuff.size];
	    char string[];	/* variable-length null-terminated */
	    struct c more_stuff;
	} foo;

Clearly this isn't expressible in this format in C, so the code to fill in the record used pointer arithmetic to develop pointers to the start of the variable-length array, to the string, and to the following data. In one case I made a simple omission of one addition [this is approximate; assume all the right casting]:

	ptr = &foo + sizeof(a);			/* pointer to b */
	strcpy(ptr + (size*sizeof(b)), name);	/* put name after b */
	ptr += strlen(name)+1;			/* step past name to more_stuff */
	fill_in_more_stuff(ptr);		/* and put values there */

The first two lines work, and were copied from an instance where more_stuff wasn't of interest; the error is that the + in line 2 should logically be changed to a +=, or, as I prefer, the line should be expanded to

	ptr += (size*sizeof(b));	/* step past b to string */
	strcpy(ptr, name);		/* fill in name */

This was just an error, and would have been so in any language where I was trying to represent such a structure extralingually. Other languages, however, would have allowed me to describe such a structure within the language (PL/I, for one).
Where possible, I let compilers do my address computation for me. There were a couple more bugs along similar lines, coming up with pointers to entries in such contrived records; I'll count them all together.

3. Due to an interface confusion, an off-by-one situation arose where a function filled in one more entry of an array than the caller had allocated, thus trashing the next thing on the stack. In a system with runtime array bounds checking, this would have been detected quickly and painlessly. As C doesn't really have arrays, it's very unlikely that a C runtime implementation could do this.

4. A function to allocate, initialize, and return a new node to go in a linked list had two bugs: I neglected to set the link field to NULL, and there was no return statement. In most languages the former would have been a bug, although if I'd been using a language where I could define initial values for newly created storage (C++, PL/I...) I would probably have done it there. The absence of a return statement could and should have been caught by the compiler.

5. I neglected to maintain a node count field when nodes were added to or removed from a list. Just a dumb oversight.

6. In a few instances I didn't set function output parameters correctly in cases of exceptions (errors) -- i.e. not setting the "number of objects returned" variable to 0. This matters because I'm writing distributed code, and such variables get used by the RPC mechanism to determine how much stuff to send across the wire on return. Not a C issue.

7. I had one bug caused by omission of an item in an initializer list for a struct (a vector of function pointers). The compiler could have caught that if the language didn't allow partial initializer lists.

8. In one spot in one algorithm I used a break when I needed a continue (I didn't confuse the two; I got the algorithm wrong).

Overall, that's 8 bugs or classes of bugs.
5 of the 8 could have been avoided or detected by a smarter compiler or a different language. I've tried to cover everything (i.e. I've been through the audit trail of edits). These were the show stoppers; there may be subtler bugs lurking.
--
Jim Perry   perry@apollo.hp.com   HP/Apollo, Chelmsford MA
This particularly rapid unintelligible patter isn't generally heard
and if it is it doesn't matter.
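Two of Perry's bugs (1 and 7) are purely mechanical and easy to reproduce. The following hypothetical sketch (invented names, not Perry's actual code) shows both: "*count++" advancing the pointer instead of the count, and a partial initializer list silently zero-filling a forgotten function pointer:

```c
#include <assert.h>
#include <stddef.h>

/* Bug 1 in miniature: ++ binds tighter than unary *, so "*count++"
 * increments the local POINTER and merely reads (and discards) the
 * old target; the pointed-to integer is unchanged. */
static void bump_buggy(int *count) { *count++; }      /* BUG */
static void bump_fixed(int *count) { (*count)++; }    /* intended */

static int demo_buggy(void) { int n = 5; bump_buggy(&n); return n; }
static int demo_fixed(void) { int n = 5; bump_fixed(&n); return n; }

/* Bug 7 in miniature: a partial initializer list compiles cleanly;
 * the omitted third entry is silently zero-filled, leaving a NULL
 * function pointer waiting to be called. */
static int op_add(int a, int b) { return a + b; }
static int op_sub(int a, int b) { return a - b; }
/* op_mul was forgotten... */
static int (*const ops[3])(int, int) = { op_add, op_sub };
```

As Perry notes, a compiler that flagged the unused "*count" expression, or a language that rejected partial initializer lists, would have caught both at compile time.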
ok@goanna.oz.au (Richard O'keefe) (03/02/90)
In article <48f0d9c2.20b6d@apollo.HP.COM>, perry@apollo.HP.COM (Jim Perry) offers a list of 8 mistakes he made. Before commenting, I would remind you that I am not a C bigot; I would love to have the chance to use Ada, and I have fallen in love with Eiffel (which generates C...). Comment 0: Jim Perry didn't say what operating system and compiler he was using. This turns out to matter. > 1. A function had [int *count; as a parameter] [and he wrote] > *count++; > However, a better C compiler could have flagged the fact of > the unused expression, i.e. that while "count++" was presumably an > intended side effect, "*count" was unused. Just so. The compiler *could* have flagged it. 'lint' _would_ have, and I've used non-pcc compilers on Unix that would have. > 2. I wanted to fill in a record whose structure was something like: > struct { > struct a fixed_length_stuff; > struct b variable_length_array[fixed_length_stuff.size]; > char string[]; /* variable-length null-terminated */ > struct c more_stuff; > } foo; Not a C problem. Instead of using variable length fields, which C quite explicitly doesn't support, it would be better to use pointers. I think this mistake should be counted as refusal to do things the C way. > Other languages, however, would have allowed me to describe such a > structure within the language (PL/I, for one). It's worth pointing out that PL/I is not a strongly typed language (*much* weaker than C) and although it has record variables it does not have record types. > 3. [an off-by-one error accessing an array] > In a system > with runtime array bounds checking, this would have been detected > quickly and painlessly. As C doesn't really have arrays it's very > unlikely that a C runtime implementation could do this. 
Note that some systems with runtime bounds checking only check that the result of a multidimensional array access is somewhere in the array as a whole (read the VS Fortran 2 manual for instance) which means that this can happen _within_ an array. Note that there is not one tiny little thing anywhere in the ANSI Pascal standard (which I have read) that *requires* bounds checking. PL/I used to be the same way. Ada (LRM 4.1.1, 11.1) _does_ require bounds checking (nice one). However, the claim that "C doesn't really have arrays" is not quite true. It is only legal to access the value of a pointer variable itself in C when the variable points into (or just one past the end of) an array, and it is only legal to *dereference* a pointer when it points properly into an array. A C implementation may, for example, maintain pointer values as triples: (Arr,Lim,Off) where Arr is the address of the beginning of the array, Lim is the size of the array, and Off is the offset of the pointer from the base of the array, so that 0 <= Off <= Lim for the pointer to be valid and 0 <= Off < Lim for dereferencing to be valid. Not only is this possible in principle, but Symbolics C does something similar, and I believe that the Saber-C debugger also checks array bounds.

> 4. [a field in a record wasn't set; not a C problem]
> [a function had both return e; and return;]
> The absence of a return statement could
> and should have been caught by the compiler.

Another quality-of-implementation issue. lint _would_ have caught it and several C compilers I've used would have caught it.

> 5. [an oversight; not a C problem]
> 6. [output variables not set on exception; not a C problem]
> 7. I had one bug caused by omission of an item in an initializer list
> for a struct (a vector of function pointers). The compiler could have
> caught that if the language didn't allow partial initializer lists.

Caught by a feature, I'm afraid. 
There are several methods you can use to help yourself catch this kind of mistake. One is to do

    some_type the_table[ /* note size not specified */ ] = {
        ...
        ...
    };

    void check_the_table()
    {
        assert(sizeof the_table == expected * sizeof the_table[0]);
        /* other tests */
    }

Another trick is to use the preprocessor to help you check your counting.

    #define ten(A,B,C,D,E,F,G,H,I,J) A,B,C,D,E,F,G,H,I,H

    some_type the_table[] = {
        ten(
            ten(x00, x01, ..., x09),
            ...
            ten(x90, x91, ..., x99)
        )};

Now if you miscount, the preprocessor will complain.

> 8. [not a C problem]
> Overall, that's 8 bugs or classes of bugs. 5 of the 8 could have been
> avoided or detected by a smarter compiler or a different language.

My totals are

    4 Not a C problem
    1 Can be avoided by exploiting the preprocessor
    2 Would have been caught by Lint or some existing compilers
    1 Would have been caught by a C interpreter

That's *one* C-related problem where a V7 UNIX programmer would have been left lamenting. One of the nice things about Dijkstra's notation is that he can distinguish between variables a program text is _allowed_ to change and variables it is _obliged_ to update. That means that a simple flow analysis can catch procedure output parameters which are not assigned. This would have caught one of the "Not a C problem" mistakes. C would have a hard time doing this because it doesn't know when you intend a pointer parameter to be a pointer input and when it is the address of an output. On the other hand, Ada doesn't know the difference between "MAY assign" and "SHOULD assign" either. It is a fair criticism of many existing C *compilers* that they do not issue enough warning messages, and we *should* tell compiler vendors that warning messages are worth money to us. If Bill Wolfe (say) had done a survey of magazine reviews of C compilers and told us that reviewers consistently rated speed of compilation as more important than good warning messages, that would have been a legitimate criticism. 
(I get that impression, but I _haven't_ done the survey and don't claim it as anything more than an impression.)
peter@ficc.uu.net (Peter da Silva) (03/02/90)
From a quick reading of your message, it seems to me that running lint over your program would have picked up most, if not all, of the C-language bugs you mentioned. Lint is your friend. -- _--_|\ Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>. / \ \_.--._/ Xenix Support -- it's not just a job, it's an adventure! v "Have you hugged your wolf today?" `-_-'
perry@apollo.HP.COM (Jim Perry) (03/02/90)
In article <2935@goanna.oz.au> ok@goanna.oz.au (Richard O'keefe) writes: >In article <48f0d9c2.20b6d@apollo.HP.COM>, perry@apollo.HP.COM (Jim Perry) >offers a list of 8 mistakes he made. Before commenting, I would remind >you that I am not a C bigot; I'm not an anti-C bigot, however I am an anti-(C-bigot). I was simply presenting a post-mortem analysis of the specific errors I made, with commentary on whether the compiler or language could have helped me detect or avoid them. I was explicit in several places that a different C compiler (or interpreter, or lint, though I didn't mention these by name, or, as it happens, the same C compiler with a higher level of informative output) could have done so. Very few of my bugs were explicitly C related in that no conceivable C translator could have detected them, nor did I make that claim. >> 2. I wanted to fill in a record whose structure was something like: >Not a C problem. Instead of using variable length fields, which C >quite explicitly doesn't support, it would be better to use pointers. >I think this mistake should be counted as refusal to do things the C way. Patient: Doctor, it hurts when I do this! Doctor: Then don't do that! False assumption. It happens that this program manipulates records with variable-length parts, which ultimately must reside in a database in contiguous format. While actively manipulating the data, it is represented in a C-natural way as pointers and lists, but coming off the disk and going back on it has to be turned into a contiguous form. >> Other languages, however, would have allowed me to describe such a >> structure within the language (PL/I, for one). > >It's worth pointing out that PL/I is not a strongly typed language >(*much* weaker than C) >and although it has record variables it does not have record types. It also uses funny names for integers. 
What do either of these comments have to do with the basic premise that while C "quite explicitly doesn't support" variable length fields in structures, other languages do? I didn't say "PL/I is better than C", I used it as an example. If C supported this capability it would have simplified my life and avoided bugs in my program. >>In a system >>with runtime array bounds checking, this would have been detected >>quickly and painlessly. >Note that some systems... This is all absolutely irrelevant. I didn't advocate any particular language or implementation, just that a feature, array bounds checking, which many languages/implementations support, would have helped me. Naturally I implied that such an implementation would have been such that it would have detected my bug; I've used such systems. >> 7. I had one bug caused by omission of an item in an initializer list >> for a struct (a vector of function pointers). The compiler could have >> caught that if the language didn't allow partial initializer lists. > >Caught by a feature, I'm afraid. It's a truism in this business that one programmer's "feature" is another's "bug". In any event I'm unswayed by your workaround kludges. That you've found the down side of this "feature" a sufficient impediment to think of two separate workarounds doesn't speak well for its value. >> Overall, that's 8 bugs or classes of bugs. 5 of the 8 could have been >> avoided or detected by a smarter compiler or a different language. > >My totals are >4 Not a C problem >1 Can be avoided by exploiting the preprocessor >2 Would have been caught by Lint or some existing compilers >1 Would have been caught by a C interpreter In other words, you restated my conclusion (after blithely deciding that I had no business wanting to deal with variable-length fields in records, which I reject): 3 Not a C problem (i.e. 
would have been a bug in other likely implementation languages, were not detectable by C translator) 1 "C quite explicitly does not support" (i.e. could have been avoided by a different language) 4 could have been detected by smarter compiler/translator, or a different language. Sounds like we're in pretty close agreement, except to the extent that you seem to think I'm out to get C, or that I'm promoting some other language. I'm just observing that better tools (than I happened to take advantage of) could have helped me save debugging time on a specific program, by a factor of over 50%. - Jim Perry perry@apollo.hp.com HP/Apollo, Chelmsford MA This particularly rapid unintelligible patter isn't generally heard and if it is it doesn't matter.
dave@micropen (David F. Carlson) (03/03/90)
In article <6960@internal.Apple.COM>, chewy@apple.com (Paul Snively) writes:

> For what it's worth, my personal opinion is that C lends itself to
> precisely the kinds of errors noted above--when does break work and when
> doesn't it, and why in God's name do you need it in switch statements in
> the first place, etc.

What break does is *very* well defined and is no more prone to misinterpretation than any other non-linear control flow statement in any other PL. From K&R2 p. 244, A9.5: an iteration statement is (for, while, do)... A break statement may appear only in an iteration statement or a switch statement; control passes to the statement following the terminated statement.

A multi-case switch is very handy in many situations to reduce identical treatments for similar cases. That you ask the question of the usefulness of break-per-case/multiple-cases implies that you haven't sufficient experience with the construct to judge its merits/weaknesses.

Dijkstra notes that no programming language can prevent a poor programmer from creating bad programs. -- David F. Carlson, Micropen, Inc. micropen!dave@ee.rochester.edu "The faster I go, the behinder I get." --Lewis Carroll
desj@idacrd.UUCP (David desJardins) (03/03/90)
From article <2935@goanna.oz.au>, by ok@goanna.oz.au (Richard O'keefe): > Another trick is to use the preprocessor to help you check your counting. > #define ten(A,B,C,D,E,F,G,H,I,J) A,B,C,D,E,F,G,H,I,H > > some_type the_table[] = { > ten( > ... > )}; > > Now if you miscount, the preprocessor will complain. The problem with this solution is well-illustrated by the posting itself. It would be better to have ways of detecting existing bugs which would not introduce new, possibly even harder-to-find bugs. -- David desJardins extra lines for inews
billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe, 2847 ) (03/03/90)
From dave@micropen (David F. Carlson):

>> For what it's worth, my personal opinion is that C lends itself to
>> precisely the kinds of errors noted above--when does break work and when
>> doesn't it, and why in God's name do you need it in switch statements in
>> the first place, etc.
>
> A multi-case switch is very handy in many situations to reduce identical
> treatments for similar cases.

So is a multi-alternative case, as provided by Ada:

    case Foo is
       when 1 | 3 | 5 =>
          statement1;
       when 2 | 4 | 6 =>
          statement2;
       when others =>
          statement3;
    end case;

The difference is that Ada takes care of exiting the case statement for you, whereas C requires (unsafely) that you use a break to avoid being sucked into the code associated with subsequent cases. Bill Wolfe, wtwolfe@hubcap.clemson.edu
bouma@cs.purdue.EDU (William J. Bouma) (03/03/90)
In article <6960@internal.Apple.COM> chewy@apple.com (Paul Snively) writes: >We all know C programmers whose >machismo is thus huffed and puffed up (another of my personal opinions is >that the per capita arrogance of C programmers far outweighs the per >capita arrogance of any other language-aficionado group). Oh, really! I can tell you have never met any FORTH programmers. >Well, I hate to say it, but it's extremely unlikely that such an error >would have been made in Pascal, since Pascal doesn't require you to >explicitly break from case...of constructs. Well isn't that special! The way I see it, C gives you the flexibility to not break if you don't want to, where PASCAL restricts you to break. >Before the flames start, Too late! > let me just add: no, I don't necessarily prefer Pascal over C for all tasks. The only place I can see to prefer PASCAL over C is a beginner programming class. Isn't PASCAL usually thrown out along with the diapers? -- Bill <bouma@cs.purdue.edu> | And don't forget my dog... "Astronomy" -- BOC
jlg@lambda.UUCP (Jim Giles) (03/03/90)
From article <1004@micropen>, by dave@micropen (David F. Carlson):

> [... explicit breaks in the C switch construct ...]
> Dijkstra notes that no programming language can prevent a poor programmer from
> creating bad programs.

He also notes that the choice of programming language can have a strong effect on the quality of the resulting code. (His indictment of PL/I as being similar to flying a widebodied jet with all the windows taped over and no labels on the thousands of controls was quite apropos.) This effect of the language choice is mainly psychological - and it CAN be overcome (which is the main thrust of many of Dijkstra's works). But, be honest, how many programmers do you know who _really_ construct their programs abstractly _before_ even selecting their implementation language? This is the proper way (a la Dijkstra) to make sure that you aren't negatively impacted by the language - you select the proper language for the job at hand - you don't mangle the job to fit the language. Dijkstra's statement, while true, should not be used to excuse poorly designed language features (as you are trying to do). A better design for C would have been _not_ to require breaks after each case and to provide some other syntax for the representation of multiple choices on the same case. It's easy to see these kinds of design errors in retrospect (C _is_ nearly 20 years old, you know). J. Giles
manis@cs.ubc.ca (Vincent Manis) (03/03/90)
I might note that B's syntax, and hence C's syntax, was a definite *dis*improvement [sic] over that of its predecessor, BCPL. I would in fact post an article saying exactly that, except for the fact that this entire thread most certainly belongs somewhere, but not in comp.lang.scheme. Would you please edit the Newsgroups: line in further articles on this subject? -- \ Vincent Manis <manis@cs.ubc.ca> "There is no law that vulgarity and \ Department of Computer Science literary excellence cannot coexist." /\ University of British Columbia -- A. Trevor Hodge / \ Vancouver, BC, Canada V6T 1W5 (604) 228-2394
jbaker@gmu90x.gmu.edu (jbaker) (03/06/90)
In article <8218@hubcap.clemson.edu> Bill Wolf writes:

>From dave@micropen (David F. Carlson):
>>> For what it's worth, my personal opinion is that C lends itself to
>>> precisely the kinds of errors noted above--when does break work and when
>>> doesn't it, and why in God's name do you need it in switch statements in
>>> the first place, etc.
>>
>> A multi-case switch is very handy in many situations to reduce identical
>> treatments for similar cases.

But the real usefulness of requiring break in a switch statement is for SIMILAR treatments of similar cases; for example, you may require a few assignments in one case before a more complicated computation which must be performed for several of the cases. This could be done in other languages using conditionals or multiple case statements, but it's not quite as nice.

Bill Wolf writes:

> So is a multi-alternative case, as provided by Ada:
>
>     case Foo is
>        when 1 | 3 | 5 =>
>           statement1;
>        when 2 | 4 | 6 =>
>           statement2;
>        when others =>
>           statement3;
>     end case;
>
> The difference is that Ada takes care of exiting the case statement
> for you, whereas C requires (unsafely) that you use a break to avoid
> being sucked into the code associated with subsequent cases.

But this is just one example of the design philosophy of C: flexibility; if the machine will let you do it (or naturally WANTS to do it), let the programmer do it the same way. Other examples of such flexibility are the lack of type-checking, as well as allowing assignments just about anywhere. Some languages, such as Pascal, have more limitations (or fewer extensions) in their constructs. This usually is perfectly adequate, but for someone who writes code while thinking about how the machine will execute that code, as I do, flexibility can be useful; a small amount of speed-up and more compact code can be the result. However, this capability has a trade-off: flexibility for follow-ability. Humans do not think like computers. 
We can not precisely process syntax. When code becomes too involved, it can become very difficult to follow, even for the author. What could be a straightforward program may now be a twisted mess. It becomes easy to overlook bugs that would be obvious in other languages. This is why C programmers rely heavily on a debugger. What one calls "safety" in a language, then, is just how well humans can follow a construct, without regard for its usefulness. C is not "safe," but while being quite simple and relatively low-level, it contains many flexible constructs. Sometimes, though, flexibility is present in other languages in a more "safe" fashion. For example, type conversion is available in Modula-2 IF it is explicitly done in the code. In order to use a pointer as an integer, for example, one might use: INTEGER(ch^). This flags the compiler that "we meant to do that" and warns humans that something tricky is going on. But C can be delightful to use, if you are very careful to write clear code. John Baker jbaker@gmuvax.gmu.edu Now about deciphering all those }++|#{ symbols....
sanders@sanders.austin.ibm.com (Tony Sanders) (03/07/90)
Just a coupla comments not directed at Jim but just for the record:

In article <48f0d9c2.20b6d@apollo.HP.COM> perry@apollo.HP.COM (Jim Perry) writes:
>1. A function had an output parameter which was a numeric count, i.e.
>a pointer to an integer. I wrote the code to increment the count as
> *count++;
>which of course does entirely the wrong thing (it should be
>"(*count)++;" or, as I rewrote it, "*count += 1;"). Clearly this
>particular mistake is strictly limited to C: in another language this
>parameter would be a reference/out/var, not a pointer; the ++ and
>thus the ambiguity of what's incremented is obviously unique to C; and

This is the same as misunderstanding what "2+3*4" does. If you assume it adds 2+3 then multiplies by 4 you'll be sorry. It's a simple matter of understanding the precedence rules, thus not limited to C. FYI: There is a nifty little program called "cparen" for times when you are unsure of the precedence.

>of course the stupid notion of unused-expression-as-statement is also
>uncommon. However, a better C compiler could have flagged the fact of
>the unused expression, i.e. that while "count++" was presumably an
>intended side effect, "*count" was unused.

You have a point that C allows you to have "dangling" expressions (those that have no side effect, like "1;" or "*count"). lint will detect lines that have no effect like "a*b;" but not "*count++;". I assume that could be added without too much trouble.

>4. A function to allocate, initialize, and return a new node to go in
>...
>probably have done it there. The absence of a return statement could
>and should have been caught by the compiler.

The absence of a return statement shouldn't have been caught by the compiler, it should have been and would have been caught with lint (see my new and improved .sig). -- sanders The 11th commandment: "Thou shalt use lint" Reply-To: cs.utexas.edu!ibmaus!auschs!sanders.austin.ibm.com!sanders "she was an innocent bystander, it's a democracy" -- Jim Morrison
lou@atanasoff.rutgers.edu (Lou Steinberg) (03/08/90)
In article <2596@gmu90x.gmu.edu> jbaker@gmu90x.gmu.edu (jbaker) writes: > In article <8218@hubcap.clemson.edu> Bill Wolf writes: > >From dave@micropen (David F. Carlson): > >> A multi-case switch is very handy in many situations to reduce identical > >> treatments for similar cases. > > But the real usefulness of requiring break in a switch statement is for > SIMILAR treatments of similar cases, for example you may require a > few assignments in one case before a more complicated computation which > must be performed for several of the cases. ARGHHH!! That is what subroutines (and macros) are for - to handle common code. And if your language makes them too expensive, either in terms of run time or in terms of programmer effort, then THAT is an even worse problem with the language than the problems with break. -- Lou Steinberg uucp: {pretty much any major site}!rutgers!aramis.rutgers.edu!lou arpa: lou@cs.rutgers.edu
ciardo@software.org (Gianfranco Ciardo) (03/09/90)
In article <Mar.8.10.19.49.1990.3812@atanasoff.rutgers.edu> lou@atanasoff.rutgers.edu (Lou Steinberg) writes:

> >> A multi-case switch is very handy in many situations to reduce identical
> >> treatments for similar cases.
> ARGHHH!! That is what subroutines (and macros) are for - to handle
> common code. And if your language makes them too expensive, either in
> terms of run time or in terms of programmer effort, then THAT is an
> even worse problem with the language than the problems with break.

I think you completely miss the point. Using subroutines is not going to help you make the code shorter, more compact, or less repetitious (which it is not) in a case like this:

    switch (what_to_do) {
    case FIVE_THINGS:
        <statementA>;
    case FOUR_THINGS:
        <statementB>;
    case THREE_THINGS:
        <statementC>;
    case TWO_THINGS:
        <statementD>;
    case ONE_THING:
        <statementE>;
    case NOTHING:
        break;
    }
kassover@jupiter.crd.ge.com (David Kassover) (03/10/90)
In article <672@software.software.org> ciardo@software.org (Gianfranco Ciardo) writes: ...

> I think you completely miss the point.
> Using subroutines is not going to help you make the code shorter, more compact,
> or less repetitious (which it is not) in a case like this:
>
>     switch (what_to_do) {
>     case FIVE_THINGS:
>         <statementA>;
>     case FOUR_THINGS:
>         <statementB>;
>     case THREE_THINGS:
>         <statementC>;
>     case TWO_THINGS:
>         <statementD>;
>     case ONE_THING:
>         <statementE>;
>     case NOTHING:
>         break;
>     }

No, but without fall through, you would write such a thing upside down. Or do something else. A couple of weeks ago I mentioned a (please bear with me) Fortran preprocessor called FLEX, which provided 4 kinds of case statement, two with fall through, two without. One instance: a particular programmer, whom I have worked with for about 10 years, rarely, if ever, used the FLEX cases-with-fallthrough. Now that he has learned C (and not recently, BTW), it seems like he goes out of his way to *USE* fall-through. I wonder why it is so difficult for language designers to provide more than one way to do things?
gat@robotics.Jpl.Nasa.Gov (Erann Gat) (03/10/90)
In article <672@software.software.org>, ciardo@software.org (Gianfranco Ciardo) writes:

> Using subroutines is not going to help you make the code shorter, more compact,
> or less repetitious (which is not) in a case like this:
>
>     switch (what_to_do) {
>     case FIVE_THINGS:
>         <statementA>;
>     case FOUR_THINGS:
>         <statementB>;
>     [etc.]
>     case ONE_THING:
>         <statementE>;
>     case NOTHING:
>         break;
>     }

No, but writing the code like this will:

    if (what_to_do >= FIVE_THINGS)  <statementA>;
    if (what_to_do >= FOUR_THINGS)  <statementB>;
    if (what_to_do >= THREE_THINGS) <statementC>;
    if (what_to_do >= TWO_THINGS)   <statementD>;
    if (what_to_do >= ONE_THING)    <statementE>;

If you wish to quibble over my use of inequalities, replace them with a disjunction of equalities. Erann Gat gat@robotics.jpl.nasa.gov
sanders@sanders.austin.ibm.com (Tony Sanders) (03/10/90)
In article <8218@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu writes:
> So is a multi-alternative case, as provided by Ada:

How do you do this in Ada?

    switch (n) {
    case 0:
        count++;
    case 1:
        ocount++;
    case 2:
        printf("%d %d\n", count, ocount);
        break;
    default:
        printf("unknown n\n");
        break;
    }

See how I left out the breaks on purpose. In Ada you wouldn't be able to do this without duplicating either the case-expression (they aren't always simple numbers) or the statements. -- sanders The 11th commandment: "Thou shalt use lint" For every message of the day, a new improved message will arise to overcome it. Reply-To: cs.utexas.edu!ibmaus!auschs!sanders.austin.ibm.com!sanders
amull@Morgan.COM (Andrew P. Mullhaupt) (03/11/90)
> or less repetitious (which is not) in a case like this: > > switch (what_to_do) { > case FIVE_THINGS: ... This stuff doesn't belong in comp.lang.pascal. It goes in comp.lang.c, right(?). The fistfight over bill wolfe's complaints about C should stay in comp.lang.c. If anyone wants to complain about Pascal, then put it in here. BTW - fall through and the 'double duty' break keyword are definitely examples of C flaws. If you must, flame me, but in comp.lang.c, (OK?) Later, Andrew Mullhaupt
peter@ficc.uu.net (Peter da Silva) (03/12/90)
> This stuff doesn't belong in comp.lang.pascal. It goes in comp.lang.c,
> right(?).

No, it doesn't belong in comp.lang.c either. Language lawyers should hold their court in comp.lang.misc or alt.religion.computers. This thread may have some slight relevance to comp.software-eng. It doesn't belong in any of comp.lang.{c,ada}... and even less in comp.lang.{lisp,modula2,pascal}. -- _--_|\ `-_-' Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>. / \ 'U` \_.--._/ v
jclark@SRC.Honeywell.COM (Jeff Clark) (03/13/90)
In article <775@s5.Morgan.COM> amull@Morgan.COM (Andrew P. Mullhaupt) writes:
This stuff doesn't belong in comp.lang.pascal. It goes in comp.lang.c,
Actually, I don't think it belongs in any of these groups. This "discussion"
seems to have no end and no reasonable resolution since the antagonists'
arguments are based on personal opinions, preferences, and emotions. I've not
seen anyone quote studies of the influence of "human factors" in programming
language design nor has any one proposed such a study as a useful outcome of
this recent flame war (although I must admit I'm wearing out the 'n' key on my
workstation).
comp.lang.religious-wars anyone?
Jeff Clark Honeywell Systems and Research Center Minneapolis, MN
inet: jclark@src.honeywell.com tel: 612-782-7347
uucp: jclark@srcsip.UUCP fax: 612-782-7438
DISCLAIMER: If you think I speak for my employer, you need serious help ...
lindsay@comp.vuw.ac.nz (Lindsay Groves) (03/14/90)
In article <1004@micropen>, dave@micropen (David F. Carlson) writes:

> What break does is *very* well defined and is no more prone to misinterpretation
> than any other non-linear control flow statement in any other PL.

A number of people in this discussion (which I haven't reached the end of yet!) have said things like this, and appear to be suggesting that because something is well defined there is no excuse for anyone misusing it. I disagree with that and also with the second part of this statement. There are languages in which any kind of exit has to explicitly name the construct to be exited -- so there is no possibility of confusion about which construct the exit/break/etc. applies to.

> A multi-case switch is very handy in many situations to reduce identical
> treatments for similar cases. That you ask the question of the usefulness
> of break-per-case/multiple-cases implies that you haven't sufficient experience
> with the construct to judge its merits/weaknesses.
>
> Dijkstra notes that no programming language can prevent a poor programmer from
> creating bad programs.

So why aren't we all still using FORTRAN (or some older dialect)? Why did we all think that unlabelled CASE statements (as in Algol-W and Burroughs Algol) were a big improvement over computed GOTOs in FORTRAN (which is basically what the switch in C is), or that the labelled CASE statement (as in Pascal) was a big improvement over that? Maybe the whole of the last 30 years of work in programming language design has been a dream!!! Lindsay Groves
jaws@chibacity.austin.ibm.com (03/15/90)
Mr Wolf: C allows you to combine cases that have portions of similar code but may have extra lead-in code for a specific case:

    switch (var) {
    case A:
        /* do stuff only case A needs */
    case B:
        /* do stuff case A and case B need done */
        ...
        break;
    /* rest of switch */
    }

This construct is impossible to do cleanly in almost every language I have ever seen, especially Ada. This kind of flexibility is what makes C so powerful, and dangerous. You have to know what you are doing to do it. [ Jeff Wilson :: jaws@chibacity.austin.ibm.com ] [ Consultant from Pencom, Inc. at Human Factors, AWD, IBM Austin. ] [My comments are wholly my own and as such take them for what they are worth. ]
vanavermaet@kerber.dec.com (03/16/90)
with standard_disclaimer; use standard_disclaimer;

In article <1819@awdprime.UUCP>, jaws@chibacity.austin.ibm.com writes...
>This kind of flexibility is what makes C so powerful, and dangerous.
>You have to know what you are doing to do it.

I think this is a very sensible remark. O.K., the semantics are well-defined (as many people have pointed out), but it still IS dangerous. That (IMHO) is a very important factor (and to me, a reason not to use C). Peter Van Avermaet
raw@math.arizona.edu (Rich Walters) (03/17/90)
In article <159@uninet.vbo.dec.com> vanavermaet@kerber.dec.com writes: > >with standard_disclaimer; use standard_disclaimer; > >In article <1819@awdprime.UUCP>, jaws@chibacity.austin.ibm.com writes... >>This kind flexiability is what makes C so powerfull, and dangerous. >>You have know what you are doing to do it. > >I think this is a very sensible remark. > >O.K., the semantics are well-defined (as may people have pointed out), >but it still IS dangerous. That (IMHO) is a very important factor (and to me, >a reason not to use C). > >Peter Van Avermaet Do you refuse to drive a car because an irresponsible person could drive one through a crowded play ground? Do you refuse to own a gun because someone could use a (not necessarily your) gun to kill someone else? Do you refuse to own a (any) knife because someone could use a knife to injure/ kill someone else? Yes C can be dangerous. But many useful things can be or are dangerous. That is why users(programmers) need to be trained in its correct use, just as drivers need to be trained in the correct use of an automobile. To continue the analogy, if the rules of the road aren't obeyed, chaos and destruction reign. Richard Walter
freek@fwi.uva.nl (Freek Wiedijk) (03/17/90)
In article <1527@amethyst.math.arizona.edu> raw@math.arizona.edu (Rich Walters) writes:
>
>Do you refuse to drive a car because an irresponsible person could drive
>one through a crowded playground?
>
>Do you refuse to own a gun because someone could use a (not necessarily
>your) gun to kill someone else?

I would prefer to live in a world in which cars and guns don't exist.
--
Freek "the Pistol Major" Wiedijk      Path: uunet!fwi.uva.nl!freek
#P:+/ = #+/P?*+/ = i<<*+/P?*+/ = +/i<<**P?*+/ = +/(i<<*P?)*+/ = +/+/(i<<*P?)**
perry@apollo.HP.COM (Jim Perry) (03/20/90)
In article <1527@amethyst.math.arizona.edu> raw@math.arizona.edu (Rich Walters) writes:
>In article <159@uninet.vbo.dec.com> vanavermaet@kerber.dec.com writes:
>>O.K., the semantics are well-defined (as many people have pointed out),
>>but it still IS dangerous.  That (IMHO) is a very important factor (and
>>to me, a reason not to use C).
>>
>>Peter Van Avermaet
>
>Do you refuse to drive a car because an irresponsible person could drive
>one through a crowded playground?

No, but I refuse to drive a car without a fuel gauge, or without brakes
(even though I could opt to have aftermarket brakes, controlled by a
bicycle brake handle clamped onto the gearshift, installed), or with no
interlock preventing me from shifting into reverse at highway speed
("but that's a feature!  it means you don't need brakes!").

>Yes, C can be dangerous.  But many useful things can be or are
>dangerous.  That is why users (programmers) need to be trained in its
>correct use, just as drivers need to be trained in the correct use of an
>automobile.  To continue the analogy, if the rules of the road aren't
>obeyed, chaos and destruction reign.

This isn't the issue.  The problem with C is not that untrained
programmers misuse it; it's that its "flexibility" and lack of error
detection allow trained software engineers to make mistakes, through
simple typos or similar dumb but common gaffes, that are detected by
other languages.  [Or, as has come out here, by better C compilers or
interpreters than are commonly encountered, or by some versions of
"lint".]

C is not magic; it's just another Algol-derived programming language.
It doesn't let you do anything that you can't do in other languages (not
some particular other language, or implementation of some other
language, but other, similar, languages).  What C does have going for it
is that it's simple, easy to write compilers for, and implemented on all
sorts of iron.  This is more a historical issue related to its
association with UNIX than a feature of the language design.
Nevertheless, it's why I at least write in C.

A well-trained driver with years of experience can still run out of gas
if they forget to check the gas dipstick before starting the trip.
Presumably after this happens a few times, that driver will remember to
check, or develop the habit of filling the tank when the car has
travelled half the total expected range since the last fill-up, but I'd
rather have a car with a fuel gauge in the first place and never have
the problem arise.

-  Jim Perry   perry@apollo.hp.com   HP/Apollo, Chelmsford MA
This particularly rapid unintelligible patter isn't generally heard
and if it is it doesn't matter.
runyan@hpcuhc.HP.COM (Mark Runyan) (03/20/90)
>/ freek@fwi.uva.nl (Freek Wiedijk) /  5:27 am  Mar 17, 1990 /
>> raw@math.arizona.edu (Rich Walters) writes:
>>
>>Do you refuse to drive a car because an irresponsible person could drive
>>one through a crowded playground?
>>
>>Do you refuse to own a gun because someone could use a (not necessarily
>>your) gun to kill someone else?
>
>I would prefer to live in a world in which cars and guns don't exist.

OK, then what about knives, electricity, die-stamp machines, or fire?
All of these are dangerous and all are useful.  Just because C is
dangerous is *no* reason to avoid using it.  C is incredibly powerful
and it gets the job done.  Instead of saying that it should be thrown
out, how about saying that engineers using it should be trained in
proper software engineering techniques.  No matter what language you
program in, as a professional, you are interested in techniques that
will improve your work.

People advocating one language over another, whether that language be C,
Ada, Pascal, Cobol, or APL, seem to forget that each language has a
purpose and place.  Might as well suggest that Esperanto is better than
English or French, for you may win about as many converts.

Mark Runyan
jharkins@sagpd1.UUCP (Jim Harkins) (03/20/90)
In article <159@uninet.vbo.dec.com> vanavermaet@kerber.dec.com writes:
>In article <1819@awdprime.UUCP>, jaws@chibacity.austin.ibm.com writes...
>>This kind of flexibility is what makes C so powerful, and dangerous.
>>You have to know what you are doing to use it.
>
>I think this is a very sensible remark.

I think the best way I've heard this put is "Pascal gives you a water
pistol filled with distilled water.  C not only gives you a loaded .357,
it points it at your head as a default.  Why do you think Pascal is
taught in school?  And which would you rather have when there was a
hungry bear in the area?"
--
jim		jharkins@sagpd1

"My son beat up the Citizen of the Month at Gage elementary school."
jamesth@microsoft.UUCP (James THIELE) (03/20/90)
In article <159@uninet.vbo.dec.com> vanavermaet@kerber.dec.com writes:
>
>with standard_disclaimer; use standard_disclaimer;
>
>In article <1819@awdprime.UUCP>, jaws@chibacity.austin.ibm.com writes...
>>This kind of flexibility is what makes C so powerful, and dangerous.
>>You have to know what you are doing to use it.
>
>I think this is a very sensible remark.
>
>O.K., the semantics are well-defined (as many people have pointed out),
>but it still IS dangerous.  That (IMHO) is a very important factor (and
>to me, a reason not to use C).
>
>Peter Van Avermaet

Are you implying that Ada is perfectly safe?  I can think of places in
Ada that are, at least in some sense, semantically well defined but
dangerous.  Tasking comes to mind: you must be very careful to use it
correctly.  Generics (which I adore) can have some gorgeous pitfalls for
the unwary.

In fact, while I think Hoare overstates his case in the famous article
where he suggests that Ada not be used in critical applications, his
arguments against Ada shouldn't be casually dismissed.

James Thiele
uunet!microsoft!jamesth
bruce@menkar.gsfc.nasa.gov (Bruce Mount 572-8408) (03/20/90)
>>> [Stuff deleted]
>>>That (IMHO) is a very important factor (and to me, a reason not to use C).
>>>
>>>Peter Van Avermaet

>Do you refuse to drive a car because an irresponsible person could drive
>one through a crowded playground?

I won't drive a car without seatbelts or bumpers.

>Do you refuse to own a gun because someone could use a (not necessarily
>your) gun to kill someone else?

I won't use a gun without a safety.

>Do you refuse to own a (any) knife because someone could use a knife to
>injure or kill someone else?

I am very careful whenever I use a sharp knife.

I've had professional driving classes, gun classes, and cooking classes,
but I still make mistakes.  Don't you?  Whenever I use something
dangerous (e.g. a loaded gun) I use it very slowly and carefully.  Not
from lack of training, but BECAUSE of training: survival training.

I use C and UNIX every day (and have for years), but I can write fully
finished and tested software faster in almost any other language.  Why?
Because even the most experienced C programmer eventually gets bitten by
their own mistakes.  AND I FULLY ADMIT THAT THEY ARE MY OWN MISTAKES,
but so what?  Of course I make mistakes in all languages (dammit, I HATE
saying that, but it's true), but most other languages provide features
that limit my ability to self-destruct.  C does provide wonderful
low-level access to bits and bytes, but these days so do many other
languages, at reduced risk.

--Bruce

=================================================
|  Bruce Mount        "Brevity is best"         |
|  bruce@atria.gsfc.nasa.gov                    |
=================================================