conor@lion.inmos.co.uk (Conor O'Neill) (06/03/91)
Consider the following function:
void f(void)
{
    void (*ptr)(void);                  /* pointer to function taking no arguments, returning void */
    ptr = 0;                            /* 1 */
    ptr = (void *)0;                    /* 2 */
    ptr = (void (*)(void))0;            /* 3 */
    ptr = (void (*)(void))((void *)0);  /* 4 */
}
When compiled with gcc version 1.39, with -ansi -pedantic,
it reports a warning on the line marked /* 2 */:
c.c:6: warning: assignment between incompatible pointer types
(Note, without -pedantic gcc is silent)
On our system, NULL is #defined in a header file to be ((void *)0),
thus gcc will give a warning when attempting to set a function pointer
to NULL.
The ANSI standard states (section 3.3.16.1) that one constraint on simple
assignment is that (...) "the left operand is a pointer and the right is a
null pointer constant".
The index points me to section 3.2.2.3 for a definition:
"An integral constant expression with the value 0, or such an expression
cast to type void *, is called a null pointer constant.
If a null pointer constant is assigned to or compared for equality to a
pointer, the constant is converted to a pointer of that type"
Thus, I claim that /* 2 */ should be converted by the compiler to /* 4 */,
and thus should not produce a warning.  Hence gcc is incorrect on this point.
---
Conor O'Neill, Software Group, INMOS Ltd., UK.
UK: conor@inmos.co.uk US: conor@inmos.com
"It's state-of-the-art" "But it doesn't work!" "That is the state-of-the-art".
diamond@jit533.swstokyo.dec.com (Norman Diamond) (06/04/91)
In article <16386@ganymede.inmos.co.uk> conor@inmos.co.uk () writes:
>    void (*ptr)(void); /* pointer to void function returning void */
>    ptr = (void *)0; /* 2 */
>    ptr = (void (*)(void))((void *)0); /* 4 */
>When compiled with gcc version 1.39, with -ansi -pedantic,
>it reports a warning on the line marked /* 2 */:
>c.c:6: warning: assignment between incompatible pointer types
>
>ANSI standard states (section 3.3.16.1) that a constraint on simple assignment
>"the left operand is a pointer and the right is a null pointer constant".
>The index points me to section 3.2.2.3 for a definition:
>"An integral constant expression with the value 0, or such an expression
>cast to type void *, is called a null pointer constant.
>If a null pointer constant is assigned to or compared for equality to a
>pointer, the constant is converted to a pointer of that type"
>Thus, I claim that /* 2 */ should be converted by the compiler to /* 4 */
>and thus should not create a warning. Hence gcc is incorrect in this point.

I agree.

A null pointer constant of type (void *) has a certain privilege which is
not inherited from either of its parent categories; it is compatible with
function pointer types as well as with object pointer types.  (Non-constant
null pointers of type (void *) are compatible only with object pointer
types, and constant non-null pointers of type (void *) are also compatible
only with object pointer types.)

Now to be pedantic, my statement might be too broad.  If an integral
constant expression with value 0 is cast to type (void *), it is a null
pointer constant.  But if a null pointer constant is cast to type (void *),
is it still a null pointer constant?  It is still constant (I think) and is
still a null pointer, but does it still have its extra privilege?  The
standard doesn't say!

    ptr = (void *)0;          /* Conversion to type of function pointer, yes */
    ptr = (void *)(void *)0;  /* Not defined! */

>On our system, NULL is #defined in a header file to be ((void *)0),

So, be careful casting your uses of the NULL macro!!
--
Norman Diamond       diamond@tkov50.enet.dec.com
If this were the company's opinion, I wouldn't be allowed to post it.
Permission is granted to feel this signature, but not to look at it.
bhoughto@pima.intel.com (Blair P. Houghton) (06/10/91)
Old chestnuts crack hard...

In article <1991Jun4.012914.25418@tkou02.enet.dec.com> diamond@jit533.enet@tkou02.enet.dec.com (Norman Diamond) writes:
>>On our system, NULL is #defined in a header file to be ((void *)0),
>So, be careful casting your uses of the NULL macro!!

Better, su to root and erase the `(void *)' part.  The most general, and
therefore most valuable, way to define NULL is to simply map it to the
digit 0.  Anything else only complicates the situation unnecessarily
(although it might illuminate the occasional compiler bug).

--Blair
  "Blink-blink."
sef@kithrup.COM (Sean Eric Fagan) (06/10/91)
In article <4641@inews.intel.com> bhoughto@pima.intel.com (Blair P. Houghton) writes:
>Better, su to root and erase the `(void *)' part.  The most
>general, and therefore most valuable, way to define NULL is
>to simply map it to the digit 0.

This does not handle the case where a prototype is not in scope.  E.g.,

	void
	foo() {
		bar(NULL);
	}

	void
	bar(char *b) {
		if (NULL == b) {
			...
		} else {
			...
		}
	}

-- 
Sean Eric Fagan | "I made the universe, but please don't blame me for it;
sef@kithrup.COM | I had a bellyache at the time."
----------------+           -- The Turtle (Stephen King, _It_)
Any opinions expressed are my own, and generally unpopular with others.
diamond@jit533.swstokyo.dec.com (Norman Diamond) (06/10/91)
In article <1991Jun10.061202.25199@kithrup.COM> sef@kithrup.COM (Sean Eric Fagan) writes:
>In article <4641@inews.intel.com> bhoughto@pima.intel.com (Blair P. Houghton) writes:
>>Better, su to root and erase the `(void *)' part.  The most
>>general, and therefore most valuable, way to define NULL is
>>to simply map it to the digit 0.
>
>This does not handle the case where a prototype is not in scope.  E.g.,
>	void
>	foo() {
>		bar(NULL);
>	}

Yes indeed, the best way to implement a processor for the language does
not handle the case where a programmer doesn't know how to use the
language.  So what?
--
Norman Diamond       diamond@tkov50.enet.dec.com
If this were the company's opinion, I wouldn't be allowed to post it.
Permission is granted to feel this signature, but not to look at it.
sef@kithrup.COM (Sean Eric Fagan) (06/10/91)
In article <1991Jun10.073125.25120@tkou02.enet.dec.com> diamond@jit533.enet@tkou02.enet.dec.com (Norman Diamond) writes:
>Yes indeed, the best way to implement a processor for the language does not
>handle the case where a programmer doesn't know how to use the language.
>So what?

ANSI does not require a prototype for non-variadic functions (as far as I
can tell).  As a result, for a certain class of popular machines, under
certain compiler options, defining NULL as 0 will be incorrect, while
'(void *)0' is correct.

And the program that would core-dump on this would be perfectly correct,
according to ANSI.  The header file would, therefore, be wrong.
-- 
Sean Eric Fagan | "I made the universe, but please don't blame me for it;
sef@kithrup.COM | I had a bellyache at the time."
----------------+           -- The Turtle (Stephen King, _It_)
Any opinions expressed are my own, and generally unpopular with others.
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (06/10/91)
In article <1991Jun10.081924.26439@kithrup.COM>, sef@kithrup.COM (Sean Eric Fagan) writes:
> ANSI does not require a prototype for non-variadic functions (as far as I
> can tell).  As a result, for a certain class of popular machines, under
> certain compiler options, defining NULL as 0 will be incorrect, while
> '(void*)0' is correct.
> And the program that would core-dump on this would be perfectly correct,
> according to ANSI.  The header file would, therefore, be wrong.

A header file #defining NULL to be 0 would _not_ be "wrong", because that
is one of the values that the standard explicitly allows.  A program which
breaks when NULL is #defined to be 0 is wrong, not the header.

Nor is it the case that (void*)0 is any more correct than 0.  It too is
one of the values which the ANSI committee blessed, and, for a certain
class of popular machines, under certain compiler options, passing
(void*)0 to a function which was expecting (int*) will break that
function, whereas passing 0 would have worked.

The moral of the story is that
 -- if there is a prototype in scope, you're ok with either (void*)0 or 0
 -- if there is no prototype, but the programmer has followed the
    discipline recommended by the net.c.wizards I learned it from
    ("always provide an explicit cast for NULL arguments"), you are
    again ok with either (void*)0 or 0
 -- if you have an *uncast* NULL argument to a function without a
    prototype in scope, (void*)0 will save you on *some* systems and
    kill you on others, and so will 0.  TANSTAAFL!

My own experience is that a plain 0 is less trouble than (void*)0.
-- 
Should you ever intend to dull the wits of a young man and to
incapacitate his brains for any kind of thought whatever, then you cannot
do better than give him Hegel to read.  -- Schopenhauer.
hp@vmars.tuwien.ac.at (Peter Holzer) (06/10/91)
sef@kithrup.COM (Sean Eric Fagan) writes:
>ANSI does not require a prototype for non-variadic functions (as far as I
>can tell).  As a result, for a certain class of popular machines, under
>certain compiler options, defining NULL as 0 will be incorrect, while
>'(void*)0' is correct.
>And the program that would core-dump on this would be perfectly correct,
>according to ANSI.  The header file would, therefore, be wrong.

No.  Consider the following program on a machine where the representation
of char/void * and int * is different:

-----------------------------------------------------------------------
#define NULL ((void *)0)

int printf (char const * fmt, ...);

int main ()
{
	a (NULL);
	return 0;
}

int a (ip)
	int * ip;
{
	if (ip != NULL) printf ("%d\n", * ip);
}
-----------------------------------------------------------------------

The function expects a pointer to int, but gets a pointer to void.  Thus
the contents of ip are undefined and it may well compare != NULL, causing
a core dump (or crash, or rude mail to your boss :-) on the printf.

Moral: If no prototypes are in scope you always have to cast NULL to the
correct pointer type.  So it does not matter if NULL is defined as 0 or
(void *)0.  Of course (void *)0 may save many sloppy programmers, but it
can cause problems if you want to cast NULL to a function pointer (as the
original poster mentioned).
-- 
|  _    | Peter J. Holzer             | Think of it  |
| |_|_) | Technical University Vienna | as evolution |
| | |   | Dept. for Real-Time Systems | in action!   |
|  __/  | hp@vmars.tuwien.ac.at       | Tony Rand    |
gwyn@smoke.brl.mil (Doug Gwyn) (06/10/91)
In article <1991Jun10.061202.25199@kithrup.COM> sef@kithrup.COM (Sean Eric Fagan) writes:
>This does not handle the case where a prototype is not in scope.

But then, nothing will, if different pointer types have different sizes.
gwyn@smoke.brl.mil (Doug Gwyn) (06/10/91)
In article <1991Jun10.081924.26439@kithrup.COM> sef@kithrup.COM (Sean Eric Fagan) writes:
>The header file would, therefore, be wrong.

No, Norman was right.  #define NULL 0 is always a proper way for an
implementation to define NULL in the standard headers.
steve@groucho.ucar.edu (Steve Emmerson) (06/11/91)
In <1991Jun10.081924.26439@kithrup.COM> sef@kithrup.COM (Sean Eric Fagan) writes:
>ANSI does not require a prototype for non-variadic functions (as far as I
>can tell).  As a result, for a certain class of popular machines, under
>certain compiler options, defining NULL as 0 will be incorrect, while
>'(void*)0' is correct.
>And the program that would core-dump on this would be perfectly correct,
>according to ANSI.  The header file would, therefore, be wrong.

No.  Since the program didn't cast the macro NULL into the appropriate
type, it's relying on implementation-defined (or is it undefined?)
behavior; hence, the program isn't strictly conforming ("correct" in your
terminology).

Steve Emmerson        steve@unidata.ucar.edu        ...!ncar!unidata!steve
peter@ficc.ferranti.com (Peter da Silva) (06/15/91)
In article <4641@inews.intel.com> bhoughto@pima.intel.com (Blair P. Houghton) writes:
> Old chestnuts crack hard...
> The most general, and therefore most valuable, way to define NULL is
> to simply map it to the digit 0.

And you're working at Intel...

On an 80x86 (x<3), there exist models where int = 16 bits, pointer = 32
bits.  On such a machine,

	execl("/bin/sh", "sh", "-c", "echo", NULL);

(which is a common idiom in UNIX source groups) will display amazing
things, assuming you don't get a core dump...
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX  77487-5012;  `-_-'  "Have you hugged your wolf, today?"
dkeisen@leland.Stanford.EDU (Dave Eisen) (06/15/91)
In article <EE-B=O2@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>	execl("/bin/sh", "sh", "-c", "echo", NULL);
>
>(which is a common idiom in UNIX source groups) will display amazing

It is a common BUG in UNIX source groups.
-- 
Dave Eisen                        dkeisen@leland.Stanford.EDU
1101 San Antonio Road, Suite 102
Mountain View, CA 94043           (415) 967-5644
gwyn@smoke.brl.mil (Doug Gwyn) (06/16/91)
In article <EE-B=O2@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>	execl("/bin/sh", "sh", "-c", "echo", NULL);
>(which is a common idiom in UNIX source groups) ...

A common INCORRECT idiom!  This error is well known and has been discussed
to death, both on Usenet and in C programming textbooks and style guides.
bhoughto@nevin.intel.com (Blair P. Houghton) (06/16/91)
In article <EE-B=O2@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <4641@inews.intel.com> bhoughto@pima.intel.com (Blair P. Houghton) writes:
>> Old chestnuts crack hard...

And old cocoanuts post garbage...

>> The most general, and therefore most valuable, way to define NULL is
>> to simply map it to the digit 0.
>
>And you're working at Intel...

So?  Not everyone at Intel is on the Intel*86(tm) design teams (but I _do_
have a poster of a plot of an 80386DX(tm) behind my chair, so if you want
to you can go right ahead and believe that I drew it freehand...), and I
couldn't tell you if _anyone_ at Intel is the owner of the design of _any_
compiler.

>On an 80x86 (x<3), there exist models where
>int = 16 bits, pointer = 32 bits.  On such a machine,
>
>	execl("/bin/sh", "sh", "-c", "echo", NULL);
>
>(which is a common idiom in UNIX source groups) will display amazing
>things, assuming you don't get a core dump...

1.  I am not responsible for bad/typical/good design of compilers.

2.  I am not responsible for your incorrect use of the semantics of
    function calls.

3.  I do, however, feel responsible for your education and edification as
    long as you are part of my community, so I will remind you that we've
    been 'round and 'round the argument-size vs. parameter-size ambiguity
    bush, and that the standard defines this problem, even if it is a bit
    shrouded in documentational mumbo-jumbo.

--Blair
  "But if you ever need an Intel186(tm)-based, custom
   microcontroller, give me a call and I'll explain
   how I'm not in sales, either..."
fischer@iesd.auc.dk (Lars P. Fischer) (06/19/91)
>>>>> bhoughto@pima.intel.com (Blair P. Houghton) writes:

Blair> The most general, and therefore most valuable, way to define NULL is
Blair> to simply map it to the digit 0.

>>>>> On 14 Jun 91 19:29:11 GMT, peter@ficc.ferranti.com (Peter da Silva) said:

Peter> And you're working at Intel... On an 80x86 (x<3), there exist
Peter> models where int = 16 bits, pointer = 32 bits. ...

Oh, no, no, no.  The NULL pointer war hits comp.std.c.  Next thing we'll
have a FAQ, and Chris Torek will have to explain how to deal with NULL
pointers once a month, and ....

Let's not create a new comp.lang.c.  Please.

/Lars
-- 
Lars Fischer, fischer@iesd.auc.dk    | It takes an uncommon mind to think of
CS Dept., Univ. of Aalborg, DENMARK. | these things.  -- Calvin
scs@adam.mit.edu (Steve Summit) (06/20/91)
In article <FISCHER.91Jun18210809@cauchy.iesd.auc.dk> fischer@iesd.auc.dk (Lars P. Fischer) writes:
>Oh, no, no, no.  The NULL pointer war hits comp.std.c.  Next thing we'll
>have a FAQ and Chris Torek will have to explain how to deal with NULL
>pointers once a month and ....

I share Lars's disgust.

Mark Brader has suggested that I try posting a copy of the comp.lang.c FAQ
list here in comp.std.c, at least on a trial basis.  I'd rather not
clutter this group with it, but of course I'd also rather not see this
group get mired in the same old tired inanities.  If I post it at all, I
will probably post only the abridged version (~650 lines), once a month.
Please mail me your comments on this suggestion.

                                            Steve Summit
                                            scs@adam.mit.edu
peter@ficc.ferranti.com (Peter da Silva) (06/20/91)
In article <16418@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
> In article <EE-B=O2@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
> >	execl("/bin/sh", "sh", "-c", "echo", NULL);
> >(which is a common idiom in UNIX source groups) ...
> A common INCORRECT idiom!

Oh, I know it's incorrect.  It's also common enough that a compiler vendor
on a system where 0 doesn't have the same size and bit pattern as
(void *)0 would be foolish to #define NULL as 0 in <stdio.h>.

Yes, it's better that everyone write correct code.  But be liberal with
what you accept... after all, the person you're punishing with a B&D
definition of NULL is your customer.
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX  77487-5012;  `-_-'  "Have you hugged your wolf, today?"
peter@ficc.ferranti.com (Peter da Silva) (06/20/91)
In article <4728@inews.intel.com> bhoughto@nevin.intel.com (Blair P. Houghton) writes:
> 1. I am not responsible for bad/typical/good design of compilers.
> 2. I am not responsible for your incorrect use of the semantics
>    of function calls.
> 3. I do, however, feel responsible for your education and
>    edification as long as you are part of my community,

I'm not responsible for that code.  I know it's wrong.  It is, however,
something that I have to fix over and over and over again because one of
your co-workers at Intel decided to use a bondage-and-discipline
definition of NULL in <stdio.h>.  Technically correct, but practically a
problem.

I have better things to do with my time than fixing all the broken
software in comp.sources.  If I can get it working by futzing around in a
defs.h file instead of groveling through the source to elm (a
particularly poorly written example), I will.

And: on an Intel 80x86 (x<3) the best definition for NULL is (void *)0.
Period.
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX  77487-5012;  `-_-'  "Have you hugged your wolf, today?"
peter@ficc.ferranti.com (Peter da Silva) (06/20/91)
In article <1991Jun19.182420.12673@athena.mit.edu> scs@adam.mit.edu writes:
> I share Lars's disgust.

Why?

The "best" definition for "NULL" is entirely compiler/hardware dependent.
*Most* of the time 0 is unequivocally correct.  There are, however,
computers and compilers for which ((void *)0) is more useful.  No more
"correct", from a standpoint of satisfying the ANSI standard, but more
practical for people who want to port programs to that platform.

And that's the point of *having* a standard for the language, after all.
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX  77487-5012;  `-_-'  "Have you hugged your wolf, today?"
gwyn@smoke.brl.mil (Doug Gwyn) (06/20/91)
In article <MR0CV2H@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>Oh, I know it's incorrect.  It's also common enough that a compiler vendor
>on a system where 0 doesn't have the same size and bit pattern as (void *)0
>would be foolish to #define NULL as 0 in <stdio.h>.
>Yes, it's better that everyone write correct code.  But be liberal with
>what you accept... after all, the person you're punishing with a B&D
>definition of NULL is your customer.

You're NOT doing your customer any favor by catering to his
misconceptions.  What about systems where different pointer types have
different sizes?  There is no way the implementation can fully compensate
for the programmer having incorrectly coded his use of the NULL macro, and
by trying to accommodate such abuse at all you're merely reinforcing the
mistaken notion that caused the programmer to make the mistake in the
first place.  Sooner or later it is going to catch up with him, and the
sooner the better.
jtn@potomac.ads.com (John T. Nelson) (06/20/91)
>peter@ficc.ferranti.com (Peter da Silva) writes:
> <1991Jun19.182420.12673@athena.mit.edu> scs@adam.mit.edu writes:
>> I share Lars's disgust.
>
>Why?
>
>The "best" definition for "NULL" is entirely compiler/hardware dependent.
>*Most* of the time 0 is unequivocally correct.  There are, however, computers
>and compilers for which ((void *)0) is more useful.  No more "correct", from
>a standpoint of satisfying the ANSI standard, but more practical for people
>who want to port programs to that platform.
>
>And that's the point of *having* a standard for the language, after all.

I believe that there is some sort of caveat to using "0" with variable
argument lists.  Perhaps this is Lars's problem with the use of "0".
Remember that the example given was a call to execl, which takes variable
arguments until terminated with null.

I have to agree with Peter da Silva, though.  The use of "0" or a symbol
called NULL which is defined to be "0" should be identical.  The compiler
will generate the same null pointer.  NULL defined as (void *)0 should
also work, although it isn't necessary.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
SPOKEN: John T. Nelson       ORGANIZATION: Advanced Decision Systems
PHONE: (703) 243-1611        UUCP: kzin!speaker@mimsy.umd.edu
INTERNET: jtn@potomac.ads.com
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
jimp@cognos.UUCP (Jim Patterson) (06/21/91)
In article <ND1CKO3@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>
>The "best" definition for "NULL" is entirely compiler/hardware dependent.

I don't agree.  Any compiler should generate the SAME code whether you
define NULL to be 0 or (void *)0, provided it is coerced to the correct
pointer type where used.  So, (void *)0 should always be correct if you
follow the rules, regardless of compiler/hardware, and generate equivalent
code.

On the other hand, defining NULL to be 0 allows NULL to be misused as a
non-pointer constant.  For example, I've seen this code around here the
odd time:

	char x[5];
	x[4] = NULL;	/* sic */

If NULL is (void *)0, these slipups will likely be caught.  If only some
compilers define it this way, you only find out about it when you compile
with those compilers.

>And that's the point of *having* a standard for the language, after all.

Which is why there should be a standard definition.  It avoids surprises
when porting to new environments.  Except for the substantial prior art
for #define NULL (0), I think (void *)0 is ALWAYS the best definition.

Of course, prior art is an important consideration, so I can't fault the
X3J11 group for leaving the definition somewhat ill-defined, but if it
were designed again from scratch I think the argument for #define NULL (0)
would be pretty weak.
-- 
Jim Patterson              Cognos Incorporated
UUNET:uunet!cognos.uucp!jimp              P.O. BOX 9707
BITNET:ccs.carleton.ca!cognos.uucp!jimp   3755 Riverside Drive
PHONE:(613)738-1440 x6112                 Ottawa, Ont  K1G 3Z4
scs@adam.mit.edu (Steve Summit) (06/21/91)
Let's not have any more null pointer discussion here (of any kind, neither
unfounded speculation nor knowledgeable, correct responses) until every
masochist who is actually interested in discussing it has trotted over to
comp.lang.c and read the frequently-asked questions list there.  (If you
can't find a copy, be patient until July 1 when it will be posted again.
Null pointers have been dogging Usenet for many years; a week's delay
won't kill anybody.)

Followups to alt.religion.computers, and apologies to comp.std.c for this
content-free, inflammatory meta-posting.

                                            Steve Summit
                                            scs@adam.mit.edu
diamond@jit533.swstokyo.dec.com (Norman Diamond) (06/21/91)
In article <9752@cognos.UUCP> jimp@cognos.UUCP (Jim Patterson) writes:
>Which is why there should be a standard definition.  It avoids surprises
>when porting to new environments.

Correct coding avoids most surprises.  If a correctly written program gets
a surprise, ask the compiler vendor for a refund.  (DISCLAIMER: This is
personal opinion; my employer did not write this.)

>Except for the substantial prior art for #define NULL (0), I think
>(void*)0 is ALWAYS the best definition.
>Of course, prior art is an important consideration, so I can't fault
>the X3J11 group for leaving the definition somewhat ill-defined, but
>if it were designed again from scratch I think the argument for
>#define NULL (0) would be pretty weak.

If C were designed again from scratch, it might have (void *) from
scratch.  Remember, a lot of null pointers were coded when C didn't have
(void *).

Also, if C were designed again from scratch, NULL might be a reserved
word, built into the language like in every other language that has
pointers, instead of being #defined in terms of anything.

Also, if C were designed again from scratch, it wouldn't be C.  Kernighan
and Ritchie would probably be willing to apply lessons of the 70's.
--
Norman Diamond       diamond@tkov50.enet.dec.com
If this were the company's opinion, I wouldn't be allowed to post it.
Permission is granted to feel this signature, but not to look at it.
bhoughto@hopi.intel.com (Blair P. Houghton) (06/21/91)
In article <SR0CXCH@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>I'm not responsible for that code.  I know it's wrong.  It is, however,
>something that I have to fix over and over and over again because one
>of your co-workers at intel decided to use a bondage-and-discipline
>definition of NULL in <stdio.h>.  Technically correct, but practically
>a problem.

Do a global search-and-replace on the files, especially if you ever expect
to redistribute the code.  That's why standard headers were invented, to
be standard.  By mucking about in yours, you engender bad design in
locally-produced software.

>I have better things to do with my time than fixing all the broken
>software in comp.sources.

There are about eight ways to attack this obvious piece of bait, none of
which I'm about to bother with.

>If I can get it working by futzing around
>in a defs.h file instead of groveling through the source to elm (a
>particularly poorly written example), I will.

Grovelling?  All you need is

	sed 's/\([^_A-Za-z]\)NULL\([^_A-Za-z]\)/\1(void *)NULL\2/g'

>And: on an intel 80x86 (x<3) the best definition for NULL is (void *)0.

I wouldn't know.  I've never owned one.  (But I bet Barry Margolin never
owned a Connection Machine, either... :-)).  I would guess, however, that
the "best" definition is still `0'.

--Blair
  "It's what I'd use on my personal Delta..."
peter@ficc.ferranti.com (Peter da Silva) (06/21/91)
In article <16468@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
> In article <MR0CV2H@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
> >Yes, it's better that everyone write correct code.  But be liberal with
> >what you accept... after all, the person you're punishing with a B&D
> >definition of NULL is your customer.
> You're NOT doing your customer any favor by catering to his misconceptions.

"My" customer [1] here isn't the one who wrote the code.  The code may
have been written years ago on a VAX, by some grad student hacker who'd
never used anything else.  "My" customer is simply trying to get "elm" [2]
to compile on his 80286-based Xenix box.  "He" isn't going to be satisfied
by some ivory tower assertion that the bug is in "elm" ("he" knows that
already) and he should spend the next week grovelling through the code
fixing the bug for his own good.  "He" has other work to do.

> What about systems where different pointer types have different sizes?

Then a different definition of NULL is appropriate.

> There is no way the implementation can fully compensate for the programmer
> having incorrectly coded his use of the NULL macro,

True.  But if there are cases where an implementation *can* do so, it
should.  "Be liberal in what you accept, conservative in what you
generate."

> and by trying to
> accommodate such abuse at all you're merely reinforcing the mistaken notion
> that caused the programmer to make the mistake in the first place.  Sooner
> or later it is going to catch up with him, and the sooner the better.

The programmer in question will probably never use this compiler or
hardware.  Why punish the innocent victim of his code?

[1] For "My" customer, "he", etcetera... read "me".
[2] For "elm"... read "elm".  Horrible code.
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX  77487-5012;  `-_-'  "Have you hugged your wolf, today?"
jef@well.sf.ca.us (Jef Poskanzer) (06/22/91)
Hey, I've got an idea: how about not using NULL?  Using a 0 explicitly
cast to the appropriate pointer type works perfectly for all machines and
compilers.  Furthermore, it makes the code more self-documenting and
maintainable, and less error-prone.

Oh, wait.  It means typing a few more characters.  So much for that idea.
---
Jef

          Jef Poskanzer  jef@well.sf.ca.us  {apple, ucbvax, hplabs}!well!jef
"You can put me in jail, but you cannot give me narrower quarters than as a
seaman I have always had.  You cannot give me coarser food than I have
always eaten.  You cannot make me lonelier than I have always been."
                                  -- Andrew Furuseth, Emancipator of Seamen
gwyn@smoke.brl.mil (Doug Gwyn) (06/22/91)
In article <C62C9-2@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>> What about systems where different pointer types have different sizes?
>Then a different definition of NULL is appropriate.

NO NO NO.  There is NO appropriate definition of NULL if your criterion be
that it "work" without proper casting.  If you declare it as, say,
something that will work as an int*, then it cannot work as a char* on a
system where different pointer types have different sizes.  In other
words, the problem you think you're trying to solve cannot be solved in
general.

As I said, you do nobody a favor when you make the programmer think that
he doesn't need to be careful (because your implementation will "fix" his
sloppiness).

All this stuff was thoroughly discussed in X3J11, by the way.  The outcome
was what you now see specified in the C standard.
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (06/23/91)
In article <25572@well.sf.ca.us> Jef Poskanzer <jef@well.sf.ca.us> writes:
> Hey, I've got an idea: how about not using NULL?  Using a 0 explicitly
> cast to the appropriate pointer type works perfectly for all machines
> and compilers.  Furthermore, it makes the code more self-documenting
> and maintainable, and less error-prone.

Yeah.  It's been a long time since I've used NULL in any program.  When I
start maintaining someone else's code, the first thing I do is trash all
the NULLs.  It's amazing how many bugs this catches.  (Admittedly, they'd
be caught by ANSI prototypes too, but most C code isn't ANSI C.)

> Oh, wait.  It means typing a few more characters.  So much for that idea.

:-)

I'm not sure what people are arguing here.  Yes, programmers should always
cast NULL appropriately if they use it at all, because there exist vendors
which #define NULL 0.  Yes, vendors should define NULL as ((char *) 0) or
((void *) 0), because there exist programmers who use NULL without casting
and it's better to detect such misuse.  What's the issue?

---Dan
gwyn@smoke.brl.mil (Doug Gwyn) (06/23/91)
In article <1146.Jun2221.20.2291@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>I'm not sure what people are arguing here.  Yes, programmers should
>always cast NULL appropriately if they use it at all, because there
>exist vendors which #define NULL 0.  Yes, vendors should define NULL as
>((char *) 0) or ((void *) 0), because there exist programmers who use
>NULL without casting and it's better to detect such misuse.  What's the
>issue?

The issue is that #defining NULL as ((void*)0) does NOT detect such
misuse, nor does it adequately compensate for it in all cases.  #defining
NULL as ((char*)0) is simply wrong.
r_miller@apollo.hp.com (Roger Miller) (06/24/91)
> The issue is that #defining NULL as ((void*)0) does NOT detect such
> misuse, nor does it adequately compensate for it in all cases.

Another reason you might prefer plain 0 to (void*)0 is that C++ does not allow implicit casts from void* to other pointer types.  So if NULL is defined as (void*)0 you can't write "int *p = NULL".  This is of course irrelevant in a pure C or pure C++ environment, but C++ programmers often want to share C header files, and I have seen this lead to

    #ifdef NULL
    #undef NULL
    #define NULL my-way

battles in the source code.
peter@ficc.ferranti.com (Peter da Silva) (06/25/91)
In article <25572@well.sf.ca.us> Jef Poskanzer <jef@well.sf.ca.us> writes:
> Hey, I've got an idea: how about not using NULL?  Using a 0 explicitly
> cast to the appropriate pointer type works perfectly for all machines
> and compilers.  Furthermore, it makes the code more self-documenting
> and maintainable, and less error-prone.

Agreed 100%.  For me, at least, I've never been talking about how one should write code.  There is no question but that this is the best coding practice (for myself, I often use "nil(type)", defined as "((type)0)", which can't be easily abused).
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012;  `-_-'  "Have you hugged your wolf, today?"
peter@ficc.ferranti.com (Peter da Silva) (06/25/91)
In article <4796@inews.intel.com> bhoughto@hopi.intel.com (Blair P. Houghton) writes:
> Do a global search-and-replace on the files,
> especially if you ever expect to redistribute the code.

Only to real computers.

> That's why standard headers were invented, to be standard.

Fine.  ((void *)0) still conforms to the standard.

> sed 's/\([^_A-Za-z]\)NULL\([^_A-Za-z]\)/\1(void *)NULL\2/g'

Right.  Boy, that's going to win friends and influence people.  Plus, what do you think that'd do when I want to apply the next patch that comes from the net?  Instead of grovelling around in code beforehand, I'll have to do it over and over again for every patch.
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012;  `-_-'  "Have you hugged your wolf, today?"
peter@ficc.ferranti.com (Peter da Silva) (06/25/91)
In article <16481@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
> In article <C62C9-2@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
> >> What about systems where different pointer types have different sizes?
> >Then a different definition of NULL is appropriate.
> NO NO NO.  There is NO appropriate definition of NULL if your criterion
> be that it "work" without proper casting.

No, my criterion is that it allow the maximum possible amount of comp.sources to compile and execute.  If there is no definition that'll catch 100%, then we'll have to be satisfied with 75%, or 50%.  If an unadorned "0" is the best solution, that's the one to use.  If "((void *)0)" is the best solution, then that's the one to use.  One or the other is appropriate for any given machine.  Whichever one it is depends entirely on the machine.

> If you declare it as, say,
> something that will work as an int*, then it cannot work as a char* on
> a system where different pointer types have different sizes.

That's right, and on that system you use "0".  If ints and pointers *also* have different sizes, you're screwed.  Too bad.

> In other words, the problem you think you're trying to solve cannot be
> solved in general.

No, the problem *you* think I'm trying to solve can't be solved.  That's OK.  Since I'm not trying to solve that particular problem it doesn't bother me.

> All this stuff was thoroughly discussed in X3J11, by the way.  The
> outcome was what you now see specified in the C standard.

Uh, huh.  And that is that "0" and "((void *)0)" are acceptable.  Pick one.  Depending on the hardware and compiler.
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012;  `-_-'  "Have you hugged your wolf, today?"
djimenez@ringer.cs.utsa.edu (Daniel Jimenez) (06/26/91)
In article <XZ4C6T6@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <16481@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
>> In article <C62C9-2@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>> >> What about systems where different pointer types have different sizes?
>> >Then a different definition of NULL is appropriate.
>depends entirely on the machine.
>[more stuff deleted]
>> If you declare it as, say,
>> something that will work as an int*, then it cannot work as a char* on
>> a system where different pointer types have different sizes.
>
>That's right, and on that system you use "0".  If ints and pointers *also*
>have different sizes, you're screwed.  Too bad.

I thought 0 in a context where a pointer is expected (e.g., int *p; p = 0;) wasn't the integer 0, rather whatever that machine's representation of a null pointer is.  So it shouldn't matter if a pointer is 21.5 bits and an integer is 67 bits; '0' is a pointer value, which could be all bits zero or some other unique value.

Anyway, if you have a machine where pointers to different types are of different sizes, then you have much bigger problems than what to #define NULL as.  Like, let's say your machine has one size for character pointers and another for integer pointers.  Let's also say that your <stdlib.h> file, like mine, contains

    char *malloc (blah blah blah);

What if you wanted to allocate space for an integer like this:

    int *p;
    ...
    /* decision made to allocate integer */
    p = (int *) malloc (sizeof (int));

(maybe you would want to do this in a situation where int*'s are 16 bits and ints are 32 bits, to save space)

malloc would return a value of one size which would be a valid character pointer, but the wrong size for an integer pointer.  I've never heard of machines like that.  Could someone give an example?
For what it's worth, here's my opinion on NULL:  We should all contribute to a fund to help build a time machine so someone can go back in time and tell K&R to include something like Pascal's nil in C.  :-)
-- 
* Daniel A. Jimenez *  Please excuse my longwindedness.
* djimenez@ringer.cs.utsa.edu *  This Sun terminal makes everything
* dajim@lonestar.utsa.edu *  I write seem important.
* Opinions expressed here are mine only, and not those of UTSA.
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (06/26/91)
In article <16506@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
> In article <1146.Jun2221.20.2291@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
> >I'm not sure what people are arguing here.  Yes, programmers should
> >always cast NULL appropriately if they use it at all, because there
> >exist vendors which #define NULL 0.  Yes, vendors should define NULL as
> >((char *) 0) or ((void *) 0), because there exist programmers who use
> >NULL without casting and it's better to detect such misuse.  What's the
> >issue?
> The issue is that #defining NULL as ((void*)0) does NOT detect such
> misuse, nor does it adequately compensate for it in all cases.

Sorry, I meant compensation.  It *does* compensate in *some* cases, don't you agree?  Like a whole big pile of freely available and often useful even if non-``standard''-conforming code, right?  So it's better to stop the bugs---and possibly crashes---in nine cases out of ten than to pretend that all programs conform to the standard.  Right?

> #defining NULL as ((char*)0) is simply wrong.

'Scuse me for thinking about backwards compatibility in a standards group.  There are still a whole bunch of unfixed pcc-based compilers which blow up on void *, and it's perfectly reasonable for a vendor to decide that the header files should be pcc-compatible first and foremost, and support ANSI in ways that won't make pcc blow up.  (DEC and Convex, for instance, currently support both pcc and ANSI compilers.)

By the way, I'm curious: Why is ((char *)0) ``simply wrong''?  Remember the as-if rule: unless you can exhibit a working program whose results depend on whether (void *) or (char *) was used, there's no problem.  And I was under the impression that (void *) and (char *) had exactly the same internal representation and in fact semantics, except that (void *) can't be dereferenced...

---Dan
kremer@cs.odu.edu (Lloyd Kremer) (06/26/91)
Newsgroups: comp.std.c
Subject: Re: gcc and NULL function pointers.
References: <16481@smoke.brl.mil> <XZ4C6T6@xds13.ferranti.com> <1991Jun26.053508.3634@ringer.cs.utsa.edu>
Organization: Old Dominion University, Norfolk, VA

In article <1991Jun26.053508.3634@ringer.cs.utsa.edu> djimenez@ringer.cs.utsa.edu (Daniel Jimenez) writes:
>I thought 0 in a context where a pointer is expected (e.g., int *p; p = 0;)
>wasn't the integer 0, rather whatever that machine's representation of
>a null pointer is.

Yes.  But problems most often occur when the compiler does not recognize that a pointer interpretation is appropriate, such as when passing the pointer 0 as a function argument with no cast and no prototype in scope.

>Anyway, if you have a machine where pointers to different types are
>of different sizes, then you have much bigger problems than what to
>#define NULL as.

Yes, the ANSI committee had to grapple with these problems, and solved them.

>Like, let's say your machine has one size for
>character pointers and another for integer pointers.

Yes, there are some like that.  All that I've heard of have the char* larger than the int*, since the char* contains both a word address and a byte offset.

>What if you wanted to allocate space for an integer like this:
>
>int *p;
>...
>/* decision made to allocate integer */
>p = (int *) malloc (sizeof (int));
>
>malloc would return a value of one size which would be a valid character
>pointer, but the wrong size for an integer pointer.

A conformant malloc would return a void* that had been specially selected to comply with the most stringent alignment requirements of the architecture, so as to be safely castable to any other pointer type.  In the case of "char* larger than int*" all of the bits that the char* has but that the int* doesn't have would be zero, so that the pointer cast (i.e. truncation) would be harmless and reversible.
>For what it's worth, here's my opinion on NULL:
>We should all contribute to a fund to help build a time machine so
>someone can go back in time and tell K&R to include something like
>Pascal's nil in C.  :-)

OK, I'll hold the money until funding is complete.  :-)

Lloyd Kremer
Hilton Systems, Inc.
kremer@cs.odu.edu
gwyn@smoke.brl.mil (Doug Gwyn) (06/27/91)
In article <1991Jun26.053508.3634@ringer.cs.utsa.edu> djimenez@ringer.cs.utsa.edu (Daniel Jimenez) writes:
>malloc would return a value of one size which would be a valid character
>pointer, but the wrong size for an integer pointer.

That's okay; the conversion to int* takes care of that.

>I've never heard of machines like that.  Could someone give an example?

Check the comp.lang.c Frequently-Asked Questions posting.  Essentially any word-oriented machine will be like that.
gwyn@smoke.brl.mil (Doug Gwyn) (06/27/91)
In article <17605.Jun2607.39.3591@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>So it's better to stop the bugs---and possibly crashes---in nine cases
>out of ten than to pretend that all programs conform to the standard.  Right?

No, it's better to detect coding errors as soon as possible so they can be remedied.  Anyone porting crappy code is being unreasonable to think that it "works" just because he doesn't NOTICE any errors in it.  I'm not concerned with crappy code, but rather with correct code.

>By the way, I'm curious: Why is ((char *)0) ``simply wrong''?

Because it is!  It doesn't have the right properties (automatic coercion into other pointer types, for example).
diamond@jit533.swstokyo.dec.com (Norman Diamond) (06/27/91)
In article <17605.Jun2607.39.3591@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>In article <16506@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
>>#defining NULL as ((char*)0) is simply wrong.
>
>By the way, I'm curious: Why is ((char *)0) ``simply wrong''?  Remember
>the as-if rule: unless you can exhibit a working program whose results
>depend on whether (void *) or (char *) was used, there's no problem.

This particular argument by Mr. Bernstein seems to be correct.  I would say that #defining NULL as ((char*)0) is ugly and morally repugnant, but not simply wrong.  A strictly conforming program could not detect if the processor has #defined NULL as ((char*)0) (as long as the processor takes other necessary steps in conjunction with this, such as allowing implicit coercion to and from other pointer types).  So it seems to be allowable.  Of course, only a foolish user would write code that depends on it.
-- 
Norman Diamond       diamond@tkov50.enet.dec.com
If this were the company's opinion, I wouldn't be allowed to post it.
Permission is granted to feel this signature, but not to look at it.
djimenez@ringer.cs.utsa.edu (Daniel Jimenez) (06/27/91)
In article <1991Jun26.134355.29334@cs.odu.edu> kremer@cs.odu.edu (Lloyd Kremer) writes:
>In article <1991Jun26.053508.3634@ringer.cs.utsa.edu> djimenez@ringer.cs.utsa.edu (Daniel Jimenez) writes:
>>I thought 0 in a context where a pointer is expected (e.g., int *p; p = 0;)
>>wasn't the integer 0, rather whatever that machine's representation of
>>a null pointer is.
>
>Yes.  But problems most often occur when the compiler does not recognize that
>a pointer interpretation is appropriate such as when passing the pointer 0 as
>a function argument with no cast and no prototype in scope.

I know.  When I said that, I was responding to someone who was considering 0 in a pointer context.  I know about execl and other functions where 0 will not do.

I found my mailbox stuffed this morning with responses to my article.  Ok, ok, there exist machines where char*'s are larger than int*'s.  That being the case, I have to revise this:

>>For what it's worth, here's my opinion on NULL:
>>We should all contribute to a fund to help build a time machine so
>>someone can go back in time and tell K&R to include something like
>>Pascal's nil in C.  :-)
>
>OK, I'll hold the money until funding is complete.  :-)

(I'll e-mail you my cash donation :-)

It won't do just to have a Pascal 'nil' in all pointer contexts, because pointers can be of different sizes.  So we should tell K&R just to have a macro

    #define NULL(type) (type*)0  /* no, I didn't make it up.  I stole it. */

instead of just

    #define NULL (your_favorite_type*) 0

I'm sure everyone can agree on that one.  The problem is, time travel aside, what to do with all those programs where NULL is defined as 0 or (some_type*)0 when they are ported to weird machines with big char*'s.  On those architectures, you could add a compiler switch "-stupid_NULL" that would compile the program in such a way that all pointers passed to or received by functions would be guaranteed to be the largest size pointer the machine has.
Then, when those parameters are dealt with in the function, they would be cast to the right size automagically.  The only case when they wouldn't be converted would be cases like

    char *p;
    ...
    if (p == 0)
        blah_blah ();

because to check for the null pointer requires only checking for all bits zero.  (before you flame me, read on)  The null pointer would have to be represented by "all bits zero" in this case, but could be something different during normal compilation.  Any ints passed to or received by functions would also have to be converted into things the size of the largest pointer (unless integers are larger, in which case everything would be converted to integers), lest an integer 0 should make its way into a place where null was expected.  This switch would be used whenever compiling one of those programs.  The decreased efficiency of those programs would be an incentive to the programmers to change their ways.

The fact that I am proposing such a silly solution characterizes the futility of the whole #define NULL question.
-- 
* Daniel A. Jimenez *  Please excuse my longwindedness.
* djimenez@ringer.cs.utsa.edu *  This Sun terminal makes everything
* dajim@lonestar.utsa.edu *  I write seem important.
* Opinions expressed here are mine only, and not those of UTSA.
peter@ficc.ferranti.com (Peter da Silva) (06/27/91)
In article <1991Jun26.053508.3634@ringer.cs.utsa.edu> djimenez@ringer.cs.utsa.edu (Daniel Jimenez) writes:
> I thought 0 in a context where a pointer is expected (e.g., int *p; p = 0;)
> wasn't the integer 0, rather whatever that machine's representation of
> a null pointer is.

True, but there is a lot of broken but useful and important code floating around the net that uses NULL in contexts where a pointer is needed but an integer is expected.  My claim is that it is in a compiler vendor's interest to, whenever possible, define NULL such that such code continues to work.

It may not be possible.  The value you need to use (either a void cast 0 or just plain 0) is system dependent.  There are lots of problems.  But sometimes it's possible, and when it is that's what the vendor should provide.  At least until the VAXisms are all fixed (one of these decades, maybe?)...
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012;  `-_-'  "Have you hugged your wolf, today?"
peter@ficc.ferranti.com (Peter da Silva) (06/27/91)
In article <16540@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
> No, it's better to detect coding errors as soon as possible so they can
                     ^^^^^^

This is the critical word.  In the case I brought up, and other similar cases, the compiler cannot detect the coding error.  It will silently produce bad code.  That, to me, is the worst possible outcome.
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012;  `-_-'  "Have you hugged your wolf, today?"
peter@ficc.ferranti.com (Peter da Silva) (06/27/91)
In article <1991Jun27.011959.14714@ringer.cs.utsa.edu> djimenez@ringer.cs.utsa.edu (Daniel Jimenez) writes:
> #define NULL(type) (type*)0  /* no, I didn't make it up.  I stole it. */

I use

    #define NIL(t) ((t)0)  /* sometimes lowercase on older code */

That way it won't confuse people (what's this NULL with an argument?), and you can use typedeffed pointers or macros in there:

    typedef void (*funptr)();

    if(fun == NIL(funptr)) ...

And I don't think it's a silly idea.
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012;  `-_-'  "Have you hugged your wolf, today?"
mcdaniel@adi.com (Tim McDaniel) (06/27/91)
In article <1991Jun27.011959.14714@ringer.cs.utsa.edu> djimenez@ringer.cs.utsa.edu (Daniel Jimenez) writes:

   It won't do just to have a Pascal 'nil' in all pointer contexts,
   because pointers can be of different sizes.

But if "nil" occurred in a context where the compiler couldn't determine the proper pointer type, an error message would be output.  That would, I think, solve all the problems; both

    int i = nil;
    execl("this", "that", "the other", "and", "more", nil);

would produce error messages, as I would expect.

In the absence of a time machine, I think we could do

    #define NULL __nil

today, and "__nil" could have the semantics above.  I don't think this could break any strictly-conforming ANSI C programs.  "__nil" could instead mean "0" or "(void *) 0" with special switches, say "-ansi -pedantic".  Anyone want to suggest it to the gcc maintainers?
-- 
"Of course he has a knife; he always has a knife.  We all have knives.
It's 1183 and we're barbarians." -- Eleanor of Aquitaine, "A Lion in Winter"
Tim McDaniel    Applied Dynamics Int'l.; Ann Arbor, Michigan, USA
Internet: mcdaniel@adi.com    UUCP: {uunet,sharkey}!amara!mcdaniel
dhesi@cirrus.com (Rahul Dhesi) (06/28/91)
In <17605.Jun2607.39.3591@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>'Scuse me for thinking about backwards compatibility in a standards
>group.

Good point!  Let's keep this in mind.

>By the way, I'm curious: Why is ((char *)0) ``simply wrong''?

It's wrong for the same reason that ((void *) 0) is wrong.  K&R said that NULL is defined to be 0.  Therefore existing code that (unwisely) uses NULL to stand for a zero in a non-pointer context may break using either ((void *) 0) or ((char *) 0).  However, ((void *) 0) is blessed by ANSI and ((char *) 0) is not.  Therefore we may summarize:

    ((void *) 0) is ANSI-conformant but wrong.
    ((char *) 0) is not ANSI-conformant and wrong.

And the bottom line:

    All definitions of NULL other than 0 are wrong, no matter how
    much or how little ANSI-conformant they may be.
-- 
Rahul Dhesi <dhesi@cirrus.COM>
UUCP:  oliveb!cirrusl!dhesi
gwyn@smoke.brl.mil (Doug Gwyn) (06/28/91)
In article <WB7CF-2@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <16540@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
>> No, it's better to detect coding errors as soon as possible so they can
>This is the critical word.  In the case I brought up, and other similar cases,
>the compiler cannot detect the coding error.

I was talking about the software tester, not the compiler, but in any case as I recall the example, "lint" would catch the error.
volpe@camelback.crd.ge.com (Christopher R Volpe) (06/28/91)
In article <1991Jun27.001642.10658@tkou02.enet.dec.com>, diamond@jit533.swstokyo.dec.com (Norman Diamond) writes:
|>In article <17605.Jun2607.39.3591@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
|>>In article <16506@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
|>>>#defining NULL as ((char*)0) is simply wrong.
|>>
|>>By the way, I'm curious: Why is ((char *)0) ``simply wrong''?  Remember
|>>the as-if rule: unless you can exhibit a working program whose results
|>>depend on whether (void *) or (char *) was used, there's no problem.
|>
|>This particular argument by Mr. Bernstein seems to be correct.
|>I would say that #defining NULL as ((char*)0) is ugly and morally repugnant,
|>but not simply wrong.  A strictly conforming program could not detect if the
|>processor has #defined NULL as ((char*)0) (as long as the processor takes
|>other necessary steps in conjunction with this, such as allowing implicit
|>coercion to and from other pointer types).

I believe the following is true:

(1) The following code is strictly conforming:

        struct foo *fooptr = NULL;

(2) The following code requires a diagnostic:

        struct foo *fooptr = (char *)0;

because an explicit cast is required to convert an expression of type (char *) to one of type (struct foo *).  Therefore, if the implementation #defines NULL as (char *)0, it's going to have a helluva time maintaining (1) and (2) above.

==================
Chris Volpe
G.E. Corporate R&D
volpecr@crd.ge.com
gwyn@smoke.brl.mil (Doug Gwyn) (06/28/91)
In article <20969@crdgw1.crd.ge.com> volpe@camelback.crd.ge.com (Christopher R Volpe) writes:
>I believe the following is true:

Yup.
bhoughto@pima.intel.com (Blair P. Houghton) (06/28/91)
In article <1991Jun27.190107.627@cirrus.com> dhesi@cirrus.com (Rahul Dhesi) writes:
>And the bottom line:
>
>    All definitions of NULL other than 0 are wrong, no matter how
>    much or how little ANSI-conformant they may be.

Well, no, they aren't "wrong", they're just redundant, because it is true in all cases that the only "right" way to _use_ NULL is to cast it to a type compatible with the object, pointer, or function parameter being set to or compared against NULL, and then not to use it for anything but pointer types (which are explicitly defined to be tolerant of wacky chains of casts of their null value).

--Blair
  "But this will have changed by the
   next time I see this posting, so
   I'm going to go take another nap..."
peter@ficc.ferranti.com (Peter da Silva) (06/29/91)
In article <1991Jun27.190107.627@cirrus.com> dhesi@cirrus.com (Rahul Dhesi) writes:
>    ((void *) 0) is ANSI-conformant but wrong.
>    ((char *) 0) is not ANSI-conformant and wrong.
>    All definitions of NULL other than 0 are wrong, no matter how
>    much or how little ANSI-conformant they may be.

Sorry, that's wrong.
-- 
Peter da Silva; Ferranti International Controls Corporation; +1 713 274 5180;
Sugar Land, TX 77487-5012;  `-_-'  "Have you hugged your wolf, today?"
karish@mindcraft.com (Chuck Karish) (06/29/91)
In article <1991Jun27.190107.627@cirrus.com> dhesi@cirrus.com (Rahul Dhesi) writes:
>In <17605.Jun2607.39.3591@kramden.acf.nyu.edu>
>brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>>By the way, I'm curious: Why is ((char *)0) ``simply wrong''?
>
>It's wrong for the same reason that ((void *) 0) is wrong.  K&R said
>that NULL is defined to be 0.

Not in my copy.  They #defined NULL as 0 in an EXAMPLE.

>Therefore existing code that (unwisely)
>uses NULL to stand for a zero in a non-pointer context may break using
>either ((void *) 0) or ((char *) 0).

From K&R Classic, p. 192:

    The compilers currently allow a pointer to be assigned to an
    integer, an integer to a pointer, and a pointer to a pointer of
    another type.  The assignment is a pure copy operation, with no
    conversion.  This usage is nonportable, and may produce pointers
    which cause addressing exceptions when used.  However, it is
    guaranteed that assignment of the constant 0 to a pointer will
    produce a null pointer distinguishable from a pointer to any object.

I don't see how anyone could read this as prohibiting implementations from defining NULL to be a pointer.  It does suggest that pointers of different types may not be universally inter-assignable without careful type casting.

>However, ((void *) 0) is blessed by ANSI and ((char *) 0) is not.
>Therefore we may summarize:
>
>    ((void *) 0) is ANSI-conformant but wrong.

It does not allow maximal portability of sloppily-written (minimalist?) code (see K&R's warning, above), but it's not wrong.

To answer Dan's question, ((char *)0) is wrong because the universally-castable storage type is (void *) under Standard C, not (char *).  That's why malloc() now returns (void *).
-- 
Chuck Karish    karish@mindcraft.com
Mindcraft, Inc.    (415) 323-9000
karish@mindcraft.com (Chuck Karish) (06/29/91)
In article <20969@crdgw1.crd.ge.com> volpe@camelback.crd.ge.com (Christopher R Volpe) writes:
>I believe the following is true:
>
>(1) The following code is strictly conforming:
>        struct foo *fooptr = NULL;
>
>(2) The following code requires a diagnostic:
>        struct foo *fooptr = (char *)0;

Is there a restriction that would prevent the implementation from producing a diagnostic for (1)?  I wouldn't be surprised to see either "possible pointer type mismatch" or "illegal combination of pointer and integer", depending on how NULL is defined.
-- 
Chuck Karish    karish@mindcraft.com
Mindcraft, Inc.    (415) 323-9000
gwyn@smoke.brl.mil (Doug Gwyn) (06/29/91)
In article <678149027.20102@mindcraft.com> karish@mindcraft.com (Chuck Karish) writes:
>Is there a restriction that would prevent the implementation
>from producing a diagnostic for (1)?

Literally speaking, an implementation can generate spurious diagnostics if it wishes.  However, there is supposed to be a well-defined notion of "accepting" a strictly conforming program, and such a diagnostic if it is indeed generated should be syntactically distinguishable from a real diagnostic (such as the ones required by the standard).

>I wouldn't be surprised to see either "possible pointer type mismatch"
>or "illegal combination of pointer and integer", depending on how NULL
>is defined.

Why in the world not?  All conforming implementations must define NULL such that example (1), after NULL is macro replaced, is a strictly conforming excerpt.  Why would they generate diagnostics for perfectly fine code?
diamond@jit533.swstokyo.dec.com (Norman Diamond) (07/01/91)
In article <16583@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
>In article <678149027.20102@mindcraft.com> karish@mindcraft.com (Chuck Karish) writes:
>>Is there a restriction that would prevent the implementation
>>from producing a diagnostic for (1)?
>Literally speaking, an implementation can generate spurious diagnostics
>if it wishes.

Yes, almost well-known by now.

>However, there is supposed to be a well-defined notion of
>"accepting" a strictly conforming program,

Yes, the correct output, determined by the program, must be produced.

>and such a diagnostic if it is indeed generated should be syntactically
>distinguishable from a real diagnostic

This is a different issue, and "should" is a weasel-word.  For quality of implementation, I think everyone would agree that such diagnostics should be syntactically distinguishable and should serve some useful purpose etc. etc.  However, the standard does not require it.

>Why would they generate diagnostics for perfectly fine code?

The standard doesn't demand an answer to this question.  Even though we dislike low-quality implementations, this question is irrelevant (for this newsgroup).
-- 
Norman Diamond       diamond@tkov50.enet.dec.com
If this were the company's opinion, I wouldn't be allowed to post it.
Permission is granted to feel this signature, but not to look at it.