gt@hocda.UUCP (G.TOMASEVICH) (01/18/84)
Why all this fuss about NULL vs 0? Just define a set of NUL's:

	#define INUL	(int*)0
	#define CNUL	(char*)0
	#define FNUL	(struct foo*)0

and so on, whatever you need.
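[For illustration, a minimal usage sketch of the typed-null idea; lookup() below is a made-up function assumed to take a char * as its first argument.]

	#define	CNUL	(char*)0
	#define	INUL	(int*)0

	extern int lookup();	/* assumed to expect a (char *) first */

	int
	g()
	{
		/* CNUL carries its own cast, so the full pointer width is
		 * pushed even on a machine where int and char * differ in
		 * size; no cast is needed at the call site. */
		return (lookup(CNUL, 17));
	}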
mjs@rabbit.UUCP (01/19/84)
The point is that there are an increasing number of machines whose compilers choose 16 bits to represent an int, and 32 for a pointer. On such a machine, the following code is simply wrong:

	#define NULL ((char *) 0)

	int
	f(p,x,y)
	char * p;
	int x, y;
	{
		/* stuff */
		return (x);
	}

	int
	g()
	{
	#ifdef INCORRECT
		(void) f(0, 17, 42);	/* 3 16-bit quantities */
	#else	!INCORRECT
		(void) f(NULL, 17, 42);	/* 32 bits of 0 & 2 16-bit ints */
	#endif	INCORRECT
	}

All that's been asked of anyone is to enhance portability to machines where it is advantageous (due to things like bus bandwidth, cpu power, etc.) to use 16 bits to represent type int, and 32 for pointer types.
-- 
Marty Shannon
UUCP:	{alice,rabbit,research}!mjs
Phone:	201-582-3199
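[A minimal sketch, not part of the original post, of the usual portable fix: cast the constant at the call site so the full pointer width is pushed no matter how NULL is defined.]

	int
	g()
	{
		/* (char *) 0 is 32 bits on the machine described above,
		 * and is still correct where int and char * happen to be
		 * the same size, so the call is portable either way. */
		(void) f((char *) 0, 17, 42);
	}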
chris@umcp-cs.UUCP (01/20/84)
I have one little minor thing to say here. I don't know about those 68k systems that have sizeof (int) == 2, sizeof (char *) == 4, but all the stdio.h files I've seen say

	#define NULL 0

(NOT "#define NULL ((char *) 0)"), so it won't make a bit of difference if you write

	f () { g (NULL, 1, 2); }

instead of

	f () { g (0, 1, 2); }

What you must do instead is write

	f () { g ((char *) NULL, 1, 2); }	/* or (char *) 0 */

(assuming g expects its first argument to be of type "char *"). I agree that it's good practice to include type casts for function parameters; however, as a "midnight hacker" I know how easy it is to miss these. (But I *do* use lint!) (Ever run lint on a 4.1BSD kernel after installing CMU IPC?)
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci
UUCP:	{seismo,allegra,brl-bmd}!umcp-cs!chris
CSNet:	chris@umcp-cs
ARPA:	chris.umcp-cs@CSNet-Relay
jhh@ihldt.UUCP (01/20/84)
What would break if stdio.h defined NULL as 0L for those systems for which sizeof (int) == 2 and sizeof (anything *) == 4? It seems that this is the right level to keep such machine-dependent information.

		John Haller
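[A sketch of what that would look like; illustrative only, not taken from any actual stdio.h.]

	/* hypothetical <stdio.h> fragment for a machine with 16-bit ints
	 * and 32-bit pointers */
	#define	NULL	0L	/* long is 32 bits here, the same width as a pointer */

	/* A call written as
	 *	f(NULL, 1, 2);
	 * would then push 32 bits of zero for the first argument.  Note
	 * that this helps only because the null pointer on such a machine
	 * is assumed to be all-zero bits of exactly that width. */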
rdsmith@exodus.UUCP (01/20/84)
Chris Torek says:

	I have one little minor thing to say here. I don't know about
	those 68k systems that have sizeof (int) == 2, sizeof (char *)
	== 4, but all the stdio.h files I've seen say #define NULL 0
	(NOT #define NULL ((char *) 0)), so it won't make a bit of
	difference ...

Chris is headed in the right direction with the stdio.h reference, but the rest of the article really blows it. Of course all of the stdio.h's you've seen have NULL defined that way; that is the way THOSE machines represent NULL. The point is that << stdio.h IS THE PLACE WHERE SUCH A MACHINE DEPENDENCY BELONGS >>, not scattered throughout user programs. Stick to NULL, and expect each machine to properly define it in stdio.h.

		Randy D. Smith
		CSO, Inc.
		HL 3L-528
		(201) 564-3797
Pucc-H:Physics:crl@CS-Mordred.UUCP (01/20/84)
What makes a program portable? Adhering strictly to the C reference manual is the answer I'd give. Since the manual states that 0 == NULL, I believe that's that. It is up to the implementation to assure that this works. If I came along with a machine and implementation that disallowed some other construct, like *i++, for example, I know for a fact that everyone would scream at how ``I'' should change, and not how ``C'' should be modified.

Offhand, I could not find anything in the manual that says that function arguments on the stack are no smaller than type int. (I could have easily overlooked this, however.) Couldn't machines with 32 bit pointers and 16 bit ints push 32 bits on the stack always? This is analogous to how chars are done now.

Charles LaBrec
UUCP:		pur-ee!Physics:crl, purdue!Physics:crl
INTERNET:	crl @ pur-phy.UUCP
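[As a reminder of the analogy: a char argument is widened to int before being passed, so the callee always receives a full int. A minimal sketch with made-up function names:]

	int
	takes_char(c)
	char c;			/* the parameter is declared char ... */
	{
		return (c);
	}

	int
	caller()
	{
		char c;

		c = 'x';
		/* ... but the argument is widened to int when pushed.
		 * The suggestion above is to treat a 0 destined for a
		 * pointer the same way: always push the full pointer
		 * width. */
		return (takes_char(c));
	}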
guy@rlgvax.UUCP (01/21/84)
Probably, nothing would break if NULL were defined as 0L in <stdio.h> on machines with sizeof(int) == 2 and sizeof(XXX *) == 4, but 1) it would not shut "lint" up - and when it complains that you really should cast 0 to (char *)0, one should listen, as there may be machines on which the cast is necessary - and 2) it wouldn't help if you just used 0 instead of NULL. Remember, NULL is just sugar over the 0; the language says 0 may be coerced to a pointer. I've seen tons of programs which use 0 rather than NULL; such a program may not include <stdio.h> at all, and if every program #defined NULL as 0L on its own, those programs would no longer be portable.

	Guy Harris
	{seismo,ihnp4,allegra}!rlgvax!guy
guy@rlgvax.UUCP (Guy Harris) (01/21/84)
Unfortunately, NULL should *not* be (char *)0, because there is no such thing as a generic null pointer in C. Each type of pointer has its own flavor of null pointer. If you define NULL as (char *)0, then if you pass NULL to a routine which expects (int *)0, you will get a complaint from "lint" at best and a dead program at worst - what if you have a word-addressed machine in which (int *) takes 16 bits but (char *) takes 32?

	Guy Harris
	{seismo,ihnp4,allegra}!rlgvax!guy
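[A small sketch of the mismatch Guy describes, with hypothetical function names. On most byte-addressed machines this happens to work, which is exactly why it slips through until the code meets a machine where the two pointer types differ in size or representation.]

	#define NULL ((char *) 0)

	extern int find();	/* assumed to expect an (int *) first argument */

	int
	g()
	{
		/* lint will complain that a (char *) is passed where an
		 * (int *) is expected; on a word-addressed machine where
		 * the two pointer types differ in size, the call is
		 * simply broken. */
		find(NULL, 10);

		/* The safe form names the type the callee actually expects: */
		return (find((int *) 0, 10));
	}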
gwyn%brl-vld@sri-unix.UUCP (01/21/84)
From: Doug Gwyn (VLD/VMB) <gwyn@brl-vld>

People who are concerned with such things as widening of arguments to functions should wait to see the proposed ANSI C standard. The committee that is working on this has done a really nice job of considering such details.
gwyn%brl-vld@sri-unix.UUCP (01/29/84)
From: Doug Gwyn (VLD/VMB) <gwyn@brl-vld>

I sure am glad you requested complaints about flexnames. My complaint is that a DEFINITE GUARANTEE is needed as to how many characters in the long names are checked for uniqueness. (externs probably still need to be limited to 6 chars with case ignored.) Otherwise this feature will interfere with writing portable code. For example,

	int xxxxxxxxxxxxxxxxxxxyxxxxxxxxxxxxxxxxxx;
	int xxxxxxxxxxxxxxxxxxxzxxxxxxxxxxxxxxxxxx;

Are these going to be guaranteed to be distinct identifiers in the new C language standard? What if there were 50 x characters? 100? It is not possible to write portable code without some guarantee about this. Another drawback to flexnames is that the portable programmer cannot use them until they are covered by the language standard. At present, C compilers often support only the 8 chars promised in the K&R book.
holmes@dalcs.UUCP (Ray Holmes) (02/02/84)
The problem here is with the C `bible' and with the C compilers. NULL should NOT be defined in the `stdio.h' package as this assumes (de facto) that there is a common interpretation. If a generic NULL is to be recognized it *has* to be done by the compiler, NOT the preprocessor. Only the compiler has the info (if it does) to correctly interpret the `current' meaning of NULL. The idea that NULL could be something simple, like 0, doesn't work as we have seen over the weeks. If there is to be a generic NULL pointer it MUST be known to the compiler.
guy@rlgvax.UUCP (Guy Harris) (02/04/84)
> The problem here is with the C `bible' and with the C compilers. NULL
> should NOT be defined in the `stdio.h' package as this assumes (de facto)
> that there is a common interpretation. If a generic NULL is to be
> recognized it *has* to be done by the compiler, NOT the preprocessor. Only
> the compiler has the info (if it does) to correctly interpret the `current'
> meaning of NULL. The idea that NULL could be something simple, like 0,
> doesn't work as we have seen over the weeks. If there is to be a generic
> NULL pointer it MUST be known to the compiler.

Unfortunately, the only way the compiler could know the proper type to cast 0/NULL to would be if there were a way to declare the types of the arguments that a function expects; however, there is no such provision in the C language at present. It is being considered by the ANSI C language standard committee.

	Guy Harris
	{seismo,ihnp4,allegra}!rlgvax!guy
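[For the curious, a sketch of the kind of declaration being considered, written in what eventually became ANSI C prototype syntax; shown purely as an illustration of the idea, not of the 1984 draft.]

	/* With the parameter types declared, the compiler knows the first
	 * argument is a pointer and converts a bare 0 to a null pointer
	 * of the correct type and width automatically. */
	extern int f(char *p, int x, int y);

	int
	g(void)
	{
		return (f(0, 17, 42));	/* the 0 becomes (char *)0 here */
	}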
edhall%rand-unix@sri-unix.UUCP (02/04/84)
From: Ed_Hall <edhall@rand-unix>

As I believe has been pointed out here before, the constant `0' has a special interpretation when assigned or compared to a pointer. It behaves as the `null pointer' in these cases, *whatever that happens to be*. Thus, on a given computer a null pointer could have a non-zero value, and the constant `0' would be interpreted as that value in a pointer context.

Of course, the place this breaks down is in procedure arguments, as C currently has no means of declaring whether a given argument to a procedure call is a pointer or not. This would then require a cast to produce the correct null-pointer value.

Another solution is to have the compiler produce code which checks each pointer parameter for zero and converts it to the proper null-pointer value before it is used inside the called procedure. Compiler writers have been forced to do worse things by unusual (to C) architectures. This should work except when an actual zero is a legal (and non-null) pointer. (This, by the way, would be possible on a split I/D PDP-11 if the loader didn't start the data segment at an address of 2; I'm certain loaders for other machines could be hacked if necessary to make zero unique.)

		-Ed Hall
		Rand Corporation
		Santa Monica, CA
		edhall@rand-unix	(ARPA)
		decvax!randvax!edhall	(UUCP)
edhall%rand-unix@sri-unix.UUCP (02/04/84)
From: Ed_Hall <edhall@rand-unix>

Before another insomniac flames me on it, let me state that my `solution' for the null-pointer argument problem assumes that a constant zero and a pointer otherwise match as arguments (e.g. are the same width). Obviously, this need not be so (such as 16-bit int's and 32-bit pointers, for a common example). Let's cloud the issue more by insisting that all parameters be the same (maximum) width. Such consistency has other benefits, though there obviously can be an efficiency penalty.

All I am trying to do is show that even an admittedly faulty language feature (the inability to declare function parameters apart from the function definition) can be worked around if the compiler designer is willing to go through enough contortions. Happily, it appears that the ANSI standards committee is considering a reasonable means (in my opinion) for declaring function parameter types in an external declaration.

		-Ed
gwyn%brl-vld@sri-unix.UUCP (02/05/84)
From: Doug Gwyn (VLD/VMB) <gwyn@brl-vld>

In C, 0 is explicitly NOT ALLOWED to be a pointer to actual data. The C Standards Committee seems to be inclined to support declaration of procedure parameters, along with automatic coercion of arguments to the declared type. I am not happy with automatic coercion, as I think it encourages sloppy coding, but at least it would let you blissfully supply 0 as a pointer argument and have the compiler turn it into the correct width of null pointer. However, this would not work for such functions as execl(), where it is not possible to declare the type of all parameters, so you would still have to use (char *)0 as an execl() argument list terminator.

I really don't see what all the discussion is about. Just use the appropriately typecast 0 where you need a null pointer in your C code and you have taken care of the matter once and for all.
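[A minimal sketch of the execl() case he mentions:]

	/* execl() takes a variable-length list of (char *) arguments, so
	 * no declaration can tell the compiler to widen the final 0; the
	 * terminating null pointer must be written with an explicit cast. */
	execl("/bin/echo", "echo", "hello", (char *) 0);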
henry@utzoo.UUCP (Henry Spencer) (02/08/84)
Doug Gwyn asks:

	int xxxxxxxxxxxxxxxxxxxyxxxxxxxxxxxxxxxxxx;
	int xxxxxxxxxxxxxxxxxxxzxxxxxxxxxxxxxxxxxx;

	Are these going to be guaranteed to be distinct identifiers in the
	new C language standard? What if there were 50 x characters? 100?
	It is not possible to write portable code without some guarantee
	about this. Another drawback to flexnames is that the portable
	programmer cannot use them until they are covered by the language
	standard. At present, C compilers often support only the 8 chars
	promised in the K&R book.

The way I heard it at UniSnorum, the ANSI standard is going to say that the standard is 8 characters, but arbitrary-length names will be in the appendix describing "standard extensions". That is, "we think it's a good idea but we don't want to require everyone to do it". If you don't like this, blame Berkeley: as Dennis Ritchie has been heard to say (I'm told), "we *had* a standard".
-- 
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry