[comp.lang.c] pointer debate raging on...

dave@sds.UUCP (dave schmidt x194) (04/25/87)

[This discussion has been going on in news.software.b; while I was under the
belief that it had been cross-posted to comp.lang.c, I don't believe that
it has.  If this duplicates too much material, my apologies...]

Summary: Someone asked if anyone had ported the news software to the IBM AT;
	 I replied that I had done so, but had a few problems with the
	 programs assuming that sizeof(ptr) == sizeof(int).  I also commented
	 that constructs such as "if (!charptr)" were present.  Having had
	 bad experience with such constructs, I changed them to 
	 "if (charptr == (char *)NULL)" before compilation.

	guy%gorodish@Sun.COM commented that if such code didn't work,
	the compiler was in error as the two statements are identical
	in meaning.  Based on what I had seen more than one C compiler
	(or "alleged C compiler") do, I asked why the statements were
	identical rather than "if ((int)charptr == 0)".  gwyn@brl.arpa,
	among others, correctly pointed out that the language
	definition required the statements to be interpreted as Guy
	had said.

	We now join the conversation in progress...

gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:

>No cast is needed for the 0; 0 has special pointer properties in addition
>to its use as an integer constant....
>Guy's explanation is EXACTLY RIGHT independent of machine
>architecture or implementation, including bit pattern used to represent a
>null pointer (always written as 0 at the C source level regardless of the
>implementation).

All this seems to say is that, when faced with a statement such as

	if ( charptr == 0 )

the 0 is converted to (char *)0, just as the 0 is converted to (double)0 in

	if ( doublevar == 0 )

In a certain sense, therefore, I still maintain that 

	if ( charptr == (char *)0 )

is more correct and better style.  Again, this bias of mine comes from
working with brain-damaged compilers which do not correctly compare 32-bit
pointers to 16-bit ints.
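The guarantee being debated can be sketched directly; this is a minimal
illustration (the function names are invented), showing that on a conforming
compiler all three spellings of the null test must agree, whatever
sizeof(char *) happens to be:

```c
#include <assert.h>

/* In a pointer comparison, the integer constant 0 is converted to a
 * null pointer of the appropriate type before comparing, so these
 * three tests are required to behave identically on any conforming
 * compiler, regardless of pointer representation or size. */
static int is_null_plain(char *p) { return p == 0; }
static int is_null_cast(char *p)  { return p == (char *)0; }
static int is_null_bang(char *p)  { return !p; }
```

A compiler that compares only the low 16 bits of a 32-bit pointer against an
int here is broken; the cast in is_null_cast merely makes explicit what the
language already requires of the other two forms.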

guy%gorodish@Sun.COM (Guy Harris) writes:

>So a C compiler *must* properly handle "if (charptr == 0)".  It MUST
>not do an integer comparison if that would give a different result
>from a pointer comparison.  In effect, it must convert the "0" to a
>character pointer of the appropriate type.

What I said.  I incorrectly stated that in an implementation where NULL
was #defined as 0xffffffff, "if (!charptr)" would not work;  Guy,
Doug, and blarson@castor.usc.edu.UUCP correctly pointed out that this
was wrong...

blarson@castor.usc.edu.UUCP writes:
>>(from me)
>>Coding the above as "if (charptr == (char *)NULL)" saves you from the
>>"queer machine"; it is also more correct in that it explicitly states what
>>you desire and is more portable.  

>Writing code to avoid everything that is broken in some compiler is impossible.
>Why bother trying?

Well, you obviously can't save yourself from every potential muck-up a
compiler can inflict on you.  On the other hand, if a construct has caused
you problems in more than one compiler, and an equivalent construct never
has, which would you write?  And wouldn't you be a little frustrated if
you had to modify (note I did not say fix) virtually every piece of
p.d. software that you tried to bring up because of this construct?
(Patch was a notable exception to this; not one line of code needed
to be modified.  The author(s) should be whole-heartedly commended.)


john@viper.UUCP (John Stanley) writes:

>In article <5787@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
 >>
 >>He, I, and others have explained more than once before in the C newsgroup
 >>that the only fully correct definition of the macro NULL, as C now stands,
 >>is as 0.  (Under the proposed ANSI standard you could probably get away
 >>with defining it as (void *)0.)
 >>

>Actually, you -must- use (void *)0 -OR- 0 and be consistent one way or
>the other.  It's not an option...  If I had to choose, I'd say use

>	#define NULL	((void*)0)

>because it's used as a pointer in 98% of the code I've seen and for
>the following reason...

>This isn't important on machines where sizeof(pointer) == sizeof(int),
>but if you're using (as a simple example) a machine where pointers are
>32 bit and ints are 16 bits then code containing the following function
>may have problems if you change how NULL is defined:

	[example deleted]

>If you use NULL as a pointer, it -must- always actualy be a pointer or
>your code will break any time you try to pass it as a constant to a 
>function.


Applaud, applaud, applaud.  John has correctly pointed out that
sizeof(pointer) == sizeof(int)  is not the universal constant that 
many people believe it to be.  It is for this exact reason that 
NULL should ***NOT*** be defined as 0.

A minor problem still exists even with defining NULL as (void *)0;
NULL cannot be passed to a routine as a function pointer without
a cast since sizeof(void (*)()) may not be the same as sizeof(void *).
That, however, is something that shouldn't cause too much trouble.
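The cast Dave describes can be sketched as follows; handler_t and
set_handler are invented names for illustration, not from any real library:

```c
#include <assert.h>

/* Since sizeof(void (*)()) need not equal sizeof(void *), a null
 * function pointer argument should be written with an explicit cast
 * rather than by passing NULL defined as (void *)0. */
typedef void (*handler_t)(void);

static handler_t current_handler;

static void set_handler(handler_t h)
{
    current_handler = h;
}
```

The call site then reads set_handler((handler_t)0), which is correct even on
a machine where function pointers are wider than data pointers.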


Dave Schmidt

guy@gorodish.UUCP (04/27/87)

>Again, this bias of mine comes with working with brain-damaged compilers
>which do not correctly compare 32-bit pointers to 16-bit ints.

Any compiler that thinks that it should be comparing a pointer to an
"int" when confronted with

	if (charptr == 0)

is *prima facie* wrong, regardless of whether pointers and "int"s are
the same size.  The problem is *not* that said compilers weren't
properly comparing pointers and "int"s; the problem is that said
compilers weren't coercing the 0 to (char *)0 and comparing pointers
and pointers.

>Applaud, applaud, applaud.  John has correctly pointed out that
>sizeof(pointer) == sizeof(int)  is not the universal constant that 
>many people believe it to be.  It is for this exact reason that 
>NULL should ***NOT*** be defined as 0.

OK, now given that

	sizeof (char *) == sizeof (int *)

is also not a tautology, what should NULL be defined as?  (char *)0
won't cut it, nor will (int *)0.

If you have function prototypes, function calls - in *most*, but not
*all*, cases - will behave like other operators; the compiler will be
able to properly coerce arguments (*or* complain if the coercion
doesn't make any sense, which is why I like the idea of function
prototypes *and* would prefer that compilers generate warning
messages whenever a function is used and there is no prototype
declaration or definition for that function in scope).  Given this,
as long as you remember to properly cast null pointers passed to
functions like "execl", NULL can be defined as 0.

If you don't have function prototypes, you can stick in the
appropriate cast to coerce NULL into a null pointer of the proper
type.  Given this, NULL can be defined as 0, and probably should,
since "lint" will not be happy about casting a "char *" into, say, an
"int *".
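Guy's "execl" example concerns variadic functions, where no prototype can
convert the trailing arguments.  A toy stand-in for execl's argument walk
(count_args is an invented name) shows why the terminating null must be a
cast pointer:

```c
#include <assert.h>
#include <stdarg.h>

/* Toy variadic routine in the style of execl: it walks its char *
 * arguments until it sees a null pointer.  No prototype reaches the
 * variable arguments, so a bare 0 would be passed as an int; the
 * (const char *)0 cast at the call site is what makes it safe. */
static int count_args(const char *first, ...)
{
    va_list ap;
    int n = 0;
    const char *p;

    va_start(ap, first);
    for (p = first; p != (const char *)0; p = va_arg(ap, const char *))
        n++;
    va_end(ap);
    return n;
}
```

This is the same reason the real execl call must end with (char *)0 rather
than a bare NULL when NULL is plain 0.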

peter@thirdi.UUCP (Peter Rowell) (04/27/87)

Having watched this for a while, I feel the need to comment.  The
following should be read in the context of the *real* world of *real* C
compilers.  Although ANSI may someday save us from our sins, it ain't
doing it yet.  Anyone who is trying to create code that will work on a
broad spectrum of machine/os/compiler combinations must necessarily use
coding paranoia sufficient for the worst case.

Although it is correct that the comparison (charptr == 0) is guaranteed
to work by the Holy Writ (aka K&R), all other bets are off as you stray
from the homogenized, pasteurized world of the VAXen.

As pointed out, environments where sizeof(int) != sizeof(char *)
require NULL to be define'd as ((char *)0) (or some variation on that
theme).  Unfortunately, this is not the end of this train of thought.
There is NO guarantee I am aware of that sizeof(char *) ==
sizeof(struct foo *)!

For this reason (and to adhere to a local naming convention), we use a
specifically typed nil pointer for each and every type.  For example,

#define fooNil ((struct FOO *)0)
#define sbNil  ((char *)0)		/* string of bytes (aka ASCII) */
#define pbNil  ((unsigned char *)0)	/* pointer to arbitrary bytes */
    etc.

Since we always have a "correct" nil for everything, we always pass
exactly the correct value.  True, this *almost* always means that the
same 32 bit (16 bit/48 bit/64 bit) "0" is being passed, but it saves
you should some compiler person become excessively clever.

It also means that your comparisons are of the form:

    if (foo != fooNil) ...  Which is clearly correct,
       vs.
    if (foo != NULL) ...    Which just doesn't feel safe to me.

This method has stood us in good stead (except when we have foolishly
violated it) in porting a source-level debugger to over 70 different
systems - not all of which were done by followers of the True Faith.
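Peter's per-type nils can be exercised like this; struct FOO's contents and
the checking functions are illustrative additions, not from his post:

```c
#include <assert.h>

struct FOO { int x; };

/* Peter's convention, sketched: one correctly typed nil per pointer
 * type, so comparisons and argument passing never depend on any
 * particular pointer representation or size. */
#define fooNil ((struct FOO *)0)
#define sbNil  ((char *)0)

static int foo_is_nil(struct FOO *foo) { return foo == fooNil; }
static int sb_is_nil(char *sb)         { return sb == sbNil; }
```

Each call site passes exactly the right type, so even a compiler with
differently sized struct and char pointers gets the comparison right.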

Peter Rowell
Third Eye Software
(415) 321-0967

chris@mimsy.UUCP (04/28/87)

In article <31@thirdi.UUCP> peter@thirdi.UUCP (Peter Rowell) writes:
>As pointed out, environments where sizeof(int) != sizeof(char *)
>require NULL to be define'd as ((char *)0) (or some variation on that
>theme).

This is not true.  NULL can be, and indeed, in K&R C, NULL *must*
be, defined as 0.

>Unfortunately, this is not the end of this train of thought.
>There is NO guarantee I am aware of that sizeof(char *) ==
>sizeof(struct foo *)!

This *is* true.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
UUCP:	seismo!mimsy!chris	ARPA/CSNet:	chris@mimsy.umd.edu

guy%gorodish@Sun.COM (Guy Harris) (04/28/87)

> As pointed out, environments where sizeof(int) != sizeof(char *)
> require NULL to be define'd as ((char *)0) (or some variation on that
> theme).

Excuse me?  On the contrary, it has been repeatedly pointed out that
NOTHING requires that you define NULL in that fashion, and it has
been pointed out several times that there are problems with defining
it thus.

I worked for several years on a machine where sizeof(int) !=
sizeof(char *), and where NULL was defined as 0, as it should be.
Incorrect C programs did not work, but so what?  Correct C programs
worked correctly.  Yes, it took some work to make some programs pass
"lint", but you ended up with better programs afterwards; "lint"
would sometimes even uncover bugs in those programs.

This one counterexample is more than sufficient to refute the claim
that defining NULL as something other than just plain old 0 is
somehow "required" by such environments.

> Unfortunately, this is not the end of this train of thought.
> There is NO guarantee I am aware of that sizeof(char *) ==
> sizeof(struct foo *)!

The lack of such a guarantee is the end of that train of thought,
though.  Defining NULL as (char *)0 is not sufficient for the general
case.

> For this reason (and to adhere to a local naming convention), we use a
> specifically typed nil pointer for each and every type.

Which means you can define NULL as 0 and not worry about it.

>     if (foo != fooNil) ...  Which is clearly correct,
>        vs.
>     if (foo != NULL) ...    Which just doesn't feel safe to me.

But is clearly correct nonetheless.

> This method has stood us in good stead (except when we have foolishly
> violated it) in porting a source-level debugger to over 70 different
> systems - not all of which were done by followers of the True Faith.

Umm, err, it's not a matter of "faith", so the use of terms connoting
religion is inappropriate and somewhat irritating here.  K&R is quite
clear on this matter; if somebody doesn't "believe in" K&R (or ANSI
C), they're perfectly welcome to implement some language that does
things otherwise, they're just not welcome to call it C.

A number of the participants in this discussion seem to be of the
opinion that there is no objective standard against which the
correctness of various implementations or opinions about the language
can be judged.  This is simply not true.

Others seem to be of the opinion that it is somehow impossible to
make C work on a machine with pointers and "int"s of different sizes
without defining NULL as 0.  This is also not true.

mason@gargoyle.UChicago.EDU (Tony Mason) (04/28/87)

In article <6479@mimsy.UUCP> chris@mimsy.UUCP (Chris Torek) writes:
>In article <31@thirdi.UUCP> peter@thirdi.UUCP (Peter Rowell) writes:
>>As pointed out, environments where sizeof(int) != sizeof(char *)
>>require NULL to be define'd as ((char *)0) (or some variation on that
>>theme).
>
>This is not true.  NULL can be, and indeed, in K&R C, NULL *must*
>be, defined as 0.

Perhaps someone is confusing 0 the int with 0 the NULL pointer symbol?  K & R
seem to make a distinction here:

(p.192, K&R)

"The compilers currently allow a pointer to be assigned to an integer, an
integer to a pointer, and a pointer to a pointer of another type.  The
assignment is a pure copy operation, with no conversion.  This usage is
nonportable, and may produce pointers which cause addressing exceptions when
used.  However, it is guaranteed that assignment of the constant 0 to a
pointer will produce a null pointer distinguishable from a pointer to any
object."

They DON'T say the "integer 0" but rather the "constant 0".  This must be
considered a context sensitive problem that a compiler designer must take
into account.

Equally, though, K & R don't explicitly say NULL *must* be defined as 0.
Indeed, I have had problems with this area before and don't use NULL much
anymore, as I have seen systems where NULL came DEFINED (from the
distribution) as (char *)0.  It did make code break.  Thus, the safest thing
to do is use 0.  Not the integer, but the constant.  For those who don't
trust compiler writers to stay up to spec, you can use (char *)0, (int *)0,
etc.

There has been a traditional attitude of laxness in the UNIX environment.  I
have cursed at VAX writers who assume that a long is an int (case statements
break when used with ANYTHING besides ints.)  It is only the largess of
compiler writers that allowed this type of behavior.

Tony Mason
Univ. of Chicago, Dept. of C.S.
ARPA: mason@gargoyle.uchicago.edu
UUCP: ...ihnp4!gargoyle!mason

throopw@dg_rtp.UUCP (Wayne Throop) (04/28/87)

> dave@sds.UUCP (dave schmidt x194)
> Applaud, applaud, applaud.  John has correctly pointed out that
> sizeof(pointer) == sizeof(int)  is not the universal constant that 
> many people believe it to be.  It is for this exact reason that 
> NULL should ***NOT*** be defined as 0.

Boo, hissss, razzzz.  Dave seems to incorrectly think that
sizeof(pointer) == sizeof(void *).  Now, granted, he comes briefly to
his senses and says...

> A minor problem still exists even with defining NULL as (void *)0;
> NULL cannot be passed to a routine as function pointer without
> a cast since sizeof(void (*)()) may not be the same as sizeof(void *).
> That, however, is something that shouldn't cause too much trouble.

...  which is correct, as far as it goes.  But it doesn't go far enough.
The problem isn't minor, since there exist machines where (void *) is
larger than many other pointer types.  Further, machines where there
*IS* no single type which has the correct length for all likely uses of
null pointers are *COMMON*.   (You've heard of the 80x86 series, right?)
Thus, giving the constant NULL a specific pointer type, *ANY* specific
pointer type, does *NOT* solve the problem it is purported to solve,
that is, passing NULL as an actual argument to a formal argument of
pointer type.  NULL must *STILL* *ALWAYS* be cast to the appropriate
type in such cases, even when the constant is (incorrectly) defined as
(void *)0.
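Wayne's rule can be sketched with invented functions: whatever NULL expands
to, a null pointer argument must carry the exact type the callee expects,
and that type differs per call site:

```c
#include <assert.h>

/* Sketch of Wayne's point: a null data pointer and a null function
 * pointer are different types that may even have different sizes,
 * so each call site must cast to the type its callee expects.
 * Both functions here are illustrative. */
static int data_arg_is_nil(int *p)
{
    return p == (int *)0;
}

static int func_arg_is_nil(int (*f)(void))
{
    return f == (int (*)(void))0;
}
```

No single definition of NULL, including (void *)0, removes the need for
these per-type casts in prototype-less or variadic calls.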

--
 It is easy to find fault, if one has that disposition.  There
 was once a man who, not being able to find any other fault with
 his coal, complained that there were too many prehistoric
 toads in it.
                --- Pudd'nhead Wilson's Calendar (Mark Twain)
-- 
Wayne Throop      <the-known-world>!mcnc!rti-sel!dg_rtp!throopw

peter@thirdi.UUCP (04/28/87)

In article <6479@mimsy.UUCP> chris@mimsy.UUCP (Chris Torek) writes:
>In article <31@thirdi.UUCP> peter@thirdi.UUCP (Peter Rowell) writes:
>>As pointed out, environments where sizeof(int) != sizeof(char *)
>>require NULL to be define'd as ((char *)0) (or some variation on that
>>theme).
>
>This is not true.  NULL can be, and indeed, in K&R C, NULL *must*
>be, defined as 0.

The *intention* of my remark (poorly expressed) was:

There are a number of programmers who use "NULL" as their standard null pointer.
I will not argue that they "should not do that".  I agree they shouldn't.
The fact of the matter is that they do so with amazing regularity. My use
of the word "require" was meant in the sense that if C programmers in
general insist on using NULL as a generic null pointer, then it is not
unreasonable to ask that the person defining NULL should do so in
such a way as to cause as few "gotchya's" as possible.  This may not
always be possible, but then Life is a Bitch.

As stated in my previous note, I am addressing What Is as opposed to
What Should Be.  The ANSI committee is working on the latter.  In the
meantime, we must all live with the former.

grodberg@kodak.UUCP (04/29/87)

In article <17498@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>>... we use a
>> specifically typed nil pointer for each and every type.
>> This method has stood us in good stead (except when we have foolishly
>> violated it) in porting a source-level debugger to over 70 different
>> systems - not all of which were done by followers of the True Faith.
>
>Umm, err, it's not a matter of "faith", so the use of terms connoting
>religion is inappropriate and somewhat irritating here.  K&R is quite
>clear on this matter; if somebody doesn't "believe in" K&R (or ANSI
>C), they're perfectly welcome to implement some language that does
>things otherwise, they're just not welcome to call it C.

     Unfortunately, a lot of compiler writers have produced compilers which
they think are C and sell as C, but are not C.  I have worked with enough
brain-damaged pseudo-C compilers masquerading as real C compilers to appreciate
any advice which will help make code transportable to sub-standard compilers
while not harming compatibility with real C compilers.  Explicitly casting
nil pointers should not cause any problem on real C compilers (if 0 represents
a proper nil pointer, then casting 0 to a pointer should not have any effect
at all), yet it certainly looks like it will keep most of the pseudo-C 
compilers from breaking.
    Yes, we can't make everything work everywhere, but it doesn't really hurt
to write compiler tolerant code, and the butt you save may be your own.
-- 
          Jeremy Grodberg

Usenet: ...rochester!kodak!grodberg or kodak!grodberg@cs.rochester.edu
Arpa: 	grodberg@kodak or kodak!grodberg@rochester

john@viper.UUCP (John Stanley) (05/01/87)

In article <17446@sun.uucp> guy@sun.UUCP (Guy Harris) writes:
 >
 >>Applaud, applaud, applaud.  John has correctly pointed out that
 >>sizeof(pointer) == sizeof(int)  is not the universal constant that 
 >>many people believe it to be.  It is for this exact reason that 
 >>NULL should ***NOT*** be defined as 0.
 >
 >OK, now given that
 >
 >	sizeof (char *) == sizeof (int *)
 >
 >is also not a tautology, what should NULL be defined as?  (char *)0
 >won't cut it, nor will (int *)0.

The answer to this was given in my original reply.  The only pointer
type which is data nonspecific is void*, thus NULL should be ((void*)0).
I have used this on enough projects to know it works and have only
had problems with porting a bug infested program where the author
used NULL every place you or I would have used a 0.  (It turned out
that the program was originally written by a college sophomore who had
only had one class in C...  He wrote it on a machine where sizeof
a pointer and an int were the same...)

I've looked into this quite a bit in the last 2 weeks and have come
to the conclusion that K&R made a mistake. (gasp :)  Despite the fact
that they constantly and consistently refer to NULL as a "pointer",
they fail to ever pass NULL as a parameter in the examples (something 
that would have brought this problem to light much earlier) and 
they consistently use 0 to #define NULL rather than a pointer.  This 
combined with their only having access to machines where pointers 
were the same size as ints has caused the current state of confusion.

The clearest example of this is K&R page 192 in the first paragraph.  
"The compilers currently allow a pointer to be assigned to an integer,
an integer to be assigned to a pointer, and a pointer to a pointer
of another type.  The assignment is a pure copy operation with no
conversions."  They go on to point out that this assumption is non-
portable...unfortunately, they failed to say why or to deal with the
ambiguity of the integer constant 0 being used as if it were really
a pointer.  While this doesn't affect a majority of applications or
systems, the few where it does require a re-examination of NULL and
a redefinition of the "pointer constant" to something that is in
-all- respects a -true- pointer constant....


  (Please, no jokes about my reply being 
   about a subject that's "NULL and void*"...  :-)

--- 
John Stanley (john@viper.UUCP)
Software Consultant - DynaSoft Systems
UUCP: ...{amdahl,ihnp4,rutgers}!{meccts,dayton}!viper!john

john@viper.UUCP (John Stanley) (05/01/87)

I realize it's a bit odd to "reply" to my own message, but I noticed
a problem with my reply and felt compelled to qualify one point...

In article <917@viper.UUCP> john@viper.UUCP (John Stanley) writes:
 >
 >The answer to this was given in my original reply.  The only pointer
 >type which is data nonspecific is void*, thus NULL should be ((void*)0).
 >I have used this on enough projects to know it works
................

  I should have mentioned that I haven't yet worked with one of the
(rumored?) systems or compilers which have different sized pointers
for different data types.

  BTW:  Has anyone actually used one of these odd systems?  I've only
heard the traditional rumors but have never heard actual systems or
compilers named...  Anyone know of a specific example?



  The message by Peter Rowell which gave the recommendation of having 
separate "NULL" pointers for each data type is a good workable 
alternative but is not likely to acquire much of a following because 
the number of systems it affects is so small. (That doesn't make it 
a bad idea, but it's something "extra" the programmer would have to 
deal with that's not likely to give an advantage on his/her system...
The general response will be "why bother?")

--- 
John Stanley (john@viper.UUCP)
Software Consultant - DynaSoft Systems
UUCP: ...{amdahl,ihnp4,rutgers}!{meccts,dayton}!viper!john

jv@mhres.UUCP (Johan Vromans) (05/02/87)

In article <17498@sun.uucp>, guy%gorodish@Sun.COM (Guy Harris) writes:
> I worked for several years on a machine where sizeof(int) !=
> sizeof(char *), and where NULL was defined as 0, as it should be.
> Incorrect C programs did not work, but so what?  Correct C programs
> worked correctly.  Yes, it took some work to make some programs pass
> "lint", but you ended up with better programs afterwards; "lint"
> would sometimes even uncover bugs in those programs.

I have had the bad experience that, on Xenix systems, lint performs its
task assuming the small memory model (sizeof int == sizeof char). If your
program is going to use large model (sizeof int != sizeof char), lint
doesn't help ..... there is no way to tell lint to check with regard to
another memory model.
-- 

Johan Vromans @ Multihouse Research, Gouda, the Netherlands
      uucp: ..{seismo!}mcvax!mhres!jv

chris@mimsy.UUCP (Chris Torek) (05/02/87)

In article <917@viper.UUCP> john@viper.UUCP (John Stanley) writes:
>The answer to this was given in my original reply.  The only pointer
>type which is data nonspecific is void*,

It is still specific:  It is, specifically, the largest pointer
type the compiler understands.  That is, it represents the pointer
type that has the most bits, and that can therefore represent all
other pointers distinctly, so that any other pointer can be converted
to (void *), then converted back, without losing any information.

>thus NULL should be ((void*)0).

Perhaps; but this will not solve the general case.

It is correct to say that the integer constant zero is not a pointer
type.  It is NOT true to say that there is only one pointer type
(whether John meant to imply this is unclear to me).  Given that
there are (theoretically) an infinite number of different pointer
types, with (theoretically) infinite numbers of different
representations, it is then obvious that there is no single kind
of nil pointer:  There are (theoretically) an infinite number of
nil pointers, each as different from one another as a float is
different from an unsigned short.

Now an infinite number of different nil pointers should make it
obvious that any single definition of NULL will not suffice.  But
in fact any one compiler does *not* provide an infinite number of
different nil pointers.  Indeed, in most compilers, there is only
one nil pointer; in some, there are two, or three; there may even
exist a compiler now that has more than three.  I would guess that
there are no compilers now with more than nine different kinds of
nil, to wit, (char *)0, (short *)0, (int *)0, (long *)0, (float *)0,
(double *)0, (struct any *)0, (union any *)0, (any (*)())0.  So
nine definitions of NULL should do, right?

Wrong.

There is nothing in the C language definition that limits the number
of kinds of nil pointers, and someone designing a system and a
compiler is free to invent new ones:  (struct 0to3bytes *)0,
(struct 4to15bytes *)0, ... and so forth.

Providentially (and if you believe that...), C provides a single
`unformed' nil pointer that may be converted to any of the infinite
different true nil pointers.  If this generic nil pointer were the
word `nil', people might not assume it happened to be a zero-valued
`int'.  Alas, this generic nil pointer is the integer constant
value 0, not the word nil.  It is easy to tell from context when
what appears to be a zero is in fact a specific nil pointer, and
compilers use these contexts to convert the one to the other.
These contexts include assignments to pointers, tests on pointers,
and casts.  They do *not* include function calls.

Exercise:  For each of the following, decide whether there is enough
context to allow a compiler to convert the integer constant zero
to an appropriate nil pointer:

	% cat example.c
	f()
	{
		char *cp;
		int i, *ip;
		extern int a;

1.		cp = 0;
2.		i = 0; cp = (char *) i;
3.		ip = 0 * a;
4.		if (cp != 0) { ... }
5.		if (!ip) { ... }
6.		if (ip == (char *) i) { ... }
7.		g(0);		/* assuming g takes one `long *' */
8.		g((long *) 0);
	}
	% 

ANSWERS (no peeking! and poking is frowned upon too :-) ):

	1. yes
	2. no
	3. PCC complains, and one can argue either way
	4. yes
	5. yes
	6. no
	7. no
	8. yes

(I would say the answer to 3 is `yes', but one could argue that
this is not an integer constant expression, since not all the
subexpressions are constant.  Oddly enough, PCC reduces the expression
internally, but not until after deciding it is not a constant
expression.)

With the addition of function prototypes, the call to g in 7. can
become correct:

	f()
	{
		...
		int g(long *);
		...
		g(0);
	}

as this is now an assignment context.
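Chris's point about item 7 can be checked directly; g here matches his
hypothetical function taking one `long *`:

```c
#include <assert.h>
#include <stddef.h>

/* With the prototype visible, the call g(0) behaves like the
 * assignment `long *p = 0;`: the compiler converts the integer
 * constant 0 to a null (long *) before making the call. */
static int g(long *p)
{
    return p == NULL;
}
```

Without the prototype in scope, the same g(0) passes a bare int, which is
exactly the case Chris marks "no" in the exercise above.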

No matter what the definition of NULL---0 or (char *)0 or (void
*)0 or whatever---there will be some situation in which this by
itself is not appropriate.  (The stubborn cases are functions with
variable argument lists).  A solution that works every time is to
cast to the appropriate kind of nil pointer the integer constant
zero (or, in dpANS C, the nil pointer to nothing `(void *)0')
everywhere it appears.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
Domain:	chris@mimsy.umd.edu	Path:	seismo!mimsy!chris

guy%gorodish@Sun.COM (Guy Harris) (05/03/87)

> I have had the bad experience that, on Xenix systems, lint performs its
> task assuming the small memory model (sizeof int == sizeof char). If your
> program is going to use large model (sizeof int != sizeof char), lint
> doesn't help ..... there is no way to tell lint to check with regard to
> another memory model.

Which just means that the Xenix "lint" is inadequate.  One would hope
that it would *still* complain about

	setbuf(stdout, NULL);

even if "sizeof(int)" == "sizeof(char *)" (which I presume is what
you meant above).

The System V lint has a particularly stupid feature - if
"sizeof(long)" == "sizeof(int)", it is silent about NULL not being
properly cast, unless you use the "-p" flag.

I might - barely - be willing to tolerate this *if* the "-p" flag
weren't significantly more stringent than is necessary, in most
cases, in the modern world.  Unfortunately, there are several
different flavors of portability, any of which might be of interest
in a particular case:

	1) Does it port to any reasonably "modern" UNIX C
	   implementation?  This check would require type-correctness
	   (i.e., "int", "short", and "long" should all be considered
	   different data types, and NULL should require a cast when
	   passed as an argument), and would check function calls against
	   a library that is either S5, 4.2BSD, or the intersection
	   thereof, but would permit very long names.

	2) Does it even port to less-modern UNIX C implementations?
	   This would also check that external names are unique in
	   the first 7 characters and non-external names are unique
	   in the first 8 characters.  It might check function calls
	   against a smaller library, too.

	3) Does it port to any ANSI C implementation (once said
	   standard emerges from the draft stage and is commonly
	   implemented)?  This would check that external names are
	   unique in the first 6 characters *after* case-folding, and
	   that non-external names are unique in the first 31
	   characters.  It would check function calls against a
	   library containing *only* the routines specified by the
	   ANSI standard.  (It would also disallow "#endif FOO",
	   since the ANSI C standard permits an implementation to
	   disallow this.)

	4) Does it port to any ANSI C implementation on a reasonably
	   modern OS?  This would loosen the restrictions on external
	   names to some degree.

It's not clear which of the above "-p" should specify.

flaps@utcsri.UUCP (05/21/87)

In article <149@sds.UUCP> dave@sds.UUCP (dave schmidt x194) writes:
>A minor problem still exists even with defining NULL as (void *)0;
>NULL cannot be passed to a routine as function pointer without
>a cast since sizeof(void (*)()) may not be the same as sizeof(void *).

I think Dave Schmidt fails to realize that sizeof(char *) might not
be the same as sizeof(int *) or sizeof(void *).  Truly the only
solution when passing NULL as a parameter is to cast it to the correct
pointer type.  Furthermore, other parameters should be cast to (char *)
when being passed to routines expecting (char *), as in:

    struct somethinghuge a, b;

    memcpy((char *)&a, (char *)&b, sizeof(struct somethinghuge));

in case (char *) is not a no-op, as it isn't on machines that number
their words (as opposed to bytes) consecutively, and which therefore use
some bits of a char * to say where in the word the char is.

-- 

      //  Alan J Rosenthal
     //
 \\ //        flaps@csri.toronto.edu, {seismo!utai or utzoo}!utcsri!flaps,
  \//              flaps@toronto on csnet, flaps at utorgpu on bitnet.
