hjelm@g.gp.cs.cmu.edu (Mark Hjelm) (09/05/90)
What is the compiler allowed/required to do for this:
f(a, b)
float a;
char b;
{
}
Is the type of "a" (as seen inside of "f") float or double or either?
Is the type of "b" char or int or either?
Mark
hjelm@cs.cmu.edu
gwyn@smoke.BRL.MIL (Doug Gwyn) (09/05/90)
In article <10391@pt.cs.cmu.edu> hjelm@g.gp.cs.cmu.edu (Mark Hjelm) writes:
-What is the compiler allowed/required to do for this:
- f(a, b)
- float a;
- char b;
- {
- }
-Is the type of "a" (as seen inside of "f") float or double or either?
-Is the type of "b" char or int or either?
Assuming the absence of any prototype for this function, its actual
arguments would be passed as double and int, which upon entry to the
function would be in effect assigned to local variables a and b
having types float and char, respectively.
henry@zoo.toronto.edu (Henry Spencer) (09/06/90)
In article <10391@pt.cs.cmu.edu> hjelm@g.gp.cs.cmu.edu (Mark Hjelm) writes:
> f(a, b)
> float a;
> char b;
> {
>
>Is the type of "a" (as seen inside of "f") float or double or either?
>Is the type of "b" char or int or either?

The types are, in all cases except arrays (which silently turn into
pointers), precisely as declared.  In general, this can involve
conversions as part of the function-entry sequence.
--
TCP/IP: handling tomorrow's loads today| Henry Spencer at U of Toronto Zoology
OSI: handling yesterday's loads someday| henry@zoo.toronto.edu   utzoo!henry
bright@Data-IO.COM (Walter Bright) (09/06/90)
In article <10391@pt.cs.cmu.edu> hjelm@g.gp.cs.cmu.edu (Mark Hjelm) writes:
<What is the compiler allowed/required to do for this:
<	f(a, b)
<	float a;
<	char b;
<	{
<	}

It is semantically equivalent to:

	f(atmp, btmp)
	double atmp;
	int btmp;
	{
		float a = atmp;
		char b = btmp;
		....
	}
nazgul@alphalpha.com (Kee Hinckley) (09/08/90)
In article <2689@dataio.Data-IO.COM> bright@Data-IO.COM (Walter Bright) writes:
>In article <10391@pt.cs.cmu.edu> hjelm@g.gp.cs.cmu.edu (Mark Hjelm) writes:
><What is the compiler allowed/required to do for this:
><	f(a, b)
><	float a;
><	char b;
><	{
><	}
>
>It is semantically equivalent to:
>	f(atmp,btmp)
>	double atmp;
>	int btmp;

That part I understand.  I have a related question though.  Consider the
following function, defined in K&R:

	foo(c, i)
	char c;
	int i;
	{}

Clearly the function expects to pick two ints up off of the stack.

Now consider the same function, declared and used from ANSI C (or C++).

	extern foo(char c, int i);
	foo('g', 0xF0F0);

As far as I can tell the ANSI spec (but I'm just reading it through the
second edition K&R book) doesn't explicitly address the compatibility
issue raised here.  In other words, some implementations may do an
explicit promotion of 'char c' to 'int', thereby retaining the correct
semantics, whereas other compilers may actually put a single byte on the
stack, in which case the call will not work properly.

I note that the X Intrinsics seem to recognize this problem and define
both "Wide" and "Narrow" prototypes for the functions, where the "Wide"
prototypes explicitly redefine all small integral types to "int" and the
"Narrow" ones leave them be.  The default, for compatibility reasons, is
"Wide".

So I guess the question is: does the ANSI spec mandate that the above be
compatible, mandate that they aren't, or not say?  And does it make any
difference whether the definition of the function (same syntax) is
compiled using a K&R or an ANSI compiler?

Here is a sample program to test this with, consisting of two files, the
first of which must be compiled with a prototyping compiler, the second
of which may be compiled with or without one.  Bind them together and
see what happens.
/*
 * p1.c
 *
 * Here are the combinations:
 *	declared		defined
 *	exact prototype		exact prototype
 *	exact prototype		wide prototype
 *	exact prototype		no prototype
 *	wide prototype		exact prototype
 *	wide prototype		wide prototype
 *	wide prototype		no prototype
 *	no prototype		exact prototype
 *	no prototype		wide prototype
 *	no prototype		no prototype
 *
 * and, since I hear GNU cheats on this and uses ANSI semantics even
 * if the definition is K&R style
 *
 *	exact prototype		K&R wide
 *	wide prototype		K&R wide
 *	no prototype		K&R wide
 */
extern void EE(char c, int i);
extern void EW(char c, int i);
extern void EN(char c, int i);
extern void WE(int c, int i);
extern void WW(int c, int i);
extern void WN(int c, int i);
extern void NE();
extern void NW();
extern void NN();
extern void EG(char c, int i);
extern void WG(int c, int i);
extern void NG();

void main()
{
	EE('x', 999);
	EW('x', 999);
	EN('x', 999);
	WE('x', 999);
	WW('x', 999);
	WN('x', 999);
	NE('x', 999);
	NW('x', 999);
	NN('x', 999);
	EG('x', 999);
	WG('x', 999);
	NG('x', 999);
}

/*
 * p2.c
 *
 * Here are the combinations:
 *	declared		defined
 *	exact prototype		exact prototype
 *	exact prototype		wide prototype
 *	exact prototype		no prototype
 *	wide prototype		exact prototype
 *	wide prototype		wide prototype
 *	wide prototype		no prototype
 *	no prototype		exact prototype
 *	no prototype		wide prototype
 *	no prototype		no prototype
 *
 * and, since I hear GNU cheats on this and uses ANSI semantics even
 * if the definition is K&R style
 *
 *	exact prototype		K&R wide
 *	wide prototype		K&R wide
 *	no prototype		K&R wide
 */
#include <stdio.h>

#ifdef __STDC__
void EE(char c, int i) { printf("Exact Exact:    '%c', %d\n", c, i); }
void EW(int c, int i)  { printf("Exact Wide:     '%c', %d\n", c, i); }
#endif

void EN(c, i)
char c;
int i;
{ printf("Exact None:     '%c', %d\n", c, i); }

#ifdef __STDC__
void WE(char c, int i) { printf("Wide Exact:     '%c', %d\n", c, i); }
void WW(int c, int i)  { printf("Wide Wide:      '%c', %d\n", c, i); }
#endif

void WN(c, i)
char c;
int i;
{ printf("Wide None:      '%c', %d\n", c, i); }

#ifdef __STDC__
void NE(char c, int i) { printf("None Exact:     '%c', %d\n", c, i); }
void NW(int c, int i)  { printf("None Wide:      '%c', %d\n", c, i); }
#endif

void NN(c, i)
char c;
int i;
{ printf("None None:      '%c', %d\n", c, i); }

void EG(c, i)
int c, i;
{ printf("Exact Wide-K&R: '%c', %d\n", c, i); }

void WG(c, i)
int c, i;
{ printf("Wide Wide-K&R:  '%c', %d\n", c, i); }

void NG(c, i)
int c, i;
{ printf("None Wide-K&R:  '%c', %d\n", c, i); }

---
Here is the result of running this on an Apollo.

Exact Exact:    'x', 999
Exact Wide:     '', 65498744
Exact None:     '', 65498744
Wide Exact:     '', 7864320
Wide Wide:      'x', 999
Wide None:      'x', 999
None Exact:     '', 7864320
None Wide:      'x', 999
None None:      'x', 999
Exact Wide-K&R: '', 65471463
Wide Wide-K&R:  'x', 999
None Wide-K&R:  'x', 999

Basically, Exact is Exact, and it isn't compatible with anything defined
Wide or without a prototype; Wide and None are identical.  This risks
compatibility with old code (you have to be careful how you define
things, and you never want to use prototypes inconsistently in a single
program).  On the other hand, it enhances compatibility with other
languages.
--
Alphalpha Software, Inc.	|	motif-request@alphalpha.com
nazgul@alphalpha.com		|-----------------------------------
617/646-7703 (voice/fax)	|	Proline BBS: 617/641-3722

I'm not sure which upsets me more; that people are so unwilling to
accept responsibility for their own actions, or that they are so eager
to regulate everyone else's.
gwyn@smoke.BRL.MIL (Doug Gwyn) (09/10/90)
In article <1990Sep8.053408.2005@alphalpha.com> nazgul@alphalpha.com (Kee Hinckley) writes:
>foo(c, i)
>char c;
>int i;
>{}
>extern foo(char c, int i);

These are not guaranteed to be compatible, since the definition uses
default-widened parameter types but the prototype declaration uses
unwidened (in general) types.  Implementations that always default-widen
arguments, even for functions with new-style linkage, will probably
support this mixture, but other implementations will not.

The general coding rule for fixed-argument functions is:
	EITHER	always use already-default-widened parameter types
	OR ELSE	always use prototypes, never old-style declarations
		or definitions
(For variable-argument functions, always use prototypes including ",..."
and use the appropriate <stdarg.h> macros in the function definitions.)

If you try to mix old- and new-style syntax, you risk running afoul of
the genuine differences in linkage for the two styles.
bright@Data-IO.COM (Walter Bright) (09/11/90)
In article <1990Sep8.053408.2005@alphalpha.com> nazgul@alphalpha.com (Kee Hinckley) writes:
<Consider the following function, defined in K&R:
<
< foo(c, i)
< char c;
< int i;
< {}
<Clearly the function expects to pick two ints up off of the stack.
<
<Now consider the same function, declared and used from ANSI C (or C++).
< extern foo(char c, int i);
<
<So I guess the question is. Does the ANSI spec mandate that the above be
<compatible, mandate that they aren't, or not say?
They are *NOT* compatible. The prototype for the first one is:
int foo(int,int);
which is not the same as:
int foo(char,int);
even though some compilers allow this.
hugh@dgp.toronto.edu (D. Hugh Redelmeier) (09/15/90)
In article <2700@dataio.Data-IO.COM> bright@Data-IO.COM (Walter Bright) writes:
>In article <1990Sep8.053408.2005@alphalpha.com> nazgul@alphalpha.com (Kee Hinckley) writes:
><Consider the following function, defined in K&R:
><
><	foo(c, i)
><	char c;
><	int i;
><	{}
><Clearly the function expects to pick two ints up off of the stack.
><
><Now consider the same function, declared and used from ANSI C (or C++).
><	extern foo(char c, int i);
><
><So I guess the question is.  Does the ANSI spec mandate that the above be
><compatible, mandate that they aren't, or not say?
>
>They are *NOT* compatible.  The prototype for the first one is:
>	int foo(int,int);
>which is not the same as:
>	int foo(char,int);
>even though some compilers allow this.

Actually, Walter is slightly wrong: the default argument promotions on
certain implementations may promote char to unsigned int.  But he is
right in that they cannot leave char as char, so the two declarations
are not compatible.

The fact that the default argument promotions are implementation
dependent is quite unfortunate.  This will even cause problems with
library functions.  For this reason, I think X3J11 should have required
that all char values be representable in "int"; if
sizeof(char)==sizeof(int), then char should be signed.  I said so in a
formal comment, but to no avail.

Hugh Redelmeier
{utcsri, yunexus, uunet!attcan, utzoo, scocan}!redvax!hugh
When all else fails: hugh@csri.toronto.edu
+1 416 482-8253
chris@mimsy.umd.edu (Chris Torek) (09/15/90)
In article <1990Sep15.011126.23112@jarvis.csri.toronto.edu> hugh@dgp.toronto.edu (D. Hugh Redelmeier) writes:
>The fact that the default argument promotions are implementation
>dependent is quite unfortunate.

Indeed.  Note that this problem would not exist, had X3J11 chosen the
correct (i.e., sign-preserving) extension rules.  As things stand now,
you CANNOT calculate the type of an unsigned expression in a widening
context.  Apparently the committee decided that the value represented
by some bit pattern was more important than the type needed to hold
that value---not minding the fact that without knowing that type, it is
not possible to describe that value!
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 405 2750)
Domain:	chris@cs.umd.edu	Path:	uunet!mimsy!chris
gwyn@smoke.BRL.MIL (Doug Gwyn) (09/16/90)
In article <26564@mimsy.umd.edu> chris@mimsy.umd.edu (Chris Torek) writes:
>Note that this problem would not exist, had X3J11 chosen the correct
>(i.e., sign-preserving) extension rules.  [harangue omitted]

Actually the alternative was unsignedness-preserving.  Both sets of
rules had substantial existing practice, and the choice was not at all
easy to make.  In the end, the committee decided that the probability
of programming errors was somewhat greater with the
unsignedness-preserving rules than with value-preserving rules, and
that maintaining arithmetic values would more often be important than
maintaining type-unsignedness.

One of the strongest advocates for unsignedness-preserving rules later
did an experiment to determine how much actual existing code would be
affected by the choice, and discovered that the change fixed more bugs
in existing code than it introduced, but much more often it had no
effect.  That pretty much confirmed the original committee evaluation.
chris@mimsy.umd.edu (Chris Torek) (09/16/90)
>In article <26564@mimsy.umd.edu> I wrote:
>>Note that this problem would not exist, had X3J11 chosen the correct
>>(i.e., sign-preserving) extension rules.  [harangue omitted]

In article <13865@smoke.BRL.MIL> gwyn@smoke.BRL.MIL (Doug Gwyn) writes:
>Actually the alternative was unsignedness-preserving.

Er, that is what `sign-preserving' above means: in short, `unsigned is
sticky'.

>Both sets of rules had substantial existing practice,

This is true;

>and the choice was not at all easy to make.

but this should have been false, for the reason I stated earlier:

>One of the strongest advocates for unsignedness-preserving rules later
>did an experiment to determine how much actual existing code would be
>affected by the choice, and discovered that the change fixed more bugs
>in existing code than it introduced, but much more often it had no effect.
>That pretty much confirmed the original committee evaluation.

In almost all cases, the change has no effect: the only cases where it
has an effect are in places that really `ought to' have a cast, and---
in my (on this point, not at all humble) opinion, far more importantly
---in unprototyped arguments.  In the latter case, it is impossible to
calculate the type (and thus the value) of the expression without
knowing whether sizeof(short) < sizeof(int).

Now, if you accept as an axiom that unprototyped arguments are bad, the
above argument almost goes away (`almost' because you are also arguing
that *printf are evil, but they are here to stay).  If, on the other
hand, you have mountains of existing code and little time to rewrite it
all....
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 405 2750)
Domain:	chris@cs.umd.edu	Path:	uunet!mimsy!chris