lav@mtsbb.UUCP (L.A.VALLONE) (05/16/86)
I am interested in the precision used by various C compilers when
passing arguments on the stack to a called routine.  For example:

	some_routine_or_main()
	{
		char a,b;

		foo(a,b);
	}

	foo(a,b)
	long a,b;
	{
		printf("%#lx\t%#lx\n", a, b);
	}

I tried a program similar to the above on a vax and a UNIX PC, and in
both cases the values of 'a' and 'b' were correctly passed.  Given
that a long on both machines is 32 bits and a char is 8 bits, I
assumed from this that both compilers expand 'a' and 'b' to 32 bits
before putting them on the stack.  I might have expected this for the
vax, but not for the UNIX PC (68000 based).

I also tried separating the functions into two files to eliminate the
possibility of the compiler detecting the inconsistency and correcting
it (though this doesn't mean the loader isn't being smart).  Again, it
worked.

Does anyone know if my assumption is correct?  Is my model not telling
me what I think it is?  Are there any C compilers that don't behave
this way?  Please respond by Email and I'll post if anyone else is
interested.

Thanks in advance
--
Lee Vallone
AT&T Information Systems Merlin
{... ihnp4, mtuxo}!mtsbb!lav
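For reference, a complete single-file version of the experiment might
look like the sketch below.  The main() wrapper, the test values 0x12
and 0x34, and the return statement are illustrative additions, not
from the post above; on a compiler where int and long are both 32
bits it should print 0x12 and 0x34.

	#include <stdio.h>

	main()
	{
		char a, b;

		a = 0x12;	/* arbitrary test values */
		b = 0x34;
		foo(a, b);	/* no prototype: 'a' and 'b' are
				 * widened to int before the call */
		return 0;
	}

	foo(a, b)
	long a, b;	/* reads the widened values as longs; this
			 * happens to work where long and int are
			 * the same size, as on the vax and UNIX PC */
	{
		printf("%#lx\t%#lx\n", a, b);
	}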
ark@alice.UUCP (Andrew Koenig) (05/17/86)
> I am interested in the precision used by various C compilers when
> passing arguments on the stack to a called routine.  For example:
>
>	some_routine_or_main()
>	{
>		char a,b;
>
>		foo(a,b);
>	}
>
>	foo(a,b)
>	long a,b;
>	{
>		printf("%#lx\t%#lx\n", a, b);
>	}
>
> I tried a program similar to the above on a vax and a UNIX PC, and
> in both cases the values of 'a' and 'b' were correctly passed.
> Given that a long on both machines is 32 bits and a char is 8 bits,
> I assumed from this that both compilers expand 'a' and 'b' to 32
> bits before putting them on the stack.  I might have expected this
> for the vax, but not for the UNIX PC (68000 based).

The definition of C is that char or short arguments are expanded to
ints when passed to a function.  Both the VAX and the UNIX PC (and
the 3B machines, for that matter) support 32-bit ints, which is why
reading the arguments as longs happened to work on both.

Since subscripts are ints rather than longs, a machine that
implements only 16-bit ints has a rather hard time dealing with
arrays of more than 32767 elements (32767 being the largest value a
signed 16-bit int can hold).  This is, of course, often the case on
little machines like the PDP-11 or 8086.
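To make the subscript point concrete, here is a sketch of the usual
workaround on such a machine; the array name 'big', its size, and the
function are made up for illustration, and the declaration assumes
the compiler and data space can accommodate a 40000-byte array at
all.

	char big[40000L];	/* more elements than a 16-bit int
				 * can count up to */

	long
	total()
	{
		long i, sum;	/* a long index avoids wrapping at
				 * 32767, at some cost in speed on
				 * a 16-bit machine */

		sum = 0;
		for (i = 0; i < 40000L; i++)
			sum += big[i];
		return sum;
	}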