guido@mcvax.UUCP (Guido van Rossum) (03/19/84)
A week ago, I asked in this group whether the following is portable:

	foo(c)
	char c;
	{ ... }

	bar(i)
	int i;
	{ ... }

	main()
	{
		foo(100);
		bar(' ');
	}

Thanks to all of you who took the time to reply.  The general opinion is:
mixing chars and ints as parameter types is portable, because (section 6.1
of K&R) "A character . . . may be used wherever an integer may be used.
In all cases the value is converted to an integer."  The implication is
that in the call bar(' '), the space is converted to an integer before
parameter passing.  The same argument shows that foo(c) must expect that
its argument, even when written as a character (e.g. foo(' ')), has been
converted to an integer, so foo((int)' ') is portable too.  (My own opinion
is that this stems at least partly from the fact that on the PDP-11 the
stack pointer must be an even address, so that characters passed as
parameters had to be aligned anyway.)

Several people pointed out that my example (which was chosen rather
carelessly) is NOT portable, because the conversion between char and int
is not defined and there may be machines where 100 does not fit in a
character; there is also the annoying problem of whether characters are
considered signed or not.  However, this is not what bothered me (I don't
use foo(100), just foo(i) where i is returned by getchar() and is not EOF).

From the results of this questionnaire, I conclude that it is best not to
use simple char variables or parameters, but always to declare them as int.
This has the additional benefit of allowing them to be put in registers,
while the Berkeley C compiler won't put characters in registers (is this
still true?).  Any comments?

	Guido (soon to be included in the SRI Phone Book) van Rossum,
	CWI, Amsterdam
	guido@mcvax
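(For concreteness, a minimal sketch of the practice Guido describes, with
the parameter declared int and fed from getchar(); the name copychar is
illustrative, not from the original post.)

	#include <stdio.h>

	copychar(c)
	int c;		/* int, not char: the actual argument is widened
			   to int anyway, and an int may go in a register */
	{
		putchar(c);
	}

	main()
	{
		int c;

		/* c must be int so it can hold EOF as well as any char */
		while ((c = getchar()) != EOF)
			copychar(c);
		return 0;
	}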
gwyn@brl-vgr.ARPA (Doug Gwyn ) (03/21/84)
' ' is NOT a char, despite appearances. It is an int. So is 'ab'. Actual arguments to functions are widened depending on type (this is a side-effect of the PDP-11 implementation, now embedded in the language). Chars are widened to ints and floats are widened to doubles. Therefore it makes little sense to declare the formal parameters of a function to be type char or float, since actual arguments never will be.
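(A short sketch of both points, not from the original post; show is an
illustrative name.  In C, sizeof ' ' equals sizeof(int), and char actual
arguments are widened to int before the call.)

	#include <stdio.h>

	show(c)
	int c;			/* the actual argument always arrives as an int */
	{
		printf("got %d\n", c);
	}

	main()
	{
		char ch = 'x';

		printf("sizeof ' ' = %d, sizeof(int) = %d\n",
			(int)sizeof ' ', (int)sizeof(int));
		show(' ');	/* ' ' already has type int */
		show(ch);	/* ch is widened to int at the call */
		return 0;
	}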
gnu@sun.uucp (John Gilmore) (03/21/84)
> Therefore it makes little sense to declare the formal parameters of a
> function to be type char or float, since actual arguments never will be.

I disagree.  For example, a char or short can be multiplied much faster on
a 68000 than a (32-bit) int or long.  Also, if the parameter is declared
to be a char, one-byte compares can be done on it, rather than 4-byte
compares.  Since the "conversion" from an int to a char or short costs
nothing, but gains something, you might as well just say what you mean and
call it a char.

This is NOT true for IEEE float, since the float format and the double
format have different bit configurations (more exponent bits in the
doubles).  Conversion to float will cost, but then computation can be
cheaper, so it's a tradeoff.  (Our C compiler has a switch to generate
float expressions in float rather than double, to save time.)
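(A sketch of the byte-compare point under John's assumptions; the name
is_blank is illustrative.  With the formal declared char, a 68000 compiler
can test it against the constants below with one-byte instructions, where
an int formal would force 32-bit compares.)

	is_blank(c)
	char c;			/* char formal: byte compares suffice */
	{
		return c == ' ' || c == '\t';
	}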