BJORNDAS%CLARGRAD.BITNET@WISCVM.WISC.EDU (09/06/86)
Very simple question here, folks. What is the correct way to
declare the variable in the called function below:
main()
{
	char c = 'W';
	foo(c);
}

foo(ch)
char ch;	/* Or should this be "int ch;" because it gets promoted? */
{}
On my micro it MUST be declared an int; I think this is screwy, but
I'm not an expert.
---
Sterling Bjorndahl, Claremont Graduate School
BJORNDAS @ CLARGRAD on BITNET
qwerty@drutx.UUCP (Brian Jones) (09/08/86)
In the sequence:
main()
{
char c = 'w';
foo(c)
}
foo(ch)
char ch;
char ch is the correct declaration. The compiler/code generator should
handle pulling the character portion of the promoted variable off the
stack correctly. Declaring it 'int' is asking for trouble.
--
Brian Jones aka {ihnp4,}!{drutx,druhi}!qwerty @ AT&T-IS, Denver
gt6294b@gitpyr.UUCP (SCHEUTZOW,MICHAEL J) (09/08/86)
In article <3594@brl-smoke.ARPA> BJORNDAS%CLARGRAD.BITNET@WISCVM.WISC.EDU writes:
>Very simple question here, folks. What is the correct way to
>declare the variable in the called function below:
>
>main()
>{
>	char c = 'W';
>	foo(c);
>}
>
>foo(ch)
>char ch; /* Or should this be "int ch;" because it gets promoted? */
>{}
>
>On my micro it MUST be declared an int; I think this is screwy, but
>I'm not an expert.
>
>---
>Sterling Bjorndahl, Claremont Graduate School
>BJORNDAS @ CLARGRAD on BITNET
thomps@gitpyr.UUCP (Ken Thompson) (09/09/86)
In article <1219@drutx.UUCP>, qwerty@drutx.UUCP (Brian Jones) writes:
> In the sequence:
>
> main()
> {
> char c = 'w';
> foo(c)
> }
>
> foo(ch)
> char ch;
>
> char ch is the correct declaration. The compiler/code generator should
> handle pulling the character portion of the promoted variable off the
> stack correctly. Declaring it 'int' is asking for trouble.

If I understand what Kernighan and Ritchie say in their book, then ch
is automatically converted to an int when foo is called, because a
function argument is an expression and a character is always converted
to an int in an expression.  See K&R pages 41-42.

I know that some compilers take care of this and allow you to still
declare ch as a char.  However, I note that K&R always declare ch as an
int in their examples.  I would suppose that the compiler being used
requires the int declaration.  Since the conversion occurs by definition
of the language, there is no danger in declaring it an int, and this is
commonly done in most C code.  Declare it as an int and go to it.  It
also sounds like the asker of the original question should get hold of
K&R and learn about the relationship between char and int in C.
--
Ken Thompson
Georgia Tech Research Institute
Georgia Institute of Technology, Atlanta Georgia, 30332
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!thomps
karl@haddock (09/09/86)
BJORNDAS%CLARGRA@WISCVM.WISC.EDU (Sterling Bjorndahl) writes:
>foo(ch)
>char ch; /* Or should this be "int ch;" because it gets promoted? */
>{}
>
>On my micro it MUST be declared an int; I think this is screwy...

This relates to what I just said in another topic.  Discounting function
prototyping (ANSI proposed), actual arguments of type char, short,
float, and array are converted; therefore one should never declare
formal arguments of these types.

However, you are right -- the C language is supposed to silently fix it
for you by interpreting your "char" declaration as an "int".  Your
compiler is broken.

Karl W. Z. Heuer (ima!haddock!karl; karl@haddock.isc.com), The Walking Lint
karl@haddock (09/12/86)
BJORNDAS%CLARGRA@WISCVM.WISC.EDU (Sterling Bjorndahl) writes:
>>foo(ch)
>>char ch; /* Or should this be "int ch;" because it gets promoted? */

drutx!qwerty (Brian Jones) replies:
>char ch is the correct declaration.  The compiler should handle pulling
>the character portion.  Declaring it 'int' is asking for trouble.

haddock!karl (Karl Heuer) replies:
>[int is better;] one should never declare formal arguments of [type
>char, short, float, or array].  [But] your compiler is broken.

Well, now that's cleared up. :-)

Actually, I think I may have spoken too quickly.  It *is* misleading to
declare a float or array formal parameter -- the compiler silently
converts it to a double or pointer declaration -- but char and short
aren't affected the same way, at least not here (SVR2 vax).  Example:

	foo(ch, sh, fl, ar)
	char ch; short sh; float fl; int ar[10];
	{ ... }

	sizeof(fl)==sizeof(double), not sizeof(float); &fl is "double *", not "float *"
	sizeof(ar)==sizeof(int *), not sizeof(int[10]); &ar is "int **", not "int (*)[]"

but sizeof(ch)==sizeof(char), and &ch is "char *".  So maybe it is safe;
I can't find a definitive statement in K&R.

However, I disagree that "declaring it 'int' is asking for trouble".
It may or may not undergo sign extension, but that's true of any use of
char.  The actual argument *was* converted from char to int by the
caller, so the value of the (int) formal argument is predictable.  If
it's declared "char" it has a predictable value, but I'm not convinced
it has a well-defined type.  In any case (unless you were doing more
than your posting implied) you have a broken compiler.

Karl W. Z. Heuer (ima!haddock!karl; karl@haddock.isc.com), The Walking Lint
rlk@chinet.UUCP (Richard Klappal) (09/13/86)
In article <2233@gitpyr.UUCP> thomps@gitpyr.UUCP writes:
>In article <1219@drutx.UUCP>, qwerty@drutx.UUCP (Brian Jones) writes:
>> In the sequence:
>>
>> main()
>> {
>> char c = 'w';
>> foo(c)
>> }
>>
>> foo(ch)
>> char ch;
>>
>> char ch is the correct declaration. The compiler/code generator should
>> handle pulling the character portion of the promoted variable off the
>> stack correctly. Declaring it 'int' is asking for trouble.
>>
> If I understand what Kernighan and Ritchie say in their book, then ch
> is automatically converted to an int when foo is called because a function
> argument is an expression and a character is always converted to an int in
> an expression. See K&R page 41-42.
> I know that some compilers take care of this and allow you to
> still declare ch as a char. However, I note that K&R always declare
> ch as an int in their examples. I would suppose that the compiler being
> used requires the int declaration. Since the conversion occurs by definition
> of the language, there is no danger in declaring it an int and this is
> commonly done in most C code. Declare it as an int and go to it. It also
> sounds like the asker of the original question should get hold of K&R and
> learn about the relationship between char and int in C.
>--
>Ken Thompson
>Georgia Tech Research Institute
>Georgia Institute of Technology, Atlanta Georgia, 30332
>...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!thomps

Balderdash!!!!!

K&R declare all chars as ints so that routines that return EOF will
work correctly regardless of whether chars are signed or unsigned in
your hardware/software combination.

Your compiler is broken if it will not work as shown in the example,
and if you define the arg as an int in the subroutine, you 'should'
cast the char arg to an int in the calling sequence.  The automatic
promotion of char to integer only means that everything will probably
work if you don't do the casting.
--
UUCP: ..!ihnp4!chinet!uklpl!rlk  ||  MCIMail: rklappal
      ..!ihnp4!ihu1h!rlk         ||  Compuserve: 74106,1021
chris@umcp-cs.UUCP (Chris Torek) (09/14/86)
In article <2233@gitpyr.UUCP> thomps@gitpyr.UUCP writes:
>>If I understand what Kernighan and Ritchie say in their book, then ch
>>is automatically converted to an int when foo is called because a function
>>argument is an expression and a character is always converted to an int in
>>an expression. See K&R page 41 - 42.

This is correct.  Actually, this is an understatement: C does not have
a character type.

Wait!  Let me explain what I mean by this.  The C language has what I
call `expression types' and `storage types'.  There is a character
storage type, but there is no character expression type.  The following
is a list of storage types and equivalent expression types:

	Storage Type	Expression Equivalent
	------------	---------------------
	char		int
	unsigned char	unsigned int
	short		int
	unsigned short	unsigned int
	int		int
	unsigned int	unsigned int
	long		long
	unsigned long	unsigned long
	float		double
	double		double
	pointer		pointer

(Some C compilers have different flavours of pointer; I believe the
Data General MV series distinguish between `char *' and `other
pointer', as do PR1ME compilers.  One proposed architecture has a
different kind of pointer for each possible size datum.)

This peculiarity simplifies the compiler, often at the cost of some
run-time efficiency.  It also leads to several pitfalls.  In general,
if you write what you mean, you should be safe; and if you write what
the compiler does, you should also be safe.  (There is an advantage to
writing what you mean: a good compiler may be able to produce better
code in such cases.  Unfortunately, there is a disadvantage as well: a
poor or mediocre compiler will often produce worse code.)

>>I know that some compilers take care of this and allow you to
>>still declare ch as a char. However, I note that K&R always declare
>>ch as an int in their examples. I would suppose that the compiler being
>>used requires the int declaration.

Then it is not a C compiler.

>>Since the conversion occurs by definition of the language, there is
>>no danger in declaring it an int
(other than possible confusion on the part of the programmer)
>>and this is commonly done in most C code.

As I mentioned in the `soundex debate', I do this myself, with some
reservations.

In article <547@chinet.UUCP> rlk@chinet.UUCP (Richard Klappal) writes:
>Balderdash!!!!!

Not at all.

>K&R declare all chars as ints so that routines that return EOF will
>work correctly regardless of whether chars are signed or unsigned
>in your hardware/software combination.

K&R, p. 127:

	type(c)	/* return type of ASCII character */
	int c;
	{
		if (c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z')
			return(LETTER);
		else if (c >= '0' && c <= '9')
			return(DIGIT);
		else
			return(c);
	}

p. 79:

	char	buf[BUFSIZE];	/* buffer for ungetch */
	int	bufp = 0;	/* next free position in buf */
	...
	ungetch(c)
	int c;
	{
		if (bufp > BUFSIZE)
			printf("ungetch: too many characters\n");
		else
			buf[bufp++] = c;
	}

(To be fair, an exercise on page 80 or so suggests extending ungetch()
to handle ungetch(EOF).)

p. 42:

	Since a function argument is an expression, type conversion
	also takes place when arguments are passed to functions: in
	particular, char and short become int, and float becomes
	double.  This is why we have declared function arguments to be
	int and double even when the function is called with char and
	float.

>Your compiler is broken if it will not work as shown in the example,

Agreed.

>and if you define the arg as an int in the subroutine, you 'should'
>cast the char arg to an int in the calling sequence.

Now *this* is (to borrow your word) balderdash.  If by this you mean
`for clarity, I think you should', then I will agree; if you mean `for
proper operation, in strict C compilers, you must', to this I object:
it is not so.
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 1516)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@mimsy.umd.edu
guy@sun.uucp (Guy Harris) (09/17/86)
> Balderdash!!!!!
> K&R declare all chars as ints so that routines that return EOF will
> work correctly regardless of whether chars are signed or unsigned
> in your hardware/software combination.

This is irrelevant to how *arguments to functions* are declared.  K&R
declares those variables into which "getc" or "getchar" store their
values as "int"s because the value of "getc" (and thus "getchar") *is*
an "int".

This also has nothing to do, strictly speaking, with whether "char"s
are signed or unsigned.  In *either* case, storing the value of "getc"
into a "char" and comparing it with EOF will fail.  Assuming that EOF
is -1 (as it is in most, if not all, UNIX implementations of the
standard I/O library), then:

If "char"s are signed, then reading in a character with the value
'\377' (assuming 8-bit characters; perform the appropriate translation
for other character sizes) and assigning it to a "char" variable
assigns a value to that variable that will be considered equal to -1.
Thus, a program looking for EOF will see one, and think it reached the
end of the file.

If "char"s are unsigned, then when "getc" encounters the end of the
file, it will return -1, which when assigned to a "char" variable will
give that variable a value equal to '\377' (again, assuming 8-bit
characters; your mileage may vary).  This value will not be considered
equal to -1, but will be considered equal to 255.  Thus, a program
looking for EOF will never see it.

Declaring the argument to "foo" in the example given in previous
postings will not affect whether the program can properly detect
end-of-file, unless the result of "getc" is being passed to "foo" and
"foo" is testing whether it's an EOF.  If the result of "getc" is being
passed to any routine, that argument to that routine should be declared
as "int", since the result of "getc" is an "int".
> Your compiler is broken if it will not work as shown in the example,
> and if you define the arg as an int in the subroutine, you 'should'
> cast the char arg to an int in the calling sequence. The automatic
> promotion of char to integer only means that everything will probably
> work if you don't do the casting.

No, the automatic promotion of "char" to "int" is 100% equivalent to a
promotion using a cast.  If you declare:

	char c = 'w';

then

	foo(c);

and

	foo((int)c);

are equivalent - if this isn't ANSI C, or there is no function
prototype declarator in scope that declares the argument to "foo" as a
"char".  If this is ANSI C, and there is such a function prototype
*declarator* in scope, then the argument is *not* promoted
automatically.  Note the use of the word "declarator":

	void foo(ch)
	char ch;
	{
		...
	}

	int main(argc, argv)
	int argc;
	char **argv;
	{
		char c = 'w';

		foo(c);
		...
	}

will cause the promotion, but replacing the definition of "foo" with

	void foo(char ch)
	{
		...
	}

will not cause the promotion, because in the first example when "foo"
is used it is declared as "void foo(...)", while in the second example
it is declared as "void foo(char)".
--
Guy Harris
{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
guy@sun.com (or guy@sun.arpa)
karl@haddock (09/19/86)
chinet!rlk writes:
>In article <2233@gitpyr.UUCP> thomps@gitpyr.UUCP writes:
>>[text deleted --kwh]
>Balderdash!!!!!

This is not a very informative comment following a long quote.  Do you
mean that all of the quoted text is balderdash, or just the part you
attempt to refute in the next paragraph?  (I think everything Ken said
was correct.)

>K&R declare all chars as ints so that routines that return EOF will
>work correctly regardless of whether chars are signed or unsigned
>in your hardware/software combination.

That's a valid reason for declaring SOME variables "int" -- namely
those that will be used to hold the result of getchar() et al -- but
this has no relevance to the question at hand.  (And whether char is
signed or not is irrelevant to both questions.)

>Your compiler is broken if it will not work as shown in the example,
>and if you define the arg as an int in the subroutine, you 'should'
>cast the char arg to an int in the calling sequence. The automatic
>promotion of char to integer only means that everything will probably
>work if you don't do the casting.

What does "probably" mean in this context?  If an uncast char is
automatically promoted to int, and the argument is declared int, how
can it fail?  Are you saying that the cast should be present for
documentation/aesthetic reasons?

Karl W. Z. Heuer (ima!haddock!karl or karl@haddock.isc.com), The Walking Lint
karl@haddock (09/19/86)
umcp-cs!chris (Chris Torek) writes:
>C does not have a character type. ... [It] has what I call `expression
>types' and `storage types'.  There is a character storage type, but
>there is no character expression type.

In the conventional terminology: C has no rvalues of type char.

Karl W. Z. Heuer (ima!haddock!karl or karl@haddock.isc.com), The Walking Lint