graham@sce.carleton.ca (Doug Graham) (06/06/90)
I encountered a program which does something like:

	unsigned char c = 0x80;
	unsigned long l = c << 8;

On a machine with 16-bit ints and 32-bit longs, what should the value of
"l" be in ANSI C?  Microsoft C and Zortech C think it is 0x8000, which
makes sense to me; Turbo C thinks it is 0xffff8000, which is what K&R2
would seem to indicate is the correct answer.

When "c" is promoted to integer before being used in the expression
"c << 8", is it converted to unsigned or signed int?  I think K&R2 (A6.1)
says signed, because all possible values of an unsigned char can be
represented in a signed int.  Thus the signed int 0x80 would be shifted
left to give the signed int 0x8000 (a negative value when ints are 16
bits), which is then sign-extended to the signed long 0xffff8000 before
being assigned to "l".

	Doug.
karl@haddock.ima.isc.com (Karl Heuer) (06/11/90)
In article <866@sce.carleton.ca> graham@sce.carleton.ca (Doug Graham) writes:
>	unsigned char c = 0x80;
>	unsigned long l = c << 8;
>On a machine with 16 bit integers, and 32 bit longs...

The correct (ANSI) behavior is to produce a (presumably unintended) result
of 0xffff8000: under ANSI's value-preserving promotion rules the unsigned
char is promoted to signed int, so with 16-bit ints the shifted result
0x8000 is a negative int, which is sign-extended when converted to
unsigned long.  The polite behavior is to also produce a warning that a
questionably signed value has been used in a context where the signedness
is significant.  The appropriate fix is to use an explicit cast, so that
the code will produce 0x8000 under either the value-preserving model or
the unsigned-preserving model.

Karl W. Z. Heuer (karl@ima.ima.isc.com or harvard!ima!karl), The Walking Lint