frisk@rhi.hi.is (Fridrik Skulason) (06/15/90)
I recently noticed that a program I had written behaved differently when
compiled with different compilers. The code below illustrates the problem:
#include <stdio.h>

int main(void)
{
    unsigned char a;
    unsigned long i;

    a = 255;
    i = 256 * a;            /* "a" is promoted to int: overflows if int is 16 bits */
    printf("%ld\n", i);
    return 0;
}
Now, if I compile this with a compiler that has 32-bit ints, there is no
problem: the value 65280 is printed.
The problem arises with compilers that have 16-bit ints. Sometimes I get
65280 and sometimes I get -256. I had thought that when one operand ("a" in
this case) was unsigned, the other operand (256) and the result would also
be treated as unsigned, so the result of "256 * a" should always be 65280.
However, this is not always the case. Why not? What am I missing?
-frisk
--
Fridrik Skulason University of Iceland |
Technical Editor of the Virus Bulletin (UK) | Reserved for future expansion
E-Mail: frisk@rhi.hi.is Fax: 354-1-28801 |
karl@haddock.ima.isc.com (Karl Heuer) (06/17/90)
In article <1790@krafla.rhi.hi.is> frisk@rhi.hi.is (Fridrik Skulason) writes:
>[On a machine with 8-bit char, 16-bit int, and 32-bit long,
>    unsigned char a = 255;
>    unsigned long i = 256 * a;
>produces 65280 on some compilers, -256 on others.]  I had thought that when
>one operand ("a" in this case) was unsigned, ...

It's an unsigned char.  Since C has no char-typed rvalues, it promotes to
unsigned int (in the unsigned-preserving model) or signed int (in the
value-preserving model).  An ANSI compiler must use the value-preserving
model, in which the computation overflows but probably generates the value
-256.  (255 * 256 = 65280 = 0xFF00; read as a signed 16-bit int, that bit
pattern is -256.)  If the compiler is high-quality, it will also warn about
your using a questionably-signed expression in a signedness-sensitive
context.

See also <16836@haddock.ima.isc.com>, where we just finished discussing a
similar problem.

Karl W. Z. Heuer (karl@kelp.ima.isc.com or harvard!ima!karl), The Walking Lint
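For completeness, a portable fix follows from the promotion rules described
above: force one operand to unsigned long before the multiply, so the
arithmetic is done in (at least) 32 bits on every compiler. A minimal sketch,
changing only the multiplication and the printf format from the original
program:

#include <stdio.h>

int main(void)
{
    unsigned char a = 255;
    unsigned long i;

    /* Cast before multiplying: the usual arithmetic conversions then
     * widen 256 to unsigned long, so no 16-bit overflow can occur. */
    i = (unsigned long)a * 256;

    printf("%lu\n", i);     /* prints 65280 on 16- and 32-bit compilers */
    return 0;
}

Writing 256UL * a would work just as well; the point is that at least one
operand must already have type unsigned long before the multiplication takes
place.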