[net.micro.amiga] LATTICE BUG

dillon@CORY (Matt Dillon) (10/02/86)

	Here is an obscure Lattice bug (V3.03, no suffix).  I'm
	probably the only one who has run into it.


CODE:

main()
{
   unsigned short x, xx;
   unsigned short y = 0xFFFF;
   unsigned short z = 2;

   x = (y + z) / 16;
   xx= ((y + z) / 16);
   printf ("x = %ld %ld\n", x, xx);
}


EXPECTED RESULT: (remember, y and z are supposed to be expanded to ints first)

   4096 4096

ACTUAL RESULT:

   1 1


SUSPECTED CAUSE:
	It seems to be related to the fact that I'm assigning TO a short (x, xx)
	rather than an int.

BACKGROUND:
	This came up while I was programming some 3-D graphics routines.  It
	was a particularly tricky line which failed miserably when the
	compiler failed to follow the standard C definition.

	It's very obscure, but should be fixed.

					-Matt

----------------------------
OMD assembly:



LATTICE OBJECT MODULE DISASSEMBLER V2.00

Amiga Object File Loader V1.00
68000 Instruction Set

EXTERNAL DEFINITIONS

_main 0000-00

SECTION 00 "text" 00000050 BYTES
0000 4E56FFF8                   LINK      A6,FFF8
0004 48E72000                   MOVEM.L   D2,-(A7)
0008 70FF                       MOVEQ     #FF,D0
000A 7202                       MOVEQ     #02,D1
000C 3D40FFFA                   MOVE.W    D0,FFFA(A6)
0010 D041                       ADD.W     D1,D0
0012 243C00040000               MOVE.L    #00040000,D2		<----
0018 E468                       LSR.W     D2,D0			<----
001A 3D40FFFC                   MOVE.W    D0,FFFC(A6)		should be move
001E 3D40FFFE                   MOVE.W    D0,FFFE(A6)		imm #4 and 
0022 02800000FFFF               ANDI.L    #0000FFFF,D0		LSR.L 
0028 7400                       MOVEQ     #00,D2
002A 342EFFFC                   MOVE.W    FFFC(A6),D2
002E 2F02                       MOVE.L    D2,-(A7)
0030 2F00                       MOVE.L    D0,-(A7)
0032 4879 00000000-01           PEA       01.00000000
0038 3D41FFF8                   MOVE.W    D1,FFF8(A6)
003C 4EB9 00000000-XX           JSR       _printf
0042 4FEF000C                   LEA       000C(A7),A7
0046 4CDF0004                   MOVEM.L   (A7)+,D2
004A 4E5E                       UNLK      A6
004C 4E75                       RTS

SECTION 01 "data" 00000010 BYTES
0000 78 20 3D 20 25 6C 64 20 25 6C 64 0A 00 00 00 00 x = %ld %ld.....

SECTION 02 "udata" 00000000 BYTES


					-Matt

mark@ece-csc.UUCP (Mark Lanzo) (10/08/86)

In article <8610020736.AA09001@cory.Berkeley.EDU>
 dillon@CORY (Matt Dillon) writes:
==>
==>	Here is an obscure Lattice bug (V3.03, no suffix).  I'm
==>	probably the only one who has run into it.
==>
==>
==>CODE:
==>
==>main()
==>{
==>   unsigned short x, xx;
==>   unsigned short y = 0xFFFF;
==>   unsigned short z = 2;
==>
==>   x = (y + z) / 16;
==>   xx= ((y + z) / 16);
==>   printf ("x = %ld %ld\n", x, xx);
==>}
==>
==>
==>EXPECTED RESULT: (remember, y and z are supposed to be expanded to ints first)
==>
==>   4096 4096

Actually, I wonder about this.  Shouldn't the expected result be zero?

The partial sum (y+z) should be done completely in ushort, giving
      0xFFFF + 0x0002  = 0x0001 (retaining only the lower 16 bits)

so that the quotient (1/16) [done in integer math] should be zero.
I don't see any reason why the code generated for "x=..." and "xx=..."
should differ.

Am I missing something here?

==>
==>ACTUAL RESULT:
==>
==>   1 1

Of course, by my reasoning, the actual result is also wrong, which would
still indicate that you were correct about a compiler bug.


If I've said something blatantly stupid here, please don't flame
too hard :-)

   --- Mark ---

dillon@CORY.BERKELEY.EDU (Matt Dillon) (10/11/86)

>Actually, I wonder about this.  Shouldn't the expected result be zero?
>
>The partial sum (y+z) should be done completely in ushort, giving
>      0xFFFF + 0x0002  = 0x0001 (retaining only the lower 16 bits)
>
>so that the quotient (1/16) [done in integer math] should be zero.
>I don't see any reason why the code generated for "x=..." and "xx=..."
>should differ.
>
>Am I missing something here?

	All integral operands, barring floating expressions, of arithmetic
operations are supposed to be extended to INT if they are <= INT.  On Lattice,
this is 32 bits.

						-Matt
	

dale@amiga.UUCP (Dale Luck) (10/16/86)

In article <8610111659.AA11477@cory.Berkeley.EDU> dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
>>Actually, I wonder about this.  Shouldn't the expected result be zero?
>>
>>The partial sum (y+z) should be done completely in ushort, giving
>>      0xFFFF + 0x0002  = 0x0001 (retaining only the lower 16 bits)
>>
>>so that the quotient (1/16) [done in integer math] should be zero.
>>I don't see any reason why the code generated for "x=..." and "xx=..."
>>should differ.
>>
>>Am I missing something here?
>
>	All integral operands, barring floating expressions, of arithmetic
>operations are supposed to be extended to INT if they are <= INT.  On Lattice,
>this is 32 bits.
>
>						-Matt
>	
However, this is pretty poor coding practice.  If you expected the results
to overflow 16 bits, you should have made allowances for this.
It's those kinds of problems that give compiler writers a severe headache
when trying to do some intelligent optimizing code generation.
Dale Luck

dillon@CORY.BERKELEY.EDU (Matt Dillon) (10/16/86)

>>
>>	All integral operands, barring floating expressions, of arithmetic
>>operations are supposed to be extended to INT if they are <= INT.  On Lattice,
>>this is 32 bits.
>>
>>						-Matt
>>	
>However, this is pretty poor coding practice.  If you expected the results
>to overflow 16 bits, you should have made allowances for this.
>It's those kinds of problems that give compiler writers a severe headache
>when trying to do some intelligent optimizing code generation.
>Dale Luck

	#@$#@$ it is NOT poor coding practice to assume your compiler will
follow the ANSI (or K&R, for that matter) standard.  Frankly, I was doing
some pretty mean things with the code that blew up (it was the core of some
3-D graphics functions), and it took me hours to track down the problem.

	"If you expected the results to overflow 16 bits"... I expected the
results to be 32 bits, so obviously I could not have expected them to
overflow 16 bits.  It should be fixed.

					-Matt

(Lattice 3.03 documented unsigned short bug)

main()
{
   unsigned short a, b, c;

   a = 0xFFFF;
   b = 1;
   c = (a + b) / 2;
   printf ("result = %ld expect %ld\n", (long)c, 0x10000L / 2);
}

a.out
result = 0 expect 32768