gnu@hoptoad.uucp (John Gilmore) (12/19/86)
Ken Ballou seems to have presented good and valid reasons, derived from
H&S, that (unsigned)-1 must have all the bits on.  So far nobody has
refuted him.  I think he's right -- the cast to unsigned *must*, by the
C language definition, convert whatever bit pattern -1 has into all
ones.  This is no worse than casting -1 to float causing a change in
its bit pattern -- and it's for the same reason.

I think it's funny that Ben Mejia's message says that a word containing
all ones on a ones-complement machine is "an illegal representation"
but then goes on to tell us that such an illegal value is easily and
portably generated with ~0.  The value does not know how it was
generated, Ben; why is it illegal to get all ones with a cast and a -,
but legal to do it with ~?  And what legislature is passing laws about
bit patterns?  Invalid values I can see, but are they going to arrest
me for contraband bit patterns?
-- 
John Gilmore  {sun,ptsfa,lll-crg,ihnp4}!hoptoad!gnu  jgilmore@lll-crg.arpa
Call +1 800 854 7179 or +1 714 540 9870 and order X3.159-198x (ANSI C)
for $65.  Then spend two weeks reading it and weeping.  THEN send in
formal comments!
rjk@mrstve.UUCP (Richard Kuhns) (12/26/86)
In article <1527@hoptoad.uucp> gnu@hoptoad.uucp (John Gilmore) writes:
>Ken Ballou seems to have presented good and valid reasons derived from H&S
>that (unsigned)-1 must have all the bits on.  So far nobody has refuted
>him.  I think he's right -- the cast to unsigned *must*, by the C language
>definition, convert whatever bit pattern -1 has into all ones.  This
>is no worse than casting -1 to float causing a change in its bit pattern --
>and it's for the same reason.

I don't understand.  On a ones-complement machine, -1 is represented by
sizeof(whatever) - 1 ones followed by a zero.  How does casting this
value to unsigned get rid of the zero?  To wit:

	00000001 (binary) =  1 (decimal)
	11111110 (binary) = -1 (decimal, ones complement, signed)

If the second value above is cast to unsigned, we end up with 254
(decimal).  What does this have to do with a bit pattern of all ones?
-- 
Rich Kuhns		{ihnp4, decvax, etc...}!pur-ee!pur-phy!mrstve!rjk
rjk@mrstve.UUCP (Richard Kuhns) (12/26/86)
In article <595@mrstve.UUCP> I wrote:
>sizeof(whatever) - 1 ones followed by a zero.  How does casting this
 ^
 +--- I meant (bit)sizeof(whatever).  Sorry...
-- 
Rich Kuhns		{ihnp4, decvax, etc...}!pur-ee!pur-phy!mrstve!rjk
jsdy@hadron.UUCP (Joseph S. D. Yao) (01/02/87)
In article <595@mrstve.UUCP> rjk@mrstve.UUCP (Richard Kuhns) writes:
>I don't understand.  On a ones-complement machine, -1 is represented by
>11111110 (binary) = -1 (decimal, ones complement, signed)
>If the ... value above is cast to unsigned, we end up with 254 (decimal).

Casts convert.  "The value is the least unsigned integer congruent to
the signed integer (modulo 2^wordsize)."  -- C REF 6.5, Unsigned.
Casts do not necessarily maintain the same bit pattern.
-- 
Joe Yao				hadron!jsdy@seismo.{CSS.GOV,ARPA,UUCP}
				jsdy@hadron.COM (not yet domainised)
jsdy@hadron.UUCP (Joseph S. D. Yao) (01/02/87)
Forgot to mention why "usually" all ones.  On a ternary machine
(what???) which some "solder-crazed EE" (was it?) might construct,
2^n - 1 will of course be some mix of 0's, 1's, and 2's (or -1's?).
Of course, lots of other things would break, too: divides by shifting,
and even the meaning (to most folk) of shifting.

Of course, that will never happen.  We will always have our binary,
transistorised, 16- and 18-bit von Neumann minicomputers.  ;-}?
-- 
Joe Yao				hadron!jsdy@seismo.{CSS.GOV,ARPA,UUCP}
				jsdy@hadron.COM (not yet domainised)
mouse@mcgill-vision.UUCP (01/12/87)
In article <408@hadron.UUCP>, jsdy@hadron.UUCP (Joseph S. D. Yao) writes:
> On a ternary machine (what???) which some "solder-crazed EE" (was
> it?) might construct,
> Of course, lots of other things would break, too.  Divides by
> shifting, and even the meaning (to most folk) of shifting.

Could you even have C (as defined by K&R/H&S, or, alternatively, ANSI)
on such a machine?  Would you read "bit" to mean one of these ternary
digits (I won't abbreviate in a family newsgroup... :-) in things like
shifting and bitfields?  Does ANSI explicitly require binary
arithmetic?  I seem to recall reading a posting that talked about it
requiring "straight binary" representation for "unsigned int".

> Of course, that will never happen.  We will always have our binary,
> transistorised, 16- and 18-bit Neumann minicomputers.

Until they start building their own successors -- after a while, we
won't know any more.  After a while longer, we may not be able to
understand any more.  Oh, sorry, this isn't net.sf-lovers :-?
-- 
der Mouse
USA: {ihnp4,decvax,akgua,utzoo,etc}!utcsri!mcgill-vision!mouse
     think!mosart!mcgill-vision!mouse
Europe: mcvax!decvax!utcsri!mcgill-vision!mouse
ARPAnet: think!mosart!mcgill-vision!mouse@harvard.harvard.edu
gwyn@brl-smoke.ARPA (Doug Gwyn ) (01/20/87)
In article <599@mcgill-vision.UUCP> mouse@mcgill-vision.UUCP (der Mouse) writes:
>Does ANSI explicitly require binary arithmetic?

For purposes of signed<->unsigned conversion of parameters and bitwise
operations, X3J11 decreed semantics "as if" a binary numeration system
were used.  The underlying architecture can, however, be quite
different, so long as the implementation properly mimics the abstract
machine.
msb@sq.UUCP (01/26/87)
> >Does ANSI explicitly require binary arithmetic?

Doug Gwyn responds:
> For purposes of signed<->unsigned conversion of parameters
> and bitwise operations, X3J11 decreed semantics "as if" a
> binary numeration system were used.  The underlying architecture
> can, however, be quite different, so long as the implementation
> properly mimics the abstract machine.

However, section 3.1.2.5, page 21, lines 38-39 of the Draft says:

# The values of integral types shall be interpreted in a pure
# binary numeration system.

I see no "as if" here, and cannot make out that one is implied.

Mark Brader
gwyn@brl-smoke.UUCP (01/28/87)
In article <1987Jan26.132135.9624@sq.uucp> msb@sq.UUCP (Mark Brader) writes:
># The values of integral types shall be interpreted in a pure
># binary numeration system.
>
>I see no "as if" here, and cannot make out that one is implied.

I think we're quibbling over inessentials here.  Does use of
"interpreted in" imply "internally represented in" or "as if internally
represented in"?  I think the latter, but the important point is that
the underlying "abstract machine" that executes the C program must
behave like a binary one.