gdykes@batcomputer.tn.cornell.edu (Gene Dykes) (09/13/88)
The Facts
---------
"Dimension", "Position", and "Cardinal" are typedef'd as unsigned ints
in lib/Xt/Intrinsics.h.

The Problem
-----------
The following code fragment is a very reasonable thing to find in a
geometry manager widget:

    Dimension new_width, old_width, adjustment;
    Cardinal number_of_children;
    ...
    new_width = old_width + adjustment / number_of_children;

I think it's reasonable to think of an "adjustment" to a Dimension as
being of type Dimension, too.  But if adjustment is, say, -100, this
code clearly won't work.

For the sake of argument, let's say that adjustment should definitely
be an "int".  Even then, the code above still croaks, and I think that
violates the principle of least surprise.  One needs a cast, as
follows:

    new_width = old_width + adjustment / (int) number_of_children;

I got tired of trying to find every place that needed a cast in my
widgets, so I threw up my hands and put the following in my header
file:

    #define Dimension int
    #define Position int
    #define Cardinal int

There are probably other circumstances like this, and I contend that
there will soon be lots of these bugs lurking around, just waiting for
the unexpected negative number to come along and blow the program away.

Isn't it worth limiting Dimensions to two billion or so, by specifying
them as "int" instead of "unsigned int"?  I would presume that
"Cardinal", etc., are typedef'd because the programmer is not supposed
to need to know what they really are, when in fact that ignorance
leads to big trouble.

(My apologies if this has been hashed out before...)
--
Gene Dykes, gdykes@tcgould.tn.cornell.edu
swick@ATHENA.MIT.EDU (Ralph R. Swick) (09/13/88)
Well, that's C for ya.  You're quite correct; there will soon be lots
of bugs lurking around, especially in your code.

Nowhere in the specification does it say that you may assume
Dimensions and Positions are of any particular size.  If you want to
write portable C code, you will have to use the (opaque) definitions
of Dimension, Position and Cardinal and do the appropriate casts to
get signed arithmetic where you want it.  It is quite possible that
the spec should give a larger warning to C programmers.
gdykes@batcomputer.tn.cornell.edu (Gene Dykes) (09/13/88)
In article <8809122025.AA00876@LYRE.MIT.EDU> swick@ATHENA.MIT.EDU
(Ralph R. Swick) writes:

Well, I hate to have to take on one of the heavyweights, but...

> Nowhere in the specification does it say that you may assume Dimensions
> and Positions are of any particular size.

Size of the variable is not an issue here -- only signed vs. unsigned.

> If you want to write portable C code, you will have to use the (opaque)
> definitions of Dimension, Position and Cardinal and do the appropriate
> casts to get signed arithmetic where you want it.  It is quite possible
> that the spec should give a larger warning to C programmers.

I think you missed the point.  The point is that having Dimension,
et al., typedef'd as unsigned ints has no redeeming virtue whatsoever,
and in fact can cause the unsuspecting programmer a lot of grief.
They should be typedef'd as ints, and I suppose the spec should say
that the programmer can expect them to be usable in expressions
containing signed arithmetic.

Consider: the things which have been declared with Dimension,
Position, and Cardinal can reasonably be expected to appear in
expressions with signed arithmetic.  Furthermore, a variable which
holds the difference between two Dimensions ought to be declared a
Dimension as well, and that difference can itself be negative.

The only possible rationale I can think of for declaring something
unsigned int rather than int is that the variables are never expected
to be used in conjunction with signed arithmetic.  And if one of those
values got so big that it needed the extra bit of precision, signed
arithmetic wouldn't work anyway, casts or no casts.

Well, there is another possible rationale: "Let's see how much havoc
we can wreak by typedef'ing something contrary to its expected
use..." :-)

Naturally, I prefer to think it was just a small oversight which can
be set straight with a few seconds' editing before R3 is shipped out.
I don't see how this could cause any backwards compatibility problems
(except that the meticulous programmers will have unnecessarily
cluttered up their code with a lot of extraneous casts).
--
Gene Dykes, gdykes@tcgould.tn.cornell.edu
jim@EXPO.LCS.MIT.EDU (Jim Fulton) (09/13/88)
> The point is that having Dimension, et.al., typedef'd as unsigned ints
> has no redeeming virtue whatsoever

Yes they do.  They are unsigned because things that are Dimensions are
cardinal in nature.  And they are frequently passed to routines that
expect unsigned values.  If you are doing something that generates
"negative" dimensions, you'll need to use a different data type (note
that Positions are signed).

> Well, there is another possible rationale: "Let's see how much havoc we
> can wreak by typedef'ing something contrary to its expected use..." :-)

Its expected use is to be an unsigned quantity.

> Naturally, I prefer to think it was just a small oversight which can be set
> straight with just a few seconds editing before R3 is shipped out.

It wasn't an oversight.  In any case, it's up to Ralph.

Jim Fulton
MIT X Consortium
hania@WSL.DEC.COM (09/14/88)
>> The point is that having Dimension, et.al., typedef'd as unsigned ints
>> has no redeeming virtue whatsoever

> Yes they do.  They are unsigned because things that are Dimensions are
> cardinal in nature.

And herein lies the problem: many people think of the C type
"unsigned" as "cardinal".  This is a mistake.  There is no "cardinal"
type in C.  The best way to think of "unsigned" is as a bit field; the
only examples of the use of "unsigned" that I can find in Kernighan
and Ritchie use it in that capacity.  In addition, no type-checking is
done on "unsigned" values to make sure that they are positive, as one
would expect for "cardinal" types.  These are all arguments that
"unsigned" should never be used in C to mean "cardinal"; such use is,
at best, misleading.

I believe that at least some of the toolkit implementers believe that
it was a mistake to have things like Dimension typedef'd as unsigned.
The best the rest of us can do is agree.

Hania
garya@stan.com (Gary Aitken) (09/15/88)
>> Isn't it worth limiting Dimensions to two billion or so, by specifying them
>> as "int" instead of "unsigned int"?  I would presume that "Cardinal", etc.,
>> are typedef'd because the programmer is not supposed to need to know
>> what they really are, when in fact that ignorance leads to big trouble.

I agree.  We ran into the same problem when implementing a C++ set of
widgets.  The widget sizing procedures can check for valid sizes to
screen out errors due to negative numbers.
RWS@ZERMATT.LCS.MIT.EDU (Robert Scheifler) (09/27/88)
In the Intrinsics implementation to be shipped by MIT in R3, Position
is "short" and Dimension is "unsigned short".  The types were changed
to shorts to conserve space (widgets are relatively expensive).  To
give protocol-equivalent value ranges on 16-bit architectures,
Dimension cannot be "short".

Programmers should not depend on whether Position and Dimension are 16
or 32 or 64 bits, but they should know that Position is signed and
Dimension is unsigned.  The use of unsigneds as cardinals is probably
"the wrong thing" (you don't have to argue too loudly; I know how many
times it's bitten us in the server), but it is consistent with the
Xlib interface, and the space savings in using shorts is useful.