gwyn@Brl-Vld.ARPA (VLD/VMB) (03/03/85)
ANSI C will guarantee minimum sizes for integer types. A short or an int will have 16 bits minimum and a long will have 32 bits minimum. For other sizes, round up. For more than 32 bits, find another way.
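[Editor's illustration, not part of Gwyn's posting: in practice, "round up" means declaring the smallest type whose ANSI-guaranteed minimum covers the range you need. A minimal sketch; the typedef names are invented for this example:]

	/* "round up": pick the smallest type whose guaranteed
	 * minimum width covers the range you actually need */
	typedef int  count12;	/* needs 12 bits; int guarantees >= 16 */
	typedef long count20;	/* needs 20 bits; only long guarantees >= 32 */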
mwm@ucbtopaz.CC.Berkeley.ARPA (03/04/85)
In article <8871@brl-tgr.ARPA> gwyn@Brl-Vld.ARPA (VLD/VMB) writes:
>ANSI C will guarantee minimum sizes for integer types.
>A short or an int will have 16 bits minimum and a long
>will have 32 bits minimum. For other sizes, round up.
>For more than 32 bits, find another way.

You mean, if I've got a program that needs values around 5 billion, and
a compiler/machine that has 60-bit longs, I shouldn't use C????
Likewise, if I've got a machine that has 18-bit shorts and (expensive!)
36-bit longs, and I need values around 100,000, I should use the longs?

Nuts to that. I'll use "uint60" and "int17", and give you a copy of the
include file that defines them. If you refuse to fix the include file,
the code will die. But that's your problem, not mine.

	<mike

Here's the first piece of that include file, just to show you how easy
this is to do:

/*
 * Size definitions for a generic byte-oriented machine. The compiler
 * needs to understand "signed/unsigned byte" and "unsigned short/long"
 * types.
 */

/* signed ints; each size needs an extra bit for the sign */
typedef signed char	int1;
typedef signed char	int2;
typedef signed char	int3;
typedef signed char	int4;
typedef signed char	int5;
typedef signed char	int6;
typedef signed char	int7;
typedef short		int8;
typedef short		int9;
typedef short		int10;
typedef short		int11;
typedef short		int12;
typedef short		int13;
typedef short		int14;
typedef short		int15;
typedef long		int16;
typedef long		int17;
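[Editor's sketch, not part of mike's posting: for contrast, here is how the same include file might read on the 36-bit machine with 18-bit shorts that he mentions above. The sizes are hypothetical:]

	/*
	 * Size definitions for a hypothetical 36-bit machine with
	 * 18-bit shorts (a PDP-10-class architecture, say).
	 */
	typedef short	int17;	/* 18-bit short: 17 value bits + sign */
	typedef long	int35;	/* 36-bit long:  35 value bits + sign */

[On such a machine an int17 costs half the storage of a long, which is exactly the win mike is after; the portable source stays identical and only this file changes per machine.]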
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (03/06/85)
You failed to show the #define for "uint60" on e.g. a VAX. How are you
going to port your code to a 32-bit machine?

By implementing your very-long-integer data types in a way that does
not exceed the guaranteed minimum limits of any ANSI C conforming
implementation, you would significantly improve the portability of your
code. There are well-known techniques for implementing
extended-precision arithmetic (e.g. Knuth Vol. 2); having such routines
in the standard library could be useful, but forcing the compiler to
handle this issue directly is counter to the general flavor of C.

If you really want non-portable code (e.g., for reasons of speed), you
can still do that too, but there is no reason the C language should be
made considerably more elaborate just so you can code in terms of
"uint60".
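[Editor's illustration of the Knuth-style technique Gwyn alludes to, not part of his posting. A minimal sketch assuming nothing beyond ANSI's 16-bit minimum for unsigned int: a 60-bit quantity is stored as four 15-bit "limbs", least significant first. The names and layout are invented for this example:]

	/* add two 60-bit numbers, each held as four 15-bit limbs;
	 * every partial sum is at most 0xFFFF, so it fits in the
	 * 16 bits ANSI guarantees for unsigned int */
	void
	add60(a, b, sum)
	unsigned a[], b[], sum[];
	{
		int i;
		unsigned carry = 0;

		for (i = 0; i < 4; i++) {
			unsigned t = a[i] + b[i] + carry;
			sum[i] = t & 0x7FFF;	/* keep the low 15 bits */
			carry = t >> 15;	/* propagate the 16th bit */
		}
	}

[The result is taken mod 2^60; a routine like this ports unchanged to any conforming implementation, which is Gwyn's point.]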
mwm@ucbtopaz.CC.Berkeley.ARPA (03/09/85)
In article <8988@brl-tgr.ARPA> gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) writes:
>You failed to show the #define for "uint60" on e.g. a VAX.
>How are you going to port your code to a 32-bit machine?

With more difficulty than porting it to a 64-bit machine. The code will
die at compile time, leaving a pointer to the variables whose types are
too large for the machine. At that point, you can either restrict the
problem range (if possible) or add support for the long integers
(perhaps via mp). This is far better than producing spurious answers,
which is what not using such a system would do. Of course, you also get
the benefit of using the smallest type on machines with funny
byte/short sizes, *without* having to worry about portability.

>By implementing your very-long-integer data types in a way
>that does not exceed the guaranteed minimum limits of any
>ANSI C conforming implementation, you would significantly
>improve the portability of your code.

First, 60 bits is *not* a "very-long-integer" data type. Second,
portability is the problem of the porter, not the original author. I
*will not* make my job significantly harder for the sake of
portability. I do cast things (more than the compiler insists on,
actually), cast functions to void, etc. But not using the capabilities
of the machine/OS/compiler to their fullest is silly. For instance, do
you likewise recommend that I make all external variable names six
characters, monocase, because that's the "guaranteed minimum"?

>There are well-known
>techniques for implementing extended-precision arithmetic
>(e.g. Knuth Vol. 2); having such routines in the standard
>library could be useful but forcing the compiler to handle
>this issue directly is counter to the general flavor of C.

I'm well aware of those techniques, having made use of Knuth in
implementing some of them. Unix (at least post-v7 unices) has just such
a library, /usr/lib/libmp.a. Or is this another one of the nice v7
tools that didn't make it to System X.y.z.w?

I don't think the compiler should have to handle the problem. I don't
think the person writing the code should have to worry about whether
things will fit in a short/long/int on every machine the code might be
run on, either. The include file full of typedefs is a reasonable
solution for C, and the best I've been able to come up with. If you've
got a better solution, I'd like to hear it.

>If you really want non-portable code (e.g., for reasons of
>speed), you can still do that too, but there is no reason
>the C language should be made considerably more elaborate
>just so you can code in terms of "uint60".

Adding an include file with ~100 typedefs is making the C language
"considerably more elaborate"????? I'm not proposing any changes to C
*at all*, just adding some typedefs to the set that programs can expect
to find, like the "jmp_buf" typedef in <setjmp.h>.

The point of this wasn't to make code with long integers more readable,
but to make it possible to write code which expects ints to have some
minimum number of bits that is both efficient and portable. If long
ints were the only problem, I'd be writing in a language that supports
integers properly, as opposed to C.

	<mike
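[Editor's sketch of the mp(3x) library mike cites, not part of his posting. The calls below follow the v7 manual as best as memory serves; check your own mp(3x) page, since the interface varied between systems:]

	#include <mp.h>

	main()
	{
		MINT *a = itom(10000);	/* small int -> multiple precision */
		MINT *b = itom(10000);
		MINT *c = itom(0);

		mult(a, b, c);	/* c = 10^8, already past 16-bit range */
		mout(c);	/* print c in decimal */
		return 0;
	}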