cepek@spanky.mgi.com (06/06/90)
In article <16784@haddock.ima.isc.com>, karl@haddock.ima.isc.com (Karl Heuer) writes:
> The very existence of a typedef named `int8' is questionable.
> (Yes, I know the reason.)

Since we both seem to understand the reasons why "int8" would come into
existence, perhaps we could discuss why it shouldn't.

- - - - - - - - - - - - - - - - - - - -

I'm not trying to start a war here.  I've only been using C for 4 years
(programming for 12).  I'm all for learning new methods and discussing
alternative approaches; in fact I enjoy it.

+------------------------------------------------------------------------+
| Michael Cepek                                              "Engage."   |
| Programmer/Analyst                                                     |
| Internet: Cepek@MGI.COM                                                |
| Management Graphics, Inc.             Voice:    +1 612/851-6112        |
| 1401 East 79th Street                 Operator: +1 612/854-1220        |
| Minneapolis, MN 55425 USA             Fax:      +1 612/854-6913        |
+------------------------------------------------------------------------+
karl@haddock.ima.isc.com (Karl Heuer) (06/11/90)
In article <27.266bddfd@spanky.mgi.com> cepek@spanky.mgi.com writes:
>[What's wrong with a typedef `int8'?]

People who've used PL/I can probably give first-hand stories about why
explicit sizes are a bad idea.  If you were on a machine with 36-bit ints,
18-bit shorts, and 9-bit bytes, what typedefs would you have?  If you
define only `int9', `int18', and `int36', then none of your code using
`int8' will compile.  So I presume you actually maintain a file of
bitsize typedefs `intNN' for all reasonable values of NN.

But then why did you specify `int8' for the boolean type instead of the
more logical `int1'?  (Probable answer: since C requires that `char' be
at least 8 bits anyway, there's no need to distinguish anything smaller.
This isn't entirely true, since an implementation on a bit-addressable
machine might provide a `short char' extension, and one should be able
to take advantage of that.)

Karl W. Z. Heuer (karl@ima.ima.isc.com or harvard!ima!karl), The Walking Lint