[comp.lang.c] Standard int sizes

dsill@NSWC-OAS.arpa (04/08/87)

kyle@xanth.cs.odu.edu (kyle jones) wrote:
>I would like to see the sizes of C integral types standardized.
>
>One proposal might be:
>
>	char	 8 bits
>	short	16 bits
>	int	32 bits
>	long	64 bits

I'd rather have:
	short    8 bits
	int	16 bits
	long	32 bits
and something like "xlong" for 64 bits.

-Dave Sill
 dsill@nswc-oas.arpa

neal@weitek.UUCP (Neal Bedard) (04/09/87)

In article <6759@brl-adm.ARPA> dsill@NSWC-OAS.arpa writes:
>kyle@xanth.cs.odu.edu (kyle jones) wrote:
>>I would like to see the sizes of C integral types standardized.
>>One proposal might be:
>>
>>	char	 8 bits
>>	short	16 bits
>>	int	32 bits
>>	long	64 bits
>
>I'd rather have:
>	short    8 bits
>	int	16 bits
>	long	32 bits
>and something like "xlong" for 64 bits.

K&R states that "int" typically indicates the "natural" size of the machine,
i.e., whatever the "integer datapath size" is.  This is usually the easiest
entity to generate addresses for, so "int" should probably stay as it is.
However, it would be nice to eliminate the ambiguity in the other sizes.

*My* half-baked size proposal would be:

	char		8 bits
	short		16 bits
	long		32 bits	= float
	octaword	64 bits = double
	hexaword	128 bits = extend (extended floating-point formats)

This setup would accommodate pretty much everybody whose "char" size is 8 bits.
I have no quarrel over the actual names really, as long as they're distinct
from each other and mean the same thing from compiler to compiler - which is
what everyone wants, no?

Now, *my* question is, what do you do for machines that address things that
aren't 2**n bits wide?  This is where the *real* fun begins...

-Neal

devine@vianet.UUCP (04/10/87)

> kyle@xanth.cs.odu.edu (kyle jones) wrote:
> I would like to see the sizes of C integral types standardized.
> One proposal might be:
> 	char	 8 bits
> 	short	16 bits
> 	int	32 bits
> 	long	64 bits

  If you need to know exact sizes of types, you should define your
own types using the preprocessor or typedefs.  That is, if you need
a 16 bit arithmetic type, use:

If "machine has compiler that uses 16 bits for 'short'"
#define int16  short

If "machine has compiler that uses 16 bits for 'int'"
#define int16  int
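
In practice such a header might look something like this (just a sketch; the
machine symbols "vax" and "pdp11" and the sizes shown are assumptions about
particular compilers, not a definitive list):

/* int_sizes.h -- pick real types for the fixed-size names.  */
/* Each compiler of interest gets its own section.           */
#ifdef vax                      /* 32-bit int, 16-bit short  */
#define int16  short
#define int32  int
#endif

#ifdef pdp11                    /* 16-bit int, 32-bit long   */
#define int16  int
#define int32  long
#endif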

aeusesef@csun.UUCP (04/11/87)

In article <230@ems.UUCP> mark@ems.UUCP (Mark H. Colburn) writes:
>In article <6759@brl-adm.ARPA> dsill@NSWC-OAS.arpa writes:
>>kyle@xanth.cs.odu.edu (kyle jones) wrote:
>>>I would like to see the sizes of C integral types standardized.
[various proposals for C sizes]
>How about those machines out there with
>64 bit words?  Or to make it really interesting, toss a Cyber into the
>fray with its 60 bit word.
>Mark H. Colburn    UUCP: ihnp4!meccts!ems!mark, mark@ems.uucp      
Actually, in thinking about porting an un-Small C (we got just about everything
except for typing the changed code in...), we decided that it would be easier
to make chars, ints, longs, shorts, and floats all the same length
(doubles could be longer).  We always thought how amusing it would be to
have 2**59 possible characters...

 -----

 Sean Eric Fagan
 Office of Computing and Communications Resources (OCCR)
 Suite 2600
 5670 Wilshire Boulevard
 Los Angeles, CA 90036
 (213) 852-5086
 AGTLSEF@CALSTATE.BITNET
{litvax, rdlvax, psivax, hplabs, ihnp4}!csun!aeusesef
--------------------------------------------------------------------------------
My employers do not endorse my   | "I may be slow,  but I'm not  stupid.
opinions,  and, at least in my   |  I can count up to five *real* good."
preference  of Unix,  heartily   |      The Great Skeeve
disagree.                        |      (Robert Asprin)

flaps@utcsri.UUCP (04/11/87)

In article <6759@brl-adm.ARPA> dsill@NSWC-OAS.arpa writes:
>kyle@xanth.cs.odu.edu (kyle jones) wrote:
>>I would like to see the sizes of C integral types standardized.
>>
>>One proposal might be:
>>
>>	char	 8 bits
>>	short	16 bits
>>	int	32 bits
>>	long	64 bits
>
>I'd rather have:
>	short    8 bits
>	int	16 bits
>	long	32 bits
>and something like "xlong" for 64 bits.

How about:
	char	7 bits
	short	12 bits
	int	15.6 bits
	long	5*PI*e bits


...  For Pete's sake.

-- 

Alan J Rosenthal

flaps@csri.toronto.edu, {seismo!utai or utzoo}!utcsri!flaps,
flaps@toronto on csnet, flaps at utorgpu on bitnet.

"Probably the best operating system in the world is the [operating system]
made for the PDP-11 by Bell Laboratories." - Ted Nelson, October 1977

metro@asi.UUCP (04/12/87)

I am not actually suggesting a change to the C language, so please do not
flame the following.

Wouldn't it be interesting to be able to specify the size of an integer
variable?  Perhaps with a method like the one used in IBM's FORTRAN 66
compiler.  It was actually an IBM extension to the language, I believe.

	int*4	value1;			/* 4 byte integer */
	int*2	value2;			/* 2 byte integer */
	int*1	value3;			/* 1 byte integer */

The only integer sizes which were implemented on the IBM were those which
made sense for the instruction set (i.e., word, half-word, and byte).

It would seem that if the above definitions were used on a machine which
did not support that particular size, a syntax/semantics error would be
appropriate.

Just some more fuel for the fire.

-- 
Metro T. Sauper, Jr.                              Assessment Systems, Inc.
Director, Remote Systems Development              210 South Fourth Street
(215) 592-8900                 ..!asi!metro       Philadelphia, PA 19106

edw@ius2.cs.cmu.edu (Eddie Wyatt) (04/12/87)

In article <5744@brl-smoke.ARPA>, gwyn@brl-smoke.ARPA (Doug Gwyn ) writes:
> In article <170@vianet.UUCP> devine@vianet.UUCP (Bob Devine) writes:
> > ...  That is, if you need a 16 bit arithmetic type, use:
> >
> > If "machine has compiler that uses 16 bits for 'short'"
> > #define int16  short
> >
> > If "machine has compiler that uses 16 bits for 'int'"
> > #define int16  int
> 
> If you simply need at least 16 bits in a signed integer data type,
> use "short".  That way whoever reads the code doesn't have to learn
> what your invention "int16" means.

   For one, the mnemonic int16 is clear: a data object of at
least 16 bits.

   Second, and most important: shorts are NOT guaranteed to be 16 bits.
Someone correct me if I am wrong, but you are only guaranteed:

	sizeof(short) <= sizeof(int) <= sizeof(long).

And there are C implementations that use a size for short other than 16
bits (2 bytes).
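
A quick way to see the point, one compiler at a time (a trivial sketch, not
part of anyone's proposal):

	#include <stdio.h>

	/* Print what this particular implementation actually uses. */
	int main()
	{
		printf("char  %d bytes\n", (int) sizeof(char));
		printf("short %d bytes\n", (int) sizeof(short));
		printf("int   %d bytes\n", (int) sizeof(int));
		printf("long  %d bytes\n", (int) sizeof(long));
		return 0;
	}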

-- 
					Eddie Wyatt

They say there are strangers, who threaten us
In our immigrants and infidels
They say there is strangeness, too dangerous
In our theatres and bookstore shelves
Those who know what's best for us-
Must rise and save us from ourselves

Quick to judge ... Quick to anger ... Slow to understand...
Ignorance and prejudice and fear [all] Walk hand in hand.
					- RUSH 

bzs@bu-cs.BU.EDU (Barry Shein) (04/13/87)

Posting-Front-End: GNU Emacs 18.41.4 of Mon Mar 23 1987 on bu-cs (berkeley-unix)

From: metro@asi.UUCP (Metro T. Sauper)
>Wouldn't it be interesting to be able to specify the size of an integer
>variable.  Perhaps a method like the one used in IBM's fortran 66 compiler.
>It was actually an IBM extension to the language i believe.
>
>	int*4	value1;			/* 4 byte integer */
>	int*2	value2;			/* 2 byte integer */
>	int*1	value3;			/* 1 byte integer */

Hey, why not go whole hog (pun intended) and go for the PL/I solution?

	declare foo fixed bin(31);
	declare goo fixed bin(15);
	declare moo packed decimal (17,3);

The numbers are BITS; you can't get finer-grained control than that on most
systems!

The problem with your suggestion is that it only works if the machine
you use has a reasonable notion of a byte (that is, *4 WHATs?).  A byte
is not standardized, and it's not at all obvious that an integral number
of them fits in a word (the PDP-10 would need int*4.5; the S1 uses 9-bit
bytes, as I remember).

The problem with my solution is that it leads to software anarchy.

The purpose of compilers is not necessarily to provide as many choices
as possible. That's the purpose of marketing departments.  Better to
try to provide some clean leadership rather than a zillion choices.

No, the issue is how to come up with abstracted constructs that can be
mapped reasonably portably to most, if not all, architectures.  Not more
rope to shoot yourself in the foot with (?!)

	-Barry Shein, Boston University

john@viper.UUCP (04/13/87)

In article <5744@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
 >In article <170@vianet.UUCP> devine@vianet.UUCP (Bob Devine) writes:
 >> ...  That is, if you need a 16 bit arithmetic type, use:
 >>
 >> If "machine has compiler that uses 16 bits for 'short'"
 >> #define int16  short
 >>
 >> If "machine has compiler that uses 16 bits for 'int'"
 >> #define int16  int
 >
 >If you simply need at least 16 bits in a signed integer data type,
 >use "short".  That way whoever reads the code doesn't have to learn
 >what your invention "int16" means.
 >

  Right, Doug...  Then I just have to "learn what your invention" short
means...  I've used 3 compilers where "short" == 8-bit signed...

I don't see any real porting problems with intXX.  Any programmer who
sees a header file full of lines like the ones Bob gave, and can't figure
out that int16 is a sixteen-bit integer, doesn't belong in front of a CRT.
On the other hand, ANY fixed-size assumption you try to make about "int,
short, long, or even char" can be proven wrong given enough compilers...

  I agree with Bob, even though using intXX -everywhere- is, to put it
mildly, a nuisance.  I'd probably write the program using int, char, etc. and
then go through it just before release to change everything (except maybe
char) to intXX form.  On the other hand, I'd -much- prefer having to change
one header file containing several intXX defines to having to
go through someone else's code (assumption-riddled buggy code at that) and try
to figure out how big each and every type was on the originating machine
by looking at the code.  (I've had to do the latter and it's more of a pain
than I -ever- want to go through again...)

--- 
John Stanley (john@viper.UUCP)
Software Consultant - DynaSoft Systems
UUCP: ...{amdahl,ihnp4,rutgers}!{meccts,dayton}!viper!john

rickc@pogo.UUCP (04/14/87)

In article <5744@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
>If you simply need at least 16 bits in a signed integer data type,
>use "short".  That way whoever reads the code doesn't have to learn
>what your invention "int16" means.

Maybe short should be at least 16 bits.  However, I have used compilers with
8 bit shorts.

>It is worth noting that exact size of a data type is seldom
>important, so long as it is "big enough". 

I agree.  The only time I need exactly 8, 16 or 32 bits is when I am
describing hardware.  And, that code will never be portable.
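
For what it's worth, the hardware case looks something like this (a made-up
device at a made-up address, just to show where exact widths matter; it also
assumes short and long happen to be 16 and 32 bits on this particular
machine):

	/* Register layout fixed by the hardware, not by the compiler. */
	struct uart_regs {
		unsigned short	status;		/* 16-bit status register  */
		unsigned short	command;	/* 16-bit command register */
		unsigned long	baud_divisor;	/* 32-bit divisor register */
	};

	/* Overlay the structure on the device's (made-up) bus address. */
	#define UART	((struct uart_regs *) 0x3F8000)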

stevesu@copper.UUCP (04/15/87)

Slavishly coding an entire program with things like int16 and
int32 does nothing to improve the overall portability,
readability, or efficiency of the program.  (In fact, all three
attributes can be significantly diminished.)  Wanton use of such
types is a perfect example of blindly doing something that
somebody you thought knew what he was talking about said to,
without really understanding what it is (or isn't) good for.

The vast majority of the time, what you really, really want is a
simple int.  A long would unnecessarily waste space, a char or
short would risk overflow or sign-extension problems, and any of
the alternatives could waste time.  (Remember that an int ought
to be the "natural" size for the machine, and presumably the
fastest and easiest to generate compact code for.)

If you do have a special requirement, either for range or
compactness, and you're going to go to the trouble of defining a
new type, don't just use something that's got the size in bits
wired into the name or something.  Code strewn with
undifferentiated int16's and int32's is just as hard to
understand as code littered with shorts and longs.

Say what you mean!  You probably don't need a type that can hold
a 32-bit value just because it can hold a 32-bit value.  You
probably have a more abstract quantity in mind, like "distance"
or "temperature" or "furlongs per fortnight."  Then you can say

	typedef long int Distance;		/* distance in millimeters */
	typedef int Temperature;		/* temperatures in degrees C */
	typedef char FurlongsPerFortnight;	/* velocity ad absurdium */

Not only have you guaranteed that the "Distance" type can hold
something over 2,000 miles, and that the type is easily
changeable on a machine where a long int somehow isn't
appropriate, but you have also made the code much easier to read
and verify (is this int a temperature or a distance?) and you
have additionally simplified any type modifications required by
changes in the program (as opposed to changes in the
implementation).

Suppose one day you decide you need a completely different type for
distances -- a floating point value, perhaps, or a structure
containing feet and inches.  If distances have their own typedef
name, this change is easy.  If all distances are "int32" (which
you thought was going to make the program "portable") you can't
change the distances without changing everything else that
happens to be an int32, or examining every int32 in the program
and trying to remember if it's a distance or not.
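
The change itself is then confined to one spot (a sketch; the struct layout
shown is only an example of what the new representation might be):

	/* Old representation:                                          */
	/* typedef long int Distance;	   distance in millimeters      */

	/* New representation: declarations that say "Distance" do not  */
	/* change; only the code doing arithmetic on distances does.    */
	typedef struct {
		int	feet;
		int	inches;
	} Distance;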

To consider a previously-posted example (so artificial it should
probably be ignored), if for some reason you really need a 16-bit,
signed value, say

	typedef long int funnyvalue;	/* 16 bits PLUS sign */

If that ends up being unacceptably inefficient on your
hypothetical Queer Machine for C, where longs are 36 bits and
slow, you can say

	#ifndef QMC
	typedef long int funnyvalue;	/* 16 bits PLUS sign */
	#else
	typedef int funnyvalue;		/* ints are 18 bits on QM/C */
	#endif

If what you want to do is stamp your foot and demand 16 (or 32 or
27 or whatever) bits, and let somebody else be totally
responsible for figuring out how to do it, then use PL/I or ADA.

I say again, though, that the number of cases where you even care
is small.  When I'm writing things like

	for(i = 1; i <= 12; i++)
		days += monthsize[i];

I declare i as an int.  Not a char, not an int16, not a
typedef MonthCounter.

                                           Steve Summit
                                           stevesu@copper.tek.com

mwm@eris.BERKELEY.EDU (Mike (My watch has windows) Meyer) (04/15/87)

In article <981@copper.TEK.COM> stevesu@copper.TEK.COM (Steve Summit) writes:
>Slavishly coding an entire program with things like int16 and
>int32 does nothing to improve the overall portability,
>readability, or efficiency of the program.

Quite correct. This just means that these creatures, like any other
creature, can be abused as well as used.

Most of the rest of your article, discussing abstract data types and
the like, is also quite correct.  But you missed a few points that make
such creatures nice to have.  For instance, you said:

>Say what you mean!  You probably don't need a type that can hold
>a 32-bit value just because it can hold a 32-bit value.  You
>probably have a more abstract quantity in mind, like "distance"
>or "temperature" or "furlongs per fortnight."  Then you can say
>
>	typedef long int Distance;		/* distance in millimeters */
>	typedef int Temperature;		/* temperatures in degrees C */
>	typedef char FurlongsPerFortnight;	/* velocity ad absurdium */

Very nice. And probably sufficient over 90% of the time.  But not
complete. To take an example from something I did this weekend, I had
(more accurately, should have had) the following:

/*
 * We're counting objects of size 8 or larger in something of size 16Meg.
 * This can be bigger than 64K, so use a long.
 */
typedef long	chunk_count ;

Whereas I'd really like to say:

/*
 * We're counting objects of size 2^3 or larger in something of size 2^24.
 * Minimum count is zero, maximum is 2^21, so use a uint21.
 */
typedef uint21	chunk_count ;

In this case, I have a hard upper bound, not some "a short's too small"
size.  If this might run on strange hardware (say a Harris/6), the
above could be a major win.

>To consider a previously-posted example (so artificial it should
>probably be ignored), if for some reason you really need a 16-bit,
>signed value, say
>
>	typedef long int funnyvalue;	/* 16 bits PLUS sign */
>
>If that ends up being unacceptably inefficient on your
>hypothetical Queer Machine for C, where longs are 36 bits and
>slow, you can say
>
>	#ifndef QMC
>	typedef long int funnyvalue;	/* 16 bits PLUS sign */
>	#else
>	typedef int funnyvalue;		/* ints are 18 bits on QM/C */
>	#endif


Ugh. Wouldn't it be a _lot_ nicer to say:

#include <int_sizes.h>

typedef int16	funny_value ;

>If what you want to do is stamp your foot and demand 16 (or 32 or
>27 or whatever) bits, and let somebody else be totally
>responsible for figuring out how to do it, then use PL/I or ADA.

No, I don't want to force someone else to figure out how to do it. I
want a nice, portable way of handling the cases where I know exactly
how big something needs to be. That's the idea behind the int_sizes.h
include; it gives me that.  Wanna see how easy that is to do?  Here's a
version that will be correct on all machines with 8 bit chars, 16 bit
shorts and 32 bit longs (of which there are lots...):

/* Signed ints */
typedef signed char	int1,  int2,  int3,  int4,  int5,  int6,  int7;
typedef short int	int8,  int9,  int10, int11, int12, int13, int14, int15;
typedef long int	int16, int17, int18, int19, int20, int21, int22, int23,
			int24, int25, int26, int27, int28, int29, int30, int31;
/* Unsigned ints */
typedef unsigned char	uint1,  uint2,  uint3,  uint4,  uint5,  uint6,  uint7,
			uint8;
typedef unsigned short	uint9,  uint10, uint11, uint12, uint13, uint14, uint15,
			uint16;
typedef unsigned long	uint17, uint18, uint19, uint20, uint21, uint22, uint23,
			uint24, uint25, uint26, uint27, uint28, uint29, uint30,
			uint31, uint32;

Gee, that was easy.  Why don't I do one for the QM/C (which uses two's
complement), just for grins?  We'll assume that the QM/C is word
addressed, so a char is also a short.

/* Signed ints */
typedef short int	int1,  int2,  int3,  int4,  int5,  int6,  int7,  int8,
			int9,  int10, int11, int12, int13, int14, int15, int16,
			int17;
typedef long int	int18, int19, int20, int21, int22, int23, int24, int25,
			int26, int27, int28, int29, int30, int31, int32, int33,
			int34, int35;
/* Unsigned ints */
typedef unsigned short	uint1,  uint2,  uint3,  uint4,  uint5,  uint6,  uint7,
			uint8,  uint9,  uint10, uint11, uint12, uint13, uint14,
			uint15, uint16, uint17, uint18;
typedef unsigned long	uint19, uint20, uint21, uint22, uint23, uint24, uint25,
			uint26, uint27, uint28, uint29, uint30, uint31, uint32,
			uint33, uint34, uint35, uint36;

Now, you may well ask (and it's well you do, as that gives me a chance
to point out another advantage :-), what happens if some person on the
QM/C decides they need a 33 bit signed quantity for some reason, and
so quite correctly codes:

/*
 * This type need this many bits because blah blah blah.
 */
typedef int33	my_funny_type ;

Then later, some poor, unsuspecting programmer comes along and has to
port this to something normal, like an 8086? Well, they'll drop it
into the compiler, which will blow up in their faces because there
_isn't_ a type int33. With luck, they'll type C-x `, GNU emacs will
dump them on the above typedef, and they can then read the comments
and decide how to handle it. Ah, but what if the programmer didn't
bother putting in a comment? Then you can chase down the
type"my_funny_value", and figure out what needs to be done.

On the other hand, if the person blithely coded "long" on the QM/C
because there wasn't an int_sizes.h, then the code compiles.  With
luck, it'll die in the test suite or at some other innocent point.  More
likely, there'll be no test suite, and somebody will start getting
ludicrous answers out of the program. Worse yet, they'll get wrong but
not ludicrous answers, and assume that the answers are right.

In summary: most of the time, you really don't need the ability to
specify sizes to the bit.  Every once in a while, you do.  An include
file that gives the smallest type for each size on the machine is
about 5 minutes' work with a good text editor.  It requires no changes
to the compiler. If used incorrectly, it won't hurt much (about the
most that can be said for any feature). If used correctly, it allows
portable code to be efficient on machines with odd-ball word sizes,
and allows the compiler to catch assumptions about word sizes that
deviate from the minimum.

	<mike
--
Here's a song about absolutely nothing.			Mike Meyer        
It's not about me, not about anyone else,		ucbvax!mwm        
Not about love, not about being young.			mwm@berkeley.edu  
Not about anything else, either.			mwm@ucbjade.BITNET

chris@mimsy.UUCP (04/15/87)

In article <981@copper.TEK.COM> stevesu@copper.TEK.COM (Steve Summit) writes:
>If you do have a special requirement, either for range or
>compactness, and you're going to go to the trouble of defining a
>new type, don't just use something that's got the size in bits
>wired into the name or something. ...

>Say what you mean!

But `what you mean' might be just `something that holds at least
32 bits'.

>You probably don't need a type that can hold a 32-bit value just
>because it can hold a 32-bit value.

---Unless, of course, you are dealing with a pre-defined file format
which makes extensive use of 8, 16, 24 and 32 bit values.

The key concept (with which I agree) is that you must *think about
the uses* for the types you define.  Are the types properly descriptive?

There is a problem with this approach, though.  It can lead to a
profusion of types, incomprehensible due to sheer numbers.  There
is a balance point between descriptive but basic types and exact
types; that balance depends on many things, including your own
sense of aesthetics.

Whoever said programming was not an art?	:-)
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
UUCP:	seismo!mimsy!chris	ARPA/CSNet:	chris@mimsy.umd.edu

shaun@buengc.UUCP (04/16/87)

It is clear that there is no standard length for int.  It is also good to
say how much range you need for a variable, because it helps the code be
self-documenting.  Studies of values used in programs show that most are very
small, with 0 and 1 predominating.  These small values can be given their own
special arithmetic instructions that take only one fetch (incr is the obvious
example).  Compilers could do a better job if they knew down to the byte what
size is needed, even though the ALU doesn't care.  The only portability
problem I had with my last large C program was differing interpretations
of int (VAX to IBM-PC).  -Shaun  (int4, int8, int_time_to_transubstantiate)

bzs@bu-cs.UUCP (04/16/87)

It seems clear to me that if there is to be any alternate approach to
declaring portable integer sizes, then Pascal (lord forgive me) had the
right approach.  Let the programmer declare a range and then let that
be resolved automatically in a machine-independent way:

	int foo range {-10000..10000};

And forget about how many bits/bytes are needed, since you're not sure
in advance what the right thing is anyhow.  This proposal involves
saying exactly what you mean.

Too much trouble to specify a range?  Not known in advance?  Oops,
you've got bugs (or you're always defaulting to the maximum.)

This could probably be handled adequately by a simple C
pre-pre-processor which picked up the ranges and replaced them with
the appropriate declarations, I could imagine:

	INT(-10000,10000) foo, bar;
	INT(0,255) baz;

as an easily parsed syntax, although not terribly aesthetic.  Perhaps
doing it as pseudo-typedefs would be superior:

#TYPEDEF int with range {-10000,10000} medium_int
#TYPEDEF int with range {0,255} small_int

	...

	medium_int foo, bar;
	small_int baz;

You can almost do this with the pre-processor, but it's hard to nest
#if's intelligently into #defines (maybe someone has an idea.)  The
important thing is looking at the range specs and deciding
intelligently which type is best to use.  Just defining a bunch of
small_int defines is not powerful enough (though I'd love to be convinced
otherwise; it would simplify this.)  The output would be normal
typedefs.
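
(How close "almost" gets: the fragment below approximates the two
pseudo-typedefs above with nothing but #if, assuming the compiler supplies
the ANSI <limits.h> constants.  The ranges have to be spelled out per type
rather than computed, which is exactly the clumsiness the pre-pre-processor
would hide.)

	#include <limits.h>

	/* medium_int must hold -10000..10000. */
	#if SHRT_MAX >= 10000
	typedef short medium_int;
	#else
	typedef long medium_int;
	#endif

	/* small_int must hold 0..255. */
	#if UCHAR_MAX >= 255
	typedef unsigned char small_int;
	#else
	typedef unsigned short small_int;
	#endif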

Note that this, I claim, solves the range problem in a portable way
without requiring any modification in the language definition, just
a utility (in the spirit of YACC or LEX.)

	-Barry Shein, Boston University

devine@vianet.UUCP (04/16/87)

In article <5744@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
>If you simply need at least 16 bits in a signed integer data type,
>use "short".  That way whoever reads the code doesn't have to learn
>what your invention "int16" means.

  That may work (though others have replied that 'short' is not even
guaranteed to be 16 bits) if I want "at least 16 bits".  I suggested the
use of "int16" for those cases where a programmer wants exactly 16 bits.

  A similar situation exists for 32 bits.  Someone could easily create
an 'int32' type by selecting either an int or a long if they wanted a
type that they could use portably across different compile environments.

  [Of course, this attempt at portability won't work if a machine's
architecture is incapable of supplying 16- and 32-bit entities.]

kyle@xanth.UUCP (04/16/87)

I agree completely that the standard cannot demand that integral types be an
EXACT size but it should demand that each integral type be AT LEAST a certain
size.  The sizes that I mentioned in my earlier article on this subject were
by no means intended to be definitive; they were simply the first numbers
that came to mind.

Using #define's like int16 to choose the right sized int type is just too
_compiler_ dependent.  Contending with machine dependencies is enough work,
without this added burden.  Setting a minimal size for each integral type
still looks best to me.

kyle@xanth.cs.odu.edu    (kyle jones @ old dominion university, norfolk, va)

twb@hoqax.UUCP (BEATTIE) (04/16/87)

In article <170@vianet.UUCP>, devine@vianet.UUCP writes:
>   If you need to know exact sizes of types, you should define your
> own types using the preprocessor or typedefs.  That is, if you need
> a 16 bit arithmetic type, use:
> 
> If "machine has compiler that uses 16 bits for 'short'"
> #define int16  short
> 
> If "machine has compiler that uses 16 bits for 'int'"
> #define int16  int

It is not a good idea to use #define where you mean typedef.
Think about the difference between:

typedef int *INT_PTR;
INT_PTR a, b;

and

#define INT_PTR int *
INT_PTR a, b;

The first defines a and b to be pointers to int ("int *a, *b").
The second defines a as pointer to int and b as int ("int *a, b").

Remember that #define is a string substitution and typedef defines a new
type.
Tom.

devine@vianet.UUCP (04/17/87)

In article <835@xanth.UUCP>, kyle@xanth.UUCP (kyle jones) writes:
> standard [...] should demand that each type be AT LEAST a certain size
> Using #define's like int16 to choose the right sized int type is just too
> _compiler_ dependent.  Contending with machine dependencies is enough work,
> without this added burden.  Setting a minimal size for each integral type
> still looks best to me.

  Since I kicked in the 'int16' posting, let's see if I can bring this
to a close.

  I agree that the supplied types should at the minimum give a programmer
some idea of the possible range they may hold.  The K&R rule of short <= int
<= long leaves too much up to the compiler writers (even though tradition
restrains wild interpretation).  So, yes, there should be consistent rules
for the size of types -- both minimums and maximums.

  As for the 'int16' suggestion: there are cases where one needs to have
exactly defined sizes.  A portable program for file manipulation or network
exchanges is much easier to write if a type is the same across machines.
(This only leaves the *small* problems of structure packing and alignment,
and byte-swapping....)
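
(The usual way around those last problems, sketched rather than prescribed:
never read the structure straight off the wire or the disk, but assemble each
field from bytes in an order the file format defines.  The routine below, with
an arbitrary name, assumes stdio and a format that stores the low byte first.)

	#include <stdio.h>

	/* Read a 16-bit unsigned value stored low byte first.     */
	/* Returns a long so the result is always non-negative.    */
	/* (EOF handling omitted to keep the sketch short.)        */
	long
	get_u16(fp)
	FILE *fp;
	{
		long lo, hi;

		lo = getc(fp) & 0xff;
		hi = getc(fp) & 0xff;
		return (hi << 8) | lo;
	}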

  My favorite for portably providing data type sizes is Pascal's ranges.
But, this is 'comp.lang.c'.

Bob Devine

jtr485@umich.UUCP (Johnathan Tainter) (04/19/87)

In article <764@hoqax.UUCP>, twb@hoqax.UUCP (BEATTIE) writes:
> > #define int16  int

> It is not a good idea to use #define where you mean typedef.
> Think about the difference between:

> typedef int *INT_PTR;
> INT_PTR a, b;
>--Tom.

Your objection is not really valid for the int16 case.
However, there is the fact that you will get more cryptic errors
if you ever try to use int16 as a variable or function name.

--j.a.tainter

flaps@utcsri.UUCP (04/21/87)

In article <764@hoqax.UUCP> twb@hoqax.UUCP (BEATTIE) writes:
:>> If "machine has compiler that uses 16 bits for 'short'"
:>> #define int16  short
:>> 
:>> If "machine has compiler that uses 16 bits for 'int'"
:>> #define int16  int
:>
:>It is not a good idea to use #define where you mean typedef.
:>Think about the difference between:
:>
:>typedef int *INT_PTR;
:>INT_PTR a, b;
:>
:>and
:>
:>#define INT_PTR int *
:>INT_PTR a, b;

IRRELEVANT!!!
So what that "#define INTPTR int *" doesn't work?  The original author never
claimed it did!  "#define int16 int" works fine!

In fact, 4.2bsd stdio.h contains "#define FILE struct _iobuf" (and so
do many other stdio.h's, I believe).

-- 

Alan J Rosenthal

flaps@csri.toronto.edu, {seismo!utai or utzoo}!utcsri!flaps,
flaps@toronto on csnet, flaps at utorgpu on bitnet.

"Probably the best operating system in the world is the [operating system]
made for the PDP-11 by Bell Laboratories." - Ted Nelson, October 1977

mouse@mcgill-vision.UUCP (05/07/87)

In article <4632@utcsri.UUCP>, flaps@utcsri.UUCP writes:
> So what that "#define INTPTR int *" doesn't work?  The original
> author never claimed it did!  "#define int16 int" works fine!

Until you want to shadow it:

typedef int int16;
foo()
{ typedef struct foo int16;
....
}

works, but

#define int16 int
foo()
{ typedef struct foo int16;
....
}

doesn't.

> In fact, 4.2bsd stdio.h contains "#define FILE struct _iobuf" (and so
> do many other stdio.h's, I believe).

I know.  I consider it a bug, or at best a misfeature.

					der Mouse

				(mouse@mcgill-vision.uucp)

tps@sdchem.UUCP (05/13/87)

In article <761@mcgill-vision.UUCP> mouse@mcgill-vision.UUCP (der Mouse) writes:

>In article <4632@utcsri.UUCP>, flaps@utcsri.UUCP writes:

>> So what that "#define INTPTR int *" doesn't work?  The original
>> author never claimed it did!  "#define int16 int" works fine!

>Until you want to shadow it:
>
>typedef int int16;
>foo()
>{ typedef struct foo int16;
>....
>}
>
>works,

It _should_ work, but it doesn't on BSD
systems.  Typedef identifiers can't be
redefined in a narrower scope.  K&R says
they can, but many, many systems say
they can't.

|| Tom Stockfisch, UCSD Chemistry	tps%chem@sdcsvax.ucsd.edu
					or  sdcsvax!sdchem!tps