[comp.lang.c] int32 et al.

rtm@christmas.UUCP (Richard Minner) (01/15/91)

I've gathered from this discussion (and others) that it is unlikely
that long will ever be implemented to be larger than int, unless int
is less than 32 bits (in a `quality' implementation?).  Is this so?
If it is, then just using long when you need at least 32 bits should
present no problems.  If it is not so, then using long could possibly
be `wasteful' (of space and time) if your code lived long enough to
be ported to, say, an environment with 32-bit ints and 64-bit longs.
As I said, I'm not too concerned, but could someone confirm my
suspicions?

Based on the above assumption about longs, I more or less go by the
following:

Requirements:                   Use:
 1 <= bits <=  8, save space    char
 9 <= bits <= 16, save space    short
 1 <= bits <= 16, save time     int
17 <= bits <= 32                long
and then maybe (not ANSI-C)
33 <= bits <= 64                long long?
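
In header form, the table comes out to something like this (just a
sketch; the names are invented, and the width assumptions would need
checking on each machine):

	typedef char  small8;	/*  1..8  bits, save space */
	typedef short small16;	/*  9..16 bits, save space */
	typedef int   fast16;	/*  1..16 bits, save time  */
	typedef long  wide32;	/* 17..32 bits             */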

Is that reasonable?

-- 
Richard Minner  rtm@island.COM  {uunet,sun,well}!island!rtm
Island Graphics Corporation  Sacramento, CA  (916) 736-1323

gwyn@smoke.brl.mil (Doug Gwyn) (01/18/91)

In article <26@christmas.UUCP> rtm@island.COM (Richard Minner) writes:
>I've gathered from this discussion (and others) that it is unlikely
>that long will ever be implemented to be larger than int, unless int
>is less than 32 bits (in a `quality' implementation?).  Is this so?

No, I would disagree.  As file systems get ever larger, pressure to
directly implement the (type "long") file offsets with more than 31-
bit range will increase.  Thus, even if the architecture encourages
32-bit integer representation, "long" could well be implemented as
e.g. a 64-bit quantity, simply to reduce hassles for customers of
such systems.

At present there is a lot of nonportable code that discourages some
vendors (I know of one for sure) from implementing C with
sizeof(long)!=sizeof(int) or even sizeof(int)!=4.  However, one would
hope that programmers would eventually learn better than to write code
that unnecessarily depends on such things.

>... using long could possibly be `wasteful' (of space and time) if
>your code lived long enough to be ported to, say, an environment with
>32-bit ints and 64-bit longs.

But the effort needed to design and write applications that try to
accommodate "optimal" choices for such throw-away data types as counters
probably exceeds any savings that would be gained thereby, in most cases.

tp@mccall.com (Terry Poot) (01/19/91)

In article <867@TALOS.UUCP>, jerry@TALOS.UUCP (Jerry Gitomer) writes:
>:Requirements:                   Use:
>: 1 <= bits <=  8, save space    char
>: 9 <= bits <= 16, save space    short
>: 1 <= bits <= 16, save time     int
>:17 <= bits <= 32                long

That's the way I do it. Note, however, that if you are dealing with
signed numbers, you MUST specify signed char, since the machine may
implement a char as unsigned. 
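
For instance (two declarations that differ on such machines):

	char        c  = -1;	/* may end up as 255 where plain char is unsigned */
	signed char sc = -1;	/* always holds -1 */
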
--
Terry Poot <tp@mccall.com>                The McCall Pattern Company
(uucp: ...!rutgers!ksuvax1!mccall!tp)     615 McCall Road
(800)255-2762, in KS (913)776-4041        Manhattan, KS 66502, USA

bruce@seismo.gps.caltech.edu (Bruce Worden) (01/19/91)

In article <14889@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
>In article <26@christmas.UUCP> rtm@island.COM (Richard Minner) writes:
>>I've gathered from this discussion (and others) that it is unlikely
>>that long will ever be implemented to be larger than int, unless int
>>is less than 32 bits (in a `quality' implementation?).  Is this so?
>
>No, I would disagree.  As file systems get ever larger, pressure to
>directly implement the (type "long") file offsets with more than 31-
>bit range will increase.  Thus, even if the architecture encourages
>32-bit integer representation, "long" could well be implemented as
>e.g. a 64-bit quantity, simply to reduce hassles for customers of
>such systems.

Mr. (Dr.?) Gwyn makes an excellent point here.  Disks of > 1Gbyte are 
common and cheap now.  I am working on a project where I had to implement 
a simple file system with coarse-grained disk striping over several such 
disks.  Single files could exceed 4Gbytes, so we resorted to specifying 
offsets and sizes in terms of logical blocks.  This method was not much of
a problem for this application, but in other situations it would be much 
less suitable or desirable.  There is no question that >32 bit longs are 
on the way, if for no other reason than support of big disks/file systems.
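
Very roughly, the idea was this (not our actual code; the block size
is an invented stand-in for the striping unit):

	#define BLK_SIZE 8192L		/* hypothetical logical block size */

	struct fpos {			/* block-based file position */
		long block;		/* logical block number */
		long off;		/* byte offset within the block */
	};

Two 32-bit longs together address far more than 4Gbytes, at the price
of doing the block arithmetic by hand.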
--------------------------------------------------------------------------
C. Bruce Worden                            bruce@seismo.gps.caltech.edu
252-21 Seismological Laboratory, Caltech, Pasadena, CA 91125

benson@odi.com (Benson I. Margulies) (01/20/91)

I can't remember the last time I was inclined, even briefly, to
disagree with Doug Gwyn. But here I go.

Some of us use structs to lay out our persistent (that is, disk-resident)
storage. The size of the items never changes, as we move from platform
to platform. If we used int for a 32-bit int, we are nailed on the
PCs. If we use long, C++ compilers tend to moan piteously about
passing longs to int parameters, even when they are the same size. The
AIX ANSI C compiler does the same. So we have a typedef which we set
to int in some places, and long in others. If someone ever does turn
up with 64-bit longs, we will pat each other on the back and save
a lot of work.
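
Schematically, the header amounts to this (our real one is messier,
and the config macro here is invented):

	#ifdef SIXTEEN_BIT_INT		/* hypothetical config switch */
	typedef long int32;		/* PCs: 16-bit int, 32-bit long */
	#else
	typedef int int32;		/* suns etc.: 32-bit int */
	#endif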


-- 
Benson I. Margulies

gwyn@smoke.brl.mil (Doug Gwyn) (01/20/91)

In article <1991Jan19.185101.27554@odi.com> benson@odi.com (Benson I. Margulies) writes:
>Some of us use structs to lay out our persistent (that is, disk-resident)
>storage. The size of the items never changes, as we move from platform
>to platform. If we used int for a 32-bit int, we are nailed on the
>PCs. If we use long, C++ compilers tend to moan piteously about
>passing longs to int parameters, even when they are the same size. The
>AIX ANSI C compiler does the same. So we have a typedef which we set
>to int in some places, and long in others. If someone ever does turn
>up with 64-bit longs, we will pat each other on the back and save
>a lot of work.

But you didn't address the problems I pointed out, for example the
complete lack of ANY type whose size is precisely 32 bits in some
implementations.

I also don't understand the type mismatch problem.  Certainly you
should make all types match properly, no matter what choice you
have made for the "int32"s.  It is not just C++ that should complain.

henry@zoo.toronto.edu (Henry Spencer) (01/21/91)

In article <26@christmas.UUCP> rtm@island.COM (Richard Minner) writes:
>I've gathered from this discussion (and others) that it is unlikely
>that long will ever be implemented to be larger than int, unless int
>is less than 32 bits (in a `quality' implementation?).  Is this so?

No.  There are already implementations which do this, although the folks
using them report considerable trouble porting sloppy code, and some
ended up changing their decisions about representation as a result.
-- 
If the Space Shuttle was the answer,   | Henry Spencer at U of Toronto Zoology
what was the question?                 |  henry@zoo.toronto.edu   utzoo!henry

benson@odi.com (Benson I. Margulies) (01/21/91)

In article <14905@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
>In article <1991Jan19.185101.27554@odi.com> benson@odi.com (Benson I. Margulies) writes:
>>Some of us use structs to lay out our persistent (that is, disk-resident)
>>storage. The size of the items never changes, as we move from platform
>>to platform. If we used int for a 32-bit int, we are nailed on the
>>PCs. If we use long, C++ compilers tend to moan piteously about
>>passing longs to int parameters, even when they are the same size. The
>>AIX ANSI C compiler does the same. So we have a typedef which we set
>>to int in some places, and long in others. If someone ever does turn
>>up with 64-bit longs, we will pat each other on the back and save
>>a lot of work.
>
>But you didn't address the problems I pointed out, for example the
>>complete lack of ANY type whose size is precisely 32 bits in some
>implementations.

Well, if we ever hit such a beast, there's always

typedef char [4] int32;

and a lot of unpleasant accessor macros.  However, we have a real
problem today with 16- and 32-bit int machines, which the obvious
typedefs solve. I don't disagree that weirder machines will pose
other problems.

>
>I also don't understand the type mismatch problem.  Certainly you
>should make all types match properly, no matter what choice you
>have made for the "int32"s.  It is not just C++ that should complain.

The system include files contain

extern int blahblah (int, char *);

I don't control that declaration. If I call blahblah with a long,
the compiler bleats a warning.

for 

extern int quux (int *);

if I pass a long * I get an error, not just a warning.
So I can't just use long all the time unless I type in all my
own system function prototypes.

-- 
Benson I. Margulies

gwyn@smoke.brl.mil (Doug Gwyn) (01/22/91)

In article <1991Jan21.135216.23447@odi.com> benson@odi.com (Benson I. Margulies) writes:
>In article <14905@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
>>But you didn't address the problems I pointed out, for example the
>>complete lack of ANY type whose size is precisely 32 bits in some
>>implementations.
>Well, if we ever hit such a beast, there's always
>typedef char [4] int32;

You mean typedef char int32[4]; however, that is much worse than
simply using long, because array types don't behave the same as
integral types and all sorts of havoc is likely to ensue.
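
For instance, given

	typedef char int32[4];

none of the ordinary integer operations work:

	int32 a, b;
	a = b;		/* illegal: arrays cannot be assigned */
	a == b;		/* compares pointers, not stored values */
	int32 f(void);	/* illegal: a function cannot return an array */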

>The system include files contain
>extern int blahblah (int, char *);
>I don't control that declaration. If I call blahblah with a long,
>the compiler bleats a warning.

As well it should!  Assuming that there is no useful information in
the high-order part of the long, you should simply cast it to int
when passing it to the blahblah() function; if there IS significant
information, then the blahblah() function is inappropriate anyway.

>for 
>extern int quux (int *);
>if I pass a long * I get an error, not just a warning.

Again, if sizeof(int)==sizeof(long) you can simply use a cast.
Otherwise, you need to write a bit of extra code, but that is
unavoidable.
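
That is, something along these lines (illustration only):

	extern int blahblah(int, char *);
	extern int quux(int *);

	void f(long l)
	{
		blahblah((int)l, "x");	/* fine when the value fits in an int */
		quux((int *)&l);	/* defensible only when
					   sizeof(int)==sizeof(long) */
	}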

karl@ima.isc.com (Karl Heuer) (01/22/91)

In article <1991Jan21.135216.23447@odi.com> benson@odi.com (Benson I. Margulies) writes:
>>>If we used int for a 32-bit int, we are nailed on the PCs.  If we use
>>>long, C++ compilers tend to moan piteously about [type clash]
>
>The system include files contain
>	extern int blahblah (int, char *);
>I don't control that declaration. If I call blahblah with a long,
>the compiler bleats a warning.

How does int32 help, then?  You said you define int32 as long on some machines
(PCs), so passing an int32 to this function is just wrong.  Looks to me as
though you need to cast% it to int, regardless of whether you're using int32
or long.

>for 
>	extern int quux (int *);
>if I pass a long * I get an error, not just a warning.
>So I can't just use long all the time unless I type in all my
>own system function prototypes.

And here, it's worse.  If you fake it with a cast or by writing a fake
prototype, you're likely to get the wrong answer, on machines where int and
long have different sizes.  Again, I see no advantage of int32 over long.

Karl W. Z. Heuer (karl@ima.isc.com or uunet!ima!karl), The Walking Lint
________
% Assuming you already know it fits in an int.

rtm@christmas.UUCP (Richard Minner) (01/22/91)

In article <14889@smoke.brl.mil> gwyn@smoke.brl.mil (Doug Gwyn) writes:
>In article <26@christmas.UUCP> rtm@island.COM (Richard Minner) writes:
>>I've gathered that it is unlikely that long will ever be larger than int,
>>unless int is less than 32 bits...
>
>No, I would disagree.  As file systems get ever larger, pressure to
>directly implement the (type "long") file offsets with more than 31-
>bit range will increase.  Thus, even if the architecture encourages
>32-bit integer representation, "long" could well be implemented as
>e.g. a 64-bit quantity, simply to reduce hassles for customers of
>such systems.
>
Hmmm. That's what I generally thought at first, but then got the
idea that `long long' would likely become the de facto standard
way to handle `longer than int, longer than 32-bit' longs.  (Hey,
why not looong, or longer and longest (for 128-bits)?)

Given your reasonable reasoning, I may reconsider using `int32'.
(int8 and int16 (and int17, int29 etc.) still seem unnecessary,
unless one did a lot of work on a machine with 18-bit shorts and
really needed the extra 2 bits, but...)

>>... using long could possibly be `wasteful' (of space and time)
>>[in] an environment with 32-bit ints and 64-bit longs.
>
>But the effort needed to design and write applications that try to
>accommodate "optimal" choices for such throw-away data types as counters
>probably exceeds any savings that would be gained thereby, in most cases.

Come again?  I usually appreciate your terseness, but you lost me here.
My code may be unusual (graphics, mostly rasters at present), but I
have a lot of code that would be hurt if longs were more than 32-bits
and ints weren't.  If a simple int32 def in one config file could
help, I'd rather use that than a bunch of independent typedefs all doing
the same thing.  Please elaborate if I missed something.
-- 
Richard Minner  rtm@island.COM  {uunet,sun,well}!island!rtm
Island Graphics Corporation  Sacramento, CA  (916) 736-1323

benson@odi.com (Benson I. Margulies) (01/23/91)

In article <1991Jan22.023844.29849@dirtydog.ima.isc.com> karl@ima.isc.com (Karl Heuer) writes:
>In article <1991Jan21.135216.23447@odi.com> benson@odi.com (Benson I. Margulies) writes:
>>>>If we used int for a 32-bit int, we are nailed on the PCs.  If we use
>>>>long, C++ compilers tend to moan piteously about [type clash]
>>
>>The system include files contain
>>	extern int blahblah (int, char *);
>>I don't control that declaration. If I call blahblah with a long,
>>the compiler bleats a warning.
>
>How does int32 help, then?  You said you define int32 as long on some machines
>(PCs), so passing an int32 to this function is just wrong.  Looks to me as
>though you need to cast% it to int, regardless of whether you're using int32
>or long.
>
>>for 
>>	extern int quux (int *);
>>if I pass a long * I get an error, not just a warning.
>>So I can't just use long all the time unless I type in all my
>>own system function prototypes.
>
>And here, it's worse.  If you fake it with a cast or by writing a fake
>prototype, you're likely to get the wrong answer, on machines where int and
>long have different sizes.  Again, I see no advantage of int32 over long.
>

I'm concerned, at the instant, with precisely three machines:

machine		int 		long		size_t		int32
---------------------------------------------------------------------
sun et al.	32		32		int		int
PC		16		32		long		long
RS/6000		32		32		unsigned long	long

This works a lot better than using long. 

1) when I need to specify the layout for data that is stored on disk
or transported across the net, and I want 32 bits, just like in a TCP
packet, I say "int32." 

2) when I go to pass a value to a system routine that takes an "int"
on the sun, I don't get whining and complaining about passing a long
to an int.

If I said "int" case (1) would break on the PC. If I said "long" I'd get
warnings on the sun. And some things are really just integers, they
don't have any abstract nature at all.

If I have to deal with 36 or 24 or whatever, I'll have a harder
problem to hack. No question. 
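
For case (1), a disk record looks something like this (a fabricated
example, with made-up field names):

	struct disk_rec {
		int32 magic;	/* 32 bits on disk, on every platform */
		int32 offset;
		int32 length;
	};

and for case (2), passing an int32 to a sun routine declared with int
parameters goes through without a murmur.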


-- 
Benson I. Margulies

fangchin@portia.Stanford.EDU (Chin Fang) (01/23/91)

In article <1991Jan22.175900.24941@odi.com> benson@odi.com (Benson I. Margulies) writes:
>In article <1991Jan22.023844.29849@dirtydog.ima.isc.com> karl@ima.isc.com (Karl Heuer) writes:
>>In article <1991Jan21.135216.23447@odi.com> benson@odi.com (Benson I. Margulies) writes:
[stuff deleted]..
>
>
>I'm concerned, at the instant, with precisely three machines:
>
>machine		int 		long		size_t		int32
>---------------------------------------------------------------------
>sun et al.	32		32		int		int
>PC		16		32		long		long
>RS/6000		32		32		unsigned long	long
>
I hope my opinion is not a fussy one.  I often see people mention PC like
the example above in this newsgroup.  I run UNIX System V/386 on my i386
box (a PC?) and ints are definitely 32 bits there too.  Please don't confuse
the constraints MSDOS imposes on the i386 (and the superb i486) with what
those chips can do under a different OS!  Unless you are stuck with an
8086/80286, the above example is incorrect, and even UNIX can't magically
transform an awful chip into a better one.  Please say: under MSDOS, int is
16 bits.  Programming is a precise exercise; shouldn't we be more precise
in what we mean?
 
Regards,
 
Chin Fang
Mechanical Engineering Department
Stanford University
fangchin@portia.stanford.edu

ps. you want proof?  here are a few lines from my /usr/include/limits.h
 
#define INT_MAX     2147483647   /* max decimal value of an "int" */
#define INT_MIN     -2147483648  /* min decimal value of an "int" */
#define LONG_MAX    2147483647L  /* max decimal value of a "long" */
......

datangua@watmath.waterloo.edu (David Tanguay) (01/23/91)

In article <1991Jan21.135216.23447@odi.com> benson@odi.com (Benson I. Margulies) writes:
|>But you didn't address the problems I pointed out, for example the
|>complete lack of ANY type whose size is precisely 32 bits in some
|>implementations.
|Well, if we ever hit such a beast, there's always
|typedef char [4] int32;

Which still isn't (necessarily) 32 bits. There is no guarantee that a char
is exactly 8 bits (36 bit machines use 9 bit chars, and I wouldn't be
surprised to see a 16 bit char on a 16 bit word machine like the
Honeywell DPS-6).
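
An ANSI compiler at least lets you check the assumption up front:

	#include <limits.h>
	#if CHAR_BIT != 8
	#error "this code assumes 8-bit chars"
	#endif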
-- 
David Tanguay            Software Development Group, University of Waterloo

merce@iguana.uucp (Jim Mercer) (01/24/91)

In article <1991Jan23.120327.17759@watmath.waterloo.edu> datangua@watmath.waterloo.edu (David Tanguay) writes:
>In article <1991Jan21.135216.23447@odi.com> benson@odi.com (Benson I. Margulies) writes:
>|>But you didn't address the problems I pointed out, for example the
>|>complete lack of ANY type whose size is precisely 32 bits in some
>|>implementations.
>|Well, if we ever hit such a beast, there's always
>|typedef char [4] int32;
>
>Which still isn't (necessarily) 32 bits. There is no guarantee that a char
>is exactly 8 bits (36 bit machines use 9 bit chars, and I wouldn't be
>surprised to see a 16 bit char on a 16 bit word machine like the
>Honeywell DPS-6).

[ i'm just jumping into this thread, so please forgive me if this was already
  said ]

could the problem be resolved as such:

[defs.h] (or some similar file)

/* define the type which is 8 bits on your system */

#define BITS8	char
/* #define BITS8	int */
/* #define BITS8	some_other_type */


[sample.c]

...

BITS8	bits8;
BITS8	bits16[2];	/* two 8-bit units = 16 bits */
BITS8	bits32[4];	/* four 8-bit units = 32 bits */

this could also be done with typedefs i guess.  (i don't use them much)
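
the typedef version would be just

	typedef char BITS8;	/* or whatever type is 8 bits on your system */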

this also assumes your system has an 8 bit type.

-- 
[ Jim Mercer   work: jim@lsuc.on.ca  home: merce@iguana.uucp  +1 519 570-3467 ]
[                "Clickity-Click, Barba-Trick" - The Barbapapas               ]

diamond@jit345.swstokyo.dec.com (Norman Diamond) (01/25/91)

In article <1991Jan24.031542.7790@iguana.uucp> merce@iguana.uucp (Jim Mercer) writes:

>[ i'm just jumping into this thread, so please forgive me if this was already
>  said ]

It hasn't been said (I think).  But it still can't be forgiven.

>could the problem be resolved as such:
>/* define the type which is 8 bits on your system */
>#define BITS8	char
>/* #define BITS8	int */
>/* #define BITS8	some_other_type */

No.

>this also assumes your system has an 8 bit type.

Exactly.  And in C, there are no types smaller than char.  Inside a
structure, bitfields can be smaller than a char, but that doesn't
help this problem.
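
That is, you can write

	struct tiny {
		unsigned flag : 1;	/* one bit wide */
		unsigned code : 3;	/* three bits wide */
	};

but you cannot take a bitfield's address or declare an array of
bitfields, so they are no help in constructing a general n-bit type.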
--
Norman Diamond       diamond@tkov50.enet.dec.com
If this were the company's opinion, I wouldn't be allowed to post it.