[comp.lang.c] Style guides and portability

gwyn@smoke.brl.mil (Doug Gwyn) (01/12/91)

In article <1163@tredysvr.Tredydev.Unisys.COM> paul@tredysvr.Tredydev.Unisys.COM (Paul Siu) writes:
>While looking through Thomas Plum's style guide, I notice he mentions that one
>should set up a separate #define file for data types.  The file will contain
>data types such as ushort for unsigned 8-bit numbers, and your program will use
>only the data types defined in this file.  He justifies this by saying that
>normal C data types vary from machine to machine; an int, for example, can be
>8-bit or 16-bit depending on the machine.  The only time you should use int is
>for function return values.

No, any C compiler worth using (and certainly any that conforms to the
standard) will provide at least 16 bits for an int, at least 32 bits
for a long, and at least 8 bits for a char.  While there are uses for
user-defined primitive data types (for example, I use "bool" and
(generic object) "pointer" types), I don't think that int16, int32, etc.
are justifiable.
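
For example, definitions along these lines would do (the particular
choices below are only illustrative; the details are a matter of taste):

        typedef int bool;       /* holds just TRUE or FALSE */
        #define TRUE    1
        #define FALSE   0

        typedef void *pointer;  /* generic object pointer (char * pre-ANSI) */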

It is possible that you might have to deal with a C environment that
simply doesn't support "unsigned char", in which case you might need
to devise some kludgery to cope with the situation.  However, the only
basic type I know of that you are likely to encounter problems with is
"signed char".  There is no exact equivalent for this in many C
implementations.  The simplest way to deal with this is to never try
to write code that depends on "signed char"; it can always be avoided.
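
A typical kludge, if you must treat a char's value as unsigned on such a
system, is to mask instead of declaring the object unsigned -- something
like this (invented name, and it assumes 8-bit chars):

        #define UCHAR_VAL(c)    ((c) & 0xff)    /* value of c as an unsigned quantity */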

bevan@cs.man.ac.uk (Stephen J Bevan) (01/12/91)

> No, any C compiler worth using (and certainly any that conforms to the
> standard) will provide at least 16 bits for an int, at least 32 bits
> for a long, and at least 8 bits for a char.  While there are uses for
> user-defined primitive data types (for example, I use "bool" and
> (generic object) "pointer" types), I don't think that int16, int32, etc.
> are justifiable.

What about the cases where it is a requirement that a particular int
MUST be able to hold 32 bit numbers.  If you transfer this to a 16 bit
int system, your software is going to die horribly.

The only way I know around this is to define types like int32 and a
lot of macros/functions that go along with them.  For example,
int32plus, int32divide, ... etc.
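
Roughly, the idea is something like this (only a sketch; on a 16-bit-int
machine int32 might have to become a struct of two ints, at which point
the macros turn into real functions):

        typedef long int32;

        #define int32plus(a, b)     ((a) + (b))
        #define int32divide(a, b)   ((a) / (b))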

Does anybody have a better solution ?

Stephen J. Bevan		bevan@cs.man.ac.uk

dave@cs.arizona.edu (Dave P. Schaumann) (01/13/91)

In article <BEVAN.91Jan12120920@orca.cs.man.ac.uk> bevan@cs.man.ac.uk (Stephen J Bevan) writes:
|| No, any C compiler worth using (and certainly any that conforms to the
|| standard) will provide at least 16 bits for an int, at least 32 bits
|| for a long, and at least 8 bits for a char.  While there are uses for
|| user-defined primitive data types (for example, I use "bool" and
|| (generic object) "pointer" types), I don't think that int16, int32, etc.
|| are justifiable.
|
|What about the cases where it is a requirement that a particular int
|MUST be able to hold 32 bit numbers.  If you transfer this to a 16 bit
|int system, your software is going to die horribly.
|
|The only way I know around this is to define types like int32 and a
|lot of macros/functions that go along with them.  For example,
|int32plus, int32divide, ... etc.
|
|Does anybody have a better solution ?

  How about using something like:

#include <limits.h>
#include <stdio.h>      /* for fputs, stderr */
#include <stdlib.h>     /* for exit */

  [...]

  if( sizeof(int) * CHAR_BIT < 32 ) { /* have 32 bit ints? */
    fputs( "Must have at least 32 bit ints!\n", stderr ) ; exit(1) ;
    }

While this won't make the program work, it will certainly indicate to the
people porting the code that they need to use a more capacious integer type.

Also, I believe that the ANSI standard *requires* that longs be at least 32
bits.  Of course, you may run across a compiler that is less than 100%
compliant...
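
An equivalent compile-time check is also possible, since under ANSI C the
<limits.h> macros are usable in #if -- here's a sketch (non-ANSI
preprocessors won't have #error, of course):

#include <limits.h>

#if INT_MAX < 2147483647
#error "Need at least 32-bit ints here -- consider using long instead."
#endif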

|Stephen J. Bevan		bevan@cs.man.ac.uk


Dave Schaumann      | We've all got a mission in life, though we get into ruts;
dave@cs.arizona.edu | some are the cogs on the wheels, others just plain nuts.
						-Daffy Duck.

scjones@thor.UUCP (Larry Jones) (01/14/91)

In article <BEVAN.91Jan12120920@orca.cs.man.ac.uk>, bevan@cs.man.ac.uk (Stephen J Bevan) writes:
> What about the cases where it is a requirement that a particular int
> MUST be able to hold 32 bit numbers.  If you transfer this to a 16 bit
> int system, your software is going to die horribly.

If the variable is required to hold 32 bit numbers, it should be a long
(which is guaranteed to be large enough) rather than an int.  I completely
agree with Doug -- INT32 and friends are of no real value.
----
Larry Jones, SDRC, 2000 Eastman Dr., Milford, OH  45150-2789  513-576-2070
Domain: scjones@thor.UUCP  Path: uunet!sdrc!thor!scjones
It's going to be a long year. -- Calvin

scs@adam.mit.edu (Steve Summit) (01/14/91)

Doug Gwyn (I think) wrote:
> No, any C compiler worth using (and certainly any that conforms to the
> standard) will provide at least 16 bits for an int, at least 32 bits
> for a long, and at least 8 bits for a char.  While there are uses for
> user-defined primitive data types... I don't think that int16, int32, etc.
> are justifiable.

In article <BEVAN.91Jan12120920@orca.cs.man.ac.uk>, Stephen Bevan writes:
>What about the cases where it is a requirement that a particular int
>MUST be able to hold 32 bit numbers.  If you transfer this to a 16 bit
>int system, your software is going to die horribly.
>The only way I know around this is to define types like int32 and a
>lot of macros/functions that go along with them.

Perhaps I am missing something ridiculously subtle, but where I
come from, a "requirement that a particular int MUST be able to
hold 32 bit numbers" is (assuming "int" means "int only") an
oxymoron at best.  Standard C provides the type "long int" which
fulfills the requirement precisely.  Why make your life miserable
by cluttering the code with "a lot of macros/functions" to
implement this int32 pseudo-type?

(It's true that "Classic" C made no guarantees about type sizes,
but as Doug pointed out, ANSI X3.159 does specify that ints and
short ints are at least 16 bits, while long ints are at least 32
bits.  I thought there was language somewhere in the Standard
referring explicitly to bit counts, but I can't find it just now.
In any case, the "minimum maxima" for <limits.h> in section
2.2.4.2.1, combined with the requirement of a "pure binary
numeration system" and other language in section 3.1.2.5,
effectively imply the 16 and 32 bit sizes.)
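
For reference, the minima in question boil down to (paraphrasing the
Standard; a conforming implementation may of course exceed any of these):

        CHAR_BIT  >= 8                                     (bits in a char)
        SHRT_MAX  >= 32767,       SHRT_MIN <= -32767       (short: at least 16 bits)
        INT_MAX   >= 32767,       INT_MIN  <= -32767       (int:   at least 16 bits)
        LONG_MAX  >= 2147483647,  LONG_MIN <= -2147483647  (long:  at least 32 bits)
        USHRT_MAX >= 65535,  UINT_MAX >= 65535,  ULONG_MAX >= 4294967295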

>...define types like int32 and a
>lot of macros/functions that go along with them.  For example,
>int32plus, int32divide, ... etc.

What does this mean?  C isn't C++, but it has always defined
binary operators such as "+" as working correctly for any
"arithmetic type" (i.e. integers and floating-point numbers of
all sizes) with implicit casts inserted as necessary.  The only
problem I have with user-defined types in C is printing them.
If you have

	int32 bigint;

do you print it with %d or %ld?  (Come to think of it, this is
another strong argument in favor of "long int" over "int32".)

                                            Steve Summit
                                            scs@adam.mit.edu

P.S. The answer to "How do you print something declared with
`int32 bigint;' ?" is that you have to abandon printf in favor of
something you define, like "print32".  I find this awkward, and
far less convenient than printf.  C++ has another syntax, which
isn't perfect, either.  User-defined output is tricky, and I'm
still waiting for an ideal solution. CLU's was, as I recall,
fairly clever.  I heard that 8th edition research Unix has a way
to "register" new printf %-formats, which I'd love to learn the
details of.
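
(A print32, for what it's worth, needn't be anything more exotic than a
wrapper that funnels the value through one known format -- something like
this sketch, which assumes int32 is really a long underneath:

#include <stdio.h>

typedef long int32;             /* or however int32 is really defined */

void print32(int32 bigint)
{
    printf("%ld", (long)bigint);    /* cast so the format always matches */
}

but it loses all of printf's field-width and precision flexibility, which
is exactly why I find it awkward.)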

pt@geovision.uucp (Paul Tomblin) (01/14/91)

scs@adam.mit.edu (Steve Summit) writes:
>The only
>problem I have with user-defined types in C is printing them.
>If you have

>	int32 bigint;

>do you print it with %d or %ld?  (Come to think of it, this is
>another strong argument in favor of "long int" over "int32".)

Simple, you cast all your integer types to long, and printf them as %ld.
The cast probably isn't even needed, because it's a variadic function.
If I remember correctly, variadic functions are type promoted the same way
classic C functions are: integer types (char, int, unsigned, etc.) are
promoted to long, and real types (float, double) are promoted to double.

Anybody know what happens to "long double" types in variadic functions, or 
is "long double" not an ANSI-C type?
-- 
Paul Tomblin, Department of Redundancy Department.       ! My employer does 
The Romanian Orphans Support Group needs your help,      ! not stand by my
Ask me for details.                                      ! opinions.... 
pt@geovision.gvc.com or {cognos,uunet}!geovision!pt      ! Me neither.

gwyn@smoke.brl.mil (Doug Gwyn) (01/15/91)

In article <BEVAN.91Jan12120920@orca.cs.man.ac.uk> bevan@cs.man.ac.uk (Stephen J Bevan) writes:
>What about the cases where it is a requirement that a particular int
>MUST be able to hold 32 bit numbers.

The application should use "long" not "int" for such variables.

pds@lemming.webo.dg.com (Paul D. Smith) (01/15/91)

[] In article <BEVAN.91Jan12120920@orca.cs.man.ac.uk>, bevan@cs.man.ac.uk (Stephen J Bevan) writes:

[] > What about the cases where it is a requirement that a particular
[] > int MUST be able to hold 32 bit numbers.  If you transfer this to
[] > a 16 bit int system, your software is going to die horribly.

[] If the variable is required to hold 32 bit numbers, it should be a
[] long (which is guaranteed to be large enough) rather than an int.
[] I completely agree with Doug -- INT32 and friends are of no real
[] value.

At the risk of putting my foot in something unpleasant, I would like
to say I believe INT32 & friends *are* of real value.  Sure, right now
"long int" is 32 bits on any machine which supports it.  I firmly
believe that in the relatively near future 64-bit & above machines
will become a common reality.  I don't know what C, etc. will do to
support 64-bit integers, but I *do* know that whatever it is, I will
be able to port my code with at most a simple change of the type of
INT32.  It helps me sleep at night ... :-)

However, an even more important reason (IMHO) for the existence of
INT32 et al. is that it tells you what is expected.  If you see a type
INT32 in the code, you say "he wants a 32-bit integer".  If you see a
"long int" in the code, you say "he wants a *big* integer".  In cases
where it is important that the number of bits == 32, or # of bits ==
16, or whatever (in networking software I find this to be a common
case), a type helps the reader see what is going on.

If you feel types such as "bool" are valuable as abstractions, then
why not INT32?  It is an abstracted name given to a quantity with
particular, defined uses and properties and is distinct (IMO) from a
simple "long int".

                                                                paul
-----
 ------------------------------------------------------------------
| Paul D. Smith                          | pds@lemming.webo.dg.com |
| Data General Corp.                     |                         |
| Network Services Development Division  |   "Pretty Damn S..."    |
| Open Network Systems Development       |                         |
 ------------------------------------------------------------------

bull@ccs.carleton.ca (Bull Engineers) (01/15/91)

> ... I completely
> agree with Doug -- INT32 and friends are of no real value.
                                                 ^^^^

Ha ha ha ha!  Good pun!

dbrooks@osf.org (David Brooks) (01/15/91)

In article <14848@smoke.brl.mil>, gwyn@smoke.brl.mil (Doug Gwyn) writes:
|> In article <BEVAN.91Jan12120920@orca.cs.man.ac.uk> bevan@cs.man.ac.uk (Stephen J Bevan) writes:
|> >What about the cases where it is a requirement that a particular int
|> >MUST be able to hold 32 bit numbers.
|> 
|> The application should use "long" not "int" for such variables.

Does anyone know any implementations where:
  - int is 32 bits
  - long is 64 bits
  - there's no 64-bit support in hardware, and long arithmetic is inefficient?
-- 
David Brooks				dbrooks@osf.org
Systems Engineering, OSF		uunet!osf.org!dbrooks
"Home is the bright cave under the hat." -- Lance Morrow

wirzeniu@cs.Helsinki.FI (Lars Wirzenius) (01/15/91)

In article <1991Jan13.182655.17672@athena.mit.edu> scs@adam.mit.edu writes:
>P.S. The answer to "How do you print something declared with
>`int32 bigint;' ?" is that you have to abandon printf in favor of
>something you define, like "print32".  I find this awkward, and

Is there any problem in using 

	printf("%ld", (long) bigint)

other than that it's clumsy?

Lars Wirzenius    wirzeniu@cs.helsinki.fi    wirzenius@cc.helsinki.fi

richard@aiai.ed.ac.uk (Richard Tobin) (01/15/91)

>>where it is important that the number of bits == 32, or ...

>But there is no guarantee that there will BE such an integral type!

True.  But at least it tells you that the programmer wrote the code on
the assumption that there would be.

Of course, most code has no need of such assumptions, but sometimes the
need for efficiency outweighs the need for portability.

In general, I would say it was better to typedef a name for the specific
purpose, with a comment saying that it must be 32 bits.

-- Richard
-- 
Richard Tobin,                       JANET: R.Tobin@uk.ac.ed             
AI Applications Institute,           ARPA:  R.Tobin%uk.ac.ed@nsfnet-relay.ac.uk
Edinburgh University.                UUCP:  ...!ukc!ed.ac.uk!R.Tobin

darcy@druid.uucp (D'Arcy J.M. Cain) (01/16/91)

In article <10608@hydra.Helsinki.FI> Lars Wirzenius writes:
>In article <1991Jan13.182655.17672@athena.mit.edu> scs@adam.mit.edu writes:
>>P.S. The answer to "How do you print something declared with
>>`int32 bigint;' ?" is that you have to abandon printf in favor of
>>something you define, like "print32".  I find this awkward, and
>Is there any problem in using 
>	printf("%ld", (long) bigint)
>other than that it's clumsy?

I'm currently writing code that has this problem.  I have some types
such as:

typedef long TASK_ID;
typedef int HANDLE;
etc...

where I may want to change the types in the future.  I solved the printf
problem by doing the following:

#define f_TASK_ID "ld"
#define f_HANDLE "d"
...
printf("Current task is %5" f_TASK_ID " for handle %" f_HANDLE "\n", t, h);

Now if I change the type I just change the corresponding define and
re-compile.  Of course it is even clumsier but I think it makes the
changes easier to handle.

-- 
D'Arcy J.M. Cain (darcy@druid)     |
D'Arcy Cain Consulting             |   There's no government
West Hill, Ontario, Canada         |   like no government!
+1 416 281 6094                    |

adrian@mti.mti.com (Adrian McCarthy) (01/16/91)

In article <1991Jan13.182655.17672@athena.mit.edu> scs@adam.mit.edu writes:
>Doug Gwyn (I think) wrote:
>> No, any C compiler worth using (and certainly any that conforms to the
>> standard) will provide at least 16 bits for an int, at least 32 bits
>> for a long, and at least 8 bits for a char.

Who cares how many bits are used in the representation?  What really matters
is the range of legal values.  Just because an int is 32 bits doesn't
guarantee that its range is -(2^31) -- +(2^31 - 1).  That assumes a
twos-complement machine.  While this may be the only representation you
ever run in to, if you're trying to remain portable I'd watch out for this.
Someday you might meet a sign-magnitude or even a BCD (Binary Coded
Decimal) machine.

Granted, if you're trying to put a bitmask into an int, it's the number of
bits that counts.

>In any case, the "minimum maxima" for <limits.h> in section
>2.2.4.2.1, combined with the requirement of a "pure binary
>numeration system" and other language in section 3.1.2.5,
>effectively imply the 16 and 32 bit sizes.)

Yes, use limits.h.  But does "pure binary numeration system" imply that
you can't make an ANSI-compliant C compiler for a BCD machine?

Aid.  (adrian@gonzo.mti.com)

gwyn@smoke.brl.mil (Doug Gwyn) (01/16/91)

In article <1291@mti.mti.com> adrian@mti.UUCP (Adrian McCarthy) writes:
>Who cares how many bits are used in the representation?  What really matters
>is the range of legal values.

That's what I was referring to.  I used the shorthand terminology for
brevity, on the assumption that everyone who cared would understand
the abbreviation.

>Yes, use limits.h.  But does "pure binary numeration system" imply that
>you can't make an ANSI-compliant C compiler for a BCD machine?

<limits.h> doesn't necessarily exist in non-ANSI C environments, which
many of us still have to contend with.  However, the guaranteed minimum
sizes apply whether or not <limits.h> is #included.

Certainly one could produce a conforming implementation on an inherently
BCD machine; however, care would have to be taken to ensure that bitwise
operations and unsigned arithmetic were properly implemented.  There is
no requirement that <limits.h> (for example) fully describe hardware
representation capabilities, merely the ranges officially constituting
the conforming implementation of the C standard.  Any application that
attempted to exploit values outside the official range would be non-
strictly conforming.

karl@ima.isc.com (Karl Heuer) (01/17/91)

In article <1291@mti.mti.com> adrian@mti.UUCP (Adrian McCarthy) writes:
>But does "pure binary numeration system" imply that you can't make an
>ANSI-compliant C compiler for a BCD machine?

Sort of.  More precisely, the compiler would have to generate code that makes
the machine emulate a binary architecture where necessary.  (E.g. for "|".)

Karl W. Z. Heuer (karl@ima.isc.com or uunet!ima!karl), The Walking Lint

henry@zoo.toronto.edu (Henry Spencer) (01/18/91)

In article <1332@geovision.UUCP> pt@geovision.gvc.com writes:
>Anybody know what happens to "long double" types in variadic functions, or 
>is "long double" not an ANSI-C type?

It's an ANSI C type, in fact it was ANSI C that introduced it.  Long double
stays long double, it doesn't get changed at the variadic interface.
-- 
If the Space Shuttle was the answer,   | Henry Spencer at U of Toronto Zoology
what was the question?                 |  henry@zoo.toronto.edu   utzoo!henry

john@sco.COM (John R. MacMillan) (01/18/91)

|At the risk of putting my foot in something unpleasant, I would like
|to say I believe INT32 & friends *are* of real value.  Sure, right now
|"long int" is 32 bits on any machine which supports it.  I firmly
|believe that in the relatively near future 64-bit & above machines
|will become a common reality.  I don't know what C, etc. will do to
|support 64-bit integers, but I *do* know that whatever it is, I will
|be able to port my code with at most a simple change of the type of
|INT32.  It helps me sleep at night ... :-)

If the new machine has a 32 bit integer type, perhaps.

I ported a large amount of software to a 64-bit machine, and one
application which broke most often used int16, int32, et al, so the
argument that simply using these constructs will allow upward
portability is not true.

Much of the confusion in this piece of software stemmed from the fact
that in some places int16 was thought of as being *at least* 16 bits
and in others it was *exactly* 16 bits.  The porting guide stated the
former, but obviously not all the developers adhered to it.  The
machine had no 16 bit integer type.
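
One defense, for what it's worth, is to make the two promises visibly
distinct in the names themselves -- something like (invented names):

        typedef short   int_least16;    /* AT LEAST 16 bits; safe to map to a wider type */
        typedef short   int_exact16;    /* EXACTLY 16 bits; may simply not exist on some machines */

so that a porter can tell at a glance which guarantee a declaration is
relying on.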
-- 
John R. MacMillan  | I guess I lied to you when I told you I like baseball
SCO Canada, Inc.   | It's not so much the game I like it's the hats.
john@sco.COM       |      -- barenaked ladies

dhesi%cirrusl@oliveb.ATC.olivetti.com (Rahul Dhesi) (01/18/91)

In <1163@tredysvr.Tredydev.Unisys.COM>
paul@tredysvr.Tredydev.Unisys.COM (Paul Siu) writes:

     While looking through Thomas Plum's style guide, I notice he
     mentions that one should set up a separate #define file for data
     types.

It's a good idea.  In the future I am planning to use defines or
typedefs similar to the following.

     name               property
     ----               --------
     t_int8              8 bits or more, signed
     t_xint8             8 bits or more, signed or unsigned
     t_uint8             8 bits or more, unsigned
     t_int16            16 bits or more, signed
     t_xint16           16 bits or more, signed or unsigned
     t_uint16           16 bits or more, unsigned

The idea behind having a "signed or unsigned" data type is that when
you don't care (e.g. when storing bits or smallish unsigned values),
you can use whichever type (signed or unsigned) is handled more
efficiently on a given machine.
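
On a machine with 16-bit ints and 32-bit longs the definitions might come
out something like this (only one possible mapping, of course):

        typedef char            t_int8;     /* assumes plain char is signed here */
        typedef char            t_xint8;
        typedef unsigned char   t_uint8;
        typedef int             t_int16;
        typedef int             t_xint16;
        typedef unsigned int    t_uint16;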
--
Rahul Dhesi <dhesi%cirrusl@oliveb.ATC.olivetti.com>
UUCP:  oliveb!cirrusl!dhesi

karl@ima.isc.com (Karl Heuer) (01/19/91)

In article <1332@geovision.UUCP> pt@geovision.gvc.com writes:
>Simple, you cast all your integer types to long, and printf them as %ld.

This is probably the least clumsy solution.

>If I remember correctly, variadic functions are type promoted the same way
>classic C functions are

Yes, the default argument-widening rules apply to the non-fixed arguments.

>[i.e.] integer types are promoted to long,

But that isn't one of them.  Small types will widen as far as (signed or
unsigned) int, but nothing smaller than long will widen to long.%

>[so] the cast probably isn't even needed

Conclusion fails due to faulty premise.  The cast is in fact required.

(Note that this is not the same thing as the "usual arithmetic
conversions"; see the footnote below.)

>Anybody know what happens to "long double" types in variadic functions, or
>is "long double" not an ANSI-C type?

It is.  A float promotes to double, but double and long double stay as-is.  So
there is no printf format for float; "%f" means double (including a widened
float); "%Lf" means long double.  "%lf" is an illegal format, though many
implementations treat it as identical to "%f".
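
To spell the rules out with a small, purely illustrative example:

#include <stdio.h>

int main(void)
{
    char c = 'x';
    short s = 1;
    int i = 2;
    long l = 3L;
    float f = 4.0f;
    double d = 5.0;
    long double ld = 6.0;

    printf("%d %d %d\n", c, s, i);  /* char and short widen to int */
    printf("%ld\n", l);             /* long stays long */
    printf("%ld\n", (long)i);       /* an int needs an explicit cast for %ld */
    printf("%f %f\n", f, d);        /* float widens to double; %f covers both */
    printf("%Lf\n", ld);            /* long double stays as-is; use %Lf */
    return 0;
}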

Karl W. Z. Heuer (karl@ima.isc.com or uunet!ima!karl), The Walking Lint
________
% Mixing an int with a long in an expression such as i+li will cause the int
  to be promoted to long by the "usual arithmetic conversions", but that isn't
  what we're talking about here.

sarima@tdatirv.UUCP (Stanley Friesen) (01/23/91)

In article <1332@geovision.UUCP> pt@geovision.gvc.com writes:
>scs@adam.mit.edu (Steve Summit) writes:
>If I remember correctly, variadic functions are type promoted the same way
>classic C functions are, integer types (char, int, unsigned, etc) are 
>promoted to long, real types (float, double) are promoted to double.

BEEP, you remember wrong.  Even for classic C you are wrong.
The 'short' integer types (char, short) are promoted to int, but
int is not (and never was) promoted to long (at least in funtion calls).

If int had been promoted to long, the %ld would be unnecessary.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)