[comp.std.c] Sizes, alignments, and maxima

karl@haddock.ima.isc.com (Karl Heuer) (02/23/89)

In article <830@atanasoff.cs.iastate.edu> hascall@atanasoff.cs.iastate.edu (John Hascall) writes:
>In article <8943@alice.UUCP> ark@alice.UUCP (Andrew Koenig) writes:
>>For that reason it's hard to see how a C implementation could possibly
>>do anything but put [an array] in contiguous memory.
>
>How about:  Assume int's are (say) 2 bytes.  Assume further that ... all
>accesses must be on an 8-byte boundary.

Then sizeof(int) is 8, and the elements of the array consist of contiguous
8-byte units, of which only two bytes are significant.  This sounds much like
a Cray-2, in fact.
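
The guaranteed relationships can be spelled out in a small ANSI C sketch
(nothing Cray-specific, just what any conforming implementation must print
consistently):

    #include <stdio.h>

    int main(void)
    {
        int a[4];

        /* sizeof(int) already includes any padding the alignment rules
         * force on an element, so an array of int is still a contiguous
         * run of sizeof(int)-byte cells, even if only a few of those
         * bytes carry significant bits. */
        printf("sizeof(int) = %lu\n", (unsigned long)sizeof(int));
        printf("sizeof(a)   = %lu\n", (unsigned long)sizeof(a));
        printf("distance from a[0] to a[1] = %lu bytes\n",
               (unsigned long)((char *)&a[1] - (char *)&a[0]));
        return 0;
    }

On the hypothetical machine above, all three lines report in units of the
full 8-byte element, not the two significant bytes.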

Question for comp.std.c (to which I've redirected followups): I've been told
that the Cray-2 has sizeof(int) == 8, yet INT_MAX == 0x7FFFFFFF (i.e. the
arithmetic is only accurate to 4 bytes when using int).  Is this legal in a
conforming implementation?  I think I can prove that UINT_MAX must be 2^64-1,
but I'm less sure about INT_MAX.  Section 3.1.2.5 has a restriction to binary
architectures, which by the definition in the footnote seems to require every
bit except the highest to represent a power of two; should this be interpreted
as a requirement that 2^63-1 must be representable in an 8-byte int?
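
To make the question concrete, here is a minimal sketch that compares the
storage an int occupies against the value bits its limits advertise; on an
implementation as described above, the first line would print 64 while the
second prints 31:

    #include <limits.h>
    #include <stdio.h>

    /* How many value bits a maximum like INT_MAX or UINT_MAX implies. */
    static int value_bits(unsigned long max)
    {
        int n = 0;
        while (max != 0) {
            max >>= 1;
            n++;
        }
        return n;
    }

    int main(void)
    {
        printf("bits of storage in an int: %d\n",
               (int)(CHAR_BIT * sizeof(int)));
        printf("value bits from INT_MAX  : %d (plus a sign bit)\n",
               value_bits((unsigned long)INT_MAX));
        printf("value bits from UINT_MAX : %d\n",
               value_bits((unsigned long)UINT_MAX));
        return 0;
    }
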

Karl W. Z. Heuer (ima!haddock!karl or karl@haddock.isc.com), The Walking Lint
(I've implicitly assumed 8-bit bytes above, simply because it would be too
cumbersome to type the more correct expressions involving CHAR_BIT.)

gwyn@smoke.BRL.MIL (Doug Gwyn ) (02/23/89)

In article <11838@haddock.ima.isc.com> karl@haddock.ima.isc.com (Karl Heuer) writes:
>Question for comp.std.c (to which I've redirected followups): I've been told
>that the Cray-2 has sizeof(int) == 8, yet INT_MAX == 0x7FFFFFFF (i.e. the
>arithmetic is only accurate to 4 bytes when using int).  Is this legal in a
>conforming implementation?  I think I can prove that UINT_MAX must be 2^64-1,
>but I'm less sure about INT_MAX.  Section 3.1.2.5 has a restriction to binary
>architectures, which by the definition in the footnote seems to require every
>bit except the highest to represent a power of two; should this be interpreted
>as a requirement that 2^63-1 must be representable in an 8-byte int?

I think an implementation such as you describe is legal.
What is required is that integers be represented in a binary
numeration system, that a nonnegative signed integer of a given size
have the same representation as the corresponding unsigned integer
with the same value, that all integers in the ranges given by INT_MAX
etc. be representable, and that unsigned arithmetic be performed
modulo UINT_MAX+1 (similarly for unsigned long, etc.).  I don't see
any way these requirements can be combined to "prove" that every bit
pattern contained in the "sizeof" space has to be interpretable as a
valid integral value.  In fact I'm pretty sure we didn't want to
require that, since (as you note) some architectures really don't
support it.
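
Those requirements are easy to check mechanically; a minimal sketch
(assuming only the guarantees listed above, nothing Cray-specific):

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int si = 12345;
        unsigned int ui = 12345u;
        unsigned int u = UINT_MAX;

        /* A nonnegative signed integer must have the same representation
         * as the unsigned integer with the same value. */
        printf("12345 has the same representation signed and unsigned: %s\n",
               memcmp(&si, &ui, sizeof si) == 0 ? "yes" : "no");

        /* Unsigned arithmetic is defined modulo UINT_MAX+1, so this
         * "overflow" is well defined and wraps around to 4. */
        u = u + 5u;
        printf("UINT_MAX + 5 == %u\n", u);

        return 0;
    }

None of this forces every bit pattern that fits in sizeof(int) bytes to
denote a valid int value, which is the point at issue.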