[comp.sys.apple] Long and short integers

NU133716@VM1.NODAK.EDU (Brian Glaeske) (11/22/88)

In reply to the question about how the computer stores numbers.
  Most, I dare say ALL computers use what is called Most Significant Bit,
and Least Significant Bit.  These are the biggest and the smallest part
of the the number respectfully.  The MSB is store in the highest memory
location, because it is bigger.  And the LSB is stored in the lowest memory
location, because of course it it smaller.
  I hope this answers the senders question.

Brian G.
  Nu133716@vm1.nodak.edu        nu133716@ndsuvm1

bfox%eagle@HUB.UCSB.EDU (Brian Fox) (11/23/88)

   Date:         Tue, 22 Nov 88 02:55:53 CST
   From: Brian Glaeske <NU133716@vm1.nodak.edu>

   In reply to the question about how the computer stores numbers.
     Most, I dare say ALL computers use what is called Most Significant Bit,
   and Least Significant Bit.  These are the biggest and the smallest part
   of the the number respectfully.  The MSB is store in the highest memory
   location, because it is bigger.  And the LSB is stored in the lowest memory
   location, because of course it it smaller.
     I hope this answers the senders question.

Hah!  That's pretty funny.

Even though I suspect you are joking, some people may believe you, and so I
feel compelled to explain MSB and LSB.

The MSB is the Most Significant because it has the most significance in
determining the value of the number.

In the decimal system the most significant digits are always on the left; put
another way, the digits closer to the left represent higher magnitudes than
the digits closer to the right.  In the number 235, the `2' is the most
significant digit, and the `5' is the least significant.

When a computer CPU (brain) is manufactured, the designer makes a decision as
to which `end' of a byte is going to contain the MSB.  This decision is
practically arbitrary.  It is usually based on a combination of hardware
design and compatibility with other existing hardware.

In the Apple, the MSB is the leftmost bit in the byte.  If you need several
bytes worth of bits to represent a large number, then the byte at the lowest
memory address contains the MSB (and in fact, is known as the Most Significant
Byte).

Brian Fox

PS: If this isn't clear, learn how to count in binary on your fingers, and
then do it in a mirror.

c60c-3aw@e260-3d.berkeley.edu (Andy McFadden) (11/23/88)

In article <8811220359.aa03637@SMOKE.BRL.MIL> NU133716@VM1.NODAK.EDU (Brian Glaeske) writes:
[...]
>  Most, I dare say ALL computers use what is called Most Significant Bit,
>and Least Significant Bit.  These are the biggest and the smallest part
>of the the number respectfully.  The MSB is store in the highest memory
>location, because it is bigger.  And the LSB is stored in the lowest memory
>location, because of course it it smaller.

The Mac...?

>Brian G.

-- 
fadden@zen.berkeley.edu [crashed]
c60c-3aw@widow.berkeley.edu (Andy McFadden)
(Outgoing E-mail has about a 40% chance of successfully reaching you.  Feel
 free to respond through the mail, but I probably can't answer.)

gwyn@smoke.BRL.MIL (Doug Gwyn ) (11/23/88)

In article <8811221659.AA15060@hub.ucsb.edu> bfox%cornu@hub.ucsb.edu writes:
>Hah!  That's pretty funny.
>In the Apple, the MSB is the leftmost bit in the byte.

That's even funnier!

dnelson@umbio.MIAMI.EDU (Dru Nelson) (11/23/88)

in article <8811221659.AA15060@hub.ucsb.edu>, bfox%eagle@HUB.UCSB.EDU (Brian Fox) says:
> 
> 
>Most, I dare say ALL computers use what is called Most Significant Bit,
...
>of the the number respectfully.  The MSB is store in the highest memory
>location, because it is bigger.  And the LSB is stored in the lowest memory
...
>I hope this answers the senders question.
> 
> Hah!  That's pretty funny.

I agree, I would hate to see this guy program an Apple II in machine
:-)

] Some deleted lines [

> 
> When a computer CPU (brain) is manufactured, the designer makes a decision as
> to which `end' of a byte is going to contain the MSB.  This decision is
     ^^^^^^^^^^^^^^^^^^^^^
> 
> Brian Fox

Actually, every CPU designer recognizes the most significant bit on
the left.  As you said earlier, it is because that is the way the
number systems are set up.  However, the only difference is where the
most significant byte is stored.  As you already stated, on the 65xx series
the most significant byte is lowest in memory.  On 68xx and others the least
significant byte is lowest in memory.  The msb and lsb are always
in the same position.

p.s.  that was a good explanation.
-- 
Dru Nelson                    UUCP: ....!uunet!gould!umbio!dnelson
Miami, Florida                 MCI: dnelson
                          Internet: dnelson%umbio@umigw.miami.edu

bfox%eagle@HUB.UCSB.EDU (Brian Fox) (11/23/88)

Boy, is my face red!

I said:

    In the Apple, the MSB is the leftmost bit in the byte.

That is correct.

    If you need several bytes worth of bits to represent a large number, then
    the byte at the lowest memory address contains the MSB (and in fact, is
    known as the Most Significant Byte).

That is incorrect.  The Apple stores the least significant byte at the lower
address.

Sorry,

	Brian Fox

rich@pro-exchange.cts.COM (Rich Sims) (11/23/88)

Interesting discussion on number storage, but the varied responses *still*
don't tell the whole (or correct) story.

IMPORTANT:  There is a distinction here between "addresses" and "numbers"!!

1.  In a single byte value, the MSB/LSB position is arbitrarily determined
when the chip is manufactured, and there is *no* general rule.  The Apple II
series (6502) happens to use the format wherein the MSB is the first/leftmost
bit, whichever term you prefer.

2.  The Apple can only deal with 8-bit (single byte) numbers.  It takes
software to handle anything requiring more than 1 byte of storage.  Therefore,
the method of storing multi-byte numbers is strictly software dependent.

3.  There's an exception to most rules, so here's the one for #2 above:  the
Apple can deal with a two-byte address (16 bit value).  Those are stored with
the less significant *byte* at the lower memory address, or "first" if you
will.  Note that when you're talking about the GS and its 65816, the
addresses can be longer than two bytes -- but the same byte ordering is used.

(Other computers handle addressing differently -- again, no "standard"!)

Because of the way the addresses are handled, many Apple programmers have used
the same convention to store multi-byte numbers -- but there is no requirement
that they do so as far as the machine is concerned.

The Apple also contains built-in number manipulation routines in ROM.  Those
routines recognize yet another method of storing numbers, the "floating point"
format.  This is also software (firmware??) dependent because it's a function
of the code, which happens to be in ROM.  Naturally, if you're going to use
those routines, you (may) have to know how to move numbers into and out of
that format.

And then, there's SANE!  :-)

Bottom line - there's no general rule that applies to any computer, except
that in each type, the bit-order will be consistent within each byte, and the
byte order of addresses (as opposed to numbers) will be consistent.  In those
computers which can handle multi-byte numbers, the byte order of the largest
number the processor can deal with as an entity will also be consistent for
each type of processor.

Everything else is up to the programmer -- gripe at her!
                                                     ^
            Note use of non-sexist language above ---|

-Rich Sims-

UUCP: [ sdcsvax nosc ] !crash!pro-exchange!rich || pro-exchange: 305/431-3203
ARPA: crash!pro-exchange!rich@nosc.mil          ||  300/1200/2400/9600 (HST)
INET: rich@pro-exchange.cts.com                 ||     login = 'register'

"People will do strange and amazing things -- if you give them money!"

mcgurrin@MITRE.ARPA (11/24/88)

I'm glad we've got all that cleared up.  Just to expand, the only time I know
of when bit (not byte) order gets cloudy is in communications.  Some systems
send the lsb first, some the msb (I'm talking about synchronous comm. in 
particular).  As I recall, token bus and CSMA/CD LANs send it one way, whereas
token ring LANs handle it the opposite way.

bfox%eagle@HUB.UCSB.EDU (Brian Fox) (11/24/88)

   Date: 22 Nov 88 22:57:17 GMT
   From: Doug Gwyn  <haven!adm!smoke!gwyn@ames.arc.nasa.gov>
   Organization: Ballistic Research Lab (BRL), APG, MD.
   References: <8811220359.aa03637@SMOKE.BRL.MIL>, <8811221659.AA15060@hub.ucsb.edu>
   Sender: info-apple-request@brl.mil

   In article <8811221659.AA15060@hub.ucsb.edu> bfox%cornu@hub.ucsb.edu writes:
   >Hah!  That's pretty funny.
   >In the Apple, the MSB is the leftmost bit in the byte.

   That's even funnier!

	ASL  foo	;multiply foo by 2 by arithmetic shifting LEFT.

friedman@porthos.rutgers.edu (Gadi ) (11/24/88)

In article <986@umbio.MIAMI.EDU> dnelson@umbio.MIAMI.EDU (Dru Nelson) writes:


> Actually, every CPU designer recognizes the most signifigant bit on
            ^^^^^
> the left.  As you said earlier, it is because that is the way the
> number systems are set up.  However, the only difference is where the
> most significant byte is stored.  As you already stated, on the 65xx series
> the most significant byte is lowest in memory.  On 68xx and others the least
> significant byte is lowest in memory.  The msb and lsb are always
> in the same position.
> 
> p.s.  that was a good explanation.
> -- 
> Dru Nelson                    UUCP: ....!uunet!gould!umbio!dnelson
> Miami, Florida                 MCI: dnelson
>                           Internet: dnelson%umbio@umigw.miami.edu

Never say "every".  A friend of mine programs on Perkin(sp?) Elmer
machines.  For them, bit 0 is the MSBit.  We had lots of fun
trying to interface our chip simulation subroutines that we wrote
for a class.  We both assumed 'our way' and all the numbers
were reversed.


                                 Gadi
-- 


uucp:   {ames, cbosgd, harvard, moss}!rutgers!aramis.rutgers.edu!friedman
arpa:   FRIEDMAN@ARAMIS.RUTGERS.EDU

dnelson@umbio.MIAMI.EDU (Dru Nelson) (11/24/88)

Well, I stand corrected by Gadi.  I know 2 things now.  Never say
every and never forget to read the manual when programming a Perkin
Elmer machine.  :-) Stick'n to the old ][.


-- 
Dru Nelson                    UUCP: ....!uunet!gould!umbio!dnelson
Miami, Florida                 MCI: dnelson
                          Internet: dnelson%umbio@umigw.miami.edu

gwyn@smoke.BRL.MIL (Doug Gwyn ) (11/24/88)

In article <8811231628.AA10036@hub.ucsb.edu> bfox%cornu@hub.ucsb.edu writes:
>   From: Doug Gwyn  <haven!adm!smoke!gwyn@ames.arc.nasa.gov>
>   In article <8811221659.AA15060@hub.ucsb.edu> bfox%cornu@hub.ucsb.edu writes:
>   >Hah!  That's pretty funny.
>   >In the Apple, the MSB is the leftmost bit in the byte.
>   That's even funnier!
>	ASL  foo	;multiply foo by 2 by arithmetic shifting LEFT.

Of course, bytes don't have left and right sides.  That's purely an
artifact of one way of drawing diagrams of them on paper.  Putting the
MSB on the left in such a diagram is consistent with conventional
(Arabic) positional numeric notation.

When dealing with matters of "reality", it is extremely important to
distinguish between things that concretely exist and things that are
just part of our way of describing what exists.  Instead of "first"
(implying time sequencing) or "left" (implying spatial arrangement),
one should talk about "lowest-addressed" or "least significant", both
of which have physical/mathematical significance.

In the majority of modern architectures, when several bytes are
clumped into a word by the hardware, the lowest-addressed byte
contains the most significant bits of the word (considered as an
integer).  The 65xx, PDP-11, VAX, and NS32xxx families follow the
opposite convention.  Actually, PDP-11 hardware support (FP11) for
32-bit integers is a crazily scrambled order due to the FPU designer
thinking big-endian while the CPU designer thought little-endian.
The VAX (also the PDP-11 Fortran compiler!) straightened out the
integer byte order.  The PDP-11 Fortran compiler is a good example
of how much of this stuff is convention; although the only hardware
support for 4-byte integers used one order, the compiler used a
different storage order (and swapped it around when necessary during
computation).

As you note with your example, hardware designers often don't keep
their sequencing concepts straight.  That may help to explain why
there are so many poor choices made by computer architects, for
example use of big-endian byte suborder in the MC680x0.

By the way, time sequence IS important in bit-serial communication.

gwyn@smoke.BRL.MIL (Doug Gwyn ) (11/24/88)

In article <8811231856.AA01000@crash.cts.com> pnet01!pro-simasd!pro-exchange!rich@nosc.mil writes:
>1.  In a single byte value, the MSB/LSB position is arbitrarily determined
>when the chip is manufactured, and there is *no* general rule.  The Apple II
>series (6502) happens to use the format wherein the MSB is the first/leftmost
>bit, whichever term you prefer.

This is nonsense.  The 6502 transfers 8 bits of data to/from memory in
parallel.  There is no "first" bit, and which is "leftmost" I suppose
depends on whether you're looking into the top of your Apple II from
the left side of the case or the right.

>2.  The Apple can only deal with 8-bit (single byte) numbers.  It takes
>software to handle anything requiring more than 1 byte of storage.  Therefore,
>the method of storing multi-byte numbers is strictly software dependent.

More nonsense.  Several 6502 operations involve 16-bit address arithmetic,
and the 65816 has a 16-bit accumulator (and 24-bit addressing).  There is
no reason to consider address arithmetic as distinct from accumulator
arithmetic for the 65xx architecture.

>Bottom line - there's no general rule that applies to any computer, except
>that in each type, the bit-order will be consistent within each byte,

That's an untestable assertion.  Infra-byte "bit order" (whatever that
means; have you seen the way RAM is organized?) is not visible to the
programmer.

>Everything else is up to the programmer -- gripe at her!
>                                                     ^
>            Note use of non-sexist language above ---|

And a final piece of nonsense.  That is about as sexist as one could imagine.

rich@pro-exchange.cts.com (Rich Sims) (11/25/88)

in reply to Doug Gwyn

(and out of sequence with your comments....)

> And a final piece of nonsense.  That is about as sexist as one could
> imagine.

That was *meant* to be somewhat "tongue-in-cheek"...  sorry you didn't
recognize it as such!

> This is nonsense.  The 6502 transfers 8 bits of data to/from memory in
> parallel.  There is no "first" bit, and which is "leftmost" I suppose
> depends on whether you're looking into the top of your Apple II from
> the left side of the case or the right.

I'll go along with that -- from a purely hardware viewpoint (although,
I suspect that "front or back of the case" would have been a better
selection).

In terms of the representation and usage, which is apparently what other
folks are dealing with, the statement would seem to be correct.  As far
as the concept of "first" is concerned, there are a fairly substantial
number of independently developed applications which are doing (or rather,
allowing the Apple to do) parallel <-> serial conversions, with a reasonably
high level of success.  There must be *some* sort of "starting point" that's
fairly widely accepted as first|leading|initial|whatever you like!!

It is also interesting to note that the design specification sheets on both
the 6502 and 65816 list instructions which shift bits "left" or "right".
I wonder how they do that, if there is no "left" or "right"?

> More nonsense.  Several 6502 operations involve 16-bit address arithmetic,
> and the 65816 has a 16-bit accumulator (and 24-bit addressing).  There is
> no reason to consider address arithmetic as distinct from accumulator
> arithmetic for the 65xx architecture.

No?  Perhaps I'm mistaken about the internal organization of the chip.
I was always under the impression that "address" arithmetic and "data"
arithmetic were not only handled differently, but didn't even use the
same registers.  At least, that's what the references available to me
seem to indicate.

I understand the 6502's capabilities in the area of address arithmetic,
but perhaps you'd be good enough to provide me with a brief example of
the same sort of thing using data other than an address... perhaps
storing a 16-bit value somewhere with a single instruction, or just
incrementing one.

By the way, the 'thread' was fairly specific to the 6502, but your
comment concerning the 65816's capabilities was correct.  I fail to see
where it's any different in principle, though.  The chip simply has the
capability of handling 8 additional bits in either mode.  It can't deal
with 24 bit data any better than the 6502 can with 16 bit data.

> That's an untestable assertion.  Infra-byte "bit order" (whatever that
> means; have you seen the way RAM is organized?) is not visible to the
> programmer.

Perhaps not "easily" testable, but certainly not "untestable".  I'll agree
that it's not "visible" to the programmer, but if it's not consistent, then
how do you explain the large number of operations involving bit manipulation
that have been carried out correctly over the years?  Has everyone just
been very lucky?

-Rich Sims-

UUCP: { sdcsvax nosc } !crash!pro-exchange!rich
ARPA: crash!pro-exchange!rich@nosc.mil
INET: rich@pro-exchange.cts.com

gwyn@smoke.BRL.MIL (Doug Gwyn ) (11/29/88)

In article <8811251119.AA18101@crash.cts.com> pnet01!pro-simasd!pro-exchange!rich@nosc.mil writes:
>As far as the concept of "first" is concerned, there are a fairly substantial
>number of independently developed applications which are doing (or rather,
>allowing the Apple to do) parallel <-> serial conversions, with a reasonably
>high level of success.  There must be *some* sort of "starting point" that's
>fairly widely accepted as first|leading|initial|whatever you like!!

There are standards such as RS-232-C for serial bit order, based on the
order in which the selector bars had to be triggered on mechanical
teleprinters in order to set up for a given character.  (In fact any
serial link-level protocol will have to specify how to interpret the
time sequence of bits.)

I don't 100% recall whether the least significant bit of the "ASCII
code value" was sent first or last for the teleprinters (I think it
was first).  The hardware could have been designed either way,
which is in fact my point: most-significant is not necessarily the
"first" bit although in a serial context it could be, depending on
the conventions that apply in the specific case.

>I was always under the impression that "address" arithmetic and "data"
>arithmetic were not only handled differently, but didn't even use the
>same registers.

But the assembly-level PROGRAMMER has to be able to mix these, for
example to perform arithmetic on a pushed value on the stack that
was placed there as a return address by the hardware when a
subroutine call was made.  The chip internals are irrelevant.

>[give example of] storing a 16-bit value somewhere with a single instruction

	LDA	something
(with extended mode active)  After all, this WAS a 65816 discussion
originally.  I don't know at what point you thought it became 8-bit
specific.