[comp.arch] *Why* do modern machines mostly have 8-bit bytes?

franka@mmintl.UUCP (Frank Adams) (01/01/70)

In article <699@elmgate.UUCP> ram@elmgate.UUCP (Randy Martens) writes:
|I have just completed an assignment working on a bunch of COMPAQ deskpro-286
|PC's.  They use 18 bit words ( 9 bit bytes).
|
|The reason - ... because there is hardware parity checking on all memory

Parity or error-correction bits are normally not included when counting the
size of bytes, words, or whatever.  The machine above is described as having
8 bit bytes and 16 bit words, with a parity bit per byte.
-- 

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

roy@phri.UUCP (Roy Smith) (07/21/87)

In article <8315@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
> Another example worth mentioning is the BBN C/70 and its kin, which have
> 10-bit bytes as I recall.

	Two related questions, now that I've pontificated enough on the
byte != 8 bits issue.

	First, why did older machines have all sorts of strange word
lengths -- 12, 36, and 60 being sizes that I know of, but I'm sure there
were others.

	Second (sort of the inverse of question #1), why do modern machines
have such a strong trend towards having power-of-2 word and byte lengths?
Other than holding ASCII characters nicely and making shift counts fit
nicely, I don't see any real strong reason.  In fact, to really make ASCII
fit nicely, you would want a 7-bit byte size, and if RISC is really the
wave of the future, I would expect to see multiple-shift instructions fall
by the wayside.  Nothing magic about power-of-2 bus widths.

	Anybody for a bit-aligned processor with variable word size (in the
same way the pdp-10 had variable byte size)?  You could do "ADD X, Y, I"
where X and Y are the operands and I is the number of bits of precision
wanted.  I should really mark that with a :-), but I'm partly serious (a
small part).
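
In C-ish terms the effect might look something like this (a hypothetical
sketch; the function name and types are made up, and no real instruction
set is implied):

    /* Hypothetical sketch of "ADD X, Y, I": add X and Y but keep only
       the low I bits of the result (I = requested precision in bits). */
    unsigned long add_prec(unsigned long x, unsigned long y, int i)
    {
        unsigned long mask;

        if (i >= (int)(sizeof(unsigned long) * 8))
            mask = ~0UL;                 /* full width requested */
        else
            mask = (1UL << i) - 1;       /* keep only the low i bits */
        return (x + y) & mask;
    }
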
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

jbn@glacier.STANFORD.EDU (John B. Nagle) (07/22/87)

      The 8-bit byte was an IBM innovation; the term "byte" first appeared
with the IBM System/360 product announcement.  Much of the 8-bit trend 
stemmed from the desire to be IBM compatible.  But, more importantly,
the view of memory as raw bits whose form and meaning were determined by
the program started to replace the view that the CPU architecture determined
the way that memory was to be used.  

      Variable-width machines have been built; the IBM 1401 and IBM 1620
were 1960's vintage machines with variable-length decimal arithmetic as the
only system of arithmetic.  Burroughs built some machines with bit-addressable
memory and variable-length binary arithmetic in the late 1960s.  As memory
became cheaper, complicating the CPU to save some memory faded out as a goal.

      Power-of-two architecture is definitely an IBM idea, as is, for
example, K=1024.  (UNIVAC machines were quoted as 65K, 131K, 262K, etc. for
decades.)  If you want to sell iron, the notion that the next increment
of capacity after N is 2*N has considerable appeal.  Today everybody does
it, but it was definitely more of an IBM thing in the 1970s.

					John Nagle

farren@hoptoad.uucp (Mike Farren) (07/22/87)

In article <2807@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>
>	Second (sort of the inverse of question #1), why do modern machines
>have such a strong trend towards having power-of-2 word and byte lengths?

Because most manufacturers of LSI and VLSI devices have standardized on
eight-bit wide devices (four bits for some, sixteen for others).  There are,
I'm sure, many reasons for this, including the popularity of microprocessors,
which are invariably (to my knowledge) based on an eight bit byte (exception:
those few four-bit devices); possible efficiencies in chip design when
working with a power of two (it's MUCH easier to implement 4-fold symmetry
than, say, 5-fold, and these symmetries greatly reduce the cost of designing/
manufacturing the chips); last but not least, the great benefits of
standardization, whatever the standard decided on.



-- 
----------------
                 "... if the church put in half the time on covetousness
Mike Farren      that it does on lust, this would be a better world ..."
hoptoad!farren       Garrison Keillor, "Lake Wobegon Days"

baxter@milano.UUCP (07/22/87)

I agree, there doesn't seem to be anything magic about 8 bits.

If I remember right, the Burroughs 1700 was a microprogrammed machine
in which the macroinstruction set (there were several, one per
programming language) treated the memory as if it had different
bit sizes.  The sizes chosen were based pretty much on the expected
size of data operands, and on what was effectively a Huffman encoding
of the particular macroinstruction set.  The microarchitecture
did have a fixed size word, with a fancy field extractor used by
the microcode to get chunks of whatever size it wanted.


-- 
-- Ira Baxter     Microelectronics and Computer Technology Corporation
(512) 338-3795    3500 West Balcones Center Drive, Austin, Texas 78759

bradley@think.COM (Bradley Kuszmaul) (07/22/87)

In article <2807@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>...
>	Anybody for a bit-aligned processor with variable word size (in the
>same way the pdp-10 had variable byte size)?  You could do "ADD X, Y, I"
>where X and Y are the operands and I is the number of bits of precision
>wanted.  I should really mark that with a :-), but I'm partly serious (a
>small part).

The Connection Machine has 64K processors and the instruction set does
include exactly the ADD instruction you describe.  (Except of course, it
is a "vector add").  It can be argued that many applications gain a lot
by not having to do 32 bit operations when the data only has 6 bits of
precision (e.g. vision algorithms).
 -Brad

Bradley C. Kuszmaul, Thinking Machines Corporation, Cambridge, MA
 bradley@think.com
 bradley@think.uucp (i.e. seismo!think!bradley)

perry@apollo.uucp (Jim Perry) (07/22/87)

In some old architectures the byte structure was more closely related to 
the input format -- Hollerith code.  Thus each byte corresponded to a column
on a card, with at least 12 bits: X, Y, 0..9.  The Honeywell 200 (to the best
of my memory, I played with this beast briefly in 1974) had these plus a
parity bit and two bits called "word mark" and "item mark"; they could both
be set resulting in a "record mark" (my recollection: permute well before
believing).  Most of the machine operations were oriented to decimal 
arithmetic and COBOL-style editing (zero-fill, etc), and words truly were 
variable length.  Some operations, as I recall, scanned right-to-left until
a <foo>-mark (arithmetic, presumably), others right-to-left until a
<bar>-mark.  I don't recall more so I won't pursue it.  It's interesting
in retrospect how different the use of the machine was: how many recent
architectures incorporate floating-dollar-sign leading-zero suppression,
with check-protect (asterisk fill)?

Jim Perry (perry@apollo)  Apollo Computer, Chelmsford MA

aa1@j.cc.purdue.edu (Saul Rosen) (07/22/87)

>
>	First, why did older machines have all sorts of strange word
>lengths -- 12, 36, and 60 being sizes that I know of, but I'm sure there
>were others.
>
>	Second (sort of the inverse of question #1), why do modern machines
>have such a strong trend towards having power-of-2 word and byte lengths?

	The early vacuum tube machines were very expensive to build,
and the cost increased very markedly with word length.  Word length
was a compromise between cost and function.  The IBM 701 was the first
large scale scientific computer that was sold in fairly large numbers
(about 18).  It provided only fixed point arithmetic.  The designers
didn't feel that 32 bits provided enough precision, and 64 bits would
be extravagant.  Also 36 bits, a multiple of 6, went well with the 
6-bit character that could represent all of the codes produced on the
standard keypunch.  Once the 701 designers settled on 36 bits,
compatibility requirements kept the 704 and 709, and then all of
the transistorized 7090 and 7040 series using the  36 bit word.

	A number of competing systems, the Philco Transac S-2000
and the Control Data 1604, were 48-bit word machines.  They stuck
to a multiple of 6, with a word length long enough to provide
reasonable precision for floating point numbers.  The same reasoning,
carried one level further, led to the 60-bit supercomputer of its
day, the CDC 6600.  That was a spectacular computer, many years
ahead of its time.  We are still running one of them here at Purdue.
It would have been an even better computer if it had used a 
64 bit word, and thus conveniently accommodated the 8-bit byte.

haynes@ucscc.UCSC.EDU.ucsc.edu (99700000) (07/23/87)

Several old one-of-a-kind machines had 40-bit words because there was
no floating point in those days and the people using them thought
40 bits was about enough.

With the really ancient mercury delay line memory machines you could
have just about any word size you wanted, because the amount of hardware
was nearly the same regardless of word size.  It was just a
speed/precision tradeoff.  Since addresses were small (small memories) and
words were wide they often had multi-address architectures.

36 bits was popular because it has more divisors than any other number
of about that size (1, 2, 3, 4, 6, 9, 12, and 18), so you could conveniently
pack various sizes of operands into integral words.  And in those days of
punched cards and upper-case-only printing, 6 bits was enough for an
alphanumeric character set.

8 bits as a byte size came about for a number of rational and emotional
reasons.  A decimal digit takes four bits, so you can pack two of them
into an 8-bit byte.  6 bits wasn't enough for alphanumeric characters
with upper and lower case, and if we were going to go to all the trouble
of a new character set we probably should make it plenty big, hence
7 bits might not be enough.  The widest punched paper tape equipment
in production was 8 bits wide, and a lot of people thought 7-bit 
ASCII should be punched with a parity bit added.  7 is a prime number,
whereas 8 has lots of divisors (1,2,4,8) so aside from decimal digits
there were other kinds of things that might be packed into an 8-bit byte.
The IBM Project Stretch furnished a lot of ideas that were used in S/360,
as well as a lot that were not.  An 8-bit character set was designed
for Stretch, along with a 64-bit word and addressability down to the
bit.

If you're going to address down to the bit, or down to the byte, you
would like to have addresses with no unused bit combinations, for
maximum information density.  For instance, if you have 6 bytes
per word, then the byte part of an address goes 000, 001, 010, 011,
100, 101, and then the combinations 110 and 111 are not used.
Aside from the wasted information density this leads to complexity
in doing arithmetic on addresses - you'd have to do the byte part
modulo 6 and the rest of the address in ordinary binary.
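
To make that concrete, here is a minimal C sketch (purely illustrative,
not any real machine's addressing): splitting a flat byte address into a
word number and a byte-within-word takes a real divide and modulo with 6
bytes per word, but only a shift and a mask with 8.

    /* Illustrative only: split a flat byte address into (word, byte). */
    void split6(unsigned long addr, unsigned long *word, unsigned *byte)
    {
        *word = addr / 6;       /* needs a divide */
        *byte = addr % 6;       /* needs a modulo; codes 110, 111 unused */
    }

    void split8(unsigned long addr, unsigned long *word, unsigned *byte)
    {
        *word = addr >> 3;      /* just a shift */
        *byte = addr & 7;       /* just a mask; all 8 byte codes used */
    }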

So IBM decided a 32 bit or 64 bit word was a reasonable way to go,
and the rest of the world had to follow.

(Personally I'm partial to 48 bit floating point as in the Burroughs
machines, but...  The B6500 and later had to accommodate 4-bit packed
decimal and 8-bit bytes, and for reasons of compatibility they also
handle 6-bit characters and floating point numbers with the exponent
being a power of eight.  All these different data sizes must have
complicated the machine enormously.)

The PDP-10 scheme for arbitrary size bytes looks pretty good to me.
Of course there is some wasted space in the word if the byte size
is not an integral sub-multiple of 36.  The GE/Honeywell scheme
(36 bit words with 6 and 9 bit byte sizes) leads to a lot of
grubbiness.  Either way there are annoyances.  In a PDP10 if you
want to write data from memory to tape you have to choose whether
to write integral 36-bit words, or whether the data are bytes and
should be written byte-by-byte to tape and the unused bits in the
word left out.  In the Honeywell machines they can represent ASCII
in 9-bit bytes.  So when you write to tape you similarly have
to decide whether to write all the bits in the word, or only the 8 bits
of each byte that contain an ASCII character, omitting the other
bits.  Whereas in our 8/16/32/64 bit machines you just write all
the bits to tape as they come regardless of what the bits mean.
You may have a problem with characters written backwards (the
big-endian versus little-endian problem) but at least you don't lose
any bits.  Not that tape has to be 8 bits wide either; but if
the dominant vendor makes 8 bit wide tapes any other vendor had better
be able to read and write them.


haynes@ucscc.ucsc.edu
haynes@ucscc.bitnet
..ucbvax!ucscc!haynes

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (07/23/87)

In article <2807@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
$ ... <stuff> ...
$ 	Anybody for a bit-aligned processor with variable word size (in the
$ same way the pdp-10 had variable byte size)?  You could do "ADD X, Y, I"
$ where X and Y are the operands and I is the number of bits of precision
$ wanted.  I should really mark that with a :-), but I'm partly serious (a
$ small part).
$ -- 
$ Roy Smith, {allegra,cmcl2,philabs}!phri!roy
$ System Administrator, Public Health Research Institute
$ 455 First Avenue, New York, NY 10016

I believe that the Intel 432 had the ability to address objects in this
manner. The NS32xxx family has the ability to do loads and stores to
registers on bit boundaries, using variable length words. I have seen
this in the CPU manual, but have not used it.

-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | sesimo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

johnl@ima.ISC.COM (John R. Levine) (07/23/87)

It seems to me that all of the eight-bit byte machines we have are following
the lead of the IBM 360.  In 1964, before the 360 came out, the most common
word size for binary machines was 36 bits, and bytes, if the hardware supported
them at all, were 6 bits.  Various six-bit codes were adequate for the upper
case alphabet, digits, and a smattering of punctuation.

The 360 was intended to replace both the word-addressed binary 7094 and
various character addressed BCD machines.  A six-bit character set was no
longer enough, because ASCII had recently been invented and there were an
increasing number of ASCII terminals in TWX service.  Seven bits was enough
for ASCII, but that's a fairly ugly byte size.  Eight bits has the added
advantage that you can put two BCD digits into an eight-bit byte, which the
360 and most subsequent machines do, so eight bits it was.

(I realize that you can encode two decimal digits in 7 bits, but in 1964
the logic needed to deal with such a format was too complicated.)
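
For what it's worth, packing two decimal digits into an 8-bit byte is
just a couple of shifts and masks (an illustrative C fragment, nothing
360-specific):

    /* Illustrative packed-decimal helpers: two BCD digits per byte,
       high digit in the upper nibble, low digit in the lower nibble. */
    unsigned char pack_bcd(int hi, int lo)      /* 0 <= hi, lo <= 9 */
    {
        return (unsigned char)((hi << 4) | lo);
    }

    int bcd_hi(unsigned char b) { return (b >> 4) & 0x0F; }
    int bcd_lo(unsigned char b) { return b & 0x0F; }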

The 36 bit crowd tried to counterattack with either 9-bit bytes (the GE, later
Honeywell, 635 series and perhaps the Univac 110x) or any byte size you want
(the PDP-6, -10, and -20) but it was too late. Uniform character addressing
was a big win even on machines used for scientific computing (think of all
those Fortran compiles, after all) and the era of word addressed machines
was drawing to a close.  There are still some non-byte addressed architectures
out there, such as the Univac (subsequently Sperry, now Unisys) 1100 series,
the Burroughs (now Unisys) B 5000 and its many descendants, and the
GE 635 and its many Honeywell descendants.  But they seem to be fading out.
Even Seymour Cray has byte addressing in the Cray 2, doesn't he?
-- 
John R. Levine, Javelin Software Corp., Cambridge MA +1 617 494 1400
{ ihnp4 | decvax | cbosgd | harvard | yale }!ima!johnl, Levine@YALE.something
U.S. out of New Mexico!

nelson@ohlone.UUCP (Bron Nelson) (07/23/87)

In article <624@ima.ISC.COM>, johnl@ima.ISC.COM (John R. Levine) writes:
> Even Seymour Cray has byte addressing in the Cray 2, doesn't he?

No.

-----------------------
Bron Nelson     {ihnp4, lll-lcc}!ohlone!nelson
Not the opinions of Cray Research

martin@felix.UUCP (Martin McKendry) (07/24/87)

In article <2807@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>
>	First, why did older machines have all sorts of strange word
>lengths -- 12, 36, and 60 being sizes that I know of, but I'm sure there
>were others.
>-- 
>Roy Smith, {allegra,cmcl2,philabs}!phri!roy
>System Administrator, Public Health Research Institute
>455 First Avenue, New York, NY 10016

Last year I was with Burroughs, doing a little work on the history
of the A-series (B5500, etc).  This machine has a 48-bit word.  Turns
out that it was set to 48 because at that time they thought characters
were going to be 6 bits (we're talking 1957 here, folks), and they
wanted a power-of-2 wordsize.  Of course now they do these wonderful
divide-by-3's to do character addressing (it's word addressed).  And,
naturally, no true Burroughs hackers believe that there is any advantage
to having a power-of-2 number of characters in a word.  You ought to
see the code and microcode executed to do Cobol!

--
	Martin S. McKendry
	FileNet Corp
	{hplabs,trwrb}!felix!martin

eugene@pioneer.arpa (Eugene Miya N.) (07/24/87)

>Even Seymour Cray has byte addressing in the Cray 2, doesn't he?

Byte addressing?  What's that?  Speaking from personal experience?

mark@rtech.UUCP (Mark Wittenberg) (07/24/87)

From article <2807@phri.UUCP>, by roy@phri.UUCP (Roy Smith):
> 
> 	First, why did older machines have all sorts of strange word
> lengths -- 12, 36, and 60 being sizes that I know of, but I'm sure there
> were others.
> 
> Roy Smith, {allegra,cmcl2,philabs}!phri!roy
> System Administrator, Public Health Research Institute
> 455 First Avenue, New York, NY 10016
>

Anybody remember the Bendix G-15 (hi Doug) with 29-bit words?
-- 
Mark Wittenberg
Relational Technology, Inc.
Alameda, CA
ihnp4!zehntel!rtech!mark    or    ucbvax!mtxinu@uihl

pf@diab.UUCP (Per Fogelstrom) (07/24/87)

WHY 8-bit bytes?  The smallest amount of data modern machines can deal with
is a bit, and to simplify address computation it is best if the next larger
addressable object is a power of two bits in size.  Take, for example, the
trivial task of computing the address of the n'th element in an array.  An
8 bit "byte" (eight bits is a byte, ain't it?) is a power-of-two-sized object
that is larger than a bit.  (Imagine bit processing on a machine with 12 bit
words, using mod/div instead of and/shift.)  On the other hand, take a
graphics engine with 24 bits as the smallest addressable object (8 bits each
for red, green, and blue).  So each object size is "right" in its own context,
but for general processing, powers of two are most suitable.  (I think.)
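
For example (an illustrative C fragment, not tied to any particular
machine, assuming 32-bit longs and 12-bit data kept in 16-bit shorts),
fetching the n'th bit of a packed bit array:

    /* Illustrative: fetch bit n of a packed bit array.  With 32-bit
       (power-of-two) words the index splits with shift/and; with
       12-bit words you are stuck with divide/modulo. */
    int get_bit_32(unsigned long *mem, unsigned long n)
    {
        return (int)((mem[n >> 5] >> (n & 31)) & 1);    /* 32 == 2^5 */
    }

    int get_bit_12(unsigned short *mem, unsigned long n)
    {
        return (mem[n / 12] >> (n % 12)) & 1;           /* div and mod */
    }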

per

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (07/24/87)

In article <624@ima.ISC.COM> johnl@ima.UUCP (John R. Levine) writes:
|It seems to me that all of the eight-bit byte machines we have are following
|the lead of the IBM 360.  In 1964, before the 360 came out, the most common
|word size for binary machines was 36 bits, and bytes, if the hardware supported
|them at all, were 6 bits. Various six-bit codes were adequate for the upper
|case alphabet, digits, and a smattering of punctuation.
|
The GE line (now Honeywell) of 36 bit machines had hardware 9 bit bytes
as well. Very nice for graphics characters, etc.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | sesimo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

mac@uvacs.CS.VIRGINIA.EDU (Alex Colvin) (07/24/87)

Reasons for 8-bit bytes (instead of, e.g., 6)
	ASCII
	EBCDIC

Reasons for machines that operate on bytes instead of just, e.g., words.
	text processing
	text files
	character stream I/O

Remember BCD?  Fieldcode? PDP-8?

				mac the naive

gwyn@brl-smoke.ARPA (Doug Gwyn ) (07/25/87)

In article <3766@felix.UUCP> martin@felix.UUCP (Martin McKendry) writes:
>... it was set to 48 because ... and they wanted a power-of-2 wordsize.

??

grr@cbmvax.UUCP (George Robbins) (07/26/87)

In article <6171@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
> In article <3766@felix.UUCP> martin@felix.UUCP (Martin McKendry) writes:
> >... it was set to 48 because ... and they wanted a power-of-2 wordsize.
> 
> ??

8 6-bit characters per word.  8==2^3, etc.  I guess you can't blame them for
a little shortsightedness, since they tried so hard to do other neat things.
Handling integers as a special case of floating point seems kind of strange too.

-- 
George Robbins - now working for,	uucp: {ihnp4|seismo|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@seismo.css.GOV
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (07/27/87)

The GE 225 series had a 22 bit word. This allowed for 3 ASCII characters
and a flag bit. Since BASIC was developed on one of these machines, it's
fairly easy to see that there was a bias toward recognizing the
statement type by a word compare on the first three characters. This is
why BASIC was single case (and many versions still are, except for
strings), and why the original BASIC had "LET" in front of every
assignment statement.

The 225 had 8k words of memory, with an additional 8k available from
FORTRAN as KOMMON (sic), pronounced "K-common." With this we supported
16 users! It also had an "AAU" (auxiliary arithmetic unit) which did
floating point arithmetic in hardware. This was state of the art in
1962.

-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | sesimo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (07/27/87)

In article <6814@steinmetz.steinmetz.UUCP> davidsen@kbsvax.steinmetz.UUCP (William E. Davidsen Jr) writes:
>
>The GE 225 series had a 22 bit word. This allowed for 3 ASCII characters
>and a flag bit. Since BASIC was developed on one of these machines, it's

Oops! My memory played me false. After looking at the hardware manuals
brought in by another old programmer, I recall that the 225 was actually
only 20 bits, three 6 bit characters (in a BCD-like encoding) and two
flag bits. The communication processor was only 18 bits and DID function
in ASCII, doing the compression on the fly.

As an interesting side note, the two machines shared a disk, and the 225
memory was the same as the GE400 series, which was 24 bits wide.

-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | sesimo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

ejbjr@ihlpg.ATT.COM (Branagan) (07/27/87)

> >	First, why did older machines have all sorts of strange word
> >lengths -- 12, 36, and 60 being sizes that I know of, but I'm sure there
> >were others.

Just some trivia on strange word sizes...

Some time ago (very long ago in fact) I came across a machine called 
an `ALWAC III'.  It had 33 bit words!??  There was some logic in it
though - four 8 bit bytes and a sign bit (though the sign bit was in the
least significant position, and was 1 for positive, 0 for negative).
Just think how much worse things could be ...
-- 
-----------------
Ed Branagan
ihnp4!ihlpg!ejbjr
(312) 369-7408 (work)

thomson@udel.EDU (Richard Thomson) (07/27/87)

An interesting sidenote is that a microprocessor has recently been
introduced that supports variable bit-length fields and bit addressing
(at least internally).  This is the new TMS34010 graphics processor from TI.

The internal memory bus is bit-oriented (to provide flexibility in addressing
variable sized pixels) and the 'bit address' is passed to an external memory
interface unit that fetches the appropriate 16-bit external word and performs
all the necessary shifting and masking before supplying the CPU with the data.

This allows the TI chip to handle variable sized pixels and fields (the fields
need not contain graphics display data, but could be anything... i.e. spectral
data, boolean fields, 6-bit characters, etc.).
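
The general shift-and-mask job the interface unit does might be sketched
in C roughly like this (simplified and not taken from the 34010
documentation; the field is assumed to fit in 16 bits and the next word
is assumed to exist):

    /* Simplified sketch: extract a field of `len' bits (1..16) starting
       at bit address `bitaddr' from an array of 16-bit words.  A field
       that crosses a word boundary is handled by fetching the next
       word as well.  Not the 34010's actual algorithm. */
    unsigned extract_field(unsigned short *mem, unsigned long bitaddr, int len)
    {
        unsigned long word = bitaddr >> 4;          /* which 16-bit word */
        int offset = (int)(bitaddr & 15);           /* bit offset in it  */
        unsigned long bits = mem[word] |
                             ((unsigned long)mem[word + 1] << 16);
        return (unsigned)((bits >> offset) & ((1UL << len) - 1));
    }
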

[Please forgive me if I am mis-remembering the 34010 docs]  Rich Thomson

radford@calgary.UUCP (Radford Neal) (07/28/87)

In article <3532@ihlpg.ATT.COM>, ejbjr@ihlpg.ATT.COM (Branagan) writes:
> Just some trivia on strange word sizes...
> 
> Some time ago (very long ago in fact) I came across a machine called 
> an `ALWAC III'.  It had 33 bit words!??  There was some logic in it
> though - four 8 bit bytes and a sign bit (though the sign bit was in the
> least significant position, and was 1 for positive, 0 for negative).
> Just think how much worse things could be ...

Yup. The LGP-30, of 1950's vintage, had 31 bit memory words. The accumulator
had 32 bits, though. They seem to have thought of their memory as being
32-bit words, but the low-order bit was always zero...

    Radford Neal

rpw3@amdcad.AMD.COM (Rob Warnock) (07/28/87)

In article <1037@vaxb.calgary.UUCP> radford@calgary.UUCP (Radford Neal) writes:
+---------------
| Yup. The LGP-30, of 1950's vintage, had 31 bit memory words. The accumulator
| had 32 bits, though. They seem to have thought of their memory as being
| 32-bit words, but the low-order bit was always zero...
|     Radford Neal
+---------------

Close... the accumulator was continuously recirculating (reading data off the
drum and writing it back 32 bits "in front" of the read head), so you never
had to turn off the write gate on the drum. The memory, on the other hand, was
(merely) word addressed (actually track+sector), and you needed nearly a bit
time to turn off the write gate if you weren't going to overwrite the next word
in memory. It was a "Little-Endian" machine (at the bit level -- had to be,
it used bit-serial arithmetic!), but numbered the bits "Big-Endian" style (the
opposite of the 68000 confusion!), and bit 31 (the LSB of the AC) was the one
that got snuffed (well... ignored) when you wrote to memory. Thus the low-order
bit of an input character would be lost if you didn't multiply it by 2 before
storing!

Further, the LSB of the address field in an instruction was bit 29 of the
AC (there were a *lot* of unused bits in its single-address instructions!),
and when keying in absolute machine instructions (like a bootstrap), you
had to manually account for the lost bit 31 and unused bit 30, so that
sequential words were addressed as "0, 4, 8, J, 10, 14, 18, 1J,..."
(pronounced "zero, four, eight, jay, ten, fourteen, eighteen, jayteen").

Oh yes, the LGP-30 used a version of hexadecimal long before IBM decreed
hex was "0-9A-F". The LGP-30 programmer counted: "0, 1, 2, 3, 4, 5, 6, 7,
8, 9, F, G, J, K, Q, W". The low-order 4 bits of the Friden "Flexowriter"
character code for the "hex" digits happened to be, of course, the correct
binary values, so no "conversion" was needed. That would work in ASCII iff
hex was defined as "0-9J-O", but nobody could *stand* a number system with
both "zero" and "oh" in it! (Oddly enough, the low-order 4 bits of the first
letters of the assembler opcodes were also the correct bit pattern for that
opcode, so "b0004" (4-bit mode) == 10004 == "Bring (load) loc. 4", "a123j"
was "Add 123j", etc. There was a separate 6-bit input mode for doing real
text input.)


Rob Warnock
Systems Architecture Consultant

UUCP:	  {amdcad,fortune,sun,attmail}!redwood!rpw3
ATTmail:  !rpw3
DDD:	  (415)572-2607
USPS:	  627 26th Ave, San Mateo, CA  94403

res@ihlpe.ATT.COM (Rich Strebendt @ AT&T Information Systems - Indian Hill West; formerly) (07/29/87)

In article <1037@vaxb.calgary.UUCP>, radford@calgary.UUCP (Radford Neal) writes:
  
> Yup. The LGP-30, of 1950's vintage, had 31 bit memory words. The accumulator
> had 32 bits, though. They seem to have thought of their memory as being
> 32-bit words, but the low-order bit was always zero...

GADS!!!  It has been a long time since I worked on that funny little
beast -- the predecessor of the microcomputer!

Memories come flooding to the surface:

	The high-level language for the machine - the "24.2 Floating
	Point Interpretive System" (just a tad higher than machine
	code).

	A blackjack program that cheated running on a machine so slow
	that the machine got caught at it!

	Operation cycle times measured in milliseconds instead of
	nanoseconds.

	A front-panel CRT display that REALLY was a CRT display.  To
	read the number displayed, you interpreted the pulse train
	waveform on the display.

	The one-and-only I/O device was a Friden Flexowriter with a
	paper tape reader/punch.  Noisy, but quite reliable.

A primitive machine by today's standards, but a lot of fun to work
with.

					Rich Strebendt
					iwsl6!res

ram@elmgate.UUCP (Randy Martens) (08/06/87)

I have just completed an assignment working on a bunch of COMPAQ deskpro-286
PC's.  They use 18 bit words ( 9 bit bytes).

The reason - to boost memory chip sales by 11%.

No, not really.  It is done because there is hardware parity checking on all 
memory, which can be useful when you have one of the poor little beasties
stuffed with 3.7 meg of RAM.  I believe you can disable the parity checking
if you want to, but you don't want to.  The systems tend to whack out on you
if you do.

By the way, if I am not mistaken, the Data General Nova 2200 of yesteryear
used a 10 bit byte.  And a nine bit tape.  This was weird.

Randy Martens (forgive me if I have sinned, but I am new to the net)
mail -> Eastman Kodak, Dept 646 2-9 , 901 Elmgrove Rd, Rochester, NY, 14650
" Reality - What a concept ! " - r.williams

baxter@milano.UUCP (08/06/87)

In article <699@elmgate.UUCP>, ram@elmgate.UUCP (Randy Martens) writes:

> By the way, if I am not mistaken, the Data General Nova 2200 of yesteryear 
>  used a 10 bit byte.  And a nine bit tape.  This was weird
> 

As a survivor of the Data General 16 bit machines, I am puzzled to hear
about the "yesteryear Nova 2200".  I never heard of it... all the DG
machines had 16 bit-wide memories, especially the DG 1200 series, which
is the only thing which comes close to "2200".  I assume Randy has
a parity error in his memory, but it seems to be disabled.

-- 
-- Ira Baxter     Microelectronics and Computer Technology Corporation / STP
(512) 338-3795    3500 West Balcones Center Drive, Austin, Texas 78759

mark@applix.UUCP (Mark Fox) (08/06/87)

In article <699@elmgate.UUCP> ram@elmgate.UUCP (Randy Martens) writes:
>By the way, if I am not mistaken, the Data General Nova 2200 of yesteryear 
> used a 10 bit byte.  And a nine bit tape.  This was weird

You ARE mistaken.

The first Novas could only transfer 16-bit words between accumulators or
I/O devices and memory, so byte manipulation was done entirely in software.
However, the later Novas and microNovas had byte instructions that were
modeled after widely-used load- and store-byte subroutines.

These subroutines/instructions worked as follows: when a byte is moved from
memory to an accumulator, the high-order half (16 / 2 = 8 bits!!) of the
destination ac is cleared. When a byte is moved from an accumulator to
memory, the other byte in the memory word is unchanged.
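
In C the described behavior might be sketched roughly like this (names
are invented, and which half of the word the low address bit selects is
a guess):

    /* Rough sketch of the described byte load/store on a 16-bit word
       machine.  bp is a byte pointer: word address * 2 plus a low bit
       selecting the half (which half it selects is arbitrary here). */
    unsigned short load_byte(unsigned short *mem, unsigned long bp)
    {
        unsigned short w = mem[bp >> 1];
        /* result: byte in the low half, high half cleared */
        return (bp & 1) ? (w & 0x00FF) : ((w >> 8) & 0x00FF);
    }

    void store_byte(unsigned short *mem, unsigned long bp, unsigned short b)
    {
        unsigned short w = mem[bp >> 1];   /* other byte is preserved */
        if (bp & 1)
            mem[bp >> 1] = (w & 0xFF00) | (b & 0x00FF);
        else
            mem[bp >> 1] = (w & 0x00FF) | ((b & 0x00FF) << 8);
    }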

DG sold both 9 and 7 track tape systems. The ninth track was used for parity. I
forget how the 7 track tapes were formatted.

By the way, there were Nova 1200's, 1220's, and 2's. I have never heard of a
2200.
-- 
                                    Mark Fox
       Applix Inc., 112 Turnpike Road, Westboro, MA 01581, (617) 870-0300
                    uucp:  seismo!harvard!m2c!applix!mark

ebg@clsib21.UUCP (Ed Gordon) (08/19/87)

There is such a thing as 8-bit ASCII.  Originally, 8-bit ASCII consisted
of 7-bit ASCII with a parity bit, whether even or odd, as the most
significant bit.  ANSI X3.64 describes extended ASCII: 8-bit command
sequences, better known in the vernacular as "esc" sequences.  Most systems
use escape sequences with 7-bit ASCII, rather than using the 8-bit ASCII
control characters.
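
For reference, forming such a character from a 7-bit code with even
parity in the top bit takes only a small loop (illustrative C):

    /* Illustrative: make an 8-bit character from a 7-bit ASCII code by
       putting an even-parity bit in the most significant bit position. */
    unsigned char add_even_parity(unsigned char c7)
    {
        unsigned char c = c7 & 0x7F;
        int i, ones = 0;

        for (i = 0; i < 7; i++)
            ones += (c >> i) & 1;
        return (ones & 1) ? (unsigned char)(c | 0x80) : c;
    }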

			--Ed Gordon,Data Systems Associates
				Clinton, MA