[net.micro] 4->8->16->32->64? bit micros

eugene@ames.UUCP (Eugene Miya) (12/16/84)

Some time ago, there was some discussion about the likelihood of
64-bit micros and possible future trends.  One morning at 6 AM while
driving to the snow, I pondered several things.

1) We probably will have 64-bit micros, contrary to what some people
believed.  I think there are several reasons for this:
a) there has been a considerable jump in the number of manufacturers of
64-bit machines in the last few years: Cray, CDC, Denelcor, ELXSI, Convex,
and so on.
b) makers of 32-bit supercomputers such as the TI ASC have not fared well
in the marketplace.
c) Wafer-scale technology is probably the right scale at the right time
for this to happen.  Once a couple more problems are solved, we'll be there.
d) Manufacturing technology was the reason why the super, mini, micro, etc.
distinctions were made.  Now the gap is closing between micros and
supercomputers.  What is the real difference between micros and minis these
days?
2) Based on this, I note from a local DECUS presentation that the VAX 8600,
with an even more complex instruction set than the 780 and two years late,
might be among the last of the 32-bit CISC machines [there might be more].
a) The 8600 is pipelined, a technique used in bigger machines.  I suspect
we will be seeing more machines with vector registers and pipelining,
especially in future micros.  Vector registers might not be used much,
but neither are many instructions on CISC machines.

This raises several questions: is DEC considering 64-bit architectures, and
would VMS be ported?  Unix has been moved to several 64-bit machines.
Will we be seeing lots of 32-bit compatibility modes?  Don't forget
that today's micros have the power of yesterday's supercomputers; doesn't
this mean that today's supercomputers will be tomorrow's micros?  Are we
writing tomorrow's dusty decks?

--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,vortex}!ames!aurora!eugene
  emiya@ames-vmsb.ARPA

davet@oakhill.UUCP (Dave Trissel) (12/16/84)

In article <707@ames.UUCP> eugene@ames.UUCP (Eugene Miya) writes:
>
>1) We probably will have 64-bit micros, contrary to what some people
>believed.  I think there are several reasons for this:
>a) ..........

It may seem logical that 64-bit architectures will eventually become dominant
since progressions have gone from 8-bit to 16-bit and then 16-bit to 32-bit.
However, I think the extension to 64-bits will not generally occur.

Technology plays a part, of course.  Ten years ago it would have been
literally impossible to build a 32-bit architecture on a single chip.
Today, with 200,000 devices on a chip, 64-bit implementations could
certainly be designed and built.  But I think there are other, more
important factors involved in deciding the size of an architecture.

If you look at the early 8-bit architectures, all registers were only 8 bits.
Of course, an 8-bit address bus only accessed 256 bytes of memory.  This
quickly became a limitation and was removed by supplying higher address bits
from a so-called "page" register, which the programmer would set up whenever
he/she wanted to access a different 256-byte page in the external memory
space.
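
A minimal C sketch of the scheme (the memory size and register names here
are illustrative, not any particular chip's):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative model of 8-bit addressing extended by a page register:
     * the CPU's 8-bit address picks a byte within a 256-byte page, and a
     * separately loaded page register supplies the high address bits. */
    static uint8_t memory[65536];   /* 256 pages of 256 bytes each */
    static uint8_t page_reg;        /* set by the programmer before access */

    static uint8_t read_byte(uint8_t offset)
    {
        uint16_t physical = ((uint16_t)page_reg << 8) | offset;
        return memory[physical];
    }

    int main(void)
    {
        page_reg = 0x12;                  /* select page 0x12 ... */
        printf("%u\n", read_byte(0x34));  /* ... then read its byte 0x34 */
        return 0;
    }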

The next improvement was to implement one or more 16-bit registers to
assist in accessing larger memory spaces; the Z80 and the MC6800 are
examples.  Note, however, that the 16-bit registers were only practically
useful for addressing purposes.  Neither chip could do more than very
primitive data operations, such as increment and decrement by one, in its
16-bit registers.  Almost all data manipulations were 8 bits in size and
only worked in 8-bit registers.

Next we find the introduction of chips such as the 8088/8086, which
provide 16-bit registers and operations for not only addresses but
data as well.  In fact, that chip family provided a method to access
more than just a 16-bit address space via an enhancement to the
page register scheme called segment registers.
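
The 8086 case is well documented: a 16-bit segment register is shifted
left four bits and added to the 16-bit offset, giving a 20-bit (one
megabyte) physical address.  In C:

    #include <inttypes.h>
    #include <stdio.h>

    /* 8086 real-mode address formation: physical = segment * 16 + offset.
     * Two 16-bit quantities combine into a 20-bit (1-megabyte) address. */
    uint32_t physical_address(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* 0xF000:0xFFF0 is the 8086 reset vector, physical 0xFFFF0 */
        printf("%05" PRIX32 "\n", physical_address(0xF000, 0xFFF0));
        return 0;
    }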

Now, why have 32-bit micro architectures evolved?  Primarily because the
16-bit ones could not efficiently handle 32-bit data or address spaces
larger than 64K.  The early MC68000 chip allowed a linear addressing range
of 16 megabytes and provided almost a full range of 32-bit operations, all
because it implemented a full set of 32-bit registers for both data and
addresses.

Let's now examine the potential extension to 64 bits.  I would suggest that
the criterion will be the same: if there are data types (address or data)
which will not operate effectively in a 32-bit register environment, then
there will be a push for a larger-than-32-bit architecture.

Concerning data types, there is only one I can think of that would merit
more than 32 bits: the floating-point number.  Most architectures which
support floating point offer both a 32-bit (36-bit or whatever) single
precision format along with a 64-bit (or whatever) double precision format.
Clearly, 64-bit general purpose registers would be the proper support for a
double format.  Most architectures, however, provide a completely separate
set of registers for floating-point operation.  Thus, they need not be
exactly the same size as the general purpose data and address registers,
and can provide for the larger sizes of the floating data type.  Now, this
may not be as good as having TRUE general purpose registers which would
allow floating-point as well as all other operations, but it seems
adequate.
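
In C terms, assuming the common formats of the day (32-bit single
precision, 64-bit double), floating point is the one basic data type that
outgrows a 32-bit register:

    #include <stdio.h>

    int main(void)
    {
        /* On most implementations, single precision is 32 bits and
         * double precision is 64 -- the one data type in this argument
         * that does not fit a 32-bit general-purpose register. */
        printf("float:  %zu bits\n", sizeof(float) * 8);
        printf("double: %zu bits\n", sizeof(double) * 8);
        return 0;
    }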

As for addressing more than a 32-bit memory space, there are several
alternatives.  One is the 8088/8086 method of extending the address with
segment registers.  Another is to support virtual memory, which would
potentially allow each user task or process access to its own 4-gigabyte
address space.  The one of interest here, however, is the support of a
more-than-32-bit linear address space, which would ideally be implemented
with more-than-32-bit registers, thus providing the push for a 64-bit
register architecture.

There are a lot of finer points that can be made in this discussion, but I
believe the bottom line boils down to the requirement of supporting data or
addresses more than 32 bits in size.  I simply do not find much interest
around for either.  The floating-point users seem quite content to have
their own unique set of registers for floating-point support.  Likewise,
although most system designers are demanding a large, completely linear
addressing space, they seem more interested in using virtual memory
facilities or a larger partitioning scheme to oversee the individual 32-bit
spaces involved.

I would think that the realm of large-scale number crunching is where you
would find the most interest in a 64-bit architecture.  But I also believe
that outside that area there are not very many applications which require
more than the 4-gigabyte address space that 32-bit registers provide.

It would be interesting to hear other viewpoints on the net, especially from
those that are already using 64-bit (or larger) architectures or are
investigating same.

Motorola Semiconductor                           Dave Trissel
Austin, Texas            {ihnp4,seismo,ctvax}!ut-sally!oakhill!davet

kds@intelca.UUCP (Ken Shoemaker) (12/18/84)

One possible application of a >4-gigabyte virtual address space might be
direct-mapped files on huge devices, such as might be found when using
CDs as storage devices.  I'm sure you can come up with others!
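
A sketch of the idea in C, assuming a POSIX mmap() and a 64-bit off_t; the
path is hypothetical.  The point is that a byte offset into such a file no
longer fits in 32 bits:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Sketch: map a huge read-only archive (path is hypothetical) and
     * index it directly.  With files past 4 GB, the offset itself needs
     * more than 32 bits -- the pressure toward wider registers. */
    int main(void)
    {
        int fd = open("/archive/hugefile", O_RDONLY); /* hypothetical */
        if (fd < 0) return 1;
        off_t size = lseek(fd, 0, SEEK_END);
        const char *base = mmap(NULL, (size_t)size, PROT_READ,
                                MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED) return 1;
        off_t pos = 5000000000LL;     /* a byte offset beyond 2^32 */
        if (pos < size)
            putchar(base[pos]);       /* one pointer, no seek/read calls */
        munmap((void *)base, (size_t)size);
        close(fd);
        return 0;
    }
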
-- 
I've got one, two, three, four, five senses working overtime, 
	trying to take this all in!

Ken Shoemaker, Intel, Santa Clara, Ca.
{pur-ee,hplabs,amd,scgvaxd,dual,idi,omsvax}!intelca!kds
	
---the above views are personal.  They may not represent those of Intel.

rap@oliven.UUCP (Robert A. Pease) (12/18/84)

>It may seem logical that 64-bit architectures will eventually become dominant
>since progressions have gone from 8-bit to 16-bit and then 16-bit to 32-bit.
>However, I think the extension to 64-bits will not generally occur.
>

This reminds me of a person I talked to many years ago while working at
Cromemco.  I asked him if he thought 64K-bit DRAMs would come out soon, and
he replied that they would never make 64K-bit DRAMs: that's the maximum CPU
addressing size and doesn't leave any room for ROMs.  Needless to say, he
just had a limited idea of what might happen in the world.  The point is,
someone may decide to do it just for the fun of it, and that's all it takes.
-- 

					Robert A. Pease
    {hplabs|zehntel|fortune|ios|tolerant|allegra|tymix}!oliveb!oliven!rap

freds@tekigm.UUCP (12/19/84)

I can think of one good reason for a 64-bit micro, if what is meant by 64
bits is a 64-bit data bus.  A data path wider than the processor word size
allows the processor to fill its instruction or data cache ahead of
instruction execution.  This allows the processor to make use of a clock
that is faster than its memory cycle time.  This method was/is used by some
IBM mainframes.  This presupposes a micro with a cache.
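
A toy C model of the arrangement (purely illustrative, no real machine
implied): each slow memory cycle delivers two 32-bit instruction words over
the 64-bit bus into a prefetch buffer, which the faster processor clock
then drains.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model: a 64-bit bus fetches two 32-bit instruction words per
     * memory cycle into a small prefetch buffer, letting a processor
     * clocked faster than memory keep executing between fetches. */
    #define WORDS_PER_FETCH 2          /* 64-bit bus / 32-bit words */

    static uint32_t program[8] = {1, 2, 3, 4, 5, 6, 7, 8};

    int main(void)
    {
        uint32_t buffer[WORDS_PER_FETCH];
        for (unsigned pc = 0; pc < 8; pc += WORDS_PER_FETCH) {
            /* one (slow) memory cycle fills the buffer... */
            for (int i = 0; i < WORDS_PER_FETCH; i++)
                buffer[i] = program[pc + i];
            /* ...then two (fast) processor cycles consume it */
            for (int i = 0; i < WORDS_PER_FETCH; i++)
                printf("execute %u\n", buffer[i]);
        }
        return 0;
    }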


                                           Fred Saxonberg
														 tektronix!tekigm!freds

jlg@lanl.ARPA (12/19/84)

The main problem with 64 bit micros is the pin count on the chip
(I am new to this discussion, so please ignore if this stuff has been
pointed out before).  32 address lines and 64 data lines already make
96 pins on the chip!  Multiplexing these lines only defeats the purpose
behind going to 64 bits to begin with.  I'm not saying that there will
be no 64 bit micros, but they are likely to be bit-sliced machines.

------------------------------------------------------------------------------
The greatest derangement of the mind is to believe in something
because one wishes it to be so - Louis Pasteur

                                              James Giles

geoff@desint.UUCP (Geoff Kuenning) (12/20/84)

I have to confess I find this discussion just a bit amazing.  If you go back
in the history of computing, you will find the same thing ("what we have
now is big enough") being said for the past 35 years -- and being wrong every
time.

Things that need more than 32 bits:

	Times kept at resolutions finer than one second (even 32 bits of
    whole seconds, as Unix uses, is only good for about 136 years -- not
    sufficient for many purposes; see the arithmetic sketch after this list).
	Monetary figures kept in pennies (a signed 32 bits gives you only
    about +/- 21 million dollars).
	File sizes and file pointers on big disks (it is practical right now
    to put four Eagles on a 68000 and write a driver that makes them look like
    one disk -- this is about 1.6 gigabytes formatted).
	Pointers in exceedingly large virtual address spaces (admittedly these
    will appear first on big computers) -- especially if you want bit
    addressability, capabilities, or similar funny stuff.
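
The arithmetic behind the first two items, as a quick C check:

    #include <stdio.h>

    int main(void)
    {
        /* full unsigned 32-bit range of seconds, in years (~136) */
        double years = 4294967296.0 / (365.25 * 24 * 3600);
        /* signed 32-bit range of pennies, in dollars (~21.5 million) */
        double dollars = 2147483648.0 / 100.0;
        printf("32 bits of seconds: about %.0f years\n", years);
        printf("32 bits of pennies: about +/- %.0f dollars\n", dollars);
        return 0;
    }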

That's from only 5 minutes of trying to come up with examples -- how many
could you come up with in 24 hours?

64-bit micros will find lots of uses.  Just wait and see.
-- 

	Geoff Kuenning
	...!ihnp4!trwrb!desint!geoff

henry@utzoo.UUCP (Henry Spencer) (12/20/84)

> The point is, someone may decide to do it just for
> the fun of it, and that's all it takes.

Actually, not necessarily.  There have been cases where company X did
silly-and-pointless-but-spiffy-looking thing Y, and everybody else
recognized the stupidity of the idea and ignored it.  You don't see
many Intel 432 clones around, despite an incredible amount of hype at
the time about how it was the obvious next step in machines.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

binder@dosadi.DEC (12/21/84)

Dave Trissel says the Z80 and MC6800 had only very primitive 16-bit functions.
I take issue with that statement, at least for the Z80, which could (and
still can!) use various of its 16-bit registers as accumulators for add and
subtract as well as increment/decrement, and for indexed and displaced
addressing.  They can also be used to control looping memory and I/O block
search and move operations, and they can also be used in halves as 8-bit
registers.  It's no VAX, that's true, but I wouldn't call its 16-bit stuff
really PRIMITIVE.
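
A small C model of the register-pair idea (a sketch, not an emulator; it
assumes a little-endian layout in the union): the same storage is viewed
either as one 16-bit pair for addressing and 16-bit arithmetic, or as two
8-bit halves for byte work.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of a Z80-style register pair: HL is one 16-bit register
     * for addressing and 16-bit arithmetic, or two 8-bit halves (H, L)
     * for byte operations.  Little-endian layout assumed in the union. */
    typedef union {
        uint16_t pair;
        struct { uint8_t lo, hi; } half;   /* L in lo, H in hi */
    } regpair;

    int main(void)
    {
        regpair hl = { .pair = 0x12FF };
        hl.pair += 1;                      /* 16-bit INC-style operation */
        printf("HL=%04X H=%02X L=%02X\n",
               hl.pair, hl.half.hi, hl.half.lo);
        return 0;
    }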

Cheers,
Dick Binder   (The Stainless Steel Rat)

UUCP:  {decvax, allegra, ucbvax...}!decwrl!dec-rhea!dec-dosadi!binder
ARPA:  binder%dosadi.DEC@decwrl.ARPA

phil@amdcad.UUCP (Phil Ngai) (12/21/84)

> be no 64 bit micros, but they are likely to be bit-sliced machines.

There is no such thing as a "bit sliced machine". There are bit slice
devices and bit slice machines but no bit sliced machines.

A good example of a bit slice device is the Am2901C Four-Bit Bipolar
Microprocessor Slice.
-- 
 This could very well represent my opinion and AMD's.

 Phil Ngai (408) 749-5790
 UUCP: {ucbvax,decwrl,ihnp4,allegra}!amdcad!phil
 ARPA: amdcad!phil@decwrl.ARPA

david@bragvax.UUCP (David DiGiacomo) (12/21/84)

In article <18341@lanl.ARPA> James Giles writes:

>The main problem with 64 bit micros is the pin count on the chip
>32 address lines and 64 data lines already make
>96 pins on the chip!  

This is not too bad -- 144- and even 216-lead packages are defined and
will be fairly common by the time 64-bit micros are practical.
Peripheral chips tend to require many more leads than processors.

>Multiplexing these lines only defeats the purpose 
>behind going to 64 bits to begin with.  

Not necessarily true -- most current (and future) 32-bit microprocessors
and system buses use multiplexed addresses and data.  There is very
little performance penalty; addresses and data are naturally separated
in time, so they can share a bus effectively.

-- 
One moment of pleasure, a lifetime of regret: Usenet Madness

David DiGiacomo, BRAG Systems Inc., San Mateo CA  (415) 342-3963
{cbosgd hplabs intelca rhino}!bragvax!david

ech@spuxll.UUCP (Ned Horvath) (12/24/84)

There are perfectly good reasons for wanting larger address SPACES even
though one does not expect to have very-much-larger physical memories.
The venerable MULTICS provides an excellent example of a very elegant design
cramped by an inadequate address size; more recent experimental systems
(HYDRA and the Cambridge CAP) provide other examples.

When one wishes to have what is conceptually a two-dimensional address space --
(segment, offset) in the MULTICS case or (capability, offset) in the other
examples -- it is very nice to, among other things, remove the distinction
between "file" and "segment"; the capability-based systems tend to create
even more "segments" (a capability may be thought of as a segment with
additional protection information -- yes I know that is an oversimplification,
no flames please!).
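
One way to picture such a two-dimensional address in C (the field widths
are illustrative, not the actual MULTICS or CAP layouts):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative two-dimensional address: one field names the object
     * (a segment or a capability index), the other is a byte offset
     * within it.  Field widths are for illustration only. */
    typedef struct {
        uint32_t segment;   /* which segment/capability */
        uint32_t offset;    /* byte offset within that object */
    } seg_addr;             /* 64 bits total -- hence the pressure on size */

    int main(void)
    {
        seg_addr a = { .segment = 42, .offset = 1024 };
        printf("(%u, %u)\n", a.segment, a.offset);
        return 0;
    }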

Now, it is pretty clear that 16 bits is inadequate as a file offset; the early
MULTICS hardware had 18-bit offsets and, even given a 36-bit word, segments
were limited to about a megabyte.  A 24-bit byte offset is probably not
adequate by today's standards: you couldn't treat even a small disk slice
as a single segment.  32 bits might be adequate for most purposes.

Looking at the other dimension, the capability systems create and destroy large
numbers of small objects: one calls "services" by passing them capabilities
to small objects (conceptually, one passes arguments in capabilities) and
gets results similarly contained.  A 16-bit capability namespace might be
barely adequate.  If one wanted to have "lots" of simultaneously open
files -- er, segments -- even that might not be enough.

All of the foregoing presumes that a high-powered memory-mapping unit, and the
ability to address the entire memory hierarchy as if it were all RAM, is a
good idea.  Personally, I like it a lot, and I think I have established that
a minimum of 48 bit addresses -- preferably 64, perhaps even 96 -- would be
nice to support the abstraction properly.

Now, before this raises a lot of needless flaming, I should note that this
does NOT imply that every instruction need generate 96-bit addresses.  The
CAP system, for example, held capabilities in registers, and relied on the
compiler to keep both the number of registers and the instruction-word size
down to a reasonable level; the MMU likewise uses an associative memory of
modest size.  It works.  For more detail, consult the CAP monograph, or
(perhaps more accessible) the '77 SIGOPS proceedings.

=Ned=

henry@utzoo.UUCP (Henry Spencer) (12/30/84)

The question is not whether 64-bit numbers have uses, but whether they
have enough uses to justify the costs of making them all-pervasive.
The arguments for having them seem significant, but the arguments for
making them the basic type seem weak.  Note that 16 bits is in fact
enough for a lot of variables, if 32 is available for the exceptions;
the push from 16 to 32 came from addresses, not data.  Whether we will
eventually start to find 32-bit addresses confining is an interesting
question.  (Except in specialized areas where it's a foregone conclusion,
like bulk number-crunching -- *those* machines have had monster address
spaces for quite a while.)  It is clear that 32 bits will suffice for
the next little while; whether the past growth in space requirements
will continue past 32 is harder to predict.  It might.

An interesting sidelight on this:  if you're wondering why the 68000
has that silly separation between A and D registers, it's because the
first 68000 design had 16-bit D registers but bigger A registers.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry