[comp.arch] More than 32 bits needed where?

tbray@watsol.waterloo.edu (Tim Bray) (01/31/88)

In article <28200089@ccvaxa> aglew@ccvaxa.UUCP writes:
>>	I really don't think the real world really needs anything more
>>expansive than a 32 bit processor to get most jobs done.
>I'm sure that most people wouldn't need this, but some might - and I'd 
>like to get a feel for the size of such a niche, if it exists. 

Here at the New Oxford English Dictionary Project, we are in the business
of software for large, structured, full-text databases.   This involves keeping
a lot of pointers into the text.  With the OED, we are fortunate in that the 
text is `only' about 500 Mb in size.  However, 32 bits only allows you to
address a 4Gb database at the character level.  In terms of text databases,
4 Gb is big but not that big.  There are fantastic performance advantages
to be gained by having your database pointers atomic, integer-like objects
so that you can do very fast comparison, interpolation searching, Patricia 
trees, and the like.
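
As a rough illustration of what `atomic' pointers buy you (the 64-bit
integer type here is an assumption about the compiler, not something every
C compiler provides): a pointer into the text is one wide integer, so
ordering two of them is a single machine comparison, which is exactly what
interpolation search and Patricia-tree lookups lean on.

typedef unsigned long long textptr;   /* character offset into the text */

/* qsort-style comparison of two pointers into the text */
int cmp_textptr(const void *a, const void *b)
{
    textptr x = *(const textptr *)a, y = *(const textptr *)b;
    return (x > y) - (x < y);        /* one compare, no structure walking */
}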

So here's one application for 64-bit ints.  Let's see, 64 bits gives about
(4 * (10**9)) ** 2, about 1.6 * 10**19 characters, should be enough to get
us to 2000 with luck...
Tim Bray, New OED Project, U of Waterloo

lamaster@ames.arpa (Hugh LaMaster) (02/03/88)

In article <3104@watcgl.waterloo.edu> tbray@watsol.waterloo.edu (Tim Bray) writes:

>Here at the New Oxford English Dictionary Project, we are in the business

>text is `only' about 500 Mb in size.  However, 32 bits only allows you to
>address a 4Gb database at the character level.  In terms of text databases,
>4 Gb is big but not that big.  There are fantastic performance advantages

I also note that large multi-CPU mainframes (e.g. IBM, NAS, Amdahl,...) as
well as supercomputers are approaching the 32 bit addressing limit for
PHYSICAL memory.  So, I predict a healthy future for 64 bit machines in
the "data processing" world, as well as in the scientific computing world.
It might even be coming sooner than some people think in the micro-computing
world.

keld@diku.dk (Keld J|rn Simonsen) (02/04/88)

Another use of more than 32 bits is for normal accounting purposes.
31 bits with sign is just able to handle up to 20 million dollars
including cents. Not that impressive. In other currencies this might
only amount to 2 million or 200,000 dollars' worth. This is only
enough for amateur firms.
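
A tiny illustration of the limit (the 64-bit "long long" shown for contrast
is an assumption about the compiler, not something standard C promises):

#include <stdio.h>

int main(void)
{
    long cents = 2147483647L;                    /* 2**31 - 1 cents */
    long long bigcents = 9223372036854775807LL;  /* 2**63 - 1 cents */

    printf("32-bit limit: $%ld.%02ld\n", cents / 100, cents % 100);   /* $21474836.47 */
    printf("64-bit limit: about $%.1e\n", (double)bigcents / 100.0);  /* ~9.2e16 dollars */
    return 0;
}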

keld

freeman@spar.SPAR.SLB.COM (Jay Freeman) (02/04/88)

The limits of a 32-bit address-space may not necessarily show up at the
high-price end of the market first:

To begin with, if I am not mistaken, the Saxpy Matrix -- a rather low-price
supercomputer -- can be ordered with up to 512 MByte of physical memory; I
believe Saxpy uses dynamic RAM in its system, and I think they are still
using 41256s.  If so, then an upgrade of this machine to a fully-populated
32-bit address space would appear feasible in the not too distant future,
when 4-MBit DRAMs become available.

Another argument:  At the site where I work there are several people who run
single tasks on Symbolics 3600 Lisp machines that eat up several hundred
megabytes of address space.  The machines don't have nearly that much
physical memory, of course; most of the space used is swapped out to disc.
Notwithstanding, these users (a) are using almost 10 percent of the number
of bytes that a 32-bit byte-addressed machine can address; (b) are
continually asking for more swapping space (please, can't we buy some more
500-Meg discs?); and (c) have been doing so for at least three years.  On
this basis alone, I would say that the days of a 32-bit address space are
numbered.  Symbolics machines are on the pricey end of Lisp systems, but it
is only happenstance that these users are running on Symbolics hardware and
not on Suns.  For that matter, a lot of the newer micros are fast enough to
run decent Lisps; I can imagine needing the same several hundred megabytes
of swapping space for a Lisp system running on (say) a 386 box with Unix, or
on a Macintosh II with A/UX.

It kind of makes me wonder about many of the whizzy new processors; why
would anyone go to all the trouble to design and implement a new machine
with only a 32-bit bus?  If the new machines are as fast as reputed, then
probably Lisp users will want to run larger tasks than they can address.
What shall I do when my Sun-4 (or whatever) has 4 gigabytes of swapping
space and it's not enough?

						-- Jay Freeman

(canonical disclaimer: the opinions expressed are personal)

daveb@geac.UUCP (David Collier-Brown) (02/04/88)

>In article <28200089@ccvaxa> aglew@ccvaxa.UUCP writes:
>>>	I really don't think the real world really needs anything more
>>>expansive than a 32 bit processor to get most jobs done.
>>I'm sure that most people wouldn't need this, but some might - and I'd 
>>like to get a feel for the size of such a niche, if it exists. 

In article <3104@watcgl.waterloo.edu> tbray@watsol.waterloo.edu (Tim Bray) writes:
>Here at the New Oxford English Dictionary Project, we are in the business
>of software for large, structured, full-text databases....
>                                With the OED, we are fortunate in that the 
>text is `only' about 500 Mb in size.  However, 32 bits only allows you to
>address a 4Gb database at the character level.  In terms of text databases,
>4 Gb is big but not that big.  

  Niche?

  One of the VM types at IBM admitted that they were going through
about one additional address bit about every 18 months (about 18
months ago, as it happens).

  That ain't no niche, it's a market.


--dave (Multicians often found 36 bits too few and used 72) c-b
-- 
 David Collier-Brown.                 {mnetor yunexus utgpu}!geac!daveb
 Geac Computers International Inc.,   |  Computer Science loses its
 350 Steelcase Road,Markham, Ontario, |  memory (if not its mind) 
 CANADA, L3R 1B3 (416) 475-0525 x3279 |  every 6 months.

radford@calgary.UUCP (Radford Neal) (02/05/88)

In article <4340@ames.arpa>, lamaster@ames.arpa (Hugh LaMaster) writes:

> I also note that large multi-CPU mainframes (e.g. IBM, NAS, Amdahl,...) as
> well as supercomputers are approaching the 32 bit addressing limit for
> PHYSICAL memory.  So, I predict a healthy future for 64 bit machines in
> the "data processing" world, as well as in the scientific computing world.
> It might even be coming sooner than some people think in the micro-computing
> world.

Why is everyone assuming that if you want greater than 32 address bits
you need 64? My guess is that 48 (or maybe even 40) bit addresses will
be used. If you assume that memory is dominated by such addresses, or
other quantities held in the same word size, going to 64 costs 33% more
than using 48. Note that for the systems mentioned, memory must surely 
be a large part of the total cost.

Now I can see the flames coming on this one... How incredibly
shortsighted! History shows that we *always* need more address bits... Please
calculate the amount of memory addressable with 48 bits, then ask whether
systems with this much memory, if they ever appear, will have anything
like a current machine architecture anyway...

Of course, there are other reasons for wanting long addresses than just
access to physical memory. If you want sparse address spaces, typed
pointers, ring brackets, whatever, more power to you (MULTICS already
has *72* bit pointers...)

    Radford Neal
    The University of Calgary

bzs@bu-cs.BU.EDU (Barry Shein) (02/05/88)

Posting-Front-End: GNU Emacs 18.41.4 of Mon Mar 23 1987 on bu-cs (berkeley-unix)



>Another use of more than 32 bits is for normal accounting purposes.
>31 bits with sign is just able to handle up to 20 million dollars
>including cents. Not that impressive. In other currencies this might
>only amount to 2 million or 200,000 dollars' worth. This is only
>enough for amateur firms.
>
>keld

This was the motivation for packed decimal instruction sets on many
computers aimed at the business market. They typically used "BCD",
decimal digits packed one per 4-bit nibble and supported variable
length operands up to 15 decimal digits.

This put them somewhere between strings and integers in format, one
advantage was that they were simple to unpack into decimal strings
(usually just a matter of expanding the nibbles to bytes and or'ing in
a character zero.)
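
A minimal sketch of that unpacking step in C (assuming ASCII, where '0' is
0x30; on an EBCDIC machine the idea is the same with a different zone):

/* two decimal digits per byte, high nibble first */
void bcd_unpack(const unsigned char *bcd, int nbytes, char *out)
{
    int i, j = 0;

    for (i = 0; i < nbytes; i++) {
        out[j++] = '0' | (bcd[i] >> 4);     /* high nibble -> ASCII digit */
        out[j++] = '0' | (bcd[i] & 0x0F);   /* low nibble -> ASCII digit */
    }
    out[j] = '\0';
}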

The 370 supports these and pack/unpack in hardware to/from EBCDIC
(whoop-tee-doo) and even EDIT and EDMK which will do things like fill
on the left with asterisks, put $ signs in and append the characters
"CR" all in hardware!  Well, it seems to make the COBOL crowd happy.

DEC put basically a clone implementation of the 370 decimal
instruction set in the VAX tho it may have had some basis in the
PDP-11 CIS chip (a coprocessor, "Commercial Instruction Set" which
added various opcodes), I don't remember if CIS had packed decimal,
probably (you had a spot for either the CIS or an FPU but not both, I
worked in science stuff so we always got the FPU although we were
always curious if the block move in the CIS would help UNIX's buffer
motions, I digress.) Actually, we got an EIS but that's another story
(gee Martha, a 32-bit shift instruction AND real hardware multiply and
it's only a few thousand dollars!  Quick, write a PO!!)

I'm sure the packed decimal stuff isn't all that fast tho I have no
data (heck, I guess I'd have to mention a machine first, I wouldn't be
shocked to hear that some old Burroughs or Nixdorf was faster on BCD
than integer) so I'd still agree that a 64-bit integer could be
useful, but it's not entirely an unaddressed issue.

	-Barry Shein, Boston University

ok@quintus.UUCP (Richard A. O'Keefe) (02/05/88)

In article <19667@bu-cs.BU.EDU>, bzs@bu-cs.BU.EDU (Barry Shein)
writes about "Commercial" instruction sets.

The Burroughs B6700 and its successors (I can never remember the latest
name for these machines, is it "E-mode"?) had 48-bit single precision
arithmetic and 96-bit double precision arithmetic.  Integers were a
special case of floats, meaning that you got about 11 decimal digits
(single) or 23 (double).  Having used 48-bit (B6700) and 36-bit (DEC-10)
machines, even working at the instruction level of both (a friend and I
had a hacked version of the B6700 Algol compiler which had an equivalent
of asm("..")), I have never understood why "word-length = a power of two"
has such a hypnotic effect on people.  Burroughs considered the needs of
COBOL, and decided that they'd get more value for money by concentrating
on binary arithmetic and providing fast decimal<->integer conversion.

If 32 bits isn't enough, why not go to 48 as the next step?

rw@beatnix.UUCP (Russell Williams) (02/06/88)

In article <4340@ames.arpa> lamaster@ames.arc.nasa.gov.UUCP (Hugh LaMaster) writes:
>I also note that large multi-CPU mainframes (e.g. IBM, NAS, Amdahl,...) as
>well as supercomputers are approaching the 32 bit addressing limit for
>PHYSICAL memory.  So, I predict a healthy future for 64 bit machines in
>the "data processing" world, as well as in the scientific computing world.
>It might even be coming sooner than some people think in the micro-computing
>world.

   The per-process virtual (user instruction set) addressing limit of a machine 
doesn't necessarily limit the physical memory.  Our machine uses 32 bit virtual
addresses and 38 bit physical addresses.  Many years ago,
Honeywell machines had a 256 Kword per-process limit but could handle more
physical memory.   Of course things get more
complicated in the memory management software if you want to handle
physical > per-process, but it's often not nearly as hard to expand the
physical addressing of an architecture as it is the per-process logical limit.
It's traditionally been easier to recode your memory manager than persuade all 
your users to recompile, but Unix & C are changing that.

Russell Williams
..uunet!elxsi!rw

kludge@pyr.gatech.EDU (Scott Dorsey) (02/06/88)

In article <625@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>  Having used 48-bit (B6700) and 36-bit (DEC-10)
>machines, even working at the instruction level of both (a friend and I
>had a hacked version of the B6700 Algol compiler which had an equivalent
>of asm("..")), I have never understood why "word-length = a power of two"
>has such a hypnotic effect on people.

   I routinely code on the CDC 170 machines (60 bit word, 6 bit character)
and the CDC 180 machines (64 bit word, real ASCII).  Overall, the character
handling capability of the long word architecture is pretty good.  The 180's
provide nice conversion routines, and block off 8 character conversions and
string searches with a minimum of bus cycles.  
   Overall, though, it's not worth it.  These machines are excellent number
crunchers (having a 64-bit real is a spectacular thing... Double Precision
is 128 bits!), but most of the power is wasted.  



Scott Dorsey   Kaptain_Kludge
SnailMail: ICS Programming Lab, Georgia Tech, Box 36681, Atlanta, Georgia 30332

   "To converse at the distance of the Indes by means of sympathetic
    contrivances may be as natural to future times as to us is a 
    literary correspondence."  -- Joseph Glanvill, 1661

Internet:  kludge@pyr.gatech.edu
uucp:	...!{decvax,hplabs,ihnp4,linus,rutgers,seismo}!gatech!gitpyr!kludge

dwc@homxc.UUCP (Malaclypse the Elder) (02/06/88)

i can also think of applications of greater than 32 bits of
address space in the operating system.  for one, i have been
thinking of the amount of copying that is done in passing
data through the various layers of network protocols.  perhaps
with a very LARGE kernel address space and suitable alignment
of data structures, this data can be memory-mapped through the
different layers instead of copied.

i haven't really thought much about it.  what do people think?

danny chen
ihnp4!homxc!dwc

haynes@ucscc.UCSC.EDU (99700000) (02/06/88)

In article <625@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>In article <19667@bu-cs.BU.EDU>, bzs@bu-cs.BU.EDU (Barry Shein)
>writes about "Commercial" instruction sets.
>
>If 32 bits isn't enough, why not go to 48 as the next step?

Hear! Hear!  I love the Burroughs B5500, even tho it's been gone (sob)
for several years.


haynes@ucscc.ucsc.edu
haynes@ucscc.bitnet
..ucbvax!ucscc!haynes

aeusesef@csuna.UUCP (sean fagan) (02/07/88)

In article <4947@pyr.gatech.EDU> kludge@pyr.UUCP (Scott Dorsey) writes:
>   I routinely code on the CDC 170 machines (60 bit word, 6 bit character)
>and the CDC 180 machines (64 bit word, real ASCII).  Overall, the character
>handling capability of the long word architecture is pretty good.  

Character handling is strange on these machines (for those of you who are
unfortunate enough to never have worked on one):  CDC normally uses a 6-bit
character set, allowing 10 characters per word.  (This is, by the way, why
the linker, compilers, etc. usually allow only 7 character identifiers:  it
works out to 7 characters, plus 18 bits left over for the address [only an
18-bit address].)  You can also use 8 bit ASCII, or a modified version which
uses up 12 bits per character.  To get any individual character, you need to
form a mask, load the desired mask, do an AND, shift, and, bingo, you're done.
Since you can't use a memory location for the source or destination, you
have to use up a register for each operation (true, you can overlap, but
that slows the bigger models down).  Compared to other machines, it's
horrendous.  However, since it is such a fast machine, it tends to go MUCH
faster than most others.  (This is a partial plug for equipment CDC no
longer manufactures 8-))  Oh, you could also get a Compare Move Unit for
most (or at least some) of the models, which could move any arbitrary 6-bit
string to any other 6-bit string (that way, you could get a 6-bit character
out of one word into another in about 3 clock-cycles [plus memory access
times]).
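
In C terms, the shift-and-mask dance described above looks roughly like this
(holding the 60-bit word in a 64-bit integer type is an assumption for
illustration; on the real hardware you spell it out with registers):

typedef unsigned long long word60;    /* 60-bit CDC word, right-justified */

unsigned get_char6(word60 w, int i)   /* i = 0..9, leftmost character first */
{
    return (unsigned)(w >> ((9 - i) * 6)) & 077;   /* shift, then 6-bit mask */
}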

>The 180's
>provide nice conversion routines, and block off 8 character conversions and
>string searches with a minimum of bus cycles.  

Actually, the byte-accessibility is handled by the hardware (actually by the
microcode; it does allow byte addressing, however, for data fetches), and
the string searches are also handled by the microcode.  Since the slowest
machine in this line I've worked with has a 20-MHz clock, yes, it does tend
to do things rather quickly.  180 state machines also (with one exception)
have 170 state emulation; and, since some customers wanted it, they added
the CMU to the 170 microcode.  Also of interest, a 180/830 has up to 16
states it can run (i.e., it can have 16 different microcodes running at the
same time [like multi-tasking]; sadly, they never did anything with this,
and later models only had 2 states [and their really low end mainframe, the
180/930 [aka the washing machine because of its size and shape] only has 1:
it doesn't run NOS, just NOS/VE [Networked Operating System/Virtual
Environment]).

>   Overall, though, it's not worth it.  These machines are excellent number
>crunchers (having a 64-bit real is a spectacular thing... Double Precision
>is 128 bits!), but most of the power is wasted.  
What?! Power is never wasted.  We (at CSU) have, at times, up to 120
students on a 170/750 AT THE SAME TIME!  Without the power it provides, it
would take *minutes* to compile a 5-line FORTRAN program.  As it is, it can
do it in under 1 minute!  It's also the only machine I've been on where I
can compile a 10,000 line FORTRAN program in under 3 minutes (conservative
estimate, somewhat).  I will admit, most people don't use the Double
Precision, but, when you need it, you *really* need it.

Well, sorry for lecturing about it.  Since I started working for a living on
the machines, my respect for the hardware has grown incredibly.  The OS, on
the other hand...

>Scott Dorsey   Kaptain_Kludge
>SnailMail: ICS Programming Lab, Georgia Tech, Box 36681, Atlanta, Georgia 30332

 -----

 Sean Eric Fagan          Office of Computing/Communications Resources
 (213) 852 5742           Suite 2600
 1GTLSEF@CALSTATE.BITNET  5670 Wilshire Boulevard
                          Los Angeles, CA 90036
{litvax, rdlvax, psivax, hplabs, ihnp4}!csun!csuna!aeusesef

lee@ssc-vax.UUCP (Lee Carver) (02/09/88)

It appears in this discussion that the obvious reason for more than
32 bits has been overlooked - money!

There are lots of companies out there that need to add up more than
$4 billion in sales.  Even bigger numbers if you count transactions,
not just year end totals.

And those numbers better be right, down to the penny.

david@sun.uucp (David DiGiacomo) (02/09/88)

In article <1333@vaxb.calgary.UUCP> radford@calgary.UUCP (Radford Neal) writes:
>Now I can see the flames coming on this one... How incredibly
>shortsighted! History shows that we *always* need more address bits... Please
>calculate the amount of memory addressable with 48 bits, then ask whether
>systems with this much memory, if they ever appear, will have anything
>like a current machine architecture anyway...

I can easily imagine a network with more than 16K 16-Gb disks connected to
it.  It would be silly to arbitrarily restrict file size in such a system.

aglew@ccvaxa.UUCP (02/09/88)

>It kind of makes me wonder about many of the whizzy new processors; why
>would anyone go to all the trouble to design and implement a new machine
>with only a 32-bit bus?  If the new machines are as fast as reputed, then
>probably Lisp users will want to run larger tasks than they can address.
>What shall I do when my Sun-4 (or whatever) has 4 gigabytes of swapping
>space and it's not enough?
>
>						-- Jay Freeman

Heartily second this emotion. Maybe the implementation should only have 
a 32 bit bus, but the architecture should make 64 bit work easier.
You can prepare for 64-bit registers by leaving spare opcodes, and by leaving
1 (or maybe 2) extra bits in your bitfield and shift instructions
- instead of a 5 bit shift width, 6 (8 sounds even better).
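
A rough sketch of the idea (the field layout below is invented for
illustration, not any real instruction set):

struct shift_insn {
    unsigned opcode : 8;
    unsigned rd     : 5;     /* destination register */
    unsigned rs     : 5;     /* source register */
    unsigned amount : 6;     /* 0..63 -- a 5-bit field stops at 31 */
    unsigned pad    : 8;     /* spare encoding space for later */
};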



Andy "Krazy" Glew. Gould CSD-Urbana.    1101 E. University, Urbana, IL 61801   
    aglew@gould.com     	- preferred, if you have nameserver
    aglew@gswd-vms.gould.com    - if you don't
    aglew@gswd-vms.arpa 	- if you use DoD hosttable
    aglew%mycroft@gswd-vms.arpa - domains are supposed to make things easier?
   
My opinions are my own, and are not the opinions of my employer, or any
other organisation. I indicate my company only so that the reader may
account for any possible bias I may have towards our products.

martin@felix.UUCP (Martin McKendry) (02/09/88)

In article <625@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>
>The Burroughs B6700 and its successors (I can never remember the latest
>name for these machines, is it "E-mode"?) had 48-bit single precision

The machines are A-series, the instruction set is E-mode.

>arithmetic and 96-bit double precision arithmetic.  Integers were a
>special case of floats, meaning that you got about 11 decimal digits
>(single) or 23 (double).  Having used 48-bit (B6700) and 36-bit (DEC-10)
>machines, even working at the instruction level of both (a friend and I
>had a hacked version of the B6700 Algol compiler which had an equivalent
>of asm("..")), I have never understood why "word-length = a power of two"
>has such a hypnotic effect on people.  

You want to try doing Cobol, or pointer arithmetic, maintaining separate
character and word portions etc?  It's a nightmare.  Dog slow, too.
Burroughs has patents on some pretty nifty divide-by-3 hardware.

>Burroughs considered the needs of
>COBOL, and decided that they'd get more value for money by concentrating
>on binary arithmetic and providing fast decimal<->integer conversion.

Actually, this is not exactly correct.  Burroughs went with 48 bits
precisely because they thought it WAS a power of two.  At the time,
characters were 6 bits (1957-8, remember).  I had the pleasure of
researching this during 1986, when I was employed by Burroughs in
Detroit.  At the time, I was examining the performance of the A-series.
Among the issues that arose was the non-power-of-2 wordsize.

>
>If 32 bits isn't enough, why not go to 48 as the next step?

We proposed taking 48 bits to 64.  They laid us off.



--
Martin S. McKendry;    FileNet Corp;	{hplabs,trwrb}!felix!martin
Strictly my opinion; all of it

franka@mmintl.UUCP (Frank Adams) (02/09/88)

In article <4947@pyr.gatech.EDU> kludge@pyr.UUCP (Scott Dorsey) writes:
>   I routinely code on the CDC 170 machines (60 bit word, 6 bit character)
>and the CDC 180 machines (64 bit word, real ASCII). ...
>   Overall, though, it's not worth it.  These machines are excellent number
>crunchers (having a 64-bit real is a spectacular thing... Double Precision
>is 128 bits!), but most of the power is wasted.

So what!  Your car runs only, maybe, 5% of the time; the rest of the time
it's sitting idle, its capacity wasted.  Does this bother you?

Computer power is cheap and getting cheaper.  Today, if you occasionally use
the full power of your machine, the power is worth having.  In the future,
that will be true if you *ever* use the full power.
-- 

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

eugene@pioneer.arpa (Eugene N. Miya) (02/10/88)

In article <973@ssc-bee.ssc-vax.UUCP> lee@ssc-vax.UUCP (Lee Carver) writes:
> Article summarizing word size and equating the maxint to maximum
> transactable dollars. (i.e. 4 G)

A similar argument can be made about world population.  But I think
this is weak.

It occurs to me (since this is the 4th time I've seen this discussion) that
there is a wildcard in this line of logic which we overlook.  We should
not equate word size with maximum machine precision, especially in this
world where 68Ks and 80386s predominate.  Before we see the
proliferation of 64 bit micros, I think we will see the proliferation of
extended precision math software packages (floating point as well as integer).
I think this will happen because of Ada and the way Ada handles
machine precision (scary thought to me actually).  I have to think about
the consequences of this more (I didn't think of this 3 times hence).
The DOD may control software more than I thought they would.

From the Rock of Ages Home for Retired Hackers:

--eugene miya, NASA Ames Research Center, eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {uunet,hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene

lamaster@ames.arpa (Hugh LaMaster) (02/10/88)

In article <1333@vaxb.calgary.UUCP> radford@calgary.UUCP (Radford Neal) writes:

>
>Why is everyone assuming that if you want greater than 32 address bits
>you need 64? My guess is that 48 (or maybe even 40) bit addresses will
>be used. If you assume that memory is dominated by such addresses, or


Actually, it is extremely important in a networked world for word sizes
to be a power of two bits long.  As someone who has spent a considerable
amount of time maintaining code to move data between machines of 
various sizes (in my case, DEC and IBM 8 bit byte/32 bit word machines
and CDC 60 bit word machines), and helping users convert data written
on one such machine to another, etc. etc., I can say with complete
conviction:

Word size (or the size of any addressable data) should ALWAYS be a 
power of two.

In addition to making life much easier for those who have to move
binary data between machines (and with NFS, there are a lot more
such people out there), it also makes it much easier to move CODE
between machines.  Yes, I have converted bit-level code from IBM's
to Cyber 170's and back to 8 bit machines again.

Now, theoretically, 48 bit machines are great.  The address size is
very nice and 48 bits is probably perfect for SINGLE PRECISION
floating point (as has been observed by many people, 32 bits is too
small for many f.p. problems), and 96 bits is probably perfect
for double precision.  However, I stand by my previous judgement
that the money you save in memory with a 48 bit word size you will
pay back many times in programming costs. 

>
>Now I can see the flames coming on this one... How incredibly
>shortsighted! History shows that we *always* need more address bits... Please
>calculate the amount of memory addressable with 48 bits, then ask whether

>Of course, there are other reasons for wanting long addresses, than just
>access to physical memory. If you want sparse address spaces, typed
>pointers, ring brackets, whatever, more power to you (MULTICS already
>has *72* bit pointers...)

The ability to support sparse address spaces is useful now and will be
even more useful in the future when capability based systems become
commonplace- It is a natural idea to support "objects" using virtual
memory "segments".  With 2**31 segments, each one of which is 2**32
bits or bytes long, it would be very natural to map an object to a
segment and be able to support many segments.  As well as shared virtual
memory without using up all of your precious address space.
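
For concreteness, the split described above might look like this (the 64-bit
type and the macro names are illustrative, not any particular machine's
format):

typedef unsigned long long vaddr;

#define SEGMENT(va)  ((unsigned)((va) >> 32) & 0x7fffffff)   /* 2**31 segments */
#define OFFSET(va)   ((unsigned)((va) & 0xffffffff))         /* 2**32 bytes each */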

As far as I know, the only thing holding back 64 bit word machines
in the micro world is the difficulty of getting enough pins into the chips
with enough chip area left to do something useful.  

The question I have, since the 64 bit alternative seems so controversial, is:
Are there any OTHER reasons for NOT using 64 bits?

donahn@ucbvax.BERKELEY.EDU (Don Ahn) (02/10/88)

In article <28200094@ccvaxa>, aglew@ccvaxa.UUCP writes:
> 
> >It kind of makes me wonder about many of the whizzy new processors; why
> >would anyone go to all the trouble to design and implement a new machine
> >with only a 32-bit bus?  If the new machines are as fast as reputed, then
> >probably Lisp users will want to run larger tasks than they can address.
> >What shall I do when my Sun-4 (or whatever) has 4 gigabytes of swapping
> >space and it's not enough?
> >
> >						-- Jay Freeman
> 
> Heartily second this emotion. Maybe the implementation should only have 

If I may put in my two cents worth,  I would think the majority of 
applications out there today on PC (IBM, Apple) systems and Unix
systems (especially BUSINESS applications) would do quite fine in
a 4 gigabyte address space.  There are some applications that
could use a 64 bit address space (Lisp, Image Processing, Digitization),
but these are the exceptions to the rule.  Most people are quite
happy to run the same, or similar, applications they have been
running so far as long as they can run them faster.  Let's face it,
faster sells much better than address space to the business world
and to the general public.  Until 64-bit dBase XXI comes out there
will be relatively little (dollar) demand for 64-bit addressing
machines.  I know people that would be perfectly happy with a
100 MHz 8088 PC with 640K of 10ns RAM and a 5ms 10 Meg disk running
PC-DOS.  To many, said system would be the epitome of computer "advancement".
Sickening but true.



-- 
Don Ahn
UC/Berkeley Dept. of Zoology
1576 LSB			USENET: ...!ucbvax!donahn
(415) 643-6299			ARPA:   donahn@ucbvax.berkeley.edu

daveh@cbmvax.UUCP (Dave Haynie) (02/10/88)

in article <19667@bu-cs.BU.EDU>, bzs@bu-cs.BU.EDU (Barry Shein) says:

>>Another use of more than 32 bits is for normal accounting purposes.

> This was the motivation for packed decimal instruction sets on many
> computers aimed at the business market. They typically used "BCD",
> decimal digits packed one per 4-bit nibble and supported variable
> length operands up to 15 decimal digits.

That old business workhorse, the MOS 6502, also had a BCD math mode.

It does, at least, have an accuracy advantage, in that there's no binary
rounding problem with BCD; you get what you pay for in decimal digits.  Of
course, it's in most cases going to be much slower than normal binary math,
and it takes up more space.

> 	-Barry Shein, Boston University
-- 
Dave Haynie  "The B2000 Guy"     Commodore-Amiga  "The Crew That Never Rests"
   {ihnp4|uunet|rutgers}!cbmvax!daveh      PLINK: D-DAVE H     BIX: hazy
		"I can't relax, 'cause I'm a Boinger!"

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (02/10/88)

I've had this discussion with one of the Cray gurus here, and I'll just
mention my point again in relation to the current discussion.

Given any computer, there are a certain percent of all problems which do
not exceed its addressing, mass storage and accuracy limits, and which
execute in a reasonable time (see below).  As the hardware becomes more
powerful the percent of soluble problems increases. 

People who say that there will always be a need for more power are only
partially correct. There will always be a *use* for more power,
but as the percentage of problems requiring a given level of power
decreases, the economic justification for creating such hardware
decreases. The problems tend to be more abstract and the value of the
solutions harder to determine.

I am not saying that we are at that "stopping point," and if the cost of
hardware continues to decrease there may not be such a point. But when
someone claims that they need a system two orders of magnitude faster
than a Cray, I have to question the term "need."  It's not that there is any
lack of problems requiring that level of power for solution; what I question
is whether the solution is needed, as opposed to being technically
interesting.

The typical accountant's answer to buying more CPU power is "what will
happen if we don't get it?" If the answer is "the paychecks won't go out
on time" or "we can't do our billing," there is a "need." If the answer
is "our weather map will be 5% less accurate than it could be," you
better be making money on weather maps (or have the taxpayers bottomless
pockets to tap).

There will always be people who could legitimately use more than 2 GB of
memory, or more than 64 bit words; the question is whether the problems to be
solved justify the expense. Looking at the history of hardware cost, I
would guess that there will be more problems solved with existing
hardware, and fewer which need more than state of the art.
--
Acceptable response: depends on the time taken to use the data. Someone
making a ray traced motion picture generates frames which are used in a
fraction of a second, and lot of them. Some engineer who takes three
weeks to analize the output of one simulation run will not be a lot less
productive waiting four hours to get those results.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

mike@arizona.UUCP (02/11/88)

From article <9495@steinmetz.steinmetz.UUCP>, by davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr):
> People who say that there will always be a need for more power are only
> partially correct. There will always be a *use* for more power,
> but as the percentage of problems requiring a given level of power
> decreases, the economic justification for creating such hardware
> decreases. The problems tend to be more abstract and the value of the
> solutions harder to determine.

I think that experience shows that we quickly develop "need" for any
amount of processing power we have available.  I mean "need" in the
sense of "makes economic sense".  When we had 4K of memory we didn't
"need" much except an assembler and front pannel lights.  Now we need
huge editors, compilers, debuggers, window systems, spread sheets, ...
At the moment, I'm not sure what I "need": a shell that lets me
manipulate three dimensional shaded pipes to connect processes?  Voice
input?  More likely, something I haven't even imagined yet.
-- 

Mike Coffin				mike@arizona.edu
Univ. of Ariz. Dept. of Comp. Sci.	{allegra,cmcl2,ihnp4}!arizona!mike
Tucson, AZ  85721			(602)621-4252

peter@gen1.UUCP (Peter CAMILLERI) (02/11/88)

It seems to me that the primary need for more than 32 bit operations lies
in the need to address large (possibly networks of) data arrays and such.

Perhaps the optimum solution would be to have a machine in which the size
of an address were unknown. For example, the inmos transputer constructs
its address out of instruction components that can be concatenated to
form an arbitrarily large value. Addresses would then essentially be passed
between machines in a string-like format. Internally the address would be
evaluated to 64 or whatever bits. The advantages of this scheme are that
small programs would not be penalized, and hardware can be upgraded
without trashing all of the software.

No flames please. This is only an attempt to define a solution that
won't "run out" in 10 years.

Peter



-- 
Peter  Camilleri         UUCP:       ...!mnetor!yunexus!gen1!peter
                                       ...utzoo!/   

farren@gethen.UUCP (Michael J. Farren) (02/12/88)

davidsen@crdos1.UUCP (bill davidsen) writes:
>
>People who say that there will always be a need for more power are only
>partially correct. There will always be a *use* for more power,
>but as the percentage of problems requiring a given level of power
>decreases, the economic justification for creating such hardware
>decreases. The problems tend to be more abstract and the value of the
>solutions harder to determine.

The flaw in this argument is presuming that we have knowledge of all the
possible applications of such power.  Relevant examples abound - the
statement that 'the whole world will only need a small number of these
machines' applied to the early computers is a particularly good one,
as it points out that our vision is necessarily limited by our current
knowledge.  It's not what you can do with a powerful machine now that
is intriguing, but what you might be able to do with one if they were
commonly available, and that you cannot foresee.

The widespread acceptance of a bit-mapped graphical interface among
personal computer users is a fine counter-example.  It's only been
ten years since such an interface was a laboratory exercise, and
infeasible for general use.  It's really only now, with the advent
of the 68020 and 80386, that such a concept can begin to move from
the lab into common usage, with basically good results.  I don't
know what the equivalent use is for a gigabyte/gigaflop machine -
if I did, I'd probably be making a lot more money than I am now -
but I'd be willing to bet that it will be unexpected and useful.

-- 
Michael J. Farren             | "INVESTIGATE your point of view, don't just 
{ucbvax, uunet, hoptoad}!     | dogmatize it!  Reflect on it and re-evaluate
        unisoft!gethen!farren | it.  You may want to change your mind someday."
gethen!farren@lll-winken.llnl.gov ----- Tom Reingold, from alt.flame 

eugene@pioneer.arpa (Eugene N. Miya) (02/12/88)

In article <9495@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
> An excellent summary justifying the tradeoffs of capability versus cost.
> . . . then:
>memory, or more than 64 bit words, the question is if the problems to be
>solved justify the expense.

Somehow, I can see it in the headlines of the near future:
	"AIDS virus can't be mapped due to insufficient computer power..."
I quote something I heard from a net reader who never posts:
	"You can't call them supercomputers until their costs equal
	aircraft carriers and particle acclerators."
	;-)
Please excuse the non-technical posting, I thought the quote would bring
useful perspective to the world of funding.

From the Rock of Ages Home for Retired Hackers:

--eugene miya, NASA Ames Research Center, eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {uunet,hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene

amos@nsc.nsc.com (Amos Shapir NSTA) (02/12/88)

While it's true that today's fancy stuff may become tomorrow's necessity,
it is very hard to find out in advance what will become of specific
features: refrigerators and telephones are here to stay; CB radios and
8-track tapes are gone, except for specific use.

-- 
	Amos Shapir				My other CPU is a NS32532
National Semiconductor 7C/266  1135 Kern st. Sunnyvale (408) 721-8161
amos@nsc.com till March 1, 88; Then back to amos%taux01@nsc.com 

hqm@mit-vax.LCS.MIT.EDU (Henry Minsky) (02/14/88)

In article <28200094@ccvaxa> aglew@ccvaxa.UUCP writes:
>
>>It kind of makes me wonder about many of the whizzy new processors; why
>>would anyone go to all the trouble to design and implement a new machine
>>with only a 32-bit bus?  If the new machines are as fast as reputed, then
>>probably Lisp users will want to run larger tasks than they can address.
>>What shall I do when my Sun-4 (or whatever) has 4 gigabytes of swapping
>>space and it's not enough?
>>
>>						-- Jay Freeman
>


In fact, the new Symbolics Ivory lisp-machine microprocessor addresses
words, not bytes, and the words are 40 bits long (32 bit address + 8 tag).
So, you get 4 Gwords which is a little more than 16 Gbytes. Still,
people's applications expand to fill all available memory...


Henry Minsky, Symbolics Inc.

freeman@spar.SPAR.SLB.COM (Jay Freeman) (02/15/88)

In article <328@gen1.UUCP> peter@gen1.UUCP (Peter CAMILLERI) writes:

>Perhaps the optimum solution would be to have a machine in which the size
>of an address were unknown. For example, the inmos transputer constructs
>its address out of instruction components that can be concatenated to
>form an arbitrarily large value. Addresses would then essentially be passed
>between machines in a string-like format. Internally the address would be
>evaluated to 64 or whatever bits. The advantages of this scheme are that
>small programs would not be penalized, and hardware can be upgraded
>without trashing all of the software.

Speaking off the cuff, I think there is merit to that suggestion:  Suppose
addresses were encoded with a variable-length coding scheme.  A processor
with a non-variable-width (!)  bus would do a bus-wide fetch, then start
decoding; it would only go and do another fetch if it turned out it needed
to.  With sane design of the coding scheme, a substantial address space
could be described in the set of encoded addresses that could be fetched in
one bus transaction.  (I am guessing, but I suspect it might be as high as
2**(N-1) encoded locations for an N-bit bus width.)  That would mean that
most of the time (one hopes) the processor could run at one fetch per
address location, but that longer addresses would be available when needed,
at no more expense than additional fetches to get them.  Possibly there
would be useful circumstances in which odd-length addresses could be packed
so that they did not all start at word boundaries.
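
As a rough sketch (the continuation-bit encoding below is my own
illustration, not the transputer's actual scheme): each byte carries 7
address bits and its top bit says "another byte follows", so an N-bit fetch
covers 2**(N-1) short addresses directly and longer ones just cost extra
fetches.

unsigned long decode_addr(const unsigned char *p, int *nbytes)
{
    unsigned long addr = 0;
    int n = 0;

    do {
        addr = (addr << 7) | (p[n] & 0x7F);   /* accumulate 7 bits at a time */
    } while (p[n++] & 0x80);                  /* high bit set: more to come */
    *nbytes = n;
    return addr;
}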

One problem here is that you do not know the length of any specific pointer
until run-time; that perhaps creates a problem for storing pointers in
memory.  You can't just store a "forwarding pointer" instead, cause there's
no guarantee that it will fit, either.  However, it is possible to imagine a
reasonable work-around; one could require that a programmer declare a
maximum length-of-pointer at compile-time, and make the compiler allocate
that much storage for each pointer.  Having a pointer come out too long
would then be a run-time error, which could be fixed merely by changing the
compile-time declaration and recompiling.  That's a little painful, but not
nearly so much as having to wait for the hardware people to implement a
longer word.

There is also perhaps a problem for how the CPU addresses memory; one can
imagine the poor CPU saying "let me see, the user wants to provide a
1000-bit address, do I have enough address registers to store all those
bits, and even if I do, how shall I put them out on the address bus ..."
But there probably is an answer involving cycling through an address in
fetch-wide chunks, with the aid of an MMU that knows how to bank-switch (or
whatever) on all but the last "fetch" number of bits.

Another win for such a scheme would be if frequently-accessed data were
stored at places for which the address was short.  One could map registers
into address space, and have them get the shortest addresses, or use
relative addressing off the stack top or off a base register, or ...



					-- Jay Freeman


(canonical disclaimer -- these statements are my opinions and nothing else)

keld@diku.dk (Keld J|rn Simonsen) (02/17/88)

About dollar and other currencies: you need to represent them in the
smallest unit to be sure of having a correct representation, and you
need integer or BCD arithmetic to preserve precision.

So for USA you have to do your operations in cents. And if we leave
one bit for the sign you can have a max of 2**31 cents represented in 
a 32 bit word. 2 G cents = about 20 million dollars. Not much.

In other currencies the smallest unit may be worth 0.1 cents or less.
Danish kroner (DKK) or French francs (FFR) are amongst these 
currencies. Italian lire is even less. So here you are only able
to have a maximum worth about 2 million US dollars.

There is a problem with C that long ints are usually not more
than 32 bits, even if the hardware is capable of doing 64 bits
adds and subtracts. This I foresee to cause severe problems for
C-based accounting software! Comments?

Other uses of more than 32 bits are for social security numbers.
They are 10 digits here in Denmark, and that needs 34 bits.
Other countries may have even longer numbers.  I think 64 bit
integers are needed! And they have to be supported by languages such as C!

Keld Simonsen, U of Copenhagen    keld@diku.dk  uunet!diku!keld

jxdl@beta.UUCP (Jerry DeLapp) (02/20/88)

In article <3815@megaron.arizona.edu>, mike@arizona.edu (Mike Coffin) writes:

[elided]

> I think that experience shows that we quickly develop "need" for any
> amount of processing power we have available.  I mean "need" in the
> sense of "makes economic sense".  When we had 4K of memory we didn't
> "need" much except an assembler and front pannel lights.  Now we need
> huge editors, compilers, debuggers, window systems, spread sheets, ...
> At the moment, I'm not sure what I "need": a shell that lets me
> manipulate three dimensional shaded pipes to connect processes?  Voice
> input?  More likely, something I haven't even imagined yet.

Address space is like disk space is like memory is like money. You can
always use up everything you have and then some.

A question: Rather than going to larger addresses only, how about larger
data items too? Eight bit data units with 32-bit addresses may be nice for
character processing, but aren't most of the really large applications
mainly concerned with larger items (i.e. floating-point numbers). I admit
to being a number cruncher, so I'm biased. I'd much rather pay the overhead
of extra work to handle "small" things in order to have "big" things handled
easily.

Standard disclaimers apply.
My opinions are my own, and sometimes even I don't agree with me.

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (02/20/88)

In article <3671@diku.dk> keld@diku.dk (Keld J|rn Simonsen) writes:
> [...]
>Other uses of more than 32 bits are for social security numbers.
>They are 10 digits here in Denmark, and that needs 34 bits.
>Other countries may have even longer numbers.  I think 64 bit
>integers are needed! And they have to be supported by languages such as C!

  I proposed to the ANSI C (X3J11) committee that we have a more
independent way of specifying required precision, such as "int*4 foo"
or some such. It was rejected as not in the spirit of the language and
"too much like FORTRAN." Additional float precision is specified as
"long double," which I admit is in the spirit of the language. I left
the committee after two years due to lack of time, so I am probably not
up on all the changes since then.
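
For what it's worth, a common workaround in the meantime is a header of
per-machine typedefs (the names below are illustrative, not from any
standard):

typedef short          int16;    /* 16 bits on this machine */
typedef long           int32;    /* 32 bits on this machine */
typedef unsigned long  uint32;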


-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

rw@beatnix.UUCP (Russell Williams) (02/20/88)

In article <3671@diku.dk> keld@diku.dk (Keld J|rn Simonsen) writes:
>
>There is a problem with C that long ints are usually not more
>than 32 bits, even if the hardware is capable of doing 64 bits
>adds and subtracts. This I foresee to cause severe problems for
>C-based accounting software! Comments?
>
   We tried making longs 64 bits and lots of programs wouldn't run, so
we put int=long=32 bits and added a "long long" type.  Another example
of common practice along the lines of dereferencing null pointers.  
Customers don't like being told their programs are non-portable ("fine,
I'll buy a Vax"), and of course we're stuck with fixing the AT&T and
Berkeley code.  In the real world the portability of a C program is
not really defined by K&R but by what a Vax running Unix does.

Russell Williams
..uunet!elxsi!rw

bzs@bu-cs.BU.EDU (Barry Shein) (02/20/88)

From: jxdl@beta.UUCP (Jerry DeLapp)
>Address space is like disk space is like memory is like money. You can
>always use up everything you have and then some.

Actually I've always felt that this belief was untrue. It's more like
the amount of memory or disk space is a function of the speed of
accessing it and using it. What use would 1GB of main memory be on a
Vax750 for example? Just zeroing it would probably take about 20
minutes (one 4 byte word per loop, 4 instruction times per loop, about
1000 seconds, cut it in half if you like, it doesn't change the
argument), let alone actually making much use of it in real
calculations.

NOTE: This is not to say that *no* one can make use of massive memory,
but the applications diminish as the memory grows, and what
constitutes "massive" memory is dependant upon other elements of a
system (1GB on a Cray-2 is modest if not limiting.) The real point is
that there are economic trade-offs and there is definitely a point
where you'd be better off spending your money on a faster CPU
(assuming finite money) than more memory. A "truism" like the above
is actually not quite true.

What you really want is to find some sort of balance. Given the past
history of computing it's not surprising to hear people say there is
no limit to the need for more memory (although I will say that the
poster's address at Los Alamos Natl Lab might indicate a special case
environment.)

I actually think this is more important a comment than it might at
first seem and even applies to mere CPU speed which eventually gets
out of balance with the software for all but the most exotic
applications (how many people out there would throw away their current
workstation and pay for a new one merely to double its CPU speed? How
many would rather spend the same money on some software to make their
work easier (assuming finite funds)?)

See, the world is changing for a lot of folks, it's an important
change.  Granted these choices have always existed, but I do think the
threshold is moving rapidly, there are limits to human reaction time.

	-Barry Shein, Boston University

P.S. I know I've been thru this before in the past, it's kinda like
when I teach a class and get this urge on the first day to say "didn't
I explain this last* year?!"

earl@mips.COM (Earl Killian) (02/20/88)

In article <3671@diku.dk>, keld@diku.dk (Keld J|rn Simonsen) writes:
> There is a problem with C that long ints are usually not more
> than 32 bits, even if the hardware is capable of doing 64 bits
> adds and subtracts. This I foresee to cause severe problems for
> C-based accounting software! Comments?

Actually, it's trivial to do some 64-bit arithmetic in C today.

However, lest I appear to be arguing against 64-bit support, let me
first say that, in my opinion, every architecture designed after the
early 90s will be a 64-bit architecture.  Even PCs will probably use
64-bit architectures by the end of the 90s.  64 bit addresses and
integers are too important.

Anyway, here are some macros that I use for doing 64-bit addition and
subtraction in C.  People commonly assume you need to use an add with
carry instruction to do this stuff, which is not so.  All you need is
an unsigned comparison.  64-bit multiply/divide are the ones that
can't be done trivially in C.

/* dw.h -- 64-bit ("double word") integers built from two 32-bit unsigneds:
   element [0] is the low word, element [1] the high word. */
typedef unsigned dw[2];

/* set to zero */
#define ZERO(_dw) \
    {_dw[0] = 0; _dw[1] = 0;}

/* increment by one; carry into the high word when the low word wraps to 0 */
#define INC1(_dw) \
    {_dw[0] += 1; if (_dw[0] == 0) _dw[1] += 1;}

/* add a 32-bit value; unsigned wraparound (result < addend) signals the carry */
#define INC(_dw, _i) \
    {_dw[0] += _i; if (_dw[0] < (unsigned)_i) _dw[1] += 1;}

/* _a = _b + _c; be careful that _a is not the same location as _c */
#define ADD(_a, _b, _c) \
    {_a[0] = _b[0] + _c[0]; _a[1] = _b[1] + _c[1] + (_a[0] < _c[0]);}

/* _a = _b - _c; an unsigned comparison computes the borrow */
#define SUB(_a, _b, _c) \
    {register unsigned _borrow = (_b[0] < _c[0]); _a[0] = _b[0] - _c[0]; _a[1] = _b[1] - _c[1] - _borrow;}

/* convert to double: low word + high word * 2**32 */
#define DOUBLE(_dw) \
    ((double)_dw[0] + (double)_dw[1] * 4294967296.0)
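
For illustration, a small program using the macros might look like this
(the expected result is in the final comment):

#include <stdio.h>
#include "dw.h"

int main(void)
{
    dw total, step;
    int i;

    ZERO(total);
    ZERO(step);
    INC(step, 0xFFFFFFFF);               /* step = 2**32 - 1 */
    for (i = 0; i < 3; i++)
        ADD(total, total, step);         /* ok: _a and _c are distinct */
    INC1(total);
    printf("%.0f\n", DOUBLE(total));     /* prints 12884901886, i.e. 3*(2**32-1) + 1 */
    return 0;
}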

fouts@orville.nas.nasa.gov (Marty Fouts) (02/21/88)

In article <711@elxsi.UUCP> rw@beatnix.UUCP (Russell Williams) writes:

>>
>   We tried making longs 64 bits and lots of programs wouldn't run, so
>we put int=long=32 bits and added a "long long" type.  Another example
>of common practice along the lines of dereferencing null pointers.  
>Customers don't like being told their programs are non-portable ("fine,
>I'll buy a Vax"), and of course we're stuck with fixing the AT&T and
>Berkeley code.  In the real world the portability of a C program is
>not really defined by K&R but by what a Vax running Unix does.
>

I frequently use computers from two vendors with machines capable of
64 bit integer arithmetic.  Amdahl also has a "long long" data type,
but Cray bit the bullet and made long 64 bits.  The Cray software
people had to fix a lot of problems up front, but now they are in a
better position for dealing with 64 bit addresses.

Amdahl is going to have to face this problem soon anyway, because we
need the ability to address files > 4 GB in length (generated on the
Cray) on the Amdahl.  Of course, under UTS lseek has a long (not long
long) argument parameter because of all of the programs that would
break if sizeof(long) != sizeof(int), so lseek won't work as is on
these big files.  So, if we are going to get > 4 GB files, we either
need to change the lseek parameter to 64 bits (by either making it
long long, or by making long 64 bits) or we have to invent a new
system call (say "llseek" for seek with a long long parameter.)

In either case, programs are going to break and have to be fixed.  And
none of the fixes are going to be trivial. More programs than if
Amdahl had made long 64 bits in the first place.  To me the moral of
the story is that expediency *always* comes home to roost eventually.

C'est La Guerre.

Marty

mac3n@babbage.acc.virginia.edu (Alex Colvin) (02/21/88)

 >Other uses of more than 32 bits are for social security numbers.

Note that you don't really need integers for this, unless you plan to do
arithmetic on SSNs.

eugene@pioneer.arpa (Eugene N. Miya) (02/21/88)

In article <20022@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>What use would 1GB of main memory be on a Vax750 for example?

This IS Lipton's Massive Memory Machine at Princeton.  (Or did it have
128 MBs on a 750? ;-)

>NOTE: This is not to say that *no* one can make use of massive memory,
>but the applications diminish as the memory grows,

I would question this.

>The real point is
>that there are economic trade-offs and there is definitely a point
>where you'd be better off spending your money on a faster CPU
>(assuming finite money) than more memory.
> . . .
>poster's address might indicate a special case environment.)

Arvin Park (SIGMETRICS 87 and one of Lipton's grad students) noted that
they felt it was better to buy memory to fit the application than to get
the CPU (note: it just tends to happen that the bigger expensive machines
also tend to have the bigger physical address spaces as well).  I think
that we should not separate the world into special case environments.
This is what got us into trouble to begin with.  Computing demand expands
to fill available compute power.  Ten years from now, we may think
what people put on machines would in some cases be frivolous (imagine
sending a spreadsheet back to the days of the early 360s: "Ah! such a
waste of cycles!").  The best thing to do is look at the computational
complexity of various problems: graphics is frequently O(n^2) and O(n^3),
mostly because of the data volume, and so on.

From the Rock of Ages Home for Retired Hackers:

--eugene miya, NASA Ames Research Center, eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {uunet,hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene

jesup@pawl22.pawl.rpi.edu (Randell E. Jesup) (02/21/88)

In article <1638@gumby.mips.COM> earl@mips.COM (Earl Killian) writes:
>In article <3671@diku.dk>, keld@diku.dk (Keld J|rn Simonsen) writes:
>> There is a problem with C that long ints are usually not more
>> than 32 bits, even if the hardware is capable of doing 64 bits
>> adds and subtracts. This I foresee to cause severe problems for
>> C-based accounting software! Comments?

	So who says that long can't be 64 bits?  Last I knew, it had to be
minimum 32 bits (ANSI, I think), and had to be greater than or equal to
int.  If this software uses longs, just recompile.

>However, lest I appear to be arguing against 64-bit support, let me
>first say that, in my opinion, every architecture designed after the
>early 90s will be a 64-bit architecture.  Even PCs will probably use
>64-bit architectures by the end of the 90s.  64 bit addresses and
>integers are too important.

	I doubt it.  Maybe a few will, but PC's?  No way.  64-bit architectures
have few or no advantages for PC work, even most mini work.  Supercomputers
MIGHT decide they need it, but they mostly deal with FP stuff anyway.
Remember, when you go to 64 bits EVERYTHING gets MUCH larger on the chip.
This means larger chips, lower yields, more pins, etc, etc.  Also, ALU's
get slower.  In a CISC, you might see some double-register operations, with
multiple passes through the ALU, but not native.

	When I see a PC with a gig of memory on it, actually USED, I might
reconsider.

     //	Randell Jesup			      Lunge Software Development
    //	Dedicated Amiga Programmer            13 Frear Ave, Troy, NY 12180
 \\//	beowulf!lunge!jesup@steinmetz.UUCP    (518) 272-2942
  \/    (uunet!steinmetz!beowulf!lunge!jesup) BIX: rjesup

bzs@bu-cs.BU.EDU (Barry Shein) (02/22/88)

> >Other uses of more than 32 bits are for social security numbers.
>
>Note that you don't really need integers for this, unless you plan to do
>arithmetic on SSNs.

Comparison is "arithmetic". I believe most machines will perform
load register, compare registers faster than memory-memory compares,
as in:

	LOAD	R1,WANTED	# key we are looking for, held in a register
	LOAD	R2,ARRAY	# pointer to the table of integer keys
LOOP:
	CMP	R1,@R2		# one register/word compare per entry
	BEQ	FOUND
	INCRL	R2		# advance to the next entry
	BR	LOOP

vs.
	LOAD	R1,WANTEDP	# pointer to the key string
	LOAD	R2,ARRAY	# pointer to the table of key strings
LOOP:
	CMPLC	LENGTH,R1,R2	# compare logical character (byte by byte)
	BEQ	FOUND
	BR	LOOP

(obviously the tests for end of table, and the pointer advances in the
second loop, are missing.)

It's no accident that most of the various flavors of identifiers in
OS/370 are limited to 8 bytes (see compare double), or similarly the 6
SIXBIT chars (36 bits) in PDP10 systems.

I will agree that in many cases such concern for speed at the cost
of human interface (upper case only, six-character file names) was
not a very good trade-off, but large databases (which SSNs imply)
are a different matter.
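
A small C sketch of the trade-off being described (the keys and record
count are invented for illustration): the integer version does one word
compare per record, the string version a byte-by-byte compare.

    #include <stdio.h>
    #include <string.h>

    #define NRECS 3

    /* Key held as one integer: one word compare per record. */
    static long find_by_int(long *keys, long wanted)
    {
        long i;
        for (i = 0; i < NRECS; i++)
            if (keys[i] == wanted)                 /* single register compare */
                return i;
        return -1;
    }

    /* Key held as a 9-character string: byte-by-byte compare per record. */
    static long find_by_str(char keys[][10], char *wanted)
    {
        long i;
        for (i = 0; i < NRECS; i++)
            if (memcmp(keys[i], wanted, 9) == 0)   /* memory-to-memory compare */
                return i;
        return -1;
    }

    int main(void)
    {
        long ikeys[NRECS]    = { 123456789L, 987654321L, 111223333L };
        char skeys[NRECS][10] = { "123456789", "987654321", "111223333" };

        printf("int key at %ld, string key at %ld\n",
               find_by_int(ikeys, 987654321L),
               find_by_str(skeys, "987654321"));   /* both print 1 */
        return 0;
    }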

	-Barry Shein, Boston University

earl@mips.COM (Earl Killian) (02/22/88)

In article <401@imagine.PAWL.RPI.EDU> jesup@pawl22.pawl.rpi.edu (Randell E. Jesup) writes:

   >However, lest I appear to be arguing against 64-bit support, let me
   >first say that, in my opinion, every architecture designed after the
   >early 90s will be a 64-bit architecture.  Even PCs will probably use
   >64-bit architectures by the end of the 90s.  64 bit addresses and
   >integers are too important.

   I doubt it.  Maybe a few will, but PC's?  No way.  64-bit architectures
   have few or no advantages for PC work, even most mini work.  Supercomputers
   MIGHT decide they need it, but they mostly deal with FP stuff anyway.

Supercomputers made this decision 10-20 years ago.  "MIGHT" is hardly
appropriate.

   Remember, when you go to 64 bits EVERYTHING gets MUCH larger on the chip.
   This means larger chips, lower yields, more pins, etc., etc.  Also, ALUs
   get slower.  In a CISC, you might see some double-register operations, with
   multiple passes through the ALU, but not native.

A 64-bit cpu in the time frame I was talking about will yield hundreds
of good die per wafer.  Your arguments are simply why we have
32-bit processors TODAY.  Your same sentences could have been used 10
years ago to justify why micros would have to be 8-16 bits.  You'd have
been right, then, but if you'd said that meant micros would never be
32b, you'd have been dead wrong.

   When I see a PC with a gig of memory on it, actually USED, I might
   reconsider.

There were three papers last Friday at ISSCC on 16Mb DRAMs.  That's 2M
bytes per chip.  Even assuming they're sold as 4Mx4 parts, you'll use
8 of them to make a 32b wide memory.  Hence your PC will have 2x8 =
16M bytes of it, simply to satisfy the width requirements of the cpu.
That's the minimum; the maximum memory sold on a PC will probably be
many times higher; e.g. 128Mb, which is enough memory to be profitably
managed with a virtual address space > 4G.  64b micros will exist for
higher end applications than PCs, and it will be natural for the PC
industry to use them.
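
The arithmetic behind that 16M-byte floor, spelled out as a throwaway C
sketch (the 4Mx4 organization is the assumption above; everything else is
just the width math):

    #include <stdio.h>

    int main(void)
    {
        /* One 16Mb DRAM organized as 4M x 4 (the assumption in the posting). */
        long depth_words    = 4L * 1024 * 1024;  /* 4M addresses per chip     */
        long width_bits     = 4;                 /* 4 data bits per chip      */
        long bus_width_bits = 32;                /* width of the cpu's memory */

        long chips_per_bank = bus_width_bits / width_bits;        /*  8       */
        long bytes_per_bank = depth_words * bus_width_bits / 8;   /*  bytes   */

        printf("%ld chips, %ldM bytes minimum\n",
               chips_per_bank, bytes_per_bank / (1024 * 1024));   /*  8, 16M  */
        return 0;
    }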

With the papers showing up this year, it'll be 4-6 years until the
16Mb DRAMs are on the market, and 6-8 years until they're common in
PCs.  32b PCs will have been around long enough, with sufficient horsepower,
that 75% of the applications will require that horsepower.  One of my
friends has an old Macintosh on his desk that he no longer even powers
on, mostly because it is no longer capable of running any of the
interesting new Macintosh software.  It can still run the software he
bought with it at the time, but that's no longer good enough.

Another interesting development at ISSCC: a 20ns CMOS DRAM.  Following
a paper last year on a 30ns BiCMOS DRAM, this suggests that our main
memory is FINALLY going to start keeping up with the recent surge in
processor performance.  20ns DRAMs will allow us to eventually build
small, cheap 250 mips micro-based systems.  Isn't it fun finally being
on the microprocessor growth curve, instead of that stodgy old
mini/mainframe growth curve?

jbs@eddie.MIT.EDU (Jeff Siegal) (02/22/88)

In article <20076@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>> >Other uses of more than 32 bits is for social security numbers.
>>Note that you don't really need integers for this, unless you plan to do
>>arithmetic on SSNs.
>Comparison is "arithmetic". I believe most machines will perform

In all this, nobody seems to notice that SSN's fit quite nicely into
32 bit (signed even) integers.  Or am I missing something?
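
He is not wrong about the range: the largest nine-digit SSN is 999,999,999,
which is under 2^31 - 1 = 2,147,483,647.  A trivial check, in C only for
concreteness:

    #include <stdio.h>

    int main(void)
    {
        long max_ssn   = 999999999L;   /* largest nine-digit SSN          */
        long max_int32 = 2147483647L;  /* 2^31 - 1, largest signed 32-bit */

        printf("fits: %s\n", max_ssn <= max_int32 ? "yes" : "no");  /* yes */
        return 0;
    }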

Jeff Siegal

terry@wsccs.UUCP (terry) (02/23/88)

In article <1333@vaxb.calgary.UUCP>, radford@calgary.UUCP (Radford Neal) writes:
> Why is everyone assuming that if you want greater than 32 address bits
> you need 64?

	Because it's orthogonal if I want to grab the memory in (GOD *FORBID*)
Intel-style chunks.  Besides, if I'm silly, I may want to multiplex by putting
it together like so:

   ----   ----   ----     32  \ 64
___|  |___|  |___|  |___      /
			  data  address

> My guess is that 48 (or maybe even 40) bit addresses will be used.

	48?  48?  Is this some throwback to the old VOS systems from Harris
that had 24?  Porting to and from such an architecture is pure pain,
regardless of how long the company has been building the OS.

> If you assume that memory is dominated by such addresses, or
> other quantities held in the same word size, going to 64 costs 33% more
> than using 48. Note that for the systems mentioned, memory must surely 
> be a large part of the total cost.

	Why, simply because I have the ability to access more memory without
some kludge of a paging algorithm, does my 64 meg cost me more?

> Now I can see the flames coming on this one... How incredibly short-
> sighted! History shows that we *always* need more address bits... Please
> calculate the amount of memory addressable with 48 bits, then ask whether
> systems with this much memory, if they ever appear, will have anything
> like a current machine architecture anyway...

	There are *already* chips capable of addressing a terabyte.  And
they look much the way chips have always looked... in other words,
semi-standard architecture.

> Of course, there are other reasons for wanting long addresses, than just
> access to physical memory.

	You bet!  What if I, for instance, wanted a separate byte address
for every character in every book in the Library of Congress?  Or maybe I
want to do interesting things with a distributed system over an Ethernet.
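
As a back-of-the-envelope check on both points, here is a throwaway C
sketch; the Library of Congress figures are rough assumptions, not
measured numbers:

    #include <stdio.h>

    int main(void)
    {
        /* How much can 48 address bits reach?  Use double to avoid
           needing a 64-bit integer type for the arithmetic itself. */
        double bytes_48 = 1.0;
        int i;
        for (i = 0; i < 48; i++)
            bytes_48 *= 2.0;                  /* 2^48 = ~2.8e14 bytes */

        /* Assumed: ~20 million books at ~1 million characters each. */
        double loc_chars = 20e6 * 1e6;        /* ~2e13 characters     */

        printf("2^48 bytes           = %.3g\n", bytes_48);
        printf("Library of Congress  = %.3g chars\n", loc_chars);
        printf("ratio                = %.1f\n", bytes_48 / loc_chars); /* ~14 */
        return 0;
    }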

	The main point is to plan ahead.  As to your argument about
whether or not the architecture will be recognizable, so what?  How are
we going to develop new architectures without the capability to play
around with them in the first place?  You have to start somewhere...


| Terry Lambert           UUCP: ...!decvax!utah-cs!century!terry              |
| @ Century Software       or : ...utah-cs!uplherc!sp7040!obie!wsccs!terry    |
| SLC, Utah                                                                   |
|                   These opinions are not my companies, but if you find them |
|                   useful, send a $20.00 donation to Brisbane Australia...   |

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (02/23/88)

In article <15781@beta.UUCP> jxdl@beta.UUCP (Jerry DeLapp) writes:
> [...]
>Address space is like disk space is like memory is like money. You can
>always use up everything you have and then some.

While there are always problems which could be solved to another
significant digit with more power, few problems larger than 500MB are run
even on the Cray-2, which can have up to 2GB of memory, because the CPU
takes a lot of real time to clear/search/etc. that much data.  Until faster
CPUs are common, I doubt that there will be a switch to a much larger
(i.e. 64-bit) address space, because of market pressures.
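
A rough sketch of that point (the sweep rate below is an assumed round
number, not a Cray-2 specification):

    #include <stdio.h>

    int main(void)
    {
        double mem_bytes   = 2.0e9;   /* a 2GB main memory                    */
        double bytes_per_s = 1.0e9;   /* assumed sustained sweep rate, 1 GB/s */

        /* One pass to clear or search the whole memory.  Any algorithm
           that makes many passes multiplies this accordingly. */
        printf("one full sweep: %.1f seconds\n", mem_bytes / bytes_per_s);
        return 0;
    }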

Vendors don't make computers, they make money. If a change increases the
speed of the CPU, virtually every user will notice. If the address space
is made larger only a few users will notice, and most of those have
problems too large for the CPU.

I expect the next step to be in supercomputers, to 36 or 40 bits;
although the address *registers* will probably jump to 64 bits, the CPU
won't have that many address pins.

Educational activity: if you have access to the accounting file and can
see the process size, a look at the data will probably convince you that
few processes are pushing what we have available now.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

abe@j.cc.purdue.edu (Vic Abell) (02/25/88)

In article <9653@steinmetz.steinmetz.UUCP> davidsen@kbsvax.steinmetz.UUCP (William E. Davidsen Jr) writes:
>
>I expect the next step to be in supercomputers, to 36 or 40 bits,

The CYBER 205, its predecessors (the STAR-100 and the CYBER 203), and its
successors (the ETA-10 series) all have 48-bit addressing.