[comp.arch] 64 bits

aglew@mcdurb.Urbana.Gould.COM (12/16/88)

I would encourage everyone to read George Gobel's article
on the various helical scan tape devices in comp.periphs.

In it he makes several comments about 32 bit overflow:
can't write a file > 2.0GB; must lie to "dump" to get estimate
of tape size correct due to 32 bit overflow.

It's long past time that computer systems - compilers and/or
ALUs - supported >32 bit integers. 64 bits here we come!

rpw3@amdcad.AMD.COM (Rob Warnock) (12/16/88)

In article <28200249@mcdurb> aglew@mcdurb.Urbana.Gould.COM writes:

+---------------
| I would encourage everyone to read George Gobel's article
| on the various helical scan tape devices in comp.periphs.
| In it he makes several comments about 32 bit overflow:
| can't write a file > 2.0GB; must lie to "dump" to get estimate
| of tape size correct due to 32 bit overflow.
+---------------

Of course, it would have been a little nicer if people had used
unsigned numbers for unsigned quantities. I mean, 32 bits lets
you count 4.29+ GB, not 2GB....  ;-}   ;-}
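(A tiny C illustration, assuming the usual 32-bit int and unsigned int:)

    #include <stdio.h>

    int main(void)
    {
        /* Byte counts kept in 32-bit quantities: */
        int      scount = 2147483647;     /* signed tops out just over 2.1 GB    */
        unsigned ucount = 4294967295u;    /* unsigned tops out just under 4.3 GB */

        printf("signed   limit: %d bytes\n", scount);
        printf("unsigned limit: %u bytes\n", ucount);
        return 0;
    }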


Rob Warnock
Systems Architecture Consultant

UUCP:	  {amdcad,fortune,sun}!redwood!rpw3
ATTmail:  !rpw3
DDD:	  (415)572-2607
USPS:	  627 26th Ave, San Mateo, CA  94403

mac3n@babbage.acc.virginia.edu (Alex Colvin) (12/16/88)

> It's long past time that computer systems - compilers and/or
> ALUs - supported >32 bit integers. 64 bits here we come!

uh... would 36 hold you for a few days?
and how about running out of characters at 256?

seanf@sco.COM (Sean Fagan) (12/20/88)

> It's long past time that computer systems - compilers and/or
> ALUs - supported >32 bit integers. 64 bits here we come!

Well, Crays are 64-bit machines, as are Elxsis (nice machine that looks a
bit like a 64-bit VAX; everybody should buy one 8-)).  Cybers (my favorite
machines 8-)) are 60-bits in 170-state, and 64-bit in 180-state.  The SPARC
has support, it looks like, for 64- and 128-bit quantities.

The code generator for gcc has support for 64-bit and, I think, 128-bit
objects (actually, the objects can be any length you want; it does, however,
support char, short, int, long, and long long), if I remember correctly.
All you have to do, if you're porting it, is describe the instructions that
use the correct mode(s).
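(A sketch of what that buys you -- "long long" here is the gcc extension just
mentioned, and the %lld printf format assumes a C library that knows about it:)

    #include <stdio.h>

    int main(void)
    {
        /* 5 GB of tape: comfortably past what a 32-bit long can count. */
        long long tape_bytes = 5LL * 1000 * 1000 * 1000;

        printf("%lld bytes\n", tape_bytes);   /* %lld: assumes library support */
        return 0;
    }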

Then there's the Connection Machine:  64k-bit objects.

-- 
Sean Eric Fagan  | "Merry Christmas, drive carefully and have some great sex."
seanf@sco.UUCP   |     -- Art Hoppe
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

khb%chiba@Sun.COM (Keith Bierman - Sun Tactical Engineering) (12/22/88)

In article <1951@scolex> seanf@sco.COM (Sean Fagan) writes:
>> It's long past time that computer systems - compilers and/or
>> ALUs - supported >32 bit integers. 64 bits here we come!
>
>Well, Crays are 64-bit machines, 

But not with 64-bit integers! 

	do i = 1, n
           stuff
	end do

is compiled (if I recall correctly) on a Cray X series machine to use
the address registers for i.  The result is a 24-bit maximum loop count.

It is possible that later compilers, and the new machines (2, Y),
don't have this limitation.



Keith H. Bierman
It's Not My Fault ---- I Voted for Bill & Opus

smcmahon@watvlsi.waterloo.edu (Scott H. McMahon) (12/22/88)

In article <1951@scolex> seanf@sco.COM (Sean Fagan) writes:
>bit like a 64-bit VAX; everybody should buy one 8-)).  Cybers (my favorite
>machines 8-)) are 60-bits in 170-state, and 64-bit in 180-state.  
>

I agree totally.. the CYBER (especially the 180 product line) is my favourite
architecture among the heavy-duty 64-bit frames. I believe the 180 is a good
clean design that too many people are unaware of (not necessarily their fault).
Part of this "ignorance" of the 180 line is CDC's fault. I  have architectural
specs for the VAX, MIPS R/2000, etc. which were all bought from various
publishers at the school bookstore, BUT any knowledge I have about the 180's
comes from a summer job I had. 

It is neat to hear that some people on the net consider CDC machines to be pretty
good, but depressing when they all refer to the 170 line. Does anyone out there
have an opinion re: the 180 line??

-Scott McMahon ......... humble computer geek and student slave.
 (no disclaimers necessary.. student says it all..  :-)
-- 
 S.H. McMahon - 4A Electrical Engineering - University of Waterloo
 UUCP  : {allegra,decvax,utzoo,clyde,uunet}!watmath!watvlsi!smcmahon
 BITNET: smcmahon@watvlsi.UWaterloo.ca
 CDNnet: smcmahon@watvlsi.waterloo.cdn

marc@apollo.COM (Marc Gibian) (12/28/88)

Quite a few years ago, while employed by CDC, I had the pleasure of working on early
180 machines.  They were quite capable and VERY FAST.  The only problem at the time
was that the OS was not yet mature, and we had NO DOCUMENTATION on it.  Therefore,
it was a real chore to accomplish anything on that machine since it was running
NOS/VE not NOS.  It was clear, though, that it was FAR superior to the 170s running
NOS for accomplishing many of the tasks I had.  I have always been disappointed at
the way CDC has been unable to sell these beasts, but that is not news... they have
always had products that were real nice, but never sold.
-- 
Internet: marc@apollo.COM            UUCP: {decvax,mit-erl,yale}!apollo!marc
NETel:    Apollo: 508-256-6600 x7490
(Copyright 1988 by author. All rights reserved.  Free redistribution allowed.)

rw@beatnix.UUCP (Russell Williams) (12/28/88)

In article <1951@scolex> seanf@sco.COM (Sean Fagan) writes:
>> It's long past time that computer systems - compilers and/or
>> ALUs - supported >32 bit integers. 64 bits here we come!
>
>Well, Crays are 64-bit machines, as are Elxsis (nice machine that looks a
>bit like a 64-bit VAX; everybody should buy one 8-)).  Cybers (my favorite

   The Elxsi architecture has very little in common with the VAX except
for a 32 bit virtual address.  The instruction set is not risc, but is
much closer to a risc than to a VAX -- typical cpi for our latest CPU (which
is made out of 2 big boards of ECL) is significantly less than 2.
The registers are 64 bits wide, and 64 bit objects are full citizens.

   Floating-point definitely benefits from this, not just because of fast
64 bit math, but because you can move the items around fast.
The advantages of 64-bit integers are less obvious, though at the lowest
levels it helps because our physical memory addresses are 38 bits wide.

   We went through the same problems other 64 bit folks seem to go through
in porting C programs.  It seems that longs "ought" to be
64 bits, but if they are, a lot of code doesn't work; much C code expects
modulo-2^32 arithmetic even if you're in 64 bit registers, etc.  Word size is another
one of those things like byte ordering -- it's possible to write software
which ports across variations, but it's often ignored when writing 
"portable" software.

   The Elxsi can look like a VAX to software: the FORTRAN compiler accepts VAX
extensions, and various aspects of VMS are emulated to provide source-level
compatibility in many cases (MACRO modules don't run too well...)  Like
the VAX, ports of Unix V.3 and 4.3 BSD are available; unlike the VAX, you
can run all three operating systems simultaneously.

   Nobody wants to look like a VAX, they just want to sell like one.

*****************
My statements do not represent Elxsi's opinions; all copyright and trade mark 
infringements are solely my responsibility.

Russell Williams
..uunet!elxsi!rw
..ucbvax!sun!elxsi!rw

bcase@cup.portal.com (Brian bcase Case) (01/01/89)

>The (ELXSI) instruction set is not risc, but is
>much closer to a risc than to a VAX -- typical cpi for our latest CPU (which
>is made out of 2 big boards of ECL) is significantly less than 2.

Hmmm, this is interesting.  I claim CPI has very little to do with the
RISCiness or CISCiness of an *architecture*, to which the classifications
refer.  The reasons are:  (1) CPI is heavily determined by implementation,
but implementations are not CISC or RISC (perhaps they ought to be, or we
should have classifications for implementations) and (2) CPI is heavily
determined by what instructions are executed, e.g., I can write a program
that uses only the fast instructions of the upcoming re-implementations
of popular CISCs to get the CPI down (this is, in fact, what the updated
compilers for these micros will do), but does this make these architectures
RISCs?  Perhaps some would argue yes, but realistically, the architecture
hasn't changed, i.e., the old, slow instructions are still there, and they
are still slow (and old :-).  Comments?

pcg@aber-cs.UUCP (Piercarlo Grandi) (01/03/89)

In article <13096@cup.portal.com> bcase@cup.portal.com (Brian bcase Case)
writes:

    determined by what instructions are executed, e.g., I can write a program
    that uses only the fast instructions of the upcoming re-implementations
    of popular CISCs to get the CPI down (this is, in fact, what the updated
    compilers for these micros will do), but does this make these
    architectures RISCs?

What about the idea (from the 801 project) that old S/360 architecture is
RISCy, if one uses only RR format instructions and uses RX only for loads and
stores? In most medium/large S/360 and subsequent machines, RR and RX
instructions were hardwired and RISCy in flavour indeed; 16 general purpose
registers etc... can be argued to make for a pretty RISCy machine
architecture in general. I am not sure, but probably the 801 team started
thinking of RISC by looking at the implementation of the RR and RX subset of
medium/large S/360s and successors indeed.

    Perhaps some would argue yes, but realistically,
    the architecture hasn't changed, i.e., the old, slow instruction are
    still there, and they are still slow (and old :-).  Comments?

I take Patterson's 1985 CACM paper as a reasonable (generic) definition for
what is RISC; as I understand that paper, RISC is also the idea that old and
slow instructions DO affect the implementation, by lengthening the cycle time
and therefore slowing down even simple, fast ones.

One thing is RISC architecture, one is implementation indeed; however as far
as I understand it one of the essential claims of RISC advocates is that
while simple architectures may be "less efficient" under some architectural
metric than complex architectures, they are far "more efficient" at
exploiting implementation advances, e.g. new or special technologies that are
much faster but that cannot be used for complex architectures because of low
density.

With this I agree; indeed I think it is one of the main benefits of simple
architectures, whether they exhibit most of the traditional paraphernalia of
RISCs (large caches, large register sets, multiple register windows, or
whatever else the designer of a particular RISC thinks is RISCy :->) or not
(e.g. the transputer, or even the burroughs mainframes :->).
-- 
Piercarlo "Peter" Grandi			INET: pcg@cs.aber.ac.uk
Dept of CS, UCW Aberystwyth, Wales		UUCP: ...!mcvax!ukc!aber-cs!pcg

rw@beatnix.UUCP (Russell Williams) (01/04/89)

In article <13096@cup.portal.com> bcase@cup.portal.com (Brian bcase Case) writes:
>>The (ELXSI) instruction set is not risc, but is
>>much closer to a risc than to a VAX -- typical cpi for our latest CPU (which
>>is made out of 2 big boards of ECL) is significantly less than 2.
>
>Hmmm, this is interesting.  I claim CPI has very little to do with the
>RISCiness or CISCiness of a *architecture*, to which the classifications
>refer.  The reasons are:  (1) CPI is heavily determined by implementation,
>but implementations are not CISC or RISC (perhaps they ought to be, or we
>should have classifications for implementations) and (2) CPI is heavily
>determined by what instructions are executed, e.g., I can write a program
>that uses only the fast instructions of the upcoming re-implementations
>of popular CISCs to get the CPI down (this is, in fact, what the updated
>compilers for these micros will do), but does this make these architectures
>RISCs?  Perhaps some would argue yes, but realistically, the architecture
>hasn't changed, i.e., the old, slow instruction are still there, and they
>are still slow (and old :-).  Comments?

   I agree with you.  I only used the CPI as a rough shorthand because I
didn't want to go into details.  At present, I do think there's a
rough correlation between architectures for which there
are (or could be) implementations with moderate amounts of hardware running 
at low CPIs on typical code and the common vague notions of "risciness".  
Conversely, the most commonly cited disadvantages of complex instruction sets 
are those things that make them difficult to execute in a small number of
cycles, suitable for pipelining: complex addressing modes, instruction formats 
that must be decoded interpretively, lots of backward dependencies in the
code (condition codes set by most instructions), and generally anything that
takes a variable number of cycles.  

   With only a couple of exceptions, the Elxsi instruction set can be divided 
into three classes:
1. Instructions which can be executed with moderate amounts of hardware in
   a single cycle on a pipelined CPU (not counting the inevitable enemies of
   cache misses, register conflicts, etc.)
2. Instructions which can be executed by a few cycles of low-level microcode
   or fancy hardware (e.g. floating point).
3. Instructions which can be punted to software for emulation through type-1
   instructions.  This is hypercode in our terminology; macrocode in Amdahl's.
   Our message system and low-level scheduler fall into this category.

   Type 1 and 2 instructions are the same kinds of things found on any risc
machine; the reasons I consider us further towards the cisc end of the
continuum than SPARC or MIPS are:
1. Class 1 instructions require somewhat more hardware than a "real" risc --
   instructions are hardware-decodeable and there are no multi-step address
   modes, but there are 14 or so of them.
2. There are more class-2 instructions than on some riscs (e.g. there are
   several instructions such as exit which would require lots of hardware to
   reduce to 1 cycle).
3. "Real" riscs don't define class 3-type things as instructions at all.

   On the other hand, there are almost no things that fall into class 2 which
you can (and obviously should) do with class 1 instructions.  This is in 
contrast to the forthcoming low-CPI implementations of popular CISCs.

Russell Williams
..uunet!elxsi!rw
..ucbvax!sun!elxsi!rw

aglew@mcdurb.Urbana.Gould.COM (01/13/89)

>/* Written  7:45 am  Dec 16, 1988 by mac3n@babbage.acc.virginia.edu in mcdurb:comp.arch */
>> It's long past time that computer systems - compilers and/or
>> ALUs - supported >32 bit integers. 64 bits here we come!
>
>uh... would 36 hold you for a few days?
>and how about running out of characters at 256?
>/* End of text from mcdurb:comp.arch */

I just saw an ad for a data vault for a VAX that holds 4-8G.
Given the "3 bits per year" rule of thumb, 36 bits for disk
will be passed soon (if you do the "right thing" and make the disk
an array of bytes - I know, I know, this isn't what's done. But it
should be).

Of course, banks have had terabytes of data for years now.

fritz@cataract.caltech.edu (Fritz Nordby) (01/16/89)

In article <28200260@mcdurb> aglew@mcdurb.Urbana.Gould.COM writes:
>
>>/* Written  7:45 am  Dec 16, 1988 by mac3n@babbage.acc.virginia.edu in mcdurb:comp.arch */
>>> It's long past time that computer systems - compilers and/or
>>> ALUs - supported >32 bit integers. 64 bits here we come!
>>
>>uh... would 36 hold you for a few days?
>>and how about running out of characters at 256?
>>/* End of text from mcdurb:comp.arch */
>
>I just saw an ad for a data vault for a VAX that holds 4-8G.
>Given the "3 bits per year" rule of thumb, 36 bits for disk
>will be passed soon (if you do the "right thing" and make the disk
>an array of bytes - I know, I know, this isn't what's done. But it
>should be).
>
>Of course, banks have had terabytes of data for years now.

"3 bits per year"?  Where do you get that idea?  That's a lot faster than
memories or memory usage is growing.  Consider:

Semiconductor memory devices are growing at about 2 bits per 3 years ...
they've been growing about that fast since the early '70s.  This scaling
is a combination of device scaling (Mead/Conway "lambda" scaling, roughly)
and chip area increases, both of which are roughly exponential processes.

User memory demands are (even more roughly) exponential processes as well.
The best information I have on this is from a CACM article: "Case Study:
IBM's System/360-370 Architecture" (CACM, vol. 30, no. 4 (April 1987),
pp. 291-307).  This article is an interview with Richard Case and Andris
Padegs, "two of the key people responsible for the 360/370 architecture ...."
In the published interview (p.305), Case states (AS is one of the
interviewers):

	Case:  ....  The reason we needed the System/360 in the early 1960s was
	for its 24-bit address space.  That was enough for a decade, and then
	we had to expand to 31 bits.  We had laid the groundwork for that back
	in 1964.  At present we are using up one bit of address space every
	30 months.  That means we are going to eventually reach the point where
	a 31-bit address space is insufficient.  For a while we'll be able to
	get by with a collection of patches and switches, but they will not
	last forever.  Probably by the time we need 34 bits or so there will
	have to be another major upheaval.

	AS:  I presume that XA came out just in the nick of time in 1983.

	Case:  It was late.  Some customers could have taken advantage of it
	earlier.

	AS:  Okay, so in several years you will run out of 31 bits.  You have
	a couple more bits you said you could handle by ad hoc means.  So
	you've got until 1995, and then your upheaval will occur.

	Case:  It's plus or minus a couple of years, at today's rates.  Now
	whether today's rates continue, or whether something else will happen,
	I don't know.  But addressing is the most important driving force in
	instruction-set architecture.

Note that this is about the same as the scalings of memory configurations for
the Cray and VAX processor lines over the 1977-1988 period.  So, 2 bits per
5 years seems a reasonable estimate of how user memory requirements grow.

Note that chip capacities are growing faster than system memory requirements.
Over a 15 year time span, this says that we will need 6 bits more memory
(for the user), and we'll have 10 more bits of addressing per chip; thus
about every 4 years, memory systems halve the number of chips they use.
What does this mean?  Well, since memories all seem to be the same width,
this means that the von Neumann bottleneck is getting narrower and narrower,
at least in scaled terms (i.e., scaled by processor speed): witness the
growing importance of cacheing to system performance in aggressive designs.
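(The arithmetic behind that, as a quick C sketch:)

    #include <stdio.h>

    int main(void)
    {
        double chip_bits_per_year = 2.0 / 3.0;   /* DRAM density growth  */
        double user_bits_per_year = 2.0 / 5.0;   /* memory demand growth */

        printf("over 15 years: user +%.0f bits, chip +%.0f bits\n",
               15.0 * user_bits_per_year, 15.0 * chip_bits_per_year);   /* +6, +10 */
        printf("chip count halves every %.2f years\n",
               1.0 / (chip_bits_per_year - user_bits_per_year));        /* 3.75   */
        return 0;
    }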

		Fritz Nordby.	fritz@vlsi.caltech.edu	cit-vax!cit-vlsi!fritz

aglew@mcdurb.Urbana.Gould.COM (01/17/89)

>"3 bits per year"?  Where do you get that idea?

Woops, you're right. The rule of thumb is 1 bit every 3 years.

Anybody know an attribution for it, anyway?

daveb@geaclib.UUCP (David Collier-Brown) (01/20/89)

From article <28200261@mcdurb>, by aglew@mcdurb.Urbana.Gould.COM:
>>"3 bits per year"?  Where do you get that idea?
> Woops, you're right. The rule of thumb is 1 bit every 3 years.
> Anybody know an attribution for it, anyway?

A partial one: it was attributed to an IBM type, commenting on the
expected lifetime of MVS/XA (31 bits instead of 24, you understand).

--dave
-- 
 David Collier-Brown.  | yunexus!lethe!dave
 Interleaf Canada Inc. |
 1550 Enterprise Rd.   | He's so smart he's dumb.
 Mississauga, Ontario  |       --Joyce C-B

henry@zoo.toronto.edu (Henry Spencer) (08/09/90)

In article <1990Aug8.042631.7093@nlm.nih.gov> states@tech.NLM.NIH.GOV (David States) writes:
>>	... 64-bit integers & pointers, not just 64-bit
>>	datapaths, which micros have had for years in FP).
>
>Maybe, but aside from address generation and floating point, what are
>people going to do with all those bits?  Setting aside address arithmatic,
>most of the time you don't need 32 bit integers and lots of work involves
>bytes or smaller (character strings etc.).

You've just answered your own question.  They'll use 64 bits for the same
thing they use 32 bits for:  address arithmetic.  Making integers and
pointers the same size will be primarily a concession to badly-written
programs (which *know* the two are the same size) and marketing (which
wants to be able to say "64 bits!" without qualifications).
-- 
The 486 is to a modern CPU as a Jules  | Henry Spencer at U of Toronto Zoology
Verne reprint is to a modern SF novel. |  henry@zoo.toronto.edu   utzoo!henry

news@ism780c.isc.com (News system) (08/10/90)

In article <1990Aug8.215735.4197@zoo.toronto.edu> henry@zoo.toronto.edu (Henry Spencer) writes:
<In article <1990Aug8.042631.7093@nlm.nih.gov> states@tech.NLM.NIH.GOV (David States) writes:
<<<     ... 64-bit integers & pointers, not just 64-bit
<<<     datapaths, which micros have had for years in FP).
<<
<<Maybe, but aside from address generation and floating point, what are
<<people going to do with all those bits?  Setting aside address arithmetic,
<<most of the time you don't need 32 bit integers and lots of work involves
<<bytes or smaller (character strings etc.).
<
<You've just answered your own question.  They'll use 64 bits for the same
<thing they use 32 bits for:  address arithmetic.  Making integers and
<pointers the same size will be primarily a concession to badly-written
<programs (which *know* the two are the same size) and marketing (which
<wants to be able to say "64 bits!" without qualifications).

But because the difference between two pointers is a number, we still need a
numeric type with the same number of bits as a pointer.
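(For what it's worth, ANSI C gives that type a name -- ptrdiff_t, in <stddef.h> --
and an implementation with 64-bit pointers is free to make it 64 bits wide.  A
trivial sketch:)

    #include <stddef.h>

    /* Number of elements between two pointers into the same array;
       ptrdiff_t has to be wide enough to hold the result. */
    ptrdiff_t
    count_between(const char *lo, const char *hi)
    {
        return hi - lo;
    }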

     Marv Rubinstein

weaver@weitek.WEITEK.COM (08/10/90)

In article <1990Aug8.215735.4197@zoo.toronto.edu> henry@zoo.toronto.edu (Henry Spencer) writes:
>
>You've just answered your own question.  They'll use 64 bits for the same
>thing they use 32 bits for:  address arithmetic.  Making integers and
>pointers the same size will be primarily a concession to badly-written
>programs (which *know* the two are the same size) and marketing (which

Don't forget array indices, which should be about the same size 
as pointers to support large arrays of small objects. If you are
writing programs with large arrays, you won't want to be limited 
to a size much less than the memory available. 

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (08/10/90)

In article <46173@ism780c.isc.com> marv@ism780.UUCP (Marvin Rubenstein) writes:

| But because the difference between two pointers is a number we still need a
| numeric type with the same number of bits as a pointer.

  True, but it's not as bad as it might seem, at least in C, since it's
the number of items, not bytes, and unless the array is type char the
limit is larger.

  I would expect new systems to have an int large enough to hold the ordinal
of any byte in memory, but that doesn't imply that a pointer must be
the same format or even size as an int. There may be good reasons why a
pointer is larger (more bits) than an int, so storing a pointer in an
int might not work, even if the number of bytes addressable by a
pointer will fit in an int.
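(The classic trap, sketched for a hypothetical machine with 64-bit pointers and
32-bit ints:)

    /* On such a machine the cast throws away the high 32 bits of the
       pointer, so the round trip is not an identity. */
    char *
    round_trip(char *p)
    {
        int i = (int) p;        /* quietly truncated            */
        return (char *) i;      /* need not compare equal to p  */
    }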

-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
 "This is your computer. This is your computer on OS/2. Any questions?"

meissner@osf.org (Michael Meissner) (08/10/90)

Regarding 64 bit ints, and such -- another thing that will soon need
to be 64 bits on UNIX systems is the time_t value that time() returns.  This is
the number of seconds since the UNIX epoch (Jan 1, 1970), and with 32
bit signed ints, it runs out on Monday January 18 at 22:14:07 in the
year 2038.
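(A quick way to see the date for yourself -- this assumes your time_t will hold
the value and, of course, ctime() reports local time:)

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t doom = 2147483647;    /* 2^31 - 1 seconds past Jan 1, 1970 */

        printf("%s", ctime(&doom));  /* Tue Jan 19 03:14:07 2038 GMT, which is
                                        Mon Jan 18 22:14:07 US Eastern time */
        return 0;
    }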

--
Michael Meissner	email: meissner@osf.org		phone: 617-621-8861
Open Software Foundation, 11 Cambridge Center, Cambridge, MA, 02142

Do apple growers tell their kids money doesn't grow on bushes?

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (08/10/90)

In article <MEISSNER.90Aug10101645@osf.osf.org> meissner@osf.org (Michael Meissner) writes:
| Regarding 64 bit ints, and such -- another thing that will soon need
| to be 64 bits on UNIX systems, is the value time_t returns.  This is
| the number of seconds since the UNIX epoch (Jan 1, 1970), and with 32
| bit signed ints, it runs out on Monday January 18 at 22:14:07 in the
| year 2038.

  Soon? I'm not too worried about this; a change to unsigned would give
us until 2106, and frankly I doubt that UNIX will be
around that long, and I know I sure as hell won't be!

  If UNIX is around in fifty years, we'll all be running SysV.30 on
embedded nanotech in our brains, connected with TFS, the Telepathic File
System. The techies will run BSD 4.16, and FSF and OSF will be almost
ready to ship a production release.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
 "This is your computer. This is your computer on OS/2. Any questions?"

sysmgr@KING.ENG.UMD.EDU (Doug Mohney) (08/11/90)

In article <MEISSNER.90Aug10101645@osf.osf.org>, meissner@osf.org (Michael Meissner) writes:
>Regarding 64 bit ints, and such -- another thing that will soon need
>to be 64 bits on UNIX systems, is the value time_t returns.  This is
>the number of seconds since the UNIX epoch (Jan 1, 1970), and with 32
>bit signed ints, it runs out on Monday January 18 at 22:14:07 in the
>year 2038.

You think anything which is currently running UNIX is going to still be 
chugging 48 years from now? You think we'll be stuck with UNIX for
48 years? ;-)

khb@chiba.Eng.Sun.COM (Keith Bierman - SPD Advanced Languages) (08/11/90)

In article <0093AFE9.0CF9BA80@KING.ENG.UMD.EDU> sysmgr@KING.ENG.UMD.EDU (Doug Mohney) writes:

...
   >Regarding 64 bit ints, and such -- another thing that will soon need
   >to be 64 bits on UNIX systems, is the value time_t returns.  This is
   >the number of seconds since the UNIX epoch (Jan 1, 1970), and with 32
   >bit signed ints, it runs out on Monday January 18 at 22:14:07 in the
   >year 2038.

   You think anything which is currently running UNIX is going to still be 
   chugging 48 years from now? You think we'll be stuck with UNIX for
   48 years? ;-)

Astro types have dealt with this over the years. It is merely
necessary to pick a new epoch. At JPL, for example, the epoch was
measured from 1950 until a few years back. (* yes there is special sw
to handle the cross over *)

There are good reasons for 64-bit machines; but preservation of the
traditional unix epoch is hardly one of them!
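(The mechanics are nothing more than a fixed offset.  A sketch, using a
2000-01-01 epoch purely for illustration; times before the new epoch come out
negative, which is the cross-over case the special software has to handle:)

    #include <time.h>

    /* Seconds from the UNIX epoch (1970-01-01) to 2000-01-01, both GMT. */
    #define NEW_EPOCH_OFFSET 946684800L

    long
    to_new_epoch(time_t unix_seconds)
    {
        return (long) unix_seconds - NEW_EPOCH_OFFSET;
    }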

--
----------------------------------------------------------------
Keith H. Bierman    kbierman@Eng.Sun.COM | khb@chiba.Eng.Sun.COM
SMI 2550 Garcia 12-33			 | (415 336 2648)   
    Mountain View, CA 94043

henry@zoo.toronto.edu (Henry Spencer) (08/12/90)

In article <MEISSNER.90Aug10101645@osf.osf.org> meissner@osf.org (Michael Meissner) writes:
>Regarding 64 bit ints, and such -- another thing that will soon need
>to be 64 bits on UNIX systems, is the value time_t returns.  This is
>the number of seconds since the UNIX epoch (Jan 1, 1970), and with 32
>bit signed ints, it runs out on Monday January 18 at 22:14:07 in the
>year 2038.

Note that neither ANSI C nor 1003.1 promises that time_t is signed...
although changing that would itself be a bit disruptive.
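(A crude but portable way to find out what your implementation actually did:)

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* (time_t)-1 compares below zero only if time_t is a signed type. */
        printf("time_t is %s\n", (time_t)-1 < 0 ? "signed" : "unsigned");
        return 0;
    }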
-- 
It is not possible to both understand  | Henry Spencer at U of Toronto Zoology
and appreciate Intel CPUs. -D.Wolfskill|  henry@zoo.toronto.edu   utzoo!henry

stevem@f40.inmos.co.uk (Steve Maudsley) (08/16/90)

In <40713@mips.mips.COM> John Mashey discusses some of the issues of space
relating to DRAM. 

To support his premise that big word-size machines will be in common
use, I would like to pass on a few observations about the 
semiconductor industry:

	1. It takes about 10 years for a new manufacturing technique to 
	  become used in production from the time that the proof-of-concept
	  has occured in the labs. Hence, we can identify now the techniques
	  that will be in use in 10 years time.

	2. With a judicious use of imagination, there is now a sufficient set
	  of techniques available to produce 1GBit monolithic DRAMs in 2000.
          These chips will be only about twice as large as current generation
	  DRAMs and that only to get the signal wires out.

	3. In volume production, all DRAM technologies cost the same amount
	  per chip (in dollars), regardless of the generation.

This nebulous set of observations implies that a 16Gbyte machine will have the
same memory cost as a current generation 64Mbyte machine, which we now commonly
use as NFS or X servers and consider an acceptable cost. NOTE: this is real
memory, not virtual memory which is typically 10 times bigger because with 
current technologies it is 10 times cheaper. Therefore, machines with 160Gbyte
of address space will be affordable.

You will need more than 32 bits of address space for these machines.
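(A sketch of the arithmetic, assuming byte addressing:)

    #include <stdio.h>

    /* Smallest n such that 2^n bytes covers the given amount of memory. */
    static int
    bits_needed(double bytes)
    {
        int    n    = 0;
        double span = 1.0;

        while (span < bytes) {
            span *= 2.0;
            n++;
        }
        return n;
    }

    int main(void)
    {
        printf("16 Gbyte  needs %d address bits\n", bits_needed(16e9));    /* 34 */
        printf("160 Gbyte needs %d address bits\n", bits_needed(160e9));   /* 38 */
        return 0;
    }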

I haven't addressed the issue of what you do with it, but certainly we will be
using machines this size for simulating the logic circuits that we will be 
building with the same technologies, because that problem has to be sized to
the number of transistors that we can manufacture.

Stephen 

jkenton@pinocchio.encore.com (Jeff Kenton) (08/16/90)

From article <9660@ganymede.inmos.co.uk>, by stevem@f40.inmos.co.uk (Steve Maudsley):
> 
>	. . . 
> 
> This nebulous set of observations implies that a 16Gbyte machine will have the
> same memory cost as a current generation 64Mbyte machine, which we now commonly
> use as NFS or X servers and consider an acceptable cost. NOTE: this is real
> memory, not virtual memory which is typically 10 times bigger because with 
> current technologies it is 10 times cheaper. Therefore, machines with 160Gbyte
> of address space will be affordable.
> 

Probably right.  In addition to the question of what to do with 16Gbyte
(or 160 Gbyte) is the problem of how fast machines will have to be to
use that memory.  Right now, zeroing memory on a 16Mbyte machine takes
a noticeable number of seconds.  Will machines be 1000 times faster by
the time we have 1000 times more memory?

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      jeff kenton  ---	temporarily at jkenton@pinocchio.encore.com	 
		   ---  always at (617) 894-4508  ---
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (08/17/90)

In article <9660@ganymede.inmos.co.uk> stevem@inmos.co.uk (Steve Maudsley) writes:

| 	2. With a judicious use of imagination, there are now a sufficient set
| 	  of techniques availible to produce 1GBit monolithic DRAMs in 2000.
|           These chips will be only about twice as large as current generation
| 	  DRAMs and that only to get the signal wires out.

  If this comes to pass, we will probably see a trend away from the use
of chips with 1 bit datapath. The possibility of going 4 or 8 bytes wide
gives a useful size memory system on a chip. I have speculated before
that when this happens you will see ECC on the memory chip, since it can
be at least as fast as off chip, and requires a lot fewer support chips
and traces. Remember that the most expensive real estate on earth is the
square inches on a PC board.

  You could also postulate that as sizes of gates go down, at some point
static memory starts to look good again, if only because the capacitors
don't shrink as fast as the gates. Poof! There goes all that refresh
circuitry.

  Old people (like me) can remember the traditional tube radio with five
tubes. Now how about the five chip workstation:

	CPU:	32 or 64 bit, 2MB cache, FPU
	Memory:	32MB, error signals out are {soft,hard} error
	i/o:	16S+2P, SCSI, ethernet, keyboard, mouse
	video:	4k x 3k x {15,18} bits, with NTSC i/o and some DSP
		(yes, I think the PC 4:3 aspect ratio will be used)

  Looks like only four chips.  In 15 years I'm sure that the displays up
to 14" will be some kind of solid state thing, like microLED matrix or
something. I'm a lot less sure about 10 years. Let's call that the fifth
chip, and substitute a really good color LCD until then.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
       "This is your PC. This is your PC on OS/2. Any questions?"

rpeglar@csinc.UUCP (Rob Peglar) (08/17/90)

In article <9660@ganymede.inmos.co.uk>, stevem@f40.inmos.co.uk (Steve Maudsley) writes:
> In <40713@mips.mips.COM> John Mashey discusses some of the issues of space
> relating to DRAM. 
> 
> To support his premise that big word-size machines will be in common
> use, I would like to pass on a few observations about the 
> semiconductor industry
> 
> 	1. It takes about 10 years for a new manufacturing technique to 
> 	  become used in production from the time that the proof-of-concept
> 	  has occured in the labs. Hence, we can identify now the techniques
> 	  that will be in use in 10 years time.
> 
> 	2. With a judicious use of imagination, there are now a sufficient set
> 	  of techniques availible to produce 1GBit monolithic DRAMs in 2000.
>           These chips will be only about twice as large as current generation
> 	  DRAMs and that only to get the signal wires out.
> 
> 	3. In volume production, all DRAM technologies cost the same amount
> 	  per chip (in dollars), regardless of the generation.

No problems with 1. and 2. above.  3. assumes "cost"==marginal cost.  It is
valid to note that as time goes by (and DRAM becomes denser), the cost of
starting up - capital expense - goes up in both real(constant) and today's
dollars.  This is not trivial; in fact, it is the major reason why US
"players" in the semiconductor industry are reluctant to begin projects.
Not so for other nations where the cost of capital is (relative to US)
quite low.  US Memories, where are you?  BTW, no value judgements here, don't
read any in, please.

> 
> This nebulous set of observations implies that a 16Gbyte machine will have the
> same memory cost as a current generation 64Mbyte machine, which we now commonly
> use as NFS or X servers and consider an acceptable cost. NOTE: this is real
> memory, not virtual memory which is typically 10 times bigger because with 
> current technologies it is 10 times cheaper. Therefore, machines with 160Gbyte
> of address space will be affordable.
> 
> You will need more than 32bits address space for these machines.
> 
> I haven't addressed the issue of what you do with it, but certainly we will be
> using machines this size for simulating the logic circuits that we will 

I know quite a few people who would be delighted to have a microprocessor that
had a 64-bit address (text) space.  They would be even more delighted to have a
48-bit or 64-bit data space.  As for "ya gotta have the disk to back this", 
sure, but the cost of external storage (non-volatile) - french for "disk" -
is falling fast ($/GB).  Magnetic media are around $2x00/GB, shop around for
the best x.  Optical media is following the trend.  Even down and dirty
DRAM would be (probably) $70,000-80,000/GB;  certainly feasible for some
people.

The argument that 16GB of disk is unimaginable is rapidly vaporizing into
the other "gee, it's only a PC" myths of days past.  The day of 64-bit
microprocessors backed by hundreds of MB of fast DRAM backed by tens of
GB of disk *on the desktop* is n years away.  I am excited because n is
small.


Rob

-- 
Rob Peglar	Comtrol Corp.	2675 Patton Rd., St. Paul MN 55113
		A Control Systems Company	(800) 926-6876

...uunet!csinc!rpeglar

prins@prins.cs.unc.edu (Jan Prins) (08/17/90)

jkenton@pinocchio.encore.com (Jeff Kenton) writes:

|> [...]  In addition to the question of what to do with 16Gbyte
|>(or 160 Gbyte) is the problem of how fast will machines have to be to
|>use that memory.  Right now, zeroing memory on a 16Mbyte machine takes
|>a noticeable number of seconds.  Will machines be 1000 times faster by
|>the time we have 1000 times more memory?

A machine with a lot of physical memory need not be restricted to have
a single processor.  I think it is within *current* technology to build
a 64K processor CM-2 or MP-1 with 256KB of memory per processor.  That's
16GB of memory that might also be serially accessible to a scalar processor
incorporated in the system which (just to tie in the subject line) would need 
>32 bits to address that memory.  To "zero" all of the memory, use all of the 
processors; it should take less than a second.

                               
--\--  Jan Prins  (prins@cs.unc.edu)       "The claim is `always'...
  /    Computer Science Dept.                 ... no, wait, it is `never'..."
--\--  UNC Chapel Hill  

gadbois@geier.philosophie.uni-stuttgart.de (David Gadbois) (08/17/90)

In article <224@csinc.UUCP> rpeglar@csinc.UUCP (Rob Peglar) writes:

   From: rpeglar@csinc.UUCP (Rob Peglar)
   Date: 17 Aug 90 13:36:42 GMT

   [...]

   I know quite a few people who would be delighted to have a
   microprocessor that had a 64-bit address (text) space.  They would
   be even more delighted to have a 48-bit or 64-bit data space.  As
   for "ya gotta have the disk to back this", sure, but the cost of
   external storage (non-volatile) - french for "disk" - is falling
   fast ($/GB).  Magnetic media are around $2x00/GB, shop around for
   the best x.  Optical media is following the trend.  Even down and
   dirty DRAM would be (probably) $70,000-80,000/GB; certainly
   feasible for some people.

I have noted this before, but it probably bears repeating:  While big
address spaces are certainly desirable, don't forget that we are
talking about powers of 2 here.  Assuming that media for backing store
costs $1.00 per megabyte, enough to support a full 64-bit address
space would set you back $17,592,186,044,416.00.  A 48-bit space at
those rates would cost $268,435,456.00, and even a measly 40-bit (just
one terabyte) one is over a million bucks.  Media costs are going to
have to drop a lot for really big address spaces to be practical.
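(For anyone who wants to check the arithmetic, a sketch assuming 2^20 bytes to
the megabyte:)

    #include <stdio.h>

    int main(void)
    {
        double dollars_per_megabyte = 1.00;
        int    address_bits[] = { 40, 48, 64 };
        int    i, b;

        for (i = 0; i < 3; i++) {
            double megabytes = 1.0;
            for (b = 20; b < address_bits[i]; b++)   /* 2^bits bytes / 2^20 per MB */
                megabytes *= 2.0;
            printf("%2d-bit space: $%.2f\n",
                   address_bits[i], megabytes * dollars_per_megabyte);
        }
        return 0;
    }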

--David Gadbois
gadbois@cs.utexas.edu

mash@mips.COM (John Mashey) (08/18/90)

In article <GADBOIS.90Aug17183952@geier.philosophie.uni-stuttgart.de> gadbois@cs.utexas.edu writes:
>In article <224@csinc.UUCP> rpeglar@csinc.UUCP (Rob Peglar) writes:
>
>   From: rpeglar@csinc.UUCP (Rob Peglar)
>   Date: 17 Aug 90 13:36:42 GMT
>
>   [...]
>
>   I know quite a few people who would be delighted to have a
>   microprocessor that had a 64-bit address (text) space.  They would
>   be even more delighted to have a 48-bit or 64-bit data space.  As
>   for "ya gotta have the disk to back this", sure, but the cost of
>   external storage (non-volatile) - french for "disk" - is falling
>   fast ($/GB).  Magnetic media are around $2x00/GB, shop around for
>   the best x.  Optical media is following the trend.  Even down and
>   dirty DRAM would be (probably) $70,000-80,000/GB; certainly
>   feasible for some people.
>
>I have noted this before, but it probably bears repeating:  While big
>address spaces are certainly desirable, don't forget that we are
>talking about powers of 2 here.  Assuming that media for backing store
>costs $1.00 per megabyte, enough to support a full 64-bit address
>space would set you back $17,592,186,044,416.00.  A 48-bit space at
>those rates would cost $268,435,456.00, and even a measly 40-bit (just
>one terabyte) one is over a million bucks.  Media costs are going to
>have to drop a lot for really big address spaces to be practical.

This is making an assumption that is untrue, i.e., that there is a
rule that says you must have backing store >= virtual memory size.
1) For some classes of problems, you would never try to run them in
physical memory size low enough that you would actually page very much.
2) UNIXes are generally doing better about having less static allocation
of swap space.
3) Maybe somebody can cite the reference, but I recall a paper a few years back
from Gould talking about this exact issue, in some USENIX conference,
where people wanted to run big simulations without needing so much
swap space.
4) Finally, as pointed out before, there are problem classes for which
the practicality is determined by the ability to get more than 32 bits
of virtual address space, regardless of how much physical space is
needed behind it to make performance reasonable.
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	 mash@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash 
DDD:  	408-524-7015, 524-8253 or (main number) 408-720-1700
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

usenet@nlm.nih.gov (usenet news poster) (08/18/90)

gadbois@cs.utexas.edu writes:
> I have noted this before, but it probably bears repeating:  While big
> address spaces are certainly desirable, don't forget that we are
> talking about powers of 2 here.  Assuming that media for backing store
> costs $1.00 per megabyte, enough to support a full 64-bit address
> space would set you back $17,592,186,044,416.00.  A 48-bit space at
> those rates would cost $268,435,456.00, and even a measly 40-bit (just
> one terabyte) one is over a million bucks.  Media costs are going to
> have to drop a lot for really big address spaces to be practical.

Sure for physical memory and we are all spoiled by workstations with
enough RAM that nothing ever page faults anymore (:-), but DEC started
shipping VAXes with 128k of RAM, a 70 MB disk and a 4GB virtual address
space (and it *did* page fault).  Virtual space > physical space can be
useful for some problems.  With very high speed optical communications
coming on line, the concept of a site wide address space could make
sense.  In setting up such a system you would almost certainly want 
physical addresses to be quite sparse ("Sorry, you can't add a disk to
that machine without splitting addresses or reconfiguring the whole
thing in a bigger slot...").

We currently have about 40GB of text data online on various machines,
and I certainly wouldn't mind having it all byte addressable.  You could 
do it in 48 bits, but is it really going to save you that much compared
to going straight to 64?

> --David Gadbois

David States

rpw3@rigden.wpd.sgi.com (Rob Warnock) (08/18/90)

In article <15674@thorin.cs.unc.edu> prins@prins.cs.unc.edu (Jan Prins) writes:
+---------------
| jkenton@pinocchio.encore.com (Jeff Kenton) writes:
| |>use that memory.  Right now, zeroing memory on a 16Mbyte machine takes
| |>a noticeable number of seconds.  Will machines be 1000 times faster by
| |>the time we have 1000 times more memory?
| A machine with a lot of physical memory need not be restricted to have
| a single processor....  To "zero" all of the memory, use all of the 
| processors; it should take less than a second.
+---------------

Nor, for bulk clearing (which is a major overhead for many Unices), do you
have to clear with plain CPU instructions. For example, suppose your memory
is N-way interleaved. It's practically free to add some magic to the memory
controller that lets the CPU store "N" words at a time with the same value
(by cycling all N memories in parallel). Then you can clear in 1/Nth the time.

Other tricks include using DMA or graphics devices to clear memory for you.

Assuming a memory 64 bits wide, 8-way interleaved, clearing with some kind
of hardware-assisted "page mode" access, and a 40ns page mode cycle time on
the RAM, you can clear at a rate of 0.67 sec/Gbyte, or 10.7 seconds for 16 GB.
[If I've done my arithmetic correctly... ;-}  ]
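(For the skeptical, a sketch of the same arithmetic:)

    #include <stdio.h>

    int main(void)
    {
        double bytes_per_cycle = 8.0 * 8.0;      /* 64 bits wide, 8-way interleaved */
        double cycle_seconds   = 40e-9;          /* 40 ns page-mode cycle           */
        double rate            = bytes_per_cycle / cycle_seconds;   /* bytes/sec    */
        double gbyte           = 1024.0 * 1024.0 * 1024.0;

        printf("%.2f sec/GB, %.1f sec for 16 GB\n",
               gbyte / rate, 16.0 * gbyte / rate);   /* 0.67 and 10.7 */
        return 0;
    }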

-Rob

cet1@cl.cam.ac.uk (C.E. Thompson) (08/18/90)

In article <1990Aug17.215527.5822@nlm.nih.gov> states@tech.NLM.NIH.GOV (David States) writes:
>Sure for physical memory and we are all spoiled by workstations with
>enough RAM that nothing ever page faults anymore (:-), but DEC started
>shipping VAXes with 128k of RAM, a 70 MB disk and a 4GB virtual address
>space (and it *did* page fault).  Virtual space > physical space can be
>useful for some problems.  

The VAX had (has) a 4GB virtual address space, but you can't spread objects
throughout the full range. To keep the page tables manageable you have to
use P0 space from the bottom, P1 space from the top, and S space from the
bottom. (Interchange top and bottom if you draw your pictures the other
way up...) What sort of virtual memory architecture are the 64-bit address
proponents envisaging? Reverse map?

Chris Thompson
JANET:    cet1@uk.ac.cam.phx
Internet: cet1%phx.cam.ac.uk@nsfnet-relay.ac.uk

peter@ficc.ferranti.com (Peter da Silva) (08/20/90)

In article <9660@ganymede.inmos.co.uk> stevem@inmos.co.uk (Steve Maudsley) writes:
[machines with 16G of RAM will be affordable in the year 2000]

> You will need more than 32bits address space for these machines.

Depends on whether that's 16Gb of RAM per CPU or not. With 1Gb chips organised
in (say) 32bx32M for 128MB, a 2-chip compute server could cost the equivalent
of $20. Typical major software costs significantly more than this. With a ROM
file system using a single 16MB ROM chip containing the necessary junk to boot
UNIX (or equivalent network-capable O/S) and the software, and an ethernet,
FDDI, or whatever network chip you could sell software modules that you'd buy
and plug into your home PC (which would be a network with a display server
and file server running something like X+UNIX). The actual software could
easily be running on any CPU out there, with any native O/S. Compatibility
would be a matter of network type and network protocols...

For the low end user, 32 bits should suffice for a *long* time to come.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com (currently not working)
peter@hackercorp.com

jjn@hare.cdc.com (jj needham 234-4312) (09/05/90)

In article <18286@ultima.socs.uts.edu.au> jeremy@sinope.socs.uts.edu.au (Jeremy Fitzhardinge) writes:
>rcpieter@svin02.info.win.tue.nl (Tiggr) writes:
>
>>>Does anyone seriously believe that a sane computer vendor will create
>>>a new flavor of hardware in the nineties, with no evidence of any
>>>customers?
>>
>>Who claims there is no evidence of customers?  And if there are
>>customers, somebody will jump in the market with something to suit
>>there needs.  I wouldn't mind to have a nice machine with 64bit
>>registers, databus and addressbus, where the addressbus adresses
>>bits, and not bytes, giving 2^64 BITS of memory.  What I don't
>>really like is thinking about alignment restrictions...
>
>
>The C compiler I use on it is a subset of C that doesn't take full advantage

I may have missed too much of this conversation over the past week or so
but was surprised no CDC people have commented.  I exclude Rob
Peglar and company as the Cyber 200/ETA is different from the Cyber 180.
CDC has lived with a full 64 bit architecture all through the eighties.  

Cyber 180 architecture addressed many things that the 170 (and to a lesser
degree early Cray stuff) lacked: security, memory, and 8 bit bytes.  16
rings of hardware support in addressing with locks and keys thrown in.
48 bit virtual address. (32 bits/segment + 12 bits worth of segments + 4 bits
of hardware).  Not exactly your garden variety PDP-11 clone.

Try using the same media (50K gate CMOS or whatever) as your competitors
who only have to implement 32 bits worth of CPU.  Half your expendable
chip real estate is gone. 

I have spent time both working on a C compiler for the Cyber 180 series and
porting Unix applications to it.  This was very frustrating as you try and
compete with Vaxes and such.  Pointers are clearly incompatible with any idea
of an _int_.  The CPU by design has separate address registers and data
registers.  The 6000, 170, 180 and Crays have A,X (sometimes B,V) register sets and you have to
constantly move pointers back and forth between address and data registers to
get the code to work.  This may not be a problem in other 64 bit systems, but
how many are commercially available?  One thing CDC does not have is competition
in the 64-bit production OS market.

Don't even mention porting something like Oracle.  Oracle Corp. informs 
us that we are one of the few companies to successfully port PL/SQL
to a non-32 bit architecture.  The Oracle kernel wasn't much easier.

In trying to improve both a C compiler and a C application I have found that
the 64 bit register file is best left to an accelerator.  Like the man
said, who needs it?  CDC almost died trying to answer that question. If it
don't look like a PDP-11, you're doomed, man!

People who need 64 bits don't need them to run _egrep_.  They need them for a
specific numeric purpose.  Having a 64-bit floating point unit as part
of a 32 bit scalar implementation seems to be a good compromise.  I
guess everybody always needs more address bits.

Well that ought to stir up some dust ...

jeff

Disclaimer:				| Jeff Needham
If you become any more forgetful, you  	| Oracle Performance Group
will be qualified enough to work in 	| Control Data - Santa Clara, CA
Performance Analysis!			| INTERNET jjn@hare.udev.cdc.com

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (09/05/90)

In article <25414@shamash.cdc.com> jjn@hare.udev.cdc.COM (jj needham 234-4312) writes:

|                                       One thing CDC does not have, competition 
| in the 64-bit production OS market.

  Has something happened to Cray?
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
    VMS is a text-only adventure game. If you win you can use unix.

jjn@hare.cdc.com (jj needham 234-4312) (09/06/90)

In article <2486@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>In article <25414@shamash.cdc.com> jjn@hare.udev.cdc.COM (jj needham 234-4312) writes:
>
>|                                       One thing CDC does not have, competition 
>| in the 64-bit production OS market.
>
>  Has something happened to Cray?
>-- 
>bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
>    VMS is a text-only adventure game. If you win you can use unix.

Fair enough.  I had considered mentioning Cray, SCS, and Convex, but I guess
I was thinking more along the lines of general purpose machines: VAX, 370,
B5000.  The Cray isn't usually considered a general purpose computer.

CDC marketed Cyber 180s in response to what people do on a Vax, just about
anything.  The Vax has such a head start in C applications that it was
always hard to compete on these terms.  The ETA part of the CDC product line
was what I would equate with the Cray.  

later, jeff

Disclaimer:				| Jeff Needham
If you become any more forgetful, you  	| Oracle Performance Group
will be qualified enough to work in 	| Control Data - Santa Clara, CA
Performance Analysis!			| INTERNET jjn@hare.udev.cdc.com