jeff@nsx (Jeff Barr) (02/14/91)
Voracious_users_of_memory and assumers_that_sizeof(char *)==sizeof(int):

Note that people who are building new processor chips (e.g. the MIPS R4000)
say that by 1993 typical high-end micro-based systems are going to have more
than 4 gigabytes of address space, and in many cases this much real memory.

The next jump is to 64 bits, which should last quite a while (I think 64 bits
will address each molecule in your body individually). This is
18,446,744,073,709,551,616 bytes. There may not be another address space
upgrade in the future of the world. This is history in the making. You can
tell your grandchildren about this...

The address space is growing because (apparently) mapping files into the
address space is now a common practice, to avoid all of that icky file I/O.

As always, this means that the size of pointers is in no way related to the
size of any other data type, and our code must not assume this.

Any good ideas on what to do with this much space?

					Jeff
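Jeff's warning about sizeof(char *) vs. sizeof(int) can be checked directly. A minimal sketch (not from the original post; it uses Python's ctypes module to report the platform's C type sizes):

```python
import ctypes

# On an LP64 system a pointer is 8 bytes while int stays at 4, so code
# that round-trips a pointer through an int silently truncates it.
def pointer_fits_in_int() -> bool:
    return ctypes.sizeof(ctypes.c_void_p) <= ctypes.sizeof(ctypes.c_int)

print("sizeof(char *) =", ctypes.sizeof(ctypes.c_void_p))
print("sizeof(int)    =", ctypes.sizeof(ctypes.c_int))
print("pointer survives a trip through an int?", pointer_fits_in_int())
```

On the 64-bit machines Jeff describes, the last line prints False, which is exactly why the assumption breaks.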
davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (02/14/91)
In article <1991Feb13.160718.25759@visix.com> jeff@nsx (Jeff Barr) writes:
| Voracious_users_of_memory and assumers_that_sizeof(char *)==sizeof(int):
|
| Note that people who are building new processor chips (e.g.
| the MIPS R4000) say that by 1993 typical high-end micro-based systems
| are going to have more than 4 gigabytes of address space, and in many
| cases this much real memory.

  I guess if you define high-end micros to mean those with 4GB of memory,
then this will be true. There will always be problems which can use this
much memory, but somehow I can't see why there would be that much memory
on a typical system. As long as cost and failure rate are related to memory
size in some fairly linear way, I think there will be a better fit of
hardware to use.

  I suspect that 90% of the users of computers never run finite element
analysis, linear regression, or anything else which takes 4GB. They read
mail, edit files, develop software, do graphics, and generally don't do
anything intensive. You can even run GNUemacs under X-windows without paging
if you allow about 32MB per user. Serious graphics (4k x 4k x 24bit) will
only take 50MB/image, so you can reasonably run anything remotely well
written in 512MB, and even that could legitimately be called a special case.
--
bill davidsen (davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "I'll come home in one of two ways, the big parade or in a body bag.
   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez
wolfe@vw.ece.cmu.edu (Andrew Wolfe) (02/14/91)
Better yet; I have been told that it is estimated that 64-bits will address all of the subatomic particles in our universe.
mcdonald@aries.scs.uiuc.edu (Doug McDonald) (02/15/91)
In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
>
>Better yet;
>
>I have been told that it is estimated that 64-bits will address all of the
>subatomic particles in our universe.

But it is woefully inadequate to describe all the possible arrangements
of atoms in a grain of sand.

Doug McDonald
djh@xipe.osc.edu (David Heisterberg) (02/15/91)
In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
>I have been told that it is estimated that 64-bits will address all of the
>subatomic particles in our universe.

If that were true you know the U.S. Post Office would jump on it. And I
thought ZIP+4 was bad. However, 2^64 is less than Avogadro's number
(6.023 x 10^23), and 12 grams of carbon contains that many atoms.
--
David J. Heisterberg		djh@osc.edu		And you all know
The Ohio Supercomputer Center	djh@ohstpy.bitnet	security Is mortals'
Columbus, Ohio  43212		ohstpy::djh		chiefest enemy.
ccplumb@rose.uwaterloo.ca (Colin Plumb) (02/15/91)
wolfe@vw.ece.cmu.edu (Andrew Wolfe) wrote:
>
>Better yet;
>
>I have been told that it is estimated that 64-bits will address all of the
>subatomic particles in our universe.

Sorry, there are about 10^80 subatomic particles in the universe, certainly
much more than 10^70. These numbers correspond to 266 and 233 bits,
respectively.

I wonder if there aren't already more than 2^64 bits of computer storage
(DRAM, SRAM, hard drives, floppies, CD-ROM's, Exabyte tapes, magtapes,
dusty card decks, etc.) in the world. Probably not, but if you include
audio CD's, we're way up there. Does anyone want to hazard a guess as to
the correct power of 2?
-- 
	-Colin
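Colin's bit counts can be reproduced exactly. A quick sketch (mine, not from his post): the minimum address width for n distinct items is the smallest w with 2^w >= n, which Python can compute with integer bit lengths:

```python
def bits_to_address(n_items: int) -> int:
    # Smallest w such that 2**w >= n_items, i.e. enough distinct addresses.
    return (n_items - 1).bit_length()

print(bits_to_address(10**80))  # particles in the universe -> 266 bits
print(bits_to_address(10**70))  # -> 233 bits
print(bits_to_address(2**64))   # what 64-bit addressing covers -> 64 bits
```

These match the 266 and 233 figures in the post.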
burley@geech.ai.mit.edu (Craig Burley) (02/15/91)
In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
I have been told that it is estimated that 64-bits will address all of the
subatomic particles in our universe.
Not exactly; 64 bits might be enough to assign each subatomic particle in
our universe a distinct address, but that isn't quite the same thing as
actually addressing them... :-) (Hmm, it would give new meaning to the
phrase "pointer to string", wouldn't it?)
Even so, clearly 64 bits will not be enough. I mean, who wants to commit
to a new architecture that can represent only ONE universe at a time?
Sheesh, the last thing we need is for somebody to tack a virtual universe
segment descriptor onto the 64-bit address and make a 96-bit disjoint
address space or some such thing. :-)
More seriously, one of the uses for pointers is to have them fly around on a
network and have enough information to identify target node and address within
the node. In systems like this, even 64 bits might not be enough when they
get enough nodes (with enough memory) on them. 96 bits, on the other hand,
might well be enough for the next 25-50 years or more. Though I'm not sure
whether anyone has shown that systems built out of distinct processors that
intercommunicate using pointers into each others' address spaces are a better
approach than other, less architecturally demanding, approaches.
--
James Craig Burley, Software Craftsperson burley@ai.mit.edu
adamd@rhi.hi.is (Adam David) (02/15/91)
In <1991Feb13.160718.25759@visix.com> jeff@nsx (Jeff Barr) writes:

>last quite a while (I think 64 bits will address each molecule in
>your body individually). This is 18,446,744,073,709,551,616 bytes.
>There may not be another address space upgrade in the future of the
>world. This is history in the making. You can tell your grandchildren
>about this...

I tend to think it will level off at 128 or 256 bit addressing, and sooner
than we would predict by comparing with today's technology. Our
grandchildren will tell us that it needed to be bigger after all.

>The address space is growing because (apparently) mapping files
>into the address space is now a common practice, to avoid all of that
>icky file I/O.

Real-time 3D display memory takes a fair bit of space; if that were to be
memory-mapped, the address space would also have to be large.

>Any good ideas on what to do with this much space?

Well for starters, it might be enough space for a pretty convincing virtual
world by today's standards. When AI really gets off the ground, large spaces
may well be necessary for this, and for whatever the AI comes up with on its
own. The technology has not become manifest yet for any of this, but how
much of what we have today was predicted by industry scientists? It was the
science fiction writer / visionary type who got it right more often (when he
wasn't totally wrong, that is). The seeds of the necessary technological
breakthroughs are certainly available today; it's only a matter of
synthesising accumulated knowledge and experience.

A few thoughts...
--
Adam David.   adamd@rhi.hi.is
jjensen@convex.UUCP (James Jensen) (02/15/91)
In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
>
>Better yet;
>
>I have been told that it is estimated that 64-bits will address all of the
>subatomic particles in our universe.

Reality check time: the number of atoms in 1 gram of hydrogen =
6.02 x 10^23 (remembered from chemistry long ago). This takes around 80
bits to represent. Maybe you were told 64 bytes, which seems more likely
to me.

Jim Jensen - jjensen@convex.com
jap@convex.cl.msu.edu (Joe Porkka) (02/15/91)
ccplumb@rose.uwaterloo.ca (Colin Plumb) writes:
>wolfe@vw.ece.cmu.edu (Andrew Wolfe) wrote:
>>
>>Better yet;
>>
>>I have been told that it is estimated that 64-bits will address all of the
>>subatomic particles in our universe.

[stuff about 64 not being enuf deleted]

So, let's say 233 or more bits. This would also require subatomic memories,
right? Each represented bit would have to be a fraction of a particle to
get more bits than particles.
drh@duke.cs.duke.edu (D. Richard Hipp) (02/15/91)
In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
>
>Better yet;
>
>I have been told that it is estimated that 64-bits will address all of the
>subatomic particles in our universe.

Methinks you've been told wrong. 2**64 is approximately 2e19. There are
about 6e23 atoms (30000 times 2**64) in a mole of any substance. The earth
has a total volume of about 9e20 cubic meters (roughly 50 times 2**64).
A 2**64 byte memory system in which each bit was stored in a cell of one
square micron would fit inside a cube of less than 6 meters per side.
Perhaps you heard that 64 DIGITS would suffice to address every subatomic
particle...
richard@aiai.ed.ac.uk (Richard Tobin) (02/15/91)
In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
>I have been told that it is estimated that 64-bits will address all of the
>subatomic particles in our universe.

Given that 24 litres of a gas contains 6.02 * 10^23 (which is close to
2^79) molecules, this seems implausible.

-- Richard
--
Richard Tobin,                       JANET: R.Tobin@uk.ac.ed
AI Applications Institute,            ARPA: R.Tobin%uk.ac.ed@nsfnet-relay.ac.uk
Edinburgh University.                 UUCP: ...!ukc!ed.ac.uk!R.Tobin
davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (02/15/91)
In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
| I have been told that it is estimated that 64-bits will address all of the
| subatomic particles in our universe.

  Assuming that you have a 1 gigaBYTE bus on that memory, it will take
316 years (almost 317) to swap a program in.
--
bill davidsen (davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
  "I'll come home in one of two ways, the big parade or in a body bag.
   I prefer the former but I'll take the latter" -Sgt Marco Rodrigez
bs@gauss.mitre.org (Robert D. Silverman) (02/15/91)
In article <BURLEY.91Feb14153832@geech.ai.mit.edu> burley@geech.ai.mit.edu (Craig Burley) writes:
:In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
:
:   I have been told that it is estimated that 64-bits will address all of the
:   subatomic particles in our universe.
:

This claim is grossly false. In fact, it isn't even close.

2^64  ~ 1.84 x 10^19
1 mole ~ 6.02 x 10^23

Thus, there are 4 orders of magnitude difference between 64 bit addressing
and just 1 mole of matter. It has been estimated that there are "around"
10^79 atoms in the universe (from estimates of mass). This is off by 60
orders of magnitude from the supposed 64 bit address space. Even if 10^79
is off by 3 or 4 orders of magnitude, it still makes the above claim
ridiculous.

Ye gads! Doesn't anyone learn basic physics anymore? Can't anyone do
arithmetic?
--
Bob Silverman
#include <std.disclaimer>
Mitre Corporation, Bedford, MA 01730
"You can lead a horse's ass to knowledge, but you can't make him think"
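Silverman's orders-of-magnitude arithmetic is easy to verify. A small sketch (the 10^79 figure is the rough estimate quoted in the thread, not a measured value):

```python
import math

two_64 = 2.0**64               # ~1.84e19 distinct addresses
mole = 6.02e23                 # Avogadro's number
atoms_in_universe = 1e79       # rough estimate quoted in the thread

print(math.log10(mole / two_64))               # ~4.5: a mole dwarfs 2^64
print(math.log10(atoms_in_universe / two_64))  # ~59.7: the universe, more so
```

So 64-bit addressing misses one mole by about four orders of magnitude, and the universe by about sixty, just as the post says.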
sls@beaner.cs.wisc.edu (Steve Scott) (02/16/91)
In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
>
>Better yet;
>
>I have been told that it is estimated that 64-bits will address all of the
>subatomic particles in our universe.

Well, I agree that 2^64 is BIG, but it's nowhere close to that big. The
number of atoms in the universe is something like 10^72 (give or take a
few orders of magnitude :-) ).

--Steve
rmc@snitor.UUCP (Russell Crook) (02/16/91)
I tried mailing this to jeff@nsx, but it bounced. Still...

> (re: addressing every molecule in your body, and telling grandchildren
> about the novelty of an address space extension):
>
> Now, where's my pedant's hat? Ah, here we go ... :->
>
> 64 bits is not enough to address every molecule in your body.
> 2**30 is very close to 10**9; hence 2**64 is roughly 2**4 * 10**9 * 10**9
> or 1.6 * 10**19. Since Avogadro's number (number of molecules in one mole)
> is a bit more than 6 * 10**23, and one mole of water is 18 grams, then
> 2**64 would address about (1.6 * 10**19)/(6 * 10**23) or 2.7 * 10**-5 moles
> of water. Since (pant pant pedant pant) the human body is at least half
> water (I seem to recall 75%+, but who cares :->),
> the address space is insufficient for the alloted task. :-> :->
>
> Now, as an exercise for the reader (who, if he/she/it has had any sense
> will have given up by this point :->), the following question:
>
> Assuming the laws of physics won't get in the way (a relatively large
> assumption, I must say), and that the current
> "4N bytes of memory in three years for the same price as N bytes today"
> assumption holds, when will 2**64 bytes be as cheap as 2**20 bytes (i.e.,
> one megabyte) today?
>
> Two powers of two every three years, 44 powers to go: 44/2 * 3 = 66 years.
>
> So, you SHOULD be able to tell this story to your grandchildren... :->
>
> Pedant hat off!

Needless to say, I consider my body to be somewhat :-> smaller than the
universe as a whole, so 2**64 is certain not to suffice to address the
universe.

Regards...
--
------------------------------------------------------------------------------
Russell Crook, Siemens Nixdorf Information Systems, Toronto Development Centre
2235 Sheppard Ave. E., Willowdale, Ontario, Canada M2J 5B5  +1 416 496 8510
uunet!{imax,lsuc,mnetor}!nixtdc!rmc, rmc%nixtdc.uucp@{eunet.eu,uunet.uu}.net,
rmc.tor@nixdorf.com (in N.A.), rmc.tor@nixpbe.uucp (in Europe)
"... technology so advanced, even we don't know what it does."
------------------------------------------------------------------------------
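The 66-year "exercise for the reader" above can be checked by brute force. A sketch (function name and defaults are mine; the growth assumption is the post's "4N bytes in three years for the price of N today"):

```python
def years_until_affordable(target_bytes, today_bytes=2**20,
                           factor=4, period_years=3):
    # Count how many quadruplings take today's budget up to the target.
    years = 0
    while today_bytes < target_bytes:
        today_bytes *= factor
        years += period_years
    return years

print(years_until_affordable(2**64))  # -> 66
```

44 doublings at two per three-year period gives 22 * 3 = 66 years, agreeing with the post.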
mahar@jetsun.weitek.COM (Mike Mahar) (02/16/91)
In article <WOLFE.91Feb14104818@vw.ece.cmu.edu> wolfe@vw.ece.cmu.edu (Andrew Wolfe) writes:
>
>Better yet;
>
>I have been told that it is estimated that 64-bits will address all of the
>subatomic particles in our universe.

This is clearly untrue. If you have 64Mbit RAM it would only take 2.3
trillion of them to fully populate the memory space. 2.3 trillion is a lot
but we probably have about that much memory in the world today.

It is interesting to note that if you could read or write one 64 bit word
every 10ns it would take 731 years to touch every bit in the machine.
--
"The bug is in the package somewhere." - Anyone who has used Ada
Mike Mahar	UUCP: {turtlevax, cae780}!weitek!mahar
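Mike's two figures check out. A sketch of the arithmetic (chip count for a 2^64-byte memory built from 64 Mbit parts, and full-scan time at his one 64-bit word per 10 ns):

```python
CHIP_BITS = 64 * 2**20             # one 64 Mbit DRAM
TOTAL_BITS = 8 * 2**64             # bits in a 2^64-byte memory
chips = TOTAL_BITS // CHIP_BITS
print(chips)                       # 2**41, about 2.2 trillion chips

WORD_BYTES = 8                     # one 64-bit word
ACCESS_S = 10e-9                   # one access per 10 ns
seconds = (2**64 // WORD_BYTES) * ACCESS_S
print(seconds / (365.25 * 86400))  # ~731 years to touch every word
```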
przemek@liszt.helios.nd.edu (Przemek Klosowski) (02/16/91)
Not even close. 2^64 = (2^10)^6.4 = approx. (10^3)^6.4, or more or less
10^19. Now the Avogadro number (the number of atoms in a mole, e.g. in 28
grams of silicon) is 6*10^23. You are off by several orders of magnitude :^)
--
			przemek klosowski (przemek@ndcvx.cc.nd.edu)
			Physics Dept
			University of Notre Dame IN 46556
hrubin@pop.stat.purdue.edu (Herman Rubin) (02/16/91)
In article <3206@crdos1.crd.ge.COM>, davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:

			.....................

> Assuming that you have a 1 gigaBYTE bus on that memory, it will take
> 316 years (almost 317) to swap a program in.

The CYBER 205, definitely not the fastest machine in the world, can move
roughly 50 megawords (400 megabytes) per second per pipe. Now even allowing
for overhead (vector units are limited to 65535 words, and setup costs),
this time will not be doubled. I see no way to get your pessimistic result.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)
mccalpin@perelandra.cms.udel.edu (John D. McCalpin) (02/17/91)
In article <3206@crdos1.crd.ge.COM>, davidsen@crdos1.crd.ge.COM writes about a program which fills a 64-bit address space:

> Assuming that you have a 1 gigaBYTE bus on that memory, it will take
> 316 years (almost 317) to swap a program in.

On 16 Feb 91 14:25:46 GMT, hrubin@pop.stat.purdue.edu (Herman Rubin) replied:

Herman> [...] I see no way to get your pessimistic result.

Herman probably did not understand what Davidsen was saying.
I get a slightly different number from the following calculation:

perelandra 1% bc
m=2^64				% number of bytes addressable by 64 bits
m
18,446,744,073,709,551,616	% a big number 2*10^19
r=1,000,000,000			% 1 GB/s data bus rate
m/r
18,446,744,073			% time in seconds for transfer
m/r/(86400*365)
584				% time in years for transfer
quit
--
John D. McCalpin			mccalpin@perelandra.cms.udel.edu
Assistant Professor			mccalpin@brahms.udel.edu
College of Marine Studies, U. Del.	J.MCCALPIN/OMNET
hrubin@pop.stat.purdue.edu (Herman Rubin) (02/18/91)
In article <MCCALPIN.91Feb16123129@pereland.cms.udel.edu>, mccalpin@perelandra.cms.udel.edu (John D. McCalpin) writes:
> In article <3206@crdos1.crd.ge.COM>, davidsen@crdos1.crd.ge.COM writes
> about a program which fills a 64-bit address space:
>
> > Assuming that you have a 1 gigaBYTE bus on that memory, it will take
> > 316 years (almost 317) to swap a program in.
>
> On 16 Feb 91 14:25:46 GMT, hrubin@pop.stat.purdue.edu (Herman Rubin) replied:
>
> Herman> [...] I see no way to get your pessimistic result.
>
> Herman probably did not understand what Davidsen was saying.
> I get a slightly different number from the following calculation:

I agree that I did not understand what Davidsen was saying. Now I do, and
I wonder why he was saying it. Is he saying that the full address space
must be fillable by the technology of today or the near future?

Having watched the development of computers, I have seen a problem arise
every time the number of address bits made possible by architectural
advances has exceeded the space provided for in the current architecture;
various kludges have had to be adopted. We are now reaching the stage where
32 bits for address are not enough. For good reasons, having word size a
power of 2 has come in. So we go to 64 bits for address space, and maybe
we can go a few decades without changing it.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)   {purdue,pur-ee}!l.cc!hrubin(UUCP)
mash@mips.COM (John Mashey) (02/21/91)
In article <3197@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>In article <1991Feb13.160718.25759@visix.com> jeff@nsx (Jeff Barr) writes:
>| Voracious_users_of_memory and assumers_that_sizeof(char *)==sizeof(int):
>| Note that people who are building new processor chips (e.g.
>| the MIPS R4000) say that by 1993 typical high-end micro-based systems
>| are going to have more than 4 gigabytes of address space, and in many
>| cases this much real memory.
>  I guess if you define high end micros to mean those with 4GB of
>memory, then this will be true. There will always be problems which can
>use this much memory, but somehow I can't see why there would be that
>much memory on a typical system. As long as cost and failure rate are

Note: I don't think anybody has claimed that 4GB of memory will be
"typical" real soon [obviously, if there is a "typical" number it is
640KB :-)]. We do claim that:
a) More than one company that builds micro-based servers will obviously
   offer 4GB of real memory within just a few years.
b) People often buy 50%-max memory sizes, and a few do buy maxed machines,
   or more likely, upgrade there.
c) There are important applications that care about this. They are not
   necessarily "typical" applications, just ones that happen to be very
   important to reasonable numbers of people, not leading-edge crazies.

Elsewhere, there was a question about desktop applications that might want
this. I thought I posted a long discussion on this, but let me resummarize:
	1) Databases
	2) Video
	3) Image
	4) CAD
	5) G.I.S.
	6) Technical number-crunch
Of these, workstations are relevant to every one for software development,
and at least for 2-5 for end-user usage.

Again, this discussion is NOT claiming that your word processor,
spreadsheet, and mail programs suddenly will croak if they don't go
64-bits, or that all 32-bit machines are suddenly obsolete, but that
convenient more-than-32-bit addressing will be an enabler for certain
classes of applications that are already important, and likely to be more
widespread if they can be made cheap.
--
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	 mash@mips.com OR {ames,decwrl,prls,pyramid}!mips!mash
DDD:  	408-524-7015, 524-8253 or (main number) 408-720-1700
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086
dmocsny@minerva.che.uc.edu (Daniel Mocsny) (02/22/91)
In article <46049@mips.mips.COM> mash@mips.COM (John Mashey) writes:
>Elsewhere, there was a question about desktop applications that might
>want this. I thought I posted a long discussion on this, but
>let me resummarize:
>	1) Databases
>	2) Video
>	3) Image
>	4) CAD
>	5) G.I.S.
>	6) Technical number-crunch

Pardon me if I am repeating points you made earlier, but let me emphasize
that having ridiculous amounts of memory available could potentially speed
up lots of things.

For example, in chemical engineering we often simulate processes that
require a program to evaluate equations of state, physical property
correlations, etc., repeatedly at similar conditions. Depending on the form
of the equation of state or property correlation, the evaluation may be
somewhat lengthy, and/or implicit (i.e., requiring numerical convergence).
A good-sized simulation evaluates an equation of state many times (once per
grid point, per time step). Since conditions may not change appreciably
over a few grid points or time steps, much of the calculation will be
redundant. Researchers exploit this to speed up iterative calculations,
i.e., by using the result of the previous solution for the grid point to
guess a starting value for the next one. However, no matter how fast the
equation of state routine is, a table-lookup routine would be faster. Even
with an interpolation for more precision, it will usually be better,
especially when the alternative is a routine requiring convergence.

So then, given unlimited memory, we can extend the notion of "caching" to
include any potentially redundant calculation. *Many* application programs
involve some element of redundancy. People don't solve completely unique
problems every time they fire up a computer. So if I wanted to do a bunch
of simulation runs, I would be happy to build up a set of large
interpolation tables that could speed things up by a factor of 10 or 100.
Since these tables would be multi-variable, and precision would be nice
also, no meaningful upper bound exists on the amount of memory that might
be potentially useful. In principle, I'd like to have lookup tables
containing every potentially-useful computed result in my field.

Similarly, enormous amounts of memory could help personal computers and
workstations cope with the "bursty" workload of the typical user. While the
user is twiddling her thumbs, the CPU need not be idling. Instead, it can
be stockpiling its memory with all sorts of potentially useful things.
Later, when the user happens to request one of those things, the CPU will
get it much faster from memory than if it had to build it up from scratch
again. The more memory available, the better such a "work-ahead" strategy
could work. By compiling statistics on the user's work habits, the computer
could possibly anticipate the user's next likely command(s), and get a
head-start during idle periods.
--
Dan Mocsny
Internet: dmocsny@minerva.che.uc.edu
Michael.Marsden@newcastle.ac.uk (Michael Marsden) (02/22/91)
dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:
>............. However, no matter how fast the equation of state
>routine is, a table-lookup routine would be faster. Even with an
>interpolation for more precision, it will usually be better, especially
>when the alternative is a routine requiring convergence.
>So then, given unlimited memory, we can extend the notion of "caching"
>to include any potentially redundant calculation. *Many* application
>programs involve some element of redundancy. People don't solve
>completely unique problems every time they fire up a computer. So if
>..............

This sounds a bit like a reduction machine - which stores the results of
each function, and if that function is called again, it simply returns the
previously computed result. This uses a very large amount of memory....
more than 2^32 bits could be useful for evaluating larger programs...

References: (none of which I have read)
  Thakkar, "Selected reprints on Dataflow and Reduction Architectures", IEEE 1986?
  Traub, "An Abstract Parallel Graph Reduction Machine", 12th Computer Architecture Symposium
  Turner, "A New Implementation Technique for Applicative Languages", Software Practice and Experience, Sept 1979
and possibly also:
  Quinn & Deo, "Parallel Graph Algorithms", Computing Surveys 1984.

	-Mike Mars

Michael.Marsden@newcastle.ac.uk     "..never write device drivers
Grad. Student, Uk.Ac.Newcastle       while on acid!"
lindsay@gandalf.cs.cmu.edu (Donald Lindsay) (02/23/91)
In article <7517@uceng.UC.EDU> dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:
>...let me emphasize that having ridiculous amounts of memory
>available could potentially speed up lots of things.
>So then, given unlimited memory, we can extend the notion of "caching"
>to include any potentially redundant calculation. *Many* application
>programs involve some element of redundancy. People don't solve
>completely unique problems every time they fire up a computer. So if
>I wanted to do a bunch of simulation runs, I would be happy to build
>up a set of large interpolation tables that could speed things up by
>a factor of 10 or 100.

This is reminiscent of the old "Godelization" jokes, whereby every program
output had to be registered under its Government-assigned Godel number, so
that no one would ever have to recompute it...

>By compiling statistics on the user's work habits, the computer could
>possibly anticipate the user's next likely command(s), and get a
>head-start during idle periods.

A recent Carnegie Mellon thesis was on anticipating user commands. One of
the major issues is insuring that all uncommanded actions be undoable.
Given that, anticipation is definitely a winning idea in selected problem
domains.

The following was posted two years ago, but it seems relevant again:

Big memories may turn out to be useful in and of themselves. The group at
Sandia that won the Gordon Bell Award - the people with the 1,000 X
speedup - reported an interesting wrinkle. They had a program described as:
Laplace with Dirichlet boundary conditions using Green's function. (If you
want that explained, sorry, ask someone else.) They reduced the problem to
a linear superposition, and then as the last step, they did a matrix
multiply to sum the answers. This took 128 X as much memory as "usual"
(256 MB instead of 2 MB), but made the problem 300 X smaller, in terms of
the FLOPs required.

One of perennial topics in the OS world is the latest idea for using
memory. I don't see why other problem domains shouldn't also find ways to
spend memory.
--
Don		D.C.Lindsay .. temporarily at Carnegie Mellon Robotics
glew@pdx007.intel.com (Andy Glew) (02/23/91)
   One of perennial topics in the OS world is the latest idea for using
   memory. I don't see why other problem domains shouldn't also find ways
   to spend memory.
   --
   Don		D.C.Lindsay .. temporarily at Carnegie Mellon Robotics

Anecdote: the Gould NP1 was supposed to be a massive memory system. It was
not supposed to be shipped with less than 256 megabytes of memory. This was
based on the company's chief technical guru's projections of where DRAM
prices were headed.

But then the DRAM drought occurred, when all the US companies got out of
memory production, and DRAM prices and densities stayed nearly level for a
few years. The first NP1's had to be shipped with "only" 64 megabytes.
Cutting the kernel down so that a 16 megabyte system was useable was a
thorny project.

The NP1 UNIX team used a lot of those ideas for using cheap memory. And
then memory wasn't so cheap anymore...
--
Andy Glew, glew@ichips.intel.com
Intel Corp., M/S JF1-19, 5200 NE Elam Young Parkway,
Hillsboro, Oregon 97124-6497
dmocsny@minerva.che.uc.edu (Daniel Mocsny) (02/25/91)
In article <GLEW.91Feb22201710@pdx007.intel.com> glew@pdx007.intel.com (Andy Glew) writes:
>Anecdote: Gould NP1 was supposed to be a massive memory system. It was not
>supposed to be shipped with less than 256 megabytes of memory. This was based
>on the company's chief technical guru's projections of where DRAM prices
>were headed.
>	But then the DRAM drought occurred, when all the US companies got
>out of memory production, and DRAM prices and densities stayed nearly
>level for a few years.

This was certainly a short-term problem for Gould, and in the fast-moving
computer industry, short-term problems can be fatal. However, from an
historical perspective, was the DRAM shortage anything more than a
temporary glitch? I.e., did it perturb the historical DRAM price trend in
any noticeable way?

From what I have read, the DRAM shortage was a combination of unforeseen
increases in user demand vs. capacity reductions due to falling profit
margins, plus a healthy dose of USA gov't. protectionist intervention.
While we can't predict the future of gov't. trade policy, we can certainly
argue that increasing user DRAM demand by itself is unlikely to interrupt
DRAM price trends permanently. That is because DRAM production uses little
in the way of permanently scarce resources (as far as I know). As long as
the price level is high enough to insure that someone makes a profit, DRAM
makers should (eventually) be able to meet any demand.

In most other industries, two years to respond to a demand surge is not
such a big deal. But in the computer industry, that is a whole product
generation. It is also enough time for your competitor to double in size.

>	The NP1 UNIX team used a lot of those ideas for using cheap memory.
>And then memory wasn't so cheap anymore...

Yes, but if the rest of the industry had been as smart as Gould, the DRAM
shortage wouldn't have occurred. Gould understood the utility of big
memory, as well as the desire of users to buy and use big memory. Too bad
for Gould, and the users, that the other decision-makers were mostly
members of the "Desktop users do not need more than X KB" crowd.

While users have trouble using all the computer power they can get their
hands on, because of vendor-produced obstacles like poor ergonomics,
incompatibilities, etc., the general rule is simple: the more you lower
the "total-system" cost of computer power, the more of it users will buy.
--
Dan Mocsny
Internet: dmocsny@minerva.che.uc.edu
jgreen@alliant.alliant.com (John C Green Jr) (02/26/91)
mash@mips.COM (John Mashey) states:

> We do claim that:
> a) More than one company that builds micro-based servers will obviously
>    offer 4 GB of real memory within just a few years.

I agree. Today, 4 days, not years, later, Alliant announced a 4 GB real
memory Intel i860 micro-based `air cooled supercomputer' (crayette,
affordable supercomputer, minisupercomputer) server for scientific
applications.

Obviously we won't have the market to ourselves for `just a few years.'
Just a few months is more likely.

The details:

Main memory:			256 MB cards.	16 card systems.	4 GB total
Global Cache:			2 MB cards.	8 card systems.		16 MB total
Local Cache per processor:	256 KB/CPU.	14 CPU systems.		3.5 MB total
On Chip Cache per processor:	4/8 KB I/D.	14 CPU systems.		56/112 KB total

Alliant now offers two memory cards: 64 MB and 256 MB
Alliant now offers two global cache cards: 512 KB and 2 MB
Alliant now offers two CPU cards: 4 * i860 & no Local Cache
				  and 2 * i860 & 2 * 256 KB Local Cache

All 8 combinations of memory/global cache/CPU cards are legal
configurations.
tdonahue@prost.bbn.com (Tim Donahue) (02/27/91)
In article <4517@alliant.Alliant.COM>, jgreen@alliant (John C Green Jr) writes:
>mash@mips.COM (John Mashey) states:
>
>> We do claim that:
>> a) More than one company that builds micro-based servers will obviously
>> offer 4 GB of real memory within just a few years.
>
>I agree. Today, 4 days, not years, later Alliant announced a 4 GB real memory
>Intel i860 micro-based `air cooled supercomputer' (crayette, affordable
>supercomputer, minisupercomputer) server for scientific applications.

Congratulations! How many CPUs are in the largest system (14?), what does
it cost, and what does it score on SPECthruput?

>Obviously we won't have the market to ourselves for `just a few years.' Just a
>few months is more likely.

You don't have it to yourself now. As noted in this space before, a
128-processor TC2000 with (128 * 16 Mb =) 2 Gb of real memory is sold,
installed, and running just fine. We announced the architecture, scalable
to 500+ CPUs, in June of 1989. We think "the 128" is the world's largest
U**X (insert favorite qualifier here)-computer.

We're happy to build a 256 (with 4 Gb), call us at 617-873-6000...

Cheers,
Tim
pcg@cs.aber.ac.uk (Piercarlo Grandi) (02/28/91)
On 23 Feb 91 04:17:10 GMT, glew@pdx007.intel.com (Andy Glew) said:

lindsay> One of the perennial topics in the OS world is the latest idea
lindsay> for using memory.

No, no, it is the perennial race to *waste* memory. It is *wasting* memory
that increases sales, not putting it to good use :-).

glew> Anecdote: Gould NP1 was supposed to be a massive memory system. It
glew> was not supposed to be shipped with less than 256 megabytes of
glew> memory. [ ... ] Cutting the kernel down so that a 16 megabyte system
glew> was usable was a thorny project.

Pure billjoysm. I cannot imagine properly designed mechanisms that take
advantage of big memory but that perform poorly with smaller memories.
Save for trace scheduling maybe, but that is definitely not general
purpose.

glew> The NP1 UNIX team used a lot of those ideas for using cheap
glew> memory.

Like 32KB pages that are a win only if you do numerical calculations in
the same memory order as the matrix is stored, so that you have fewer TLB
reloads, for a speedup of maybe 5%, and in all other cases *waste* 50-80%
of memory? :-).

Let's distinguish between ideas for *using* cheap memory and ideas for
*productively using* cheap memory. And let's distinguish between programs
with hardwired assumptions and flexible programs.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk
dfields@radium.urbana.mcd.mot.com (David Fields) (03/01/91)
|>glew> The NP1 UNIX team used a lot of those ideas for using cheap
|>glew> memory.
|>
|>Like 32KB pages that are a win only if you do numerical calculations in
|>the same memory order as the matrix is stored, so that you have less TLB
|>reloads, for a speed up of maybe 5%, and in all other cases *waste*
|>50-80% of memory? :-).
|>
|>Let's distinguish between ideas for *using* cheap memory and ideas for
|>*productively using* cheap memory. And let's distinguish between
|>programs with hardwired assumptions and flexible programs.
|>--
|>Piercarlo Grandi

Sorry, the NP1 used 8k pages. I was only on the periphery of the project,
and while I don't think any of the developers were satisfied with the
amount of memory used or wasted, it wasn't anywhere near the 50-80% you
jest about. If I believed there was a target market for the NP1 I might
try to defend the design decisions but ....

It was fast for the time, however, and it held up very well under very
heavy multi-user loads.

Dave Fields // Motorola Computer Group // dfields@urbana.mcd.mot.com
clc5q@madras.cs.Virginia.EDU (Clark L. Coleman) (03/01/91)
The referenced articles discuss multiprocessor systems with 4GB of real
memory, or the potential to be built with 4GB of real memory.
The point seemed to be that 32-bit addressing was not sufficient, as we
are going to hit the 4GB barrier soon.
My question: Do these machines put all 4GB or 2GB in one linear address
space? I tend to doubt that each of the 128 Intel i860 processors in
the referenced post can address each others' memory in this fashion.
More likely that they each need to address much less than 4GB, in which
case we seem to have lost the original point of the discussion.
I am sure that we will need more than 4GB at some point (especially for
some rare but demanding applications), and that we can chew up 4GB of
address space with memory mapping of the I/O, but the question remains
whether 4GB of REAL memory (not disk drives mapped into memory),
addressable by any single processor (including a processor within a
multiprocessor system) is "just around the corner".
By the way, the Hewlett Packard Precision Architecture can use two
32-bit address registers to perform 64-bit addressing, and has done so
for several years now. Yet press announcements from various companies
seem to claim that they are now the first with this capability. 64-bit
ALU registers are a different story.
-----------------------------------------------------------------------------
"The use of COBOL cripples the mind; its teaching should, therefore, be
regarded as a criminal offence." E.W.Dijkstra, 18th June 1975.
||| clc5q@virginia.edu (Clark L. Coleman)
brooks@physics.llnl.gov (Eugene D. Brooks III) (03/04/91)
In article <1991Feb28.183404.19076@murdoch.acc.Virginia.EDU> clc5q@madras.cs.Virginia.EDU (Clark L. Coleman) writes:
>My question: Do these machines put all 4GB or 2GB in one linear address
>space? I tend to doubt that each of the 128 Intel i860 processors in
>the referenced post can address each others' memory in this fashion.
>More likely that they each need to address much less than 4GB, in which
>case we seem to have lost the original point of the discussion.

The BBN TC2000 supports both local memory and interleaved memory, which is
addressed as linear cache-line addresses that round-robin through the
machine. The natural division, which we are close to on our 128-node
machine, is putting half the memory in the local memories and half in the
interleaved shared memory pool. On the current machine we have, this gives
1GB of linearly addressable shared memory. On a machine with ten times as
many processors, possibly with larger memories on each processor card,
this would exceed 10 GB of linearly addressable shared memory. We expect
any vendor developing highly parallel supercomputers in the future to be
offering machines with more than one thousand processors.

It is clear that addressing must extend beyond 32 bits very soon, for both
file systems larger than 2GB and real physical memories larger than 2GB.
cprice@mips.com (Charlie Price) (03/09/91)
In article <1991Feb28.183404.19076@murdoch.acc.Virginia.EDU> clc5q@madras.cs.Virginia.EDU (Clark L. Coleman) writes:
>
>The referenced articles discuss multiprocessor systems with 4GB of real
>memory, or the potential to be built with 4GB of real memory.
>
>... but the question remains
>whether 4GB of REAL memory (not disk drives mapped into memory),
>addressable by any single processor (including a processor within a
>multiprocessor system) is "just around the corner".

The R6000 processor (MIPS ECL) has a 36-bit physical address, so the CHIP
can address 64 GBytes of physical memory. The virtual space for any
process is 2 GBytes (shades of an 11/70!).

Whether anybody ever builds a box with this that has even 4 GBytes in it
is an entirely separate question.
--
Charlie Price    cprice@mips.mips.com    (408) 720-1700
MIPS Computer Systems / 928 Arques Ave.  MS 1-03 / Sunnyvale, CA 94086-23650
aduane@urbana.mcd.mot.com (Andrew Duane) (03/13/91)
In article <839@spim.mips.COM> cprice@mips.com (Charlie Price) writes:
>The R6000 processor (MIPS ECL) has a 36-bit physical address,
>so the CHIP can address 64 GBytes of physical memory.
>The virtual space for any process is 2 GBytes (shades of an 11/70!).

Reminds me of the T-shirts that DEC put out at a DECUS about 10 years ago,
with the slogan:

	"I don't care what people say
	 36 bits are here to stay."

They cancelled the DEC-20 line a few months later, but I guess history
proved them right after all ...

Andrew L. Duane (JOT-7)                  w:(408)366-4935
Motorola Microcomputer Design Center     decvax!cg-atla!samsung!duane
10700 N. De Anza Boulevard               uunet/
Cupertino, CA  95014                     duane@samsung.com

Only my cat shares my opinions, and she's bit-sliced.