webster@bnr.ca (Brent Webster) (05/08/91)
Just a few comments on Apple's version of virtual memory.

I have a lowly Mac II with 8Meg of RAM and a 40Meg hard drive running
the 7.0b4 OS. I've been running a Smalltalk-80 application on my
SparcStation whose image size is about 6.5 Megs, and I wanted to try
running it on my Mac (speed initially is not a concern).

I went to reconfigure my memory via the Memory control panel, but it
only allowed me to increase the memory to 11 Megs. When I rebooted my
Mac, to my astonishment, my hard disk was short 11 Megs and not the
3 Megs plus some overhead which I expected. I can live with that, but
I am wishing for a more elegant solution.

Running "About this Macintosh..." indicated that the System was taking
ONLY about 1.8Meg, so I figured I had 9 Megs to play with. WRONG!!! The
"Largest Unused Block:" was about 7.2 Meg, and that's all the
Smalltalk-80 application could see.

My questions are:

	Does Smalltalk-80 have to be reworked to see the other 2Meg, or
	do all Mac applications only get to use the "Largest Unused
	Block:"?

	Is the "Largest Unused Block:" limited by the size of the actual
	RAM on board your Mac?

	Will I ever be able to run a 12Meg application on a Mac
	containing 8Meg of RAM?

	Is Apple's version of virtual memory always going to be
	HARDDISK hungry?

	Or is this all fixed in the official "May 13" release of
	System 7?

************************************************************
Brent Webster   (613) 763-4962    * E-mail:
Bell-Northern Research Ltd        * webster@bnr.ca (NetNorth)
P.O. Box 3511, Station C          * webster@bnr.ca.bitnet
OTTAWA, Ont, Canada  K1Y 4H7      *
************************************************************
ml27192@uxa.cso.uiuc.edu (Mark Lanett) (05/08/91)
webster@bnr.ca (Brent Webster) writes:
>I went to reconfigure my memory via the Memory control panel but it
>only allowed me to increase the memory to 11 Megs. When I rebooted my
>Mac, to my astonishment, my harddisk was short 11Megs and not the
>3Megs plus some overhead which I expected. I can live with that
>but I am wishing for a more elegant solution.

There are two ways to deal with this when implementing VM: keep only the
extra pages on disk, or keep a copy of everything on disk. Apple keeps
everything on disk, because then a block in RAM that has not been
changed never needs to be written out to disk again.

>Running "About this Macintosh..." indicated that the System was ONLY
>taking about 1.8Meg so I figured I had 9Megs to play with. WRONG!!!
>The "Largest Unused Block:" was about 7.2 Meg and that's all the
>Smalltalk-80 application could see.

You do have 9 megs, but it's not contiguous. Your ROMs may be in the
middle of that space, or NuBus cards. All cards should be moved to the
highest slots (away from the power supply) to minimize fragmentation.

>	Will I ever be able to run a 12Meg application on a Mac
>	containing 8Meg of ram?

Yes, if you run in 32-bit mode and your machine and applications
support it (only the IIci and later do).
--
//-----------------------------------------------------------------------------
Mark Lanett						ml27192@uxa.cs.uiuc.edu
Software Tools Group, NCSA				mlanett@ncsa.uiuc.edu
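[A quick sketch of why "total free" and "Largest Unused Block:" differ.
This is purely illustrative Python, not Apple's allocator; the hole
positions are made up to mimic 1MB NuBus/ROM holes in the address map.]

```python
# Why 9 MB free can still mean only a ~7 MB largest block: free memory
# that is split by occupied regions cannot satisfy one big allocation.

def largest_unused_block(total_mb, holes):
    """Largest contiguous free run of 1 MB units in a total_mb address
    space, where `holes` is the set of occupied megabyte indices
    (e.g. NuBus cards, ROM). Hypothetical model for illustration."""
    best = run = 0
    for mb in range(total_mb):
        if mb in holes:
            run = 0          # an occupied megabyte breaks the free run
        else:
            run += 1
            best = max(best, run)
    return best

# An 11 MB logical space with cards/ROM sitting at megabytes 7 and 9:
free_total = 11 - 2                          # 9 MB free in total...
largest = largest_unused_block(11, {7, 9})   # ...but only 7 MB contiguous
```

Moving the cards so their holes sit together at the top of the map is
exactly what makes the largest contiguous run grow, even though total
free memory is unchanged.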
glenn@gla-aux.uucp (Glenn L. Austin) (05/09/91)
webster@bnr.ca (Brent Webster) writes:
>Just a few comments on Apple's version of virtual memory.
>I have a lowly Mac II with 8Meg of ram and a 40Meg hard-drive
>running 7.0b4 OS. I've been running a Smalltalk-80 application on
>my SparcStation whose image size is about 6.5 Megs and I wanted to
>try running it on my Mac (speed initially is not a concern).
>I went to reconfigure my memory via the Memory control panel but it
>only allowed me to increase the memory to 11 Megs. When I rebooted my
>Mac, to my astonishment, my harddisk was short 11Megs and not the
>3Megs plus some overhead which I expected. I can live with that
>but I am wishing for a more elegant solution.

This is so that the entire memory map is available. If only the
additional RAM were allocated, where would you put the current RAM so
that you could swap in the additional RAM? There would always be some
overhead, and by allocating the full memory size on disk you avoid
problems with double-writing, memory collisions, etc.

>Running "About this Macintosh..." indicated that the System was ONLY
>taking about 1.8Meg so I figured I had 9Megs to play with. WRONG!!!
>The "Largest Unused Block:" was about 7.2 Meg and that's all the
>Smalltalk-80 application could see.

If you have any cards in NuBus slots, they leave 1MB "holes" in the
memory map. If you want to increase the largest block, move all the
cards as close to the power supply as possible (on 6-slot Macs), or as
suggested by Apple in the docs (which I haven't seen yet...).

>My questions are:
>	Does Smalltalk-80 have to be reworked to see the other 2Meg or
>	do all Mac applications only get to use the "Largest Unused
>	Block:"?

I'll bet that if you went back into the Finder, you would see that the
newest "Largest Unused Block" was now either 1MB or 2MB... Memory is
partitioned under MultiFinder (and System 7), and each partition must
be contiguous within the memory map.

>	Is the "Largest Unused Block:" limited by the size of the actual
>	ram on board your Mac?

Yes and no. The largest unused block is an "artifact" of the
partitioning mentioned above.

>	Will I ever be able to run a 12Meg application on a Mac
>	containing 8Meg of ram?

If you allocate the memory correctly, yes. If I moved my two video
boards to the highest slots, I'd be able to run an application which
used the contiguous memory allocated from the space used by slots 9-C,
adding a total of 4MB to both my available memory and largest unused
block.

>	Is Apple's version of virtual memory always going to be
>	HARDDISK hungry?

Almost everybody else's is. Why should Apple be any different?  :-)
--
===============================================================================
| Glenn L. Austin - Mac Wizard and Auto Racing Driver                         |
| Usenet: glenn@gla-aux.uucp                                                  |
| "Turn too soon, run out of room.  Turn too late, much better fate."         |
dbert@mole.gnu.ai.mit.edu (Douglas Siebert) (05/09/91)
In article <1991May9.042425.598@gla-aux.uucp> glenn@gla-aux.uucp (Glenn L. Austin) writes:
>webster@bnr.ca (Brent Webster) writes:
>>Just a few comments on Apple's version of virtual memory.
>>I have a lowly Mac II with 8Meg of ram and a 40Meg hard-drive
>>running 7.0b4 OS. I've been running a Smalltalk-80 application on
>>my SparcStation whose image size is about 6.5 Megs and I wanted to
>>try running it on my Mac (speed initially is not a concern).
>
>>I went to reconfigure my memory via the Memory control panel but it
>>only allowed me to increase the memory to 11 Megs. When I rebooted my
>>Mac, to my astonishment, my harddisk was short 11Megs and not the
>>3Megs plus some overhead which I expected. I can live with that
>>but I am wishing for a more elegant solution.
>
>This is so that the entire memory map is available. If only the
>additional RAM were allocated, where would you put the current RAM so
>that you could swap in the additional RAM? There would always be some
>overhead, and by allocating the full memory size, you remove problems
>with double-writing, memory collisions, etc.

I wonder how long virtual memory will remain a useful solution, however.
Somewhere in the Mac newsgroups in the last few days someone mentioned
that the memory used by the "power" user doubles in size each year. Now,
considering all the hubbub about older Macs being limited to 16M, I'll
assume that the 1991 power user uses 16M. Take that back ten years, to
1981, and you get 16K (my Atari 800 back then had 48K, so I guess I was
just ahead of my time! :) ). Now I, along with 99.9% of other computer
users at the time, would have laughed at the notion that in ten years
16M would be a "limitation" people are bitching about.

Virtual memory provides a temporary solution, but the way things are
going, I'll wager virtual memory will not be used ten years from now.
Because if memory usage continues to double, we'll be running around
16G (that's 16,384M for those of you not aware what a "G" is :) ). Of
course we'll have run into the 680x0's addressable limit of 4G by then,
but many of us will have migrated to 64-bit processors by then.

Anyway, if we have even 2G of physical RAM by then (a conservative
estimate, I'd bet), we'll need to allocate several GIGS of memory on
our hard disks to act as virtual memory. Even if hard disk technology
progresses as well as it has in the past 10 years (from 10M Winchesters
to 1G Fujitsus), I'd hate to think how *slow* things would be if we
tried to use much of that virtual memory. Of course we'll probably all
be using IBM's SHRAM chips arrayed to give us gig after gig of
"removable RAM" by then anyway.... :)
--
Doug Siebert                    | dbert@gnu.ai.mit.edu
MBA Student (2nd year)          | "All opinions expressed herein are obviously
(starting MS in CS this fall?)  |  superior to yours or you wouldn't have need
The University of Iowa          |  to be reading this, now would you?"  :-)
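[The doubling arithmetic above checks out; here it is made explicit in a
few lines of illustrative Python, with 1024-based units as the post
intends.]

```python
# Back-of-the-envelope check of the doubling claim: 16 KB in 1981,
# doubling every year, reaches 16 MB after 10 years and 16 GB after 20.

def projected_memory_kb(start_kb, years):
    """Memory after `years` of annual doubling, in KB."""
    return start_kb * 2 ** years

kb_1991 = projected_memory_kb(16, 10)   # 16384 KB  = 16 MB
kb_2001 = projected_memory_kb(16, 20)   # 16777216 KB = 16 GB (16,384 MB)
```

Ten doublings is a factor of 1024, which is exactly one unit prefix per
decade: KB to MB to GB.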
philip@pescadero.stanford.edu (Philip Machanick) (05/10/91)
In article <1991May9.162601.19210@mintaka.lcs.mit.edu> dbert@mole.gnu.ai.mit.edu (Douglas Siebert) writes:
>Virtual memory provides a temporary solution, but the way things are going,
>I'll wager virtual memory will not be used ten years from now. Because if
>memory usage continues to double, we'll be running around 16G (that's 16,384M
>for those of you not aware what a "G" is :) ) Of course we'll have run into
>the 680x0's addressable limit of 4G by then, but many of us will have migrated
>to 64-bit processors by then.
>
>Anyway, if we have even 2G of physical RAM by then (a conservative estimate I'd
>bet) we'll need to allocate several GIGS of memory on our harddisks to act as
>virtual memory. Now even if hard disk technology has progressed as well as it
>has in the past 10 years (from 10M winchesters to 1G Fujitsus) I'd hate to
>think how *slow* things would be, if we try to use much of that virtual memory.

You raise some interesting questions. I don't think they mean the end
for VM, but rather the need to reimplement it. For example, one idea
being worked on by VM researchers is the notion that it may be faster to
recompute some data than to page it in and out, so the paging system
gives the program the option of keeping the data or not when it wants to
replace pages. Also, there is serious work on getting higher speed out
of disks: disk arrays give you much faster transfer by putting several
disks in parallel.

Another point: more multitasking may mask the cost of paging, by having
more alternative processes to schedule when a page fault occurs.

It is amazing that the Mac OS has lasted as well as it has (even if
it's starting to look like a case of the 700-year-old axe which has had
3 new blades and 2 new handles).

Philip Machanick
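[The recompute-vs-page idea above boils down to a cost comparison. The
following is a toy model in Python with made-up millisecond figures, not
any real paging system's policy; real systems would measure these
costs.]

```python
# Toy decision rule for the recompute-vs-page tradeoff: recompute a
# region of data instead of paging it when recomputation is cheaper
# than the round trip to disk. All timings are hypothetical.

def should_recompute(recompute_ms, page_in_ms, page_out_ms, dirty):
    """Prefer recomputation when it beats the disk I/O it replaces.
    A clean page only needs a future read; a dirty page must also be
    written out before its frame can be reused."""
    io_cost = page_in_ms + (page_out_ms if dirty else 0)
    return recompute_ms < io_cost

# A 5 ms recomputation vs. a 20 ms page-in: recomputing wins, so the
# paging system could simply discard the page.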
poorman@convex.com (Peter W. Poorman) (05/10/91)
Another point to consider is that virtual memory allows two (or more)
processes to share a portion of their address space. This is an
incredibly powerful capability.

Don't confuse virtual memory with demand paging.

--Pete Poorman
  poorman@convex.com
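[The address-space sharing Poorman describes looks like this in a modern
analogue: named shared memory from the Python standard library. This is
a sketch of the concept, not a Mac OS API; the segment name is assigned
by the OS.]

```python
# Two handles onto one shared-memory segment: writes through one are
# visible through the other with no copying, because both map the same
# physical pages into their (virtual) address space.

from multiprocessing import shared_memory

# One party creates a named segment...
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[0] = 42

# ...another (normally a different process) attaches by name and sees
# the very same bytes.
peer = shared_memory.SharedMemory(name=shm.name)
value = peer.buf[0]    # reads the 42 written above, no copy involved

peer.close()
shm.close()
shm.unlink()           # remove the segment once everyone is done
```

This is what distinguishes virtual memory as a mechanism from demand
paging as one particular use of it: the same mapping hardware that lets
pages live on disk also lets two page tables point at one frame.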
ml27192@uxa.cso.uiuc.edu (Mark Lanett) (05/10/91)
dbert@mole.gnu.ai.mit.edu (Douglas Siebert) writes:
>In article <1991May9.042425.598@gla-aux.uucp> glenn@gla-aux.uucp (Glenn L. Austin) writes:
>I wonder how long virtual memory will remain a useful solution however.
[para deleted]
>Virtual memory provides a temporary solution, but the way things are going,
>I'll wager virtual memory will not be used ten years from now. Because if
>memory usage continues to double, we'll be running around 16G (that's 16,384M
>for those of you not aware what a "G" is :) ) Of course we'll have run into
>the 680x0's addressable limit of 4G by then, but many of us will have migrated
>to 64-bit processors by then.

Virtual memory is never a useful long-term solution -- it's only a
temporary, slightly inconvenient solution to not having enough RAM. If
you only have 8 megs and need to run something larger _just_once_, it's
great: no having to buy more memory, just use the hard disk. If you
need that much memory on a regular basis, tho', you'll quickly find
that it's much too slow. Also, it _does_ cost you that disk space.
(How many optical drives do you have hooked up to give you that gig,
anyway? :-))
--
//-----------------------------------------------------------------------------
Mark Lanett						ml27192@uxa.cs.uiuc.edu
Software Tools Group, NCSA				mlanett@ncsa.uiuc.edu
norton@extro.ucc.su.OZ.AU (Norton Chia) (05/10/91)
While I can appreciate the ability to access so much memory, ultimately
we'll still have to address the issue of whether the CPU can handle
that much workload. The current 680x0 chips have been able to address a
lot of memory for some time, but eventually performance will still be
hampered by the CPU's inability to keep up with all the information it
has to deal with.

I think the ability to access a lot of RAM, either real or virtual, is
great, but only if the need arises. For a personal desktop
computer/workstation, it's up to the individual to decide. For my
money, I'll want the bare minimum of physical RAM that can hold at
least 150% of the largest process's requirements, and let the other
processes take a back seat.

Don't forget, even if the 68040 is so much more powerful than the lowly
68000, it's only that much more... probably at most 20 times,
realistically. Can you imagine having a machine 20 times faster than a
Classic? C'mon, you'll still be left wanting!

Anyway, arguments aside, I think it is very healthy to want more than
we ever do at present. That's the way to advance and keep from feeling
complacent.

Original post as follows:

ml27192@uxa.cso.uiuc.edu (Mark Lanett) writes:
>dbert@mole.gnu.ai.mit.edu (Douglas Siebert) writes:
>>I wonder how long virtual memory will remain a useful solution however.
>[para deleted]
>>Virtual memory provides a temporary solution, but the way things are going,
>>I'll wager virtual memory will not be used ten years from now. Because if
>>memory usage continues to double, we'll be running around 16G (that's 16,384M
>>for those of you not aware what a "G" is :) ) Of course we'll have run into
>>the 680x0's addressable limit of 4G by then, but many of us will have migrated
>>to 64-bit processors by then.

>Virtual memory is never a useful solution -- it's only a temporary,
>slightly inconvenient solution to not having enough RAM. If you only
>have 8 megs and need to run something larger _just_once_ it's great: no
>having to buy more memory, just use the hard disk. If you need that
>much memory on a regular basis, tho', you'll quickly find that it's
>much too slow. Also, it _does_ cost you that disk space. (How many
>optical drives do you have hooked up to give you that gig, anyway? :-))
--
<<<< My employers ignore me, I'm on my own when I speak out in public :^( >>>>
< Norton Chia   | Mail me on norton@extro.ucc.su.OZ.AU   APPLELINK:AUST0240 >
< Micro Support | "There are only 3 types of people in the world:           >
< Uni of Sydney |  Those who can count, and those who can't"                >
edgar@function.mps.ohio-state.edu (Gerald Edgar) (05/10/91)
> If you need that much memory on a regular basis,
>tho', you'll quickly find that it's much too slow.

There are exceptions to this. I find that my use of VM is mostly
keeping a lot of applications open (but not in use). Almost the only
time there are disk swaps is when I switch from one program to another,
so I haven't noticed much slowdown.

I think I miss the 14 megs on my disk more than I miss the extra 8 megs
of RAM... When I get a bigger disk, this may reverse.
--
  Gerald A. Edgar                Internet:  edgar@mps.ohio-state.edu
  Department of Mathematics      Bitnet:    EDGAR@OHSTPY
  The Ohio State University      telephone: 614-292-0395 (Office)
  Columbus, OH 43210             -292-4975 (Math. Dept.) -292-1479 (Dept. Fax)
ml27192@uxa.cso.uiuc.edu (Mark Lanett) (05/11/91)
edgar@function.mps.ohio-state.edu (Gerald Edgar) writes:
>> If you need that much memory on a regular basis,
>>tho', you'll quickly find that it's much too slow.

>There are exceptions to this. I find that my use of VM is mostly keeping
>a lot of applications open (but not being used). Almost the only time
>there are disk swaps is when I switch from one program to another.
>So I haven't noticed much slowdown.

Unfortunately this doesn't work well for me. I find that when some
programs are paged out, paging them back in results in a crash. This is
mainly SADE, but the MPW Linker and MS Word have also failed when being
paged back in. I don't know whether this is a bug in VM or in the
programs...

>I think I miss the 14 megs on my disk more than I miss the extra 8 megs
>of RAM... When I get a bigger disk, this may reverse.

In the case of VM, faster is better than bigger. Of course, the two go
hand in hand.
--
//-----------------------------------------------------------------------------
Mark Lanett						ml27192@uxa.cs.uiuc.edu
Software Tools Group, NCSA				mlanett@ncsa.uiuc.edu
peirce@outpost.UUCP (Michael Peirce) (05/11/91)
In article <1991May10.124723.21125@zaphod.mps.ohio-state.edu>, edgar@function.mps.ohio-state.edu (Gerald Edgar) writes:
> > If you need that much memory on a regular basis,
> >tho', you'll quickly find that it's much too slow.
>
> There are exceptions to this. I find that my use of VM is mostly keeping
> a lot of applications open (but not being used). Almost the only time
> there are disk swaps is when I switch from one program to another.
> So I haven't noticed much slowdown.

This is how I use it. It lets me keep ALL my tools open so I can easily
switch between MPW, ResEdit, MacDraw II, uAccess, the Finder, and the
program I'm working on (for example). They each get a big partition,
and when I switch over to a particular program and use it briefly, that
program is pretty much all in memory. This is MUCH more convenient than
having to quit and relaunch!

-- michael, who just bought a new 200M drive so he can use LOTS of VM!

--  Michael Peirce         -- outpost!peirce@claris.com
--  Peirce Software        -- Suite 301, 719 Hibiscus Place
--  Macintosh Programming  -- San Jose, California 95117
--  & Consulting           -- (408) 244-6554, AppleLink: PEIRCE
cse0735@desire.wright.edu (05/15/91)
In article <1991May8.143042.20137@bigsur.uucp>, webster@bnr.ca (Brent Webster) writes:
>	Is Apple's version of virtual memory always going to be
>	HARDDISK hungry?
>
>	Or is this all fixed in the official "May 13" release of
>	System 7?

I suspect that it was a trade-off for speed, since Apple couldn't
assume that everyone was going to have a super-fast backing store. By
doing a direct 1:1 map of the virtual memory to disk space, they didn't
have to go through extra logic to locate where pages of memory were
stored on the disk. And when you are running on a machine that doesn't
have an I/O coprocessor for hard disk access, with the possibility of
slow drives, I think they could use every iota of speed they could
squeeze out of their memory swapper.

(***********************************)
(*   Chris Blouch                  *)
(*   Wright State University       *)
(*   Dayton, Ohio                  *)
(***********************************)
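[The 1:1-map trade-off above can be made concrete. This is a sketch in
Python with a hypothetical page size and layout, not Apple's actual swap
format: a direct map turns "where is this page on disk?" into pure
arithmetic, while a space-saving packed swap file needs a table lookup
(and the bookkeeping to maintain it).]

```python
# Direct vs. indirect placement of pages in a swap file. The direct
# scheme costs a swap file as large as the whole memory map, but
# locating a page is a shift and mask; the packed scheme saves disk at
# the price of an extra lookup on every fault.

PAGE_SIZE = 4096  # hypothetical page size

def disk_offset_direct(virtual_addr):
    """1:1 map: a page's disk offset equals its page-aligned address."""
    return (virtual_addr // PAGE_SIZE) * PAGE_SIZE

def disk_offset_indirect(virtual_addr, page_table):
    """Packed swap: look up which disk slot holds this page."""
    slot = page_table[virtual_addr // PAGE_SIZE]
    return slot * PAGE_SIZE

# Virtual page 2 (addresses 8192-12287) lands at offset 8192 directly,
# but a packed file might have stored it in slot 0:
off_a = disk_offset_direct(8192 + 5)
off_b = disk_offset_indirect(8192 + 5, {2: 0})
```

On a machine with no I/O coprocessor and a slow drive, shaving that
lookup (and never relocating pages within the file) is a plausible
reason to accept the full-size swap file.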