chip@tct.uucp (Chip Salzenberg) (05/25/90)
[[ Followups to comp.arch ]]

According to jca@pnet01.cts.com (John C. Archambeau):
>peter@ficc.ferranti.com (Peter da Silva) writes:
>>Did you know that C-news runs in small model?
>
>So what if C-News runs in small model.
>A vast majority of C compilers won't.

Competent C compilers can be written in small model.  I once worked on
a C compiler that ran on a PDP-11, which, as everyone knows, is limited
to 64K of data under most (all?) Unix implementations.

The old saw that programs will expand to fill the memory available to
them is true.  It points out that the primary reason why mundane
programs use large memory spaces is the tendency of programmers to use
brute force to attack problems until the computer they're using runs
out of force.  It used to be that the brute force line was crossed
quite early; not so today.  Too bad.

I have in the past focussed almost exclusively on kernel bloat as the
Evil Memory Waster Of Our Time.  However, I now believe that I was
mistaken.  As much as the Unix kernel hackers have caused their baby
to grow in recent years, the utility programs and support code have
caused as much, if not more, bloat than the kernel.  There is plenty
of blame to go around.

As Henry Spencer has so often pointed out, thinking small seems to be
a lost art[*], which is a pity.  The X window system could use a small
thinker, possibly for the purpose of discarding X entirely.

[*] Were I a cynic, I might wonder if thought of any kind is in short
supply among today's programmers.  I might also cite Sturgeon's Law:
"Ninety percent of everything is crap."  However, as I am not a cynic,
I shall refrain.
--
Chip Salzenberg at ComDev/TCT  <chip%tct@ateng.com>, <uunet!ateng!tct!chip>
raob@mullian.ee.mu.oz.au (richard oxbrow) (05/26/90)
In article <265D2FE5.2513@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>[[ Followups to comp.arch ]]
>
>...
>As Henry Spencer has so often pointed out, thinking small seems to be
>a lost art[*], which is a pity.  The X window system could use a small
>thinker, possibly for the purpose of discarding X entirely.

But then we wouldn't need to buy that extra 8 Mbytes of RAM, and what
would we do with all those spare cycles left over ;^)  Anyway, I'm sure
somebody is benefiting from the "fat slow software" syndrome.

richard ..

>[*] Were I a cynic, I might wonder if thought of any kind is in short
>supply among today's programmers.  I might also cite Sturgeon's Law:
>"Ninety percent of everything is crap."  However, as I am not a cynic,
>I shall refrain.

(Maybe I should have posted this to alt.conspiracy)

richard oxbrow            | ee eng, uni of melbourne | Internet raob@mullian.ee.mu.OZ.AU
parkville 3052 australia  | fax +[061][03]344 6678   | Uunet ..!uunet!munnari!mullian!raob
jca@pnet01.cts.com (John C. Archambeau) (05/27/90)
chip@tct.uucp (Chip Salzenberg) writes:
>[[ Followups to comp.arch ]]
>
>According to jca@pnet01.cts.com (John C. Archambeau):
>>peter@ficc.ferranti.com (Peter da Silva) writes:
>>>Did you know that C-news runs in small model?
>>
>>So what if C-News runs in small model.
>>A vast majority of C compilers won't.
>
>Competent C compilers can be written in small model.  I once worked on
>a C compiler that ran on a PDP-11, which as everyone knows, is limited
>to 64K of data under most (all?) Unix implementations.

Which brings forth the argument in favor of progress.  How many people
actually use PDP-11's anymore?  I've seen a few go in and out at garage sales.

// JCA

/*
**--------------------------------------------------------------------------*
** Flames  : /dev/null                  | Small memory model only for
** ARPANET : crash!pnet01!jca@nosc.mil  | Unix?  Get the (*bleep*) out
** INTERNET: jca@pnet01.cts.com         | of here!
** UUCP    : {nosc ucsd hplabs!hd-sdd}!crash!pnet01!jca
**--------------------------------------------------------------------------*
*/
ian@sibyl.eleceng.ua.OZ (Ian Dall) (05/27/90)
In article <265D2FE5.2513@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>The old saw that programs will expand to fill the memory available to
>them is true.  It points out that the primary reason why mundane
>programs use large memory spaces is the tendency of programmers to use
>brute force to attack problems until the computer they're using runs
>out of force.  It used to be that the brute force line was crossed
>quite early; not so today.  Too bad.

Not entirely.  Sure it would be nice if all code was compact, but
achieving it isn't free.  I too did my first serious programming on
PDP-11's (running RT-11, which left one with less than 64k (~45k?) to
work in).  I spent a *lot* of time shoehorning programs into limited
space.  Younger programmers might not have the same skills in that
area, but they *don't need them*.  We all have to get used to the fact
that memory is now about $80/MB and swap space is about $10/MB.  By the
time the project you are working on is finished, these prices might
have halved.  There is just no point in being too stingy with either!
Indeed, these days, an employer could be justified in berating a
programmer for wasting time reducing the size of a program instead of
increasing its speed or functionality or getting it to market earlier!

The same argument applies to a lesser degree to speed.  Generally, so
long as it is small "enough" and fast "enough" (*), there are better
things to do with your time than making it smaller or faster.  The
reality of the industry is that improvements in software lag behind
improvements in hardware, and it makes sense to trade size and
performance for speed of development.

I don't mean to imply that there aren't often big gains to be made by
redesigning things from time to time.  In operating systems, Mach seems
to me a step in the right direction, but I think the improvements are
largely a cleaner, better partitioned OS, with any reduction in code
size being a fringe benefit.
(*) Don't bother telling me that you can always use a faster numerical
analysis routine; so can I.  There are some programs which will not, in
the foreseeable future, ever be fast "enough".
--
Ian Dall     life (n).  A sexually transmitted disease which afflicts
                        some people more severely than others.
lm@snafu.Sun.COM (Larry McVoy) (05/28/90)
In article <640@sibyl.eleceng.ua.OZ> ian@sibyl.OZ (Ian Dall) writes:
>Not entirely.  Sure it would be nice if all code was compact, but
>achieving it isn't free.  I too did my first serious programming on
>PDP-11's (running RT-11 which left one with less the 64k (~45k ?) to
>work in).  I spent a *lot* of time shoe horning programs into limited
>space.  Younger programmers might not have the same skills in that
>area, but they *don't need them*.  We all have to get used to the fact
>that memory is now about $80/MB and swap space is about $10/MB.  By the
>time the project you are working on is finished, these prices might
>have halved.  There is just no point in being too stingy with either!

Yes there is.  It takes time to load that 5meg application.  Disk time,
page fault time.  It takes cache lines, which are not plentiful.  The
fact that memory is, currently, cheap does not give us the right to
squander it.  Last I checked an xclock was 1.3 megs.  Is this reasonable?
---
Larry McVoy, Sun Microsystems     (415) 336-7627       ...!sun!lm or lm@sun.com
greg@sce.carleton.ca (Greg Franks) (05/29/90)
In article <2832@crash.cts.com> jca@pnet01.cts.com (John C. Archambeau) writes:
>Which brings forth the argument in favor of progress.  How many people
>actually use PDP-11's anymore?  I've seen a few go in and out at garage sales.

Ontario Hydro's got nine at the Darlington NGS running the reactors.*
Maybe they replaced them with VAXes running PDP-11 emulation - probably
not though.  Back when the station was designed, chips like the 68000
and 8088 did not even exist.

I wonder what's running the Hubble Space Telescope.  I'll bet it isn't
an 88000, SPARC, 29K or R3000.

* 256K and drum memory.  The older stations use Varian computers
(remember them?) and ancient IBMs with a whopping 32 kilowords of
store.  32 K to run a reactor - 600 K to edit its source code :-(.
--
Greg Franks, (613) 788-5726              |"The reason that God was able to
Systems Engineering, Carleton University,|create the world in seven days is
Ottawa, Ontario, Canada  K1S 5B6.        |that he didn't have to worry about
greg@sce.carleton.ca uunet!mitel!sce!greg|the installed base" -- Enzo Torresi
mitchh@gold.GVG.TEK.COM (Mitch Hendrickson) (05/30/90)
In article <2832@crash.cts.com> jca@pnet01.cts.com (John C. Archambeau) writes:
>Which brings forth the argument in favor of progress.  How many people
>actually use PDP-11's anymore?  I've seen a few go in and out at garage sales.

Well, we for one are still building very functional video editing
systems (basically a realtime control problem) based on PDP-11's (we
did recently start requiring memory management hardware and at least
256k, but...).  We're doing far more with them than many competitors
with "more advanced" hardware.  Works for us....

-Mitch
keller@saturn.ucsc.edu (Jeffrey M. Keller) (05/30/90)
Your xclock binary is 1.3Meg?  It's 48K in X11R4 under SunOS 4.0.3 (sparc).

Jeff Keller    keller@saturn.ucsc.edu    (408)425-5416

    Who *did* shoot JR?
atk@boulder.Colorado.EDU (Alan T. Krantz) (05/30/90)
In article <3886@darkstar.ucsc.edu> keller@saturn.ucsc.edu (Jeffrey M. Keller) writes:
>Your xclock binary is 1.3Meg?  It's 48K in X11R4 under SunOs 4.0.3 (sparc).
>
>Jeff Keller    keller@saturn.ucsc.edu    (408)425-5416
>
>    Who *did* shoot JR?

I suspect the difference is whether you are using dynamic libraries or
static libraries???

------------------------------------------------------------------
| Mail: 1830 22nd street       Email: atk@boulder.colorado.edu   |
|       Apt 16                 Vmail: Home:   (303) 939-8256     |
|       Boulder, Co 80302             Office: (303) 492-8115     |
------------------------------------------------------------------
lm@snafu.Sun.COM (Larry McVoy) (05/30/90)
In article <3886@darkstar.ucsc.edu> keller@saturn.ucsc.edu (Jeffrey M. Keller) writes:
>Your xclock binary is 1.3Meg?  It's 48K in X11R4 under SunOs 4.0.3 (sparc).

Yeah?  Tell me about it.  That executable is dynamically linked.  Go
take a look at how much it uses when it is in memory.  I think you'll
find that a meg or more is very close.  The xclock I run uses about a
meg.  The 1.3 number is an R3 number, I think, from a Dec 3100.
---
Larry McVoy, Sun Microsystems     (415) 336-7627       ...!sun!lm or lm@sun.com
keller@saturn.ucsc.edu (Jeffrey M. Keller) (05/30/90)
In article <21667@boulder.Colorado.EDU> atk@boulder.Colorado.EDU (Alan T. Krantz) writes:
>In article <3886@darkstar.ucsc.edu> keller@saturn.ucsc.edu (Jeffrey M. Keller) writes:
>I suspect the difference is whether you are using dynamic libraries or
>static libraries???

Yes, of course.  But surely shared libraries are a legitimate means of
conserving resources?  True, the libraries do still have to be stored
and loaded...  I suppose one could define an "effective size" which
would still be rather large.
=====================================================================
Jeff Keller    keller@saturn.ucsc.edu    (408)425-5416

THIS LIFE IS A TEST.  IT IS ONLY A TEST.  HAD THIS BEEN A REAL LIFE, YOU
WOULD HAVE BEEN GIVEN INSTRUCTIONS ON WHERE TO GO AND WHAT TO DO.
guy@auspex.auspex.com (Guy Harris) (05/31/90)
>Yes, of course.  But surely shared libraries are a legitimate means of
>conserving resources?  True, the libraries do still have to be stored
>and loaded...  I suppose one could define an "effective size" which would
>still be rather large.

In which case, don't forget to include the size of the big shared
library called "/unix" or "/stand/unix" or "/vmunix" or whatever...
(yes, I'm completely serious about calling it a shared library).
mcculley@alien.enet.dec.com (05/31/90)
In article <854@sce.carleton.ca>, greg@sce.carleton.ca (Greg Franks) writes...
>In article <2832@crash.cts.com> jca@pnet01.cts.com (John C. Archambeau) writes:
>>Which brings forth the argument in favor of progress.  How many people
>>actually use PDP-11's anymore?  I've seen a few go in and out at garage sales.
>
>[...]I wonder what's running the
>Hubble Space telescope.  I'll bet it isn't an 88000, SPARC, 29K or
>R3000.

That reminds me: when I first joined Digital (about 10 years ago)
teaching RSX customer training courses, my first onsite course was
taught in an IBM (!) facility where they were working on a contract to
do a ground support system for the space telescope using an 11/70
running RSX-11M.  Seems IBM was (is?) a DEC OEM...  :-) :-)  (I think
they produced some Unibus hardware options for interprocessor comms.)

Bruce McCulley
RSX Software Development
Digital Equipment Corp.
peter@ficc.ferranti.com (Peter da Silva) (05/31/90)
In article <640@sibyl.eleceng.ua.OZ> ian@sibyl.OZ (Ian Dall) writes:
> We all have to get used to the fact
> that memory is now about $80/MB and swap space is about $10/MB.

Yes, memory is cheap.  It's not free.  And when you run out of it, it
gets very expensive indeed.  Anyone who's tried to do something a
little too big with X and run out of what seemed like plenty of RAM
when you bought it knows what I mean.  Can you say "thrashing"?

Also, cutting down program size speeds things up: paging isn't free,
either.

Finally, that $80/MB is misleading.  First, machines generally have a
hard limit as to how much memory can be crammed in.  After that there
is a *sharp* knee in the price curve.  Second, that memory has to be
addressed, which takes silicon, PCB area, and traces.  Multiply by your
production run and you can save big bucks by being a little skimpy on
memory.  Besides, it will increase the sale of those expensive add-on
boards and upgrades.
--
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.
yarvin-norman@CS.YALE.EDU (Norman Yarvin) (05/31/90)
In article <640@sibyl.eleceng.ua.OZ> ian@sibyl.OZ (Ian Dall) writes:
> We all have to get used to the fact
> that memory is now about $80/MB and swap space is about $10/MB.

And _I/O bandwidth_ is still expensive -- see other discussions in this
newsgroup.  That 1 MB of swap space is going to take a couple of
seconds to swap out, and another couple of seconds to swap back in.

Norman Yarvin                                  yarvin-norman@cs.yale.edu
jkrueger@dgis.dtic.dla.mil (Jon) (05/31/90)
> [many differing points from Peter, Ian, Henry, John]
Some more differing points (reconcile 'em all for free prize :-)
Tyranny of growth rate applies to space as well as time. Cheaper storage
forgives increasingly large constant factors, but not exponential growth.
Added complexity is a hidden cost of bloat.  The larger system is often
the less well-defined system; the costs of developing, validating, and
maintaining it, and the costs of its unreliability, may grow too.
"Software: fast, cheap, reliable -- choose any two". Again "fast"
could mean "small". Explosive growth of space may be a symptom of
getting reliable software at low development cost, but predicts higher
operational cost. Sometimes tradeoffs must be made, but you can trade
the right things for the right things.
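Jon's point about constant factors versus exponential growth can be put
in numbers.  A minimal sketch (the two-year cost-halving period and the
doubling rate below are illustrative assumptions, not figures from the
thread): a one-off k-fold bloat is forgiven by cheaper memory after a
fixed delay, but bloat that itself grows exponentially faster than the
price curve is never forgiven.

```python
from math import log2

# Assumption: memory cost halves every two years, so a fixed budget
# that buys 1 MB today buys 2**(t/2) MB after t years.

def affordable_mb(t_years, base_mb=1.0):
    """MB a fixed budget buys after t years, cost halving every 2 years."""
    return base_mb * 2 ** (t_years / 2)

def catch_up_years(bloat_factor):
    """Years until the old budget again covers a one-off k-fold bloat."""
    return 2 * log2(bloat_factor)

# A one-time 8x bloat (a large constant factor) is absorbed after
# 2 * log2(8) = 6 years:
assert affordable_mb(catch_up_years(8)) == 8.0

# But if program size doubles every year while cost only halves every
# two years, the shortfall grows without bound:
gap = [2 ** t / affordable_mb(t) for t in range(0, 10, 2)]
assert all(later > earlier for earlier, later in zip(gap, gap[1:]))
```

The design point is the one Jon makes: the price curve is itself only a
constant-factor-per-year improvement, so it can forgive any constant,
but not a faster exponential.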
-- Jon
--
Jonathan Krueger jkrueger@dtic.dla.mil uunet!dgis!jkrueger
Drop in next time you're in the tri-planet area!
drd@siia.mv.com (David Dick) (05/31/90)
In <2832@crash.cts.com> jca@pnet01.cts.com (John C. Archambeau) writes:
>Which brings forth the argument in favor of progress.  How many people
>actually use PDP-11's anymore?  I've seen a few go in and out at garage sales.

There must be an enormous number of them in the field.  DEC recently
bowed to their pressure and released two new processors based on the
(J11?) chipset they've been using most recently.

David Dick
Software Innovations, Inc. [the Software Moving Company(sm)]
ian@sibyl.eleceng.ua.OZ (Ian Dall) (06/02/90)
In article <136298@sun.Eng.Sun.COM> lm@sun.UUCP (Larry McVoy) writes:
>In article <640@sibyl.eleceng.ua.OZ> ian@sibyl.OZ (Ian Dall) writes:
>>Not entirely.  Sure it would be nice if all code was compact, but
>>achieving it isn't free......
>> We all have to get used to the fact
>>that memory is now about $80/MB and swap space is about $10/MB.  By the
>>time the project you are working on is finished, these prices might
>>have halved.  There is just no point in being too stingy with either!
>
>Yes there is.  It takes time to load that 5meg application.  Disk time,
>page fault time.  It takes cache lines, which are not plentiful.

Don't forget the "too" I put in front of "stingy".  I never claimed
that program size wasn't a consideration, only that it wasn't as
important a consideration as it used to be.

1.3M does sound too big for an xclock.  (Makes emacs seem lean and
mean!)  We have all our xclients compiled with dynamic libraries here,
so it is hard to check.  It would be interesting to work out what the
effective working set is.

One point about programs which use "standard" libraries is that simple
programs tend to be big but complex programs are not all that much
bigger (the libraries get loaded in both cases).  That is to say that
the relationship between executable size and apparent complexity is
not linear.  I noticed very early in my aforementioned PDP-11
programming era that a (very) small Fortran program actually generated
quite a large executable because it pulled in most of the run time
support library.  Perhaps a similar thing happens with X libraries.

None of which should be taken to mean that I think 1.3MB is reasonable
for an X clock.  If your figures are accurate, some effort should be
spent determining why.
--
Ian Dall     life (n).  A sexually transmitted disease which afflicts
                        some people more severely than others.
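Ian Dall's nonlinearity observation can be sketched with a toy model
(all the numbers below are invented for illustration, not measurements
from the thread): if every statically linked executable carries a large
fixed library cost plus a small marginal cost per unit of program
complexity, executable size grows much more slowly than apparent
complexity.

```python
# Assumed, illustrative constants: a fixed runtime/X library overhead
# linked into every program, plus a small amount of code per "unit"
# of program complexity.
LIB_KB = 300       # fixed library overhead, KB
MARGINAL_KB = 2    # program code per complexity unit, KB

def exe_size_kb(complexity_units):
    """Executable size under the fixed-overhead-plus-marginal model."""
    return LIB_KB + MARGINAL_KB * complexity_units

tiny = exe_size_kb(10)    # a trivial clock-sized program
big = exe_size_kb(500)    # a program 50x more complex

assert tiny == 320 and big == 1300
# 50x the complexity costs barely 4x the executable size:
assert big / tiny < 5
```

This is exactly the pattern Ian reports for the small Fortran program
that dragged in its whole runtime library: the fixed term dominates,
so simple programs look disproportionately fat.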
pjg@acsu.Buffalo.EDU (Paul Graham) (06/04/90)
lm@snafu.Sun.COM (Larry McVoy) writes:
|keller@saturn.ucsc.edu (Jeffrey M. Keller) writes:
|>Your xclock binary is 1.3Meg?  It's 48K in X11R4 under SunOs 4.0.3 (sparc).
|Yeah?  Tell me about it.  That executable is dynamically linked.  Go take
|a look at how much it uses when it is in memory.

i wouldn't comment on this except that some folks might think "aha, that
darned x" (well they will anyway, but so what).  i don't use xclock but
i fired one up and darned if it didn't check in at 844K.  foo on that.
the clock program i do use weighs in at 136K.  the private portion of my
xclock has 32K of code while dclock has 205K of code.  both are
dynamically linked.
dik@cwi.nl (Dik T. Winter) (06/04/90)
In article <27415@eerie.acsu.Buffalo.EDU> pjg@acsu.Buffalo.EDU (Paul Graham) writes:
> lm@snafu.Sun.COM (Larry McVoy) writes:
>
> |keller@saturn.ucsc.edu (Jeffrey M. Keller) writes:
> |>Your xclock binary is 1.3Meg?  It's 48K in X11R4 under SunOs 4.0.3 (sparc).
>
> i wouldn't comment on this except that some folks might think "aha that darned
> x" (well they will anyway but so what).  i don't use xclock but i fired one up
> and darned if it didn't check in at 844K.  foo on that.  the clock program i
> do use weighs in at 136K.  the private portion of my xclock has 32K of code
> while dclock has 205K of code.  both are dynamically linked.

Oh dear, and I am using xpclock, which is larger than xclock and likes
CPU.  Though I do not use it on a Sun 3; otherwise I could do nothing
any more on it.
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl
keller@saturn.ucsc.edu (Jeffrey M. Keller) (06/04/90)
In article <27415@eerie.acsu.Buffalo.EDU> pjg@acsu.Buffalo.EDU (Paul Graham) writes:
>lm@snafu.Sun.COM (Larry McVoy) writes:
>
>|keller@saturn.ucsc.edu (Jeffrey M. Keller) writes:
>|>Your xclock binary is 1.3Meg?  It's 48K in X11R4 under SunOs 4.0.3 (sparc).
>
>|Yeah?  Tell me about it.  That executable is dynamically linked.  Go take
>|a look at how much it uses when it is in memory.
>
>i wouldn't comment on this except that some folks might think "aha that darned
>x" (well they will anyway but so what).  i don't use xclock but i fired one up
>and darned if it didn't check in at 844K. ...

Well, i wasn't going to comment either, but...  ;-)

As i pointed out to McVoy in email, the binary size seems to be inflated
by the dynamic linking: in X11R3 on a Sun-3 under SunOS 3.5 (no shared
libraries), the xclock binary is 376K.  Granted, that's still huge, but
it suggests to me that the 1.3M (and even the 844K) figure is misleading.
Since i approve of shared libraries, i would like to think that the
effective cost of the 1.3M (or whatever) dynamically linked xclock is
<= that of the 376K statically linked one.
--
Jeff Keller    keller@saturn.ucsc.edu    (408)425-5416

THIS LIFE IS A TEST.  IT IS ONLY A TEST.  HAD THIS BEEN A REAL LIFE, YOU
WOULD HAVE BEEN GIVEN INSTRUCTIONS ON WHERE TO GO AND WHAT TO DO.
quiroz@lemon.cs.rochester.edu (Cesar Quiroz) (06/04/90)
In article <4042@darkstar.ucsc.edu>, keller@saturn.ucsc.edu (Jeffrey M. Keller) wrote:
| Since i approve of shared libraries, i would like to think that
| the effective cost of the 1.3M (or whatever) dynamically linked
| xclock is <= that of the 376K statically linked one.

I am sure the poster had a better reason to think that 1.3 amounts
to less than 0.376, beyond his approval or not of shared libraries.
It would be a bad time for Computer Science and Engineering if
wishful thinking became an accepted style of argumentation.
--
Cesar Augusto Quiroz Gonzalez
Department of Computer Science
University of Rochester
Rochester, NY  14627
pjg@acsu.Buffalo.EDU (Paul Graham) (06/05/90)
keller@saturn.ucsc.edu (Jeffrey M. Keller) writes:
|pjg@acsu.Buffalo.EDU (Paul Graham) writes:
[lm@snafu says xclock is big, i agree]
|Well, i wasn't going to comment either, but...  ;-)
|As i pointed out to McVoy in email, the binary size seems to be inflated
|by the dynamic linking: in X11R3 on a Sun-3 under SunOS 3.5 (no shared
|libraries), the xclock binary is 376K.

this is not particularly supported by the fact that the clock program i
do use has a resident size less than the static text size.  it is also
dynalinked.
lm@snafu.Sun.COM (Larry McVoy) (06/05/90)
In article <4042@darkstar.ucsc.edu> keller@saturn.ucsc.edu (Jeffrey M. Keller) writes:
>Well, i wasn't going to comment either, but...  ;-)
>As i pointed out to McVoy in email, the binary size seems to be inflated
>by the dynamic linking: in X11R3 on a Sun-3 under SunOS 3.5 (no shared
>libraries), the xclock binary is 376K.  Granted, that's still huge, but
>it suggests to me that the 1.3M (and even the 844K) figure is misleading.
>Since i approve of shared libraries, i would like to think that the effective
>cost of the 1.3M (or whatever) dynamically linked xclock is <= that of the
>376K statically linked one.

Let me try one last time.  After this I give up.

The xclock number that I have is from an old version on a Dec 3100
running X11R3 (I think).  And it was around 1.3 meg (around means that
it might have been 50 or 100K bigger or smaller).

Furthermore, an xclock running on a SPARCstation 1, running SunOS 4.1,
X11R4 FCS (no patches), uses about a meg of translations.  That means
that that process is actively using at least 1 meg of virtual address
space and that that portion of its address space is backed by real
incore pages.

Many of those pages have the potential to be shared.  That does not
mean that they are shared.  That does not reduce or increase the
effective size of the binary (well, maybe there's a page of dynamic
linking junk, but that's noise).  You still have to bring all of those
pages in from disk.  You have to have all of those (unshared)
translations loaded up.  You have to run the pager over all of those
pages.  They don't go away.  More processes may be using them but they
are really there.  In memory.  Using up time and space.

Just as a sanity check, compare to the first version of the Mac.  That
had the OS (what there was of it - basically a file system) and the
window system in a 128K ROM.  Your average X server is about a meg or
more.  On the *same* architecture.
--- Larry McVoy, Sun Microsystems (415) 336-7627 ...!sun!lm or lm@sun.com
keller@saturn.ucsc.edu (Jeffrey M. Keller) (06/05/90)
In article <1990Jun4.134439.27540@cs.rochester.edu> quiroz@lemon.cs.rochester.edu (Cesar Quiroz) writes:
[included posting deleted]
>
>I am sure the poster had a better reason to think that 1.3 amounts
>to less than 0.376, beyond his approval or not of shared libraries.
>It would be a bad time for Computer Science and Engineering if
>wishful thinking became an accepted style of argumentation.
>--
> Cesar Augusto Quiroz Gonzalez
> Department of Computer Science
> University of Rochester
> Rochester, NY  14627

Yes, he did have a reason to think that, as well as a guess as to why
the dynamically linked binary is larger.  With shared libraries and
virtual memory, there's no need to worry about granularity in the
libraries -- you simply link in the whole thing, and page in on demand
the parts you're actually using.  Furthermore, these parts may already
be in physical memory, if some other program is using them.  This
explains, simultaneously, both why the dynamically linked version
appears larger than the statically linked one, and how it might still
be effectively smaller.
--
Jeff Keller    keller@saturn.ucsc.edu    (408)425-5416

THIS LIFE IS A TEST.  IT IS ONLY A TEST.  HAD THIS BEEN A REAL LIFE, YOU
WOULD HAVE BEEN GIVEN INSTRUCTIONS ON WHERE TO GO AND WHAT TO DO.
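Keller's "effective size" argument amounts to simple amortization
arithmetic.  A back-of-the-envelope sketch (the KB figures are the ones
quoted in the thread; the clean split between private and shared pages,
and the assumption that shared pages divide evenly among clients, are
simplifications):

```python
def effective_kb(private_kb, shared_kb, n_sharers):
    """Per-process memory cost when shared pages are amortized
    over n_sharers processes that map the same libraries."""
    return private_kb + shared_kb / n_sharers

STATIC_XCLOCK = 376            # KB, statically linked (X11R3, Sun-3)
DYNAMIC_PRIVATE = 48           # KB, the binary itself (SunOS 4.0.3)
DYNAMIC_SHARED = 1300 - 48     # KB, mapped library pages (McVoy's 1.3M)

# With only one client mapping the libraries, the dynamically linked
# version is far *worse* than static linking:
assert effective_kb(DYNAMIC_PRIVATE, DYNAMIC_SHARED, 1) > STATIC_XCLOCK

# With a dozen X clients sharing the same library pages, it wins:
assert effective_kb(DYNAMIC_PRIVATE, DYNAMIC_SHARED, 12) < STATIC_XCLOCK
```

Which is also consistent with McVoy's rejoinder: pages that *could* be
shared but aren't (n_sharers = 1) cost full price, so "1.3M effectively
less than 376K" holds only when the sharing actually happens.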
fouts@bozeman.ingr.com (Martin Fouts) (06/05/90)
In article <136298@sun.Eng.Sun.COM> lm@snafu.Sun.COM (Larry McVoy) writes:

   In article <640@sibyl.eleceng.ua.OZ> ian@sibyl.OZ (Ian Dall) writes:
   >Not entirely.  Sure it would be nice if all code was compact, but
   >achieving it isn't free.  I too did my first serious programming on
   >PDP-11's (running RT-11 which left one with less the 64k (~45k ?) to
   >work in).  I spent a *lot* of time shoe horning programs into limited
   >space.  Younger programmers might not have the same skills in that
   >area, but they *don't need them*.  We all have to get used to the fact
   >that memory is now about $80/MB and swap space is about $10/MB.  By the
   >time the project you are working on is finished, these prices might
   >have halved.  There is just no point in being too stingy with either!

   Yes there is.  It takes time to load that 5meg application.  Disk time,
   page fault time.  It takes cache lines, which are not plentiful.  The
   fact that memory is, currently, cheap does not give us the right to
   squander it.  Last I checked a xclock was 1.3 megs.  Is this reasonable?

No.  On my system xclock is

   -rwxrwxr-x  2 7362  sys  104696 May 21 16:25 /usr/bin/X11/xclock*

It will get smaller next month when I install the shared library
version of the X programming libraries.  (The resident image on my
current system is 500k; the full swap image, including the shared
standard library entry points, etc., is 600k.)

Ian Dall said there is no point in being ***too*** stingy with either
(my emphasis), and I agree.  There is a real tradeoff between cost of
development, cost of maintenance, lifetime and size.  Should I go bum
another 100 bytes out of xclock?  (200?)  No.  If I care about clocks,
I should spend my time on an xclock variant with more neat features.
(Alarms, calendars, etc..)
-- Martin Fouts UUCP: ...!pyramid!garth!fouts ARPA: apd!fouts@ingr.com PHONE: (415) 852-2310 FAX: (415) 856-9224 MAIL: 2400 Geng Road, Palo Alto, CA, 94303 If you can find an opinion in my posting, please let me know. I don't have opinions, only misconceptions.
seanf@sco.COM (Sean Fagan) (06/07/90)
In article <136625@sun.Eng.Sun.COM> lm@sun.UUCP (Larry McVoy) writes:
>Just as a sanity check, compare to the first version of the Mac.  That had
>the OS (what there was of it - basically a file system) and the window
>system in a 128K ROM.  Your average X server is about a meg or more.  On the
>*same* architecture.

Apples (pardon the pun 8-)) to oranges.  Or, more correctly, peas to
200 lb. pumpkins, I guess.

The Mac graphical interface (the stuff in ROM, basically) doesn't have
to worry about TCP/IP communication, doesn't have a split client/server
model, etc.  It is a single-user system, for running a single
application at a time, much like MS-DOS.

X, on the other hand, is this *huge* monolithic *thing* which falls
down on top of an existing OS, one which was not prepared, really, to
deal with something of this order.  It has everything but the kitchen
sink (you get that with GNU EMACS 8-)).

So it's an unfair comparison.
--
-----------------+
Sean Eric Fagan  | "It's a pity the universe doesn't use [a] segmented
seanf@sco.COM    |  architecture with a protected mode."
uunet!sco!seanf  |         -- Rich Cook, _Wizard's Bane_
(408) 458-1422   | Any opinions expressed are my own, not my employers'.