chip@tct.uucp (Chip Salzenberg) (05/30/90)
According to jca@pnet01.cts.com (John C. Archambeau):
>chip@tct.uucp (Chip Salzenberg) writes:
>>Competent C compilers can be written in small model.  I once worked on
>>a C compiler that ran on a PDP-11, which as everyone knows, is limited
>>to 64K of data under most (all?) Unix implementations.
>
>Which brings forth the argument in favor of progress.  How many people
>actually use PDP-11's anymore?

PDP-11 usage statistics matter not at all.  The point is that it can be
done, but some people would have you think that it can't be done, so
they can escape the mental effort required to do it.

The "What do you want to do, return to the dark ages?" retort reminds me
of a quote from Theodor Nelson, who in turn was quoting a computer
consultant of the 70s:

    "If it can't be done in COBOL, I just tell them it can't be done
    by computer.  It saves everyone a lot of time."

Obviously this consultant was a troglodyte.  One would hope that such
attitudes are a thing of the past.  Substitute "four megabytes of RAM"
for "COBOL", however, and you get a depressingly accurate summary of the
attitude of the day.  Am I implying that 4M-or-die programmers are
troglodytes as well?  You bet your data space I am.
-- 
Chip Salzenberg at ComDev/TCT     <chip%tct@ateng.com>, <uunet!ateng!tct!chip>
jtc@van-bc.UUCP (J.T. Conklin) (05/31/90)
In article <2662D045.3F02@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>Substitute "four megabytes of RAM" for "COBOL", however,
>and you get a depressingly accurate summary of the attitude
>of the day.  Am I implying that 4M-or-die programmers
>are troglodytes as well?  You bet your data space I am.

Although I agree with Chip in general, there are some cases where using
memory is better than scrimping on principle.

I'm sure that many faster algorithms had to be passed by because of
limited address space.  Some of the GNU equivalents of UNIX programs are
many times faster because of faster, yet more memory-intensive,
algorithms.

I don't think I have to mention another optimization that ``wastes''
memory: large lookup tables.  It was quite common to be required to
re-compute indexes each iteration because there wasn't enough memory.

Another, unrelated application is high-resolution image processing.  Is
processing a 16MB frame-buffer with kerjillions of processors doing
ray-tracing wasting memory?

On the other hand, there is something to be said for giving beginning
programmers 6 MHz Xenix/286 machines to work on.  I think you'd be
surprised at the small, fast, and portable code that can come out of
that environment.  I recommend it, as the good habits that result will
last for life.

To summarize, I have written programs that need 4M to run --- only
because it takes 4M to do the job.  Programs that require less, take
less.  I do not consider myself a troglodyte.

	--jtc
-- 
J.T. Conklin	UniFax Communications Inc.
		...!{uunet,ubc-cs}!van-bc!jtc, jtc@wimsey.bc.ca
chip@tct.uucp (Chip Salzenberg) (06/01/90)
According to jtc@van-bc.UUCP (J.T. Conklin):
>I'm sure that many faster algorithms had to be passed by because
>of limited address space.  Some of the GNU equivalents of UNIX
>programs are many times faster because of faster, yet more
>memory-intensive, algorithms.

However, as has been pointed out before, the memory isn't free: paging
takes time, swap space isn't free, etc.  At the very least, where
practical, programs with memory-eating algorithms should include a more
frugal algorithm as an option.  IMHO, of course.

>Another, unrelated application is high-resolution image processing.
>Is processing a 16MB frame-buffer with kerjillions of processors
>doing ray-tracing wasting memory?

Well, there are exceptions to every rule.  :-)

>On the other hand, there is something to be said for giving
>beginning programmers 6 MHz Xenix/286 machines to work on.

Amen.
-- 
Chip, the new t.b answer man	<chip%tct@ateng.com>, <uunet!ateng!tct!chip>
wsd@cs.brown.edu (Wm. Scott `Spot' Draves) (06/02/90)
In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:

   According to jtc@van-bc.UUCP (J.T. Conklin):
   >On the other hand, there is something to be said for giving
   >beginning programmers 6 MHz Xenix/286 machines to work on.

   Amen.

If you are suggesting that novice programmers be given slow/obsolete
hardware so that they learn to write efficient code, I disagree with you
strongly.

Efficiency is just one of many attributes that are generally desirable
in programs.  Learning to program on a machine that is slower than the
state of the art will artificially skew the importance of efficient
programming.

One of the wonderful things about 20Mip, 32Mb workstations is that I
don't have to worry about efficiency when writing most code.  I can
concentrate on other issues such as clarity of code, speed of execution,
speed of development, fancy features, ...

By "efficiency" I mean "frugal of code and data".
-- 
Scott Draves		Space...  The Final Frontier
wsd@cs.brown.edu
uunet!brunix!wsd
mike@thor.acc.stolaf.edu (Mike Haertel) (06/02/90)
In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>According to jtc@van-bc.UUCP (J.T. Conklin):
>>On the other hand, there is something to be said for giving
>>beginning programmers 6 MHz Xenix/286 machines to work on.
>
>Amen.

Not a 286!  If you want to teach someone about memory constraints, give
them a PDP-11 running UNIX v7.  A much cleaner architecture.

The problem is, people all too often assume that their past experience
defines how things "should" be, and so when they in turn design things
in the future they apply their preconceptions.  We don't need any
intellectual descendants of the 286.
-- 
Mike Haertel <mike@acc.stolaf.edu>
``There's nothing remarkable about it.  All one has to do is hit the
right keys at the right time and the instrument plays itself.''
						-- J. S. Bach
bpendlet@bambam.UUCP (Bob Pendleton) (06/13/90)
From article <2662D045.3F02@tct.uucp>, by chip@tct.uucp (Chip Salzenberg):
> Substitute "four megabytes of RAM" for "COBOL", however,
> and you get a depressingly accurate summary of the attitude
> of the day.  Am I implying that 4M-or-die programmers
> are troglodytes as well?  You bet your data space I am.

A long time ago (about 10 years), at a company that has since changed
its name several times, I and 3 other damn good programmers spent a year
or so writing the runtime support libraries for a COBOL system that
generated code for an 8080-based "terminal" called the UTS400.  The
compiler ran on a number of different machines and generated code that
ran on the '400.  You linked that code with our runtime code and got an
application you could download to an eight-inch floppy and then boot on
the '400.

Our library did all the weird arithmetic and data formatting that COBOL
needs.  It also implemented a disk file system, host communications,
screen formatting, data entry validation, multithreading (yes, it was a
multiuser system, up to 4 users if I remember correctly), and segment
swapping.  It fit in 10K bytes.  Normal '400s had 24K; some had 32K.  I
know that at least one 20K-line COBOL program ran on the machine all
day, every day.

Marketing decided we should also support indexed sequential files.  They
"gave" us 1K to implement it.  That is, the code for the indexed
sequential file system could not increase the size of the library by
more than 1K bytes.  We wrote the indexed sequential file module in 2K
and rewrote the rest of the system to fit in 9K.

So when people tell me they have done incredible things in tiny memories
on absurd machines, I believe them.  I've even been known to buy them a
drink.  Yes, it can be done.  But for most things it is an absurd waste
of time.
I can write code 5 to 10 times faster when I DON'T have to worry about
every byte I spend than when I'm memory tight.  And I can write code
that RUNS several times faster when I'm free with memory than when I
have to count every byte.

Sometimes you must run a ton of program on a pound of computer.  Many,
if not most, commercial programs in the MS-DOS world fall into that
realm.  But most programming done in the name of "memory efficiency" is
just wasted time.  You have to sell a lot of copies to make back the
cost of all that code tightening, not to mention what it does to the
cost of further development.

			Bob P.

P.S.  I also learned an important lesson on the power of structured
design and prototyping from this project.  But that's another story.
-- 
Bob Pendleton, speaking only for myself.
UUCP Address:  decwrl!esunix!bpendlet or utah-cs!esunix!bpendlet

		X: Tools, not rules.
hedrick@athos.rutgers.edu (Charles Hedrick) (06/16/90)
Indeed. I ported Kermit to Minix. It took me several days to do. On other versions of Unix you do it by typing "make", and maybe fixing a few system dependencies. The time was spent removing help facilities and shortening text strings to get it to fit. This is not the way I want to spend my time (aside from being irked that Kermit's nice user interface is being butchered in the process).
peter@ficc.ferranti.com (Peter da Silva) (06/16/90)
In article <Jun.16.00.15.42.1990.13822@athos.rutgers.edu> hedrick@athos.rutgers.edu (Charles Hedrick) writes:
> Indeed.  I ported Kermit to Minix.  It took me several days to [get it
> to fit]

Indeed.  Which kermit were you using?  Ours runs fine in small model.

+ which kermit
/usr/bin/kermit
+ size /usr/bin/kermit
62124 + 30776 + 8606 = 101506 = 0x18c82
+ file /usr/bin/kermit
/usr/bin/kermit:	separate executable not stripped
+ dates /usr/bin/kermit
C-Kermit, 4C(057) 31 Jul 85
Unix tty I/O, 4C(037), 31 Jul 85
Unix file support, 4C(032) 25 Jul 85
C-Kermit functions, 4C(047) 31 Jul 85
Wart Version 1A(003) 27 May 85
C-Kermit Protocol Module 4C(029), 11 Jul 85
Unix cmd package V1A(021), 19 Jun 85
User Interface 4C(052), 2 Aug 85
Connect Command for Unix, V4C(014) 29 Jul 85
Dial Command, V2.0(008) 26 Jul 85
Script Command, V2.0(007) 5 Jul 85
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.
<peter@ficc.ferranti.com>