[comp.realtime] Bloat costs

chip@tct.uucp (Chip Salzenberg) (05/30/90)

According to jca@pnet01.cts.com (John C. Archambeau):
>chip@tct.uucp (Chip Salzenberg) writes:
>>Competent C compilers can be written in small model.  I once worked on
>>a C compiler that ran on a PDP-11, which, as everyone knows, is limited
>>to 64K of data under most (all?) Unix implementations.
>
>Which brings forth the argument in favor of progress.  How many people
>actually use PDP-11's anymore?

PDP-11 usage statistics matter not at all.  The point is
that it can be done, but some people would have you think
that it can't be done, so they can escape the mental effort
required to do it.

The "What do you want to do, return to the dark ages?"
retort reminds me of a quote from Theodor Nelson, who in
turn was quoting a computer consultant of the 70s:

    "If it can't be done in COBOL,
     I just tell them it can't be done by computer.
     It saves everyone a lot of time."

Obviously this consultant was a troglodyte.  One would
hope that such attitudes are a thing of the past.

Substitute "four megabytes of RAM" for "COBOL", however,
and you get a depressingly accurate summary of the attitude
of the day.  Am I implying that 4M-or-die programmers
are troglodytes as well?  You bet your data space I am.
-- 
Chip Salzenberg at ComDev/TCT   <chip%tct@ateng.com>, <uunet!ateng!tct!chip>

jtc@van-bc.UUCP (J.T. Conklin) (05/31/90)

In article <2662D045.3F02@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>Substitute "four megabytes of RAM" for "COBOL", however,
>and you get a depressingly accurate summary of the attitude
>of the day.  Am I implying that 4M-or-die programmers
>are troglodytes as well?  You bet your data space I am.

Although I agree with Chip in general, there are some cases where
using memory is better than scrimping on principle.

I'm sure that many faster algorithms had to be passed over because
of limited address space.  Some of the GNU equivalents of UNIX
programs are many times faster because of faster, yet more
memory-intensive, algorithms.

I don't think I have to mention another optimization that ``wastes''
memory: large lookup tables.  It was once quite common to have to
recompute indexes on each iteration because there wasn't enough memory.

Another unrelated application is high-resolution image processing.  Is
processing a 16MB frame buffer with kerjillions of processors doing
ray-tracing wasting memory?


On the other hand, there is something to be said about giving
beginning programmers 6 MHz Xenix/286 machines to work on.  I
think you'd be surprised at the small, fast, and portable code
that can come out of that environment.  I recommend it, as the
good habits that result will last for life.


To summarize, I have written programs that need 4M to run --- only
because it takes 4M to do the job.  Programs that require less, take
less.  I do not consider myself a troglodyte.

	--jtc

-- 
J.T. Conklin	UniFax Communications Inc.
		...!{uunet,ubc-cs}!van-bc!jtc, jtc@wimsey.bc.ca

chip@tct.uucp (Chip Salzenberg) (06/01/90)

According to jtc@van-bc.UUCP (J.T. Conklin):
>I'm sure that many faster algorithms had to be passed over because
>of limited address space.  Some of the GNU equivalents of UNIX
>programs are many times faster because of faster, yet more
>memory-intensive, algorithms.

However, as has been pointed out before, the memory isn't
free, paging takes time, swap space isn't free, etc.  At the
very least, where practical, programs with memory-eating
algorithms should include a more frugal algorithm as an
option.  IMHO, of course.

>Another unrelated application is high-resolution image processing.  Is
>processing a 16MB frame buffer with kerjillions of processors doing
>ray-tracing wasting memory?

Well, there are exceptions to every rule.  :-)

>On the other hand, there is something to be said about giving
>beginning programmers 6 MHz Xenix/286 machines to work on.

Amen.
-- 
Chip, the new t.b answer man    <chip%tct@ateng.com>, <uunet!ateng!tct!chip>

wsd@cs.brown.edu (Wm. Scott `Spot' Draves) (06/02/90)

In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
   According to jtc@van-bc.UUCP (J.T. Conklin):

   >On the other hand, there is something to be said about giving
   >beginning programmers 6 MHz Xenix/286 machines to work on.

   Amen.

If you are suggesting that novice programmers be given slow/obsolete
hardware so that they learn to write efficient code, I would disagree
with you strongly.

Efficiency is just one of many attributes that are generally
desirable in programs.  Learning to program on a machine that is
slower than the state of the art will artificially skew the importance
of eff. programming.

One of the wonderful things about 20Mip 32Mb workstations is that I
don't have to worry about eff. when writing most code.  I can
concentrate on other issues such as clarity of code, speed of
execution, speed of development, fancy features, ...

by "eff." i mean "frugal of code and data".

--

Scott Draves		Space... The Final Frontier
wsd@cs.brown.edu
uunet!brunix!wsd

wwm@pmsmam.uucp (Bill Meahan) (06/02/90)

In article <WSD.90Jun1130958@miles.cs.brown.edu> wsd@cs.brown.edu (Wm. Scott `Spot' Draves) writes:
>In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>   According to jtc@van-bc.UUCP (J.T. Conklin):
>
>  [stuff deleted]
>
>One of the wonderful things about 20Mip 32Mb workstations is that I
>don't have to worry about eff. when writing most code.  I can
>concentrate on other issues such as clarity of code, speed of
>execution, speed of development, fancy features, ...
>
>by "eff." i mean "frugal of code and data".
>

May I be among the first to say HORSEPUCKY!

There seems to be a mindset among many CS majors that
"memory is cheap and hardware is fast, so why worry about efficiency?"

This kind of thinking is the result of looking only at chip prices and
the latest hot-rod announcements.  In truth, only a SMALL subset of the
(potential) customers for any given piece of software are running the
'latest and greatest' with beaucoup RAM.  The rest of us are running on
whatever we've got now and often this is older equipment or 'bare-bones'
versions of the hotter stuff because that was all we could afford.

There is a simple financial reality that is often overlooked:

	1) Regardless of the **theoretical prices**, if I don't HAVE 'it'
	   I have to go buy it.
	2) The money I have to go buy 'it' with could also go towards
	   the purchase of other things.
	3) Therefore, I have to demonstrate (to myself, my spouse,
	   my manager, the bean-counters, etc) that buying 'it' has
	   sufficient return on investment to justify THAT purchase
	   instead of some other.
	4) It is very hard to justify continual upgrades of equipment
	   just to get the 'latest and greatest' features, unless these
	   features translate DIRECTLY into some real benefit.
	5) If the latest and greatest is not directly upwards compatible
	   with my current configuration, there is an ADDITIONAL hidden cost
	   associated with converting/replacing my current installed base
	   of software and hardware.
	6) Even 'cheap' upgrades get expensive if you have to buy a lot
	   of copies.  (This site has over 250 PC's, think the Controller
	   wants to spend $500 each to upgrade the memory just to get some
	   fancier display?)
	7) Customers DON'T CARE how clear/modular/elegant your code is
	   unless the clarity/elegance/whatever has some demonstrable
	   benefit to THEM!

Maybe all CS majors should be forced to take a few economics courses along
with the rest of their curriculum!

FAST, SMALL, CHEAP   <--- Pick any 2, you can't have all 3.
-- 
Bill Meahan  WA8TZG		uunet!mailrus!umich!pmsmam!wwm
I speak only for myself - even my daughter's cat won't let me speak for her!

mike@thor.acc.stolaf.edu (Mike Haertel) (06/02/90)

In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>According to jtc@van-bc.UUCP (J.T. Conklin):
>>On the other hand, there is something to be said about giving
>>beginning programmers 6 MHz Xenix/286 machines to work on.
>
>Amen.

Not a 286!  If you want to teach someone about memory constraints, give
them a PDP-11 running UNIX v7.  A much cleaner architecture.

The problem is, people all too often assume that their past experience
defines how things "should" be, and so when they in turn design things in
the future they apply their preconceptions.  We don't need any intellectual
descendants of the 286.
--
Mike Haertel <mike@acc.stolaf.edu>
``There's nothing remarkable about it.  All one has to do is hit the right
  keys at the right time and the instrument plays itself.'' -- J. S. Bach

aglew@oberon.csg.uiuc.edu (Andy Glew) (06/09/90)

>With an orthogonal architecture and a good compiler, you can write
>maintainable programs in high-level languages and still produce
>products that run quickly on machines with a lot fewer than 20 MIPS.

With a good compiler you don't care how orthogonal the architecture is.
--
Andy Glew, aglew@uiuc.edu

bp@retiree.cis.ufl.edu (Brian Pane) (06/09/90)

In article <1990Jun1.200333.10672@pmsmam.uucp> wwm@pmsmam.UUCP (Bill Meahan) writes:
>In article <WSD.90Jun1130958@miles.cs.brown.edu> wsd@cs.brown.edu (Wm. Scott `Spot' Draves) writes:
>>In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>>   According to jtc@van-bc.UUCP (J.T. Conklin):
>>
>>  [stuff deleted]
>>
>>One of the wonderful things about 20Mip 32Mb workstations is that I
>>don't have to worry about eff. when writing most code.  I can
>>concentrate on other issues such as clarity of code, speed of
>>execution, speed of development, fancy features, ...
>>
>>by "eff." i mean "frugal of code and data".
>>
>
>May I be among the first to say HORSEPUCKY!
>
>There seems to be a mindset among many CS majors that
>"memory is cheap and hardware is fast, so why worry about efficiency?"
>
>This kind of thinking is the result of looking only at chip prices and
>the latest hot-rod announcements.  In truth, only a SMALL subset of the

If such a mindset exists, it is not because of the abundance of powerful
hardware.  It is because CS majors are taught to build robust, maintainable,
and therefore seemingly elegant programs rather than compact and clever
programs.  If we get used to writing ruthlessly brilliant programs,
we'll only add to the "software crisis" when we graduate.

I agree that efficiency is important, but it must be kept in
its proper perspective.  This group is devoted to the implementation
of a UNIX-like OS on an architecture that should have been allowed to
die ten years ago.  No matter how well you write your C code, an average
compiler will probably produce disgracefully slow executables.  There
is little to be gained by writing efficient C programs for inefficient
machines.  You *can* write fairly efficient code for the 8086--in
assembly language.  However, few people have that much time to waste.
While you're shouting about the expense of "improved" software and the
expense of the hardware on which such software must run, don't forget
about the cost of programmer time.

Finally, note that large and "inefficient" programs advance the state
of the art in software more often than small and clever programs.
Consider X Windows.  It is a huge system designed for flexibility
rather than efficiency, and it requires significant hardware power,
but it has revolutionized the way we use computers.

>Maybe all CS majors should be forced to take a few economics courses along
>with the rest of their curriculum!
>
Don't blame us for the economic problems of software development; blame the
EE's who design the hardware.  With an orthogonal architecture and a
good compiler, you can write maintainable programs in high-level languages
and still produce products that run quickly on machines with a lot fewer
than 20 MIPS.


>FAST, SMALL, CHEAP   <--- Pick any 2, you can't have all 3.
Not yet.  And not ever, if we all devote our efforts to
optimizing tiny programs for tiny machines.  20-MIPS
workstations will become affordable only when lots of software
is available for them.

>Bill Meahan  WA8TZG		uunet!mailrus!umich!pmsmam!wwm
>I speak only for myself - even my daughter's cat won't let me speak for her!

-Brian F. Pane
-------------------------------------------------------------------------
Brian Pane	University of Florida Department of Computer Science
bp@beach.cis.ufl.edu		Class of 1991

"If you can keep your expectations tiny,
 you'll get through life without being so whiny" - Matt Groening

#ifdef OFFENDED_ANYONE
#  include "disclaimer.h"
// Sorry to indulge in such 8086-bashing, folks, but I had a point to make.
#endif
-------------------------------------------------------------------------

peter@ficc.ferranti.com (Peter da Silva) (06/09/90)

In article <23473@uflorida.cis.ufl.EDU> bp@beach.cis.ufl.edu (Brian Pane) writes:
> If such a mindset exists, it is not because of the abundance of powerful
> hardware.  It is because CS majors are taught to build robust, maintainable,
> and therefore seemingly elegant programs rather than compact and clever
> programs.  If we get used to writing ruthlessly brilliant programs,
> we'll only add to the "software crisis" when we graduate.

Lots of nice buzzwords there, fella. Trouble is, it doesn't mean anything.
First of all, I haven't noticed that much, if any, difference in the quality
of net contributions from academia and industry. Quantity, yes... industry
can't afford the time to write the latest and greatest freeware. Second,
nobody's advocating gratuitous microefficiency here, just a consideration
of space-time tradeoffs in choosing algorithms. Like not loading a whole
file when you can get away with reading a line at a time. Or if you *do*,
check how much there is to read before you read it instead of just allocating
a big array and doubling in size when it fills up. Using a simplistic
algorithm makes as much sense as using bubble-sort on a megabyte array.

> Finally, note that large and "inefficient" programs advance the state
> of the art in software more often than small and clever programs.
> Consider X Windows.

Yes, let's.

> It is a huge system designed for flexibility
> rather than efficiency, and it requires significant hardware power,
> but it has revolutionized the way we use computers.

Actually, it was the Xerox Star and the Apple Macintosh that did that.
Machines with a fraction of the resources of the typical X workstation.
-- 
`-_-' Peter da Silva. +1 713 274 5180.  <peter@ficc.ferranti.com>
 'U`  Have you hugged your wolf today?  <peter@sugar.hackercorp.com>
@FIN  Dirty words: Zhghnyyl erphefvir vayvar shapgvbaf.

kt4@prism.gatech.EDU (Ken Thompson) (06/11/90)

>In article <266577FA.6D99@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>   According to jtc@van-bc.UUCP (J.T. Conklin):
>
>  [stuff deleted]
>
>One of the wonderful things about 20Mip 32Mb workstations is that I
>don't have to worry about eff. when writing most code.  I can
>concentrate on other issues such as clarity of code, speed of
>execution, speed of development, fancy features, ...
>>
>by "eff." i mean "frugal of code and data".
>

I strongly disagree that efficiency (including code/data size) can reasonably
be ignored.  No matter how quickly the power of machines grows, the things that
we want to do with them grow even faster.  I believe it is a grave mistake
not to be concerned with the efficiency of the algorithms used in programming.
IMHO, this attitude has led to a severe decline in the capability of software
vs. the hardware resources required to execute it.  Note I did not say
anything about the cost of these resources.  I find this depressing, to say
the least.

				Ken
 

-- 
Ken Thompson  GTRI, Ga. Tech, Atlanta Ga. 30332 Internet:!kt4@prism.gatech.edu
uucp:...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!kt4
"Rowe's Rule: The odds are five to six that the light at the end of the
tunnel is the headlight of an oncoming train."       -- Paul Dickson

greg@sce.carleton.ca (Greg Franks) (06/13/90)

In article <23473@uflorida.cis.ufl.EDU> we find:
...
>>There seems to be a mindset among many CS majors that
>>"memory is cheap and hardware is fast, so why worry about efficiency?"
>>
>>This kind of thinking is the result of looking only at chip prices and
>>the latest hot-rod announcements.  In truth, only a SMALL subset of the
>
>If such a mindset exists, it is not because of the abundance of powerful
>hardware.  It is because CS majors are taught to build robust, maintainable,
>and therefore seemingly elegant programs rather than compact and clever
>programs.  If we get used to writing ruthlessly brilliant programs,
>we'll only add to the "software crisis" when we graduate.

David Parnas would beg to differ.  He is not certain which is worse,
an Engineer who has been writing Fortran for the last 20 years, or a
present day CS major.  The former do not know ``modern'' programming
practices, hence they produce goto-full programs that do one thing
rather well.  The latter produce ``elegant'' programs that not only do
what the customer wanted (maybe), but twenty billion other things as
well.  After all, does `ls' really need 18 different options?
Unfortunately, computer programming still seems to live in the CISC
era.

Prof. Parnas recently wrote an article in IEEE Computer on this very
subject.  I recommend reading it.

From:  "just call me Tex (as in massacre) - my productivity is
measured in negative lines"  :-) :-) :-)
-- 
Greg Franks, (613) 788-5726              |"The reason that God was able to
Systems Engineering, Carleton University,|create the world in seven days is
Ottawa, Ontario, Canada  K1S 5B6.        |that he didn't have to worry about
greg@sce.carleton.ca uunet!mitel!sce!greg|the installed base" -- Enzo Torresi

bpendlet@bambam.UUCP (Bob Pendleton) (06/13/90)

From article <2662D045.3F02@tct.uucp>, by chip@tct.uucp (Chip Salzenberg):

> Substitute "four megabytes of RAM" for "COBOL", however,
> and you get a depressingly accurate summary of the attitude
> of the day.  Am I implying that 4M-or-die programmers
> are troglodytes as well?  You bet your data space I am.
> -- 
> Chip Salzenberg at ComDev/TCT   <chip%tct@ateng.com>, <uunet!ateng!tct!chip>

A long time ago (about 10 years), at a company that has since changed
its name several times, I and 3 other damn good programmers spent a
year or so writing the runtime support libraries for a COBOL system
that generated code for an 8080 based "terminal" called the UTS400.
The compiler ran on a number of different machines and generated code
that ran on the '400. You linked the code with our runtime code and
you got an application you could download to an eight-inch floppy and
then boot on the '400.

Our library did all the weird arithmetic and data formatting that
COBOL needs.  It also implemented a disk file system, host
communications, screen formatting, data entry validation,
multithreading (yes it was a multiuser system, up to 4 users if I
remember correctly), and segment swapping. It fit in 10K bytes. Normal
'400s had 24K, some had 32K.  I know that at least one 20K-line COBOL
program ran on the machine all day, every day.

Marketing decided we should also support indexed sequential files.
They "gave" us 1K to implement it. That is, the code for the indexed
sequential file system could not increase the size of the library by
more than 1K bytes.  We wrote the indexed sequential files module in
2K and rewrote the rest of the system to fit in 9K. 

So when people tell me they have done incredible things in tiny
memories on absurd machines, I believe them.  I've even been known to buy
them a drink.

Yes, it can be done. But for most things it is an absurd waste of
time. I can write code 5 to 10 times faster when I DON'T have to
worry about every byte I spend than when I'm memory tight. And I can
write code that RUNS several times faster when I'm free with memory
than when I have to count every byte. 

Sometimes you must run a ton of program on a pound of computer.  Many,
if not most, commercial programs in the MS-DOS world fall into that
realm. But, most programming done in the name of "memory efficiency"
is just wasted time. You have to sell a lot of copies to make back the
cost of all that code tightening. Not to mention what it does to the
cost of further development. 

			Bob P.

P.S.

I also learned an important lesson on the power of structured design
and prototyping from this project.  But that's another story.

-- 
              Bob Pendleton, speaking only for myself.
UUCP Address:  decwrl!esunix!bpendlet or utah-cs!esunix!bpendlet

                      X: Tools, not rules.

oz@yunexus.UUCP (Ozan Yigit) (06/14/90)

In article <23473@uflorida.cis.ufl.EDU> bp@beach.cis.ufl.edu (Brian Pane) 
babbles:

>Finally, note that large and "inefficient" programs advance the state
>of the art in software more often than small and clever programs.

And you are writing this on an operating system that advanced the "state
of the art" without, apparently, needing even 1/50th of what you may have on
your desk as a computing resource.  So ironic.

oz
-- 
First learn your horn and all the theory.	Internet: oz@nexus.yorku.ca
Next develop a style. Then forget all that 	uucp: utzoo/utai!yunexus!oz
and just play.		Charlie Parker [?]	York U. CCS: (416) 736 5257

hedrick@athos.rutgers.edu (Charles Hedrick) (06/16/90)

Indeed.  I ported Kermit to Minix.  It took me several days to do.
On other versions of Unix you do it by typing "make", and maybe
fixing a few system dependencies.  The time was spent removing help
facilities and shortening text strings to get it to fit.  This is
not the way I want to spend my time (aside from being irked that
Kermit's nice user interface is being butchered in the process).

peter@ficc.ferranti.com (Peter da Silva) (06/16/90)

In article <Jun.16.00.15.42.1990.13822@athos.rutgers.edu> hedrick@athos.rutgers.edu (Charles Hedrick) writes:
> Indeed.  I ported Kermit to Minix.  It took me several days to [get it
> to fit]

Indeed. Which kermit were you using? Ours runs fine in small model.

+ which kermit
/usr/bin/kermit
+ size /usr/bin/kermit 
62124 + 30776 + 8606 = 101506 = 0x18c82
+ file /usr/bin/kermit 
/usr/bin/kermit:	separate executable not stripped
+ dates /usr/bin/kermit
C-Kermit, 4C(057) 31 Jul 85
Unix tty I/O, 4C(037), 31 Jul 85
Unix file support, 4C(032) 25 Jul 85
C-Kermit functions, 4C(047) 31 Jul 85
Wart Version 1A(003) 27 May 85
C-Kermit Protocol Module 4C(029), 11 Jul 85
Unix cmd package V1A(021), 19 Jun 85
User Interface 4C(052), 2 Aug 85
Connect Command for Unix, V4C(014) 29 Jul 85
Dial Command, V2.0(008) 26 Jul 85
Script Command, V2.0(007) 5 Jul 85
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.
<peter@ficc.ferranti.com>